Carpentersville Algebra Tutor
Find a Carpentersville Algebra Tutor
I am very experienced and knowledgeable in many areas of math. I have a Bachelor's degree in Mathematics Education and will be certified as a Secondary Mathematics Teacher in Illinois. I have done
private tutoring for four years with all different levels of math and age groups.
10 Subjects: including algebra 1, algebra 2, calculus, geometry
...I've been tutoring test prep for 15 years, and I have a lot of experience helping students get the score they need on the GRE. I've helped students push past their goal scores in both the
Quant and Verbal. I took the revised version of the GRE the first day it was offered and I scored a 170 on the Quant and a 168 on Verbal.
24 Subjects: including algebra 2, algebra 1, calculus, GRE
Dear Parents and Students, I am a high school senior and I would be very grateful to have the opportunity to tutor students who need help understanding difficult concepts. I have had experience
working with elementary age kids. Over the summer I assistant taught a wilderness adventure class at Roslyn Road elementary school.
8 Subjects: including algebra 1, reading, grammar, vocabulary
...I finished my MBA in Marketing Strategy from DePaul U. My area of expertise has been in Professional IT Services Consulting for over a decade. I now work as a freelancer on multiple projects
throughout the year and teach Continuing Ed programs in Community Colleges with a focus on Entrepreneurship...
22 Subjects: including algebra 1, reading, English, precalculus
...I would love to help you or your student be successful in math! Please feel free to ask any questions you have! I have been teaching Algebra 1 for 7 years. I have taught students who have
previously failed, students with IEPs, and very low-achieving students who often do not have the basic math skills that are necessary to be successful.
8 Subjects: including algebra 1, algebra 2, geometry, SAT math
Nearby Cities With algebra Tutor
Addison, IL algebra Tutors
Algonquin algebra Tutors
Barrington Hills, IL algebra Tutors
Carol Stream algebra Tutors
Cary, IL algebra Tutors
Crystal Lake, IL algebra Tutors
East Dundee, IL algebra Tutors
Elgin, IL algebra Tutors
Hanover Park algebra Tutors
Inverness, IL algebra Tutors
Lake In The Hills algebra Tutors
Port Barrington, IL algebra Tutors
Sleepy Hollow, IL algebra Tutors
Streamwood algebra Tutors
West Dundee, IL algebra Tutors
Baytown Geometry Tutor
Find a Baytown Geometry Tutor
...I offer a no-fail guarantee (contact me via WyzAnt for details). I am available at any time of the day; I try to be as flexible as possible. I try as much as possible to work in the comfort of
your own home at a schedule convenient to you. I operate my business with the highest ethical standar...
35 Subjects: including geometry, chemistry, physics, calculus
...I graduated from the No. 1 university in Taiwan, majoring in Economics, and came to the USA to pursue an MBA at Lamar University in 1988. I am a loving and patient Christian mom of three children. I
have 20 years of experience teaching Algebra and Chinese in elementary and middle school in Taiwan and the USA.
12 Subjects: including geometry, reading, Chinese, algebra 1
...Please consider me and I know that I will be able to help your child understand and appreciate math. I will also make learning fun! Whether you are in junior high or high school, Pre Algebra is
20 Subjects: including geometry, Spanish, algebra 1, algebra 2
...I specialize in tutoring math (elementary math, geometry, prealgebra, algebra 1 & 2, trigonometry, precalculus, etc.), Microsoft Word, Excel, PowerPoint, and VBA programming. I'd love to talk
more about tutoring for your specific situation and look forward to hearing from you. During my time at T...
17 Subjects: including geometry, reading, calculus, algebra 1
...I will not help a student while that student is taking a quiz or test online, as that would be considered cheating. It is unethical for me, puts the student at risk of failing if discovered,
and does the student no favors in the long run. Success comes to those who are motivated and hard-working.
41 Subjects: including geometry, chemistry, reading, English
Simulation of Groundwater Mounding Beneath Hypothetical Stormwater Infiltration Basins
Scientific Investigations Report 2010–5102
Prepared in cooperation with the New Jersey Department of Environmental Protection
Simulation of Groundwater Mounding Beneath Hypothetical Stormwater Infiltration Basins
By Glen B. Carleton
Groundwater mounding occurs beneath stormwater management structures designed to infiltrate stormwater runoff. Concentrating recharge in a
small area can cause groundwater mounding that affects the basements of nearby homes and other structures. Methods for quantitatively
predicting the height and extent of groundwater mounding beneath and near stormwater
Finite-difference groundwater-flow simulations of infiltration from hypothetical stormwater infiltration structures (which are typically
constructed as basins or dry wells) were done for 10-acre and 1-acre developments. Aquifer and stormwater-runoff characteristics in the model
were changed to determine which factors are most likely to have the greatest effect on simulating the maximum height and maximum extent of
groundwater mounding. Aquifer characteristics that were changed include soil permeability, aquifer thickness, and specific yield.
Stormwater-runoff variables that were changed include magnitude of design storm, percentage of impervious area, infiltration-structure depth
(maximum depth of standing water), and infiltration-basin shape. Values used for all variables are representative of typical physical
conditions and stormwater management designs in New Jersey but do not include all possible values. Results are considered to be a
representative, but not all-inclusive, subset of likely results.
Maximum heights of simulated groundwater mounds beneath stormwater infiltration structures are the most sensitive to (show the greatest
change with changes to) soil permeability. The maximum height of the groundwater mound is higher when values of soil permeability, aquifer
thickness, or specific yield are decreased or when basin depth is increased or the basin shape is square (and values of other variables are
held constant). Changing soil permeability, aquifer thickness, specific yield, infiltration-structure depth, or infiltration-structure shape
does not change the volume of water infiltrated; it changes the shape or height of the groundwater mound resulting from the infiltration. An
aquifer with a greater soil permeability or aquifer thickness has an increased ability to transmit water away from the source of infiltration
than aquifers with lower soil permeability; therefore, the maximum height of the groundwater mound will be lower, and the areal extent of
mounding will be larger.
The maximum height of groundwater mounding is higher when values of design storm magnitude or percentage of impervious cover (from which
runoff is captured) are increased (and other variables are held constant) because the total volume of water to be infiltrated is larger. The
larger the volume of infiltrated water, the higher the head required to move that water away from the source of recharge if the physical
characteristics of the aquifer are unchanged. The areal extent of groundwater mounding increases when soil permeability, aquifer thickness,
design-storm magnitude, or percentage of impervious cover are increased (and values of other variables are held constant).
Updated: November 2010
For 10-acre sites, the maximum heights of the simulated groundwater mound range from 0.1 to 18.5 feet (ft). The median of the maximum-height
distribution from 576 simulations is 1.8 ft. The maximum areal extent (measured from the edge of the infiltration basins) of groundwater
mounding of 0.25-ft ranges from 0 to 300 ft with a median of 51 ft for 576 simulations. Stormwater infiltration at a 1-acre development was
simulated, incorporating the assumption that the hypothetical infiltration structure would be a pre-cast concrete dry well having side
openings and an open bottom. The maximum heights of the simulated groundwater-mounds range from 0.01 to 14.0 ft. The median of the
maximum-height distribution from 432 simulations is 1.0 ft. The maximum areal extent of groundwater mounding of 0.25-ft ranges from 0 to 100
ft with a median of 10 ft for 432 simulations.
Simulated height and extent of groundwater mounding associated with a hypothetical stormwater infiltration basin for 10-acre and 1-acre
developments may be applicable to sites of different sizes. For example, for a 20-acre site with 20 percent impervious surface, the
stormwater infiltration basin design capacity (and associated groundwater mound) would be the same as for a 10-acre site with 40 percent
impervious surface.
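The scaling argument above can be expressed as a one-line volume calculation. The sketch below is a simplification assuming the design capture volume is just drainage area times impervious fraction times storm depth (the report's actual design-storm computation may include additional factors such as runoff coefficients); the function name and the 2-inch storm depth are illustrative assumptions.

```python
# Simplified design-capture volume: area * impervious fraction * storm depth.
# Illustrates the report's scaling example: a 20-acre site at 20 percent
# impervious cover generates the same runoff volume as a 10-acre site at
# 40 percent impervious cover, so the mound beneath the basin is the same.

ACRE_FT2 = 43560.0  # square feet per acre

def runoff_volume_ft3(site_acres, impervious_fraction, storm_depth_in):
    """Runoff volume in cubic feet, assuming all rain falling on the
    impervious fraction of the site is captured by the basin."""
    return site_acres * ACRE_FT2 * impervious_fraction * (storm_depth_in / 12.0)

v20 = runoff_volume_ft3(20.0, 0.20, 2.0)
v10 = runoff_volume_ft3(10.0, 0.40, 2.0)
assert abs(v20 - v10) < 1e-9  # identical design capacity
```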
A spreadsheet was developed to solve the Hantush analytical equation, which can be used to calculate groundwater mounding. The Hantush
equation incorporates simplifying assumptions, including that all flow is horizontal. The spreadsheet accepts user-supplied values for
horizontal soil permeability, initial saturated aquifer thickness, specific yield, basin length, basin width, and duration and magnitude of
recharge rate. Comparison of results of finite-difference simulations of a multi-layer system that includes a vertical component of flow in
the saturated zone with the results from the analytical equation indicates that the horizontal-flow-only assumption in the analytical
equation can cause an under-prediction of the maximum height of a groundwater mound by as much as 15 percent. The more realistic
representation of the vertical component of flow and the ability to include site-specific details make finite-difference models such as
MODFLOW potentially more accurate than analytical equations for predicting groundwater mounding.
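The Hantush calculation the report's spreadsheet performs can be sketched in a few lines. The code below is an independent illustration, not the USGS spreadsheet itself: it evaluates Hantush's S* integral of error functions with a simple midpoint rule and applies the commonly cited linearized Hantush (1967) solution for a rectangular recharge basin, iterating on the average saturated thickness used in the diffusivity. The function names, the iteration scheme, and the exact coefficient arrangement are assumptions drawn from the standard published form, so treat it as a sketch rather than a validated implementation.

```python
import math

def s_star(alpha, beta, n=4000):
    """Hantush's S* function: integral over tau in (0, 1] of
    erf(alpha/sqrt(tau)) * erf(beta/sqrt(tau)) d(tau), midpoint rule."""
    total, h = 0.0, 1.0 / n
    for i in range(n):
        rt = math.sqrt((i + 0.5) * h)
        total += math.erf(alpha / rt) * math.erf(beta / rt) * h
    return total

def hantush_mound(x, y, t, w, K, Sy, h_i, L, W, iters=5):
    """Water-table elevation h(x, y, t) beneath a rectangular basin of plan
    dimensions L x W recharging at rate w (length/time), per the linearized
    Hantush solution.  K: horizontal hydraulic conductivity, Sy: specific
    yield, h_i: initial saturated thickness.  Iterates on the average
    thickness h_bar used in the diffusivity nu = K*h_bar/Sy."""
    l, a = L / 2.0, W / 2.0
    h = h_i
    for _ in range(iters):
        h_bar = 0.5 * (h_i + h)
        nu = K * h_bar / Sy
        d = math.sqrt(4.0 * nu * t)
        s = (s_star((l + x) / d, (a + y) / d) +
             s_star((l + x) / d, (a - y) / d) +
             s_star((l - x) / d, (a + y) / d) +
             s_star((l - x) / d, (a - y) / d))
        h = math.sqrt(h_i ** 2 + w * nu * t / (2.0 * K) * s)
    return h
```

As a sanity check, for a basin much wider than the diffusion distance the four S* terms each approach 1 and the rise approaches the no-lateral-flow limit w*t/Sy, which is what the physical reasoning in the abstract predicts.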
Suggested citation:
Carleton, G.B., 2010, Simulation of groundwater mounding beneath hypothetical stormwater infiltration basins: U.S. Geological Survey Scientific Investigations Report 2010–5102, 64 p.
Purpose and Scope
Previous Investigations
Physical Variables Affecting Height and Extent of Groundwater Mounding
Soil Permeability and Aquifer Thickness
Specific Yield
Percent Impervious Cover
Design Storm
Basin Shape and Depth
Depth to Water Table
Use of Finite Difference Numerical Models to Estimate Groundwater Mounding
Model Design
Simulation of Groundwater Mounding Beneath Hypothetical Stormwater Infiltration Basins for a 10-Acre Development
Model Discretization
Characteristics Varied to Estimate Groundwater Mounding
Model Boundaries, Recharge, and Difference Between Undeveloped and Developed Water Levels
Maximum Height of Groundwater Mounding
Maximum Extent of Groundwater Mounding
Simulation of Groundwater Mounding Beneath Hypothetical Dry Wells for a 1-Acre Development
Model Discretization, Boundaries, and Difference Between Undeveloped and Developed Water Levels
Model Limitations
Use of Analytical Equations to Estimate Groundwater Mounding
Description of Hantush Equation
Spreadsheet for Solving Hantush Equation
Comparison of Analytical and Finite-Difference Estimates of Groundwater Mounding and Effect of Vertical Layering
Summary and Conclusions
References Cited
Symmetry Reduction of the Two-Dimensional Ricci Flow Equation
Volume 2013 (2013), Article ID 373701, 6 pages
Research Article
Symmetry Reduction of the Two-Dimensional Ricci Flow Equation
^1School of Mathematics, Iran University of Science and Technology, Narmak, Tehran 1684613114, Iran
^2Department of Complementary Education, Payame Noor University, P.O. Box 19395-3697, Tehran, Iran
Received 16 October 2012; Accepted 21 November 2012
Academic Editor: Salvador Hernandez
Copyright © 2013 Mehdi Nadjafikhah and Mehdi Jafari. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and
reproduction in any medium, provided the original work is properly cited.
This paper is devoted to obtaining the one-dimensional group invariant solutions of the two-dimensional Ricci flow ((2D) Rf) equation. By classifying the orbits of the adjoint representation of the
symmetry group on its Lie algebra, the optimal system of one-dimensional subalgebras of the ((2D) Rf) equation is obtained. For each class, we will find the reduced equation by the method of
similarity reduction. By solving these reduced equations, we will obtain new sets of group invariant solutions for the ((2D) Rf) equation.
1. Introduction
The Ricci flow was introduced by Hamilton in his seminal paper, “Three-manifolds with positive Ricci curvature” in 1982 [1]. Since then, Ricci flow has been a very useful tool for studying the
special geometries which a manifold admits. Ricci flow is an evolution equation for a Riemannian metric which sometimes can be used in order to deform an arbitrary metric into a suitable metric that
can specify the topology of the underlying manifold. If (M, g) is a smooth Riemannian manifold, the Ricci flow is defined by ∂g/∂t = −2 Ric(g), where Ric denotes the Ricci tensor of the metric g. By using the concept of Ricci
flow, Grisha Perelman completely proved the Poincaré conjecture around 2003 [2–4]. The Ricci flow is also used as an approximation to the renormalization group flow for the two-dimensional nonlinear
σ-model in quantum field theory; see [5] and references therein. The Ricci flow equation is related to one of the models used in obtaining a quantum theory of gravity [6]. Because difficulties
appear when a quantum field theory is formulated, such studies focus on lower-dimensional models, which are called mechanical models.
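The article's inline mathematics was lost when the page was converted to text. As a hedged aid to the reader, the standard forms of the objects discussed above are reproduced below; sign and conformal-factor conventions may differ from those used in the paper's own equations (1)–(5).

```latex
% Ricci flow on a Riemannian manifold (M, g):
\frac{\partial g_{ij}}{\partial t} = -2\,R_{ij}.
% In two dimensions, for a conformally flat metric
% ds^2 = e^{u(x,y,t)}\,(dx^2 + dy^2),
% the flow reduces to a logarithmic (fast) diffusion equation:
\frac{\partial u}{\partial t} = e^{-u}\,\Delta u,
\qquad \text{equivalently} \qquad
\frac{\partial v}{\partial t} = \Delta \ln v, \quad v = e^{u}.
```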
In this paper, we want to obtain new solutions of the ((2D) Rf) equation by the method of Lie symmetry groups. As is well known, the Lie symmetry group method plays an important role in the analysis of
differential equations. The theory of Lie symmetry groups of differential equations was developed by Lie at the end of the nineteenth century [7]. By this method, we can reduce the order of ODEs and
investigate invariant solutions. We can also construct new solutions from known ones (for more details about the applications of Lie symmetries, see [8–10]). Lie's method leads to an algorithmic
approach for finding special solutions of a differential equation from its symmetry group. These solutions are called group invariant solutions and are obtained by solving the reduced system of differential
equations, which has fewer independent variables than the original system. The fact that for some PDEs the symmetry reductions are unobtainable by the Lie symmetry method led to
generalizations of this method. These generalizations are called nonclassical symmetry methods and were described in many references such as [11–14].
In this paper, we apply the Lie symmetry method to obtain the invariant solutions of the ((2D) Rf) equation and to classify them. This paper is organized as follows. In Section 2, by using the mechanical
model of Ricci flow, the Lie symmetries of the ((2D) Rf) equation are stated. We also derive some results from the structure of the Lie algebra of the Lie symmetry group. In Section 3, we construct
an optimal system of one-dimensional subalgebras of the ((2D) Rf) equation, which is useful for classifying the group invariant solutions. In Section 4, the reduced equation for each element of the
optimal system is obtained. In Section 5, we solve the reduced equations by the method of Lie symmetry groups and obtain the group invariant solutions of the ((2D) Rf) equation.
2. Lie Symmetries of ((2D) Rf) Equation
As we know, transformations which map solutions of a differential equation to other solutions are called symmetries of the equation. The procedure of finding the Lie symmetry group of a PDE was
described in many studies such as [8, 9, 15]. Before computing the Lie symmetries of the Ricci flow, let us restate the mechanical model of the Ricci flow that was introduced by Cimpoiasu and Constantinescu [16].
The metric tensor of the space, , can be written in the conformally flat frame using Cartesian coordinates and or the complex variables [6]. According to (1), the function must satisfy where is
the Laplacian. By introducing the field , Equation (3) takes the form , or in the equivalent form: . Cimpoiasu and Constantinescu also obtained the Lie symmetry group of this equation [16]. They proved that
this equation admits a 6-parameter Lie group, , with the following infinitesimal generators for its Lie algebra, : The commutator table of Lie algebra for is given in Table 1, where the entry in the
row and column is , .
Exponentiating the infinitesimal symmetries (6), we obtain the one-parameter groups generated by , as follows: Consequently, we can state the following theorem.
Theorem 1. If is a solution of (5), so are functions
3. One-Dimensional Optimal System of Subalgebras for the ((2D) Rf) Equation
In this section, we obtain the one-dimensional optimal system of ((2D) Rf) equation by using symmetry group. Since every linear combination of infinitesimal symmetries is an infinitesimal symmetry,
there is an infinite number of one-dimensional subgroups for . Therefore, it is important to determine which subgroups give different types of solutions. For this, we must find invariant solutions
which cannot be transformed to each other by symmetry transformations in the full symmetry group. This led to the concept of an optimal system of subalgebras. For one-dimensional subalgebras, this
classification problem is the same as the problem of classifying the orbits of the adjoint representation [8]. Optimal set of subalgebras is obtained by selecting only one representative from each
class of equivalent subalgebras. The problem of classifying the orbits is solved by taking a general element in the Lie algebra and simplifying it as much as possible by imposing various adjoint
transformation on it [15, 17]. The adjoint representation of each , is defined by the Lie series where is a parameter and is the commutator of the Lie algebra for [8]. It is important to note that, following
the convention of [8], we used right invariant vector fields to define the Lie algebra in this paper; as a consequence, a minus sign is present in the Lie series.
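The Lie series referred to above has the standard form (as in Olver [8]); the formula below is a reconstruction shown because the inline expression was lost, with the minus-sign convention the authors describe.

```latex
\operatorname{Ad}\bigl(\exp(\varepsilon X_i)\bigr)X_j
  = X_j - \varepsilon\,[X_i, X_j]
  + \frac{\varepsilon^{2}}{2!}\,\bigl[X_i, [X_i, X_j]\bigr] - \cdots
```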
Taking into account the commutator table, we can compute all the adjoint representations corresponding to the Lie group of the ((2D) Rf) equation. They are presented in Table 2. Note that the
entry indicates .
Now we can state the following theorem.
Theorem 2. A one-dimensional optimal system for Lie algebra of ((2D) Rf) equation is given by where and .
Proof. Let be the adjoint transformation defined by , for . The matrix of , , with respect to the basis is , respectively. If , then we have . Now, we try to make the coefficients of vanish by acting with the adjoint representations on and choosing suitable parameters in each step. Therefore, we can simplify as follows.
(1) If , and , then we can make the coefficients of , , and vanish by , , and , by setting , , and , respectively. Scaling if necessary, we can assume that . So is reduced to the case .
(2) If , and , then we can make the coefficients of and vanish by and , by setting and , respectively. Also, we can make the coefficient of vanish by , by setting . Scaling if necessary, we can assume that . So is reduced to the case .
(3) If , and , then we can make the coefficients of and vanish by and , by setting and , respectively. Also, we can make the coefficient of vanish by , by setting . Scaling if necessary, we can assume that . So is reduced to the case .
(4) If and , then we can make the coefficient of vanish by , by setting . Scaling if necessary, we can assume that . So is reduced to the case .
(5) If , and , then we can make the coefficients of and vanish by and , by setting and , respectively. Also, we can make the coefficient of vanish by , by setting . Scaling if necessary, we can assume that . So is reduced to the case .
(6) If and , then we can make the coefficient of vanish by , by setting . Scaling if necessary, we can assume that . So is reduced to the case .
(7) If and , then we can make the coefficient of vanish by , by setting . Scaling if necessary, we can assume that . So is reduced to the case .
(8) If , then is reduced to the case .
No other cases remain to investigate, and the proof is complete.
4. Similarity Reduction of ((2D) Rf) Equation
In this section, the two-dimensional Ricci flow equation will be reduced by expressing it in new coordinates. The ((2D) Rf) equation is expressed in the coordinates ; we must search for this
equation's form in suitable coordinates in order to reduce it. These new coordinates will be obtained by looking for independent invariants corresponding to the generators of the symmetry group. Hence,
by using the new coordinates and applying the chain rule, we obtain the reduced equation. We express this procedure for one of the infinitesimal generators in the optimal system (10) and list the
result for some other cases.
For example, consider the case in Theorem 2 when and ; therefore, we have . For determining the independent invariants , we ought to solve the PDE , that is, . For solving this PDE, the following
associated characteristic ODE must be solved: . Hence, three functionally independent invariants , , and are obtained. If we treat as a function of and , we can compute formulae for the derivatives of
with respect to , , and in terms of , , and the derivatives of with respect to and . By using the chain rule and the fact that , we have . After substituting the above relations into (5), we obtain . So
the reduced equation is . This equation has two independent variables and and one dependent variable . In a similar way, we can compute all of the similarity-reduction equations corresponding to the
infinitesimal symmetries mentioned in Theorem 2. Some of them are listed in Table 3.
5. Group Invariant Solutions of ((2D) Rf) Equation
In this section, we reduce the equations obtained in the last section to ODEs and solve them.
For example, (17) admits a 4-parameter family of Lie operators with the following infinitesimal generators: . The invariants associated with the infinitesimal generator are and . By substituting these
invariants into (17) and using the chain rule, the reduced equation is obtained as follows: . The solution of this equation is , where and are arbitrary constants; therefore we have . So is a solution of (5).
By similar arguments, we can obtain other invariant solutions of (17). Also, by reducing the other equations in Table 3, we can find other solutions of the ((2D) Rf) equation. Some of the similarity-reduced
equations are listed in Table 4.
In Table 5, we obtain the invariant solutions of ((2D) Rf) equation corresponding to some of the similarity-reduced equations.
6. Conclusion
In this paper, by using the adjoint representation of the symmetry group on its Lie algebra, we have constructed an optimal system of one-dimensional subalgebras for a well-known partial differential
equation in mathematical physics, called the two-dimensional Ricci flow equation. Moreover, we have obtained the similarity-reduced equations for each element of the optimal system, as well as some group
invariant solutions of the two-dimensional Ricci flow equation.
1. R. S. Hamilton, "Three-manifolds with positive Ricci curvature," Journal of Differential Geometry, vol. 17, no. 2, pp. 255–306, 1982.
2. G. Perelman, "Finite extinction time for the solutions to the Ricci flow on certain three-manifolds," http://arxiv.org/abs/math/0307245.
3. G. Perelman, "Ricci flow with surgery on three-manifolds," http://arxiv.org/abs/math.DG/0303109.
4. G. Perelman, "The entropy formula for the Ricci flow and its geometric applications," http://arxiv.org/abs/math.DG/0211159.
5. K. Gawedzki, "Lectures on conformal field theory," in Quantum Fields and Strings: A Course for Mathematicians, pp. 727–805, American Mathematical Society, Princeton, NJ, USA, 1996–97.
6. I. Bakas, "Ricci flows and infinite dimensional algebras," Fortschritte der Physik, vol. 52, no. 6–7, pp. 464–471, 2004.
7. S. Lie, "On integration of a class of linear partial differential equations by means of definite integrals," Archive for Mathematical Logic, vol. 6, pp. 328–368, 1881; translation by N. H. Ibragimov.
8. P. J. Olver, Applications of Lie Groups to Differential Equations, Springer, New York, NY, USA, 1986.
9. G. W. Bluman and J. D. Cole, Similarity Methods for Differential Equations, vol. 13 of Applied Mathematical Sciences, Springer, New York, NY, USA, 1974.
10. G. W. Bluman and S. Kumei, Symmetries and Differential Equations, Springer, New York, NY, USA, 1989.
11. P. J. Olver and P. Rosenau, "Group-invariant solutions of differential equations," SIAM Journal on Applied Mathematics, vol. 47, no. 2, pp. 263–278, 1987.
12. R. Z. Zhdanov, I. M. Tsyfra, and R. O. Popovych, "A precise definition of reduction of partial differential equations," Journal of Mathematical Analysis and Applications, vol. 238, no. 1, pp. 101–123, 1999.
13. G. Cicogna, "A discussion on the different notions of symmetry of differential equations," Proceedings of Institute of Mathematics of NAS of Ukraine, vol. 50, pp. 77–84, 2004.
14. G. W. Bluman and J. D. Cole, "The general similarity solutions of the heat equation," Journal of Mathematics and Mechanics, vol. 18, pp. 1025–1042, 1969.
15. L. V. Ovsiannikov, Group Analysis of Differential Equations, Academic Press, New York, NY, USA, 1982.
16. R. Cimpoiasu and R. Constantinescu, "Symmetries and invariants for the 2D-Ricci flow model," Journal of Nonlinear Mathematical Physics, vol. 13, no. 2, pp. 285–292, 2006.
17. M. Nadjafikhah, "Lie symmetries of inviscid Burgers' equation," Advances in Applied Clifford Algebras, vol. 19, no. 1, pp. 101–112, 2009.
Statics eBook: Area Moment of Inertia
STATICS - EXAMPLE
Example 1
Find the moment of inertia of the shaded area about
a) the x axis
b) the y axis
Solution (a)
Recall, the moment of inertia is the second moment of the area about a given axis or line.
For part a) of this problem, the moment of inertia is about the x-axis. The differential element, dA, is usually broken into two parts, dx and dy (dA = dx dy), which makes integration easier. This
also requires the integral be split into integration along the x direction (dx) and along the y direction (dy). The order of integration, dx or dy, is optional, but usually there is an easy way,
and a more difficult way.
For this problem, the integration will be done first along the y direction, and then along the x direction. This order is easier since the curve is given as y equal to a function of x.
The diagram at the left shows dy going from 0 to the curve, or just y. Thus, the limits of integration are 0 to y. The next integration, along the x direction, goes from 0 to 4. The final
integration is then
Expanding the bracket by using the formula,
(a-b)^3 = a^3 - 3 a^2 b + 3 a b^2 - b^3
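The binomial expansion used above can be checked numerically; this quick sketch (not part of the original eBook) verifies the identity for a few sample values.

```python
# Verify (a - b)^3 == a^3 - 3 a^2 b + 3 a b^2 - b^3 numerically.
def expand_cube_diff(a, b):
    """Right-hand side of the binomial expansion of (a - b)^3."""
    return a ** 3 - 3 * a ** 2 * b + 3 * a * b ** 2 - b ** 3

for a, b in [(2.0, 1.0), (4.0, 0.5), (-3.0, 2.0)]:
    assert abs((a - b) ** 3 - expand_cube_diff(a, b)) < 1e-9
```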
Solution (b)
Similar to the previous solution in part a), the moment of inertia is the second moment of the area about a given axis or line. But in this case, it is about the y-axis, or
The integral is still split into integration along the x direction (dx) and along the y direction (dy). Again, the integration will be done first along the y direction, and then along the x
direction. The diagram at the left shows dy going from 0 to the curve, or just y. Thus, the limits of integration are 0 to y. The next integration, along the x direction, goes from 0 to 4. The
final integration is then
The area is more closely distributed about the y-axis than the x-axis. Thus, the moment of inertia of the shaded region is less about the y-axis than about the x-axis.
Example 2
Determine the moment of inertia of y = 2 - 2x^2 about the x axis. Calculate the moment of inertia in two different ways. First, (a) by taking a differential element, having a thickness dx and
second, (b) by using a horizontal element with a thickness, dy.
a) The area of the differential element parallel to the y axis is dA = y dx. The distance from the x axis to the centroid of the element is named ȳ:
ȳ = y/2
Using the parallel axis theorem, the moment of inertia of this element about x axis is
For a rectangular shape, I is bh^3/12. Substituting I[x], dA, and y gives,
Performing the integration, gives,
(b) First, the function should be rewritten in terms of y as the independent variable. Due to the x^2 term, there are a positive and a negative branch, and the curve can be expressed as two similar
functions mirrored about the y axis. The function on the right side of the axis can be expressed as
The area of the differential element parallel to x axis is
Performing the integration gives,
Performing a numerical integration on a calculator, or substituting t = 2(2 - y), the above integral can be evaluated as
As expected, both methods (a) and (b) provide the same answer.
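The two strip orientations in Example 2 can also be checked numerically. The sketch below (not part of the eBook) evaluates I_x for the region bounded by y = 2 - 2x^2 and the x axis both with vertical strips (dI_x = y^3/3 dx, the result of the parallel axis theorem in part a) and with horizontal strips (dI_x = y^2 * 2x dy); both agree with the exact value 256/105. The function names and midpoint-rule resolution are illustrative choices.

```python
import math

def ix_vertical(n=100_000):
    """I_x via vertical strips: dI_x = y^3/3 dx, y = 2 - 2x^2, x in [-1, 1]."""
    total, h = 0.0, 2.0 / n
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        y = 2.0 - 2.0 * x * x
        total += (y ** 3 / 3.0) * h
    return total

def ix_horizontal(n=100_000):
    """I_x via horizontal strips: dI_x = y^2 * (2x) dy, with
    x = sqrt((2 - y)/2) on the right branch, y in [0, 2]."""
    total, h = 0.0, 2.0 / n
    for i in range(n):
        y = (i + 0.5) * h
        x = math.sqrt((2.0 - y) / 2.0)
        total += (y ** 2) * (2.0 * x) * h
    return total

EXACT = 256.0 / 105.0  # closed-form value of the integral, about 2.438
```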
Elementary Algebra Tutors In Los Angeles
Algebra Tutors
Success in math class enables students to succeed in many other courses: understanding the basic concepts of algebra has an impact on chemistry, physics, engineering, technology, computer science, and more. Students in middle school through high school must complete pre-algebra and algebra to graduate, so taking algebra courses is a necessary part of a student’s education. It is very important that a student struggling in one of these areas gets the help they need; sometimes the answer is to hire a private tutor. We can provide a private math tutor in the following algebra classes:
Pre-algebra - Pre-algebra is a common name for a course in middle school mathematics. In the United States, it is generally taught between the seventh and ninth grades, although students have taken
this course as early as fifth or sixth grade. The objective of pre-algebra is to prepare the student for the study of algebra. Pre-algebra includes several broad subjects: review of natural- and
whole-number arithmetic; introduction of new types of numbers such as integers, fractions, decimals and negative numbers; Factorization of natural numbers; Properties of operations (associative,
distributive and so on); Simple roots and powers; Rules of evaluation of expressions, such as operator precedence and use of parentheses; Basics of equations, including rules for invariant
manipulation of equations; Variables and exponentiation. Pre-algebra often includes some basic subjects from geometry, mostly the kinds that further understanding of algebra and show how it is used,
such as area, volume, and perimeter. Wikipedia Pre-algebra.
Algebra I & II - Algebra is a branch of mathematics concerning the study of structure, relation, and quantity. Together with geometry, analysis, combinatorics, and number theory, algebra is one of the
main branches of mathematics. Elementary algebra is often part of the curriculum in secondary education and provides an introduction to the basic ideas of algebra, including effects of adding and
multiplying numbers, the concept of variables, definition of polynomials, along with factorization and determining their roots. Algebra is much broader than elementary algebra and can be generalized.
In addition to working directly with numbers, algebra covers working with symbols, variables, and set elements. Addition and multiplication are viewed as general operations, and their precise
definitions lead to structures such as groups, rings and fields. Wikipedia Algebra
Abstract Algebra - Abstract algebra is the subject area of mathematics that studies algebraic structures, such as groups, rings, fields, modules, vector spaces, and algebras. The phrase abstract
algebra was coined at the turn of the 20th century to distinguish this area from what was normally referred to as algebra, the study of the rules for manipulating formulas and algebraic expressions
involving unknowns and real or complex numbers, often now called elementary algebra. The distinction is rarely made in more recent writings. Contemporary mathematics and mathematical physics make
intensive use of abstract algebra; for example, theoretical physics draws on Lie algebras. Subject areas such as algebraic number theory, algebraic topology, and algebraic geometry apply algebraic
methods to other areas of mathematics. Representation theory, roughly speaking, takes the 'abstract' out of 'abstract algebra', studying the concrete side of a given structure; see model theory.
Wikipedia Abstract Algebra
No matter the level of the algebra course that the student is taking, we have expert tutors available and ready to help. All of our algebra tutors have a degree in mathematics, science, or a related
field (like accounting). We are so confident in our algebra tutors that you can meet with them for free. Just ask your tutoring coordinator about our Meet and Greet program.
Los Angeles Tutors
Known as L.A. and “The City of Angels,” Los Angeles, CA is the largest city in California and the second largest in the United States. As home of the Hollywood Bowl, Dodger Stadium, and the Kodak Theatre, it is not a city one might first associate with its schools. However, Los Angeles is home to some of the most coveted schools in America. L.A. boasts 162 magnet public schools to compete with the private school sector.
There are over 20 public and private colleges in Los Angeles. These residents take their education very seriously and so do we! As a city with students of all ages, we have cultivated an elite group
of tutors to help students with any subject necessary. Our highly customized service means that you determine exactly who your tutor will be, where the tutoring will take place, and for how long. Our
reputation as a premium service is evident in the multitude of testimonials and referrals we have received from parents, students, and schools throughout the Los Angeles area.
Our Tutoring Service
We offer our clients choice when searching for a tutor, and we work with you all the way through the selection process. When you choose to work with one of our tutors, expect quality,
professionalism, and experience. We will never offer you a tutor that is not qualified in the specific subject area you request. We will provide you with the degrees, credentials, and certifications
each selected tutor holds so that you have the same confidence in them that we do. And for your peace of mind, we conduct a nationwide criminal background check, sexual predator check, and social
security verification on every single tutor we offer you. We will find you the right tutor so that you can find success!
Adaptive Asset Allocation: Combining Momentum with Minimum Variance
The concept of Adaptive Asset Allocation (AAA) was presented in a whitepaper by Butler, Philbrick and Gordillo this summer: Adaptive Asset Allocation. One of the core principles of AAA is that
portfolio allocation should be dynamic rather than strategic; in other words, an investor’s portfolio composition should adapt over time to respond to changes in both the expected return of different
asset classes and also the overall risk of the portfolio. This theoretically ensures that investors can adequately grow and preserve their capital and withdraw to meet liabilities through different
economic regimes (deflation, inflation, and many other variants).
The alluring promise of AAA rests upon the ability to dynamically adjust to different economic conditions. In a world where assets compete for capital, the best way to forecast economic conditions
is to observe the relative pricing of the most liquid securities across markets. The time series of major asset classes represent observed variables that allow investors to detect changes in the
economy. Equity markets give us the ability to observe expected business conditions, commodities give us the ability to observe expected changes in inflation, while bonds give us the ability to
observe expected changes in interest rates. Real Estate is a market that provides insight into all three factors, and is a direct measure of consumer purchasing power. Observing world equity markets
can tell us how business conditions are evolving globally. These asset classes are akin to a diverse ecosystem where different species thrive at different times due to changes in the environment.
If we can detect shifts in the system, we can understand in a probabilistic sense which species will do the best across a range of likely scenarios. By analyzing the financial network of asset
classes, it is possible to express our views more concisely through a portfolio allocation across assets that maximizes the chance of being right while minimizing the cost of being wrong.
The analysis of major assets classes through time series data requires integrating historical returns, correlations and measurements of volatility to create an efficient portfolio allocation. The
goal of this framework is to dynamically identify the best performing asset classes and to manage risk at the portfolio level. The paper presents an example of a strategy (not the actual proprietary
strategy) that effectively integrates these three variables into a simple and robust framework. The logic for this example is very straightforward yet powerful in its simplicity and is validated by
academic research. Momentum is chosen as the method to integrate expected returns information since the rank of historical returns (versus the raw return) represents a robust method of dynamically
identifying which assets will perform the best in the near future (2 weeks to 3 months). It is currently well-accepted by even the greatest skeptics that momentum is a powerful and reliable anomaly.
Minimum-variance optimization (MVO) is chosen as the method to manage risk at the portfolio level. It is well-accepted that MVO is effective at managing portfolio risk because it minimizes variance
as a function of both asset cross-correlations and volatilities. Since MVO does not use expected returns and instead relies on the highly forecastable elements of volatility and correlation, it is
considerably more robust to estimation error than classic mean-variance optimization and performs better out of sample in terms of ex-post Sharpe ratio than all other existing conventional portfolio
algorithms. Besides the theoretical appeal of MVO, these claims have also been validated by numerous academic studies.
The test presented in the paper uses a 6-month parameter (120-day) for momentum that selects the top 50% of asset classes and uses a weighted version of minimum-variance with an approximate lookback
of 20 days to calculate portfolio allocations. The holding period for rebalancing was monthly. The obvious question is whether this particular combination of momentum and minimum-variance parameters
was “lucky” and whether it represents a particular case of data snooping. The charts below summarize backtests that were run from 1995 to 2012 using the assets provided in the whitepaper with
different momentum (ROC) and minimum-variance (MINVAR) parameters. What is clear from the tables is that the performance in absolute (CAGR) and risk-adjusted terms (Gross Sharpe) is very consistent
across parameters. While performance is more sensitive to the momentum parameters than minimum-variance, both are very stable. It is quite rare in system development to see this type of consistency.
When the average performance is very close to the optimal performance and the standard deviation of performance is low, the chance of regret by selecting any pair of parameters is also quite low. Of
course, one can avoid this pitfall by simply investing in the pool of all possible combinations. The methodology of this toy strategy can be vastly improved to increase risk-adjusted returns and
robustness, but that is perhaps a subject for another post.
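As a rough illustration of the rebalance step described above, here is a minimal sketch in Python. It is my own reconstruction on simulated data, not the whitepaper's code: rank assets by trailing 120-day return, keep the top half, then weight the survivors with the closed-form unconstrained minimum-variance solution (weights proportional to the inverse covariance matrix times a vector of ones). A production version would enforce long-only weights with a quadratic solver and shrink the covariance estimate.

```python
# Toy AAA rebalance step: 6-month momentum filter + minimum-variance
# weighting. Simplified reconstruction with hypothetical data; the
# unconstrained closed-form min-var solution is used, so individual
# weights can go slightly negative without a long-only constraint.
import numpy as np

def aaa_weights(prices, mom_lookback=120, minvar_lookback=20):
    """prices: (n_days, n_assets) array of prices; returns weights."""
    n_assets = prices.shape[1]
    # 1. Momentum: rank by trailing total return, keep the top 50%.
    momentum = prices[-1] / prices[-mom_lookback] - 1.0
    keep = np.argsort(momentum)[-(n_assets // 2):]
    # 2. Minimum variance on the survivors over a short lookback:
    #    w = inv(C) 1 / (1' inv(C) 1), with C the daily-return covariance.
    rets = np.diff(np.log(prices[-minvar_lookback:, keep]), axis=0)
    cov = np.cov(rets, rowvar=False)
    raw = np.linalg.solve(cov, np.ones(len(keep)))
    weights = np.zeros(n_assets)
    weights[keep] = raw / raw.sum()
    return weights

rng = np.random.default_rng(0)
prices = np.exp(np.cumsum(rng.normal(0.0003, 0.01, (300, 10)), axis=0))
w = aaa_weights(prices)
print(np.round(w, 3), w.sum())  # 5 nonzero weights summing to 1
```

The two lookbacks correspond to the ROC and MINVAR parameters varied in the tables above, so the same function can be re-run over a grid of parameter pairs to reproduce that style of sensitivity test.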
12 thoughts on “Adaptive Asset Allocation: Combining Momentum with Minimum Variance”
1. David –
Thanks for diving further into this paper — I became intrigued when I first read it and your additional analysis is insightful.
First question: would you mind sharing the vehicles that you used (i.e. the mutual funds, etc.) for each of the asset classes? I’ve been hunting around Yahoo! Finance in an attempt to find ones
that work (at least back to 1995) and I’m coming up short.
Second question: what exactly is meant by “MINVAR parameter”? Is this the adjustment of the maximum portfolio variance allowed?
Again, thanks for the work!
2. hi jonathan, thank you for your comments. we extended the indices that are associated with the underlying etfs using bloomberg data. the minvar parameter is the lookback for minimum variance
optimization. (ie the variance/covariance matrix lookback)
3. Hi David, thank you for this – really interesting work. I’ve been appreciating reading your posts for some time now.
I tried implementing this myself on reading the Butler/Philbrick/Gordillo paper when it was published. I managed the minimum variance algorithm for two assets, and did the same sort of tests as
you did varying the lookback periods and found similar stability.
However, I got stuck on working out how to implement weightings for more than two assets. You have obviously implemented that fine – what is the formula you used please?
Keep up the good work,
□ hi nick, thanks for the kind words. i used a proprietary minimum variance algorithm that does not use a conventional matrix inversion. the results using this algorithm are similar, but slightly better than the conventional method. this is a good paper on setting up minimum variance in excel: http://faculty.washington.edu/ezivot/econ424/Efficient%20Portfolios%20in%20Excel%20Using%20the%20Solver%20and%20Matrix%20Algebra.pdf For long-only versions with backtesting you will have to implement the critical line algorithm unless you have a quadratic solver such as those available in packages like R (see Goldfarb-Idnani). using the solver in excel is adequate, but will require some manual labor or a very clunky programming framework to do a longer backtest.
hope that helps,
4. Dear David,
Many thanks for that. I have looked at the excel paper, very helpful. I will try that first and then, as I want to try longer backtests, I think I had better learn how to integrate quadratic
solvers in R into my code…
5. David, a thesis very close to my heart. Thanks for posting and your extra comments.
6. David, can you tell me how to find the data from 1995 – 2012 referred to in the paper. I find that only a few of the ETFs go back to 1995 and I have not been able to find the corresponding index
data. Thanks………Toby
7. Hi, thanks for sharing this. Although the paper had a generic list of assets, I was also wondering if you could provide the list of ETF’s and/or assets from Bloomberg and or other source to
replicate. Will
8. David,
Would you mind sharing how you calculate volatility for your analysis?
9. Great writeup. One simple questions: Do the numbers 20,40,60,..240 represent the number of days of data used to calculate momentum and variance?
□ hi ashok, they represent the momentum lookbacks. i believe the variance used a 60 day lookback.
Let \(f: \mathbb{R} \to \mathbb{R}\) be a real-valued function defined on the set of real numbers that satisfies\[f(x+y) \le yf(x) + f(f(x))\]for all real numbers \(x\) and \(y\). Prove that \(f(x) = 0\) for all \(x \le 0\).
Setting \(y = 0\) in the given inequality gives \[f(x) \le f(f(x)).\qquad(1)\] Setting \(x = 0\) gives \[f(y) \le yf(0) + f(f(0)).\qquad(2)\] Setting \(y = f(x) - x\) gives \(f(f(x)) \le f(x)^2 - xf(x) + f(f(x))\), so \[f(x)^2 \ge xf(x).\qquad(3)\] Setting \(y = f(0) - x\) gives \[f(f(0)) \le f(x)f(0) - xf(x) + f(f(x)).\qquad(4)\] Substituting \(y = f(x)\) into (2) gives \(f(f(x)) \le f(x)f(0) + f(f(0))\); combining this with (4), \[f(f(0)) \le 2f(x)f(0) - xf(x) + f(f(0)) \implies 2f(0)f(x) \ge xf(x).\qquad(5)\] In particular: (i) if \(f(x) > 0\) then \(2f(0) \ge x\), and (ii) if \(f(x) < 0\) then \(2f(0) \le x\).
Now suppose \(f(0) \ne 0\).
Case 1: \(f(0) > 0\). Setting \(y = \frac{-1 - f(f(0))}{f(0)}\) in (2) gives \(f(y) \le -1 < 0\), so (ii) yields \(2f(0) \le y\), i.e. \(2f(0)^2 \le -1 - f(f(0))\), hence \(f(f(0)) \le -1\). But (1) at \(x = 0\) gives \(f(0) \le f(f(0)) \le -1\), contradicting \(f(0) > 0\).
Case 2: \(f(0) < 0\). If \(f(f(0)) > 0\), then (i) applied at the point \(f(0)\) gives \(2f(0) \ge f(0)\), i.e. \(f(0) \ge 0\), a contradiction; hence \(f(f(0)) \le 0\), and (2) becomes \[f(y) \le yf(0).\] Setting \(y = f(x)\) here gives \(f(f(x)) \le f(0)f(x)\), so by (1), \(f(x) \le f(0)f(x)\), i.e. \(f(x)(1 - f(0)) \le 0\). Since \(f(0) < 0\) we have \(1 - f(0) > 0\), so \(f(x) \le 0\) for all \(x\). Now take any \(x < 2f(0)\): if \(f(x) < 0\), then (ii) forces \(2f(0) \le x\), a contradiction; if \(f(x) = 0\), the original inequality gives \(f(x + y) \le yf(x) + f(f(x)) = f(0) < 0\) for every \(y\), so \(f\) is everywhere negative, again contradicting \(f(x) = 0\).
Hence \(f(0) = 0\). Then (2) gives \(f(y) \le f(f(0)) = f(0) = 0\), so \(f(x) \le 0\) for all \(x\), while (5) becomes \(xf(x) \le 0\), so \(f(x) \ge 0\) whenever \(x < 0\). Therefore, for \(x \le 0\) we have both \(f(x) \le 0\) and \(f(x) \ge 0\), which means \(f(x) = 0\).
Lévy Processes
Filed under: Special Processes,Stochastic Calculus Notes — George Lowther @ 2:30 PM
Tags: Brownian Motion, Feller process, Lévy process, math.PR, Poisson process, Stochastic Calculus
Continuous-time stochastic processes with stationary independent increments are known as Lévy processes. In the previous post, it was seen that processes with independent increments are described by
three terms — the covariance structure of the Brownian motion component, a drift term, and a measure describing the rate at which jumps occur. Being a special case of independent increments
processes, the situation with Lévy processes is similar. However, stationarity of the increments does simplify things a bit. We start with the definition.
Definition 1 (Lévy process) A d-dimensional Lévy process X is a stochastic process taking values in ${{\mathbb R}^d}$ such that
□ independent increments: ${X_t-X_s}$ is independent of ${\{X_u\colon u\le s\}}$ for any ${s<t}$.
□ stationary increments: ${X_{s+t}-X_s}$ has the same distribution as ${X_t-X_0}$ for any ${s,t>0}$.
□ continuity in probability: ${X_s\rightarrow X_t}$ in probability as s tends to t.
More generally, it is possible to define the notion of a Lévy process with respect to a given filtered probability space ${(\Omega,\mathcal{F},\{\mathcal{F}_t\}_{t\ge0},{\mathbb P})}$. In that case,
we also require that X is adapted to the filtration and that ${X_t-X_s}$ is independent of ${\mathcal{F}_s}$ for all ${s < t}$. In particular, if X is a Lévy process according to definition 1 then it
is also a Lévy process with respect to its natural filtration ${\mathcal{F}_t=\sigma(X_s\colon s\le t)}$. Note that slightly different definitions are sometimes used by different authors. It is often
required that ${X_0}$ is zero and that X has cadlag sample paths. These are minor points and, as will be shown, any process satisfying the definition above will admit a cadlag modification.
The most common example of a Lévy process is Brownian motion, where ${X_t-X_s}$ is normally distributed with zero mean and variance ${t-s}$ independently of ${\mathcal{F}_s}$. Other examples include
Poisson processes, compound Poisson processes, the Cauchy process, gamma processes and the variance gamma process.
For example, the symmetric Cauchy distribution on the real numbers with scale parameter ${\gamma > 0}$ has probability density function p and characteristic function ${\phi}$ given by,
$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle p(x)=\frac{\gamma}{\pi(\gamma^2+x^2)},\smallskip\\ &\displaystyle\phi(a)\equiv{\mathbb E}\left[e^{iaX}\right]=e^{-\gamma\vert a\vert}. \end{array}$ (1)
From the characteristic function it can be seen that if X and Y are independent Cauchy random variables with scale parameters ${\gamma_1}$ and ${\gamma_2}$ respectively then ${X+Y}$ is Cauchy with
parameter ${\gamma_1+\gamma_2}$. We can therefore consistently define a stochastic process ${X_t}$ such that ${X_t-X_s}$ has the symmetric Cauchy distribution with parameter ${t-s}$ independent of $
{\{X_u\colon u\le t\}}$, for any ${s < t}$. This is called a Cauchy process, which is a purely discontinuous Lévy process. See Figure 1.
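A path of the Cauchy process is easy to simulate from this construction: increments over disjoint steps are drawn independently, with scale parameter equal to the step length. A quick sketch (my own illustration; NumPy's `standard_cauchy` samples the ${\gamma=1}$ distribution, and multiplying by dt rescales to ${\gamma={\rm d}t}$):

```python
# Simulate a Cauchy process path on [0, 1]: increments over steps of
# length dt are independent symmetric Cauchy with scale parameter dt.
import numpy as np

rng = np.random.default_rng(1)
n, T = 1000, 1.0
dt = T / n
X = np.concatenate([[0.0], np.cumsum(dt * rng.standard_cauchy(n))])

# Check the stability property: a sum of m Cauchy(1/m) increments is
# Cauchy(1), so E[exp(i a X_1)] = exp(-|a|), as in equation (1).
m = 100
X1 = (rng.standard_cauchy((50_000, m)) / m).sum(axis=1)
a = 2.0
print(np.mean(np.exp(1j * a * X1)).real)  # close to exp(-2) ~ 0.135
```

Note that the sample mean of the increments does not converge (the Cauchy distribution has no mean), so the characteristic function, not the average, is the right empirical check here.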
Lévy processes are determined by the triple ${(\Sigma,b,u)}$, where ${\Sigma}$ describes the covariance structure of the Brownian motion component, b is the drift component, and ${u}$ describes the
rate at which jumps occur. The distribution of the process is given by the Lévy-Khintchine formula, equation (3) below.
Theorem 2 (Lévy-Khintchine) Let X be a d-dimensional Lévy process. Then, there is a unique function ${\psi\colon{\mathbb R}\rightarrow{\mathbb C}}$ such that
$\displaystyle {\mathbb E}\left[e^{ia\cdot (X_t-X_0)}\right]=e^{t\psi(a)}$ (2)
for all ${a\in{\mathbb R}^d}$ and ${t\ge0}$. Also, ${\psi(a)}$ can be written as
$\displaystyle \psi(a)=ia\cdot b-\frac{1}{2}a^{\rm T}\Sigma a+\int _{{\mathbb R}^d}\left(e^{ia\cdot x}-1-\frac{ia\cdot x}{1+\Vert x\Vert}\right)\,du(x)$ (3)
where ${\Sigma}$, b and ${u}$ are uniquely determined and satisfy the following,
1. ${\Sigma\in{\mathbb R}^{d^2}}$ is a positive semidefinite matrix.
2. ${b\in{\mathbb R}^d}$.
3. ${u}$ is a Borel measure on ${{\mathbb R}^d}$ with ${u(\{0\})=0}$ and,
$\displaystyle \int_{{\mathbb R}^d}\Vert x\Vert^2\wedge 1\,du(x)<\infty.$ (4)
Furthermore, ${(\Sigma,b,u)}$ uniquely determine all finite distributions of the process ${X-X_0}$.
Conversely, if ${(\Sigma,b,u)}$ is any triple satisfying the three conditions above, then there exists a Lévy process satisfying (2,3).
Proof: This result is a special case of Theorem 1 from the previous post, where it was shown that there is a continuous function ${{\mathbb R}^d\times{\mathbb R}_+\rightarrow{\mathbb C}}$, ${(a,t)\
mapsto\psi_t(a)}$ such that ${\psi_0(a)=0}$ and
$\displaystyle {\mathbb E}[e^{ia\cdot(X_t-X_0)}]=e^{\psi_t(a)}.$
Using independence and stationarity of the increments of X,
$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle e^{\psi_{s+t}(a)}&\displaystyle={\mathbb E}[e^{ia\cdot(X_{s+t}-X_t)}e^{ia\cdot(X_t-X_0}]\smallskip\\ &\displaystyle={\mathbb
E}[e^{ia\cdot(X_s-X_0)}]{\mathbb E}[e^{ia\cdot(X_t-X_0)}]\smallskip\\ &\displaystyle=e^{\psi_s(a)+\psi_t(a)}. \end{array}$
So, ${\psi_{s+t}=\psi_s+\psi_t}$ and, by continuity in t, this gives ${\psi_t(a)=t\psi_1(a)}$. Taking ${\psi(a)\equiv\psi_1(a)}$ gives (2).
Again using Theorem 1 of the previous post, there is a uniquely determined triple ${(\tilde\Sigma,\tilde b,\mu)}$ such that
$\displaystyle t\psi(a)=ia\cdot\tilde b_t-\frac12a^{\rm T}\tilde\Sigma_t a+\int_{{\mathbb R}^d\times[0,t]}\left(e^{ia\cdot x}-1-\frac{ia\cdot x}{1+\Vert x\Vert}\right)\,d\mu(x,s).$ (5)
Here, ${t\mapsto\tilde\Sigma_t}$ is a continuous function from ${{\mathbb R}_+}$ to ${{\mathbb R}^{d^2}}$ such that ${\tilde\Sigma_t-\tilde\Sigma_s}$ is positive semidefinite for all ${t > s}$. Also,
${t\mapsto\tilde b_t}$ is a continuous function from ${{\mathbb R}_+}$ to ${{\mathbb R}^d}$ and ${\mu}$ is a Borel measure on ${{\mathbb R}^d\times{\mathbb R}_+}$ with ${\mu(\{0\}\times{\mathbb R}_+)
=0}$ and
$\displaystyle \int_{{\mathbb R}^d\times[0,t]}\Vert x\Vert^2\wedge1\,d\mu(x,s) < \infty.$
Taking ${\Sigma=\tilde\Sigma_1}$, ${b=\tilde b_1}$ and defining ${u}$ by ${u(S)=\mu(S\times[0,1])}$ it can be seen that (3) follows from (5) with ${t=1}$, and that ${(\Sigma,b,u)}$ satisfy the
required conditions. Conversely, if (3) is satisfied, then taking ${\tilde\Sigma_t=t\Sigma}$, ${\tilde b_t=tb}$ and ${d\mu(x,t)=du(x)\,dt}$ gives (5). Then, uniqueness of ${(\tilde\Sigma,\tilde b,\
mu)}$ implies that ${(\Sigma,b,u)}$ are uniquely determined by (3).
Finally, if ${(\Sigma,b,u)}$ satisfy the required conditions, then taking ${\tilde\Sigma_t=t\Sigma}$, ${\tilde b_t=tb}$ and ${d\mu(x,t)=du(x)\,dt}$, Theorem 1 of the previous post says that there
exists an independent increments process satisfying (5). This is then the required Lévy process. $\Box$
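Equation (2) can be checked empirically in a simple special case. For a compound Poisson process with jump rate ${\lambda}$ and i.i.d. jump distribution J (a Lévy process with ${\Sigma=0}$ and Lévy measure ${\lambda}$ times the law of J, the truncation term in (3) being absorbed into b), the characteristic function is the classical ${{\mathbb E}[e^{iaX_t}]=\exp(t\lambda({\mathbb E}[e^{iaJ}]-1))}$. The sketch below, an illustration of mine with arbitrary parameters (rate 3, standard normal jumps), compares the empirical characteristic function against this:

```python
# Empirical check of E[exp(i a (X_t - X_0))] = exp(t psi(a)) for a
# compound Poisson process: jump rate lam, i.i.d. N(0,1) jumps, so
# psi(a) = lam * (exp(-a^2/2) - 1).  (Illustrative parameters only.)
import numpy as np

rng = np.random.default_rng(7)
lam, t, a, n_paths = 3.0, 2.0, 1.5, 200_000

counts = rng.poisson(lam * t, n_paths)     # number of jumps by time t
X_t = rng.normal(0.0, np.sqrt(counts))     # sum of `counts` N(0,1) jumps
empirical = np.mean(np.exp(1j * a * X_t))
theory = np.exp(t * lam * (np.exp(-a**2 / 2.0) - 1.0))
print(empirical.real, theory)  # agree to Monte Carlo accuracy
```

The imaginary part of the empirical average is close to zero by symmetry of the jump distribution, matching the fact that ${\psi(a)}$ is real here.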
The measure ${u}$ above is called the Lévy measure of X, ${(\Sigma,b,u)}$ are referred to as the characteristics of X, and it is said to be purely discontinuous if ${\Sigma=0}$. Note that a Lévy
process with zero Lévy measure ${u=0}$ satisfies ${\psi(a)=ia\cdot b-\frac12a^{\rm T}\Sigma a}$, so is a Brownian motion with covariance matrix ${\Sigma}$ and drift ${b}$.
As an example, consider the purely discontinuous real-valued Lévy process with characteristics ${(0,0,u)}$ and ${du(x)=\frac{dx}{\pi x^2}}$. This satisfies (4), so determines a well-defined process.
Using the Lévy-Khintchine formula we can compute its characteristic function,
$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle\psi(a)&\displaystyle=\int_{-\infty}^\infty\left(e^{ia\cdot x}-1-\frac{iax}{1+\vert x\vert}\right)\frac{dx}{\pi x^2}\smallskip\\ &\displaystyle=-\frac{4}{\pi}\int_0^\infty\frac{\sin^2(ax/2)}{x^2}\,dx\smallskip\\ &\displaystyle=-\frac{2\vert a\vert}{\pi}\int_0^\infty\frac{\sin^2 y}{y^2}\,dy=-\vert a\vert. \end{array}$
Here, the identity ${e^{iax}+e^{-iax}-2=-4\sin^2(ax/2)}$ is being used, followed by the substitution ${y=\vert a\vert x/2}$. Comparing this with the characteristic function (1) of the Cauchy distribution shows that X is the Cauchy process.
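The last step uses the identity ${\int_0^\infty\sin^2 y/y^2\,dy=\pi/2}$, which is easy to confirm numerically. A rough check (truncating the integral at y = 2000 leaves a tail of roughly ${1/(2\cdot2000)}$, since ${\sin^2}$ averages to 1/2):

```python
# Numerical confirmation of the identity used in the last step:
# the integral of sin(y)^2 / y^2 over (0, inf) equals pi/2.
import numpy as np

y = np.linspace(1e-8, 2000.0, 2_000_001)   # integrand -> 1 as y -> 0
vals = np.sin(y)**2 / y**2
integral = float(np.sum((vals[1:] + vals[:-1]) / 2.0) * (y[1] - y[0]))
print(integral, np.pi / 2)  # agree to about 3 decimal places
```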
As mentioned above, Lévy processes are often taken to be cadlag by definition. However, Theorem 2 of the previous post states that all independent increments processes which are continuous in
probability have a cadlag version.
Theorem 3 Every Lévy process has a cadlag modification.
We can go further than this.
Theorem 4 Every cadlag Lévy process is a semimartingale.
Proof: Theorem 2 of the previous post states that a cadlag Lévy process X decomposes as ${X_t=bt+W+Y}$ where Y is a semimartingale and W is a continuous centered Gaussian process with independent
increments, hence a martingale. So, W is a semimartingale and so is X. $\Box$
The characteristics of a Lévy process fully determine its finite distributions since, by equation (3), they determine the characteristic function of the increments of the process. The following
theorem shows how the characteristics relate to the paths of the process and, in particular, the Lévy measure ${u}$ does indeed describe the jumps. This is just a specialization of Theorem 2 of the
previous post to the stationary increments case.
Theorem 5 Let X be a cadlag d-dimensional Lévy process with characteristics ${(\Sigma,b,u)}$. Then,
1. The process
$\displaystyle Y_t=X_t-X_0-\sum_{s\le t}\Delta X_s\Vert\Delta X_s\Vert / ( 1 + \Vert\Delta X_s\Vert)$ (6)
is integrable, and ${{\mathbb E}[Y_t]=tb}$. Furthermore, ${Y_t-bt}$ is a martingale.
2. The quadratic variation of X has continuous part ${[X^i,X^j]^c_t=t\Sigma^{ij}}$.
3. For any nonnegative measurable ${f\colon\mathbb{R}^d\rightarrow\mathbb{R}}$,
$\displaystyle tu(f)={\mathbb E}\left[\sum_{s\le t}1_{\{\Delta X_s\not=0\}}f(\Delta X_s)\right].$
In particular, for any measurable ${A\subseteq{\mathbb R}^d}$ the process
$\displaystyle X^A_t\equiv\sum_{s\le t}1_{\{\Delta X_s\in A\setminus\{0\}\}}$ (7)
is almost surely infinite for all ${t > 0}$ whenever ${u(A)}$ is infinite, otherwise it is a homogeneous Poisson process of rate ${u(A)}$. If ${A_1,A_2,\ldots,A_n}$ are disjoint measurable
subsets of ${{\mathbb R}^d}$ then ${X^{A_1},\ldots,X^{A_n}}$ are independent processes.
Furthermore, letting ${\mathcal{P}}$ be the predictable sigma-algebra and
$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} &\displaystyle{\mathbb R}^d\times{\mathbb R}_+\times\Omega\rightarrow{\mathbb R},\smallskip\\ &\displaystyle(x,t,\omega)\mapsto f
(x,t)(\omega) \end{array}$
be ${\mathcal{B}({\mathbb R}^d)\otimes\mathcal{P}}$-measurable such that ${f(0,t)=0}$ and ${\int_0^t\int_{{\mathbb R}^d}\vert f(x,s)\vert\,du(x)ds}$ is integrable (resp. locally integrable). Then,
$\displaystyle M^f_t\equiv\sum_{s\le t}f(\Delta X_s,s)-\int_0^t\int_{{\mathbb R}^d}f(x,s)\,du(x)ds$ (8)
is a martingale (resp. local martingale).
Proof: The first statement follows directly from the first statement of Theorem 2 of the previous post.
Now apply the decomposition ${X=bt+W+Y}$ from the second statement of Theorem 2 of the previous post, where W has quadratic variation ${[W^i,W^j]_t=\Sigma^{ij}t}$ and Y satisfies ${[Y^i,Y^j]^{\rm c}=
0}$. This gives ${[X^i,X^j]^{\rm c}_t=[W^i,W^j]_t=\Sigma^{ij}t}$ as required.
For the third statement above, define the measure ${d\mu(x,t)=du(x)\,dt}$ on ${{\mathbb R}^d\times{\mathbb R}_+}$. By the third statement of Theorem 2 of the previous post,
$\displaystyle {\mathbb E}\left[\sum_{s\le t}1_{\{\Delta X_s\not=0\}}f(\Delta X_s)\right]=\int f(x,s)1_{\{s\le t\}}\,d\mu(x,s)=tu(f).$
Also, as stated in Theorem 2 of the previous post, for a measurable ${A\subseteq{\mathbb R}^d\times{\mathbb R}_+}$, the random variable
$\displaystyle \eta(A)\equiv\sum_{t > 0}1_{\{\Delta X_t\not=0,(\Delta X_t,t)\in A\}}$
is almost surely infinite whenever ${\mu(A)=\infty}$ and Poisson distributed of rate ${\mu(A)}$ otherwise. Furthermore, ${\eta(A_1),\ldots,\eta(A_n)}$ are independent whenever ${A_1,\ldots,A_n}$ are
disjoint measurable subsets of ${{\mathbb R}^d\times{\mathbb R}_+}$. We can apply this to the process ${X^A_t=\eta(A\times[0,t])}$ defined by (7).
If ${A\subseteq{\mathbb R}^d}$ satisfies ${u(A)=\infty}$ then ${\mu(A\times[0,t])=tu(A)}$ is infinite for all ${t > 0}$, so ${X^A_t}$ is almost surely infinite. On the other hand, if ${u(A)}$ is
finite, consider a sequence of times ${0\le t_0 < t_1 <\cdots < t_n}$. The increments of ${X^A}$ are ${X^A_{t_k}-X^A_{t_{k-1}}=\eta(A\times(t_{k-1},t_k])}$ which are independent and Poisson
distributed with rates ${\mu(A\times(t_{k-1},t_k])=u(A)(t_k-t_{k-1})}$. So, ${X^A}$ is a homogeneous Poisson process of rate ${u(A)}$.
If ${A_1,\ldots,A_n}$ are disjoint measurable subsets of ${{\mathbb R}^d}$, then ${X^{A_k}}$ are Poisson processes (whenever ${u(A_k) < \infty}$) and, by construction, no two can ever jump
simultaneously. So, they are independent.
Finally, that (8) is a (local) martingale is given by the final statement of Theorem 2 of the previous post. $\Box$
The following characterization of the purely discontinuous Lévy processes is an immediate consequence of the second statement of Theorem 5.
Corollary 6 A cadlag Lévy process X is purely discontinuous if and only if its quadratic variation has zero continuous part, ${[X^i,X^j]^{\rm c}=0}$.
Any Lévy process decomposes uniquely into its continuous and purely discontinuous parts.
Lemma 7 A cadlag Lévy process X decomposes uniquely as ${X=W+Y}$ where W is a continuous centered Gaussian process with independent increments, ${W_0=0}$, and Y is a purely discontinuous Lévy process.
Furthermore, W and Y are independent and if X has characteristics ${(\Sigma,b,u)}$ then W and Y have characteristics ${(\Sigma,0,0)}$ and ${(0,b,u)}$ respectively.
Proof: Theorem 2 of the previous post says that X decomposes uniquely as ${X_t=bt+W_t+\tilde Y_t}$ where W is a continuous centered Gaussian process with independent increments, ${W_0=0}$, and ${\tilde Y}$ is a
semimartingale with independent increments whose quadratic variation has zero continuous part ${[\tilde Y^i,\tilde Y^j]^{\rm c}=0}$. Furthermore, W and ${\tilde Y}$ are independent Lévy processes with
characteristics ${(\Sigma,0,0)}$ and ${(0,0,u)}$ respectively.
So, taking ${Y_t=bt+\tilde Y}$ gives the required decomposition, satisfying the required properties. Conversely, supposing that ${X=W^\prime+Y^\prime}$ is any other such decomposition, uniqueness of
the decomposition ${X_t=bt+W_t+\tilde Y_t=bt + W^\prime_t+(Y^\prime_t-bt)}$ gives ${W=W^\prime}$ and ${Y^\prime_t=\tilde Y_t+bt=Y_t}$. $\Box$
Recall that for any independent increments process X which is continuous in probability, the space-time process ${(X_t,t)}$ is Feller. For Lévy processes, where the increments of X are stationary, we
can use a very similar proof to show that X itself is a Feller process.
Lemma 8 Let X be a d-dimensional Lévy process. For each ${t\ge0}$ define the transition probability ${P_t}$ on ${{\mathbb R}^d}$ by
$\displaystyle P_tf(x)={\mathbb E}\left[f(X_t-X_0+x)\right]$
for nonnegative measurable ${f\colon{\mathbb R}^d\rightarrow{\mathbb R}}$.
Then, X is a Markov process with Feller transition function ${\{P_t\}_{t\ge0}}$.
Proof: To show that ${P_t}$ defines a Markov transition function, the Chapman-Kolmogorov equations ${P_sP_t=P_{s+t}}$ need to be verified. The stationary independent increments property gives
$\displaystyle P_tf(x)={\mathbb E}[f(X_{s+t}-X_s+x)]={\mathbb E}[f(X_{s+t}-X_s+x)\mid\mathcal{F}_s]$ (9)
for times ${s,t\ge 0}$. As the expectation is conditioned on ${\mathcal{F}_s}$, we can replace x by any ${\mathcal{F}_s}$-measurable random variable. In particular,
$\displaystyle P_tf(X_s-X_0+x)={\mathbb E}[f(X_{s+t}-X_0+x)\mid\mathcal{F}_s].$
This gives
$\displaystyle P_sP_tf(x)={\mathbb E}[P_tf(X_s-X_0+x)]={\mathbb E}[f(X_{s+t}-X_0+x)]=P_{s+t}f(x)$
as required. So, ${P_t}$ defines a Markov transition function. Replacing x by ${X_s}$ in (9) gives
$\displaystyle P_tf(X_s)={\mathbb E}[f(X_{s+t})\mid\mathcal{F}_s],$
so X is Markov with transition function ${P_t}$.
It only remains to be shown that ${P_t}$ is Feller. That is, for ${f\in C_0({\mathbb R}^d)}$, ${P_tf\in C_0({\mathbb R}^d)}$ and ${P_tf(x)\rightarrow f(x)}$ as ${t\rightarrow0}$. Letting ${x_n\in{\mathbb R}^d}$ tend to a limit ${x}$, bounded convergence gives
$\displaystyle P_tf(x_n)={\mathbb E}[f(X_t-X_0+x_n)]\rightarrow{\mathbb E}[f(X_t-X_0+x)]=P_tf(x)$
as ${n\rightarrow\infty}$. So, ${P_tf}$ is continuous. Similarly, if ${\Vert x_n\Vert\rightarrow\infty}$ then ${f(X_t-X_0+x_n)}$ tends to zero, giving ${P_tf(x_n)\rightarrow0}$. So, ${P_tf}$ is in ${C_0({\mathbb R}^d)}$.
Finally, if ${t_n\ge0}$ is a sequence of times tending to zero then ${X_{t_n}\rightarrow X_0}$ in probability, giving
$\displaystyle P_{t_n}f(x)={\mathbb E}[f(X_{t_n}-X_0+x)]\rightarrow f(x)$
as required. $\Box$
Finally, we can calculate the infinitesimal generator of a Lévy process in terms of its characteristics.
Theorem 9 Let X be a d-dimensional Lévy process with characteristics ${(\Sigma,b,u)}$ and define the operator A on the space ${C^2_b({\mathbb R}^d)}$ of bounded and twice continuously differentiable functions from ${{\mathbb R}^d}$ to ${{\mathbb R}}$ as
$\displaystyle Af(x) = b^if_i(x) + \frac12\Sigma^{ij}f_{ij}(x)+\int\left(f(x+y)-f(x)-\frac{y^if_i(x)}{1+\Vert y\Vert}\right)\,du(y).$ (10)
Then,
$\displaystyle M_t=f(X_t)-\int_0^t Af(X_s)\,ds$
is a local martingale for all ${f\in C^2_b({\mathbb R}^d)}$.
In equation (10) the summation convention is being used, so that if i or j appears twice in a single term then it is summed over the range ${1,2,\ldots,d}$.
Proof: Apply the generalized Ito formula to ${f(X)}$,
$\displaystyle \setlength\arraycolsep{2pt} \begin{array}{rl} \displaystyle dM_t=&\displaystyle f_i(X_{t-})(dX^i_t -b^i\,dt)+\frac12f_{ij}(X_{t-})(d[X^i,X^j]^{\rm c}_t-\Sigma^{ij}\,dt)\smallskip\\ &\displaystyle+\left(\Delta f(X_t)-f_i(X_{t-})\Delta X^i_t\right)\smallskip\\ &\displaystyle\qquad-\int\left(f(X_{t-}+y)-f(X_{t-})-\frac{y^if_i(X_{t-})}{1+\Vert y\Vert}\right)\,du(y)\,dt. \end{array}$ (11)
Now define the ${\mathcal{B}({\mathbb R}^d)\otimes\mathcal{P}}$-measurable function g by
$\displaystyle g(y,t)=f(X_{t-}+y)-f(X_{t-})-y^if_i(X_{t-})/(1+\Vert y\Vert)$
and let ${M^g}$ be the local martingale defined as in (8). Also, define Y by (6). Then, using the identity ${[X^i,X^j]^{\rm c}_t=\Sigma^{ij}t}$, equation (11) can be rewritten as
$\displaystyle dM_t = f_i(X_{t-})\,d(Y^i_t-b^it)-dM^g_t.$
As ${Y_t-bt}$ is a martingale, this shows that M is a local martingale. $\Box$
In particular, if f is in the space ${C^2_0({\mathbb R}^d)}$ of twice continuously differentiable functions vanishing at infinity and ${Af\in C_0({\mathbb R}^d)}$ then Theorem 9 shows that f is in
the domain of the generator of the Feller process X, and A is the infinitesimal generator. So,
$\displaystyle Af=\lim_{t\rightarrow0}\frac1t\left(P_tf-f\right),$
where convergence is uniform on ${{\mathbb R}^d}$. For any Lévy process for which the distribution of ${X_t}$ is known, this allows us to compute ${Af}$ and, then, read off the Lévy characteristics.
In particular, if ${f\colon{\mathbb R}^d\rightarrow{\mathbb R}}$ is twice continuously differentiable with compact support contained in ${{\mathbb R}^d\setminus\{0\}}$ then,
$\displaystyle u(f) = Af(0)=\lim_{t\rightarrow0}\frac1t{\mathbb E}[f(X_t-X_0)].$
Applying this to the Cauchy process, where ${X_t}$ has probability density function ${t/(\pi(t^2+x^2))}$, gives
$\displaystyle u(f)=\lim_{t\rightarrow0}\int_{-\infty}^\infty\frac{f(x)}{\pi(t^2+x^2)}\,dx = \int_{-\infty}^\infty\frac{f(x)}{\pi x^2}\,dx.$
So, the Cauchy process has Lévy measure ${du(x)=dx/(\pi x^2)}$, agreeing with the previous computation.
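This limit can also be spot-checked numerically: for a test function supported away from the origin, ${\frac1t{\mathbb E}[f(X_t)]=\int f(x)/(\pi(t^2+x^2))\,dx}$, which should approach ${\int f(x)/(\pi x^2)\,dx}$ as ${t\rightarrow0}$. A minimal sketch (the bump function, integration grid, and tolerances are my own arbitrary choices):

```python
import math

def f(x):
    """A smooth bump supported on [1, 2], i.e. away from the origin."""
    return math.exp(-1 / ((x - 1) * (2 - x))) if 1 < x < 2 else 0.0

def cauchy_functional(t, n=20000, lo=0.5, hi=2.5):
    """(1/t) E[f(X_t)] for the Cauchy process = integral of f(x)/(pi(t^2+x^2)),
    approximated by a midpoint Riemann sum (f vanishes outside [lo, hi])."""
    h = (hi - lo) / n
    total = 0.0
    for k in range(n):
        x = lo + (k + 0.5) * h
        total += f(x) / (math.pi * (t * t + x * x))
    return total * h

limit = cauchy_functional(0.0)   # candidate value of u(f)
for t in (0.1, 0.01, 0.001):
    print(t, cauchy_functional(t))
```

As t decreases, the printed values converge to `limit`, consistent with ${du(x)=dx/(\pi x^2)}$.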
There is a minor typo I found when following your calculations regarding the characteristic exponent of the Cauchy process.
“Here, the identity e^iax + e^−iax - 2 = -4sin^2(ax) is being used followed by the substitution … ”
The identity is e^iax + e^−iax - 2 = -4sin^2(ax/2) (as you correctly use it subsequently). Thanks for the nice site!
Comment by Anonymous — 29 December 11 @ 5:29 PM | Reply
• Fixed. Thanks!
Comment by George Lowther — 29 December 11 @ 5:50 PM | Reply
Hello, your posts are very interesting. By the way, I would like to ask you a question as follows:
Let X be a Levy processes with no positive jumps and $\tau_y:=\inf\{t> 0: X_t > y\}$ then we have
$X_{\tau_y}=y$ on $\{\tau_y <\infty\}.$
Could you explain that why? and does it hold for Levy process with no negative jumps? If X be Hunt process with no positive jumps then does this hold?
Thank you very much!
Comment by Duy — 17 February 12 @ 8:46 AM | Reply
• Hi. Strictly speaking that’s not true. If X[0] > y then $\tau_y=0$ and $X_{\tau_y} = X_0 > y$. You can only conclude that $X_{\tau_y} = y$ if you assume that X[0] ≤ 0 or, alternatively, if you
restrict to $0 < \tau_y < \infty$. Then, the conclusion holds for any cadlag process, and is nothing specific to Lévy processes. In fact, you have $X_{\tau_y}\ge y$ for any right-continuous
process and $X_{\tau_y-} \le y$ if it has left limits. If it also has no positive jumps then $X_{\tau_y}=X_{\tau_y-}+\Delta X_{\tau_y}\le y + 0$.
Comment by George Lowther — 22 February 12 @ 1:10 AM | Reply
Hallo, I have a question to George Lowther.
Do you know an easy proof of the fact that for two independent Lévy processes $X$ and $Y$ the covariation process $[X,Y]$ is equal to zero? I have a proof of this result but I feel that it is too complicated and I would like to make it shorter. Thank you very much.
Best regards,
Comment by Paolo — 12 September 12 @ 8:22 AM | Reply
how can we get the value dv(x)?
Comment by susti — 23 May 13 @ 8:58 AM | Reply
On an eigenvalue inequality
Let $\lambda_1(\cdot)$ denote the larger-absolute-value eigenvalue of a $2\times2$ matrix and $\lambda_2(\cdot)$ the smaller-absolute-value eigenvalue, i.e. $|\lambda_1(\cdot)|\ge|\lambda_2(\cdot)|$. Is it true that $$ \left|\left|\lambda_{1}\left(A+B\right)\right|^{1/3}-\left|\lambda_{1}\left(A\right)\right|^{1/3}\right|+\left|\left|\lambda_{2}\left(A+B\right)\right|^{1/3}-\left|\lambda_{2}\left(A\right)\right|^{1/3}\right|\leq\left|\lambda_{1}\left(B\right)\right|^{1/3}+\left|\lambda_{2}\left(B\right)\right|^{1/3} $$ for any $2\times2$ symmetric real matrix $A$ (it would suffice to prove or disprove it for non-positive-definite matrices $A$) and any $2\times2$ diagonal real matrix $B$? Thanks a lot for any helpful answers! By the way, a relevant question was answered by Suvrit here.
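Before looking for a proof, the conjectured inequality is easy to spot-check numerically on random symmetric $A$ and diagonal $B$ (a rough sketch with my own helper names; random testing can only find counterexamples, never establish the inequality):

```python
import math, random

def eigs_by_abs(m):
    """Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]],
    returned with the larger absolute value first."""
    a, b, c = m
    tr, det = a + c, a * c - b * b
    d = math.sqrt(max(tr * tr / 4 - det, 0.0))
    l1, l2 = tr / 2 + d, tr / 2 - d
    return (l1, l2) if abs(l1) >= abs(l2) else (l2, l1)

cr = lambda x: abs(x) ** (1 / 3)   # the map |.|^{1/3}

random.seed(0)
worst = 0.0
for _ in range(20000):
    a, b, c = (random.uniform(-3, 3) for _ in range(3))    # symmetric A
    d1, d2 = random.uniform(-3, 3), random.uniform(-3, 3)  # diagonal B
    a1, a2 = eigs_by_abs((a, b, c))
    s1, s2 = eigs_by_abs((a + d1, b, c + d2))              # eigenvalues of A + B
    lhs = abs(cr(s1) - cr(a1)) + abs(cr(s2) - cr(a2))
    rhs = cr(d1) + cr(d2)
    worst = max(worst, lhs - rhs)
print("largest lhs - rhs found:", worst)   # <= 0 means no counterexample found
```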
Where do you get these statements from? Suvrit gave a counterexample to a previous claim, so unless there is a good reason to believe this is true, it is not clear that it is worthwhile thinking
about these questions. – Igor Rivin Dec 28 '11 at 20:52
1 I think that this inequality does hold (even for $n \times n$ matrices), but I know the proof only for positive definite matrices, which implies a restricted version of the inequality that you
actually seem to be after. – Suvrit Dec 28 '11 at 20:55
Fair enough..... – Igor Rivin Dec 28 '11 at 22:01
4 META: tea.mathoverflow.net/discussion/1187/… – Will Jagy Dec 29 '11 at 6:19
1 Answer
Below I highlight that a much more general claim holds for $n\times n$ positive definite matrices, and that a slightly weaker version of your inequality holds for general symmetric matrices.
Recall a classic theorem of Ando, (T.Ando, "Comparison of norms $\|f(A)-f(B)\|$ and $\|f(|A-B|)\|$, Math. Z., 197, (1988)):
Theorem (Ando). Let $A$ and $B$ be positive semidefinite matrices, and let $\|\cdot\|$ be any unitarily invariant norm, and let $|X| = (X^TX)^{1/2}$ denote the matrix absolute value.
For any nonnegative operator monotone function $f(t)$ on $[0,\infty)$,\begin{equation*} \|f(A)-f(B)\| \le \|f(|A-B|)\|\end{equation*}
Now, in your case we can use $f(t) = t^r$ for $r \in [0,1]$ to obtain $$\|A^r-B^r\| \le \|\ |A-B|^r\ \|,$$ which, when specialized to the trace norm (sum of singular values), yields the inequality that you desire (but for positive matrices).
This inequality immediately implies the following weaker one for general symmetric matrices $$\| f(|A|) - f(|B|) \| \le \left\|f\bigl(\bigl|\ |A|-|B|\ \bigr|\bigr)\right\|,$$ which is
somewhat weaker than what you desire (but may suffice for your needs---which can be elaborated upon only if you follow Igor's suggestion and tell us where you are getting these questions
from, and in what context!)
In that case, I'm curious to see your proof of the positive definite case! – Suvrit Dec 29 '11 at 12:01
@unknown: it actually does follow because of the following theorem: $\|\sigma^\downarrow(f(A)) - \sigma^\downarrow(f(B))\| \le \|f(A)-f(B)\|$ – Suvrit Jan 1 '12 at 11:08
Closed form solution for this?
I have an equation describing a waveform:
f(x) = a0 * sin( x ) + a1 * sin( 2 * x + p1 ) + a2 * sin( 3 * x + p2 )
This is a periodic waveform with period 2 * pi
For fixed values of a0, a1, and a2, I would like to find values of p1
and p2 which maximise the asymmetry of the waveform. This would be:
max( f(x) ) - min( f(x) )
It is extremely easy for me to write a search-based optimiser likely
to find an optimal or very close to optimal solution. But is there a
closed-form solution to this?
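For reference, the search-based approach is only a few lines: a coarse grid over the two phases, with one period of the waveform sampled to get the peak-to-peak value being maximised. A sketch only (function names, grid and sample resolution are arbitrary choices of mine; a real optimiser would refine around the best grid point):

```python
import math

def peak_to_peak(a, p1, p2, n=2000):
    """max f - min f over one period, for
    f(x) = a0 sin(x) + a1 sin(2x + p1) + a2 sin(3x + p2)."""
    a0, a1, a2 = a
    vals = [a0 * math.sin(x) + a1 * math.sin(2 * x + p1) + a2 * math.sin(3 * x + p2)
            for x in (2 * math.pi * k / n for k in range(n))]
    return max(vals) - min(vals)

def best_phases(a, m=40):
    """Coarse grid search for (p1, p2) in [0, 2*pi)^2 maximising peak-to-peak."""
    best = (-1.0, 0.0, 0.0)
    for i in range(m):
        for j in range(m):
            p1, p2 = 2 * math.pi * i / m, 2 * math.pi * j / m
            v = peak_to_peak(a, p1, p2)
            if v > best[0]:
                best = (v, p1, p2)
    return best   # (value, p1, p2)
```

The stationary points couple p1 and p2 through transcendental equations in x, which suggests, though does not prove, that no simple closed form exists for general a0, a1, a2.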
parameter constraints
Dear all,
sorry for this simple question. What I want to do is to impose a simple parameter constraint. If I want to set two parameters equal, I simply use the same label. However, what if I want to set
parameter a = 2*b, so that actually only one parameter is freely estimated?
After searching the archives, the only cumbersome way I came up with is to define two new matrices with a single element but new names, use the mxAlgebra function to multiply by two and then use the
mxConstraint function to set the matrices equal. So I did something like that
mxMatrix(type = "Full", nrow=1, ncol=1, free=F, values = 0, label="a", name="con_a"),
mxMatrix(type = "Full", nrow=1, ncol=1, free=T, values = 1, label="b", name="con_b"),
mxAlgebra(expression= 2*con_b, name="con"),
mxConstraint("con", "=", "con_a"),
But then something goes wrong, and I am not sure why. Do I have to define the elements of both matrices as free? Fixed? Isn't there a simpler way? This seems overly complicated.
Thank you very much!
Math Forum Discussions
Topic: A geometrical problem - Need help
Replies: 3 Last Post: Apr 11, 2011 7:31 AM
dipanjan
A geometrical problem - Need help
Posted: Apr 8, 2011 9:56 AM
My problem statement:
You are given N (N<=1000) points in X,Y co-ordinates, where X and Y are both positive. The problem is: using the maximum number of points you have to draw a straight line, and also using the maximum number of points you have to draw a 1/4-shape parabola. So you actually have to solve two problems at the same time. Any point can be used for either shape, and points used to draw the straight line may also be used for the 1/4-shape parabola.
Now all I need is your help to solve this problem. Please discuss your opinion.
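For the straight-line half of the problem, a standard O(N^2) approach fits easily within N <= 1000: for each anchor point, bucket every other point by its reduced direction vector, and the line through the anchor containing the most points falls out of the largest bucket. A sketch (function name is mine; it assumes distinct integer coordinates):

```python
from math import gcd
from collections import defaultdict

def max_points_on_line(pts):
    """Largest number of the given (x, y) integer points on one straight line.
    O(N^2): for each anchor, count the other points by reduced direction."""
    n = len(pts)
    if n <= 2:
        return n
    best = 0
    for i in range(n):
        x0, y0 = pts[i]
        slopes = defaultdict(int)
        for j in range(i + 1, n):
            dx, dy = pts[j][0] - x0, pts[j][1] - y0
            g = gcd(dx, dy) or 1          # g == 0 only for a duplicate point
            dx, dy = dx // g, dy // g
            if dx < 0 or (dx == 0 and dy < 0):
                dx, dy = -dx, -dy         # canonical sign for each direction
            slopes[(dx, dy)] += 1
        if slopes:
            best = max(best, 1 + max(slopes.values()))
    return best
```

The parabola half can reuse the same counting idea once a parametric family for the "1/4 shape" parabola is fixed; combining the two counts when points may be shared is then a separate step.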
Date Subject Author
4/8/11 A geometrical problem - Need help dipanjan
4/8/11 Re: A geometrical problem - Need help Dan Cass
4/8/11 Re: A geometrical problem - Need help dipanjan
4/11/11 Re: A geometrical problem - Need help Dan Cass
Steady-State AC analysis and Frequency response
● Using PSpice to solve steady-state phasor problems
● Observing the frequency response characteristics of different types of analog filters
Part A: Steady-State AC Analysis in PSpice
In addition to DC circuit analysis and transient analysis, PSpice can be used to solve steady-state phasor problems.
AC Voltage and Current Sources
The syntax for an AC source is very similar to its DC counterpart. The AC source is assumed to be a cosine waveform at a specified phase angle. Its frequency must be defined in a separate " .AC "
command that defines the frequency for all the sources in the circuit. The unique information for the individual source is: the name, which must start with "V" or "I," the node numbers, the magnitude
of the source, and its phase angle. Some examples follow.
*name nodelist type value phase(deg)
Vac 4 1 AC 120V 30
Vba 2 5 AC 240 ; phase angle 0 degrees
Ix 3 6 AC 10.0A -45 ; phase angle -45 degrees
Isv 12 9 AC 25mA ; 25 milliamps @ 0 degrees
Here the source type, AC, must be specified, because the default is DC. If the phase angle is not specified, it is assumed to be zero degrees. Phase angles are given in degrees. For a voltage
source, the node on the left is the positive node and the node on the right is the negative node. Similarly, in the case of a current source, positive current flows into the source from the node on
the left, passes through the source, and leaves the source from the node on the right.
In some of your previous experiments you have used "SIN" type of source which is one of several useful source types (also EXP, PULSE, PWL, etc) that are used for transient analysis. Do not attempt to
use SIN for steady-state (phasor) AC analysis nor for frequency sweeps. The SIN type is a time-based function for time-based analysis, whereas the AC type is used in frequency-based modeling. Since
phasor analysis uses frequency-based models of circuit elements, always use the AC type as described in this experiment.
Use of the .PRINT AC Command
To enable .PRINT command .AC command must be used. The .AC command was designed to make a sweep of many frequencies for a given circuit. This is called a frequency response and will be discussed in
Part B of this experiment. Three types of ranges are possible for the frequency sweep: LIN, DEC and OCT. At this time we only want a single frequency to be used so it does not matter which one we
choose. We will pick the LIN (linear) range to designate our single frequency.
* type #points start stop
.AC LIN 1 60Hz 60Hz; <== single frequency
.AC LIN 6 100 200; <== a linear range sweep
.AC DEC 20 1Hz 10kHz; <== a logarithmic range sweep
The first statement above performs a single analysis using the frequency of 60 Hz. Placing the units "Hz" after the value is optional. The second statement would perform a frequency sweep using
frequencies of 100Hz, 120Hz, 140Hz, 160Hz, 180Hz, and 200Hz. The third statement performs a logarithmic range sweep using 20 points per decade over a range of four decades. This will be useful later
for studying frequency response of circuits.
Finally, we can discuss the actual .PRINT AC command. Printing the components of phasor values (complex numbers) requires some options. There are four expressions needed for this: magnitude, phase
(angle), real part and imaginary part. In addition, we can print voltages or currents. For instance, to print the magnitude of a voltage between nodes 2 and 3, we would specify "VM(2,3)." The phase
angle of this same voltage would be "VP(2,3)" and would be printed in degrees. If we need the current magnitude through resistor Rload, we would specify "IM(Rload)." The real part of the voltage on
node 7 would be specified "VR(7)" and its imaginary part, "VI(7)." As with the .PRINT DC command, there is no limit on the number of times it can be used in a listing; nor is there a limit on how
many print requests can be on a single line.
.PRINT AC VM(30,9) VP(30,9); magnitude & angle of voltage
.PRINT AC IR(Rx) II(Rx); real & imag. parts of current through Rx
.PRINT AC VM(17) VP(17) VR(17) VI(17); the whole works on node 17
Example Circuit
We will analyze the following circuit at a frequency of 60 Hz.
60 Hz AC Circuit
Vs 1 0 AC 120V 0
Rg 1 2 0.5
Lg 2 3 3.183mH
Rm 3 4 16.0
Lm 4 0 31.83mH
Cx 3 0 132.8uF
.AC LIN 1 60 60
.PRINT AC VM(3) VP(3)
.PRINT AC IM(Rm) IP(Rm)
.PRINT AC IM(Cx) IP(Cx)
In the above listing, the .AC command sets up the analysis for a single solution at 60 Hz. The .PRINT AC command tells PSpice to report on the voltage magnitude and phase angle at node 3, and the
current magnitude and phase angle for the current through resistor Rm and the current magnitude and phase angle through capacitor Cx.
Practice Problem
Draw the circuit shown in figure 1 in schematic and analyze it for the source frequency of 50 Hz. Check whether KCL is satisfied in node 3. Also check KVL for both loops.
Part B: Frequency Sweep
In this section we will discuss frequency sweeps over a range of frequencies. The purpose of this type of analysis is to study the frequency response of different kinds of circuits.
Specifying frequency range for AC Sources
The .AC command is used to specify one of the following three types of frequency ranges.
LIN Range Type
The LIN range type is linear. It divides up the range between the minimum and maximum user-specified frequencies into evenly spaced intervals. This is best used to view details over a narrow
bandwidth. The first parameter after the keyword LIN is the number of points to calculate. This is followed by the lowest frequency value in Hz, then the highest frequency value in Hz. As with all
the range types, the unit "Hz" is optional.
.AC LIN 101 2k 4k ; 101 points from 2 kHz to 4 kHz
.AC LIN 11 800 1000 ; 11 points from 800 Hz to 1 kHz
OCT Range Type
The OCT range is logarithmic to the base two. Thus each octave has the same number of points calculated. This is somewhat useful for designing electronic equipment for musical applications. However,
the resulting graphs are very similar in appearance to sweeps made with the DEC range. The first parameter after the keyword OCT is the number of points per octave to calculate. This is followed by
the lowest frequency value in Hz, then the highest frequency value in Hz.
.AC OCT 20 440Hz 1.76kHz; 20 points/octave over 2 octaves
.AC OCT 40 110Hz 880Hz ; 40 points/octave over 3 octaves
DEC Range Type
The DEC range is logarithmic to the base ten. Thus each decade has the same number of points calculated. This is the most commonly used range for making Bode plots of a frequency response. The first
parameter after the keyword DEC is the number of points per decade to calculate. This is followed by the lowest frequency value in Hz, then the highest frequency value in Hz.
.AC DEC 50 1kHz 100kHz ; 50 points/decade over 2 decades
.AC DEC 25 100k 100MEG ; 25 points/decade over 3 decades
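To make the three range types concrete, here is one way to generate the frequency points a sweep visits (a sketch only; PSpice's exact point placement and endpoint handling may differ from this convention):

```python
import math

def ac_sweep(kind, n, fstart, fstop):
    """Frequencies for a .AC sweep. For LIN, n is the total number of points;
    for OCT/DEC it is points per octave/decade (endpoints included here)."""
    if kind == "LIN":
        if n == 1:
            return [fstart]
        step = (fstop - fstart) / (n - 1)
        return [fstart + k * step for k in range(n)]
    base = 2.0 if kind == "OCT" else 10.0
    spans = math.log(fstop / fstart, base)      # number of octaves/decades
    total = int(round(n * spans))               # intervals across the range
    return [fstart * base ** (k / n) for k in range(total + 1)]

print(ac_sweep("LIN", 6, 100.0, 200.0))   # 100, 120, ..., 200, as in the text
```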
The independent variable used by PROBE in a .TRAN analysis is time. But in a frequency sweep the independent variable used by PROBE is frequency. When PROBE stores data in a transient (.TRAN)
analysis, the dependent variables are instantaneous voltages and currents; whereas in a frequency sweep these dependent variables are real and imaginary components of phasor voltages and currents.
Examples of Frequency Sweeps
First-order low-pass RC filter
Vin 1 0 AC 1.0V
R1 1 2 3.183
C1 2 0 50uF
.AC DEC 20 100Hz 100kHz
The above circuit is a first-order low-pass filter. Since we want the gain of this filter, it is convenient to make the input voltage 1 volt so the output voltage is numerically equivalent to the
gain. However, the post-processor within PROBE is fully capable of performing arithmetic such as dividing the input voltage into the output voltage.
After running this in PSpice, start PROBE, choose "Add" from the "Trace" menu and plot the output voltage. PROBE will provide the following graph.
Another option is to have PROBE plot the gain in decibels. To do this, choose "Add" from the "Trace" menu in PROBE. Then select the "DB" function in the right-hand column and choose "V(2)" from the
left-hand column. After selecting "OK," you should see the following trace.
Notice that the gain is -3 dB at a frequency of 1 kHz (the half-power frequency) and declines at 20 dB/decade thereafter. The remaining demonstration for this example is to have PROBE plot the phase shift of the low-pass filter as a function of frequency. We simply specify "VP(2)" from the "Add Trace" dialog box. Notice that this is the same format used in the .PRINT AC command in PSpice. PROBE
automatically shows the angles in degrees.
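The plotted curves can also be reproduced directly from the transfer function H(jw) = 1/(1 + jwRC) of the RC divider. A sketch using component values that put the corner at 1 kHz, consistent with the -3 dB point described above (names are my own):

```python
import cmath, math

R, C = 3.183, 50e-6        # corner frequency fc = 1/(2*pi*R*C), about 1 kHz

def gain_db_and_phase(f):
    """Gain (dB) and phase (degrees) of V(2)/Vin for the RC low-pass."""
    h = 1 / (1 + 1j * 2 * math.pi * f * R * C)
    return 20 * math.log10(abs(h)), math.degrees(cmath.phase(h))

for f in (100, 1000, 10_000, 100_000):
    g, p = gain_db_and_phase(f)
    print("f = %6d Hz   gain = %7.2f dB   phase = %7.2f deg" % (f, g, p))
```

At 1 kHz this gives about -3 dB and -45 degrees, and the gain falls 20 dB per decade above the corner, matching the PROBE traces.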
1 comment:
1. This Is Very Important for Us
complexity of
Results 1 - 10 of 19
- Notices of the AMS
"... We give a detailed treatment of the “bit-model ” of computability and complexity of real functions and subsets of R n, and argue that this is a good way to formalize many problems of scientific
computation. In Section 1 we also discuss the alternative Blum-Shub-Smale model. In the final section we d ..."
Cited by 32 (3 self)
We give a detailed treatment of the “bit-model ” of computability and complexity of real functions and subsets of R n, and argue that this is a good way to formalize many problems of scientific
computation. In Section 1 we also discuss the alternative Blum-Shub-Smale model. In the final section we discuss the issue of whether physical systems could defeat the Church-Turing Thesis. 1
- Journ. Amer. Math. Soc
"... Polynomial Julia sets have emerged as the most studied examples of fractal sets generated by a dynamical system. Apart from the beautiful mathematics, one of the reasons for their popularity is
the beauty of the computer-generated images of such sets. The algorithms used to draw these pictures vary; ..."
Cited by 26 (6 self)
Polynomial Julia sets have emerged as the most studied examples of fractal sets generated by a dynamical system. Apart from the beautiful mathematics, one of the reasons for their popularity is the
beauty of the computer-generated images of such sets. The algorithms used to draw these pictures vary; the most naïve work by iterating the center of a pixel to determine if it lies in the Julia set.
Milnor’s distance-estimator algorithm [Mil] uses classical complex analysis to give a one-pixel estimate of the Julia set. This algorithm and its modifications work quite well for many examples, but
it is well known that in some particular cases computation time will grow very rapidly with increase of the resolution. Moreover, there are examples, even in the family of quadratic polynomials, when
no satisfactory pictures of the Julia set exist. In this paper we study computability properties of Julia sets of quadratic polynomials. Under the definition we use, a set is computable, if, roughly
speaking, its image can be generated by a computer with an arbitrary precision. Under this notion of computability we show: Main Theorem. There exists a parameter value c ∈ C such that the Julia set
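The "naïve" pixel-iteration algorithm mentioned in the abstract above is only a few lines of code (a sketch; as these papers stress, it carries no error guarantee and can be badly wrong for delicate parameter values):

```python
def escapes(c, z=0j, max_iter=200, radius=2.0):
    """Naive escape-time test: does z_{n+1} = z_n^2 + c leave |z| <= radius
    within max_iter steps? Points that survive are taken, without any
    guarantee, as an approximation of the filled Julia set."""
    for n in range(max_iter):
        if abs(z) > radius:
            return n          # escaped at step n
        z = z * z + c
    return None               # presumed bounded

def julia_grid(c, n=41, span=1.6):
    """Character rendering of the filled Julia set of z^2 + c on an n x n grid."""
    rows = []
    for i in range(n):
        y = span - 2 * span * i / (n - 1)
        row = "".join(
            "#" if escapes(c, complex(-span + 2 * span * j / (n - 1), y)) is None
            else "." for j in range(n))
        rows.append(row)
    return "\n".join(rows)

print(julia_grid(0j, n=9))   # for c = 0 the filled Julia set is the unit disk
```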
- Proc. of CCA 2004, in ENTCS, vol 120 , 2005
"... Although numerous computer programs have been written to compute sets of points which claim to approximate Julia sets, no reliable high precision pictures of nontrivial Julia sets are currently
known. Usually, no error estimates are added and even those algorithms which work reliable in theory, beco ..."
Cited by 16 (0 self)
Although numerous computer programs have been written to compute sets of points which claim to approximate Julia sets, no reliable high precision pictures of nontrivial Julia sets are currently
known. Usually, no error estimates are added, and even those algorithms which work reliably in theory become unreliable in practice due to rounding errors and the use of fixed-length floating point
numbers. In this paper we prove the existence of polynomial time algorithms to approximate the Julia sets of given hyperbolic rational functions. We will give a strict computable error estimation
w.r.t. the Hausdorff metric on the complex sphere. This extends a result on polynomials z ↦ → z 2 + c, where |c | < 1/4, in [RW03] and an earlier result in [Zho98] on the recursiveness of the Julia
sets of hyperbolic polynomials. The algorithm given in this paper computes Julia sets locally in time O(k · M(k)) (where M(k) denotes the time needed to multiply two k-bit numbers). Roughly speaking,
the local time complexity is the number of Turing machine steps to decide a set of disks of spherical diameter 2 −k so that the union of these disks has Hausdorff distance at most 2 −k+2. This allows
to give reliable pictures of Julia sets to arbitrary precision. Key words: Julia Sets, Computational Complexity. 1
, 2005
"... We establish a new connection between the two most common traditions in the theory of real computation, the Blum-Shub-Smale model and the Computable Analysis approach. We then use the connection
to develop a notion of computability and complexity of functions over the reals that can be viewed as an ..."
Cited by 15 (5 self)
We establish a new connection between the two most common traditions in the theory of real computation, the Blum-Shub-Smale model and the Computable Analysis approach. We then use the connection to
develop a notion of computability and complexity of functions over the reals that can be viewed as an extension of both models. We argue that this notion is very natural when one tries to determine
just how “difficult ” a certain function is for a very rich class of functions. 1
"... Abstract. We show that if a polynomial filled Julia set has empty interior, then it is computable. 1. ..."
, 2004
"... We investigate different definitions of the computability and complexity of sets in R k, and establish new connections between these definitions. This allows us to connect the computability of
real functions and real sets in a new way. We show that equivalence of some of the definitions corresponds ..."
Cited by 12 (9 self)
We investigate different definitions of the computability and complexity of sets in R k, and establish new connections between these definitions. This allows us to connect the computability of real
functions and real sets in a new way. We show that equivalence of some of the definitions corresponds to equivalence between famous complexity classes. The model we use is mostly consistent with
[Wei00]. We apply the concepts developed to show that hyperbolic Julia sets are polynomial time computable. This result is a significant generalization of the result in [RW03], where polynomial time
computability has been shown for a restricted type of hyperbolic Julia sets. ii Acknowledgements First of all, I would like to thank my graduate supervisor, Stephen Cook. Our weekly meetings not only
allowed me to complete this thesis, but also gave me a much broader and deeper understanding of the entire field of theoretical computer science. Working with him has made my learning process a
pleasant one.
- SOFSEM 2006: Theory and Practice of Computer Science – 32nd Conference on Current Trends in Theory and Practice of Computer Science, Merin, Czech Republic, January 21–27 , 2006
"... Abstract. Ever since Alan Turing gave us a machine model of algorithmic computation, there have been questions about how widely it is applicable (some asked by Turing himself). Although the
computer on our desk can be viewed in isolation as a Universal Turing Machine, there are many examples in natu ..."
Cited by 11 (3 self)
Abstract. Ever since Alan Turing gave us a machine model of algorithmic computation, there have been questions about how widely it is applicable (some asked by Turing himself). Although the computer
on our desk can be viewed in isolation as a Universal Turing Machine, there are many examples in nature of what looks like computation, but for which there is no well-understood model. In many areas,
we have to come to terms with emergence not being clearly algorithmic. The positive side of this is the growth of new computational paradigms based on metaphors for natural phenomena, and the
devising of very informative computer simulations obtained by copying nature. This talk is concerned with general questions such as: • Can natural computation, in its various forms, provide us with
genuinely new ways of computing? • To what extent can natural processes be captured computationally? • Is there a universal model underlying these new paradigms?
- Commun. Math. Physics
Cited by 10 (4 self)
Abstract. It has been previously shown by two of the authors that some polynomial Julia sets are algorithmically impossible to draw with arbitrary magnification. On the other hand, for a large class
of examples the problem of drawing a picture has polynomial complexity. In this paper we demonstrate the existence of computable quadratic Julia sets whose computational complexity is arbitrarily
high. 1. Foreword Let us informally say that a compact set in the plane is computable if one can program a computer to draw a picture of this set on the screen, with an arbitrary desired
magnification. It was recently shown by the second and third authors that some Julia sets are not computable [BY]. This in itself is quite surprising to dynamicists – Julia sets are among the “most drawn” objects in contemporary mathematics, and numerous algorithms exist to produce their pictures. In the cases when one has not been able to produce informative pictures (the dynamically
pathological cases, like maps with a Cremer or a highly Liouville Siegel point) the feeling had been that this was due to the immense computational resources required by the known algorithms.
- MATH. LOGIC QUART , 2005
Cited by 10 (0 self)
We discuss the question whether the Mandelbrot set is computable. The computability notions which we consider are studied in computable analysis and will be introduced and discussed. We show that the
exterior of the Mandelbrot set, the boundary of the Mandelbrot set, and the hyperbolic components satisfy certain natural computability conditions. We conclude that the two-sided distance function of
the Mandelbrot set is computable if the hyperbolicity conjecture is true. We formulate the question whether the distance function of the Mandelbrot set is computable also in terms of the escape time.
- CCA 2004 , 2004
Cited by 7 (3 self)
... Although the hyperbolic Julia sets were shown to be recursive, complexity bounds were proven only for a restricted case in [13]. Our paper is a significant generalization of [13], in which
polynomial time computability was shown for a special kind of hyperbolic polynomials, namely, polynomials of the form p(z) = z^2 + c with |c| < 1/4. We show that the machine drawing the Julia set can
be made independent of the hyperbolic polynomial p, and provide some evidence suggesting that one cannot expect a much better computability result for Julia sets. We also introduce an alternative
real set computability definition due to Ko, and show an interesting connection between this definition and the main definition.
Example Mechanism for MechanicalSystems Kinematic Constraints
by Robert Beretta
Slider-Crank Animation
Slider-Crank Function
This sample notebook analyzes the motion of a classic slider-crankshaft mechanism using MechanicalSystems. This mechanism is first modeled with a simple and intuitive set of mechanical constraints
that are representative of how such a problem would usually be modeled. The same mechanism is then repeatedly modeled with different sets of constraints, all to achieve essentially the same end. In
all, this single mechanism is used to demonstrate each one of MechanicalSystems' basic constraint objects.
Slider-Crank Model
The kinematic model of the slider-crank mechanism consists of three bodies.
1. Ground (black)
2. Crank (red)
3. Slider (blue)
A real slider-crank mechanism would have a fourth body--the connecting link that attaches the crank to the slider. In the kinematic model, however, this body is modeled by a single constraint that
specifies a constant distance between a point on the crank and a point on the slider. This technique decreases the overall size of the model.
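Because the connecting link is replaced by a constant-distance constraint, the slider position has a classic closed form, which is what the constraint solver effectively reproduces. A sketch (not part of the notebook; the crank radius r, link length l, and the assumption that the slider track passes through the crank axis along the x-axis are illustrative):

```python
import math

def slider_position(theta, r, l):
    """Slider displacement along its track for crank angle theta.

    Derived from the constant-distance constraint |slider - crank_pin| = l:
    the crank pin sits at (r*cos(theta), r*sin(theta)) and the slider
    moves along the x-axis through the crank's rotational axis.
    """
    # Requires l >= r so the square root stays real for all crank
    # angles (the crank can complete a full revolution).
    return r * math.cos(theta) + math.sqrt(l**2 - (r * math.sin(theta))**2)
```

At theta = 0 the mechanism is fully extended (x = r + l); at theta = pi it is fully retracted (x = l - r).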
Kinematic Model
The Modeler2D package is loaded into Mathematica with the Needs command.
The following symbols are used to reference each of the bodies in the model throughout the analysis.
Body Definitions
Three points must be defined on the ground (body 1).
P1. The location of the crank rotational axis on the ground.
P2 and P3. Two points to define the line upon which the slider slides.
Two points must be defined on the crank (body 2).
P1. The crank's rotational axis.
P2. The point where the connecting link attaches to the crank.
Note that no initial guess has been given for the crank; thus the default initial guess is used.
Two points must be defined on the slider (body 3).
P1 and P2. Two points along the vertical axis of the slider.
Build bodies
The body objects are incorporated into the current model with the SetBodies command.
Three constraints and one driving constraint are required to model the slider-crank mechanism. Note the naming convention of Modeler2D constraint objects--each constraint's head ends with a number
that signifies the number of degrees of freedom constrained. Also note that the first argument to all constraint functions is an arbitrary constraint number. This integer must be unique among all
constraints in a single model.
Constraint 1 is a revolute joint, or pivot, between the crank and the ground.
Constraint 2 is a relative-distance constraint between the crank and the slider that replaces the pseudo connecting rod.
Constraint 3 is a translational or prismatic joint between the slider and the ground.
Constraint 4 is a rotational driver to turn the crank.
The driving function T goes from 0 to 1.
The constraint definitions are incorporated into the current model with the SetConstraints command.
Running the Model
CheckSystem tests for certain modeling errors. A return value of True means all is well.
SolveMech[t] runs the current model at time t.
The most current solution found by Modeler2D is always saved so that it can be used as the next initial guess. Whenever SolveMech is called, the most current solution rules are automatically updated.
LastSolve[] returns the most current solution.
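SolveMech's warm-start strategy, where each solve begins from the most recent solution, can be sketched generically (this is an illustrative Newton iteration on a scalar constraint equation, not Modeler2D's actual solver):

```python
import math

def solve_constraint(residual, jacobian, guess, tol=1e-10, max_iter=50):
    """Newton iteration for a single constraint equation residual(x) = 0."""
    x = guess
    for _ in range(max_iter):
        r = residual(x)
        if abs(r) < tol:
            break
        x -= r / jacobian(x)
    return x

# Sweep a "driving time" t and reuse the previous solution as the next
# initial guess, the role LastSolve[] plays in Modeler2D.
last = 1.0
for t in (0.0, 0.25, 0.5):
    last = solve_constraint(lambda x, t=t: math.cos(x) - t,
                            lambda x: -math.sin(x), last)
```

Because each guess starts close to the new root, every solve after the first converges in a step or two, which is why small increments of the driving time are cheap.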
The following graphic uses only one Modeler2D graphics function, Bar. The remainder of the image is generated entirely with built-in Mathematica graphics functions and the help of some Modeler2D
functions providing the location coordinates of bodies in the model. This technique allows a graphic image to be created that is a function of the location of the bodies in the model, so that the
graphic may be redrawn with the mechanism in different positions just by changing a set of replacement rules applied to the graphics object.
All of the Modeler2D graphics functions, such as Bar, return standard Mathematica graphics primitives whose coordinates are functions of location variables.
The graphics object can now be displayed with Show. The graphics primitives are located in 2D space by applying a solution rule such as the current default initial guess LastSolve[], or the return of SolveMech.
The following input will generate the series of 16 graphics in the animation cell at the beginning of this notebook.
Another constraint that could replace the Revolute2 constraint, in this case, is OriginLock2. OriginLock2 does not reference any particular points on the bodies it constrains; rather, it directly
constrains the variables that define the location of a body, which is the same as referencing the origin of a body.
In this case, OriginLock2 can replace the Revolute2 constraint used in the slider-crank model, because the Revolute2 constraint referenced the origins of both of the bodies. An optional last argument
to OriginLock2 can be given that specifies the vector offset between the origins.
Here we rebuild the constraints.
The location of the center of the crank can now be changed through changing the values of xoffset and yoffset.
RotationLock1 simply controls the angular coordinate of a body.
The revolute joint is probably the single most common constraint in all kinematic models.
A revolute type constraint that allows an offset vector between the two points is DirectedPosition2.
Thus, DirectedPosition2[a__, {0, 0}] is identical to Revolute2[a__]. Note that DirectedPosition2 allows the second point object to be omitted, defaulting to the global origin.
This constraint set offsets the crank axis from its original base position.
The same result could be accomplished simply by setting the global coordinates of point 1 on the ground body to be {xoffset, yoffset}. However, this would not be true if the DirectedPosition2
constraint was applied between two bodies that were both rotating.
Some of the Modeler2D constraint functions reference geometric lines or axes instead of points (Translate2, PointOnLine1, ...). Lines are specified by two points lying on a single body or two
separate bodies, and axes are specified by a point and a direction.
This constraint is identical to constraint 3 used initially.
PointOnLines2 is used to place a point at the intersection of two lines.
Thus, the following constraint could be used to replace the Revolute2 constraint by forcing the center of the crank to lie at the intersection of two lines that intersect at the global origin.
Instead of locating the slider with a single two degree of freedom constraint, a pair of one degree of freedom constraints can be used; one to enforce that one point on the slider moves along the
specified axis, and one to enforce the slider's angular orientation. PointOnLine1 and RotationLock1 are used to do this.
In this example, sliderangle sets the angular position of the slider through the RotationLock1 constraint (a very tippy slider!).
The DirectedDistance1 constraint is effectively very similar to PointOnLine1 except in the method used to define the translation line.
RelativeX1 and RelativeY1
The mechanism can be modeled with six one degree of freedom constraints by taking the previous model and replacing OriginLock2 with RelativeX1 and RelativeY1, each a one degree of freedom constraint.
The pnt2 argument can be omitted, in which case it defaults to the global origin.
This constraint set is identical to the previous one.
The relative angle between two bodies can also be controlled with RelativeAngle1. RelativeAngle1 uses vectors for reference, instead of using the angular coordinate of the body, like RotationLock1.
The following constraint set is effectively the same as the previous one defined for PointOnLine1.
Parallel1 and Orthogonal1
Two constraints that are actually special cases of RelativeAngle1 are Parallel1 and Orthogonal1.
The following constraint set is effectively the same as the first one in this notebook.
RelativeDistance1 constrains a point to lie on a circular path.
Optional Arguments
PointOnLine1 has two optional arguments to offset or rotate the translation line, relative to the reference line.
The following constraint set allows the slider to travel along a path that is offset and canted relative to the slider's original track.
SetGuess[] must be run to reset the initial guesses or the model will converge upside down because of the large difference between the last solution time and the current one.
Extending Standard Quantum Interpretation by Quantum Set Theory
Set theory provides foundations of mathematics in the sense that all the mathematical notions like numbers, functions, relations, structures are defined in the axiomatic set theory called ZFC.
Quantum set theory naturally extends ZFC to quantum logic. Hence, we can expect that quantum set theory provides mathematics based on quantum logic. In this talk, I will show a useful application of
quantum set theory to quantum mechanics based on the fact that the real numbers constructed in quantum set theory exactly corresponds to the quantum observables. The standard formulation of quantum
mechanics answers the question as to in what state an observable A has the value in an interval I. However, the question is not answered as to in what state two observables A and B have the same
value. The notion of equality between the values of observables will play many important roles in foundations of quantum mechanics. The notion of measurement of an observable relies on the condition
that the observable to be measured and the meter after the measurement should have the same value. We can define the notion of quantum disturbance through the condition whether the values of the
given observable before and after the process is the same. It is shown that all the observational propositions on a quantum system correspond to some propositions in quantum set theory and the
equality relation naturally provides the proposition that two observables have the same value. It has been broadly accepted that we cannot speak of the values of quantum observables without assuming
a hidden variable theory. However, quantum set theory enables us to do so without assuming hidden variables, but rather under the consistent use of quantum logic, which is more or less considered
as logic of the superposition principle. [1] M. Ozawa, Transfer principle in quantum set theory, J. Symbolic Logic 72, 625-648 (2007), online preprint: http://arxiv.org/abs/math.LO/0604349. [2] M.
Ozawa, Quantum perfect correlations, Ann. Phys. (N.Y.) 321, 744–769 (2006), online preprint: LANL quant-ph/0501081.
Patent US5298859 - Harmonic-adjusted watt-hour meter
1. Field of the Invention
This invention relates to the field of electric power measuring devices. More specifically, this invention relates to the accumulating and recording of watt-hours.
2. Description of the Prior Art
Electric power is ordinarily delivered to residences, commercial facilities, and industrial facilities as an Alternating Current (AC) voltage that approximates a sine wave with respect to time, and
ordinarily flows through consumer premises as an AC current that also approximates a sine wave with respect to time. Ordinarily, a watt-hour meter is used to charge for the power that is consumed.
In an AC power distribution system, the expected frequency of voltage or current (usually 50 Hertz, 60 Hertz, or 400 Hertz) is usually referred to as the fundamental frequency, regardless of the
actual spectral amplitude peak. Integer multiples of this "fundamental" are usually referred to as harmonic frequencies, and spectral amplitude peaks at frequencies below the fundamental are often
referred to as "sub-harmonics", regardless of their ratio relationship to the fundamental.
It is widely recognized that loads which draw harmonic currents place an increased economic burden on the power distribution system by requiring derating of transformers and increased conductor area.
If the non-fundamental currents, harmonics or sub-harmonics, are large relative to the impedance of the distribution system, they can induce harmonic voltages in the voltage delivered to other loads
that share the distribution system. It is also possible for a load to accept power at the fundamental frequency and simultaneously act as a power source at a harmonic frequency. Under these
circumstances, accurate watt-hour measurements fail to accurately measure the economic impact of the load.
For example, an accurate watt measurement treats the following two loads identically: a load that consumes 80 kilowatts at the fundamental frequency, and a load that consumes 100 kilowatts at the
fundamental frequency and sources 20 kilowatts at the fifth harmonic. The latter, almost certainly has an adverse economic impact on the distribution system which traditional watt meters fail to
Harmonic adjustments to watt-hour measurements can provide a better estimate of the economic impact of a non-linear load, and can encourage behavior by electric power consumers that match the goals
of the electric power provider.
FIG. 1 shows a block diagram of an apparatus operating in accordance with an embodiment of the invention.
FIG. 2 shows a flow diagram of the key algorithm of the invention.
FIG. 3 shows a table of adjustment factors that cause the invention to perform identically to prior art. FIG. 4 and FIG. 5 show examples of adjustment factors that might be chosen by electric power
providers with differing economic strategies.
Beginning at the far left of FIG. 1, three voltage signals 1 from an AC power system and corresponding current signals 2 are sensed in power lines. The sensed voltage signals 1 and current signals
2 are applied to circuits 3 that employ any well-known techniques to scale the signals to an appropriate level for further processing, filter out frequencies that are not of interest, and present
appropriately multiplexed signals to an Analog to Digital Converter 4.
A digital signal processor 5 with associated ROM memory 6 and RAM memory 7 executes the algorithm of FIG. 2, with the exception of the block 20 which is executed by the microprocessor 8. The digital
signal processor 5 calculates the frequency spectra for each voltage and current, using a Fourier transform or any other well known algorithm. The digital signal processor 5 then calculates the true
power flow at each frequency using any well known algorithm. It then adjusts the power spectrum according to a table of frequency adjustments stored in memory 6, examples of which are shown in FIG.
3, FIG. 4, and FIG. 5, making appropriate interpolations where necessary.
The digital signal processor 5 then calculates the harmonic-adjusted power flow, and transmits it, using any well-known technique such as serial communication or Direct Memory Access, to the
single-chip microprocessor 8 which incorporates internal timers, communication channels, program memory, and data memory. The microprocessor 8 then accumulates the harmonic-adjusted watt measurement
in a harmonic-adjusted watt-hour register, and displays the result in a display 10. A communication port 9 allows any other digital system to read the present measurement value, and to read and reset
the registers, which can also be read and reset with manual controls 11.
The circuits 3 through 8, or any parts thereof, may be included in a single integrated circuit such as an application specific integrated circuit (ASIC). The circuits 3 through 8, or any parts
thereof, may perform other functions in addition to the functions described above, such as measuring and accumulating other parameters related to power flow.
Turning now to FIG. 2, the invention continuously executes the process which begins at START 13. In the Block 14, the digital signal processor 5 begins by selecting the first phase for analysis. The
time-domain digital samples of voltage and current are passed to the subroutine WATT ADJUST by the block 15, which returns the value of harmonic-adjusted watts for this phase. The block 16
accumulates the harmonic-adjusted watts for this phase.
The blocks 17 and 18 cause the process to be repeated for additional phases. The block 19 calculates the sum of harmonic-adjusted watts across all phases; this is equivalent to the instantaneous
harmonic-adjusted power in watts. The block 20, which is executed by the microprocessor 8, accumulates the instantaneous harmonic-adjusted watts over time, thus calculating harmonic-adjusted watt-hours.
In the subroutine WATT ADJUST 21, time-domain samples of voltage and current for a single phase are transformed into the harmonic-adjusted watts for that phase. In the block 22, the subroutine uses
any well-known technique such as the Fourier transform to calculate the frequency domain data for these signals, including frequency domain voltage E_f, frequency domain current I_f, and the
phase angle φ_f between each frequency domain voltage bucket and its corresponding frequency domain current bucket. Beginning at the lowest frequency, the subroutine employs the blocks 23, 27, and 28 to
scan across each of the frequency components. The block 24 sets the initial value of harmonic-adjusted watts to zero.
The block 25 first determines if the watts are positive or negative at this frequency, using any well known technique to calculate watts such as the product of voltage, current, and the cosine of the
angle between the voltage and current. Block 25 then inspects the adjustment tables, examples of which are shown in FIG. 3, FIG. 4, and FIG. 5, and determines the appropriate adjustment factor
A_f. The block 25 may use interpolation if necessary to determine an adjustment factor A_f.
The block 26 adds the harmonic-adjusted watts at this frequency to the accumulated harmonic-adjusted watts. After all frequencies have been scanned by the blocks 27 and 28, the block 29 returns the
accumulated harmonic-adjusted watts for this phase.
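The blocks 22 through 26 amount to a per-bin spectral power computation with a sign-dependent scaling. A minimal Python sketch of the idea (illustrative only: the function name and table format are assumptions, unmatched frequencies here default to unity factors where the patent interpolates, and the Nyquist-bin scaling is glossed over):

```python
import numpy as np

def harmonic_adjusted_watts(v, i, fs, factors):
    """Sketch of the WATT ADJUST subroutine for one phase.

    v, i    : time-domain samples of voltage and current
    fs      : sampling rate in Hz
    factors : dict mapping frequency (Hz) -> (negative-flow factor,
              positive-flow factor), as in FIG. 3, FIG. 4, and FIG. 5
    """
    n = len(v)
    V = np.fft.rfft(v) / n
    I = np.fft.rfft(i) / n
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    total = 0.0
    for f, Vf, If in zip(freqs, V, I):
        # True power at this bin: Re(V * conj(I)) = |V||I|cos(phi).
        # The factor 2 accounts for the one-sided spectrum (DC excepted).
        scale = 1.0 if f == 0.0 else 2.0
        w = scale * (Vf * np.conj(If)).real
        neg, pos = factors.get(round(f), (1.0, 1.0))
        total += w * (neg if w < 0 else pos)
    return total
```

With the all-unity table of FIG. 3 this reduces to an ordinary true-power measurement; a table like FIG. 4 zeroes out credit for power the consumer sources at non-fundamental frequencies.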
Each of FIG. 3, FIG. 4, and FIG. 5, illustrates one possible set of adjustment factors which can be employed in the algorithm shown in FIG. 2. Using FIG. 3 as an example, each of these tables
consists of a set of frequencies 33 with corresponding adjustment factors for negative power flow (from the nominal load to the nominal source) 34 and adjustment factors for positive power flow (from
the nominal source to the nominal load) 35.
FIG. 3 illustrates adjustment factors that cause the invention to behave identically with prior art, as all of the adjustment factors have unity value. Thus, no matter what the configuration of the
load is, the adjusted watt-hour value is the same as the measured watt-hour value.
FIG. 4 illustrates adjustment factors that would be appropriate for an electric power provider whose policy is to sell power at standard rates for any frequency at which the power consumer consumes
power 38, but to refuse to purchase power from the power consumer at any frequency other than the fundamental 37.
FIG. 5 illustrates adjustment factors that would be appropriate for an electric power provider whose policy is to first charge higher rates for power consumed by the power consumer at frequencies
other than the fundamental 41, and second, to charge the power consumer when the power consumer delivers power to the electric power provider at frequencies other than the fundamental 40. The
adjustment factors 40 are negative because they are multiplied by negative watts, yielding a positive value in adjusted watts.
FIG. 3, FIG. 4, and FIG. 5 show frequencies that are odd multiples of 60 Hertz 33, 36, 39. Similar tables can be constructed for odd multiples of 50 Hertz, 400 Hertz, or any other fundamental
frequency of interest. It is convenient to restrict the tables to frequency values where power flow is most likely, but the table may contain entries for any frequency from 0 Hertz up to one-half the
sampling frequency employed by the analog to digital converter 4.
FIG. 4 and FIG. 5 are illustrative examples of possible tables of adjustment factors; tables with differing values can be constructed for electric power providers with differing economic goals.
The Adjustment Factor Tables shown in FIG. 3, FIG. 4, and FIG. 5 may be stored in RAM memory 7, allowing the values in the tables to be changed from time to time.
Various other modifications may be made to the preferred embodiment without departing from the spirit and scope of the invention as defined by the appended claims. Various other tables with different values of adjustment factors can be constructed without departing from the spirit and scope of the invention as defined by the appended claims.
Dynamical Behaviors of Stochastic Reaction-Diffusion Cohen-Grossberg Neural Networks with Delays
Abstract and Applied Analysis
Volume 2012 (2012), Article ID 369725, 14 pages
Research Article
^1School of Mathematics and Computer Science, Wuhan Textile University, Wuhan 430073, China
^2Department of Mathematics, Zhaoqing University, Zhaoqing 526061, China
^3School of Management, Wuhan Textile University, Wuhan 430073, China
Received 22 August 2012; Accepted 24 September 2012
Academic Editor: Xiaodi Li
Copyright © 2012 Li Wan et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium,
provided the original work is properly cited.
This paper investigates dynamical behaviors of stochastic Cohen-Grossberg neural networks with delays and reaction-diffusion terms. By employing the Lyapunov method, the Poincaré inequality, and matrix techniques,
some sufficient criteria on ultimate boundedness, weak attractor, and asymptotic stability are obtained. Finally, a numerical example is given to illustrate the correctness and effectiveness of our
theoretical results.
1. Introduction
Cohen and Grossberg proposed and investigated Cohen-Grossberg neural networks in 1983 [1]. Hopfield neural networks, recurrent neural networks, cellular neural networks, and bidirectional associative
memory neural networks are special cases of this model. Since then, the Cohen-Grossberg neural networks have been widely studied in the literature, see for example, [2–12] and references therein.
Strictly speaking, diffusion effects cannot be avoided in the neural networks when electrons are moving in asymmetric electromagnetic fields. Therefore, we must consider that the activations vary in
space as well as in time. In [13–19], the authors gave some stability conditions of reaction-diffusion neural networks, but these conditions were independent of diffusion effects.
On the other hand, it has been well recognized that stochastic disturbances are ubiquitous and inevitable in various systems, ranging from electronic implementations to biochemical systems, which are
mainly caused by thermal noise, environmental fluctuations, as well as different orders of ongoing events in the overall systems [20, 21]. Therefore, considerable attention has been paid to
investigate the dynamics of stochastic neural networks, and many results on stability of stochastic neural networks have been reported in the literature, see for example, [22–38] and references
The above references mainly considered the stability of equilibrium point of neural networks. What do we study when the equilibrium point does not exist? Except for stability property, boundedness
and attractor are also foundational concepts of dynamical systems, which play an important role in investigating the uniqueness of equilibrium, global asymptotic stability, global exponential
stability, the existence of periodic solution, and so on [39, 40]. Recently, ultimate boundedness and attractor of several classes of neural networks with time delays have been reported. In [41], the
globally robust ultimate boundedness of integrodifferential neural networks with uncertainties and varying delays was studied. Some sufficient criteria on the ultimate boundedness of deterministic
neural networks with both varying and unbounded delays were derived in [42]. In [43, 44], a series of criteria on the boundedness, global exponential stability, and the existence of periodic solution
for nonautonomous recurrent neural networks were established. In [45, 46], some criteria on ultimate boundedness and attractor of stochastic neural networks were derived. To the best of our
knowledge, there are few results on the ultimate boundedness and attractor of stochastic reaction-diffusion neural networks.
Therefore, the arising questions about the ultimate boundedness, attractor and stability for the stochastic reaction-diffusion Cohen-Grossberg neural networks with time-varying delays are important
yet meaningful.
The rest of the paper is organized as follows: some preliminaries are in Section 2, main results are presented in Section 3, and a numerical example and conclusions are given in Sections 4 and 5, respectively.
2. Model Description and Assumptions
Consider the following stochastic Cohen-Grossberg neural networks with delays and diffusion terms: for and . In the above model, is the number of neurons in the network; is space variable; is the
state variable of the th neuron at time and in space ; and denote the activation functions of the th unit at time and in space ; constant ; presents an amplification function; is an appropriately
behavior function; and denote the connection strengths of the th unit on the th unit, respectively; corresponds to the transmission delay and satisfies denotes the external bias on the th unit; is
the diffusion function; is a compact set with smooth boundary and measure mes in is the initial boundary value; is -dimensional Brownian motion defined on a complete probability space with a natural
filtration generated by , where we associate with the canonical space generated by all and denote by the associated -algebra generated by with the probability measure .
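The displayed system (2.1) did not survive extraction. In the standard notation for this class of models, and consistent with the component list above, it typically reads as follows (a reconstruction under conventional symbol choices, with u_i the state and D_ik the diffusion coefficients; not necessarily the paper's exact display):

```latex
\mathrm{d}u_i(t,x) = \Bigg[ \sum_{k=1}^{l} \frac{\partial}{\partial x_k}\!\left( D_{ik}\,\frac{\partial u_i(t,x)}{\partial x_k} \right)
  - a_i\big(u_i(t,x)\big)\bigg( b_i\big(u_i(t,x)\big)
  - \sum_{j=1}^{n} c_{ij}\, f_j\big(u_j(t,x)\big)
  - \sum_{j=1}^{n} d_{ij}\, g_j\big(u_j(t-\tau_j(t),x)\big) - J_i \bigg) \Bigg]\mathrm{d}t
  + \sum_{j=1}^{n} \sigma_{ij}\big(u_j(t,x)\big)\,\mathrm{d}w_j(t),
```

for $t \ge 0$, $x \in \Omega$, with a Neumann boundary condition $\partial u_i / \partial \nu = 0$ on $\partial\Omega$.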
System (2.1) has the following matrix form: where
Let be the space of real Lebesgue measurable functions on and a Banach space for the -norm Note that is -valued function and -measurable -valued random variable, where on , is the space of all
continuous -valued functions defined on with a norm .
The following assumptions and lemmas will be used in establishing our main results.(A1) There exist constants , , and such that (A2) There exist constants and such that (A3) is bounded, positive, and
continuous, that is, there exist constants , such that , for , .
Lemma 2.1 (Poincaré inequality, [47]). Assume that a real-valued function satisfies , where is a bounded domain of with a smooth boundary . Then, which is the lowest positive eigenvalue of the
Neumann boundary problem: is the gradient operator, is the Laplace operator.
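The inequality in Lemma 2.1 was also lost in extraction; the standard Poincaré (Neumann) form it refers to is (symbols conventional, with the side condition on $u$ as in the lemma's dropped hypothesis):

```latex
\lambda_1 \int_{\Omega} |u(x)|^2 \,\mathrm{d}x \;\le\; \int_{\Omega} |\nabla u(x)|^2 \,\mathrm{d}x,
\qquad \text{where } -\Delta\phi = \lambda\phi \ \text{in } \Omega, \quad
\frac{\partial \phi}{\partial \nu} = 0 \ \text{on } \partial\Omega,
```

and $\lambda_1$ is the lowest positive eigenvalue of this Neumann boundary problem.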
Remark 2.2. Assumption (A1) is less conservative than that in [26, 28], since the constants , , , and are allowed to be positive, negative, or zero; that is to say, the activation function in (A1) is assumed to be neither monotonic, differentiable, nor bounded. Assumption (A2) is weaker than those given in [23, 27, 30], since is not required to be zero or smaller than 1 and is allowed to take any value.
Remark 2.3. According to the eigenvalue theory of elliptic operators, the lowest eigenvalue is only determined by [47]. For example, if , then ; if , then .
The notation (resp., ) means that matrix is symmetric-positive definite (resp., positive semidefinite). denotes the transpose of the matrix . represents the minimum eigenvalue of matrix . .
3. Main Results
Theorem 3.1. Suppose that assumptions (A1)–(A3) hold and there exist some matrices , , , , , and such that the following linear matrix inequality holds: (A4) where means the symmetric term,
Then system (2.1) is stochastically ultimately bounded; that is, for any , there is a positive constant such that the solution of system (2.1) satisfies
Proof. If , then it follows from (A4) that there exists a sufficiently small such that where
If , then it follows from (A4) that there exists a sufficiently small such that where , , and are the same as in (3.4),
Consider the following Lyapunov functional:
Applying the Itô formula in [48] to along (2.2), one obtains
From assumptions (A1)–(A4), one obtains
From the boundary condition and Lemma 2.1, one obtains where “·” is inner product, ,
Combining (3.10) and (3.11) into (3.9), we have where or .
In addition, it follows from (A1) that Similarly, one obtains
From (3.13)–(3.15), one derives or where , Thus, one obtains
For any , set . By Chebyshev’s inequality and (3.20), we obtain which implies The proof is completed.
Theorem 3.1 shows that there exists such that for any , . Let be denoted by Clearly, is closed, bounded, and invariant. Moreover, with no less than probability , which means that attracts the
solutions infinitely many times with no less than probability , so we may say that is a weak attractor for the solutions.
Theorem 3.2. Suppose that all conditions of Theorem 3.1 hold. Then there exists a weak attractor for the solutions of system (2.1).
Theorem 3.3. Suppose that all conditions of Theorem 3.1 hold and . Then the zero solution of system (2.1) is mean square exponentially stable.
Remark 3.4. Assumption (A4) depends on and , so the criteria on the stability, ultimate boundedness, and weak attractor depend on diffusion effects and the derivative of the delays and are
independent of the magnitude of the delays.
4. An Example
In this section, a numerical example is presented to demonstrate the validity and effectiveness of our theoretical results.
Example 4.1. Consider the following system where , , , , , , , , is one-dimensional Brownian motion. Then we compute that , , , , , , , , and . By using the Matlab LMI Toolbox, for , based on Theorem
3.1, such system is stochastically ultimately bounded when
5. Conclusion
In this paper, new results and sufficient criteria on the ultimate boundedness, weak attractor, and stability are established for stochastic reaction-diffusion Cohen-Grossberg neural networks with delays by using the Lyapunov method, the Poincaré inequality, and matrix techniques. The criteria depend on the diffusion effect and the derivative of the delays and are independent of the magnitude of the delays.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (nos. 11271295, 10926128, 11047114, and 71171152), the Science and Technology Research Projects of Hubei Provincial Department of Education (nos. Q20111607 and Q20111611), and the Young Talent Cultivation Projects of Guangdong (LYM09134).
References
1. M. A. Cohen and S. Grossberg, "Absolute stability of global pattern formation and parallel memory storage by competitive neural networks," IEEE Transactions on Systems, Man, and Cybernetics, vol. 13, no. 5, pp. 815–826, 1983.
2. Z. Chen and J. Ruan, "Global dynamic analysis of general Cohen-Grossberg neural networks with impulse," Chaos, Solitons & Fractals, vol. 32, no. 5, pp. 1830–1837, 2007.
3. T. Huang, A. Chan, Y. Huang, and J. Cao, "Stability of Cohen-Grossberg neural networks with time-varying delays," Neural Networks, vol. 20, no. 8, pp. 868–873, 2007.
4. T. Huang, C. Li, and G. Chen, "Stability of Cohen-Grossberg neural networks with unbounded distributed delays," Chaos, Solitons & Fractals, vol. 34, no. 3, pp. 992–996, 2007.
5. Z. W. Ping and J. G. Lu, "Global exponential stability of impulsive Cohen-Grossberg neural networks with continuously distributed delays," Chaos, Solitons & Fractals, vol. 41, no. 1, pp. 164–174, 2009.
6. J. Li and J. Yan, "Dynamical analysis of Cohen-Grossberg neural networks with time-delays and impulses," Neurocomputing, vol. 72, no. 10–12, pp. 2303–2309, 2009.
7. M. Tan and Y. Zhang, "New sufficient conditions for global asymptotic stability of Cohen-Grossberg neural networks with time-varying delays," Nonlinear Analysis: Real World Applications, vol. 10, no. 4, pp. 2139–2145, 2009.
8. M. Gao and B. Cui, "Robust exponential stability of interval Cohen-Grossberg neural networks with time-varying delays," Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1914–1928, 2009.
9. C. Li, Y. K. Li, and Y. Ye, "Exponential stability of fuzzy Cohen-Grossberg neural networks with time delays and impulsive effects," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 11, pp. 3599–3606, 2010.
10. Y. K. Li and L. Yang, "Anti-periodic solutions for Cohen-Grossberg neural networks with bounded and unbounded delays," Communications in Nonlinear Science and Numerical Simulation, vol. 14, no. 7, pp. 3134–3140, 2009.
11. X. D. Li, "Exponential stability of Cohen-Grossberg-type BAM neural networks with time-varying delays via impulsive control," Neurocomputing, vol. 73, no. 1–3, pp. 525–530, 2009.
12. J. Yu, C. Hu, H. Jiang, and Z. Teng, "Exponential synchronization of Cohen-Grossberg neural networks via periodically intermittent control," Neurocomputing, vol. 74, no. 10, pp. 1776–1782, 2011.
13. J. Liang and J. Cao, "Global exponential stability of reaction-diffusion recurrent neural networks with time-varying delays," Physics Letters A, vol. 314, no. 5-6, pp. 434–442, 2003.
14. Z. J. Zhao, Q. K. Song, and J. Y. Zhang, "Exponential periodicity and stability of neural networks with reaction-diffusion terms and both variable and unbounded delays," Computers & Mathematics with Applications, vol. 51, no. 3-4, pp. 475–486, 2006.
15. X. Lou and B. Cui, "Boundedness and exponential stability for nonautonomous cellular neural networks with reaction-diffusion terms," Chaos, Solitons & Fractals, vol. 33, no. 2, pp. 653–662, 2007.
16. K. Li, Z. Li, and X. Zhang, "Exponential stability of reaction-diffusion generalized Cohen-Grossberg neural networks with both variable and distributed delays," International Mathematical Forum, vol. 2, no. 29–32, pp. 1399–1414, 2007.
17. R. Wu and W. Zhang, "Global exponential stability of delayed reaction-diffusion neural networks with time-varying coefficients," Expert Systems with Applications, vol. 36, no. 6, pp. 9834–9838, 2009.
18. Z. A. Li and K. L. Li, "Stability analysis of impulsive Cohen-Grossberg neural networks with distributed delays and reaction-diffusion terms," Applied Mathematical Modelling, vol. 33, no. 3, pp. 1337–1348, 2009.
19. J. Pan and S. M. Zhong, "Dynamical behaviors of impulsive reaction-diffusion Cohen-Grossberg neural network with delays," Neurocomputing, vol. 73, no. 7–9, pp. 1344–1351, 2010.
20. M. Kærn, T. C. Elston, W. J. Blake, and J. J. Collins, "Stochasticity in gene expression: from theories to phenotypes," Nature Reviews Genetics, vol. 6, no. 6, pp. 451–464, 2005.
21. K. Sriram, S. Soliman, and F. Fages, "Dynamics of the interlocked positive feedback loops explaining the robust epigenetic switching in Candida albicans," Journal of Theoretical Biology, vol. 258, no. 1, pp. 71–88, 2009.
22. C. Huang and J. D. Cao, "On $p$th moment exponential stability of stochastic Cohen-Grossberg neural networks with time-varying delays," Neurocomputing, vol. 73, no. 4–6, pp. 986–990, 2010.
23. M. Dong, H. Zhang, and Y. Wang, "Dynamics analysis of impulsive stochastic Cohen-Grossberg neural networks with Markovian jumping and mixed time delays," Neurocomputing, vol. 72, no. 7–9, pp. 1999–2004, 2009.
24. Q. Song and Z. Wang, "Stability analysis of impulsive stochastic Cohen-Grossberg neural networks with mixed time delays," Physica A, vol. 387, no. 13, pp. 3314–3326, 2008.
25. C. H. Wang, Y. G. Kao, and G. W. Yang, "Exponential stability of impulsive stochastic fuzzy reaction-diffusion Cohen-Grossberg neural networks with mixed delays," Neurocomputing, vol. 89, pp. 55–63, 2012.
26. H. Huang and G. Feng, "Delay-dependent stability for uncertain stochastic neural networks with time-varying delay," Physica A, vol. 381, no. 1-2, pp. 93–103, 2007.
27. H. Y. Zhao, N. Ding, and L. Chen, "Almost sure exponential stability of stochastic fuzzy cellular neural networks with delays," Chaos, Solitons & Fractals, vol. 40, no. 4, pp. 1653–1659, 2009.
28. W. H. Chen and X. M. Lu, "Mean square exponential stability of uncertain stochastic delayed neural networks," Physics Letters A, vol. 372, no. 7, pp. 1061–1069, 2008.
29. C. Huang and J. D. Cao, "Almost sure exponential stability of stochastic cellular neural networks with unbounded distributed delays," Neurocomputing, vol. 72, no. 13–15, pp. 3352–3356, 2009.
30. C. Huang, P. Chen, Y. He, L. Huang, and W. Tan, "Almost sure exponential stability of delayed Hopfield neural networks," Applied Mathematics Letters, vol. 21, no. 7, pp. 701–705, 2008.
31. C. Huang, Y. He, and H. Wang, "Mean square exponential stability of stochastic recurrent neural networks with time-varying delays," Computers & Mathematics with Applications, vol. 56, no. 7, pp. 1773–1778, 2008.
32. R. Rakkiyappan and P. Balasubramaniam, "Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays," Applied Mathematics and Computation, vol. 198, no. 2, pp. 526–533, 2008.
33. Y. Sun and J. D. Cao, "$p$th moment exponential stability of stochastic recurrent neural networks with time-varying delays," Nonlinear Analysis: Real World Applications, vol. 8, no. 4, pp. 1171–1185, 2007.
34. Z. Wang, J. Fang, and X. Liu, "Global stability of stochastic high-order neural networks with discrete and distributed delays," Chaos, Solitons & Fractals, vol. 36, no. 2, pp. 388–396, 2008.
35. X. D. Li, "Existence and global exponential stability of periodic solution for delayed neural networks with impulsive and stochastic effects," Neurocomputing, vol. 73, no. 4–6, pp. 749–758, 2010.
36. Y. Ou, H. Y. Liu, Y. L. Si, and Z. G. Feng, "Stability analysis of discrete-time stochastic neural networks with time-varying delays," Neurocomputing, vol. 73, no. 4–6, pp. 740–748, 2010.
37. Q. Zhu and J. Cao, "Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays," IEEE Transactions on Systems, Man, and Cybernetics B, vol. 41, no. 2, pp. 341–353, 2011.
38. Q. Zhu, C. Huang, and X. Yang, "Exponential stability for stochastic jumping BAM neural networks with time-varying and distributed delays," Nonlinear Analysis: Hybrid Systems, vol. 5, no. 1, pp. 52–77, 2011.
39. P. Wang, D. Li, and Q. Hu, "Bounds of the hyper-chaotic Lorenz-Stenflo system," Communications in Nonlinear Science and Numerical Simulation, vol. 15, no. 9, pp. 2514–2520, 2010.
40. P. Wang, D. Li, X. Wu, J. Lü, and X. Yu, "Ultimate bound estimation of a class of high dimensional quadratic autonomous dynamical systems," International Journal of Bifurcation and Chaos, vol. 21, no. 9, pp. 2679–2694, 2011.
41. X. Y. Lou and B. Cui, "Global robust dissipativity for integro-differential systems modeling neural networks with delays," Chaos, Solitons & Fractals, vol. 36, no. 2, pp. 469–478, 2008.
42. Q. Song and Z. Zhao, "Global dissipativity of neural networks with both variable and unbounded delays," Chaos, Solitons & Fractals, vol. 25, no. 2, pp. 393–401, 2005.
43. H. Jiang and Z. Teng, "Global exponential stability of cellular neural networks with time-varying coefficients and delays," Neural Networks, vol. 17, no. 10, pp. 1415–1425, 2004.
44. H. Jiang and Z. Teng, "Boundedness, periodic solutions and global stability for cellular neural networks with variable coefficients and infinite delays," Neurocomputing, vol. 72, no. 10–12, pp. 2455–2463, 2009.
45. L. Wan and Q. H. Zhou, "Attractor and ultimate boundedness for stochastic cellular neural networks with delays," Nonlinear Analysis: Real World Applications, vol. 12, no. 5, pp. 2561–2566, 2011.
46. L. Wan, Q. H. Zhou, P. Wang, and J. Li, "Ultimate boundedness and an attractor for stochastic Hopfield neural networks with time-varying delays," Nonlinear Analysis: Real World Applications, vol. 13, no. 2, pp. 953–958, 2012.
47. R. Temam, Infinite Dimensional Dynamical Systems in Mechanics and Physics, Springer, New York, NY, USA, 1998.
48. X. Mao, Stochastic Differential Equations and Applications, Horwood Publishing Limited, 1997.
Degree Name
PhD (Doctor of Philosophy)
First Advisor
Vincent G. J. Rodgers
In this thesis we accomplish two goals: We construct a two-dimensional conformal field theory (CFT), in the form of a Liouville theory, in the near-horizon limit for three- and four-dimensional black holes. The near-horizon CFT assumes the two-dimensional black hole solutions that were first introduced by Christensen and Fulling (1977 Phys. Rev. D 15 2088-104) and later expanded to a greater class of black holes via Robinson and Wilczek (2005 Phys. Rev. Lett. 95 011303). The two-dimensional black holes admit a $Diff(S^1)$ or Witt subalgebra, which upon quantization in the horizon limit becomes Virasoro with calculable central charge. These charges and lowest Virasoro eigen-modes reproduce the correct Bekenstein-Hawking entropy of the four- and three-dimensional black holes via the Cardy formula (Blöte et al 1986 Phys. Rev. Lett. 56 742; Cardy 1986 Nucl. Phys. B 270 186). Furthermore, the two-dimensional CFT's energy-momentum tensor is anomalous, i.e. its trace is nonzero. However, in the horizon limit the energy-momentum tensor becomes holomorphic, equaling the Hawking flux of the four- and three-dimensional black holes. This encoding of both entropy and temperature provides a uniformity in the calculation of black hole thermodynamics and statistical quantities for the nonlocal effective action approach.
We also show that the near-horizon regime of a Kerr-Newman-$AdS$ ($KNAdS$) black hole, given by its two-dimensional analogue à la Robinson and Wilczek, is asymptotically $AdS_2$ and dual to a one-dimensional quantum conformal field theory (CFT). The $s$-wave contribution of the resulting CFT's energy-momentum tensor, together with the asymptotic symmetries, generates a centrally extended Virasoro algebra, whose central charge reproduces the Bekenstein-Hawking entropy via Cardy's formula. Our derived central charge also agrees with the near-extremal Kerr/CFT correspondence in the appropriate limits. We also compute the Hawking temperature of the $KNAdS$ black hole by coupling its Robinson and Wilczek two-dimensional analogue (RW2DA) to conformal matter.
Copyright 2011 Leo L. Rodriguez
A Simple Expression?
Date: 1/26/96 at 8:32:18
From: Anonymous
Subject: Math Problem
Hi -
I heard that you are the right point of contact for any small or big
math problem.
My problem is the following: I know that

   sum_{k=0..infinity} a^k/k! = exp(a)

But is it possible to have a simple expression for the same series
starting at k = b, that is:

   sum_{k=b..infinity} a^k/k! = ?
Thank you for information,
Jean-Francois Hellings
Project Engineer, SAIT Systems
Brussels, Belgium
Date: 8/3/96 at 9:33:47
From: Doctor Jerry
Subject: Re: Math Problem
Are you asking if there is a simple expression for the tail of the
exponential series - if there is a closed form for the series
a^b/b!+a^(b+1)/(b+1)!+ . . . ?
Since e^a = 1+a/1!+a^2/2!+...+a^(b-1)/(b-1)!+a^b/b!+a^(b+1)/(b+1)! + ...,
we can write
a^b/b!+a^(b+1)/(b+1)! + ... =
e^a - (1+a/1!+a^2/2!+...+a^(b-1)/(b-1)!).
This is a closed form for the series on the left. If b were large,
this might not be much help.
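The closed form above is easy to check numerically. Here is a small
sketch of my own (not part of the original exchange; the function
names are invented) comparing it against direct term-by-term summation:

```python
import math

def tail_direct(a, b, terms=60):
    # Sum a^k/k! for k = b, b+1, ..., truncated after `terms` terms.
    total = 0.0
    term = a ** b / math.factorial(b)
    for k in range(b, b + terms):
        total += term
        term *= a / (k + 1)      # turns a^k/k! into a^(k+1)/(k+1)!
    return total

def tail_closed_form(a, b):
    # Doctor Jerry's closed form: exp(a) minus the first b terms.
    partial = sum(a ** k / math.factorial(k) for k in range(b))
    return math.exp(a) - partial

print(tail_direct(2.0, 5))       # ~0.3890561
print(tail_closed_form(2.0, 5))  # ~0.3890561
```

Note that for large b the subtraction exp(a) minus the partial sum
cancels almost completely in floating point, which is the numerical
side of the "might not be much help" caveat.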
The question as to whether there is a special function giving the
value of
a^b/b!+a^(b+1)/(b+1)! + ...
remains. We worked on this question for a while, trying various
things and looking in a book of special functions. We found nothing,
but an expert in special functions might be able to answer your question.
-Doctor Jerry, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
Comments on The Geomblog: "Bob Morris and stream algorithms"

Bryan (2011-06-30): Lovely description of an approach to a problem I hadn't heard of before. Thanks, Suresh!

Martin Schwarz (2011-07-01): This is a great algorithm! Is there also a variant that would allow both, incrementing and *decrementing*, the counter?

williampan (2011-07-02): Nice simple algorithm. I just wonder how much space is needed to generate the random "coin" with head-probability $2^{-C}$?

Suresh: @Martin: there are general versions of F_1 estimation that allow for increments and decrements: they aren't as elegant as this one though. @williampan: if I understand correctly, the log(1/eps) term relates to the space needed to generate the random coin toss from a "pure" source of randomness. There's a standard approach to doing space-efficient randomness generation due to Nisan and Wigderson that's often invoked "in theory" as a way of generating bits in small space.

(2011-07-09): @Suresh: I think Martin is asking about the sum of positive and negative integers, rather than F1, which is more complicated. Does the obvious modification of the Morris counter still work? I feel like it
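The algorithm these comments discuss — Morris's approximate counter — can be sketched in a few lines. This is my own illustrative code, not from the post itself; the class and variable names are invented:

```python
import random

class MorrisCounter:
    """Morris's approximate counter: stores c ~ log2(n) instead of n."""

    def __init__(self, rng):
        self.c = 0
        self.rng = rng

    def increment(self):
        # Bump the stored exponent with probability 2^(-c).
        if self.rng.random() < 2.0 ** (-self.c):
            self.c += 1

    def estimate(self):
        # 2^c - 1 is an unbiased estimator of the true count.
        return 2 ** self.c - 1

# A single counter is noisy; averaging independent counters tightens it.
rng = random.Random(0)
n, trials = 1000, 2000
mean = 0.0
for _ in range(trials):
    m = MorrisCounter(rng)
    for _ in range(n):
        m.increment()
    mean += m.estimate() / trials
print(round(mean))  # close to n = 1000
```

A single counter has high variance (standard deviation on the order of n for the base-2 version), which is why the snippet averages many independent counters; the comments' log(1/eps) discussion concerns the further question of generating the 2^{-c}-biased coins in small space.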
The Chern-Simons/Wess-Zumino-Witten correspondence
I have often seen a relationship being alluded to between these two theories but I am unable to find any literature which proves/derives/explains this relationship.
I guess in the condensed matter physics literature this is the "same" thing which is referred to when they say that one has propagating chiral bosons on the boundary of the manifold if there is a Chern-Simons theory defined in the interior ("bulk").
Let me quote (with some explanatory modifications) from two papers the two most important aspects of the relationship that is alluded to,
1. "...It is well known that any Chern-Simons theory admits a boundary which carries a chiral WZW model; however these degrees of freedom are not topological (the partition function of Chern-Simons
theory coupled to such boundary degrees of freedom depends on the conformal class of the metric on the boundary)..."
2. "...In general if the pure Chern-Simons theory (of group $G$) at level k is formulated on a Riemann surface then the number of zero-energy states equals the number of conformal blocks of the WZW
model of $G$ at level $k' = k - \frac{h}{2}$ ($h=$the quadratic Casimir of G in the adjoint representation)..(and when the Riemann surface is a torus) the number of conformal blocks is equal to
the number of representations of $\hat{G}$ at level $k'$.."
• I would like to know of a reference(s) (hopefully pedagogical/introductory!) which explains/proves/derives the above two claims. (..I looked through various sections of the book by Toshitake Kohno on CFT which deals with similar stuff but I couldn't identify these there..maybe someone could just point me to the section in that book which maybe explains the above claims but may be in some different garb which I can't recognize!..)
dg.differential-geometry chern-simons-theory conformal-field-theory 3-manifolds
As far as I remember Witten's famous paper "Jones polynomial & ..." also discusses this, in the section where he tries to calculate examples - at some point he needs the Verlinde formula for WZW to calculate something explicitly. The idea (AFAIR) is simple: take locally M = R \times Sigma; the main point is to choose an appropriate gauge fixing, as far as I remember A_0 = 0; then in this gauge dA_0 is a Lagrange multiplier in the Feynman integral which sits on the space of flat connections... If I remember the details I will write them... – Alexander Chervov Jul 2 '12 at 19:20
5 Answers
Quantum field theories are understood/formalized at various levels of detail (e.g. action functional only, space of states/partition function only, full functorial QFT, full extended
QFT). Accordingly there are such different levels at which people will say "It is well-known that...".
For the general holographic principle there are still lots of gaps, but for the special case of 3d Chern-Simons TQFT / 2d WZW CFT things are pretty well understood.
The nLab entry
holographic principle -- Ordinary Chern-Simons theory / WZW-model
gives a list of pointers, some of which coincide with what is being said in other replies here.
First of all there is a direct relation between the action functionals: the CS action functional on a manifold with boundary is not gauge invariant. The boundary term that appears is
the action functional of the WZW model (the topological term, at least, and the kinetic term with due fine-tuning).
More abstractly, the Chern-Simons action for $G$ simply connected arises by transgression of a differential universal characteristic map on higher smooth moduli stacks $\mathbf{B}G_{conn} \to \mathbf{B}^3 U(1)_{conn}$. The WZW action (the topological term) similarly arises simply by the (differentially twisted) looping (as smooth $\infty$-stacks) of this map.
Then the famous original observation: geometric quantization of this action functional yields a space of states for Chern-Simons theory that may be naturally identified with the
partition function of the WZW model.
To promote this further to a relation between full QFTs, one needs to know what the full QFT corresponding to Chern-Simons theory is. This hasn't as yet been fully established via quantization, but the expectation is that it is what the Reshetikhin-Turaev construction gives when fed the modular tensor category of loop group representations of the gauge group. Assuming this, there is a very detailed construction by Fuchs-Runkel-Schweigert and others that effectively constructs the rational WZW CFT (as a full Segal-style CFT) from the TQFT.
Recently the holographic aspect of this construction has been further amplified by Kapustin-Saulina and then by Fuchs-Schweigert-Valentino.
See at the above link for references to all these items.
@Urs Schreiber Thanks for the details. Can you elaborate on this point you made that "The boundary term that appears is the action functional of the WZW model (the topological term,
at least, and the kinetic term with due fine-tuning)" AFAIK because the Chern-Simons action is not gauge invariant a gauge transformation on it produces an "extra" term which looks
like one of the terms of the WZW action. But I can't see how this can be interpreted to say that there is an effective WZW theory on the boundary when there is CS theory in the bulk.
– Anirbit Jul 3 '12 at 15:06
@Urs Schreiber Can you also link to the Kapustin-Saulina paper that you mentioned? (..infact the first of my italicized quotes is an adaptation from a Kapustin-Saulina paper..) –
Anirbit Jul 3 '12 at 15:08
Hi Nairbit, my comment above is really just an extended pointer to what I have written at that nLab entry, which contains the links that you are looking for and more: ncatlab.org/nlab/show/holographic+principle#OrdinaryCSWZWModel – Urs Schreiber Jul 6 '12 at 12:20
Sorry, my fingers introduced a key twist. I meant to type Anirbit. Sorry. (Wasn't there once the possibility to edit comments here? Where did it disappear to?) – Urs Schreiber Jul 6
'12 at 12:21
@Urs Schreiber Thanks for the details. But don't see anywhere in your links a derivation of the fact that about the bulk partition function of Chern-Simons' theory generically having
a dependence on the conformal class of the metric on the boundary (..and how specific boundary conditions for the gauge field can remove that dependence..) This to my mind is one of
the most important aspects of the correspondence. Can you give some references/explanations towards that? – Anirbit Jul 11 '12 at 15:14
The boundary of a Chern-Simons theory carries a Wess-Zumino-Witten model...
This comes from the following relation between the parameters of the two theories. Recall that a Chern-Simons theory is determined by an element $$\xi \in \hat H^4(BG,\mathbb{Z}),$$ an
element in the degree four differential cohomology of the classifying space of the gauge group. Often $\xi$ can be identified with an element in ordinary cohomology, and in turn with just an
integer, called level. Recall that a Wess-Zumino-Witten-model is determined by an element $$ \eta \in \hat H^3(G,\mathbb{Z}). $$ Now, there is a transgression map $$ t: \hat H^4(BG,\mathbb
{Z}) \to \hat H^3(G,\mathbb{Z}) $$ which converts a Chern-Simons theory into a WZW model.
This is discussed (using bundle gerbes) in
• A. Carey et al.: Bundle Gerbes for Chern-Simons and Wess-Zumino-Witten Theories
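For orientation, here is a standard fact added as a hedged aside (it is not part of the original answer): when $G$ is compact, simple and simply connected, both cohomology groups underlying the transgression map above are infinite cyclic,

```latex
H^4(BG;\mathbb{Z}) \;\cong\; \mathbb{Z}
\;\xrightarrow{\ t\ }\;
H^3(G;\mathbb{Z}) \;\cong\; \mathbb{Z},
```

and at the level of these underlying topological classes $t$ is an isomorphism, identifying the Chern-Simons level $k$ with the level of the WZW model.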
The states of the CS theory form the conformal blocks of the WZW model...

This is a result of Witten, a crucial ingredient for the relation between Chern-Simons theory and the Jones polynomial. You might want to start in Section 5 of
• E. Witten: Quantum Field Theory and the Jones Polynomial, Commun. Math. Phys. 121,351-399 (1989)
Another source with general information about the Chern-Simons states is Section 5 of
• K. Gawedzki: Conformal Field Theory: A Case Study
The key information is formula (5.15) in the latter paper. It expresses the partition function of the WZW model (coupled to a gauge field, and with field insertions) by scalar products of
CS states. The next formula (5.16) has (5.15) reduced to the torus, relating it to representations of $G$.
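As a concrete instance of the torus statement — a standard worked example of my own, not taken from either reference: for $G = SU(2)$ at level $k$, the integrable highest-weight representations of $\widehat{su}(2)_k$ are labelled by spins $j = 0, \tfrac12, \dots, \tfrac{k}{2}$, so

```latex
\dim \mathcal{H}_{T^{2}}\big(SU(2)_{k}\big)
\;=\;
\#\Big\{\, j = 0, \tfrac{1}{2}, \dots, \tfrac{k}{2} \,\Big\}
\;=\; k+1,
```

matching the number of conformal blocks of the $SU(2)$ WZW model on the torus.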
The main derivation though of what you wrote is from the reference I posted. – Chris Gerig Jul 2 '12 at 21:34
@Konrad Waldorf Thanks for the answer. Seems Alexander Chervov also recommended section 5 of this paper of Witten's. About Gawedzki's lectures - probably it's a bad question to ask! - how far back do I have to start to understand equation 5.15? (...earlier my experience has been that Gawedzki's writings can be very terse!..I am just wondering whether I have to read that entire lecture to understand this point!..) – Anirbit Jul 3 '12 at 15:18
@Anirbit: the paper of Gawedzki's is a review about certain standard aspects of conformal field theory. If you want to understand the formula, I'm afraid you will either have to read the
whole paper, or to go to other sources that help you to understand at least Section 5. Gawedzki's writing might be terse, but it's also a landmark of elaborate, substantiated, and correct
writing! – Konrad Waldorf Jul 3 '12 at 23:28
@Konrad Waldorf I guess I will first look through Section 5 of Witten's paper and then will venture into Gawedzki's writings. (..experience has been that Witten's writings are much
more beginner friendly!..) – Anirbit Jul 4 '12 at 20:11
The immediate paper that comes to mind is Topological Gauge Theories and Group Cohomology by Dijkgraaf and Witten, starting on p. 403. The Wess-Zumino term appears because the Chern-Simons
functional is not gauge invariant, and the variation of this action depends on the connection at the boundary surface. This paper references Witten's Non-abelian Bosonization in Two
Dimensions in talking about the WZW model and CFT, so I think this would be useful to check out.
As for the second comment, pp. 411-413 bring up the conformal block stuff, but I'm not sure it explains what you want. It does have references:
1) Extended chiral algebras and modular invariant partition functions (Karpilovsky, et al.)
2) Spectra of WZW models with arbitrary simple groups (Felder, et al.)
3) Taming the conformal zoo (Moore, Seiberg)
Hopefully one of those leads you to what you desire.
You are unlikely to find a proof of these claims, because Chern-Simons theory, as a quantum field theory in 3 dimensions, has not been precisely formulated mathematically.
You can find some partial results in the book Bakalov, Kirillov, Lectures on Tensor Categories and Modular Functors.

More precisely, what has not been carried out is the quantization of the Chern-Simons Lagrangian to a full TQFT. On the other hand, it is expected that, once this is done, the result is the TQFT defined by the modular tensor category given by, say, the representations of the given loop group. Under this assumption, the correspondence CS-TQFT / WZW-CFT has been made precise by Fuchs-Runkel-Schweigert et al.; see ncatlab.org/nlab/show/FFRS-formalism . – Urs Schreiber Jul 3 '12 at 7:53
There is yet one more perspective on the relation between $G$-Chern-Simons theory and the WZW-model on $G$: the background B-field of the latter can be regarded as being the prequantum
circle 2-bundle in codimension 2 for a "higher/extended geometric quantization" of Chern-Simons theory.
This is spelled out a bit at
nLab: Chern-Simons theory -- Geometric quantization -- In higher codimension.
In brief the story is this:
We have constructed in Cech cocycles for differential characteristic classes a refinement of the generator of $H^4(B G, \mathbb{Z})$ to a morphism of smooth moduli $\infty$-stacks $\mathbf{c}_{conn} : \mathbf{B}G_{conn} \to \mathbf{B}^3 U(1)_{conn}$ from that of $G$-principal bundles with connection to that of circle 3-bundles (bundle 2-gerbes) with connection
(for $G$ a simple, simply connected Lie group).
This is such that when transgressed to the mapping $\infty$-stack from a closed compact oriented 3d manifold $\Sigma_3$ it yields the Chern-Simons action functional
$$ \exp(2 \pi i \int_{\Sigma_3} [\Sigma_3, \mathbf{c}_{conn}]) : CSFields(\Sigma_3) = [\Sigma_3, \mathbf{B}G_{conn}] \to U(1) \,. $$
But one can similarly transgress to mapping stacks out of a $0 \leq k \leq 3$-dimensional manifold $\Sigma_k$. For $k = 1$ with $\Sigma_1 = S^1$ one obtains a canonical circle 2-bundle (circle bundle gerbe) with connection on the smooth moduli stack of $G$-principal connections on the circle
$$ \exp(2 \pi i \int_{S^1} [S^1, \mathbf{c}_{conn}]) : [\Sigma_1, \mathbf{B}G_{conn}] \to \mathbf{B}^2 U(1) \,. $$
Now since $\mathbf{B}$ is "categorical delooping" while $[S^1, -]$ is "geometric looping", the mapping stack on the left is not quite equivalent to $G$ itself, but it receives a canonical
map from it
$$ \bar \nabla_{can} : G \to [S^1, \mathbf{B}G_{conn}] \,. $$
In fact, the internal hom adjunct of this map is a canonical $G$-principal connection $\nabla_{can}$ on $S^1 \times G$, and this is precisely that from def. 3.3 of the article by Carey et
al that Konrad mentions in his reply.
So the composite
$G \to [S^1, \mathbf{B}G_{conn}] \stackrel{transgression}{\to} \mathbf{B}^2 U(1)_{conn}$
is the WZW circle 2-bundle on $G$, or equivalently the Chern-Simons prequantum circle 2-bundle in codimension 2.
(The math parser here gets confused when I type in the full formulas. But you can find them at the above link).
| {"url":"http://mathoverflow.net/questions/101098/the-chern-simons-wess-zumino-witten-correspondence/101223","timestamp":"2014-04-19T22:25:32Z","content_type":null,"content_length":"88659","record_id":"<urn:uuid:6955ca93-1153-4924-b321-946b352fb147>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00194-ip-10-147-4-33.ec2.internal.warc.gz"}
limit question
Kind of a surprising result, I think. Absolute values can be tricky. It always helps if you can simply get rid of them. One way to get rid of them is to know when they are positive or negative. In this case, there is a little leap of faith. It goes like this: as you approach zero, eventually you will be rather close to zero. More to the point, eventually you will be closer than 1/2 (at 1/4, 1/8, 1/16, etc., or -1/4, -1/8, -1/16). This is VERY important, because IF we promise not to wander off by more than 1/2, we have the following delightful results:
For |x| < 1/2, 2x + 1 > 0 and |2x + 1| = (2x + 1)
For |x| < 1/2, 2x - 1 < 0 and |2x - 1| = (1 - 2x)
What can you do to the numerator with those two results? Can you finish? | {"url":"http://mathhelpforum.com/calculus/114007-limit-question-print.html","timestamp":"2014-04-18T06:58:44Z","content_type":null,"content_length":"5503","record_id":"<urn:uuid:f95f07f0-b65c-4e79-84cf-631ac9f298ff>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00143-ip-10-147-4-33.ec2.internal.warc.gz"}
This entry was posted by amsh on April 13, 2013 at 4:19 pm, and is filed under Antenna and wave propagation.
| {"url":"http://www.winnerscience.com/antenna-and-wave-propagation/friis-free-space-equation/","timestamp":"2014-04-19T19:35:04Z","content_type":null,"content_length":"57824","record_id":"<urn:uuid:ffe58067-8c3e-418f-a7cd-6f6243770ab8>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00393-ip-10-147-4-33.ec2.internal.warc.gz"}
the "How Can Renee Make This Code Better?" blog
Problem: Like most other languages, English includes two types of numbers: cardinal numbers (such as one, two, three, and four) that are used in counting, and ordinal numbers (such as first, second,
third, and fourth) that are used to indicate a position in a sequence. In numeric form, ordinals are usually indicated by writing the digits in the number, followed by the last two letters of the
English word that names the corresponding ordinal. Thus, the ordinal numbers first, second, third, and fourth often appear in print as 1st, 2nd, 3rd, and 4th.
The general rule for determining the suffix of an ordinal can be defined as follows:
Numbers ending in the digits 1, 2, and 3 take the suffixes "st", "nd", and "rd", respectively, unless the number ends with the two-digit combination 11, 12, or 13. Those numbers, and any numbers not
ending with a 1, 2, or 3, take the suffix "th".
Your task in this problem is to write a function ordinalForm(n) that takes an integer n and returns a string indicating the corresponding ordinal number. (Roberts ch 10, problem 9).
What it looks like:
/* File: ordinalForm.java
 * ----------------------
 * This console program lets the user enter any integer and prints out the ordinal form of that number. For example,
 * the ordinal form of 1 is "1st," 2 is "2nd," etc. The user enters the sentinel value "-1" to stop running the program.
 * The method makeOrdinal takes the entered integer and returns a string, using Java's built-in Integer class and its
 * static toString method. Because the suffix of an ordinal value depends on the last digit of a number, the program
 * strips out and inspects the last digit of the supplied integer with the method getLastDigit. In the case of numbers
 * ending with "11", "12", and "13" we need to see the last *two* digits, so we also use a similarly constructed
 * getSecondToLastDigit.
 */

import acm.program.ConsoleProgram;

public class ordinalForm extends ConsoleProgram {

    public void run() {
        while (true) {
            int cardinal = readInt("Enter number and we'll give you the ordinal form: ");
            if (cardinal == -1) {
                break;
            }
            println(makeOrdinal(cardinal));
        }
    }

    private String makeOrdinal(int n) {
        String ordinal = "";
        int lastDigit = getLastDigit(n);
        int secondToLastDigit = getSecondToLastDigit(n);
        if (lastDigit == 1 && secondToLastDigit != 1) {
            ordinal = Integer.toString(n) + "st"; // Integer.toString(n), not n.toString(): n is a primitive int and has no methods
        } else if (lastDigit == 2 && secondToLastDigit != 1) {
            ordinal = Integer.toString(n) + "nd";
        } else if (lastDigit == 3 && secondToLastDigit != 1) {
            ordinal = Integer.toString(n) + "rd";
        } else {
            ordinal = Integer.toString(n) + "th";
        }
        return ordinal;
    }

    private int getLastDigit(int n) {
        int remainder = n % 10;
        return remainder;
    }

    private int getSecondToLastDigit(int n) {
        int remainder = 0;
        for (int i = 0; i < 2; i++) {
            remainder = n % 10;
            n /= 10;
        }
        return remainder;
    }
}
What made it tricky:
My first version of this was running just fine... it just kept returning the wrong answer. Here's a clue: this is what my private methods getSecondToLastDigit(n) and getLastDigit(n) looked like:
private int getLastDigit(int n) {
    int remainder = 0;
    while (n > 0) {
        remainder = n % 10;
        n /= 10;
    }
    return remainder;
}

private int getSecondToLastDigit(int n) {
    int remainder = 0;
    while (n > 10) {
        remainder = n % 10;
        n /= 10;
    }
    return remainder;
}
So while getLastDigit(n) should have just taken the remainder n % 10 *once* and returned it, instead it kept dividing n by 10 until there was nothing left. So it was really stripping off and returning the *first* digit, not the last! getSecondToLastDigit(n) was behaving the same way, except it stopped one iteration sooner, so it returned the second digit, not the second-to-last. Ugh, so embarrassing; blame it on the Christmas food abundance.
I didn't see the pattern or understand what was wrong for a while so I made this quick debugging program that printed out the results of just getLastDigit(n):
import acm.program.ConsoleProgram;

public class testLastDigit extends ConsoleProgram {

    public void run() {
        int n = readInt("Enter an int and we'll give you the last digit");
        println(getLastDigit(n));
    }

    private int getLastDigit(int n) {
        int remainder = 0;
        while (n > 0) {
            remainder = n % 10;
            n /= 10;
        }
        return remainder;
    }
}
I ran it a bunch of times and saw that indeed, testLastDigit was giving me the wrong answer. Once I saw that it was wrong and predictably wrong, the fix was easy.
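If I were tightening this program further (my own suggestion, not part of the original post), I'd inspect the last two digits with a single n % 100, which handles the 11/12/13 exception without a second digit-stripping helper. A plain-Java sketch, without the ACM library:

```java
public class OrdinalSuffix {
    // Returns the ordinal form of n: 1 -> "1st", 2 -> "2nd", 12 -> "12th", 21 -> "21st".
    public static String ordinalForm(int n) {
        int lastTwo = Math.abs(n) % 100; // one mod exposes both digits; 11-13 are the exceptions
        int last = lastTwo % 10;
        String suffix;
        if (lastTwo >= 11 && lastTwo <= 13) {
            suffix = "th";
        } else if (last == 1) {
            suffix = "st";
        } else if (last == 2) {
            suffix = "nd";
        } else if (last == 3) {
            suffix = "rd";
        } else {
            suffix = "th";
        }
        return n + suffix;
    }

    public static void main(String[] args) {
        for (int n : new int[] {1, 2, 3, 4, 11, 12, 13, 21, 102, 111}) {
            System.out.println(ordinalForm(n));
        }
    }
}
```

Checking the last *two* digits first means the 11th/12th/13th cases never reach the st/nd/rd branches, which is exactly the exception the problem statement describes.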
| {"url":"http://reneecoding.blogspot.com/2011/12/program-to-determine-ordinal-value-of.html","timestamp":"2014-04-19T04:20:13Z","content_type":null,"content_length":"72457","record_id":"<urn:uuid:66baaa9d-100a-48f8-a842-ba58eda9119a>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
OCR for page 184
8 Theoretical Research: Intangible Cornerstone of Computer Science

The theory and vocabulary of computing did not appear ready-made. Some important concepts, such as operating systems and compilers, had to be invented de novo. Others, such as recursion and invariance, can be traced to earlier work in mathematics. They became part of the evolving computer science lexicon as they helped to stimulate or clarify the design and conceptualization of computing artifacts. Many of these theoretical concepts from different sources have now become so embedded in computing and communications that they pervade the thinking of all computer scientists. Most of these notions, only vaguely perceived in the computing community of 1960, have since become ingrained in the practice of computing professionals and even made their way into high-school curricula.

Although developments in computing theory are intangible, theory underlies many aspects of the construction, explanation, and understanding of computers, as this chapter demonstrates. For example, the concept of state machines (described below) contributed to the development of compilers and communications protocols, insights into computational complexity have been applied to improve the efficiency of industrial processes and information systems, formal verification methods have provided a tool for improving the reliability of programs, and advances in number theory resulted in the development of new encryption methods. By serving as practical tools for use in reasoning and description, such theoretical notions have informed progress in all corners of computing.

Although most of these ideas have a basis in mathematics, they have
become so firmly fixed in the instincts of computer scientists and engineers that they are likely to be used as naturally as a cashier uses arithmetic, with little attention to the origins of the process. In this way, theory pervades the daily practice of computer science and lends legitimacy to the very identity of the field.

This chapter reviews the history and the funding sources of four
process. In this way, theory pervades the daily practice of computer science and lends legitimacy to the very identity of the field. This chapter reviews the history and the funding sources of four
areas of theoretical computer science: state machines, computational complexity, program correctness, and cryptography. A final section summarizes the lessons to be learned from history. Although by
no means a comprehensive overview of theoretical computer science, the discussion focuses on topics that are representative of the evolution in the field and can be encapsulated fairly, without
favoring any particular thesis. State machines, computational complexity, and verification can be traced to the work of logicians in the late 1800s and early 1900s. Cryptography dates back even
further. The evolution of these subfields reflects the interplay of mathematics and computer science and the ways in which research questions changed as computer hardware placed practical constraints
on theoretical constructs. Each of the four areas is now ubiquitous in the basic conceptual toolkit of computer scientists as well as in undergraduate curricula and textbooks. Each area also
continues to evolve and pose additional challenging questions. Because it tracks the rise of ideas into the general consciousness of the computer science community, this case study is concerned less
with issues of ultimate priority than with crystallizing events. In combination, the history of the four topics addressed in this chapter illustrates the complex fabric of a dynamic field. Ideas
flowed in all directions, geographically and organizationally. Breakthroughs were achieved in many places, including a variety of North American and European universities and a few industrial
research laboratories. Soviet theoreticians also made a number of important advances, although they are not emphasized in this chapter. Federal funding has been important, mostly from the National
Science Foundation (NSF), which began supporting work in theoretical computer science shortly after its founding in 1950. The low cost of theoretical research fit the NSF paradigm of
single-investigator research. Originally, such work was funded through the division of mathematical sciences, but with the establishment of the Office of Computing Activities in 1970, the NSF
initiated a theoretical computer science program that continues to this day. As Thomas Keenan, an NSF staffer, put it: Computer science had achieved the title "computer science" without much science
in it, [so we] decided that to be a science you had to have theory, and not just theory itself as a separate program, but everything had to have a theoretical basis. And so, whenever we had a
proposal . . .
we encouraged, as much as we could, some kind of theoretical background for this proposal. (Aspray et al., 1996)

The NSF ended up funding the bulk of theoretical work in the field (by 1980 it had
supported nearly 400 projects in computational theory), much of it with great success. Although funding for theoretical computer science has declined as a percentage of the NSF budget for computing
research (it constituted 7 percent of the budget in 1996, down from 20 percent in 1973), it has grown slightly in real dollars.1 Mission-oriented agencies, such as the National Aeronautics and Space
Administration or the Defense Advanced Research Projects Agency, tend not to fund theoretical work directly because of their emphasis on advancing computing technology, but some advances in theory
were made as part of their larger research agendas. Machine Models: State Machines State machine are ubiquitous models for describing and implementing various aspects of computing. The body of theory
and implementation techniques that has grown up around state machines fosters the rapid and accurate construction and analysis of applications, including compilers, text-search engines, operating
systems, communication protocols, and graphical user interfaces. The idea of a state machine is simple. A system (or subsystem) is characterized by a set of states (or conditions) that it may assume.
The system receives a series of inputs that may cause the machine to produce an output or enter a different state, depending on its current state. For example, a simplified state diagram of a
telephone activity might identify states such as idle, dial tone, dialing, ringing, and talking, as well as events that cause a shift from one state to another, such as lifting the handset, touching
a digit, answering, or hanging up (see Figure 8.1). A finite state machine, such as a telephone, can be in only one of a limited number of states. More powerful state machine models admit a larger,
theoretically infinite, number of states. The notion of the state machine as a model of all computing was described in Alan Turing's celebrated paper on computability in 1936, before any
general-purpose computers had been built. Turing, of Cambridge University, proposed a model that comprised an infinitely long tape and a device that could read from or write to that tape (Turing,
1936). He demonstrated that such a machine could serve as a general-purpose computer. In both academia and industry, related models were proposed and studied during the following two decades,
resulting in a definitive 1959 paper by Michael Rabin and Dana Scott of IBM Corporation (Rabin
OCR for page 184
--> Figure 8.1 Simplified state diagram for supervising a telephone line. States are represented by circles, inputs by labels on arrows. Actions in placing a call lead down the left side of the
diagram; actions in receiving a call lead down the right. The state labeled "Please" at the bottom of the diagram announces "Please hang up."
OCR for page 184
--> and Scott, 1959). Whereas Turing elucidated the undecidability2 inherent in the most general model, Rabin and Scott demonstrated the tractability of limited models. This work enabled the finite
state machine to reach maturity as a theoretical model. Meanwhile, state machines and their equivalents were investigated in connection with a variety of applications: neural networks (Kleene, 1936;
McCulloch and Pitts, 1943); language (Chomsky, 1956); communications systems (Shannon, 1948), and digital circuitry (Mealy, 1955; Moore, 1956). A new level of practicality was demonstrated in a
method of deriving efficient sequential circuits from state machines (Huffman, 1954). When formal languages—a means of implementing state machines in software—emerged as an academic research area in
the 1960s, machines of intermediate power (i.e., between finite-state and Turing machines) became a focus of research. Most notable was the ''pushdown automata,'' or state machine with an auxiliary
memory stack, which is central to the mechanical parsing performed to interpret sentences (usually programs) in high-level languages. As researchers came to understand parsing, the work of
mechanizing a programming language was formalized into a routine task. In fact, not only parsing but also the building of parsers was automated, facilitating the first of many steps in converting
compiler writing from a craft into a science. In this way, state machines were added to the everyday toolkit of computing. At the same time, the use of state machines to model communication
systems—as pioneered by Claude Shannon—became commonplace among electrical and communications engineers. These two threads eventually coalesced in the study of communications protocols, which are now
almost universally specified in terms of cooperating state machines (as discussed below in the section dealing with correctness). The development of formal language theory was spurred by the
construction of compilers and invention of programming languages. Compilers came to the world's attention through the Fortran project (Backus, 1979), but they could not become a discipline until the
programming language Algol 60 was written. In the defining report, the syntax of Algol 60 was described in a novel formalism that became known as Backus-Naur form. The crisp, mechanical appearance of
the formalism inspired Edward Irons, a graduate student at Yale University, to try to build compilers directly from the formalism. Thereafter, compiler automation became commonplace, as noted above.
A task that once required a large team could now be assigned as homework. Not only did parsers become easy to make; they also became more reliable. Doing the bulk of the construction automatically
reduced the chance of bugs in the final product, which might be anything from a compiler for Fortran to an interpreter for Hypertext Markup Language (HTML).
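The pushdown idea, a finite control augmented with a stack, can be sketched in a few lines. The example below is my own illustration, not from the chapter: it recognizes balanced brackets, a language that no finite state machine can recognize but that a single stack handles easily, which is exactly why parsers for nested language constructs need pushdown power.

```java
import java.util.ArrayDeque;
import java.util.Deque;

// A minimal pushdown-automaton sketch: the finite control is trivial and
// the interesting "state" lives on the stack of pending open brackets.
public class BalancedBrackets {
    public static boolean accepts(String input) {
        Deque<Character> stack = new ArrayDeque<>();
        for (char c : input.toCharArray()) {
            if (c == '(' || c == '[') {
                stack.push(c);                      // push on an opener
            } else if (c == ')' || c == ']') {
                if (stack.isEmpty()) return false;  // closer with no matching opener
                char open = stack.pop();
                if (c == ')' && open != '(') return false;
                if (c == ']' && open != '[') return false;
            } else {
                return false;                       // reject symbols outside the alphabet
            }
        }
        return stack.isEmpty();                     // accept only on empty stack
    }

    public static void main(String[] args) {
        System.out.println(accepts("([()])")); // true
        System.out.println(accepts("([)]"));   // false
    }
}
```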
State machines were developed by a mix of academic and industrial researchers. The idea began as a theoretical construct but is now fully naturalized throughout computer science as an organizing
principle and specification tool, independent of any analytical considerations. Introductory texts describe certain programming patterns as state driven (Garland, 1986) or state based (Clancy and
Linn, 1995). An archetypal state-based program is a menu-driven telephone-inquiry system. Based on their familiarity with the paradigm, software engineers instinctively know how to build such
programs. The ubiquity of the paradigm has led to the development of special tools for describing and building state-based systems, just as for parsers. Work continues to devise machine models to
describe different types of systems.

Computational Complexity

The theory of computability preceded the advent of general-purpose computers and can be traced to work by Turing, Kurt Gödel, Alonzo
Church, and others (Davis, 1965). Computability theory concentrated on a single question: Do effective procedures exist for deciding mathematical questions? The requirements of computing have raised
more detailed questions about the intrinsic complexity of digital calculation, and these questions have raised new issues in mathematics. Algorithms devised for manual computing often were
characterized by operation counts. For example, various schemes were proposed for carrying out Gaussian elimination or finite Fourier transforms using such counts. This approach became more common
with the advent of computers, particularly in connection with algorithms for sorting (Friend, 1956). However, the inherent degree of difficulty of computing problems did not become a discrete
research topic until the 1960s. By 1970, the analysis of algorithms had become an established aspect of computer science, and Knuth (1968) had published the first volume of a treatise on the subject
that remains an indispensable reference today. Over time, work on complexity theory has evolved just as practical considerations have evolved: from concerns regarding the time needed to complete a
calculation, to concerns about the space required to perform it, to issues such as the number of random bits needed to encrypt a message so that the code cannot be broken. In the early 1960s, Hao
Wang noted distinctions of form that rendered some problems in mathematical logic decidable, whereas logical problems as a class are undecidable. There also emerged a robust classification of
problems based on the machine capabilities required to attack them. The classification was dramatically refined by Juris Hartmanis and Richard Stearns at General Electric Company (GE), who showed
that, within a single machine model, a hierarchy of complexity classes exists, stratified by space or time requirements. Hartmanis then left GE to found the computer science department at Cornell
University. With NSF support, Hartmanis continued to study computational complexity, a field widely supported by NSF. Hartmanis and Stearns developed a "speed-up" theorem, which said essentially that
the complexity hierarchy is unaffected by the underlying speed of computing. What distinguishes levels of the hierarchy is the way that solution time varies with problem size—and not the scale at
which time is measured. Thus, it is useful to talk of complexity in terms of order-of-growth. To that end, the "big-oh" notation, of the form O(n), was imported from algorithm analysis to computing
(most notably by Knuth [1976]), where it has taken on a life of its own. The notation is used to describe the rate at which the time needed to generate a solution varies with the size of the problem.
Problems in which there is a linear relationship between problem size and time to solution are O(n); those in which the time to solution varies as the square of the problem size are O(n^2). Big-oh
estimates soon pervaded algorithm courses and have since been included in curricula for computer science in high schools. The quantitative approach to complexity pioneered by Hartmanis and Stearns
spread rapidly in the academic community. Applying this sharpened viewpoint to decision problems in logic, Stephen Cook at the University of Toronto proposed the most celebrated theoretical notion in
computing—NP completeness. His "P versus NP" conjecture is now counted among the important open problems of mathematics. It states that there is a sharp distinction between problems that can be
computed deterministically or nondeterministically in a tractable amount of time. Cook's theory, and previous work by Hartmanis and Stearns, helps categorize problems as either deterministic or
nondeterministic. The practical importance of Cook's work was vivified by Richard Karp, at the University of California at Berkeley (UC-Berkeley), who demonstrated that a collection of
nondeterministically tractable problems, including the famous traveling-salesman problem,6 are interchangeable ("NP complete") in the sense that, if any one of them is deterministically tractable,
then all of them are. A torrent of other NP-complete problems followed, unleashed by a seminal book by Michael Garey and David Johnson at Bell Laboratories (Garey and Johnson, 1979). Cook's
conjecture, if true, implies that there is no hope for precisely solving any of these problems on a real computer without incurring an exponential time penalty. As a result, software designers,
knowing that particular applications (e.g., integrated-circuit layout) are intrinsically difficult, can opt for "good enough" solutions, rather than seeking "best possible" solutions. This leads to
another question: How good a solution
can be obtained for a given amount of effort? A more refined theory about approximate solutions to difficult problems has been developed (Hochbaum, 1997), but, given that approximations are not
widely used by computer scientists, this theory is not addressed in detail here. Fortunately, good approximation methods do exist for some NP-complete problems. For example, huge "traveling salesman
routes" are routinely used to minimize the travel of an automated drill over a circuit board in which thousands of holes must be bored. These approximation methods are good enough to guarantee that
certain easy solutions will come very close to (i.e., within 1 percent of) the best possible solution.

Verifying Program Correctness

Although the earliest computer algorithms were written largely to
solve mathematical problems, only a tenuous and informal connection existed between computer programs and the mathematical ideas they were intended to implement. The gap between programs and
mathematics widened with the rise of system programming, which concentrated on the mechanics of interacting with a computer's environment rather than on mathematics. The possibility of treating the
behavior of programs as the subject of a mathematical argument was advanced in a compelling way by Robert Floyd at UC-Berkeley and later amplified by Anthony Hoare at The Queen's University of
Belfast. The academic movement toward program verification was paralleled by a movement toward structured programming, christened by Edsger Dijkstra at Technische Universiteit Eindhoven and
vigorously promoted by Harlan Mills at IBM and many others. A basic tenet of the latter movement was that good program structure fosters the ability to reason about programs and thereby assure their
correctness.7 Moreover, analogous structuring was to inform the design process itself, leading to higher productivity as well as better products. Structured programming became an obligatory slogan in
programming texts and a mandated practice in many major software firms. In the full verification approach, a program's specifications are described mathematically, and a formal proof that the program
realizes the specifications is carried through. To assure the validity of the (exhaustingly long) proof, it would be carried out or checked mechanically. To date, this approach has been too onerous
to contemplate for routine programming. Nevertheless, advocates of structured programming promoted some of its key ideas, namely precondition, postcondition, and invariant (see Box 8.1). These terms
have found their way into every computer science curriculum, even at the high school level. Whether or not logic is overtly asserted in code written by everyday programmers, these ideas inform their work.

BOX 8.1 The Formal Verification Process
In formal verification, computer programs become objects of mathematical study. A program is seen as affecting the state of the data with which it interacts. The purpose of the program is to transform a state with known properties (the precondition) into a state with initially unknown, but desired, properties (the postcondition). A program is composed of elementary operations, such as adding or comparing quantities. The transforming effect of each elementary operation is known. Verification consists of proving, by logical deduction, that the sequence of program steps starting from the precondition must inexorably lead to the desired postcondition. When programs involve many repetitions of the same elementary steps, applied to many different data elements or many transformational stages starting from some initial data, verification involves showing once and for all that, no matter what the data are or how many steps it takes, a program eventually will achieve the postcondition. Such an argument takes the form of a mathematical induction, which asserts that the state after each repetition is a suitable starting state for the next repetition. The assertion that the state remains suitable from repetition to repetition is called an "invariant" assertion. An invariant assertion is not enough, by itself, to assure a solution. To rule out the possibility of a program running forever without giving an answer, one must also show that the postcondition will eventually be reached. This can be done by showing that each repetition makes a definite increment of progress toward the postcondition, and that only a finite number of such increments are possible. Although notionally straightforward, the formal verification of everyday programs poses a daunting challenge. Familiar programs repeat thousands of elementary steps millions of times. Moreover, it is a forbidding task to define precise preconditions and postconditions for a program (e.g., a spreadsheet or word processor) with an informal manual running into the hundreds of pages. To carry mathematical arguments through on this scale requires automation in the form of verification tools. To date, such tools can handle only problems with short descriptions—a few dozen pages, at most. Nevertheless, it is possible for these few pages to describe complex or subtle behavior. In these cases, verification tools come in handy.

The
structured programming perspective led to a more advanced discipline, promulgated by David Gries at Cornell University and Edsger Dijkstra at Eindhoven, which is beginning to enter curricula. In this
approach, programs are derived from specifications by algebraic calculation. In the most advanced manifestation, formulated by Eric Hehner, programming is identified with mathematical logic. Although
it remains to be seen whether this degree of mathematicization will eventually become common practice, the history of engineering analysis suggests that this outcome is likely. In one area, the design of distributed systems, mathematicization is spreading in the field perhaps
faster than in the classroom. The initial impetus was West's validation of a proposed international standard protocol. The subject quickly matured, both in practice (Holzmann, 1991) and in theory
(Vardi and Wolper, 1986). By now, engineers have harnessed a plethora of algebras (e.g., temporal logic, process algebra) in practical tools for analyzing protocols used in applications ranging from
hardware buses to Internet communications. It is particularly difficult to foresee the effects of abnormal events on the behavior of communications applications. Loss or garbling of messages between
computers, or conflicts between concurrent events, such as two travel agents booking the same airline seat, can cause inconvenience or even catastrophe, as noted by Neumann (1995). These real-life
difficulties have encouraged research in protocol analysis, which makes it possible to predict behavior under a full range of conditions and events, not just a few simple scenarios. A body of theory
and practice has emerged in the past decade to make automatic analysis of protocols a practical reality.

Cryptography
Cryptography is now more important than ever. Although the military has a long
history of supporting research on encryption techniques to maintain the security of data transmissions, it is only recently that cryptography has come into widespread use in business and personal
applications. It is an increasingly important component of systems that secure online business transactions or maintain the privacy of personal communications.8 Cryptography is a field in which
theoretical work has clear implications for practice, and vice versa. The field has also been controversial, in that federal agencies have sometimes opposed, and at other times supported, publicly
accessible research. Here again, the NSF supported work for which no funding could be obtained from other agencies. The scientific study of cryptography matured in conjunction with information
theory, in which coding and decoding are central concerns, albeit typically in connection with compression and robust transmission of data as opposed to security or privacy concerns. Although Claude
Shannon's seminal treatment of cryptography (Shannon, 1949) followed his founding paper on information theory, it was actually written earlier under conditions of wartime security. Undoubtedly,
Shannon's involvement with cryptography on government projects helped shape his thinking about information theory. Through the 1970s, research in cryptography was pursued mainly
under the aegis of government agencies. Although impressive accomplishments, such as Great Britain's Ultra code-breaking enterprise in World War II, were known by reputation, the methods were
largely kept secret. The National Security Agency (NSA) was for many years the leader in cryptographic work, but few of the results were published or found their way into the civilian community.
However, an independent movement of cryptographic discovery developed, driven by the availability and needs of computing. Ready access to computing power made cryptographic experimentation feasible,
just as opportunities for remote intrusion made it necessary and the mystery surrounding the field made it intriguing. In 1977, the Data Encryption Standard (DES) developed at IBM for use in the
private sector received federal endorsement (National Bureau of Standards, 1977). The mechanism of DES was disclosed, although a pivotal aspect of its scientific justification remained classified.
Speculation about the strength of the system spurred research just as effectively as if a formal request for proposals had been issued. On the heels of DES came the novel proposal for public-key
cryptography by Whitfield Diffie and Martin Hellman at Stanford University, and, independently, by R.C. Merkle. Hellman had been interested in cryptography since the early 1970s and eventually
convinced the NSF to support it (Diffie and Hellman, 1976). The notion of public-key cryptography was soon made fully practical by Ronald Rivest, Adi Shamir, and Leonard Adleman at the Massachusetts
Institute of Technology, who, with funding from the NSF and Office of Naval Research (ONR), devised a public-key method based on number theory (Rivest et al., 1978) (see Box 8.2). Their method won
instant acclaim and catapulted number theory into the realm of applied mathematics. Each of the cited works has become bedrock for the practice and study of computer security. The NSF support was
critical, as it allowed the ideas to be developed and published in the open, despite pressure from the NSA to keep them secret. The potential entanglement with International Traffic in Arms
Regulations is always apparent in the cryptography arena (Computer Science and Telecommunications Board, 1996). Official and semiofficial attempts to suppress publication have often drawn extra
notice to the field (Diffie, 1996). This unsolicited attention has evoked a notable level of independence among investigators. Most, however, have achieved a satisfactory modus vivendi with the
concerned agencies, as evidenced by the seminal papers cited in this chapter that report on important cryptographic research performed under unclassified grants.
BOX 8.2 Rivest-Shamir-Adleman Cryptography
Before public-key cryptography was invented, cipher systems required two communicating parties to agree in advance on a secret key to be used in
encrypting and decrypting messages between them. To assure privacy for every communication, a separate arrangement had to be made between each pair who might one day wish to communicate. Parties who
did not know each other in advance of the need to communicate were out of luck. By contrast, public-key cryptography requires merely that an individual announce a single (public) encryption key that
can be used by everyone who wishes to send that individual a message. To decode any of the messages, this individual uses a different but mathematically related key, which is private. The security of
the system depends on its being prohibitively difficult for anyone to discover the private key if only the public key is known. The practicality of the system depends on there being a feasible way to
produce pairs of public and private keys. The first proposals for public-key cryptography appealed to complexity theory for problems that are difficult to solve. The practical method proposed by
Rivest, Shamir, and Adleman (RSA) depends on a problem believed to be of this type from number theory. The problem is factoring. The recipient chooses two huge prime numbers and announces only their
product. The product is used in the encryption process, whereas decryption requires knowledge of the primes. To break the code, one must factor the product, a task that can be made arbitrarily hard
by picking large enough numbers; hundred-digit primes are enough to seriously challenge a stable of supercomputers. The RSA method nicely illustrates how theory and practice evolved together.
Complexity theory was motivated by computation and the desire to understand whether the difficulty of some problems was inherent or only a symptom of inadequate understanding. When it became clear
that inherently difficult problems exist, the stage was set for public-key cryptography. This was not sufficient to advance the state of practice, however. Theory also came to the fore in suggesting
problems with structures that could be adapted to cryptography. It took the combination of computers, complexity theory, and number theory to make public-key cryptography a reality, or even
conceivable. Once the idea was proposed, remarkable advances in practical technique followed quickly. So did advances in number theory and logic, spurred by cryptographic needs. The general area of
protection of communication now covers a range of topics, including code-breaking (even the "good guys" must try to break codes to confirm security); authentication (i.e., preventing imposters in
communications); checks and balances (i.e., forestalling rogue actions, such as embezzlement or missile launches, by nominally trusted people); and protection of intellectual property (e.g., by
making information theft-proof or providing evidence that knowledge exists without revealing the knowledge).
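The factoring scheme described in this box can be illustrated with deliberately tiny numbers. The sketch below uses standard textbook toy parameters (p = 61, q = 53), not values from the report, and is of course insecure at this scale:

```python
# Toy RSA: encrypt and decrypt with tiny primes (illustrative only).
p, q = 61, 53
n = p * q                     # public modulus (3233); only its factors are secret
phi = (p - 1) * (q - 1)       # 3120; easy to compute only if n can be factored
e = 17                        # public exponent, chosen coprime to phi
d = pow(e, -1, phi)           # private exponent: modular inverse of e mod phi
message = 65
cipher = pow(message, e, n)   # anyone can encrypt with the public key (e, n)
plain = pow(cipher, d, n)     # only the holder of d can decrypt
print(cipher, plain)          # 2790 65
```

Real deployments use primes hundreds of digits long, precisely so that factoring n, and hence recovering d, is prohibitively difficult.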
Lessons from History
Research in theoretical computer science has been supported by both the federal government and industry. Almost without exception in the cases discussed, contributions from
U.S. academia acknowledge the support of federal agencies, most notably the NSF and ONR. Nevertheless, many advances in theoretical computer science have emerged from major industrial research
laboratories, such as IBM, AT&T (Bell Laboratories), and GE. This is partly because some of the areas examined developed before the NSF was established, but also because some large corporate
laboratories have provided an environment that allows industry researchers to produce directly relevant results while also carrying on long-term, theoretical investigations in the background.
Shannon, for example, apparently worked on information theory for a decade before he told the world about it. Theoretical computer science has made important contributions to computing practice
while, conversely, also being informed by that practice. Work on the theory of one-way functions, for example, led to the development of public-key cryptography, and the development of complexity
theory, such as Cook's conjecture, sparked efforts to improve methods for approximating solutions to nondeterministically tractable problems. Similarly, the theoretical work in complexity and program
correctness (or verification) has been redirected by the advancing needs of computing systems. Academia has played a key role in propagating computing theory. By teaching and writing textbooks,
academic researchers naturally influenced the subjects taught, especially during the formative years of computer science departments. However, some important synthesizing books have come from
industrial research laboratories, where management has seen fit to support such writing to enhance prestige, attract candidates, and foster the competence on which research depends. Foreign nations
have contributed to theoretical computer science. Although the United States has been the center of systems-related research, a considerable share of the mathematical underpinnings for computer
science can be attributed to British, Canadian, and European academics. (The wider practical implementation of this work in the United States may be explained by a historically greater availability
of computers.) The major foreign contributions examined in this case were all supported by governments; none came from foreign industry. Personal and personnel dynamics have also played important
roles. Several of the papers cited in this chapter deal with work that originated during the authors' visiting or short-term appointments, when they were free of the ancillary burdens associated with
permanent positions. Researchers in theoretical computer science have often migrated between industry and academia, and researchers in these sectors have often collaborated. Such mixing and travel helped infuse computing
theory with an understanding of the practical problems faced by computer designers and helped establish a community of researchers with a common vocabulary.

Notes
1. Between 1973 and 1996, NSF funding for theoretical computer science grew from less than $2 million to almost $7 million. In 1996 dollars (i.e., taking inflation into account), the NSF spent the equivalent of $6.1 million on theory in 1973, versus $6.9 million in 1996. Thus, the real increase in funding over 23 years was just 13 percent, or about 0.5 percent a year, on average.
2. A class of mathematical problems, usually with yes or no answers, is called "decidable" if there is an algorithm that will produce a definite answer for every problem in the class. Otherwise, the class of problems is undecidable. Turing demonstrated that no algorithm exists for answering the question of whether a Turing-machine calculation will terminate. The question might be answered for many particular machines, even mechanically. But no algorithm will answer it for all machines: there must be some machine about which the algorithm will never come to a conclusion.
3. Hao Wang, who began his work at Oxford University and later moved to Bell Laboratories and IBM, elucidated the sources of undecidability.
4. As an example of big-oh notation, the number of identical fixed-size solid objects that can be fit into a cube with sides of length L is O(L^3), regardless of the size or shape of the objects. This means that for L arbitrarily large, at most L^3 objects will fit (scaled by some constant).
5. A class of problems is said to be "tractable" when the time necessary to solve problems of the class varies at most as a power of problem size.
6. This problem involves figuring out the most efficient route for a salesperson to follow in visiting a list of cities. Each additional city added to the list creates a whole series of additional possible routes that must be evaluated to identify the shortest one. Thus, the complexity of the problem grows much faster than does the list of cities.
7. Correctness is defined as the property of being consistent with a specification.
8. For a more complete discussion of cryptography's growing importance, see Computer Science and Telecommunications Board (1996).
Root functions
A root is a number that, when used as a factor n times, gives back another number. Students most commonly encounter square roots. Quite literally, a square root is the root of a squared value. For example, the square root of 4 is 2 or -2. The square root of 9 is 3 or -3. The square root of 16 is 4 or -4. In each case, the root multiplied by itself equals the original number.

Even-numbered roots of a positive number come in pairs: a positive root and its opposite. For example, (-2)^2 or 2^2 = 4, (-3)^2 or 3^2 = 9, and (-4)^2 or 4^2 = 16.
There are other root functions besides square roots. There are cube roots, 4th roots, 5th roots, et cetera. In each case, the value of the root is the number that, when used as a factor n times, results in the value of the original number. Example: the cube root of 27 = 3, because 3*3*3 = 27; the 4th root of 256 = 4 or -4, because 4*4*4*4 (with either all positive or all negative values) = 256; and the 5th root of 100,000 is 10, because 10*10*10*10*10 = 100,000.
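These examples are easy to check numerically. The helper below (`nth_root` is our own name for illustration, not a built-in) also handles odd roots of negative numbers:

```python
# Numerically check the root examples above.
def nth_root(x, n):
    """Real nth root of x (principal root for x >= 0)."""
    if x < 0:
        if n % 2 == 0:
            raise ValueError("no real even root of a negative number")
        return -((-x) ** (1.0 / n))   # odd root of a negative is real and negative
    return x ** (1.0 / n)

print(round(nth_root(27, 3)))       # 3, since 3*3*3 = 27
print(round(nth_root(256, 4)))      # 4 (and -4 also works)
print(round(nth_root(100_000, 5)))  # 10
print(round(nth_root(-27, 3)))      # -3
```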
Odd-numbered roots of a positive value are always positive. This is because when a negative value is used as a factor an odd number of times, the result will always be negative. Example: the cube root of 27 is 3. 3*3*3 = 27, BUT (-3)*(-3)*(-3) = -27. A negative times a negative is a positive; a positive times a negative is a negative. 27 does NOT equal -27.

Overall, even-numbered roots may be positive or negative, but it is not possible to obtain a REAL number as an even root of a negative number. (Odd roots of negative numbers are real: the cube root of -27 is -3.) We'll get into imaginary numbers like "i" later.
Roots are not always neat and tidy. They may be expressed as non-integers (1.732 is approximately the square root of 3), or sometimes they are simply left as the original value under the radical sign.

Sometimes we may be required to reduce a root to its most simplified form. To do this we must determine the factors of the original number and break it down into its constituent values. For example, the square root of 8 is mathematically equivalent to the square root of 4*2. The square root of 4 can be reduced to 2. We place this 2 on the outside of the radical sign. The square root of 2, however, is not an integer, so we keep it under the radical sign. The simplified answer, then, is 2 times the square root of 2.
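That factor-out process can be automated. Here is a small sketch (the helper name `simplify_sqrt` is made up for illustration) that pulls every perfect-square factor outside the radical:

```python
# Simplify sqrt(n) into a*sqrt(b) by extracting squared factors.
def simplify_sqrt(n):
    """Return (a, b) such that sqrt(n) == a * sqrt(b), with b square-free."""
    a, b = 1, n
    f = 2
    while f * f <= b:
        while b % (f * f) == 0:   # pull each squared factor outside the radical
            a *= f
            b //= f * f
        f += 1
    return a, b

print(simplify_sqrt(8))    # (2, 2): sqrt(8) = 2 * sqrt(2)
print(simplify_sqrt(75))   # (5, 3): sqrt(75) = 5 * sqrt(3)
```

When b comes back as 1, the root is a perfect square and no radical is needed at all.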
Roots can be your BFF if you learn these simple rules.
The Teaching of Arithmetic I: The Story of an experiment
L. P. Benezet
Superintendent of Schools, Manchester, New Hampshire
Originally published in the Journal of the National Education Association, Volume 24, Number 8, November 1935, pp. 241-244
In the spring of 1929 the late Frank D. Boynton, superintendent of schools at Ithaca, New York, and president of the Department of Superintendence, sent to a number of his friends and brother
superintendents an article on a modern public-school program. His thesis was that we are constantly being asked to add new subjects to the curriculum [safety instruction, health instruction, thrift
instruction, and the like], but that no one ever suggests that we eliminate anything. His paper closed with a challenge which seemed to say, "I defy you to show me how we can cut out any of this
material." One thinks, of course, of McAndrew's famous simile that the American elementary school curriculum is like the attic of the Jones' house. The Joneses moved into this house fifty years ago
and have never thrown anything away.
I waited a month and then I wrote Boynton an eight-page letter, telling him what, in my opinion, could be eliminated from our present curriculum. I quote two paragraphs:
In the first place, it seems to me that we waste much time in the elementary schools, wrestling with stuff that ought to be omitted or postponed until the children are in need of studying it. If
I had my way, I would omit arithmetic from the first six grades. I would allow the children to practise making change with imitation money, if you wish, but outside of making change, where does
an eleven-year-old child ever have to use arithmetic?
I feel that it is all nonsense to take eight years to get children thru the ordinary arithmetic assignment of the elementary schools. What possible needs has a ten-year-old child for a knowledge
of long division? The whole subject of arithmetic could be postponed until the seventh year of school, and it could be mastered in two years' study by any normal child.
Having written the letter, I decided that if this was my real belief, then I was falling down on the job if I failed to put it into practise. At this time I had been superintendent in Manchester for
five years, and I had already been greatly criticized because I had dropped practically all of the arithmetic out of the curriculum for the first two grades and the lower half of the third. In 1924
the enrollment in the first grade was 20 percent greater than the enrollment in the second, because, roughly, one-fifth of the children could not meet the arithmetic requirements for promotion into
the second grade and so were forced to repeat the year. By 1929 the enrollment of the first grade was no greater than that of the third.
Meanwhile, I was distressed at the inability of the average child in our grades to use the English language. If the children had original ideas, they were very helpless about translating them into
English which could be understood. I went into a certain eighth-grade room one day and was accompanied by a stenographer who took down, verbatim, the answers given me by the children. I was trying to
get the children to tell me, in their own words, that if you have two fractions with the same numerator, the one with the smaller denominator is the larger. I quote typical answers.
• "The smaller number in fractions is always the largest."
• "If the numerators are both the same, and the denominators one is smaller than the one, the one that is the smaller is the larger."
• "If you had one thing and cut it into pieces the smaller piece will be the bigger. I mean the one you could cut the least pieces in would be the bigger pieces."
• "The denominator that is smallest is the largest."
• "If both numerators are the same number, the smaller denominator is the largest - the larger - of the two."
• "If you have two fractions and one fraction has the smallest number at the bottom. It is cut into pieces and one has the more pieces. If the two fractions are equal, the bottom number was smaller
than what the other one in the other fraction. The smallest one has the largest number of pieces - would have the smallest number of pieces, but they would be larger than what the ones that were
cut into more pieces."
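The rule the children were struggling to state fits in one line: for two fractions with the same numerator, the one with the smaller denominator is the larger. A quick check with exact arithmetic, purely as a modern illustration:

```python
from fractions import Fraction  # exact rational arithmetic from the standard library

# Same numerator, smaller denominator -> larger fraction.
print(Fraction(3, 4) > Fraction(3, 5))   # True: fourths are bigger pieces than fifths
print(Fraction(1, 2) > Fraction(1, 3))   # True
```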
The average layman will think that this must have been a group of half-wits, but I can assure you that it is typical of the attempts of fourteen-year-old children from any part of the country to put
their ideas into English. The trouble was not with the children or with the teacher; it was with the curriculum. If the course of study required that the children master long division before leaving
the fourth grade and fractions before finishing the fifth, then the teacher had to spend hours and hours on this work to the neglect of giving children practise in speaking the English language. I
had tried the same experiment in schools in Indiana and in Wisconsin with exactly the same result as in New Hampshire.
In the fall of 1929 I made up my mind to try the experiment of abandoning all formal instruction in arithmetic below the seventh grade and concentrating on teaching the children to read, to reason,
and to recite - my new Three R's. And by reciting I did not mean giving back, verbatim, the words of the teacher or of the textbook. I meant speaking the English language. I picked out five rooms -
three third grades, one combining the third and fourth grades, and one fifth grade. I asked the teachers if they would be willing to try the experiment. They were young teachers with perhaps an
average of four years' experience. I picked them carefully, but more carefully than I picked the teachers, I selected the schools. Three of the four schoolhouses involved [two of the rooms were in
the same building] were located in districts where not one parent in ten spoke English as his mother tongue. I sent home a notice to the parents and told them about the experiment that we were going
to try, and asked any of them who objected to it to speak to me about it. I had no protests. Of course, I was fairly sure of this when I sent the notice out. Had I gone into other schools in the city
where the parents were high school and college graduates, I would have had a storm of protest and the experiment would never have been tried. I had several talks with the teachers and they entered
into the new scheme with enthusiasm.
The children in these rooms were encouraged to do a great deal of oral composition. They reported on books that they had read, on incidents which they had seen, on visits that they had made. They
told the stories of movies that they had attended and they made up romances on the spur of the moment. It was refreshing to go into one of these rooms. A happy and joyous spirit pervaded them. The
children were no longer under the restraint of learning multiplication tables or struggling with long division. They were thoroughly enjoying their hours in school.
At the end of eight months I took a stenographer and went into every fourth-grade room in the city. As we have semi-annual promotions, the children who had been in the advanced third grade at the
time of the beginning of the experiment, were now in the first half of the fourth grade. The contrast was remarkable. In the traditional fourth grades when I asked children to tell me what they had
been reading, they were hesitant, embarrassed, and diffident. In one fourth grade I could not find a single child who would admit that he had committed the sin of reading. I did not have a single
volunteer, and when I tried to draft them, the children stood up, shook their heads, and sat down again. In the four experimental fourth grades the children fairly fought for a chance to tell me what
they had been reading. The hour closed, in each case, with a dozen hands waving in the air and little faces crestfallen, because we had not gotten around to hear what they had to tell.
For some years I had noted that the effect of the early introduction of arithmetic had been to dull and almost chloroform the child's reasoning faculties. There was a certain problem which I tried
out, not once but a hundred times, in grades six, seven, and eight. Here is the problem: "If I can walk a hundred yards in a minute [and I can], how many miles can I walk in an hour, keeping up the
same rate of speed?"
In nineteen cases out of twenty the answer given me would be six thousand, and if I beamed approval and smiled, the class settled back, well satisfied. But if I should happen to say, "I see. That
means that I could walk from here to San Francisco and back in an hour" there would invariably be a laugh and the children would look foolish.
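The arithmetic the children kept missing is worth setting down; the sketch below is purely a modern editorial illustration of the intended answer:

```python
# 100 yards a minute, kept up for an hour:
yards_per_hour = 100 * 60                # 6000 -- yards, not miles
miles_per_hour = yards_per_hour / 1760   # 1760 yards to the mile
print(yards_per_hour, round(miles_per_hour, 2))   # 6000 3.41
```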
I, therefore, told the teachers of these experimental rooms that I would expect them to give the children much practise in estimating heights, lengths, areas, distances, and the like. At the end of a
year of this kind of work, I visited the experimental room which had had a combination of third- and fourth-grade children, who now were fourth and fifth graders. I drew on the board a rough map of
the western end of Lake Ontario, the eastern end of Lake Erie, and the Niagara River. I asked them to guess what it was, and was not surprised when they identified the location. I then labeled three
spots along the river with the letters "Q," "NF," and "B." They identified Niagara Falls and Buffalo without any difficulty, but were puzzled by the "Q." Some thought it was Quebec but others knew it
was not. I finally told them that it was Queenstown. I then drew a cross section of the falls, showing the hard layer of rock above and the soft layer eating out underneath, and they told me what it
was and why it was that the stone was falling, little by little, from the edge. They told me how this process was going on. I then made the statement that in 1680, when white men had first seen the
falls, the falls were 2500 feet lower down than they are at present. I then asked them at what rate the falls were retreating upstream. These children, who had had no formal arithmetic for a year but
who had been given practise in thinking, told me that it was 250 years since white men had first seen the falls and that, therefore, the falls were retreating upstream at the rate of ten feet a year.
I then remarked that science had decided that the falls had originally started at Queenstown, and, indicating that Queenstown was now ten miles down the river, I asked them how many years the falls
had been retreating. They told me that if it had taken the falls 250 years to retreat about a half mile, it would be at the rate of 500 years to the mile, or 5000 years for the retreat from
Queenstown. The map had been drawn so as to show the distance from Niagara Falls to Buffalo as approximately twice the distance from Queenstown to Niagara Falls. Then I asked these children whether
they had any idea how long it would be before the falls would retreat to Buffalo and drain the lake. They told me that it would not happen for another ten thousand years. I asked them how they got
that and they told me that the map indicated that it was twenty miles from Niagara Falls to Buffalo, or thereabouts, and that this was twice the distance from Queenstown to Niagara Falls!
It so happened that a few days after this incident I was visiting a large New England city with five of my brother superintendents. Our host was interested in my description of this incident and
suggested that I try the same problem on a fifth grade in one of his schools. With the other superintendents as audience, I stood before an advanced fifth grade in what was known as the Demonstration
School, the school used for practise teaching and to which visitors were always sent.
The home superintendent: Boys and girls, would you like to have Superintendent Benezet of Manchester, New Hampshire, ask you some questions about Niagara Falls?
The children express pleasure at the idea.
Mr. Benezet: [Drawing a map on the board] Children, what is this that I have drawn on the blackboard?
Children: The Great Lakes.
Mr. B.: Good. What lakes?
A child: Lake Ontario and Lake Erie.
Mr. B.: Good. What is the river?
Child: The St. Lawrence River.
Mr. B.: That is really correct. It is the St. Lawrence River. But they call it by a different name here. They call it the Niagara River. What have you heard in connection with the Niagara River?
Another child: Niagara Falls are there.
Another child: Niagara Falls are connected with Niagara River.
Mr. B.: Oh! How are they connected?
Child: The water trickles down the Falls and goes into the Niagara River.
Mr. B.: I should call that quite a trickle. Have any of you children seen Niagara Falls?
Three raise their hands.
Mr. B.: How high are the falls? Have you any idea? Are they higher than this room?
Children: Yes [dubiously].
Mr. B.: Well, how high is this room?
Its height is guessed anywhere from 11 feet to 40 feet. The room is actually about 16 feet high. The question of the height of the falls is finally dropped.
Mr. B.: Well, never mind how high the falls are. On this map here I have indicated one spot and marked it "NF," and another spot and marked it "B." What does "NF" mean?
Children: Niagara Falls.
Mr. B.: What does "B" stand for?
Another child: Bay.
Mr. B.: No. Remember that Niagara Falls is not only the name of the Falls, but the name of a city.
Child: Baltimore.
After considerable pause, the home superintendent, in the back of the room, tells the class that the name of the city is also the name of an animal.
Child: Buffalo.
Mr. B.: Yes. Now there is another town here that I am going to mark "Q." It is not Quebec; it is Queenstown. People who have studied this carefully tell us that once upon a time the falls were at
Queenstown. Tell me now. What does it mean if I say that I show you the cross section of an apple?
Class is uncertain.
Mr. B.: Suppose that you cut an apple in half with a knife. What do I show you if I hold up one-half?
Child: Half the apple.
Another child: The core of the apple.
Third child: The inside of an apple.
Mr. B.: Tell me. Is the word "section" a new word to the majority of you?
Enthusiastic chorus of "No."
Mr. B.: Well, a cross-section of an apple means a cut right thru an apple. Why have I said this to you?
Meantime he has drawn on the board a cross-section of Niagara Falls.
Child: Because that is a cross-section of the falls.
Mr. Benezet now explains the two kinds of rock and asks which is the harder. They finally decide that the rock above is the harder. He then shows how the underneath rock rotted away, and that
finally there was a shelf of hard rock overhanging. This became too heavy and fell off; and the falls have thereby moved back some ten feet.
Mr. B.: Now, when white men first saw the falls in 1680 [placing this date on the board], the falls were further down the river than they are now, and it is estimated that since that time they
have moved back upstream about 2500 feet. Now how long ago was it that white men first saw the falls?
Child: Four hundred years.
Another child: Two hundred years.
Third child: Three hundred years.
Guesses range anywhere between 110 years and 450 years. One boy says it was about the time that Columbus sailed to America; another says that it was about the time of the Pilgrims and the
Mr. B.: Well, how are we going to find out?
General bewilderment for a while. Finally:
Child: Take 1930 and subtract it from 1680.
Mr. B.: Fine.
He writes on the blackboard, as dictated:

  1680
  1930
Mr. B.: Now take a look and tell me how many years that was. See if you can tell me before we subtract it, figure by figure.
It is to be noted that not one child called attention to the wrong position of the two sets of figures. They guess 350 years, 200 years, 400 years.
Mr. B.: Well, let's subtract it figure by figure.
Child: Zero from 0 equals 0. Three from 8 equals 5. Nine from 6 equals 3. Three hundred fifty years is the answer.
Mr. B.: How many think that 350 years is right?
About two-thirds of the hands go up. Finally two or three think that it is wrong.
Mr. B.: All right, correct it.
Child: It should have been 9 from 16 equals 7.
Mr. Benezet thereupon puts down 750 for the answer. When he asks how many in the room agree that this is right, practically every hand is raised. By this time the local superintendent was pacing
the floor at the rear of the room and throwing up his hands in dismay at this showing on the part of his prize pupils. After a time, as Mr. Benezet looks a little puzzled, the children gradually
become a little puzzled also. One little girl, Elsie Miller, finally comes to the board, reverses the figures, subtracts, and says the answer is 250 years.
Mr. B.: All right. If the falls have retreated 2500 feet in 250 years, how many feet a year have the falls moved upstream?
Child: Two feet.
Mr. Benezet registers complete satisfaction and asks how many in the class agree. Practically the whole class put hands up again.
Mr. B.: Well, has anyone a different answer?
Child: Eight feet.
Another child: Twenty feet.
Finally Elsie Miller again gets up, and says the answer is ten feet.
Mr. B.: What? Ten feet? (Registering great surprise)
The class, at this, bursts into a roar of laughter. Elsie Miller sticks to her answer, and is invited by Mr. Benezet to come up and prove it. He says that it seems queer that Elsie is so
obstinate when everyone is against her. She finally proves her point, and Mr. Benezet admits to the class that all the rest were wrong.
Mr. B.: Now, what fraction of a mile is it that the falls have retreated during the last 250 years?
Children guess 3/2, 3/4, 2/3, 1/20, 7/8 - everything except 1/2. The bell for dismissal rings and the session is over.
It will be noted that the local superintendent gave them a little hint at the outset, that was not given to the Manchester children, when he said, "Niagara Falls." They were prepared to identify my
map. Also, the Manchester children who had not learned tables but had talked a great deal about distances and dimensions, recognized the fact that 2500 feet was about a half a mile, while the
children in the larger city who were fresh from their tables, had little conception of the distance.
I was so delighted with the success of the experiment so far that in the fall of 1930 we started six or seven other rooms along the same line. The formal arithmetic was dropped and emphasis was
placed on English expression, on reasoning, and estimating of distances.
One day I tried an experiment having to do with English expression. I hung before a 7-B class a copy of a painting by Frederick Waugh, representing a polar bear floating on a small berg of ice. This
was a traditionally taught room in a school where there were very few children of foreign extraction. I asked the children to write anything which they felt inspired to put down as a result of seeing
the picture. Three quarters of an hour later I hung the same picture before another 7-B grade, one of the experimental groups this time, in a school where not more than three children in the room
came from homes where English was the language of the parents. I then called the seventh-grade teachers of the city together and read them the ten best papers from one room and the ten best from the
other. I asked them if they saw any difference. One teacher remarked that one group was about a year and a half or two years ahead of the other in maturity of expression, and there was general assent
to this statement. I said to the teachers, "If I should tell you that one group came from the 'A' school and the other from the 'B', from which school would you guess the better group of papers came?"
"Oh, the 'A' school, undoubtedly," said they, naming the school whose patrons speak English in their homes.
"Well," I said, "it was just the other way," and there was a murmur of incredulity. Then we analyzed the papers and counted the number of adjectives used by the traditionally taught pupils. There
were forty all told: nice, pretty, blue, green, cold, etc. We then counted the adjectives used by the other group [the number of papers was approximately the same] and we found 128, including
magnificent, awe-inspiring, unique, majestic, etc. The little Greeks, Armenians, Poles, and French-Canadians had far surpassed their English-speaking opponents.
I next tried a rather similar test. I hung the same picture - a landscape representing a river scene in the vicinity of Manchester - before ten different fifth-grade rooms. Five of them had been
brought up under the old traditional curriculum and five of them were of the experimental group. It was the same story: the experimental rooms far excelled the others in fluency of expression. They
used words that the others had never heard of. Nevertheless, when we came to test the papers for spelling, the poorest of the experimental rooms exactly tied the record of the best of the traditional
groups. The most surprising result came in a certain room in which there was housed a 5-B grade and a 5-A. The younger pupils, the 5-B's, had been brought up under the experimental curriculum,
without arithmetic, while the other half of the room were traditional. The 5-A's made the poorest record of all the ten groups while the 5-B's, the younger group, were next to the top. For four
months they had been taught by the same teacher but by different methods.
Now we were ready to experiment on a much larger scale. By the fall of 1932 about one-half of the third-, fourth-, and fifth-grade rooms in the city were working under the new curriculum. Some of the
principals were a little dubious and asked permission to postpone formal arithmetic until the beginning of the sixth grade instead of the beginning of the seventh. Accordingly, permission was given
to four schools to begin the use of the arithmetic book with the 6-B grade. About this time Professor Guy Wilson of Boston University asked permission to test our program. One of our high school
teachers was working for her master's degree at Boston University and as part of her work he assigned her the task of giving tests in arithmetic to 200 sixth grade children in the Manchester schools.
They were divided fairly evenly, 98 from experimental rooms and 102 from the traditional groups, or something like that. These were all sixth graders. Half of them had had no arithmetic until
beginning the sixth grade and the other half had had it throughout the course, beginning with the 3-A. In the earlier tests the traditionally trained people excelled, as was to be expected, for the
tests involved not reasoning but simply the manipulation of the four fundamental processes. By the middle of April, however, all the classes were practically on a par and when the last test was given
in June, it was one of the experimental groups that led the city. In other words these children, by avoiding the early drill on combinations, tables, and that sort of thing, had been able, in one
year, to attain the level of accomplishment which the traditionally taught children had reached after three and one-half years of arithmetical drill. [This article will be continued in the December issue.]
Credit Courses Through the Math Learning Center
This course is designed for those in Developmental Mathematics to receive credit for using the MLC on a regular basis. You must attend a total of approximately 20 hours per quarter for each credit
you sign up for. Choose this course if you are concurrently enrolled in Math 060, Math 070, Math 080, Math 099, or BUS 102.
You need to check in and out at the front desk every time you use the MLC. If you forget to check out, you will only get a half hour regardless of the actual length of stay.
This course is designed for those in College Level Mathematics to receive credit for using the MLC on a regular basis. You must attend a total of approximately 20 hours per quarter for each credit
you sign up for. Choose this course if you are concurrently enrolled in Math 107, Math 111, Math 141, Math 142, Math 146, Math 148, Math 151, Math 152, Math 163, Math 171, Math 172, Math 173, Math
207, Math 208, Math 209, Math 211, or Math 264.
You need to check in and out at the front desk every time you use the MLC. If you forget to check out, you will only get a half hour regardless of the actual length of stay.
Syllabus for Math 90 and Math 100 beginning Winter 2013
Advogato: Blog for vicious
“Maxima is calculating”
So Friday afternoon I wanted to test for the existence of a certain mapping that takes one surface to another surface. Everything is algebraic, so one might assume that if a mapping exists it might actually be
polynomial and since everything is of low degree, the mapping might be as well. So I just set up brute force equations and tried an arbitrary degree 2 mapping. After a second or two, maxima returned
no solutions to the resulting system. OK, so how about plugging in degree 3. It turns out I don't need to test the linear terms, and with 3 variables that leaves 16 unknown coefficients per component, so I get an
algebraic system in 48 variables. Sounds bad, but a lot of the equations become something of the form "x = 0". So I looked at a subset of the system. Already the generating of the equations took a few
seconds. So I thought, this will take a few minutes. So I started "algsys" on the equations. Well, that was Wednesday afternoon. It is Sunday and the thing is still running. Unfortunately it just
says “Maxima is calculating” in the wxMaxima window, so one has no clue if it will take another day or so, another year or so, or if the sun will implode first. I sort of have the feeling it is doing
something stupid. Once I get more time for math on Monday, I'll probably try to simplify the equations by hand first. I could also try for the solution (or lack thereof) numerically. In the meantime
I’ll let it run. This is on my laptop which is surely not meant as a computation machine. It’s only running on one core so it’s not heating up too badly. When I was running some computations for days
in the summer on all four cores you could almost cook eggs on the keyboard.
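The count of unknowns in the post checks out; a sketch (this assumes a degree-3 polynomial map with 3 components in 3 variables, dropping the constant and linear terms from each component — my reading of the post, the exact convention may differ):

```python
from math import comb

n_vars, degree = 3, 3
# Monomials of total degree <= 3 in 3 variables: C(3+3, 3) = 20.
total = comb(n_vars + degree, degree)
# Dropping the constant term and the 3 linear terms leaves 16 coefficients
# per component of the map.
per_component = total - 1 - n_vars
print(per_component)                 # 16
print(per_component * n_vars)        # 48 unknowns in the full system
```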
On a related front, I decided that my work computer is sitting too idly so I started the degree 19 calculation that we never did with Danny on our paper [1]. In 2008 we thought it would take at least
half a year. Presumably the computers have gotten a tad quicker in the meantime (and since I’m running it on 4 cores), so perhaps the result will come in sooner. Still the progress seems slow from
the output so far. It is a bit difficult to judge, I’ll try to estimate time left more precisely later on, but just as first guess from looking at the output I don’t think this will be done before
There is something magical about pressing ENTER to start the computation you know will take months to complete. It is one of the few places where you really use the fact that you have a fast
computer. Most computer power is totally wasted. So for example in somewhat similar time frame Firefox managed to get 70 minutes of CPU time (maxima is up to 5208 now). Now that’s with only very
occasional short browsing over the last few days. It seems mostly it's the tabs being open that eat up time, run the CPU and heat our house. Come to think of it, my office will be quite warm I bet
once I get there on Monday. I don't think the heating runs on the room thermostat, as the switch on that thing is in the "off" position and it still heats the room. So with the added heating from 4 cores
running at top speed and it being a small room, it should get toasty.
[1] Jiří Lebl and Daniel Lichtblau, Uniqueness of certain polynomials constant on a line, Linear Algebra and its Applications, 433 (2010), no. 4, 824–837, arXiv:0808.0284.
Estimation and testing for the effect of a genetic pathway on a disease outcome using logistic kernel machine regression via logistic mixed models
BMC Bioinformatics. 2008; 9: 292.
Growing interest in biological pathways has called for new statistical methods for modeling and testing a genetic pathway effect on a health outcome. The fact that genes within a pathway tend to
interact with each other and relate to the outcome in a complicated way makes nonparametric methods more desirable. The kernel machine method provides a convenient, powerful and unified method for
multi-dimensional parametric and nonparametric modeling of the pathway effect.
In this paper we propose a logistic kernel machine regression model for binary outcomes. This model relates the disease risk to covariates parametrically, and to genes within a genetic pathway
parametrically or nonparametrically using kernel machines. The nonparametric genetic pathway effect allows for possible interactions among the genes within the same pathway and a complicated
relationship of the genetic pathway and the outcome. We show that kernel machine estimation of the model components can be formulated using a logistic mixed model. Estimation hence can proceed within
a mixed model framework using standard statistical software. A score test based on a Gaussian process approximation is developed to test for the genetic pathway effect. The methods are illustrated
using a prostate cancer data set and evaluated using simulations. An extension to continuous and discrete outcomes using generalized kernel machine models and its connection with generalized linear
mixed models is discussed.
Logistic kernel machine regression and its extension, generalized kernel machine regression, provide a novel and flexible statistical tool for modeling pathway effects on discrete and continuous outcomes. Their close connection to mixed models and their attractive performance make them promising for wide application in bioinformatics and other biomedical areas.
The rapid progress in gene expression array technology in the past decade has greatly facilitated our understanding of the genetic aspect of various diseases. Knowledge-based approaches, such as gene
set or pathway analysis, have become increasingly popular. In such gene sets/pathways, groups of genes act in concert to accomplish tasks related to a cellular process and the resulting genetic
pathway effects may manifest themselves through phenotypic changes, such as occurrence of disease. Thus it is potentially more meaningful to study the overall effect of a group of genes rather than a
single gene, as single-gene analysis may miss important effects on pathways and may be difficult to reproduce from study to study [1]. Researchers have made significant progress in identifying metabolic
or signaling pathways based on expression array data [2,3]. Meanwhile, new tools for identification of pathways, such as GenMAPP [4], Pathway Processor [5], MAPPFinder [6], have made pathway data
more widely available. However, it is a challenging task to model the pathway data and test for a potentially complex pathway effect on a disease outcome.
One way to model pathway data is through the linear model approach, where the pathway effect is represented by a linear combination of individual gene effects. This approach has several limitations.
Activities of genes within a pathway are often complicated, thus a linear model is often insufficient to capture the relationship between these genes. Furthermore, genes within a pathway tend to
interact with each other. Such interactions are not taken into account by the linear model approach.
In this paper we propose a nonparametric approach, kernel machine regression, to model a pathway effect. The kernel machine method, with the support vector machine (SVM) as its most popular
example, has emerged in the last decade as a powerful machine learning technique in high-dimensional settings [7,8]. This method provides a flexible way to model linear and nonlinear effects of
variables and gene-gene interactions, unifies the model building procedure in both one- and multi-dimensional settings, and shows attractive performance compared to other nonparametric methods such
as splines.
Liu et al. [9] proposed a kernel machine-based regression model for continuous outcomes. In this paper, we propose a logistic kernel machine regression model for binary outcomes, where covariate
effects are modeled parametrically and the genetic pathway effect is modeled parametrically or nonparametrically using the kernel machine method. A main contribution of this paper is to establish a
connection between logistic kernel machine regression and the logistic mixed model. We show that the kernel machine estimator of the genetic pathway effect can be obtained from the estimator of the
random effects in the corresponding logistic mixed model. This connection provides a convenient vehicle to connect the powerful kernel machine method with the popular mixed model method in the
statistical literature. This mixed model connection also provides an unified framework for statistical inference for model parameters, including the regression coefficients, the nonparametric genetic
pathway function, and the regularization and kernel parameters. Based on the proposed logistic kernel machine regression model, we develop a new test for the nonlinear pathway effect on disease risk.
An appealing feature of the proposed test is that it performs well without the need to correctly specify the functional form of the effects of each gene or their interactions. This feature has a
significant practical implication when analyzing genetic pathway data, as the true relationship between the pathway and the disease outcome is often unknown. We extend the results to generalized
kernel machine regression for a class of continuous and discrete outcomes and discuss its connection with generalized linear mixed models [10].
Recently, Wei and Li [11] proposed a nonparametric pathway-based regression (NPR) to model pathway data. NPR is a pathway-based gradient boosting procedure, where the base learner is usually a
regression or classification tree. It provides a flexible approach in modeling pathways and interactions among genes within a pathway. Michalowski et al. [12] proposed a Bayesian Belief Network
approach for pathway data. Neither method is likelihood-based. Thus parameter estimation and inference cannot be cast within a unified likelihood framework. It is hence difficult to estimate and
quantify the overall pathway effect on disease risk and assess its statistical uncertainty. Secondly, a primary interest in this paper is to test for the statistical significance of the overall
pathway effect on the risk of a disease. Both NPR and Bayesian belief network do not provide such a statistical test for the pathway effect. For example, NPR uses an importance score to rank the
relative importance of each pathway. It lacks formal inferential procedure for assessing the statistical significance of a pathway. Further, when considering a single pathway, the importance score
loses its meaning in assessing the importance of a pathway. Our method, on the other hand, is based on penalized likelihood, and estimation and inference can be conducted in a systematic manner
within the likelihood framework. We also propose a formal statistical test for the significance of a pathway effect on the risk of a disease.
Goeman et al. [13] proposed a linear mixed model to relate the pathway effect with a continuous outcome. They modeled the pathway effect using a linear function with each gene entering into the model
as a regressor. They assumed the regression coefficients of the gene as random from a common distribution with mean 0 and an unknown variance. The pathway effect can then be tested through a variance
component test for random effects. Our approach is different from theirs in the following aspects. First, we model the pathway effect by allowing for a nonparametric model rather than a parametric
one. As we commented earlier, the highly complicated nature of activities of genes within a pathway makes the linear model assumption untenable. Secondly, the kernel function used in kernel machine
regression usually contains unknown tuning parameters. The parameter is present under the alternative hypothesis but disappears under the null hypothesis. This makes tests such as those proposed in [13,14] inapplicable. Our proposed test, on the other hand, works quite well under this scenario. Third, Goeman et al. [14] extended their linear model results to discrete outcomes using basis functions. A key
advantage of the kernel machine approach over this basis approach for modeling multi-gene effects is that one does not need to specify bases explicitly, which is often difficult for high-dimensional
data especially when interactions are modeled.
Analysis of prostate cancer data
In this section, we apply the proposed logistic kernel machine regression model (3) as described in the Methods section to the analysis of a prostate cancer data set. The data came from the Michigan
prostate cancer study [15]. This study involved 81 patients with 22 diagnosed as non-cancerous and 59 diagnosed with local or advanced prostate cancer. Besides the clinical and demographic covariates
such as age, cDNA microarray gene expressions were also available for each patient. The early results of Dhanasekaran et al. [15] indicate that certain functional genetic pathways seemed
dys-regulated in prostate cancer relative to non-cancerous tissues. We are interested in studying how a genetic pathway is related to the prostate cancer risk, controlling for the covariates. We
focus in this analysis on the cell growth pathway, which contains 5 genes. The pathway we describe was annotated by the investigator (A. Chinnaiyan) and is simply used to illustrate the methodology.
Of course, one could take the pathways stored in commercial databases such as Ingenuity Pathway Analysis (IPA) and use the proposed methodology based on those gene sets.
The outcome was the binary prostate cancer status and the covariate was age. Since the functional relationship between the cell growth pathway and the prostate cancer risk is unknown, the kernel
machine method provides a convenient and flexible framework for the evaluation of the pathway effect on the prostate cancer risk. Specifically, we consider the following semiparametric logistic model
logit(P(y = 1)) = β0 + β1 age + h(gene1, ..., gene5),
where h(·) is a nonparametric function of the 5 genes within the cell growth pathway. The details of the estimation procedure are provided in the Methods section. In summary, we fit this model using the
kernel machine method via the logistic mixed model representation and using the Gaussian kernel function in estimating h(·). Under the mixed model representation, we estimated (β0, β1) and h(·)
using penalized quasi-likelihood (PQL), and estimated the smoothing parameter τ and the Gaussian kernel scale parameter ρ simultaneously by treating them as variance components. The results are
presented in Table 1.
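The Gaussian kernel used in the fit has the form K(z, z') = exp{-Σ_l (z_l - z'_l)^2/ρ}. A minimal sketch of its computation (the two 5-gene profiles and the value of ρ are made up for illustration; in the paper ρ is estimated as a variance component):

```python
from math import exp

def gaussian_kernel(z1, z2, rho):
    # K(z1, z2) = exp{-sum_l (z1[l] - z2[l])^2 / rho}; rho > 0 is the scale parameter.
    return exp(-sum((a - b) ** 2 for a, b in zip(z1, z2)) / rho)

# Hypothetical 5-gene expression profiles for two subjects:
za = [0.1, -0.3, 0.5, 0.0, 0.2]
zb = [0.0, -0.1, 0.4, 0.1, 0.2]
print(gaussian_kernel(za, zb, rho=1.0))   # ≈ 0.932; equals 1 when the profiles coincide
```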
Analysis of prostate cancer data.
The test for the cell growth pathway effect on the prostate cancer status H0: h(z) = 0 vs H1: h(z) ≠ 0 was conducted using the proposed score test as described in the Methods section. For the
purpose of comparison, we also conducted the global test proposed by Goeman et al. [13] that assumed a linear pathway effect. Note that our test allows a nonlinear pathway effect and gene-gene
interactions. Table 1 gives the p-values for both tests. The p-value of our test suggests that the cell growth pathway has a highly significant effect on the disease status, while the test from
Goeman et al. [13] indicates only marginal significance of the growth pathway effect.
Simulation Study for the Parameter Estimates
We conducted a simulation study to evaluate the performance of the parameter estimates of the proposed logistic kernel machine regression by using the logistic mixed model formulation. We considered
the following model
logit(P(yi = 1)) = xiβ + h(zi1, ..., zip),
where the true regression coefficient β = 1. We consider p = 5 and set h(z1, ..., z5) = 2{sin(z1) - z2^2 + z1 exp(-z3) - sin(z2) cos(z3) + z4^2 + sin(z4) cos(z1) + z5^2 + z3 z5}. To allow xi and (zi1, ..., zip) to be correlated, xi was generated as xi = sin(zi1) + 2ui, where ui and zij (j = 1, ..., p) follow independent Uniform(-0.5, 0.5). The Gaussian kernel
was used throughout the simulation. All simulations ran 300 times. Settings 1, 2, and 3 correspond to sample size n = 100, 200, and 300, respectively.
The simulation results are shown in Table 2. Due to the multi-dimensional nature of the variables z, it is difficult to visualize the fitted curve ĥ(z). We hence summarized the goodness-of-fit of ĥ(·) in the following way. For each simulated data set, we regressed the true h on the fitted value ĥ, both evaluated at the design points. We then empirically summarized the goodness-of-fit of ĥ(·) by calculating the average intercepts, slopes, and R^2's obtained from these regressions over the 300 simulations. If the kernel machine method fits the nonparametric function well, then we would expect the intercept to be close to 0, the slope close to 1, and R^2 also close to 1.
Simulation results on estimation.
Our results show that even when the sample size is as low as 100, the estimates of the regression coefficient and the nonparametric function have only small bias. When the kernel parameter ρ is estimated, these biases tend to be smaller than when ρ is held fixed. As the sample size increases, the estimates of β and h become closer to the true values, especially when ρ is estimated, while some bias remains when ρ is fixed at values farther from the estimated one. Table 3 compares the estimated standard errors of β̂ with the empirical standard errors. Our results show that the two agree well with each other when ρ is estimated.
Simulation results on standard errors.
Simulation Study of the Score Test for the Pathway Effect
We next conducted a simulation study to evaluate the performance of the proposed variance component score test for the pathway effect H_0: h(·) = 0 vs H_1: h(·) ≠ 0. In order to compare the performance of our test with the linearity-based global test proposed by Goeman et al. [13], both tests were applied to each simulated data set. Nonlinear and linear functions h(z) were both considered. For the nonlinear pathway effect, the true model is logit(y) = x + a·h(z), where h(z) = 2(z_1 − z_2)² + z_2 z_3 + 3 sin(2z_3) z_4 + z_5² + 2 cos(z_4) z_5. For the linear pathway effect, the true model is logit(y) = x + a·h(z), where h(z) = 2z_1 + 3z_2 + z_3 + 2z_4 + z_5. All z's were generated from the standard normal distribution, and a = 0, 0.2, 0.4, 0.6, 0.8. To allow x and (z_1, ..., z_p) to be correlated, x was generated as x = z_1 + e/2 with e being independent of z_1 and following N(0, 1). We studied the size of the test by generating data under a = 0, and studied the power by increasing a. The sample size was 100. For the size calculations the number of simulations was 2000, whereas for the power calculations the number of runs was 1000. Based on the discussion in Section "Test for the Genetic Pathway Effect", the bound of ρ is set by the interval [min_{i≠j} Σ_{l=1}^5 (z_il − z_jl)²/5, 10 max_{i≠j} Σ_{l=1}^5 (z_il − z_jl)²], and the interval is divided into 500 equally spaced grid points. All simulations were conducted using R 2.5.0, and the package "globaltest" v4.6.0 was used for the test proposed by Goeman et al. [13] as a comparison.
Table 4 reports the empirical size (a = 0) and power (a > 0) of the variance component score test for the pathway effect. When the true function h(z) is non-linear in z, the results show that the size of our test was very close to the nominal value 0.05, while the size of the global test of Goeman et al. [13] was inflated. The results also show that our test had much higher power. This is not surprising, since the test of Goeman et al. [13] is based on a linearity assumption for the pathway effect; when the true underlying model is far from linear, the assumption breaks down and the test quickly loses power. The results also show that the proposed test works well for moderate sample sizes. When the pathway effect is linear, the sizes of both tests were very close to the nominal value 0.05 and their powers were also very close. This demonstrates that our test is as powerful as the global test when the true underlying h(z) is linear.
Therefore our test could be used as a universal test for testing the overall effect of a set of variables without the need to specify the true functional forms of each variable. This feature is
especially desirable for genetic pathway data, because the relationship between genes and clinical outcome is often unknown.
Simulation results on score test.
Conclusions and Discussion
In this paper, we developed a logistic kernel machine regression model for binary outcomes, where the covariate effects are modeled parametrically and the genetic pathway effect is modeled
nonparametrically using the kernel machine method. This method provides an attractive way to model the pathway effect, without the need to make strong parametric assumptions on individual gene
effects or their interactions. Our model also allows for parametric pathway effects if a parametric kernel, such as the first-degree polynomial kernel, is used.
A key result of this paper is that we have established a close connection between generalized kernel machine regression and generalized linear mixed models, and have shown that the kernel machine estimators of the regression coefficients and the nonparametric multi-dimensional pathway effect can be easily obtained from the corresponding generalized linear mixed model using PQL. The mixed model
connection provides a unified framework for estimation and inference and can be easily implemented in existing software, such as SAS PROC GLIMMIX or R GLMMPQL. The mixed model connection also makes
it possible to test for the overall pathway effect through the proposed variance component test. A key advantage of the proposed score test for the pathway effect is that it does not require an
explicit functional specification of individual gene effects and gene-gene interactions. This feature is of practical significance, as the pathway effect is often complex. Our simulation study shows that the proposed test performs well for moderate sample sizes. It has power similar to that of the linearity-based pathway test of Goeman et al. [13] when the true effect is linear, but much higher power when the true effect is nonlinear.
We have considered in this paper a single pathway. One could generalize the proposed semiparametric model to incorporate multiple pathways by fitting an additive model:
logit(P(y = 1)) = x^T β + h_1(z_1) + ... + h_m(z_m),
where z_j (j = 1, ..., m) denotes a p_j × 1 vector of genes in the jth pathway and h_j(·) denotes the nonparametric function associated with the jth genetic pathway.
Machine learning is a powerful tool for advancing bioinformatics research. Our effort helps build a bridge between kernel machine methods and traditional statistical models. This connection provides a new and convenient tool for the bioinformatics community and opens the door to future research.
The Logistic Kernel Machine Model
Throughout the paper we assume that gene expression data have been properly normalized. Suppose the data consist of n samples. For subject i (i = 1, ..., n), y_i is a binary disease outcome taking the value 0 (non-disease) or 1 (disease), x_i is a q × 1 vector of covariates, and z_i is a p × 1 vector of gene expression measurements in a pathway/gene set. We assume that an intercept is included in x_i. The binary outcome y_i depends on x_i and z_i through the following semiparametric logistic regression model:

logit(μ_i) = x_i^T β + h(z_i),     (3)

where μ_i = P(y_i = 1 | x_i, z_i), β is a q × 1 vector of regression coefficients, and h(z_i) is an unknown centered smooth function.
In model (3), covariate effects are modeled parametrically, while the multi-dimensional genetic pathway effect can be modeled parametrically or nonparametrically. A nonparametric specification for h(·) reflects our limited knowledge of genetic functional forms. Note that h(·) = 0 means that genes in the pathway have no association with the disease risk. If h(z) = γ_1 z_1 + ... + γ_p z_p, model (3) becomes the linear model considered by Goeman et al. [13].
In nonparametric modeling, such as smoothing splines, the unknown function is usually assumed to lie in a certain function space. For the kernel machine method, this function space, denoted by H_K, is generated by a given positive definite kernel function K(·, ·). The mathematical properties of H_K imply that any unknown function h(z) in H_K can be written as a linear combination of the given kernel function K(·, ·) evaluated at each sample point. Two popular kernel functions are the dth-degree polynomial kernel K(z_1, z_2) = (z_1^T z_2 + ρ)^d and the Gaussian kernel K(z_1, z_2) = exp{−||z_1 − z_2||²/ρ}, where ||z_1 − z_2||² = Σ_{k=1}^p (z_1k − z_2k)² and ρ is an unknown parameter. The first- and second-degree polynomial kernels (d = 1, 2) correspond to assuming h(·) to be linear and quadratic in the z's, respectively. The choice of kernel function determines which function space one uses to approximate h(z). The unknown parameter of a kernel function plays a critical role in function approximation, and optimally estimating it from data is a challenging problem. In the machine learning literature, this parameter is usually pre-fixed at some value chosen by ad hoc methods. In this paper, we show that it can be estimated optimally from data within a mixed model framework.
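Both kernels are straightforward to compute. The sketch below is ours, with the Gaussian kernel scaled by ρ to match the expansion used in Appendix A.1.

```python
import numpy as np

def gaussian_kernel(Z, rho):
    # K[i, j] = exp(-sum_l (z_il - z_jl)^2 / rho), the Gaussian kernel
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq / rho)

def poly_kernel(Z, rho, d=1):
    # d-th degree polynomial kernel (z_i' z_j + rho)^d; d = 1 implies a linear h
    return (Z @ Z.T + rho) ** d

Z = np.random.default_rng(2).normal(size=(6, 5))
K = gaussian_kernel(Z, rho=3.0)
```

The Gaussian kernel matrix is symmetric with unit diagonal and is positive definite, which is what makes it a valid generator of the function space H_K.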
The Estimation Procedure
Assuming h(·) ∈ H_K, the function space generated by a kernel function K(·, ·), we can estimate β and h(·) by maximizing the penalized log-likelihood function

J(β, h) = Σ_{i=1}^n {y_i log μ_i + (1 − y_i) log(1 − μ_i)} − (λ/2) ||h||²_{H_K},     (4)

where λ is a regularization parameter that controls the tradeoff between goodness of fit and model complexity. When λ = 0 the fit is a saturated model, and when λ = ∞ the model reduces to the simple logistic model logit(μ_i) = x_i^T β. Note that there are two tuning parameters in the above likelihood function: the regularization parameter λ and the kernel parameter ρ. Intuitively, λ controls the magnitude of the unknown function while ρ mainly governs its smoothness.
By the representer theorem [16], the general solution for the nonparametric function h(·) in (4) can be expressed as

h(z_i) = Σ_{j=1}^n α_j K(z_i, z_j) = k_i^T α,     (5)

where k_i = {K(z_i, z_1), ..., K(z_i, z_n)}^T and α = (α_1, ..., α_n)^T is an n × 1 vector of unknown parameters.

Substituting (5) into (4) we have

J(β, α) = Σ_{i=1}^n {y_i log μ_i + (1 − y_i) log(1 − μ_i)} − (λ/2) α^T K α,     (6)

where K = K(ρ) is an n × n matrix whose (i, i′)th element is K(z_i, z_{i′}) and often depends on an unknown parameter ρ.
Since J(β, α) in (6) is a nonlinear function of (β, α), one can use the Fisher scoring or Newton-Raphson iterative algorithm to maximize (6) with respect to β and α. Let (k) denote the kth iteration step; then it can be shown (for details see Appendix A.3) that the (k + 1)th update for β and α solves the following normal equation:

[ X^T D^(k) X      X^T D^(k) K     ] [ β^(k+1) ]   [ X^T D^(k) ỹ^(k) ]
[ D^(k) X    τ^{−1} I + D^(k) K ] [ α^(k+1) ] = [ D^(k) ỹ^(k)     ],     (7)

where ỹ^(k) = Xβ^(k) + Kα^(k) + (D^(k))^{−1}(y − μ^(k)), τ = 1/λ, h^(k) = Kα^(k), and D^(k) = Diag{μ_i^(k)(1 − μ_i^(k))}. The estimators β̂ and ĥ at convergence are the kernel machine estimators that maximize (6).
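A direct implementation of this iteration, with τ and the kernel matrix held fixed, might look like the following sketch. The function name is ours, and in practice the paper fits the model through PQL mixed model software rather than by hand.

```python
import numpy as np

def fit_kernel_machine(y, X, K, tau, n_iter=25):
    """Iteratively solve the normal equation for (beta, alpha) with
    fixed tau = 1/lambda; K is the n x n kernel matrix."""
    n, q = X.shape
    beta, alpha = np.zeros(q), np.zeros(n)
    for _ in range(n_iter):
        eta = X @ beta + K @ alpha
        mu = 1.0 / (1.0 + np.exp(-eta))
        w = np.clip(mu * (1.0 - mu), 1e-6, None)   # diagonal of D
        y_work = eta + (y - mu) / w                 # working response
        DX, DK = X * w[:, None], K * w[:, None]
        lhs = np.block([[X.T @ DX, X.T @ DK],
                        [DX, np.eye(n) / tau + DK]])
        rhs = np.concatenate([X.T @ (w * y_work), w * y_work])
        sol = np.linalg.solve(lhs, rhs)
        beta, alpha = sol[:q], sol[q:]
    return beta, alpha

# Small synthetic example (illustrative only)
rng = np.random.default_rng(0)
n = 60
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, 3))
sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 3.0)
y = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 1] + np.sin(Z[:, 0])))))
beta, alpha = fit_kernel_machine(y, X, K, tau=0.5)
```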
The Connection of Logistic Kernel Machine Regression to Logistic Mixed Models
Generalized linear mixed models (GLMMs) have been used to analyze correlated categorical data and have gained much popularity in the statistical literature [10]. Logistic mixed models are a special
case of GLMMs. We show in this section that the kernel machine estimator in the semiparametric logistic regression model (3) corresponds to the Penalized Quasi-Likelihood (PQL) [10] estimator from a
logistic mixed model, and the regularization parameter τ = 1/λ and kernel parameter ρ can be treated as variance components and estimated simultaneously from the corresponding logistic mixed model.
Specifically, consider the following logistic mixed model:

logit(μ_i) = x_i^T β + h_i,     (8)

where β is a q × 1 vector of fixed effects, h = (h_1, ..., h_n)^T is an n × 1 vector of subject-specific random effects following h ~ N{0, τK(ρ)}, and the covariance matrix K(ρ) is the n × n kernel matrix defined in the previous section.
As K is not diagonal or block-diagonal, the random effects h_i across subjects are correlated. The ith mean response μ_i depends on the other random effects h_{i′} (i′ ≠ i) through the correlations of h_i with them. To estimate the unknown parameters in the logistic mixed model (8), we estimate β and h by maximizing the PQL [10], which can be viewed as a joint log-likelihood of (β, h):

Σ_{i=1}^n {y_i log μ_i + (1 − y_i) log(1 − μ_i)} − (1/(2τ)) h^T K^{−1} h.     (9)

Setting τ = 1/λ and h = Kα, one can easily see that equations (6) and (9) are identical. It follows that the logistic kernel machine estimators β̂ and ĥ can be obtained by fitting the logistic mixed model (8) using PQL. In fact, examination of the kernel machine normal equations (7) shows that they are identical to the normal equations obtained from the PQL (9) [10], where ỹ in (7) is in fact the PQL working vector and D is the PQL working weight matrix.
Note that the estimators of β and h depend on the unknown regularization parameter τ and the kernel parameter ρ. Within the PQL framework, we can estimate these parameters δ = (τ, ρ) by maximizing the approximate REML log-likelihood

ℓ_R(δ) = −(1/2) log|V| − (1/2) log|X^T V^{−1} X| − (1/2)(ỹ − Xβ̂)^T V^{−1}(ỹ − Xβ̂),     (10)

where V = D^{−1} + τK and ỹ is the working vector defined above. The estimator δ̂ of δ can be obtained by setting the first derivative of (10) with respect to δ equal to zero. The estimation procedure for β, h, and δ = (τ, ρ) can be summarized as follows: we fit the logistic kernel machine model by iteratively fitting the working linear mixed model

ỹ = Xβ + h + ε,

estimating (β, h) using BLUPs and δ using REML, until convergence. Here ỹ is the working vector defined below equation (7), h is a random effect vector following N{0, τK(ρ)}, and ε ~ N(0, D^{−1}). The covariance of β̂ is estimated by (X^T V^{−1} X)^{−1}, and the covariance of ĥ is estimated by τ̂K − τ̂KP(τ̂K), where P = V^{−1} − V^{−1}X(X^T V^{−1}X)^{−1}X^T V^{−1} and V = V(δ̂). The covariance of δ̂ can be obtained as the inverse of the expected information matrix computed from the second derivative of (10) with respect to δ. The square roots of the diagonal elements of the estimated covariance matrices give the standard errors of β̂, ĥ, and δ̂. The above discussion shows that the logistic kernel machine regression can be fit easily with existing PQL-based mixed model software, such as SAS PROC GLIMMIX and R glmmPQL.
Test for the Genetic Pathway Effect
It is of significant practical interest to test the overall genetic pathway effect H_0: h(z) = 0. Assuming h(z) ∈ H_K, one can easily see from the logistic mixed model representation (8) that testing H_0: h(z) = 0 vs H_1: h(z) ≠ 0 is equivalent to testing the variance component τ: H_0: τ = 0 vs H_1: τ > 0. Note that the null hypothesis places τ on the boundary of the parameter space. Since the kernel matrix K is not block diagonal, unlike the standard case considered by Self and Liang [17], the likelihood ratio statistic for H_0: τ = 0 does not follow a mixture of χ²_0 and χ²_1 distributions. We instead consider a score test in this paper.
When conducting statistical tests for pathways, two types of tests can be formulated: the competitive test and the self-contained test [18]. The competitive test compares a gene set of interest to all the other genes on a gene chip. An example of the competitive test is gene set enrichment analysis (GSEA) [1], where an enrichment score of a gene set is defined and a permutation test is used to assess the significance of the gene set based on that score. The self-contained test compares the gene set to an internal standard that does not involve any genes outside the gene set considered. In other words, the self-contained test examines the null hypothesis that a pathway has no effect on the outcome versus the alternative that it has an effect. The variance component test of [13] for the linear pathway effect is a self-contained test. Goeman and Bühlmann [18] pointed out that the self-contained test has higher power than a competitive test, that its statistical formulation is consistent for both single-gene and gene-set tests, and that the sampling properties of the competitive test can be difficult to interpret.
Our pathway effect hypothesis H_0: h(z) = 0 vs H_1: h(z) ≠ 0 is a self-contained hypothesis. We propose in this paper a self-contained test for the pathway effect by developing a kernel machine variance component score test for H_0: τ = 0 vs H_1: τ > 0. The proposed test allows for both linear and nonlinear pathway effects and includes the tests of Goeman et al. [13,14] as special cases. A key advantage of our kernel-based test is that we do not need to explicitly specify the basis functions for h(·), which is often difficult when modeling the joint effects of multiple genes; instead, we let the data estimate the best curvature of h(·).
Zhang and Lin [19] proposed a score test for H_0: τ = 0 to compare a polynomial model with a smoothing spline. Goeman et al. [14] also proposed a global test against a high-dimensional alternative under an empirical Bayesian framework. The variance-covariance matrices used in these tests do not involve any unknown parameters. However, the kernel function K(·, ·) in a kernel machine model usually depends on an unknown parameter ρ. One can easily see from the mixed model representation (8) that under H_0: τ = 0 the kernel matrix K disappears. This makes the parameter ρ inestimable under the null hypothesis and therefore renders the above tests inapplicable.
Davies [20,21] studied the problem of a parameter disappearing under H_0 and proposed a score test by treating the score statistic as a Gaussian process indexed by the nuisance parameter and obtaining an upper bound to approximate the p-value of the test. We adopt this approach for our proposed score test.
Using the derivative of (10) with respect to τ, we propose the following score test statistic for H_0: τ = 0:

S(ρ) = {Q_τ(β̂_0, ρ) − μ_Q}/σ_Q,  with  Q_τ(β̂_0, ρ) = (y − μ̂_0)^T K(ρ)(y − μ̂_0),     (11)

where β̂_0 is the MLE of β under H_0: τ = 0, μ̂_0 = logit^{−1}(Xβ̂_0), μ_Q = tr{P_0 K(ρ)}, σ²_Q = 2 tr{P_0 K(ρ) P_0 K(ρ)}, and P_0 = D_0 − D_0 X(X^T D_0 X)^{−1} X^T D_0 with D_0 = diag{μ̂_i0(1 − μ̂_i0)}. Note that under H_0: τ = 0, model (3) reduces to the simple logistic model logit(μ_i) = x_i^T β; hence β̂_0 is the MLE of β under this null logistic model.
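Under these definitions, the standardized statistic for a fixed ρ can be computed directly. The sketch below fits the null logistic model by plain Newton-Raphson and then forms the quadratic form, μ_Q, and σ_Q; the standardization S = (Q − μ_Q)/σ_Q is our reading of the omitted display equation.

```python
import numpy as np

def score_stat(y, X, K, n_iter=20):
    """S = (Q - mu_Q) / sigma_Q for a fixed kernel matrix K = K(rho),
    with Q = (y - mu0)' K (y - mu0), using the definitions in the text."""
    n, q = X.shape
    beta = np.zeros(q)
    for _ in range(n_iter):                      # null logistic fit (MLE under H0)
        mu = 1 / (1 + np.exp(-(X @ beta)))
        w = mu * (1 - mu)
        beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (y - mu))
    mu0 = 1 / (1 + np.exp(-(X @ beta)))
    w = mu0 * (1 - mu0)
    WX = X * w[:, None]
    # P0 = D0 - D0 X (X' D0 X)^{-1} X' D0
    P0 = np.diag(w) - WX @ np.linalg.solve(X.T @ WX, WX.T)
    r = y - mu0
    Q = r @ K @ r
    muQ = np.trace(P0 @ K)
    sigQ = np.sqrt(2 * np.trace(P0 @ K @ P0 @ K))
    return (Q - muQ) / sigQ

# Synthetic data generated under H0 (no pathway effect)
rng = np.random.default_rng(4)
n = 80
X = np.column_stack([np.ones(n), rng.normal(size=n)])
Z = rng.normal(size=(n, 5))
sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 1])))
S = score_stat(y, X, np.exp(-sq / 10.0))
```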
If the Gaussian kernel is used, an arbitrary nonlinear pathway effect is implicitly assumed. Our proposed test, which is derived to detect any nonlinear effect, is therefore more powerful than tests based on a parametric assumption when that assumption fails. We show in Appendix A.1 that when ρ becomes large in the Gaussian kernel, our test statistic reduces asymptotically to the one based on a linearity assumption for the genetic effects; hence our test includes the linear-model-based test as a special case. From (11) it is also clear that our test is invariant to the relative scaling of the kernel function K(·, ·).
Under appropriate regularity conditions similar to those specified in [22], S(ρ) under the null hypothesis can be regarded as an approximate Gaussian process indexed by ρ. Using this formulation, we can apply Davies' results [20,21] to obtain an upper bound for the p-value of the test. Since a large value of Q_τ(β̂_0, ρ) leads to rejection of H_0, the p-value of the test corresponds to an up-crossing probability. Following Davies [21], the p-value is bounded above by

Φ(−M) + W exp(−M²/2)/√(8π),     (12)

where Φ(·) is the standard normal cumulative distribution function, M is the maximum of S(ρ) over the range of ρ, W = |S(ρ_1) − S(L)| + |S(ρ_2) − S(ρ_1)| + ... + |S(U) − S(ρ_m)| is the total variation of S over the grid, L and U are the lower and upper bounds of ρ, and ρ_l (l = 1, ..., m) are the m grid points between L and U. Davies [20] points out that this bound is sharp. For the Gaussian kernel, we suggest setting the bounds of ρ as L = 0.1 min_{i≠j} Σ_{l=1}^p (z_il − z_jl)² and U = 100 max_{i≠j} Σ_{l=1}^p (z_il − z_jl)². For justification, see Appendix A.2.
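Given S(ρ) evaluated on the grid, the bound is a one-liner. The formula below, Φ(−M) + W·exp(−M²/2)/√(8π), is our reading of the omitted display, following Davies [21].

```python
import numpy as np
from math import erf, exp, pi, sqrt

def davies_pvalue_bound(S):
    """Upper bound on the p-value from the standardized score process S(rho)
    evaluated on a grid: Phi(-M) + W * exp(-M^2/2) / sqrt(8*pi),
    where M = max S and W is the total variation of S along the grid."""
    S = np.asarray(S, dtype=float)
    M = S.max()
    W = np.abs(np.diff(S)).sum()
    Phi = lambda t: 0.5 * (1 + erf(t / sqrt(2)))
    return Phi(-M) + W * exp(-M ** 2 / 2) / sqrt(8 * pi)

# Toy example: a score process with maximum M = 2 and total variation W = 2.9
p_bound = davies_pvalue_bound([0.5, 2.0, 1.2, 1.8])
```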
Extension to the Generalized Kernel Machine Model
For simplicity, we have focused in this paper on logistic regression for binary outcomes. The proposed semiparametric model (3) can be easily extended to other types of continuous and discrete outcomes, such as normal, count, and skewed data, whose distributions are in the exponential family [23]. In this section, we briefly discuss how to generalize our estimation and testing procedures for binary outcomes to other data types within the generalized kernel machine framework, and discuss model fitting using generalized linear mixed models.
Suppose the data consist of n independent subjects. For subject i (i = 1, ..., n), y_i is a response variable, x_i is a q × 1 vector of covariates, and z_i is a p × 1 vector of gene expressions within a pathway. Suppose y_i follows a distribution in the exponential family with density [23]

p(y_i) = exp{ m_i [y_i θ_i − a(θ_i)]/φ + c(y_i, φ) },     (13)

where θ_i is the canonical parameter, a(·) and c(·) are known functions, φ is a dispersion parameter, and m_i is a known weight. The mean of y_i satisfies μ_i = E(y_i) = a′(θ_i), and Var(y_i) = φ a″(θ_i)/m_i. The generalized kernel machine model extends the generalized linear model [23] by allowing the pathway effect to be modeled nonparametrically using the kernel machine as

g(μ_i) = x_i^T β + h(z_i),     (14)

where g(·) is a known monotone link function, and h(·) is an unknown centered smooth function lying in the function space H_K generated by a positive definite kernel function K(·, ·). For binary data, setting g(μ) = logit(μ) = log{μ/(1 − μ)} gives the logistic kernel machine model (3); for count data, g(μ) = log(μ) gives the Poisson kernel machine model; for Gaussian data, g(μ) = μ gives the linear kernel machine model [9]. The regression coefficients β and the nonparametric function h(·) in (14) can be obtained by maximizing the penalized log-likelihood function
where ℓ_n(p) is the log-likelihood, p is the density function given in (13), and λ is a tuning parameter. Using the kernel expression of h(·) in (5), the generalized kernel machine model (14) can be written as

g(μ_i) = x_i^T β + k_i^T α,

and the penalized log-likelihood can be written as

J(β, α) = ℓ_n(p) − (λ/2) α^T K α,

where K is an n × n matrix whose (i, j)th element is K(z_i, z_j).
One can use Fisher scoring iteration to solve for β and α. The procedure is virtually the same as that described in Section "The Estimation Procedure". The normal equation takes the same form as (7), except that now μ_i is specified under (14) and D = diag{Var(y_i)} under (13). Calculations similar to those in Section "The Connection of Logistic Kernel Machine Regression to Logistic Mixed Models" show that model (14) can be fit using the generalized linear mixed model [10] via PQL:

g(μ_i) = x_i^T β + h_i,

where τ = 1/λ and h = (h_1, ..., h_n)^T is an n × 1 random vector with distribution N{0, τK(ρ)}. The same PQL statistical software, such as SAS PROC GLIMMIX and R glmmPQL, can be used to fit this model and obtain the kernel machine estimators of β and h(·).

The score test (11) also has a straightforward extension: the elements of the matrix D in (11) are simply replaced by the appropriate variance function Var(y_i) under the assumed parametric distribution of y_i.
A.1 Proof of the relationship between the proposed score test and that of Goeman et al. [13] under the linearity assumption

We show in this section that when the scale parameter ρ is large, the proposed nonparametric variance component test for the pathway effect using the Gaussian kernel reduces to the linearity-based global test of Goeman et al. [13].
Suppose K(·, ·) is the Gaussian kernel. It can be shown that the score statistic for testing H_0: τ = 0 satisfies

Q_τ(β̂_0, ρ) = (y − μ̂_0)^T K(ρ)(y − μ̂_0),

where μ̂_0 is the MLE of μ under H_0. The test statistic of Goeman et al. [13] takes the form (up to standardization)

(y − μ̂_0)^T R (y − μ̂_0),

where R = ZZ^T. We now show that when ρ is large relative to max_{i≠j} Σ_{l=1}^p (z_il − z_jl)²,

0.5 ρ Q_τ(β̂_0, ρ) ≈ (y − μ̂_0)^T R (y − μ̂_0).     (19)

Simple Taylor expansion shows that exp{−Σ_{l=1}^p (z_il − z_jl)²/ρ} ≈ 1 − Σ_{l=1}^p (z_il − z_jl)²/ρ when max_{i≠j} Σ_{l=1}^p (z_il − z_jl)²/ρ is small, i.e., when ρ is large relative to max_{i≠j} Σ_{l=1}^p (z_il − z_jl)². Hence

(y − μ̂_0)^T K(ρ)(y − μ̂_0) ≈ Σ_{i,j} (y_i − μ̂_i){1 − Σ_{l=1}^p (z_il − z_jl)²/ρ}(y_j − μ̂_j).

Since Σ_{j=1}^n (y_j − μ̂_j) = 0 under the PQL, we have Σ_{j≠i} (y_j − μ̂_j) = −(y_i − μ̂_i), so the leading term vanishes and the remainder equals (2/ρ)(y − μ̂_0)^T ZZ^T (y − μ̂_0). This proves the approximate relation (19).
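This limiting relation is easy to verify numerically: with residuals that sum to zero, 0.5·ρ·rᵀK(ρ)r approaches the linear-test quadratic form rᵀZZᵀr as ρ grows. The residuals below are simulated stand-ins for a null logistic fit with an intercept.

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 40, 5
Z = rng.normal(size=(n, p))
r = rng.normal(size=n)
r -= r.mean()                       # residuals sum to zero, as under a null fit with intercept

sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
linear_stat = r @ (Z @ Z.T) @ r     # quadratic form of the linearity-based test

rel_err = []
for rho in (1e2, 1e4, 1e6):
    K = np.exp(-sq / rho)
    approx = 0.5 * rho * (r @ K @ r)
    rel_err.append(abs(approx - linear_stat) / linear_stat)
# rel_err should shrink toward zero as rho grows
```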
A.2 Calculations of the lower and upper bounds of ρ
Although in theory ρ can take any positive value up to infinity, for computational purposes we require ρ to be bounded. In fact, the value of the proposed test statistic (11) depends only on a finite range of ρ values. We describe why this is the case and how to find this range. For a given data set, the proof in Appendix A.1 shows that when ρ is sufficiently large, the quantity 0.5ρ Q_τ(β̂_0, ρ) converges to S_0 = (ỹ − μ̂_0)^T R(ỹ − μ̂_0), which is free of ρ.
These arguments suggest that for numerical evaluation it is not necessary to consider all ρ values up to infinity; a moderately large value suffices. The question then becomes how to choose appropriate upper and lower bounds for ρ. The proof in Appendix A.1 requires max_{i≠j} Σ_{l=1}^p (z_il − z_jl)²/ρ to be close to 0. Let C_1 be a large positive number such that 1/C_1 ≈ 0. If we take the upper bound of ρ to be C_1 max_{i≠j} Σ_{l=1}^p (z_il − z_jl)², then max_{i≠j} Σ_{l=1}^p (z_il − z_jl)²/ρ will be close to 0. In practice we suggest taking C_1 = 100, which gives a good approximation. A similar idea yields a lower bound: when min_{i≠j} Σ_{l=1}^p (z_il − z_jl)²/ρ → ∞, the off-diagonal elements of K(ρ) approach 0 and the kernel matrix reduces to an identity matrix. Hence, if we pick a small enough number C_2, we can take the lower bound of ρ to be C_2 min_{i≠j} Σ_{l=1}^p (z_il − z_jl)². In practice we suggest taking C_2 = 0.1, which yields a good approximation.
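The recipe above amounts to two pairwise-distance extremes and a grid between them; a sketch (function name ours):

```python
import numpy as np

def rho_bounds(Z, c1=100.0, c2=0.1, m=500):
    """Bounds L = c2 * min_{i != j} ||z_i - z_j||^2 and
    U = c1 * max_{i != j} ||z_i - z_j||^2, plus an m-point grid over [L, U]."""
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(axis=-1)
    d = sq[np.triu_indices_from(sq, k=1)]      # distinct pairwise squared distances
    L, U = c2 * d.min(), c1 * d.max()
    return L, U, np.linspace(L, U, m)

L, U, grid = rho_bounds(np.random.default_rng(5).normal(size=(30, 5)))
```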
A.3 Derivation of normal equation (7)

Taking the partial derivative of (6) with respect to β and writing it in matrix notation, we obtain X^T(y − μ); similarly, for α we obtain K(y − μ) − λKα. The gradient vector is thus

q = [ X^T(y − μ) ; K(y − μ) − λKα ].     (20)

Taking the derivative of q with respect to β and α, we obtain the Hessian matrix

H = −[ X^T D X,  X^T D K ; K D X,  K D K + λK ],     (21)

where D = Diag{μ_i(1 − μ_i)}. The Newton-Raphson iteration states that the parameter value at the (k + 1)th step is updated by

δ^(k+1) = δ^(k) − (H^(k))^{−1} q^(k),     (22)

where δ = (β^T, α^T)^T. Substituting (20) and (21) into (22), we arrive at normal equation (7).
Authors' contributions
DL performed statistical analysis. All authors participated in development of the methods and preparation of the manuscript. All authors read and approved the final manuscript.
Liu and Lin's research was supported by a grant from the National Cancer Institute (R37 CA-76404). Ghosh's research was supported by a grant from the National Institutes of Health (R01 GM-72007). We
thank A. Chinnaiyan for providing the prostate cancer data. We also thank the two anonymous referees for their comments which helped improve the manuscript.
• Subramanian A, Tamayo P, Mootha V, Mukherjee S, Ebert B, Gillette M, Paulovich A, Pomeroy S, Golub T, Lander E, Mesirov J. Gene set enrichment analysis: A knowledge-based approach for
interpreting genome-wide expression profiles. Proceedings of the National Academy of Sciences. 2005;102:15545–15550. doi: 10.1073/pnas.0506580102. [PMC free article] [PubMed] [Cross Ref]
• Eisenberg D, Graeber TG. Bioinformatic identification of potential autocrine signaling loops in cancers from gene expression profiles. Nature Genetics. 2001;29:295–300. doi: 10.1038/ng718. [PubMed
] [Cross Ref]
• Raponi M, Belly R, Karp J, Lancet J, Atkins D, Wang Y. Microarray analysis reveals genetic pathways modulated by tipifarnib in acute myeloid leukemia. BMC Cancer. 2004;4:56. doi: 10.1186/
1471-2407-4-56. [PMC free article] [PubMed] [Cross Ref]
• Dahlquist KD, Salomonis N, Vranizan K, Lawlor SC, Conklin BR. GenMAPP, a new tool for viewing and analyzing microarray data on biological pathways. Nature Genetics. 2002;31:19–20. doi: 10.1038/
ng0502-19. [PubMed] [Cross Ref]
• Grosu P, Townsend JP, Hartl DL, Cavalieri D. Pathway Processor: A tool for integrating whole-genome expression results into metabolic networks. Genome Research. 2002;12:1121–1126. doi: 10.1101/
gr.226602. [PMC free article] [PubMed] [Cross Ref]
• Doniger SW, Salomonis N, Dahlquist KD, Vranizan K, Lawlor SC, Conklin BR. MAPPFinder: using Gene Ontology and GenMAPP to create a global gene-expression profile from microarray data. Genome
Biology. 2003;4:R7. doi: 10.1186/gb-2003-4-1-r7. [PMC free article] [PubMed] [Cross Ref]
• Vapnik V. Statistical Learning Theory. New York: Wiley; 1998.
• Schölkopf B, Smola A. Learning with Kernels. Cambridge, Massachusetts: MIT press; 2002.
• Liu D, Lin X, Ghosh D. Semiparametric regression of multi-dimensional genetic pathway data: least squares kernel machines and linear mixed models. Biometrics. 2007;63:1079–1088. [PMC free article
] [PubMed]
• Breslow N, Clayton D. Approximate inference in generalized linear mixed models. Journal of the American Statistical Association. 1993;88:9–25. doi: 10.2307/2290687. [Cross Ref]
• Wei Z, Li H. Nonparametric pathway-based regression models for analysis of genomic data. Biostatistics. 2007;8:265–284. doi: 10.1093/biostatistics/kxl007. [PubMed] [Cross Ref]
• Sprague R, Ed . Proceedings of the 39th Annual Hawaii International Conference on System Sciences. Los Alamitos: IEEE; 2006. [CD ROM version]
• Goeman JJ, Geer SA van de, de Kort F, van Houwelingen HC. A global test for groups of genes: testing association with a clinical outcome. Bioinformatics. 2004;20:93–99. doi: 10.1093/
bioinformatics/btg382. [PubMed] [Cross Ref]
• Goeman JJ, Geer SA van de, van Houwelingen HC. Testing against a high dimensional alternative. Journal of the Royal Statistical Society: Series B. 2006;68:477–493. doi: 10.1111/
j.1467-9868.2006.00551.x. [Cross Ref]
• Dhanasekaran S, Barrette T, Ghosh D, Shah R, Varambally S, Kurachi K, Pienta K, Rubin M, Chinnaiyan A. Delineation of prognostic biomarkers in prostate cancer. Nature. 2001;412:822–6. doi:
10.1038/35090585. [PubMed] [Cross Ref]
• Kimeldorf G, Wahba G. Some results on Tchebycheffian spline functions. Journal of Mathematical Analysis and Applications. 1970;33:82–95. doi: 10.1016/0022-247X(71)90184-3. [Cross Ref]
• Self SG, Liang KY. Asymptotic properties of maximum likelihood estimators and likelihood ratio tests under non-standard conditions. Journal of the American Statistical Association. 1987;82
:605–610. doi: 10.2307/2289471. [Cross Ref]
• Goeman JJ, Bühlmann P. Analyzing gene expression data in terms of gene sets: methodological issues. Bioinformatics. 2007;23:980–987. doi: 10.1093/bioinformatics/btm051. [PubMed] [Cross Ref]
• Zhang D, Lin X. Hypothesis testing in semiparametric additive mixed models. Biostatistics. 2002;4:57–74. doi: 10.1093/biostatistics/4.1.57. [PubMed] [Cross Ref]
• Davies R. Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika. 1977;64:247–254. doi: 10.2307/2335690. [PubMed] [Cross Ref]
• Davies R. Hypothesis testing when a nuisance parameter is present only under the alternative. Biometrika. 1987;74:33–43. [PubMed]
• le Cessie S, van Houwelingen J. Goodness of fit tests for generalized linear models based on random effect models. Biometrics. 1995;51:600–614. doi: 10.2307/2532948. [PubMed] [Cross Ref]
• McCullagh P, Nelder J. Generalized Linear Models. New York: Chapman & Hall; 1989.
Articles from BMC Bioinformatics are provided here courtesy of BioMed Central
The Probabilistic Method, 3rd Edition
ISBN: 978-0-470-17020-5
376 pages
August 2008
Praise for the Second Edition:
"Serious researchers in combinatorics or algorithm design will wish to read the book in its entirety...the book may also be enjoyed on a lighter level since the different chapters are largely
independent and so it is possible to pick out gems in one's own area..."
—Formal Aspects of Computing
This Third Edition of The Probabilistic Method reflects the most recent developments in the field while maintaining the standard of excellence that established this book as the leading reference on
probabilistic methods in combinatorics. Maintaining its clear writing style, illustrative examples, and practical exercises, this new edition emphasizes methodology, enabling readers to use
probabilistic techniques for solving problems in such fields as theoretical computer science, mathematics, and statistical physics.
The book begins with a description of tools applied in probabilistic arguments, including basic techniques that use expectation and variance as well as the more recent applications of martingales and
correlation inequalities. Next, the authors examine where probabilistic techniques have been applied successfully, exploring such topics as discrepancy and random graphs, circuit complexity,
computational geometry, and derandomization of randomized algorithms. Sections labeled "The Probabilistic Lens" offer additional insights into the application of the probabilistic approach, and the appendix has been updated to include methodologies for finding lower bounds for large deviations.
The Third Edition also features:
• A new chapter on graph property testing, which is a current topic that incorporates combinatorial, probabilistic, and algorithmic techniques
• An elementary approach using probabilistic techniques to the powerful Szemerédi Regularity Lemma and its applications
• New sections devoted to percolation and liar games
• A new chapter that provides a modern treatment of the Erdős–Rényi phase transition in the Random Graph Process
Written by two leading authorities in the field, The Probabilistic Method, Third Edition is an ideal reference for researchers in combinatorics and algorithm design who would like to better
understand the use of probabilistic methods. The book's numerous exercises and examples also make it an excellent textbook for graduate-level courses in mathematics and computer science.
PART I. METHODS.
1. The Basic Method.
1.1 The Probabilistic Method.
1.2 Graph Theory.
1.3 Combinatorics.
1.4 Combinatorial Number Theory.
1.5 Disjoint Pairs.
1.6 Exercises.
The Probabilistic Lens: The Erdős–Ko–Rado Theorem.
2. Linearity of Expectation.
2.1 Basics.
2.2 Splitting Graphs.
2.3 Two Quickies.
2.4 Balancing Vectors.
2.5 Unbalancing Lights.
2.6 Without Coin Flips.
2.7 Exercises.
The Probabilistic Lens: Brégman’s Theorem.
3. Alterations.
3.1 Ramsey Numbers.
3.2 Independent Sets.
3.3 Combinatorial Geometry.
3.4 Packing.
3.5 Recoloring.
3.6 Continuous Time.
3.7 Exercises.
The Probabilistic Lens: High Girth and High Chromatic Number.
4. The Second Moment.
4.1 Basics.
4.2 Number Theory.
4.3 More Basics.
4.4 Random Graphs.
4.5 Clique Number.
4.6 Distinct Sums.
4.7 The Rödl Nibble.
4.8 Exercises.
The Probabilistic Lens: Hamiltonian Paths.
5. The Local Lemma.
5.1 The Lemma.
5.2 Property B and Multicolored Sets of Real Numbers.
5.3 Lower Bounds for Ramsey Numbers.
5.4 A Geometric Result.
5.5 The Linear Arboricity of Graphs.
5.6 Latin Transversals.
5.7 The Algorithmic Aspect.
5.8 Exercises.
The Probabilistic Lens: Directed Cycles.
6. Correlation Inequalities.
6.1 The Four Functions Theorem of Ahlswede and Daykin.
6.2 The FKG Inequality.
6.3 Monotone Properties.
6.4 Linear Extensions of Partially Ordered Sets.
6.5 Exercises.
The Probabilistic Lens: Turán’s Theorem.
7. Martingales and Tight Concentration.
7.1 Definitions.
7.2 Large Deviations.
7.3 Chromatic Number.
7.4 Two General Settings.
7.5 Four Illustrations.
7.6 Talagrand’s Inequality.
7.7 Applications of Talagrand’s Inequality.
7.8 Kim–Vu Polynomial Concentration.
7.9 Exercises.
The Probabilistic Lens: Weierstrass Approximation Theorem.
8. The Poisson Paradigm.
8.1 The Janson Inequalities.
8.2 The Proofs.
8.3 Brun’s Sieve.
8.4 Large Deviations.
8.5 Counting Extensions.
8.6 Counting Representations.
8.7 Further Inequalities.
8.8 Exercises.
The Probabilistic Lens: Local Coloring.
9. Pseudorandomness.
9.1 The Quadratic Residue Tournaments.
9.2 Eigenvalues and Expanders.
9.3 Quasi Random Graphs.
9.4 Exercises.
The Probabilistic Lens: Random Walks.
PART II. TOPICS.
10 Random Graphs.
10.1 Subgraphs.
10.2 Clique Number.
10.3 Chromatic Number.
10.4 Zero-One Laws.
10.5 Exercises.
The Probabilistic Lens: Counting Subgraphs.
11. The Erdős–Rényi Phase Transition.
11.1 An Overview.
11.2 Three Processes.
11.3 The Galton–Watson Branching Process.
11.4 Analysis of the Poisson Branching Process.
11.5 The Graph Branching Model.
11.6 The Graph and Poisson Processes Compared.
11.7 The Parametrization Explained.
11.8 The Subcritical Regions.
11.9 The Supercritical Regimes.
11.10 The Critical Window.
11.11 Analogies to Classical Percolation Theory.
11.12 Exercises.
The Probabilistic Lens: The Rich Get Richer.
12. Circuit Complexity.
12.1 Preliminaries.
12.2 Random Restrictions and Bounded-Depth Circuits.
12.3 More on Bounded-Depth Circuits.
12.4 Monotone Circuits.
12.5 Formulae.
12.6 Exercises.
The Probabilistic Lens: Maximal Antichains.
13. Discrepancy.
13.1 Basics.
13.2 Six Standard Deviations Suffice.
13.3 Linear and Hereditary Discrepancy.
13.4 Lower Bounds.
13.5 The Beck–Fiala Theorem.
13.6 Exercises.
The Probabilistic Lens: Unbalancing Lights.
14. Geometry.
14.1 The Greatest Angle among Points in Euclidean Spaces.
14.2 Empty Triangles Determined by Points in the Plane.
14.3 Geometrical Realizations of Sign Matrices.
14.4 ε-Nets and VC-Dimensions of Range Spaces.
14.5 Dual Shatter Functions and Discrepancy.
14.6 Exercises.
The Probabilistic Lens: Efficient Packing.
15. Codes, Games and Entropy.
15.1 Codes.
15.2 Liar Game.
15.3 Tenure Game.
15.4 Balancing Vector Game.
15.5 Nonadaptive Algorithms.
15.6 Half Liar Game.
15.7 Entropy.
15.8 Exercises.
The Probabilistic Lens: An Extremal Graph.
16. Derandomization.
16.1 The Method of Conditional Probabilities.
16.2 d-Wise Independent Random Variables in Small Sample Spaces.
16.3 Exercises.
The Probabilistic Lens: Crossing Numbers, Incidences, Sums and Products.
17. Graph Property Testing.
17.1 Property Testing.
17.2 Testing Colorability.
17.3 Szemerédi's Regularity Lemma.
17.4 Testing Triangle-Freeness.
17.5 Characterizing the Testable Graph Properties.
17.6 Exercises.
The Probabilistic Lens: Turán Numbers and Dependent Random Choice.
Appendix A: Bounding of Large Deviations.
A.1 Chernoff Bounds.
A.2 Lower Bounds.
A.3 Exercises.
The Probabilistic Lens: Triangle-Free Graphs Have Large Independence Numbers.
Appendix B: Paul Erdős.
B.1 Papers.
B.2 Conjectures.
B.3 On Erdős.
B.4 Uncle Paul.
Subject Index.
Author Index.
NOGA ALON, PhD, is Baumritter Professor of Mathematics and Computer Science at Tel Aviv University, Israel. A member of the Israel National Academy of Sciences, Dr. Alon has written over 400 published papers, mostly in the areas of combinatorics and theoretical computer science. He is the recipient of numerous honors in the field, including the Erdős Prize (1989), the Pólya Prize (2000), the Landau Prize (2005), and the Gödel Prize (2005).
JOEL H. SPENCER, PhD, is Professor of Mathematics and Computer Science at the Courant Institute of Mathematical Sciences at New York University and is the cofounder and coeditor of the journal Random
Structures and Algorithms. Dr. Spencer has written over 150 published articles and is the coauthor of Ramsey Theory, Second Edition, also published by Wiley.
• New chapter devoted to Graph Property Testing, with sections on Property Testing; Testing Colorability; Szemerédi's Regularity Lemma; Testing Triangle-Freeness; and Characterizing the Testable Graph Properties.
• New sections have also been added on Percolation, Webgraphs, and Chernoff Bounds.
• A substantial revision has been made to the Double Jump section.
• The number of exercises included in the third edition has been almost doubled from that of the second edition, and hints and/or answers to some of the exercises are provided.
• This is the only book that is totally devoted to the probabilistic method, which is used to show the existence of combinatorial objects by studying appropriately defined random objects.
• This book explores probabilistic methods in an accessible way, and extensive exercises complement the approach and style of the book.
• The number of exercises has been increased significantly with an eye towards use in graduate courses.
• Written by well-known authorities in the field, this book has an informal, clear, and precise style and approach.
• The entire manuscript has been reexamined and revised where necessary in order to reflect the recent developments in the field.
Honors Theory of Computation
G22.3350-001 (Honors Theory of Computation), Spring 2004
Lecturer: Prof. Yevgeniy Dodis, dodis(at)cs.nyu.edu, (212) 998-3084, room 508, WWH. Office hour: Thursday 3:15-4:15
Meeting Time/Place: TR 2-3:15, room 613, WWH.
Mailing list: To subscribe to the class list, follow instructions at
To post a message to all the list members, send email to g22_3350_001_sp04@cs.nyu.edu. Please, post only messages interesting to everybody taking the class. Specific class-related questions and most
of your other correspondence should be directed to the instructor.
Lecture Summaries (from last year, subject to change)
See the page for the Spring 2003 class for more information.
Brief Course Description:
This is an advanced graduate course on the fundamentals of theory of computability and computation. The objective of the course is to understand questions of the form: "what is computation?", "which
computational problems can be solved?", "which problems are easy/hard?", etc. In trying to answer these metaphysical questions, we will introduce many possible models of computation, study the
relationship between these models, learn how to formally show that some problems are harder than others, and that some problems are not solvable at all! We will study a wide range of topics,
including Turing machines, recursion theory, (un)decidability, diagonalization, oracles, randomization and non-determinism, time and space complexity, reductions, complete problems, games,
interactive protocols, various complexity classes (P, NP, coNP, L, RP, ZPP, PSPACE, EXP, IP, etc.), foundations of cryptography, probabilistically checkable proofs, approximation algorithms, parallel
computation, imperfect random sources, etc. The emphasis will be put on the understanding of the material and the proof techniques used.
Michael Sipser. Introduction to the Theory of Computation. PWS Publishing, 1997.
Christos Papadimitriou. Computational Complexity. Addison Wesley, 1994.
Applied Mathematics, BS
What is the study of Applied Mathematics?
Mathematics reveals hidden patterns that help us understand the world around us. Now much more than arithmetic and geometry, Mathematics today is a diverse discipline that deals with data,
measurements, and observations from science; with inference, deduction, and proof; and with mathematical models of natural phenomena, of human behavior, and of social systems.
As a practical matter, Mathematics is a science of pattern and order. Its domain is not molecules or cells, but numbers, chance, form, algorithms, and change. As a science of abstract objects,
Mathematics relies on logic rather than on observation as its standard of truth, yet employs observation, simulation, and even experimentation as means of discovering truth.” From: Everybody Counts:
A Report to the Nation on the Future of Mathematics Education (c) 1989 National Academy of Sciences.
Why Should I Consider this Major?
"The special role of Mathematics in education is a consequence of its universal applicability. The results of Mathematics-theorems and theories-are both significant and useful; the best results
are also elegant and deep. Through its theorems, Mathematics offers science both a foundation of truth and a standard of certainty.
In addition to theorems and theories, Mathematics offers distinctive modes of thought which are both versatile and powerful, including modeling, abstraction, optimization, logical analysis,
inference from data, and use of symbols. Experience with mathematical modes of thought builds mathematical power—a capacity of mind of increasing value in this technological age that enables one
to read critically, to identify fallacies, to detect bias, to assess risk, and to suggest alternatives. Mathematics empowers us to understand better the information-laden world in which we live."
—From Everybody Counts: A Report to the Nation on the Future of Mathematics Education (c) 1989 National Academy of Sciences.
Empowered with the critical thinking skills that Mathematics develops, recent Mathematics graduates from Western have obtained positions in a variety of fields including actuarial science, cancer
research, computer software development, business management and the movie industry, among many others. The skills acquired in our program have prepared graduates for further academic studies in
Mathematics, Computer Science, Physics, Biology, Chemistry, Oceanography and Education.
To see more detailed information about this program, please visit our University Catalog:
Sample Careers
• Actuary
• Research Analyst
• Statistician
• Biostatistician
• Math Teacher
• Demographer
• Database Administrator
• Information Scientist
How to implement rotation about an arbitrary point? [Archive] - OpenGL Discussion and Help Forums
11-17-2003, 05:31 AM
Hi there,
Suppose that I want to rotate an object about the point, say (525, 300), by 90 degrees. How can this be implemented in OpenGL? I learned that I should translate the object to the origin (0, 0), rotate, and then translate back to my chosen point. However, I couldn't apply this correctly. Can anyone show me the steps with code?
Thanks in advance
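For reference, the translate/rotate/translate-back recipe described above can be checked outside OpenGL with plain 2D math. The sketch below (Python, a hypothetical helper, not OpenGL itself) composes the same transform that legacy OpenGL builds with glTranslatef(525, 300, 0); glRotatef(90, 0, 0, 1); glTranslatef(-525, -300, 0) issued before drawing:

```python
import math

def rotate_about(point, pivot, degrees):
    """Rotate a 2D point about an arbitrary pivot by the given angle
    (counterclockwise), using translate -> rotate -> translate back."""
    theta = math.radians(degrees)
    px, py = pivot
    # 1) translate so the pivot sits at the origin
    x, y = point[0] - px, point[1] - py
    # 2) rotate about the origin
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    # 3) translate back to the pivot
    return xr + px, yr + py

print(rotate_about((526, 300), (525, 300), 90))  # a point one unit right of the pivot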
Visiting Power Laws in Cyber-Physical Networking Systems
Mathematical Problems in Engineering
Volume 2012 (2012), Article ID 302786, 13 pages
Review Article
Visiting Power Laws in Cyber-Physical Networking Systems
^1School of Information Science & Technology, East China Normal University, No. 500, Dong-Chuan Road, Shanghai 200241, China
^2Department of Computer and Information Science, University of Macau, Avenue Padre Tomas Pereira, Taipa 1356, Macau SAR, China
Received 23 February 2011; Accepted 23 March 2011
Academic Editor: Carlo Cattani
Copyright © 2012 Ming Li and Wei Zhao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Cyber-physical networking systems (CPNSs) are made up of various physical systems that are heterogeneous in nature. Therefore, exploring universalities in CPNSs for either data or systems is desired in its fundamental theory. This paper concerns the data aspect, aiming to address that power laws may yet be a universality of data in CPNSs. The contributions of this paper are threefold. First, we provide a short tutorial on power laws. Then, we address the power laws related to some physical systems. Finally, we discuss that power-law-type data may be governed by stochastic differential equations of fractional order. As a side product, we present the point of view that the upper bounds of data flow at both large and small time scales also follow power laws.
1. Introduction
Cyber-physical networking systems (CPNSs) consist of computational and physical elements integrated towards specific tasks [1–3]. Generally, both data and systems in CPNSs are heterogeneous. For
instance, teletraffic data are different from transportation traffic, letting along other data in CPNS, such as those in physiology. Therefore, one of the fundamental questions is what possible
general laws are to meet CPNS in theory. The answer to that question should be in two folds. One is data. The other is systems that transmit data from sources to destinations within a predetermined
restrict period of time according to a given quality of service (QoS).
In general, both data and systems in CPNS are multidimensional. For instance, data from sources to be transmitted may be from a set of sensors distributed in a certain area. Destinations receiving
data may be a set of actuators, for example, a set of cars distributed in a certain area. Systems to transmit data are generally distributed.
Denote by the -dimensional Euclidean space. Denote data at sources and destinations, respectively, by which is supposed to be -dimensional and which is supposed to be -dimensional. They are given by
A stochastic equation describing an abstract relationship between and may be expressed by where implies the transposition, is a servie matrix of order of a system, and , which is a vector with the
same dimension as that of , may represent uncertainty for the operation of . The operations ⊗ and ⊕ are to be studied from a view of systems, and they are out of the scope of this paper.
Note that is usually a random field, see for example the work of Chilés and Delfiner in [4] in geosciences, the work of Uhlig in [5] in telecommunications, the work of Messina et al. in [6] in power
systems, the work of Muniandy and Stanslas in [7] in medical images, the work of Mason et al. in [8] in wind engineering, and of simply citing a few. The statistics of is obviously crucial for the
performance analysis of physical systems in CPNS. It is noted that the physical meaning of is diverse. For example, it may represent a two-dimensional aeromagnetic data (Spector and Grant [9]), a
medical image (Fortin et al. [10]), vegetation data (Myrhaug et al. [11]), surface crack in material science (Tanaka et al. [12]), and data in physiology (Werner [13], West [14]), DNA (Cattani [15]),
data in stock markets (Rosenow et al. [16]), just mentioning a few. Therefore, seeking for possible universalities of in CPNS is desired.
Without lose of generality, we rewrite (1.1) by where . The norm of is given by The autocovariance function (ACF) of is given, over the hyperrectangle for (Adler [17]), by where is the mean operator,
, and The ACF measures how correlates to .
From the point of view of applications of CPNS, we are interested in two asymptotic expressions of . One is for . The other is for . The former characterizes the small scaling phenomenon of . The
latter measures the large scaling one. It is quite natural for us to investigate two types of scaling phenomena. As a matter of fact, one may be interested in small scaling in some applications, for
example, admission control in computer communication or monitoring sudden disaster in geoscience. On the other side, one may be interested in large scaling in applications, for example, long-term
performance analysis of systems. Exact expression of is certainly useful, but it may usually be application dependent. Consequently, we study possible generalities of for and instead of its exactly
full expressions. The aim of this paper is to explain that both the small scaling described by for and the large scaling described by for , in some fields related to CPNS, ranging from geoscience to
computer communications, follow power laws.
The rest of paper is organized as follows. Short tutorial about power laws is explained in Section 2. Some cases of power laws relating to computational and physical systems in CPNS are described in
Section 3. Stochastically differential equations to govern power-law-type data are discussed in Section 4, which is followed by our conclusions.
2. Brief on Power Laws
Denote by (Ω, F, P) the probability space. Then, x(t, ω) is said to be a stochastic process when the random variable x represents the value of the outcome ω of an experiment for every time t, where Ω represents the sample space, F is the event space or sigma algebra, and P is the probability measure.
As usual, x(t, ω) is simplified to be written as x(t). That is, the event space is usually omitted. Denote by p(x; t) the probability function of x(t). Then, one can define the general nth order, time varying, joint distribution function for the random variables x(t_1), ..., x(t_n). The mean and the ACF of x(t) based on the pdf are written by (2.3) and (2.4), respectively,
μ_x(t) = E[x(t)] = ∫ x p(x; t) dx, (2.3)
r_xx(t_1, t_2) = E[x(t_1) x(t_2)] = ∫∫ x_1 x_2 p(x_1, x_2; t_1, t_2) dx_1 dx_2. (2.4)
Let Var[x(t)] be the variance of x(t). Then,
Var[x(t)] = ∫ (x − μ_x(t))^2 p(x; t) dx. (2.5)
The above expressions imply that the integrals in (2.3) and (2.5) are convergent in the domain of ordinary functions if p(x; t) is light tailed, for example, exponentially decayed (Li et al. [18]). Light-tailed pdfs are not our interest. We are interested in heavy-tailed pdfs. By heavy tail we mean that p(x; t) decays so slowly that (2.3) and (2.5) may be divergent. In the following subsections, we will describe power laws in probability space, ACF, and power spectrum density (PSD) function, respectively.
2.1. Power Law in pdf
A typical heavy-tailed case is the Pareto distribution. Denote by p_Pareto(x) the pdf of the Pareto distribution. Then,
p_Pareto(x) = a b^a / x^(a+1), x ≥ b, (2.6)
where a > 0 and b > 0 are parameters. The mean and variance of x that follows p_Pareto(x) are given by (2.7) and (2.8), respectively,
μ_x = ab / (a − 1), (2.7)
Var(x) = a b^2 / [(a − 1)^2 (a − 2)]. (2.8)
It is easily seen that μ_x does not exist for a ≤ 1 and Var(x) does not exist for a ≤ 2. Note that μ_x implies a global property of x while Var(x) represents a local property of x. Therefore, heavy-tailed pdfs imply that x is in wild randomness due to infinite or very large variance, see the work of Mandelbrot in [19] for the meaning of wild randomness.
Note 1. The Pareto distribution is an instance of power-law-type pdf.
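A quick numerical sanity check of the Pareto tail is possible with inverse-CDF sampling; the helper below is a sketch, not from the paper, and the parameter choices are illustrative:

```python
import random

def pareto_sample(a, b, n, seed=0):
    """Draw n Pareto(a, b) samples by inverse-CDF sampling:
    if U ~ Uniform(0, 1), then b * U**(-1/a) has survival
    function P(X > x) = (b / x)**a for x >= b."""
    rng = random.Random(seed)
    return [b * rng.random() ** (-1.0 / a) for _ in range(n)]

xs = pareto_sample(a=1.5, b=1.0, n=200_000)   # 1 < a < 2: finite mean, infinite variance
tail = sum(1 for x in xs if x > 2.0) / len(xs)
print(tail)  # should sit near the theoretical 2**-1.5 (about 0.354)
```

With a = 1.5 the empirical tail fraction matches (b/x)^a closely even though the sample variance never settles, which is exactly the wild randomness the text refers to.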
2.2. Power Law in ACF
A consequence of a heavy-tailed random variable in ACF is that r(τ) is slowly decayed. By slowly decayed we mean that r(τ) decays hyperbolically in the power law given by (Adler et al. [20])
r(τ) ~ c |τ|^(−β), τ → ∞, (2.9)
where c > 0 is a constant. The Taqqu theorem describes the relationship between a heavy-tailed pdf and a hyperbolically decayed ACF (Abry et al. [21]).
2.3. Power Law in PSD
Denote by S(ω) the PSD of x(t). Then,
S(ω) = ∫ r(τ) e^(−iωτ) dτ. (2.10)
According to the theory of generalized functions (Kanwal [22]), one has
F[|τ|^(−β)] ~ c_0 |ω|^(β − 1), (2.11)
where F denotes the Fourier transform and c_0 is a constant. Therefore, the PSD follows a power law, which is usually termed 1/f noise, see the work of Wornell in [23], the work of Keshner in [24], the work of Ninness in [25], the work of Corsini and Saletti in [26], and the work of Li in [27].
2.4. Power Laws in Describing Scaling Phenomena
We now turn to scaling descriptions. The small scaling phenomenon may be investigated with r(τ) for τ → 0 and the large scaling one with r(τ) for τ → ∞, respectively (Li and Zhao [28]).
On the one side, following Davies and Hall [29], if r(τ) is sufficiently smooth on (0, ∞) and if
r(0) − r(τ) ~ c_1 |τ|^α, τ → 0, (2.12)
where c_1 is a constant and α is the fractal index of x(t), then the fractal dimension, denoted by D, of x(t) is expressed by
D = 2 − α/2. (2.13)
Note 2. Fractal dimension is a parameter to characterize small scaling phenomenon (Mandelbrot [30], Gneiting and Schlather [31], Li [32]).
On the other side, if
r(τ) ~ c_2 |τ|^(−β), τ → ∞, (2.14)
then the parameter β is used to measure the statistical dependence of x(t). If β > 1, r(τ) is integrable, and accordingly x(t) is short-range dependent (SRD). If 0 < β < 1, r(τ) is nonintegrable and x(t) is long-range dependent (LRD), see the work of Beran in [33]. Representing β by the Hurst parameter H yields
H = 1 − β/2. (2.15)
Note 3. Statistical dependence, either SRD or LRD, is a property for large scaling phenomenon.
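The integrability criterion behind the SRD/LRD split can be illustrated numerically. The sketch below (not from the paper, with illustrative exponents) compares partial sums of a hyperbolically decayed ACF sampled at integer lags:

```python
def acf_partial_sum(beta, n):
    """Partial sum of the hyperbolic ACF r(k) = k**(-beta) over lags k = 1..n."""
    return sum(k ** -beta for k in range(1, n + 1))

# beta = 0.5 (LRD): the sums keep growing like n**0.5, so r is nonintegrable.
# beta = 1.5 (SRD): the sums converge; the tail beyond lag 10**4 is negligible.
for n in (10_000, 100_000):
    print(acf_partial_sum(0.5, n), acf_partial_sum(1.5, n))
```

Growing the lag range by a factor of ten multiplies the LRD sum by roughly sqrt(10), while the SRD sum barely moves, which is the practical meaning of long-range dependence.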
3. Cases of Power Laws in CPNS
We address some application cases of power laws in CPNS in this section.
3.1. Power Laws in the Internet
Let x(t) be the teletraffic time series. It may represent the packet size of teletraffic at time t. Denote the ACF of x(t) by r(τ). Then, we have (Li and Lim [34])
r(τ) = (1 + |τ|^α)^(−β/α), 0 < α ≤ 2, β > 0. (3.2)
From (3.2), the fractal dimension and the Hurst parameter of teletraffic are, respectively, given by
D = 2 − α/2, H = 1 − β/2. (3.3)
The above exhibits that both the small scaling and the large one follow power laws.
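Li and Lim model teletraffic with a generalized Cauchy ACF of the form r(τ) = (1 + |τ|^α)^(−β/α), in which the small and large scalings decouple. The sketch below (illustrative parameter values, not from the paper) computes D and H and checks the large-lag power law numerically:

```python
import math

def gc_acf(tau, alpha, beta):
    """Generalized Cauchy ACF: r(tau) = (1 + |tau|**alpha)**(-beta/alpha)."""
    return (1.0 + abs(tau) ** alpha) ** (-beta / alpha)

alpha, beta = 0.8, 0.4            # illustrative: 0 < alpha <= 2, 0 < beta < 1
D = 2 - alpha / 2                 # fractal dimension (small scaling)
H = 1 - beta / 2                  # Hurst parameter (large scaling); LRD since beta < 1
# At large lags r(tau) ~ tau**-beta, so the log-log slope should approach -beta:
slope = (math.log(gc_acf(2e6, alpha, beta)) - math.log(gc_acf(1e6, alpha, beta))) / math.log(2)
print(D, H, slope)
```

Because α and β enter separately, D and H can be set independently, which is the sense in which small and large scalings are decoupled in this model.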
It is worth noting that the upper bounds of teletraffic also follow power laws. In fact, the amount of teletraffic accumulated in the interval [0, t] is upper bounded by σ + ρt, where σ > 0 and ρ > 0 are constants (Cruz [35]). Following Li and Zhao [28], we have the bounds of both the small-time scaling and the large one, expressed in terms of a small-scale factor and a large-scale factor, respectively. Therefore, we have the following theorem.
Theorem 3.1. Both the small-scale factor and the large one of teletraffic obey power laws.
Proof. The two scaling factors follow power-law forms in the small-scale and large-scale variables, respectively. Thus, they obey power laws. This completes the proof.
In addition to teletraffic, others with respect to the Internet also follow power laws. Some are listed below.
Note 4. Barabási and Albert [36] studied several large databases in the World Wide Web (WWW), where they defined vertices by HyperText Markup Language (HTML) documents. They inferred that the probability P(k) that a vertex in the network interacts with k other vertices decays hyperbolically as P(k) ~ k^(−γ) for large k, hence a power law.
Note 5. Let P_out(k) and P_in(k) be the probabilities of a document having k outgoing and k incoming links, respectively. Then, P_out(k) and P_in(k) obey power laws (Albert [37]).
Note 6. The distribution of the number of web pages among sites follows a power law (Huberman and Adamic [38]).
3.2. Power Laws in Geosciences
Let be a spatial point. The physical meaning of a random function may be diverse in the field. For instance, it may represent prospected gold amount at in a gold mine, or a value of pollution index
for pollution alert at in a city.
For simplicity, denote a vector by . Let Then, one may be interested in the covariance function of . Denote by the covariance function of . Then, One of the commonly used models of covariance
functions in geosciences is given by The above constant power is the case of the standard Cauchy process (Webster and Oliver [39]). It fits with some cases in geosciences, see, for example, the work
of Wackernagel in [40]. We list some in the following notes.
Note 7. Let be the covariance function between yield densities at any two points in a region, where represents the distance difference between two points. Then, see the work of Whittle in [41].
Note 8. Sea-level fluctuations, river flow, and flood height follow power laws (Li et al. [42], Lefebvre [43], Lawrance and Kottegoda [44]).
Note 9. Urban growth obeys power laws (Makse et al. [45]).
3.3. Power Laws in Wind Engineering
Wind engineering is an important field relating to wind power generation and disaster preventions from a view of CPNS. In this field, studying fluctuations of wind speed is essential.
The PSD introduced by von Kármán [46], known as the von Kármán spectra (VKS), is widely used in the diverse fields, ranging from turbulence to acoustic wave propagation in random media, see for
example, the work of Goedecke et al. in [47] and the work of Hui et al. in [48]. For the VKS expressed in (3.10), we use the term VKSW for short, where is frequency (Hz), is turbulence integral
scale, is mean speed, is friction velocity (ms^−1), and is friction velocity coefficient such that the variance of wind speed . Equation (3.10) implies that VKSW obeys power law for .
Another famous PSD in wind engineering is the one introduced by Davenport [49], which is expressed by
f S(f) / u_*^2 = 4 x^2 / (1 + x^2)^(4/3), (3.11)
where x = 1200 f / U(10) is the normalized frequency, U(10) is the mean wind speed (m s^−1) measured at height 10 m, and U(z) is the mean wind speed (m s^−1) measured at height z. Davenport's PSD exhibits a power law of wind speed. Other forms of the PSDs of wind speed, such as those discussed by Kaimal [50], Antoniou et al. [51], and Hiriart et al. [52], all follow power laws, see [50–52] for details.
4. Possible Equations for Power-Law-Type Data
The cases of power laws mentioned in the previous section are a few that people may be interested in from a view of CPNS. There are others that are essential in the field of CPNS, such as power laws
in earthquakes, see for example, the work of Pisarenko and Rodkin in [53]. Now, we turn to the discussion of the generality of the equations that may govern data of power-law type.
Conventionally, a stationary random function y(t) may be taken as a solution of a differential equation of integer order, which is driven by white noise w(t). This equation may be written by
a_p y^(p)(t) + a_(p−1) y^(p−1)(t) + ... + a_0 y(t) = w(t), (4.1)
where a_0, ..., a_p are constants.
Let v > 0. Let f be piecewise continuous on (0, ∞) and integrable on any finite subinterval of [0, ∞). For t > 0, denote by D^(−v) the Riemann-Liouville integral operator of order v [54–57]. Then,
D^(−v) f(t) = (1/Γ(v)) ∫_0^t (t − u)^(v−1) f(u) du, (4.2)
where Γ is the Gamma function.
Let b_0 > b_1 > ... > b_p ≥ 0 be a strictly decreasing sequence of nonnegative numbers. Then, for the constants a_k, we have
a_0 D^(b_0) y(t) + a_1 D^(b_1) y(t) + ... + a_p D^(b_p) y(t) = w(t), (4.3)
where D^(b_k) denotes fractional differentiation of order b_k. The above is a stochastic fractional differential equation with constant coefficients. This class of equations yields random functions with power laws (Li [27]). In the case of random fields, (4.3) is extended to a partial differential equation of fractional order, (4.4), in which both the field and the driving noise are multidimensional and differentiation is replaced by an operator of partial differentiation of fractional order.
Another class of stochastic differential equations of fractional order, (4.5), is given by Lim and Muniandy [58].
Note that (4.3), (4.4), or (4.5) should not be taken as a simple extension of the conventional equation (4.1) from integer order to fractional order. As a matter of fact, there are challenging issues with respect to differential equations of fractional order. Since data of power-law type may be with infinite variance (Samorodnitsky and Taqqu [59]), variance analysis, which is a powerful tool in the analysis of conventional random functions, fails to describe random data with infinite variance. Power-law-type data may be LRD, which makes the stationarity test of data a tough issue, see for example, the work of Mandelbrot in [60], the work of Abry and Veitch in [61], and the work of Li et al. in [62]. Owing to power laws, stability of systems that produce such a type of data becomes a critical issue in theory, see the work of Li et al. in [63] and the references therein. In addition, the prediction of data with power laws considerably differs from that of conventional data (M. Li and J. Y. Li [64], Hall and Yao [65]). Topics in power laws continue to receive attention, see for example, the work of Kamoun in [66], the work of Ng et al. in [67], the work of Song et al. in [68], the work of Cattani et al. in [69–71], and the works in [72–79].
5. Conclusions
We have discussed the elements of power laws from both a mathematical point of view and with respect to applications to a number of fields in CPNS. The purpose of this paper is to exhibit that power
laws may yet serve as a universality of data in CPNS. We believe that this point of view may be useful for data modeling and analysis in CPNS.
Acknowledgments
This work was supported in part by the National Natural Science Foundation of China under Grants 60873264 and 61070214, and by the 973 Plan under Project 2011CB302801/2011CB302802.
References
References
1. E. A. Lee, "Cyber physical systems: design challenges," Tech. Rep. UCB/EECS-2008-8, University of California, Berkeley, Calif, USA, 2008.
2. J. A. Stankovic, I. Lee, A. Mok, and R. Rajkumar, "Opportunities and obligations for physical computing systems," Computer, vol. 38, no. 11, pp. 23–31, 2005.
3. R. Alur, D. Thao, J. Esposito et al., "Hierarchical modeling and analysis of embedded systems," Proceedings of the IEEE, vol. 91, no. 1, pp. 11–28, 2003.
4. J.-P. Chilès and P. Delfiner, Geostatistics: Modeling Spatial Uncertainty, Wiley Series in Probability and Statistics, John Wiley & Sons, New York, NY, USA, 1999.
5. S. Uhlig, "On the complexity of Internet traffic dynamics on its topology," Telecommunication Systems, vol. 43, no. 3-4, pp. 167–180, 2010.
6. A. R. Messina, P. Esquivel, and F. Lezama, "Time-dependent statistical analysis of wide-area time-synchronized data," Mathematical Problems in Engineering, vol. 2010, Article ID 751659, 17 pages, 2010.
7. S. V. Muniandy and J. Stanslas, "Modelling of chromatin morphologies in breast cancer cells undergoing apoptosis using generalized Cauchy field," Computerized Medical Imaging and Graphics, vol. 32, no. 7, pp. 631–637, 2008.
8. M. S. Mason, D. F. Fletcher, and G. S. Wood, "Numerical simulation of idealised three-dimensional downburst wind fields," Engineering Structures, vol. 32, no. 11, pp. 3558–3570, 2010.
9. A. Spector and F. S. Grant, "Statistical methods for interpreting aeromagnetic data," Geophysics, vol. 35, no. 2, pp. 293–302, 1970.
10. C. Fortin, R. Kumaresan, W. Ohley, and S. Hoefer, "Fractal dimension in the analysis of medical images," IEEE Engineering in Medicine and Biology Magazine, vol. 11, no. 2, pp. 65–71, 1992.
11. D. Myrhaug, L. E. Holmedal, and M. C. Ong, "Nonlinear random wave-induced drag force on a vegetation field," Coastal Engineering, vol. 56, no. 3, pp. 371–376, 2009.
12. M. Tanaka, R. Kato, Y. Kimura, and A. Kayama, "Automated image processing and analysis of fracture surface patterns formed during creep crack growth in austenitic heat-resisting steels with different microstructures," ISIJ International, vol. 42, no. 12, pp. 1412–1418, 2002.
13. G. Werner, "Fractals in the nervous system: conceptual implications for theoretical neuroscience," Frontiers in Fractal Physiology, vol. 1, article 15, 28 pages, 2010.
14. B. J. West, "Fractal physiology and the fractional calculus: a perspective," Frontiers in Fractal Physiology, vol. 1, article 12, 2010.
15. C. Cattani, "Fractals and hidden symmetries in DNA," Mathematical Problems in Engineering, vol. 2010, Article ID 507056, 31 pages, 2010.
16. B. Rosenow, P. Gopikrishnan, V. Plerou, and H. E. Stanley, "Dynamics of cross-correlations in the stock market," Physica A, vol. 324, no. 1-2, pp. 241–246, 2003.
17. R. J. Adler, The Geometry of Random Fields, Wiley Series in Probability and Mathematical Statistics, John Wiley & Sons, Chichester, UK, 1981.
18. M. Li, W. Zhao, and S.-Y. Chen, "mBm-based scalings of traffic propagated in Internet," Mathematical Problems in Engineering, vol. 2011, Article ID 389803, 21 pages, 2011.
19. B. B. Mandelbrot, Multifractals and 1/f Noise, Springer, New York, NY, USA, 1998.
20. R. J. Adler, R. E. Feldman, and M. S. Taqqu, Eds., A Practical Guide to Heavy Tails: Statistical Techniques and Applications, Birkhäuser, Boston, Mass, USA, 1998.
21. P. Abry, P. Borgnat, F. Ricciato, A. Scherrer, and D. Veitch, "Revisiting an old friend: on the observability of the relation between long range dependence and heavy tail," Telecommunication Systems, vol. 43, no. 3-4, pp. 147–165, 2010.
22. R. P. Kanwal, Generalized Functions: Theory and Applications, Birkhäuser, Boston, Mass, USA, 3rd edition, 2004.
23. G. W. Wornell, "Wavelet-based representations for the 1/f family of fractal processes," Proceedings of the IEEE, vol. 81, no. 10, pp. 1428–1450, 1993.
24. M. S. Keshner, "1/f noise," Proceedings of the IEEE, vol. 70, no. 3, pp. 212–218, 1982.
25. B. Ninness, "Estimation of 1/f noise," IEEE Transactions on Information Theory, vol. 44, no. 1, pp. 32–46, 1998.
26. G. Corsini and R. Saletti, "1/f^γ power spectrum noise sequence generator," IEEE Transactions on Instrumentation and Measurement, vol. 37, no. 4, pp. 615–619, 1988.
27. M. Li, "Fractal time series—a tutorial review," Mathematical Problems in Engineering, vol. 2010, Article ID 157264, 26 pages, 2010.
28. M. Li and W. Zhao, "Representation of a stochastic traffic bound," IEEE Transactions on Parallel and Distributed Systems, vol. 21, no. 9, pp. 1368–1372, 2010.
29. S. Davies and P. Hall, "Fractal analysis of surface roughness by using spatial data," Journal of the Royal Statistical Society, Series B, vol. 61, no. 1, pp. 3–37, 1999.
30. B. B. Mandelbrot, The Fractal Geometry of Nature, W. H. Freeman and Co., San Francisco, Calif, USA, 1982.
31. T. Gneiting and M. Schlather, "Stochastic models that separate fractal dimension and the Hurst effect," SIAM Review, vol. 46, no. 2, pp. 269–282, 2004.
32. M. Li, "A class of negatively fractal dimensional Gaussian random functions," Mathematical Problems in Engineering, vol. 2011, Article ID 291028, 18 pages, 2011.
33. J. Beran, Statistics for Long-Memory Processes, vol. 61 of Monographs on Statistics and Applied Probability, Chapman and Hall, New York, NY, USA, 1994.
34. M. Li and S. C. Lim, "Modeling network traffic using generalized Cauchy process," Physica A, vol. 387, no. 11, pp. 2584–2594, 2008.
35. R. L. Cruz, "A calculus for network delay—part I: network elements in isolation; part II: network analysis," IEEE Transactions on Information Theory, vol. 37, no. 1, pp. 114–141, 1991.
36. A.-L. Barabási and R. Albert, "Emergence of scaling in random networks," Science, vol. 286, no. 5439, pp. 509–512, 1999.
37. R. Albert, H. Jeong, and A.-L. Barabási, "Internet: diameter of the world-wide web," Nature, vol. 401, no. 6749, pp. 130–131, 1999.
38. B. A. Huberman and L. A. Adamic, "Internet: growth dynamics of the world-wide web," Nature, vol. 401, no. 6749, p. 131, 1999.
39. R. Webster and M. A. Oliver, Geostatistics for Environmental Scientists, John Wiley & Sons, 2007.
40. H. Wackernagel, Multivariate Geostatistics: An Introduction with Applications, Springer, 2005.
41. P. Whittle, "On the variation of yield variance with plot size," Biometrika, vol. 43, no. 3-4, pp. 337–343, 1962.
42. M. Li, C. Cattani, and S.-Y. Chen, "Viewing sea level by a one-dimensional random function with long memory," Mathematical Problems in Engineering, vol. 2011, Article ID 654284, 13 pages, 2011.
43. M. Lefebvre, "A one- and two-dimensional generalized Pareto model for a river flow," Applied Mathematical Modelling, vol. 30, no. 2, pp. 155–163, 2006.
44. A. J. Lawrance and N. T. Kottegoda, "Stochastic modelling of riverflow time series," Journal of the Royal Statistical Society, Series A, vol. 140, no. 1, pp. 1–47, 1977.
45. H. A. Makse, S. Havlin, and H. E. Stanley, "Modelling urban growth patterns," Nature, vol. 377, no. 6550, pp. 608–612, 1995.
46. T. von Kármán, "Progress in the statistical theory of turbulence," Proceedings of the National Academy of Sciences of the United States of America, vol. 34, no. 11, pp. 530–539, 1948.
47. G. H. Goedecke, V. E. Ostashev, D. K. Wilson, and H. J. Auvermann, "Quasi-wavelet model of von Kármán spectrum of turbulent velocity fluctuations," Boundary-Layer Meteorology, vol. 112, no. 1, pp. 33–56, 2004.
48. M. C. H. Hui, A. Larsen, and H. F. Xiang, "Wind turbulence characteristics study at the Stonecutters Bridge site—part II: wind power spectra, integral length scales and coherences," Journal of Wind Engineering and Industrial Aerodynamics, vol. 97, no. 1, pp. 48–59, 2009.
49. A. G. Davenport, "The spectrum of horizontal gustiness near the ground in high winds," Quarterly Journal of the Royal Meteorological Society, vol. 87, no. 372, pp. 194–211, 1961.
50. J. C. Kaimal, J. C. Wyngaard, Y. Izumi, and O. R. Coté, "Spectral characteristics of surface-layer turbulence," Quarterly Journal of the Royal Meteorological Society, vol. 98, no. 417, pp. 563–589, 1972.
51. I. Antoniou, D. Asimakopoulos, A. Fragoulis, A. Kotronaros, D. P. Lalas, and I. Panourgias, "Turbulence measurements on top of a steep hill," Journal of Wind Engineering and Industrial Aerodynamics, vol. 39, no. 1–3, pp. 343–355, 1992.
52. D. Hiriart, J. L. Ochoa, and B. García, "Wind power spectrum measured at the San Pedro Mártir Sierra," Revista Mexicana de Astronomía y Astrofísica, vol. 37, no. 2, pp. 213–220, 2001.
53. V. Pisarenko and M. Rodkin, Heavy-Tailed Distributions in Disaster Analysis, vol. 30, Springer, 2010.
54. C. A. Monje, Y.-Q. Chen, B. M. Vinagre, D. Xue, and V. Feliu, Fractional Order Systems and Controls—Fundamentals and Applications, Springer, 2010.
55. M. D. Ortigueira, "An introduction to the fractional continuous-time linear systems: the 21st century systems," IEEE Circuits and Systems Magazine, vol. 8, no. 3, pp. 19–26, 2008.
56. Y. Q. Chen and K. L. Moore, "Discretization schemes for fractional-order differentiators and integrators," IEEE Transactions on Circuits and Systems I, vol. 49, no. 3, pp. 363–367, 2002.
57. B. M. Vinagre, Y. Q. Chen, and I. Petráš, "Two direct Tustin discretization methods for fractional-order differentiator/integrator," Journal of the Franklin Institute, vol. 340, no. 5, pp. 349–362, 2003.
58. S. C. Lim and S. V. Muniandy, "Self-similar Gaussian processes for modeling anomalous diffusion," Physical Review E, vol. 66, no. 2, Article ID 021114, 14 pages, 2002.
59. G. Samorodnitsky and M. S. Taqqu, Stable Non-Gaussian Random Processes: Stochastic Models with Infinite Variance, Chapman & Hall, New York, NY, USA, 1994.
60. B. B. Mandelbrot, "Note on the definition and the stationarity of fractional Gaussian noise," Journal of Hydrology, vol. 30, no. 4, pp. 407–409, 1976.
61. P. Abry and D. Veitch, "Wavelet analysis of long-range-dependent traffic," IEEE Transactions on Information Theory, vol. 44, no. 1, pp. 2–15, 1998.
62. M. Li, W. S. Chen, and L. Han, "Correlation matching method for the weak stationarity test of LRD traffic," Telecommunication Systems, vol. 43, no. 3-4, pp. 181–195, 2010.
63. M. Li, S. C. Lim, and S. Y. Chen, "Exact solution of impulse response to a class of fractional oscillators and its stability," Mathematical Problems in Engineering, vol. 2011, Article ID 657839, 9 pages, 2011.
64. M. Li and J.-Y. Li, "On the predictability of long-range dependent series," Mathematical Problems in Engineering, vol. 2010, Article ID 397454, 9 pages, 2010.
65. P. Hall and Q. Yao, "Inference in ARCH and GARCH models with heavy-tailed errors," Econometrica, vol. 71, no. 1, pp. 285–317, 2003.
66. F. Kamoun, "Performance analysis of a discrete-time queuing system with a correlated train arrival process," Performance Evaluation, vol. 63, no. 4-5, pp. 315–340, 2006.
67. J. K.-Y. Ng, S. Song, and W. Zhao, "Statistical delay analysis on an ATM switch with self-similar input traffic," Information Processing Letters, vol. 74, no. 3-4, pp. 163–173, 2000.
68. Z. Song, Y.-Q. Chen, C. R. Sastry, and N. C. Tas, Optimal Observation for Cyber-Physical Systems, Springer, 2009.
69. C. Cattani, "Harmonic wavelet approximation of random, fractal and high frequency signals," Telecommunication Systems, vol. 43, no. 3-4, pp. 207–217, 2010.
70. C. Cattani, "Fractals and hidden symmetries in DNA," Mathematical Problems in Engineering, vol. 2010, Article ID 507056, 31 pages, 2010.
71. G. Mattioli, M. Scalia, and C. Cattani, "Analysis of large-amplitude pulses in short time intervals: application to neuron interactions," Mathematical Problems in Engineering, vol. 2010, Article ID 895785, 15 pages, 2010.
72. S. Y. Chen, Y. F. Li, and J. Zhang, "Vision processing for realtime 3-D data acquisition based on coded structured light," IEEE Transactions on Image Processing, vol. 17, no. 2, pp. 167–176, 2008.
73. S. Y. Chen and Y. F. Li, "Vision sensor planning for 3-D model acquisition," IEEE Transactions on Systems, Man, and Cybernetics, Part B, vol. 35, no. 5, pp. 894–904, 2005.
74. W. B. Mikhael and T. Yang, "A gradient-based optimum block adaptation ICA technique for interference suppression in highly dynamic communication channels," EURASIP Journal on Applied Signal Processing, vol. 2006, Article ID 84057, 2006.
75. E. G. Bakhoum and C. Toma, "Dynamical aspects of macroscopic and quantum transitions due to coherence function and time series events," Mathematical Problems in Engineering, vol. 2010, Article ID 428903, 2010.
76. E. G. Bakhoum and C. Toma, "Mathematical transform of traveling-wave equations and phase aspects of quantum interaction," Mathematical Problems in Engineering, vol. 2010, Article ID 695208, 15 pages, 2010.
77. Z. Liao, S. Hu, and W. Chen, "Determining neighborhoods of image pixels automatically for adaptive image denoising using nonlinear time series analysis," Mathematical Problems in Engineering, vol. 2010, Article ID 914564, 2010.
78. Z.-W. Liao, S.-X. Hu, D. Sun, and W. F. Chen, "Enclosed Laplacian operator of nonlinear anisotropic diffusion to preserve singularities and delete isolated points in image smoothing," Mathematical Problems in Engineering, vol. 2011, Article ID 749456, 15 pages, 2011.
79. J.-W. Yang, Z.-R. Chen, W.-S. Chen, and Y.-J. Chen, "Robust affine invariant descriptors," Mathematical Problems in Engineering, vol. 2011, Article ID 185303, 15 pages, 2011.
Advanced Heat and Mass Transfer — Internal Convective Heat Transfer (Thermal-Fluids Central e-Books)
5 INTERNAL FORCED CONVECTIVE HEAT AND MASS TRANSFER

5.1 Introduction

Internal heat and mass transfer have significant applications in a variety of technologies, including heat exchangers and electronic cooling. Internal convective heat and mass transfer can be classified as either forced or natural convection. An initial simple approach to internal convective heat transfer is to use the dimensional analysis presented in Chapter 1 to obtain the important parameters and dimensionless numbers for steady laminar flow of an incompressible fluid in a conventional tube, i.e.,

h = f(k, μ, c_p, ρ, u, D, x, ΔT)    (5.1)

The local heat transfer coefficient is a function of the fluid properties (viscosity μ, thermal conductivity k, density ρ, specific heat c_p), geometry (D), temperature difference (ΔT), and flow velocity (u). In dimensionless form, as shown in Chapter 1,

Nu = g(Re, Pr, x/D)    (5.2)

This relation indicates that the local Nusselt number for flow in a circular tube is a function of the Reynolds number, the Prandtl number, and x/D. The goal of this chapter is to develop the heat and mass transfer coefficients for various internal flow configurations under different operating conditions, and to present fundamental models together with analytical and numerical solutions of both laminar and turbulent internal forced convection. Section 5.2 introduces the basic definitions, terminology, and governing equations for internal flow, followed by discussions of uncoupled fully developed laminar flow and thermal entry effects in Sections 5.3 and 5.4. Fully developed laminar flow with coupled thermal and concentration entry effects is taken up in Section 5.5. While the flow in Sections 5.3–5.5 is assumed to be fully developed, the combined hydrodynamic, thermal, and concentration entry effects are discussed in Section 5.6. The full numerical solution of the internal forced convection problem based on the full Navier-Stokes equations using the finite volume method is discussed in Section 5.7, followed by a discussion of forced convection in microchannels. The chapter closes with a detailed treatment of turbulence in internal forced convection in Section 5.9.

Advanced Heat and Mass Transfer, Amir Faghri, Yuwen Zhang, and John Howell. Copyright © 2010 Global Digital Press.

5.2 Basic Definitions, Terminology and Governing Equations

It is important to clarify some basic definitions, terminology, and criteria that are often used in internal convective heat and mass transfer.
These include:

1. Mean velocity, temperature, and concentration
2. Fully developed flow, temperature, and concentration profiles
3. Hydrodynamic, thermal, and concentration entrance lengths

Figure 5.1(a) shows the development of a velocity profile inside a duct or tube with uniform inlet velocity for laminar flow of an incompressible Newtonian fluid. At some distance from the tube inlet, the velocity profile no longer changes along the flow direction; this is referred to as the fully developed flow condition. The fully developed condition is often met at some distance from the inlet, although there are also applications in which fully developed flow is never reached. Momentum, thermal, and concentration boundary layers form on the inside surface of the tube, and their thickness increases in a manner similar to boundary layer flow over a flat plate (presented in detail in Chapter 4). Figure 5.1(a) shows how the momentum boundary layer builds up in a pipe along the flow direction. At some distance from the inlet, the boundary layer fills the flow area. The flow downstream from this point is referred to as fully developed flow, since the velocity profile does not change after this point. The distance from the inlet to the point where the flow becomes fully developed is called the hydrodynamic entrance length. If the flow is laminar (Re < 2300 for flow inside circular tubes), the fully developed velocity profile is parabolic. It should be noted that the fluid velocity outside the boundary layer increases with x, as required by the conservation of mass (continuity) equation; the centerline velocity finally reaches a value of two times the inlet velocity, u_in, for fully developed, steady, incompressible, laminar flow inside tubes.

It should also be noted that the end of the hydrodynamic entrance length is not simply the point at which the friction coefficient,

c_f = τ_w / (ρ u_in² / 2)    (5.3)

stops changing along the flow. The friction coefficient variation for laminar flow inside a circular tube with uniform inlet velocity is shown in Fig. 5.1(b): the friction coefficient is highest at the entrance and then decreases smoothly to a constant value corresponding to fully developed flow. Two factors cause the friction coefficient to be higher in the entrance region of a tube than in the fully developed region. The first is the larger velocity gradient at the wall near the entrance; this gradient decreases along the pipe and becomes constant before the velocity becomes fully developed. The second is the velocity outside the boundary layer, which must increase to satisfy continuity; the accelerating core flow produces an additional drag force when its effect is included in the friction coefficient.

[Figure 5.1: Velocity profiles and friction factor variation in laminar flow in a circular tube. (a) Velocity profile; (b) friction factor.]

The turbulent velocity profile and friction coefficient variation for a circular pipe are shown in Fig. 5.2. Even for a very high inlet velocity, the boundary layer is laminar over part of the entrance; the transition from laminar to turbulent flow is marked by a sudden increase in the momentum boundary layer thickness, as shown in Fig. 5.2(a). The friction coefficient variation for turbulent flow in a pipe entrance is shown in Fig. 5.2(b).

The hydrodynamic entry length required for fully developed flow should be obtained from a complete solution of the flow and thermal fields in the entrance region. A rule of thumb for judging whether the flow is fully developed in circular pipes is

L_H ≥ 0.05 Re D    for laminar flow    (5.4)
L_H ≥ 0.625 Re^0.25 D    for turbulent flow    (5.5)

where L_H is the hydrodynamic entrance length and the Reynolds number is defined by Re = u_m D / ν.

[Figure 5.2: Velocity profiles and friction factor in turbulent flow in a circular tube. (a) Velocity profile; (b) friction factor.]
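As a quick numerical companion to the entry-length criteria of eqs. (5.4) and (5.5), the sketch below evaluates them for a given Reynolds number and tube diameter (the function name and sample numbers are ours, not the book's):

```python
def hydrodynamic_entry_length(Re: float, D: float) -> float:
    """Estimate the hydrodynamic entrance length L_H for a circular tube.

    Uses L_H = 0.05 * Re * D for laminar flow (Re < 2300), eq. (5.4),
    and  L_H = 0.625 * Re**0.25 * D for turbulent flow, eq. (5.5).
    """
    if Re < 2300:  # laminar regime for circular tubes
        return 0.05 * Re * D
    return 0.625 * Re**0.25 * D

# Laminar: Re = 1000 in a 20 mm tube -> L_H = 0.05 * 1000 * 0.02 = 1.0 m
L_lam = hydrodynamic_entry_length(1000.0, 0.02)
# Turbulent: Re = 1e5 in the same tube -> L_H is only about 0.22 m
L_turb = hydrodynamic_entry_length(1.0e5, 0.02)
```

Note how differently the two regimes scale: the laminar entrance length grows linearly with Re, while the turbulent one grows only as Re^0.25, so a turbulent flow typically becomes fully developed within a few diameters.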
[Figure 5.3: Temperature development along the flow in a circular tube.]

A similar behavior is expected for the thermal case, with the thermal boundary layer growing at the entrance of a tube as shown in Fig. 5.3, which corresponds to a case in which there may be an unheated starting length where the velocity is fully developed before heating starts. One expects the thermal boundary layer to grow in the thermal entry region before the heat transfer coefficient becomes constant. The requirement for a fully developed thermal region is that the dimensionless temperature,

θ = (T_w − T)/(T_w − T_m)    or    (T_w − T)/(T_w − T_c)

does not change with distance along the flow direction, where T_m and T_c are the mean and centerline temperatures, respectively. A similar requirement holds for the fully developed concentration profile, with θ replaced by

φ = (c_w − c)/(c_w − c_m)    or    (c_w − c)/(c_w − c_c)

where c_m and c_c are the mean and centerline concentrations (or mass fractions), respectively. In the subsequent sections, we use the following definitions to mathematically define the fully developed flow, temperature, and concentration profiles:

Fully developed flow profile:
u/u_c    or    u/u_m = f(r/r_o)    (5.6)

Fully developed temperature profile:
θ = (T_w − T)/(T_w − T_m)    or    (T_w − T)/(T_w − T_c) = g(r/r_o)    (5.7)

Fully developed concentration profile:
φ = (c_w − c)/(c_w − c_m)    or    (c_w − c)/(c_w − c_c) = h(r/r_o)    (5.8)

We can now define the local heat and mass transfer coefficients (h and h_m) based on the mean temperature or concentration:

q″_w = h (T_w − T_m) = −k (∂T/∂r)|_{r=r_o}    (5.9)

m″_w = h_m (ω_w − ω_m) = −ρD (∂ω/∂r)|_{r=r_o}    (5.10)

where D is the mass diffusivity. Since the fully developed temperature profile is defined by the nondimensional temperature profile (T_w − T)/(T_w − T_m) being invariant in the flow (x) direction, we can write

∂/∂r [(T_w − T)/(T_w − T_m)]|_{r=r_o} = [−(∂T/∂r)|_{r=r_o}]/(T_w − T_m) = h/k = constant    (5.11)

This conclusion — that the local heat transfer coefficient is constant along the flow direction for a fully developed temperature profile — is only valid for constant wall heat flux or constant wall temperature conditions. The requirement that the dimensionless temperature be invariant for a fully developed temperature profile can also be written as

∂/∂x [(T_w − T)/(T_w − T_m)] = 0    (5.12)

Carrying out the differentiation yields

∂T/∂x = dT_w/dx − [(T_w − T)/(T_w − T_m)] dT_w/dx + [(T_w − T)/(T_w − T_m)] dT_m/dx    (5.13)

In external flow, the heat and mass transfer coefficients are usually defined with the driving potentials (T_w − T_∞) or (ω_w − ω_∞), where T_∞ and ω_∞ are the temperature and mass fraction of the fluid in the free stream (far from the wall); in most cases T_∞ and ω_∞ are known and constant. In internal flow configurations, however, there is usually no well-defined reference temperature or concentration (mass fraction) except at the inlet and/or the boundaries, and the temperature and concentration may change both in the axial direction and perpendicular to the flow direction. Several choices are therefore available for the driving temperature and concentration differences; the most common is the mean temperature or concentration (mass fraction or mass density). The mixed-mean fluid temperature or concentration is defined at a given local axial location from a convective thermal energy or mass balance, i.e.,

T_m = [1/(A u_m ρ_m c_{p,m})] ∫_A u T ρ c_p dA    (5.14)

ρ_{A,m} = [1/(A u_m)] ∫_A u ρ_A dA    (5.15)

where ρ_{A,m} is the mean mass density of a given component A and ρ_m is the mean density of the fluid, with the mean velocity defined as

u_m = [1/(A ρ_m)] ∫_A u ρ dA    (5.16)

Assuming constant properties, the mean velocity, temperature, and mass density reduce to

u_m = (1/A) ∫_A u dA    (5.17)

T_m = [1/(A u_m)] ∫_A u T dA    (5.18)

ρ_{A,m} = [1/(A u_m)] ∫_A ρ_A u dA    (5.19)

We now focus on two conventional boundary conditions: constant wall heat flux and constant surface temperature. First, consider constant heat flux (heat rate) at the wall, which occurs in many applications such as electronic cooling, electric resistance heating, and radiant heating.
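To make the constant-property definitions of eqs. (5.17) and (5.18) concrete, the sketch below numerically integrates a parabolic velocity profile and an illustrative temperature profile over a circular cross-section to obtain u_m and the mixed-mean temperature T_m. The profiles, grid, and helper function are our own choices, not from the text:

```python
import numpy as np

def integrate(f, x):
    """Composite trapezoidal rule for sampled values f(x)."""
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

ro = 0.01                        # tube radius [m] (assumed)
r = np.linspace(0.0, ro, 20001)  # radial grid
A = np.pi * ro**2                # cross-sectional area

# Fully developed parabolic profile with mean velocity 1 m/s, eq. (5.26)
u = 2.0 * 1.0 * (1.0 - (r / ro) ** 2)

# Illustrative temperature field: hotter at the wall than on the axis
Tw, Tc = 350.0, 300.0
T = Tw - (Tw - Tc) * (1.0 - (r / ro) ** 2)

# eq. (5.17): u_m = (1/A) * integral of u dA, with dA = 2*pi*r dr
um = integrate(u * 2.0 * np.pi * r, r) / A

# eq. (5.18): T_m = (1/(A*u_m)) * integral of u*T dA (velocity-weighted)
Tm = integrate(u * T * 2.0 * np.pi * r, r) / (A * um)
```

For these profiles the mixed-mean value works out to T_m = T_w − (2/3)(T_w − T_c) ≈ 316.7 K: the faster-moving core fluid weights the mean toward the centerline temperature, which is exactly why the velocity-weighted T_m, rather than a simple area average, belongs in eq. (5.9).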
From eq. (5.9), since h and q″_w are constant, we can conclude that T_w − T_m = constant. Differentiating gives dT_w/dx = dT_m/dx, and substituting into eq. (5.13) gives

∂T/∂x = dT_w/dx = dT_m/dx    (5.20)

Now consider the case of constant surface (wall) temperature, which also occurs in many applications, including condensers, evaporators, and any heat exchange surface where the heat transfer coefficient is extremely high. Using eq. (5.13) with dT_w/dx = 0 for constant surface temperature, we get

∂T/∂x = [(T_w − T)/(T_w − T_m)] dT_m/dx    (5.21)

It should be emphasized that eqs. (5.20) and (5.21) are applicable only when the temperature profile is fully developed. The variations of wall and mean temperature along the flow for a fully developed temperature profile with constant heat rate or constant surface temperature are shown in Fig. 5.4.

[Figure 5.4: Wall and mean temperature variation along the flow in a circular tube for fully developed flow and temperature profile. (a) Constant heat flux at wall; (b) constant wall temperature.]

Finally, to obtain the convective heat and/or mass transfer coefficients, one needs to solve the continuity, momentum, energy, and appropriate species equations. In convective heat and mass transfer problems, it is important to obtain information about the flow by solving the continuity and momentum equations in addition to the energy and species equations. These conservation equations are mostly decoupled, except in circumstances such as variable properties or governing equations and boundary conditions coupled by the physics (as in natural convection, absorption, sublimation, evaporation, and condensation problems).

It is obviously more accurate to solve the complete transport conservation equations (elliptic form) for internal flow without making boundary layer assumptions (parabolic form), as discussed in Chapter 4. In most cases, however, this is not practical, due to the complexity of the geometry and/or solution techniques, as well as the requirement of additional boundary conditions in both analytical and numerical methods. For two-dimensional fully developed steady laminar flow with constant properties, the momentum equation in a circular tube, as shown in Chapter 2, is

dp/dx = (μ/r) d/dr (r du/dr)    (5.22)

with boundary conditions

u = 0 at r = r_o;    du/dr = 0 at r = 0    (5.23)

Integrating eq. (5.22) twice and applying the boundary conditions yields a parabolic velocity profile:

u = −[r_o²/(4μ)] (dp/dx) (1 − r²/r_o²)    (5.24)

Using the definition of the mean velocity u_m for constant properties together with this profile, we obtain

u_m = [1/(π r_o²)] ∫_A u dA = [1/(π r_o²)] ∫₀^{r_o} 2π r u dr = −[r_o²/(8μ)] dp/dx    (5.25)

Equation (5.24) in terms of the mean velocity is

u = 2u_m (1 − r²/r_o²)    (5.26)

The shear stress at the wall can be calculated from the velocity gradient at the wall:

τ_w = −μ (∂u/∂r)|_{r=r_o} = 4u_m μ / r_o = −(r_o/2) dp/dx    (5.27)

This result can be presented in terms of the friction coefficient c_f:

c_f = τ_w / (ρ u_m²/2) = 8μ/(r_o ρ u_m) = 16/Re    (5.28)

In addition to this friction coefficient, the following friction factor is also widely used:

f = [−(dp/dx) D] / (ρ u_m²/2)    (5.29)

It follows from eq. (5.27) that

f = 4τ_w / (ρ u_m²/2) = 4c_f = 64/Re    (5.30)
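The fully developed results of eqs. (5.25)–(5.30) chain together cleanly, as the short sketch below shows for one illustrative water flow (the property values and variable names are our assumptions, not the book's):

```python
# Fully developed laminar flow of water in a circular tube (a sketch).
rho = 998.0    # density [kg/m^3] (assumed, water near 20 C)
mu = 1.0e-3    # dynamic viscosity [Pa*s] (assumed)
D = 0.02       # tube diameter [m]
um = 0.05      # mean velocity [m/s]
ro = D / 2.0

Re = rho * um * D / mu          # Reynolds number; here Re = 998, laminar
assert Re < 2300, "relations below assume laminar pipe flow"

cf = 16.0 / Re                  # friction coefficient, eq. (5.28)
f = 64.0 / Re                   # friction factor, eq. (5.30): f = 4*cf
tau_w = cf * rho * um**2 / 2.0  # wall shear stress from eq. (5.28)
dpdx = -f * rho * um**2 / (2.0 * D)  # pressure gradient from eq. (5.29)

# Consistency checks against eqs. (5.27) and (5.25):
#   tau_w = 4*mu*um/ro   and   -dp/dx = 8*mu*um/ro**2
```

Here tau_w = 0.02 Pa and dp/dx = −4.0 Pa/m, in agreement with the analytical expressions of eqs. (5.25) and (5.27) — a useful sanity check when implementing these relations.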
Let us analyze the energy equation to get a feeling for the importance of the various terms, since determination of the temperature field in the fluid is required for the heat transfer coefficient. To simplify the analysis, consider a two-dimensional cylindrical geometry with the following assumptions:
1. Steady laminar flow
2. Constant properties
3. Fully developed flow
4. Newtonian incompressible fluid
The energy equation under the above assumptions is

u\frac{\partial T}{\partial x} = \alpha\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right) + \frac{\partial^2 T}{\partial x^2}\right] + \frac{\mu}{\rho c_p}\left(\frac{\partial u}{\partial r}\right)^2   (5.31)

The above equation is non-dimensionalized using the following variables to show the effect of axial conduction and viscous dissipation:

u^+ = \frac{u}{u_m}, \quad x^+ = \frac{2(x/D)}{Re\,Pr}, \quad r^+ = \frac{r}{r_o}, \quad \theta = \frac{T - T_r}{T_{in} - T_r}, \quad E = \frac{u_m^2}{c_p\,\Delta T}, \quad \Delta T = T_{in} - T_r, \quad Pe = Re\,Pr   (5.32)

where T_r is a reference temperature and E and Pe are the Eckert and Peclet numbers, respectively. The resulting dimensionless energy equation is

\frac{u^+}{2}\frac{\partial\theta}{\partial x^+} = \frac{1}{r^+}\frac{\partial}{\partial r^+}\left(r^+\frac{\partial\theta}{\partial r^+}\right) + \frac{1}{Pe^2}\frac{\partial^2\theta}{\partial x^{+2}} + E\,Pr\left(\frac{\partial u^+}{\partial r^+}\right)^2   (5.33)

The second term on the right-hand side of the above equation is due to axial heat conduction, and the last term is due to the viscous dissipation effect. If E Pr is small, viscous dissipation can be neglected; this is the case for flow with a low velocity and a low Prandtl number. The second term on the right-hand side (axial heat conduction) is negligible when the Peclet number, Pe, is greater than 100. Axial heat conduction should be accounted for when the Peclet number is small, as in the case of liquid metals.
Example 5.1: Estimate the hydrodynamic entry length, L_H, using Blasius' result for the momentum boundary layer thickness.

Solution: From the Blasius solution,

\frac{\delta}{x} \approx \frac{5}{Re_x^{1/2}}   (5.34)

The flow is fully developed when the boundary layers meet at the centerline, i.e., δ = D/2 at x = L_H; therefore

\frac{D/2}{L_H} = \frac{5}{Re_{L_H}^{1/2}}, \qquad \frac{L_H}{D} = \frac{Re_{L_H}^{1/2}}{10} = 0.1\,Re_{L_H}^{1/2}   (5.35)

446 Advanced Heat and Mass Transfer Amir Faghri, Yuwen Zhang, and John Howell Copyright © 2010 Global Digital Press
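Completing the algebra of Example 5.1 is a one-liner. A sketch under the example's own Blasius estimate (with Re_x = u_m x/ν and Re_D = u_m D/ν; the function name is illustrative):

```python
# Eq. (5.35): L_H/D = 0.1*sqrt(Re_x) evaluated at x = L_H, where
# Re_x = Re_D*(L_H/D). Solving the implicit relation gives
# L_H/D = 0.01*Re_D. (This Blasius-based value is only an estimate;
# more detailed entry-length analyses give a larger constant.)
def hydrodynamic_entry_length_over_D(Re_D):
    return 0.01 * Re_D

LH_D = hydrodynamic_entry_length_over_D(2000.0)   # upper laminar range
# self-consistency check against eq. (5.35):
assert abs(LH_D - 0.1 * (2000.0 * LH_D) ** 0.5) < 1e-9
```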
5.3 Hydrodynamically and Thermally Fully Developed Laminar Flow

Constant Wall Heat Flux

In this section, we consider the case of fully developed laminar flow with constant properties in a circular tube, with fully developed temperature and concentration profiles. We first consider the case of constant heat rate per unit surface area for steady, laminar, fully developed flow. The energy equation in a circular tube, neglecting the axial heat conduction and viscous dissipation terms, is

u\frac{\partial T}{\partial x} = \alpha\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right)   (5.36)

For a fully developed flow with constant wall heat flux, eq. (5.20) can be substituted into eq. (5.36) to obtain

u\frac{dT_m}{dx} = \alpha\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right)   (5.37)

The boundary conditions are

k\frac{\partial T}{\partial r} = q''_w \ \text{at } r = r_o, \qquad \frac{\partial T}{\partial r} = 0 \ \text{at } r = 0   (5.38)

Integrating eq. (5.37) twice and applying the boundary conditions in eq. (5.38) gives the temperature distribution

T = T_w - \frac{2u_m}{\alpha}\frac{dT_m}{dx}\left(\frac{3r_o^2}{16} - \frac{r^2}{4} + \frac{r^4}{16 r_o^2}\right)   (5.39)

Using the definition of the mean temperature presented in the last section with the above temperature profile, and assuming constant properties,

T_m = \frac{\int_A uT\,dA}{\int_A u\,dA} = \frac{2}{u_m r_o^2}\int_0^{r_o} ruT\,dr

Substituting eq. (5.39) into the above expression yields

T_m = T_w - \frac{11}{96}\frac{2u_m}{\alpha}\frac{dT_m}{dx}r_o^2   (5.40)

The heat flux at the wall can be obtained using the above relation for T_m:

q''_w = h\left(T_w - T_m\right) = h\,\frac{11}{96}\frac{2u_m}{\alpha}\frac{dT_m}{dx}r_o^2   (5.41)

The heat flux at the wall can also be calculated using eq. (5.39) for the temperature profile and Fourier's law of heat conduction:

q''_w = k\left.\frac{\partial T}{\partial r}\right|_{r=r_o} = \rho c_p\frac{u_m r_o}{2}\frac{dT_m}{dx}   (5.42)

Combining eqs. (5.41) and (5.42) and solving for the heat transfer coefficient, h, yields

h = \frac{48}{11}\frac{k}{D} = 4.364\,\frac{k}{D}

or, in terms of the Nusselt number,

Nu = 4.364   (5.43)
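The result above can be cross-checked numerically by integrating the profile of eq. (5.39) directly. A sketch, with all dimensional constants set to one (r_o = k = 1 and (2u_m/α)(dT_m/dx) = 1; Nu does not depend on these choices):

```python
# Numerical check of eqs. (5.39)-(5.43).
def nu_constant_heat_flux(n=100_000):
    phi = lambda r: 3.0/16.0 + r**4/16.0 - r**2/4.0   # (Tw - T), eq. (5.39)
    u_plus = lambda r: 2.0*(1.0 - r**2)               # u/u_m, eq. (5.26)
    h = 1.0/n
    # mixed-mean deficit Tw - Tm = int_0^1 (u/u_m)*phi*2r dr (midpoint rule)
    tw_tm = sum(u_plus(r)*phi(r)*2.0*r*h for r in (h*(i + 0.5) for i in range(n)))
    qw = 0.25     # k*dT/dr at r = 1 for these constants, eq. (5.42)
    return qw/tw_tm*2.0   # Nu = h*D/k with D = 2

Nu = nu_constant_heat_flux()   # -> 4.3636... = 48/11
```

The midpoint-rule integral reproduces T_w − T_m = 11/96 and hence Nu = 48/11 ≈ 4.364.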
Example 5.2: Make an energy balance on a control volume element in a circular tube with constant wall heat flux, for the fully developed flow and temperature profile region, to prove eq. (5.42).

Solution: Applying conservation of energy to the control volume element shown in Fig. 5.5 yields

q''_w\left(2\pi r_o\right) = \left(\rho c_p\right)\left(\pi r_o^2\right)u_m\frac{dT_m}{dx}

Solving for the heat flux at the wall,

q''_w = \rho c_p\frac{r_o}{2}u_m\frac{dT_m}{dx}

Figure 5.5 Energy balance on a control volume element for fully developed flow and temperature profile in a circular tube with constant wall heat flux

Constant Surface Temperature

We begin with the energy eq. (5.31) and neglect the effects of axial heat conduction and viscous dissipation. We already showed that, for a fully developed flow and temperature profile with constant surface temperature,

\frac{\partial T}{\partial x} = \frac{T_w - T}{T_w - T_m}\frac{dT_m}{dx}

The energy equation and boundary conditions, using the fully developed velocity profile of eq. (5.26) and the above equation, are

2u_m\left(1 - \frac{r^2}{r_o^2}\right)\frac{T_w - T}{T_w - T_m}\frac{dT_m}{dx} = \alpha\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right)   (5.44)

r = r_o:\ T = T_w; \qquad r = 0:\ \frac{\partial T}{\partial r} = 0 \ \text{or}\ T\ \text{finite}   (5.45)
The above equation and boundary conditions have been solved by various techniques in the literature, including separation of variables and infinite series. For additional detail the reader should refer to Burmeister (1993), Kakac et al. (1987), Kays et al. (2005), and Bejan (2004). The solution of eqs. (5.44) and (5.45) in the form of an infinite series for the temperature is (Kakac et al., 1987)

\frac{T - T_w}{T_{in} - T_w} = \sum_{m=0}^{\infty} c_{2m}\left(\frac{r}{r_o}\right)^{2m}   (5.46)

where

c_0 = 1, \quad c_2 = -\frac{\lambda_0^2}{4} = -1.828397, \quad c_{2m} = \frac{\lambda_0^2}{(2m)^2}\left(c_{2m-4} - c_{2m-2}\right), \quad \lambda_0 = 2.704364

The Nusselt number corresponding to the above temperature distribution is

Nu = \frac{\lambda_0^2}{2} = 3.657   (5.47)

The temperature gradient at the wall is steeper for constant wall heat flux than for constant surface temperature. As a result, the fully developed Nusselt number for constant wall temperature is about 16 percent lower than that for constant wall heat flux.
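The eigenvalue λ₀ in eqs. (5.46) and (5.47) can be recovered from the series itself: at the wall the series must vanish, R(1) = Σ c_{2m} = 0, which fixes λ₀. A sketch (the function names are illustrative):

```python
# Wall value of the series in eq. (5.46) for a trial eigenvalue lam,
# using the recursion c_{2m} = lam^2/(2m)^2 * (c_{2m-4} - c_{2m-2}).
def series_at_wall(lam, terms=80):
    c = [1.0]                       # c_0 = 1
    for m in range(1, terms):
        prev2 = c[m-2] if m >= 2 else 0.0
        c.append(lam**2/(2*m)**2 * (prev2 - c[m-1]))
    return sum(c)

# bisection on the sign change of R(1) for the first eigenvalue
lo, hi = 2.0, 3.5
for _ in range(80):
    mid = 0.5*(lo + hi)
    if series_at_wall(lo)*series_at_wall(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
lam0 = 0.5*(lo + hi)
Nu = lam0**2/2.0
```

Bisection recovers λ₀ ≈ 2.7044 and Nu = λ₀²/2 ≈ 3.657, matching eq. (5.47).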
The two cases of constant wall temperature and constant wall heat flux are special cases of a more general exponential heat flux boundary condition,

q''_w = A\exp\left(\tfrac{1}{2}nx^+\right)   (5.48)

where A and n are constants, n may be positive or negative, and x^+ = (x/r_o)/(Re\,Pr). Here n = 0 corresponds to constant heat flux at the wall, and n = −14.63 corresponds to constant wall temperature. Shah and London (1978) developed the following correlation, which fits the exact solution of eq. (5.48) within 3 percent for −51.36 < n < 100:

Nu = 4.3573 + 0.0424n - 2.8368\times10^{-4}n^2 + 3.6250\times10^{-6}n^3 - 7.6497\times10^{-8}n^4 + 9.1222\times10^{-10}n^5 - 3.8446\times10^{-12}n^6   (5.49)
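Evaluating the polynomial at the two special values of n recovers the two limiting Nusselt numbers. A sketch (the function name is illustrative):

```python
# Eq. (5.49): Shah and London's fit for the exponential wall heat flux
# parameter n, valid for -51.36 < n < 100.
def nu_exponential(n):
    c = [4.3573, 0.0424, -2.8368e-4, 3.6250e-6,
         -7.6497e-8, 9.1222e-10, -3.8446e-12]
    return sum(ck * n**k for k, ck in enumerate(c))

nu_H = nu_exponential(0.0)       # constant heat flux: ~4.36
nu_T = nu_exponential(-14.63)    # constant wall temperature: ~3.66
```

n = 0 gives 4.3573 (vs. the exact 4.364) and n = −14.63 gives about 3.66 (vs. the exact 3.657), both within the stated tolerance.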
Example 5.3: When Pr = ν/α ≪ 1, it is possible to assume a uniform velocity in the radial direction within a short distance from the inlet, since the diffusion of momentum is much slower than the diffusion of heat. This is referred to in the literature as a "slug flow" condition. Determine the Nusselt number for slug flow with a fully developed temperature profile in a circular pipe with uniform heat flux at the wall.

Solution: The energy equation and boundary conditions for this case are obtained by setting u = u_m = constant and ∂T/∂x = dT_w/dx = dT_m/dx = constant:

u_m\frac{dT_w}{dx} = \alpha\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right)   (5.50)

r = 0:\ \frac{\partial T}{\partial r} = 0; \qquad r = r_o:\ T = T_w

Integrating eq. (5.50) twice with respect to r and applying the above boundary conditions yields

T = T_w - \frac{u_m r_o^2}{4\alpha}\frac{dT_w}{dx}\left[1 - \left(\frac{r}{r_o}\right)^2\right]   (5.51)

Both the heat flux at the wall and the mean temperature are obtained from the above equation:

q''_w = k\left.\frac{dT}{dr}\right|_{r=r_o} = \frac{k u_m r_o}{2\alpha}\frac{dT_w}{dx}   (5.52)

T_m = \frac{2}{r_o^2}\int_0^{r_o} rT\,dr = T_w - \frac{q''_w r_o}{4k}   (5.53)

Using the above equation and the definition of the Nusselt number,

Nu = \frac{hD}{k} = \frac{q''_w D}{k\left(T_w - T_m\right)} = 8   (5.54)

As one would expect, the Nusselt number for slug flow is higher than that for a parabolic velocity profile, since the velocity near the wall is higher for slug flow than for the parabolic profile.
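Example 5.3 can be verified numerically in the same way as the parabolic-profile case. A sketch, with r_o = k = 1 and the profile constant (u_m r_o²/4α)(dT_w/dx) set to one (Nu is independent of these choices):

```python
# Check of Example 5.3 (slug flow, constant q''_w). With the constants
# above, eq. (5.51) gives Tw - T = 1 - r**2 and u/u_m = 1.
def nu_slug(n=100_000):
    h = 1.0/n
    # mixed-mean deficit Tw - Tm with uniform velocity (midpoint rule)
    tw_tm = sum((1.0 - r**2)*2.0*r*h for r in (h*(i + 0.5) for i in range(n)))
    qw = 2.0          # k*dT/dr at the wall for these constants, eq. (5.52)
    return qw/tw_tm*2.0   # Nu = h*D/k, D = 2

Nu = nu_slug()   # -> 8.0
```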
The problem of fluid flow and heat transfer in an annulus (see Fig. 5.6) is also of considerable interest in various applications, including heat exchangers and heat pipes, because of the increased surface area and the added flexibility in heating and cooling. We present results below for the case in which both the inner wall (radius r_i) and the outer wall (radius r_o) are kept at constant heat flux, for various values of K = r_i/r_o. K = 0 corresponds to a conventional circular tube, and K = 1 corresponds to flow between two parallel planes.

Figure 5.6 Flow and heat transfer in an annulus

The momentum and energy equations, as well as the boundary conditions, for steady, laminar, fully developed flow and temperature profile (neglecting radial conduction in the wall, axial heat conduction in the fluid, and viscous dissipation, and assuming constant properties) are

\frac{dp}{dx} = \frac{\mu}{r}\frac{d}{dr}\left(r\frac{du}{dr}\right)   (5.55)

u\frac{dT_m}{dx} = \alpha\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right)   (5.56)

r = r_i:\ u = 0, \quad -k\left.\frac{\partial T}{\partial r}\right|_{r=r_i} = q''_i   (5.57)

r = r_o:\ u = 0, \quad k\left.\frac{\partial T}{\partial r}\right|_{r=r_o} = q''_o   (5.58)

where q''_i and q''_o are the inner and outer wall heat fluxes, respectively. The velocity profile is obtained by integrating eq. (5.55) twice and applying the no-slip boundary conditions at both walls:

\frac{u}{u_m} = \frac{2}{A}\left[1 - \left(\frac{r}{r_o}\right)^2 + B\ln\frac{r}{r_o}\right]   (5.59)

where

A = 1 + K^2 - B, \quad B = \frac{K^2 - 1}{\ln K}, \quad u_m = \frac{2}{r_o^2 - r_i^2}\int_{r_i}^{r_o} ur\,dr

The local heat transfer coefficients and Nusselt numbers are

h_i = \frac{q''_i}{T_i - T_m}, \quad h_o = \frac{q''_o}{T_o - T_m}, \quad Nu_i = \frac{h_i D_h}{k}, \quad Nu_o = \frac{h_o D_h}{k}

where the hydraulic diameter is D_h = 2(r_o − r_i). The energy equation (5.56) can be solved with the velocity profile of eq. (5.59) in a manner similar to that presented for a circular tube, although the algebra is lengthy. Since the energy equation (5.56) is linear and homogeneous, the principle of superposition can be used to obtain the solution as the sum of two subproblems: one with the outer wall heated uniformly and the inner wall insulated, and one with the inner wall heated uniformly and the outer wall insulated. The principle of superposition may be applied to linear homogeneous differential equations as long as the governing equations and boundary conditions of the subset problems add up to those of the original problem. Kakaç and Yucel (1974) performed a numerical solution for the fully developed velocity and temperature profiles; Table 5.1 shows the inner and outer wall Nusselt numbers for various values of K obtained by Kakaç and Yucel (1974).

The inner and outer wall Nusselt numbers for an arbitrary wall heat flux ratio can be calculated from the results in Table 5.1 using the following equations:

Nu_i = \frac{Nu_{ii}}{1 - \left(q''_o/q''_i\right)\theta_i^*}   (5.60)

Nu_o = \frac{Nu_{oo}}{1 - \left(q''_i/q''_o\right)\theta_o^*}   (5.61)

where Nu_{ii} is the inner wall Nusselt number when the inner wall is heated and the outer wall is insulated; similarly, Nu_{oo} is the outer wall Nusselt number when the outer wall is heated and the inner wall is insulated. θ_i* and θ_o* are influence coefficients, which for laminar flow are functions of K only.

Table 5.1 Nusselt number for fully developed flow and temperature profile with constant wall heat flux in an annulus, K = r_i/r_o (Kakaç and Yucel, 1974)

K       Nu_ii     Nu_oo    θ_i*      θ_o*
0       ∞         4.364    ∞         0
0.10    11.900    4.834    1.3835    0.0562
0.25    7.735     4.904    0.7932    0.1250
0.50    6.181     5.036    0.5288    0.2160
1.00    5.384     5.384    0.3460    0.3460
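Equations (5.60) and (5.61) together with Table 5.1 make a small calculator. A sketch (the function name is illustrative; K values outside the tabulated set would need interpolation, and an insulated wall, flux ratio zero, reduces directly to Nu_ii or Nu_oo):

```python
# Annulus with both walls at constant heat flux, eqs. (5.60)-(5.61)
# with the data of Table 5.1 (Kakac and Yucel, 1974).
TABLE_5_1 = {   # K: (Nu_ii, Nu_oo, theta_i*, theta_o*)
    0.10: (11.900, 4.834, 1.3835, 0.0562),
    0.25: (7.735, 4.904, 0.7932, 0.1250),
    0.50: (6.181, 5.036, 0.5288, 0.2160),
    1.00: (5.384, 5.384, 0.3460, 0.3460),
}

def annulus_nu(K, qo_over_qi):
    nu_ii, nu_oo, th_i, th_o = TABLE_5_1[K]
    nu_i = nu_ii / (1.0 - qo_over_qi * th_i)    # eq. (5.60)
    nu_o = nu_oo / (1.0 - th_o / qo_over_qi)    # eq. (5.61)
    return nu_i, nu_o

# parallel-plate limit (K = 1) with equal fluxes reproduces Example 5.4
nu_i, nu_o = annulus_nu(1.00, 1.0)
```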
Example 5.4: Determine the Nusselt number for a fully developed flow and temperature profile between two parallel planes when both surfaces are heated with equal, constant heat fluxes, using the results presented in Table 5.1.

Solution: As indicated before, flow and heat transfer between parallel planes is the limiting case of an annulus with K = 1. In this case, eqs. (5.60) and (5.61) are identical:

Nu_i = Nu_o = \frac{Nu_{oo}}{1 - \theta_o^*} = \frac{5.384}{1 - 0.346} = 8.23

The fluid flow and heat transfer solution for tubes of noncircular cross section with a fully developed flow and temperature profile can be obtained by solving the momentum and energy equations for the particular geometry. Results for some of the more common geometries, obtained by Shah and London (1974), are presented in Table 5.2. Three different boundary conditions appear in this table: "H1" refers to circumferentially constant wall temperature with axially constant wall heat flux; "H2" is for constant heat flux at the wall both axially and circumferentially; and "T" represents a constant wall temperature boundary condition. For a symmetrically heated straight duct with no corners and constant peripheral curvature, e.g., parallel planes or a circular pipe, the H1 and H2 boundary conditions are the same.
Table 5.2 Nusselt number and friction coefficient solutions for fully developed laminar flow and temperature profile in ducts of various cross sections, L/D_h > 100 (Shah and London, 1974). The duct cross sections appear as sketches in the original table; the aspect-ratio labels b/a are reproduced where legible.

Geometry                         Nu_H1†   Nu_H2†   Nu_T    c_f·Re
Triangular, b/a = √3/2           3.014    1.474    2.39*   12.630
Triangular (60°), b/a = √3/2     3.111    1.892    2.47    13.333
Square, b/a = 1                  3.608    3.091    2.976   14.227
Hexagonal                        4.002    3.862    3.34*   15.054
Rectangular, b/a = 1/2           4.123    3.017    3.391   15.548
Circular                         4.364    4.364    3.657   16.000
b/a = 0.9                        5.099    4.35*    3.66    18.700
Rectangular, b/a = 1/4           5.331    2.930    4.439   18.233
Rectangular, b/a = 1/8           6.490    2.904    5.597   20.585
Parallel planes, b/a = 0         8.235    8.235    7.541   24.00

* Interpolated values.  † Nusselt numbers averaged with respect to tube periphery.
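Table 5.2 is used in practice to size noncircular ducts. A sketch for the square-duct row (the function name and the air-like property values are illustrative assumptions, not from the text; note c_f·Re is tabulated, and f = 4c_f):

```python
# Square duct, fully developed laminar flow: heat transfer coefficients
# and pressure gradient from the Table 5.2 entries.
NU_H1, NU_T, CF_RE = 3.608, 2.976, 14.227   # square-duct row

def square_duct(k_fluid, D_h, rho, u_m, mu):
    h_H1 = NU_H1 * k_fluid / D_h               # axially constant q'' (H1)
    h_T = NU_T * k_fluid / D_h                 # constant wall temperature
    Re = rho * u_m * D_h / mu
    f = 4.0 * CF_RE / Re                       # friction factor, f = 4*c_f
    dpdx = -f * rho * u_m**2 / (2.0 * D_h)
    return h_H1, h_T, dpdx

h_H1, h_T, dpdx = square_duct(k_fluid=0.026, D_h=0.01, rho=1.2, u_m=1.0, mu=1.8e-5)
```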
5.4 Hydrodynamically Fully Developed and Thermally Developing Laminar Flow

In the previous section, we considered problems in which the velocity and temperature profiles were both fully developed, so that the heat transfer coefficient was constant with distance along the pipe. In this section, we consider problems in which only the velocity is fully developed at the point where heat transfer starts. As before, we consider the two cases of constant wall temperature and constant wall heat flux, both with a uniform temperature at the inlet. Under these conditions, the heat transfer coefficient is not constant but varies along the tube. Whiteman and Drake (1980), Lyche and Bird (1956) and Blackwell (1985) studied the case of fully developed flow with thermal entry effects for non-Newtonian fluids. Sellars et al. (1956) obtained thermal entry length solutions for the case of a Newtonian fluid with constant wall temperature and fully developed flow, which are presented below. The following assumptions are made in order to obtain a closed-form solution for fully developed flow with a developing temperature profile in a circular tube:
1. Incompressible Newtonian fluid
2. Laminar flow
3. Two-dimensional steady state
4. Axial heat conduction and viscous dissipation are neglected
5. Constant properties
This does not mean that analytical solutions cannot be obtained when one or more of the above assumptions is relaxed, but the solution is much simpler with them in place. Since the fully developed velocity profile was already obtained in Section 5.2, we focus on the solution of the energy equation and its boundary conditions for a developing temperature profile.

5.4.1 Constant Wall Temperature

The dimensionless energy eq. (5.33) and boundary conditions, using the above assumptions for the case of constant wall temperature, reduce to

\frac{u^+}{2}\frac{\partial\theta}{\partial x^+} = \frac{1}{r^+}\frac{\partial}{\partial r^+}\left(r^+\frac{\partial\theta}{\partial r^+}\right)   (5.62)

\theta(1, x^+) = 0; \qquad \theta(0, x^+)\ \text{finite, or}\ \frac{\partial\theta(0, x^+)}{\partial r^+} = 0; \qquad \theta(r^+, 0) = 1   (5.63)

where

r^+ = \frac{r}{r_o}, \quad \theta = \frac{T - T_w}{T_{in} - T_w}, \quad u^+ = \frac{u}{u_m}, \quad x^+ = \frac{x/r_o}{Re\,Pr}

For fully developed laminar flow, the parabolic velocity profile developed previously applies, i.e.,

u = 2u_m\left(1 - \frac{r^2}{r_o^2}\right) \quad \text{or} \quad u^+ = 2\left(1 - r^{+2}\right)   (5.64)

Substituting the above equation into the energy eq. (5.62), we get

\left(1 - r^{+2}\right)\frac{\partial\theta}{\partial x^+} = \frac{\partial^2\theta}{\partial r^{+2}} + \frac{1}{r^+}\frac{\partial\theta}{\partial r^+}
Since the above partial differential equation is linear and homogeneous, the method of separation of variables can be applied. The solution is assumed to have the form

\theta(r^+, x^+) = R(r^+)X(x^+)   (5.65)

Substitution of the above equation into eq. (5.64) yields two ordinary differential equations:

X' + \lambda^2 X = 0   (5.66)

R'' + \frac{1}{r^+}R' + \lambda^2\left(1 - r^{+2}\right)R = 0   (5.67)

where

X' = \frac{dX}{dx^+}, \quad R' = \frac{dR}{dr^+}, \quad R'' = \frac{d^2R}{dr^{+2}}

and −λ² is the separation constant, λ being the eigenvalue. The solution of eq. (5.66) is a simple exponential of the form e^{−λ²x⁺}, while eq. (5.67) is solved by an infinite series in accordance with Sturm-Liouville theory. The solution has the form

\theta(r^+, x^+) = \sum_{n=0}^{\infty} c_n R_n(r^+)\exp\left(-\lambda_n^2 x^+\right)   (5.68)

where λ_n are the eigenvalues, R_n the eigenfunctions of eq. (5.67), and c_n constants. The local heat flux, dimensionless mean temperature, local Nusselt number and mean Nusselt number follow from the above temperature distribution:

q''_w = -k\left.\frac{\partial T}{\partial r}\right|_{r=r_o} = -k\frac{T_{in} - T_w}{r_o}\left.\frac{\partial\theta}{\partial r^+}\right|_{r^+=1} = \frac{2k}{r_o}\left(T_{in} - T_w\right)\sum_{n=0}^{\infty} G_n\exp\left(-\lambda_n^2 x^+\right)   (5.69)

\theta_m = \frac{T_m - T_w}{T_{in} - T_w} = 8\sum_{n=0}^{\infty} G_n\frac{\exp\left(-\lambda_n^2 x^+\right)}{\lambda_n^2}   (5.70)

Nu_x = \frac{h_x\left(2r_o\right)}{k} = -\frac{2}{\theta_m}\left.\frac{\partial\theta}{\partial r^+}\right|_{r^+=1} = \frac{\sum_{n=0}^{\infty} G_n\exp\left(-\lambda_n^2 x^+\right)}{2\sum_{n=0}^{\infty} G_n\exp\left(-\lambda_n^2 x^+\right)/\lambda_n^2}   (5.71)

Nu_m = \frac{1}{x^+}\int_0^{x^+} Nu_x\,dx^+ = -\frac{1}{2x^+}\ln\left[8\sum_{n=0}^{\infty}\frac{G_n\exp\left(-\lambda_n^2 x^+\right)}{\lambda_n^2}\right]   (5.72)

where

G_n = -\frac{c_n}{2}R'_n(1)

The first five terms of the series in eqs. (5.69) - (5.72) are sufficient for accurate results. The eigenvalues λ_n and constants G_n used to calculate q''_w, θ_m, Nu_x and Nu_m for the above problem are presented in Table 5.3.
Table 5.3 Eigenvalues and constants for a circular duct: thermal entry effect with fully developed laminar flow and constant wall temperature (Blackwell, 1985)

n    λ_n²/2     G_n
0    3.656      0.749
1    22.31      0.544
2    56.9       0.463
3    107.6      0.414
4    174.25     0.383

Table 5.4 Nusselt number solution for the thermal entry effect in a circular tube with fully developed laminar flow and constant wall temperature (Blackwell, 1985)

x⁺       Nu_x     Nu_m     θ_m
0        ∞        ∞        1
0.001    10.1     15.4     0.940
0.004    8.06     12.2     0.907
0.01     6.00     8.94     0.836
0.04     4.17     5.82     0.628
0.08     3.79     4.89     0.457
0.1      3.71     4.64     0.395
0.2      3.658    4.16     0.190
∞        3.657    3.657    0

Table 5.4 provides the variations of Nu_x, Nu_m and θ_m with distance along the tube.
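The entries of Table 5.4 can be regenerated from eqs. (5.70) - (5.72) with the five pairs (λ_n², G_n) of Table 5.3. A sketch (five terms suffice except very near x⁺ = 0, where the truncated series loses accuracy):

```python
import math

# Table 5.3 data: lambda_n^2 (table lists lambda_n^2 / 2) and G_n
LAM2 = [2.0*v for v in (3.656, 22.31, 56.9, 107.6, 174.25)]
G = [0.749, 0.544, 0.463, 0.414, 0.383]

def theta_m(xp):                       # eq. (5.70)
    return 8.0*sum(g/l2*math.exp(-l2*xp) for g, l2 in zip(G, LAM2))

def nu_x(xp):                          # eq. (5.71)
    num = sum(g*math.exp(-l2*xp) for g, l2 in zip(G, LAM2))
    den = 2.0*sum(g/l2*math.exp(-l2*xp) for g, l2 in zip(G, LAM2))
    return num/den

def nu_m(xp):                          # eq. (5.72)
    return -math.log(theta_m(xp))/(2.0*xp)
```

At x⁺ = 0.1 this gives Nu_x ≈ 3.71, Nu_m ≈ 4.64 and θ_m ≈ 0.395, matching the Table 5.4 row.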
It can be seen from Table 5.4 that the fully developed temperature profile is reached at approximately

x^+ = \frac{x/r_o}{Re\,Pr} = 0.1   (5.73)

Therefore L_{T,T}/D = 0.05\,Re\,Pr, where L_{T,T} is the thermal entrance length for constant wall temperature. The thermal entry length increases as the Reynolds and Prandtl numbers increase. A very long thermal entry length is needed for fluids with a high Prandtl number, such as oils, so care should be taken before assuming a fully developed temperature profile for high-Prandtl-number fluids.

5.4.2 Constant Heat Flux at the Wall

Laminar fully developed flow with thermal entry effects (developing
temperature profile) for constant wall heat flux is treated very similarly to the constant wall temperature case, except that the dimensionless temperature and the wall boundary condition are defined as

\theta = \frac{T - T_{in}}{q''_w D/k}   (5.74)

q''_w = k\left.\frac{\partial T}{\partial r}\right|_{r=r_o} = \text{constant}   (5.75)

Siegel et al. (1958) solved this problem for laminar fully developed flow using separation of variables and Sturm-Liouville theory; the result is

\theta = \theta^*\left(r^+, x^+\right) + 4x^+ + r^{+2} - \frac{r^{+4}}{4} - \frac{7}{24}   (5.76)

where

\theta^*\left(r^+, x^+\right) = \sum_{n=1}^{\infty} c_n R_n(r^+)\exp\left(-\lambda_n^2 x^+\right)   (5.77)

The eigenvalues λ_n, the wall values of the eigenfunctions R_n(1), and the constants c_n are presented in Table 5.5.

Table 5.5 Eigenvalues and eigenfunctions for the thermal entry effect in a circular tube with fully developed laminar flow and constant wall heat flux (Siegel et al., 1958)

n    λ_n²        R_n(1)        c_n
1    25.6796     −0.492517     0.403483
2    83.8618     0.395508      −0.175111
3    174.167     −0.345872     0.105594
4    296.536     0.314047      −0.0732804
5    450.947     −0.291252     0.0550357
6    637.387     0.273808      −0.043483
7    855.850     −0.259852     0.035597

Table 5.6 Local Nusselt number for the thermal entry effect in a circular tube with fully developed flow and constant wall heat flux (Siegel et al., 1958)

x⁺        Nu_x
0         ∞
0.0025    11.5
0.005     9.0
0.01      7.5
0.02      6.1
0.05      5.0
0.1       4.5
0.2       4.364
∞         4.364
The local Nusselt number based on the above solution is given below, with numerical values presented in Table 5.6:

Nu_x = \frac{48/11}{1 + \dfrac{24}{11}\displaystyle\sum_{n=1}^{\infty} c_n R_n(1)\exp\left(-\lambda_n^2 x^+\right)}   (5.78)

The thermal entrance length for constant wall heat flux, based on the numerical results in Table 5.6, corresponds to x⁺ ≈ 0.1, or

L_{T,H} = 0.05\,Re\,Pr\,D   (5.79)

where L_{T,H} is the thermal entry length for fully developed flow with constant wall heat flux. The mean temperature variation can be obtained from the Nusselt number, eq. (5.78), using

T_w - T_m = \frac{q''_w}{h_x} = \frac{q''_w D}{Nu_x k}   (5.80)
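Equation (5.78) with the Table 5.5 data reproduces Table 5.6 directly. A sketch (seven terms; accuracy degrades only very near x⁺ = 0):

```python
import math

# Table 5.5 data (Siegel et al., 1958)
LAM2 = [25.6796, 83.8618, 174.167, 296.536, 450.947, 637.387, 855.850]
RN1 = [-0.492517, 0.395508, -0.345872, 0.314047, -0.291252, 0.273808, -0.259852]
CN = [0.403483, -0.175111, 0.105594, -0.0732804, 0.0550357, -0.043483, 0.035597]

def nu_x(xp):
    # eq. (5.78): Nu_x = (48/11) / (1 + (24/11)*sum c_n R_n(1) exp(-lam_n^2 x+))
    s = sum(c*r*math.exp(-l2*xp) for c, r, l2 in zip(CN, RN1, LAM2))
    return (48.0/11.0)/(1.0 + (24.0/11.0)*s)
```

At x⁺ = 0.1 this gives Nu_x ≈ 4.51 and at x⁺ = 0.02 about 6.1, matching Table 5.6; for large x⁺ it returns to the fully developed value 48/11 = 4.364.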
The thermal entry length solutions presented above for hydrodynamically fully developed flow are based on either a constant wall temperature or a constant wall heat flux. Although the wall temperature or heat flux may reasonably be assumed constant along the tube in the entrance region for certain internal convection problems, there are cases in which the wall temperature or heat flux varies considerably along the tube. In such cases, a choice must be made: assume constant wall temperature or constant heat flux, use a mean wall temperature or heat flux averaged over the length, or attempt to account for the variation directly. If the latter is desired, the solution can be obtained by superposing the thermal entry length solutions for infinitesimal and finite surface-temperature steps (Kays et al., 2005).

5.5 Hydrodynamically Fully Developed Flow with Coupled Thermal and Concentration Entry Effects

There are many transport phenomena problems in
which heat and mass transfer occur simultaneously. In some cases, such as sublimation and vapor deposition, the two are coupled. These problems are usually modeled as single phase, but the coupling between heat and mass transfer must still be accounted for. In this section, coupled forced internal convection in a circular tube is presented for both adiabatic and constant-wall-heat-flux conditions.

5.5.1 Sublimation inside an Adiabatic Tube

In addition to the external sublimation discussed in subsection 5.6.2, internal sublimation is also very important. Sublimation inside an adiabatic tube and inside an externally heated tube will be analyzed in the current and the following subsections, respectively. The physical model of the problem under consideration is shown in Fig. 5.7 (Zhang and Chen, 1990). The inner surface of a circular tube of radius r_o is coated with a layer of sublimable material, which sublimes as gas flows through the tube. The fully developed gas flow enters the tube with a uniform inlet mass fraction of the sublimable substance, ω_0, and a uniform inlet temperature, T_0. Since the outer wall surface is adiabatic, the latent heat of sublimation is supplied by the gas flow inside the tube; this in turn changes the gas temperature along the tube. The flow inside the tube is assumed to be incompressible and laminar, with constant properties. In order to solve the problem analytically, the following assumptions are made:
1. The entrance mass fraction, ω_0, is equal to the saturation mass fraction at the inlet temperature, T_0.
2. The saturation mass fraction can be expressed as a linear function of the corresponding temperature.
3. The mass transfer rate is low enough that the transverse velocity components can be neglected.
The fully developed velocity profile in the tube is

u = 2u_m\left[1 - \left(\frac{r}{r_o}\right)^2\right]   (5.81)

where u_m is the mean velocity of the gas flow inside the tube. Neglecting axial conduction and diffusion, the energy and mass transfer equations are

ur\frac{\partial T}{\partial x} = \alpha\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right)   (5.82)

ur\frac{\partial\omega}{\partial x} = D\frac{\partial}{\partial r}\left(r\frac{\partial\omega}{\partial r}\right)   (5.83)

where D is the mass diffusivity. Equations (5.82) and (5.83) are subject to the following boundary conditions:

T = T_0, \quad x = 0   (5.84)

\omega = \omega_0, \quad x = 0   (5.85)

Figure 5.7 Sublimation in an adiabatic tube.
\frac{\partial T}{\partial r} = \frac{\partial\omega}{\partial r} = 0, \quad r = 0   (5.86)

-k\frac{\partial T}{\partial r} = \rho D h_{sv}\frac{\partial\omega}{\partial r}, \quad r = r_o   (5.87)

Equation (5.87) states that the latent heat of sublimation is supplied by the gas flowing inside the tube. Another boundary condition at the tube wall is obtained by setting the mass fraction at the wall equal to the saturation mass fraction at the wall temperature (Kurosaki, 1973). According to the second assumption, the mass fraction and temperature at the inner wall are related by

\omega = aT + b, \quad r = r_o   (5.88)

where a and b are constants. The following dimensionless variables are then introduced:

\eta = \frac{r}{r_o}, \quad \xi = \frac{x/r_o}{Pe}, \quad Le = \frac{\alpha}{D}, \quad Re = \frac{2u_m r_o}{\nu}, \quad Pe = \frac{2u_m r_o}{\alpha}, \quad \theta = \frac{T - T_f}{T_0 - T_f}, \quad \varphi = \frac{\omega - \omega_f}{\omega_0 - \omega_f}   (5.89)

where T_f and ω_f are the temperature and mass fraction of the sublimable substance after heat and mass transfer become fully developed, and Le is the Lewis number. Equations (5.82) - (5.88) then become

\eta\left(1 - \eta^2\right)\frac{\partial\theta}{\partial\xi} = \frac{\partial}{\partial\eta}\left(\eta\frac{\partial\theta}{\partial\eta}\right)   (5.90)

\eta\left(1 - \eta^2\right)\frac{\partial\varphi}{\partial\xi} = \frac{1}{Le}\frac{\partial}{\partial\eta}\left(\eta\frac{\partial\varphi}{\partial\eta}\right)   (5.91)

\theta = \varphi = 1, \quad \xi = 0   (5.92)

\frac{\partial\theta}{\partial\eta} = \frac{\partial\varphi}{\partial\eta} = 0, \quad \eta = 0   (5.93)

-\frac{\partial\theta}{\partial\eta} = \frac{1}{Le}\frac{\partial\varphi}{\partial\eta}, \quad \eta = 1   (5.94)

\varphi = \frac{ah_{sv}}{c_p}\theta, \quad \eta = 1   (5.95)

The heat and mass transfer equations (5.90) and (5.91) are independent, but their boundary conditions are coupled through eqs. (5.94) and (5.95). The solution of eqs. (5.90) and (5.91) can be obtained by separation of variables. It is assumed that θ can be expressed as the product of a function of η and a function of ξ, i.e.,

\theta = \Theta(\eta)\Gamma(\xi)   (5.96)

Substituting eq. (5.96) into eq. (5.90), the energy equation becomes

\frac{\Gamma'}{\Gamma} = \frac{\dfrac{d}{d\eta}\left(\eta\dfrac{d\Theta}{d\eta}\right)}{\eta\left(1 - \eta^2\right)\Theta} = -\beta^2   (5.97)

where β is the eigenvalue for the energy equation. Equation (5.97) can be rewritten as two ordinary differential equations:

\Gamma' + \beta^2\Gamma = 0   (5.98)

\frac{d}{d\eta}\left(\eta\frac{d\Theta}{d\eta}\right) + \beta^2\eta\left(1 - \eta^2\right)\Theta = 0   (5.99)

The solution of eq. (5.98) is

\Gamma = C_1 e^{-\beta^2\xi}   (5.100)

The boundary condition for eq. (5.99) at η = 0 is

\Theta'(0) = 0   (5.101)

The dimensionless temperature is then

\theta = C_1\Theta(\eta)e^{-\beta^2\xi}   (5.102)

Similarly, the
dimensionless mass fraction is

\varphi = C_2\Phi(\eta)e^{-\gamma^2\xi}   (5.103)

where γ is the eigenvalue for the conservation of species equation and Φ(η) satisfies

\frac{d}{d\eta}\left(\eta\frac{d\Phi}{d\eta}\right) + Le\,\gamma^2\eta\left(1 - \eta^2\right)\Phi = 0   (5.104)

with the boundary condition at η = 0

\Phi'(0) = 0   (5.105)

Substituting eqs. (5.102) - (5.103) into eqs. (5.94) - (5.95), one obtains

\beta = \gamma   (5.106)

\Theta'(1) = -\frac{ah_{sv}}{c_p}\frac{\Theta(1)}{\Phi(1)}\frac{\Phi'(1)}{Le}   (5.107)

To solve eqs. (5.99) and (5.104) with the Runge-Kutta method, two boundary conditions are needed for each, but only one is available for each: eqs. (5.101) and (5.105), respectively. Since both eqs. (5.99) and (5.104) are homogeneous, one can normalize with Θ(0) = Φ(0) = 1 and then integrate eqs. (5.99) and (5.104) numerically. Note that the eigenvalue, β, is still unknown at this point and must be obtained from eq. (5.107). There is a sequence of β values satisfying eq. (5.107), and for each eigenvalue β_n there is one set of corresponding functions Θ_n and Φ_n (n = 1, 2, 3, …). Using any one eigenvalue β_n and its eigenfunctions Θ_n and Φ_n in eqs. (5.102) and (5.103), the solutions of eqs. (5.90) and (5.91) become

\theta = C_1\Theta_n(\eta)e^{-\beta_n^2\xi}   (5.108)

\varphi = C_2\Phi_n(\eta)e^{-\beta_n^2\xi}   (5.109)

which satisfy all boundary conditions except those at ξ = 0. In order to satisfy the boundary conditions at ξ
= 0, the final solutions of eqs. (5.90) and (5.91) are taken as

\theta = \sum_{n=1}^{\infty} G_n\Theta_n(\eta)e^{-\beta_n^2\xi}   (5.110)

\varphi = \sum_{n=1}^{\infty} H_n\Phi_n(\eta)e^{-\beta_n^2\xi}   (5.111)

where G_n and H_n are obtained by substituting eqs. (5.110) and (5.111) into eq. (5.92), i.e.,

1 = \sum_{n=1}^{\infty} G_n\Theta_n(\eta)   (5.112)

1 = \sum_{n=1}^{\infty} H_n\Phi_n(\eta)   (5.113)

Due to the orthogonality of the eigenfunctions Θ_n and Φ_n, expressions for G_n and H_n can be obtained:

G_n = \frac{\int_0^1\eta\left(1 - \eta^2\right)\Theta_n(\eta)\,d\eta + \dfrac{\Theta_n(1)}{\Phi_n(1)}\int_0^1\eta\left(1 - \eta^2\right)\Phi_n(\eta)\,d\eta}{\int_0^1\eta\left(1 - \eta^2\right)\left[\Theta_n^2(\eta) + \dfrac{ah_{sv}}{c_p}\dfrac{\Theta_n(1)}{\Phi_n(1)}\Phi_n^2(\eta)\right]d\eta}   (5.114)

H_n = \frac{ah_{sv}}{c_p}\frac{\Theta_n(1)}{\Phi_n(1)}G_n   (5.115)

The Nusselt number due to convection and the Sherwood number due to diffusion are

Nu = \frac{-k\left.\dfrac{\partial T}{\partial r}\right|_{r=r_o}}{T_m - T_w}\frac{2r_o}{k} = -\frac{2}{\theta_m - \theta_w}\sum_{n=1}^{\infty} G_n e^{-\beta_n^2\xi}\,\Theta'_n(1)   (5.116)

Sh = \frac{-D\left.\dfrac{\partial\omega}{\partial r}\right|_{r=r_o}}{\omega_m - \omega_w}\frac{2r_o}{D} = -\frac{2}{\varphi_m - \varphi_w}\sum_{n=1}^{\infty} H_n e^{-\beta_n^2\xi}\,\Phi'_n(1)   (5.117)

where T_m and ω_m are the mean temperature and mean mass fraction in the tube.
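The shooting procedure described above (integrate eq. (5.99) from the axis with Θ(0) = 1, Θ'(0) = 0, then search for β) can be sketched as follows. As a verifiable check, the decoupled limit Θ(1) = 0 is used (toward which eq. (5.107) tends when a·h_sv/c_p is large); its first eigenvalue is the classical Graetz value β₁ ≈ 2.7044. All numerical parameters here are illustrative:

```python
# RK4 integration of eq. (5.99): Theta'' = -Theta'/eta - beta^2*(1 - eta^2)*Theta
def theta_at_wall(beta, steps=1500):
    eta = 1e-6
    th = 1.0 - beta**2*eta**2/4.0      # series expansion near the axis
    dth = -beta**2*eta/2.0
    h = (1.0 - eta)/steps
    f = lambda e, t, dt: (dt, -dt/e - beta**2*(1.0 - e**2)*t)
    for _ in range(steps):             # classical fourth-order Runge-Kutta
        k1 = f(eta, th, dth)
        k2 = f(eta + h/2, th + h/2*k1[0], dth + h/2*k1[1])
        k3 = f(eta + h/2, th + h/2*k2[0], dth + h/2*k2[1])
        k4 = f(eta + h, th + h*k3[0], dth + h*k3[1])
        th += h/6*(k1[0] + 2*k2[0] + 2*k3[0] + k4[0])
        dth += h/6*(k1[1] + 2*k2[1] + 2*k3[1] + k4[1])
        eta += h
    return th

lo, hi = 2.0, 3.5                      # bracket of the first eigenvalue
for _ in range(50):                    # bisection on the wall condition
    mid = 0.5*(lo + hi)
    if theta_at_wall(lo)*theta_at_wall(mid) <= 0.0:
        hi = mid
    else:
        lo = mid
beta1 = 0.5*(lo + hi)
```

For the coupled condition of eq. (5.107), the same integration is done for both Θ and Φ and the bisection is applied to the residual of eq. (5.107) instead of Θ(1).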
Figure 5.8 shows the heat and mass transfer performance during sublimation inside an adiabatic tube. For all cases, both the Nusselt and Sherwood numbers become constant once ξ exceeds a certain value, indicating that heat and mass transfer in the tube have become fully developed. The entrance length increases with increasing Lewis number. While the fully developed Nusselt number increases with increasing Lewis number, the Sherwood number decreases, because a larger Lewis number corresponds to a larger thermal diffusivity or a lower mass diffusivity. The effect of ah_{sv}/c_p on the Nusselt and Sherwood numbers is relatively insignificant: both increase with increasing ah_{sv}/c_p for Le < 1, but for Le > 1 increasing ah_{sv}/c_p decreases both.

Figure 5.8 Nusselt and Sherwood numbers for sublimation inside an adiabatic tube (Zhang and Chen, 1990).

5.5.2 Sublimation inside a Tube
Subjected to External Heating

When the outer wall of a tube whose inner surface is coated with a sublimable material is heated by a uniform heat flux, q'' (see Fig. 5.9), part of the heat flux supplies the latent heat of sublimation, and the remainder heats the gas flowing through the tube. The problem can be described by eqs. (5.81) - (5.88), except that the boundary condition at the inner wall of the tube is replaced by

\rho h_{sv} D\frac{\partial\omega}{\partial r} + k\frac{\partial T}{\partial r} = q'' \quad \text{at } r = r_o   (5.118)

where the thermal resistance of the tube wall is neglected because the tube wall and the coated layer are very thin.

Figure 5.9 Sublimation in a tube heated by a uniform heat flux.

The governing equations for sublimation inside a tube heated by a uniform heat flux can be non-dimensionalized with the variables defined in eq. (5.89), except for

\theta = \frac{k\left(T - T_0\right)}{q''r_o}, \quad \varphi = \frac{kh_{sv}\left(\omega - \omega_{sat,0}\right)}{c_p q''r_o}   (5.119)

where ω_{sat,0} is the saturation mass fraction corresponding to the inlet temperature T_0. The resulting dimensionless governing equations and boundary conditions are

\eta\left(1 - \eta^2\right)\frac{\partial\theta}{\partial\xi} = \frac{\partial}{\partial\eta}\left(\eta\frac{\partial\theta}{\partial\eta}\right)   (5.120)

\eta\left(1 - \eta^2\right)\frac{\partial\varphi}{\partial\xi} = \frac{1}{Le}\frac{\partial}{\partial\eta}\left(\eta\frac{\partial\varphi}{\partial\eta}\right)   (5.121)

\theta = 0, \quad \xi = 0   (5.122)

\varphi = \varphi_0, \quad \xi = 0   (5.123)

\frac{\partial\theta}{\partial\eta} = \frac{\partial\varphi}{\partial\eta} = 0, \quad \eta = 0   (5.124)

\frac{\partial\theta}{\partial\eta} + \frac{1}{Le}\frac{\partial\varphi}{\partial\eta} = 1, \quad \eta = 1   (5.125)

\varphi = \frac{ah_{sv}}{c_p}\theta, \quad \eta = 1   (5.126)

where \varphi_0 = kh_{sv}\left(\omega_0 - \omega_{sat,0}\right)/\left(c_p q''r_o\right) in eq. (5.123).
The sublimation problem under consideration is not homogeneous, because eq. (5.125) is a nonhomogeneous boundary condition. The solution is constructed as the sum of a particular (fully developed) solution and the solution of the corresponding homogeneous problem (Zhang and Chen, 1992):

\theta(\xi,\eta) = \theta_1(\xi,\eta) + \theta_2(\xi,\eta)   (5.127)

\varphi(\xi,\eta) = \varphi_1(\xi,\eta) + \varphi_2(\xi,\eta)   (5.128)

While the fully developed solutions for temperature and mass fraction, θ_1(ξ,η) and φ_1(ξ,η), must satisfy eqs. (5.120) - (5.121) and (5.124) - (5.126), the corresponding homogeneous solutions, θ_2(ξ,η) and φ_2(ξ,η), must satisfy eqs. (5.120), (5.121), (5.124) and (5.126), as well as the following conditions:

\theta_2 = -\theta_1(0,\eta), \quad \xi = 0   (5.129)

\varphi_2 = \varphi_0 - \varphi_1(0,\eta), \quad \xi = 0   (5.130)

\frac{\partial\theta_2}{\partial\eta} + \frac{1}{Le}\frac{\partial\varphi_2}{\partial\eta} = 0, \quad \eta = 1   (5.131)

The fully developed profiles of the temperature and mass fraction are
Transfer Amir Faghri, Yuwen Zhang, and John Howell Copyright © 2010 Global Digital Press θ1 = 1 1 4ξ + η 2 1 − η 2 + ϕ0 1 + ahsv / c p 4 11Le ahsv / c p − 18ahsv / c p − 7 + 24(1 +
ahsv / c p ) ahsv / c p 1 4ξ + Le η 2 1 − η 2 + ϕ0 ϕ1 = 1 + ahsv / c p 4 (5.132) 7 Le ahsv / c p + 18 Le − 11 − 24(1 + ahsv / c p ) (5.133) The solution of the corresponding
homogeneous problem can be obtained by separation of variables: θ 2 = Gn Θn (η )e − β n =1 ∞ n 2 ξ (5.134) (5.135) ϕ2 = H n Φ n (η )e − β ξ 2 n ∞ n =1 where Gn = η (1 − η 0 1 Θ (1) 1 )θ 2
(0,η )Θn (η )dη + n η (1 − η 2 )ϕ 2 (0,η )Φ n (η )dη Φ n (1) 0 2 1 Θn (1) 2 2 2 0 η (1 − η ) Θn (η ) + ( ahsv / c p ) Φ n (1) Φ n (η ) dη ahsv Θn (1) Hn = Gn
c p Φ n (1) 2 (5.136) (5.137) and β n is the eigenvalue of the corresponding homogeneous problem. The Nusselt number based on the total heat flux at the external wall is Nu = = 2q′′r0 2 = k (Tw − Tm
) θ w − θ m 2(1 + Ahsv / c p ) 11 ahsv + 1 + 24 cp ∞ − β 2ξ Gn e n n =1 4 Θn (1) + 2 Θ′ (1) βn n (5.138) where θ w and θ m are dimensionless wall and mean temperatures,
respectively. The Nusselt number based on the convective heat transfer coefficient is 2h r 2ro ∂T 2 ∂θ = Nu * = x o = k Tw − Tm ∂r r = r θ w − θ m ∂η η =1 o = 2 + 2(1 + ahsv / c
p ) Gn e − βn ξ Θ′ (1) n 2 ∞ (5.139) n =1 11 ahsv + 1 + 24 cp ∞ − β 2ξ Gn e n n =1 4 Θn (1) + 2 Θ′ (1) n βn Chapter 5 Internal Forced Convective Heat and Mass Transfer 465
Amir Faghri, Yuwen Zhang, and John Howell Copyright © 2010 Global Digital Press The Sherwood number is Sh = 2hm , x r0 D = 2r0 ∂ω ωw − ωm ∂r = r = ro 2 ∂ϕ ϕw − ϕm ∂η η =1 2Le = 2 ahsv ah ∞ + 2(1 + sv
) H n e − βn ξ Φ ′ (1) n cp c p n =1 11 ahsv ahsv Le + 1 + 24 cp cp ∞ 4 − β 2ξ Gn e n Φ n (1) + 2 Φ ′ (1) n n =1 β n Le (5.140) When the heat and mass transfer are fully
developed, eqs. (5.138) – (5.140) reduce to ah Nu = 1 + sv cp 48 11 (5.141) 48 (5.142) 11 48 Sh = (5.143) 11 The variations of the local Nusselt number based on total heat flux along
the dimensionless location ξ are shown in Fig. 5.10. It is evident from Fig. 5.10(a) that Nu increases significantly with increasing ahsv/cp. The Lewis number has very little effect on Nux when ahsv/cp = 0.1, but its effect becomes obvious in the region near the entrance when ahsv/cp = 1.0 and gradually diminishes near the exit. As seen in Fig. 5.10(b), φ0 has almost no influence on Nu over almost the entire region when ahsv/cp = 1.0; when ahsv/cp = 0.1, Nux increases slightly when ξ is small.

[Figure 5.10 Nusselt number based on total heat flux: (a) φ0 = 0; (b) Le = 3.5.]

The variation of the local Nusselt number based on convective heat flux, Nu*, is shown in Fig. 5.11(a). Only a single curve is obtained, which implies that Nu* remains unchanged when the mass transfer parameters are varied; the value of Nu* is exactly the same as for the process without sublimation. Figure 5.11(b) shows the Sherwood number for various parameters. It is evident that ahsv/cp and φ0 have no effect on Shx, and Le has an insignificant effect on Shx in the entry region.

[Figure 5.11 Nusselt number based on convective heat flux and Sherwood number: (a) Nusselt number based on convection; (b) Sherwood number.]
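As a quick numerical check on the fully developed limits in eqs. (5.141) – (5.143), the short Python sketch below evaluates Nu, Nu*, and Sh for a given value of ahsv/cp. The function names and sample values are ours, not the book's.

```python
# Fully developed limits for sublimation in a tube with uniform wall heat flux,
# eqs. (5.141)-(5.143): Nu = (1 + a*hsv/cp)*48/11, Nu* = Sh = 48/11.

def nusselt_total(a_hsv_over_cp: float) -> float:
    """Nusselt number based on the total wall heat flux, eq. (5.141)."""
    return (1.0 + a_hsv_over_cp) * 48.0 / 11.0

def nusselt_convective() -> float:
    """Nusselt number based on the convective heat flux alone, eq. (5.142)."""
    return 48.0 / 11.0

def sherwood() -> float:
    """Sherwood number in the fully developed limit, eq. (5.143)."""
    return 48.0 / 11.0

if __name__ == "__main__":
    for a in (0.1, 1.0):
        print(f"a*hsv/cp = {a}: Nu = {nusselt_total(a):.3f}, "
              f"Nu* = {nusselt_convective():.3f}, Sh = {sherwood():.3f}")
```

For example, with ahsv/cp = 1 the total-flux Nusselt number is twice the no-sublimation value 48/11 ≈ 4.36, consistent with the trend of Fig. 5.10(a).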
Example 5.5: Air flows through a circular tube that has a radius of ro and is heated by external convection. The external convective heat transfer coefficient and fluid temperature are he and Te, respectively. The inner surface of the tube is coated with a layer of sublimable material. The fluid, with a mass fraction of sublimable substance ω0 and a temperature T0, enters the tube with a velocity U. For the sake of simplicity, the flow inside the tube is assumed to be slug flow (uniform velocity). The heat and mass transfer inside the tube are assumed to be developing. Find the Nusselt number based on total heat transfer and on convective heat transfer, as well as the Sherwood number. The thermal diffusivity and mass diffusivity are assumed to be the same, i.e., Le = 1.

Solution: The physical model of the problem is shown in Fig. 5.12. The energy and species conservation equations are

U\frac{\partial T}{\partial x} = \frac{\alpha}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right) \qquad (5.144)

U\frac{\partial \omega}{\partial x} = \frac{D}{r}\frac{\partial}{\partial r}\left(r\frac{\partial \omega}{\partial r}\right) \qquad (5.145)

with the following boundary conditions:

T = T_0, \quad x = 0 \qquad (5.146)

\omega = \omega_0, \quad x = 0 \qquad (5.147)

\frac{\partial T}{\partial r} = \frac{\partial \omega}{\partial r} = 0, \quad r = 0 \qquad (5.148)

k\frac{\partial T}{\partial r} + \rho D h_{sv}\frac{\partial \omega}{\partial r} = h_e(T_e - T), \quad r = r_o \qquad (5.149)

where D is the mass diffusivity.

[Figure 5.12 Sublimation in a tube heated by external convection.]

Introducing the following non-dimensional variables,

\theta = \frac{T_e - T}{T_e - T_0}, \quad \varphi = \frac{\omega_e - \omega}{\omega_e - \omega_0}, \quad \eta = \frac{r}{r_o}, \quad \xi = \frac{x/r_o}{Pe}, \quad Pe = \frac{2 U r_o}{\alpha}, \quad Bi = \frac{h_e r_o}{k} \qquad (5.150)

where ωe = aTe + b is the saturation mass fraction corresponding to Te, the governing equations become

\frac{\eta}{2}\frac{\partial \theta}{\partial \xi} = \frac{\partial}{\partial \eta}\left(\eta\frac{\partial \theta}{\partial \eta}\right) \qquad (5.151)

\frac{\eta}{2}\frac{\partial \varphi}{\partial \xi} = \frac{\partial}{\partial \eta}\left(\eta\frac{\partial \varphi}{\partial \eta}\right) \qquad (5.152)

\theta = \varphi = 1, \quad \xi = 0 \qquad (5.153)

\frac{\partial \theta}{\partial \eta} = \frac{\partial \varphi}{\partial \eta} = 0, \quad \eta = 0 \qquad (5.154)

\frac{\partial \theta}{\partial \eta} + \frac{a h_{sv}}{c_p}\frac{\partial \varphi}{\partial \eta} = -Bi\,\theta_w, \quad \eta = 1 \qquad (5.155)

\varphi_w = \theta_w, \quad \eta = 1 \qquad (5.156)

Equations (5.151) and (5.152) can be solved using separation of variables, and the resulting temperature and mass fraction distributions are (Zhang, 2002)

\theta = \varphi = \sum_{n=1}^{\infty}\frac{2 J_1(\beta_n)}{\beta_n\left[J_0^2(\beta_n) + J_1^2(\beta_n)\right]}\, J_0(\beta_n \eta)\, e^{-2\beta_n^2 \xi} \qquad (5.157)

where J0 and J1 are the zeroth- and first-order Bessel functions of the first kind. The Nusselt number based on the total heat supplied by the external fluid is

Nu = \frac{2 r_o h_e (T_e - T_w)}{k (T_w - T_m)} = \frac{2 Bi\,\theta_w}{\theta_m - \theta_w} \qquad (5.158)

The Nusselt number based on the heat transferred to the fluid inside the tube is

Nu^* = -\frac{2 r_o}{T_w - T_m}\frac{\partial T}{\partial r}\bigg|_{r=r_o} = \frac{2}{\theta_m - \theta_w}\frac{\partial \theta}{\partial \eta}\bigg|_{\eta=1} \qquad (5.159)

The Sherwood number is

Sh = -\frac{2 r_o}{\omega_w - \omega_m}\frac{\partial \omega}{\partial r}\bigg|_{r=r_o} = \frac{2}{\varphi_m - \varphi_w}\frac{\partial \varphi}{\partial \eta}\bigg|_{\eta=1} \qquad (5.160)

The Nusselt number based on the heat transferred to the fluid inside the tube and the Sherwood number are identical, since θ = φ as indicated by eq. (5.157). Fig. 5.13 shows the variation of the local Nusselt number based on the total heat supplied by the fluid outside the tube. The dimensionless entrance lengths increase slightly with decreasing Biot number and are approximately equal to 0.1.
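The series solution can be evaluated numerically. The sketch below is ours, not the book's: combining eqs. (5.155) – (5.156) with θ = φ gives a Robin condition with effective Biot number Bi/(1 + ahsv/cp), so the eigenvalues βn satisfy βn J1(βn) = [Bi/(1 + ahsv/cp)] J0(βn); the decay exponent 2βn²ξ follows from the non-dimensionalization in eqs. (5.150) – (5.151). J0 and J1 are implemented here by their power series (adequate for the moderate arguments needed), and all function names are our own.

```python
import math

def j0(x: float) -> float:
    """Bessel function J0(x) via its power series (accurate for |x| < ~20)."""
    term, total = 1.0, 1.0
    for m in range(1, 60):
        term *= -(x * x / 4.0) / (m * m)
        total += term
    return total

def j1(x: float) -> float:
    """Bessel function J1(x) via its power series (accurate for |x| < ~20)."""
    term, total = x / 2.0, x / 2.0
    for m in range(1, 60):
        term *= -(x * x / 4.0) / (m * (m + 1))
        total += term
    return total

def eigenvalues(bi_eff: float, n: int) -> list:
    """First n positive roots of beta*J1(beta) - bi_eff*J0(beta) = 0,
    found by a coarse scan followed by bisection."""
    f = lambda b: b * j1(b) - bi_eff * j0(b)
    roots = []
    b = 1e-6
    while len(roots) < n:
        a = b
        b += 0.01
        if f(a) * f(b) <= 0.0:
            x1, x2 = a, b
            for _ in range(80):
                mid = 0.5 * (x1 + x2)
                if f(x1) * f(mid) <= 0.0:
                    x2 = mid
                else:
                    x1 = mid
            roots.append(0.5 * (x1 + x2))
    return roots

def theta(xi: float, eta: float, bi_eff: float, terms: int = 5) -> float:
    """Partial sum of the series in eq. (5.157) for theta (= phi)."""
    total = 0.0
    for b in eigenvalues(bi_eff, terms):
        coeff = 2.0 * j1(b) / (b * (j0(b) ** 2 + j1(b) ** 2))
        total += coeff * j0(b * eta) * math.exp(-2.0 * b * b * xi)
    return total
```

As the effective Biot number grows large, the first eigenvalue approaches 2.4048, the first zero of J0, recovering the constant-wall-temperature slug-flow limit.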
Nusselt numbers become constant after ξ becomes greater than 0.1. The fully developed Nusselt number increases with decreasing Biot number.

[Figure 5.13 Effect of Biot number on Nu (ahsv/cp = 1).]
[Figure 5.14 Effect of Biot number on Nu* or Sh (ahsv/cp = 1).]

Fig. 5.14 shows the variation of the local Nusselt number based on heat transferred to the fluid inside the tube, or the local Sherwood number. The
variations of Nu* and Sh are similar to that of Nu in Fig. 5.13.

5.6 Developing Flow, Thermal and Concentration Effects

All of the forced convective heat and mass transfer problems considered so far
assumed that the flow is fully developed, which, as previously shown, occurs at x/D approximately equal to 0.05Re for a circular tube. For forced convective heat and mass transfer with constant
properties, the hydrodynamic entrance length is independent of Pr or Sc. It was also shown that when assuming fully developed flow, the point at which the temperature profile becomes fully developed
for forced convection in tubes is linearly proportional to RePr. Analysis of these criteria for fully developed flow and temperature profiles shows that when Pr ≫ 1, as is the case with fluids with high viscosities such as oils, the temperature profile takes a longer distance to develop completely. In these circumstances (Pr ≫ 1), it makes sense to assume a fully developed velocity profile, since the thermal entrance is much longer than the hydrodynamic entrance. Obviously, from the definition of the Prandtl number and the above criteria, one expects that when Pr ≈ 1, for fluids such as gases, the temperature and velocity develop at the same rate. When Pr ≪ 1, as in the case of liquid metals, the temperature profile will develop much faster than the velocity profile, and therefore a uniform
velocity assumption (slug flow) is appropriate. Similar analysis and conclusions can be made with the Schmidt number, Sc, relative to mass transfer problems concerning the entrance effects due to
mass diffusion. If one needs to get detailed information concerning the hydrodynamic, thermal or concentration entrance effects, the conservation equations should be solved without a fully developed
velocity, concentration, or temperature profile. Consider laminar forced convective heat and mass transfer in a circular tube for the case of steady, two-dimensional flow with constant properties. The inlet
velocity, temperature and concentration are uniform at the entrance, with the possibility of mass transfer between the wall and the fluid, as shown in Fig. 5.15.

[Figure 5.15 Geometry and coordinate system for forced convective heat and mass transfer in a circular tube.]

The conservation equations with the above assumptions, neglecting viscous dissipation and assuming an incompressible Newtonian fluid, are

continuity:
\frac{\partial u}{\partial x} + \frac{1}{r}\frac{\partial}{\partial r}(r v) = 0 \qquad (5.161)

x-momentum:
u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial r} = -\frac{1}{\rho}\frac{\partial p}{\partial x} + \nu\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{\partial^2 u}{\partial x^2}\right] \qquad (5.162)

r-momentum:
u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial r} = -\frac{1}{\rho}\frac{\partial p}{\partial r} + \nu\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial v}{\partial r}\right) - \frac{v}{r^2} + \frac{\partial^2 v}{\partial x^2}\right] \qquad (5.163)

energy:
u\frac{\partial T}{\partial x} + v\frac{\partial T}{\partial r} = \alpha\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T}{\partial r}\right) + \frac{\partial^2 T}{\partial x^2}\right] \qquad (5.164)

species:
u\frac{\partial \omega_1}{\partial x} + v\frac{\partial \omega_1}{\partial r} = D\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial \omega_1}{\partial r}\right) + \frac{\partial^2 \omega_1}{\partial x^2}\right] \qquad (5.165)

Typical boundary conditions are:
Axial velocity at wall: u(x, r_o) = 0 (no-slip boundary condition)
Radial velocity at wall: v(x, r_o) = v_w, where v_w = 0 for an impermeable wall, v_w > 0 for injection, and v_w < 0 for suction; the mass flux due to diffusion is m_w'' = \rho\left(\omega_{1,w} v_w - D_{12}\,\partial \omega_1/\partial r\big|_{r=r_o}\right)
Thermal condition at the wall (r = r_o): T_w = const., or q_w'' = -k\,\partial T/\partial r\big|_{r=r_o} = const., or T_w = f(x), or q_w'' = g(x)
Inlet condition at x = 0: u = u_{in}, T = T_{in}, \omega_1 = \omega_{1,in}
Outlet condition at x = L: u = ?, P = ?, T = ?, \omega_1 = ?

Clearly there are five partial differential equations and five unknowns (u, v, P, T, ω1). All the equations are of elliptic nature (Chapter 2), and under some circumstances one can neglect the axial diffusion terms, ∂²u/∂x², ∂²v/∂x², ∂²T/∂x², and ∂²ω1/∂x², in order to make the conservation equations parabolic. These axial diffusion terms can also be neglected under boundary layer assumptions.
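The development-length criteria quoted earlier in this section (hydrodynamic entrance length x/D ≈ 0.05Re and thermal entrance length x/D ≈ 0.05RePr for laminar flow) can be sketched in a few lines of Python. The helper names and the threshold values used to classify the Prandtl-number regimes below are ours, intended only as illustrative cutoffs.

```python
def hydrodynamic_entrance_length(d: float, re: float) -> float:
    """Laminar hydrodynamic entrance length from x/D = 0.05*Re."""
    return 0.05 * re * d

def thermal_entrance_length(d: float, re: float, pr: float) -> float:
    """Laminar thermal entrance length from x/D = 0.05*Re*Pr."""
    return 0.05 * re * pr * d

def velocity_profile_assumption(pr: float) -> str:
    """Modeling guideline suggested by the Prandtl-number regimes
    discussed in the text (cutoffs 0.1 and 10 are illustrative)."""
    if pr >= 10.0:    # oils: thermal entrance much longer than hydrodynamic
        return "fully developed velocity"
    if pr <= 0.1:     # liquid metals: temperature develops much faster
        return "uniform (slug) velocity"
    return "velocity and temperature develop together"
```

For example, for a 10 mm tube at Re = 100 the hydrodynamic entrance length is 0.05 m, while for Pr = 5 the thermal entrance length is 0.25 m, five times longer.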
Making boundary layer assumptions renders the result invalid very close to the tube entrance, where the Reynolds number is very small. Shah and London (1978) showed that the momentum boundary layer assumption will lead to error if Re < 400 and LH/D < 0.005Re. In these circumstances, the full Navier-Stokes equations should be solved. It was also shown in Section 5.2 that there are circumstances other than boundary layer assumptions under which axial diffusion terms, such as the axial conduction term, can be neglected. However, as shown in the case of the energy equation, one cannot neglect axial conduction for a very low Prandtl number, despite the thermal boundary layer assumption.

[Figure 5.16 Local and average Nusselt numbers for the entrance region of a circular tube with constant wall temperature (ReD = 100; Pr = 0.7, 2, 5): (a) local Nusselt number; (b) average Nusselt number.]

Table 5.7 Local and average Nusselt number for the entrance region of a circular tube with constant wall temperature

x+       Nux, Pr=0.7   Nux, Pr=2   Nux, Pr=5   Num, Pr=0.7   Num, Pr=2   Num, Pr=5
0.001    17.0          12.5        10.6        60.7          34.9        24.9
0.002    11.6          8.86        8.04        37.3          22.6        17.0
0.004    8.29          6.64        6.30        23.3          15.1        12.0
0.008    6.29          5.23        5.09        14.9          10.4        8.80
0.01     5.81          4.89        4.80        13.1          9.36        8.03
0.02     4.69          4.12        4.12        8.9           6.89        6.21
0.04     4.01          3.75        3.76        6.46          5.39        5.06
0.06     3.79          3.67        3.68        5.56          4.83        4.61
0.08     3.71          3.66        3.66        5.09          4.54        4.37
0.1      3.68          3.66        3.66        4.81          4.36        4.23
0.12     3.66          3.66        3.66        4.62          4.24        4.14
∞        3.66          3.66        3.66        3.66          3.66        3.66

[Figure 5.17 Local and average Nusselt numbers for the entrance region of a circular tube with constant heat flux (ReD = 100; Pr = 0.7, 2, 5): (a) local Nusselt number; (b) average Nusselt number.]
Table 5.8 Local and mean Nusselt number for the entrance region of a circular tube with constant wall heat flux

x+       Nux, Pr=0.7   Nux, Pr=2   Nux, Pr=5   Num, Pr=0.7   Num, Pr=2   Num, Pr=5
0.001    23.0          17.8        15.0        61.0          43.4        34.2
0.002    15.6          12.5        11.1        39.8          29.0        23.5
0.004    10.9          9.16        8.44        26.3          19.8        16.5
0.008    7.92          7.01        6.65        17.7          13.8        11.9
0.01     7.24          6.49        6.22        15.7          12.4        10.8
0.02     5.69          5.30        5.21        11.0          9.11        8.23
0.04     4.81          4.64        4.62        8.09          7.00        6.54
0.06     4.53          4.46        4.45        6.94          6.18        5.87
0.08     4.43          4.39        4.39        6.33          5.74        5.51
0.1      4.39          4.37        4.37        5.95          5.47        5.29
0.12     4.37          4.36        4.36        5.69          5.29        5.13
0.16     4.36          4.36        4.36        5.36          5.23        4.94
∞        4.36          4.36        4.36        4.36          4.36        4.36

Table 5.9 Local Nusselt number for the entrance region of a circular-tube annulus with constant wall heat flux (Heaton et al., 1964; reproduced with permission from Elsevier)

         Parallel planes       Circular-tube annulus, K = 0.50
Pr       Nu11      θ1*         Nuii     Nuoo     θi*      θo*
0.01     24.2      0.048       –        24.2     –        0.0322
         11.7      0.117       –        11.8     –        0.0786
         8.8       0.176       9.43     8.9      0.252    0.118
         5.77      0.378       6.4      5.88     0.525    0.231
         5.53      0.376       6.22     5.6      0.532    0.238
         5.39      0.346       6.18     5.04     0.528    0.216
0.7      18.5      0.037       19.22    18.3     0.0513   0.0243
         9.62      0.096       10.47    9.45     0.139    0.063
         7.68      0.154       8.52     7.5      0.228    0.0998
         5.55      0.327       6.35     5.27     0.498    0.207
         5.4       0.345       6.19     5.06     0.527    0.215
         5.39      0.346       6.18     5.04     0.528    0.216
10       15.6      0.0311      16.86    15.14    0.045    0.0201
         9.2       0.092       10.2     8.75     0.136    0.0583
         7.49      0.149       8.43     7.09     0.224    0.0943
         5.55      0.327       6.35     5.2      0.498    0.204
         5.4       0.345       6.19     5.05     0.527    0.215
         5.39      0.346       6.18     5.04     0.528    0.216

In general, elliptic equations are more complex to solve analytically or
numerically than parabolic equations. Furthermore, to solve the equations as elliptic, pertinent information is also needed at the outlet, which in some cases is unknown. The momentum equation is nonlinear, while the energy equation is linear under the constant-property assumption. In most cases, the momentum, energy, and species equations are uncoupled, except under the following circumstances, which make the equations coupled:
1. Variable properties, such as density variation as a function of temperature in natural convection problems.
2. Coupled governing equations and/or boundary conditions in phase change problems, such as absorption or dissolution problems.
3. Existence of a source term in one conservation equation that is a function of the dependent variable in another conservation equation.

Langhaar (1942) and Hornbeck (1965) obtained approximate solutions of the momentum equation for circular tubes by solving the linearized momentum equation. Hornbeck (1965) also solved the momentum equation numerically by making boundary layer assumptions (parabolic form). Several investigators solved the energy equation either by using Langhaar's approximate velocity profile, or by solving the momentum and energy equations numerically for both constant wall temperature and constant wall heat flux in circular tubes. Heat transfer in the hydrodynamic and thermal entrance regions has also been solved numerically based on the full elliptic governing equations (Bahrami, 2009). Variations of the local and average Nusselt numbers for different Prandtl numbers under constant wall temperature and constant heat flux, obtained using the full elliptic governing equations, are shown in Figs. 5.16 and 5.17, respectively. The local and average Nusselt numbers for different Prandtl numbers and boundary conditions are also presented in Tables 5.7 and 5.8. Heaton et al. (1964) approximated the results of the linearized momentum and energy equations for constant wall heat flux for a group of circular-tube annuli for several Prandtl numbers.
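As an illustration of how the tabulated entrance-region values can be used between entries, the sketch below interpolates Nux from the Pr = 0.7 column of Table 5.7, linearly in ln x+ (a reasonable choice since the tabulated x+ values are roughly logarithmically spaced); the function name is ours.

```python
import math

# (x+, Nux) pairs for Pr = 0.7, constant wall temperature (Table 5.7).
TABLE_5_7_PR07 = [
    (0.001, 17.0), (0.002, 11.6), (0.004, 8.29), (0.008, 6.29),
    (0.01, 5.81), (0.02, 4.69), (0.04, 4.01), (0.06, 3.79),
    (0.08, 3.71), (0.1, 3.68), (0.12, 3.66),
]

def nux_pr07(x_plus: float) -> float:
    """Local Nu for Pr = 0.7 interpolated linearly in ln(x+).

    Beyond the last tabulated entry the fully developed value 3.66
    is returned; below the first entry the first value is returned.
    """
    pts = TABLE_5_7_PR07
    if x_plus <= pts[0][0]:
        return pts[0][1]
    if x_plus >= pts[-1][0]:
        return 3.66
    for (x1, n1), (x2, n2) in zip(pts, pts[1:]):
        if x1 <= x_plus <= x2:
            w = (math.log(x_plus) - math.log(x1)) / (math.log(x2) - math.log(x1))
            return n1 + w * (n2 - n1)
    raise ValueError("unreachable")
```

For instance, at x+ = 0.005 the interpolated value falls between the tabulated 8.29 (x+ = 0.004) and 6.29 (x+ = 0.008), as expected for a monotonically decaying entrance-region Nusselt number.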
Table 5.9 summarizes the results for parallel plates and circular-tube annuli.

5.7 Full Numerical Solutions

The algorithms to solve the convection-diffusion equation and flow field that were addressed in
Section 4.8 are still applicable to the internal convection problems. However, special attention must be paid to proper treatment of the boundary conditions. For an internal flow and heat transfer
problem, there are several different types of boundary conditions (see Fig. 5.18): (1) inflow condition, (2) axisymmetric condition, (3) impermeable solid surface, and (4) outflow condition. The
inflow conditions are normally specified by a given distribution of the inlet velocity and the general variable, φ , at the inlet ( x = 0 ). The axisymmetric condition can be implemented by setting
the gradient of the velocity component in the y-direction, ∂u / ∂y , and the gradient of the general variable, ∂φ / ∂y , equal to zero along the axisymmetric line ( y = 0 ). In addition, the velocity
component in the y-direction, v, at the axisymmetric boundary should also be zero. For the impermeable solid surface, all three kinds of boundary conditions for the general variables are possible:
specified φ (the first kind), specified gradient of φ in the direction perpendicular to the impermeable surface, ∂φ / ∂y , (the second kind), or specified relation between φ and ∂φ / ∂y (the third
kind). The velocity components in both the tangential and normal directions of the impermeable surface are zero, i.e., u = v = 0, due to the no-slip and impermeability conditions.

[Figure 5.18 Typical boundary conditions for internal convection: inflow (x = 0), outflow (x = L), impermeable solid (y = H), axisymmetric line (y = 0).]

No special treatment of the boundary conditions at the inflow, axisymmetric, and impermeable surfaces is required. On the
contrary, the outflow boundary condition requires special treatment as outlined below. Since the momentum equation in the internal flow direction (the x-direction in Fig. 5.18) and the conservation
equation for the general variable (see eq. (4.200)) are elliptic, second-order derivatives with respect to x appear in the partial differential equations. Mathematically, boundary conditions at both the inflow and outflow boundaries need to be specified. While the inflow boundary conditions are always known, the outflow boundary conditions (at x = L in Fig. 5.18) are usually unknown. Unless
the experimentally measured distribution at the outflow boundary is available, we cannot directly use the algorithms introduced in the previous chapter to solve the internal convection problem. A
coordinate (such as x in Fig. 5.18, or time for a transient problem) can be either two-way or one-way depending on the nature of the problem (Patankar, 1980, 1991). If the condition at a given
location, x, is influenced by changes of conditions on either side of the given location, the coordinate is said to be two-way. On the other hand, if the conditions at the given location, x, are influenced by changes of conditions from only one side, the coordinate is said to be one-way. Mathematically, if a second-order derivative with respect to x appears in the partial differential
equation, the coordinate in the x-direction is two-way. For example, the spatial coordinate in the one-dimensional steady-state heat conduction in a fin (see Section 3.2.1) is two-way because the
temperature at any point is influenced by the temperatures at either side, and the second order derivative appears in the energy equation that describes heat conduction in the fin. On the other hand,
time is a one-way coordinate, because the conditions at a given time are influenced only by what happened before that time; what happens afterward will not affect the conditions at the current time. While the space coordinate in heat and mass transfer is normally two-way, it can become one-way in some special cases. For internal flow, a change of condition at a given point will have a more profound effect on the conditions at points downstream, while its influence on the conditions at points upstream will be relatively weak. For the case that convection overpowers diffusion, one can assume that condition changes at one point can only propagate downstream, and the coordinate becomes one-way.

[Figure 5.19 Control volume for the continuity equation at the outflow boundary (grid points N, W, P, E, S).]

The most commonly used approach to handle the outflow
boundary condition is to assume the coordinate at the outflow boundary is locally one-way. Thus, the value of the general variable, φ , at the inner point, P, shown in Fig. 5.19 is not affected by
the value of φ at the outflow boundary grid point, E, which is located downstream. Computationally, one can set aE = 0 in the discretized equation for the point P. This simple treatment effectively
avoids the problem associated with an unknown value of φ at the outflow boundary. It follows from Section 4.8 that this treatment will be more accurate for cases with a high Peclet number. Another
situation that can be handled by a similar approach is the case in which the unknown variable is fully developed at the outflow boundary, so that the outflow boundary condition becomes ∂φ/∂x = 0. The implication of the fully developed condition at the outflow is exactly the same as the local one-way behavior discussed above, and it can be treated using the same approach. However, it should be pointed out that for the case of fully developed heat transfer one must not use ∂T/∂x = 0; the correct condition for fully developed heat transfer is ∂[(T − Tm)/(Tw − Tm)]/∂x = 0
(see Section 5.3). Many practical problems are not similar to the simple internal channel flow cases presented in Sections 5.1-5.6. In many practical applications of internal flow, one needs to deal with conjugate effects, compressibility, multiple domains, and porous media, as well as non-conventional boundary conditions. To show the power of numerical simulation, the modeling of a conventional wicked heat pipe with variable heat flux (see Fig. 5.20) will be presented here, in contrast to the conventional internal flow cases presented in the previous sections. The problem is modeled as a two-dimensional conjugate problem, with compressible flow, including the effect of porous media and the coupling between the various regions.

[Figure 5.20 Heat pipe configuration: heater, heat pipe wall, screen wick, and vapor region (radii Rv, Ri, R0); multiple evaporators, adiabatic section, and condenser section.]

Since the heat pipe in Fig. 5.20 is closed at both ends, it is required that the vapor which flows out of the evaporator segment enters into the
condenser section. The mass, momentum, and energy conservation equations for the compressible vapor flow region, including viscous dissipation, are

\frac{1}{r}\frac{\partial}{\partial r}(\rho_v r v_v) + \frac{\partial}{\partial z}(\rho_v w_v) = 0 \qquad (5.166)

\rho_v\left(w_v\frac{\partial w_v}{\partial z} + v_v\frac{\partial w_v}{\partial r}\right) = -\frac{\partial p_v}{\partial z} + \mu_v\left[\frac{4}{3}\frac{\partial^2 w_v}{\partial z^2} + \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial w_v}{\partial r}\right) + \frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial v_v}{\partial z}\right) - \frac{2}{3}\frac{\partial}{\partial z}\left(\frac{1}{r}\frac{\partial}{\partial r}(r v_v)\right)\right] \qquad (5.167)

\rho_v\left(w_v\frac{\partial v_v}{\partial z} + v_v\frac{\partial v_v}{\partial r}\right) = -\frac{\partial p_v}{\partial r} + \mu_v\left[\frac{\partial^2 v_v}{\partial z^2} + \frac{4}{3r}\frac{\partial}{\partial r}\left(r\frac{\partial v_v}{\partial r}\right) - \frac{4}{3}\frac{v_v}{r^2} + \frac{1}{3}\frac{\partial^2 w_v}{\partial r\,\partial z}\right] \qquad (5.168)

\rho_v c_{pv}\left(w_v\frac{\partial T_v}{\partial z} + v_v\frac{\partial T_v}{\partial r}\right) = \frac{k_v}{r}\frac{\partial}{\partial r}\left(r\frac{\partial T_v}{\partial r}\right) + k_v\frac{\partial^2 T_v}{\partial z^2} + v_v\frac{\partial p_v}{\partial r} + w_v\frac{\partial p_v}{\partial z} + \mu_v\Phi \qquad (5.169)

where the viscous dissipation function is

\Phi = 2\left[\left(\frac{\partial v_v}{\partial r}\right)^2 + \left(\frac{v_v}{r}\right)^2 + \left(\frac{\partial w_v}{\partial z}\right)^2\right] + \left(\frac{\partial v_v}{\partial z} + \frac{\partial w_v}{\partial r}\right)^2 - \frac{2}{3}\left(\nabla\cdot\mathbf{V}_v\right)^2 \qquad (5.170)

and \nabla\cdot\mathbf{V}_v is given by

\nabla\cdot\mathbf{V}_v = \frac{1}{r}\frac{\partial}{\partial r}(r v_v) + \frac{\partial w_v}{\partial z} \qquad (5.171)

The ideal gas law (p_v = \rho_v R_g T_v) is employed to account for the compressibility of the vapor. The use of liquid capillary action is a unique
feature of the heat pipe. From a fundamental point of view, the liquid capillary flow in heat pipes with screen wicks should be modeled as flow through a porous medium. It is assumed that the
wicking material is isotropic and of constant thickness. In addition, the wick is saturated with liquid, and the vapor condenses and the liquid evaporates at the liquid-vapor interface. The averaging
technique has been applied by many investigators to obtain the general equation which describes the conservation of momentum in a porous structure. Since the development of Darcy’s semi-empirical
relation, which characterizes the fluid motion under certain conditions, many researchers have tried to develop and extend Darcy’s law in order to see the effect of the inertia terms. In this
respect, those who have tried to model the flow with the Navier-Stokes equations were the most successful (Chapter 2). The general equations of continuity, momentum, and energy for steady-state laminar incompressible liquid flow in porous media, in terms of the volume-averaged velocities as presented in Chapter 2, are:

\frac{\partial}{\partial z}(\rho w) + \frac{1}{r}\frac{\partial}{\partial r}(\rho r v) = 0 \qquad (5.172)

\frac{1}{\varepsilon^2}\left(w\frac{\partial w}{\partial z} + v\frac{\partial w}{\partial r}\right) = -\frac{1}{\rho}\frac{\partial p}{\partial z} - \frac{\nu w}{K} + \frac{\nu}{\varepsilon}\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial w}{\partial r}\right) + \frac{\partial^2 w}{\partial z^2}\right] \qquad (5.173)

\frac{1}{\varepsilon^2}\left(w\frac{\partial v}{\partial z} + v\frac{\partial v}{\partial r}\right) = -\frac{1}{\rho}\frac{\partial p}{\partial r} - \frac{\nu v}{K} + \frac{\nu}{\varepsilon}\left[\frac{1}{r}\frac{\partial}{\partial r}\left(r\frac{\partial v}{\partial r}\right) - \frac{v}{r^2} + \frac{\partial^2 v}{\partial z^2}\right] \qquad (5.174)

\rho c_p\left(w\frac{\partial T}{\partial z} + v\frac{\partial T}{\partial r}\right) = \frac{1}{r}\frac{\partial}{\partial r}\left(r k_{eff}\frac{\partial T}{\partial r}\right) + \frac{\partial}{\partial z}\left(k_{eff}\frac{\partial T}{\partial z}\right) \qquad (5.175)

where ε is the volume fraction, or porosity, of the wick and K is the permeability of the wick structure. The effective thermal conductivity of the wick, keff, is related to the thermal conductivities of the liquid and solid phases by

k_{eff} = \frac{k_l\left[(k_l + k_s) - (1 - \varepsilon)(k_l - k_s)\right]}{(k_l + k_s) + (1 - \varepsilon)(k_l - k_s)} \qquad (5.176)

The steady-state energy equation that describes the temperature in the heat pipe wall is

\frac{\partial}{\partial z}\left(k_w\frac{\partial T_w}{\partial z}\right) + \frac{1}{r}\frac{\partial}{\partial r}\left(k_w r\frac{\partial T_w}{\partial r}\right) = 0 \qquad (5.177)
where kw is the local thermal conductivity of the heat pipe wall. The boundary conditions are

w(0, r) = v(0, r) = 0 \qquad (5.178)

w(L_t, r) = v(L_t, r) = 0 \qquad (5.179)

w(z, R_v) = 0 \qquad (5.180)

w(z, R_i) = 0 \qquad (5.181)

\frac{\partial w}{\partial r}(z, 0) = \frac{\partial v}{\partial r}(z, 0) = 0 \qquad (5.182)

v(z, R_v) = \frac{1}{\rho_v h_v}\left(-k_{eff}\frac{\partial T}{\partial r}\bigg|_{r=R_v} + k_v\frac{\partial T_v}{\partial r}\bigg|_{r=R_v}\right) \qquad (5.183)

v(z, R_i) = 0 \qquad (5.184)

\frac{\partial T}{\partial z}(0, r) = \frac{\partial T}{\partial z}(L_t, r) = 0 \qquad (5.185)

T(z, R_v) = \left[\frac{1}{T_0} - \frac{R_g}{h_v}\ln\frac{p_v}{p_0}\right]^{-1} \qquad (5.186)

k_w\frac{\partial T_w}{\partial r}(z, R_0) = \pm\frac{Q(z)}{A} \qquad (5.187)

In the boundary conditions specified by eqs. (5.178) – (5.187), the heat flux (Q(z)/A) was uniform and positive at the active evaporators, and zero in all adiabatic and transport sections as well as in the inactive evaporators. In the condenser section, the heat flux was assumed to be uniform and negative, based on the total heat input through the evaporators. The temperature along the liquid-vapor interface was taken as the equilibrium saturation temperature corresponding to its equilibrium pressure. Through the above procedure, both the effects of energy conduction in the heat pipe wall and the liquid flow in the wick are taken into account.
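The thermodynamic-equilibrium condition at the liquid-vapor interface, eq. (5.186), is a Clausius-Clapeyron-type relation, T = [1/T0 − (Rg/hv) ln(pv/p0)]⁻¹, and is simple to evaluate. The sketch below uses illustrative property values for water vapor; the numbers and the function name are ours, not the book's.

```python
import math

def interface_temperature(p_v: float, p0: float, t0: float,
                          r_g: float, h_v: float) -> float:
    """Saturation temperature at the liquid-vapor interface, eq. (5.186):
    T = [1/T0 - (Rg/hv) * ln(pv/p0)]**(-1).
    p_v, p0 in Pa; t0 in K; r_g in J/(kg K); h_v in J/kg.
    """
    return 1.0 / (1.0 / t0 - (r_g / h_v) * math.log(p_v / p0))

if __name__ == "__main__":
    # Illustrative values for water vapor: Rg = 461.5 J/(kg K),
    # hv = 2.45e6 J/kg, reference state T0 = 333.15 K at p0 = 19.9 kPa.
    t = interface_temperature(p_v=25.0e3, p0=19.9e3, t0=333.15,
                              r_g=461.5, h_v=2.45e6)
    print(f"interface temperature = {t:.2f} K")
```

Note that the relation returns T = T0 when pv = p0, and an interface temperature above T0 when the local vapor pressure exceeds the reference pressure, as expected for a saturation curve.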
For simplicity and generality, the problem should be solved as a single domain problem using conjugate heat transfer analysis. To achieve this, the conservation equations for mass, momentum, and
energy are generalized such that the energy equations with temperature as a dependent variable in the three regions (i.e., wall, wick, and vapor) have the same source term. In addition, the
continuity of temperature, heat flux, and mass flux, as well as some special boundary conditions such as thermodynamic equilibrium, should be satisfied at each of the interfaces. The energy equation
can be written in terms of temperature as

\rho\frac{DT}{Dt} = \frac{k}{c_p}\nabla^2 T + \frac{s}{c_p} \qquad (5.188)

where s is the source term, which includes viscous dissipation and pressure work. It should be noted that solving the problem in terms of enthalpy does not preserve the condition of temperature continuity at an interface when harmonic averaging is employed, and will lead to an incorrect calculation of diffusion. For the vapor region, the energy equation is

\rho_v\frac{DT}{Dt} = \frac{k_v}{c_{p,v}}\nabla^2 T + \frac{s}{c_{p,v}} \qquad (5.189)

Multiplying both sides of the energy equation for the liquid-wick region by c_{p,l}/c_{p,v} results in

\rho_l\frac{c_{p,l}}{c_{p,v}}\frac{DT}{Dt} = \frac{k_l}{c_{p,v}}\nabla^2 T + \frac{s}{c_{p,v}} \qquad (5.190)

Similarly, the energy equation for the heat pipe wall is

\rho_w\frac{c_{p,w}}{c_{p,v}}\frac{DT}{Dt} = \frac{k_w}{c_{p,v}}\nabla^2 T + \frac{s}{c_{p,v}} \qquad (5.191)

From this transformation, the following can be observed:
1. The source term for the energy equation is divided by cp,v for all three regions.
2. The density is replaced by the modified density, ρ*, in the different regions, i.e., ρv for the vapor, ρl cp,l/cp,v for the liquid, and ρw cp,w/cp,v for the solid.
3. The diffusion coefficients contain only one specific heat, namely cp,v.

The transformation makes possible an exact representation of diffusion across an interface when the harmonic average is used for the diffusion coefficient at the interfaces. The momentum equation for the vapor flow does not need any special treatment, since ρ* = ρv in the vapor region. In the wick region, the momentum equation can be transformed in the same manner:

\frac{\rho^*}{\varepsilon}\frac{D\mathbf{V}}{Dt} = -\nabla p^* + \nu_l\nabla^2(\rho^*\mathbf{V}) - \frac{\nu_l\,\rho^*\mathbf{V}}{K} \qquad (5.192)

It should be noted that:
1. A source term was added to the momentum equation for the porosity effect in the wick region.
2. The pressure solved for is the modified pressure, p*, which is proportional to the actual pressure. The actual pressure drop between two points in the flow field can be calculated as \Delta p = (c_{p,v}/c_{p,l})\,\Delta p^*.

The transformed continuity equation can be written as

\nabla\cdot(\rho^*\mathbf{V}) = 0 \qquad (5.193)

The finite volume method discussed above was employed by Faghri and Buchko (1991) in the solution
of the elliptic conservation eqs. (5.166)-(5.169) and (5.177) subject to the boundary conditions (5.178)-(5.187). The source terms due to viscous dissipation and pressure work in the energy equation,
and the source term due to the porous matrix in the momentum equation are linearized, and the SIMPLEST algorithm was employed for the momentum equations. The solution procedure is based on a
line-by-line iteration method in the axial direction and the Jacobi point-by-point procedure in the radial direction.

Chapter 5 Internal Forced Convective Heat and Mass Transfer 481
Amir Faghri, Yuwen Zhang, and John Howell
Copyright © 2010 Global Digital Press

Figure 5.21 Heat pipe wall and vapor temperature versus axial location (Faghri and Buchko, 1991); the plot compares experimental and numerical wall and vapor temperatures for evaporator heat loads of 97 W (evaporator 1) and 0 W (evaporators 2-4).

The energy
equation is not continuous at the liquid-vapor interface due to the latent heat of evaporation and condensation. A latent heat source term must therefore be added to the energy equation at the liquid-vapor interface. For the first few iterations, it was assumed that the heat flux at the liquid-vapor interface is equal to the outer wall heat flux. An exact energy balance given by eq. (5.183) is satisfied
after these initial iterations. Numerical results based on the mathematical modeling presented above are shown by Faghri and Buchko (1991) in Fig. 5.21, which show excellent agreement with
experimental data for the wall and vapor temperatures along the heat pipe. In the condenser section, the experimental and numerical values of the outer wall temperature do not coincide. This is due
to the assumption of a constant heat flux along the condenser section at the outer wall, which does not precisely correspond to the experimental specifications of a cooling jacket around the
condenser sections.

482 Advanced Heat and Mass Transfer
Amir Faghri, Yuwen Zhang, and John Howell
Copyright © 2010 Global Digital Press

5.8 Forced Convection in Microchannels

5.8.1 Introduction

Due to recent advances in microfabrication and manufacturing, various devices with characteristic dimensions on the order of microns, such as micropumps, micro-heat sinks, microbiochips, microreactors, micromotors, microvalves, and micro-fuel cells, have been developed. These microdevices have found applications in microelectronics, microscale sensing and measurement, power systems, spacecraft thermal control, biotechnology, and microelectromechanical systems. For example, microchannel heat sinks are among the most effective heat removal solutions in microscale devices. The need for efficient and effective
heat transfer techniques in miniature devices has, in recent years, fostered extensive research interest in microscale heat transfer with an emphasis on microchannels with both circular and
rectangular cross-sections (Karniadakis et al., 2005; Kandlikar et al., 2006). Several investigators have reported significant deviation from classical theory (e.g. with respect to the solution of
the Navier-Stokes and energy equations with simplifying assumptions like no-slip boundary conditions for velocity and temperature) used in macroscale applications, while others have reported general
agreement, especially in the laminar region. The common channel flow classification based on the hydraulic diameter divides the range from 1 to 100 µm as microchannels, 100 µm to 1 mm as mesochannels, 1 mm to 6 mm as miniature channels, and greater than 6 mm as conventional channels. It is convenient to differentiate the flow regimes for experimental and theoretical predictions as a
function of Knudsen number (Kn). Kn is a parameter that physically indicates the relative importance of rarefaction or non-continuum effects. It is the ratio of the gas mean free path, λ, to the
characteristic dimension of the flow field, D. The following classification is commonly accepted, as described in Chapter 1:
• For Kn < 10⁻³, the flow is a continuum flow and is accurately modeled by the Navier-Stokes and energy equations with classical no-slip boundary conditions for velocity and temperature.
• For 10⁻³ < Kn < 10⁻¹, the flow is a slip flow and the Navier-Stokes and energy equations remain applicable, provided a first-order velocity slip and a temperature jump are taken into account at the walls. These new boundary conditions reflect the rarefaction effects at the walls.
• For 10⁻¹ < Kn < 10, the flow is a transition flow and the continuum approach of the Navier-Stokes equations is no longer valid. However, the intermolecular effects are not yet negligible and should be taken into account.
• For Kn > 10, the flow is a free molecular flow and the occurrence of intermolecular collisions is negligible compared with the collisions between the gas molecules and the walls.
As noted above, some special effects or conditions that are typically neglected at the macroscale should be included at the microscale. One such condition is slip flow, which occurs when the fluid is
rarefied or the geometry is at the microscale level. In contrast to continuum flow phenomena, the fluid no longer reaches the surface velocity or temperature. Two major characteristics of slip flow
are velocity slip and temperature jump at the surface. These can be determined using the kinetic theory of gases.

Figure 5.22 Configuration for the slip boundary condition in a microchannel.

For a cylindrical microchannel, the velocity slip condition is:
us = −[(2 − F)/F] λ (∂u/∂r)|r=ro (5.194)
where us is the slip velocity, as shown in Fig. 5.22, λ is the molecular mean free path, and F is the tangential momentum accommodation coefficient; and the temperature jump is
Ts − Tw = −[(2 − Ft)/Ft] [2γ/(γ + 1)] (λ/Pr) (∂T/∂r)|r=ro (5.195)
where Ts is the temperature of the fluid at the wall, Tw is the wall temperature, and Ft is the thermal accommodation coefficient. To make the analysis simpler, F and Ft are usually assumed to be 1. The above relations clearly indicate that with an increase in the mean free path value,
the slip velocity at the walls, as well as the temperature jump, increases. Detailed investigations have been made to compare the conventional (continuum) theories with experimental data in microscale flow and heat transfer problems. Hetsroni et al. (2005) compared experimental data from the literature with small Knudsen numbers (0.001–0.4) and Mach numbers (0.07–0.84) that
correspond to continuum models in circular, rectangular, triangular, and trapezoidal microchannels with hydraulic diameters ranging from 1.01 μm to 4010 μm and noted, in general, a good agreement
with the conventional theories. The no-slip boundary conditions are used in these models for both velocity and temperature. Hetsroni et al. (2005) concluded that the existing experimental friction factor data in the literature agree quite well with conventional continuum theory for fully developed laminar gas flow for 0.001 ≤ Kn ≤ 0.38. There are, however, contradictory results despite the
existence of significant experimental and theoretical investigations in microchannels. For microchannels with small Knudsen numbers, which lie within the no-slip flow region, the conventional solutions apply with high accuracy. The solutions for the circular tube fully developed flow and temperature profile are thus Nu = 3.66 for the constant wall temperature and Nu = 4.36 for the constant wall heat flux boundary conditions. The Nusselt number for a rectangular
microchannel has a dependence on the channel aspect ratio. Myong et al. (2006) adopted Langmuir’s slip model to characterize the slip boundary conditions. They developed a physical approach to
account for the interfacial interaction between the gas molecules and surface molecules. In this approach, the gas molecules are assumed to interact with the surface of the solid via long range
attractive forces. Consequently, the gas molecules can be adsorbed onto the surface and then desorbed after some time lag. They found that, for most physical applications, this model predicts
the reduction of heat transfer with increasing gas rarefaction. Several studies have also examined the transition point from laminar to turbulent flow in microchannel passages and found that the
critical Reynolds number is still approximately 2300. The classical flow and thermal regimes, such as fully developed flow and temperature profiles, fully developed flow profile but developing thermal profile, fully developed thermal profile but developing flow profile, and simultaneously developing flow and temperature profiles, are also applicable to the analysis of microchannels. For microchannels with Knudsen numbers within the slip flow regime, the flow and heat transfer are characterized by the Knudsen number (Kn), the Peclet number (Pe), and the Brinkman number (Br).

5.8.2 Fully Developed Laminar Flow and Temperature Profile

Fully Developed Velocity Distribution

The velocity distribution is derived from the continuity and momentum equations along with the slip
condition. The velocity profile is expressed in terms of Knudsen number. The model for the analysis of velocity distribution can be considered as the flow of a fluid in a circular tube of radius ro,
as shown in Fig. 5.22. The continuity equation in cylindrical coordinates is given as:
∂ρ/∂t + (1/r) ∂(ρrv)/∂r + ∂(ρu)/∂x = 0 (5.196)
For steady, fully developed flow of an incompressible fluid, it becomes:
∂u/∂x = 0 (5.197)
so that ∂(rv)/∂r = 0, i.e., rv = constant. Since the radial velocity at the wall is zero (impermeability condition), it can be concluded that v = 0 everywhere in the flow field.
The steady parabolic momentum equation in the x-direction can be written in cylindrical coordinates as:
ρu ∂u/∂x + ρv ∂u/∂r = (μ/r) ∂/∂r (r ∂u/∂r) − ∂p/∂x (5.198)
For fully developed, steady flow of an incompressible fluid, it reduces to:
−dp/dx + (μ/r) d/dr (r du/dr) = 0 (5.199)
As the velocity is independent of x, a constant C1 can be defined as:
C1 = −(1/4μ) dp/dx (5.200)
Therefore, the momentum equation reduces to:
4C1 + (1/r) d/dr (r du/dr) = 0 (5.201)
The above equation is integrated twice to yield:
u = C2 ln r + C3 − C1r² (5.202)
Since the velocity, u, is finite at the center of the tube (r = 0), we conclude C2 = 0. At the centerline of the tube (r = 0), the velocity is equal to uc; therefore, u(0) = uc = C3. At the inner surface of the tube wall (r = ro), the velocity is not zero due to the slip flow condition, but is equal to a finite slip velocity us:
u(ro) = us = uc − C1ro² (5.203)
Rearranging and solving for C1, we have:
C1 = (uc − us)/ro² = −(1/4μ) dp/dx (5.204)
The velocity profile is obtained as follows:
u = uc − (uc − us)(r/ro)² = uc[1 − (r/ro)²] + us(r/ro)² (5.205)
The mean velocity in the tube, um, is now evaluated. The volumetric flow rate is:
π ro² um = ∫0^ro 2π r u dr (5.206)
therefore,
um = 2[uc/2 − (uc − us)/4] = (uc + us)/2 (5.207)
The centerline velocity can be written as:
uc = 2um − us (5.208)
Using the above relation, we get the following velocity profile in a microchannel:
u = 2(um − us)(1 − r⁺²) + us (5.209)
where r⁺ = r/ro. If the slip velocity is zero, the velocity distribution reduces to the Poiseuille flow distribution for conventional channels.
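The algebra in eqs. (5.205)-(5.207) is easy to check numerically. The sketch below (the values of uc and us are illustrative, not from the text) integrates the slip-flow profile over the tube cross-section and recovers the mean velocity of eq. (5.207):

```python
# Check eq. (5.207): the area average of the slip-flow profile
# u(r) = uc*[1 - (r/ro)^2] + us*(r/ro)^2, eq. (5.205), equals (uc + us)/2.
# uc and us below are illustrative values, not data from the text.

def mean_velocity(uc, us, ro=1.0, n=100_000):
    """Area average (2/ro^2) * integral_0^ro u(r) r dr, midpoint rule."""
    dr = ro / n
    total = 0.0
    for i in range(n):
        r = (i + 0.5) * dr
        u = uc * (1.0 - (r / ro) ** 2) + us * (r / ro) ** 2
        total += u * r * dr
    return 2.0 * total / ro ** 2

uc, us = 1.8, 0.2                # hypothetical centerline and slip velocities
print(mean_velocity(uc, us))     # ~1.0, i.e. (uc + us)/2
```

Setting us = 0 recovers the Poiseuille result um = uc/2, consistent with the reduction to conventional channels noted above.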
The velocity slip is given by the following condition [see eq. (5.194)]:
us = −[(2 − F)/F] λ (du/dr)|r=ro (5.210)
For most applications, F has values near unity. Therefore,
us = −λ (du/dr)|r=ro (5.211)
Figure 5.23 Non-dimensional, fully developed velocity profile in a microchannel as a function of Knudsen number (Kn = 0 to 0.12).

The slip velocity is derived using eq. (5.209) for the velocity profile:
us = −(λ/ro) (du/dr⁺)|r⁺=1 = 4λ(um − us)/ro = 8λ(um − us)/D (5.212)
The Knudsen number, Kn = λ/D, is introduced in the above relation to obtain the following equation for the slip velocity:
us/um = 8Kn/(1 + 8Kn) (5.213)
Substituting the above into eq. (5.209), the non-dimensional expression for the velocity profile is obtained:
u/um = [2(1 − r⁺²) + 8Kn]/(1 + 8Kn) (5.214)
The velocity
distribution is shown in Fig. 5.23 for different Kn numbers. The slip velocity at the wall increases with increasing Kn.

Fully Developed Heat Transfer Coefficient in a Microchannel

The flow
is considered to be steady, laminar, and fully developed both hydrodynamically and thermally. The conventional continuum approach is coupled with the two main characteristics of the microscale
phenomena, the velocity slip and the temperature jump, as noted above. The energy equation including the viscous dissipation term for steady, fully developed flow in a pipe and neglecting axial
conduction is:
u ∂T/∂x = (α/r) ∂/∂r (r ∂T/∂r) + (ν/cp)(∂u/∂r)² (5.215)
The variation of temperature in the r-direction at the center is zero due to the axisymmetric condition:
∂T/∂r|r=0 = 0 (5.216)
Also, the temperature jump condition for the fluid at the wall is written as follows:
Ts − Tw = −[(2 − Ft)/Ft] [4γ/(1 + γ)] (Kn/Pr) (∂T/∂r)|r=ro (5.217)
Both conventional boundary conditions of constant heat flux and constant temperature at the wall
are solved analytically by Aydin and Avci (2006) for a microchannel with circular cross-section, and are presented below.

• Constant Heat Flux at the Wall

The constant heat flux at the wall is described by:
k (∂T/∂r)|r=ro = q″w (5.218)
where q″w is positive when its direction is to the fluid (the hot wall) and negative when its direction is from the fluid (the cold wall). For the constant wall heat flux, the following equation, similar to the analysis presented for conventional heat pipes, is applicable:
∂T/∂x = dTw/dx = dTs/dx (5.219)
Substituting eq. (5.214) into eq. (5.215) and non-dimensionalizing the resultant equation yield:
(1/r⁺) d/dr⁺ (r⁺ dθq/dr⁺) = β1(1 − r⁺² + 4Kn) + [32Brq/(1 + 8Kn)²] r⁺² (5.220)
where
Brq = μum²/(q″wD), θq = (T − Ts)/(q″wro/k), β1 = −[2ρcpumro/((1 + 8Kn)q″w)] dTs/dx (5.221)
Since β1 is unknown, three non-dimensional boundary conditions are required for the second-order ODE, eq. (5.220):
∂θq/∂r⁺ = 0 at r⁺ = 0; ∂θq/∂r⁺ = −1 and θq = 0 at r⁺ = 1 (5.222)
The solution of eq. (5.220) using the above boundary conditions is:
θq(r⁺) = β1(r⁺²/4 − r⁺⁴/16 + Kn r⁺²) + β2 r⁺⁴/16 − β3 (5.223)
where
β2 = 32Brq/(1 + 8Kn)², β1 = −(β2 + 4)/(1 + 8Kn), β3 = [β2 + β1(3 + 16Kn)]/16 (5.224)
The local heat transfer coefficient, h, is:
h = [k/(Tw − Tm)] (∂T/∂r)|r=ro (5.225)
The Nusselt number, based on θq, is:
Nu = hD/k = 2/(θq,m − θq,w) (5.226)
where the non-dimensional mean temperature (θq,m) and slip temperature at the wall (θq,w) are given as follows:
θq,m = 1/4 + [5 + β2 + Kn(32 + 6β2)]/[24(1 + 8Kn)²] (5.227)
θq,w = −[(2 − Ft)/Ft] [4γ/(1 + γ)] (Kn/Pr) (5.228)
Aydin and Avci (2006) investigated the effects of the Brinkman number and Knudsen number for both fully developed flow and temperature profile in a microchannel with circular
cross-section using the above analytical technique. Kn = 0 represents the macroscale case, while Kn > 0 holds for the microscale case, and Brq = 0 represents the case without the effect of the
viscous dissipation.

Figure 5.24 The variation of the Nusselt number for a microchannel with the Knudsen number, for different values of the Brinkman number (Brq = −0.10 to +0.10, spanning wall cooling to wall heating), for constant heat flux at the wall.

Figure 5.24 shows the variation of the Nusselt number with the Knudsen number for different Brinkman numbers for the case of constant
wall heat flux (Bahrami, 2009). For Brq = 0, an increase in Kn decreases Nu due to the temperature jump at the wall. Viscous dissipation significantly affects Nu. Positive values of Brq correspond to wall heating (q″w > 0), while the opposite is true for negative values of Brq. With no viscous dissipation, the solution is independent of whether there is wall heating or cooling. Nu decreases
with increasing Brq for the wall heating case. Increasing Brq in the negative direction increases Nu. The trend followed by Nu versus Kn for lower values of the Brinkman number, either in the case of
wall heating (Brq = 0.05) or in the case of wall cooling (Brq = – 0.05) is very similar to that of Brq = 0. For the wall cooling case, at Brq = –0.1, the decreasing effect of Kn on Nu is more
significant. At Brq = 0.1, increasing Kn increases Nu up to Kn ≈ 0.01, where a maximum occurs, after which Nu decreases with increasing Kn.

• Constant Wall Temperature

Aydin and Avci (2006) investigated
the effects of the Brinkman and Knudsen numbers for both fully developed flow and temperature profile in a microchannel of circular cross-section subjected to constant wall temperature. They assumed that the fluid temperature at the wall does not change along the tube length, i.e., dTs/dx = 0. However, their analysis of the viscous dissipation effect on the Nusselt number was found to be
inconsistent with other researchers' results (Hooman, 2008). In the following, the effect of the Knudsen number for both fully developed flow and temperature profile in a microchannel subjected to constant wall temperature, considering the variation of the fluid temperature at the wall (i.e., dTs/dx ≠ 0), is presented (Bahrami, 2009). The non-dimensional temperature profile is defined as:
θ = (Ts − T)/(Ts − Tc) (5.229)
where Tc is the fluid temperature at the centerline. Substituting eq. (5.214) into eq. (5.215), neglecting the viscous dissipation, and non-dimensionalizing the resultant equation give us:
(1/r⁺) d/dr⁺ (r⁺ dθ/dr⁺) = [β1θ + β2(1 − θ)] (1 − r⁺² + 4Kn) (5.230)
where:
β1 = [2umro²/((1 + 8Kn) α (Tc − Ts))] dTc/dx (5.231)
β2 = [2umro²/((1 + 8Kn) α (Tc − Ts))] dTs/dx (5.232)
Equation (5.230) is subject to the following boundary conditions:
∂θ/∂r⁺ = 0 and θ = 1 at r⁺ = 0
θ = 0 at r⁺ = 1 (5.233)
The Nusselt number based on θ is:
Nu = hD/k = −2 (∂θ/∂r⁺)|r⁺=1 / (θm − θw) (5.234)
where θm is the non-dimensional mean temperature, and
θw is the slip temperature at the wall. The relation between β1 and β2 can be obtained by taking the derivative of the slip temperature boundary condition along the x-direction, and the result is given below:
dTc/dx = [ξ/(ξ + 1)] dTs/dx, ξ = −[(2 − Ft)/Ft] [4γ/(1 + γ)] (Kn/Pr) Nu (θm − θw)
A closed-form solution for Nu cannot be obtained for this case. However, the solution for θ can be obtained by using an iterative procedure. The temperature profile for the constant heat flux at the wall can be used as the first approximation, and eq. (5.230) is then integrated to obtain θ. This iterative procedure is repeated until acceptable convergence is obtained. To obtain very good accuracy, a fourth-order Runge-Kutta procedure is employed to solve eq. (5.230).

Figure 5.25
presents Nu versus Kn for the case of constant wall temperature (Bahrami, 2009). Similar trends to those obtained for the case of constant heat flux at the wall are observed. Nu values for the case
of constant wall temperature are, for the same Kn, lower than the Nu values for the case of constant heat flux at the wall.

Figure 5.25 Variation of the fully developed Nusselt number for a microchannel with the Knudsen number for constant wall temperature, neglecting viscous dissipation effects.
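For the constant-heat-flux case the fully developed Nusselt number is available in closed form from eqs. (5.224) and (5.226)-(5.228). The short script below evaluates it, assuming an air-like γ = 1.4 and full accommodation (Ft = 1), the conditions under which the tabulated values below were generated:

```python
# Closed-form fully developed Nu for constant wall heat flux with velocity
# slip and temperature jump, eqs. (5.224), (5.226)-(5.228).
# gamma = 1.4 and Ft = 1 are assumed (air-like gas, full accommodation).

def nu_constant_q(kn, pr, brq=0.0, gamma=1.4, ft=1.0):
    beta2 = 32.0 * brq / (1.0 + 8.0 * kn) ** 2                  # eq. (5.224)
    theta_m = 0.25 + (5.0 + beta2 + kn * (32.0 + 6.0 * beta2)) \
              / (24.0 * (1.0 + 8.0 * kn) ** 2)                  # eq. (5.227)
    theta_w = -((2.0 - ft) / ft) * (4.0 * gamma / (1.0 + gamma)) \
              * kn / pr                                         # eq. (5.228)
    return 2.0 / (theta_m - theta_w)                            # eq. (5.226)

print(round(nu_constant_q(0.0, 0.7), 3))    # 4.364, the classical value
print(round(nu_constant_q(0.04, 0.7), 3))   # 3.749, cf. Table 5.11
print(round(nu_constant_q(0.08, 0.8), 3))   # 3.331, cf. Table 5.11
```

The Kn = 0 case recovers the classical 4.364, and the slip-flow values reproduce the entries of Table 5.11 below.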
Table 5.10 Fully developed Nusselt numbers for microchannels with Br = 0 and constant temperature at the wall

         Kn=0.00 Kn=0.02 Kn=0.04 Kn=0.06 Kn=0.08 Kn=0.10 Kn=0.12
Pr=0.60   3.657   3.432   3.191   2.952   2.728   2.522   2.337
Pr=0.65   3.657   3.462   3.245   3.024   2.812   2.614   2.433
Pr=0.70   3.657   3.488   3.292   3.088   2.887   2.697   2.521
Pr=0.75   3.657   3.512   3.334   3.145   2.955   2.773   2.603
Pr=0.80   3.657   3.532   3.372   3.196   3.017   2.843   2.678
Pr=0.85   3.657   3.550   3.405   3.243   3.074   2.907   2.747
Pr=0.90   3.657   3.566   3.436   3.285   3.126   2.967   2.812
Pr=0.95   3.657   3.580   3.463   3.323   3.173   3.022   2.872
Pr=1.00   3.657   3.593   3.488   3.359   3.217   3.073   2.929

Table 5.11 Fully developed Nusselt numbers for microchannels with Brq = 0 and constant heat flux at the wall

         Kn=0.00 Kn=0.02 Kn=0.04 Kn=0.06 Kn=0.08 Kn=0.10 Kn=0.12
Pr=0.60   4.364   3.981   3.599   3.252   2.949   2.687   2.461
Pr=0.65   4.364   4.029   3.678   3.350   3.057   2.799   2.575
Pr=0.70   4.364   4.071   3.749   3.439   3.156   2.904   2.681
Pr=0.75   4.364   4.108   3.812   3.519   3.247   3.000   2.781
Pr=0.80   4.364   4.141   3.870   3.593   3.331   3.091   2.874
Pr=0.85   4.364   4.171   3.922   3.661   3.409   3.175   2.962
Pr=0.90   4.364   4.197   3.969   3.723   3.481   3.254   3.044
Pr=0.95   4.364   4.221   4.013   3.781   3.549   3.327   3.122
Pr=1.00   4.364   4.243   4.053   3.834   3.612   3.397   3.195

The fully developed Nusselt numbers for 0 < Kn
< 0.12 and no viscous dissipation are shown in Tables 5.10 and 5.11 for constant wall temperature and heat flux, respectively (Bahrami, 2009). The fully developed Nusselt number decreases as Kn
increases. The effect of temperature jump on the Nusselt number is shown in Figs. 5.26 and 5.27. The solid and dashed lines represent the results obtained for considering the temperature jump and
neglecting the temperature jump conditions, respectively.

Figure 5.26 Effect of temperature jump on fully developed Nu in a microchannel for different Brq (−0.10, 0.00, +0.10) when the wall is subjected to constant heat flux.

Figure 5.27 The effect of temperature jump on fully developed Nu for a microchannel when the wall is subjected to constant temperature.

When the temperature jump condition is not accounted for, i.e. only the
velocity slip condition is taken into consideration, the Nusselt number increases with increasing Kn, which indicates that the velocity slip and temperature jump have opposite effects on the Nusselt
number.

5.8.3 Fully Developed Flow with Developing Temperature Profile

Convective heat transfer for steady-state, laminar, hydrodynamically developed flow with a developing temperature profile in microchannels, with both uniform temperature and uniform heat flux boundary conditions, was solved by Tunc and Bayazitoglu (2001, 2002) using the integral transform technique, as presented below.
Uniform Temperature

The energy equation assuming fully developed flow, including viscous dissipation and neglecting axial conduction, is the same as eq. (5.215). The boundary and inlet conditions for constant wall temperature are:
T = Ts at r = ro (5.235)
∂T/∂r = 0 at r = 0 (5.236)
T = T0 at x = 0 (5.237)
The fully developed velocity profile with the slip boundary condition given by eq. (5.214) is used. The slip boundary condition given by eq. (5.195) is also used to express the wall temperature jump. The following non-dimensional variables are introduced for temperature (θ), radial coordinate (r⁺), axial coordinate (x⁺), and velocity (u⁺):
θ = (T − Ts)/(T0 − Ts), r⁺ = r/ro, x⁺ = x/L, u⁺ = u/um (5.238)
The non-dimensional energy equation and boundary conditions are obtained through use of the above variables:
[Gz(1 − r⁺² + 4Kn)/(2(1 + 8Kn))] ∂θ/∂x⁺ = (1/r⁺) ∂/∂r⁺ (r⁺ ∂θ/∂r⁺) + [16Br/(1 + 8Kn)²] r⁺² (5.239)
θ = 0 at r⁺ = 1 (5.240)
∂θ/∂r⁺ = 0 at r⁺ = 0 (5.241)
θ = 1 at x⁺ = 0 (5.242)
where the Graetz number (Gz) and the Brinkman number (Br) are defined as:
Gz = RePrD/L and Br = μum²/(kΔT) (5.243)
where ΔT is the difference between the temperature of the fluid at the wall, Ts, and at the tube entrance, T0,
i.e., ΔT = T0 − Ts.

Uniform Heat Flux

For the case of constant heat flux at the wall, the following non-dimensional variables are used:
θ = (T − T0)/(q″wro/k) and Br = μum²/(q″wD) (5.244)
Upon use of the above non-dimensional variables, the non-dimensional energy equation for constant wall heat flux can be obtained:
[Gz(1 − r⁺² + 4Kn)/(2(1 + 8Kn))] ∂θ/∂x⁺ = (1/r⁺) ∂/∂r⁺ (r⁺ ∂θ/∂r⁺) + [16Br/(1 + 8Kn)²] r⁺² (5.245)
r +2 + 2 (1 + 8Kn ) (5.245) where the centerline symmetric and uniform inlet temperature conditions are the same as eqs. (5.241) and (5.242), respectively. However, the boundary condition at the
wall is given by ∂θ/∂r⁺ = 1 at r⁺ = 1. An integral transform technique based on separation of variables was used by Tunc and Bayazitoglu (2001) to solve this problem. An appropriate integral transform pair
was developed. Under the transformation, the variable x+ was eliminated from the partial differential governing equation, which transformed the governing equation into an ordinary differential
equation. The effect of viscous heating is presented in Fig. 5.28 for the constant wall temperature case, where Kn = 0.04 and Pr = 0.7. The inclusion of viscous dissipation causes an increase in Nu.
The Nusselt number first reaches the fully developed condition as if there was no viscous dissipation, and then makes a jump to its final value for a given Br. Figure 5.29 shows the effect of viscous
heating on the Nusselt number for a uniform wall heat flux. Since the definition of the Brinkman number is different for the uniform wall heat flux boundary condition case, a positive Br means that
the heat is being transferred to the fluid from the wall, as opposed to the uniform wall temperature case. For constant heat flux at the wall, the Nusselt number decreases as Br (> 0) increases.

Figure 5.28 The effect of viscous dissipation on the Nusselt number along the non-dimensional axial length for constant wall temperature, Kn = 0.04, Pr = 0.7, and Br = 0.0 to 0.015 (Tunc and Bayazitoglu, 2001).

Figure 5.29 The effect of viscous dissipation on the Nusselt number along the non-dimensional axial length for uniform wall heat flux, Kn = 0.04, Pr = 0.7, and Br = 0.0 to 0.015 (Tunc and Bayazitoglu, 2001).

Jeong and Jeong (2006) extended the analysis for application to
microchannels of rectangular cross-section, including axial conduction and viscous dissipation. The configuration for developed flow with a developing temperature profile for rectangular microchannels is similar to Fig. 5.22. The fluid temperature changes from the value T0 at the entrance to the value Ts on the walls. The governing energy equation and boundary conditions, including axial conduction and viscous dissipation, for laminar flow are:
ρcpu ∂T/∂x = k(∂²T/∂x² + ∂²T/∂y²) + μ(∂u/∂y)² (5.246)
T = T0 at x = 0 (5.247)
T − Tw = [(2 − Ft)/Ft] [2γ/(γ + 1)] (λ/Pr) ∂T/∂y at y = ±H (5.248)
∂T/∂y = 0 at y = 0 (5.249)
where H is half the microchannel height, and the wall length in the x-direction is L. The fully developed velocity profile in the rectangular microchannel is:
u(y) = −(H²/2μ) (dP/dx) [1 − (y/H)² + 8Kn(2 − F)/F] (5.250)
which satisfies the slip boundary condition:
us = [(2 − F)/F] λ (∂u/∂y) at y = ±H, Kn = λ/Dh (5.251)
Defining the following dimensionless variables:
θ = (T − Tw)/(T0 − Tw), x⁺ = x/(RePrH), y⁺ = y/H, Br = μum²/[k(T0 − Tw)] (5.252)
equations (5.246) and (5.250) are respectively non-dimensionalized as:
(u⁺/4) ∂θ/∂x⁺ = (1/Pe²) ∂²θ/∂x⁺² + ∂²θ/∂y⁺² + Br (∂u⁺/∂y⁺)² (5.253)
and
u⁺ = u/um = (3/2)[1 − y⁺² + 8Kn(2 − F)/F]/C1 (5.254)
where
C1 = 1 + 12Kn(2 − F)/F (5.255)
The non-dimensional boundary conditions are:
θ = 1 at x⁺ = 0 (5.256)
θ = −4C2 ∂θ/∂y⁺ at y⁺ = 1 (5.257)
∂θ/∂y⁺ = 0 at y⁺ = 0 (5.258)
where
C2 = [(2 − Ft)/Ft] [2γ/(γ + 1)] (Kn/Pr)

Figure 5.30 Nusselt number distribution for a constant temperature boundary condition for rectangular microchannels, for Pe = 10⁶, Br = 0, and Kn = 0, 0.04, 0.08; the Kn = 0 curve approaches Nu = 7.54 (Jeong and Jeong, 2006).

The effects of the Knudsen number on the Nusselt number variation along a rectangular microchannel neglecting axial conduction
and viscous dissipation are shown in Fig. 5.30. For Kn = 0, the fully developed Nusselt number is approximately 7.54, which is the classical result for a channel of conventional size (classic Graetz problem). The
Nusselt number decreases as Kn increases due to the temperature jump at the wall. The effect of the Knudsen number on the Nusselt number distribution in a rectangular microchannel with constant wall
heat flux is presented in Fig. 5.31. When the channel is subjected to a constant wall temperature, as x + → ∞ , the Nusselt number for Br ≠ 0 is independent of Br and different from that for Br = 0 .
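The fully developed limits quoted below, eqs. (5.259) and (5.260), are easy to evaluate directly. The sketch below assumes F = Ft = 1, γ = 1.4, and Pr = 0.7, the values that reproduce the asymptotes marked in Fig. 5.31:

```python
# Fully developed limits for a rectangular microchannel, eqs. (5.259)
# and (5.260), with C1 and C2 from eq. (5.255) and the C2 definition.
# F = Ft = 1, gamma = 1.4, and Pr = 0.7 are assumed (air-like gas).

def c1_c2(kn, pr, f=1.0, ft=1.0, gamma=1.4):
    c1 = 1.0 + 12.0 * kn * (2.0 - f) / f
    c2 = ((2.0 - ft) / ft) * (2.0 * gamma / (gamma + 1.0)) * kn / pr
    return c1, c2

def nu_inf_wall_T(kn, pr):            # eq. (5.259), limit for Br != 0
    c1, c2 = c1_c2(kn, pr)
    return 140.0 * c1 / (1.0 + 7.0 * c1 + 140.0 * c1 * c2)

def nu_inf_wall_q(kn, pr, br=0.0):    # eq. (5.260)
    c1, c2 = c1_c2(kn, pr)
    denom = c1 ** 2 * (35.0 * c1 ** 2 + 14.0 * c1 + 2.0
                       + 420.0 * c1 ** 2 * c2) \
            + br * (42.0 * c1 ** 2 + 33.0 * c1 + 6.0)
    return 420.0 * c1 ** 4 / denom

for kn in (0.0, 0.04, 0.08):
    print(kn, round(nu_inf_wall_q(kn, 0.7), 2))   # 8.24, 5.72, 4.26, as in Fig. 5.31
```

With Br = 0, eq. (5.260) returns 8.24, 5.72, and 4.26 for Kn = 0, 0.04, and 0.08, the fully developed values marked in Fig. 5.31; eq. (5.259) gives the Br ≠ 0 constant-wall-temperature limit, 17.5 at Kn = 0.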
The thermally fully developed Nusselt number was obtained from the fully developed temperature profile for constant wall temperature and heat flux by Jeong and Jeong (2006). For constant wall temperature, the Nusselt number as x⁺ → ∞ (Br ≠ 0) is
Nu∞ = 140C1/(1 + 7C1 + 140C1C2) (5.259)
and for constant heat flux,
Nu∞ = 420C1⁴/[C1²(35C1² + 14C1 + 2 + 420C1²C2) + Br(42C1² + 33C1 + 6)] (5.260)
where
C1 = 1 + 12Kn(2 − F)/F
C2 = [(2 − Ft)/Ft] [2γ/(γ + 1)] (Kn/Pr)
Unless Br is a large negative number, Nu is always positive.

Figure 5.31 Nusselt number distribution for a constant heat flux boundary condition for rectangular microchannels, for Pe = 10⁶, Br = 0, and Kn = 0, 0.04, 0.08; the fully developed values are 8.24, 5.72, and 4.26, respectively (Jeong and Jeong, 2006).

Example 5.6: Develop the analytical expression of
the Nusselt number for constant wall temperature for rectangular microchannels as x⁺ → ∞, including viscous dissipation, given by eq. (5.253).

Solution: As x⁺ → ∞ and ∂θ/∂x⁺ → 0, eq. (5.253) becomes:
∂²θ/∂y⁺² = −Br (∂u⁺/∂y⁺)² (5.261)
The non-dimensional temperature profile, θ∞, can be derived by integrating both sides of eq. (5.261) and applying the boundary conditions given by eqs. (5.257) and (5.258):
θ∞(y⁺) = (Br/C1²)[−(3/4)(y⁺⁴ − 1) + 12C2] (5.262)
The non-dimensional mean temperature and Nusselt number can be obtained using the following equations:
θm = ∫0^1 u⁺θ dy⁺ (5.263)
Nu = hDh/k = h(4H)/k = −(4/θm)(∂θ/∂y⁺)|y⁺=1 (5.264)
Through the use of the temperature profile given by eq. (5.262), one obtains the relation for the fully developed Nusselt number:
Nu∞ = 140C1/(1 + 7C1 + 140C1C2) for Br ≠ 0
(5.265)

5.9 Turbulence

5.9.1 Time-Averaged Governing Equations

The generalized governing equations for three-dimensional turbulent flow have been presented in Chapter 2 (see Section 2.5). For two-dimensional steady-state turbulent flow in a cylindrical coordinate system (see Fig. 5.2), the governing equations are:
∂u/∂x + (1/r) ∂(rv)/∂r = 0 (5.266)
u ∂u/∂x + v ∂u/∂r = −(1/ρ) dp/dx + (1/r) ∂/∂r [r(ν + εM) ∂u/∂r] (5.267)
u ∂T/∂x + v ∂T/∂r = (1/r) ∂/∂r [r(α + εH) ∂T/∂r] (5.268)
which can be obtained by using boundary layer theory similar to Chapter 4. The second
order derivatives of u and T in the x-direction have been dropped based on the similar arguments for laminar flow in a duct. The time-averaged pressure is not a function of r, but is a function of x
only. It can also be observed from eqs. (5.267) and (5.268) that the both momentum and energy diffusions are governed by molecular and eddy diffusions. Similar to the cases of external turbulent
boundary layers, the momentum and thermal eddy diffusivities, ε M and ε H , are defined as: ∂u ∂r ∂T − ρ c p v′T ′ = ρ c p ε H ∂r − ρ u′v′ = ρε M (5.269) (5.270) where u′, v′ and T ′ are the
fluctuations of axial velocity, radial velocity, and temperature, respectively. Appropriate turbulent models in either algebraic or differential equation forms must be employed to obtain the eddy
diffusivities.

5.9.2 Velocity Profile and Friction Coefficient for Fully Developed Flow

Since the turbulent boundary layer grows much faster than the laminar boundary layer, the lengths of the hydrodynamic and thermal entrances for turbulent internal
flow are also much shorter than those for laminar flow. When the Prandtl number of the fluid is on the order of 1 (e.g., air or water), the lengths of the hydrodynamic and thermal entrances are about 10 times the diameter of the tube, i.e.,

LH/D ≈ LT/D ≈ 10  (5.271)

The internal turbulent flow becomes fully developed after x > LH or LT. Similar to laminar internal flow, we have v = 0 and ∂u/∂x = 0 for fully developed flow, and the momentum eq. (5.267) becomes

−dp/dx = (1/r)(∂/∂r)(rτapp)  (5.272)

where

τapp = ρ(ν + εM)(∂u/∂y)  (5.273)

is the apparent or total shear stress and y is the distance measured from the wall (y = r0 − r). The apparent shear stress is equal to τw at the wall and zero at the centerline. Integrating eq. (5.272) from the centerline to the wall yields −dp/dx = 2τw/r0, which can be substituted into eq. (5.272) to yield

2τw/r0 = (1/r)(∂/∂r)(rτapp)  (5.274)

Integrating eq. (5.274) in the interval (0, r), one obtains:

τapp = τw(r/r0) = τw(1 − y/r0)  (5.275)

Equation (5.275) shows that the shear stress is a linear function of r for internal turbulent flow. Close to the wall, where r is near r0 (or y is near 0), the apparent shear stress is nearly constant, i.e., τapp ≈ τw. The law of the wall resulting from the two-layer turbulent model (see Section 4.11.2) can be applied near the wall to yield:

u⁺ = 2.5 ln y⁺ + 5.5  (5.276)

which is referred to as the Nikuradse equation. The constants 2.5 and 5.5 are different from those in Section 4.11.2 and are obtained by curve-fitting the experimental results. The dimensionless velocity and coordinate are defined as

u⁺ = u/uτ,  y⁺ = y uτ/ν  (5.277)

where

uτ = (τw/ρ)^(1/2)  (5.278)

It should be pointed out that the Nikuradse equation (5.276) is invalid near the centerline, because the slope of the velocity at the centerline obtained from eq. (5.276) is a finite value, not zero as it should be. In addition, eq. (5.276) also implies that εM/ν = 0 at the centerline, which is also not true, because the centerline is also in the fully turbulent region. Reichardt (1951) suggested the following empirical correlation for the eddy diffusivity:

εM/ν = (κy⁺/6)(1 + r/r0)[1 + 2(r/r0)²]  (5.279)

which becomes εM/ν = κy⁺ near the wall – a result that coincides with mixing length theory. Equation (5.279) produces a finite eddy diffusivity at the centerline.
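The two limits just quoted for Reichardt's correlation can be checked numerically. The sketch below is our own (not from the text), and the von Kármán constant κ = 0.4 is an assumed value:

```python
# Sketch: evaluate Reichardt's eddy-diffusivity correlation, eq. (5.279),
#   eps_M/nu = (kappa*y+/6) * (1 + r/r0) * (1 + 2*(r/r0)**2),
# and confirm the two limits stated in the text.

KAPPA = 0.4  # von Karman constant (assumed value; not restated in the text here)

def reichardt_eddy_diffusivity(y_plus, r_over_r0):
    """eps_M/nu from eq. (5.279)."""
    return (KAPPA * y_plus / 6.0) * (1.0 + r_over_r0) * (1.0 + 2.0 * r_over_r0**2)

# Near the wall (r -> r0) the two bracketed factors give (1+1)(1+2) = 6,
# so eps_M/nu -> kappa*y+, the mixing-length result.
near_wall = reichardt_eddy_diffusivity(y_plus=10.0, r_over_r0=1.0)
print(near_wall)  # ~ kappa*10 = 4.0

# At the centerline (r = 0) the eddy diffusivity stays finite: kappa*y+/6.
center = reichardt_eddy_diffusivity(y_plus=300.0, r_over_r0=0.0)
print(center)  # kappa*300/6 = 20.0
```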
Assuming εM ≫ ν, eq. (5.273) becomes

τapp = ρεM(∂u/∂y)  (5.280)

Substituting eqs. (5.275) and (5.279), and integrating the resultant equation, the following velocity profile is obtained:

u⁺ = 2.5 ln{3(1 + r/r0)y⁺/[2(1 + 2(r/r0)²)]} + 5.5  (5.281)

which becomes identical to eq. (5.276) near the wall and produces zero slope at the centerline. The friction factor for internal turbulent flow is defined as

cf = τw/(ρum²/2)  (5.282)

where um is the mean velocity over the cross-section of the duct. For axisymmetric flow in a circular tube, it is obtained by

um = (2/r0²)∫_0^r0 u r dr  (5.283)

The definition of the friction factor, eq. (5.282), can be rewritten as

(τw/ρ)^(1/2) = um(cf/2)^(1/2)  (5.284)

For moderate Reynolds numbers, the velocity profile in the entire tube can be approximated as (Kays et al., 2005)

u⁺ = 8.6(y⁺)^(1/7)  (5.285)

At the centerline, where u = uc and y = r0, the centerline velocity satisfies:

uc/(τw/ρ)^(1/2) = 8.6[r0(τw/ρ)^(1/2)/ν]^(1/7)  (5.286)

The velocity at any radius is related to the centerline velocity by

u/uc = (y/r0)^(1/7)  (5.287)

Substituting eq. (5.287) into eq. (5.283), a relationship between the mean velocity and centerline velocity is obtained:

um = 0.817uc  (5.288)

Substituting eqs. (5.284) and (5.288) into eq. (5.286) and considering the definition of the Reynolds number, ReD = umD/ν, the friction coefficient can be obtained as

cf = 0.078 ReD^(−1/4)  (5.289)

which agrees with the experimental data very well up to ReD = 5×10⁴. For even higher Reynolds numbers, the following empirical correlation works better for smooth tubes (see Fig. 5.32):

cf = 0.046 ReD^(−1/5), for 3×10⁴ < ReD < 10⁶  (5.290)

Instead of the one-seventh law, eq. (5.285), the law of the wall, eq. (5.276), can be used to obtain the following correlation:

cf^(−1/2) = 1.737 ln(cf^(1/2) ReD) − 0.396  (5.291)

which is referred to in the literature as the Kármán-Nikuradse relation. Equation (5.291) is valid up to ReD = 10⁶. For fully developed turbulent flow in a non-circular tube, eq. (5.291) is still applicable, provided the hydraulic diameter is used in the definition of the Reynolds number. In this case, the friction coefficient, cf, is defined based on the perimeter-averaged wall shear stress, because the shear stress is no longer uniform around the periphery of the cross-section.

Figure 5.32 Friction factor for duct flow.
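For a feel of how closely the smooth-tube relations agree, the sketch below (our own construction, not from the text) evaluates eqs. (5.289) and (5.290) and solves the implicit Kármán-Nikuradse relation (5.291) by fixed-point iteration:

```python
import math

def cf_power_quarter(re):   # eq. (5.289), agrees with data up to Re_D ~ 5e4
    return 0.078 * re ** (-0.25)

def cf_power_fifth(re):     # eq. (5.290), 3e4 < Re_D < 1e6
    return 0.046 * re ** (-0.2)

def cf_karman_nikuradse(re, tol=1e-12, max_iter=100):
    # Solve cf^(-1/2) = 1.737*ln(cf^(1/2)*Re) - 0.396, eq. (5.291),
    # by fixed-point iteration starting from the power-law estimate.
    cf = cf_power_fifth(re)
    for _ in range(max_iter):
        cf_new = (1.737 * math.log(math.sqrt(cf) * re) - 0.396) ** -2
        if abs(cf_new - cf) < tol:
            return cf_new
        cf = cf_new
    return cf

re = 1.0e5
print(cf_power_quarter(re))     # ~4.39e-3
print(cf_power_fifth(re))       # 4.6e-3
print(cf_karman_nikuradse(re))  # ~4.50e-3; all three agree to within about 5%
```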
When the inner surface of the tube is not smooth, the friction will significantly increase with roughness (see Fig. 5.32). The effect of the surface roughness can be measured by the roughness Reynolds number, defined as:

Rek = ks uτ/ν = ks(τw/ρ)^(1/2)/ν  (5.292)

where ks is the roughness height. When the roughness Reynolds number is greater than 70, the friction coefficient is no longer a strong function of the Reynolds number and becomes a constant; this is referred to as a fully rough surface. In the fully rough surface regime, the roughness size exceeds the order of magnitude of what would have been the thickness of the viscous sublayer for a smooth surface. The friction coefficient for the fully rough surface regime can be obtained from the following empirical correlation:

cf = [1.74 ln(D/ks) + 2.28]^(−2)  (5.293)

5.9.3 Heat Transfer in Fully Developed Turbulent Flow

Heat
transfer in fully developed turbulent flow in a circular tube subject to constant heat flux (q″w = const) will be considered in this subsection (Oosthuizen and Naylor, 1999). When the turbulent flow in the tube is fully developed, we have v = 0 and the energy eq. (5.268) becomes

u(∂T/∂x) = (1/r)(∂/∂r)[r(α + εH)(∂T/∂r)]  (5.294)

After the turbulent flow is hydrodynamically and thermally fully developed, the time-averaged temperature profile is no longer a function of axial distance from the inlet, i.e.,

(∂/∂x)[(Tw − T)/(Tw − Tc)] = 0  (5.295)

where Tc is the time-averaged temperature at the centerline of the tube, and Tw is the wall temperature. Thus, (Tw − T)/(Tw − Tc) is a function of r only, i.e.,

(Tw − T)/(Tw − Tc) = f(r)  (5.296)

where f is independent of x. Differentiating eq. (5.296) yields

∂T/∂x = dTw/dx − [(Tw − T)/(Tw − Tc)](dTw/dx − dTc/dx)  (5.297)

At the wall, the contribution of eddy diffusivity to the heat transfer is negligible, and the heat flux at the wall becomes

q″w = k(∂T/∂r)|_{r=r0}  (5.298)

Substituting eq. (5.296) into eq. (5.298), one obtains:

q″w = −k(Tw − Tc)f′(r0)  (5.299)

Since the heat flux is constant, q″w = const, it follows that (Tw − Tc) = const, i.e.,

dTw/dx = dTc/dx  (5.300)

Therefore, eq. (5.297) becomes:

∂T/∂x = dTw/dx  (5.301)

For fully developed flow, the local heat transfer coefficient is:

hx = q″w/(Tw − Tm) = const  (5.302)

where Tm is the time-averaged mean temperature, defined as:

Tm = [2/(um r0²)]∫_0^r0 uTr dr  (5.303)

Since q″w = const, it follows from eq. (5.302) that (Tw − Tm) = const, i.e.,

dTw/dx = dTm/dx  (5.304)

Combining eqs. (5.300), (5.301) and (5.304), the following relationships are obtained:

∂T/∂x = dTw/dx = dTc/dx = dTm/dx  (5.305)

The time-averaged mean temperature, Tm, changes with x as a result of heat transfer from the tube wall. By following the same procedure as in Example 5.2, the rate of mean temperature change can be obtained as follows:

dTm/dx = 4q″w/(ρcp um D)  (5.306)

Substituting
eq. (5.305) into eq. (5.294), the energy equation becomes:

u(dTm/dx) = [1/(r0 − y)](∂/∂y)[(r0 − y)(α + εH)(∂T/∂y)]  (5.307)

where y = r0 − r is the distance measured from the tube wall. Equation (5.307) is subject to the following two boundary conditions:

∂T/∂y = 0, y = r0 (axisymmetric condition)  (5.308)

T = Tw (unknown), y = 0  (5.309)

Integrating eq. (5.307) in the interval (r0, r) and considering eq. (5.308), we have:

(r0 − y)(α + εH)(∂T/∂y) = (dTm/dx)∫_r0^y (r0 − y)u dy  (5.310)

which can be rearranged to

∂T/∂y = (dTm/dx)·I(y)/[(r0 − y)(α + εH)]  (5.311)

where

I(y) = ∫_r0^y (r0 − y)u dy  (5.312)

Integrating eq. (5.311) in the interval (0, y) and considering eq. (5.309), one obtains:

T − Tw = (dTm/dx)∫_0^y I(y)/[(r0 − y)(α + εH)] dy  (5.313)

If the profiles of the axial velocity and the thermal eddy diffusivity are known, eq. (5.313) can be used to obtain the correlation for internal forced convection heat transfer. With the exception of the very thin viscous sublayer, the velocity profile in most of the tube is fairly flat. Therefore, it is assumed that the time-averaged velocity, u, in eq. (5.312) can be replaced by um, and I(y) becomes:

I(y) ≈ −(um/2)(r0 − y)²  (5.314)

Substituting eqs. (5.314) and (5.306) into eq. (5.313) yields:

T − Tw = −(q″w/ρcp)∫_0^y (1 − y/r0)/(α + εH) dy  (5.315)

which can be rewritten in terms of the wall coordinate as

T − Tw = −(q″w/ρcp)(ρ/τw)^(1/2)∫_0^y⁺ (1 − y/r0)/[1/Pr + (εM/ν)/Prt] dy⁺  (5.316)

where y⁺ is defined in eq. (5.277). To analyze heat transfer in an internal turbulent flow, the entire turbulent boundary layer is divided into three regions: (1) the inner region (y⁺ < 5), (2) the buffer region (5 ≤ y⁺ ≤ 30), and (3) the outer region (y⁺ > 30). In the inner region, εM = εH = 0 and eq. (5.316) becomes

T − Tw = −(q″w/ρcp)(ρ/τw)^(1/2) Pr ∫_0^y⁺ (1 − y/r0) dy⁺  (5.317)
Since the inner region is very thin, y/r0 ≪ 1 and 1 − y/r0 is effectively equal to 1. Therefore, the temperature profile in the inner region becomes:

T − Tw = −(q″w/ρcp)(ρ/τw)^(1/2) Pr y⁺  (5.318)

The temperature at the boundary between the inner and buffer regions (y⁺ = 5), Ts, can be obtained from eq. (5.318) as

Ts − Tw = −5(q″w/ρcp)(ρ/τw)^(1/2) Pr  (5.319)

In the buffer region, where 5 ≤ y⁺ ≤ 30, the eddy diffusivity is:

εM/ν = y⁺/5 − 1  (5.320)

Substituting eq. (5.320) into eq. (5.316) and assuming the turbulent Prandtl number Prt = 1, the following expression is obtained:

T − Ts = −(q″w/ρcp)(ρ/τw)^(1/2)∫_5^y⁺ (1 − y/r0)/[1/Pr + (y⁺/5 − 1)] dy⁺  (5.321)

Since the buffer region is also very thin, 1 − y/r0 in eq. (5.321) is effectively equal to 1. Defining T⁺ = (T − Tw)/[(q″w/ρcp)(ρ/τw)^(1/2)], eq. (5.321) becomes

∫_−5Pr^T⁺ dT⁺ = −∫_5^y⁺ dy⁺/[1/Pr + (y⁺ − 5)/(5Prt)]  (5.322)

Integrating eq. (5.322), with Prt = 1, yields

T − Ts = −5(q″w/ρcp)(ρ/τw)^(1/2) ln[(Pr y⁺)/5 − Pr + 1], 5 < y⁺ < 30  (5.323)

The temperature at the top of the buffer region, where y⁺ = 30, Tb, becomes

Tb − Ts = −5(q″w/ρcp)(ρ/τw)^(1/2) ln(5Pr + 1)  (5.324)

For the outer region, where εM ≫ ν and εH ≫ α, eq. (5.316) becomes

T − Tb = −(q″w/ρcp)(ρ/τw)^(1/2)∫_30^y⁺ (1 − y/r0)/(εM/ν) dy⁺  (5.325)

where the turbulent Prandtl number is assumed to be equal to 1. It is assumed that the Nikuradse equation (5.276) is valid in the outer region, so the velocity gradient in this region becomes:

∂u⁺/∂y⁺ = 2.5/y⁺  (5.326)

The expression for the apparent shear stress in this region, eq. (5.280), can be nondimensionalized using eqs. (5.277) and (5.278) as:

τapp/τw = (εM/ν)(∂u⁺/∂y⁺)  (5.327)

Substituting eqs. (5.275) and (5.326) into eq. (5.327), the eddy diffusivity in the outer region is obtained as:

εM/ν = (y⁺/2.5)(1 − y/r0)  (5.328)

Substituting eq. (5.328) into eq. (5.325), the temperature distribution in this region becomes:

T − Tb = −2.5(q″w/ρcp)(ρ/τw)^(1/2)∫_30^y⁺ dy⁺/y⁺ = −2.5(q″w/ρcp)(ρ/τw)^(1/2) ln(y⁺/30)  (5.329)

which is valid from y⁺ = 30 to the center of the tube, where yc = r0, or

yc⁺ = (r0/ν)(τw/ρ)^(1/2)  (5.330)

The temperature at the center of the tube, Tc, can be obtained by letting y⁺ = yc⁺ in eq. (5.329), i.e.,

Tc − Tb = −2.5(q″w/ρcp)(ρ/τw)^(1/2) ln[(r0/30ν)(τw/ρ)^(1/2)]  (5.331)

The overall temperature change from the wall to the center of the tube can be obtained by adding eqs. (5.319), (5.324) and (5.331):

Tw − Tc = (q″w/ρcp)(ρ/τw)^(1/2){2.5 ln[(r0/30ν)(τw/ρ)^(1/2)] + 5 ln(5Pr + 1) + 5Pr}  (5.332)

It follows from the definition of the friction factor, eq. (5.282), that

τw = cf ρum²/2  (5.333)

Substituting eq. (5.333) into eq. (5.332) and considering the definition of the Reynolds number, ReD = umD/ν, eq. (5.332) becomes:

Tw − Tc = [q″w/(ρcp um)](2/cf)^(1/2){2.5 ln[(ReD/60)(cf/2)^(1/2)] + 5 ln(5Pr + 1) + 5Pr}  (5.334)

In order to obtain the heat transfer coefficient, h = q″w/(Tw − Tm), the temperature
f difference Tw − Tm must be obtained. If the velocity profile can be approximated by eq. (5.287), and the temperature and velocity can also be approximated by the one-seventh law, i.e., Tw − T
y = Tw − Tc ro 1/ 7 , u y = uc ro 1/ 7 (5.335) it follows that Tw − Tm = ro 0 u (Tw − T )2π rdr ro 0 u 2π rdr = 5 (Tw − Tc ) 6 (5.336) Substituting eq. (5.334) into eq.
(5.336) results in: ′′ 5 qw Tw − Tm = ρc u 6 p m Re 2 2.5ln D c 60 f cf + 5ln(5 Pr + 1) + 5 Pr 2 (5.337) which can be rearranged to the following empirical
correlation Chapter 5 Internal Forced Convective Heat and Mass Transfer 507 Amir Faghri, Yuwen Zhang, and John Howell Copyright © 2010 Global Digital Press Nu D = Re D Pr cf Re 5 2.5ln D 60
6 2 cf + 5ln(5 Pr + 1) + 5 Pr 2 (5.338) which can be used together with appropriate friction coefficient discussed in the previous subsection to obtain the Nusselt number. Example
5.7: Water is to be heated from 30 °C to 70 °C by flowing through a smooth tube with a diameter of 50 mm, heated at constant heat flux. If the mass flow rate of the water is 2.5 kg/s, what is the heat transfer coefficient?

Solution: The properties of water can be determined at the average temperature Tm = (Tm,i + Tm,e)/2 = (30 + 70)/2 = 50 °C. From Table C.8, we have ρ = 988.1 kg/m³, cp = 4180 J/kg-°C, ν = 0.554 × 10⁻⁶ m²/s, k = 0.64 W/m-°C, and Pr = 3.57. The mean velocity of the water is

um = 4ṁ/(ρπD²) = (4 × 2.5)/(988.1 × π × 0.05²) = 1.288 m/s

The Reynolds number is

ReD = umD/ν = (1.288 × 0.05)/(0.554 × 10⁻⁶) = 1.16 × 10⁵

The friction coefficient at this Reynolds number can be found from eq. (5.290), i.e.,

cf = 0.046 ReD^(−1/5) = 0.046 × (1.16 × 10⁵)^(−1/5) = 4.46 × 10⁻³

The Nusselt number can be obtained from eq. (5.338), i.e.,

NuD = ReD Pr (cf/2)^(1/2)/{(5/6)[2.5 ln((ReD/60)(cf/2)^(1/2)) + 5 ln(5Pr + 1) + 5Pr]}
    = 1.16 × 10⁵ × 3.57 × (4.46 × 10⁻³/2)^(1/2)/{(5/6)[2.5 ln((1.16 × 10⁵/60)(4.46 × 10⁻³/2)^(1/2)) + 5 ln(5 × 3.57 + 1) + 5 × 3.57]}
    = 535.56

The heat transfer coefficient is therefore:

h = NuD k/D = (535.56 × 0.64)/0.05 = 6855.17 W/m²-°C
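The arithmetic of Example 5.7 can be reproduced with a short script (a sketch; the small differences from the quoted 535.56 and 6855.17 come from rounding in the hand calculation):

```python
import math

# Water properties from Table C.8 at 50 C, as quoted in Example 5.7.
rho, cp, nu, k, Pr = 988.1, 4180.0, 0.554e-6, 0.64, 3.57
D, mdot = 0.05, 2.5                          # tube diameter (m), mass flow rate (kg/s)

um = 4.0 * mdot / (rho * math.pi * D ** 2)   # mean velocity, ~1.29 m/s
Re = um * D / nu                             # ~1.16e5
cf = 0.046 * Re ** (-0.2)                    # eq. (5.290), ~4.46e-3

# Nusselt number from eq. (5.338).
s = math.sqrt(cf / 2.0)
Nu = Re * Pr * s / ((5.0 / 6.0) * (2.5 * math.log(Re / 60.0 * s)
                                   + 5.0 * math.log(5.0 * Pr + 1.0)
                                   + 5.0 * Pr))
h = Nu * k / D                               # heat transfer coefficient, W/m^2-C

print(um, Re, cf, Nu, h)                     # Nu ~537, h ~6.87e3
```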
References

Aydin, O., M. Avci, 2006, “Heat and Fluid Flow
Characteristics of Gases in Micropipes,” Int. J. Heat Mass Transfer, Vol. 49, pp. 1723-1730. Bahrami, H., 2009, Personal Communication, Storrs, CT. Bejan, A., 2004, Convection Heat Transfer, 3rd ed.,
John Wiley & Sons, Hoboken, NJ. Blackwell, B. F., 1985, “Numerical Solution of the Graetz Problem for a Bingham Plastic in Laminar Tube Flow with Constant Wall temperature,” ASME J. Heat Transfer,
Vol. 107, pp. 466-468. Burmeister, L.C., 1993, Convective Heat Transfer, 2nd ed., John Wiley & Sons, Hoboken, NJ. Faghri, A., and Buchko, B., 1991, “Experimental and Numerical Analysis of Low
Temperature Heat Pipes with Multiple Heat Sources”, ASME J. Heat Transfer, Vol. 113, pp. 728-734. Heaton, H. S., Reynolds, W. C. and Kays, W. M., 1964, “Heat Transfer in Annular Passages:
Simultaneous Development of Velocity and Temperature Fields in Laminar Flow”, Int. J. Heat Mass Transfer, Vol. 7, pp. 763-781. Hetsroni, G., Mosyak, A., Pogrebnyak, E., Yarin, L.P., 2005, “Fluid Flow
in Micro-channels,” Int. J. Heat Mass Transfer, Vol. 48, pp. 1982-1998. Hooman, K., 2008, “Comments on ‘Viscous-dissipation effects on the heat transfer in a Poiseuille flow’ by O. Aydin and M.
Avci”, Applied Energy, Vol. 85, pp. 70-72. Hornbeck, R.W., 1965, “An All-numerical Method for Heat Transfer in the Inlet of a Tube,” ASME Paper No. 65-WA HT-36. Jeong, H.E., Jeong, J.T., 2006,
“Extended Graetz Problem including Streamwise Conduction and Viscous Dissipation in Microchannel,” Int. J. Heat Mass Transfer, Vol. 49, pp. 2151-2157. Kakaç, S., Shah, R., Aung, W., 1987, Handbook of
Single-Phase Convective Heat Transfer, John Wiley, New York. Kakaç, S., and Yucel, O., 1974, Laminar Flow Heat Transfer in an Annulus with Simultaneous Development of Velocity and Temperature Fields,
Technical and Scientific Council of Turkey, TUBITAK, ISITEK No. 19, Ankara, Turkey. Kandlikar, S.G., Garimella, S., Li, D., Colin, S. and King, M.R. (2006), Heat Transfer and Fluid Flow in
Minichannels and Microchannels, Elsevier, San Diego, CA, USA. Karniadakis, G., Beskok, A., and Narayan, A., 2005, Microflows and Nanoflows, Springer Verlag, Berlin. Kays, W.M., Crawford, M.E., and Weigand, B., 2005, Convective Heat Transfer, 4th
ed., McGraw-Hill, New York, NY. Kurosaki, Y., 1973, “Coupled Heat and Mass Transfer in a Flow between Parallel Flat Plate (Uniform Heat Flux),” Journal of the Japan Society of Mechanical Engineers,
Part B, Vol. 39, pp. 2512-2521 (in Japanese). Langhaar, H.L., 1942, “Steady Flow in the Transition Length of a Straight Tube,” J. Appl. Mech., Vol. 9, pp. A55-A58. Lyche, B. C., and Bird, R. B., 1956,
“Graetz-Nusselt Problem for a Power-Law Non-Newtonian Fluid,” Chem. Eng. Sci., Vol. 6, pp. 35-41. Moody, L. F., 1944, “Friction Factors for Pipe Flow,” Trans. ASME, Vol. 66, pp. 671-684. Myong, R.S.,
Lockerby, D.A., Reese, J.M., 2006, “The Effect of Gaseous Slip on Microscale Heat Transfer: An Extended Graetz Problem,” Int. J. Heat Mass Transfer, Vol. 49, pp. 2502-2513. Oosthuizen, P.H., and
Naylor, D., 1999, Introduction to Convective Heat Transfer Analysis, WCB/McGraw-Hill, New York. Patankar, S.V., 1980, Numerical Heat Transfer and Fluid Flow, Hemisphere, Washington, DC. Patankar,
S.V., 1991, Computation of Conduction and Duct Flow Heat Transfer, Innovative Research. Reichardt, H., 1951, “Die Grundlagen des turbulenten Wärmeüberganges,” Arch. Gesamte Waermetech, Vol. 2, pp.
129-142. Sellars, J.R., Tribus, M., and Klein, J.S., 1956, “Heat Transfer to Laminar Flow in a Flat Conduit – The Graetz Problem Extended,” Trans. ASME, Vol. 78, pp. 441-448. Shah, R. K., and London, A. L., 1974, “Thermal Boundary Conditions and Some Solutions for Laminar Duct Flow Forced Convection,” ASME J. Heat Transfer, Vol. 96, pp. 159-165. Shah, R. K., and London, A. L., 1978, “Laminar Flow
Convection in Ducts,” Advances in Heat Transfer, Supplement 1, Irvine, T. F. and Harnett, J.P., Eds., Academic Press, San Diego, CA. Siegel, R., Sparrow, E.M., and Hallman, T.M., 1958, “Steady
Laminar Heat Transfer in a Circular Tube with Prescribed Wall Heat Flux,” Appl. Sci. Res., Ser. A, Vol. 7, pp. 386-392. Tunc, G., and Bayazitoglu, Y., 2001, “Heat Transfer in Microtubes with Viscous
Dissipation,” Int. J. Heat Mass Transfer, Vol. 44, pp. 2395-2403. Tunc, G., and Bayazitoglu, Y., 2002, “Heat Transfer in Rectangular Microchannels,” Int. J. Heat Mass Transfer, Vol. 45, pp. 765-773.
Whiteman, I. R., and Drake, W. B., 1980, Trans. ASME, Vol. 80, pp. 728-732. Zhang,
Y., 2002, “Coupled Forced Convective Heat and Mass Transfer in a Circular Tube with External Convective Heating,” Progress of Computational Fluid Dynamics Journal, Vol. 2, pp. 90-96. Zhang, Y., and
Chen, Z.Q., 1990, “Analytical Solution of Coupled Laminar Heat-Mass Transfer inside a Tube with Adiabatic External Wall,” Proceedings of the 3rd National Interuniversity Conference on Engineering
Thermophysics, Xi’an Jiaotong University Press, Xi’an, China, pp. 341-345. Zhang, Y., and Chen, Z.Q., 1992, “Analytical Solution of Coupled Laminar Heat- Mass Transfer in a Tube with Uniform Heat
Flux,” Journal of Thermal Science, Vol. 1, No. 3, pp. 184-188.

Problems

5.1. Estimate the hydrodynamic entry length for laminar flow with constant properties inside ducts using the integral methods developed in Chapter 5 for flow over a flat plate.

5.2. Obtain the Nusselt number for laminar flow in a circular pipe assuming “slug flow” and a fully developed temperature profile with constant wall temperature.

5.3. Develop both the temperature distribution and Nusselt number for laminar flow as a function of r and x for a developing temperature profile with constant wall temperature by assuming “slug flow” and using the separation of variables technique. What is the limit of the Nusselt number as x → ∞? Is there a physical significance for the results as x → ∞?

5.4. Repeat Problem 5.3 for the case of constant heat flux at the wall.

5.5. Consider the flow and heat transfer between two infinite parallel plates with uniform inlet temperature, assuming “slug flow” and the same constant wall temperature on both walls. Determine the dimensionless temperature distribution, mean temperature, and Nusselt number for the developing temperature region.

5.6. Repeat Problem 5.5 for the case of the same constant heat flux in each wall.

5.7. Discuss the physical significance of the result for Example 5.4 if q″o/q″i = 1 rather than 0.346.

5.8. Develop the governing equations and boundary conditions for laminar, steady
and two-dimensional forced convective heat transfer in a plane duct with developing velocity and temperature profiles for an incompressible Newtonian fluid with constant wall temperature using the
parabolic governing equation or boundary layer assumption (see Fig. P5.1). Nondimensionalize the conservation governing equations and boundary conditions so that the only dimensionless parameter in
the governing equations and boundary conditions is the Prandtl number.

Fig. P5.1 (plane duct of height 2H; coordinates x, u and y, v)

5.9. Calculate the heat transfer coefficient and heat transfer rate for laminar fully developed flow and
temperature profile of water in a circular pipe of ¼” diameter with constant wall temperature of 80ºC. Assume an inlet temperature of 50ºC. 5.10. Repeat Problem 5.9 for air as the working fluid
instead of water. 5.11. Repeat Problem 5.9 for oil as the working fluid instead of water. 5.12. Air at 300 K enters a 1.5 cm ID tube, 30 cm long, with constant wall temperature of 340 K and mean inlet
velocity of 0.6m/s. Determine the pressure drop, drag force, heat transfer coefficient and outlet temperature by assuming fully developed flow and temperature profile. Discuss the appropriateness of
assuming a fully developed flow and temperature profile. 5.13. Repeat Problem 5.12 for oil as the working fluid instead of air. 5.14. Determine the Nusselt number for fully developed flow and
temperature profile in a plane duct (i.e. between two large parallel plates; see Fig. P5.2) for the case of the same constant wall heat flux in both walls. 5.15. Repeat Problem 5.14 with one wall
kept at uniform flux and the other wall insulated.

Fig. P5.2 (plane duct of height H between two walls at Tw; coordinates x, u and y, v)
5.16. It is common practice (but not acceptable) that when dealing with flow and heat transfer in non-circular tubes, one can use the results of flow in circular tubes based on hydraulic diameter (4
× area/wetted perimeter). Use the results of Example 5.4 to show that this gives significant error for the case of flow between parallel plates. 5.17. Couette flow consists of a fluid contained
between two parallel plates where one moves with constant velocity U and the other one is stationary (see Fig. P5.3). Furthermore, assume the two plates are porous and fluid enters the space between
the plates through both plates with constant velocity vw. Assume steady and laminar flow with no axial pressure gradient, as well as no end effects. The upper and lower walls are kept at uniform
temperature T1 and T2, respectively. Determine the temperature distribution within the fluid, and the shear stress and heat flux at the walls.

Fig. P5.3 (Couette flow with porous walls: upper plate moving at velocity U, plate spacing H, coordinates x and y)

5.18. For sublimation inside a
circular tube subject to constant heat flux heating (see Section 5.5), show that the dimensionless mean temperature and concentration are related by θ m + ϕm − ϕ0 = 4ξ . 5.19. Show that the fully
developed dimensionless temperature and mass fraction distributions for sublimation inside a circular tube subject to constant heat flux heating discussed in Section 5.5 are eqs. (5.132) and (5.133).
5.20. The inner surface of a circular tube with radius R is coated with a layer of sublimable material, and the outer wall of the tube is kept at a constant temperature Tw . The fully developed gas
enters the tube with a uniform inlet mass fraction of the sublimable substance ω0 that equals the saturation mass fraction corresponding to the inlet temperature T0. The thermal and mass
diffusivities are assumed to be the same, i.e., Le = 1. Find the local Nusselt number based on convective heat flux and the total heat flux at the wall, and the local Sherwood number. 5.21. Obtain
the fully developed Nusselt number based on convective heat flux and the total heat flux at the wall, and the local Sherwood number for the sublimation problem discussed in Example 5.5. 5.22. Develop
an analytical solution, which shows that the fully developed Nusselt number for constant wall heat flux in rectangular microchannels is given by eq. (5.260). 5.23. If the tube in Example 5.7 is heated at a constant wall temperature of 100 °C and eq. (5.338)
is assumed to be valid under constant wall temperature, what is the length of the tube? 5.24. Obtain the Nusselt number for fully developed internal turbulent flow based on analogy between momentum
and heat transfer. The Prandtl number and turbulent Prandtl number can be assumed to be equal to 1.
POEMS-MATH

Sammy Square
Sammy Square is my name
My 4 sides are just the same
Turn me around, I don't care
I'm always the same, I'm a square!
Danny Diamond
I am Danny Diamond
I am like a kite
But I'm really just a square
Whose corners are pulled tight
Ricky Rectangle
Ricky rectangle is my name
My 4 sides are not the same
2 are short and 2 are long
Count my sides, come along
Ollie Oval
I am Ollie Oval
A football shape is mine
Some people think that I'm an egg
But I think I look fine!
Tommy Triangle
Tommy triangle is the name for me.
Tap my sides 1, 2, 3.
Harry Heart
Harry Heart is my name
The shape I make is my fame
With a point on the bottom
And two humps on top
When it comes to love
I just can't stop!
Make A Triangle
(tune: Three Blind Mice)
One, two, three; one, two, three
Do you see? Do you see?
Up the hill and to the top
Down the hill--and then you stop
Straight across; tell me what have you got?
A triangle--a triangle!
Make A Square
(tune: Twinkle, Twinkle)
From the bottom to the top
Straight across and then you stop
Straight down to the bottom again
Across and stop where you began
If the lines are the same size
Then a square is your surprise.
Make A Rectangle
A long line at the bottom
A long line at the top
A short line to connect each side
A rectangle you've got!
Now a short line at the bottom
A short line at the top
A long line to connect each side
A rectangle you've got!
Make A Circle
(tune: Pop Goes the Weasel)
Round and round on the paper I go
What fun to go around like so
What have I made, do you know?
I made a circle!
What Shape Is This?
(Song - Sung to The Muffin Man)
Do you know what shape this is?
What shape this is, what shape this is?
Do you know what shape this is,
I'm holding in my hand?
(Sung to Are You Sleeping?)
This is a square, this is a square
How can you tell? How can you tell?
It has four sides, all the same size
It's a square; it's a square
This is a circle, this is a circle
How can you tell? How can you tell?
It goes round and round, no end can be found
It's a circle; it's a circle
This is a triangle, this is a triangle
How can you tell? How can you tell?
It only has three sides that join to make three points
It's a triangle; it's a triangle
This is a rectangle, this is a rectangle
How can you tell? How can you tell?
It has two short sides, and it has two long sides
It's a rectangle; it's a rectangle
The Square Song
(Sung to You Are My Sunshine)
I am a square, a lovely square
I have four sides; they're all the same
I have four corners, four lovely corners
I am a square, that is my name
The Rolling Circle Song
(Sung to Have You Ever Seen A Lassie?)
Have you ever seen a circle, a circle, a circle?
Have you ever seen a circle, which goes round and round?
It rolls this way and that way, and that way and this way.
Have you ever seen a circle, which goes round and round?
Penny, penny, easy spent,
Copper brown and worth one cent.
Nickel, nickel, thick and fat,
You’re worth 5. I know that.
Dime, dime, little and thin,
I remember—you’re worth 10.
Quarter, quarter, big and bold,
You’re worth 25, I am told.
Half a dollar, half a dollar,
Giant size.
50 cents to buy some fries.
Dollar, dollar, green and long,
With 100 cents you can’t go wrong.
Penny Poems
Penny, penny,
Easily spent.
Copper brown
and worth one cent.
Nickel Poem
Nickel, nickel,
Thick and fat.
You're worth five cents,
I know that.
Dime Poem
Dime, dime,
Little and thin.
I remember,
you're worth ten.
Quarter Poem
Quarter, quarter,
big and bold.
You're worth twenty-five
I am told.
A Penny Is One Cent
A penny is one cent (stamp your foot)
A nickel is five (slap your thigh)
A dime is ten cents (clap your hands)
A quarter twenty-five (snap fingers over your head).
How many cents have I on this try?
The Penny
See the shiny penny, brown as it can be,
Showing Abe Lincoln for all of us to see.
He had a bushy beard and a tall black hat.
A penny's worth one cent. How about that?
The Nickel
Thomas Jefferson will be found
On a nickel, shiny, smooth, and round.
His home, Monticello, is on the other side.
A nickel is worth five cents. Say it with pride.
The Dime
A dime is the smallest coin of them all,
With Roosevelt posing nice and tall.
A dime is worth ten cents. Don't you agree?
Which makes Roosevelt as happy as can be!
Penny Poem
Abraham Lincoln, good and kind,
Was honored and loved by many.
To help us remember this president,
We put his face on the penny!
By Lisa Conrad
One quarter-- twenty-five. Let's do a quarter jive!
Two quarters -- fifty. That's really nifty!
Three quarters -- seventy-five. Old man's glad to be ALIVE.
Four quarters -- a dollar. Let's give a little hollar! Yeh
Counting to One Hundred
I'll be counting to one hundred,
It will take about a year!
Because I've never seen one hundred,
But it's HUGE, that's what I hear!
I'll be nine when I start counting,
In a year I'll reach the end.
I'll just count and count some more and
When I'm done I will be ten!
So, I'm counting to one hundred
In a voice that's loud and clear.
Now I'm half way to one hundred,
And it didn't take a year!
I'm not sure where I got these math poems...I think they were on one of my webrings.
Length is Fun
12, 12, 12
Inches make a foot.
Measure short things with the foot.
Length is so much fun.
3, 3, 3
Feet are in a yard.
Measure long things with a yard.
Length is so much fun.
Inch, inch, inch
Inches are in yards.
Thirty-six, thirty-six
Inches make a yard.
What time is it? What time is it?
Do you know? Do you know?
The short hand tells the hour.
The long hand tells the minute.
Count by 5’s, Count by 5’s.
5 10 15 20 25 30 35 40 45 50…
We just know what we’ve been told
Mathematics is worth its weight in gold
Studying numbers here and there
Solving problems everywhere.
Sound off…addition!
Sound off…subtraction!
Sound off… multiplication!
Sound off…division!
Sorting numbers is our thing
Finding symmetry is our game
Finding perimeter and area is easy
All because of you and me!
Sound off…strategies
Sound off…problem solving
Sound off…reading numbers
Sound off…one, two, three, four….LET’S GO!!!
Problem Solving
There’s a math problem
Written on a sheet
I can work it out
It’s not such a feat
First I read it twice
find out what it asks
Double underline the info
It’s an easy task
Next I look for clues
Underline them once
Choose a strategy
Sometimes you have to hunt
Now I work it out
I can draw a chart
I will show my work
This is quite an art
Mark your answers clear
Check over your facts
Make sure they are right
Be sure it is intact
Mathematics Bugaloo
I’m a mathematician and I’m here to say
I do mathematics everyday.
Sometimes I just use paper
Sometimes I use my head
Or maybe a drawing will do instead.
Problem solving, number sense,
Figuring out what to do
Doing the mathematics Bugaloo.
First, I study the problem
To get a clue
Just what’s the problem asking me to do?
Next I choose a strategy
Which one shall I try?
Once I have decided
That strategy I apply.
Then I work on problem showing
All the steps too,
Last I check my answer, Bugaloo.
Odd and Even
If you are an even number
You always have a pair
So if you look around
Your buddy will always be there
But ...
If you are an odd number
There's always a lonely one
He looks around to find his buddy
But he's the only one.
Marg Wadsworth
The Faces Of The Clock
The Big Hand is busy
But the Small Hand has power.
The large one counts the minutes.
But the Little One names the hour.
When both Hands stand at the top together,
It's sure to be Twelve O'clock. But whether
That's twelve at noon or twelve at night
Depends on if it's dark or light.
Make a Ten
Make a ten. Make a ten.
We know ways to make a ten.
9+1 and 8+2;
They have sums of ten. It's true.
7+3 and 6+4;
Do you know there are two more?
5+5 and 0+10;
Now let's say them all again!
Joanne Griffin | {"url":"http://www.jologriffin.com/gazillion.cfm?subpage=25701","timestamp":"2014-04-20T18:27:13Z","content_type":null,"content_length":"117855","record_id":"<urn:uuid:e4f2ab73-cf91-4e6f-90df-d8768eb5cd7a>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00258-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solving trigonometric equations
January 28th 2010, 08:48 AM #1
Aug 2009
I have to solve the following equation without using a calculator:
2cos2x=1+cosx, where x is in radians and between 0 and 2 pi.
I used the double angle formulae to get that:
(4cosx +3)(cosx -1)=0,
But how can I figure out what cosx = -3/4 is?
Can somebody help me? Is there an easier way of doing it? Any help is very much appreciated.
Alas, you won't be able to find that by hand; that's where the calculator comes in handy.
There are certain angles easily found, but this is not the case.
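For completeness: cos x = 1 gives only the interval endpoints x = 0 and x = 2π, and cos x = −3/4 has no standard-angle solution, so the remaining answers are simply x = arccos(−3/4) and x = 2π − arccos(−3/4). A quick numerical sketch (Python, added here as a check; it is not part of the original thread) confirms that both satisfy the original equation:

```python
import math

# cos(x) = -3/4 has two solutions in (0, 2*pi); cos(x) = 1 gives only
# the interval endpoints x = 0 and x = 2*pi.
roots = [math.acos(-3 / 4), 2 * math.pi - math.acos(-3 / 4)]

for x in roots:
    # Each root must satisfy the original equation 2*cos(2x) = 1 + cos(x).
    assert abs(2 * math.cos(2 * x) - (1 + math.cos(x))) < 1e-12
```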
January 28th 2010, 09:20 AM #2 | {"url":"http://mathhelpforum.com/trigonometry/125955-solving-trigonometric-equations.html","timestamp":"2014-04-17T14:55:35Z","content_type":null,"content_length":"32888","record_id":"<urn:uuid:a864e2e4-fc82-4639-aeac-476994b0445e>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00409-ip-10-147-4-33.ec2.internal.warc.gz"} |
Questions about use of geometric progressions
April 7th 2009, 06:03 AM #1
Apr 2009
I apologize in advance that this is a more theoretical question. I have taken a couple of college- and graduate-level stats classes, but never precalculus or calc. I am kind of teaching these
things to myself as I come across them in a book about MatLab basics (oh yes, I'm teaching myself that too because someone in our lab needs to be able to use the program for analysis and running
experiments). So my question is...please give an example of how geometric progression can help me in statistics or digital signal processing? Or anything, really...I guess I am just wondering why
I would ever want to determine the sum of a geometric progression.
Geometric progression - Wikipedia, the free encyclopedia
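One concrete motivation, added here as an illustration (it is not from the original reply): geometric sums appear whenever a quantity is repeatedly scaled by a constant factor, e.g. compound interest, or the weights of an exponential smoother in digital signal processing. A short Python sketch checks the closed form S_n = a(1 - r^n)/(1 - r) and shows that exponential-smoothing weights form a geometric progression summing to 1:

```python
# Closed-form geometric sum vs. brute-force summation.
a, r, n = 2.0, 0.5, 20            # first term, common ratio, term count

brute = sum(a * r**k for k in range(n))
closed = a * (1 - r**n) / (1 - r)
assert abs(brute - closed) < 1e-12

# DSP flavor: the smoother y[k] = (1 - r)*x[k] + r*y[k-1] weights past
# samples by (1 - r)*r**k; those weights are a geometric progression
# whose sum tends to 1, so a constant input passes through unchanged.
weights = [(1 - r) * r**k for k in range(200)]
assert abs(sum(weights) - 1.0) < 1e-12
```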
April 7th 2009, 07:27 AM #2 | {"url":"http://mathhelpforum.com/pre-calculus/82688-questions-about-use-geometric-progressions.html","timestamp":"2014-04-18T21:13:25Z","content_type":null,"content_length":"34586","record_id":"<urn:uuid:6187face-8e21-4633-abe2-ee1fd249a8e1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00224-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Differential Approach for Bounding the Index of Graphs under Perturbations
This paper presents bounds for the variation of the spectral radius $\lambda(G)$ of a graph $G$ after some perturbations or local vertex/edge modifications of $G$. The perturbations considered here
are the connection of a new vertex with, say, $g$ vertices of $G$, the addition of a pendant edge (the previous case with $g=1$) and the addition of an edge. The method proposed here is based on
continuous perturbations and the study of their differential inequalities associated. Within rather economical information (namely, the degrees of the vertices involved in the perturbation), the best
possible inequalities are obtained. In addition, the cases when equalities are attained are characterized. The asymptotic behavior of the bounds obtained is also discussed. For instance, if $G$ is a
connected graph and $G_u$ denotes the graph obtained from $G$ by adding a pendant edge at vertex $u$ with degree $\delta_u$, then, $$ \textstyle \lambda(G_u)\le \lambda(G)+\frac{\delta_u}{\lambda^3(G)}+\textrm{o}\left(\frac{1}{\lambda^3(G)}\right). $$
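As a purely numerical illustration of the perturbation studied here (this sketch is an addition, not from the paper), the index before and after adding a pendant edge can be computed by power iteration. For the path P4 one has λ = 2cos(π/5) = (1+√5)/2, and attaching a pendant edge at an end vertex yields P5 with λ = 2cos(π/6) = √3, so the index indeed increases:

```python
import math

def spectral_radius(adj, iters=3000):
    """Largest adjacency eigenvalue via power iteration on A + I.

    The +I shift avoids the +/-lambda oscillation that plain power
    iteration suffers on bipartite graphs such as paths.
    """
    n = len(adj)
    v = [1.0] * n
    mu = 1.0
    for _ in range(iters):
        w = [v[i] + sum(adj[i][j] * v[j] for j in range(n)) for i in range(n)]
        mu = max(abs(x) for x in w)
        v = [x / mu for x in w]
    return mu - 1.0   # eigenvalues of A + I are those of A shifted by 1

def path_graph(n):
    adj = [[0] * n for _ in range(n)]
    for i in range(n - 1):
        adj[i][i + 1] = adj[i + 1][i] = 1
    return adj

lam4 = spectral_radius(path_graph(4))   # lambda(P4) = 2*cos(pi/5)
lam5 = spectral_radius(path_graph(5))   # P4 plus a pendant edge at an end vertex
assert abs(lam4 - (1 + math.sqrt(5)) / 2) < 1e-9
assert abs(lam5 - math.sqrt(3)) < 1e-9
assert lam5 > lam4   # the index increases under the perturbation
```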
Full Text: | {"url":"http://www.combinatorics.org/ojs/index.php/eljc/article/view/v18i1p172/0","timestamp":"2014-04-17T04:04:23Z","content_type":null,"content_length":"15753","record_id":"<urn:uuid:25a5a808-9b53-47d7-967d-370771534e25>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00522-ip-10-147-4-33.ec2.internal.warc.gz"} |
how to show their sum is another distribution
October 12th 2009, 02:00 PM #1
Junior Member
May 2008
how to show their sum is another distribution
The Xi are iid exponentially distributed; show that the sum of n of the Xi has a gamma distribution.
I think this is a typical question, but I have no idea at all. Do you try to show the p.d.f., or is there another way?
Last edited by pengchao1024; October 12th 2009 at 03:03 PM.
Use the product of the MGFs.
The sum of INDEPENDENT gamma's with the same BETA
So if $X_i\sim\Gamma (\alpha_i, \beta)$ then
$\sum_{i=1}^nX_i\sim\Gamma (\sum_{i=1}^n\alpha_i, \beta)$
That's also how you prove the sum of independent chi-squares is a $\chi^2$.
You just add the dfs.
BUT you need independence.
I'm working on a random stopping boot strap problem where the chi-squares are NOT independent.
Anyone want to help me? lol
Last edited by matheagle; October 12th 2009 at 10:39 PM.
How can you find the density function of the Gamma distribution by using convolution?
Well, would you do it with the first two variables, for example, to show the idea of using induction?
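To make the MGF argument concrete (this sketch is an addition, not from the thread): the MGF of an Exp(λ) variable is λ/(λ − t) for t < λ, so the product over n independent copies is (λ/(λ − t))^n, which is the MGF of a Gamma(n, 1/λ) distribution with mean n/λ and variance n/λ². A Monte Carlo check in Python:

```python
import random

random.seed(0)
n, lam, trials = 5, 2.0, 100_000

# Sum n i.i.d. Exp(lam) draws, many times over.
sums = [sum(random.expovariate(lam) for _ in range(n)) for _ in range(trials)]

mean = sum(sums) / trials
var = sum((s - mean) ** 2 for s in sums) / trials

# Gamma(shape=n, scale=1/lam) has mean n/lam and variance n/lam**2.
assert abs(mean - n / lam) < 0.02
assert abs(var - n / lam**2) < 0.05
```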
May 2008 | {"url":"http://mathhelpforum.com/advanced-statistics/107604-how-show-their-sum-another-distribution.html","timestamp":"2014-04-16T19:26:24Z","content_type":null,"content_length":"47397","record_id":"<urn:uuid:bef5e87e-2e46-4cf8-9f55-0c02c1acd022>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00328-ip-10-147-4-33.ec2.internal.warc.gz"} |
Homological Illusions of Persistence and Stability
Abstract (Summary)
In this thesis we explore and extend the theory of persistent homology, which captures topological features of a function by pairing its critical values. The result is represented by a collection of
points in the extended plane called persistence diagram.
We start with the question of ridding the function of topological noise as suggested by its persistence diagram. We give an algorithm for hierarchically finding such epsilon-simplifications on
2-manifolds as well as answer the question of when it is impossible to simplify a function in higher dimensions.
We continue by examining time-varying functions. The original algorithm computes the persistence pairing from an ordering of the simplices in a triangulation and takes worst-case time cubic in the
number of simplices. We describe how to maintain the pairing in linear time per transposition of consecutive simplices. A side effect of the update algorithm is an elementary proof of the stability
of persistence diagrams. We introduce a parametrized family of persistence diagrams called persistence vineyards and illustrate the concept with a vineyard describing a folding of a small peptide. We
also base a simple algorithm to compute the rank invariant of a collection of functions on the update procedure.
Guided by the desire to reconstruct stratified spaces from noisy samples, we use the vineyard of the distance function restricted to a 1-parameter family of neighborhoods of a point to assess the
local homology of a sampled stratified space at that point. We prove the correctness of this assessment under the assumption of a sufficiently dense sample. We also give an algorithm that constructs
the vineyard and makes the local assessment in time at most cubic in the size of the Delaunay triangulation of the point sample.
Finally, to refine the measurement of local homology the thesis extends the notion of persistent homology to sequences of kernels, images, and cokernels of maps induced by inclusions in a filtration
of pairs of spaces. Specifically, we note that persistence in this context is well defined, we prove that the persistence diagrams are stable, and we explain how to compute them. Additionally, we use
image persistence to cope with functions on noisy domains.
Bibliographical Information:
Advisor: Edelsbrunner, Herbert
School: Duke University
School Location: USA - North Carolina
Source Type: Master's Thesis
Keywords: computer science mathematics persistent homology persistence vineyards diagrams stability algorithms
Date of Publication: 08/04/2008 | {"url":"http://www.openthesis.org/documents/Homological-Illusions-Persistence-Stability-269125.html","timestamp":"2014-04-16T16:02:15Z","content_type":null,"content_length":"10172","record_id":"<urn:uuid:8051cd4f-1151-4ec0-9760-a2930ca655dd>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: Matheology § 300
Replies: 15 Last Post: Jul 17, 2013 5:12 PM
fom Re: Matheology § 300
Posted: Jul 15, 2013 8:03 PM
Posts: 1,969
Registered: 12/4/12
On 7/15/2013 4:09 PM, mueckenh@rz.fh-augsburg.de wrote:
> On Monday, 15 July 2013 21:44:37 UTC+2, Zeit Geist wrote:
>> On Monday, July 15, 2013 12:40:03 PM UTC-7, muec...@rz.fh-augsburg.de wrote:
>>> Let the sets where they belong, namely in matheology. Let's talk math, namely about numbers. They are all finite and in finite sets. Any exception known?
>> That is your prerogative. Use any consistent you wish. But be aware that you may implicitly cross the line into the infinite without knowing it.
> There is no line to cross. Every natural number belongs to a finite set. Have you really become so brain-damaged during your study that you can't understand this simple fact? There is
no material in mathematics that could "cross the line to infinity". Infinity cannot be reached. Then it would be finity.
This is only an argument to use
the correct term -- transfinite.
Now you will point out Cantor's
metaphysical beliefs to justify
your agenda...
... and so it goes ...
... and so it goes ...
... and so it goes ...
... and so it goes ...
... and so it goes ... | {"url":"http://mathforum.org/kb/thread.jspa?threadID=2582108&messageID=9170069","timestamp":"2014-04-17T04:32:46Z","content_type":null,"content_length":"32417","record_id":"<urn:uuid:7932886f-edb7-4156-8edd-8ea6ee34c554>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00173-ip-10-147-4-33.ec2.internal.warc.gz"} |
Topic: Outside Activities
Replies: 5 Last Post: Oct 22, 2007 6:13 PM
Re: Outside Activities
Posted: Jul 13, 1995 5:18 PM
How about trying to estimate the number of grains of sand on the surface of
the beach?
The total number of grains of sand?
Using dry sand, what is the ratio of the height of a sand pile to its
Does it change as the pile gets higher or are they all similar?
Does the ratio depend on the "quality" of the sand used?
If you have a bucket full of water, how much sand can you pour into it
before it overflows? How come you can pour any sand at all into a full bucket?
Can you pour more water into a full bucket of sand than you can pour sand
into a full bucket of water?
Some of this isn't exactly math, but it sure sounds like good stuff to me.
I suspect this is only a start.
John Benson
Evanston Township High School 715 South Boulevard
Evanston Illinois 60204 Evanston IL 60202-2907
(708) 492-5848 (708) 492-5848 | {"url":"http://mathforum.org/kb/message.jspa?messageID=1074990","timestamp":"2014-04-17T01:18:21Z","content_type":null,"content_length":"22624","record_id":"<urn:uuid:22a811f8-f817-4767-8053-3a1ae76fa75c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiple choice
February 23rd 2008, 06:58 PM #1
Feb 2008
Multiple choice
can anyone help me with the problem below?
The total cost for five items of repair work on a car was $195. Overhaul of the carburetor cost twice as much as the tune-up, brake pads cost one-third as much as the carburetor overhaul, and
alignment and wheel balancing each cost one-third as much as the tune-up. What did the tune-up cost?
A. $30
B. $45
C. $60
D. $90
E. One cannot tell from the information given.
let $x$ be the cost of the tuneup.
then the cost of the carburetor overhaul is $2x$
the cost of the brake pads is $\frac 13 (2x)$
the costs of alignment and wheel balancing are $\frac 13x$ each.
sum all these, equate to 195 and solve for x
Here's what you do:
When you are given a problem like this you need to read the whole question and see how much information you are given. What is given in this problem is:
Total Cost = 195
Overhaul of the carburetor = 2*tune-up
brake pads = (1/3)*Overhaul of the carburetor=(1/3)*2*tune-up
alignment = (1/3)*tune-up
wheel balancing = (1/3)*tune-up
Now you can take this information and add them together because they all have the same variable in them (i.e. tune-up) and set them equal to 195. So you get:
tune-up + 2*tune-up + (1/3)*2*tune-up + (1/3)*tune-up + (1/3)*tune-up = 195
The answer you get is how much your tune-up cost.
Hope this helps!
Ah, another female mathematician! welcome aboard!
you are female aren't you? otherwise calling yourself "princess" is, well, weird
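For the record, the setup in the replies above can be checked exactly with rational arithmetic (a sketch added here; it is not from the original posts): x + 2x + (2/3)x + (1/3)x + (1/3)x = (13/3)x = 195, so the tune-up cost is x = $45, answer B.

```python
from fractions import Fraction

total = 195
# Multiples of the tune-up cost x for the five items:
# tune-up, carburetor overhaul, brake pads, alignment, wheel balancing.
coeffs = [Fraction(1), Fraction(2), Fraction(2, 3), Fraction(1, 3), Fraction(1, 3)]

assert sum(coeffs) == Fraction(13, 3)
x = Fraction(total) / sum(coeffs)     # solve (13/3) * x = 195
assert x == 45                        # answer B: the tune-up cost $45

# Sanity check: the five item costs add back up to $195.
assert sum(c * x for c in coeffs) == total
```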
February 23rd 2008, 08:37 PM #4 | {"url":"http://mathhelpforum.com/math-topics/28917-multiple-choice.html","timestamp":"2014-04-16T07:41:36Z","content_type":null,"content_length":"43660","record_id":"<urn:uuid:3c08b2a2-6645-4f99-a807-047eea3ef41f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Multiplying Numbers with Decimals
Date: 12/7/95 at 19:34:41
From: Anonymous
Subject: decimals
What is 9.4 x 9.6?
Date: 12/23/95 at 22:49:2
From: Doctor Elise
Subject: Re: decimals
Okay, this is the way you multiply decimals without a
calculator. Basically, it's the same as multiplying whole
numbers, with a twist!
If you had 94 x 96, you'd do it like this:
    94
  x 96
  ----
   564   (this is 6 times 94)
  8460   (this is 9 times 94 times 10)
  ----
  9024   (564 + 8460, which is the answer)
Now, if we have decimals, we count how many
numbers there are to the right of the decimal
point, like this:
in 9.4, the 4 is to the right of the decimal point
so that's one, and in 9.6 the 6 is to the right
of the decimal point, so that's another, for a total
of 2.
Then we put the decimal point in the answer 2 places
to the left of the 'ones' column, like this:

  90.24
If you think about it, you know that 9x9 = 81, and
10x10 = 100, so 9.4 x 9.6 has to be between 81 and 100!
Does this help?
-Doctor Elise, The Geometry Forum | {"url":"http://mathforum.org/library/drmath/view/58901.html","timestamp":"2014-04-20T13:28:52Z","content_type":null,"content_length":"5771","record_id":"<urn:uuid:7e074483-87c3-47a8-ab3a-231ea0998d99>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Triangle List Generator for Sphere
Download Source for Visual Studio 2012
WPF lacks built-in 3D shapes such as spheres, cylinders and cubes. As a result, anyone who wants to add such shapes to a 3D scene has no choice but to generate the mesh with additional code. When I googled and searched CodeProject for a simple way to generate a sphere in WPF, I didn't find clear code I could rely on for a serious project: some examples were too long, others had limitations and ambiguities. The code submitted here generates a triangle list and the corresponding normal vectors for a sphere of unit radius, which can then be moved and resized with simple calculations. It's written in VB.NET and can easily be converted to other .NET-based languages.
Using the code
Triangle Structure
For better manipulation of Trianlges and their Normals, the following structure is developed.
Public Structure Triangle
Dim Point1 As Point3D
Dim Point2 As Point3D
Dim Point3 As Point3D
Dim Normal As Vector3D
End Structure
The points in space should be located so that they all lie at the same distance from the center of the sphere, and the enumerator of these points must be able to keep track of neighboring points. This can be accomplished by converting Cartesian coordinates to spherical coordinates.
The following shows how the parameters r, θ and ɸ replace the Cartesian X, Y and Z.
Since r is assumed to be unity, it can be omitted. The figure below shows how the enumerator passes through the points on the sphere's surface: the ɸ enumerator finds adjacent points on a given Z level, while the θ enumerator iterates through all Z levels to generate the surface points.
A closer look at the sphere's surface reveals how the triangles are generated. Each θ iteration represents a point that may be shared by up to 8 triangles. To avoid generating duplicate triangles, the algorithm places only two triangles per point: one ɸ iteration forward and one θ level backward, as shown below.
Now it's time to write the code.
Generating Sphere Points
The CreateTriangleListForSphere function returns an ArrayList that contains Triangles. First, the following code generates the points on the surface of the sphere and adds them to the PointList()() array. The Density parameter determines the number of ɸ subdivisions for the sphere.
For tita As Integer = 0 To Density
    Dim vtita As Double = tita * (Math.PI / Density)
    For nphi As Integer = -Density To Density
        Dim vphi As Double = nphi * (Math.PI / Density)
        PointList(tita)(nphi + Density).X = Math.Sin(vtita) * Math.Cos(vphi)
        PointList(tita)(nphi + Density).Y = Math.Sin(vtita) * Math.Sin(vphi)
        PointList(tita)(nphi + Density).Z = Math.Cos(vtita)
    Next nphi
Next tita
Generating Triangles
The iteration below connects the point lists according to the algorithm described above. Triangle1 and Triangle2 are the two triangles explained and illustrated already. Since the sphere is centered at the origin, the normal vectors can simply be taken as the positions of the points on the sphere's surface.
Dim TriangleList As New ArrayList
For n_tita As Integer = 1 To PointList.GetLength(0) - 1
    For n_phi As Integer = 0 To PointList(n_tita).GetLength(0) - 2
        Dim Triangle1, Triangle2 As Triangle
        Triangle1.Point1 = PointList(n_tita)(n_phi)
        Triangle1.Point2 = PointList(n_tita)(n_phi + 1)
        Triangle1.Point3 = PointList(n_tita - 1)(n_phi)
        Triangle1.Normal = New Vector3D(Triangle1.Point1.X, Triangle1.Point1.Y, Triangle1.Point1.Z)
        Triangle2.Point1 = PointList(n_tita)(n_phi + 1)
        Triangle2.Point2 = PointList(n_tita - 1)(n_phi + 1)
        Triangle2.Point3 = PointList(n_tita - 1)(n_phi)
        ' The normal of the second triangle is based on its own first vertex.
        Triangle2.Normal = New Vector3D(Triangle2.Point1.X, Triangle2.Point1.Y, Triangle2.Point1.Z)
        ' Add both triangles so the function actually returns them.
        TriangleList.Add(Triangle1)
        TriangleList.Add(Triangle2)
    Next n_phi
Next n_tita
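Before wiring the mesh into WPF, the point-grid math itself can be sanity-checked in any language. The sketch below (Python, purely illustrative; the VB.NET code above is the actual implementation) regenerates the grid and asserts that every point lies on the unit sphere and that the double loop yields 4·Density² triangles:

```python
import math

DENSITY = 8   # grid resolution; any positive integer works

# Rebuild the theta/phi point grid described in the article.
points = []
for tita in range(DENSITY + 1):
    vtita = tita * math.pi / DENSITY
    row = []
    for nphi in range(-DENSITY, DENSITY + 1):
        vphi = nphi * math.pi / DENSITY
        row.append((math.sin(vtita) * math.cos(vphi),
                    math.sin(vtita) * math.sin(vphi),
                    math.cos(vtita)))
    points.append(row)

# Every generated point lies on the unit sphere.
for row in points:
    for (px, py, pz) in row:
        assert abs(math.sqrt(px * px + py * py + pz * pz) - 1.0) < 1e-12

# The double loop over the grid emits 2 triangles per cell:
# (rows - 1) theta levels x (cols - 1) phi steps x 2 = 4 * Density^2.
rows, cols = len(points), len(points[0])
assert (rows, cols) == (DENSITY + 1, 2 * DENSITY + 1)
assert (rows - 1) * (cols - 1) * 2 == 4 * DENSITY * DENSITY
```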
1) To enhance application performance, the triangle list can be saved to disk and then loaded into memory instead of calling the CreateTriangleListForSphere function each time a sphere is needed.
2) For bigger projects it is better to use triangle strips rather than an ArrayList of Triangles. That will reduce memory, CPU and GPU usage significantly.
3) The two loops above can easily be merged for best performance.
Adding a custom sphere requires calling CreateTriangleListForSphere and then resizing and moving the triangle points: Point1, Point2 and Point3. The Normal vector remains intact. | {"url":"http://www.codeproject.com/Articles/699565/Triangle-List-Generator-for-Sphere","timestamp":"2014-04-17T17:33:11Z","content_type":null,"content_length":"78915","record_id":"<urn:uuid:0896d16c-9395-4d8a-9d9e-50342c1407a5>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Topic: uniform prior
Replies: 6 Last Post: Dec 4, 2012 2:40 AM
uniform prior
Posted: Nov 27, 2012 3:06 AM
Does anyone know how to compute the uniform reference prior for the mean of a Gaussian distribution? I.e.,
I want to compute P(mean)
where mean is a vector with 3 elements. | {"url":"http://mathforum.org/kb/message.jspa?messageID=7928608","timestamp":"2014-04-18T11:41:07Z","content_type":null,"content_length":"22883","record_id":"<urn:uuid:43ce658e-ad8b-4f55-9971-5ca77f57be5c>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
some problems ??
I need to solve these questions for an assignment in HEDVA (Calculus, in Hebrew).
If the two functions are infinitely differentiable on some open interval, then
$(fg)'=f'g+fg'$
$(fg)''=(f'g+fg')'=f''g+f'g'+f'g'+fg''=f''g+2f'g'+fg''$
$(fg)'''=(f''g+2f'g'+fg'')'=f'''g+f''g'+2f''g'+2f'g''+f'g''+fg'''=f'''g+3f''g'+3f'g''+fg'''$
I hope you recognize the pattern: it follows the binomial coefficients. (You can support this with induction.) Thus,
$(fg)^{(n)}=\sum_{k=0}^n {n \choose k}f^{(k)}g^{(n-k)}$
Thus, you can think of
$\sin x\sin 2x\sin 3x=(\sin x\sin 2x)\sin 3x$
and apply the theorem (twice, because you still need to do the other product). Thus,
$\sum_{k=0}^n {n\choose k}(\sin x\sin 2x)^{(k)}(\sin 3x)^{(n-k)}$
Apply the theorem again:
$\sum_{k=0}^n {n\choose k}\cdot \sum_{i=0}^k {k\choose i} (\sin x)^{(i)}(\sin 2x)^{(k-i)} \cdot (\sin 3x)^{(n-k)}$ | {"url":"http://mathhelpforum.com/calculus/9187-some-problems.html","timestamp":"2014-04-20T02:15:30Z","content_type":null,"content_length":"36994","record_id":"<urn:uuid:da51c355-c34d-4a03-b88f-aa43a3c36782>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00274-ip-10-147-4-33.ec2.internal.warc.gz"}
Semiclassical analysis of two-level collective population inversion using photonic crystals in three-dimensional systems
I theoretically demonstrate the population inversion of collective two-level atoms using photonic crystals in three-dimensional (3D) systems by self-consistent solution of the semiclassical
Maxwell-Bloch equations. In the semiclassical theory, while electrons are quantized to ground and excited states, electromagnetic fields are treated classically. For control of spontaneous emission
and steady-state population inversion of two-level atoms driven by an external laser which is generally considered impossible, large contrasts of electromagnetic local densities of states (EM LDOS’s)
are necessary. When a large number of two-level atoms are coherently excited (Dicke model), the above properties can be recaptured by the Maxwell-Bloch equations based on the first-principle
calculation. In this paper, I focus on the realistic 1D PC’s with finite structures perpendicular to periodic directions in 3D systems. In such structures, there appear pseudo photonic band gaps
(PBG’s) in which light leaks into air regions, unlike complete PBG’s. Nevertheless, these pseudo PBG’s provide large contrasts of EM LDOS’s in the vicinity of the upper photonic band edges. I show
that the realistic 1D PC’s in 3D systems enable the control of spontaneous emission and population inversion of collective two-level atoms driven by an external laser. This finding facilitates
experimental fabrication and realization.
© 2012 OSA
OCIS Codes
(270.0270) Quantum optics : Quantum optics
(050.5298) Diffraction and gratings : Photonic crystals
ToC Category:
Quantum Optics
Original Manuscript: May 29, 2012
Revised Manuscript: June 15, 2012
Manuscript Accepted: June 25, 2012
Published: July 12, 2012
Hiroyuki Takeda, "Semiclassical analysis of two-level collective population inversion using photonic crystals in three-dimensional systems," Opt. Express 20, 17201-17213 (2012)
Sort: Year | Journal | Reset
Recently, I attended an industry summit that had an expert’s panel where people provided their best tips. One Excel tip involved filtering cells with a certain number of words. A nice tip, but it
left some attendees wondering where Excel’s word count function was. The program doesn’t have this feature, but you can get the answer by creating an Excel formula to count words. (Includes sample Excel worksheet and formula.)
The summit tip was for a specific product, but there are other situations where you might want to count words in an Excel cell. I do a similar process when I pull a list of site search queries from my web analytics to see people’s interests. I look at all the cells, but I filter out one-word entries. The reason is that it’s hard for me to guess the searcher’s intent based on one word.
Setting the Stage for Counting Words
The key to counting words in Excel is to correctly identify the spaces between words. You need to remove leading and trailing spaces in the cells or the count will be inflated. There are a couple of
ways to do this. A simple way is to use a free Excel add-on I’ve reviewed called ASAP Utilities.
Another way is to use the Excel TRIM function. The trim function removes leading and trailing spaces in a cell. This text function also removes extra spaces between words to just one space. In the
table below, you can see that the spaces aren’t always obvious.
│Before │Scenario │After Trim Function │
│Example text│Leading space │Example text │
│Example text│Double space │Example text │
│Example text│Trailing space │Example text │
In addition to TRIM, I’ll also use Excel’s LEN and SUBSTITUTE functions. These are also considered TEXT functions.
LEN returns the number of characters in a string. In my case, it returns the character count for each cell. Since a space is considered a character, it is counted.
SUBSTITUTE is similar to “search and replace” on a cell except we can specify how many times the substitution should occur. For example, you could indicate once, all, or a specific number.
As an example, the formula =SUBSTITUTE(A1,"example","sample") would replace the word “example” with “sample” in cell A1.
For our purposes, we want to substitute a space “ “ with nothing. Effectively, the function removes all spaces so the words run together. “Example text” would change to “Exampletext”.
Understanding the Word Count Formula
One nice feature of Excel is that you can nest formulas that include multiple functions. As an example, I’ll use LEN, TRIM and SUBSTITUTE in this formula.
To get the word count for cell A2 in my spreadsheet, I would use this formula in B2:
=IF(LEN(TRIM(A2))=0,0,LEN(TRIM(A2))-LEN(SUBSTITUTE(A2," ",""))+1)
While stringing Excel functions together is efficient, it may make the formula intimidating.
Let me break it down for you.
1. We TRIM any extra spaces in cell A2 and determine if the cell is blank by using =IF(LEN(TRIM(A2))=0,0. If the cell is blank, it assigns the word count as 0.
2. If A2 isn’t blank, we count the characters in the cell using LEN(TRIM(A2)). You might think of this as our starting character count inclusive of spaces. The superfluous spaces have been removed.
3. We use -LEN(SUBSTITUTE(A2," ","")) to remove the remaining spaces. We then count the characters in this new string.
4. We take the LEN count from Step 2 and subtract the LEN count from Step 3. We then add one to the result to adjust for the first word.
If you prefer word problems, think of it this way. If the cell is empty, make the word count = 0. Otherwise, remove the extra spaces and count the characters in the cell. Hold that value as “A”. Now,
remove all spaces in that cell and count the characters again. Hold that value as “B”. Your word count is (A-B) + 1.
“Sample example text” = LEN count of 19. This is your “A”.
“Sampleexampletext” = LEN count of 17. This is your “B”.
(19-17)+1 = word count of 3.
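For readers who want to check the logic outside Excel, the same trim/count/strip/count steps translate to a few lines of Python — a hypothetical sketch of my own, not part of the original tutorial:

```python
import re

def excel_word_count(cell):
    """Rough Python equivalent of
    =IF(LEN(TRIM(A2))=0,0,LEN(TRIM(A2))-LEN(SUBSTITUTE(A2," ",""))+1)"""
    trimmed = re.sub(r" +", " ", cell).strip()  # TRIM: collapse repeats, drop ends
    if len(trimmed) == 0:                       # blank cell -> word count of 0
        return 0
    no_spaces = cell.replace(" ", "")           # SUBSTITUTE(A2," ","")
    return len(trimmed) - len(no_spaces) + 1    # (A - B) + 1

print(excel_word_count("Sample example text"))  # prints 3
```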
After writing this formula, I have a new appreciation for the ease with which some programs, like Microsoft Word, can return a word count. If you want to see an example with these formulas,
you can download the sample Excel spreadsheet.
Related Excel Tutorials
Last Updated (Friday, 18 June 2010 18:36)
August 16th 2007, 09:54 PM
Part 1: Suppose that the number of new homes built, H, in a city over a period of time, t, is graphed on a rectangular coordinate system where time is on the horizontal axis. Suppose that the
number of homes built can be modeled by an exponential function, H= p * at , where p is the number of new homes built in the first year recorded. If you were a homebuilder looking for work, would
you prefer that the value of a to be between 0 and 1 or larger than 1? Explain your reasoning.
Typing hint: Type formula above as H = p * a^t
August 17th 2007, 01:31 AM
Part 1: Suppose that the number of new homes built, H, in a city over a period of time, t, is graphed on a rectangular coordinate system where time is on the horizontal axis. Suppose that the
number of homes built can be modeled by an exponential function, H= p * at , where p is the number of new homes built in the first year recorded. If you were a homebuilder looking for work, would
you prefer that the value of a to be between 0 and 1 or larger than 1? Explain your reasoning.
Typing hint: Type formula above as H = p * a^t
Umm. Why the hint? Is that not supposed to be the model exponential function?
The given model, H = p*at, is not exponential.
Say, H = p * a^t -----(i)
If you are looking for work, you need more new houses to be built. So you need H to increase as time t goes on.
If 0 < a < 1, or "a" is a fraction less than 1, then as t increases, the a^t decreases in value. So H decreases too. No good for you.
Example, a = 0.3
at t=1, a^t = (0.3)^1 = 0.3
at t=2, a^t = (0.3)^2 = 0.09 <----less than 0.3
at t=3, a^t = (0.3)^3 = 0.027 <---less than 0.09
If "a" is greater than 1, then the a^t increases as t increases. So the H increases too. Good for you.
Example, a = 3
at t=1, a^t = (3)^1 = 3
at t=2, a^t = (3)^2 = 9 <----more than 3
at t=3, a^t = (3)^3 = 27 <---more than 9
So which one you prefer?
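To make the two cases concrete, here is a small Python sketch (mine, not the poster's) that tabulates H = p * a^t for a decay factor and a growth factor:

```python
def homes_built(p, a, t):
    """The thread's model: H = p * a^t."""
    return p * a**t

# With 0 < a < 1 the count shrinks each year; with a > 1 it grows.
for a in (0.3, 3):
    values = [round(homes_built(100, a, t), 2) for t in (1, 2, 3)]
    trend = "shrinking" if values[0] > values[-1] else "growing"
    print(f"a = {a}: {values} ({trend})")
```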
Shaw Prize
The Shaw Prize in Mathematical Sciences
The Shaw Prize is an international award managed and administered by The Shaw Prize Foundation based in Hong Kong. It was established under the auspices of Run Run Shaw.
Mathematics is the basic language of all natural sciences and all modern technology. In the twentieth century mathematics made tremendous strides both in opening new frontiers and in solving
important and difficult old problems. Its influence permeates every creative scientific and technological discipline, and extends into the social sciences. With the developments in computer
science, information technology, and statistics in the twentieth century, the importance of mathematics to mankind will be further enhanced in the twenty-first century.
2004 Shiing-Shen Chern
... for his initiation of the field of global differential geometry and his continued leadership of the field, resulting in beautiful developments that are at the centre of contemporary
mathematics, with deep connections to topology, algebra and analysis, in short, to all major branches of mathematics of the last sixty years.
2005 Andrew John Wiles
... for his proof of Fermat's Last Theorem.
2006 David Mumford (shared)
... for his contributions to mathematics, and to the new interdisciplinary fields of pattern theory and vision research.
2006 Wu Wen-Tsun (shared)
... for his contributions to the new interdisciplinary field of mathematics mechanization.
2007 Robert Langlands and Richard Taylor
... for initiating and developing a grand unifying vision of mathematics that connects prime numbers with symmetry.
2008 Vladimir Arnold and Ludwig Faddeev
... for their widespread and influential contributions to Mathematical Physics.
2009 Simon K Donaldson and Clifford H Taubes
... for their many brilliant contributions to geometry in 3 and 4 dimensions.
JOC/EFR February 2010
WyzAnt Resources
graph the region -4x+5y<20
4. Graph the following functions. (a) f(x)=5^x (b) f(x)=(2/3)^x (c) f(x)=-3^(-x)
I'm decent at math. Although I haven't had a math class since I was a senior in high school, so about three to four years ago. I was on a good pace of figuring this out until last night I messed
Given the graph of an equations, where would you look on the graph to find the solutions to the equation that goes with the graph? This word problem was on my test and I got it wrong,...
y=2/x, y=3x how do you solve the problem, thanks
graph y=2x+3 in coordinate plane
y=3(x+3)squared (dont know how to put sq root in)
That is all the question says. we are working with slope intercept form and standard form and the lessons like that. I just dont get this
y = 2x + 8 I know it's going to be above the line and it is going to be dashed, but I don't know how to find the origin. Does it included the origin or does not include the origin...
y= 25-(x/3)^2 graph 6 points growth or decay, faster linear or slower than linear
how do i know what a slope is on a graph
will also need to plot second line (like in a scatter plot or line graph). The point is to show the baseline points all Then need to plot times of those days the kid did the behavior. how do i do...
finding the domain and range on a exponential and linear graph
Signed distance fields
Hey all, hopefully this should be a quick and simple thread.
I've read up on a few ways of handling mesh-mesh collision detection, and the signed distance field sounds like the best way of handling things. What I am wondering, though, is how are you supposed
to use the SDF for the detection?
My best guess right now is that you loop through the vertices of the first mesh, then if the point is within the SDF boundaries, interpolate the four closest distance values stored in the SDF grid.
Then keep track of the most negative value to find the vertex that penetrates the second mesh the most. Then repeat the algorithm for the second mesh. If the recorded value is less than zero, then
the meshes intersect at the saved point, with the saved interpenetration value.
Is that the correct way to handle it? If not, how do I use the SDF? And if so, are there any optimizations I can add?
Thanks in advance!
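A minimal sketch of that guess (my own illustration, not from any paper — shown in 2D for brevity, so it blends four grid values; a 3D field would blend eight corners the same way, and the grid/origin/cell layout is an assumption):

```python
def sample_sdf(grid, origin, cell, p):
    """Bilinearly interpolate a signed distance at point p. grid[row][col]
    holds the distance at (origin_x + col*cell, origin_y + row*cell).
    Assumes p lies strictly inside the grid."""
    x = (p[0] - origin[0]) / cell
    y = (p[1] - origin[1]) / cell
    j, i = int(x), int(y)                 # cell indices
    fx, fy = x - j, y - i                 # fractional position in the cell
    top = grid[i][j] * (1 - fx) + grid[i][j + 1] * fx
    bot = grid[i + 1][j] * (1 - fx) + grid[i + 1][j + 1] * fx
    return top * (1 - fy) + bot * fy

def deepest_vertex(grid, origin, cell, vertices):
    """Most negative sampled distance wins; None means no vertex penetrates."""
    vertex, dist = min(((v, sample_sdf(grid, origin, cell, v)) for v in vertices),
                       key=lambda vd: vd[1])
    return (vertex, dist) if dist < 0 else None
```

With distances in hand, the deepest vertex and its negative distance give a contact point and penetration depth; getting a usable contact normal is a separate problem.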
Posts: 1,403
Joined: 2005.07
It might be best to start with, where did you hear the term "signed distance field"?
Sir, e^iπ + 1 = 0, hence God exists; reply!
It was referenced in a lot of places, but here's one paper that stands out:
From what I understand, it's just a 3D grid of numbers, where each number represents the smallest distance between the current point in the grid and the surface of the mesh. I just need to figure out
how this set of data is supposed to be used for the collision detection.
Posts: 370
Joined: 2006.08
for my mesh collision detection, I plan on using bounding ellipses; this is a quick and simple way to do things, and looks like it would be a bit faster than yours, also...just depends on how
accurate you need your information to be. The basic theory behind bounding ellipses is that you record the max/min values for X,Y, and Z for each object, then construct a 3D ellipse out of that
information. You then test to see if the two ellipses intersect anywhere, and if they do, you know that you have a collision. This method is not as accurate as the one that you were talking about,
because there is usually a bit of a difference between the max points and the min points in your mesh; if this is a problem, however, you can simply make more bounding ellipses for more accuracy.
Probably not the answer you were looking for, but I thought it might help anyway
Sorry, I should have specified - I was looking for the fastest mesh-mesh collision detection system that works well with a rigid body dynamics system, so the ellipse method wouldn't work too well.
The SDF apparently gives a very fast and close approximation of the points of collision and interpenetration distances between the meshes (close enough to result in plausible physics). The only
problem is I'm not 100% sure on how it's supposed to be used.
I think you have the general idea. The part I could never figure out though is how to get useful normals for the collision points. You could store the indexes of the closest features as well, but
that still isn't enough in a lot of cases.
Scott Lembcke -
Howling Moon Software
Author of
Chipmunk Physics
- A fast and simple rigid body physics library in C.
Posts: 19
Joined: 2007.03
I think what he might have been referring to was a hierarchical bounding volume tree, where each ellipse (i use spheres) represents the minimum bounding volume for a set of triangles. Each ellipse
has sub-ellipses which bound smaller sets of these triangles.
When testing for collisions, one traverses the tree of ellipses until you reach a leaf, where you then do only the few needed triangle-triangle tests. This way you get much improved speed and only
have to do a few triangle tests.
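The traversal described above can be sketched in a few lines — a hypothetical Python illustration of mine, not code from any engine mentioned in the thread:

```python
import math

class SphereNode:
    """A node in a bounding-sphere hierarchy: leaves carry triangles,
    inner nodes carry child spheres."""
    def __init__(self, center, radius, children=(), triangles=()):
        self.center, self.radius = center, radius
        self.children = list(children)
        self.triangles = list(triangles)

def overlap(a, b):
    # Two spheres overlap when center distance <= sum of radii.
    return math.dist(a.center, b.center) <= a.radius + b.radius

def candidate_pairs(a, b, out):
    """Walk two sphere trees, descending only where spheres overlap, and
    collect triangle pairs that need an exact triangle-triangle test."""
    if not overlap(a, b):
        return
    if a.triangles and b.triangles:                  # two leaves reached
        out.extend((ta, tb) for ta in a.triangles for tb in b.triangles)
    elif a.children:                                 # descend the inner node
        for c in a.children:
            candidate_pairs(c, b, out)
    else:
        for c in b.children:
            candidate_pairs(a, c, out)
```

Spheres make the overlap test a single distance comparison, at the cost of looser fits than boxes or ellipsoids.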
Posts: 370
Joined: 2006.08
heh, yes, thanks...that was exactly what I was trying to say
Skorche Wrote:I think you have the general idea.
Okay cool, thanks. Once I understood what the SDF actually represented, that seemed like the most logical way to use the data. I guess I'll just have to implement it and see how it works.
Quote:The part I could never figure out though is how to get useful normals for the collision points. You could store the indexes of the closest features as well, but that still isn't enough in a
lot of cases.
Why not?
That seemed like a really clever idea when I read it, but then I read the second half of the sentence. I mean... I guess there are a few cases of when the closest point is an edge or a vertex, but it
seems like you could then either store the interpolated normal or just store the index of one of the polygons that shares that edge/vertex.
Actually, now I'm reading up on how the Havok engine breaks nonconvex meshes into a series of smaller connected convex shapes, then uses a Minkowski difference approximation (although I don't really
know what that is yet...). Apparently the SDF - although fast - is a huge memory hog. I'm going to continue reading up on this stuff.
imikedaman Wrote:Okay cool, thanks. Once I understood what the SDF actually represented, that seemed like the most logical way to use the data. I guess I'll just have to implement it and see how
it works.
Why not?
That seemed like a really clever idea when I read it, but then I read the second half of the sentence. I mean... I guess there are a few cases of when the closest point is an edge or a vertex,
but it seems like you could then either store the interpolated normal or just store the index of one of the polygons that shares that edge/vertex.
You can't always get usable collision normals from the closest feature.
Consider the following:
You have a slightly smaller cube sitting on a larger cube. When you check for collisions, the smaller cube has moved into the larger cube deeper than the difference in its width. When you check the
vertexes, you find that the bottom 4 have negative distances. So far so good. Now you find the closest feature to get the normal, but because it's penetrated too deep, none return the top face of the
cube. So you have correct contact points, and pretty good penetration depth values, but all of the normals are tangent to what the collision normal should be.
Man, all these collision systems seem to have serious drawbacks. What's the best one to use? I'd need (of course) the collision points, the penetration depth of each, and the normal. My original idea
was just running an intersection test on the edges of the first mesh against the triangles of the second mesh, but there has to be a faster way.
I use SAT in Chipmunk, sampling for collision points at polygon vertexes. You can always get a collision depth and a normal for overlapping convex polygons, you can't always get usable collision
points when only sampling at the vertexes.
Overall I'm pretty pleased with it though.
In 3D I can see a case where two cubes would intersect via their edges and not vertices, but is there a case in 2D where using only vertex collisions can lead to problems? This part is only for
enlightenment since I have no plans for 2D anytime soon.
If I end up using the intersection test for the edges of one mesh against the polygons of another, each collision between an edge and polygon would add these points:
1. The exact point of collision between the edge and polygon (penetration depth = 0)
2. The vertex (if any) from the edge that is inside the second mesh (penetration depth = distance between above point and this point)
Even though it seems like this algorithm might give me all the contact information I need (although I haven't tested it yet), I'm going to continue looking for a faster way of handling it. I looked
at SDF and GJK and could only find ways of generating part of the required info, then I'll look into SAT and EPA sometime tomorrow to see if I can find a good use for it.
By the way Skorche, how do you handle large velocities in your RBD sim? Multisampling? Sweep tests? Manually changing the number of iterations per second to suit each demo? I'm referring to that one
test I remember seeing in your Chipmunk demo where you shot a tiny circular object really quickly at a large stack of blocks.
One final thing I'm wondering is this: once I find the minimum distance vector required to push the two bodies apart from one another, can I just normalize that distance vector and use that as the
normal for each collision point? I've only worked it out for a few test cases so far, so I don't know if it will lead to any unexpected problems.
I think it's pretty obvious I still have a lot of material left to read.
How do I handle large velocities? I don't. The only option currently is to decrease the time step. Though I'm working with a guy on adding swept collisions.
Quote:once I find the minimum distance vector required to push the two bodies apart from one another, can I just normalize that distance vector and use that as the normal for each collision
That's what I do, but I only deal with convex shapes, so that is technically correct.
When the existential quantifier is inside a universal quantifier, the bound variable must be replaced by a Skolem function of the variables bound by universal quantifiers. Thus $\forall x[x=0\vee\
exists y[x=y+1]]$ becomes $\forall x[x=0\vee x=f(x)+1]$.
In general, the functions and constants symbols are new ones added to the language for the purpose of satisfying these formulas, and are often denoted by the formula they realize, for instance $c_{{\
exists x\phi(x)}}$.
This is used in second order logic to move all existential quantifiers outside the scope of first order universal quantifiers. This can be done since second order quantifiers can quantify over
functions. For instance $\forall^{1}x\forall^{1}y\exists^{1}z\phi(x,y,z)$ is equivalent to $\exists^{2}F\forall^{1}x\forall^{1}y\phi(x,y,F(x,y))$.
Skolem function, Skolem constant
Mathematics Subject Classification
no label found
no label found
Added: 2002-08-25 - 23:01 | {"url":"http://planetmath.org/Skolemization","timestamp":"2014-04-17T19:00:05Z","content_type":null,"content_length":"48544","record_id":"<urn:uuid:de4ee214-d050-405c-8db8-8688c880c99b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00631-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to Size a Pool Pump for Your In-Ground Pool
When purchasing a new in-ground swimming pool, you need to determine what size pool pump is required. There is a tendency to purchase a bigger pump than is necessary, thinking bigger is best. However,
not only does this lead to higher operating costs, but you may also be overpowering your filter system. As a general rule you should have a pump that filters all the water in a pool in an 8 hour
period. This page will show you how to select a pump that filters all the water in your pool in 8 hours.
Step by Step
Step 1
Your first step is to determine the number of gallons of water in you pool. The formulas for calculating gallons depend on the shape of your pool. For a RECTANGULAR POOL, measure the length (ft), the
width (ft) and the average depth. Average depth is determined by adding the depth at the shallow end to the depth at the deep end and dividing by 2. The formula for calculating total gallons in a
rectangular pool is: Gallons = Length x Width x Average Depth x 7.5 Example (see picture): your pool is 30 ft long and 15 feet wide; the pool's shallow end is 4 ft and its deep end is 8 ft so the
pool's average depth is 4 plus 8 = 12 divided by 2 = 6 ft. Pool's capacity is 30 ft x 15 ft x 6 ft x 7.5 = 20,250 gallons. Go to Step 5
Step 2
ROUND SWIMMING POOL - To determine the number of gallons of water in your round pool, measure the diameter of the pool and its average depth. Average depth is determined by adding the depth at the
shallowest part to the depth at the deepest part and dividing by 2. The formula for calculating total gallons in an oval pool is: Gallons = Diameter x Diameter x Average Depth x 5.9. Example (see
picture): your pool is 25 ft in diameter; the pool's shallow end is 3 ft and its deep end is 7 ft so the pool's average depth is 3 plus 7 = 10 divided by 2 = 5 ft. Pool's capacity is 25 ft x 25 ft x
5 ft x 5.9 = 18,438 gallons. Go to Step 5
Step 3
OVAL SWIMMING POOL - To determine the number of gallons of water in your oval pool, measure the long diameter, the short diameter and the average depth. Average depth is determined by adding the
depth at the shallow end to the depth at the deep end and dividing by 2. The formula for calculating total gallons in an oval pool is: Gallons = Long diameter x Short diameter x Average
depth x 5.9. Example (see picture): Your pool's long diameter is 25 ft, Short diameter is 15 ft and the pool's average depth is (3 + 7) / 2 = 5 ft. Pool's capacity is 25 x 15 x 5 x 5.9 = 11,063
gallons. Go to Step 5
Step 4
KIDNEY SWIMMING POOL - To determine the number of gallons of water in your kidney pool, measure the length, the largest width, the smallest width and the average depth. Average depth is determined by adding the
depth at the shallow end to the depth at the deep end and dividing by 2. The formula for calculating total gallons in an kidney pool is: Gallons = (Longest width + Shortest width) x Length x Average
depth x 3.38. Example (see picture): Your pool's length is 25 ft, longest width is 15 ft, Shortest width is 10 ft and the pool's average depth is (3 + 7) / 2 = 5 ft. Pool's capacity is (15 + 10) x 25
x 5 x 3.38 = 10,563 gallons
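The four shape formulas above collapse into a few one-line helpers — a Python sketch using the article's multipliers (7.5, 5.9 and 3.38) and its worked examples:

```python
def avg_depth(shallow, deep):
    """Average of shallow-end and deep-end depths, in feet."""
    return (shallow + deep) / 2

def rectangular_gallons(length, width, shallow, deep):
    return length * width * avg_depth(shallow, deep) * 7.5

def round_gallons(diameter, shallow, deep):
    return diameter * diameter * avg_depth(shallow, deep) * 5.9

def oval_gallons(long_diameter, short_diameter, shallow, deep):
    return long_diameter * short_diameter * avg_depth(shallow, deep) * 5.9

def kidney_gallons(longest_width, shortest_width, length, shallow, deep):
    return (longest_width + shortest_width) * length * avg_depth(shallow, deep) * 3.38

print(rectangular_gallons(30, 15, 4, 8))   # 20250.0
print(round_gallons(25, 3, 7))             # ~18437.5
print(oval_gallons(25, 15, 3, 7))          # ~11062.5
print(kidney_gallons(15, 10, 25, 3, 7))    # ~10562.5
```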
Step 5
Now that you have calculated the number of gallons in your swimming pool, you want to determine how many gallons per hour (GPH) you need to pump to clean all the water in your pool in 8 hours. To
come up with this flow rate simply divide your calculated gallons by 8. For the RECTANGULAR swimming pool example the GPH required is 20,250 gallons / 8 hours or 2531 GPH.
Step 6
Most pool pump specifications are expressed in gallons per minute (GPM) so to convert from GPH to GPM, divide your GPH by 60 minutes - 2531 GPH / 60 = 42.2 GPM.
Step 7
Having calculated your required GPM, you next have to figure out the average Feet of Head for your pool pump. A good estimate is to take the average amount of feet from where your suction lines are
(skimmers or main drain) back to where your pool pump will be located. The picture at the right provides an example of how the average Feet of Head would be calculated for a pool with 3 suction
returns, two skimmers and one main drain. The lengths of each line are: Skimmer 1 - 5'+15'+25'+15'+10' = 70'; Skimmer 2 - 15'+10' = 25'; and Main Drain - 25'+20' = 45'. To get the average Feet of
Head take the three suction line lenghts and divide by three. 70' + 25' + 45' = 140' / 3 = 47 Average Feet of Head
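Steps 5 through 7 come down to two short calculations — a Python sketch using the rectangular-pool example (20,250 gallons) and the three suction-line lengths from the diagram:

```python
def required_gpm(gallons, turnover_hours=8):
    """Flow rate needed to filter the whole pool in one turnover period
    (gallons per hour divided by 60 gives gallons per minute)."""
    return gallons / turnover_hours / 60

def average_feet_of_head(suction_line_lengths):
    """Average suction-line run from the skimmers/main drain back to the pump."""
    return sum(suction_line_lengths) / len(suction_line_lengths)

gpm = required_gpm(20250)                    # 42.1875 -> call it ~42 GPM
head = average_feet_of_head([70, 25, 45])    # 46.67 -> round to 47 feet
print(round(gpm, 1), round(head))            # 42.2 47
```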
Step 8
You now have the information required to select the size of your pool pump. Go to the description page of the style of pump you would like to purchase. Many pump manufacturers will provide a chart on
this description page showing the HP required for your particular GPM and Feet of Head requirements. For example, say you wanted the popular Hayward Super Pump. An abbreviated version of the Hayward
Super Pump Performance Page is shown at the left. Based on the data calculated above for a typical RECTANGULAR pool, we are looking for a pump that will handle 42 GPM with 47 Feet of Head.
According to the chart for 50 Feet of Head (closest to 47'), we need a pump between 3/4 HP (31 GPM) and 1 HP (50 GPM). Since we always go to the higher GPM, we would select the 1 HP pump.
Step 9
The full Performance Page for the Hayward Super Pump can be found at this link, Hayward Super Pump. For the location of Performance Pages for other pump models, contact an Inyopools sales
representative at 1-877-372-6038.
Step 10
The size of your pool filter is directly related to the pool pump you have selected. If your pool filter is too small for the pump, there will be additional strain on the pump motor as it tries to
push water through and meets resistance at the filter. This will eventually burn out the pump motor and your filtration will also be compromised. We recommend over-sizing the filter to be absolutely
certain it can handle the flow coming from the pump. So in this case, instead of getting a filter rated at exactly 42 GPM, you should select one that is a little higher – around 60 GPM would be fine.
Step 11
There are a couple of other considerations that should be mentioned in your selection of a pool pump. The above calculations are based on a basic pool configuration with no extra water features:
fountains, spas, waterfalls, solar heating, and in-floor cleaning systems. These features generally require higher GPM rates, which equates to a higher HP pump. Also, if your pool requires greater than
60 GPM you may need at least 2" diameter suction pipes. Suction pipes of 1 1/2" have a physical limit of 60 GPM; 2" pipes can handle up to 100 GPM.
Comments (1 to 40 of 96)
Posted: 4/15/2014 6:17:37 PM
User: Inyopools
L.Dore - You don't need a pump larger than 1.5 HP for your size pool. You should hook this pump up to an automatic timer [like T101P3] and only run the pump 8 hours a day. I would stay with a
cartridge filter for convenience and size.
Posted: 4/14/2014 7:45:06 AM
User: LDore
I have a 27 foot round above ground pool. The pump that came with the pool was a 2 speed 2.5 HP pump. It lasted 5 years and now needs to be replaced. From what I read it doesn't seem that a pump this size
is required for my pool. We tended to turn it on high and leave it on all summer. What can I purchase to accommodate my pool size? Considering buying an automatic timer to not have to run the pool 24
hours a day. Is a variable speed needed? We have a cartridge filter, but also debating switching to sand?
Posted: 4/11/2014 12:07:37 AM
User: Inyopools
zman - A 3/4 HP pump would be a good match for your size pool.
Posted: 4/9/2014 9:12:19 AM
User: Inyopools
Suction for cleaner – If your HP is borderline, you may have to shut down your main drain a little to get enough pressure to operate the cleaner.
Posted: 4/8/2014 8:42:27 PM
User: zman
I have a 288 sq. ft. pool, 24x10 - will a 3/4 hp be good or not? Thank you.
Posted: 4/7/2014 2:44:34 PM
How does action of a pool sweep with hose attached to skimmer affect pump horsepower requirements?
Posted: 3/27/2014 12:45:49 PM
User: Inyopools
lucyandpaco - When you replace a pump, you want to look at the pump's total HP (THP), which is the product of HP x Service Factor (SF). The values should be listed on your pump's motor label. Our specs
show that a Speck 90-II has a THP of 1.0 (HP 1.0, SF 1.0). So you will be looking for a pump with a THP of at least 1. For your application, I would recommend an Energy Efficient pump like the Hayward
Super II EE. For an equivalent size with a 20% savings in operational cost, I would get the Hayward Super II Energy Efficient (EE) 3/4 HP pump, model SP3007EEAZ. With an HP of 3/4 and an SF of 1.46,
its THP is 3/4 x 1.46 ≈ 1.10, slightly greater than your current Speck pump's.
Posted: 3/25/2014 5:03:31 PM
User: lucyandpaco
My pool is 16x30 inground with a 9 foot deep end. I have 11 Fafco solar panels on the roof. My pump which is noisy is a 1hp Speck 90-II. Its time to replace it. What do you recommend?
Posted: 2/26/2014 6:41:42 PM
User: Inyopools
hugo - For your 14,000 gallon pool, we would recommend a cartridge filter for convenience and a 1.5 HP pump with at least a 2-speed motor rather than a 1-speed. If you can afford the initial cost, buy
a variable speed pump. You will recoup your initial purchase cost in 1-2 years of savings in operational costs. See our guide "How to Save Money Using a Variable Speed Motor". If you have 2" piping,
you might purchase the "Hayward Star Clear Plus 120 Sq Ft. Filter 2" Ports". For 1 1/2" piping, buy " Star Clear Plus 120 Sq Ft. Filter 1.5" Ports"
Posted: 2/25/2014 6:02:35 PM
User: hugo
i am building an inground gunite pool, 16'x32', 3'-6' deep, with 3 scuppers, 6 returns, 2 main drains, 14,000 gallons. not sure what pump size and filter to buy - 1 hp? 1 1/2 hp? 2 speed? single speed? cartridge
or sand?
Posted: 1/6/2014 8:57:45 AM
User: Inyopools
Cu ft to gals – There are 7.5 gallons in a cubic foot. Length x width x average depth gives you cubic feet. Multiplying by 7.5 gives you gallons.
Posted: 1/3/2014 2:37:20 AM
Why are you multiplying 7.5 with the length, width and average depth for the rectangular-sized pool?
Posted: 9/7/2013 10:36:59 AM
User: Inyopools
carlos805 - The Sta-Rite system 2 PLM150 filter system should work fine. You need 40 GPM flow to turn over your 18,400 gallons in 8 hours. The PLM150 has a GPM capacity of 50-120 GPM. It doesn't hurt
to have a larger filter than needed. I would increase the pump to 1 HP for your size pool.
Posted: 9/7/2013 12:25:08 AM
User: carlos805
I have a pool 12'x34', shallow end 3.5', deep end 8.5'. I have a heyward DE4800, old school. after calculating everything here, I think my pool holds about 18400 gallons of water. my DE4800 is shot
and I need to replace it. a neighbor is giving me a Sta-Rite system 2 PLM150 filter system. will this work. the pump I have now is 3/4 HP.
Posted: 9/3/2013 11:44:27 PM
User: Inyopools
Frog - $70 a year seems very low for a VS pump savings. See our guide on "How to Save Money Using a Variable Speed Motor" for more information. As to the size of VS pump to purchase, they currently
come in two sizes, about 1.6 HP and 3.0 + HP. I would go with the larger HP for your size pool. Since they are self-adjusting, you can scale the HP down to what's actually needed.
Posted: 9/1/2013 2:42:37 PM
User: Frog
I have an inground irregular/kidney shaped pool with 16,000 gallons. It is a saltwater pool with solar heating (2 story house) and infloor sweepers. I have 2 drains ~3 ft apart and 1 skimmer with an
avg. ft head of 48 ft. I have a DE filter. My pump size is 2HP A.O. Smith MOD K48N2PA105C4, Volts 230 with a Pentair WhisperFLo Mod WFE-8 2 HP/ Service factor of 1.3 pump. Should I change to a VS
pump with less HP and if so which size would be best? I have used energy savings calculators and they say I will only save $70 a year, seems low. 50% of our energy use is running the pool pump ~5 to
6 hrs a day. Thanks for your help!
Posted: 8/27/2013 7:55:08 PM
User: MDF
Thank you!!
Posted: 8/26/2013 3:14:44 PM
User: Inyopools
MDF - You are correct. For a single speed motor you want to run it as short a time as required. For a VS motor you want to reduce the speed as much as you can and run it for as long as you can at
that lower rate. Remember if you cut your speed from 3450 RPM to 1725, you will reduce your energy costs to 1/8 over the same period of time. If a SS pump costs $240 to run a month at 8 hours a day,
a VS pump will cost $60 to run at 1725 for 16 hours a day. Both will turn over the same volume of water.
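The 1/8 figure comes from the pump affinity laws: electrical draw scales roughly with the cube of motor speed, so halving the RPM cuts the draw to (1/2)^3 = 1/8 per hour of runtime. A sketch of the $240-vs-$60 comparison above (the $1/hour full-speed rate is just the implied figure from $240 over 30 days at 8 h/day):

```python
def monthly_cost(full_speed_dollars_per_hour, speed_fraction, hours_per_day, days=30):
    """Affinity-law estimate: power draw scales with speed_fraction cubed."""
    return full_speed_dollars_per_hour * speed_fraction ** 3 * hours_per_day * days

single_speed = monthly_cost(1.0, 1.0, 8)    # full speed, 8 h/day
variable     = monthly_cost(1.0, 0.5, 16)   # half speed, twice the run time
print(single_speed, variable)               # 1/8 the draw for 2x the hours -> 1/4 the cost
```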
Posted: 8/26/2013 1:18:42 PM
User: MDF
Thanks for the fast response! Ok, I thought I was required to get all of the pool water through in 8 hours. We run our pump continuously, so does that mean we could spread it over 24 hours and plan
for bursts of turning it in 8?
Posted: 8/26/2013 9:38:38 AM
User: Inyopools
MDF - Your requirement for 198 GPM is based on running your 95,000 gallons of water through your filter in 8 hours. If you ran the pump 10 hours instead of 8, you could use a pump that had a flow of
160 GPM. There are at least two VS pumps that will generate 160 GPM: the Hayward EcoStar Variable Speed Pump and the Pentair IntelliFlo Variable Speed. Both motors are over 3.75 Total HP. If you ran
these motors at half speed [and half flow] for 20 hours, you would reduce your energy cost to 1/8 of your full speed energy cost. See our guide on "How to Save Money Using a Variable Speed Motor".
Posted: 8/25/2013 12:22:42 PM
User: MDF
We have a 25'x 78' rectangular pool with a deep end of 10' and a shallow end of 3'. I calculated this to be about 95,000 gallons based on the formula given in step 1. This leads me to 11,875 GPH or
198 GPM.
We have 4 skimmers and 2 main drains, but no other suction features.
I calculated the average feet of head to be 90'.
We had a single speed 3HP pump running 24x7 that just froze up. So we need to replace it.
Question 1. Can we get a variable speed pump?
Question 2. None of the pumps listed in the Hayward table seem to meet these specs. Can you tell me the name of another brand that might meet these specs?
Posted: 8/23/2013 4:03:05 PM
User: Inyopools
lance - The model number of a pump is sometimes stamped into the shoulder of the pump near the discharge port. If not, try looking on the underside of the strainer cover. A part number is usually
stamped there that can be crosschecked to the pump on a parts list.
Posted: 8/22/2013 10:21:05 PM
User: alex
Thank you, very helpful !!!
Posted: 8/22/2013 9:57:28 AM
User: lance
Hi, sorry me again. I think the pool pump is a Sta-Rite; however, I cannot find a model number on it at all. Any clue on where it is located? This is actually my mother's pool - she bought the house a
couple of years ago and she didn't get any manuals or information on the pool. I don't know the age of it and I am assuming it's a Sta-Rite based on a picture search I did.
Posted: 8/14/2013 8:33:48 AM
User: Inyopools
lance - You are correct. The pump shaft seal will depend on the type of pump.
Posted: 8/13/2013 9:25:20 PM
User: lance
Thanks so much! Now...the pump shaft seal will depend on the type of pump I have right?
Posted: 8/12/2013 9:03:17 AM
User: Inyopools
lance - For a 20K gallon pool, people will generally use a 1 1/2 HP motor. You actually have a 1.67 HP motor. A pump's Total HP (THP) is measured by multiplying the pump's stated HP by its Service
Factor (SF). If you look on the label of your B848 you will see that it has a HP 1.0 and a SF of 1.67. The product of the two numbers is 1.67 which is your pump's THP. I would replace your current
motor with the same motor. Remember to replace your pump's shaft seal when you change out the motor.
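The THP arithmetic in this reply can be sketched as a one-liner, which makes it easy to compare a candidate motor against the old one (the 1.5 HP / SF 1.0 candidate label below is hypothetical, for illustration only):

```python
def total_hp(rated_hp, service_factor):
    """Total HP (THP) = nameplate HP x Service Factor, both from the motor label."""
    return rated_hp * service_factor

old_motor = total_hp(1.0, 1.67)   # the B848 discussed in this thread
candidate = total_hp(1.5, 1.0)    # hypothetical plain 1.5 HP, SF 1.0 motor
print(old_motor, candidate)       # the "1.0 HP" motor actually has the higher THP
```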
Posted: 8/11/2013 12:10:23 PM
User: lance
I have a 20k gallon pool with a 1.0 hp SF 165 AO SMITH B848 Pump motor that has died. The pool has 2 returns, 1 skimmer, and 1 drain. My thinking is to replace it with a 1.5 hp AO Smith B2854. Can I
do that or should I stay with 1.0 hp? If I replace with the 1.5 hp will I need upgrade the impeller, diffuser and eye seal?
Posted: 8/1/2013 12:05:50 AM
User: Inyopools
Jon - Sounds like your bearings are going again. They may have been misaligned when installed. If you didn't replace the shaft seal, it may be spraying onto the motor. You could definitely go to a 1
HP motor for your size pool and setup. And, if you can afford the initial cost, you should consider a variable speed pump. See our guide on "How to Save Money Using a Variable Speed Motor". They are
just coming out with a smaller 1 1/2 HP VS pump that would work well for your setup.
Posted: 7/28/2013 8:10:49 PM
User: Jon
Excellent information so far, just a couple of questions. I have 16000 gallon inground with no fountains, just suction vacuum and solar heating as well as gas heater. Currently I have 3/4 hp Hayward
super pump that I just changed bearings however its starting to make a squeaking noise from the motor. First question, are my new bearings going again or do you think motor is just dying? Also
wondering if i should upgrade to 1hp? I have to keep my pump running 24/7 should look into variable speed or stick to single ? Thanks in advance
Posted: 7/28/2013 2:12:00 PM
User: Oreshans
We have a 1000 gallon, in ground spa that we cannot keep from getting green from one Saturday to the next. Usually, by Wednesday or Thursday, it is green. We have been trying to bring the phosphates
under control. We shocked it yesterday and today the phosphates were at 300. A month ago, we drained the entire spa and filled it with new water. We are constantly needing to shock it because the
chlorine barely registers even though there are tablets in the dispenser and we have been diligent about keeping the Soda Ash level in range. We are going nuts. The GPM rate is 16.7 and the head is
10. I took a picture of the pump to see what the pump was and found that it is a Magnetek Century 8-77064-03 Pool and Spa Motor, 1081 Pump Duty. The
HP is 2.0 - .25. The filter is a Hayward Star Clear Plus 175.
We are at our wits' end. It is in our rental and we maintain it, but we can't keep up with it.
Can you make suggestions as to what the problem can be? It acts like the water goes through the inlet and right back out the outlet and is never filtering, because the pressure never changes at all.
Yesterday, we engineered two parts that close the cartridge filter hold so it fits tightly rather than leaving a space in the center. It seems to have changed the pressure from 28 to 30 after filtering
out most of the green algae. It didn't seem like the water was ever going through the filter, so we forced it to go through the filter.
HELP, Please....
Posted: 7/23/2013 1:11:15 PM
User: Inyopools
oyster56 - I don't see a motor replacement at 1.25 THP, but based on your information, you could probably use a motor with 1.1 or 1.0 THP. The 1.1 THP motor is a standard uprated motor, UST1102. The
1.0 THP motor is an Energy Efficient (EE) motor, UCT1102. The EE motor is $40 more but would save you 20% on operating costs. Also since these both are slightly smaller motors than your old one, you
will have to buy a smaller impeller. And, for any motor replacements, you should buy a new shaft seal.
Posted: 7/23/2013 11:36:16 AM
User: oyster56
We have a pool that just about exactly matches your average pool but it has only one drain and one skimmer, which are an average of 30 ft. away from the pool pump. We currently have a A.O. Smith
Century Centurion 1 HP motor with an SF factor of 1.25, which has reached the end of its life. This pump motor is on a Jacuzzi Magnum 1000 pump. Is a replacement with THP of 1.25 sufficient or over
or under our needs? There are no additional features such as waterfalls using the pool pump's capacity. Although I favour a VS pump for the quiet and the lower environmental impact, given our modest
electricity costs and a short swimming season of less than three months, we will probably stay with a single speed pump motor. Thoughts? Suggestions? Thanks.
Posted: 7/18/2013 12:15:10 PM
User: Inyopools
Mike - Your pool holds about 10,000 gallons of water. A 1 1/2 HP pump should be sufficient to handle circulation for this size pool and your waterfalls.
Posted: 7/17/2013 5:33:26 PM
User: Mike
I am having a 27x12 fiberglass pool installed, the deep end being 5'. I am also having a three-tier waterfall with 1 sheer descent in the middle and 2 18" sheer descents on either side. I am having a 1
1/2 hp pump installed. My question is: should I get a second pump to run the waterfalls? The pump will be located 15' from the skimmer.
Posted: 7/17/2013 9:56:20 AM
User: Inyopools
MM - For a head of 11' you would need a 1/2 HP Hayward Super Pump that would provide 55 GPM. This is overkill but this is the smallest pump we sell. For a head of 67' the charts show you would need a
1 HP Hayward Super II Pump (different class of pump) which provides 35 GPM. The next lower pump, 3/4 HP, is right on the edge of providing 19 GPM for 67' of head and the manufacturer recommends going
up to the next level.
Posted: 7/16/2013 6:43:49 PM
User: Inyopools
Bjtex - Yes, you could use a 1 1/2 HP pump but you're on the edge. You could hedge your choice a little by getting a 1 1/2 HP pump with a SF of 1.10 or 1.25 to get a slightly higher THP. THP = HP x
SF. If you are concerned about operating cost and can afford the initial pump cost, you should look at buying an Energy Efficient (EE) pump or a 2 speed or variable speed pump.
Posted: 7/16/2013 1:45:07 AM
User: MM
I am building a small inground splash pool of size 13' X 8' with 3' depth. This will contain 2300 gallons of water. I have two options for placing the pump ; one nearby with head of 11' and the other
at a distance with head of 67'. What capacity of pump and also the pump size is recommended in each of the two cases with flow of 19 GPM? Appreciate urgent help. Thanks
Posted: 7/14/2013 10:10:47 PM
User: Bjtex
I have a 30K In-ground pool with 40 ft of head and 2 in water lines Could I use a 1.5 hp pump. I currently have a 10 yr. old 2 hp pump that's very costly to run...
Posted: 7/8/2013 1:19:13 PM
User: Inyopools
volts/amps/watts - A pool motor's volts/amps/watts are generally defined by the HP of the motor. With little exception, pool motors use either 115V or 230V. That is defined by the power available at
the house. Amps (and Watts) are directly related to HP. The more HP a motor has, the more Amps/Watts it will use. Some EE motors are designed to be more energy efficient than their standard counterparts
and might use 20% less Amps/Watts for the same HP.
May 2002 MDX Puzzle Solution
A variety of businesses use a type of analysis called dependency risk analysis. This type of analysis determines whether one group of items in your business (e.g., products) is overly dependent on
just one item of another group (e.g., customers). Retailers describe an overly dependent item as at risk. For example, you might want to find out which products depend most on a single customer. To
answer this question, you need to find what percentage of total store sales for each product comes from a single customer. To test yourself, find the top 10 highest risk products, and show the
percentage and amount of the product's sales that are at risk.
Listing A shows a query that defines two new measures. One measure calculates the total of Store Sales at risk for the selected product (i.e., the total sales to that product's top
customer). The other measure calculates the percentage of the product's total sales that's at risk. The MDX query in Listing A uses the PercentAtRisk measure to find the 10 products with the highest
percentage of Store Sales at risk. The query then displays both the amount at risk and percentage at risk for each of the top 10 products.
English auctions and the Stolper-Samuelson theorem
Dubra, Juan and Echenique, Federico and Manelli, Alejandro (2007): English auctions and the Stolper-Samuelson theorem.
We prove that the English auction (with bidders that need not be ex ante identical and may have interdependent valuations) has an efficient ex post equilibrium. We establish this result for
environments where it has not been previously obtained. We also prove two versions of the Stolper-Samuelson theorem, one for economies with n goods and n factors, and one for non-square economies.
Similar assumptions and methods underlie these seemingly unrelated results.
Item Type: MPRA Paper
Original Title: English auctions and the Stolper-Samuelson theorem
Language: English
Keywords: English auctions, Stolper-Samuelson, single crossing
Subjects: F - International Economics > F1 - Trade > F11 - Neoclassical Models of Trade
D - Microeconomics > D4 - Market Structure and Pricing > D44 - Auctions
C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C60 - General
Item ID: 8218
Depositing User: Juan Dubra
Date Deposited: 10. Apr 2008 18:29
Last Modified: 13. Feb 2013 15:24
URI: http://mpra.ub.uni-muenchen.de/id/eprint/8218 | {"url":"http://mpra.ub.uni-muenchen.de/8218/","timestamp":"2014-04-16T11:12:10Z","content_type":null,"content_length":"20675","record_id":"<urn:uuid:3762db3d-1459-40e0-83fa-27c7e9367468>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00068-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with 2-d arrays, tough problem
June 11th, 2012, 08:36 PM
Help with 2-d arrays, tough problem
The problem I'm attempting is fairly complicated, as there may be many ways to solve it; I just can't figure out one. Here it goes: in a 2x2 grid, there are 6 possible paths from the
top left corner to the bottom right corner without backtracking. How many possible paths are there in a 20x20 grid? I can't figure out either a formula or a way to maybe brute force a solution.
All thoughts are welcome.
June 11th, 2012, 11:39 PM
Re: Help with 2-d arrays, tough problem
Sounds like project Euler to me...write out how you would do this for a smaller grid, then extrapolate that to a larger one. If you are looking for a brute-force way, give some thought to graphs
(the node/edge kind), recursive backtracking, and/or dynamic programming
June 12th, 2012, 11:13 AM
Re: Help with 2-d arrays, tough problem
I would move through the grid running through every possible path.
What makes it difficult is the fact that the moves are through the lines of the grid and not the spaces; I don't know how to do that.
Yes, it's project euler which is why I'm not asking for an answer, but pseudocode or a formula.
June 12th, 2012, 11:22 AM
Re: Help with 2-d arrays, tough problem
Break down the problem. Think about the topics I mentioned above. Imagine each corner is a Node you wish to traverse through - to traverse, you can go one of two directions (except for the edge
Nodes and the terminal node). Then imagine how you can use recursion to traverse the next nodes until you reach the end.
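To make the "each corner is a Node" idea concrete, here is one hedged sketch of that recursion (memoized so the 20x20 case finishes instantly; without the cache it branches exponentially). This is only an illustration of the approach, not anyone's official solution:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def count_paths(right, down):
    """Paths from a node with `right` columns and `down` rows still to cross.
    Each call branches into the two directions described in the post."""
    if right == 0 and down == 0:
        return 1                        # terminal node reached: one complete path
    total = 0
    if right > 0:
        total += count_paths(right - 1, down)
    if down > 0:
        total += count_paths(right, down - 1)
    return total

print(count_paths(2, 2))    # 6, matching the 2x2 example in the original post
print(count_paths(20, 20))  # the 20x20 answer
```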
June 12th, 2012, 08:58 PM
Re: Help with 2-d arrays, tough problem
How would I do that? If I knew, I wouldn't have started this thread.
June 13th, 2012, 12:41 PM
Re: Help with 2-d arrays, tough problem
Are you familiar with data structures? Linked lists? Graphs? Recursively traversing these data structures? I'd start there. I'm not intentionally trying to be vague, but this is an advanced but
extremely important topic that couldn't be done justice through this problem alone (so rather than throwing out pseudo-code for you to translate you might learn more to study the subject).
Of course, I did intentionally note this solution is brute force - the brute force solution is a good programming exercise. The alternative solution based upon a formula is...well, it IS just a
formula (the derivation of which is fun, but knowing the formula itself spoils the surprise)
June 14th, 2012, 11:15 PM
Re: Help with 2-d arrays, tough problem
Graphs, in algebra, yes. Data structures, linked lists, and recursively traversing them I'm not familiar with. I agree with the importance of this idea, video game AI, geography, advanced
simulations, etc. all use this technique inside it. So, could you maybe give me a few sources to look at?
June 15th, 2012, 09:56 AM
Re: Help with 2-d arrays, tough problem
google 'java data structures'. You should find tons of explanations and examples. Keep in mind the many different data structures available (see List of data structures - Wikipedia, the free
encyclopedia )
June 15th, 2012, 09:41 PM
Re: Help with 2-d arrays, tough problem
I've done a little research, is there a class I could use for this?
June 18th, 2012, 04:09 PM
Re: Help with 2-d arrays, tough problem
Well, since you put it that way, here are my thoughts:
I will restate the problem in a way that is specific enough for me to boil it down to something that is not very complicated (to me).
You are given a 2-D array of "nodes." You want a path from the upper left node to the lower right node. Each step of the path can consist of one move to the right or one move down.
The question is: How many such paths are possible for, say, a 20x20 grid of cells bounded by the nodes?
Here's a way to look at it:
For an excursion from upper left to lower right, at each step there are two possibilities: Move to the right or move down.
Suppose we designate "Move to the right" as binary 0 and "Move down" as binary 1. (We are all computer wonks, so we really like this binary stuff, right?)
Then, for a 20x20 grid in your notation, each path will consist of exactly 40 steps. There will be a total of 20 steps to the right and 20 steps down.
A couple of simple examples:
Example 1:
Start at the upper left corner. Go straight right all of the way until you reach the upper right corner and then go straight down to the lower right corner.
A binary representation of this path is
0000000000000000000011111111111111111111 (20 zeros and 20 ones)
Example 2:
Start at the upper left corner. Go straight down all of the way until you reach the lower left corner and then go straight right to the lower right corner.
A binary representation of this path is
1111111111111111111100000000000000000000 (20 ones and 20 zeros)
Summary: All legal paths will be 40-bit numbers. Each path will have 20 ones and 20 zeros. The ones and zeros can be distributed in arbitrary ways, but the total number of each will always be the same.
Brute force way of counting the number of legal paths:
Make a loop with a long integer counter that goes from 0x00000fffffL through 0xfffff00000L. For each value, count the number of '1' bits. If the number of '1' bits is equal to 20, it's a legal
path. That's a lot of counting, but it is possible.
Note that there are a couple of snazzy ways to find the number of bits set in a given integer, but the scheme is still likely to be very time-consuming. Maybe that's not so bad. (Take a break.
Have a drink. Get some dinner. Have a couple more drinks. Fall in love. Start a family, ...)
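For what it's worth, the bit-counting loop is easy to try on small grids, where 2^(2n) is tiny (a hedged sketch; for n = 20 the 2^40 iterations are exactly the dinner-and-drinks wait described above):

```python
def brute_force_paths(n):
    """Count n x n grid paths by enumerating every 2n-bit number and
    keeping those with exactly n '1' bits (n downs and n rights)."""
    bits = 2 * n
    return sum(1 for x in range(1 << bits) if bin(x).count("1") == n)

print(brute_force_paths(2))   # 6: the 4-bit numbers with two '1' bits
print(brute_force_paths(4))   # 70: still instant at 8 bits
```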
Now, people who are mathematically inclined might have noticed that when we look at the paths as binary integers, the problem boils down to the following:
How many ways can 20 '1' bits be distributed in a total of 40 bits. This is the very familiar combinatorial problem: What is the number of combinations of 40 things taken 20 at a time? (We have
40 bits, how many ways can 20 '1' bits be distributed among those 40 bits?)
The conventional formula involves numbers like 40 factorial and 20 factorial, which are very large numbers. Since the result in this case is somewhere around 1.4 times 10 to the eleventh power, it
can be represented in something like 38 bits, so, if you were very (very) careful in the order in which you carried out the mathematical operations, it just might be possible to do the "counting"
without overflowing 64-bit signed arithmetic and without incurring roundoff error.
With Java's BigInteger class, you can do the large-factorial calculations and other arithmetic in a short time and without a lot of hassle with order of calculations. I can calculate 40 factorial
and 20 factorial and perform all required arithmetic in something like a few milliseconds on my old, creaky, Linux workstation.
June 19th, 2012, 10:46 AM
Re: Help with 2-d arrays, tough problem
Alright, I like thinking in formulas as they can generally be applied in other ways. How would one derive the conventional formula?
June 19th, 2012, 11:13 PM
Re: Help with 2-d arrays, tough problem
IF you have never heard of "the number of combinations of N things taken K at a time, you might start here: Combinations-Wikipedia
The number of combinations of N things taken K at a time is given by
Code :
N! / (K! * (N-K)!) (Where the "!" means "factorial," not "glad to see you.")
So for the 20x20 case of your example, the way that I am looking at it we need the number of combinations of 40 things taken 20 at a time or
Code :
40! / (20! * 20!)
I indicated how I did it in Java. (Wrote a factorial function using the BigInteger class.)
You can check your answer by comparing it with the output of the following two lines of Python 2.6
Code python:
>>> from math import factorial
>>> print factorial(40)/(factorial(20)*factorial(20))
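A Python 3 equivalent of that check, for anyone following along today (integer division keeps everything exact; `math.comb(40, 20)` in Python 3.8+ gives the same number):

```python
from math import factorial

def combinations(n, k):
    """Number of combinations of n things taken k at a time: n! / (k! * (n-k)!)."""
    return factorial(n) // (factorial(k) * factorial(n - k))

print(combinations(40, 20))  # 137846528820
```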
June 20th, 2012, 12:49 AM
Re: Help with 2-d arrays, tough problem
Thank you for that, I already had a factorial method written that is pretty fast. Now, I'll have a method for combinations of N taken R at a time.
I got the solution in 1ms.
June 20th, 2012, 09:36 AM
Re: Help with 2-d arrays, tough problem
Ok, that, for some reason, is not the correct answer. I went to project Euler and tried my solution which was the same as yours and it came back as wrong.
June 20th, 2012, 02:03 PM
Re: Help with 2-d arrays, tough problem
I have no connection with or knowledge of Project Euler, and I don't have any ideas about anything else outside the discussion in my post.
Here's the thing:
1. I believe my description of how to arrive at a solution is consistent with the way you almost asked the question in your first post. Since I don't have a formal specification of the project,
I have nothing else to work with.
2. I know, for a fact, that the calculation of (40 things taken 20 at a time) gives an answer of 137846528820, and I stand by that number. Whether that is consistent with what the originator(s)
of the problem had in mind, well...
And that's all I have to say about that! Period. Full stop.
June 20th, 2012, 03:36 PM
Re: Help with 2-d arrays, tough problem
I meant that I won't go into the combinatorial "solution" any more. (Well not after these few words.)
What I like about the combinatorial solution is that I can arrive at a closed formula without using a computer! (Of course I need some kind of computer or calculator that can handle "the number
of combinations of 40 things taken 20 at a time," but the derivation itself is mental, not computery.)
Now, I like to be able to come at things from more than one direction, so here's a way that actually counts the paths without going into trees, recursive traversal of graphs (with or without
memoization), etc., etc., etc. I mean, all of those topics are, perhaps, important, but I don't necessarily see any of them as a solution looking for this problem.
Now, I could, maybe, do the counting mentally, but for this one I'll write a simple program to do the counting. (In addition to being better at counting, the computer program makes it easy to
change the number of nodes without me having to reset my internal mental counter.)
I'll present it kind of like pseudo-code, but the actual Java (or C++ or whatever...) implementation can follow this precisely. (And I say that because that is exactly what I did. Results are shown at the end.)
Code :
Suppose we have a 2-D array of cells: for example 20x20
Then there is an array of 21x21 "nodes," where each cell is bounded by the four
nodes at its corner coordinates.
Suppose we designate the location of the upper left node as (0,0) and the
lower right node as (20,20). We define a path from the upper left
node to the lower right node to be a sequence of steps, each of
which goes exactly one node to the right or one node down from
the current location.
The question is: How many such paths are there from the node at (0,0)
to the node at (20,20)?
In array notation, the locations would be nodes[0][0] and nodes[20][20].
Here's the drill:
// Prompt the user to enter the number of cells and read it in.
// In Java, it could go like this:
Scanner keyboard = new Scanner(System.in);
System.out.print("Enter the number of cells: ");
int numCells = keyboard.nextInt();
// For convenience, define the number of nodes here:
int numNodes = numCells + 1;
long [][] nodes = new long[numNodes][numNodes];
// Initialize the grid distances.
// The value for node[i][j] is equal to the number
// of ways to get there from node[0][0]
// The first ones are easy:
// There is exactly one (empty) path from the start to itself:
// no traveling necessary, but this value is needed for
// some of the other calculations.
nodes[0][0] = 1;
// Now take care of the outside nodes
for (int i = 0; i < numNodes; i++)
// Go down the left-most column
// There is only one way to get to location [0][i]:
// Straight down from the starting point.
// That is: all nodes in the left-most column are
// on the same path.
// So---set nodes[0][i] to 1
// Go right on the top-most row
// There is only one way to get to location [i][0]:
// Straight right from the starting point.
// That is: all nodes on the top row are on the
// same path.
// So---set nodes[i][0] to 1
// Now, the more interesting paths. For an interior
// node at location [i][j] on any path, there are exactly
// two immediately preceding nodes from which we could
// have arrived there:
// The node to its immediate left: nodes[i-1][j]
// and
// The node just above it: nodes[i][j-1]
for (int i = 1; i < numNodes; i++)
for (int j = 1; j < numNodes; j++)
// Add the values from the two immediately preceding
// nodes to get the number of ways to get here:
// nodes[i][j] = value at nodes[i-1][j] plus value at nodes[i][j-1]
// Taa-daa! The grand finale: How many ways are there to get to
// the lower right node? Well just show the value that
// was the very last value calculated by the nested loops:
System.out.printf("Number of paths from the node at (0,0) to the node at (%d,%d) = %d\n",
numNodes-1, numNodes-1, nodes[numNodes-1][numNodes-1]);
Now a direct implementation of this stuff gives me the following output:
Code :
Enter the number of cells: 20
Size of the array of cells is 20x20
There are 21x21 nodes.
Number of paths from the node at (0,0) to the node at (20,20) = 137846528820
Since this is the same value that I got from the combinatorial solution, I can't think of anything else to say except that maybe the problem statement is either misleading or incomplete.
Note that I originally ran it using a BigInteger array for the nodes even though I "knew" that it shouldn't overflow a Java long variable (signed 64-bit integer). Results are the same.
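For anyone who wants to run the counting scheme above without transcribing the Java, here is the same dynamic-programming idea as a compact Python sketch (mine, not from the thread; the name count_paths is made up):

```python
def count_paths(num_cells):
    # Nodes form a (num_cells + 1) x (num_cells + 1) grid.
    num_nodes = num_cells + 1
    nodes = [[0] * num_nodes for _ in range(num_nodes)]
    # Only one way to reach any node on the left-most column or the top row.
    for i in range(num_nodes):
        nodes[0][i] = 1
        nodes[i][0] = 1
    # Every interior node is reached from its left neighbor or the node above it.
    for i in range(1, num_nodes):
        for j in range(1, num_nodes):
            nodes[i][j] = nodes[i - 1][j] + nodes[i][j - 1]
    return nodes[num_nodes - 1][num_nodes - 1]

print(count_paths(20))  # 137846528820, matching the combinatorial answer
```

The same loop structure works unchanged for any grid size, since Python integers do not overflow.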
June 20th, 2012, 05:14 PM
Re: Help with 2-d arrays, tough problem
I'm not saying the number itself is wrong in this situation, you obviously know more than I do, so take a look at the original problem: Problem 15 - Project Euler
The formula worked for the example, but not for the 20x20 grid apparently.
June 20th, 2012, 05:46 PM
Re: Help with 2-d arrays, tough problem
I just registered at projecteuler.net and submitted the answer to problem #15 that I have posted here several times: 137846528820
It was accepted.
I am apparently the 55434th person (or other being) to submit a correct solution.
June 20th, 2012, 06:09 PM
Re: Help with 2-d arrays, tough problem
I see my problem, something in my factorial() method is adding an extra zero to the end of the number, so I'm getting 1378465288200 instead of 137846528820.
Code java:
public static BigInteger factorial(BigInteger n)
{
    BigInteger num = BigInteger.ONE;
    for (BigInteger i = BigInteger.ONE; i.compareTo(n) != 0; i = i.add(BigInteger.ONE))
        num = i.multiply(num);
    return num;
}
June 20th, 2012, 09:29 PM
Re: Help with 2-d arrays, tough problem
And you didn't notice that it was different from the answer(s) that I posted? I always try to come up with an independent way to check my work, and I even showed the answer obtained from an
alternative program.
So: didn't you test it before plugging it into a program that uses it? I mean you don't have to go all the way to 40 factorial or even 20 factorial, but you should do some testing.
Code java:
import java.util.Scanner;
import java.math.BigInteger;
public class TestFactorial
{
    public static void main(String[] args)
    {
        Scanner keyboard = new Scanner(System.in);
        String xstr;
        System.out.print("Enter a positive integer: ");
        xstr = keyboard.nextLine();
        while (xstr.length() > 0)
        {
            BigInteger x = new BigInteger(xstr);
            System.out.println("Calling factorial (" + x + ")");
            BigInteger xfact = factorial(x);
            System.out.println("factorial(" + x + ") = " + xfact);
            System.out.print("Enter another positive integer: ");
            xstr = keyboard.nextLine();
        }
    }

    public static BigInteger factorial(BigInteger n)
    {
        // Implementation of the factorial function goes here.
    }
}
A run with your factorial function:
Code :
Enter a positive integer: 1
Calling factorial (1)
factorial(1) = 1
Enter another positive integer: 2
Calling factorial (2)
factorial(2) = 1
Enter another positive integer: 3
Calling factorial (3)
factorial(3) = 2
Enter another positive integer: 4
Calling factorial (4)
factorial(4) = 6
Enter another positive integer: 5
Calling factorial (5)
factorial(5) = 24
Enter another positive integer: 6
Calling factorial (6)
factorial(6) = 120
Enter another positive integer: 7
Calling factorial (7)
factorial(7) = 720
Enter another positive integer: 8
Calling factorial (8)
factorial(8) = 5040
Enter another positive integer: 9
Calling factorial (9)
factorial(9) = 40320
See? There's a pattern here:
Other than factorial(1), they all act like the argument is off by 1: Didn't go through the multiplication loop enough times, right?
A couple of notes:
I can't see any earthly reason to make the argument a BigInteger. Why not just use an int? I mean, you aren't going to be calculating factorial values for numbers greater than 2147483647, are you?
I don't think so.
Now, it's OK to use a BigInteger argument and loop counter if you really want to, but no matter how you implement it, the loop should not terminate until after it has multiplied by a term equal
to the value of n, right? When your loop counter reaches n, it terminates the loop before multiplying by that last value. Or so it seems to me...
Bottom line: Your factorial function is not "adding an extra zero to the end." It is operating incorrectly so that instead of calculating
factorial(40)/(factorial(20)*factorial(20)) = 137846528820
as it was supposed to, your program is, apparently, getting
factorial(39)/(factorial(19)*factorial(19)) = 1378465288200
See how it goes? Test the lower level functionality first, and the higher-level stuff will take care of itself. Don't jump to conclusions about what the program is doing. Make it tell you!
June 20th, 2012, 11:08 PM
Re: Help with 2-d arrays, tough problem
Fixed it. I chose the wrong compareTo() value: 0 instead of 1. I should've tested smaller numbers first, this much is true, haha. Thanks again for the help.
I used BigInteger there mainly because I was practicing with it. I made the loop use ints now.
Code java:
public static BigInteger factorial(int n)
{
    BigInteger num = BigInteger.ONE;
    for (int i = 1; i <= n; i++)
        num = num.multiply(new BigInteger(Integer.toString(i)));
    return num;
}
completes in ~2ms on my machine with 40 and 20
Code java:
public static BigInteger combosOfNTakenRTimes(int n, int r)
{
    return factorial(n).divide(factorial(r).multiply(factorial(n - r))); // n! / (r! * (n-r)!)
}
Nothing ground-breaking at all.
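In the same spirit as the Python 2.6 check earlier in the thread, here is a quick cross-check of that helper (my sketch, not from the thread):

```python
from math import factorial

def combos(n, r):
    # n! / (r! * (n - r)!), mirroring combosOfNTakenRTimes above.
    return factorial(n) // (factorial(r) * factorial(n - r))

print(combos(40, 20))  # 137846528820
```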
June 21st, 2012, 08:04 AM
Re: Help with 2-d arrays, tough problem
That's great!
So, you figured out that the loop control for this problem should have been completely equivalent to your integer version, right?
Code java:
BigInteger num = BigInteger.ONE;
for(BigInteger i = BigInteger.ONE;
i.compareTo(n) <= 0; //Note: "<=" not "!="
i = i.add(BigInteger.ONE))
num = i.multiply(num);
return num;
June 21st, 2012, 09:03 AM
Re: Help with 2-d arrays, tough problem
That's exactly what I did. I used != to have it loop until compareTo(n) reached 0, which stopped one multiplication too early. This shows the all-too-common off-by-one error. | {"url":"http://www.javaprogrammingforums.com/%20algorithms-recursion/16112-help-2-d-arrays-tough-problem-printingthethread.html","timestamp":"2014-04-20T02:10:34Z","content_type":null,"content_length":"55347","record_id":"<urn:uuid:100446d7-b9ca-4a2e-b121-f26709556a24>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: error bars
[Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index]
Re: st: error bars
From "moleps islon" <moleps2@gmail.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: error bars
Date Wed, 18 Jun 2008 22:17:52 +0200
I'm making a double error bar graph - e.g. two different variables x and
y with error bars categorised according to z. I've tried using ciplot
x y, by(z) however I get a graph with two identical error bars (though
different symbol, but identical values) for each instance of z. Any
idea how to remedy this? If I need to make bars (a bar graph) for
each category and then have the positive part of the error bar
projecting from the top- how do I go about this?
On Mon, Jun 16, 2008 at 2:23 PM, Nick Cox <n.j.cox@durham.ac.uk> wrote:
> Standard errors can come from lots of places, including the -ci-
> command.
> Moleps seems to be implying that -ci- does the right calculation for his
> or her purposes.
> (I note that Moleps has yearly results, but -ci- for separate years does
> nothing about any time series structure in the data, for example about
> taking serial correlation into account.)
> -ciplot- and -stripplot- from SSC both do -ci-type calculations and
> graphing in one.
> Nick
> n.j.cox@durham.ac.uk
> Maarten buis
> --- moleps islon <moleps2@gmail.com> wrote:
>> I need to make a an error-bar graph categorised by year. I've tried
>> using serrbar mean(x) etc, but I cant find a command for the standard
>> error. Do I need to run a CI, generate a new variable from the ci
>> result and feed that into serrbar?
> There are many ways of doing this. For instance you can use methods
> discussed in (Buis 2007) (a convenient estimation command would in
> this case be -mean-), or you can remember that the standard error of
> the mean is the standard deviation divided by the square root of the
> number of observations, like in the example below:
> *-------------- begin example -----------------------
> sysuse nlsw88, clear
> gen mis = missing(wage, age)
> bys age: egen mwage = mean(wage)
> bys age: egen sdwage = sd(wage)
> bys age mis: gen se = sdwage/sqrt(_N) if mis == 0
> serrbar mwage se age, scale(1.96)
> *-------------- end example -------------------------
> Notice that a fixed scale is slightly problematic here as ideally this
> scale should depend on the number of observations (a t-test), but 1.96
> should work fine in large samples. For a more flexible approach, where
> you can take all this into account see: (Newson 2003)
> -- maarten
> M.L. Buis (2007), "Stata tip 54: Where did my p-values go?", The Stata
> Journal, 7(4), pp.584--586.
> http://home.fsw.vu.nl/m.buis/wp/pvalue.html
> R. Newson (2003), "Confidence intervals and p-values for delivery to
> the end user", The Stata Journal, 3(3), pp. 245--269.
> http://www.stata-journal.com/article.html?article=st0043
> *
> * For searches and help try:
> * http://www.stata.com/support/faqs/res/findit.html
> * http://www.stata.com/support/statalist/faq
> * http://www.ats.ucla.edu/stat/stata/
| {"url":"http://www.stata.com/statalist/archive/2008-06/msg00626.html","timestamp":"2014-04-16T22:36:21Z","content_type":null,"content_length":"9184","record_id":"<urn:uuid:9af74378-a242-4309-bcb3-f17e33b22f80>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Lockheed Martin’s quantum computer steps into the limelight.
Entered By: grant on March 25, 2013 No Observations
New York Times has a pretty good profile of what could be the next big breakthrough in computing – the chips that understand “maybe”:
[A] powerful new type of computer that is about to be commercially deployed by a major American military contractor is taking computing into the strange, subatomic realm of quantum mechanics. In
that infinitesimal neighborhood, common sense logic no longer seems to apply. A one can be a one, or it can be a one and a zero and everything in between — all at the same time.
It sounds preposterous, particularly to those familiar with the yes/no world of conventional computing. But academic researchers and scientists at companies like Microsoft, I.B.M. and
Hewlett-Packard have been working to develop quantum computers.
Now, Lockheed Martin — which bought an early version of such a computer from the Canadian company D-Wave Systems two years ago — is confident enough in the technology to upgrade it to commercial
scale, becoming the first company to use quantum computing as part of its business.
Quantum computing has been a goal of researchers for more than three decades, but it has proved remarkably difficult to achieve. The idea has been to exploit a property of matter in a quantum
state known as superposition, which makes it possible for the basic elements of a quantum computer, known as qubits, to hold a vast array of values simultaneously.
There are a variety of ways scientists create the conditions needed to achieve superposition as well as a second quantum state known as entanglement, which are both necessary for quantum
computing. Researchers have suspended ions in magnetic fields, trapped photons or manipulated phosphorus atoms in silicon.
The D-Wave computer that Lockheed has bought uses a different mathematical approach than competing efforts. In the D-Wave system, a quantum computing processor, made from a lattice of tiny
superconducting wires, is chilled close to absolute zero. It is then programmed by loading a set of mathematical equations into the lattice.
The processor then moves through a near-infinity of possibilities to determine the lowest energy required to form those relationships. That state, seen as the optimal outcome, is the answer.
Tags: computer science, quantum physics | {"url":"http://guildofscientifictroubadours.com/2013/03/25/lockheed-martins-quantum-computer-steps-into-the-limelight/","timestamp":"2014-04-18T20:57:30Z","content_type":null,"content_length":"41919","record_id":"<urn:uuid:7ef40dc8-8b4d-4d9b-bea8-20832bb1c2f3>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Indiana University Math Club
Talk-announcement feed (Jekyll, built 2013-11-18), published by Tim Zakian (tzakian@indiana.edu), http://www.indiana.edu/~mathclub/

Sensitive Dependence for Unimodal Maps on the Interval - Marlies Gerber (November 20, 2013)
A function f : [a,b] -> [0,1] is unimodal if f(a) = f(b) = 0, and there is a point c in (a,b) such that f is strictly increasing on [a,c] and strictly decreasing on [c,b]. If [a,b] = [0,1], then we can consider the composition of f with itself n times, denoted f^n. We can think of f as a dynamical system that gives a rule for how states change after time n. (If the initial state is x, then after one unit of time the new state is f(x); after two units of time it is f^2(x) = f(f(x)); etc.) One of the features of a chaotic dynamical system is that for a given initial condition x, you can find another initial condition y, arbitrarily close to x, such that after a long period of time n, f^n(x) and f^n(y) are far apart. This is called sensitive dependence on initial conditions. Intuitively, it means that the long-term future cannot be accurately predicted, because there will always be some errors in measuring the initial conditions. I will discuss an easily verifiable condition on the derivatives of f (in the case of a three times differentiable unimodal map f) that will guarantee sensitive dependence on initial conditions. Math M211 is the only prerequisite for this talk.

Poncelet's Porism - Matt Bainbridge (October 2, 2013)
Suppose E and F are ellipses in the plane with E inside F. Poncelet's Porism says that if there is a single polygon which is inscribed in F and circumscribed around E, then there are infinitely many such polygons. In this talk, I'll give a beautiful proof of this theorem using some basic (but deep) properties of elliptic curves.

Why finding arithmetic progressions is hard - Ciprian Demeter (September 25, 2013)
A brief exploration of the progress of (mathematical) technology over the last 100 years or so that has led to the recent proof that the prime numbers contain arbitrarily long arithmetic progressions.

Mathematics of CAT scan - Jiri Dadok (September 18, 2013)
The mathematics behind the CAT scan, together with the history leading up to it, including a brief discussion of the Radon and Fourier transforms.

The Music of Triangles - Chris Judge (September 11, 2013)
What sorts of sounds can a vibrating triangle make? What are its "pure tones," and how do the answers depend on the shape of the triangle? The talk discusses some partial answers to these mathematical questions. | {"url":"http://www.indiana.edu/~mathclub/feed.xml","timestamp":"2014-04-21T09:42:06Z","content_type":null,"content_length":"8509","record_id":"<urn:uuid:f809053b-b171-4c3f-ae2d-4d0f6ba76551>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Areas of House Lots
Date: 02/18/98 at 14:41:45
From: Jack Redfearn
Subject: Area of 4-sided figure
We design and construct sanitary sewers for people in Kansas City.
After the job is complete we have the task of determining sanitary
sewer assessments of properties based on the square feet of their
lots. Many lots are 4-sided but do not have any parallel lines. We
would like to find a powerful formula that can calculate the area of a
lot given the length of each side.
I believe a figure is "defined" when the 4 sides are given, so there
must be a way to calculate the area. However, we have been unable to
come up with anything.
Again, no sides are parallel and we do not know any angles.
Date: 02/18/98 at 16:15:25
From: Doctor Rob
Subject: Re: Area of 4-sided figure
Sorry, but the figure is not "defined" when the four sides are given.
You need one more datum. It may be an angle, it may be the length of
a diagonal, or some other quantity. Even if all the sides are equal,
you can have a square, or a rhombus (a parallelogram), and the area of
the rhombus is always less than that of the square. The acute angle
in the rhombus can be anything between 0 and 90 degrees.
As a result, there is no such formula.
If you know the sides are a, b, c, and d, running around the boundary,
and the diagonal of length e cuts the lot into two triangles of sides
a, b, e, and c, d, e, respectively, then the formula to compute the
area is as follows.
Let s = (a+b+e)/2 and t = (c+d+e)/2. Then the area is
A = Sqrt[s*(s-a)*(s-b)*(s-e)] + Sqrt[t*(t-c)*(t-d)*(t-e)].
If you know the angle X between sides whose lengths are a and b, then
the Law of Cosines tells that
e^2 = a^2 + b^2 - 2*a*b*cos(X).
Then you can figure out e and use the previous formula.
-Doctor Rob, The Math Forum
Check out our web site http://mathforum.org/dr.math/
Date: 02/19/98 at 12:41:03
From: Anonymous
Subject: Re: Area of 4-sided figure
Thanks for your prompt reply. A couple of us in the office are
working on this and we appreciate your help and suggestions.
I think there will be times when we will know an angle (or be able to
find out an interior angle from the subdivision plat) and/or be able
to compute a diagonal. Anyway, thanks again. | {"url":"http://mathforum.org/library/drmath/view/54991.html","timestamp":"2014-04-17T07:18:41Z","content_type":null,"content_length":"7383","record_id":"<urn:uuid:8f3a319c-dd30-4ed7-a460-0b191359834a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00301-ip-10-147-4-33.ec2.internal.warc.gz"} |
K is a subfield of L... Prove that f(x) is a factor...
November 25th 2012, 07:56 PM #1
Junior Member
Oct 2012
K is a subfield of L... Prove that f(x) is a factor...
Let K be a subfield of a field L. Let f(x) and g(x) be polynomials in K[x].
(a) If f(x) is a factor of g(x) in L[x], prove that f(x) is also a factor of g(x) in K[x].
(b) If f(x) and g(x) have a common factor of positive degree in L[x], prove that they also have a common factor of positive degree in K[x].
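The extract ends before any replies; for what it is worth, one standard line of attack (a sketch of mine, not from the thread) goes through the division algorithm:

```latex
\textbf{(a)} Assume $f \neq 0$ and divide in $K[x]$:
\[ g(x) = q(x)\,f(x) + r(x), \qquad q, r \in K[x], \quad \deg r < \deg f . \]
The same identity holds in $L[x]$, where the quotient and remainder are unique.
If $f \mid g$ in $L[x]$, say $g = h f$ with $h \in L[x]$ (so the remainder is $0$),
uniqueness forces $h = q$ and $r = 0$; hence $g = q f$ with $q \in K[x]$,
i.e.\ $f \mid g$ in $K[x]$.

\textbf{(b)} The Euclidean algorithm computes $\gcd(f, g)$ using only field
operations on the coefficients, so (up to units) it returns the same polynomial
whether run in $K[x]$ or $L[x]$, and that polynomial lies in $K[x]$.
A common factor of positive degree in $L[x]$ divides this gcd, so the gcd has
positive degree and is a common factor of $f$ and $g$ in $K[x]$.
```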
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/advanced-algebra/208425-k-subfield-l-prove-f-x-factor.html","timestamp":"2014-04-18T13:15:12Z","content_type":null,"content_length":"29023","record_id":"<urn:uuid:2b3ec6e1-30bb-4a04-9b27-e423848ef798>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Colwyn, PA Algebra 2 Tutor
Find a Colwyn, PA Algebra 2 Tutor
...Instead, I'm evaluating your particular strengths and weaknesses and developing *your* strategy to get you the highest score possible.We'll cover tricks and tips that will save you time, and
we'll only use methods that will work for you. I'll help you master vocabulary and learn how to get throu...
47 Subjects: including algebra 2, chemistry, reading, English
...I first assess the students Alegbra skills, and if necessary do a review to bring the student up to the necessary level. As an Aide at Harriton High School I assist students daily in Geometry
and Honors. I use my personal notes to explain basics, then more complex problems.
35 Subjects: including algebra 2, chemistry, English, reading
...My tutoring focuses on a solid understanding of the material and a consistent and methodical approach to problem-solving, with special attention paid to a good foundation in mathematical
methods. I am a native German-speaker, and have been working for several years as a German-to-English translator. I can help German language students with writing, grammar, pronunciation and
21 Subjects: including algebra 2, reading, physics, writing
Hi! I am a patient, flexible, and encouraging tutor, and I'd love to help you or your child gain confidence and succeed academically. I adapt my teaching style to students' needs, explaining
difficult concepts step by step and using questions to "draw out" students' understanding so that they learn valuable problem-solving skills along the way.
38 Subjects: including algebra 2, English, reading, physics
...I have a Master of Science degree in math, over three years' experience as an actuary, and am a member of MENSA. I am highly committed to students' performances and to improve their
comprehension of all areas of mathematics.I have excelled in courses in Ordinary Differential Equations in both un...
19 Subjects: including algebra 2, calculus, geometry, statistics
Related Colwyn, PA Tutors
Colwyn, PA Accounting Tutors
Colwyn, PA ACT Tutors
Colwyn, PA Algebra Tutors
Colwyn, PA Algebra 2 Tutors
Colwyn, PA Calculus Tutors
Colwyn, PA Geometry Tutors
Colwyn, PA Math Tutors
Colwyn, PA Prealgebra Tutors
Colwyn, PA Precalculus Tutors
Colwyn, PA SAT Tutors
Colwyn, PA SAT Math Tutors
Colwyn, PA Science Tutors
Colwyn, PA Statistics Tutors
Colwyn, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/colwyn_pa_algebra_2_tutors.php","timestamp":"2014-04-16T22:24:16Z","content_type":null,"content_length":"24216","record_id":"<urn:uuid:ccd24734-c430-4dd0-ab48-060d34ebfa95>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
how do you evaluate C(9,7)
• 10 months ago
• 10 months ago
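Evaluating C(9,7) is a standard binomial-coefficient computation; a quick sketch (mine, not an answer from the page):

```python
from math import factorial

def C(n, k):
    # n! / (k! * (n - k)!); by symmetry C(9, 7) = C(9, 2) = (9 * 8) / 2 = 36.
    return factorial(n) // (factorial(k) * factorial(n - k))

print(C(9, 7))  # 36
```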
| {"url":"http://openstudy.com/updates/51b88172e4b0862d04989854","timestamp":"2014-04-21T15:47:20Z","content_type":null,"content_length":"44448","record_id":"<urn:uuid:16c72976-12c6-4d96-ab06-697e288d1027>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: probability question
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: probability question
From "Feiveson, Alan H. (JSC-SK311)" <alan.h.feiveson@nasa.gov>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: RE: probability question
Date Wed, 28 Oct 2009 08:33:36 -0500
If you look at this retrospectively, you can use the hypergeometric distribution. Suppose we are given that it rained on 7 of the 120 days and that the client wore the hat on 4 of the 120 days. The probability that it rained on 3 of the four hat days would then be
(7C3 x 113C1 )/(120C4) = .00048146 (I think).
where aCb refers to a things taken b at a time
Here "probability" refers to a repeated experiment in which one picks 4 of the 120 days at random to wear the hat, without knowledge of which of those are the rain days.
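Al Feiveson's number is easy to reproduce; a quick sketch (mine, not from the list):

```python
from math import comb

# P(rain on exactly 3 of the 4 hat days): choose 3 of the 7 rain days and
# 1 of the 113 dry days, out of all ways to place 4 hat days among 120 days.
p = comb(7, 3) * comb(113, 1) / comb(120, 4)
print(round(p, 8))  # 0.00048146
```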
Al Feiveson
-----Original Message-----
From: owner-statalist@hsphsun2.harvard.edu [mailto:owner-statalist@hsphsun2.harvard.edu] On Behalf Of Richard Goldstein
Sent: Wednesday, October 28, 2009 7:38 AM
To: statalist
Subject: st: probability question
it's been a long time since I thought about questions like this, but, as
a lead-in to a study, a client has asked the following question which he
thinks he understands and says is related to where he wants to go:
during a consecutive period of 120 days, if it rains on 7 days and my
client wears a hat on 4 days (these are independent of any knowledge of
the weather), what is the probability that it will rain on 3 of the days
on which he is wearing a hat?
my client swears that this is not a homework problem for him or his wife
or one of their kids!
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
| {"url":"http://www.stata.com/statalist/archive/2009-10/msg01285.html","timestamp":"2014-04-17T19:04:25Z","content_type":null,"content_length":"7678","record_id":"<urn:uuid:bc7ea079-9d56-4868-b92e-047fd31052b8>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
Crossing power sets
February 2nd 2009, 04:38 PM #1
Jan 2009
Kingston, PA
I'm looking for two sets such that P(A) X P(B) = P(A X B), where A and B represent the two sets. I have tried everything. I get close when using the empty set as one but can't get an exact answer.
Thanks 4 any help!
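One way to explore the question empirically (a brute-force sketch, not from the original thread; the helper names are ad hoc) is to compare sizes and elements of the two sides in Python:

```python
from itertools import combinations, product

def powerset(s):
    """All subsets of s, returned as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

# |P(A) x P(B)| = 2**(|A|+|B|) while |P(A x B)| = 2**(|A|*|B|), so a
# necessary condition for equal cardinality is |A| + |B| == |A| * |B|.
matches = [(a, b) for a in range(6) for b in range(6) if a + b == a * b]

# Even when the sizes agree, the elements differ in kind: the left side
# holds pairs of sets, the right side holds sets of pairs.
A = B = {0, 1}
lhs = set(product(powerset(A), powerset(B)))   # pairs of subsets
rhs = set(powerset(set(product(A, B))))        # subsets of pairs
print(matches, len(lhs), len(rhs), lhs == rhs)  # [(0, 0), (2, 2)] 16 16 False
```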
February 2nd 2009, 06:24 PM #2 | {"url":"http://mathhelpforum.com/discrete-math/71431-crossing-power-sets.html","timestamp":"2014-04-16T08:47:49Z","content_type":null,"content_length":"32457","record_id":"<urn:uuid:7b19511d-9465-41bd-96a0-61b24a613995>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00202-ip-10-147-4-33.ec2.internal.warc.gz"} |
JIPMER 2013 Practice Papers
Candidates who want to prepare for JIPMER 2013 are advised to solve the JIPMER 2013 Practice Papers in order to prepare for the exam as comprehensively as possible. While preparing, candidates often have trouble choosing the right reference material and identifying the syllabus topics on which questions are most frequently asked. The JIPMER 2013 Practice Papers address both of these problems.
The JIPMER 2013 Practice Papers not only give students an idea of the level of the exam they are going to take but also serve as thorough practice material. Candidates can use them as a reference, solving questions from them on a daily or weekly basis from the topics they have covered so far.
The JIPMER 2013 Practice Papers can be solved both online and offline. The offline papers suit the initial stages of preparation, as they have no time constraint and can be solved section-wise and topic-wise. The online papers suit candidates who have completed the syllabus once, are in the advanced stages of preparation, and need to practice within the time limits of the exam; they include countdown timers and thus closely simulate the exam environment. Some online portals also let candidates record their performance while solving the papers, allowing a better analysis of their progress.
Other Related Links:
JIPMER 2013 Eligibility Criteria
JIPMER 2013 Reservation of Seats
JIPMER 2013 Application Submission Process
JIPMER 2013 Application Form, Date, Notification | {"url":"http://medical.entrancecorner.com/exams/2297-jipmer-2013-practice-papers.html","timestamp":"2014-04-20T03:10:23Z","content_type":null,"content_length":"36082","record_id":"<urn:uuid:b19eef96-84e6-4978-aa47-8c8905a43252>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00306-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Matheology § 224
Date: Mar 22, 2013 3:20 PM
Author: fom
Subject: Re: Matheology § 224
On 3/22/2013 1:21 PM, WM wrote:
> On 22 Mrz., 16:31, William Hughes <wpihug...@gmail.com> wrote:
>> On Mar 22, 10:05 am, WM <mueck...@rz.fh-augsburg.de> wrote:
>>>> If you want to
>>>> remove all of the lines you have to remove the set of all
>>>> lines that are indexed by a natural number.
>>> But I don't want to remove a set.
>> We have the set of lines. You do not want to leave
>> any of the lines.
> I do not want this or that.
> I simply prove that for every line l_n the following property is true:
> Line l_n and all its predecessors do not in any way influence (neither
> decrease nor increase) the union of all lines, namely |N.
> This is certainly a proof that does not force us to "remove a set".
> But we can look at the set of lines that have this property. The
> result is the complete set of all lines.
> And this mathematical result cannot be violated or re-interpreted.
Willard Quine actually wrote a version of set theory.
Later, he argued for the elimination of singular terms from
logical language using description theory and made sense of
it with the amazing fact that description theory could re-introduce them.
This argument (1960) is the one that actually justifies using only
fundamental relations in the formal language of set theory in relation
to Zermelo's use of denotation in the 1908 paper. (This use of
denotation had simply been dropped earlier because of the influence of
other philosophical trends.)
His analyses give a slightly different picture from the
one you suggest for your readers.
I found a few papers that mention his ideas. So, I thought
I would make them available.
WM's principle of "proof by reality", of course, has an immutable
semantics based on the universally consistent pragmatics of
language acquisition in childhood.
Thus, he need never explain himself. The defect is always
with the questioner.
When mathematics is based on "will", one ought not | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=8725669","timestamp":"2014-04-19T23:38:08Z","content_type":null,"content_length":"3770","record_id":"<urn:uuid:e5f7080a-bc52-4680-8913-cb5f58b83f5f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00397-ip-10-147-4-33.ec2.internal.warc.gz"} |
Voorhees Township, NJ Trigonometry Tutor
Find a Voorhees Township, NJ Trigonometry Tutor
...I am experienced and capable of tutoring in classes ranging from college algebra, geometry, high school algebra, calculus 1 and 2, and pre-calculus. I have taken college calculus 1, 2, 3, and differential equations, along with probability & statistics. My engineering courses include structural analysis, strength of materials, water mechanics, soil mechanics, and others of a civil engineering nature.
10 Subjects: including trigonometry, calculus, geometry, algebra 1
...When I work with you, I'm not following a scripted curriculum. Instead, I'm evaluating your particular strengths and weaknesses and developing *your* strategy to get you the highest score possible. We'll cover tricks and tips that will save you time, and we'll only use methods that will work for ...
47 Subjects: including trigonometry, chemistry, English, reading
...I have experience in teaching and tutoring the following concepts in discrete mathematics: set theory; number theory, including prime numbers and prime factoring; combinations and permutations; binary and hexadecimal conversion; game theory; linear algebra (Cramer's rule); and sample spaces (decision trees).
22 Subjects: including trigonometry, geometry, statistics, GED
...I got an A+ in linear algebra and abstract algebra. I spent a lot of time helping the other students with their homework and in understanding the concepts. Consequently, I am well-prepared to
help students learn the various parts of algebra, from proofs to dealing with spaces.
19 Subjects: including trigonometry, calculus, geometry, algebra 2
...In The Chemical Engineering Curriculum, the required courses include the core courses in Mathematics, Chemistry, and the Chemical Engineering courses. As a matter of fact, Chemical Engineering
students take the same core courses as the Math and Chemistry Majors do. Here’s a list of courses I ha...
30 Subjects: including trigonometry, chemistry, calculus, geometry | {"url":"http://www.purplemath.com/voorhees_township_nj_trigonometry_tutors.php","timestamp":"2014-04-18T18:57:48Z","content_type":null,"content_length":"24932","record_id":"<urn:uuid:7ac1f7dd-f827-4ad5-94e1-636689bf0df7>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00429-ip-10-147-4-33.ec2.internal.warc.gz"} |
Is R mod 2pi a Compact Manifold?
Why is R mod 2pi a compact manifold?
Isn't this like a real line which is not compact?
How should we prove it using a finite sub-cover for this manifold?
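For reference, a standard proof sketch (my addition, written out in LaTeX; it uses only the quotient topology and the Heine-Borel theorem):

```latex
Let $q:\mathbb{R}\to\mathbb{R}/2\pi\mathbb{Z}$ be the quotient map, which is
continuous by definition of the quotient topology. Its restriction
$q|_{[0,2\pi]}:[0,2\pi]\to\mathbb{R}/2\pi\mathbb{Z}$ is still continuous and
is surjective, since every real number differs from some point of $[0,2\pi]$
by an integer multiple of $2\pi$. The closed interval $[0,2\pi]$ is compact
by the Heine--Borel theorem, and a continuous image of a compact space is
compact; hence $\mathbb{R}/2\pi\mathbb{Z}$ is compact.
```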
What is the topology that you want?
If it is R with the topology of a line modulo the discrete group of integer multiples of 2pi, then use the definition of open set in the quotient topology to show that every open cover has a finite
subcover. You need to know that a closed interval is compact. | {"url":"http://www.physicsforums.com/showpost.php?p=3732277&postcount=4","timestamp":"2014-04-16T04:35:56Z","content_type":null,"content_length":"8192","record_id":"<urn:uuid:5d16bad7-e5be-4b74-8d6b-e53b76b97336>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00141-ip-10-147-4-33.ec2.internal.warc.gz"} |
von Neumann, John (1903
von Neumann, John (1903–1957)
Hungarian-American mathematician who made important contributions to set theory, computer science, economics, and quantum mechanics. John von Neumann (pronounced von noi-man) received a Ph.D. in
mathematics from the University of Budapest and later worked at the Institute for Advanced Study in Princeton. The book Theory of Games and Economic Behavior,^1 which he co-authored with Oskar Morgenstern
in 1944, is considered a seminal work in the field of game theory. Von Neumann devised the so-called von Neumann architecture used in all modern computers and studied cellular automata in order to
construct the first examples of self-replicating automata, now known as von Neumann machines. Von Neumann had a mind of great ingenuity, nearly total recall of what he'd learned, immense arrogance,
and a great love of jokes and humor.
1. Neumann, J. von and Morgenstern, O. Theory of Games and Economic Behavior. New York: Wiley, 1964.
2. Poundstone, William. Prisoner's Dilemma: John Von Neumann, Game Theory and the Puzzle of the Bomb. New York: Anchor, reprinted 1993.
Related entry
von Neumann probe
Related categories | {"url":"http://www.daviddarling.info/encyclopedia/V/von_Neumann.html","timestamp":"2014-04-21T09:38:27Z","content_type":null,"content_length":"7754","record_id":"<urn:uuid:f6f06660-2939-4c44-b0d7-4e034cbe6a82>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00199-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solve the following system by using graph paper or graphing technology. 2x + 2y = –6 3x – 2y = 11
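For readers without graph paper or graphing technology handy, the system can also be solved algebraically; this short Python sketch (my addition, not part of the original page) applies Cramer's rule:

```python
from fractions import Fraction

def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1
    if det == 0:
        raise ValueError("no unique solution")
    x = Fraction(c1 * b2 - c2 * b1, det)
    y = Fraction(a1 * c2 - a2 * c1, det)
    return x, y

x, y = solve_2x2(2, 2, -6, 3, -2, 11)
print(x, y)  # 1 -4, i.e. the two lines cross at (1, -4)
```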
| {"url":"http://openstudy.com/updates/4f7cb60de4b09f22231ba40d","timestamp":"2014-04-19T07:25:27Z","content_type":null,"content_length":"44347","record_id":"<urn:uuid:37964eed-736f-4771-923a-ed8f590fd64c>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00652-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bayesian Abductive Logic Programs: A Probabilistic Logic for Abductive Reasoning
Sindhu V. Raghavan
In this proposal, we introduce Bayesian Abductive Logic Programs (BALP), a probabilistic logic that adapts Bayesian Logic Programs (BLPs) for abductive reasoning. Like BLPs, BALPs also combine
first-order logic and Bayes nets. However, unlike BLPs, which use deduction to construct Bayes nets, BALPs employ logical abduction. As a result, BALPs are more suited for problems like plan/activity
recognition that require abductive reasoning. In order to demonstrate the efficacy of BALPs, we apply it to two abductive reasoning tasks — plan recognition and natural language understanding. | {"url":"http://ijcai.org/papers11/Abstracts/492.html","timestamp":"2014-04-16T14:10:36Z","content_type":null,"content_length":"1628","record_id":"<urn:uuid:56655ba8-0a1d-4b9c-9952-d85bbcf902b9>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00282-ip-10-147-4-33.ec2.internal.warc.gz"} |
Perpetual calendar
Hi all,
Last edited by gAr (2011-07-07 21:32:20)
"Believe nothing, no matter where you read it, or who said it, no matter if I have said it, unless it agrees with your own reason and your own common sense" - Buddha?
"Data! Data! Data!" he cried impatiently. "I can't make bricks without clay."
Re: Perpetual calendar
Hi gAr;
Have you heard of Zeller's congruence? Yours is similar in some ways.
Would like to move this over to "Formulas." I do not think we have one in there.
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: Perpetual calendar
Hi bobbym,
Thanks for telling about that, I didn't know such a formula existed.
Okay, you may move it to "formulas".
I still can't explain the strange behaviour of my formula during leap years!
Re: Perpetual calendar
Hi gAr;
What anomalous behaviour? Do you have an example?
Re: Perpetual calendar
Hi bobbym,
The behaviour which I mentioned in the postscript of post #1.
For leap years, subtraction is required only for the first 2 months, then it's alright!
2004/01/13 - 3
2004/02/13 - 6
2004/03/13 - 6
May require a small correction, but unable to find that.
Re: Perpetual calendar
I can not explain that either. Have you found any mistakes that the method makes?
Re: Perpetual calendar
Not able to find any!
When posting, I believed everything was fine.
Re: Perpetual calendar
In leap years February makes its contribution of one extra day - the 29th of Feb.
Re: Perpetual calendar
That should affect months after february, not before.
Re: Perpetual calendar
hi gAr
Is this your own formula and you want to verify that it works?
a formula that you have found and you want to know why it works?
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: Perpetual calendar
Hi Bob,
It's my own formula which I derived today, not perfect yet.
Re: Perpetual calendar
OK. I'm impressed!
But why not yet perfect?
Re: Perpetual calendar
Thank you!
I think it's because the sequence I'm considering adds the extra day of the leap year to the next year, and not to the months after february of the leap year.
It may require some rearrangement of months, like moving out the first two months to the previous year, and moving in the two months of the next year, hmmm let me check that way!
Re: Perpetual calendar
Hi gAr;
In Zeller's the year starts on Mar 1. That is a clue as to why you add to your first two. They are really the last two monts of the year!
Re: Perpetual calendar
Hi bobbym,
Yes, I'm thinking of that, we posted at the same time!
Re: Perpetual calendar
Yes, he has Jan and Feb as the last 2 months of the previous year.
Re: Perpetual calendar
Hi gAr,
If we consider 2 cases, the years 400 and the year 2000. then y would be the same for both cases. But that would be a contradiction right?
Re: Perpetual calendar
You are not thinking in terms of mods. The days of the week could be the same for groups of years. There is no contradiction.
Modular equations can have an infinite number of solutions because as far as mod 7 is concerned { ...,-5,2,9,16,...} are all the same.
Re: Perpetual calendar
Hi 123ronnie321,
According to the rule to find leap year, the same set of calendars repeat every 400 years.
Re: Perpetual calendar
Re: Perpetual calendar
Thanks for telling, I guess it was not followed those days.
I'll look at some history.
Re: Perpetual calendar
The method you are using may not apply for those two dates. 400 AD is the Julian calendar and 2000 AD is the Gregorian. In 1752 they changed from the Julian to the Gregorian. About 11 days were lost.
That might be the discrepancy.
Re: Perpetual calendar
Hi bobbym,
Yes, thanks.
Would there be any further correction, say after 10,000s of years?
I checked for 6666 A.D.; it works fine for that year.
Re: Perpetual calendar
I do not think so. Except that every 3000 years or so there is a loss of a day using the Gregorian calendar.
This is what I am using you will see a similarity with yours.
http://en.wikipedia.org/wiki/Calculatin … f_the_week
Re: Perpetual calendar
I'll continue with the formula then. Let past 1600 be whatever it was!
Sakamoto's algorithm works great, very similar to mine!
I'll stop the duplication of work.
Thank you.
| {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=180397","timestamp":"2014-04-23T23:36:24Z","content_type":null,"content_length":"38872","record_id":"<urn:uuid:a7e9ef29-1a36-44a1-ac5f-29219f529f4a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00453-ip-10-147-4-33.ec2.internal.warc.gz"} |
R Tutorials--Counts and Proportions
COUNTS AND PROPORTIONS
Binomial and Poisson Data
Count data--data derived from counting things--are often modeled as binomially or Poisson distributed. The binomial model applies when the counts are derived from
independent Bernoulli trials in which the probability of a "success" is known, and the number of trials (i.e., the maximum possible value of the count) is also known. The classic example is coin
tossing. We toss a coin 100 times and count the number of times the coin lands heads side up. The maximum possible count is 100, and the probability of adding one to the count (a "success") on each
trial is known to be 0.5 (assuming the coin is fair). Trials are independent; i.e., previous outcomes do not influence the probability of success on current or future trials. Another requirement is
that the probability of success remain constant over trials.
Poisson counts are often assumed to occur when the maximum possible count is not known, but is assumed to be large, and the probability of adding one to the count at each moment (trials are often ill
defined) is also unknown, but is assumed to be small. An example may help.
Suppose we are counting traffic fatalities during a given month. The maximum possible count is quite high, we can imagine, but in advance we can't say exactly (or even approximately) what it might
be. Furthermore, the probability of a fatal accident at any given moment in time is unknown but small. This sounds like it might be a Poisson distributed variable. But is it?
The built in data set "Seatbelts" allows us to have a look at such a count variable. "Seatbelts" is not a data frame, it's a time series, so extracting the counts of fatalities will take a bit of
> deaths = as.vector(Seatbelts[,1]) # All rows of column 1 extracted.
> length(deaths)
[1] 192
> mean(deaths)
[1] 122.8021
> var(deaths)
[1] 644.1386
The vector "deaths" now contains the monthly number of traffic deaths in Great Britain during the 192 months from January 1969 through December 1984. We also have our first indication this is not a
Poisson distributed variable. In the Poisson distribution, the mean and the variance are the same. Let's continue nevertheless.
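The mean-equals-variance property just invoked can be illustrated with a quick simulation. This Python sketch is my addition (the tutorial itself works in R) and uses Knuth's multiplication method to draw Poisson variates:

```python
import math
import random
import statistics

def poisson(lam, rng):
    """Draw one Poisson(lam) variate by Knuth's multiplication method (lam > 0)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while p > limit:
        k += 1
        p *= rng.random()
    return k - 1

rng = random.Random(0)
sample = [poisson(122.8, rng) for _ in range(10_000)]
print(statistics.mean(sample), statistics.variance(sample))
# both land near 122.8 for a true Poisson, unlike the deaths data (variance ~ 644)
```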
Next we'll look at a histogram of the "deaths" vector, and plotted over top of that we will put the Poisson distribution with mean = 122.8...
> hist(deaths, breaks=15, prob=T, ylim=c(0,.04))
> lines(x<-seq(65,195,10),dpois(x,lambda=mean(deaths)),lty=2,col="red")
And I'm not gonna lie to you. That took some trial and error! The histogram function asks for 15 break points, turns on density plotting, and sets the y-axis to go from 0 to .04. The poisson density
function was plotted using the lines( ) function. The x-values were generated by using seq( ) and stored into "x" on the fly. The y-values were generated using the dpois( ) function. We also
requested a dashed, red line. Examination of the figure shows what we suspected. The empirical distribution does not match the theoretical Poisson distribution.
> sim.dist = rpois(192, lambda=mean(deaths))
> qqplot(deaths, sim.dist, main="Q-Q Plot")
> abline(a=0, b=1, lty=2, col="red")
Finally, you may recall (if you read that tutorial) the qqnorm( ) function can be used to check a distribution for normality. The qqplot( ) function will compare any two distributions to see if they
have the same shape. If they do, the plotted points will fall along a straight line. The plot to the right doesn't look too terribly bad until we realize these two distributions should not only both
be poisson, they should also both have the same mean (or lambda value). Thus, the points should fall along a line with intercept 0 and slope 1. The moral of the story: just because count data sound
like they might fit a certain distribution doesn't mean they will. R provides a number of mechanisms for checking this.
We could have saved ourselves a lot of trouble by looking at a time series plot to begin with. I will not reproduce it, but the command for doing so is below. In the plot, we see the data are
strongly cyclical and, therefore, that the individual elements of the vector should not be considered independent counts. In fact, there is a yearly cycle in the number of traffic deaths. (How would
you show that?)
> plot(Seatbelts[,1]) # Not shown.
Furthermore, a scatterplot of "deaths" against its index values appears to show that the probability of dying in a traffic accident is decreasing over the duration of the record...
> scatter.smooth(1:192, deaths) # Not shown.
Lots of problems here!
The Binomial Test
Suppose we set up a classic card-guessing test for ESP using a 25-card deck of Zener cards, which consists of 5 cards each of 5 different symbols. If the null hypothesis is correct (H0: no ESP) and
the subject is just guessing at random, then we should expect pN correct guesses from N independent Bernoulli trials on which the probability of a success (correct guess) is p = 0.2. Suppose our
subject gets 9 correct guesses. Is this out of line with what we should expect just by random chance?
A number of proportion tests could be applied here, but the sample size is fairly small (just 25 guesses), so an exact binomial test is our best choice...
> binom.test(x=9, n=25, p=.2)
Exact binomial test
data: 9 and 25
number of successes = 9, number of trials = 25, p-value = 0.07416
alternative hypothesis: true probability of success is not equal to 0.2
95 percent confidence interval:
0.1797168 0.5747937
sample estimates:
probability of success
Oh, too bad! Assuming we set alpha at the traditional value of .05, we fail to reject the null hypothesis with an obtained p-value of .074. The 95% confidence interval tells us this subject's true
rate of correct guessing is best approximated as being between 0.18 and 0.57. This incorporates the null value of 0.2, so once again, we must regard the results as being consistent with the null
We might argue at this point that we should have done a one-tailed test. (A two-tailed test is the default.) Of course, this decision should be made in advance, but if the subject is displaying
evidence of ESP, we would expect his success rate to be not just different from chance but greater than chance. To take this into account in the test, we need to set the "alternative=" option.
Choices are "less", "greater", and "two.sided" (the default)...
> binom.test(x=9, n=25, p=.2, alternative="greater")
Exact binomial test
data: 9 and 25
number of successes = 9, number of trials = 25, p-value = 0.04677
alternative hypothesis: true probability of success is greater than 0.2
95 percent confidence interval:
0.2023778 1.0000000
sample estimates:
probability of success
The one-tailed test allows the null to be rejected at alpha=.05. The confidence interval says the subject is guessing with a success rate of at least 0.202. The confidence level can also be set by
changing the "conf.level=" option to any reasonable value less than 1. The default value is .95.
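As an independent cross-check on the one-tailed p-value reported by binom.test, the exact upper tail can be summed directly. The snippet below is my addition and is in Python rather than R, purely as a language-independent check:

```python
from math import comb

# P(X >= 9) for X ~ Binomial(n = 25, p = 0.2): the exact one-tailed p-value
n, p0, successes = 25, 0.2, 9
p_value = sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
              for k in range(successes, n + 1))
print(p_value)  # about 0.04677, matching the binom.test output above
```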
The Single-Sample Proportion Test
The subject keeps guessing because, of course, we'd like to see this above chance performance repeated. He has now made 400 passes through the deck for a total of 10,000 independent guesses. He has
guessed correctly 2,022 times. What should we conclude?
An exact binomial test is probably not the best choice here as the sample size is now very large. We'll substitute a single-sample proportion test...
> prop.test(x=2022, n=10000, p=.2, alternative="greater")
1-sample proportions test with continuity correction
data: 2022 out of 10000, null probability 0.2
X-squared = 0.2889, df = 1, p-value = 0.2955
alternative hypothesis: true p is greater than 0.2
95 percent confidence interval:
0.1956252 1.0000000
sample estimates:
The syntax is exactly the same. Notice the proportion test calculates a chi-squared statistic. The traditional z-test of a proportion is not implemented in R, but the two tests are exactly
equivalent. Notice also a correction for continuity is applied. If you don't want it, set the "correct=" option to FALSE. The default value is TRUE. (This value must be set to FALSE to make the test
mathematically equivalent to the uncorrected z-test of a proportion.)
Two-Sample Proportions Test
A random sample of 428 adults from Myrtle Beach reveals 128 smokers. A random sample of 682 adults from San Francisco reveals 170 smokers. Is the proportion of adult smokers in Myrtle Beach different
from that in San Francisco?
> prop.test(x=c(128,170), n=c(428,682),
+ alternative="two.sided",
+ conf.level=.99)
2-sample test for equality of proportions with continuity correction
data: c(128, 170) out of c(428, 682)
X-squared = 3.0718, df = 1, p-value = 0.07966
alternative hypothesis: two.sided
99 percent confidence interval:
-0.02330793 0.12290505
sample estimates:
prop 1 prop 2
0.2990654 0.2492669
Don't be upset by the fact that I typed this on multiple lines by hitting the Enter key at convenient spots. Being neat is optional! The two-proportions test also does a chi-square test with
continuity correction, which is mathematically equivalent to the traditional z-test with correction. Enter "hits" or successes into the first vector ("x"), the sample sizes into the second vector
("n"), and set options as you like. To turn off the continuity correction, set "correct=F". I set the alternative to two-sided, but this was unnecessary as two-sided is the default. I also set the
confidence level for the confidence interval to 99% to illustrate this option. I made up these data, by the way.
R incorporates a function for calculating the power of a 2-proportions test. The syntax is illustrated here from the help page...
power.prop.test(n = NULL, p1 = NULL, p2 = NULL, sig.level = 0.05, power = NULL,
alternative = c("two.sided", "one.sided"),
strict = FALSE)
The value of n should be set to the sample size per group, p1 and p2 to the group probabilities or proportions of successes, and power to the desired power. One and only one of these options must be
passed as NULL, and R will calculate it from the others. In the example above, what sample sizes should we have if we want a power of 90%?
> power.prop.test(p1=.299, p2=.249, sig.level=.05, power=.9,
+ alternative="two.sided")
Two-sample comparison of proportions power calculation
n = 1670.065
p1 = 0.299
p2 = 0.249
sig.level = 0.05
power = 0.9
alternative = two.sided
NOTE: n is number in *each* group
We would need 1,670 subjects in each group.
Multiple Proportions Test
The two-proportions test generalizes directly to a multiple proportions test. The example from the help page should suffice to illustrate...
> example(prop.test)
> ## Data from Fleiss (1981), p. 139.
> ## H0: The null hypothesis is that the four populations from which
> ## the patients were drawn have the same true proportion of smokers.
> ## A: The alternative is that this proportion is different in at
> ## least one of the populations.
> smokers <- c( 83, 90, 129, 70 )
> patients <- c( 86, 93, 136, 82 )
> prop.test(smokers, patients)
4-sample test for equality of proportions without continuity correction
data: smokers out of patients
X-squared = 12.6004, df = 3, p-value = 0.005585
alternative hypothesis: two.sided
sample estimates:
prop 1 prop 2 prop 3 prop 4
0.9651163 0.9677419 0.9485294 0.8536585
If you run this example, you will notice that I have deleted some of the output from the simpler proportion tests.
revised 2010 August 6
Computing in the net of possibilities
In systems composed of coupled oscillating elements the saddle points form a network. The above networks belong to a system of five elements. The saddle points are depicted as points. Every saddle
point is connected to four others: two of these connections lead to the particular saddle point, two others away from it. The figure shows two possible paths (orange and blue) the system may take.
Each path corresponds to the result of a calculation. © MPI for Dynamics and Self-Organization
(Phys.org) -- Scientists at the Max Planck Institute for Dynamics and Self-Organization in Göttingen have developed an entirely new principle for information processing. The complex network computer
now stands as an alternative to the other possibilities in data processing - such as the conventional computer or the quantum computer. The fundamental requirement is a system, for instance a laser,
with oscillating elements that can interact with one another. The researchers were able to demonstrate that the characteristic dynamics of such a system can be cleverly harnessed to perform the full
range of logical operations. The complex network computer can even perform some tasks, such as the coarse sorting of numbers, considerably faster than conventional computers. Furthermore, the
researchers have managed to take a first step in programming a robot according to the new principle.
A computer is much more than simply hardware. Foremost, it is a principle for the processing of data and information. The essence of the conventional computer for example, which has long had a
decisive effect on our daily life, is not to be sought in transistors, chips and semiconductors. Rather, it is characterized by the ways and means of performing calculations with the help of two
easily distinguishable states (conventionally known as 0 and 1). Scientists at the Max Planck Institute for Dynamics and Self-Organization in Göttingen have now developed a completely new principle
for information processing. Their so-called complex network computer is equally capable of performing arbitrary calculations, but does this under completely different conditions.
"In contrast to classical data processing on a PC, our new approach is not based on a binary system of zeros and ones", explains Marc Timme, head of the Network Dynamics research group at the Institute.
What is more, a complex network computer could in principle be built from any oscillating system. "The simplest example is a pendulum", says Timme. However, particular electrical circuits whose
components rhythmically exchange charge with each other, or lasers can also be said to oscillate. If several such units are linked - as with a number of pendulums connected to each other by a spring
- they exhibit a special dynamic behaviour which lends itself to the processing of data.
A choreography of oscillations corresponds to a state of the whole system
The key to this behaviour lies in so-called saddle points: states of the whole system which are stable in some respects and unstable in others. "Imagine a ball sitting in the hollow of a real saddle",
explains Timme. If this ball is moved exactly parallel to the horse's back and then released, it will always roll back into the hollow. The initial state is stable with respect to this kind of
disturbance. But if the ball is set in motion perpendicularly to the horse's back, it is a completely different matter: the ball will fall off; the state is unstable. In the case of connected
pendulums, a special relation between the oscillations, in which particular pendulums move synchronously, corresponds to such a saddle point state.
In systems of connected oscillating elements, such saddle points form a kind of network: in response to an external disturbance that destabilizes a particular saddle point, the whole system shifts
to another one. "In our example system each saddle point leads to two others, which in turn are connected with two further saddle points", explains Fabio Schittler Neves from the Institute. Which
path the system actually takes in this net of possible states depends on the kind of disturbance.
"In our design, we regard each disturbance as an input signal that can be composed of several components", says Schittler Neves. Each component is coupled to one of the oscillating elements of the
whole system. In the case of a group of coupled pendulums, for instance, a component signal corresponds to a slight impact on one of the pendulums. The relative strength of these component signals
determines to which new saddle point state the system will tend.
All logical operations can be performed in one network
The input signal thus determines the path taken through the network of saddle points. The path taken corresponds to the result of the calculation. "The state then taken by the system allows
inferences about the relative strengths of the individual signal components", explains Timme. "It's a kind of sorting by size."
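As a caricature of the input-output behaviour described here (purely illustrative Python; it models none of the oscillator dynamics), the computation amounts to reporting the ordering of the input component strengths:

```python
def saddle_path_output(signal):
    """Toy stand-in for the network's 'sorting by size': the path the
    system takes through the saddle-point network reports which input
    component is strongest, which is next strongest, and so on."""
    return sorted(range(len(signal)), key=lambda i: signal[i], reverse=True)

# An impact pattern on three 'pendulums' yields their rank order.
assert saddle_path_output([0.2, 0.9, 0.5]) == [1, 2, 0]
```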
In their latest publication, the researchers were now able to show that a complete system of logic can be built on this: all the logical operations such as addition, multiplication or negation
can be represented. However, whereas a classical computer uses one component - a subsystem of the whole computer - to perform a particular logical operation like, for example, addition, the operation
in a complex network computer takes place in the whole network simultaneously. "All logical operations can therefore be performed similarly in this network", explains Timme.
This means that even relatively small systems can perform an unbelievably large number of possible operations: whereas five oscillating elements provide only ten different system states and can
therefore perform only ten different calculations, 100 elements provide 5 x 10^20. This number represents 10,000 times the number of letters in all the books in all the libraries in the world. In
addition, the complex network computer performs some tasks, such as the coarse sorting of figures, much faster than its conventional equivalent.
The new principle of calculation enables a robot to navigate its way through an obstacle course
The new principle of calculation has also proved itself in its first practical application. It allowed the scientists to build a simple robot which finds its own way through an obstacle course. The
input signals from its sensors correspond to the disturbances of the system. "In this case, electrical oscillators could serve as the hardware", explains Schittler Neves. "This very first application shows how the robot's brain functions when the basic principle of the complex network computer is imitated", he adds. The scientists are currently working on a physical implementation in electronic circuits.
"We are still far away from a powerful computer in the true sense of the word", says Timme. "But we were able to demonstrate that the idea basically works", he adds. The current status is therefore
comparable with that of the quantum computer. The theory of performing computations with the help of quantum algorithms is continually advancing. However, whether the hardware might be based on
semiconductor structures, superconductors, arrangements of single atoms or completely different physical systems is still a subject for further research.
"In the case of complex network computers, it is probably not going to be coupled pendulums", says Timme with a smile. Effective computation would require several thousand such coupled pendulums.
The system is more suitable for illustrative purposes. Systems of coupled lasers seem more promising to the researchers. Not only do they offer precisely controlled frequencies, which are a further
requirement for complex network computers, they also operate in a particularly high range of up to several billion oscillations per second, enabling a computer to calculate particularly fast.
More information: Fabio Schittler Neves and Marc Timme, Computation by Switching in Complex Networks of States, Physical Review Letters, 2 July 2012. dx.doi.org/10.1103/PhysRevLett.109.018701
4 / 5 (1) Aug 15, 2012
absolutely fascinating!
and there is a system out there in here that's already doing it with neural nets and doing it well: the brain!
5 / 5 (1) Aug 15, 2012
This approach to computing reminds me of what happens in a network of soap bubbles: when one of the bubbles pops, the entire mass adjusts to the new network of tensions and compressions.
When one of the bubbles pop, the effect on the entire mass depends on the size and location of the popped bubble.
Further, each bubble could be said to play the role of an oscillator (as the bubble membrane is formed by the balancing of opposing molecular forces), and the network of surfaces formed by the
intersection of all the bubbles are the couplings between the oscillators.
Make the mass of bubbles 'programmable' and voila, a complex network computer!
not rated yet Aug 15, 2012
Self-driving vehicles just got several years closer.
not rated yet Aug 15, 2012
Just like the brain: the interconnections ARE the programing!
1 / 5 (1) Aug 15, 2012
Self-driving vehicles just got several years closer.
What? They're already here...
But they're facing very hard resistance from the establishment.
This research could perhaps lead to an actual artificial brain, neurons, axons, synapses and so on, and maybe a better understanding of what actually makes consciousness...
This is indeed something to follow.
San Luis Rey Algebra 2 Tutor
Find a San Luis Rey Algebra 2 Tutor
...I am now a Biology major at UCSD. I have tutored many high school students in algebra and other areas of math. I have gone beyond the high school level, taking higher-level calculus courses at my university, where I study biology. I also have younger cousins and help them with their homework in math and other subject areas whenever I see them, so I am still fresh with elementary school math!
42 Subjects: including algebra 2, reading, English, Spanish
...I've been working with high-school Chemistry students for so long I've memorized the curriculum for the course. My experience with the Physical Sciences portion of the MCAT also lends to my
expertise in the subject. I know how boring conversions can be, but trust me: once we get to the electromagnetic spectrum, everything will be interesting again.
43 Subjects: including algebra 2, English, reading, chemistry
Hey there! :)My name is Julie V. and I currently attend Cal State San Marcos as a business major. I graduated high school with high honors and a 3.75 GPA. I did really well with math and writing
16 Subjects: including algebra 2, English, writing, reading
...I show the student how to do these and to think logically about the problem. Train your brain !!! So, if you need a really great SAT or ACT math tutor who is experienced and knows how to teach
properly, I would be glad to tutor your student. I am an excellent ASVAB tutor and AFQT tutor.
20 Subjects: including algebra 2, reading, Spanish, geometry
...At best, I improve students' self esteem, helping them to help others. Thank you, Sammie W. (Oceanside)I am fully qualified for the CBEST because I successfully passed the CBEST to become a
middle school math teacher. I have a bachelors in applied math.
13 Subjects: including algebra 2, calculus, geometry, ASVAB
Nearby Cities With algebra 2 Tutor
Barona Rancheria, CA algebra 2 Tutors
Beach Center, CA algebra 2 Tutors
Belmont Shore, CA algebra 2 Tutors
Espinoza, CO algebra 2 Tutors
Gilman Hot Springs, CA algebra 2 Tutors
Lakeview, CA algebra 2 Tutors
Naples, CA algebra 2 Tutors
Oak Glen, CA algebra 2 Tutors
Old Town, SD algebra 2 Tutors
Pinyon Pines, CA algebra 2 Tutors
Portola Hills, CA algebra 2 Tutors
Romoland, CA algebra 2 Tutors
Sky Valley, CA algebra 2 Tutors
Smiley Heights, CA algebra 2 Tutors
Villas Del Parque, PR algebra 2 Tutors
Fast evaluation of elementary mathematical functions with correctly rounded last bit
Results 1 - 10 of 28
- ACM Trans. Math. Softw , 2007
"... This paper presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend to arbitrary-precision
ideas from the IEEE 754 standard, by providing correct rounding and exceptions. We demonstrate how these stron ..."
Cited by 70 (14 self)
Add to MetaCart
This paper presents a multiple-precision binary floating-point library, written in the ISO C language, and based on the GNU MP library. Its particularity is to extend to arbitrary-precision ideas
from the IEEE 754 standard, by providing correct rounding and exceptions. We demonstrate how these strong semantics are achieved — with no significant slowdown with respect to other
arbitrary-precision tools — and discuss a few applications where such a library can be useful. Categories and Subject Descriptors: D.3.0 [Programming Languages]: General—Standards; G.1.0 [Numerical
Analysis]: General—computer arithmetic, multiple precision arithmetic; G.1.2 [Numerical Analysis]: Approximation—elementary and special function approximation; G 4 [Mathematics of Computing]:
Mathematical Software—algorithm design, efficiency, portability
- IEEE Transactions on Computers , 1998
"... The Table Maker's Dilemma is the problem of always getting correctly rounded results when computing the elementary functions. After a brief presentation of this problem, we present new
developments that have helped us to solve this problem for the double-precision exponential function in a small d ..."
Cited by 32 (14 self)
Add to MetaCart
The Table Maker's Dilemma is the problem of always getting correctly rounded results when computing the elementary functions. After a brief presentation of this problem, we present new developments
that have helped us to solve this problem for the double-precision exponential function in a small domain. These new results show that this problem can be solved, at least for the double-precision
format, for the most usual functions. Index Terms—Floating-point arithmetic, rounding, elementary functions, Table Maker's Dilemma. 1 INTRODUCTION The IEEE-754 standard for floating-point
arithmetic [2], [11] requires that the results of the arithmetic operations should always be correctly rounded.
That is, once a rounding mode is chosen among the four possible ones, the system must behave as if the result were first computed exactly, with infinite precision, then rounded. There is no similar
requirement for the elementary...
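The basic strategy behind correctly rounded libraries discussed in several of these papers — evaluate with extra precision, then round once — can be sketched in a few lines. This minimal Python illustration uses the standard decimal module as a stand-in for multiple-precision arithmetic; it is purely illustrative and not code from any of the cited libraries.

```python
import math
from decimal import Decimal, getcontext

def correctly_rounded_exp(x):
    """exp(x) as if computed exactly and then rounded once to binary64:
    evaluate with many guard digits, then do a single float conversion."""
    getcontext().prec = 60          # far more digits than binary64 carries
    return float(Decimal(x).exp())  # Decimal(x) converts the float exactly

# e itself rounds to the binary64 constant math.e
assert correctly_rounded_exp(1.0) == math.e
```

The Table Maker's Dilemma is exactly the catch in this strategy: for rare worst-case arguments, no fixed number of guard digits is provably sufficient, which is why the worst-case searches described in these papers matter.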
- In W. Gautschi (Ed.), AMS Proceedings of Symposia in Applied Mathematics 48 , 1994
"... . This document is an excerpt from the current hypertext version of an article that appeared in Walter Gautschi (ed.), Mathematics of Computation 1943--1993: A Half-Century of Computational
Mathematics, Proceedings of Symposia in Applied Mathematics 48, American Mathematical Society, Providence, ..."
Cited by 21 (0 self)
Add to MetaCart
. This document is an excerpt from the current hypertext version of an article that appeared in Walter Gautschi (ed.), Mathematics of Computation 1943--1993: A Half-Century of Computational
Mathematics, Proceedings of Symposia in Applied Mathematics 48, American Mathematical Society, Providence, RI 02940, 1994. The symposium was held at the University of British Columbia August 9--13,
1993, in honor of the fiftieth anniversary of the journal Mathematics of Computation. The original abstract follows. Higher transcendental functions continue to play varied and important roles in
investigations by engineers, mathematicians, scientists and statisticians. The purpose of this paper is to assist in locating useful approximations and software for the numerical generation of these
functions, and to offer some suggestions for future developments in this field. 5.9. Mathieu, Lam'e, and Spheroidal Wave Functions. 5.9.1. Characteristic Values of Mathieu's Equation. Software
- In Real Numbers and Computers, Schloss Dagstuhl , 2004
"... Abstract. This article is a case study in the implementation of a portable, proven and efficient correctly rounded elementary function in double-precision. We describe the methodology used to
achieve these goals in the crlibm library. There are two novel aspects to this approach. The first is the pr ..."
Cited by 19 (9 self)
Add to MetaCart
Abstract. This article is a case study in the implementation of a portable, proven and efficient correctly rounded elementary function in double-precision. We describe the methodology used to achieve
these goals in the crlibm library. There are two novel aspects to this approach. The first is the proof framework, and in general the techniques used to balance performance and provability. The
second is the introduction of processor-specific optimization to get performance equivalent to the best current mathematical libraries, while trying to minimize the proof work. The implementation of
the natural logarithm is detailed to illustrate these questions. Mathematics Subject Classification. 26-04, 65D15, 65Y99. 1.
- In Proceedings of the 2006 ACM symposium on Applied computing , 2006
"... The implementation of a correctly rounded or interval elementary function needs to be proven carefully in the very last details. The proof requires a tight bound on the overall error of the
implementation with respect to the mathematical function. Such work is function specific, concerns tens of lin ..."
Cited by 17 (6 self)
Add to MetaCart
The implementation of a correctly rounded or interval elementary function needs to be proven carefully in the very last details. The proof requires a tight bound on the overall error of the
implementation with respect to the mathematical function. Such work is function specific, concerns tens of lines of code for each function, and will usually be broken by the smallest change to the
code (e.g. for maintenance or optimization purpose). Therefore, it is very tedious and error-prone if done by hand. This article discusses the use of the Gappa proof assistant in this context. Gappa
has two main advantages over previous approaches: Its input format is very close to the actual C code to validate, and it automates error evaluation and propagation using interval arithmetic.
Besides, it can be used to incrementally prove complex mathematical properties pertaining to the C code. Yet it does not require any specific knowledge about automatic theorem proving, and thus is
accessible to a wider community. Moreover, Gappa may generate a formal proof of the results that can be checked independently by a lowerlevel proof assistant like Coq, hence providing an even higher
confidence in the certification of the numerical code. 1.
, 2005
"... This article presents advances on the subject of correctly rounded elementary functions since the publication of the libultim mathematical library developed by Ziv at IBM. This library showed
that the average performance and memory overhead of correct rounding could be made negligible. However, the ..."
Cited by 13 (8 self)
Add to MetaCart
This article presents advances on the subject of correctly rounded elementary functions since the publication of the libultim mathematical library developed by Ziv at IBM. This library showed that
the average performance and memory overhead of correct rounding could be made negligible. However, the worst-case overhead was still a factor 1000 or more. It is shown here that, with current
processor technology, this worst-case overhead can be kept within a factor of 2 to 10 of current best libms. This low overhead has very positive consequences on the techniques for implementing and
proving correctly rounded functions, which are also studied. These results lift the last technical obstacles to a generalisation of (at least some) correctly rounded double precision elementary
, 2001
"... CLIP is an implementation of CLP(Intervals) which has been designed to be verifiably correct in the sense that the answers it returns are mathematically correct solutions to the underlying
arithmetic constraints. This fundamental design criteria affects many aspects of the implementation from the in ..."
Cited by 10 (2 self)
Add to MetaCart
CLIP is an implementation of CLP(Intervals) which has been designed to be verifiably correct in the sense that the answers it returns are mathematically correct solutions to the underlying arithmetic
constraints. This fundamental design criteria affects many aspects of the implementation from the input and output of decimal constants to the design of the interval arithmetic libraries and the
constraint solving algorithms. In particular, to enhance verifiability, CLIP employs the simplest model of constraint solving in which constraints are decomposed into sets of primitive constraints
which are then solved using a library of primitive constraint contractors. This approach results in a simple constraint solver whose correctness is relatively straightforward to verify, but the
solver is only able to solve relatively simple constraints. In this paper, we present the syntax, semantics, and implementation of CLIP, and we show how to use metalevel techniques to enhance the
power of the CLIP constraint solver while preserving the simple structure of the system. In particular, we demonstrate that several of the box-narrowing algorithms from the Newton and Numerica
systems can be easily implemented in CLIP. The principal advantages of this approach are (1) the resulting solvers are relatively easy to prove correct, (2) new solvers can be rapidly prototyped
since the code is more concise and declarative than for imperative languages, and (3) contractors can be implemented directly from mathematical formulae without having to first prove results about
interval arithmetic operators. Finally, the source code for the system is publicly available, which is a clear prerequisite for public, independent verifiability.
- IEEE Transactions on Computers , 2011
"... High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the
error relative to some ideal value is well bounded. This certification may require a time-consuming proof fo ..."
Cited by 8 (3 self)
Add to MetaCart
High confidence in floating-point programs requires proving numerical properties of final and intermediate values. One may need to guarantee that a value stays within some range, or that the error
relative to some ideal value is well bounded. This certification may require a time-consuming proof for each line of code, and it is usually broken by the smallest change to the code, e.g., for
maintenance or optimization purpose. Certifying floating-point programs by hand is, therefore, very tedious and error-prone. The Gappa proof assistant is designed to make this task both easier and
more secure, due to the following novel features: It automates the evaluation and propagation of rounding errors using interval arithmetic. Its input format is very close to the actual code to
validate. It can be used incrementally to prove complex mathematical properties pertaining to the code. It generates a formal proof of the results, which can be checked independently by a lower level
proof assistant like Coq. Yet it does not require any specific knowledge about automatic theorem proving, and thus, is accessible to a wide community. This paper demonstrates the practical use of
this tool for a widely used class of floating-point programs: implementations of elementary functions in a mathematical library.
"... We propose a new algorithm to find worst cases for correct rounding of an analytic function. We first reduce this problem to the real small value problem — i.e. for polynomials with real
coefficients. Then we show that this second problem can be solved efficiently, by extending Coppersmith’s work on ..."
Cited by 7 (3 self)
Add to MetaCart
We propose a new algorithm to find worst cases for correct rounding of an analytic function. We first reduce this problem to the real small value problem — i.e. for polynomials with real
coefficients. Then we show that this second problem can be solved efficiently, by extending Coppersmith’s work on the integer small value problem — for polynomials with integer coefficients — using
lattice reduction [4, 5, 6]. For floating-point numbers with a mantissa less than N, and a polynomial approximation of degree d, our algorithm finds all worst cases at distance less than N^(-d^2/(2d+1)) from a machine number in time O(N^((d+1)/(2d+1)+ε)). For d = 2, this improves on the O(N^(2/3+ε)) complexity from Lefèvre's algorithm
- IEEE Transactions on Computers
"... Abstract—We propose a new algorithm to find worst cases for the correct rounding of a mathematical function of one variable. We first reduce this problem to the real small value problem—i.e.,
for polynomials with real coefficients. Then, we show that this second problem can be solved efficiently by ..."
Cited by 7 (3 self)
Add to MetaCart
Abstract—We propose a new algorithm to find worst cases for the correct rounding of a mathematical function of one variable. We first reduce this problem to the real small value problem—i.e., for
polynomials with real coefficients. Then, we show that this second problem can be solved efficiently by extending Coppersmith’s work on the integer small value problem—for polynomials with integer
coefficients—using lattice reduction. For floating-point numbers with a mantissa less than N and a polynomial approximation of degree d, our algorithm finds all worst cases at distance less than
N^(-d^2/(2d+1)) from a machine number in time O(N^((d+1)/(2d+1)+ε)). For d = 2, a detailed study improves on the O(N^(2/3+ε)) complexity from Lefèvre's algorithm to O(N^(4/7+ε)). For larger d, our
algorithm can be used to check that there exist no worst cases at distance less than N^(-k) in time O(N^(1/2+ε)). Index Terms—Computer arithmetic, multiple precision arithmetic, special function approximations.
Math Forum Discussions
- User Profile for: tgua_@_inci.rr.com
User Profile for: tgua_@_inci.rr.com
UserID: 34962
Name: Chris or Terry Guay
Registered: 12/6/04
Total Posts: 90
An object is stationary in space and located at a distance 10000 km from the centre of a certain planet. It is found... - Homework Help - eNotes.com
An object is stationary in space and located at a distance of 10000 km from the centre of a certain planet. It is found that 1.0 MJ of work needs to be done to move the object to a stationary point 20000 km from the centre of the planet. Calculate how much more work needs to be done to move the object to a stationary point 80000 km from the centre of the planet.
We know that near a planet g is not constant, so `W = mgS` cannot be used over distances comparable to the planet's radius. Instead, use the gravitational potential energy `U = -(GMm)/r`. The work needed to move an object at rest at distance `r_1` to rest at distance `r_2` is
`W = GMm(1/r_1 - 1/r_2)`
It is given that 1.0 MJ is needed to go from `r_1 = 10000 km = 1.0xx10^7 m` to `r_2 = 20000 km = 2.0xx10^7 m`:
`1.0xx10^6 = GMm(1/(1.0xx10^7) - 1/(2.0xx10^7)) = GMm xx 5.0xx10^-8` (i)
So `GMm = 2.0xx10^13 J m`.
Let W be the additional work needed to move the object from 20000 km to 80000 km `(= 8.0xx10^7 m)`:
`W = 2.0xx10^13 xx (1/(2.0xx10^7) - 1/(8.0xx10^7)) = 2.0xx10^13 xx 3.75xx10^-8 = 7.5xx10^5 J`
`W = 0.75 MJ`
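The arithmetic can be checked with a short Python sketch (GMm here is a single constant inferred from the given 1.0 MJ figure; it does not come from data about any real planet):

```python
def work_between(GMm, r1, r2):
    # work to move a stationary object from radius r1 to radius r2 (meters),
    # using the inverse-square potential W = GMm*(1/r1 - 1/r2)
    return GMm * (1.0 / r1 - 1.0 / r2)

# 1.0 MJ is needed from 1.0e7 m to 2.0e7 m, which fixes the constant GMm:
GMm = 1.0e6 / (1.0 / 1.0e7 - 1.0 / 2.0e7)   # ≈ 2.0e13 J*m
extra = work_between(GMm, 2.0e7, 8.0e7)      # from 20000 km to 80000 km
print(extra)                                  # ≈ 7.5e5 J = 0.75 MJ
```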
Analysis of individual differences in multidimensional scaling via n-way generalization of Eckart-Young decomposition
Results 11 - 20 of 271
- Journal of Experimental Psychology: General , 1991
"... In this article, the relation between the identification, similarity judgment, and categorization of multidimensional perceptual stimuli is studied. The theoretical analysis focused on general
recognition theory (GRT), which is a multidimensional generalization of signal detection theory. In one app ..."
Cited by 46 (7 self)
Add to MetaCart
In this article, the relation between the identification, similarity judgment, and categorization of multidimensional perceptual stimuli is studied. The theoretical analysis focused on general
recognition theory (GRT), which is a multidimensional generalization of signal detection theory. In one application, 2 Ss first identified a set of confusable stimuli and then made judgments of their
pairwise similarity. The second application was to Nosofsky's (1985b, 1986) identificationcategorization experiment. In both applications, a GRT model accounted for the identification data better
than Luce's (1963) biased-cboice model. The identification results were then used to predict performance in the similarity judgment and categorization conditions. The GRT identification model
accurately predicted the similarity judgments under the assumption that Ks allocated attention to the 2 stimulus dimensions differently in the 2 tasks. The categorization data were predicted
successfully without appealing to the notion of selective attention. Instead, a simpler GRT model that emphasized the different decision rules used in identification and categorization was adequate.
The perceptual processes involved when subjects identify, categorize, or judge the pairwise similarity of multidimensional perceptual stimuli are closely related (e.g., Ashby &
- SIAM JOURNAL ON SCIENTIFIC COMPUTING , 2007
"... In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study
sparse tensors, which have the property that the vast majority of the elements are zero. We propose stori ..."
Cited by 45 (13 self)
In this paper, the term tensor refers simply to a multidimensional or $N$-way array, and we consider how specially structured tensors allow for efficient storage and computation. First, we study
sparse tensors, which have the property that the vast majority of the elements are zero. We propose storing sparse tensors using coordinate format and describe the computational efficiency of this
scheme for various mathematical operations, including those typical to tensor decomposition algorithms. Second, we study factored tensors, which have the property that they can be assembled from more
basic components. We consider two specific types: A Tucker tensor can be expressed as the product of a core tensor (which itself may be dense, sparse, or factored) and a matrix along each mode, and a
Kruskal tensor can be expressed as the sum of rank-1 tensors. We are interested in the case where the storage of the components is less than the storage of the full tensor, and we demonstrate that
many elementary operations can be computed using only the components. All of the efficiencies described in this paper are implemented in the Tensor Toolbox for MATLAB.
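The coordinate scheme described in this abstract is easy to illustrate outside MATLAB. Below is a minimal Python sketch (not the Tensor Toolbox API; the class name and methods are made up) of COO-style storage, where only nonzero entries are kept, keyed by their index tuple:

```python
# Minimal sketch of coordinate (COO) storage for a sparse N-way tensor:
# only nonzero entries are kept, keyed by their index tuple.
class SparseTensor:
    def __init__(self, shape):
        self.shape = shape
        self.data = {}                     # maps index tuple -> nonzero value

    def __setitem__(self, idx, value):
        if value != 0:
            self.data[idx] = value
        else:
            self.data.pop(idx, None)       # storing 0 removes the entry

    def __getitem__(self, idx):
        return self.data.get(idx, 0)       # absent entries are implicitly zero

    def nnz(self):
        return len(self.data)

# A 100 x 100 x 100 tensor with 2 nonzeros costs storage for 2 entries,
# not for the full 10^6 elements.
T = SparseTensor((100, 100, 100))
T[0, 1, 2] = 3.5
T[99, 99, 99] = -1.0
```

The point of the format is exactly the one the abstract makes: storage and elementwise operations scale with the number of nonzeros rather than with the size of the full array.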
- IEEE INTERNATIONAL CONFERENCE ON DATA MINING , 2005
"... Linear algebra is a powerful and proven tool in web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score web pages based on the
principal eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structu ..."
Cited by 45 (16 self)
Linear algebra is a powerful and proven tool in web search. Techniques, such as the PageRank algorithm of Brin and Page and the HITS algorithm of Kleinberg, score web pages based on the principal
eigenvector (or singular vector) of a particular non-negative matrix that captures the hyperlink structure of the web graph. We propose and test a new methodology that uses multilinear algebra to
elicit more information from a higher-order representation of the hyperlink graph. We start by labeling the edges in our graph with the anchor text of the hyperlinks so that the associated linear
algebra representation is a sparse, three-way tensor. The first two dimensions of the tensor represent the web pages while the third dimension adds the anchor text. We then use the rank-1 factors of
a multilinear PARAFAC tensor decomposition, which are akin to singular vectors of the SVD, to automatically identify topics in the collection along with the associated authoritative web pages.
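As a concrete reminder of the matrix baseline this work extends: the principal-eigenvector scoring behind PageRank can be sketched as a damped power iteration on a column-stochastic link matrix. The graph and numbers below are made up for illustration (this is the classical matrix idea, not the paper's tensor extension), and dangling-node handling is omitted:

```python
# Toy damped power iteration: score three "pages" by (an approximation of)
# the principal eigenvector of a column-stochastic hyperlink matrix.
def pagerank(M, d=0.85, steps=200):
    n = len(M)
    v = [1.0 / n] * n                      # start from the uniform distribution
    for _ in range(steps):
        v = [(1 - d) / n + d * sum(M[i][j] * v[j] for j in range(n))
             for i in range(n)]
    return v

# Column j holds page j's outlinks, each weighted 1/outdegree(j):
# page 0 links to pages 1 and 2; pages 1 and 2 each link back to page 0.
M = [[0.0, 1.0, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 0.0, 0.0]]
scores = pagerank(M)                       # page 0 ends up ranked highest
```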
, 2000
"... INTRODUCTION Consider an I # J matrix X and suppose that rank (X) = 3. Let x i,j denote the (i, j)th entry of X.Thenit holds that x i,j admits a three-component bilinear decomposition x i#j # #
3 f #1 a i#f b j#f #1# for all i = 1,...,I and j = 1,...,J. Equivalently, letting a f := [a 1,f ..."
Cited by 45 (9 self)
INTRODUCTION Consider an $I \times J$ matrix $X$ and suppose that $\mathrm{rank}(X) = 3$. Let $x_{i,j}$ denote the $(i,j)$th entry of $X$. Then it holds that $x_{i,j}$ admits a three-component bilinear decomposition $x_{i,j} = \sum_{f=1}^{3} a_{i,f}\, b_{j,f}$ (1) for all $i = 1,\ldots,I$ and $j = 1,\ldots,J$. Equivalently, letting $a_f := [a_{1,f},\ldots,a_{I,f}]^T$ and similarly for $b_f$, $X = a_1 b_1^T + a_2 b_2^T + a_3 b_3^T$ (2).
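In plainer terms, the entrywise bilinear formula (1) and the sum-of-outer-products form (2) describe the same matrix. A quick numerical check with made-up integer factors (the data below is arbitrary, chosen only so the comparison is exact):

```python
# Check that the entrywise bilinear formula x[i][j] = sum_f a[i][f]*b[j][f]
# agrees with the matrix form X = a_1 b_1^T + a_2 b_2^T + a_3 b_3^T.
I, J, F = 4, 5, 3
a = [[(i + 1) * (f + 2) % 7 for f in range(F)] for i in range(I)]   # arbitrary
b = [[(j + 3) * (f + 1) % 5 for f in range(F)] for j in range(J)]   # arbitrary

# Entrywise bilinear decomposition, formula (1).
X_entrywise = [[sum(a[i][f] * b[j][f] for f in range(F)) for j in range(J)]
               for i in range(I)]

# Sum of rank-1 outer products a_f b_f^T, formula (2), one factor at a time.
X_outer = [[0] * J for _ in range(I)]
for f in range(F):
    for i in range(I):
        for j in range(J):
            X_outer[i][j] += a[i][f] * b[j][f]
```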
- Siam J Sci Statist Comp , 1991
"... Abstract. The (modified) Newton method is adapted to optimize generalized cross validation (GCV) and generalized maximum likelihood (GML) scores with multiple smoothing parameters. The main
concerns in solving the optimization problem are the speed and the reliability of the algorithm, as well as th ..."
Cited by 43 (8 self)
Abstract. The (modified) Newton method is adapted to optimize generalized cross validation (GCV) and generalized maximum likelihood (GML) scores with multiple smoothing parameters. The main concerns
in solving the optimization problem are the speed and the reliability of the algorithm, as well as the invariance of the algorithm under transformations under which the problem itself is invariant.
The proposed algorithm is believed to be highly efficient for the problem, though it is still rather expensive for large data sets, since its operational count is $(2/3)kn^3 + O(n^2)$, with k the number
of smoothing parameters and n the number of observations. Sensible procedures for computing good starting values are also proposed, which should help in keeping the execution load to the minimum
possible. The algorithm is implemented in Rkpack [RKPACK and its applications: Fitting smoothing spline models, Tech. Report 857, Department of Statistics, University of Wisconsin, Madison, WI, 1989]
and illustrated by examples of fitting additive and interaction spline models. It is noted that the algorithm can also be applied to the maximum likelihood (ML) and the restricted maximum likelihood
(REML) estimation of the variance component models.
- Neuroimage
"... Finding the means to efficiently summarize electroencephalographic data has been a long-standing problem in electrophysiology. A popular approach is identification of component modes on the
basis of the timevarying spectrum of multichannel EEG recordings—in other words, a space/frequency/time atomic ..."
Cited by 42 (0 self)
Finding the means to efficiently summarize electroencephalographic data has been a long-standing problem in electrophysiology. A popular approach is identification of component modes on the basis of
the time-varying spectrum of multichannel EEG recordings—in other words, a space/frequency/time atomic decomposition of the time-varying EEG spectrum. Previous work has been limited to only two of
these dimensions. Principal Component Analysis (PCA) and Independent Component Analysis (ICA) have been used to create space/time decompositions, suffering an inherent lack of uniqueness that is
overcome only by imposing constraints of orthogonality or independence of atoms. Conventional frequency/time decompositions ignore the spatial aspects of the EEG. Framing the data as a
three-way array indexed by channel, frequency, and time allows the application of a unique decomposition that is known as Parallel Factor Analysis (PARAFAC). Each atom is the tri-linear decomposition
into a spatial,
- IEEE Transactions on Knowledge and Data Engineering , 2008
"... Multiway data analysis captures multilinear structures in higher-order datasets, where data have more than two modes. Standard two-way methods commonly applied on matrices often fail to find the
underlying structures in multiway arrays. With increasing number of application areas, multiway data anal ..."
Cited by 42 (8 self)
Multiway data analysis captures multilinear structures in higher-order datasets, where data have more than two modes. Standard two-way methods commonly applied on matrices often fail to find the
underlying structures in multiway arrays. With an increasing number of application areas, multiway data analysis has become popular as an exploratory analysis tool. We provide a review of significant
contributions in the literature on multiway models and algorithms, as well as their applications in diverse disciplines including chemometrics, neuroscience, computer vision, and social network analysis.
- in IEEE Computer Society Conference on Computer Vision and Pattern Recognition
"... We consider the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher. The problem is an instance of the hypergraph
partitioning problem. We propose a twostep algorithm for solving this problem. In the first step we use a nove ..."
Cited by 41 (2 self)
We consider the problem of clustering in domains where the affinity relations are not dyadic (pairwise), but rather triadic, tetradic or higher. The problem is an instance of the hypergraph
partitioning problem. We propose a two-step algorithm for solving this problem. In the first step we use a novel scheme to approximate the hypergraph using a weighted graph. In the second step a
spectral partitioning algorithm is used to partition the vertices of this graph. The algorithm is capable of handling hyperedges of all orders including order two, thus incorporating information of
all orders simultaneously. We present a theoretical analysis that relates our algorithm to an existing hypergraph partitioning algorithm and explain the reasons for its superior performance. We
report the performance of our algorithm on a variety of computer vision problems and compare it to several existing hypergraph partitioning algorithms.
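The abstract does not say what the "novel scheme" in the first step is; the standard baseline such schemes are compared against is clique expansion, where every pair of vertices sharing a hyperedge accrues edge weight. A minimal sketch of that baseline (the hyperedge data is made up; this is not necessarily the authors' scheme):

```python
from itertools import combinations

# Clique expansion: approximate a hypergraph by a weighted graph in which
# every pair of vertices co-occurring in a hyperedge gains edge weight.
def clique_expand(hyperedges):
    weights = {}                           # maps (u, v) with u < v -> weight
    for edge in hyperedges:
        for u, v in combinations(sorted(edge), 2):
            weights[(u, v)] = weights.get((u, v), 0.0) + 1.0
    return weights

# Triadic affinities: {0,1,2} appears twice, {2,3,4} once.
H = [{0, 1, 2}, {0, 1, 2}, {2, 3, 4}]
W = clique_expand(H)
```

The resulting weighted graph can then be fed to any spectral partitioner, which is the second step described above.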
- SIAM J. Matrix Anal. Appl , 2004
"... Abstract. The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem
can be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. ..."
Cited by 37 (7 self)
Abstract. The canonical decomposition of higher-order tensors is a key tool in multilinear algebra. First we review the state of the art. Then we show that, under certain conditions, the problem can
be rephrased as the simultaneous diagonalization, by equivalence or congruence, of a set of matrices. Necessary and sufficient conditions for the uniqueness of these simultaneous matrix
decompositions are derived. In a next step, the problem can be translated into a simultaneous generalized Schur decomposition, with orthogonal unknowns [A.-J. van der Veen and A. Paulraj, IEEE Trans.
Signal Process., 44 (1996), pp. 1136–1155]. A first-order perturbation analysis of the simultaneous generalized Schur decomposition is carried out. We discuss some computational techniques (including
a new Jacobi algorithm) and illustrate their behavior by means of a number of numerical experiments.
, 2006
"... We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker
operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and ..."
Cited by 32 (9 self)
We propose two new multilinear operators for expressing the matrix compositions that are needed in the Tucker and PARAFAC (CANDECOMP) decompositions. The first operator, which we call the Tucker
operator, is shorthand for performing an n-mode matrix multiplication for every mode of a given tensor and can be employed to concisely express the Tucker decomposition. The second operator, which we
call the Kruskal operator, is shorthand for the sum of the outer products of the columns of N matrices and allows a divorce from a matricized representation and a very concise expression of the
PARAFAC decomposition. We explore the properties of the Tucker and Kruskal operators independently of the related decompositions. Additionally, we provide a review of the matrix and tensor operations
that are frequently used in the context of tensor decompositions.
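For a third-order example, the Kruskal operator assembles T[i][j][k] = Σ_r A[i][r]·B[j][r]·C[k][r] from the columns of three factor matrices. A small pure-Python sketch (sizes and values are made up; the operators themselves are defined in the paper's MATLAB Tensor Toolbox setting):

```python
# Kruskal operator for a third-order tensor: the sum over r of the
# outer products of the r-th columns of A, B and C.
def kruskal(A, B, C):
    I, J, K = len(A), len(B), len(C)
    R = len(A[0])                          # number of rank-1 components
    return [[[sum(A[i][r] * B[j][r] * C[k][r] for r in range(R))
              for k in range(K)]
             for j in range(J)]
            for i in range(I)]

# Two rank-1 components assembled into a 2 x 2 x 2 tensor.
A = [[1, 0], [0, 1]]
B = [[1, 2], [3, 4]]
C = [[5, 6], [7, 8]]
T3 = kruskal(A, B, C)
```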
Dallas, GA Geometry Tutor
Find a Dallas, GA Geometry Tutor
I have been a teacher for seven years and currently teach 9th grade Coordinate Algebra and 10th grade Analytic Geometry. I am up to date with all of the requirements in preparation for the EOCT.
I am currently finishing up my master's degree at KSU.
4 Subjects: including geometry, algebra 1, algebra 2, prealgebra
...ARMY. I have a good sense of humor and a warm, calm, laid-back personality. My approach to teaching is simple and straightforward.
11 Subjects: including geometry, algebra 1, GED, algebra 2
...I'm currently taking Accounting II. The highest score I made on SAT math is a 780. I have also taken the SAT four times for my experience.
17 Subjects: including geometry, chemistry, calculus, algebra 1
...I have tutored Physics for over 10 years and believe my success with students comes from being able to relate physics concepts to day-to-day experiences and examples. This makes it much easier
to understand, and maybe even more important, it makes it a lot more interesting and fun for the student. I love standardized tests and have scored within the 99th percentile on all the tests I have taken.
19 Subjects: including geometry, physics, calculus, GRE
...I like to explain to the student the many different ways a topic or a problem can be understood, so the student can choose which one is best for him/her. I want to show examples so the student can
fully understand each concept. Algebra 2 can be an advanced course for students who have difficulty in math, since it involves logarithmic equations and some other advanced concepts.
21 Subjects: including geometry, calculus, algebra 1, ESL/ESOL
Related Dallas, GA Tutors
Dallas, GA Accounting Tutors
Dallas, GA ACT Tutors
Dallas, GA Algebra Tutors
Dallas, GA Algebra 2 Tutors
Dallas, GA Calculus Tutors
Dallas, GA Geometry Tutors
Dallas, GA Math Tutors
Dallas, GA Prealgebra Tutors
Dallas, GA Precalculus Tutors
Dallas, GA SAT Tutors
Dallas, GA SAT Math Tutors
Dallas, GA Science Tutors
Dallas, GA Statistics Tutors
Dallas, GA Trigonometry Tutors
Nearby Cities With geometry Tutor
Acworth, GA geometry Tutors
Aragon, GA geometry Tutors
Austell geometry Tutors
Clarkdale, GA geometry Tutors
Emerson, GA geometry Tutors
Euharlee, GA geometry Tutors
Fairburn, GA geometry Tutors
Hiram, GA geometry Tutors
Holly Springs, GA geometry Tutors
Morrow, GA geometry Tutors
Powder Springs, GA geometry Tutors
Taylorsville, GA geometry Tutors
Temple, GA geometry Tutors
Villa Rica geometry Tutors
Winston, GA geometry Tutors
Do good math jokes exist?
Have a good joke? Share.
I know this is subjective, but the principle "should be of interest to mathematicians" trumps. (I hope.)
84 examples · tagged: soft-question, big-list
What did the forgetful functor do for his stoner friend?
He left adjoint as a free object.
An engineer hears that a famous mathematician will be giving a public lecture, and always having had a soft spot for math, he attends. The mathematician talks at length about all sorts of amazing phenomena that happen in 17-dimensional space. The engineer, amazed at the mathematician's intuition for 17-dimensional space, goes up to him afterwards and asks, "How do you picture 17 dimensions?", to which the mathematician answers, "Oh, it's easy. Just imagine n-dimensional space, and set n equal to 17."
My dad (an engineer) loves that joke.
Q: What's purple and commutes? A: A dead baby in a suitcase.
Q: What's purple and commutes and has a certain number of followers? A: A dead baby Jesus in a suitcase.
Mathematician1: So why did you become a mathematician?
Mathematician2: I don't like working with numbers.
Here is a joke I invented (based on a famous one); it got a mixed reaction.

A young mathematician comes to present to a famous mathematician his conjecture and ideas. "You are absolutely wrong," the famous mathematician dismissed the young one. Next enters another young mathematician and presents precisely the opposite conjecture. "You are absolutely wrong," replies the famous mathematician. The famous mathematician's wife interferes. "How could you tell both of them that they are wrong?" she says. "They have made completely opposite claims; one of them must be right!" "You are also wrong," replied the famous mathematician.
After introducing general topological spaces, the professor began to introduce the notion of convergence without a metric. He turned around and said,
"I have no balls."
A hit for months.
Perhaps the question should be, not "Do good math jokes exist", but "are they unique"?
There's a Notices article on this.
One of my favorites. It's about a statistician - close enough for me. (I found this version of the joke here)
A physicist, an engineer, and a statistician were out game hunting. The engineer spied a bear in the distance, so they got a little closer. "Let me take the first shot!" said the engineer, who missed the bear by three metres to the left. "You're incompetent! Let me try" insisted the physicist, who then proceeded to miss by three metres to the right. "Ooh, we got him!!" said the statistician.
Do good math jokes exist? Under the axiom of choice, sure. But it's not possible to find an explicit example.
I find the observation that the grade-school carry operation from addition-with-carry forms a non-trivial degree-2 cocycle in the group cohomology of Z/10 a pretty good joke embedded in mathematics.
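The observation is checkable by brute force: the carry c(a, b) = ⌊(a + b)/10⌋ satisfies the 2-cocycle identity associated with the extension 0 → Z → Z → Z/10 → 0. A short verification sketch in Python:

```python
# The grade-school carry as a function of two digits 0..9:
def carry(a, b):
    return (a + b) // 10                  # 1 if a + b >= 10, else 0

# Verify the 2-cocycle identity for the group extension of Z/10:
#   c(a,b) + c((a+b) mod 10, g) == c(b,g) + c(a, (b+g) mod 10)
# Both sides count the total carries in a + b + g, i.e. (a+b+g) // 10.
def is_two_cocycle():
    return all(
        carry(a, b) + carry((a + b) % 10, g)
        == carry(b, g) + carry(a, (b + g) % 10)
        for a in range(10) for b in range(10) for g in range(10)
    )
```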
There are 10 types of people in the world: those who understand binary, and 9 others.
The water receded and the Ark came to rest upon the land. Noah opened the doors and commanded the animals, “Go forth and multiply.” The animals slowly departed the Ark except for two snakes that remained in the back. Again Noah proclaimed, “Go forth and multiply,” yet the two snakes did not move. Noah walked to the back of the Ark and asked, “Why have you not followed my command?” The snakes answered, “Noah, we can’t, because we are Adders.”

Noah then went out upon the land and felled several large trees; from these trees he made a four-legged platform. He then went inside the Ark and carried the snakes outside, and upon placing them on the platform, his words became true.

As everyone knows … Adders can multiply using log tables.
It was proven by Cantor that a good math joke exists. Unfortunately, his proof was entirely non-constructive.
Less of a joke than an observation, but...
I've always found it appropriate that online identity thieves are in the business of stealing ones and zeroes.
The continuous functions are having a ball. On the dance floor, cosine and sine are jumping up and down, and the polynomials are forming a ring. But the exponential function is standing apart the whole evening. Out of sympathy, the identity function joins it and suggests: "Come on, just integrate yourself!" "I've tried that already," answers the exponential function, "but it didn't change a bit!"

Another one:

Why did the mathematician name his dog "Cauchy"? Because he leaves a residue at every pole.
I enjoy this page of Milne's Tips for Authors.
I also find the book Mathematics Made Difficult by Linderholm to be hilarious. I'm not going to search for favorites, but I find the first 2 exercises amusing:
up vote 13 down vote "1. Show that a finite subset of an arbitrary set E in a ring suffices to generate the ideal generated by E if, and only if, the ring is Noetherian.
*2. Show that 17 x 17 = 289. Generalize this result."
My favourite is supposedly a joke made by a mathematician who was interviewing a not-very-good graduate student who was taking generals. The interview was going badly, so to make the student feel better, the mathematician asked him for an example of a non-compact topological space. "The reals?" suggested the student, to which the mathematician replied, "Which topology were you taking?"
An excerpt from H. Petard, "A contribution to the mathematical theory of big game hunting," The American Mathematical Monthly, vol. 45, no. 7, pp. 446-447, 1938:
The Hilbert, or axiomatic, method. We place a locked cage at a given point of the desert. We then introduce the following logical system.
□ Axiom I. The class of lions in the Sahara Desert is non-void.
□ Axiom II. If there is a lion in the Sahara Desert, there is a lion in the cage.
□ Rule of Procedure. If p is a theorem, and "p implies q" is a theorem, then q is a theorem.
□ Theorem I. There is a lion in the cage.
The method of inversive geometry. We place a spherical cage in the desert, enter it, and lock it. We perform an inversion with respect to the cage. The lion is then in the interior of the cage, and we are outside.
The method of projective geometry. Without loss of generality, we may regard the Sahara Desert as a plane. Project the plane into a line, and then project the line into an interior
point of the cage. The lion is projected into the same point.
The Bolzano-Weierstrass method. Bisect the desert by a line running N-S. The lion is either in the E portion or in the W portion; let us suppose him to be in the W portion. Bisect this
portion by a line running E-W. The lion is either in the N portion or in the S portion; let us suppose him to be in the N portion. We continue this process indefinitely, constructing a
sufficiently strong fence about the chosen portion at each step. The diameter of the chosen portions approaches zero, so that the lion is ultimately surrounded by a fence of arbitrarily
small perimeter.
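Stripped of the lion, the Bolzano-Weierstrass method above is ordinary bisection: halve the region containing the target until its size is arbitrarily small. A playful one-dimensional sketch (here the "lion" is just a real number, and the "fence" is the current interval):

```python
# "Lion hunting" by bisection: trap a point lion in nested subintervals
# whose length shrinks below any desired epsilon.
def trap_lion(lion, lo, hi, eps=1e-9):
    while hi - lo > eps:
        mid = (lo + hi) / 2.0
        if lion < mid:                    # the lion is in the western half
            hi = mid
        else:                             # the lion is in the eastern half
            lo = mid
    return lo, hi                         # a cage of length at most eps

cage = trap_lion(3.14159, 0.0, 10.0)
```

At every step the lion stays inside the current interval, and the interval's length halves, so the final "fence" has arbitrarily small perimeter, exactly as the method promises.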
A friend made this up recently (I prefer the first half on its own):
"No meal is complete without soup. But you have to order it first."
Also I like this meta-joke, also by a friend (who didn't understand the original):
"What's purple and commutes? An abelian eggplant."
EDIT: one more, by Elizabeth: "Does this Hausdorff measure make me look fat?"
A swiftie. Most of you are probably too young to remember them...
up vote 11 down vote " $s = \displaystyle\int_a^b \sqrt{1 + [f'(x)]^2}\mathrm{d}x$ ", said Tom at length.
Tom Lehrer was a Mathematician and this comes through in several of his famous skits. Not precisely a "math joke", but still mathy and pretty darn funny.
If somebody likes mathematical logic, category theory, lambda calculus, or combinatory logic, then the following article can provide him/her with jokes that are at the same time correct mathematical theorems:
Ruehr, Fritz (2001). The Evolution of a Haskell Programmer. Willamette University.
The article provides approaches to implementing a mere Fibonacci function with such "over-calibrated" methods as harnessing deep metamathematical theorems (combinatory logic, category theory).
Haskell is a programming language (named after the logician Haskell B. Curry). It was developed by academia (not by industry or the market), and the main motivations behind its creation were cleanness and purity. It is based directly on lambda calculus, type theory, and combinatory logic, and much of the programming practice in it draws on category theory and algebra.
What did the zero say to the eight? "Nice belt."
I once ad-libbed this one. (Alas, it is a late entrant.)
Q: Why is it important to study Verma modules of Lie algebras?
A: The most widely used modules of Lie algebras and Lie groups are finite-dimensional irreducible representations, the Weyl modules. Of course, you should learn them first when you study representation theory. But they are only the tip of the iceberg.
Excuse my English if you spot some flaws. Since this is my first post here, I thought it would be nice to share some neat jokes.
1) A mathematician, a physicist and an engineer were out in the countryside when they met a farmer trying to build a fence. They introduced themselves, and the farmer asked them if they could help him shape the fence so he would get as much space as possible within it. The engineer stepped forward and said that it would be best for the farmer to make the fence square; that would be easiest. The physicist then said that it would be better to make it a circle, because then he would enclose as much space as possible. The mathematician laughed and said that you can get a lot more space than that! He took some pieces of fence, rolled them around himself, and then defined himself to be outside the fence!
2) Infinitely many mathematicians walked into a bar. The first one asked for one beer, the next one asked for half a beer, the third one asked for a quarter of a beer, and the fourth one asked for one eighth of a beer. Then the bartender said, "Screw this," and filled two glasses of beer!
3) An engineer was working on a problem when suddenly his trash bin caught fire. He immediately grabbed the fire extinguisher and put out the fire. In the next room a physicist was also working on a problem when his trash caught fire. He thought: the fire extinguisher blocks oxygen from the fire, ergo the fire is put out. So he grabbed the fire extinguisher and put out the fire. In the third room a mathematician was working on a problem when his trash bin also caught fire. He looked at it, thought, "The problem has a solution," and continued working!
Kurd Lasswitz, mathematician, writer, and inventor of science fiction in Germany, wrote this "nth part of Faust" for the Breslau Mathematical Society in 1882:
"Personen: Prost, Stud. math. in höheren Semestern, steht vor dem Staats-Examen,
Mephisto, Dx (sprich De-ix), Differentialgeisterkönig, ein Fuchs.
Ort Breslau. Zeit: Nach dem Abendessen. (Rechts ein Sofa, auf dem Tische zwischen allerlei Büchern ein Bierseidel und Bierflaschen, links eine Tafel auf einem Gestell, Kreide und
Schwamm. Auf der Tafel ist eine die gesamte Fläche einnehmende ungeheuerliche Differentialgleichung aufgeschrieben).
Prost am Tische, mit den Büchern beschäftigt. Er stärkt sich.
Habe nun, ach, Geometrie, Analysis und Algebra
und leider auch Zahlentheorie studiert,
und wie, das weiß man ja!
Da steh' ich nun als Kandidat
und finde zur Arbeit keinen Rat.
Ließe mich gern Herr Doktor lästern;
zieh' ich doch schon seit zwölf Semestern
herauf, herab und quer und krumm
meine Zeichen auf dem Papiere herum,
und seh', daß wir nichts integrieren können.
Es ist wahrhaftig zum Kopfeinrennen.
Zwar bin ich nicht so hirnverbrannt,
daß ich mich quälte als Pedant,
wenn ich 'ne Reihe potenziere,
zu seh'n, ob sie auch konvergiere,
... "
Have you heard the one about the constipated mathematician?
He had to work it out with a pencil.
A millionaire is trying to scientifically develop the best racing horse. He asks a biologist, a veterinarian, a trainer, and a mathematician. The biologist gives him advice about which type of horse to cross with which other type, the veterinarian advises on how to feed the horse and keep him healthy, and the trainer explains how to physically train the horse. The mathematician does not reply. After a few weeks the millionaire meets the mathematician, and it looks as if the mathematician has not slept much in recent days. "Do you have a solution for me?" asks the millionaire. "It is a difficult problem," answers the mathematician, "but I think I have a satisfactory solution for the case of spherical horses."
A creation of my own:
Q:What did the simplicial set say to the fibrant replacement functor?
A:"Oh, I'm so horny..."
A Statistical Look at Jewish History
Introductory Activity:
Using census numbers and various charts, students will be able to look at trends in the population of Jewish people in the last century and today. But, before looking at numbers that will be in the
hundreds of thousands and millions, teachers should review the concept of percentages:
Ask students to guess what the word "percent" means. Suggest that they break the word up into parts: what does "per" mean? "Cent"? Tell them "percent" means "out of 100" or "of one 100." So, 3%
means 3 out of every hundred.
Ask students to brainstorm places they've seen percentages used outside of the math room, e.g., 2% milk, newspapers... Then ask them: What information does a percentage provide that a number doesn't
and vice versa? Ask them to compare the following two statements:
Statement one: There are five girls in the class.
Statement two: Five percent of the class is girls.
What information does the first statement provide? The second? What does the first provide that the second doesn't? What does the second provide that the first doesn't? When might it be more useful
to describe something using a percentage? A number amount?
Tell students that now that they've reviewed the concept of percentages, they will review calculating percentages. Ask your students to estimate the percentage of boys in the class. Girls? Ask them
how they came up with their estimates.
Then, ask students to determine the exact percentages and have them explain how they came up with these numbers. Record their responses on the board. Ask students what information they need to
determine a percentage (the part and the whole) and then ask for the formula to calculate a percentage.
Next, ask students to identify the percentage of time they spend doing the following:
• being at school
• doing homework
• working at a job
• spending time with friends
• talking/spending time with their parents
• watching television
• sleeping
[Note: You may want to model how to perform these calculations, e.g., if a student spends eight hours of his day sleeping, then he spends 8/24 of the day sleeping, which is about 33.3% of his time.]
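For teachers who want a quick way to check students' figures, the part-over-whole calculation described above can be sketched in a few lines of Python (a hypothetical helper, not part of the lesson materials):

```python
def percent(part, whole):
    """Return what percentage `part` is of `whole`."""
    return part / whole * 100

# A student who sleeps eight of the 24 hours in a day:
print(round(percent(8, 24), 1))  # 33.3
# Five girls in a class of 25:
print(round(percent(5, 25), 1))  # 20.0
```

The two-argument shape (part, whole) mirrors the formula students are asked to recall.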
Break students into groups of four or five to discuss their percentages. Have group members compare their figures with one another. What patterns do they see among their group members' statistics?
What stories do these numbers tell about how the students spend their lives? What questions do the students have that these statistics cannot answer?
Ask students to reconvene as a group. Have them share the patterns they saw and questions the statistics cannot answer. Encourage students to understand that while statistics can point out patterns
and trends, they cannot answer questions about why these patterns exist.
Tell students that in the upcoming activities they will use percentages to study Jewish population statistics and learn about the history of the Jewish people.
Learning Activities:
Activity One: Figuring out where the world's Jewish population currently lives
Before you begin this activity, make sure you print out Student Organizer-Activity One and make a copy for each of your students. This activity will give students practice in locating information on
the Internet and calculating percentages using raw data.
Ask students to think back to the Introductory Activity. Ask them to review what they learned about percentages, e.g., what "percentage" means, how to calculate a percentage, what information a
percentage offers versus a flat amount, and the strengths and shortcomings of percentages when looking at the way they spent their time. (Helps them spot patterns and trends, but doesn't answer why
they exist.) Record their responses on a piece of chart paper.
Now, write the following two phrases on the board:
• Percentage of total Jewish population worldwide living in that country
• Percentage of Jewish population within a country
Ask students what the difference between these two percentages is. Then, ask them what information they will need to determine the first percentage. The second? Write their responses on the board.
Give each student a copy of Student Organizer-Activity One. Go over the directions with students:
Below you will find the name of a country and the number of Jews living in that country. There are a total of 13 million Jews worldwide. Using this number and the figures below, determine what
percentage of the total Jewish population lives in that country. Note: There are many other countries that serve as home to Jews. However, their numbers are too small for our purposes.
Find the total population of each country listed. Use the Internet for help. (The Web site www.population.com is very helpful.) Calculate what percentage of each country's population is Jewish.
Complete the first country with the students. Ask them to tell you how to proceed and then model the method on the board.
Then, break students into pairs and ask them to complete the handouts.
After students are done, begin a discussion about the findings. Ask students to analyze their data. What patterns do they see? Do they see any exceptions to these patterns? What questions does their
data raise? Record students' questions on the board. If students have trouble generating questions, you may want to offer one of the following to jumpstart the discussion:
• Where is the greatest concentration of Jews in the world? Why?
• Why do you think the United States is home to the greatest number of Jews in the world?
• What other percentages might be interesting to calculate, e.g., percentage of Jews living in North America, South America, Europe...?
Answer Key for Activity One:
│ Country │ Number of Jews living in │ % of total Jewish population worldwide living in that │ Total population of country │ Percentage of Jewish population in that │
│ │ country │ country │ listed │ country │
│ United States │ 5,800,000 │ 44.6% │ 276,768,280 │ 2% │
│ Israel │ 4,847,000 │ 37.3% │ 6,188,054 │ 78% │
│ France │ 600,000 │ 4.6% │ 58,978,172 │ 1% │
│ Russia │ 550,000 │ 4.2% │ 147,463,480 │ 0.3% │
│ Ukraine │ 400,000 │ 3% │ 49,811,174 │ 0.8% │
│ Canada │ 360,000 │ 2.7% │ 31,006,347 │ 1.1% │
│ United Kingdom │ 300,000 │ 2.3% │ 58,795,119 │ 0.5% │
│ Argentina │ 250,000 │ 1.9% │ 36,737,664 │ 0.7% │
│ Brazil │ 130,000 │ 1% │ 171,853,126 │ 0.07% │
│ South Africa │ 106,000 │ 0.8% │ 43,426,386 │ 0.2% │
│ Australia │ 100,000 │ 0.7% │ 18,783,551 │ 0.5% │
│ Hungary │ 80,000 │ 0.6% │ 10,065,420 │ 0.8% │
│ Belarus │ 60,000 │ 0.5% │ 10,401,784 │ 0.5% │
│ Germany │ 60,000 │ 0.5% │ 82,584,731 │ 0.07% │
Please note: This table was last updated in December, 2001.
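The two percentage columns in the answer key follow directly from the two formulas students identify at the start of the activity. A short Python sketch (the function names are ours, for checking answers only):

```python
WORLD_JEWISH_POPULATION = 13_000_000  # total used throughout Activity One

def share_of_world(jews_in_country):
    """Percentage of the total worldwide Jewish population living in one country."""
    return jews_in_country / WORLD_JEWISH_POPULATION * 100

def share_of_country(jews_in_country, country_population):
    """Percentage of a country's own population that is Jewish."""
    return jews_in_country / country_population * 100

# First row of the answer key (United States):
print(round(share_of_world(5_800_000), 1))                 # 44.6
print(round(share_of_country(5_800_000, 276_768_280), 1))  # 2.1 (the table rounds to 2%)
```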
Activity Two: Looking at the numbers of Jews in the last century
In this activity, students will examine the numbers of the world Jewish population in the last century. Using these numbers and Microsoft Excel, they will create a clear and coherent graph. More
importantly, they will begin to see how numbers and history correspond with each other. Before starting, give each student a copy of Student Organizer-Activity Two and a piece of graph paper.
Break students into small groups (2-4 students).
Ask students to look at their Student Organizer-Activity Two printout.
Using the information from their printout, they should create a graph that displays the data on their chart, ideally using the Excel program if it is available. Before they begin graphing ask them:
What are the two categories of information they will be charting? (population and time). Which category will they use for the x-axis and which for the y-axis? (There is no right answer. When the
students are done, ask them to compare the graphs with time as an x-axis with the graphs using population as the x-axis.)
After graphing, students will use the same data to determine the percentage increase or decrease for each of the years listed. To do so, the teacher may have to give a refresher about how to do this
calculation. As students work, encourage them to first anticipate whether a change will be an increase or decrease, and then calculate the actual percentage. If their original guesses don't match
their answers, ask them to revisit their calculations; tell them this is an easy way to check their work.
For example, to calculate the percentage increase from 1900 to 1914, you must calculate (13,500,000/11,000,000 - 1) x 100% = 22.73%.
To calculate the percentage decrease from 1939 to 1948, you apply the same formula. (11,500,000/16,728,000 -1) x 100% = 31.25%
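Both worked examples apply the same signed formula. A minimal Python sketch for checking the remaining rows (ours, not part of the lesson materials):

```python
def percent_change(old, new):
    """Signed percentage change from `old` to `new`; negative means a decrease."""
    return (new / old - 1) * 100

print(round(percent_change(11_000_000, 13_500_000), 2))  # 22.73  (1900 to 1914)
print(round(percent_change(16_728_000, 11_500_000), 2))  # -31.25 (1939 to 1948)
```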
Answer Key for Activity Two:
│ Percentage difference from year A to B │ Difference │
│ 1900 to 1914 │ 22.73% increase │
│ 1914 to 1939 │ 23.91% increase │
│ 1939 to 1948 │ 31.25% decrease │
│ 1948 to 1969 │ 19.88% increase │
│ 1969 to 2000 │ 4.31% decrease │
Once the students are done with the mathematical portion of the lesson, ask them to analyze their data. Ask them to write down a list of questions that these statistics raise.
Ask students to research the questions they generated. Suggest the following sites:
• National Museum of American Jewish History
This site contains information about Jewish history, including a side-by-side timetable of American history, American Jewish history, and world Jewish history. (http://www.nmajh.org/timeline/)
• HyperHistory Online
This site provides an interactive world history timeline.
• PBS' HERITAGE Web site
This is the Web site companion to the PBS series HERITAGE: CIVILIZATION AND THE JEWS. Contains comprehensive historical and cultural information spanning 1,000 years.
After the students have researched their questions, bring them together as a group to share their questions and research as a class. The following are questions you might want to cover if the
students do not bring them up on their own.
• Why is there such a severe drop in the world's Jewish population from 1939 to 1948? What period of history accounts for this?
• Why the decrease between 1969 and 2000? What was going on in the world? What was happening with the world population as a whole?
• What do you predict about the world's Jewish population for 2030? Explain your theories. (For this question, ask students to look at their graphs and identify and general trends, e.g., the
population is generally going up, down, etc., and base their predictions on these trends.)
Ask students to reflect and comment on how the population statistics helped them formulate questions and guided their research.
Ask students to brainstorm theories about the future of the world's Jewish population, based on history and population trends thus far.
Activity Three: Cities with the Largest Jewish Population in the Diaspora
In this activity, students will be given a listing of the cities with the largest Jewish population in the Diaspora. They will use this information to determine the percentage of Jews in the city and
then research individual cities and their possible appeal to the Jewish community. Before starting, give each student a copy of Student Organizer-Activity Three.
Divide class into small groups of 2 to 4.
Ask them to use http://www.population.com or whatever resource you would like to find the population of each of the cities listed on Student Organizer-Activity Three.
Then tell them to calculate the percentage of the city's population that is composed of people who identify themselves as Jewish.
Once students have completed the mathematical portion of this exercise, have them consider the following questions. Record student responses on the board or a piece of chart paper. Encourage
students to share other questions the statistics raised.
a. What similarities exist in these cities that make them attractive for large numbers of Jewish people? Think about geography, ideology, politics, etc.
b. What types of services and centers exist in these cities that cater to the needs of Jews? For this question, pick one city and use the Internet to search for businesses, industries and community
centers that target Jews. For example, are there Jewish neighborhoods?
Ask each group to choose another city to research on the Internet. Then have each group present their ideas about why the city they researched might attract a significant Jewish population.
Answer Key for Activity Three:
│ City │ Jewish population of the city │ Total population of city │ Percentage of Jews in the city │
│ New York, USA │ 1,750,000 │ 7,392,064 │ 24% │
│ Miami, USA (metropolitan area) │ 535,000 │ 4,000,000* │ 13% │
│ Los Angeles, USA │ 490,000 │ 3,593,823 │ 14% │
│ Paris, France │ 350,000 │ 2,077,537 │ 17% │
│ Philadelphia, USA │ 254,000 │ 1,434,587 │ 18% │
│ Chicago, USA │ 248,000 │ 2,706,882 │ 9% │
│ San Francisco, USA │ 210,000 │ 746,277 │ 28% │
│ Boston, USA (metropolitan area) │ 208,000 │ 5,828,000* │ 4% │
│ London, UK │ 200,000 │ 7,169,443 │ 2.7% │
│ Moscow, Russia │ 200,000 │ 8,345,471 │ 2% │
│ Buenos Aires, Argentina │ 180,000 │ 14,392,081 │ 1.3% │
│ Toronto, Canada │ 175,000 │ 2,400,000** │ 7% │
│ Washington, DC, USA │ 165,000 │ 510,478 │ 32% │
│ Kiev, Ukraine │ 110,000 │ 2,609,998 │ 4% │
│ Montreal, Canada │ 100,000 │ 999,243 │ 10% │
│ St. Petersburg, Russia │ 100,000 │ 4,750,000* │ 2% │
Source: World Jewish Congress (WJC), Lerner Publications Company, 1998. City population sources: www.population.com
** These population figures were not from www.population.com. Students should use alternate sources.
* The number from www.population.com did not seem correct, so alternate sources were used to get this number.
Culminating Activity/Assessment:
Option 1:
For the culminating activity, students will need to evaluate the population of their own ethnicity. As part of their project, they will have to do the following:
• research population figures of people of their own ethnicity worldwide
• find the percentage of their ethnicity in the United States
• create pie graphs as a visual representation of the figures that they have found
• using information from the 2000 U.S. Census, make predictions about the future
The students will discuss their heritage and ethnicity and determine which subgroup they want to be a part of.
Each student will need to create a presentation that includes, but is not limited to, the following information: (Note: Encourage students to use Excel when charting their information.)
• the total number of people in their ethnicity/race worldwide
• the total number of people in their ethnicity/race in the United States
• the percentage of the U.S. population their group makes up
• where the biggest numbers of people in their ethnic group are located and why they think this is so
• other trends they notice about their racial or ethnic group, e.g., they can trace their group's population over time and graph it as they did in the previous activity
• predictions about the demographics of their racial or ethnic group in the future
Students should display their work on posterboards or pieces of chart paper, and then have a fair where they display their findings for their class. To do this, break the class into two groups.
First, group one displays and group two reads the posterboards and asks questions of the presenters. Then the groups switch roles.
Option 2:
If there are too many students doing the same racial or ethnic groups, you may want to have some students do a school-wide survey and investigation. After they have collected their data, they should
compare the school figures to the national percentages of the 2000 U.S. Census.
Note: Because this is a complex group activity, students should first create a list of what needs to be done and who will be doing what.
Find rosters of each class in the school.
The students should come up with a list of racial and ethnic groups that would encompass all of the students in the school.
Have the students schedule times with each teacher so they can go to the classrooms and take a quick poll. Students should keep good records of which classrooms have been visited so they do not
bother classes twice and obtain repeat numbers.
Students may need to spend a few days gathering all of the data.
Once all of the data has been gathered, students will compile all of the numbers and start figuring out the percentages for each group.
Students will need to obtain the national percentages from the 2000 U.S. Census and then compare their school's figures. Ask students to put this information into tables and graphs. Suggestions:
• Students could create two pie charts, one showing the makeup of the U.S. population and another showing their own school's population.
• Students could create a color-coded bar graph, with red bars representing their school and blue representing the U.S., with the y-axis representing % of the population and the x-axis containing
the names of racial/ethnic groups.
Ask students to explain the reasons behind the differences and similarities between their school's demographics and those of the U.S. as a whole.
All of this information should be reported in a visual presentation to the rest of the class. Some ideas include using PowerPoint or a bulletin board in the hallway so the rest of the school can see
the results.
Cross-Curricular Extensions:
• History: Read historical fiction related to the Jewish Diaspora or students' own backgrounds to complement their understanding of the immigrant experience.
• Population trends: Demographers have predicted that the "white" population will not be the majority population in the next few decades. Have debates about what this will mean for the future of
each of these groups.
Community Connections:
• Students can interview family members to document their stories about living in the United States and the issues, conflicts and challenges that they faced when they first arrived in the country.
Video, audio and text clips can be included in their presentations.
Math Forum Discussions
Topic: Math & Calculators: perhaps an irrelevant story
Replies: 1 Last Post: Jun 26, 1995 10:57 AM
Math & Calculators: perhaps an irrelevant story
Posted: Jun 21, 1995 6:51 PM
On the subject of calculator use and mathematics education, I want to
toss in this incident from a recent ARML team practice we had. Now
these are *strong* students -- they all love math, they are incredibly
successful at it, they have a very solid understanding of the
processes underlying computation and yet.... I *still* think they are
too quick to use their calculators.
Context: we're practicing the "team" round of the ARML competition.
In this round, the fifteen-member team has twenty minutes to do a set
of ten problems. Cooperative work is allowed. For the last few years,
calculators have been permitted. In this practice, we are using an old
ARML competition written before calculators were permitted -- but I am
letting them use calculators anyway, since they will be able to do so
in the actual competition.
one of the problems in the set was:
Find all numbers between 90 and 100 (inclusive) that cannot
be written in the form
a + b + ab
where a and b are positive integers.
If you have spent some time playing around with multiplying
and factoring polynomials, you may recognize that a+b+ab
looks a lot like (a+1)*(b+1)... in fact
a+b+ab = (a+1)*(b+1) - 1
And this means that a number can be written in the desired form unless
it is one less than a prime. It's easy to confirm that the only primes
between 91 and 101 are 97 and 101, so only 96 and 100 can't be written
in the desired form.
There are a number of other ways to do this problem, some of them
minor variants of the above, others less efficient and even an
exhaustive search can be done in a few minutes if one sees a few
tricks to narrow the range of values for the pair (a,b).
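The exhaustive search mentioned above takes only a few lines. This Python sketch (our reconstruction, not the students' calculator program) also confirms the algebraic criterion that a number n is expressible exactly when n + 1 is composite:

```python
def expressible_by_search(n, limit=200):
    """Brute force: can n be written as a + b + a*b with a, b >= 1?"""
    return any(a + b + a * b == n
               for a in range(1, limit)
               for b in range(1, limit))

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# n = a + b + a*b  iff  n + 1 = (a + 1) * (b + 1)  iff  n + 1 is composite.
for n in range(90, 101):
    assert expressible_by_search(n) == (not is_prime(n + 1))

print([n for n in range(90, 101) if not expressible_by_search(n)])  # [96, 100]
```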
In our practice, our first team, which included eight USAMO
participants, elected to solve the problem by writing a quick program
for their calculator to enumerate the cases (without even seeing the
reduction mentioned above -- they simply had it run through all a and
b from 1 to 100) and print out all the numbers between 90 and 100 that
they got. This would work, too, of course -- but they made a mistake
somewhere in the details of the program (they were in a hurry) and so
blew the question.
I'm not sure what to think about this. On the one hand, it's a pretty
narrow and somewhat artificial problem, and the time constraints and
other problems in the set make the circumstances still more
artificial. Then, too, I have no objection in principle to using a
programmable calculator either to check one's work or as a shortcut.
Time is short, one can't rely on waiting for an insight to hit; if
you see a tractable way to a solution, go for it.
On the other hand, I would have hoped that they would not have
considered the problem done when the machine spat out its answer --
that at least one of them would have followed up and tried to confirm
their answer directly, algebraically. It also bothers me a bit that they
didn't see the algebraic pattern instantly.
I'm not trying to imply that I'm smarter than these students are --
quite the contrary, they are all sharper than I was at their age and
several are sharper than I am now. But they don't, in general, look
for algebraic patterns first, and they may not have as complete a
collection of algebraic patterns in their heads to draw upon as I feel
they should.
Littlewood wrote somewhere (in his "Miscellany"?) that he considered
every integer to be his personal friend. By which I think he meant
that he felt intimately familiar with the properties of numbers as a
result of long hours spent playing, computing, and working problems
with, through, over, and around them. It may well be possible to
acquire such familiarity even while using calculators and computers
extensively -- and indeed, to capture altogether new properties and
details -- but I think there is substantial risk of missing some core
notions, too.
[As an aside, note that the form of the problem ensures that it wouldn't be
enough just to plug a + b + ab into MAPLE or some symbolic computation
software and try to factor it --- without "completing the product" the
expression doesn't factor. This sort of limitation to symbolic
computation software really does crop up all the time in real work --
you need not only a grasp of the fundamentals, but experience born of
cranking through simpler cases to guide the software. Maybe, in a few
years, more intelligent symbolic computation software will be
available... maybe not.]
I'd like to see students who can do it all -- who don't shy away from
calculators OR algebra and who can use each to reinforce the other.
Ted Alper
Date Subject Author
6/21/95 Math & Calculators: perhaps an irrelevant story Ted Alper
6/26/95 Re: Math & Calculators: perhaps an irrelevant story Marsha Landau | {"url":"http://mathforum.org/kb/thread.jspa?messageID=1474469&tstart=0","timestamp":"2014-04-21T15:14:06Z","content_type":null,"content_length":"21916","record_id":"<urn:uuid:a9beaf24-df8e-4bb7-9b3e-06f680ca3e51>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00131-ip-10-147-4-33.ec2.internal.warc.gz"} |
max(), sum(), next()
Mensanator mensanator at aol.com
Thu Sep 4 01:20:39 CEST 2008
On Sep 3, 2:18 pm, Laszlo Nagy <gand... at shopzeus.com> wrote:
> bearophileH... at lycos.com wrote:
> > Empty Python lists [] don't know the type of the items it will
> > contain, so this sounds strange:
> >>>> sum([])
> > 0
> > Because that [] may be an empty sequence of someobject:
> You are right in that sum could be used to sum arbitrary objects.
> However, in 99.99% of the cases, you will be summing numerical values.
> When adding real numbers, the neutral element is zero. ( X + 0 = X) It
> is very logical to return zero for empty sequences.
No it isn't. Nothing is not 0, check with MS-Access, for instance:
Null + 1 returns Null. Any arithmetic expression involving a
Null evaluates to Null. Adding something to an unknown returns
an unknown, as it should.
It is a logical fallacy to equate unknown with 0.
For example, the water table elevation in ft above Mean Sea Level
is WTE = TopOfCasing - DepthToWater.
TopOfCasing is usually known and constant (until resurveyed).
But DepthToWater may or may not exist for a given event (well
may be covered with fire ants, for example).
Now, if you equate Null with 0, then the WTE calculation says
the water table elevation is flush with the top of the well,
falsely implying that the site is underwater.
And, since this particular site is on the Mississippi River,
it sometimes IS underwater, but this is NEVER determined by
water table elevations, which, due to the CORRECT treatment
of Nulls by Access, never returns FALSE calculations.
>>> sum([])
is a bug, just as it's a bug in Excel to evaluate blank cells
as 0. It should return None or throw an exception, as sum([None, 1]) does.
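(For illustration, a None-propagating sum in that spirit is easy to write; this is a hypothetical helper, not anything in Python's standard library:)

```python
def nullable_sum(values):
    """Sum that propagates missing data: returns None if the sequence
    is empty or contains a None, mimicking SQL/Access Null arithmetic."""
    total = 0
    seen = False
    for v in values:
        if v is None:
            return None          # unknown + anything = unknown
        total += v
        seen = True
    return total if seen else None  # empty -> unknown, not 0

print(nullable_sum([]))         # None
print(nullable_sum([1, None]))  # None
print(nullable_sum([1, 2, 3]))  # 6
```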
> Same way, if we would have a prod() function, it should return one for
> empty sequences because X*1 = X. The neutral element for this operation
> is one.
> Of course this is not good for summing other types of objects. But how
> clumsy would it be to use
> sum( L +[0] )
> or
> if L:
> value = sum(L)
> else:
> value = 0
> instead of sum(L).
> Once again, this is what sum() is used for in most cases, so this
> behavior is the "expected" one.
> Another argument to convince you: the sum() function in SQL for empty
> row sets returns zero in most relational databases.
> But of course it could have been implemented in a different way... I
> believe that there have been excessive discussions about this decision,
> and the current implementation is very good, if not the best.
> Best,
> Laszlo
More information about the Python-list mailing list | {"url":"https://mail.python.org/pipermail/python-list/2008-September/478959.html","timestamp":"2014-04-20T19:26:48Z","content_type":null,"content_length":"5575","record_id":"<urn:uuid:ba49b1e7-c745-4697-9ba3-cae8f8dc1496>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00438-ip-10-147-4-33.ec2.internal.warc.gz"} |
ASCII Text
Jean-Marc Delosme, Shen-Fu Hsiao, "Householder CORDIC Algorithms," IEEE Transactions on Computers, vol. 44, no. 8, pp. 990-1001, August, 1995.
BibTeX
@article{ 10.1109/12.403715,
author = {Jean-Marc Delosme and Shen-Fu Hsiao},
title = {Householder CORDIC Algorithms},
journal ={IEEE Transactions on Computers},
volume = {44},
number = {8},
issn = {0018-9340},
year = {1995},
pages = {990-1001},
doi = {http://doi.ieeecomputersociety.org/10.1109/12.403715},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RefWorks / ProCite / RefMan / EndNote
TY - JOUR
JO - IEEE Transactions on Computers
TI - Householder CORDIC Algorithms
IS - 8
SN - 0018-9340
EPD - 990-1001
A1 - Jean-Marc Delosme,
A1 - Shen-Fu Hsiao,
PY - 1995
KW - CORDIC
KW - computer arithmetic
KW - Householder reflections
KW - parallel algorithms
KW - VLSI.
VL - 44
JA - IEEE Transactions on Computers
ER -
Abstract—Matrix computations are often expressed in terms of plane rotations, which may be implemented using COordinate Rotation DIgital Computer (CORDIC) arithmetic. As matrix sizes increase
multiprocessor systems employing traditional CORDIC arithmetic, which operates on two-dimensional (2D) vectors, become unable to achieve sufficient speed. Speed may be increased by expressing the
matrix computations in terms of higher dimensional rotations and implementing these rotations using novel CORDIC algorithms—called Householder CORDIC—that extend CORDIC arithmetic to arbitrary
dimensions. The method employed to prove the convergence of these multi-dimensional algorithms differs from the one used in the 2D case. After a discussion of scaling factor decomposition, range
extension and numerical errors, VLSI implementations of Householder CORDIC processors are presented and their speed and area are estimated. Finally, some applications of the Householder CORDIC
algorithms are listed.
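As background, the two-dimensional CORDIC iteration that the paper generalizes can be sketched as follows (an illustrative Python rendering of Volder's classic rotation algorithm [30], not of the Householder extension described in the paper):

```python
import math

def cordic_rotate(theta, iterations=40):
    """Rotate (1, 0) by angle theta (radians, |theta| < ~1.74) using only
    shifts, adds, and a small table of arctangents; returns approximately
    (cos theta, sin theta)."""
    # Pre-compute the constant scaling factor K = prod 1/sqrt(1 + 2^-2i).
    K = 1.0
    for i in range(iterations):
        K *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = K, 0.0, theta
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0   # steer the residual angle toward 0
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * math.atan(2.0 ** -i)
    return x, y

c, s = cordic_rotate(0.5)
print(abs(c - math.cos(0.5)), abs(s - math.sin(0.5)))  # both close to 0
```

In hardware the multiplications by 2^-i become wired shifts, which is the point of the technique; the Householder CORDIC algorithms of the paper apply the same idea to rotations in more than two dimensions.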
[1] H.M. Ahmed,“Signal processing algorithms and architectures,” PhD dissertation, Dept. of Electrical Eng., Stanford Univ., June 1982.
[2] H.M. Ahmed,J.-M. Delosme,, and M. Morf,“Highly concurrent computing structures for matrix arithmetic and signalprocessing,” Computer, pp. 65-82, Jan. 1982.
[3] R.P. Brent,F.T. Luk,, and C.F. Van Loan,“Computation of the singular value decomposition using mesh-connected processors,” J. VLSI Computer Systems, vol. 1, no. 3, pp. 242-270, 1985.
[4] J.R. Cacallaro and F.T. Luk,"CORDIC Arithmetic for a SVD Processor," J. Parallel and Distributed Computing, vol. 5, pp. 271-290, 1988.
[5] J.R. Cavallaro and F.T. Luk,"Floating Point CORDIC for Matrix Computations," Proc. IEEE Int'l Conf. Computer Design, pp. 40-42, 1988.
[6] A.A.J. de Lange and E.F. Deprettere,“Design and implementation of a floating-point quasi-systolicgeneral purpose CORDIC rotator for high-rate parallel data andsignal processing,” Proc. 10th IEEE
Symp. Computer Arithmetic, pp. 272-281, July 1991.
[7] J.-M. Delosme,“Algorithms for finite shift-rank processes,” PhD dissertation, Dept. of Electrical Eng., Stanford Univ., 1982.
[8] J.-M. Delosme,“VLSI implementation of rotations in pseudo-Euclidean spaces,” Proc. IEEE Int’l Conf. ASSP, pp. 927-930, Apr. 1983.
[9] J.-M. Delosme,“The matrix exponential approach to elementary operations,” Advanced Algorithms and Architectures for Signal Processing IV, Proc. SPIE 696, pp. 188-195, Aug. 1986.
[10] J.-M. Delosme and I.C.F. Ipsen,“Parallel solution of symmetric positive definite systems with hyperbolic rotations,” Linear Algebra and Applications, vol. 77, pp. 75-111, May 1986.
[11] J.-M. Delosme,“A processor for two dimensional symmetric eigenvalue and singular value arrays,” Proc. 21st Asilomar Conf. Circuits, Systems, and Computers, pp. 217-221, Nov. 1987.
[12] J.-M. Delosme,“Bit-level systolic algorithms for real symmetric and hermitian eigenvalue problems,” J. VLSI Signal Processing, vol. 4, pp. 69-88, Jan. 1992.
[13] A.M. Despain,“Fourier transform computers using CORDIC iterations,” IEEE Trans. Computers, vol. 23, pp. 993-1,001, Oct. 1974.
[14] E.F. Deprettere,P. Dewilde,, and R. Udo,“Pipelined CORDIC architectures for fast VLSI filtering and array processing,” Proc. IEEE Int’l Conf. ASSP, pp. 3:41A.6.1-41A.6.4, Mar. 1984.
[15] M.D. Ercegovac and T. Lang,"Redundant and On-Line CORDIC: Application to Matrix Triangularisation and SVD," IEEE Trans. Computers, vol. 38, no. 6 pp. 725-740, June 1990.
[16] G. Golub and C. Van Loan, Matrix Computations, third ed. Baltimore: Johns Hopkins Univ. Press, 1996.
[17] G. Haviland and A. Tuszynski,“A CORDIC arithmetic processor chip,” IEEE Trans. Computers, vol. 29, no. 2, pp. 68-79, Feb. 1980.
[18] S.-F. Hsiao and J.-M. Delosme, "The CORDIC Householder Algorithm," Proc. 10th Symp. Computer Arithmetic, pp. 256-263, 1991.
[19] S.-F. Hsiao,“Multidimensional CORDIC algorithms, PhD dissertation, Dept. of Electrical Eng., Yale Univ., Dec. 1993.
[20] S.-F. Hsiao and J.-M. Delosme,“Parallel complex singular value decomposition using multidimensional CORDIC algorithms,” Proc. Int’l Conf. Parallel and Distributed Systems, pp. 487-494, Dec.
[21] Y.H. Hu,"The Quantization Effects of the CORDIC Algorithm," IEEE Trans. Circuits and Systems, vol. 40, no. 4, pp. 834-844, 1992.
[22] Y.M. Hu, “CORDIC-Based VLSI Architectures for Digital Signal Processing,” IEEE Signal Processing Magazine, vol. 9, pp. 16-35, 1992.
[23] Y.H. Hu and H.E. Liao,“CALF: A CORDIC adaptive lattice filter,” IEEE Trans. Signal Processing, vol. 40, no. 4, pp. 990-993, Apr. 1992.
[24] X. Hu and R.G. Harber,“Expanding the range of convergence of the CORDIC algorithm,” IEEE Trans. Computers, vol. 40, no. 1, pp. 13-21, Jan. 1991.
[25] K. Kota and J.R. Cavallaro,“Numerical accuracy and hardware tradeoffs for CORDIC arithmetic for special-purpose processors,” IEEE Trans. Computers, vol. 42, no. 7, pp. 769-779, July 1993.
[26] C. Mazenc,X. Merrheim,, and J.-M. Muller,“Computing functions cos-1 and sin-1 using Cordic,” IEEE Trans. Computers, vol. 42, no. 1, pp. 118-122, Jan. 1993.
[27] C.M. Rader and A.O. Steinhardt,“Hyperbolic householder transformations,” IEEE Trans. ASSP, vol. 34, no. 6, pp. 1,589-1,602, Dec. 1986.
[28] N. Takagi,T. Asada, and S. Yajima,"Redundant CORDIC Methods with a Constant Scale Factor for Sine and Cosine Computation," IEEE Trans. Computers, vol. 40, no. 9, pp. 989-995, Sept. 1991.
[29] A.-J. van der Veen and E.F. Deprettere,“Parallel VLSI matrix pencil algorithm for high resolution direction finding,” IEEE Trans. Signal Processing, vol. 39, pp. 383-394, Feb. 1991.
[30] J.E. Volder,“The CORDIC trigonometric computing technique,” IRE Trans. Electronic Computers, vol. 8, no. 3, pp. 330-334, Sept. 1959.
[31] J.S. Walther,“A unified algorithm for elementary functions,” Proc. AFIPS Spring Joint Computing Conf., vol. 38, pp. 379-385, 1971.
Index Terms:
CORDIC, computer arithmetic, Householder reflections, parallel algorithms, VLSI.
Jean-Marc Delosme, Shen-Fu Hsiao, "Householder CORDIC Algorithms," IEEE Transactions on Computers, vol. 44, no. 8, pp. 990-1001, Aug. 1995, doi:10.1109/12.403715
Journal of Research in Health Sciences
JRHS 2011; 11(1): 7-13
Copyright © Journal of Research in Health Sciences
Modeling of Malaria Incidence in Nepal
Sampurna Kakchapati (MPH)a, Jurairat Ardkaew (PhD)b*
a Program in Research Methodology, Department of Mathematics and Computer Science, Faculty of Science and Technology, Prince of Songkla University, Pattani Campus, Thailand
b Mathematics and Statistics Program, Department of Sciences, Faculty of Sciences and Technology, Loei Rajabhat University, Loei, Thailand
* Correspondence: Jurairat Ardkaew (PhD) E-mail: Jurairat_p@hotmail.com
Received: 1 April 2011, Revised: 9 May 2011, Accepted: 6 June 2011, Available online: 12 June 2011
Background: Malaria is a major cause of morbidity and mortality in Nepal. The magnitude of malaria across the country is alarming and varies with location. Therefore, the present study aimed to model
malaria incidence rates during 1998 to 2009 in Nepal.
Methods: Data for the study were obtained from Health Management Information System (HMIS), Ministry of Public Health. A negative binomial model was used to fit malaria incidence rates as a function
of year and location and provided a good fit, as indicated by residual plots.
Results: In total, 83,345 cases of malaria were reported from 1998 to 2009. The mean incidence rate was 0.30 per 1000 population. The models show trends and spatial variations in disease incidence.
There was a decreasing trend in the incidence rates of malaria (1998-2004), followed by a more moderate upward trend until 2008, when the rate decreases again. Zero malaria incidences occurred in six
districts including Humla, Jajarkot, Manang, Kathmandu, Bhaktapur and Solukhumbu districts for over twelve years. Higher incidence occurred in Kanchanpur, Kailali, Bardiya, Kavre, and Jhapa
districts for the study period.
Conclusion: Malaria is still a public health problem in Nepal. This study showed a steady decreasing trend in malaria incidence but the numbers of cases are still very high. Higher rates were
observed in Terai Region and border areas. These findings highlight the need for more systematic and effective malaria control measures on malaria burden areas of Nepal.
Keywords: Malaria, Poisson model, Negative binomial model,
Malaria remains a global problem, with over 600 million cases and over 2 million deaths each year worldwide 1. It is endemic in 109 countries, and about half of the world's population is at risk of
malaria, particularly those living in lower-income countries 2. In Nepal, malaria is a major public health concern in terms of mortality, morbidity and the subsequent overall impact on the national
economy. It is estimated that about 74% of the population (17.4 million) live in endemic areas. The disease is prevalent in the plains, foothills, forest, and forest-fringe of the Terai and inner
Terai valleys and is distributed sporadically in the hills and hill-mountain valleys 3. Transmission is distinctly seasonal, limited to the warm and rainy summer months (June-September); hence
malaria is unstable and epidemic-prone 3, 4.
The annual reports provide evidence that the magnitude of malaria across the country is high and varies with location. Malaria is prevalent in 67 districts of country with high endemicity in 12
districts and throughout the Terai from the far west to the eastern region 5. There is also growing concern that the reservoir of malaria in the Terai has become also affected by its neighboring
country India. Malaria in Nepal occurred along the national borders particularly on the border to India 6. The Terai region of Nepal has cross-border problems of communicable disease including
malaria with the Indian states measuring a length of approximately 550 miles.
Malaria-related problems have assumed a newer dimension in recent decades. Epidemics are now occurring even in areas where transmission was thought to have been eliminated. Population migration,
deforestation, inadequate resources, increased epidemic potential and the neglect of epidemiology are other important factors responsible for the changing epidemiological pattern of malaria.
The situation is likely to be further aggravated by climate change. Climate variability and the breeding activity of Anopheles are considered among the important environmental contributors to
malaria transmission in recent years 7.
Estimating the burden of malaria is essential for evidence-based planning of malaria control. Public health officials need to evaluate disease incidence in the country and to investigate the
regional and temporal pattern of disease so that necessary actions can be taken. Statistical modeling may be applied to investigate key issues related to disease incidence. The Poisson
distribution, and its extension to the negative binomial distribution to handle over-dispersion, is a standard approach to modeling event count data.
The objective of our study was thus to identify the spatial patterns and trends of malaria incidence in Nepal, with a focus on border and non-border districts.
Study area and data source
Nepal is a landlocked country in the southern Asia, bordered on the north by Chinese Tibet and the Himalayas and by India to the east, south, and west. It has five development regions (eastern,
central, western, mid western and far western), 14 zones, 75 districts. Based on topography, it is divided into three distinct geographical regions areas; Mountain (7% of the population), Hill (43%)
and Terai (50%), in decreasing altitude 8.
The information used, regarding cases notified between mid July 1998 to mid July 2009 were reviewed using Annual Reports of the Department of Health Services and data were obtained through Health
Management Information System (HMIS). These data were available in computer files for each year comprising characteristics of the disease, location and year. These data were obtained in excel format
and were then modified and entered into computer text files suitable for data cleaning and analysis.
Statistical methods
Poisson regression is commonly used for modeling the number of cases of disease in a specific population within a certain time. If λjt denotes the mean incidence rates for geographical location j and
year t, an additive model with this distribution is expressed as
ln(λjt) = ln(Pj) + μ + αj + βt (1)
In this model, Pj is the corresponding population at risk in 1000s of location j and the terms αj and βt represent location (border + non-border districts) and year effects that sum to zero so that μ
is a constant encapsulating the overall incidence. Poisson models for disease counts are often over-dispersed due to clustering, in which case the negative binomial model is more appropriate 9. The
negative binomial model is an extension of the Poisson model for incidence rates that allows for the over dispersion that commonly occurs for disease counts. The variance of this distribution is λjt
(1+ λjt/θ) with the Poisson model arising in the limit as θ→∞ 10.
A characteristic of the Poisson distribution is that its mean is equal to its variance. If the observed variance is greater than the mean, the data are over-dispersed and the residual deviance plot
will indicate that the model is not appropriate. After fitting the model to the data, to check the adequacy of the respective model, one usually computes a residual deviance for each cell. Thus, the
deviance statistic for an observation reflects its contribution to the overall goodness of fit of the model. Plotting these residual deviances against corresponding quantiles for the normal
distribution gives an indication of the adequacy of the fit of the model to the data. If the plot is approximately linear with unit slope, the fit is satisfactory 9. In addition, it is also
informative to plot observed counts and appropriately scaled incidence rates against corresponding fitted values based on the model.
The model also gives adjusted incidence rates for each factor of interest, obtained by suppressing the subscripts in Equation (1) corresponding to the other factors and replacing these terms with a
constant satisfying the condition that the sum of the disease counts based on the adjusted incidence rates matches the total. Sum contrasts 9, 11 were used to obtain confidence intervals for
comparing the adjusted incidence rates within each factor with the overall incidence rate. The R program was used for all statistical analysis, graphs, and maps 12, 13.
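As a rough illustration of why the negative binomial extension matters, over-dispersed counts can be simulated as a gamma-Poisson mixture, whose variance follows λ(1 + λ/θ) rather than the Poisson's λ (a generic Python sketch for intuition; the paper's actual model fitting was done in R):

```python
import math
import random

def poisson_knuth(lam, rng):
    """Draw a Poisson variate by Knuth's multiplication method
    (adequate for small rates)."""
    L = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def negbin_sample(lam, theta, rng):
    """Gamma-Poisson mixture: E[X] = lam, Var[X] = lam * (1 + lam / theta)."""
    rate = rng.gammavariate(theta, lam / theta)  # mean lam, shape theta
    return poisson_knuth(rate, rng)

rng = random.Random(1)
xs = [negbin_sample(5.0, 2.0, rng) for _ in range(20000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
print(round(mean, 2), round(var, 2))  # mean near 5, variance well above 5
```

With λ = 5 and θ = 2 the theoretical variance is 5(1 + 5/2) = 17.5, far above the Poisson value of 5; a Poisson fit to such data would show the inflated residual deviance that the diagnostic plots in this study are designed to reveal.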
The results of the model fitting are shown in Figure 1. The left and right upper panels show plots of observed counts and observed annual incidence rates per 1000 versus corresponding fitted values
using the negative binomial model. The left and right lower panels show plots of the deviance residuals against the normal quantiles based on Poisson model and negative binomial model. The dispersion
parameter θ provides a way of improving the fit, by allowing for over-dispersion and thus reducing the residual deviance. Clearly, the residuals plot from the negative binomial model fit the data
Figure 1: Diagnostic plots for Poisson and negative binomial models and plots of observed counts and observed incidence against fitted values.
Figure 2 shows 95% confidence intervals of the trends of malaria incidence rates over period fitted by negative binomial model. The horizontal dotted line corresponds to the overall mean incidence
rates of malaria (0.30 per 1000). There was a decreasing trend in the incidence rates of malaria (1998-2004), followed by a more moderate upward trend until 2008, when the rate decreases again.
Figure 3 shows 95% confidence intervals of annual malaria incidence rates by districts separated by border regions based on the negative binomial model. The dotted horizontal lines on graph represent
the overall mean annual incidence rate (0.30 per 1000). Higher malaria incidences occurred in border districts.
Figure 3: Annual malaria incidence/1000 for non-border districts and border districts in 75 districts of Nepal
Figure 4 shows a schematic map of the malaria incidence rates by districts by classifying districts as their confidence intervals (Figure 3) above the mean (darkest shade), below the mean (lightest
shade), not evidently different from the mean (intermediate shade) and zero malaria case (no shade).
Figure 4: Schematic map of annual malaria incidence rates in districts of Nepal
The map shows that zero malaria incidences occurred in six districts, including Humla, Jajarkot, Manang, Kathmandu, Bhaktapur and Solukhumbu, over the twelve years. Similarly, higher incidence
occurred in Kanchanpur, Kailali, Bardiya, Kavre and Jhapa districts.
Mosquito-borne diseases particularly malaria is becoming dreaded health problems in Nepal. This study applied statistical modeling of malaria incidence in Nepal from 1998 to 2009. When the dependent
variable is the disease count, Poisson and negative binomial generalized linear models are usually considered most statistically appropriate. The Poisson distribution assumes events independent and
does not account for clustering, over-dispersion, or serial correlation. A negative binomial GLM is an extension of the Poisson regression model that allows for over-dispersion. Poisson and Negative
binomial models containing year and district as factors were fitted to the disease incidences. However, for these data the negative binomial model fit the data as indicated by the residual plot.
Over the years 1998 to 2009, the incidence rate of malaria showed a fluctuating trend in Nepal, with an increasing trend (2004-2008) followed by a decrease in 2009. These findings were consistent
with the WHO report and the annual report on malaria, which show decreasing trends of malaria.
Zero incidences occurred in six districts (Humla, Jajarkot, Manang, Kathmandu, Bhaktapur and Solukhumbu) for over twelve years. The findings were consistent with the annual reports; indeed, the
malaria control programmes have identified these districts as "malaria free" districts. Along with them, the malaria control programmes have identified Mugu, Dolpa, Mustang, and Rasuwa as
"malaria free" districts 5. In our study, these districts also show low incidences of malaria over the period.
Higher incidence occurred in five districts: Kanchanpur, Kailali, Bardiya, Kavre and Jhapa. Of these, four lie in the Terai region and border areas. Thus, it can be concluded that malaria is more
prevalent and epidemic-prone in the Terai region and border regions. These findings were consistent with the annual reports, which prioritize these districts as endemic malaria districts. The
endemicity of malaria in the Terai may be attributed to factors such as topography, climate, socio-economic status, heavy migration and cross-border issues 5, 6. The Terai region is characterized
by a hot and humid climate, high precipitation, and low socio-economic status, all of which contribute to the high malaria caseload. A number of studies have found a strong correlation between
malaria incidence rates and variations in environmental variables. In many studies, humidity, temperature and rainfall are considered major risk factors that affect the life cycle and breeding
of mosquitoes 14, 15. A study in Nepal indicates that rainfall during the months of June, July and August influences the number of malaria cases which occur (after a certain time lag) during
September, October, and November 16. Besides this, Nepal also has an active migrant population moving frequently to the malaria-risk areas of India for their livelihood; they return home infected
and easily transmit malaria. In addition, malaria-stricken patients from India line up at government health centres in Nepal to receive free medicines. Thus, it can be concluded that malaria
incidence is higher in border areas.
There are some limitations to our study. It is based on secondary data, and we could not incorporate seasonality (months), which is considered one of the factors for malaria, owing to the
unavailability of month-specific incidence data.
In general, the present study investigates the spatial patterns and trends of malaria incidence across the districts of Nepal. The results are illustrated by a thematic map showing the districts
with high and low incidence rates. Such maps can be used by public health authorities to target preventive measures against malaria outbreaks, focusing effort according to priority in high-,
average-, and low-incidence locations. These findings also highlight the epidemic-prone malaria areas in Nepal and the need for future intervention policies. It would be useful and appropriate
to apply the statistical model to additional examples of disease incidence and disease forecasting.
We would like to express our gratitude to the Health Management Information System (HMIS), Ministry of Public Health for permission to use their data. We are indebted to Prof. Don McNeil, Dr. Suresh
Tiwari and Dr. Sharad Kumar Sharma for helping our research.
Conflict of interest statement
The authors have no conflict of interests to declare.
This study was funded by the Research and Development Institute of Loei Rajabhat University.
1. Breman JG, Alilio MS, White NJ. Defining and defeating the intolerable burden of malaria III. Progress and perspectives. Am J Trop Med Hyg. 2007;77(Suppl 6):vi-xi.
2. World Health Organization. World Malaria Report 2008. Geneva: WHO; 2008.
3. Department of Health Services, Epidemiology, and Disease Control Division. Success Story of Malaria Control in Nepal (1963 – 2003). Kathmandu: Department of Health Services, Epidemiology and
Disease Control Division; 2004.
4. Craig MH, Snow RW, Le Sueur D. A climate-based distribution model of malaria transmission Africa. Parasitol Today. 1999;15(3):105-111.
5. Department of Health Services. Annual Report. Kathmandu: Department of Health Services; 2008.
6. World Health Organization. Cross-border initiatives on HIV/AIDS, TB, malaria and kala-azar. Geneva, Switzerland: WHO; 2001.
7. McMichael AJ, Martens WJM. The health impact of global climate changes: grappling with scenarios, predictive models and multiple uncertainties. Ecosyst Health. 1995;1(1):23-33.
8. Central Bureau of Statistics. Nepal in Figures. Kathmandu: CBS; 2008.
9. Venables WN, Ripley BD. Modern Applied Statistics with S. 4th ed. New York: Springer; 2002.
10. Hilbe JM. Negative binomial Regression. 1st ed. New York: Cambridge University Press; 2007.
11. Tongkumchum P, McNeil D. Confidence intervals using contrasts for regression model. Songklanakarin J Sci Technol. 2009;31(2):151-156.
12. Venables WN, Smith DM, The R Development Core Team. An Introduction to R. Vienna: R Foundation for Statistic; 2008.
13. Murrell P. R Graphics. 1st ed. London: Chapman & Hall/CRC; 2006.
14. Nobre AA, Schmidt AM, Lopes HF. Spatio-temporal models for the mapping incidence of malaria in Para. Environmetrics 2005;16:291-304.
15. Salehi M, Mohammad K, Farahani MM, et al. Spatial modeling of malaria incidence rates in Sistan and Baluchistan provinces, Islamic Republic of Iran. Saudi Med J. 2008;29:1791-1796.
16. Dahal S. Climatic determinants of malaria and kalaazar in Nepal. Regional Health Forum 2008; 12(1):32-37.
School of Public Health, Hamadan University of Medical Sciences, Shaheed Fahmideh Ave. Hamadan, Islamic Republic of Iran
Postal code: 6517838695, PO box: 65175-4171
Tel: +98 811 8380292, Fax: +98 811 8380509
E-mail: jrhs@umsha.ac.ir | {"url":"http://jrhs.umsha.ac.ir/index.php/JRHS/article/view/206/html_1","timestamp":"2014-04-18T06:14:10Z","content_type":null,"content_length":"51595","record_id":"<urn:uuid:6a71745b-1f27-422f-aa00-a654acf97f9e>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00274-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus Tutors
Fullerton, CA 92833
Patient and Professional Math and Physics Tutor
...I'm a graduate of the Torrey Honors Institute at Biola University, with a Bachelor of Arts degree in the Humanities, emphasis in English. I also teach Pre-Calculus
and Physics for Biola Youth Academics. In middle school, I participated in math competitions, primarily...
Offering 6 subjects including calculus | {"url":"http://www.wyzant.com/Chino_Hills_Calculus_tutors.aspx","timestamp":"2014-04-20T02:26:56Z","content_type":null,"content_length":"60352","record_id":"<urn:uuid:f72e4f3f-0b7a-4396-8bd8-70b7f025af2a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00026-ip-10-147-4-33.ec2.internal.warc.gz"} |
Civil Engineering Archive | December 02, 2011 | Chegg.com
Civil Engineering Archive: Questions from December 02, 2011
• Anonymous asked
2 answers
• BewilderedBacon7076 asked
1 answer
• Anonymous asked
6 answers
• OrangeMeteor2929 asked
1 answer
• OrangeMeteor2929 asked
0 answers
• NiftyZipper1894 asked
1 answer
• Anonymous asked
0 answers
• CALIFORNIA asked
0 answers
• CALIFORNIA asked
0 answers
• CALIFORNIA asked
0 answers
• CALIFORNIA asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
0 answers
• Anonymous asked
A short tube nozzle has the dimensions shown in the figure below. The fluid flowing is oil, with unit weight γ = 49.9 lb/ft^3 and kinematic viscosity ν = 8 × 10^-4 ft^2/sec. The tube discharges into the atmosphere. The pressure in the large tank at the elevation of the tube centerline is 0.45 lb/in^2.
Assume that the boundary layer development is the same as that on a smooth flat plate with zero pressure gradient. This assumption, which of course is not physically consistent, says that
relations between the external “free stream” velocity U and the boundary layer thickness, δ, at a given distance x are calculated as if U, δ, and x were the values found in the flat plate case.
Assume further that the boundary layer velocity profile is that suggested by Prandtl and which has the following properties:
u/U = f(η) = (3/2)η - (1/2)η^3
where η = y/δ, α_1 = 39/280, and β_1 = 1.5
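As a quick cross-check of the quoted constants (not part of the original problem statement): α_1 is the momentum-thickness ratio θ/δ = ∫₀¹ f(1−f) dη and β_1 is the wall slope f′(0), and both can be recovered by exact rational integration of the cubic profile:

```python
from fractions import Fraction

# Prandtl cubic profile u/U = f(eta) = (3/2)*eta - (1/2)*eta**3,
# stored as {power: coefficient} with exact rational coefficients.
f = {1: Fraction(3, 2), 3: Fraction(-1, 2)}

def poly_mul(p, q):
    out = {}
    for i, a in p.items():
        for j, b in q.items():
            out[i + j] = out.get(i + j, Fraction(0)) + a * b
    return out

def poly_int01(p):
    # definite integral over eta in [0, 1]: sum of c_k / (k + 1)
    return sum((c / (k + 1) for k, c in p.items()), Fraction(0))

one_minus_f = {0: Fraction(1)}
for k, c in f.items():
    one_minus_f[k] = one_minus_f.get(k, Fraction(0)) - c

# momentum-thickness ratio: alpha_1 = integral of f*(1 - f) over [0, 1]
alpha_1 = poly_int01(poly_mul(f, one_minus_f))

# wall slope: beta_1 = f'(0) = coefficient of the linear term
beta_1 = f[1]

print(alpha_1, float(beta_1))  # 39/280 1.5
```

This confirms the stated values; the polynomial helpers exist only for the check and are not part of the problem.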
a. Determine the discharge velocity U for that portion of the jet which is outside the boundary layer.
b. Estimate the discharge coefficient, C_d, in the expression
Q = C_d A √(2gH)
which is a conventional expression for such discharge devices. Indicate any assumptions which you think might be necessary in order to make these calculations.
c. Now assume that the tube is lengthened just enough so that the flow is just “fully developed” at the outlet. (We are not concerned with the actual tube length required). Compare the centerline velocity and the wall shear stress at the discharge end of the tube with those values where δ = 0.5 r_0. Express the comparisons in ratio form.
0 answers
• Anonymous asked
0 answers
| {"url":"http://www.chegg.com/homework-help/questions-and-answers/civil-engineering-archive-2011-december-02","timestamp":"2014-04-18T18:19:02Z","content_type":null,"content_length":"180633","record_id":"<urn:uuid:e79b173e-f1ce-4ac2-8bbb-363852ed940f>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00167-ip-10-147-4-33.ec2.internal.warc.gz"} |
CalcTool: Hyperbolic functions calculator
This calculator finds the hyperbolic sine (sinh), cosine (cosh), tangent (tanh), cotangent (coth), secant (sech) and cosecant (csch) of the given angle.
The hyperbolic functions are analogs of the regular trigonometric functions, mapping ratios along a hyperbola rather than around a circle. These functions find many uses in
math, engineering, and science.
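For instance (an illustration using Python's standard library rather than the calculator itself), all six functions follow from sinh and cosh, and they satisfy the hyperbolic analog of the Pythagorean identity, cosh²x − sinh²x = 1:

```python
import math

x = 1.2  # sample angle (radians)

sinh, cosh, tanh = math.sinh(x), math.cosh(x), math.tanh(x)
coth, sech, csch = 1.0 / tanh, 1.0 / cosh, 1.0 / sinh  # reciprocal functions

# cosh^2 - sinh^2 = 1 is the hyperbolic counterpart of cos^2 + sin^2 = 1:
# the point (cosh t, sinh t) traces the hyperbola u^2 - v^2 = 1.
print(cosh**2 - sinh**2)   # ~1.0
print(tanh - sinh / cosh)  # ~0.0, since tanh = sinh/cosh
```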
Many calculators do not make these functions easy to compute, so this calc is available for you to use. You may choose your input units from the menu. | {"url":"http://www.calctool.org/CALC/math/trigonometry/hyperbolic","timestamp":"2014-04-21T07:44:37Z","content_type":null,"content_length":"10980","record_id":"<urn:uuid:7271fe2c-2c87-4251-a2c7-61e59ca69d1b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00500-ip-10-147-4-33.ec2.internal.warc.gz"} |
Loris::Filter Class Reference
#include <Filter.h>
Public Member Functions
Filter (void)
Construct a filter with an all-pass unity gain response.
template<typename IterT1 , typename IterT2 >
Filter (IterT1 ffwdbegin, IterT1 ffwdend, IterT2 fbackbegin, IterT2 fbackend, double gain=1.)
Filter (const Filter &other)
Filter & operator= (const Filter &rhs)
~Filter (void)
double apply (double input)
double operator() (double input)
std::vector< double > numerator (void)
const std::vector< double > numerator (void) const
std::vector< double > denominator (void)
const std::vector< double > denominator (void) const
void clear (void)
Clear the filter state.
Detailed Description
Filter is a Direct Form II realization of a filter specified by its difference-equation coefficients and (optionally) a gain, applied to the filter output (defaults to 1.). Coefficients are specified and stored in order of increasing delay.
Implements the rational transfer function
Y(z) = G · ( b[0] + b[1] z^-1 + ... + b[nb] z^-nb ) / ( a[0] + a[1] z^-1 + ... + a[na] z^-na ) · X(z)
where b[k] are the feed forward coefficients, and a[k] are the feedback coefficients. If a[0] is not 1, then both a and b are normalized by a[0]. G is the additional filter gain, and is unity if not specified.
Filter is implemented using a std::deque to store the filter state, and relies on the efficiency of that class. If deque is not implemented using some sort of circular buffer (as it should be --
deque is guaranteed to be efficient for repeated insertion and removal at both ends), then this filter class will be slow.
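For illustration only (this is not the Loris source, and the class itself is C++), the Direct Form II recurrence described above — w[n] = x[n] − Σ a[k]·w[n−k], then y[n] = G·Σ b[k]·w[n−k] — can be sketched in a few lines of Python:

```python
from collections import deque

class DirectFormII:
    """Sketch of a Direct Form II filter: Y(z)/X(z) = gain * B(z)/A(z).

    Coefficients are given in order of increasing delay and are
    normalized by a[0], matching the convention described above.
    """

    def __init__(self, b, a, gain=1.0):
        a0 = float(a[0])
        self.b = [bi / a0 for bi in b]   # feed-forward coefficients
        self.a = [ai / a0 for ai in a]   # feedback coefficients
        self.gain = gain
        order = max(len(self.a), len(self.b)) - 1
        # Delay line, most recent state first; deque plays the role of
        # the circular buffer mentioned in the class description.
        self.state = deque([0.0] * order, maxlen=order)

    def apply(self, x):
        # w[n] = x[n] - a[1]*w[n-1] - a[2]*w[n-2] - ...
        w = x - sum(ak * wk for ak, wk in zip(self.a[1:], self.state))
        # y[n] = b[0]*w[n] + b[1]*w[n-1] + ...
        y = self.b[0] * w + sum(bk * wk for bk, wk in zip(self.b[1:], self.state))
        self.state.appendleft(w)  # maxlen silently drops the oldest entry
        return self.gain * y

    def clear(self):
        for i in range(len(self.state)):
            self.state[i] = 0.0

# two-tap moving average: y[n] = 0.5*x[n] + 0.5*x[n-1]
ma = DirectFormII([0.5, 0.5], [1.0])
print([ma.apply(x) for x in [1.0, 1.0, 0.0]])  # [0.5, 1.0, 0.5]
```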
Constructor & Destructor Documentation
template<typename IterT1 , typename IterT2 >
Loris::Filter::Filter ( IterT1 ffwdbegin,
IterT1 ffwdend,
IterT2 fbackbegin,
IterT2 fbackend,
double gain = 1.
) [inline]
Initialize a Filter having the specified coefficients, and order equal to the larger of the two coefficient ranges. Coefficients in the sequences are stored in increasing order (lowest order
coefficient first).
If template members are allowed, then the coefficients can be stored in any kind of iterator range, otherwise, they must be in an array of doubles.
ffwdbegin is the beginning of a sequence of feed-forward coefficients
ffwdend is the end of a sequence of feed-forward coefficients
fbackbegin is the beginning of a sequence of feedback coefficients
fbackend is the end of a sequence of feedback coefficients
gain is an optional gain scale applied to the filtered signal
Loris::Filter::Filter ( const Filter & other )
Make a copy of another digital filter. Do not copy the filter state (delay line).
Loris::Filter::~Filter ( void )
Destructor is virtual to enable subclassing. Subclasses may specialize construction, and may add functionality, but for efficiency, the filtering operation is non-virtual.
Member Function Documentation
double Loris::Filter::apply ( double input )
Compute a filtered sample from the next input sample.
input is the next input sample
the next output sample
const std::vector< double > Loris::Filter::denominator ( void ) const
Provide access to the denominator (feedback) coefficients of this filter. The coefficients are stored in order of increasing delay (lowest order coefficient first).
std::vector< double > Loris::Filter::denominator ( void )
Provide access to the denominator (feedback) coefficients of this filter. The coefficients are stored in order of increasing delay (lowest order coefficient first).
const std::vector< double > Loris::Filter::numerator ( void ) const
Provide access to the numerator (feed-forward) coefficients of this filter. The coefficients are stored in order of increasing delay (lowest order coefficient first).
std::vector< double > Loris::Filter::numerator ( void )
Provide access to the numerator (feed-forward) coefficients of this filter. The coefficients are stored in order of increasing delay (lowest order coefficient first).
double Loris::Filter::operator() ( double input ) [inline]
Function call operator, same as apply().
See also:
apply()
Filter& Loris::Filter::operator= ( const Filter & rhs )
Make a copy of another digital filter. Do not copy the filter state (delay line).
The documentation for this class was generated from the following file:
• /Users/kfitz/Projects/Loris/Loris development/src/Filter.h
| {"url":"http://www.cerlsoundgroup.org/Loris/docs/cpp_html/class_loris_1_1_filter.html","timestamp":"2014-04-18T21:01:04Z","content_type":null,"content_length":"17237","record_id":"<urn:uuid:ed5ca735-7038-4b6e-9f87-5bc46684656c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
Where is there a treatment of "exponential monads"?
I have a category $C$, which is equipped with a symmetric monoidal structure (tensor product $\otimes$, unit object $1$). My category also has finite coproducts (I'll write them using $\oplus$, and
write $0$ for the initial object), and $\otimes$ distributes over $\oplus$.
By an exponential monad, I mean a monad $(T,\eta,\mu)$ on $C$, where the functor $T:C\to C$ is equipped with some structure maps of the form $$\nu \colon 1 \to T(0)$$ and $$\alpha\colon T(X)\otimes T
(Y) \to T(X\oplus Y).$$ The structure maps are isomorphisms, and are suitably "coherent" with respect to the two monoidal structures $\otimes$ and $\oplus$.
The simplest example is: $C$ is the category of $k$-vector spaces, and $T=\mathrm{Sym}$ is the commutative $k$-algebra monad (i.e., $\mathrm{Sym}(X)$ is the symmetric algebra $\bigoplus_q \mathrm{Sym}^q(X)$).
Now, I'm sure I can work out all the formalism that I need for this, if I have to. My question is: is there a convenient place in the literature I can refer to for this? Alternately, is there
suitable categorical language which makes this concept easy to talk about?
I'd also like to have a good formalism for talking about a "grading" on $T$. This means a decomposition of the functor $T=\bigoplus T^q$, where $T^q\colon C\to C$ are functors, which have "nice"
properties (for instance, $T^m(X\oplus Y)$ is a sum of $T^p(X)\otimes T^{m-p}(Y)$). The motivating example again comes from the symmetric algebra: $\mathrm{Sym}=\bigoplus \mathrm{Sym}^q$.
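As a concrete (purely numerical, not categorical) sanity check of that last property in the motivating example: over a field, dim Sym^q(k^n) = C(n+q−1, q), so the graded pieces of Sym(X⊕Y) should match those of Sym(X)⊗Sym(Y) dimension by dimension. A quick Python check:

```python
from math import comb

def dim_sym(n, q):
    # dimension of Sym^q of an n-dimensional space: the number of
    # degree-q monomials in n variables, C(n + q - 1, q)
    return comb(n + q - 1, q)

n_x, n_y, m = 3, 4, 5  # arbitrary small dimensions and total degree

lhs = dim_sym(n_x + n_y, m)                       # dim Sym^m(X ⊕ Y)
rhs = sum(dim_sym(n_x, p) * dim_sym(n_y, m - p)   # Σ_p dim Sym^p(X) · dim Sym^{m-p}(Y)
          for p in range(m + 1))

print(lhs, rhs)  # 462 462
```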
ct.category-theory monads symmetric-algebras
en.wikipedia.org/wiki/Monoidal_monad – Martin Brandenburg Feb 18 '13 at 20:46
Martin: that would probably be this, if the two monoidal structures otimes and + are actually the same. I'm interested in a case where they are not. – Charles Rezk Feb 19 '13 at 14:04
2 Answers
I don't know if there's a standard name for what you're calling "exponential monads". Maybe not. But such things have been considered. You mention the example of the free commutative algebra
monad $\mathrm{Sym}$ on $\mathbf{Vect}$. The linear structure is inessential here, so you could simplify and consider the free commutative monoid monad $T$ on $\mathbf{Set}$ instead. Perhaps
this deserves to be called the exponential monad, since $$ T(X) = \sum_{n = 0}^\infty X^n/S_n $$ where $S_n$ is the $n$th symmetric group, which has $n!$ elements. (Here $\sum$ denotes
coproduct.) I believe there is even a sense in which the derivative of $T$ is $T$. Your monad $\mathrm{Sym}$ might be called the "linearized exponential monad".
Exponential monads (and in particular the free commutative monoid monad) have been considered in linear logic. I'm probably not giving the canonical reference here, but you could try a paper of Marcelo Fiore, Differential structure in models of multiplicative biadditive intuitionistic linear logic. For example, on page 8 he mentions
the Seely monoidal natural isomorphism $$s: !A \otimes !B\ \stackrel{\cong}{\to}\ !(A \times B)$$
Here $!$ is a comonad (you might have to do some dualizing), and $\times$ is a "biproduct", i.e. simultaneously a product and a coproduct. (In your example of $k$-vector spaces, coproducts are biproducts.) He cites work of Blute, Cockett and Seely on differential categories, which might be worth chasing up in the hope of finding a treatment of exponential monads.
I think this use of the word "differential" is connected to my dim recollection above, that the derivative of the free commutative monoid monad is itself --- whatever that means.
Edit: You might get a better response if you ask the categories mailing list, categories@mta.ca. There are certainly people there who know more about this than me.
I've just spent some time working out for myself the details of what an exponential monad is supposed to be, so I might as well post what I learned here. (I don't know that I'll ever have a
reason to write it up more formally.)
Here is the correct definition. Given symmetric monoidal $(C,\otimes, 1)$, with finite coproducts $+$ and initial object $0$, an exponential structure on a monad $T$ on $C$ should be
• the structure of strong symmetric monoidal functor on $T: (C,+,0)\to (C,\otimes, 1)$, consisting of natural isomorphisms $\nu\colon 1\to T0$ and $\alpha\colon TX\otimes TY\to T(X+Y)$
satisfying a bunch of coherence properties.
There is one additional condition you need to impose. To state this condition, let $\gamma: T(TX\otimes TY)\to TX\otimes TY$ be the composite $$ T(TX\otimes TY) \xrightarrow{T\alpha} TT
(X+Y) \xrightarrow{\mu} T(X+Y) \xrightarrow{\alpha^{-1}} TX\otimes TY, $$ where $\mu: TT\to T$ is part of the monad structure. The additional condition is
• for all $X$ and $Y$, we have $\gamma\circ T(\mu\otimes \mu)=(\mu\otimes \mu)\circ \gamma$ as maps $T(TTX\otimes TTY)\to TX\otimes TY$.
The map $\gamma$ defines a $T$-algebra structure on $TX\otimes TY$ (it is the free algebra structure on $T(X+Y)$, transported along the isomorphism $\alpha$); the additional property says
that $\mu\otimes \mu$ is itself a map of $T$-algebras. Note also that the map $\nu$ identifies $1$ with the initial $T$-algebra $T0$.
Given this, you can prove (with no more difficulty than you would expect) that for every pair of $T$-algebras $A$ and $B$, we can put a canonical $T$-algebra structure on $A\otimes B$,
which exhibits it as the coproduct of $A$ and $B$ in the category of $T$-algebras. In particular, the forgetful functor from $T$-algebras to $C$ becomes strong symmetric monoidal, using
coproduct as the monoidal structure for $T$-algebras.
I would have expected that you would need another condition, relating $\gamma$ to the unit map $\eta\colon I\to T$, or perhaps a condition on $\nu$, but it doesn't seem that this is
necessary as far as I can tell.
You don't need the hypothesis that $\otimes$ distribute over coproduct, as I suggested in my question. It might seem surprising, but apparently you don't even need the monoidal structure on
$C$ to be symmetric or associative; being unital appears to be enough to exhibit coproducts of $T$-algebras using $\otimes$. I suspect that for such a "unital monoidal" category $C$, you
might be able to show that the existence of an exponential monad implies that $C$ is symmetric monoidal. (This does not seem so crazy in light of the way the symmetric monoidal smash
product of EKMM spectra comes about).
Not the answer you're looking for? Browse other questions tagged ct.category-theory monads symmetric-algebras or ask your own question. | {"url":"http://mathoverflow.net/questions/6155/where-is-there-a-treatment-of-exponential-monads","timestamp":"2014-04-21T09:38:18Z","content_type":null,"content_length":"60171","record_id":"<urn:uuid:6c89c67d-d029-4184-92ce-f4160ec80415>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
Discrete Fourier Transforms
A common operation in analyzing various kinds of data is to find the discrete Fourier transform (or spectrum) of a list of values. The idea is typically to pick out components of the data with
particular frequencies or ranges of frequencies.
In Mathematica, the discrete Fourier transform of a list u_r of length n is by default defined to be v_s = (1/√n) Σ_{r=1}^{n} u_r e^{2πi(r-1)(s-1)/n}. Notice that the zero frequency term appears at position 1 in the resulting list.
The inverse discrete Fourier transform of a list v_s of length n is by default defined to be u_r = (1/√n) Σ_{s=1}^{n} v_s e^{-2πi(r-1)(s-1)/n}.
In different scientific and technical fields different conventions are often used for defining discrete Fourier transforms. The option FourierParameters allows you to choose any of these conventions
you want.
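To make the default convention concrete (a pure-Python illustration, not Mathematica code), here is the transform pair written as explicit sums; note the 1/√n normalization and the positive exponent in the forward direction — other systems often use the opposite sign and no normalization:

```python
import cmath
from math import sqrt

def fourier(u):
    # Mathematica's default: v_s = (1/sqrt(n)) * sum_r u_r e^{2 pi i (r-1)(s-1)/n}
    n = len(u)
    return [sum(u[r] * cmath.exp(2j * cmath.pi * r * s / n) for r in range(n)) / sqrt(n)
            for s in range(n)]

def inverse_fourier(v):
    # inverse: the same sum with the opposite exponent sign
    n = len(v)
    return [sum(v[s] * cmath.exp(-2j * cmath.pi * r * s / n) for s in range(n)) / sqrt(n)
            for r in range(n)]

u = [1.0, 2.0, 0.0, -1.0]
v = fourier(u)

# the zero-frequency term sits at position 1: v[0] = sum(u)/sqrt(n)
print(abs(v[0] - sum(u) / sqrt(len(u))))                       # ~0.0
print(max(abs(a - b) for a, b in zip(inverse_fourier(v), u)))  # ~0.0 (round trip)
```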
Typical settings for FourierParameters with various conventions.
Two-dimensional discrete Fourier transform.
Mathematica can find discrete Fourier transforms for data in any number of dimensions. In n dimensions, the data is specified by a list nested n levels deep. Two-dimensional discrete Fourier transforms are often used in image processing.
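The separability that makes multidimensional transforms practical can be illustrated (again in plain Python rather than Mathematica): a 2D transform is just the 1D transform applied to every row and then to every column, and the zero-frequency entry at position {1, 1} is the sum of the data divided by the square root of the number of entries:

```python
import cmath
from math import sqrt

def fourier(u):
    # 1D transform in Mathematica's default convention (1/sqrt(n), positive exponent)
    n = len(u)
    return [sum(u[r] * cmath.exp(2j * cmath.pi * r * s / n) for r in range(n)) / sqrt(n)
            for s in range(n)]

def fourier2d(mat):
    # transform every row, then every column of the result
    rows = [fourier(row) for row in mat]
    cols = [fourier(list(col)) for col in zip(*rows)]
    return [list(row) for row in zip(*cols)]

data = [[1.0, 2.0], [3.0, 4.0]]
out = fourier2d(data)

# zero-frequency term: sum of all entries / sqrt(4) = 10/2 = 5
print(abs(out[0][0] - 5.0))  # ~0.0
```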
One issue with the usual discrete Fourier transform for real data is that the result is complex-valued. There are variants of real discrete Fourier transforms that have real results. Mathematica has
commands for computing the discrete cosine transform and the discrete sine transform.
Discrete real Fourier transforms.
There are four types each of discrete Fourier sine and cosine transforms typically in use, denoted by number or sometimes by roman numeral, as in "DCTII" for the discrete cosine transform of type 2.
Discrete real Fourier transforms of different types.
The default is type 2 for both FourierDCT and FourierDST.
Mathematica does not need separate inverse functions because FourierDCT and FourierDST are their own inverses when used with the appropriate type. The inverse transforms for types 1, 2, 3, 4 are types 1, 3, 2, 4, respectively.
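The type-2/type-3 pairing can be seen directly (a pure-Python sketch using the orthonormal scaling, not any particular Mathematica normalization): the orthonormalized type-II DCT matrix is orthogonal, and its transpose is exactly the type-III DCT, so applying type 3 after type 2 recovers the input:

```python
from math import cos, pi, sqrt

def dct2_matrix(n):
    # orthonormalized type-II DCT matrix; it is orthogonal, and its
    # transpose is the type-III DCT -- which is why DCT-III inverts DCT-II
    m = [[sqrt(2.0 / n) * cos(pi * (j + 0.5) * k / n) for j in range(n)]
         for k in range(n)]
    m[0] = [v / sqrt(2.0) for v in m[0]]  # first row carries an extra 1/sqrt(2)
    return m

def apply(m, x):
    return [sum(mij * xj for mij, xj in zip(row, x)) for row in m]

def transpose(m):
    return [list(col) for col in zip(*m)]

x = [3.0, -1.0, 4.0, 1.0, 5.0]
m2 = dct2_matrix(len(x))
y = apply(m2, x)                  # type-2 DCT
x_back = apply(transpose(m2), y)  # type-3 DCT undoes it

print(max(abs(a - b) for a, b in zip(x_back, x)))  # ~0.0
```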
The discrete real transforms are convenient to use for data or image compression. | {"url":"http://reference.wolfram.com/mathematica/tutorial/FourierTransforms.html","timestamp":"2014-04-20T11:03:46Z","content_type":null,"content_length":"45371","record_id":"<urn:uuid:97d5a94e-b015-49ba-a093-cf6c1ee6a310>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00492-ip-10-147-4-33.ec2.internal.warc.gz"} |