Proof of Sine Addition Formula with complex numbers
November 25th 2010, 05:30 AM #1
Hi there,
I am stuck on a proof using complex numbers. I have tried almost everything, but I seem to get nowhere. Here is the question.
Let a, b be complex numbers. Prove that sin(a + b) = sin a cos b + sin b cos a.
This is just a tedious problem.
Use $\sin(z) = \dfrac{e^{iz} - e^{-iz}}{2i}$ and $\cos(z) = \dfrac{e^{iz} + e^{-iz}}{2}$
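Spelling that hint out (a sketch added here for illustration, not part of the original post): substitute the exponential forms into sin a cos b + cos a sin b and collect the exponentials.

```latex
\begin{aligned}
\sin a\cos b + \cos a\sin b
&= \frac{e^{ia}-e^{-ia}}{2i}\cdot\frac{e^{ib}+e^{-ib}}{2}
 + \frac{e^{ia}+e^{-ia}}{2}\cdot\frac{e^{ib}-e^{-ib}}{2i}\\
&= \frac{\bigl(e^{i(a+b)}+e^{i(a-b)}-e^{-i(a-b)}-e^{-i(a+b)}\bigr)
      +\bigl(e^{i(a+b)}-e^{i(a-b)}+e^{-i(a-b)}-e^{-i(a+b)}\bigr)}{4i}\\
&= \frac{2e^{i(a+b)}-2e^{-i(a+b)}}{4i}
 = \frac{e^{i(a+b)}-e^{-i(a+b)}}{2i}
 = \sin(a+b).
\end{aligned}
```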
November 25th 2010, 05:43 AM #2
Programming for all, part 2: From concept to code
Do one thing, and do it well
In our first installment, we wrote several programs that really did nothing more than illustrate a concept. Let's turn the complexity up a notch and compose a program that actually solves a problem.
The problem we are tasked with: given the high temperature of the past three days, compute the average and standard deviation.
To do this, we are going to need to implement an algorithm, the programming equivalent of a set of directions. It gives the major steps that one must take in order to solve a problem, but the details of how to carry them out are left up to the programmer who implements the algorithm. For our problem at hand, we could write out our algorithm as follows:
1. Read in three values.
2. Compute the sum of these values.
3. Compute the average by dividing the sum by 3.
4. Figure out how far each value is from the average.
5. Add the squares of the distances obtained in step four.
6. Take the square root of the value in step five.
7. Divide by the square root of 3.
So, let us set out to implement our remedial algorithm in MHF:
PROGRAM weatherStation
NUMBER temperature1 # tell the computer that we're going to need
NUMBER temperature2 # space set aside for the three temperatures
NUMBER temperature3 # plus space for the average and standard
NUMBER averageTemp # deviation results
NUMBER stddevTemp
NUMBER sum, sqDist1, sqDist2, sqDist3 # we can define multiple
# variables of the same type
# on one line
READ temperature1 # interface with someone or something
READ temperature2 # to get our three temperature readings
READ temperature3 # algorithm step 1
# algorithm step 2
sum = temperature1 + temperature2 + temperature3
# algorithm step 3
averageTemp = sum / 3.0
# steps 4 and half of 5 from the algorithm
sqDist1 = (temperature1 - averageTemp)^2
sqDist2 = (temperature2 - averageTemp)^2
sqDist3 = (temperature3 - averageTemp)^2
# other half of 5 and step 6 from the algorithm
stddevTemp = ((sqDist1+sqDist2+sqDist3)/3.0)^(0.5)
PRINT "Average temperature = ", averageTemp, "+/-", stddevTemp
END PROGRAM weatherStation
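MHF is our made-up teaching language, so it won't run anywhere. Purely as an illustrative sketch, here is the same program in Python; the READ statements are replaced with hard-coded sample readings (an assumption for demonstration), and Python's `**` stands in for MHF's `^`:

```python
# Python rendering of the MHF weatherStation program (illustrative sketch).
# Sample readings stand in for the three READ statements (algorithm step 1).
temperature1, temperature2, temperature3 = 20.0, 22.0, 24.0

# algorithm step 2: compute the sum of the values
total = temperature1 + temperature2 + temperature3

# algorithm step 3: compute the average
average_temp = total / 3.0

# steps 4 and half of 5: squared distances from the average
sq_dist1 = (temperature1 - average_temp) ** 2
sq_dist2 = (temperature2 - average_temp) ** 2
sq_dist3 = (temperature3 - average_temp) ** 2

# other half of 5, plus steps 6 and 7: the standard deviation
stddev_temp = ((sq_dist1 + sq_dist2 + sq_dist3) / 3.0) ** 0.5

print("Average temperature =", average_temp, "+/-", stddev_temp)
```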
The weatherStation program follows the algorithm laid out above and calculates the average and the standard deviation of three temperature readings. But it's not possible to directly map the steps of
the algorithm to the code. Some steps are split over multiple lines; other lines do more than a single step.
This example is reminiscent of real-world development—as programs grow in complexity and size, it can be harder to track what individual steps are meant to accomplish.
To make the code easier to follow, and to allow the same operation to be done at multiple locations in the code, it's often useful to compartmentalize your code. All major programming languages (that
I am aware of) support what are called functions, methods, or subroutines. These allow programmers to pull out related operations and place them in their own module, where they are separated from
other parts of the program.
Functions, as we'll call them in our toy language, can be thought of like a mini-program within a program. They allow developers to isolate pieces of logic or complex operations and then refer to
them by name later in the development cycle. In our weatherStation example, we could create one function for computing the average and another for computing the standard deviation.
One principle that drives developers is that each logical block of code should do one thing, and do it well—ideally without affecting things outside of its scope. Functions enable this to happen.
Many programmers create collections of side-effect-free functions: those that do what they purport to do and nothing more. So a function to compute the average of three values would not format
your hard drive, order a pizza for you, or do other, less nefarious things (such as change the numbers themselves).
Let's take a look at how we could rework weatherStation to modularize some of its functionality. We see that there are three key things that occur in this program: we read in some values, calculate
the average, and calculate the standard deviation. The latter two can easily be rolled into stand-alone functions.
A function in MHF will look a lot like a little program: it will start with the keyword FUNCTION, followed by the name of the function, then a parenthetical list of the inputs to the function. (These
inputs are formally known as the arguments.) Finally, a function will contain a RETURNS keyword, which identifies any value that gets sent back when the function is complete. Let's look at a
modularized weatherStation program:
PROGRAM modularWeatherStation
NUMBER temperature1 # tell the computer that we are going to need
NUMBER temperature2 # space set aside for the three temperatures
NUMBER temperature3 # plus space for the average and standard
NUMBER averageTemp # deviation results. Just like before
NUMBER stddevTemp
READ temperature1
READ temperature2
READ temperature3
# we will call a function that computes the average for us
# at this line, control of the program will be transferred
# to the computeAverage function
averageTemp = computeAverage( temperature1, temperature2, temperature3)
# likewise for the standard deviation
stddevTemp = standardDeviation( temperature1, temperature2, temperature3, averageTemp)
PRINT "Average temperature = ", averageTemp, "+/-", stddevTemp
END PROGRAM modularWeatherStation
FUNCTION computeAverage( NUMBER n1, NUMBER n2, NUMBER n3) RETURNS NUMBER
NUMBER sum
# the values passed into this function will be mapped to
# the values n1, n2, and n3--only in this function
sum = n1 + n2 + n3
# at a RETURN statement, the program will return to the
# point where it left off
RETURN sum/3.0
END FUNCTION computeAverage
FUNCTION standardDeviation( NUMBER n1, NUMBER n2, NUMBER n3, NUMBER avg) RETURNS NUMBER
NUMBER sqDist1, sqDist2, sqDist3
sqDist1 = (n1-avg)^2
sqDist2 = (n2-avg)^2
sqDist3 = (n3-avg)^2
RETURN ((sqDist1+sqDist2+sqDist3)/3.0)^(0.5)
END FUNCTION standardDeviation
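And again as an illustrative sketch, the modular version in Python—the function names and arguments mirror the MHF listing, with hard-coded sample readings (an assumption for demonstration) standing in for READ:

```python
# Python rendering of modularWeatherStation (illustrative sketch).

def compute_average(n1, n2, n3):
    # the values passed in are bound to n1, n2, n3 -- only inside this function
    total = n1 + n2 + n3
    # at a return statement, control goes back to the point of the call
    return total / 3.0

def standard_deviation(n1, n2, n3, avg):
    sq_dist1 = (n1 - avg) ** 2
    sq_dist2 = (n2 - avg) ** 2
    sq_dist3 = (n3 - avg) ** 2
    return ((sq_dist1 + sq_dist2 + sq_dist3) / 3.0) ** 0.5

# main program: sample readings stand in for the three READ statements
temperature1, temperature2, temperature3 = 20.0, 22.0, 24.0
average_temp = compute_average(temperature1, temperature2, temperature3)
stddev_temp = standard_deviation(temperature1, temperature2, temperature3,
                                 average_temp)
print("Average temperature =", average_temp, "+/-", stddev_temp)
```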
Program modularWeatherStation will produce the same results as our previous weatherStation program (assuming we give it the same three inputs). However, the ideas and implementation of the average
and standard deviation are now encapsulated in functions. The consumer of these functions—the main program—doesn't care what happens inside them, as long as it gets the right values back.
When the main program begins, it will set aside space for all the variables we tell it we will need, then read in the three temperatures we are interested in. Next, we
call the function computeAverage with the arguments temperature1, temperature2, and temperature3. Once the computer reaches this line of the main program, it will transfer control to the
computeAverage function, which is defined later.
In computeAverage, temperature1, temperature2, and temperature3 are now referred to by their local aliases, n1, n2, and n3. We define a local variable—a variable that will only exist as long as the
execution of the program is within the computeAverage function—for the sum. The algorithm for computing the average is carried out as before. Before we reach the final line in the computeAverage
function, however, we encounter a RETURN statement.
On the line where we first describe the function, we also noted that it would return a NUMBER-type variable. In this case, the RETURN statement contains an expression that evaluates to a NUMBER-type
variable. That value gets returned to the point in the main body of the program where the function call was made. The program will then continue to run from that point on as before.
Ph.D. Theses
PhD Dissertations published by the Structures Group. Links are to abstracts of the thesis where available on-line.
247 Guan, G. 2013 FRP Debonding Fracture and Design for Flexural Retrofitting
246 Seereeram, V. 2012 Compliant Shell Mechanisms and Inextensional Theory
245 Schenk, M. 2012 Folded Shell Structures
244 Bonin, A. 2012 Wrinkling in polygonal membranes
243 Yapa, H. D. 2011 Optimum Shear Strengthening of Reinforced Concrete
242 Music, O. 2011 Flexible Asymmetric Spinning
241 Jackson, A. 2011 Modelling the collapse behaviour of reinforced concrete slabs
240 Augusthus Nelson, L. 2011 Size Effects in Reinforced Concrete Beams Strengthened with CFRP Straps
239 Taher Khorramabadi, M. 2010 FRP Bond behaviour during intermediate concrete cover separation in flexurally strengthened RC beams
238 Eltayeb Yousif, M. 2010 Non-linear Bond Modelling for Reinforced Concrete: A Newly-Modified Bond Model
237 Long, Q. 2010 Subdivision Finite Elements for Geometrically Complex Thin & Thick Shells
236 Giannopoulos, I. 2010 Creep and Creep Rupture Behaviour of Aramid Fibres
235 Hassan Dirar, S.M.L. 2009 Shear Strengthening of pre-cracked reinforced concrete T-beams using carbon fibre systems
234 Gan, W.W. 2009 Analysis & Design of Closed-Loop Deployable Frame Structure
233 Gerngross, T. 2009 Viscoelastic Behaviour in Stratospheric Balloon Structures
232 Winslow, P. 2009 Synthesis and Optimisation of Free-Form Grid Structures
231 Scott, P. 2009 Aspects of CFRP Prestressed concrete durability in the marine environment
230 Achintha, P.M.M. 2009 Fracture Analysis of Debonding Mechanism for FRP Plates
229 Ramar, P. R. 2009 Novel Symmetric Tensegrity Structures
228 Norman, A. 2009 Multistable and Morphing Corrugated Shell Structures
227 Toews von Riesen, E. 2008 Active Hyperhelical Structures
226 Parikh, P. 2008 Impact of Integrated Water and Environmental Sanitation Infrastructure on Poverty Alleviation
225 Persaud, R. 2008 The Structural Behaviour of a Composite Timber and Concrete Floor System Incorporating Steel Decking as Permanent Formwork
224 Prendergast, J.M. 2008 Simulation of Unsteady 2-D Wind by a Vortex Method
223 Xu, Y. 2008 A computational Study of Lobed Balloons
222 Kueh, A. 2008 Thermo-mechanical properties of triaxial weave fabric composites
221 Pagitz, M. 2008 Analytical and Numerical Studies of Superpressure Balloons
220 Leung, A. 2007 Actuation Properties of Kagome Lattice Structures
219 Marfisi, E. 2007 Measurement of Concrete Samples using Magnetic Resonance Imaging
218 Ye, H. 2007 Bistable Cylindrical Space Frames
217 Waller, S.D. 2007 Mechanics of Novel Compression Structures
216 Santer, M.J. 2006 Design of Multistable Structures
215 Yee, J. 2006 Thin CFRP Composite Deployable Structures
214 Walker, G.M. 2006 Strength assessment of reinforced concrete voided bridge slabs
213 Hoult, N.A. 2006 Shear retrofitting of reinforced concrete beams with CFRP straps
212 Morais, M. 2006 Ductility of Beams Prestressed with FRP Tendons
211 Schioler, T. 2005 Multi-stable structural elements
210 Imhof, D. 2005 Risk assessment of existing bridge structures
209 Lea, F. 2005 Uncertainty in condition and strength assessment of reinforced concrete bridges
208 Jensen, F.V. 2005 Concepts for retractable roof structures
207 Ong, P.P.A. 2004 Frequency domain analysis of underwater catenary mooring cables
206 Ekstrom, L.J. 2004 Welding of bistable fibre-reinforced thermoplastic composite pipelines
205 Jaafar, K. 2004 Spiral shear reinforcement for concrete structures under static and seismic loads
204 Baskaran, K. 2004
203 Farmer, S.M. 2004 Large-displacement buckling of centrally loaded simply supported circular plates
202 Lu, H-Y. 2003 Behaviour of reinforced concrete cantilevers under concentrated loads
201 Kesse, G. 2003 Concrete beams with external prestressed carbon FRP shear reinforcement
200 Balafas, I. 2003 Fibre-reinforced-polymers versus steel in concrete bridges: structural design and economic viability
199 Watt, A.M. 2003 Deployable structures with self-locking hinges
198 Alwis K.G.N.C. 2003 Accelerated testing for long-term stress-rupture behaviour of aramid fibres
197 Wong, Y.W. 2003 Wrinkling of thin membrane structures
196 Tan, L.T. 2003 Thin-walled elastically foldable reflector structures
195 Kukathasan, S. 2003 Vibration of space membrane structures
194 Morgenthal, G. 2002 Aerodynamic analysis of structures using high-resolution vortex particle methods.
193 Lennon, B.A. 2002 Equilibrium and stability of inflatable membrane structures.
192 Aberle M. 2001 The nonlinear analysis of shear-weak gridshells.
191 Denton S.R. 2001 The strength of reinforced concrete slabs and the implications of limited ductility.
190 Galletly D. 2001 Modelling the equilibrium and stability of slit tubes.
189 Iqbal K. 2001 Mechanics of laminated bi-stable tubular structures.
188 Lai C.Y. 2001 Analysis and design of a deployable membrane reflector.
187 Ochsendorf J.A. 2001 Collapse of masonry structures.
186 Fischer A. 2000 Gravity compensation of deployable space structures.
185 Frandsen J.B. 2000 Computational fluid-structure interaction applied to long-span bridge design.
184 Leung H.Y. 2000 Aramid fibre spirals to confine concrete in compression.
183 Stratford T.J. 2000 The shear of concrete with elastic FRP reinforcement.
182 Weerasinghe M. 2000 The structural behaviour of a composite stub-girder incorporating an asymmetric slim floor beam.
181 Bulbul M.Y.I. 1999 The Geometric Nonlinear Behaviour of Space Structures with Imperfect, Laterally Loaded Slender Members.
180 Hack T. 1999 Stick-Slip Piezoelectric Actuators.
179 King S.A. 1999 Nonlinear and chaotic dynamics of thin-walled open-section deployable structures.
178 Hicks S.J. 1998 Longitudinal Shear Resistance of Steel and Concrete Composite Beams.
177 Huang W. 1998 Shape Memory Alloys and their Application to Actuators for Deployable Structures.
176 Kangwai R.D. 1998 The analysis of symmetric structures using group representation theory.
175 Miles D.J. 1998 Lateral thermal buckling of pipelines.
174 Srinivasan G. 1998 Modelling and Control of Vortex-induced Bridge Oscillations
173 Brown I.F. 1997 Abrasion and Friction in Parallel-Lay Rope Terminations.
172 El Mously M.E.M. 1997 Free Vibration of Cylindrical and Hyperboloidal Cooling-Tower Shells.
171 Lees J.M. 1997 Flexure of concrete beams pre-tensioned with aramid FRPs .
170 Mandal P. 1997 Buckling of thin cylindrical shells under axial compression.
169 Seffen K.A. 1997 Analysis of Structures Deployed by Tape-Springs.
168 Sundaram J. 1997 Design of continuous prestressed concrete beam bridges using expert systems.
167 Tan G.B. 1997 Nonlinear vibration of cable-deployed space structures.
166 Darby A.P. 1996 Active control of flexible structures using inertial stick-slip actuators.
165 Holst J.M.F.G. 1996 Large Deflection Phenomena in Cylindrical Shells.
164 Kumar P. 1996 Kinematic bifurcations and deployment simulation of foldable space structures.
163 Olonisakin A.A. 1995 Reinforced concrete slabs with partial lateral edge restraint.
162 El Hassan M.A. 1995 The Geometry and structure of DNA and its Role in DNA/protein recognition.
161 Sebastian W.M. 1995 The performance of a composite space truss bridge with glass reinforced plastic panels.
160 Ashour A.F. 1994 Behaviour and strength of reinforced concrete continuous deep beams.
159 Guest S.D. 1994 Deployable structures : concepts and analysis.
158 You Z. 1994 Deployable structures for masts and reflector antennas.
157 Lancaster E.R. 1993 Behaviour of pressurised pipes containing dents and gouges .
156 Maltby T.C. 1993 The upheaval buckling of buried pipelines.
155 Nautiyal S.D. 1993 Parallel computing techniques for investigating three dimensional collapse of a masonry arch.
154 Chan T.K. 1992 Stress concentrations in weld heat affected zones in aluminium-zinc-magnesium alloy.
153 Hearn N. 1992 Saturated permeability of concrete as influenced by cracking and self-sealing.
152 Ibell T.J. 1992 Behaviour of anchorage zones for prestressed concrete.
151 Middleton C.R. 1992 Strength and safety assessment of concrete bridges.
150 Amaniampong G. 1991 Variability and viscoelasticity of parallel-lay ropes.
149 El-Sheikh A.I. 1991 The effect of composite action on the behaviour of space structures.
148 van Heerden T.F. 1991 Force method solution of finite element equilibrium models for plane continua.
147 Jayasinghe M.T.R. 1991 Rationalization of prestressed concrete spine beam design philosophy for expert systems.
146 Kuang J.S. 1991 Punching shear failure of concrete slabs with compressive membrane action.
145 Phaal, R. 1991 A two-surface computational model for the analysis of thin shell structures.
144 Kwan A.S.K. 1990 A pantographic deployable mast.
143 Lipscombe P.R. 1990 Dynamics of rigid block structures.
142 Prakhya K.V.G. 1990 Ferrocement structures: constitutive relations, non-linear finite element analysis, and analogy with reinforced concrete.
141 Salami A.T. 1990 Finite element analysis of membrane action in reinforced concrete slabs.
140 Tam L.L. 1990 Strain-rate and inertia effects in the collapse of energy-absorbing structures.
139 Tsiagbe W.Y. 1990 Relaxation of weld residual stresses by post-weld heat treatment.
138 Hodgetts P.A. 1989 The collapse behaviour of lattice hyperbolic paraboloids.
137 Kamyab H. 1989 Effects of foundation settlement on oil storage tanks.
136 Madros M.S.Z.B. 1989 The structural behaviour of composite stub-girder floor systems.
135 Peer L.B.B. 1989 Water flow into unsaturated concrete.
134 Robinson N.J. 1989 The wind induced vibration and fatigue of floating roofs on oil storage tanks.
133 Roche J.J. 1989 The design and analysis of shallow spherical domes constructed from triangular panels.
132 Kandil K.S. 1988 Interaction between local and euler buckling modes in thin-walled columns.
131 Lu G. 1988 Cutting of a plate by a wedge.
130 Affan A. 1987 Collapse of double-layer space grid structures.
129 Fathelbab F.A. 1987 The effect of joints on the stability of shallow single layer lattice domes.
128 Gray-Stephens D.M.R. 1987 Residual stresses in ring stiffened cylinders.
127 Hatzis D.T. 1987 The influence of imperfections on the behaviour of shallow single layer lattice domes.
126 Joseph P.J. 1987 The compressive behaviour of thin-walled cold-formed steel columns.
125 Kamalarasa S. 1987 Buckle propagation in submarine pipelines.
124 Kollek R.J. 1987 Collapse mechanisms of locally loaded reinforced concrete shells.
123 Lam W.F. 1987 Constitutive relations for finite element analysis of tension stiffening in reinforced concrete.
122 Li S-L. 1987 Stress analysis in two dimensions by a 'mixed' finite element method.
121 Li Kim Mui S.T. 1987 Pore pressure in concrete: theory and triaxial tests.
120 Mohamed Z.B. 1987 Shear strength of reinforced concrete wall-beam structures: upper-bound analysis and
119 Bajoria K.M. 1986 Three dimensional progressive collapse of warehouse racking.
118 Free J.A. 1986 Residual stresses in welded tubular Y-joints.
117 Kani I.M. 1986 A theoretical and experimental investigation of the collapse of shallow reticulated domes.
116 Payne J.G. 1986 Residual stresses in welded tubular T-joints.
115 Pellegrino S. 1986 Mechanics of kinematically indeterminate structures
114 Abbassian F. 1985 Long-running ductile fracture of high pressure gas pipelines.
113 Robertson I. 1985 Strength loss in welded aluminium structures.
112 Scaramangas A. 1985 Residual stresses in girth butt welded pipes.
111 Hong G.M. 1984 Buckling of non-welded aluminium columns
110 Kishek M.A. 1984 Tension stiffening and crack widths in reinforced concrete beam and slab elements.
109 Mofflin D.S. 1984 Plate buckling in steel and aluminium.
108 See T. 1984 Large displacement elastic buckling of space structures
107 Stonor R.W.P. 1983 Unstiffened steel compression panels with and without coincident shear.
106 Kelly S.J. 1982 Structural aspects of the progressive collapse of warehouse racking.
105 Low H.Y. 1982 Some structural aspects of collisions between ships and offshore concrete
104 Whaley B.C. 1982 The application of the reflective moire method to the bending and buckling of steel plates.
103 Wong M.P. 1982 Weld shrinkage in non-linear materials.
102 Clark M.A. 1981 Collapse of rigidly jointed trusses.
101 Chamorro Garcia R. 1981 Strength and stability of concrete deep beams.
100 Smithers T. 1981 The design of homologically deforming cyclically symmetric structures.
99 Kashani-Akhavan A. 1979 Fracture toughness of glass fibre reinforced cement composites.
98 Memon N.A. 1979 A study of anisotropic slabs with particular reference to the effects of openings.
97 Kubik L.A. 1978 Strength and serviceability of reinforced-concrete deep beams.
96 Pavlovic M. 1978 Numerical methods for the analysis of elastic thin shells.
95 Robinson J.M. 1978 Aspects of the elastic buckling of thin cylindrical shells.
94 Bradfield C.D. 1977 Problems in the strength of stiffened steel compression panels.
93 Reddy B.D. 1977 The elastic and plastic buckling of circular cylinders in bending.
92 White J.D. 1977 Residual stress in welded plates.
91 Yasseri S.F. 1977 Optimal design of plates, with special reference to reinforced concrete slabs.
90 Cookson P.J. 1976 Collapse of concrete box girders involving distortion of the cross-section.
89 Lawal T. 1976 Compressive membrane forces in reinforced concrete slabs.
88 Mohr G.A. 1976 Analysis and design of plate and shell structures using finite elements.
87 Rogers N.A. 1975 Local buckling of welded steel outstands.
86 Hope-Gill M.C. 1974 The ultimate strength of continuous composite beams.
85 Kamtekar A.G. 1974 Welding and buckling effects in thin steel plates.
84 Little G.H. 1974 Local and flexural failure in steel compression members.
83 Woodhead A.L. 1974 A finite-element method for analysis of two-dimensional continuous structures.
82 Gilbert R.B. 1973 Topics in the elastic buckling of plates and columns.
81 Spence R.J.S. 1973 The strength of concrete box girder bridge decks of deformable cross section.
80 Thevendran V. 1973 Structural optimization by mathematical programming.
79 Gill J.I. 1972 Computer-aided design of shell structures using the finite element method.
78 Loov R.E. 1972 Finite element analysis of concrete members considering the effects of cracking and the inclusion of reinforcement.
77 Oppenheim I.J. 1972 The effect of cladding on tall buildings.
76 Rajendran S. 1972 The strength of reinforced-concrete slab elements.
75 Cammaert A.B. 1971 The optimal design of multi-storey frames using mathematical programming.
74 Clarke J.L. 1971 Composite plates with stud shear connectors.
73 Climenhaga J.J. 1971 Local buckling in composite beams.
72 Johnston D.C. 1971 Compression members in trusses.
71 Melchers R.E. 1971 Optimal fibre-reinforced plates: with special reference to reinforced concrete.
70 Pitman F.S. 1971 The behaviour of intersections in cylindrical pressure vessels.
69 Young B.W. 1971 Steel column design.
68 Moxham K.E. 1970 Compression in welded web plates.
67 Serra R.F. 1970 Numerical solution of some plate and shell problems, with emphasis on collocation.
66 Sharples B.P.M. 1970 The structural behaviour of composite columns.
65 Sheppard D.J. 1970 Structural design optimization by dynamic programming.
64 Taylor D.A. 1970 The behaviour of continuous columns.
63 Williams J.H. 1970 Elastic cylindrical shells with open and closed ends.
62 Morris A.J. 1969 Point Loads on Shell Structure.
61 Ranaweera M.P. 1969 The finite element method applied to limit analysis.
60 Willmington R.T. 1969 Vertical shear in composite beams.
59 Gunaratnam D.J. 1968 Finite elastic-plastic displacements of shells.
58 Gurney T.R. 1968 Methods of improving the fatigue strength of fillet welded joints.
57 Sim R.G. 1968 Creep of structures.
56 Southward R.E. 1968 Inelastic column stability.
55 Woodman M.J. 1968 Analysis of shallow elastic shells using a method of moments.
54 Goodall I.W. 1967 On the design of intersections in pressure vessels.
53 Van Dalen K. 1967 Composite action at the supports of continuous beams.
52 Butlin G.A. 1966 The finite element method applied to plate flexure.
51 Graves Smith T.R. 1966 The ultimate strength of locally buckled columns of arbitrary length.
50 Isenberg J. 1966 Inelasticity and fracture in concrete.
49 Kemp A.R. 1966 Composite steel-concrete floor systems.
48 Ractliffe A.T. 1966 The strength of plates in compression.
47 Marriott D.L. 1965 Creep deformation in structures.
46 Morley C.T. 1965 The ultimate bending strength of reinforced concrete slabs.
45 Massey P.C. 1965 The inelastic lateral stability of mild steel I beams.
44 Augusti G. 1964 Some problems in structural instability with special reference to beam columns of I-section.
43 Bernard P.R. 1964 On the collapse of composite beams.
42 Ogle M.H. 1964 Shakedown of steel frames.
41 Royles R. 1964 The failure of ductile structures in reversed bending.
40 Wasti S.T. 1964 Finite plastic deformations of spherical shells.
39 Gorczynski W. 1963 The influence of temperature and internal pressure on stresses in piping systems.
38 Poskitt T.J. 1963 Some problems in the analysis of structures containing suspended cables.
37 Lyon J.R. 1962 The incremental collapse of ductile structures.
36 Martin J.B. 1962 Some aspects of the plastic theory of structures, with special reference to transversely loaded bents and grids.
35 Oladapo I.O. 1962 The effect of the rate of loading on the moment-curvature relation of prestressed concrete beams.
34 Topper T.H. 1962 The behaviour of mild steel under cyclic loading in the plastic range.
33 Grundy P. 1961 The strength of elastically restrained mild steel tubular struts.
32 La Grange L.E. 1961 Moment redistribution in prestressed concrete beams and frames.
31 Renton J.D. 1961 The elastic stability of frameworks.
30 Thompson J.M.T. 1961 The elastic instability of spherical shells.
29 Britvec S.J. 1960 The post-buckling behaviour of frames.
28 Cotterell B. 1960 Thermal buckling of circular plates.
27 Sherbourne A.N. 1960 The elastic-plastic behaviour of mild steel plates in compression.
26 Ariaratnam S.T. 1959 The collapse load of elastic-plastic structures.
25 Khalil H.S. 1958 The plastic design of Vierendeel trusses.
24 Rydzewski J.R. 1958 Experimental and analytical determination of the stresses in buttress dams.
23 Bailey R.W. 1957 The plastic behaviour of tubular beams.
22 Clyde D.H. 1957 Rigid jointed structures.
21 Cogill W.H. 1957 The measurement of strain due to alternating stress.
20 Ellis J.S. 1957 The plastic behaviour of tubular compression members.
19 Percy J.H. 1956 Elastic-plastic bending: the calculation of deflexions in frames.
18 Eickhoff K.G. 1955 The plastic behaviour of columns and beams.
17 Stevens L.K. 1955 Carrying capacity of mild steel arches.
16 Foulkes J.D.P. 1955 The analysis and minimum weight design of ductile structures.
15 Wright G.D.T. 1954 The plastic behaviour of flexural members and connections under combined loading.
14 Ashwell D.G. 1953 The finite deformation of thin plates and shells.
13 Davidson J.F. 1953 Some problems in static and dynamic buckling.
12 Parkes E.W. 1952 The stress distribution near a loading point in a uniform flanged beam.
11 Blakey F.A. 1950 Ultimate strength of concrete members.
10 Gibson J. 1950 The behaviour of metals under tensile loads of short durations.
9 Gross N. 1950 Experiments on curved thin-walled tubes.
8 Horne M.R. 1950 Critical loading conditions in engineering structures.
7 Heyman J. 1950 The failure of ductile structures.
6 Jones R.P.N. 1948 The stresses in beams of uniform cross-section under dynamic loading.
5 Neal B.G. 1948 The lateral instability of mild steel beams of rectangular cross-section bent beyond the elastic limit.
4 Ng W.H. 1947 The behaviour and design of battened structural members.
3 Davies R.D. 1935 The transverse oscillation of railway vehicles.
2 Henderson P.L. 1933 Oscillations in railway bridges.
1 Goodier J.N. 1931 (a) Some problems of plane stress
(b) On the permanent corrugation of surfaces by the action of moving loads.
(c) On the forced transverse oscillations of constrained beams and plates.
Parametric - Dictionary Definition of Parametric
A function is 'parametric' in a given context if its functional form is known to the economist.
Example 1: One might say that the utility function in a given model is increasing and concave in consumption. But it only becomes parametric once one says that u(c) = ln(c) or u(c) = c^(1-A)/(1-A). At this
point only parameters such as A remain to be specified or estimated.
Example 2: In an econometric model one often imposes assumptions such as that the relationship being estimated is linear, thence to do a linear regression. These are parametric assumptions. One
might also make some estimates of the 'regression function' (the relationship) without such parametric assumptions. This field is called nonparametric estimation.
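As an illustrative sketch (an addition, not part of the original entry), the parametric assumption in Example 2 can be shown in code: we commit to the functional form y = b0 + b1*x, so only the two parameters b0 and b1 remain to be estimated.

```python
# Ordinary least squares for the parametric model y = b0 + b1*x.
# Assuming linearity is the parametric part; only b0 and b1 are estimated.

def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    b1 = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
          / sum((x - mean_x) ** 2 for x in xs))
    b0 = mean_y - b1 * mean_x
    return b0, b1

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.0 + 0.5 * x for x in xs]   # toy data generated from a line
b0, b1 = fit_line(xs, ys)
print(b0, b1)  # recovers b0 = 2.0, b1 = 0.5
```

A nonparametric estimator would instead let the data determine the shape of the regression function, for example by local averaging, rather than fixing it to a line in advance.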
Hunters Creek Village, TX Trigonometry Tutor
Find a Hunters Creek Village, TX Trigonometry Tutor
...I am on the President's Honor Roll and have a GPA of 3.73. I am graduating in May and pursuing a Masters degree. I have had three chemical engineering internships through which I have gained
experiences in the field.
22 Subjects: including trigonometry, chemistry, geometry, physics
...I can tutor from early mornings to late at night (about 10 pm). I can meet you where you feel comfortable whether it be at your home, a coffee shop, a bookstore, or even a library. I am a
native English speaker and am extremely patient and will take my time to make sure you understand the conce...
24 Subjects: including trigonometry, chemistry, calculus, physics
...So once you learn the basics, you will build on them as you progress. The first skill is algebra. The concepts (e.g., variables and inequality) and manipulations (e.g., solving equations and
factoring) you learn in algebra are a kind of language that will be used in geometry, algebra 2, trigonometry, calculus, probability and statistics.
20 Subjects: including trigonometry, writing, algebra 1, algebra 2
...I am interested in helping in AP and SAT related math topics. I am qualified (PhD Rice engineering. 800 Math SAT 1, 800 Math SAT2, 800 Math GRE). I know how to tackle these tests & can pass
this information on to you. Unfortunately, these things matter and are the difference between admission to an above average school and the best school.
17 Subjects: including trigonometry, calculus, physics, geometry
...I focus on the student: I listen, assess and constantly check for understanding until I am sure they attain independent practice. I teach by establishing an on-going dialogue with my student. I
have taught all levels, from Kindergarten to University.
41 Subjects: including trigonometry, Spanish, English, reading | {"url":"http://www.purplemath.com/hunters_creek_village_tx_trigonometry_tutors.php","timestamp":"2014-04-17T01:04:07Z","content_type":null,"content_length":"25095","record_id":"<urn:uuid:9dba3dfb-5187-46d7-97e8-2d7bae12d66b>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
An Integral and the Computer
Re: An Integral and the Computer
Sweet Jesus, it's still good enough
'And fun? If maths is fun, then getting a tooth extraction is fun. A viral infection is fun. Rabies shots are fun.'
'God exists because Mathematics is consistent, and the devil exists because we cannot prove it'
'Who are you to judge everything?' -Alokananda
Re: An Integral and the Computer
Now for 5?
In mathematics, you don't understand things. You just get used to them.
I have the result, but I do not yet know how to get it.
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: An Integral and the Computer
Re: An Integral and the Computer
You can empirically see that after n = 6 you have the 4 significant digits you require so that is the answer.
Re: An Integral and the Computer
Yes. That is what I was planning to do.
Re: An Integral and the Computer
When you get the time, check this page because your code is slightly off. There are implementations there that will help.
Re: An Integral and the Computer
Why should it be off?
That's just the formula...
Re: An Integral and the Computer
You are probably out of alignment.
Try your formula out on this
f(x) = x^9 between 0 and 10 with 100000 intervals.
You should get 1000000000.75
After you do that I will show you one last thing that is really cute.
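For reference, here is a minimal sketch of the composite trapezoid rule being discussed (the thread's own code is not visible in this archive, so this is an independent implementation, checked against bobbym's test of f(x) = x^9 on [0, 10] with 100000 intervals):

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoid rule with n equal subintervals."""
    h = (b - a) / n
    # endpoints carry weight 1/2, interior points weight 1; fsum limits round-off
    total = (f(a) + f(b)) / 2 + math.fsum(f(a + i * h) for i in range(1, n))
    return h * total

print(trapezoid(lambda x: x**9, 0, 10, 100000))  # ≈ 1000000000.75, as claimed above
```

The 0.75 above the exact value 10^9 matches the leading error term (h^2/12)(f'(10) - f'(0)) of the trapezoid rule.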
Re: An Integral and the Computer
Yes. The correct answer should be 10^9.
What's wrong with my code?
Re: An Integral and the Computer
What did you get?
Re: An Integral and the Computer
Re: An Integral and the Computer
That is correct!
This is a plot of what the trapezoid rule is doing for n = 10:
Notice it sometimes went over and sometimes under. The second trapezoid is almost perfect.
Re: An Integral and the Computer
What about post 58?
OK, see you later, gotta go to school
Last edited by Agnishom (2013-07-24 15:06:32)
Re: An Integral and the Computer
Have a good day at school and study hard.
Post #62 was the cute thing. The program not only does the calculation but shows graphically what it did.
Re: An Integral and the Computer
Thanks. School was good.
Why is that error occurring?
Re: An Integral and the Computer
What error? I do not understand.
Re: An Integral and the Computer
Read posts 58 to 61
Re: An Integral and the Computer
It is possible there is a small problem with your code (ever seen any code without bugs?). But for the purposes of this discussion it worked well enough, so I am happy.
Re: An Integral and the Computer
Of course there is some problem. Tell me where it is.
Re: An Integral and the Computer
What do you get for this problem here with n = 3
Re: An Integral and the Computer
Which problem? I am getting confused..
Re: An Integral and the Computer
The one that guy from India with the beard posted.
Re: An Integral and the Computer
Re: An Integral and the Computer
For n = 3, I am getting 0.15605651313635455
Re: An Integral and the Computer
In that case, don't you think mine is better!
'Who are you to judge everything?' -Alokananda | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=279343","timestamp":"2014-04-20T21:02:38Z","content_type":null,"content_length":"42726","record_id":"<urn:uuid:e47068bf-33fb-4cfa-bba4-1d62f4e8c941>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00387-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Sciences Department
Metropolitan Community College students and advisors please check the current Chadron State College catalog for further information about all requirements of and electives in degree programs. Some
courses needed for certain programs may transfer but be found in a more appropriate section of this guide. Questions about the courses in this department may be directed to the Dean, Dr. Joel Hyer.
Mathematics department, faculty and programs.
The teacher education program has special requirements for enrollment and graduation.
Mathematical Sciences - Courses in Computer Science taken more than seven (7) years ago will not transfer unless reviewed and approved by your CSC faculty advisor and the Dean of Curriculum.
CSC Course # CSC Course Title Credit Hours MCC Course # MCC Course Title Credit Hours
MATH 132 Applied Mathematics 3 MATH 1240 Applied Mathematics 4.5
MATH 134 Plane Trigonometry 3 MATH 1430 Trigonometry 4.5
MATH 142 College Algebra 4 MATH 1420 College Algebra 5
MATH 151 Calculus I 5 MATH 2410 Calculus I 7.5
MATH 232 Applied Statistics 3 MATH 1410 Statistics 4.5
MATH 252 Calculus II 5 MATH 2411 Calculus II 7.5
Updated October 2012 | {"url":"http://www.csc.edu/admissions/transfer/guides/mcc/math.csc?p=1","timestamp":"2014-04-20T08:18:32Z","content_type":null,"content_length":"4387","record_id":"<urn:uuid:8e8e8794-1e90-4d79-9a64-3602c48068f9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00529-ip-10-147-4-33.ec2.internal.warc.gz"} |
Partial Derivative
Question: Let C be the trace of the paraboloid [tex]z=9-x^2-y^2[/tex] on the plane [tex]x=1[/tex]. Find parametric equations of the tangent line L to C at the point P(1,2,4).
What I did:
[tex]z=8-y^2, x=1[/tex]
So the parametric equations of L are:
Am I doing this correct? Thanks. | {"url":"http://www.physicsforums.com/showthread.php?t=113932","timestamp":"2014-04-21T04:53:12Z","content_type":null,"content_length":"19698","record_id":"<urn:uuid:ea12df5f-cced-4bea-819a-fb52d7d81fcf>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00545-ip-10-147-4-33.ec2.internal.warc.gz"} |
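The poster's parametric equations were lost from this archive (the [tex] blocks were stripped). For comparison, one standard computation (not necessarily what the poster wrote) is:

```latex
% On the plane x = 1 the trace is z = 8 - y^2, so parametrize C by y = t:
\mathbf{r}(t) = \langle 1,\, t,\, 8 - t^2 \rangle, \qquad \mathbf{r}(2) = (1, 2, 4) = P
% Tangent direction:
\mathbf{r}'(t) = \langle 0,\, 1,\, -2t \rangle, \qquad \mathbf{r}'(2) = \langle 0, 1, -4 \rangle
% so the tangent line L is
x = 1, \qquad y = 2 + t, \qquad z = 4 - 4t
```

If the poster's line passes through P with direction proportional to (0, 1, -4), it is correct.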
Carl Friedrich Gauss
A German mathematician (1777 - 1855), one of the greatest of all time. Gauss discovered the method of least squares and Gaussian elimination.
Gauss was something of a child prodigy; the most commonly told story relates that when he was 10 his teacher, wanting a rest, told his class to add up all the numbers from 1 to 100. Gauss did it in
seconds, having noticed that 1+...+100 = 100+...+1 = (101+...+101)/2.
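The pairing trick in the anecdote is the closed form n(n + 1)/2, which is easy to check against a direct sum:

```python
def gauss_sum(n):
    # pairing 1 with n, 2 with n-1, ... gives n copies of (n + 1), counted twice
    return n * (n + 1) // 2

print(gauss_sum(100))       # 5050
print(sum(range(1, 101)))   # 5050, brute-force agreement
```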
He did important work in almost every area of mathematics. Such eclecticism is probably impossible today, since further progress in most areas of mathematics requires much hard background study.
Some idea of the range of his work can be obtained by noting the many mathematical terms with "Gauss" in their names. E.g. Gaussian elimination (linear algebra); Gaussian primes (number theory);
Gaussian distribution (statistics); Gauss [unit] (electromagnetism); Gaussian curvature (differential geometry); Gaussian quadrature (numerical analysis); Gauss-Bonnet formula (differential
geometry); Gauss's identity (hypergeometric functions); Gauss sums (number theory).
His favourite area of mathematics was number theory. He conjectured the Prime Number Theorem, pioneered the theory of quadratic forms, proved the quadratic reciprocity theorem, and much more.
He was "the first mathematician to use complex numbers in a really confident and scientific way" (Hardy & Wright, chapter 12).
He nearly went into architecture rather than mathematics; what decided him on mathematics was his proof, at age 18, of the startling theorem that a regular N-sided polygon can be constructed with
ruler and compasses if and only if N is a power of 2 times a product of distinct Fermat primes.
Last updated: 1995-04-10
Copyright Denis Howe 1985 | {"url":"http://foldoc.org/Carl+Friedrich+Gauss","timestamp":"2014-04-19T22:10:21Z","content_type":null,"content_length":"7057","record_id":"<urn:uuid:bc77ef9b-0d79-4913-8671-75b6ae04b9e5>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00408-ip-10-147-4-33.ec2.internal.warc.gz"} |
Applied Calculus for the Managerial, Life, and Social Sciences
ISBN: 9780495559696 | 0495559695
Edition: 8th
Format: Hardcover
Publisher: Cengage Learning
Pub. Date: 1/14/2010
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help | {"url":"http://www.knetbooks.com/applied-calculus-managerial-life-social/bk/9780495559696","timestamp":"2014-04-16T22:25:19Z","content_type":null,"content_length":"31886","record_id":"<urn:uuid:6c1ae30f-6cd7-4e7e-b8cb-ec589717a31a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00526-ip-10-147-4-33.ec2.internal.warc.gz"} |
Question about arguments using Du Bois complex
Let $D$ be a reduced projective scheme over $\mathbb{C}$ such that $H^1(D,\mathcal{O}_D) = 0$ and $D$ is Gorenstein. There is a map
$$r:= \frac{d \log}{2 \pi i }: H^1(D, \mathcal{O}_D^{\ast}) \otimes \mathbb{C} \rightarrow H^1(D,\Omega_D^1)$$ locally defined by $f \mapsto \frac{df}{f}$.
Question Is this homomorphism injective?
If $D$ is smooth or a V-manifold, it is known that this is injective. If $D$ is general, I considered the following argument. If there is a mistake, please let me know about it. ;
Let $(\underline{\Omega}_D^{\bullet},F)$ be the Du Bois complex on $D$. Then there is a homomorphism $H^1(D, \Omega_D^1) \rightarrow \mathbb{H}^1(D, \underline{\Omega}^1_D)$ where $\underline{\Omega}^1_D := Gr_F^1 \underline{\Omega}^{\bullet}_D[1]$.
Consider the homomorphism $t : H^1(D, \mathcal{O}_D^{\ast}) \rightarrow H^2(D, \mathbb{C})$ which is induced by the exponential exact sequence $0 \rightarrow \mathbb{Z} \rightarrow \mathcal{O}_D \rightarrow \mathcal{O}_D^{\ast} \rightarrow 0$, and the homomorphism which is the composition $H^1(D, \mathcal{O}_D^{\ast}) \stackrel{r}{\rightarrow} H^1(D, \Omega^1_D) \stackrel{s}{\rightarrow} \mathbb{H}^1(D, \underline{\Omega}^1_D) \stackrel{u}{\rightarrow} H^2(D, \mathbb{C})$.
I don't know how to define $u$ in a natural way. Take a hyperresolution $f_{\bullet}: D_{\bullet} \rightarrow D $. I think that $\underline{\Omega}_D^1$
can be expressed by using the terms come from $\Omega^1_{D_{\bullet}}$ on each $D_{\bullet}$.
and there is a complex homomorphism $\underline{\Omega}^1_D \rightarrow \underline{\Omega}_D^{\bullet}$ induced by the expression above and define $u$ by this complex homomorphism.
If $t = u \circ s \circ r$, then $r$ is injective. Is there a mistake in this argument?
Moreover, I'm not familiar with arguments using Du Bois complex. I don't know the above arguments make sense. If there are useful literatures, please let me know about it.
(add) I made a mistake in the definition of $\underline{\Omega}_D^1$: I forgot a shift by 1.
I'm readin p.174 of the book "Mixed Hodge Structures" by Peters and Steenbrink. I thought that I can define $\underline{\Omega}_D^1 \rightarrow \underline{\Omega}_D^{\bullet}$ whose homomorphisms on
$k$-th term are defined by using the direct summand inclusions $ (f_k)_{*} \Omega_{D_k}^1 \rightarrow \oplus_{p+q = k+1} ( f_q )_{*} \Omega_{D_q}^p$.
What is the homomorphism $\underline{\Omega}^1_D \rightarrow \underline{\Omega}_D^{\bullet}$ you mention? – Sándor Kovács Jul 6 '11 at 16:26
I added comments in the last. I hope it makes sense. – tarosano Jul 6 '11 at 17:40
The problem with your definition is that this is not going to be a map of complexes. – Sándor Kovács Jul 6 '11 at 21:15
If I may be honest, your argument doesn't look quite right. But the end result should be. Here's how to fix it: The image of $t=c_1$ should lie in $$ker[H^1(D,\mathbb{C})\to H^1(D,\mathcal{O}_D)]\subseteq F^1H^1(D,\mathbb{C})$$ But $im(c_1)$ is real, so it also lies in $\bar F^1$. You can identify $F^1\cap \bar F^1=H^1(D,\underline{\Omega}_D^\dt)$. Now proceed as above. (See Barbieri-Viale/Srinivas Crelle 1994, and my paper with Kang in Comm. Algebra, 2011 for some related stuff.) – Donu Arapura Jul 6 '11 at 21:21
Thank you for the great comments! I wish I had a right to vote. – tarosano Jul 7 '11 at 7:28
1 Answer
Let me convert my obscure comment into an (obscure?) answer, with some corrections.
The problem with your argument as it stands is that a map is not well defined: $H^1(D,\underline{\Omega}_D^1)$ is only a subquotient of $H^2(D,\mathbb{C})$. But the problem is minor. To fix things observe that $H^2(D)$ carries a mixed Hodge structure. Also the image $im(c_1)$ of the Chern class map (what you call $t$) lies in $F^1\cap \bar F^1$ because it lies in $$ker[H^2(D,\mathbb{C})\to H^2(D,\mathcal{O}_D)]\subseteq F^1$$ and is invariant under conjugation. The space $F^1\cap \bar F^1$ maps injectively to $H^1(D,\underline{\Omega}_D^1)=Gr_F^1H^2(D,\mathbb{C})$. Now by your assumption $c_1$ is injective, so the map to $H^1(D,\underline{\Omega}_D^1)$ is also injective, and as (the normalized) $dlog$ factors through it, it is also injective.
References: I'm using the standard facts from the papers of Deligne and Du Bois (which should be in Peters-Steenbrink). For more information about $c_1$ in this setting, see Barbieri
Viale and Srinivas "The Neron Severi groups and the mixed Hodge structure on $H^2$" Crelles (1994); for higher Chern classes, see my paper with Kang "Kaehler-de Rham cohomology and Chern
classes" Commun. Alg. 2011.
Thank you very much for the details. Can I ask a silly question? $F^1 \cap \overline{F}^2 = 0$ because they are filtrations associated to the mixed Hodge structure on $H^2(D, \mathbb{C})$, is it right? They don't always satisfy that $F^1 + \overline{F}^2 = H^2(D, \mathbb{C})$, right? – tarosano Jul 7 '11 at 14:00
Yes, that's right. It helps to visualize the Hodge numbers of $H^2(D)$ as lying in the triangle $p+q\le 2$ in the $pq$-plane. $F^1$ is the intersection with the half plane $p\ge 1$,
and $\bar F^2$ is $q\ge 2$. – Donu Arapura Jul 7 '11 at 14:54
| {"url":"http://mathoverflow.net/questions/69643/question-about-arguments-using-du-bois-complex","timestamp":"2014-04-21T04:46:51Z","content_type":null,"content_length":"61378","record_id":"<urn:uuid:5eb20cad-8791-4e4a-bb0d-0cc9e0413d72>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00019-ip-10-147-4-33.ec2.internal.warc.gz"} |
Planar graph
July 24th 2010, 03:45 AM #1
Senior Member
Apr 2009
Planar graph
"If a connected, planar graph is drawn on a plane then the plane can be divided into continuous regions called faces. A face is characterised by the cycle that forms its boundary."
This is what my book says, and then it goes on to illustrate an example by saying this graph:
... has 4 faces, namely A, B, C, D (D is the face bounded by the cycle (1,2,3,4,6,1)
however shouldn't there be at least 6 faces? ie a face bounded by (1,2,5,4,6,1) and (1,2,3,4,5,1)...?
Many thanks~
That's probably a matter of definition: do you allow complex faces that can be further subdivided into constituent faces? My guess is no.
hmm I am not sure, the book doesn't mention it either, however I thought what you did too, but if they count (1,2,3,4,6,1) as a face, isn't that also made up of constituent faces?
A planar graph partitions the plane into regions called faces.
The operative word there is partition, disjoint cells.
So there are only four faces in that graph.
Hmm but isn't (1,2,3,4,6,1) made up of A+B+C?
Also for a planar graph, the edges must all be STRAIGHT lines right? Because if they are not... you can bend the lines for a non-planar graph into a planar graph XD
I'm reminded of the joke that has an engineer, a physicist, and a mathematician fencing off the maximum area with a fixed length of fence. The engineer arranges the fence in a circle and claims
to have fenced off the maximum area. The physicist puts the fence in a straight line, and says, "We can assume the fence goes off into infinity, hence I've fenced off half the earth." The
mathematician just laughs at the other two. He builds a tiny fence around himself, and says, "I declare myself to be on the outside."
In a similar fashion, the face D is, I think, "Everything else." If you were to think of this graph as describing a solid, D would be the face consisting of everything else. That's why it's not a
composite face.
I would agree with Plato if the phrase "divided" is a synonym for "partitioned."
[EDIT]: Edges don't have to be straight lines. Planarity has nothing to do with the straightness of lines. It has everything to do with the particular arrangement of edges and vertices.
Last edited by Ackbeet; July 24th 2010 at 04:59 AM. Reason: Straight lines.
Oh yes I get it! Thank you very much Ackbeet and Plato for clearing up the confusion.
Oh and for the straightness of lines thing, I get it now. I was thinking that if the lines don't have to be straight you could somehow bend them so that no lines will ever intersect, but after experimenting on a K_{3,3} graph I've convinced myself it can't be done!
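That experiment can be confirmed with the edge bound that follows from Euler's formula V - E + F = 2: a connected simple bipartite (triangle-free) planar graph on at least 3 vertices must satisfy E <= 2V - 4, because every face boundary needs at least 4 edges. A quick sketch (the function name is mine, not from the thread):

```python
def passes_bipartite_planarity_bound(v, e):
    # necessary (not sufficient) condition for a connected simple
    # triangle-free graph on v >= 3 vertices to be planar
    return e <= 2 * v - 4

# K_{3,3}: 6 vertices, 9 edges -- fails the bound, so no drawing, bent edges or not,
# can avoid a crossing
print(passes_bipartite_planarity_bound(6, 9))  # False
```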
You're very welcome.
July 24th 2010, 07:21 AM #8 | {"url":"http://mathhelpforum.com/discrete-math/151853-planar-graph.html","timestamp":"2014-04-18T13:37:06Z","content_type":null,"content_length":"52727","record_id":"<urn:uuid:89fc9c0a-61f1-4948-95f0-e196f4b5d242>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Clock Problem
Re: Clock Problem
Sorry, that was too stupid a question.
A clock is set right at 7 am. The clock gains 10 minutes in 24 hrs. What will the approximate time when the clock indicates 1 pm on the following day?
Is it 12:47:30 pm?
Last edited by Agnishom (2013-01-01 00:53:18)
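Under the usual reading of such problems (the faulty clock runs uniformly fast, showing 24 h 10 min for every 24 true hours), the exact value can be computed directly; this sketch suggests the answer is closer to 12:47:35 pm than to 12:47:30 pm:

```python
# the clock shows 145 minutes for every 144 true minutes
shown_minutes = 30 * 60                    # 7 am to 1 pm the next day, on the dial
true_minutes = shown_minutes * 144 / 145   # true elapsed time

hours, rem = divmod(true_minutes, 60)
minutes = int(rem)
seconds = round((rem - minutes) * 60)
print(int(hours), minutes, seconds)  # 29 47 35 -> 7:00 am + 29 h 47 min 35 s ≈ 12:47:35 pm
```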
'Who are you to judge everything?' -Alokananda | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=247192","timestamp":"2014-04-21T12:32:05Z","content_type":null,"content_length":"34967","record_id":"<urn:uuid:434c56ba-1045-4092-b121-1c6f55062498>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00434-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - Torque calculations for logging winch
Thanks for your answer. I guess this is where I am getting confused - you say that changing shaft diameter will not affect torque -
This is what I dont understand - on the last pulley if I had a 1" shaft sticking out you are saying that this 1" shaft will pull with the same force as a 10" shaft -??
sO SAY THE 1" SHAFT CAN PULL 7500# ARE YOU SAYING A 10" SHAFT COULD PULL 7500#?
SAY THE RPM OF THE SHAFT IS 1OO
THE 1" SHAFT WOULD THEN PULL IN A 7500 # LOG 314" IN ONE MINUTE
WHILE A TEN INCH SHAFT PULLING 7500# WOULD PULL THE LOG IN 3140 INCHES IN ONE MINUTE
IT SEEM TO ME THAT THE FORCE WOULD BE MUCH LESS WITH THE TEN INCH SHAFT THEN THE ONE INCH SHAFT ?? aM i USING THE TERM TORQUE WRONG ??
ps THE 300 FT LB IS JUST A hypothetical | {"url":"http://www.physicsforums.com/showpost.php?p=3745079&postcount=3","timestamp":"2014-04-19T02:12:36Z","content_type":null,"content_length":"7890","record_id":"<urn:uuid:1e2f2bdd-c1a6-4c06-a498-7d764e9264ea>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00298-ip-10-147-4-33.ec2.internal.warc.gz"} |
centralizer of the symmetric group
If $H$ and $D$ are subgroups of $S_n$, the symmetric group on $n$ letters, such that $H\cong S_k$ and $D\cong S_{n-k}$, where $n>k>1$, $n>15$, $H=C_{S_n}(D)$ and $D=C_{S_n}(H)$, is it true that $H=Stab_{S_n}(X)$ and $D=Stab_{S_n}(X^c)$, where $X$ is a subset of $\{1,2,\dots,n\}$ such that $|X|=n-k$?
Is there any specific reason why you assume $n>15$? Is the result wrong for smaller $n$? – Johannes Ebert Jun 12 '12 at 7:33
It looks like you are endowing $\{1,\ldots,n\}$ with the standard action of $S_n$ by permutations. If that is the case, then it is easy to see that the answer is "no", because you can conjugate a
pair of standard subgroups nontrivially. If you are just asking if $H$ and $D$ are simultaneously conjugate to the subgroups listed, then the question is more interesting. – S. Carnahan♦ Jun 12
'12 at 7:41
S. Carnahan: I don't understand your comment. I believe the result is true as stated. – Derek Holt Jun 12 '12 at 9:24
@S. Carnahan: Conjugating the groups $H$ and $D$ by a permutation $\pi$ would just replace $X$ by $\pi(X)$, so the proposed conclusion, the existence of a suitable $X$, would be unaffected. –
Andreas Blass Jun 12 '12 at 10:19
2 Answers
Yes. One way to see this is to use the fact that the smallest degree faithful permutation representations of $S_n$ for $n \ge 6$ have degrees $n$ (natural action), $2n$ (imprimitive action
on cosets of $A_{n-1}$), and $n(n-1)/2$ (action on unordered pairs). I think this was probably proved before the classification of finite simple groups, but I am not sure. A general
reference for maximal subgroups of $A_n$ and $S_n$ is
Liebeck, Martin W.; Praeger, Cheryl E.; Saxl, Jan. A classification of the maximal subgroups of the finite alternating and symmetric groups. J. Algebra 111 (1987), no. 2, 365–383.
Let's assume $n \ge 11$ (although I think your result is true for $n \ge 7$) and $k \ge n-k$, so $k \ge 6$. So the only faithful permutation representations of $S_k$ of degree at most $n$
have degree $k$, or $2k$ when $n=2k$.
The imprimitive transitive representation of $S_k$ of degree $2k$ has centralizer of order 2 in $S_{2k}$ (the order of the centralizer in the symmetric group of a transitive permutation group is equal to the number of fixed points of its point-stabilizer), so that cannot arise as your subgroup $H \cong S_k$.
Hence $H$ must have an orbit of length $k$ and act naturally on that orbit. If it had two orbits of length $k$ (with $n=2k$), then its centralizer in $S_n$ would have order 2, so it has a
unique such orbit. So your subgroup $D$ isomorphic to $S_{n-k}$, which centralizes $H$, must fix that orbit, and hence fix it pointwise, and so $D$ must act naturally on the remaining
points. Hence $H$ fixes all remaining points, and the result follows.
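The conclusion is easy to check by brute force for a small case, say $n=7$, $k=4$, with the points relabelled $0,\dots,6$ (this is an illustration of the statement, not part of the proof):

```python
from itertools import permutations

def compose(p, q):
    # (p . q)(i) = p(q(i)); permutations stored as tuples mapping index -> image
    return tuple(p[q[i]] for i in range(len(p)))

# D = S_3 acting on the points {4, 5, 6}, generated by (4 5) and (4 5 6)
d1 = (0, 1, 2, 3, 5, 4, 6)
d2 = (0, 1, 2, 3, 5, 6, 4)

centralizer = [p for p in permutations(range(7))
               if compose(p, d1) == compose(d1, p) and compose(p, d2) == compose(d2, p)]

print(len(centralizer))                              # 24 = |S_4|
print(all(p[4:] == (4, 5, 6) for p in centralizer))  # True: C_{S_7}(D) fixes 4, 5, 6 pointwise
```

So $C_{S_7}(D)$ is exactly the copy of $S_4$ acting naturally on the complementary four points, as the answer predicts.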
Johannes, I need $n>15$ in my work; however, Magma shows it is not true for $n=6$, but it is true for $n=7,8,9$.
If you register an account, you can edit your question instead of adding answers where they don't belong. – S. Carnahan♦ Jun 12 '12 at 8:32
One would expect the result to fail for $n=6$, because $S_6$ has outer automorphisms. Applying an outer automorphism to an $H$ and $D$ of your desired form, you'd get new $H$ and $D$ not
of that form. (In other words, an outer automorphism works as well as a conjugation in S. Carnahan's comment to the question, and, unlike a conjugation, it does not preserve the desired
conclusion.) It would be interesting if there were, for $n=6$, counterexamples other than those given by outer automorphisms. – Andreas Blass Jun 12 '12 at 10:23
| {"url":"http://mathoverflow.net/questions/99338/centralizer-of-the-symmetric-group","timestamp":"2014-04-21T07:20:30Z","content_type":null,"content_length":"59747","record_id":"<urn:uuid:484378c8-2729-4d9f-9169-3ff6c04a61cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00655-ip-10-147-4-33.ec2.internal.warc.gz"} |
Norden E. Huang
On the Trend, Detrend and the Variability of Nonlinear and Nonstationary Time Series
NASA Goddard Space Flight Center
The trend and detrend are frequently encountered terms in data analysis. Yet there is no precise mathematical definition of the trend in a data set, even though in many applications, such as financial and climatological data analysis, the trend is precisely the quantity we want to find. In other applications, such as computing the correlation function and spectral analysis, one would have to remove the trend from the data, or detrend, lest the result be overwhelmed by the DC terms. Therefore, detrend is a necessary step before meaningful results can be obtained.
As there is a lack of precise definition for the trend, detrend is also a totally ad hoc operation. In most cases, trend is taken as the result of a moving mean, a regression analysis, a filtered
operation or simple curve fitting with an a priori functional form. Yet such a trend is determined subjectively and with certain idealized assumptions. Furthermore, the trend so determined is usually
different from the quantity taken away in the detrend operation, which usually consists of setting the data to a simple linear fit of the data as the zero reference.
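The kind of extrinsically defined trend the abstract criticizes can be sketched as a centered moving mean (the window length here is an arbitrary choice, which is exactly the abstract's complaint):

```python
def moving_mean(xs, window):
    # centered moving average; the window shrinks near the boundaries
    half = window // 2
    trend = []
    for i in range(len(xs)):
        lo, hi = max(0, i - half), min(len(xs), i + half + 1)
        trend.append(sum(xs[lo:hi]) / (hi - lo))
    return trend

data = [1, 2, 4, 3, 5, 7, 6, 8]
trend = moving_mean(data, 3)
detrended = [x - t for x, t in zip(data, trend)]  # the ad hoc "detrend" step
```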
The real trend should have the following properties: First, the trend should be an intrinsic property of the data. In other words, it should be part of the data, and driven by the same mechanisms
that generate the observed or measured data. Unfortunately, most of the available methods define trend by using an extrinsic approach, such as pre-selected simple functional forms. Being intrinsic,
therefore, requires that the method used in defining the trend be adaptive. Second, the trend exists only within a given data span; therefore, it should be local and be associated with a
local scale of data length. Consequently, the trend can only be valid within that part of data, which should be shorter than a full local wavelength. Thus, with this definition, we can avoid the
difficulty encountered by most economists: “one economist’s ‘trend’ can be another’s ‘cycle.’” A new definition of the trend and variability (or volatility), based on the Hilbert-Huang Transform, will be
given, and analyses of climate data as well as NASDAQ data will be used as examples to demonstrate the application of the Hilbert-Huang Transform.
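For contrast with the adaptive approach the abstract advocates, the conventional "detrend" it criticizes — subtracting a least-squares straight-line fit — can be sketched as follows. This is a hypothetical NumPy illustration, not part of the talk:

```python
import numpy as np

def linear_detrend(y):
    """Remove the least-squares straight-line fit from a series.

    This is the conventional, non-adaptive detrend the abstract
    criticizes: the 'trend' is assumed a priori to be a straight line
    rather than derived adaptively from the data's own local scales.
    """
    x = np.arange(len(y), dtype=float)
    slope, intercept = np.polyfit(x, y, 1)      # least-squares line fit
    return y - (slope * x + intercept)

# A series with a known linear drift plus an oscillation:
t = np.arange(100, dtype=float)
series = 0.5 * t + 3.0 + np.sin(2 * np.pi * t / 20)

residual = linear_detrend(series)
# The fitted drift is removed; the zero-mean oscillation remains.
```

Because the straight-line form is imposed rather than discovered, a nonlinear trend in the data would survive this operation — exactly the limitation the Hilbert-Huang approach is meant to address.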
Posts about Physical Chemistry on The Chemical Statistician
March 20, 2014
Much of chemistry concerns the interactions of the outermost electrons between different chemical species, whether they are atoms or molecules. The properties of these outermost electrons depends in
large part to the charge that the protons in the nucleus exerts on them. Generally speaking, an atom with more protons exerts a larger positive charge. However, with the exception of hydrogen, this
positive charge is always less than the full nuclear charge. This is due to the negative charge of the electrons in the inner shells, which partially offsets the positive charge from the nucleus.
Thus, the net charge that the nucleus exerts on the outermost electrons – the effective nuclear charge – is less than the charge that the nucleus would exert if there were no inner electrons between the nucleus and the outermost electrons.
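As a rough illustration of this shielding idea, the effective nuclear charge is often estimated as Z_eff = Z − S, where the screening constant S is built from empirical rules such as Slater's. The sketch below is a simplified, hypothetical implementation (it ignores the 1s special case and d/f groups of Slater's full rules):

```python
def zeff_valence_sp(Z, shells):
    """Effective nuclear charge Z_eff = Z - S for the outermost s/p
    electron, with the screening constant S from a simplified version
    of Slater's rules (illustrative only).

    shells: list of (n, electron_count), e.g. Na = [(1, 2), (2, 8), (3, 1)]
    """
    n_val = max(n for n, _ in shells)
    S = 0.0
    for n, count in shells:
        if n == n_val:
            S += 0.35 * (count - 1)   # other electrons in the same group
        elif n == n_val - 1:
            S += 0.85 * count         # electrons one shell inside
        else:
            S += 1.00 * count         # deeper shells shield fully
    return Z - S

# Sodium, Z = 11, configuration 1s2 2s2 2p6 3s1:
zeff_na = zeff_valence_sp(11, [(1, 2), (2, 8), (3, 1)])
# S = 0.85*8 + 1.00*2 = 8.8, so Z_eff = 2.2 -- far below the full +11.
```

The single 3s electron of sodium thus "sees" a charge of only about +2.2, which is why it is held far more loosely than the full nuclear charge would suggest.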
Maths Question Paper for Class 11 CBSE
Knowing Math is important, as this subject enables students to choose their preferred career. Poor knowledge of Math restricts students from exploring many fields. To give importance to this subject, the
CBSE board has designed each Math syllabus carefully. Under the guidance of several subject experts, the CBSE board has prepared the Math syllabus for class 11 and also made it available online.
Additionally, the board has prepared suitable question papers for Math that evaluate students’ expertise at the end of every academic session. Maths question paper for Class 11 CBSE represents the
praiseworthy educational pattern of the concerned board. Moreover, CBSE question papers are available year-wise. Hence, students can collect these question papers and take help from them at a
convenient time.
Maths Question Papers for Class 11 CBSE 2013
Math is a basic subject and it is included in each CBSE syllabus in a requisite manner. Each CBSE syllabus is designed under the strict vigilance of several subject experts associated with the
concerned board and hence, CBSE syllabus is specified as a worthwhile study material for students. Moreover, the board also prepares question papers for each academic session and these question
papers assess students’ expertise in a through manner. The board follows CCE pattern and hence, students’ learning problems are detected and consequently, they get suitable assistance to improve
their performance in exams. Maths question papers for class 11 CBSE 2013 are designed by following all CBSE guidelines. Therefore, students’ expertise is thoroughly assessed by these papers.
CBSE Maths Question Paper for Class 11 2012
CBSE Maths syllabus is undoubtedly a great learning resource for students. By using this syllabus thoroughly, students can get requisite knowledge in a step-by-step manner. Additionally, to assess
students’ knowledge properly, the board also prepares suitable question papers for each subject. These question papers are designed for each academic session and the board makes these question papers
available online for the convenience of students. CBSE Maths question paper for class 11 2012 is good to follow to get requisite knowledge about the original question paper. Additionally, this
question paper follows a standard educational pattern in all respects and therefore, it assesses students’ knowledge in a proper manner.
Question Paper Maths for Class 11 CBSE 2011
Question papers are designed to evaluate students’ knowledge in a right manner. In brief, question paper acts as a measuring tool that assesses students’ understanding in each subject. Specific
question paper is prepared for specific subject and most importantly, each CBSE question paper is designed under the guidance of several subject experts. Hence, students’ knowledge is evaluated
thoroughly. Question papers play a vital role in the examination process and based on the performance in exams; students are recognized at the end of each academic session. Therefore, question papers
and students’ result are quite interlinked. Question paper Maths for class 11 CBSE 2011 is available online and students can use this as a reference.
Results 1 - 10 of 42
- JOURNAL OF THE ACM , 1985
Cited by 161 (12 self)
This paper reports several properties of heuristic best-first search strategies whose scoring functions f depend on all the information available from each candidate path, not merely on the current
cost g and the estimated completion cost h. It is shown that several known properties of A* retain their form (with the min-max of f playing the role of the optimal cost), which helps establish general
tests of admissibility and general conditions for node expansion for these strategies. On the basis of this framework the computational optimality of A*, in the sense of never expanding a node that
can be skipped by some other algorithm having access to the same heuristic information that A* uses, is examined. A hierarchy of four optimality types is defined and three classes of algorithms and
four domains of problem instances are considered. Computational performances relative to these algorithms and domains are appraised. For each class-domain combination, we then identify the strongest
type of optimality that exists and the algorithm for achieving it. The main results of this paper relate to the class of algorithms that, like A*, return optimal solutions (i.e., admissible) when all
cost estimates are optimistic (i.e., h ≤ h*). On this class, A* is shown to be not optimal and it is also shown that no optimal algorithm exists, but if the performance tests are confined to cases
in which the estimates are also consistent, then A* is indeed optimal. Additionally, A* is also shown to be optimal over a subset of the latter class containing all best-first algorithms that are
guided by path-dependent evaluation functions.
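The setting this abstract analyzes — best-first search scored by f = g + h with an admissible heuristic (h ≤ h*) — can be sketched minimally. The grid problem, costs, and names below are illustrative, not taken from the paper:

```python
import heapq

def astar(start, goal, neighbors, h):
    """Best-first search scored by f(n) = g(n) + h(n).  When h is
    admissible (never overestimates the remaining cost), the cost
    returned is optimal -- the property the abstract's analysis rests on."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    while frontier:
        f, g, node = heapq.heappop(frontier)
        if node == goal:
            return g
        if g > best_g.get(node, float("inf")):
            continue                              # stale heap entry
        for nxt, w in neighbors(node):
            ng = g + w
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h(nxt), ng, nxt))
    return None

# Hypothetical test problem: an open 5x5 grid with unit moves, where
# Manhattan distance is both admissible and consistent.
def grid_neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nx, ny = x + dx, y + dy
        if 0 <= nx < 5 and 0 <= ny < 5:
            yield (nx, ny), 1

cost = astar((0, 0), (4, 3), grid_neighbors,
             lambda p: abs(p[0] - 4) + abs(p[1] - 3))
# → 7, the true shortest-path cost
```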
, 2005
Cited by 117 (5 self)
Relational, XML and HTML data can be represented as graphs with entities as nodes and relationships as edges. Text is associated with nodes and possibly edges. Keyword search on such graphs has
received much attention lately. A central problem in this scenario is to efficiently extract from the data graph a small number of the "best" answer trees. A Backward Expanding search, starting at
nodes matching keywords and working up toward confluent roots, is commonly used for predominantly text-driven queries. But it can perform poorly if some keywords match many nodes, or some node has
very large degree. In this paper
- ACM Trans. Graph , 2005
Cited by 67 (0 self)
The computation of geodesic paths and distances on triangle meshes is a common operation in many computer graphics applications. We present several practical algorithms for computing such geodesics
from a source point to one or all other points efficiently. First, we describe an implementation of the exact “single source, all destination ” algorithm presented by Mitchell, Mount, and
Papadimitriou (MMP). We show that the algorithm runs much faster in practice than suggested by worst case analysis. Next, we extend the algorithm with a merging operation to obtain computationally
efficient and accurate approximations with bounded error. Finally, to compute the shortest path between two given points, we use a lower-bound property of our approximate geodesic algorithm to
efficiently prune the frontier of the MMP algorithm, thereby obtaining an exact solution even more quickly.
- IN WORKSHOP ON ALGORITHM ENGINEERING & EXPERIMENTS , 2006
Cited by 60 (5 self)
We study the point-to-point shortest path problem in a setting where preprocessing is allowed. We improve the reach-based approach of Gutman [16] in several ways. In particular, we introduce a
bidirectional version of the algorithm that uses implicit lower bounds and we add shortcut arcs which reduce vertex reaches. Our modifications greatly reduce both preprocessing and query times. The
resulting algorithm is as fast as the best previous method, due to Sanders and Schultes [27]. However, our algorithm is simpler and combines in a natural way with A∗ search, which yields
significantly better query times.
, 2003
Cited by 53 (14 self)
In this paper, we consider Dijkstra's algorithm for the single source single target shortest paths problem in large sparse graphs. The goal is to reduce the response time for online queries by using
precomputed information. For the result of the preprocessing, we admit at most linear space. We assume that a layout of the graph is given. From this layout, in the preprocessing, we determine for
each edge a geometric object containing all nodes that can be reached on a shortest path starting with that edge. Based on these geometric objects, the search space for online computation can be
reduced significantly. We present an extensive experimental study comparing the impact of different types of objects. The test data we use are traffic networks, the typical field of application for
this scenario.
- Journal of the ACM
"... study on realization of generalized quantum ..."
- ACM JOURNAL OF EXPERIMENTAL ALGORITHMS , 1998
Cited by 42 (22 self)
We carry out an experimental analysis of a number of shortest path (routing) algorithms investigated in the context of the TRANSIMS (TRansportation ANalysis and SIMulation System) project. The main
focus of the paper is to study how various heuristic as well as exact solutions and associated data structures affect the computational performance of the software developed for realistic
transportation networks. For this purpose we have used a road network representing with high degree of resolution the Dallas Ft-Worth urban area. We discuss and experimentally analyze various
one-to-one shortest path algorithms. These include classical exact algorithms studied in the literature as well as heuristic solutions that are designed to take into account the geometric structure
of the input instances. Computational results are provided to empirically compare the efficiency of various algorithms. Our studies indicate that a modified Dijkstra's algorithm is computationally
fast and an ex...
- In SC ’05: Proceedings of the 2005 ACM/IEEE conference on Supercomputing , 2005
Cited by 39 (2 self)
Many emerging large-scale data science applications require searching large graphs distributed across multiple memories and processors. This paper presents a distributed breadthfirst search (BFS)
scheme that scales for random graphs with up to three billion vertices and 30 billion edges. Scalability was tested on IBM BlueGene/L with 32,768 nodes at the Lawrence Livermore National Laboratory.
Scalability was obtained through a series of optimizations, in particular, those that ensure scalable use of memory. We use 2D (edge) partitioning of the graph instead of conventional 1D (vertex)
partitioning to reduce communication overhead. For Poisson random graphs, we show that the expected size of the messages is scalable for both 2D and 1D partitionings. Finally, we have developed
efficient collective communication functions for the 3D torus architecture of BlueGene/L that also take advantage of the structure in the problem. The performance and characteristics of the algorithm
are measured and reported. 1
- Journal of Artificial Intelligence Research , 1997
Cited by 32 (2 self)
The assessment of bidirectional heuristic search has been incorrect since it was first published more than a quarter of a century ago. For quite a long time, this search strategy did not achieve the
expected results, and there was a major misunderstanding about the reasons behind it. Although there is still wide-spread belief that bidirectional heuristic search is afflicted by the problem of
search frontiers passing each other, we demonstrate that this conjecture is wrong. Based on this finding, we present both a new generic approach to bidirectional heuristic search and a new approach
to dynamically improving heuristic values that is feasible in bidirectional search only. These approaches are put into perspective with both the traditional and more recently proposed approaches in
order to facilitate a better overall understanding. Empirical results of experiments with our new approaches show that bidirectional heuristic search can be performed very efficiently and also with
limited mem...
- In Algorithms and Theory of Computation Handbook , 1996
Cited by 25 (0 self)
Introduction Search is a universal problem-solving mechanism in artificial intelligence (AI). In AI problems, the sequence of steps required for solution of a problem are not known a priori, but
often must be determined by a systematic trial-and-error exploration of alternatives. The problems that have been addressed by AI search algorithms fall into three general classes: single-agent
pathfinding problems, two-player games, and constraint-satisfaction problems. Classic examples in the AI literature of pathfinding problems are the sliding-tile puzzles, including the 3 × 3
Eight Puzzle (see Fig. 1) and its larger relatives the 4 × 4 Fifteen Puzzle, and 5 × 5 Twenty-Four Puzzle. The Eight Puzzle consists of a 3 × 3 square frame containing eight numbered
square tiles, and an empty position called the blank. The legal operators are to slide any tile that is horizontally or vertically adjacent to the blank into the blank position.
random number generator
noone wrote:
> On Sat, 21 Oct 2006 07:53:59 -0700, asdf wrote:
>> I want a random number generator, the random number should be subject a
>> uniform distribution in [0,1]. Could you please give me some hints?
>> Thanks.
> Are you sure you want (1) included in your range?
> Others have given you pointers to RNG literature. One thing I've noticed
> that seems to be an almost universal misuse of random number functions
> such as the srand() and rand() functions is that programmers who
> use them to generate numbers [0..1) virtually NEVER check to see if the
> returned value of rand()==RAND_MAX. While the probability of this
> happening is 1/RAND_MAX, you should check because rand()/RAND_MAX could
> end up outside of your intended range.
Better yet, don't check, but map the range so that this isn't a problem:
rand() / ((double)RAND_MAX + 1)
> For simple non-cryptographic uniform distributions the rand() function is
> often adequate but for more stringent requirements the BOOST library
> provides several RNGs. Of course as with many open source projects, the
> documentation is lacking...at least it was when I last used them.
There's documentation for the TR1 random number generators in chapter 13
of my book, "The Standard C++ Library Extensions." The Boost generators
are close to what's in TR1.
-- Pete
Author of "The Standard C++ Library Extensions: a Tutorial and
Reference." For more information about this book, see
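Pete's half-open mapping can be checked numerically. The sketch below uses Python as a stand-in for the C expressions, with a small illustrative value for `RAND_MAX` (the minimum the C standard guarantees):

```python
RAND_MAX = 32767   # stand-in: the minimum value C guarantees for RAND_MAX

def naive(r):
    return r / RAND_MAX          # hits exactly 1.0 when r == RAND_MAX

def half_open(r):
    return r / (RAND_MAX + 1)    # always in [0, 1); no special-case check

assert naive(RAND_MAX) == 1.0        # the edge case the first reply warns about
assert 0.0 <= half_open(RAND_MAX) < 1.0
assert half_open(0) == 0.0
```

Dividing by `RAND_MAX + 1` shifts the whole range so the largest possible return value still falls strictly below 1, which is exactly why no `rand() == RAND_MAX` check is needed.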
Math Forum Discussions - Re: closed universe, flat space?
Date: Apr 29, 2013 3:05 PM
Author: dan.ms.chaos@gmail.com
Subject: Re: closed universe, flat space?
On Apr 29, 9:22 pm, RichD <r_delaney2...@yahoo.com> wrote:
> On Apr 24, Dan <dan.ms.ch...@gmail.com> wrote:
> > > Supposedly, our universe is closed and finite,
> > > a straight line (geodesic) traveler must return
> > > to his starting poiint, yes/no? Hence, curved space.
> > > At the same time, astronomers claim, that
> > > space is flat, to the precision of their
> > > measurements.
> > > So, space is closed, but also flat... back in my
> > > day, they had something called a logical
> > > contradiction -
> > Space can be 'closed' , and also, 'locally flat',
> > in the sense that the Riemann tensor vanishes , or
> > there exists, for any point of the space, a non-
> > infinitesimal spherical section around that point
> > that's indistinguishable from flat space .
> > Consider a piece of paper: flat? Yes. Closed? No.
> > You can go off the edge.
> um yeah
> Finally, somebody gets it -
> > Now make it so that when you go trough the 'up' edge
> > you end up coming from the 'down' edge , and when
> > you go go trough the 'left'
> > edge you end up coming from the 'right' edge .
> And to do that, you have to twist the paper into a cylinder... twist,
> flat... see the problem here?
> > More specifically, this
> > space is the factor group (R^2) / (Z^2) . The
> > space is still flat, as
> > far as definitions tell . However, it's closed.
> wooosh! Over my head -
> --
> Rich
First of all, it's more like folding a napkin and gluing its edges
than it is folding a 'cylinder' (you can try it if you want, great way
to learn topology) .
Second, it doesn't matter what its "outside geometry" looks like.
What matters is what the observers living "inside" the space notice.
The "outside geometry" is inaccessible to the 'inside observers' .What
matters is the relationship of the "inside geometry" to itself .
Let's say I have a flat , plastic blanket, and some people living
purely within the world of the plastic blanket , with normal time
(same as our time ) . Now , I proceed to 'fold the blanket' . What
would the observers living 'inside the blanket' notice? Has anything
changed 'inside the blanket' ? Light along the blanket still travels
its shortest path , that is , along whatever fold I made in the
blanket , as to be a straight line in the 'unfolded blanket' . The
observers wouldn't notice anything has changed . In fact, for them ,
nothing has changed .
Let's say now , that I heat up a small portion of the blanket , so
that it 'expands' , and is no longer as flat as the rest of the
blanket . Would the observers notice? Most definitely . How so?
This is a great program to learn how it feels to live in a
significantly curved universe .
What properties of a space can you deduce purely from living 'inside
the space'? Well, clearly, you can't deduce it's 'outside shape' to an
arbitrary degree , as our blanket example illustrates . But , you can
find out about it's 'intrinsic curvature' , something independent of
the shape you fold it it . (a blanket is still a blanket, having the
same 'internal geometry' no matter how you fold it)
Let's say our observers are living in a perfect sphere (or a surface
with 'sphere-like' internal geometry ) . That means it has the same
non-zero 'intrinsic curvature' everywhere . But, can our observers
notice the 'intrinsic curvature' ?
Yes . Inside a sphere , they can build a triangle with three angles of
90 degrees . That clearly means something funky is going on with the
space .
Hoverer , inside my folded paper example, they can only build normal
triangles, who's angles sum up to 180 degrees . That's why the sphere
has curvature while the folded paper has none . In fact, curvature can
be defined starting from the 'excess degrees' in some small triangle
around the region . If it has more than 180 degrees ,then you're
dealing with spherical geometry (positive curvature ).
If it has less than 180 degrees , then you're dealing with hyperbolic
geometry (negative curvature).
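Dan's three-right-angle triangle can be checked numerically: on the unit sphere, the octant triangle's angles sum to 270°, and the 90° of excess over a flat triangle's 180° is the signature of positive intrinsic curvature. A small illustrative sketch (not from the thread):

```python
import numpy as np

def vertex_angle(a, b, c):
    """Angle at vertex a of the geodesic triangle a-b-c on the unit
    sphere: the angle between the tangents of the arcs a->b and a->c."""
    tb = b - np.dot(a, b) * a        # project b onto the tangent plane at a
    tc = c - np.dot(a, c) * a
    cosang = np.dot(tb, tc) / (np.linalg.norm(tb) * np.linalg.norm(tc))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Octant triangle: one vertex on each coordinate axis of the unit sphere.
A = np.array([1.0, 0.0, 0.0])
B = np.array([0.0, 1.0, 0.0])
C = np.array([0.0, 0.0, 1.0])

angle_sum = (vertex_angle(A, B, C) + vertex_angle(B, C, A)
             + vertex_angle(C, A, B))
# → 270 degrees: 90 degrees of excess over a flat triangle's 180,
#   the signature of positive intrinsic curvature.
```

On the flat folded-paper torus of Dan's example, the same computation would always return exactly 180 degrees — which is precisely why the observers there detect no curvature.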
How to Find Vertical Asymptotes of a Rational Function
Edited by Lucky7, Luv_sarah, Lael Rapier, Oliver and 4 others
Finding the vertical asymptotes is one of the first things you need to do in order to graph a rational function. Read this article to find out how.
1. 1
Understand what a vertical asymptote is. A vertical asymptote is a vertical line that the graph of the function approaches but never crosses. Since it is a vertical line, its equation is in the form x = a
(plugging in a for x in f(x) would make it undefined). Read on to learn how to find the value(s) of a.
□ In the picture below, the vertical asymptotes to the function (graphed in green) are shown by red-dashed vertical lines.
2. 2
Factor the numerator and the denominator.
3. 3
Cancel out any common factors from the numerator and the denominator
4. 4
Set the denominator equal to zero and solve the resulting equation to get different value(s) of x.
5. 5
Plug these value(s) for a in x = a and each time you do this you get the equation of a vertical asymptote of the given rational function!
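The steps above can be checked on a concrete example. For f(x) = (x² − 1)/(x² − 3x + 2), the numerator factors as (x − 1)(x + 1) and the denominator as (x − 1)(x − 2); the common factor (x − 1) cancels, so the only vertical asymptote is x = 2 (x = 1 is just a removable hole). A quick numerical sketch (example chosen for illustration):

```python
from math import isclose

def f(x):
    return (x**2 - 1) / (x**2 - 3*x + 2)

# Steps 2-3: numerator factors as (x - 1)(x + 1), denominator as
# (x - 1)(x - 2); cancelling the shared (x - 1) leaves (x + 1)/(x - 2).
# Steps 4-5: setting the reduced denominator to zero gives the single
# vertical asymptote x = 2; x = 1 is only a removable hole.

assert abs(f(2 + 1e-6)) > 1e5                    # blows up near x = 2
assert isclose(f(1 + 1e-9), -2.0, abs_tol=1e-6)  # finite limit near x = 1
```

Note how skipping step 3 (cancelling) would wrongly flag x = 1 as an asymptote, even though the function merely approaches −2 there.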
Filter noise and interpolate microscopy images in frequency domain
07 Feb 2013 (Updated 01 Mar 2013)
Remove spatial frequencies beyond the optical cutoff and perform physically accurate interpolation.
function [out,outscale]=opticalLowpassInterpolation(in,inscale,fcut,IntFac)
% opticalLowpassInterpolation Filter and interpolate microscopy image while avoiding artifacts.
% Images acquired with lenses (microscope, camera) can possess fine
% features only upto the spatial frequency cut-off of the optics. Any fine
% features beyond that are due to noise. A simple and effective noise
% removal strategy is to remove the above-cutoff spatial frequencies.
% opticalLowpassInterpolation implements filtering of the imaging data with
% super-gaussian filter in such a way that filtering artifacts are
% minimized.
% The image can be optionally interpolated in frequency domain while
% maintaining physical variation in intensity.
% USAGE: [out, outscale] =
% opticalLowpassInterpolation(in,inscale,fcut,IntFac)
% OUTPUTS:
% out - filtered and interpolated output image.
% outscale - pixel-size in the output image.
% INPUTS:
% in - raw image
% inscale - pixel-size in the input image (numerical
% value in your chosen units of distance).
% fcut - spatial-frequency cutoff
% (numerical value in the inverse units of distance).
% IntFac - Interpolation factor.
% Author and Copyright: Shalin Mehta (www.mshalin.com)
% License: BSD
% Version history: April 2012, initial implementation with gaussian filter.
% August 2012, use super-gaussian filter.
% Feb 10, 2013, added functionality to interpolate in
% frequency domain.
% Feb 27, 2013, Order of super-gaussian is intelligently estimated from the sampling.
% Resolved bug that caused the center of
% intensity to shift (phase-shift in space)
% when interpolation is used.
%% Pad the input image to avoid edge artifacts.
% NOTE: several executable lines were lost from this listing; the
% statements marked (reconstructed) below are a plausible sketch of the
% missing steps, not the author's original code. padarray requires the
% Image Processing Toolbox.
inpad=padarray(in,size(in),'replicate'); % (reconstructed)
%% Obtain the spectrum with DC at center of image.
spec=fftshift(fft2(ifftshift(inpad))); % (reconstructed)
%% Pad the spectrum to achieve spatial interpolation.
specpad=padarray(spec,round(0.5*(IntFac-1)*size(spec))); % (reconstructed)
outscale=inscale*size(spec,1)/size(specpad,1); % (reconstructed) output pixel-size
%% Generate frequency grid for padded spectrum.
mcut=1/(2*outscale); %Cut-off of frequency grid.
[ylen, xlen]=size(specpad); % The first return value is the height and the second is the width.
mx=linspace(-mcut,mcut,xlen); % (reconstructed)
my=linspace(-mcut,mcut,ylen); % (reconstructed)
mxx=repmat(mx,[ylen 1]);
myy=repmat(my',[1 xlen]);
%% Estimate the sharpness of supergaussian and generate filter over above grid.
% Equation of supergaussian is a=exp(-(f/fo)^(2n)).
% I choose the transition of the filter response from 0.99 to 0.01 to
% be sampled over 5 frequency bins.
% The filter response is equal to a, when
% f/fo = (-Log[a])^(1/(2n)).
% The super-gaussian filter is = 1/e when f=fo.
% The transition region normalized by fcut is thus given by,
% mtrans/fcut = (-Log[0.01])^(1/(2n)) - (-Log[0.99])^(1/(2n)).
% Using Mathematica, above is simplified to
% mtrans/fcut=E^(0.764/n)-E^(-2.3/n)
% Above value is computed for various integer values of n and tabulated
% here. To estimate n from given mtrans/fcut, we just find the index
% where required mtrans/fcut is closest to the entry in the table.
% Size of frequency step in radial direction.
dm=sqrt( (mx(2)-mx(1))^2 + (my(2)-my(1))^2 );
% We want transition period to be at least 3 frequency bins and transition region should start at fcut.
mtrans=3*dm; % (reconstructed) transition period
normmtrans=mtrans/fcut; % transition period normalized by fcut.
% (reconstructed) Tabulate E^(0.764/n)-E^(-2.3/n) for integer n and pick
% the entry closest to the required normalized transition:
norder=1:50; % (reconstructed)
transtable=exp(0.764./norder)-exp(-2.3./norder); % (reconstructed)
[~,n]=min(abs(transtable-normmtrans)); % The suitable order of super-gaussian.
% Use super-gaussian as our frequency filter.
mrr=sqrt(mxx.^2+myy.^2); % (reconstructed) radial frequency
filter=exp(-(mrr/fcut).^(2*n)); % (reconstructed)
%% Filter the padded spectrum and obtain interpolated spatial image.
specfilt=specpad.*filter; % (reconstructed)
% Compute filtered, interpolated, zero-padded output.
outpad=IntFac^2*real(fftshift(ifft2(ifftshift(specfilt)))); % (reconstructed; IntFac^2 preserves intensity scaling)
% Crop the center of the output image.
cenlen2idx=@(cen,len) cen-ceil((len-1)/2):cen+floor((len-1)/2);
ycen=floor(size(outpad,1)/2)+1; xcen=floor(size(outpad,2)/2)+1; % (reconstructed)
out=outpad(cenlen2idx(ycen,round(IntFac*size(in,1))),cenlen2idx(xcen,round(IntFac*size(in,2)))); % (reconstructed)
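The super-Gaussian frequency filter at the heart of the routine above can be sketched in a simplified 1-D NumPy analogue. This is not a port of the MATLAB function (no padding, no interpolation, no adaptive order estimation); the signal and parameter values are illustrative:

```python
import numpy as np

def supergauss_lowpass(signal, dx, fcut, order):
    """1-D frequency-domain low-pass with the super-Gaussian response
    H(f) = exp(-(|f|/fcut)^(2*order)) -- a simplified analogue of the
    MATLAB routine above (no padding, no interpolation)."""
    f = np.fft.fftfreq(signal.size, d=dx)           # frequency axis
    H = np.exp(-(np.abs(f) / fcut) ** (2 * order))  # smooth cutoff
    return np.real(np.fft.ifft(np.fft.fft(signal) * H))

x = np.arange(1024) / 1024.0                       # one unit of length
clean = np.sin(2 * np.pi * 5 * x)                  # 5 cycles: well below fcut
noisy = clean + 0.3 * np.sin(2 * np.pi * 100 * x)  # 100 cycles: "noise"

filtered = supergauss_lowpass(noisy, dx=1/1024, fcut=20.0, order=4)
# The 100-cycle component is suppressed essentially to zero while the
# 5-cycle component passes through almost unchanged.
```

Unlike a hard cutoff, the smooth super-Gaussian roll-off avoids the ringing artifacts that an abrupt spectral truncation would introduce — the same motivation as in the MATLAB routine.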
Remove spatial frequencies beyond the optical cutoff and perform physically accurate interpolation.
function [out,outscale]=opticalLowpassInterpolation(in,inscale,fcut,IntFac)
% opticalLowpassInterpolation Filter and interpolate microscopy image while avoiding artifacts.
%
% Images acquired with lenses (microscope, camera) can possess fine
% features only upto the spatial frequency cut-off of the optics. Any fine
% features beyond that are due to noise. A simple and effective noise
% removal strategy is to remove the above-cutoff spatial frequencies.
%
% opticalLowpassInterpolation implements filtering of the imaging data with
% super-gaussian filter in such a way that filtering artifacts are
% minimized.
%
% The image can be optionally interpolated in frequency domain while
% maintaining physical variation in intensity.
%
% USAGE: [out, outpix] = opticalLowpassInterpolation(in,inpix,fcut,IntFac)
%
% OUTPUTS:
% out - filtered and interpolated output image.
% outpix - pixel-size in the output image.
%
% INPUTS:
% in - raw image
% inpix - pixel-size in the input image (numerical
%        value in your chosen units of distance).
% fcut - spatial-frequency cutoff
%        (numerical value in the inverse units of distance).
% IntFac - Interpolation factor.
%
% Author and Copyright: Shalin Mehta (www.mshalin.com)
% License: BSD
% Version history: April 2012, initial implementation with gaussian filter.
%   August 2012, use super-gaussian filter.
%   Feb 10, 2013, added functionality to interpolate in frequency domain.
%   Feb 27, 2013, Order of super-gaussian is intelligently estimated from the sampling.
%     Resolved bug that caused the center of intensity to shift (phase-shift in space)
%     when interpolation is used.

%% Pad the input image to avoid edge artifacts.
padsize=floor(0.5*size(in));
inpadded=padarray(in,padsize,'replicate','both');
inpadded=double(inpadded);

%% Obtain the spectrum with DC at center of image.
spec=fftshift(fft2(ifftshift(inpadded)));

%% Pad the spectrum to achieve spatial interpolation.
specpadsize=floor(0.5*size(spec)*(IntFac-1));
specpad=padarray(spec,specpadsize,'replicate','both');

%% Generate frequency grid for padded spectrum.
outscale=inscale/IntFac;
mcut=1/(2*outscale); %Cut-off of frequency grid.
[ylen, xlen]=size(specpad); % The first return value is the height and the second is the width.
mx=linspace(-mcut,mcut,xlen);
my=linspace(-mcut,mcut,ylen);
mxx=repmat(mx,[ylen 1]);
myy=repmat(my',[1 xlen]);
mrr=sqrt(mxx.^2+myy.^2);

%% Estimate the sharpness of supergaussian and generate filter over above grid.
%
% Equation of supergaussian is a=exp(-(f/fo)^(1/2n)).
%
% I choose the transition of the filter response from 0.99 to 0.01 to
% be sampled over 5 frequency bins.
% The filter response is equal to a, when
% f/fo = -Log[a]^(1/2n).
%
% The super-gaussian filter is = 1/e when f=fo.
% The transition region normalized by fcut is thus given by,
% mtrans/fcut = -Log[0.01])^(1/2n) + Log[0.99])^(1/2n).
% Using Mathematica, above is simplified to
% mtrans/fcut=E^(0.764/n)-E^(-2.3/n)
% Above value is computed for various integer values of n and tabulated
% here. To estimate n from given mtrans/fcut, we just find the index
% where required mtrans/fcut is closest to the entry in the table.
nrange=1:100;
mtransbyfoTable=exp(0.764./nrange)-exp(-2.3./nrange);

% Size of frequency step in radial direction.
dm=sqrt( (mx(2)-mx(1))^2 + (my(2)-my(1))^2 );

% We want transition period to be atleast 3 frequency bins and transition region should start at fcut.
mtrans=3*dm;
fo=fcut+3*dm;

% transition period normalized by fcut.
mtransbyfo=mtrans/fcut;

% The suitable order of super-gaussian.
[~,n]=min(abs(mtransbyfoTable-mtransbyfo));

% Use super-gaussian as our frequency filter.
FiltFreq=exp(-(mrr/fo).^(2*n));

%% Filter the padded spectrum and obtain interpolated spatial image.
SpecFilt=specpad.*FiltFreq;

% Compute filtered, interpolated, zero-padded output.
outpad=IntFac^2*fftshift(ifft2(ifftshift(SpecFilt),'symmetric'));

% Crop the center of the output image.
outCenter=floor(0.5*size(outpad)+1);
outLen=size(in)*IntFac;
cenlen2idx=@(cen,len) cen-ceil((len-1)/2):cen+floor((len-1)/2);
idy=cenlen2idx(outCenter(1),outLen(1));
idx=cenlen2idx(outCenter(2),outLen(2));
out=outpad(idy,idx);
out=cast(out,class(in));
end

| {"url":"http://www.mathworks.com/matlabcentral/fileexchange/40207-filter-noise-and-interpolate-microscopy-images-in-frequency-domain/content/opticalLowPassInterpolation27Feb2013/ImageProcessing/opticalLowpassInterpolation.m","timestamp":"2014-04-18T06:25:48Z","content_type":null,"content_length":"24461","record_id":"<urn:uuid:9a5b91e3-1b69-4336-8ab5-fde794b13900>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
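The filtering step at the heart of the MATLAB function above can be sketched in Python with NumPy's FFT. This is a minimal illustration of super-Gaussian low-pass filtering only — the padding, interpolation, and automatic order estimation are omitted, and the function name, cutoff, and order values here are my own choices, not from the original file:

```python
import numpy as np

def supergaussian_lowpass(img, pixel_size, fcut, order=8):
    """Attenuate spatial frequencies beyond fcut with a super-Gaussian filter.

    img        : 2-D array (grayscale image)
    pixel_size : sampling interval in distance units
    fcut       : cutoff frequency in inverse distance units
    order      : half-order n of exp(-(f/fcut)^(2n)); larger n = sharper edge
    """
    ylen, xlen = img.shape
    # Frequency grids matching numpy's (unshifted) FFT sample layout.
    my = np.fft.fftfreq(ylen, d=pixel_size)
    mx = np.fft.fftfreq(xlen, d=pixel_size)
    mrr = np.sqrt(mx[None, :] ** 2 + my[:, None] ** 2)  # radial frequency
    filt = np.exp(-(mrr / fcut) ** (2 * order))         # super-Gaussian response
    spec = np.fft.fft2(img)
    return np.fft.ifft2(spec * filt).real

# A constant image contains only the DC frequency, so it passes unchanged,
# while a Nyquist-rate checkerboard is almost completely suppressed.
flat = np.ones((32, 32))
out = supergaussian_lowpass(flat, pixel_size=1.0, fcut=0.1)
```

The gentle roll-off of the super-Gaussian (versus a hard brick-wall cutoff) is what keeps ringing artifacts small, which is the same design point the MATLAB comments make.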
Math Game - Page 5 - Naruto Discussion Forum
Originally Posted by
Kakashi's Girl
The planes' intersection is the point (2,2,3)
I'm starting to get really bored with math games...
So that means x=y.
x (or y) plus 1 equals the square root of 9, so x + 1 = 3 = 2 + 1, which gives x = 2 (writing it as 1 + 2 instead of 2 + 1 is just the commutative property). Likewise 1 + 2 = z, and z equals the square root of 9, so 2 + 1 = 3 = z, which makes z = 3.
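For what it's worth, the arithmetic checks out. Here is a direct transcription of the relations the poster states (the relations are theirs, the variable assignments are mine):

```python
import math

x = math.sqrt(9) - 1   # from x + 1 = sqrt(9)
y = x                  # the post states x = y
z = 1 + 2              # from 1 + 2 = z

print((x, y, z))       # → (2.0, 2.0, 3)
```

consistent with the intersection point (2, 2, 3) quoted above.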
The Purplemath Forums
I am working on a school assignment and I am being asked to solve the following: Let A be the set of all integers x such that x = k^2 for some integer k. Let B be the set of all integers x such that the square root of x, SQRT(x), is an integer. Give a formal proof that A = B. Remember you must prove...
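The standard way to structure the requested proof is two set inclusions. A sketch, assuming the usual convention that $\sqrt{x}$ denotes the nonnegative square root:

```latex
\textbf{Claim:} $A = B$.

($A \subseteq B$): Let $x \in A$, so $x = k^2$ for some integer $k$.
Then $\sqrt{x} = \sqrt{k^2} = |k|$, which is an integer, hence $x \in B$.

($B \subseteq A$): Let $x \in B$, so $\sqrt{x} = n$ for some integer $n$.
Squaring both sides gives $x = n^2$, hence $x \in A$.

Since $A \subseteq B$ and $B \subseteq A$, we conclude $A = B$. $\blacksquare$
```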
OraNA :: Oracle News Aggregator
In June 2014 I hope to see a whole lot of you at
's awesome conference
You will not regret going to Kscope14. There is a ton of great content and loads of awesome brainpower to pick ;-) And if you can find the time for it in
SQL Book Club – Any recommendations?
I got a note from Steven Feuerstein the other day about a group of developers in Stockholm starting an SQL Book Club. What a great idea :-) Anyway, they had asked Steven if he had any recommendations
for good books on Advanced SQL. And Steven asked me the same question...
Top selling items – revisited in 12c
April last year I blogged about TOP-N reporting using Top selling items as example. In Oracle 12c we now have a new FETCH FIRST syntax, so in this post I'll revisit the Top selling items example
showing where and how FETCH FIRST can be used and where you still need (more...)
Active Data Guard and Invalidations
To provide data source for our datawarehouse (in a seperate MS SQL database, god help it, but that's beside the point :-), we have a setup where we have several views where the datawarehouse
connection user has been granted select rights.
When we got Active Data Guard in the spring, (more...)
Half-day masterclass on Analytic Functions (I hope :-)
I've presented on Analytic Functions twice now - at ODTUG KScope12 and UKOUG 2012. Both times I've felt that an hour is not nearly enough to both teach how to use analytics as well as show use cases
of how analytics can really be used for solving a lot of (more...)
What I’m doing end of June 2013
It is that time again...
Time to go mingle with the best of the best, learn much, teach what I can, suffer information overload, have fun, enjoy life, and much much more...
In short - time for ODTUG KScope13 \o/ \o/
Hotel and flight has been booked half a year (more...)
PL/SQL Challenge Authorship
The PL/SQL Challenge site by Steven Feuerstein is great for learning various SQL and PL/SQL techniques. I am one of the quiz authors - I write most of the SQL quizzes (and one or two PL/SQL quizzes
now and then.)
That means there is now accumulated quite a bit of my work as quizzes - each quiz demonstrating some knowledge of SQL. I could replicate this work as blog posts as well, but instead it is now
possible for you to search all my quizzes on PL/SQL Challenge.
I have added that link to the right-hand menu of the (more...)
ROWS versus default RANGE in analytic window clause
I have talked at KScope about the difference between ROWS and RANGE in the analytic window clause, but haven't yet blogged about it. Recently while working out a quiz for the PL/SQL Challenge I
discovered yet another reason for remembering to primarily use ROWS and only use RANGE when the actual problem requires it.
From my KScope presentation examples here is a simple case of a rolling sum of salaries:
select deptno
, ename
, sal
, sum(sal) over (
partition by deptno
order by sal
) sum_sal
from scott.emp
order by deptno
, sal
DEPTNO ENAME SAL (more...)
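The ROWS-vs-RANGE distinction described above is not Oracle-specific; it can be reproduced with SQLite's window functions (3.25+) from Python's stdlib. With duplicate sal values, the default RANGE frame includes all peers of the current row while an explicit ROWS frame does not — the names and salaries below are invented for illustration:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE emp (ename TEXT, sal INTEGER)")
con.executemany("INSERT INTO emp VALUES (?, ?)",
                [("ALLEN", 100), ("WARD", 100), ("KING", 200)])

rows = con.execute("""
    SELECT ename,
           -- explicit ROWS frame: strictly row-by-row running total
           SUM(sal) OVER (ORDER BY sal
                          ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW),
           -- default frame is RANGE ... CURRENT ROW: ties (peers) are included
           SUM(sal) OVER (ORDER BY sal)
    FROM emp
    ORDER BY sal, ename
""").fetchall()

for ename, rows_sum, range_sum in rows:
    print(ename, rows_sum, range_sum)
```

Both tied rows (ALLEN, WARD) report a RANGE sum of 200, while the ROWS sums step 100 → 200 → 400. Note that which tied row gets the smaller ROWS sum is arbitrary, which is exactly the nondeterminism that makes ROWS the frame you should choose deliberately.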
The KScope Charitable Dinner Raffle
Do you want a chance for a dinner with me at KScope13 in New Orleans chatting about SQL? And at the same time get a warm charitable feeling inside helping the volunteers rebuilding New Orleans?
If yes, then read on :-) ...
In January ODTUG started a little competition where you could win a dinner for two at KScope13 in New Orleans by telling about your favorite experience from KScope. I entered a little story from
KScope12 where I presented on analytic functions.
Surprise, surprise - I won \o/ ... But as I haven't spotted any other entries in the competition, I (more...)
Recursive subquery graph traversing
In December a user Silpa asked a question on AskTom on "Bi-directional hierarchical query," which inspired me to fool around with recursive subquery factoring (available from version 11.2) giving
Silpa a solution which he seemed to find useful. Since then I've fooled around a little more with it, particularly concerning cycles in the graph data.
Silpa gave a table like this for testing:
create table network_table (
origin number
, destination number
And some data as well:
insert into network_table values (11, 12)
insert into network_table values (12, 13)
insert into network_table values (14, 11)
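For comparison, the bidirectional reachability that the recursive WITH clause computes can be sketched as a plain breadth-first search in Python over the same sample edges; the visited set plays the role that cycle detection plays in the SQL version:

```python
from collections import deque

edges = [(11, 12), (12, 13), (14, 11)]  # the network_table rows above

def reachable(start):
    """All nodes connected to start, treating edges as bidirectional.

    The `seen` set stops us from revisiting nodes, so the traversal
    terminates even when the graph contains cycles.
    """
    neighbours = {}
    for a, b in edges:
        neighbours.setdefault(a, set()).add(b)
        neighbours.setdefault(b, set()).add(a)
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in neighbours.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

print(sorted(reachable(13)))  # → [11, 12, 13, 14]
```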
I’m evaluated…
UKOUG 2012 evaluations have arrived - I think I did OK :-).
On a scale from 1 to 6 my scores were:
• Topic: 5.5
• Content: 5.5
• Presentation skills: 4.83
• Quality of slides: 5
• Value of presentation: 5.67
I'm quite happy with those scores - particularly that the 6 people that filled out evaluation schemas thought they got a lot of value from the presentation. The skills score is fair, I had expected a
bit less as I know I am not world class presenter - but I hope practice makes better :-)
The comments also makes (more...)
Formspider comes to Denmark
For the Danish Oracle User Group (DOUG) I'll be hosting a Formspider event in Copenhagen January 21st 2013. I look forward to seeing Yalim Gerger demonstrate this alternative to APEX or ADF or Forms.
The event will take place in my company's classroom at Banestrøget 17, 2. th., 2630 Tåstrup, which is 2 minutes walk from Høje Tåstrup train station.
Program for the afternoon:
• 13:30 - 14:30 Introducing Formspider, the Web 2.0 framework for PL/SQL developers
Yalim Gerger, Founder&CEO of Formspider talks about the vision and the benefits of the Formspider Framework.
• 14:40 - (more...)
Thank you, UKOUG 2012
So, I'm about to leave UKOUG 2012. I had a good time and learned quite a bit from the smart people gathered in Birmingham ;-)
Thank you to those attending my presentation on analytic functions - I hope you learned something from it. If you need to take a closer look, both presentation and scripts can be found here.
Birmingham Airport next stop...
Analytic FIFO multiplied – part 3
This is part 3 of a three part posting on analytic FIFO picking of multiple orders. Part 3 shows how to combine the FIFO developed in part 1 with the analytics used for the better route calculation
in an earlier blog post.
We use the same tables and same data as part 1, so read part 1 for the setup.
When combining the FIFO for multiple orders with the route calculation, we get this nice piece of sql:
with orderlines as (
select o.ordno
, o.item
, o.qty
, nvl(sum(o.qty) over (
partition by o. (more...)
Analytic FIFO multiplied – part 2
This is part 2 of a three part posting on analytic FIFO picking of multiple orders. Part 2 shows an alternative way of doing the same thing as part 1 did - but this time using recursive subquery
factoring in Oracle v. 11.2.
We use the same tables and same data as part 1, so read part 1 for the setup.
And just to recap - here's the picking list developed in part 1:
with orderlines as (
select o.ordno
, o.item
, o.qty
, nvl(sum(o.qty) over (
partition by o.item
order by o. (more...)
Analytic FIFO multiplied – part 1
I have blogged before about Analytic FIFO picking as well as talked about it at KScope12 and will do again at UKOUG2012.
A few days ago Monty Latiolais, the president of ODTUG, had a need to do this - not just for one order which he already had developed the technique for, but for multiple orders, where the FIFO
picking for the second order should not consider the inventory that was already allocated to the first order, and so on.
So here is a three-part demo of how to do this.
First we setup the same inventory as (more...)
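The core idea of the multi-order FIFO pick — walking inventory layers in receipt order and carving each order's quantity out of whatever earlier orders left behind — can be sketched procedurally in Python. The sample data here is invented, not taken from the posts:

```python
def fifo_pick(inventory, orders):
    """Allocate each order against inventory layers in first-in-first-out order.

    inventory : list of (location, qty) in receipt (FIFO) order
    orders    : list of (ordno, qty)
    Returns a picking list of (ordno, location, qty) rows.
    """
    picks = []
    layers = [[loc, qty] for loc, qty in inventory]  # mutable remaining qty
    for ordno, need in orders:
        for layer in layers:
            if need == 0:
                break
            take = min(layer[1], need)
            if take:
                picks.append((ordno, layer[0], take))
                layer[1] -= take   # later orders only see what is left
                need -= take
        if need:
            raise ValueError(f"not enough stock for order {ordno}")
    return picks

picks = fifo_pick([("A1", 5), ("B1", 10)], [("ORD1", 7), ("ORD2", 6)])
# ORD1 empties A1 (5) and takes 2 from B1; ORD2 takes the next 6 from B1.
```

The analytic-SQL versions in these posts achieve the same effect in one pass by comparing running sums of ordered quantity against running sums of inventory, instead of mutating state row by row.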
Ready for UKOUG2012
I think I am about ready for UKOUG conference 2012. Hope I haven't forgotten something :-)
• Train ticket to airport - check
• Plane ticket to get to Birmingham - check
• Hotel reservation - check
• UKOUG2012 registration - check
• Planned my agenda - check
• Chairing a session - check
• Uploaded presentation for my session - check
• Discovered where to get Oracle beer near ICC - check
Yup - checklist done :-)
If you're interested, come to my session Wednesday Dec. 5th at 12:10 and see if I can speak fast enough to go through 130 slides in an hour showing these (more...)
A bit of fun expressing ratios
Sometimes answering questions on the OTN forum leads to a little fun trying to be creative in SQL ;-) A user wished to express a ratio as 1:1 or 1:2. That lead to a little fun with CONNECT BY on DUAL
for recursion.
This is the SQL I ended up creating:
with r as (
select .2233 ratio from dual union all
select .2500 ratio from dual union all
select .2666 ratio from dual union all
select .2750 ratio from dual union all
select .2828 ratio from dual
select r.ratio ratio_num
, (
select to_char(
, 'TM9'
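An alternative, stdlib way to render a decimal ratio in a:b form — not a transcription of the CONNECT BY trick above, just the same problem attacked with `fractions.Fraction`:

```python
from fractions import Fraction

def as_ratio(x, max_den=100):
    """Render a decimal ratio like 0.25 as the string '1:4'.

    max_den caps the denominator so noisy decimals still collapse
    to a small, readable ratio.
    """
    f = Fraction(x).limit_denominator(max_den)
    return f"{f.numerator}:{f.denominator}"

for r in (0.25, 0.5, 0.2666):
    print(r, "->", as_ratio(r))
```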
RANGE BETWEEN and leap years
Answering a question on the OTN forum was a bit tricky to get an analytic sum using a RANGE BETWEEN that would handle leap years, but in the end I came up with a workaround that satisfies the
requirement. Along the way I realized why there are two different INTERVAL datatypes :-)
Let's make a sales table to demo this:
create table sales (
day date
, qty number
And populate with some data for specific days in 2010, 2011 and 2012:
insert into sales values (date '2010-10-01', 1);
insert into sales values (date '2010-10-02', 2);
insert (more...)
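The leap-year wrinkle the post works around is easy to see in plain Python: "the same date one year earlier" does not exist for Feb 29, so any year-wide date window has to pick a convention. One possible convention (my choice for illustration — clamp to Feb 28):

```python
from datetime import date

def one_year_before(d):
    """'Same date last year', clamping Feb 29 down to Feb 28.

    The ValueError branch only fires for Feb 29 of a leap year,
    which has no counterpart in the preceding year.
    """
    try:
        return d.replace(year=d.year - 1)
    except ValueError:
        return d.replace(year=d.year - 1, day=28)

print(one_year_before(date(2012, 10, 1)))   # → 2011-10-01
print(one_year_before(date(2012, 2, 29)))   # → 2011-02-28
```

This is the same decision a RANGE window with a year-sized interval forces on you, which is why having both year-month and day-second interval types matters.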
Find your way with HttpUriType and Google Maps
Recently I read Duke Ganote writing about using UTL_HTTP to get stock quote from Yahoo. (Duke must have a thing for authorities, particularly Marshalls of Legoredo ;-) Anyway, I posted a comment how
to do a similar thing with HttpUriType.
And that reminded me that long time ago I reminded myself that I should blog about how we use HttpUriType to query driving distance and time from Google Maps. (I have even tried to submit abstract to
KScope and UKOUG on getting data with HttpUriType, UTL_HTTP or UTL_FTP, but no go so far...)
Let's imagine Larry needs directions to (more...) | {"url":"http://orana.info/author/kim-berg-hansen/","timestamp":"2014-04-16T04:12:09Z","content_type":null,"content_length":"63275","record_id":"<urn:uuid:407e3e0a-3151-43cc-8347-d458ddf3153c>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00210-ip-10-147-4-33.ec2.internal.warc.gz"} |
Benoit Mandelbrot's work has led the world to a deeper understanding of fractals. This fractal pattern looks crazy but is the outcome of geometrical ... Benoit Mandelbrot, a mathematician known for developing the idea of fractals, showed that they do actually have an ordered pattern. The Fractal Geometry of Nature [Benoit B. Mandelbrot]: "Clouds are not spheres, mountains are not ..."
Mandelbrot fractal – Stock Image A925/0398 – Science Photo Library
Looking back: the psychology of mess – Vol. 24, Part 7 ( July 2011)
NEWS: How Mandelbrot's fractals changed the world (via BBC News). "Yet when I look back I see a pattern." — Text: Benoit Mandelbrot; Prints: Audrey Roger. Benoit Mandelbrot announced in 1977 that the distribution of galaxies in space shows a fractal pattern, in images from the best telescopes equipped with CCD cameras. French-American mathematician Benoit Mandelbrot discovered fractal mathematics; most people know fractals as the weird, colorful patterns drawn by computers.
Fractal geometry: detail from Mandelbrot Set – Stock Image A925/0033
Patterns of Visual Math – Mandelbrot Set – welcome to MIQEL.com
Benoît Mandelbrot obituary: mathematician whose fractal geometry helps us find patterns in the irregularities of the natural world. Obituary: Benoit Mandelbrot, Father of Fractal Geometry — Mandelbrot noticed that they formed a pattern, and that the closer they were examined ... 11/9/2010 · Benoit Mandelbrot, Father of the Fractal: fractals are full of intricate patterns that reveal a hidden order and beauty in the world.
Mandelbrot fractal – Stock Image A925/0399 – Science Photo Library
The Celiac Student: Benoit Mandelbrot, Father of the Fractal
There are many programs used to generate the Mandelbrot set and other fractals, the colors take on the same pattern that and to its father Benoit Mandelbrot. At TED2010, mathematics legend Benoit
Mandelbrot develops a theme he first discussed at TED in 1984 — the extreme complexity of roughness, and the way that fractal overview. Benoit Mandelbrot. earthquakes, patterns of vegetation in a
swamp, the way neurons fire when humans search through memory. the coastline. snowflake
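The "programs used to generate the Mandelbrot set" mentioned above all rest on the same escape-time iteration z → z² + c; the escape count is what drives the familiar coloring. A bare-bones membership test (the iteration cap of 50 is an arbitrary choice):

```python
def escape_count(c, max_iter=50):
    """Iterations of z -> z*z + c before |z| exceeds 2.

    Points whose orbit never escapes within max_iter iterations are
    treated as inside the Mandelbrot set; the count for escaping
    points is what a renderer maps to a color.
    """
    z = 0j
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i
    return max_iter  # never escaped: assume c is in the set

print(escape_count(0j))       # → 50 (0 is in the set)
print(escape_count(2 + 2j))   # → 0  (escapes immediately)
```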
THE OMNI REPORT: R.I.P – Benoit Mandelbrot
FRACTALS – Colby-Sawyer College
Benoit B. Mandelbrot (20 "in which smaller and smaller copies of a pattern are successively nested inside each other, Mandelbrot believed that fractals, Many people know Benoît Mandelbrot their
dot-matrix patterns sketches and notebook scribbles now on display in The Islands of Benoît Mandelbrot: Fractals, French mathematician Benoit B. Mandelbrot discovered fractal geometry in the 1970s.
some kind of extra-ordinary moment in the fractal pattern of Historical
Benoit Mandelbrot, Father of Fractals, Dies at 85
“Nearly all common patterns in nature are rough,” writes the mathematician Benoit Mandelbrot at the beginning of The Fractalist: Memoir of a Scientific Maverick Benoit Mandelbrot speech, Portland OR,
Fractal history, Mandelbulb, 3D Fractal
of fractals or fractal mathematician best known as this new edition
Fractal Patterns — Patterns In Nature, an online book
Benoit Mandelbrot was a Polish-born French he began to perceive an astonishing pattern most notably the now famous Mandelbrot Set fractal that Benoit B. Mandelbrot the pattern that Mandelbrot found
was both hidden and revolutionary. Benoit Mandelbrot: Fractals and the art of roughness; 10/22/2010 · With these words pioneering man of ideas Benoît Mandelbrot, that the lumps are arranged in a
beautiful pattern The Mandelbrot set was fractal.
The animation and notes on this page explain how custom shaders
allAfrica.com: Africa: Fractals and Benoit Mandelbrot – Lessons
10/17/2010 · Benoit Mandelbrot, whose work on fractals aided understanding of the complexity of patterns in nature, has died at the age of 85. IBM researcher Benoit Mandelbrot discovered fractals. Fractal patterns have appeared in almost all of the physiological processes within our bodies.
at all magnifications.
Tribute to Benoit Mandelbrot: The Father of Fractal Mathematics Dies
Benoit Mandelbrot – NNDB: Tracking the entire world
UPDATE: The man who invented the word FRACTAL & discoverer of the Mandelbrot set, Benoit Mandelbrot (often called the Father of Fractal Geometry) has died. The existence of these patterns [fractals]
challenges us to study forms that Euclid leaves aside as being formless, Quotes by others about Benoit Mandelbrot (1) Aside from my own personal indebtedness to the man for his discovery of fractal
geometry, Benoît Mandelbrot was a man I claim that many patterns of Nature
The Mandelbrot Set, named after the mathematician Benoit B. Mandelbrot
IBM100 – Fractal Geometry – IBM – United States
Fractals, the Islands of Benoit Mandelbrot. Fractals are typically self-similar patterns, where self-similar means they are "the same from near as from far". Following the passing of Benoit Mandelbrot ... In this article Mandelbrot argued that: "Fractal patterns appear not just in the price changes of securities but ..." IBM researcher Benoit Mandelbrot discovered fractals ... following the usual pattern, continue with fond reminiscences of teachers and postdoctoral mentors.
Benoit Mandelbrot, Developer of Fractal Geometry, Dies (Play a Fractal
Benoit Mandelbrot: Fractals and the art of roughness – YouTube
10/17/2010 · Benoit Mandelbrot, who died on October 14 aged 85, was largely responsible for developing the discipline of fractal geometry – the study of rough or ... The analogy of the cauliflower best describes fractal patterns (see over): Mandelbrot surmised that a floret cut from the ... Benoît Mandelbrot: Fractals and the art of roughness. Scientist Mandelbrot, best known for the concept of fractal dimensions or fractal geometry, is famous for the Mandelbrot set explained in Les objets fractals.
Benoit (of fractal fame) Mandelbrot turns 80
Benoit Mandelbrot Quotes – 8 Science Quotes – Dictionary of
http://www.ted.com At TED2010, mathematics legend Benoit Mandelbrot develops a theme he first discussed at TED in 1984 — the extreme complexity of 10/17/2010 · Images of influential mathematician
Benoit Mandelbrot and the fractal geometry Mandelbrot argued that seemingly random patterns could in fact be Fractals and Benoit Mandelbrot: Number and Pattern in African Culture’ was a major
contribution to the understanding of mathematics in everyday life in Africa.
comprised of fractals benoit mandelbrot pioneered fractal pretty | {"url":"http://delopern.com/benoit-mandelbrots-fractal-patterns/","timestamp":"2014-04-21T05:20:23Z","content_type":null,"content_length":"20778","record_id":"<urn:uuid:3e48ac4c-e5c1-42fe-8b8b-368d21c6ba90>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
complex supermanifold
Complex geometry
Formal context
Manifolds and cobordisms
There are two different things that one might mean by a “complex supermanifold”, and the term is in fact used for two different notions in the literature (the terminology is a mess!):
1. In the first sense, a complex supermanifold generalizes the notion of a smooth manifold with its sheaf of smooth complex-valued functions, just as an ordinary supermanifold is a generalization of
an ordinary manifold with its sheaf of smooth real-valued functions. However, considering ordinary smooth manifolds as ringed spaces with either their sheaves of real or complex smooth functions
gives two equivalent categories, whereas this is not true in the case of real and complex supermanifolds; the corresponding functor is neither essentially surjective nor fully faithful. (For $X$
a complex supermanifold in this sense, the underlying reduced manifold $X_{red}$ is not a complex manifold but just a smooth manifold regarded as a ringed space with structure sheaf taken to be
the sheaf of $\mathbb{C}$-valued smooth functions on the ordinary real manifold.)
2. In the second sense, a complex supermanifold is a super(complex manifold), a super-version of complex manifold.
A complex supermanifold is a ringed space $X = (|X|, O_X)$ such that
Write cSDiff for the category of complex supermanifolds.
Example The functor $\Pi : \{real vector bundles\} \to SDiff$ has a complex analogue $\Pi : \{complex vector bundles\} \to cSDiff$.
Let $E \to X$ be a complex vector bundle of rank $\delta$. This gives rise to the complex supermanifold $\Pi E$, in the same way as a real vector bundle gives rise to a real supermanifold: the
structure sheaf is given by sections of the exterior algebra of the dual of $E$.
$C^\infty(X) := O_X(X)$ does not in general have a $\mathbb{C}$-antilinear involution $\bar{-} : C^\infty(X) \to C^\infty(X)$ but there does exist a canonical complex conjugation on the quotient $C^\
infty(X)$ by the ideal of nilpotent sections, which is $C^\infty(X_{red}; \mathbb{C})$. So on a complex supermanifold we have complex conjugation only on the reduced manifold.
As for ordinary supermanifolds (and with same proof as in the real case) we have the following two statements:
1. Every complex supermanifold is isomorphic to one of the form $\Pi E$.
2. $cSDiff(X,Y) \simeq ComplexSuperAlg(C^\infty(Y), C^\infty(X)).$
Remark It turns out that a $\mathbb{C}$-super algebra homomorphism $\phi : C^\infty(Y) \to C^\infty(X)$ automatically satisfies $\phi_{red}(\overline{f_{red}}) = \overline{\phi_{red}(f_{red})}$.
Define the complex supermanifold $\mathbb{R}_{cs}^{d|\delta}$ as $\mathbb{R}^d$ with structure sheaf $U\mapsto C^\infty(U) \otimes_{\mathbb{C}} \wedge^\bullet \mathbb{C}^\delta$.
Then for $S$ an arbitrary complex supermanifold we have
$\mathbb{R}_{cs}^{d|\delta}(S) = cSDiff(S, \mathbb{R}_{cs}^{d|\delta}) = \{ (x_1, \cdots, x_d, \theta_1, \cdots, \theta_{\delta})| x_i \in C^\infty(S)^{ev} , \theta_j \in C^\infty(S)^{odd}; with x_i
real in that \overline{(x_i)_{red}} = (x_i)_{red} \}$
$\mathbb{R}^{2|1}(S) = \{ (x,y,\theta) | x,y \in C^\infty(S)^{ev}, \theta \in C^\infty(S)^{odd}; x,y real \}$
we shall write
$\simeq \{ (z,\bar z, \theta) | z, \bar z \in C^\infty(S)^{ev}; \overline{z_{red}}=(\overline z )_{red} \}.$ | {"url":"http://www.ncatlab.org/nlab/show/complex+supermanifold","timestamp":"2014-04-18T02:59:29Z","content_type":null,"content_length":"46816","record_id":"<urn:uuid:6a6ec360-53b9-4f0e-924b-5ed4f7cff38e>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00594-ip-10-147-4-33.ec2.internal.warc.gz"} |
TOPEX, JASON satellite measurements of Sea Level Rise
The picture above is from the University of Colorado website at http://sealevel.colorado.edu showing the sealevel rise over the last dozen years. The dashed line is an average of the data showing a
mean sea level rise of 2.8+/-0.4 mm/yr. The data is available in text form at http://sealevel.colorado.edu/2004_rel3.0/sl_ib_cu2004_rel3.0_global.txt
Looking at the graph it seemed to me that the rate of sealevel rise had increased since 1999. Therefore I fitted the data in four different ways, assuming an error estimate for the individual points
of 7mm.
1)a linear fit to all the data as in the colorado graph above. My results agree with the 2.8+/-0.4mm/yr figure from University of Colorado as I obtained 2.8+/-0.6mm/yr.
2)a linear fit to the data from 1992 to 1999. This gives a rate of sealevel rise of 2.1+/-0.2mm/yr
3)a linear fit to the data from 1999 through 2004. This gives a rate of sealevel rise of 3.7+/-0.2 mm/yr
4)just for fun an exponential fit to all the data: y=a*exp((x-1992)/tau)+b This fit gave tau=11+/-2yr at the time constant tau of the exponential.
Both the linear and the exponential fits had roughly the same chi-square indicating that the available data cannot really distinguish between a linear increase in sea level and an exponential one. It
may be of importance that the chi-square error estimates for the rate of sea level increase in the linear fits 2 and 3 are smaller than that in the linear fit 1 (the Colorado fit). This seems to show that the data has some preference for being fitted with two piecewise linear sections rather than one straight line. Details of all the fits are at the end of this page.
The image above is a graph including the data and the fits.The red line is a linear fit to the data from 1992 to 1999. The green line is the University of Colorado fit to all the data. The blue line
is a linear fit to the data from 1999 through 2004. The purple is the exponential fit to all the data. It seems clear that the data is rising above the red line (linear fit from 1992-1999). I have
extrapolated (yes, I know, one of the Cardinal Sins) the lines out to 2020. If the sea level continues to rise in a linear fashion, it seems that we may expect sea level rise of between 40 and 55 mm
(1.6-2.2 in.) above present sea level in 2020. Bearing in mind that an average beach gradient is about 1%, this translates into shorelines moving inward by 4 to 5.5 m (13 to 18 ft.) by 2020.
Note: the increase in the rate of sealevel rise may be explained by the results from Dyurgerov which show a sharp increase in the contribution of mountain and subpolar glaciers to sealevel rise since 1997.
(Dyurgerov, Mark. 2002. Glacier Mass Balance and Regime: Data of Measurements and Analysis. INSTAAR Occasional Paper No. 55, ed. M. Meier and R. Armstrong. Boulder, CO: Institute of Arctic and Alpine
Research, University of Colorado. Distributed by National Snow and Ice Data Center, Boulder, CO. A shorter discussion is at http://nsidc.org/sotc/sea_level.html)
Greenland ice sheets are also melting faster; a recent paper (Joughin et al, Nature 432,p608.; http://www.spaceref.com/news/viewpr.html?pid=15611) shows that the Jacobshavn Isbrae glacier, which
drains 6% of the Greenland icesheets has doubled in speed from 1997 to 2003. Note added 17th Feb 2005: The British Antarctic Survey has recently reported quicker glacier velocities in the Antarctic
Peninsula. http://www.antarctica.ac.uk/News_and_Information/Press_Releases/story.php?id=158
Fit Details
I) linear fit to all data: y=m1*(x-1992)+c1
final sum of squares of residuals : 123.734
rel. change during last iteration : -6.2019e-15
degrees of freedom (ndf) : 375
rms of residuals (stdfit) = sqrt(WSSR/ndf) : 0.574419
variance of residuals (reduced chisquare) = WSSR/ndf : 0.329957
Final set of parameters Asymptotic Standard Error
======================= ==========================
m1 = 2.7742 +/- 0.06282 (2.264%)
c1 = -17.6361 +/- 0.4636 (2.629%)
correlation matrix of the fit parameters:
m1 c1
m1 1.000
c1 -0.895 1.000
II) linear fit to data from 1992 to 1999: y=m2*(x-1992)+c2
final sum of squares of residuals : 61.8659
rel. change during last iteration : -1.31919e-06
degrees of freedom (ndf) : 199
rms of residuals (stdfit) = sqrt(WSSR/ndf) : 0.55757
variance of residuals (reduced chisquare) = WSSR/ndf : 0.310884
Final set of parameters Asymptotic Standard Error
======================= ==========================
m2 = 2.06294 +/- 0.1556 (7.543%)
c2 = -14.8879 +/- 0.6717 (4.512%)
correlation matrix of the fit parameters:
m2 c2
m2 1.000
c2 -0.912 1.000
III) linear fit of data from 1999-2005: y=m3*(x-1992)+c3
final sum of squares of residuals : 47.8434
rel. change during last iteration : -1.59937e-08
degrees of freedom (ndf) : 174
rms of residuals (stdfit) = sqrt(WSSR/ndf) : 0.524368
variance of residuals (reduced chisquare) = WSSR/ndf : 0.274962
Final set of parameters Asymptotic Standard Error
======================= ==========================
m3 = 3.72349 +/- 0.1817 (4.879%)
c3 = -26.7349 +/- 1.775 (6.638%)
correlation matrix of the fit parameters:
m3 c3
m3 1.000
c3 -0.988 1.000
IV) exponential fit of all data: y(x)=a*exp((x-1992)/tau)+b
final sum of squares of residuals : 111.762
rel. change during last iteration : -2.79609e-08
degrees of freedom (ndf) : 374
rms of residuals (stdfit) = sqrt(WSSR/ndf) : 0.546653
variance of residuals (reduced chisquare) = WSSR/ndf : 0.298829
Final set of parameters Asymptotic Standard Error
======================= ==========================
a = 16.8106 +/- 4.578 (27.23%)
tau = 11.1991 +/- 1.837 (16.4%)
b = -30.9617 +/- 5.142 (16.61%)
correlation matrix of the fit parameters:
a tau b
a 1.000
tau 0.997 1.000
b -0.998 -0.991 1.000 | {"url":"http://membrane.com/sidd/sealevel.html","timestamp":"2014-04-20T06:19:27Z","content_type":null,"content_length":"7121","record_id":"<urn:uuid:1b9f21e6-18a8-42ae-a3cb-056b58ed4195>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00499-ip-10-147-4-33.ec2.internal.warc.gz"} |
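For readers who want to reproduce a fit of this form without gnuplot, here is a minimal ordinary-least-squares sketch in Python for the model y = m*(x-1992) + c used in fits I-III. The (year, level) pairs are invented stand-ins chosen to lie near fit I, not the University of Colorado altimetry data:

```python
# Ordinary least squares for y = m*(x - 1992) + c, matching the form of
# fits I-III above. The data points are made-up values lying on a line.
years = [1992, 1996, 2000, 2004]
level = [-17.6, -6.5, 4.6, 15.7]

t = [x - 1992 for x in years]
n = len(t)
tbar = sum(t) / n
ybar = sum(level) / n
m = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, level))
     / sum((ti - tbar) ** 2 for ti in t))
c = ybar - m * tbar
print(round(m, 3), round(c, 3))   # slope (mm/yr) and intercept
```

The closed-form slope and intercept here are exactly what gnuplot's `fit` converges to for a linear model, minus the standard-error bookkeeping.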
Cambridge, MA SAT Math Tutor
Find a Cambridge, MA SAT Math Tutor
...I am happy to incorporate this counseling into test prep sessions. My teaching style is to target weak points in the exam first, building strength in those areas before consolidating my students' understanding in the areas they are more familiar with. I will typically start with a diagnostic exam, look through some writing, and go from there!
22 Subjects: including SAT math, English, geometry, precalculus
...I have been tutoring GRE math for three years; the SAT math is essentially the same. I worked for the Princeton Review in New York for one year. I am a PhD candidate in comparative literature
at Harvard.
11 Subjects: including SAT math, reading, writing, English
...I speak fluent Mandarin and Cantonese. I use many romanization methods; pinyin, Jyutping, Cantonese, strokes... to write Chinese in electronic devices to communicate, teach and work. I have
taught Mandarin occasionally before.
47 Subjects: including SAT math, chemistry, English, physics
...I am so excited to be a tutor through WyzAnt and really hope that students feel free to contact me at their convenience. Happy studying! I took multiple math classes throughout high school and college. I have tutored all kinds of math for middle schoolers, high schoolers, and college students.
48 Subjects: including SAT math, chemistry, Spanish, English
...Grasping those concepts wholeheartedly is even more important, and my methods of teaching them are designed for true understanding. Frequent practice, a 'teach me' module, homework,
assessments and frequent feedback are all key components to my algebra courses that help students excel in their algebra courses. Algebra 2 builds on the concepts from algebra 1.
28 Subjects: including SAT math, calculus, geometry, statistics
What arithmetic information is contained in the algebraic K-theory of the integers?
I'm always looking for applications of homotopy theory to other fields, mostly as a way to make my talks more interesting or to motivate the field to non-specialists. It seems like most talks about
Algebraic $K$-theory mention that we don't know $K(\mathbb{Z})$ and that somehow $K(\mathbb{Z})$ is worth computing because it contains lots of arithmetic information. I'd like to better understand
what kinds of arithmetic information it contains. I've been unable to answer number theorists who've asked me this before. A related question is about what information is contained in $K(S)$ where
$S$ is the sphere spectrum.
I am aware of Vandiver's Conjecture and that it is equivalent to the statement that $K_n(\mathbb{Z})=0$ whenever $4 | n$. I also know there's some connection between $K$-theory and Motivic Homotopy
Theory, but I don't understand this very well (and I don't know if $K(\mathbb{Z})$ helps). It seems difficult to search for this topic on google. Hence my question:
Can you give me some examples of places where computations in $K(\mathbb{Z})$ or $K(S)$ would solve open problems in arithmetic or would recover known theorems with difficult proofs?
I'm hoping someone who has experience motivating this field to number theorists will come on and give his/her usual spiel. Here are some potential answers I might give a number theorist if I
understood them better...The wikipedia page for Algebraic K-theory mentions non-commutative Iwasawa Theory, L-functions (and maybe even Birch-Swinnerton-Dyer?), and Bass's conjecture. I don't know
anything about this, not even whether knowing $K(\mathbb{Z})$ would help. Quillen-Lichtenbaum seems related to $K(\mathbb{Z})$, but it seems it would tell us things about $K(\mathbb{Z})$ not the
other way around. Milnor's Conjecture (or should we call it Voevodsky's Theorem?) is definitely an important application of $K$-theory, but it's the $K$-theory of a field of characteristic $p$, not $K(\mathbb{Z})$.
There was a previous MO question about the big picture behind Algebraic K Theory but I couldn't see in those answers many applications to number theory. There's a survey written by Weibel on the
history of the field, and that includes some problems it's solved (e.g. the congruence subgroup problem) but other than Quillen-Lichtenbaum I can't see anything which relies on $K(\mathbb{Z})$ as
opposed to $K(R)$ for other rings. If $K(\mathbb{Z})$ could help compute $K(R)$ for general $R$ then that would be something I've love to hear about.
Teena Gerhardt and her collaborators have done a number of computations for rings like Z[x]/(x^2) in which the answers are given (essentially) in terms of K(Z). So this would be in the direction
of your last sentence. – Dan Ramras May 5 '13 at 19:58
2 You might enjoy section 1.1 of Clark Barwick's Göttingen talks, which serves exactly the purpose of motivating homotopy theory for number theorists. There's not so much specifically on K(Z)
though... dl.dropbox.com/u/1741495/papers/barwick.pdf – Peter Arndt May 5 '13 at 21:24
2 @David: I have the feeling that $K_*(\mathbb{S})$ helps much more in geometry than in arithmetic, more precisely for the study of homeomorphism groups of high-dimensional manifolds and
h-cobordisms. See folk.uio.no/rognes/papers/plmf.pdf or math.nagoya-u.ac.jp/~larsh/papers/023/whitehead.pdf for example. – Lennart Meier May 6 '13 at 0:14
2 Something that Persiflage wrote only a few days ago is related to your question : galoisrepresentations.wordpress.com/2013/05/04/… – Chandan Singh Dalawat May 6 '13 at 3:20
4 By the way, we DO know the K theory of Z, except in degree 8,12,16,20,24.... (where it is conjectured to be 0), see math.uiuc.edu/K-theory/0691/KZsurvey.pdf – user15817 May 6 '13 at 12:44
2 Answers
$\newcommand\Z{\mathbf{Z}}$ $\newcommand\Q{\mathbf{Q}}$
I'm a number theorist who already thinks of the algebraic $K$-theory of $\Z$ as part of number theory anyway, but let me make some general remarks.
A narrow answer: Since (following work of Voevodsky, Rost, and many others) the $K$-groups of $\Z$ may be identified with Galois cohomology groups (with controlled ramification) of
certain Tate twists $\Z_p(n)$, the answer is literally "the information contained in the $K$-groups is the same as the information contained in the appropriate Galois cohomology groups."
To make this more specific, one can look at the rank and the torsion part of these groups.
1. The ranks (of the odd $K$-groups) are related to $H^1(\Q,\Q_p(n))$ (the Galois groups will be modified by local conditions which I will suppress), which is related to the group of
extensions of (the Galois modules) $\Q_p$ by $\Q_p(n)$. A formula of Tate computes the Euler characteristic of $\Q_p(n)$, but the cohomological dimension of $\Q$ is $2$, so there is
also an $H^2$ term. The computation of the rational $K$-groups by Borel, together with the construction of surjective Chern classes by Soulé allows one to compute these groups
explicitly for positive integers $n$. There is no other proof of this result, as far as I know (of course it is trivial in the case when $p$ is regular).
2. The (interesting) torsion classes in $K$-groups are directly related to the class groups of cyclotomic extensions. For example, let $\chi: \mathrm{Gal}(\overline{\Q}/\Q) \rightarrow \mathbf{F}^{\times}_p$ be the mod-$p$ cyclotomic character. Then one can ask whether there exist extensions of Galois modules:
$$0 \rightarrow \mathbf{F}_p(\chi^{2n}) \rightarrow V \rightarrow \mathbf{F}_p \rightarrow 0$$
which are unramified everywhere. Such classes (warning: possible sign error/indexing disaster alert) are the same as giving $p$-torsion classes in $K_{4n}(\Z)$. The non-existence of such
classes for all $n$ and $p$ is Vandiver's conjecture. Now we see that: The finiteness of $K$-groups implies that, for any fixed $n$, there are only finitely many $p$ such that an
extension exists. An, for example, an explicit computation of $K_8(\Z)$ will determine explicitly all such primes (namely, the primes dividing the order of $K_8(\Z)$). As a number
theorist, I think that Vandiver's conjecture is a little silly --- its natural generalization is false and there's no compelling reason for it to be true. The "true" statement which is
always correct is that $K_{2n}(\mathcal{O}_F)$ is finite.
Regulators. Also important is that $K_*(\Z)$ admits natural maps to real vector spaces whose image is (in many cases) a lattice whose volume can be given in terms of zeta functions (Borel). So $K$-theory is directly related to problems concerning zeta values, which are surely of interest to number theorists. The natural generalization of this conjecture is one of the fundamental problems of number theory (and includes as special cases the Birch--Swinnerton-Dyer conjecture, etc.). There are also $p$-adic versions of these constructions which also
immediately lead to open problems, even for $K_1$ (specifically, Leopoldt's conjecture and its generalizations.)
A broader answer: A lot of number theorists are interested in the Langlands programme, and in particular with automorphic representations for $\mathrm{GL}(n)/\Q$. There is a special
subclass of such representations (regular, algebraic, and cuspidal) which on the one hand give rise to regular $n$-dimensional geometric Galois representations (which should be
irreducible and motivic), and on the other hand correspond to rational cohomology classes in the symmetric space for $\mathrm{GL}(n)/\Q$, which (as it is essentially a $K(\pi,1)$) is the
same as the rational cohomology of congruence subgroups of $\mathrm{GL}_n(\Z)$. Recent experience suggests that in order to prove reciprocity conjectures it will also be necessary to
understand the integral cohomology of these groups. Now the cohomology classes corresponding to these cuspidal forms are unstable classes, but one can imagine a square with four corners
as follows:
stable cohomology over $\mathbf{R}$: the trivial representation.
unstable cohomology over $\mathbf{R}$: regular algebraic automorphic forms for $\mathrm{GL}(n)/\Q$.
stable cohomology over $\mathbf{Z}$: algebraic $K$-theory.
unstable cohomology over $\mathbf{Z}$: ?"torsion automorphic forms"?, or at the very least, something interesting and important but not well understood.
From this optic, algebraic $K$-theory of (say) rings of integers of number fields is very naturally part of the Langlands programme, broadly construed.
Final Remark: algebraic K-theory is a (beautiful) language invented by Quillen to explain certain phenomena; I think it is a little dangerous to think of it as being an application of
"homotopy theory". Progress in the problems above required harmonic analysis and representation theory (understanding automorphic forms), Galois cohomology, as well as homotopy theory and
many other ingredients. Progress in open questions (such as Leopoldt's conjecture) will also presumably require completely new methods.
3 Beautiful answer! – Mariano Suárez-Alvarez♦ May 5 '13 at 23:45
As a narrow-minded person I find your narrow answer best! But to be even more concrete: Are there explicit arithmetic application of the Voevodsky-Rost result (or the integral
computation of $K_*i(\mathbb{Z})$ for $i$ not divisible by $4$) which do not contain K-theory in their statement? – Lennart Meier May 6 '13 at 0:06
2 @Rebecca: Thanks, this is a great answer! I'm somewhat amazed that the Langlands program is involved with algebraic K-theory, but you explain this point very well. And that connection
to Galois cohomology groups is great. Also, welcome to MathOverflow! – David White May 6 '13 at 4:11
@Frictionless Jellyfish: I have to say, I find your broader answer very mysterious. Could you expand on your four analogies? – Daniel Litt Jul 2 '13 at 3:57
@DanielLitt: Dear Daniel, The stable cohomology (with $\mathbb C$ coefficients, say, although FJ takes $\mathbb R$-coefficients --- but it doesn't really matter, as long as it's a
field of char. zero) for $\mathrm{SL}$ comes (from an automorphic point of view) from the trivial subrepresentation of $L^2(\mathrm{SL}_n(\mathbb Z) \backslash \mathrm{SL}_n(\mathbb
R))$. (``Comes from'' refers to a generalization of Eichler--Shimura theory, in which $(\mathfrak g, \mathfrak k)$-cohomology of automorphic representations is identified with group
cohomology.) The unstable cohomology comes from ... – Emerton Jul 28 '13 at 5:03
Let $p$ be an odd prime and $C$ the $p$-Sylow of the class group of $\mathbb{Q}(\zeta_p)$. If $C^\sigma$ denotes the group fixed by complex conjugation then Vandiver's conjecture is that $C^\sigma = 0$. Both Kurihara and Soulé have made some partial progress towards this conjecture, and their methods rely on knowledge of the torsion piece of the groups $H_i(\mathbb{Z})$. A good introduction is Soulé's 14-page paper on the matter entitled "Perfect forms and the Vandiver conjecture". Kurihara's paper "Some remarks on conjectures about cyclotomic fields and $K$-groups of $\mathbb{Z}$" is another very readable source, which points out many arithmetic applications.
Thanks for your answer. It seems Vandiver's Conjecture is the standard motivation for trying to understand $K_*(\mathbb{Z})$. I was mostly interested in other problems in arithmetic which
could be solved by computations in $K_*(\mathbb{Z})$, so I'm going to hold off on accepting your answer because I'm hoping for more. Still, it sounds like Kurihara's paper might contain
some further applications, so I'll look into that paper soon. – David White May 5 '13 at 22:53
That's ok, I was in a hurry typing this and admittedly did not read the question as thoroughly as I should have. – Jason Polak May 5 '13 at 22:57
Are Football Players Really Living Longer Than Baseball Players? Why Grantland's Study Is Wrong
Grantland recently published an article, "Mere Mortals," by Bill Barnwell, which claims that:
Baseball players who accrued at least five qualifying seasons from 1959 through 1988 died at a higher rate than similarly experienced football players from the same time frame. The difference
between the two is statistically significant and allows us to reject the null hypothesis; there is a meaningful difference between the mortality rates of baseball players and football players
with careers that emulated the [National Institute for Occupational Safety and Health] NIOSH criteria.
The author then goes on to collect data on football and baseball players who played at least five years between 1959 and 1988, and his results are below:
                    Baseball       Football
Qualifying Players  1,494          3,088
Alive               1,256          2,694
Deceased            238            394
Mortality Rate      15.9 percent   12.8 percent
From this table, to his credit, Barnwell calculated confidence intervals for the mortality rate, as well as performing Fisher's Exact Test to test for independence between the rows (dead or alive)
and columns (baseball and football). For football players, the 95 percent confidence interval for the mortality rate was (11.6, 13.9), and, for baseball players, the 95 percent confidence interval
was (14.1,17.8). The Fisher's Exact Test gives a p-value of about 0.004 and from this he concludes, correctly, that the mortality rate is significantly different between the groups at the 0.01 level.
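The quoted intervals are consistent with the usual normal approximation p ± 1.96·sqrt(p(1−p)/n). A quick Python check using the counts from the table (this reconstructs the intervals; it is not Barnwell's own code):

```python
import math

# Normal-approximation 95% CIs for the two mortality rates, from the
# counts in the table above (deceased players, qualifying players).
intervals = {}
for label, dead, n in [("football", 394, 3088), ("baseball", 238, 1494)]:
    p = dead / n
    half = 1.96 * math.sqrt(p * (1 - p) / n)
    intervals[label] = (round(100 * (p - half), 1), round(100 * (p + half), 1))
print(intervals)  # → {'football': (11.6, 13.9), 'baseball': (14.1, 17.8)}
```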
So, the big question is, as he poses it:
Why is it that baseball players from the '60s, '70s, and '80s are dying more frequently than football players from the same era? Truthfully, as a layman, I can't say with any certainty, and I
don't think it's appropriate to speculate. A deeper study into the mortality rates of baseball players that emulated the NIOSH focus on specific causes of death versus the general population
might prove valuable.
I'll field this one. Baseball players are dying more often because they are, on average, older than football players. The author never controlled for the age of the players, or any other risk
factors, for that matter. In 1959, there were, as far as I can tell, 12 NFL teams each with 40 players. That's 480 players. In 1988, there were 28 teams with 59 players each: a total of 1652. In
baseball, in 1959, there were 16 teams with roughly 40 men each, for a total of 640 players. That number in 1988 was 1,040—26 teams with 40 players. So there were almost three and a half times more
players in the NFL in 1988 than there were in 1959. The number of baseball players only increased about 1.6 times over this same period.
These numbers aren't exact, but the point still stands: The group of football players that has been collected here has a greater proportion of younger people in it than the baseball group. So it's
not exactly apples to apples. In fact, it's not even close. You'd expect, just based on the ages of the players in these groups, for baseball players to have higher rates of mortality than the
football players. Basically, Grantland demonstrated that the old die more often than the young.
* * *
I wanted to confirm that Grantland's study really had this flaw, so I went and collected data myself and ran a quick analysis to check. My findings? When age is added to a model predicting death, the
effect of the sport on mortality rate completely disappears. This means that if two players are the exact same age and one played professional football and the other played professional baseball for
at least five years and one of those years was between 1959 and 1988, there is no evidence that the baseball player is more likely to be deceased than the football player, and vice versa.
Data collection
Using R, I scraped Football Almanac to get a list of players' names. I then used this list of players' names to scrape Pro Football Reference to get information about each player's date of birth, age
at death (if he has died), the start and end years of his career, height, and weight. (A note about a shortcoming of my data collection for football: If a player had the same name as another player,
I collected only one. I believe this is a small issue and will not affect the overall results, but it is worth noting.) In total, the football player data set had 14,396 players.
Using R, I scraped Baseball Almanac to get a list of players' names. I then used this list of players' names to scrape Baseball Reference to get information about each player's date of birth, age at
death (if he has died), the start and end years of his career, height, and weight. For baseball players, I was able to collect all players, including those who had the same name as another player. In
total, the baseball player data set had 5,587 players.
Time Frame
Both the baseball and football data sets were whittled down to consider only players who played at least five seasons and any of those seasons fell between 1959 and 1988. (These are slightly
different standards than in the Barnwell article, but, again, the larger point should remain the same.) This left 2,436 football players and 967 baseball players. The mean age of baseball players in
my sample was 64.19 while the mean age of football players was 60.91. (Barnwell tweeted that the difference in ages between his two groups, which were defined slightly differently, was about 24
months.) The mean ages of my two groups is significantly different with a p-value of <0.00000000000001. That's a big deal.
The distribution of the ages of the football and baseball players is displayed below using a density estimator in R. You'll notice that there are many more young players in the football group than in
the baseball group. This indicates that mortality rates cannot be compared directly to one another as is done in the Barnwell article.
Think for a minute about the graph below. Without knowing anything about which color represents which sport, which of these two groups should have a higher mortality rate? (Hint: The blue one!)
Fisher's Exact Test
Out of the 2,436 qualifying football players, 259 were deceased, according to Pro Football Reference, for a mortality rate of 10.63 percent. Among baseball players, 137 out of 967 were dead, for a
mortality rate of 14.17 percent. Both of these rates are lower than Barnwell's, but are of similar relative magnitudes. Using a Fisher's Exact Test, the null hypothesis of no association is rejected
with a p-value of 0.004407, which is essentially identical to Barnwell's p-value of 0.004. So there is a statistically significant difference between these groups. That's a fact.
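For the curious, this Fisher's Exact Test can be reproduced without R. The sketch below sums hypergeometric probabilities in pure Python over all tables no more likely than the observed one — the standard two-sided definition. This is my reconstruction, not the code used for the article:

```python
import math

# Two-sided Fisher exact test on the 2x2 table
# (rows: football, baseball; columns: deceased, alive).
dead_fb, n_fb = 259, 2436
dead_bb, n_bb = 137, 967
n_dead = dead_fb + dead_bb
n_all = n_fb + n_bb

def log_comb(n, k):
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def log_pmf(k):   # probability of k football deaths given the fixed margins
    return (log_comb(n_fb, k) + log_comb(n_bb, n_dead - k)
            - log_comb(n_all, n_dead))

lp_obs = log_pmf(dead_fb)
lo, hi = max(0, n_dead - n_bb), min(n_fb, n_dead)
p = sum(math.exp(log_pmf(k)) for k in range(lo, hi + 1)
        if log_pmf(k) <= lp_obs + 1e-7)
print(round(p, 4))   # ≈ 0.0044, in line with the article's 0.004407
```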
But ...
Logistic Regression
This type of analysis estimates the probability of a certain event—in this case, death—while taking into account multiple factors that could be related to the event. Running a logistic regression
model with death as an outcome and only sport as a dummy variable predictor yields a p-value of 0.00384 for the significance of sport being associated with death. This is largely the same result as
the Fisher's Exact Test as neither controls for any other variables besides sport.
When age—technically years since birth, since some people are deceased—is added to the model, the effect of sport disappears entirely. The p-value for age is < 2e-16 and the p-value for sport is
0.441, which is not significant.
The conclusions reached in Barnwell's article are at the very least misleading; at worst, they understate the potential dangers of playing football. Is it possible, as Barnwell suggests, that
baseball players die at a younger age than football players? I suppose, but I think it's unlikely. In any case the phenomenon is not demonstrated in Barnwell's data.
To reiterate what we've found: A baseball player who enjoyed at least a five-year career (with one of those years falling between 1959 and 1988) is no more likely to be dead right now than a football
player of the same age who enjoyed at least a five-year career in the same time span.
While it is true that baseball players from this time period are more likely to be deceased than their football counterparts, I have demonstrated here that it is not because they played baseball;
it's because they are older. It turns out that your age is a more significant predictor of being dead than the sport you played.
Greg Matthews is a post-doctoral research fellow in biostatistics at the University of Massachusetts. He blogs at Stats in the Wild, where this post originally was published.
Basic Statistics A Primer For The Biomedical... Textbook Solutions | Chegg.com
Using Minitab, we can construct a histogram by using the menu option “Graph > Histogram”. We may then change some of the options to create the look that we prefer in a histogram, including labels and
tick marks (this can be done after the histogram is created by right-clicking on different portions of the graph, say the x axis).
Use the following MINITAB instruction set to construct a histogram.
1) Type or import the data into a column named "C9".
2) Select Graph > Histogram.
3) Specify the column "C9" in the Graph variables box.
4) Click OK.
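For readers without Minitab, the binning step a histogram performs can be sketched in a few lines of Python; the data values and bin edges below are invented for illustration:

```python
# Manual binning, the arithmetic behind a histogram: count how many
# values fall in each half-open bin [edges[i], edges[i+1]).
data = [4.2, 5.1, 5.8, 6.0, 6.3, 7.1, 7.4, 8.2, 8.8, 9.5]   # stand-in for C9
edges = [4, 6, 8, 10]                                        # three bins

counts = [0] * (len(edges) - 1)
for x in data:
    for i in range(len(edges) - 1):
        if edges[i] <= x < edges[i + 1]:
            counts[i] += 1
            break
print(counts)  # → [3, 4, 3]
```

Plotting the resulting counts as bars over the bin edges gives the same picture Minitab draws.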
Patent US7438689 - Method for arbitrary two-dimensional scaling of phonocardiographic signals
The invention relates to a method for assisting medically trained persons in diagnosing and classifying medical conditions of the heart by means of auscultation. It also relates to a method and
apparatus for transforming the information content of a phonocardiographic signal to a frequency range and duration commensurate with a communication channel having properties that fall outside the
normal frequency range of phonocardiographic signals.
Signals obtained by means of a transducer are phonocardiographic representations of sounds traditionally listened to by means of a stethoscope. Training in auscultation takes a long time and requires
an aptitude for recognising and classifying aural cues, frequently in a noisy environment. 20-30 different conditions may need to be differentiated, and within each, the severity evaluated.
Furthermore, there may be combinations among these. These factors contribute to explaining why not all physicians perform equally well when diagnosing heart conditions, and why it may be
Use of modern signal processing has been proposed to enhance the relevant aural cues, and in some cases it has been demonstrated that a transposition into a different frequency range or repetition of
a repetitive signal at a different repetition frequency from that originally recorded may be advantageous. Among the patents that describe this kind of signal processing the following may be
U.S. Pat. No. 4,220,160 uses the heart sound picked up by a microphone to modulate a higher and more audible carrier frequency. U.S. Pat. No. 4,424,815 is a very fundamental patent employing a range
of solutions, from magnetic drum (i.e. a non-digital procedure) to a tapped delay line for stretching time while conserving frequency content. U.S. Pat. No. 4,528,689 describes a time stretching
digital stethoscope that records several cycles, and obtains a “slow-down factor” proportional to the number of cycles recorded. Each part of a sound between zero crossings is repeated with opposite
polarity, thereby obtaining a signal that sounds like original signals but lasting longer. Only whole-number factors are possible. U.S. Pat. No. 4,792,145 describes how frequency scaling is obtained
by multiplying each Fourier spectral component by a factor. The original relationship between frequency components is retained, and it is purported that the audibility is improved. Similarly, the
time may be scaled.
All of the above solutions have severe limitations, either because of an increase in non-linear distortion and noise, or because the factor for stretching can only be whole-numbered.
It has been proposed to use an algorithm based on Matching Pursuit, such as in Xuan Zhang et al. “Time-Frequency scaling transformation of the phonocardiogram based on the matching pursuit method”,
IEEE Trans. Biom. Eng. Vol. 45, No. 8, pp. 972-979 in view of Xuan Zhang et al. “Analysis-synthesis of the phonocardiogram based on the matching pursuit method”, idem pp. 962-971 (August 1998).
However, this procedure is an iterative algorithm that relies on the modelling of the signal as a sum of functions selected from a library of functions that has to be searched several times for each
signal segment. It is hence a very time-consuming procedure, and in practice the control over the individual phases of the segments required for the stability of the scaled signal is questionable.
According to the invention, these disadvantages may be avoided by synthesizing a "clean" signal consisting of a sum of sinusoids and performing a transformation on these sinusoids so that the relative amplitudes are maintained while the frequency range or time axis is changed over a continuous range of scales. In a further refinement of this method, the scale may be linked to other phenomena by means of an auto-scale function.
An embodiment is particular in that the signals are converted on a running basis to a sinusoidal model representation, that a time and/or frequency axis scaling is defined and used to control
the amplitudes and phases of the sinusoids, which are subsequently added to create a time and/or frequency scaled representation of the phonocardiographic signal. The basis of the invention is the
method in which a correct amplitude and phase adjustment of the contributing sine generators is obtained.
An advantageous embodiment of the invention is particular in that it comprises the steps of obtaining the frequency content of the signal by applying a Short Time Fourier Transform with overlapping
segments, a Discrete Fourier Transform being performed on each segment, performing a frequency peak search on each segment by consecutive removal of the spectral components having the highest energy
content, identifying each peak by its frequency value repeating the peak search, until a maximum number of peaks have been identified or until the energy content of the last peak is below a preset
minimum, establishing a segment-by-segment map of spectral peaks, said peaks forming a track over time, optionally subjecting the frequency values of the spectral peaks to a multiplication, defining
a synthesis frame of time, based on said segments, optionally subjecting each frame to a multiplication of the time scale, adjusting the phase of sine generators centered on the frequencies of the
tracks, adjusting the amplitudes of said sine generators, and summing the outputs of all sine generators active at any one instant for a given frame length T.
A more general embodiment is particular in that it includes the following steps: obtaining the frequency content of the signal by applying a Short Time Fourier Transform with overlapping segments, a
Discrete Fourier Transform being performed on each segment, performing a frequency peak search on each segment by repeated removal of the highest spectral “hills” identifying each peak by its
frequency value, repeating the peak search until a maximum number of peaks have been identified or until the peak level of the last peak is below a preset minimum, establishing a segment-by-segment
map of spectral peaks, said peaks forming a track over time, adjusting the phase of sine generators centered on the frequencies represented by the tracks, summing the outputs of all sine generators
active at any one instant for a given frame length T, creating a continuous output signal by joining consecutive frames. In the present embodiment use is made of the fact that the estimate of the
importance of a peak (a “hill”) may be obtained by other means than an energy measure, i.e. a different sorting criterion.
A further advantageous embodiment for scaling a phonocardiographic signal on the frequency axis is particular in that the spectral peaks are multiplied by a factor q.
A further advantageous embodiment for scaling a phonocardiographic signal on the time axis by a desired factor is particular in that the frame length is multiplied by a factor p.
A further advantageous embodiment for autoscaling a phonocardiographic signal on the time axis is particular in that the scaling factor p is set such that the frame length multiplying factor is equal
to the heart rate divided by 60. This means that a stability of the signal will be obtained irrespective of changes in the heart rate.
An advantageous embodiment of the invention that ensures sufficient resolution and precision in synthesis is particular in that the number of sine generators is maximum 50 in any one frame.
The invention also relates to an apparatus for performing the method, being particular in comprising means for windowing the time function, short time Fourier spectrum analysis means, means for
searching and classifying spectral peaks, means for comparing phases of signals corresponding to said spectral peaks, means for controlling the phases of sine generators providing signals
corresponding to said spectral peaks, means for controlling the amplitudes of said sine generators, and means for summing the signals of said controlled sine generators in order to obtain a
synthesized and essentially noise free output signal representative of said time function.
The invention will be more fully described in the following with reference to the drawing, in which
FIG. 1 shows a block diagram representative of the operations performed on an input signal in order that a sinusoidal model is obtained for the signal, which is summed to obtain an output signal,
FIG. 2 shows the procedure of block segmentation,
FIG. 3 shows the estimation of a finite number of spectral peaks,
FIG. 4 shows results of the peak search,
FIG. 5 shows the matching procedure,
FIG. 6 shows the final results of the peak matching,
FIG. 7 shows boundary conditions for the interpolation functions,
FIG. 8 shows interpolation of the instantaneous phase function and amplitude function,
FIG. 9 shows a spectrogram of the original heart sound,
FIG. 10 shows a spectrogram of the sinusoidal coded heart sound,
FIG. 11 shows a tracking pattern for the sinusoidal coded heart sound,
FIG. 12 shows a Time-Frequency scaling procedure,
FIG. 13 shows a spectrogram of a frequency shifted heart sound (p=1,q=1,r=100 Hz),
FIG. 14 shows a spectrogram of a frequency scaled heart sound (p=1,q=2,r=0),
FIG. 15 shows a spectrogram of a non-scaled sinusoidal coded heart sound (p=1,q=1,r=0),
FIG. 16 shows a spectrogram of a joint time-frequency scaled heart sound (p=2,q=2, r=0),
FIG. 17 shows a spectrogram of a time scaled heart sound (p=2,q=1,r=0), and
FIG. 18 shows a spectrogram of the original heart sound (p=1,q=1,r=0), but using a different time axis.
The mathematical foundation for this procedure is described in the following paragraphs.
The sinusoidal model was originally developed for speech applications by McAulay and Quatieri and a thorough description can be found in Robert J. McAulay and Thomas F. Quatieri, “Speech Analysis/
Synthesis Based On A Sinusoidal Representation”, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-34, NO. 4, August 1986, page 744-754, and Thomas F. Quatieri and Robert J.
McAulay, “Speech Transformations Based On A Sinusoidal Representation”, IEEE Transactions on Acoustics, Speech, and Signal Processing, Vol. ASSP-34, NO. 6, December 1986, page 1449-1464. Because the
spectral structure of a speech signal and a heart signal are very different (a heart beat does not have pitch like voiced speech, nor a broadband structure like unvoiced speech), the original
sinusoidal model will not be able to model a heart beat properly. A sinusoidal model specific for heart sounds is developed by exploring the spectral characteristics of the heart beat, when tracking
spectral components in the signal.
The sinusoidal model is based upon sinus functions. The synthesis part of the model consists in adding a group of sinusoidal functions:
$y(t) = \sum_{k=1}^{L(t)} A_k(t)\,\cos(\Omega_k(t)) \qquad \text{Eq. (1)}$
The number of functions L(t) will change as a function of time, depending on the complexity of the signal. Each sinusoidal function (frequency track) is controlled by the total phase function Ω(t) and the amplitude A(t). The main objective of the analysis part of the model is to establish the content of these two functions.
A fundamental relation for the sinusoidal model is the relationship between total phase and the instantaneous frequency:
$\omega(t) = \frac{d}{dt}\,\Omega(t)\ [\mathrm{rad/s}], \qquad \Omega(t) = \int_{-\infty}^{t} \omega(\tau)\,d\tau + \varphi_{\mathrm{offset}}\ [\mathrm{rad}] \qquad \text{Eq. (2)}$
Substituting the expression for the total phase into eq. (1), the final expression for the sinusoidal model is obtained:
$y(t) = \sum_{k=1}^{L(t)} A_k(t)\,\cos\!\left(\int_{-\infty}^{t} \omega_k(\tau)\,d\tau + \varphi_k\right) \qquad \text{Eq. (3)}$
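To make Eq. (3) concrete, the synthesis side can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the patented implementation; the function name and the simple rectangular-rule integration of the instantaneous frequency are assumptions.

```python
import numpy as np

def synthesize(tracks, fs, duration):
    """Eq. (3): y(t) = sum_k A_k(t) * cos( integral of w_k(tau) dtau + phi_k ).

    tracks: list of (A, w, phi) where A(t) and w(t) [rad/s] are callables
    evaluated on the sample grid.  Illustrative sketch only.
    """
    t = np.arange(int(duration * fs)) / fs
    dt = 1.0 / fs
    y = np.zeros_like(t)
    for A, w, phi in tracks:
        # Total phase Omega(t) = integral of w(tau) dtau + phi_offset (Eq. 2),
        # approximated here by a running rectangular-rule sum.
        Omega = np.cumsum(w(t)) * dt + phi
        y += A(t) * np.cos(Omega)
    return t, y
```

With constant amplitude and zero frequency a track reduces to the constant cos(phi); time-varying A(t) and w(t) give the general model, with L(t) realized by activating and deactivating tracks.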
The structure of a traditional sinusoidal model is illustrated in FIG. 1. The input signal x(t) is obtained from a sensor for heart sounds placed in contact with the body under examination. The
output signal y(t) is made available to the medically trained person via e.g. headphones after suitable amplification. Furthermore it may be used for a visual display and comparison of frequency
spectra on a display screen.
The model may be divided into three parts: an Analysis part where the signal parameters are measured, an Interpolation and transformation part, where Ω(t) and A(t) are interpolated from one window
position to the next window position, and a Synthesis part where the transformed heart signal is reconstructed.
The apparatus, the principle of which is shown in FIG. 1, essentially comprises means for windowing the time function, short time Fourier spectrum analysis means 1, means for searching and
classifying spectral peaks 2, 3, means 4 for comparing phases of signals corresponding to said spectral peaks, means for controlling the phases of sine generators 6 providing signals corresponding to
said spectral peaks, means 5, 7 for controlling the amplitudes of said sine generators, and means 8 for summing the signals of said controlled sine generators in order to obtain a synthesized and
essentially noise free output signal representative of said time function. The content of each block of FIG. 1 will be elaborated more in the following sections.
The first step of the Analysis is to measure the frequency content of the signal. As the frequency content is changing as a function of time, a Short Time Fourier Transform (STFT) is used to track
the frequency content. The STFT segments the signal into overlapping segments (75%), and each segment is Fourier transformed using a Discrete Fourier Transform (DFT). The segmentation procedure is
illustrated in FIG. 2.
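A minimal sketch of this segmentation-plus-DFT step (not the patented implementation): win_len = 64 and shift = 4 mirror the example parameters listed at the end of the description, and the Gaussian window width sigma is an assumption. Note that a 4-sample shift of a 64-sample window gives even more overlap than the 75% quoted here.

```python
import numpy as np

def stft_segments(x, win_len=64, shift=4, sigma=0.15):
    """Split x into overlapping Gaussian-windowed segments and DFT each one."""
    n = np.arange(win_len)
    w = np.exp(-0.5 * ((n - (win_len - 1) / 2) / (sigma * win_len)) ** 2)
    starts = range(0, len(x) - win_len + 1, shift)
    segs = np.array([x[s:s + win_len] * w for s in starts])
    return np.fft.rfft(segs, axis=1)  # one spectrum per window position m
```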
When a signal segment has been Fourier transformed, the most dominant frequency component in the spectrum has to be found, and used as local “check points” for a frequency track over time. The
original sinusoidal model finds every peak in the spectrum, and associates a sinusoidal generator to each peak. In the case of a voiced speech signal, the signal is periodic and the corresponding
spectrum will be discrete (pitch). As the spectrum is discrete, the known peak search procedure will find a large number of peaks. However, a heart beat is non-periodic and has a nature like a
Gaussian function modulated by a low-frequency sinusoid. The corresponding spectrum is also Gaussian, and the known peak search procedure will model this heart beat by only one sinusoidal function,
where the amplitude is equal to the peak level. This will not model the heart beat correctly, compared to the remaining spectrum. However, if the amplitude of the sinusoidal function instead is set
equal to the area below the curve defining the “hill” corresponding to the peak, the heart beat will be properly modelled. The peak searching procedure is illustrated in FIG. 3.
First the index for the maximum peak is found, FIG. 3 a. Secondly, the area of the “hill” is estimated, FIG. 3 b, and the complete “hill” is deleted from the spectrum, FIG. 3 c. The procedure is
repeated with the modified spectrum, until a maximum number of peaks has been found, or the peak level is below a pre-defined threshold. For certain heart sounds as many as 50 peaks corresponding to
50 sinusoids may be found.
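The hill-based peak search of FIG. 3 can be sketched as follows. This is illustrative only: the way hill edges are located (walking downhill to the nearest local minima) and the default thresholds are assumptions.

```python
import numpy as np

def hill_peaks(spec, max_peaks=50, min_level=1e-3):
    """Iteratively pick the largest bin, use the area of its 'hill' as the
    sinusoid amplitude, delete the hill, and repeat (cf. FIG. 3 a-c)."""
    s = np.asarray(spec, dtype=float).copy()
    peaks = []
    for _ in range(max_peaks):
        k = int(np.argmax(s))
        if s[k] < min_level:                          # peak level too low
            break
        lo = k
        while lo > 0 and s[lo - 1] < s[lo]:           # walk down left flank
            lo -= 1
        hi = k
        while hi < len(s) - 1 and s[hi + 1] < s[hi]:  # walk down right flank
            hi += 1
        peaks.append((k, s[lo:hi + 1].sum()))         # (frequency bin, hill area)
        s[lo:hi + 1] = 0.0                            # delete the whole hill
    return peaks
```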
The result from the peak search is a table of peaks for each position of the window function. The table corresponding to window position m will be termed spec m. The distance between each window
position is T seconds. This is illustrated in FIG. 4.
The peaks in spec m now have to be matched together with peaks in spec m+1, in order to construct a frequency track that the total phase function Ω(t) must follow. Many different solutions for this matching problem have been proposed in the literature, as no optimal solution exists. The matching strategy used in the present invention is based on the following three rules:
- high energy peaks are matched first,
- tracks are not allowed to cross,
- the frequency differences between two matched peaks must be below certain limits.
First, the procedure finds the maximum peak in spec m. This peak has to be matched with a peak in spec m+1. For this purpose, the procedure searches spec m+1 for the maximum peak within lower and
upper limits. These limits are controlled by two factors. The distance to the nearest peak in spec m and the maximum allowed frequency change during T seconds. Before a match is accepted, it must be
verified, i.e. it must not cross a previously found match. When the final match is verified, the two peaks are no longer visible to the matching procedure. Unmatched peaks in spec m are declared
dead, and unmatched peaks in spec m+1 are declared born. The procedure is illustrated in FIG. 5, and an example of the matching process is illustrated in FIG. 6.
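A greedy version of this matching step, honouring the three rules, might be sketched as follows. It is illustrative only: the peak representation, the crossing test, and the choice of the strongest candidate are assumptions.

```python
def match_peaks(spec_m, spec_m1, max_df=80.0):
    """Match peaks (freq, energy) of window m to window m+1: strongest peaks
    first, no crossing tracks, frequency jump limited to max_df.
    Unmatched peaks in spec_m die; unmatched peaks in spec_m1 are born."""
    matches = []
    free_m = set(range(len(spec_m)))
    free_m1 = set(range(len(spec_m1)))
    for i in sorted(free_m, key=lambda i: -spec_m[i][1]):  # high energy first
        f = spec_m[i][0]
        # candidates within the allowed frequency change ...
        cands = [j for j in free_m1 if abs(spec_m1[j][0] - f) <= max_df]
        # ... that would not cross an already accepted match
        cands = [j for j in cands
                 if all((spec_m[a][0] - f) * (spec_m1[b][0] - spec_m1[j][0]) > 0
                        for a, b in matches)]
        if cands:
            j = max(cands, key=lambda j: spec_m1[j][1])
            matches.append((i, j))
            free_m.discard(i)
            free_m1.discard(j)
    return matches, sorted(free_m), sorted(free_m1)  # matches, dead, born
```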
The synthesis will be designed for T seconds at a time, that is from window position m to window position m+1, and this T second reconstructed signal will be termed frame m, with a frame length T.
The number of born, continued, and dead tracks in frame m will be termed L1(m), L2(m), and L3(m). For each frame m, the born, continued and dead tracks are synthesized separately and added to form
the final signal:
$y^m(t) = y_1^m(t) + y_2^m(t) + y_3^m(t) = \sum_{k_1=1}^{L_1(m)} A_{k_1}^m(t)\cos(\Omega_{k_1}^m(t)) + \sum_{k_2=1}^{L_2(m)} A_{k_2}^m(t)\cos(\Omega_{k_2}^m(t)) + \sum_{k_3=1}^{L_3(m)} A_{k_3}^m(t)\cos(\Omega_{k_3}^m(t)) \qquad \text{Eq. (4)}$
Each track k in frame m has to be synthesised by a sinusoidal function $A_k(t)\cos(\Omega_k(t))$. The track is up to now only specified by the start and end points. This is illustrated in FIG. 7.
$A_k^m$ is the amplitude of the Fourier coefficient, $\omega_k^m$ is the frequency of the Fourier coefficient, and $\varphi_k^m$ is the phase offset for the current track. The phase offset is increased by an average phase step dependent on the start and stop frequency of the track (see FIG. 8). The original model is based on the phase of the Fourier coefficients. The amplitude is interpolated by $A_k(t)$, and the instantaneous phase is interpolated by $\Omega_k(t)$. The interpolation functions are only constrained by the start and stop conditions:
$A_k(0) = A_k^m, \quad A_k(T) = A_k^{m+1}, \quad \Omega_k(0) = \varphi_k^m, \quad \Omega_k(T) = \varphi_k^m + T\cdot\frac{\omega_k^m + \omega_k^{m+1}}{2}, \quad \Omega_k'(0) = \omega_k^m, \quad \Omega_k'(T) = \omega_k^{m+1} \qquad \text{Eq. (5)}$
The amplitude function is not critical, so a linear interpolation function will be used for A[k](t). However, the instantaneous phase function Ω[k](t) is more critical because any discontinuity in
the phase progress will deteriorate the perceptual quality of the final sound. In the following, the synthesis of born, continued, and dead tracks will be designed separately.
In the case of a continued track, a second order polynomial will be used to interpolate a smooth phase progress during a frame:
$\Omega(t) = a_1 + a_2\cdot t + a_3\cdot t^2 \qquad \text{Eq. (6)}$
This is illustrated in FIG. 8.
The coefficients are determined by inserting the conditions from Eq. (5) into Eq. (6):
$\Omega(0) = a_1 = \varphi_k^m, \quad \Omega'(0) = a_2 = \omega_k^m, \quad \Omega(T) = a_1 + a_2\cdot T + a_3\cdot T^2 = \varphi_k^m + T\cdot\frac{\omega_k^m + \omega_k^{m+1}}{2} \;\Rightarrow\; a_3 = \frac{\omega_k^{m+1} - \omega_k^m}{2T} \qquad \text{Eq. (7)}$
The linear interpolation of the amplitude function is achieved by:
$A_k^m(t) = A_k^m - t\cdot\frac{A_k^m - A_k^{m+1}}{T} \qquad \text{Eq. (8)}$
The final equations for the synthesis of a continued track k in frame m is given by:
Continued Track
$\Omega_k^m(t) = \varphi_k^m + \omega_k^m\cdot t + \frac{\omega_k^{m+1} - \omega_k^m}{2T}\cdot t^2, \qquad A_k^m(t) = A_k^m - t\cdot\frac{A_k^m - A_k^{m+1}}{T} \qquad \text{Eq. (9)}$
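As a sketch, the continued-track synthesis of Eq. (9) amounts to evaluating a quadratic phase and a linear amplitude over one frame; the names and the sampling convention are illustrative.

```python
import numpy as np

def continued_track(A0, A1, w0, w1, phi0, T, fs):
    """Eq. (9): quadratic phase and linear amplitude for one continued track
    over a frame of length T (frequencies w0, w1 in rad/s)."""
    t = np.arange(int(T * fs)) / fs
    Omega = phi0 + w0 * t + (w1 - w0) / (2 * T) * t ** 2  # quadratic phase
    A = A0 - t * (A0 - A1) / T                            # linear amplitude
    return A * np.cos(Omega), Omega
```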
Two special cases remain to be considered. When a track is declared dead or born, we do not have both start and end conditions for it.
In the case of a born track, an instantaneous phase function will be initiated from the beginning of frame m, with $A_k^m = 0$, $\omega_k^m = \omega_k^{m+1}$ and $\varphi_k^m = 0$, and the second order instantaneous phase function is reduced to a linear function:
Born Track:
$\Omega_k^m(t) = t\cdot\omega_k^{m+1}, \qquad A_k^m(t) = t\cdot\frac{A_k^{m+1}}{T} \qquad \text{Eq. (10)}$
When a track is declared dead, the track is terminated at the end of frame m using $A_k^{m+1} = 0$, $\omega_k^{m+1} = \omega_k^m$ and $\varphi_k^{m+1} = T\cdot\omega_k^m + \varphi_k^m$, and the second order instantaneous phase function is again reduced to a linear function:
Dead Track:
$\Omega_k^m(t) = t\cdot\omega_k^m + \varphi_k^m, \qquad A_k^m(t) = \frac{A_k^m}{T}(T - t) \qquad \text{Eq. (11)}$
In order to explain more fully the sinusoidal model according to the invention it is prepared for implementation in a mathematical simulation program, such as MatLab (™), and the interpolation
functions are hence converted to discrete index. The discrete time version is obtained using the following substitutions:
$t = n\cdot\Delta, \qquad T = N\cdot\Delta, \qquad \omega_k^m = \frac{v_k^m}{N_{DFT}}\cdot\frac{2\pi}{\Delta} \qquad \text{Eq. (12)}$
These substitutions are now inserted into the main equations for the sinusoidal model.
$\Omega_k^m[n] = \Omega_k^m(n\Delta) = a_1 + a_2\,n\Delta + a_3\,n^2\Delta^2 = \varphi_k^m + \frac{v_k^m}{N_{DFT}}\cdot\frac{2\pi}{\Delta}\cdot n\Delta + \frac{\frac{v_k^{m+1}}{N_{DFT}}\cdot\frac{2\pi}{\Delta} - \frac{v_k^m}{N_{DFT}}\cdot\frac{2\pi}{\Delta}}{2(N\cdot\Delta)}\,n^2\Delta^2 = \varphi_k^m + n\cdot v_k^m\cdot\frac{2\pi}{N_{DFT}} + n^2\cdot(v_k^{m+1} - v_k^m)\,\frac{\pi}{N\cdot N_{DFT}} \qquad \text{Eq. (13)}$
The time index n is running from 0 to N. The discrete amplitude function is given by:
$A_k^m[n] = A_k^m(n\cdot\Delta) = A_k^m - n\cdot\Delta\cdot\frac{A_k^m - A_k^{m+1}}{N\cdot\Delta} = A_k^m - n\cdot\frac{A_k^m - A_k^{m+1}}{N} \qquad \text{Eq. (14)}$
The final equations for a discrete-time continued track is:
Discrete Continued Track:
$\Omega_k^m[n] = \varphi_k^m + n\cdot v_k^m\cdot\frac{2\pi}{N_{DFT}} + n^2\cdot(v_k^{m+1} - v_k^m)\,\frac{\pi}{N\cdot N_{DFT}}, \qquad A_k^m[n] = A_k^m - n\cdot\frac{A_k^m - A_k^{m+1}}{N} \qquad \text{Eq. (15)}$
The discrete function for a born and dead track can likewise be found following the same approach:
Discrete Born Track:
$\Omega_k^m[n] = n\cdot\Delta\cdot\frac{v_k^{m+1}}{N_{DFT}}\cdot\frac{2\pi}{\Delta} = n\cdot v_k^{m+1}\cdot\frac{2\pi}{N_{DFT}}, \qquad A_k^m[n] = n\cdot\Delta\cdot\frac{A_k^{m+1}}{N\cdot\Delta} = n\cdot\frac{A_k^{m+1}}{N} \qquad \text{Eq. (16)}$
Discrete Dead Track:
$\Omega_k^m[n] = n\cdot\Delta\cdot\frac{v_k^m}{N_{DFT}}\cdot\frac{2\pi}{\Delta} + \varphi_k^m = n\cdot v_k^m\cdot\frac{2\pi}{N_{DFT}} + \varphi_k^m, \qquad A_k^m[n] = \frac{A_k^m}{N\cdot\Delta}(N\cdot\Delta - n\cdot\Delta) = \frac{A_k^m}{N}(N - n) \qquad \text{Eq. (17)}$
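The three discrete track types of Eqs. (15)-(17) can be sketched in one function; argument names are illustrative.

```python
import numpy as np

def discrete_track(kind, n_dft, N, v0=0, v1=0, A0=0.0, A1=0.0, phi0=0.0):
    """Synthesize one track over a frame of N+1 samples (Eqs. 15-17).
    v0, v1 are DFT bin indices at window positions m and m+1."""
    n = np.arange(N + 1)
    if kind == "continued":   # Eq. (15)
        Omega = (phi0 + n * v0 * 2 * np.pi / n_dft
                 + n ** 2 * (v1 - v0) * np.pi / (N * n_dft))
        A = A0 - n * (A0 - A1) / N
    elif kind == "born":      # Eq. (16)
        Omega = n * v1 * 2 * np.pi / n_dft
        A = n * A1 / N
    else:                     # dead, Eq. (17)
        Omega = n * v0 * 2 * np.pi / n_dft + phi0
        A = A0 * (N - n) / N
    return A * np.cos(Omega)
```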
The result of coding a heart sound with severe systolic murmurs is illustrated in FIGS. 9, 10, and 11. FIG. 11 shows the track pattern that is used to reconstruct the heart sound and clearly
illustrates how tracks are born, are continued, and die, according to the spectral content of the original heart sound, and in this respect FIG. 11 is a practical demonstration of the principle shown
in FIG. 6.
Using the novel sinusoidal model described above, we are now in a position to make modifications to the original signal, which would normally be very difficult to obtain. As the model contains a very
precise description of how the frequency content of the signal is changing over time, it provides a good foundation for making signal transformations like time stretching (without changing the
frequency structure), frequency stretching (without changing the time progress), and joint time frequency stretching.
The sinusoidal model may be extended to a time-frequency scaled sinusoidal model with small modifications. The joint time frequency scaling is obtained by mapping track information at time t and frequency ω in the original model to a new time position $\tilde{t} = p\cdot t$, where p is the time scaling factor, and a new frequency position $\tilde{\omega} = q\cdot\omega$, where q is the frequency scaling factor. A pure frequency scaled model may be obtained by setting p=1, and a pure time scaling model may be obtained by setting q=1. The procedure is illustrated in FIG. 12.
The time scaling is implemented in the sinusoidal model by changing the frame T to p·T in the interpolation functions Ω(t) and A(t). Frequency scaling can be implemented by shifting the peaks found
by the peak searching procedure from ω to q·ω:
$\tilde{T} = p\cdot T, \qquad \tilde{\omega} = q\cdot\omega \qquad \text{Eq. (18)}$
When $q\cdot\omega$ crosses half the sampling frequency, the track is removed. Every time a born or continued track has been synthesized, the phase offset $\varphi_k^{m+1}$ has to be updated to:
$\varphi_k^{m+1} = \Omega_k^m(T) \qquad \text{Eq. (19)}$
Other frequency modifications will also be possible. Heterodyne modification is obtained by shifting peaks from ω to ω+r, where r is the amount of the spectrum movement.
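A sketch of the scaling step: Eq. (18) plus the heterodyne shift amount to a simple remapping of the track parameters, with any track pushed past half the sampling frequency removed. Names are illustrative; the 1 kHz sampling rate matches the example parameters given below.

```python
def scale_tracks(freqs_hz, frame_T, p=1.0, q=1.0, r=0.0, fs=1000.0):
    """T -> p*T (time scaling), f -> q*f + r (frequency scaling/heterodyne).
    Tracks at or above the Nyquist frequency fs/2 are removed."""
    new_T = p * frame_T
    kept = [q * f + r for f in freqs_hz if q * f + r < fs / 2]
    return kept, new_T
```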
The time-scaled sinusoidal model may form the foundation for a method for auto-scaling a heart sound signal, which scales arbitrary heart sounds to 60 beats per minute (bpm). This autoscaling
capability is obtained using a scale factor p:
$p = \frac{h}{60} \qquad \text{Eq. (20)}$
where h is the heart rate. Because of the sample rate in the Fourier transforms there is no problem in following a changing heart rate dynamically. The usefulness lies in the fact that a stressed
heart will display a change in the spectral content of the heart sound, and focusing on this phenomenon may be simplified if the simultaneous increase in heart rate may be disregarded.
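Eq. (20) itself is a one-liner (illustrative):

```python
def autoscale_factor(heart_rate_bpm):
    """Eq. (20): time-scale factor p = h/60 that maps any heart rate to 60 bpm."""
    return heart_rate_bpm / 60.0
```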
In FIGS. 13 to 18, the results of applying the above described procedure are shown when used on a heart sound.
One particular use of the procedure relates to the transmission of very low frequency signals over a channel with a limited bandwidth centered much higher than the low frequency signals, such as
heart sound signals to be transmitted over a traditional telephone line with the bandwidth 300-3400 Hz. Traditionally this could be obtained by heterodyning only, in which harmonic relationships
between partials are destroyed. According to the present invention scaling to the frequency range of interest will preserve such harmonic relationships.
The basic parameters that apply to the procedure described above when used with arbitrary heart sounds may obviously be chosen by the skilled person according to specific needs. However, the examples
reproduced here are based on the following parameters:
Sampling frequency: 1 kHz
Window length: 64 samples
Window shift: 4 samples
Window type: Gaussian
Max. number of sines: 50
Max. frequency shift of tracks between neighbouring segments: 80 Hz
It will be understood that once the signal has been converted to digital representation of data, its manipulation may take place in dedicated processors, RISC processors or general purpose computers,
the outcome of the manipulation being solely dependent on the instructions performed on the data under the control of the program written for the processor in order to obtain the function. The
physical location of the data at any one instant (i.e. in varying degrees of processing) may or may not be related to a particular block in the block diagram, but the representation of the invention
in the form of interconnected functional blocks provides the skilled person with sufficient information to obtain the advantages of the invention.
The foregoing description of the specific embodiments will so fully reveal the general nature of the present invention that others skilled in the art can, by applying current knowledge, readily
modify or adapt for various applications such specific embodiments without undue experimentation and without departing from the generic concept, and therefore, such adaptations and modifications
should and are intended to be comprehended within the meaning and range of equivalents of the disclosed embodiments. It is to be understood that the phraseology or terminology employed herein is for
the purpose of description and not of limitation. The means, materials, and steps for carrying out various disclosed functions may take a variety of forms without departing from the invention.
Thus, the expressions “means to . . . ” and “means for . . . ”, or any method step language, as may be found in the specification above and/or in the claims below, followed by a functional statement,
are intended to define and cover whatever structural, physical, chemical, or electrical element or structure, or whatever method step, which may now or in the future exist which carries out the
recited functions, whether or not precisely equivalent to the embodiment or embodiments disclosed in the specification above, i.e., other means or steps for carrying out the same function can be
used; and it is intended that such expressions be given their broadest interpretation. | {"url":"http://www.google.es/patents/US7438689?dq=flatulence","timestamp":"2014-04-19T07:10:39Z","content_type":null,"content_length":"120454","record_id":"<urn:uuid:6e604bed-21e7-49bd-a445-ebbfec466934>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00507-ip-10-147-4-33.ec2.internal.warc.gz"} |
Forest Heights, MD Prealgebra Tutor
Find a Forest Heights, MD Prealgebra Tutor
...Berkeley in Zoology. I have a Ph.D. in Molecular Genetics. I am founder and eight years as director of a cutting-edge clinical genetics laboratory.
25 Subjects: including prealgebra, chemistry, reading, writing
...I am familiar with many of the student oriented educational sites available on the web, and have worked with students on the Jefferson Labs, NovaNET and Khanacademy computer assisted learning
systems. At this stage of my tutoring career, I have assisted students covering the range from top-of-...
13 Subjects: including prealgebra, chemistry, physics, calculus
...With well over 20 years of teaching/tutoring experience, I doubt there are many tutors as patient, talented or effective as I am. I bring a lot more to the table than what you see "on paper"
and am constantly told by my students how well I explain things. Though other tutors may be cheaper, I am more efficient due to my experience and explanations.
28 Subjects: including prealgebra, chemistry, calculus, physics
Hello, My name is Leona. I am a licensed Civil Engineer and I love helping others to succeed. My strength is in math.
32 Subjects: including prealgebra, English, reading, calculus
...Utilizing these three insights, I have tutored students in various subject areas as well as used these skills to become a better student. Prior to entering business school I worked as a consultant and was a mentor with the Essay Busters program, a volunteer organization which pairs young, worki...
16 Subjects: including prealgebra, chemistry, English, reading
Mathematicians Ppt Presentation
correlations of divisor sums related to primes, III: k-correlations, preprint (available at AIM preprints)
- Ann. of Math
"... Abstract. We prove that there are arbitrarily long arithmetic progressions of primes. ..."
- Collectanea Mathematica (2006), Vol. Extra., 37-88 (Proceedings of the 7th International Conference on Harmonic Analysis and Partial Differential Equations, El Escorial
"... Abstract. We describe some of the machinery behind recent progress in establishing infinitely many arithmetic progressions of length k in various sets of integers, in particular in arbitrary
dense subsets of the integers, and in the primes. 1. ..."
Abstract. Green and Tao proved that the primes contains arbitrarily long arithmetic progressions. We show that, essentially the same proof leads to the following result: If N is sufficiently large
and M is not too small compared with N, then the primes in the interval [N, N + M] contains many arithmetic progressions of length k. 1. | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=14510011","timestamp":"2014-04-18T07:09:00Z","content_type":null,"content_length":"15937","record_id":"<urn:uuid:acafbebb-57b3-4f53-81b4-a64340c2c25c>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00614-ip-10-147-4-33.ec2.internal.warc.gz"} |
Grade 8 » Geometry
Standards in this domain:
Understand congruence and similarity using physical models, transparencies, or geometry software.
Understand that a two-dimensional figure is congruent to another if the second can be obtained from the first by a sequence of rotations, reflections, and translations; given two congruent figures,
describe a sequence that exhibits the congruence between them.
Describe the effect of dilations, translations, rotations, and reflections on two-dimensional figures using coordinates.
Understand that a two-dimensional figure is similar to another if the second can be obtained from the first by a sequence of rotations, reflections, translations, and dilations; given two similar
two-dimensional figures, describe a sequence that exhibits the similarity between them.
Use informal arguments to establish facts about the angle sum and exterior angle of triangles, about the angles created when parallel lines are cut by a transversal, and the angle-angle criterion for
similarity of triangles.
For example, arrange three copies of the same triangle so that the sum of the three angles appears to form a line, and give an argument in terms of transversals why this is so.
Understand and apply the Pythagorean Theorem.
Apply the Pythagorean Theorem to determine unknown side lengths in right triangles in real-world and mathematical problems in two and three dimensions.
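The side-length and coordinate-distance computations described by this standard can be expressed directly; the helper names are illustrative.

```python
import math

def hypotenuse(a, b):
    """Unknown hypotenuse from the two legs: a^2 + b^2 = c^2."""
    return math.sqrt(a ** 2 + b ** 2)

def leg(c, a):
    """Unknown leg from the hypotenuse c and the other leg a."""
    return math.sqrt(c ** 2 - a ** 2)

def distance(p, q):
    """Distance between coordinate points (2D or 3D) via the Pythagorean Theorem."""
    return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
```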
Solve real-world and mathematical problems involving volume of cylinders, cones, and spheres.
Know the formulas for the volumes of cones, cylinders, and spheres and use them to solve real-world and mathematical problems. | {"url":"http://www.corestandards.org/Math/Content/8/G/","timestamp":"2014-04-20T10:57:33Z","content_type":null,"content_length":"43189","record_id":"<urn:uuid:10ab9ac6-16ef-4208-80f9-f3ea5e7df112>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
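The volume formulas named in this standard, written as short functions (illustrative):

```python
import math

def cylinder_volume(r, h):
    """V = pi * r^2 * h"""
    return math.pi * r ** 2 * h

def cone_volume(r, h):
    """V = pi * r^2 * h / 3 -- one third of the matching cylinder."""
    return math.pi * r ** 2 * h / 3

def sphere_volume(r):
    """V = 4/3 * pi * r^3"""
    return 4 / 3 * math.pi * r ** 3
```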
Direct constraints on the biasing field should be provided by the data themselves, of galaxy density (e.g., from redshift surveys) versus mass density (from peculiar velocity surveys, gravitational lensing, etc.). A hint of scatter in the biasing relation is the fact that the smoothed density peaks of the Great Attractor (GA) and Perseus Pisces (PP) are of comparable height in the mass distribution as recovered by POTENT from observed velocities [15, 13, 19], while PP is higher than GA in the galaxy maps [33, 52]. Another piece of indirect evidence for scatter comes from a linear regression of the 1200 km s^-1-smoothed density fields of POTENT mass and optical galaxies in our cosmological neighborhood, which yields χ² ~ 2 per degree of freedom [33]. One way to obtain a more reasonable χ² ~ 1 is to assume a biasing scatter of σ_b ~ 0.5 (while b_1 ~ 1, one has σ_b²/b_1² ~ 0.25). This is only a crude estimate; there is yet much to be done with future data along the lines of reconstructing the "biasing field" in a given region of space.
We have recently worked out a promising way to recover the mean biasing function b(δ) [53]. This method is inspired by a "de-biasing" technique by Narayanan & Weinberg [44]. If the biasing relation δ_g(δ) were deterministic and monotonic, it could be derived by matching the cumulative distribution functions of galaxies and of mass, C_g(δ_g) and C_δ(δ), via δ_g(δ) = C_g^{-1}[C_δ(δ)]. We find, using halos in N-body simulations, that this is a good approximation for the conditional mean ⟨δ_g|δ⟩ (Figure 2).
Figure 2. The PDFs and the mean biasing function, from a cosmological N-body simulation (σ_8 = 0.3 at z = 1) with top-hat smoothing of 8 h^-1 Mpc and for halos of M > 2 × 10^12 M_⊙. Left: the cumulative probability distributions C of density fluctuations of halos (δ_g) and of mass (δ). Right: the conditional mean ⟨δ_g|δ⟩ and the scatter σ_b^2 at the corresponding value of δ [53].
The other key point is that the cumulative PDF of mass density is relatively insensitive to the cosmological model or the power spectrum of density fluctuations [4, 5]. We find [53], using a series of N-body simulations of the CDM family of models in a flat or an open universe with and without a tilt in the power spectrum, that, compared to the differences between C_g and C_δ, the latter can always be properly approximated by a cumulative log-normal distribution of 1 + δ [5], up to deviations which may affect the skewness and higher moments but are of little concern for our purpose here. This means that in order to evaluate b(δ) one can adopt this approximation for C_δ, measure C_g(δ_g) from a galaxy density field, and add the rms σ_b
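The method sketched above, deriving the biasing relation by matching the two cumulative distributions (in symbols, δ_g = C_g^{-1}[C_δ(δ)]), can be illustrated on mock data. Everything below (the mock density field, the invented bias function, the sample sizes) is illustrative only and not taken from the cited work:

```python
import random
import bisect

random.seed(0)

# Mock "mass" density fluctuations delta > -1 (roughly log-normal, as in the text).
delta = [random.lognormvariate(0.0, 0.5) - 1.0 for _ in range(20000)]

# An invented deterministic, monotonic biasing relation delta_g(delta).
def true_bias(d):
    return 1.5 * d + 0.2 * d * abs(d)

delta_g = [true_bias(d) for d in delta]

# Quantile matching: the sample analogue of delta_g = C_g^{-1}[C_delta(delta)]
# is to pair equal ranks of the two sorted samples.
sorted_delta = sorted(delta)
sorted_delta_g = sorted(delta_g)

def recovered_bias(d):
    """Estimate delta_g(d) from the two empirical cumulative distributions."""
    i = min(bisect.bisect_left(sorted_delta, d), len(sorted_delta_g) - 1)
    return sorted_delta_g[i]
```

With a deterministic monotonic relation the recovery is essentially exact up to sampling noise; real scatter in the biasing would make the recovered curve an estimate of the conditional mean instead.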
Double Integral of Bivariate Normal Distribution
Hi all, is anyone able to show that the double integral of the bivariate normal distribution equals 1?
Thanks. I googled that before, but after trying their substitution I am not sure how to integrate, because the term is so complex. My apologies if my integration is pretty weak. Could anyone show me the first step after the substitution, and I'll continue from there? Thanks again.
Hi all, when doing this question, is it the case that both the means and standard deviations of x and y are constant?
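A numerical sanity check is also easy: integrate the bivariate normal density over a box wide enough that the tails are negligible and confirm the result is 1. The parameter values below (ρ = 0.6, unit variances, zero means) are arbitrary choices made for the check:

```python
import math

def bvn_pdf(x, y, rho=0.6, mx=0.0, my=0.0, sx=1.0, sy=1.0):
    """Bivariate normal density with correlation rho."""
    u, v = (x - mx) / sx, (y - my) / sy
    q = (u * u - 2 * rho * u * v + v * v) / (1 - rho * rho)
    return math.exp(-q / 2) / (2 * math.pi * sx * sy * math.sqrt(1 - rho * rho))

# Midpoint-rule double integral over [-8, 8]^2; the mass beyond 8 standard
# deviations is far below the tolerance we test against.
n, a, b = 400, -8.0, 8.0
h = (b - a) / n
total = 0.0
for i in range(n):
    x = a + (i + 0.5) * h
    for j in range(n):
        y = a + (j + 0.5) * h
        total += bvn_pdf(x, y)
total *= h * h
```

The analytic route is the substitution the other posters mention: rotating/rescaling to remove the correlation term reduces the integrand to a product of two univariate normal densities, each of which integrates to 1.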
Physics Forums - View Single Post - 90% done with this "evenly dstrbtd chrg on infinite line" problem. PLEASE help 10%
1. The problem statement, all variables and given/known data
A continuous line of charge lies along the x-axis, extending from x = +x_0 to positive infinity. The line carries positive charge with a uniform linear charge density λ_0.
What is the magnitude of the electric field at the origin? (Use λ_0, x_0, and k_e as necessary.)
2. Relevant equations
1) dE = (k_e dq) / r^2
2) dq = λ dx = (Q/L) dx
3. The attempt at a solution
I used the prior equations to set up: dE = (k_e Q / L) · dx/x^2.
Now time to integrate, but first a few questions. I understand "L" is the length of the entire rod? But so is x? Am I using L and x the right way, or should I put everything in terms of x? Second, what are the constants that I pull out of the integral? Since it goes to infinity, doesn't "L" (or x?) change, meaning I can't pull out the L? I understand k_e and Q are constant, so I pull them out; is that all? Also, am I integrating from 0 to infinity? Depending on what the set-up integral is, I understand that there is a possibility that when I integrate, infinity might end up in a denominator, making that part go to 0? I'm kind of confused; any help would be greatly appreciated!
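Not a substitute for working it out, but here is a numerical check of the standard result for this setup, E = k_e·λ_0/x_0. Since λ_0 is given, dq = λ_0 dx directly and no Q/L is needed; the limits run from x = x_0 to infinity, and the 1/x term evaluated at infinity contributes 0. The specific numbers below (λ_0, x_0, the cutoff) are made up for the check:

```python
KE = 8.99e9      # Coulomb constant k_e, N*m^2/C^2
LAM0 = 2.0e-6    # made-up linear charge density lambda_0, C/m
X0 = 0.5         # made-up distance x_0 from the origin to the rod's near end, m

def E_numeric(x0, lam0, cutoff=1.0e4, n=1_000_000):
    """Midpoint rule for the integral of k_e*lambda_0 dx / x^2 from x0 to cutoff.

    The neglected tail beyond the cutoff is exactly k_e*lambda_0/cutoff,
    which the large cutoff makes negligible relative to the total.
    """
    h = (cutoff - x0) / n
    total = 0.0
    for i in range(n):
        x = x0 + (i + 0.5) * h
        total += 1.0 / (x * x)
    return KE * lam0 * total * h

E_num = E_numeric(X0, LAM0)
E_exact = KE * LAM0 / X0   # closed form: E = k_e * lambda_0 / x_0
```

The two values agree to a fraction of a percent, with the small difference coming from the truncated tail and the finite step size.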
first and second differences
The question: given the graph of a parabola and the values h = 2, k = 3, determine its equation; the graph passes through the point A(0, 2). From this I got the equation y = -5(x - 2)^2 + 3.
Part (b) says to calculate the first and second differences of the equation.
y = -5(x - 2)^2 + 3. I'm going to assume that by "difference" you mean derivative.
First derivative: y' = -10(x - 2), via d/dx of z^n = n*z^(n-1), using z = (x - 2) and n = 2; the 3 gets dropped because the slope of a constant is 0. This simplifies to y' = -10x + 20.
The second derivative is y'' = -10, since the slope of y' is always -10. Hope that helps.
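For completeness: in the curriculum context this question likely comes from, "first and second differences" usually means differences of a table of values at equally spaced x, not derivatives. For a quadratic y = ax^2 + bx + c with step size 1, the second differences are constant and equal 2a, here 2(-5) = -10, which happens to coincide numerically with the second derivative given above. A quick check:

```python
def f(x):
    return -5 * (x - 2) ** 2 + 3

xs = range(0, 8)                      # equally spaced x values, step 1
ys = [f(x) for x in xs]               # table of values

# First differences: change in y between consecutive rows of the table.
first_diffs = [b - a for a, b in zip(ys, ys[1:])]
# Second differences: change in the first differences.
second_diffs = [b - a for a, b in zip(first_diffs, first_diffs[1:])]
```

The first differences change linearly (15, 5, -5, ...), while every second difference is -10, the usual signature that the underlying relation is quadratic.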
pkgsrc.se | The NetBSD package collection
wip/nightfall Application for a producing a best-fit model for a binary star
wip/silo Mesh and field I/O library and scientific databases
wip/qcdloop Repository of one-loop scalar Feynman integrals
wip/lcalc Lcalc is a program for calculating with L-functions
wip/gts GTS stands for the GNU Triangulated Surface Library
wip/py-mdp Modular toolkit for data processing
wip/gappa Formal tool for certifying numerical applications
wip/frobby Computations with monomial ideals
wip/py-csp Brings CSP (communicating sequential processes) to Python
wip/py-imread Imread: Image reading library
wip/cddlib Library for finding vertices of convex polytopes
wip/depsolver Multimaterial 3D electrostatic solver
wip/nsp Scientific software package similar to Matlab, Scilab, octave
wip/py-magnitude Python library for computing with numbers with units
wip/py-pykit-shared Collection of modules shared amongst my other projects
wip/py-bctpy Brain Connectivity Toolbox for Python
wip/py-julio Python Implementation of the Fractal Specification
wip/py-pykit-sci Collection of python modules for science
wip/schur Calculating properties of Lie groups and symmetric functions
wip/py-cinfony Common API for several cheminformatics toolkits
wip/py-oasa Python library for manipulation of chemical formats
wip/py-datarray NumPy arrays with named axes and named indices
wip/py-gsl Python interface for the GNU scientific library
wip/py-scikit-nano Python toolkit for generating nano-structures
wip/ised Tool for generating number sequences and arithmetic evaluation
wip/py-mois Applications for interactive visualization of numerical methods
wip/py-clnum Rational and arbitrary precision floating point numbers
wip/nauty Brendan McKays graph isomorphism tester
wip/libscscp Library of the Symbolic Computation Software Composibility Protocol
wip/libfplll Library for LLL-reduction of Euclidean lattices
wip/aten Aten is a tool to create, edit, and visualise coordinates
wip/mopac7 Semi-empirical Quantum Chemistry Library
wip/py-scikits_ann Approximate Nearest Neighbor library wrapper for Numpy
wip/reinteract Reinteract is a system for interactive experimentation with Python
wip/py-mahotas Mahotas: Computer Vision Library
wip/py-mpop Meteorological post processing package
wip/stksolver Stokes flow solver using the boundary element method
wip/py-gts Python bindings for GNU Triangulated Surface Library
wip/vamp VAMP is WHIZARDs adaptive multi-channel integrator
wip/py-pyflation Calculating cosmological perturbations during an inflationary
wip/cgnslib CFD General Notation System library code
wip/py-utilib_misc Miscellaneous PyUtilib utilities
wip/py-thermopy Some utilities for Thermodynamics and Thermochemistry
wip/py-pweave Scientific reports with embedded python computations
wip/py-qmath Provides a class for deal with quaternion algebra and 3D rotation
wip/m4ri M4RI is a library for fast arithmetic with dense matrices over F2
wip/py-fftw Python bindings to the FFTW3 C-library
wip/py-traits Manifest typing and reactive programming for Python
wip/libbrahe Heterogeneous C library of interesting numeric functions
wip/spyder Scientific PYthon Development EnviRonment
wip/faast Library for Fast Arithmetics in Artin-Schreier Towers
wip/elk All-electron full-potential linearised augmented-planewave
wip/pythontoolkit PythonToolkit (PTK) an interactive python environment
wip/py-scikits-bvp_solver Python package for solving two-point boundary value problems
wip/py-joblib Lightweight pipelining: using Python functions as pipeline jobs
wip/py-cclib Parsers and algorithms for computational chemistry
wip/libxc Libxc is the ETSF library of exchange-correlation functionals
wip/py-numpydoc Sphinx extension to support docstrings in Numpy format
wip/csp2b The csp2B Tool written in Moscow ML
wip/py-sdt_metrics Signal Detection Theory (SDT) metrics for Python
wip/py-algopy Taylor Arithmetic Computation and Algorithmic Differentiation
wip/py-django-helmholtz Framework for creating neuroscience databases
wip/py-pysph General purpose Smoothed Particle Hydrodynamics framework
wip/py-cosmolopy Python for Cosmology
wip/py-owslib OGC Web Service utility library
wip/py-pyqu PyQu is an extension module for Python to implement quantum algorithms
wip/fxrays Computes extremal rays with filtering
wip/maloc Minimal Abstraction Layer for Object-oriented C
wip/py-descartes Use geometric objects as matplotlib paths and patches
wip/py-topkapi TOPKAPI hydrological model in Python
wip/gmm Gmm++ is a generic C++ template library for sparse
wip/macaulay2 Macaulay2 a software system for research in algebraic geometry
wip/analizo Extensible source code analysis and visualization toolkit
wip/spai Sparse Approximate Inverses
wip/py-scikit-monaco Python modules for Monte Carlo integration
wip/py-lapack PyLapack is a python interface to LAPACK
wip/siscone C++ code for a Seedless Infrared-Safe Cone jet finder
wip/py-larry Label the rows, columns, any dimension, of your NumPy arrays
wip/py-chealpy Python Binding of chealpix
wip/py-gnm Python Gaussian Network Model
wip/py-pygr Pygr graph database for bioinformatics
wip/py-cvxopt Python software for convex optimization
wip/py-colorpy Handling physical descriptions of color and light spectra
wip/py-theano Optimizing compiler for evaluating mathematical expressions on CPU/GPU
wip/py-dana Framework for distributed, asynchronous and adaptive computing
wip/py-cdecimal Fast arbitrary precision correctly-rounded decimal floating point arithmetic
wip/py-corebio Python toolkit for computational biology
wip/py-forthon Fortran95 wrapper/code development package
wip/py-metrics Metrics for python files
wip/py-nodepy Numerical ODE solvers in Python
wip/xmds XMDS is a code generator that integrates equations
wip/py-netcdf4 Python/numpy interface to netCDF library (versions 3 and 4)
wip/py-griddata Interpolate irregularly spaced data to a grid
wip/py-constraint Python module implementing support for handling CSPs
wip/spatt Statistics for Patterns
wip/py-fipy Finite volume PDE solver in Python
wip/polymul Fast multivariate polynomial multiplication in C++
wip/pythia6 Program can be used to generate high-energy-physics events
wip/py-neurolab Simple and powerful neural network library for python
wip/py-delny Python module for delaunay triangulation
wip/py-pandas Pythonic cross-section, time series, and statistical analysis
wip/polylib The polyhedral library - long int version
wip/py-pebl Python Environment for Bayesian Learning
wip/py-scikits_learn Set of python modules for machine learning and data mining
wip/py-scikits_vectorplot Vector fields plotting algorithms
wip/py-pypol Python module to manage monomials and polynomials
wip/hepmc C++ Event Record for Monte Carlo Generators
wip/py-pyaiml PyAIML is an interpreter for AIML
wip/py-cythongsl Cython declarations for the Gnu Scientific Library
wip/fftjet Multiresolution jet reconstruction in the Fourier domain
wip/gerris Software the solution of the PDE describing fluid flow
wip/libpsurface Library that handles piecewise linear bijections
wip/dune-istl Iterative Solver Template Library
wip/bigdft Massively parallel electronic structure code using a wavelet basis set
wip/py-clics Clone detector and GUI
wip/lhapdf Les Houches Accord PDF library and interface
wip/py-biskit Python platform for structural bioinformatics
wip/py-utilib_component_core The PyUtilib Component Architecture
wip/linbox LinBox: exact computational linear algebra
wip/spade SPADE is an agent platform based on the XMPP/Jabber technology
wip/py-csa The Connection-Set Algebra implemented in Python
wip/dispred Tools for calculating DIS cross sections at LO and NLO in QCD
wip/py-pymvpa Multivariate pattern analysis
wip/probcons Probabilistic Consistency-based Multiple Alignment of Amino Acid Sequences
wip/ratpoints Optimized quadratic sieve algorithm
wip/py-bip Python package for object-oriented bayesian inference
wip/py-scitools Python library for scientific computing
wip/py-scikits_timeseries Time series manipulation
wip/py-diffpy-structure Crystal structure container and parsers for structure formats
wip/aokell AOKell is a Java implementation of the Fractal component model
wip/libnestedsums Library for the expansion of transcendental functions
wip/py-pyentropy Python module for estimation information theoretic quantities
wip/py-cogent Cogent A toolkit for statistical analysis of biological sequences
wip/petsc Portable, Extensible Toolkit for Scientific Computation
wip/triangle Two-Dimensional Quality Mesh Generator and Delaunay Triangulator
wip/dune-geometry Includes everything related to the DUNE reference elements
wip/py-opentmm OpenTMM is an object-oriented electrodynamic S-matrix
wip/py-pyecm Integer factorization with the Elliptic Curve Method (ECM)
wip/py-yt Analyzing and visualizing astrophysical simulation output
wip/py-paida Pure Python scientific analysis package
wip/fastjet Software package for jet finding in pp and e+e- collisions
wip/py-qit Quantum Information Toolkit
wip/myfitter Maximum Likelihood Fits in C++
wip/galoisfieldarth Galois Field Arithmetic Library
wip/py-aopython Aspect Oriented Python
wip/mixnet Erdos-Renyi Mixture for Networks
wip/py-fitsarray Ndarray subclass with a fits header
wip/py-swiginac Interface to GiNaC, providing Python with symbolic mathematics
wip/py-neuronpy The NEURON simulator and analyzing neural data
wip/py-scikits_hydroclimpy Tools to manipulate environmental and climatologic time series
wip/pilemc-svn Tool for the simulation of pile-up events at HepMC level
wip/py-quaternionarray Python package for fast quaternion arrays math
wip/orbifolder Study the Low Energy Effective Theory of Heterotic Orbifolds
wip/py-pulp LP modeler in Python
wip/py-chaco Chaco is a Python plotting application toolkit
wip/py-gpaw Grid-based real-space PAW method DFT code
wip/py-operators Operators and solvers for high-performance computing
wip/py-markovchain Simple markov chain implementation
wip/lissac Lisaac is the first compiled object-oriented language based on prototype concepts
wip/py-t3m Tinker toys for topologists
wip/py-pyfaces Traits-capable windowing framework
wip/cmetrics Size and complexity metrics for C source code files
wip/hepmcvisual Interactive Browser for HepMC events
wip/py-bottleneck Fast, NumPy array functions written in Cython
wip/py-meigo Python wrapper of MEIGOR, a R optimisation package
wip/mplabs Multiphase lattice boltzmann suite
wip/py-mmLib Python Macromolecular Library
wip/py-utilib_math PyUtilib math utilities
wip/py-spfpm Tools for arithmetic on fixed-point (binary) numbers
wip/py-mwavepy Python datatypes and functions for microwave engineering
wip/py-repositoryhandler RepositoryHandler is a python library for handling code repositories
wip/py-quantities Support for physical quantities with units, based on numpy
wip/py-chaintipy CHIANTI atomic database for astrophysical spectroscopy
wip/py-ase Atomic Simulation Environment
wip/py-spectral Python module for hyperspectral image processing
wip/py-dolo Economic modelling in Python
wip/dune-common Contains the basic classes used by all DUNE modules
wip/py-traitsgui Traits-capable windowing framework
wip/oneloop Evaluate the one-loop scalar 1-point, 2-point, 3-point
wip/py-pythics Python Instrument Control System
wip/py-smop Matlab/Octave to Python compiler
biology/bodr Blue Obelisk Data Repository
wip/py-mdanalysis Library to analyze and manipulate molecular dynamics trajectories
wip/py-uncertainties Support for physical quantities with units, based on numpy
wip/jason Java-based interpreter for an extended version of AgentSpeak
wip/liboglappth Support libraries of science/ghemical port
wip/reduze Program for reducing Feynman Integrals
wip/py-chebpy Chebyshev polynomial based spectral methods of PDEs
wip/py-sciproc Process scientific multidimensional data
wip/scimark Java benchmark for scientific and numerical computing
wip/py-professor Parameterisation-based tuning tool for Monte Carlo event generators
wip/bagel Domain specific compiler for lattice QCD
wip/cspchecker CSP code type checker
wip/py-nibabel Access a multitude of neuroimaging data formats
wip/py-pyamg Algebraic multigrid solvers in Python
wip/py-pyphant Workflow modelling app
wip/py-asciidata Asciidata , to handle (read/modify/write) ASCII tables
wip/py-zipline Backtester for financial algorithms
wip/libzn-poly Libzn_poly is a C library for polynomial arithmetic in Z/nZ[x]
wip/feynedit Tool for drawing Feynman diagrams
wip/libncl The NEXUS Class Library is a C++ library for parsing NEXUS files
wip/libode Open Dynamics Engine
wip/py-prop Framework for propagating the TDSE written in Python/C++
wip/atompaw Projector Augmented Wave code for electronic structure calculations
wip/netlogo NetLogo is a multi-agent programmable modeling environment
wip/py-aerocalc Aeronautical Engineering Calculations
wip/py-cvf Python Computer Vision Framework
wip/coxeter Computer program for the study of Coxeter group theory
wip/ktjet C++ implementation of the Kt clustering algorithm
wip/gwyddion Gtk2 based SPM data visualization and analysis tool
wip/py-scikits_bvp1lg Boundary value problem (legacy) solvers for ODEs
wip/thepeg Toolkit for High Energy Physics Event Generation
wip/py-scikits_image Image processing routines for SciPy
wip/lambertw Lambert W Function for Applications in Physics
wip/jython Python for the Java Platform
wip/sympol SymPol is a C++ tool to work with symmetric polyhedra
wip/lbt Converts from LTL formulas to Büchi automata
wip/py-openopt Python module for numerical optimization
wip/py-scikits_samplerate Python module for high quality audio resampling
wip/py-bigfloat Arbitrary precision correctly-rounded floating point arithmetic
wip/py-ncomb Python combinatorics library
wip/py-scikits_scattpy Light Scattering methods for Python
wip/py-scikits_optimization Python module for numerical optimization
wip/agile Interface for a variety of Fortran-based Monte Carlo generators
wip/py-pydec Python Library for Discrete Exterior Calculus
wip/xmakemol Program for visualizing atomic and molecular systems
wip/genus2reduction Conductor and Reduction Types for Genus 2 Curves
wip/py-kineticlib Library for kinetic theory calculations in the multi-temperature
wip/py-atpy Astronomical Tables in Python
wip/py-pyquante Quantum chemistry in Python
wip/py-pyslha Parsing, manipulating, and visualising SUSY Les Houches Accord data
wip/py-nzmath Number theory oriented calculation system
wip/alberta Adaptive hierarchical finite element toolbox
wip/py-qitensor Quantum Hilbert Space Tensors in Python and Sage
wip/py-spatialdata Spatialdata provides transformations among coordinate systems
wip/py-fuzzpy Fuzzy sets, graphs, and mathematics for Python
wip/py-mcint Simple tool to perform numerical integration using MC techniques
wip/omega Optimal Monte-Carlo Event Generation Amplitudes
wip/py-aesthete Integrated mathematics environment
wip/symmetrica Library for combinatorics
wip/py-brian Simulator for spiking neural networks
wip/py-sparce Sparse linear algebra extension for Python
wip/py-chaos UIC REU on Chaos, Fractals and Complex Dynamics
wip/py-ruffus Lightweight python module to run computational pipelines
wip/py-wafo Statistical analysis and simulation of random waves and random loads
wip/hawk HAWK is a Monte Carlo integrator for pp -> H + 2jets
wip/py-astropysics Astrophysics libraries for Python
wip/py-divisi2 Commonsense Reasoning over Semantic Networks
wip/higgsbounds Selection of Higgs sector predictions for any particular model
wip/py-gvar Utilities for manipulating Gaussian random variables
wip/superchic Monte Carlo Event Generator for Central Exclusive Production
wip/py-pysal Python Spatial Analysis Library
wip/py-agio Analysis and Inter-comparison of Gene Ontology functional annotations
wip/py-ecspy Framework for creating evolutionary computations in Python
wip/sector-decomposition Used to compute numerically the Laurent expansion of Feynman integrals
wip/py-psychopy Psychology and neuroscience software in python
wip/tmva Toolkit for Multivariate Data Analysis with ROOT
wip/py-model-builder Graphical ODE simulator
wip/mmdb Macromolecular coordinate library
wip/cartago Framework for programming and executing environments in multi-agent
wip/py-sfepy Simple finite elements in Python
wip/py-rogues Python and numpy port of Nicholas Higham's m*lab test matrices
wip/libcuba Library for multidimensional numerical integration
wip/libginac The GiNaC symbolic framework
wip/py-sofa Python ctypes wrapper around the SOFA astronomical library
wip/gfan Program for computing with Groebner fans
wip/py-pyec Evolutionary computation package
wip/py-grpy Small GR-oriented package which uses sympy
wip/py-mark RDF Bookmarking Utilities
wip/py-scikits_datasmooth Scikits data smoothing package
wip/minuit2 MINUIT is a physics analysis tool for function minimization
wip/py-EMpy Suite of numerical algorithms widely used in electromagnetism
wip/py-hfk Computes Heegaard Floer homology for links
wip/ann Library for Approximate Nearest Neighbor Searching
devel/py-h5py Python interface to the HDF5 library
wip/py-diffpy_pdffit2 PDFfit2 - real space structure refinement program
wip/py-algebraic Algebraic modeling system for Python
wip/py-gratelpy Graph theoretic linear stability analysis
wip/py-qutip Quantum Toolbox in Python
wip/py-shapely Geometric objects, predicates, and operations
wip/palp Analyzing lattice polytopes
wip/py-utilib_common Commonly used PyUtilib data and methods
wip/py-scikits_talkbox Talkbox, a set of python modules for speech/signal processing
wip/py-tmatrix Python code for T-matrix scattering calculations
wip/py-tappy Tidal analysis program in python
wip/nsc2ke Navier-Stokes solver
wip/py-qecalc Wrapper for Quantum Espresso
wip/py-sympycore SympyCore an efficient pure Python Computer Algebra System
wip/py-sphviewer Framework for rendering particle simulations
devel/py-cython C-Extensions for Python
wip/py-vegas Tools for adaptive multidimensional Monte Carlo integration
wip/py-minepy Maximal Information-based Nonparametric Exploration
wip/freefem++ PDE oriented language using Finite Element Method
wip/py-luminous Optical Transfer Matrix and simple Quantum Well modelling
wip/py-anfft FFT package for Python, based on FFTW
wip/py-netflowvizu Network flow visualizer
wip/py-gravipy Tensor Calculus Package for General Relativity
wip/py-emmsa Multivariate Statistical Analysis for Electron Microscopy Data
wip/py-plink Link Projection Editor
wip/py-otb Utility functions for scientific numerical computation
wip/py-krylov Python package implementing common Krylov methods
wip/py-automata Finite automata for python
wip/py-jbessel Bessel functions of the first kind written in Cython
wip/dvegas Parallel Adaptive Monte Carlo Integration in C++
wip/py-empirical Emperical Method of Fundamental Solutions solver for Python
wip/py-seeds Stochastic Ecological and Evolutionary Dynamics System
wip/py-numericalunits Package that lets you define quantities with units
wip/py-rasterio Fast and direct raster I/O for Python programmers who use Numpy
wip/py-ssp Python speech signal processing library for education
wip/py-scikit-aero Aeronautical engineering calculations in Python
wip/singular SINGULAR is a Computer Algebra System for polynomial computations
wip/py-lingpy Python library for automatic tasks in historical linguistics
wip/py-nimfa Python Library for Nonnegative Matrix Factorization Techniques
wip/py-lifelines Including Kaplan Meier, Nelson Aalen and regression
wip/py-fdasrsf Functional data analysis using the square root slope framework
wip/py-raphrase Linguistics-related library
wip/herwig Monte Carlo package for simulating Hadron Emission Reactions
wip/py-h5py Python interface to the HDF5 library
wip/probe ProBE is an animator for CSP processes
wip/feynhiggs Fortran code for the diagrammatic calculation of the masses
wip/py-scikits_statsmodels Statistical computations and models for use with SciPy
wip/py-acq4 Neurophysiology acquisition and analysis platform
wip/py-tardis-sn TARDIS - Temperature And Radiative Diffusion In Supernovae
wip/py-qeutil Set of utilities for using Quantum-Espresso
wip/libecc C++ elliptic curve library
wip/lrslib Enumerate vertices and extreme rays of a convex polyhedron
wip/py-se Framework for solving PDEs with FDM using Python
wip/py-fdm Framework for solving PDEs with FDM using Python
wip/tvmet Tiny Vector and Matrix template library
wip/py-oct2py Python to GNU Octave bridge --> run m-files from python
wip/py-mcview 3D/graph event viewer for high-energy physics event simulations
wip/py-papy Parallel Pipelines In Python
wip/plasti Plasti is a 2D ALE (Arbitrary Lagrangian Eulerian) code
wip/py-pyvib2 Analyzing vibrational motion and vibrational spectra
wip/py-piquant Extending NumPy and SciPy to specification of numbers and arrays
wip/py-pydelay Translates a system of delay differential equations (DDEs)
wip/py-pymc Markov Chain Monte Carlo sampling toolkit
wip/py-fwarp Tool to wrap Fortran 77/90/95 code in C, Cython & Python
wip/py-vo Python based tools to parse/write VOTABLE file
wip/py-yellowhiggs Interface for the CERN Yellow Report
wip/py-sempy Python implementation of the spectral element method
wip/py-symath symbolic mathematics for python
wip/py-mdptoolbox Implementation of Markov Decision Problem algorithms for Python
wip/jmol Jmol: an open-source Java viewer for chemical structures in 3D
wip/py-decimalpy Decimal based version of numpy
wip/bkchem Python based chemical structures editor
wip/py-tracks Analysis tools for Molecular Dynamics and Monte Carlo simulations
wip/socnetv Social network analysis and visualisation application
wip/gsdpdf Gaunt-Stirling double Parton Distribution Functions
wip/py-scimath Scientific and Mathematical calculations
wip/py-gist Gist is a scientific graphics library
wip/py-mlstats Mailing lists analysis tool. Part of libresoft-tool
wip/py-pylith Finite element code for solving dynamic and quasi-static tectonic deformation problems
wip/py-hcluster Hierarchical Clustering Package For Scipy
wip/py-utilib_component_executables PyUtilib plugin for managing executables
wip/gambit Software Tools for Game Theory
wip/py-mystic Simple interactive inversion analysis framework
wip/py-corrfitter Utilities for fitting correlators in lattice QCD
wip/py-cse Python computations in science and engineering
wip/py-dicom Read, modify and write DICOM files with python code
wip/py-epigrass Epidemiological Geo-Referenced Analysis and Simulation System
wip/py-mpi4py MPI for Python - Python bindings for MPI
wip/py-pysb Python Systems Biology modeling framework
wip/py-control Python Control Systems Library
wip/py-hamilton Visualize and control mechanic systems through solving these systems
wip/py-pint Physical quantities module
wip/py-pytools Collection of tools for Python
wip/py-irco International Research Collaboration Graphs
wip/py-pymbolic Python package for symbolic computation
wip/py-dexy Document Automation
wip/py-prody Python Package for Protein Dynamics Analysis
wip/mcl Markov Cluster algorithm
wip/simpa Agent-oriented framework for concurrent, multi-core, distributed programming
wip/sympow Special values of symmetric power elliptic curve L-functions
wip/py-epipy Python tools for epidemiology
wip/cubature Multi-dimensional integration
wip/lmfit Levenberg-Marquardt least-squares minimization and curve fitting
wip/qrint Orthonormal integrators
wip/py-aipy Astronomical Interferometry in Python
wip/py-sumatra Tracking projects based on numerical simulation or analysis
wip/py-concepts Formal Concept Analysis with Python
wip/py-dcpf Python device communications protocol framework
wip/bicho Bug tracking system tool analyzer
wip/py-utilib_component_config Extensions for configuring components in PyUtilib
wip/py-rpncalc RPN Calculator For Python
wip/py-mne MNE python project for MEG and EEG data analysis
wip/picosat SAT solver with proof and core support
wip/py-pot Python Library for Robot Control
wip/py-emcee The Python ensemble sampling toolkit for affine-invariant MCMC
wip/py-spyse Spyse is a framework and platform for building multi-agent systems
wip/py-scipy-data_fitting Data fitting system with SciPy
wip/py-pycifrw CIF/STAR file support for Python
wip/py-fatiando Geophysical direct and inverse modeling
wip/oslc Open Source License Checker
wip/circe2 CIRCE1 is WHIZARDs generator for lepton collider beamstrahlung
wip/py-multichain_mcmc Multichain MCMC framework and algorithmse
wip/py-geopy Python Geocoding Toolbox
wip/py-solpy Solar Performance and Design library
wip/py-sode Python/Cython lib for solving Stochastic Ordinary Differential Equations
wip/py-pythia Framework for specifying and staging complex,multi-physics simulations
wip/py-sunpy Python for Solar Physicists
wip/py-sppy Sparse matrix package based on Eigen
wip/getdp General environment for the treatment of discrete problems
wip/py-thLib Collection of Python utilities for signal analysis
wip/py-recluse Reproducible Experimentation for Computational Linguistics Use
wip/py-toeplitz Wrapper for fortran 90 toeplitz package
wip/py-obspy Python framework for seismological observatories
wip/py-rf Receiver function calculation in seismology
wip/py-symeig Symmetrical eigenvalue routines for NumPy
wip/py-teafiles Time Series storage in flat files
wip/tnt Template Numerical Toolkit
wip/hztool Robust model-to-data comparisons
wip/py-feyn Easy-to-use Python library to help physicists draw Feynman diagrams
wip/py-paegan Processing and Analysis for Numerical Data
wip/py-cxnet Complex networks in education
wip/partonevolution Fast Evolution of Parton Distributions
wip/py-complexsystems Toolbox for Complex Sytems
wip/py-astropy Community-developed python astronomy tools
wip/yoda Yet more Objects for Data Analysis
wip/py-debacl Density-Based Clustering
wip/py-lsqfit Utilities for nonlinear least-squares fits
wip/py-dexy_viewer Document Automation viewer
wip/py-nipy-data Installation script for nipy data packages
wip/symbolic++ C++ and POO programming to develop a computer algebra system
wip/py-formex Tool to generate and manipulate complex 3D geometries
wip/py-aqopa Automated Quality of Protection Analysis Tool for QoP-ML models
wip/py-nipy Python package for analysis of neuroimaging data
wip/py-mvpoly Library for multivariate polynomials
wip/py-geographiclib Translation of the GeographicLib::Geodesic class to Python
wip/py-linop Pythonic abstraction for linear mathematical operators
wip/py-glespy Bindings for GLESP for calculations with spherical harmonics
wip/py-numdifftools Tools for automatic numerical differentiation
wip/py-pynn Python package for neuronal network models
wip/py-sasa SAIL/AIL Sentiment Analyzer
wip/exhume Monte Carlo simulation of central exclusive production | {"url":"http://pkgsrc.se/bbmaint.php?maint=jihbed.research@gmail.com","timestamp":"2014-04-18T15:40:03Z","content_type":null,"content_length":"76073","record_id":"<urn:uuid:4db31f4d-f372-4cec-8e88-d5c1f5427342>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00638-ip-10-147-4-33.ec2.internal.warc.gz"} |
DE Tutorial - Part IV: Laplace Transforms
August 30th 2009, 09:15 PM #1
The DE Tutorial is currently being split up into different threads to make editing these posts easier.
Laplace Transforms (Part I - Introduction, IVPs and Partial Fraction Techniques)
There are many types of transformations out there. For example, differentiation and integration are types of linear transformations. However, there is one particular transform that we would like
to analyze. This transform is of the form:
$\int_0^{\infty}K\!\left(s,t\right)f\!\left(t\right)\,dt$
where $K\!\left(s,t\right)$ is called the kernel of the transformation.
In this case, we are interested in the transform with the kernel $K\!\left(s,t\right)=e^{-st}$. With this kernel, we take $f\!\left(t\right)$ and transform it into another function $F\!\left(s\right)$. This transformation, described by $\int_0^{\infty}e^{-st}f\!\left(t\right)\,dt=F\!\left(s\right)$, is called the Laplace Transform. It is denoted by $\mathcal{L}\left\{f\!\left(t\right)\right\}=F\!\left(s\right)$.
Before we go and derive all the common Laplace Transforms (we will derive many more as we get further into later posts), let us take a look at a function familiar to some of us (this may also be
totally new to some of you out there).
Given $x\in\mathbb{R}$, where $x>0$, we define the Gamma Function $\Gamma\!\left(x\right)=\int_0^{\infty}e^{-t}t^{x-1}\,dt$. It has the properties $\Gamma\!\left(1\right)=1$ and $\Gamma\!\left(x+1\right)=x\Gamma\!\left(x\right)$.
Now, if $n\in\mathbb{N}$, then it follows by a similar idea that $\Gamma\!\left(n+1\right)=n\Gamma\!\left(n\right)$. If we continue simplifying, we have
\begin{aligned}n\Gamma\!\left(n\right) &= n\left(n-1\right)\Gamma\!\left(n-1\right)\\ &= n\left(n-1\right)\left(n-2\right)\Gamma\!\left(n-2\right)\\ &\ \ \vdots\\ &= n\left(n-1\right)\left(n-2\right)\cdots2\cdot\Gamma\!\left(2\right)\\ &= n\left(n-1\right)\left(n-2\right)\cdots2\cdot1\cdot\Gamma\!\left(1\right)\end{aligned}
This implies that when $n\in\mathbb{N}$, $\Gamma\!\left(n+1\right)=n!$.
(Thus it is interesting to point out that since $1\in\mathbb{N},\,\Gamma\!\left(1\right)=\color{red}\boxed{1=0!}$, an identity for factorials.)
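A quick numerical aside (mine, not part of the original post): Python's standard-library `math.gamma` lets you spot-check the identity $\Gamma\!\left(n+1\right)=n!$ directly:

```python
import math

# Gamma(n + 1) agrees with n! for small natural numbers
for n in range(8):
    assert math.isclose(math.gamma(n + 1), math.factorial(n))

# The boundary case noted above: Gamma(1) = 0! = 1
assert math.gamma(1) == 1.0
```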
Common Laplace Transforms
In this part, I will list the common Laplace Transforms, and leave the derivation of each in a spoiler for you to look at if you decide to.
$\mathcal{L}\left\{e^{at}\right\}=\frac{1}{s-a},\,s>a$ (This will pop up again when we talk about translation theorems)
$\mathcal{L}\left\{t^a\right\}=\frac{\Gamma\!\left(a+1\right)}{s^{a+1}},\,s>0$; If $n\in\mathbb{N},\,\mathcal{L}\left\{t^n\right\}=\frac{\Gamma\!\left(n+1\right)}{s^{n+1}}=\frac{n!}{s^{n+1}}$
Given $k\in\mathbb{R},\,\mathcal{L}\left\{\cosh\!\left(kt\right)\right\}=\frac{s}{s^2-k^2},\,s>k>0$
Given $k\in\mathbb{R},\,\mathcal{L}\left\{\sinh\!\left(kt\right)\right\}=\frac{k}{s^2-k^2},\,s>k>0$
Given $k\in\mathbb{R},\,\mathcal{L}\left\{\cos\!\left(kt\right)\right\}=\frac{s}{s^2+k^2},\,s>0$
Given $k\in\mathbb{R},\,\mathcal{L}\left\{\sin\!\left(kt\right)\right\}=\frac{k}{s^2+k^2},\,s>0$
Given $a\in\mathbb{R},\,\mathcal{L}\left\{u\!\left(t-a\right)\right\}=\frac{e^{-as}}{s},\,s>0,\,a>0$
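These entries can be checked symbolically. The sketch below (my addition, not part of the original post) uses SymPy to confirm two of them; `noconds=True` suppresses the convergence conditions such as $s>a$:

```python
import sympy as sp

t, s, k, a = sp.symbols('t s k a', positive=True)

# L{e^{at}} = 1/(s - a)   (valid for s > a)
F_exp = sp.laplace_transform(sp.exp(a*t), t, s, noconds=True)
assert sp.simplify(F_exp - 1/(s - a)) == 0

# L{sin(kt)} = k/(s^2 + k^2)
F_sin = sp.laplace_transform(sp.sin(k*t), t, s, noconds=True)
assert sp.simplify(F_sin - k/(s**2 + k**2)) == 0
```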
Let us go through some examples on how to apply linearity and some of these formulas.
Example 31
Find the Laplace Transform of $f\!\left(t\right)=3t^{5/2}-4t^3$
By linearity, we have $\mathcal{L}\!\left\{3t^{5/2}-4t^3\right\}=3\mathcal{L}\left\{t^{5/2}\right\}-4\mathcal{L}\left\{t^3\right\}=3\frac{\Gamma\!\left(\tfrac{7}{2}\right)}{s^{7/2}}-4\frac{3!}{s^4}$
Taking into consideration Gamma function properties, we have $\Gamma\!\left(\tfrac{7}{2}\right)=\tfrac{5}{2}\Gamma\!\left(\tfrac{5}{2}\right)=\tfrac{15}{4}\Gamma\!\left(\tfrac{3}{2}\right)=\tfrac{15}{8}\Gamma\!\left(\tfrac{1}{2}\right)$.
It's not hard to show that $\Gamma\!\left(\tfrac{1}{2}\right)=\sqrt{\pi}$. Therefore, $\Gamma\!\left(\tfrac{7}{2}\right)=\frac{15\sqrt{\pi}}{8}$.
Thus, $\color{red}\boxed{\mathcal{L}\!\left\{3t^{5/2}-4t^3\right\}=\frac{45\sqrt{\pi}}{8s^{7/2}}-\frac{24}{s^4}=\frac{45\sqrt{\pi s}-192}{8s^4}}$
Example 32
Find the Laplace Transform of $f\!\left(t\right)=\sin\!\left(3t\right)\cos\!\left (3t\right)$
Note that $\sin\!\left(3t\right)\cos\!\left(3t\right)=\tfrac{1}{2}\sin\!\left(6t\right)$
Therefore, $\mathcal{L}\left\{\sin\!\left(3t\right)\cos\!\left(3t\right)\right\}=\mathcal{L}\left\{\tfrac{1}{2}\sin\!\left(6t\right)\right\}=\tfrac{1}{2}\mathcal{L}\left\{\sin\!\left(6t\right)\right\}=\color{red}\boxed{\frac{3}{s^2+36}}$
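As a sanity check (my addition; SymPy assumed available), the half-angle route of Example 32 and the Gamma-function ingredient of Example 31 can both be verified symbolically:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)

# Example 32 via the identity sin(3t)cos(3t) = (1/2) sin(6t)
F = sp.laplace_transform(sp.sin(6*t)/2, t, s, noconds=True)
assert sp.simplify(F - 3/(s**2 + 36)) == 0

# Ingredient of Example 31: L{t^(5/2)} = Gamma(7/2)/s^(7/2) = 15*sqrt(pi)/(8*s^(7/2))
G = sp.laplace_transform(t**sp.Rational(5, 2), t, s, noconds=True)
assert sp.simplify(G - 15*sp.sqrt(sp.pi)/(8*s**sp.Rational(7, 2))) == 0
```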
Inverse Laplace Transforms
As the name suggests, the Inverse Laplace Transform applied to a function $F(s)$ will give you the original $f\!\left(t\right)$:
$\mathcal{L}\left\{f\!\left(t\right)\right\}=F\!\left(s\right)\implies \mathcal{L}^{-1}\left\{\mathcal{L}\left\{f\!\left(t\right)\right\}\right\}=\mathcal{L}^{-1}\left\{F\!\left(s\right)\right\}\implies \mathcal{L}^{-1}\left\{F\!\left(s\right)\right\}=f\!\left(t\right)$
We now list the common inverse Laplace Transforms:
$\mathcal{L}^{-1}\left\{\frac{\Gamma\!\left(a+1\right)}{s^{a+1}}\right\}=t^a$
$\mathcal{L}^{-1}\left\{\frac{s}{s^2+k^2}\right\}=\cos\!\left(kt\right)$
$\mathcal{L}^{-1}\left\{\frac{k}{s^2+k^2}\right\}=\sin\!\left(kt\right)$
It is also worth mentioning that the Inverse Laplace Transform is linear.
Let us now go through a couple examples.
Example 33
Find the Inverse Laplace Transform of $F\!\left(s\right)=2s^{-1}e^{-3s}$
Using $\mathcal{L}\left\{u\!\left(t-a\right)\right\}=\frac{e^{-as}}{s}$, we have $\mathcal{L}^{-1}\left\{\frac{2e^{-3s}}{s}\right\}=\color{red}\boxed{2u\!\left(t-3\right)}$
Example 34
Find the Inverse Laplace Transform of $F\!\left(s\right)=s^{-3/2}$
$\mathcal{L}^{-1}\left\{\frac{1}{s^{3/2}}\right\}=\frac{1}{\Gamma\!\left(\tfrac{3}{2}\right)}\mathcal{L}^{-1}\left\{\frac{\Gamma\!\left(\tfrac{3}{2}\right)}{s^{3/2}}\right\}=\color{red}\boxed{2\sqrt{\frac{t}{\pi}}}$
Example 35
Find the Inverse Laplace Transform of $F\!\left(s\right)=\frac{10s-3}{25-s^2}$
$\mathcal{L}^{-1}\left\{\frac{10s-3}{25-s^2}\right\}=-10\mathcal{L}^{-1}\left\{\frac{s}{s^2-25}\right\}+\tfrac{3}{5}\mathcal{L}^{-1}\left\{\frac{5}{s^2-25}\right\}=\color{red}\boxed{-10\cosh\!\left(5t\right)+\tfrac{3}{5}\sinh\!\left(5t\right)}$
Laplace Transforms and IVPs (involving Partial Fraction Techniques)
We now introduce a method of solving initial value problems with Laplace Transforms. Before we go through this method, we first need to find the Laplace Transforms for $f^{\prime}\!\left(t\right)$, $f^{\prime\prime}\!\left(t\right)$, and in general $f^{\left(n\right)}\!\left(t\right)$.
I will leave the derivation of each in a spoiler.
$\mathcal{L}\left\{f^{\prime}\!\left(t\right)\right\}=sF\!\left(s\right)-f\!\left(0\right)$
$\mathcal{L}\left\{f^{\prime\prime}\!\left(t\right)\right\}=s^2F\!\left(s\right)-sf\!\left(0\right)-f^{\prime}\!\left(0\right)$
$\mathcal{L}\left\{f^{\left(n\right)}\right\}=s^nF\!\left(s\right)-s^{n-1}f\!\left(0\right)-s^{n-2}f^{\prime}\!\left(0\right)-\dots-sf^{\left(n-2\right)}\!\left(0\right)-f^{\left(n-1\right)}\!\left(0\right)$
Translation Theorem
We will discuss an important translation theorem:
Theorem: If $F\!\left(s\right)=\mathcal{L}\left\{f\!\left(t\right)\right\}$ exists for $s>c$, then $\mathcal{L}\left\{e^{at}f\!\left(t\right)\right\}$ exists for $s>a+c$ and $\mathcal{L}\left\{e^{at}f\!\left(t\right)\right\}=F\!\left(s-a\right)\implies \mathcal{L}^{-1}\left\{F\!\left(s-a\right)\right\}=e^{at}f\!\left(t\right)$.
Pf: It's clear that $F\!\left(s-a\right)=\int_0^{\infty}e^{-\left(s-a\right)t}f\!\left(t\right)\,dt=\int_0^{\infty}e^{-st}\left[e^{at}f\!\left(t\right)\right]\,dt=\mathcal{L}\left\{e^{at}f\!\left(t\right)\right\}.\quad\square$
As a result of this translation theorem, we have six more Laplace Transforms to add to the list (I leave it for you to verify them):
$\mathcal{L}\left\{e^{at}t^{k}\right\}=\frac{\Gamma\!\left(k+1\right)}{\left(s-a\right)^{k+1}}$
$\mathcal{L}\left\{e^{at}t^{n}\right\}=\frac{n!}{\left(s-a\right)^{n+1}}$
$\mathcal{L}\left\{e^{at}\cosh\!\left(kt\right)\right\}=\frac{s-a}{\left(s-a\right)^2-k^2}$
$\mathcal{L}\left\{e^{at}\sinh\!\left(kt\right)\right\}=\frac{k}{\left(s-a\right)^2-k^2}$
$\mathcal{L}\left\{e^{at}\cos\!\left(kt\right)\right\}=\frac{s-a}{\left(s-a\right)^2+k^2}$
$\mathcal{L}\left\{e^{at}\sin\!\left(kt\right)\right\}=\frac{k}{\left(s-a\right)^2+k^2}$
There is one more interesting Laplace Transform worth considering:
$\mathcal{L}\left\{\int_0^t f\!\left(\tau\right)\,d\tau\right\}=\frac{1}{s}\mathcal{L}\left\{f\!\left(t\right)\right\}=\frac{F\!\left(s\right)}{s}$
With these fundamental Laplace Transforms, we can now tackle some initial value problems (some of these may require partial fraction techniques).
Example 36
Use Laplace Transforms to solve the IVP $x^{\prime\prime}+4x^{\prime}+13x=te^{-t};\,x\!\left(0\right)=0,\,x^{\prime}\!\left(0\right)=2$
First, we take the Laplace Transform of both sides:
$\mathcal{L}\left\{x^{\prime\prime}+4x^{\prime}+13x\right\}=\mathcal{L}\left\{te^{-t}\right\}\implies\mathcal{L}\left\{x^{\prime\prime}\right\}+4\mathcal{L}\left\{x^{\prime}\right\}+13\mathcal{L}\left\{x\right\}=\mathcal{L}\left\{te^{-t}\right\}$
Applying the proper formulas and translations, we have
$s^2X\!\left(s\right)-sx\!\left(0\right)-x^{\prime}\!\left(0\right)+4\left(sX\!\left(s\right)-x\!\left(0\right)\right)+13X\!\left(s\right)=\frac{1}{\left(s+1\right)^2}$
Now apply the initial conditions $x\!\left(0\right)=0$ and $x^{\prime}\!\left(0\right)=2$ to get
$\left(s^2+4s+13\right)X\!\left(s\right)-2=\frac{1}{\left(s+1\right)^2}\implies X\!\left(s\right)=\frac{1}{\left(s+1\right)^2\left(s^2+4s+13\right)}+\frac{2}{s^2+4s+13}$
Now here comes the fun part: Take the Inverse Laplace transform of both sides to find the solution $x\!\left(t\right)$.
Let's consider each fraction individually.
First, consider $\mathcal{L}^{-1}\left\{\frac{1}{\left(s+1\right)^2\left(s^2+4s+13\right)}\right\}$.
To help us find the Inverse Laplace Transform, we need to apply partial fractions (I will redo this problem in the next post, when I talk about convolution):
$\frac{1}{\left(s+1\right)^2\left(s^2+4s+13\right)}=\frac{A}{s+1}+\frac{B}{\left(s+1\right)^2}+\frac{Cs+D}{s^2+4s+13}$.
Our objective now is to find A, B, C, and D.
First, multiply both sides by the common denominator to get
$1=A\left(s+1\right)\left(s^2+4s+13\right)+B\left(s^2+4s+13\right)+\left(Cs+D\right)\left(s+1\right)^2$
If we take $s=-1$, we have
$1=B\left(1-4+13\right)\implies B=\frac{1}{10}$.
If we take $s=0$, we have
$1=13A+\frac{13}{10}+D\implies -\frac{3}{10}-13A=D$
If we take $s=-2$, we have
$1=-9A+\frac{9}{10}-2C+D\implies \frac{1}{10}=-9A-2C+D\implies -\frac{2}{10}-11A=C$
If we take $s=2$, we have
$1=75A+\frac{25}{10}+18C+9D\implies 1=75A+\frac{25}{10}+18\left(-\frac{2}{10}-11A\right)+9\left(-\frac{3}{10}-13A\right)$
This simplifies to $1=75A+\frac{25}{10}-198A-\frac{36}{10}-\frac{27}{10}-117A\implies\frac{48}{10}=-240A\implies A=-\frac{1}{50}$
Thus, $C=\frac{11}{50}-\frac{10}{50}=\frac{1}{50}$ and $D=-\frac{15}{50}+\frac{13}{50}=-\frac{2}{50}$
Thus, $\frac{1}{\left(s+1\right)^2\left(s^2+4s+13\right)}=\frac{-1}{50\left(s+1\right)}+\frac{1}{10\left(s+1\right)^2}+\frac{s-2}{50\left(s^2+4s+13\right)}$
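The partial-fraction coefficients above can be double-checked with SymPy's `apart` (my addition, not part of the original post):

```python
import sympy as sp

s = sp.symbols('s')

expr = 1/((s + 1)**2 * (s**2 + 4*s + 13))

# The decomposition found above: A = -1/50, B = 1/10, C = 1/50, D = -2/50
claimed = (sp.Rational(-1, 50)/(s + 1)
           + sp.Rational(1, 10)/(s + 1)**2
           + (s - 2)/(50*(s**2 + 4*s + 13)))
assert sp.simplify(expr - claimed) == 0

# SymPy's own partial-fraction decomposition agrees
assert sp.simplify(sp.apart(expr, s) - claimed) == 0
```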
$\mathcal{L}^{-1}\left\{\frac{-1}{50\left(s+1\right)}+\frac{1}{10\left(s+1\right)^2}+\frac{s-2}{50\left(s^2+4s+13\right)}\right\}=-\frac{1}{50}\mathcal{L}^{-1}\left\{\frac{1}{s+1}\right\}+\frac{1}{10}\mathcal{L}^{-1}\left\{\frac{1}{\left(s+1\right)^2}\right\}+\frac{1}{50}\mathcal{L}^{-1}\left\{\frac{s-2}{s^2+4s+13}\right\}$
Note that $\mathcal{L}^{-1}\left\{\frac{s-2}{s^2+4s+13}\right\}=\mathcal{L}^{-1}\left\{\frac{s+2}{\left(s+2\right)^2+9}-\frac{3}{\left(s+2\right)^2+9}-\frac{3}{3\left[\left(s+2\right)^2+9\right]}\right\}=e^{-2t}\cos\!\left(3t\right)-\tfrac{4}{3}e^{-2t}\sin\!\left(3t\right)$
Therefore, we finally have
$\mathcal{L}^{-1}\left\{\frac{1}{\left(s+1\right)^2\left(s^2+4s+13\right)}\right\}=-\frac{1}{50}e^{-t}+\frac{1}{10}te^{-t}+\frac{1}{50}e^{-2t}\cos\!\left(3t\right)-\frac{4}{150}e^{-2t}\sin\!\left(3t\right)$
Now, we need the second half of the solution! (We have only part of it!) XD
We now consider the other Inverse Laplace Transform:
We see that $\mathcal{L}^{-1}\left\{\frac{2}{s^2+4s+13}\right\}=\mathcal{L}^{-1}\left\{\frac{3}{\left(s+2\right)^2+9}\right\}-\frac{1}{3}\mathcal{L}^{-1}\left\{\frac{3}{\left(s+2\right)^2+9}\right\}=\tfrac{2}{3}e^{-2t}\sin\!\left(3t\right)$
Therefore, we now see that
$\color{red}\boxed{x\!\left(t\right)=\frac{1}{50}\left[\left(5t-1\right)e^{-t}+e^{-2t}\left[\cos\!\left(3t\right)+32\sin\!\left(3t\right)\right]\right]}$
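Since a derivation this long invites slips, the solution $x\!\left(t\right)=\tfrac{1}{50}\left[\left(5t-1\right)e^{-t}+e^{-2t}\left(\cos 3t+32\sin 3t\right)\right]$ can be verified directly (my addition; SymPy assumed available):

```python
import sympy as sp

t = sp.symbols('t')

# Candidate solution x(t) = (1/50)[(5t - 1)e^{-t} + e^{-2t}(cos 3t + 32 sin 3t)]
x = sp.Rational(1, 50)*((5*t - 1)*sp.exp(-t)
                        + sp.exp(-2*t)*(sp.cos(3*t) + 32*sp.sin(3*t)))

# It satisfies x'' + 4x' + 13x = t e^{-t} ...
residual = sp.diff(x, t, 2) + 4*sp.diff(x, t) + 13*x - t*sp.exp(-t)
assert sp.simplify(residual) == 0

# ... together with the initial conditions x(0) = 0, x'(0) = 2
assert x.subs(t, 0) == 0
assert sp.diff(x, t).subs(t, 0) == 2
```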
Example 37
Use Laplace Transforms to solve the IVP $x^{\prime\prime}+3x^{\prime}+2x=t;\,x\!\left(0\right)=0,\,x^{\prime}\!\left(0\right)=2$
First apply the Laplace Transform on both sides to get
$s^2X\!\left(s\right)-sx\!\left(0\right)-x^{\prime}\!\left(0\right)+3sX\!\left(s\right)-3x\!\left(0\right)+2X\!\left(s\right)=\frac{1}{s^2}$
Applying the initial conditions $x\!\left(0\right)=0$ and $x^{\prime}\!\left(0\right)=2$, we have
$\left(s^2+3s+2\right)X\!\left(s\right)-2=\frac{1}{s^2}\implies X\!\left(s\right)=\frac{1}{s^2\left[\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}\right]}+\frac{2}{\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}}$
This is where the Laplace Transform of an Integral comes into play nicely (to avoid partial fractions)
In finding $\mathcal{L}^{-1}\left\{\frac{1}{s^2\left[\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}\right]}\right\}$, we see that
$\mathcal{L}^{-1}\left\{\frac{1}{s\left[\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}\right]}\right\}=\int_0^t\mathcal{L}^{-1}\left\{\frac{1}{\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}}\right\}\,d\tau=2\int_0^te^{-3\tau/2}\sinh\!\left(\tfrac{1}{2}\tau\right)\,d\tau=\int_0^t e^{-\tau}-e^{-2\tau}\,d\tau=-e^{-t}+\tfrac{1}{2}e^{-2t}+\tfrac{1}{2}$
Therefore, $\mathcal{L}^{-1}\left\{\frac{1}{s^2\left[\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}\right]}\right\}=\int_0^t\mathcal{L}^{-1}\left\{\frac{1}{s\left[\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}\right]}\right\}\,d\tau=\int_0^t -e^{-\tau}+\tfrac{1}{2}e^{-2\tau}+\tfrac{1}{2}\,d\tau=e^{-t}-\tfrac{1}{4}e^{-2t}+\tfrac{1}{2}t-\tfrac{3}{4}$
Now, $\mathcal{L}^{-1}\left\{\frac{2}{\left(s+\tfrac{3}{2}\right)^2-\tfrac{1}{4}}\right\}=4e^{-3t/2}\sinh\!\left(\tfrac{1}{2}t\right)=2e^{-t}-2e^{-2t}$.
Therefore, $\color{red}\boxed{x\!\left(t\right)=\tfrac{1}{4}\left[2t-3+12e^{-t}-9e^{-2t}\right]}$
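As with Example 36, the boxed answer $x\!\left(t\right)=\tfrac{1}{4}\left[2t-3+12e^{-t}-9e^{-2t}\right]$ is easy to verify symbolically (my addition, not part of the original post):

```python
import sympy as sp

t = sp.symbols('t')

# Candidate solution x(t) = (1/4)(2t - 3 + 12 e^{-t} - 9 e^{-2t})
x = sp.Rational(1, 4)*(2*t - 3 + 12*sp.exp(-t) - 9*sp.exp(-2*t))

# It satisfies x'' + 3x' + 2x = t with x(0) = 0, x'(0) = 2
assert sp.simplify(sp.diff(x, t, 2) + 3*sp.diff(x, t) + 2*x - t) == 0
assert x.subs(t, 0) == 0
assert sp.diff(x, t).subs(t, 0) == 2
```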
This will conclude the first post on Laplace Transforms. I'm not sure when I will be able to post again, now that I start classes today. I'll try to find some time in the next several weeks to do more.
Last edited by mash; March 5th 2012 at 12:21 PM. Reason: fixed latex
What does the future hold for the Differential Equations Tutorial?
To the MHF Community,
It's been over a year since I have updated this differential equations tutorial of mine.
Some of you have PMed me about errors/typos found in this tutorial, why certain topics aren't covered, etc. But I just haven't had the time lately to fix them or add missing topics since most of
my time is devoted to my coursework (since I'm a full time Ph.D. student now). However, this summer (starting in June), I will be redoing this entire differential equations tutorial and
re-release it as a mini-book (it may not be mini when I'm through with it...).
The chapter outline will be something like:
Chapter 0 - Calculus Review
Part 1 - Ordinary Differential Equations (to be released end of summer 2011)
Chapter 1 - First Order Differential Equations
Chapter 2 - Second and Higher Order Differential Equations
Chapter 3 - Solving Differential Equations by Numerical Methods
Chapter 4 - Matrix Methods and Systems of Differential Equations
Chapter 5 - Laplace Transforms
Chapter 6 - Power Series Solutions to Differential Equations
Part 2 - Partial Differential Equations (to be released sometime in 2012)
Chapter 7 - Introduction to Partial Differential Equations
Chapter 8 - Fourier Series
Chapter 9 - The Wave, Heat, and Laplace Equations
Chapter 10 - Solving Partial Differential Equations by Various Methods
Chapter 11 - Green's Functions
Chapter 12 - Solving Partial Differential Equations by Numerical Methods
If you would like to contribute to this project, send me a PM and we can discuss it in more detail. If I use any of your work in my book, you will be acknowledged.
I will close this thread in the meantime, and I will post updates when I'm able.
Thread Closed.
Last edited by Chris L T521; February 9th 2011 at 12:11 AM.
February 8th 2011, 11:56 PM #2 | {"url":"http://mathhelpforum.com/differential-equations/177500-de-tutorial-part-iv-laplace-transforms.html","timestamp":"2014-04-19T10:47:36Z","content_type":null,"content_length":"115423","record_id":"<urn:uuid:fb2ae9c6-111d-4b09-8e0c-c0bc26bb1c93>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00458-ip-10-147-4-33.ec2.internal.warc.gz"} |
PIRSA - Perimeter Institute Recorded Seminar Archive
The principle of relative locality
Abstract: Several current experiments probe physics in the approximation in which Planck's constant and Newton's constant may be neglected, but the Planck mass is relevant. These include tests of
the symmetry of the ground state of quantum gravity such as time delays in photons of different energies from gamma ray bursts. I will describe a new approach to quantum gravity phenomenology in this
regime, developed with Giovanni Amelino-Camelia, Jerzy Kowalski-Glikman and Laurent Freidel. This approach is based on a deepening of the relativity principle, according to which the invariant arena
for non-quantum physics is a phase space rather than spacetime. Descriptions of particles propagating and interacting in spacetimes are constructed by observers, but different observers, separated
from each other by translations, construct different spacetime projections from the invariant phase space. Nonetheless, all observers agree that interactions are local in the spacetime coordinates
constructed by observers local to them. This framework, in which absolute locality is replaced by relative locality, results from deforming momentum space, just as the passage from absolute to
relative simultaneity results from deforming the linear addition of velocities. Different aspects of momentum space geometry, such as its curvature, torsion and non-metricity, are reflected in
different kinds of deformations of the energy-momentum conservation laws. These are in principle all measurable by appropriate experiments.
Date: 23/02/2011 - 2:00 pm | {"url":"http://pirsa.org/11020116","timestamp":"2014-04-20T03:23:56Z","content_type":null,"content_length":"9546","record_id":"<urn:uuid:7b9f46df-1952-41aa-bfeb-8db2250216a0>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00123-ip-10-147-4-33.ec2.internal.warc.gz"} |
: The Story of a Number
Chapter 1. John Napier, 1614
Seeing there is nothing that is so troublesome to mathematical practice, nor that doth more molest and hinder calculators, than the multiplications, divisions, square and cubical extractions of
great numbers.... I began therefore to consider in my mind by what certain and ready art I might remove those hindrances.--JOHN NAPIER, Mirifici logarithmorum canonis descriptio (1614)^1
Rarely in the history of science has an abstract mathematical idea been received more enthusiastically by the entire scientific community than the invention of logarithms. And one can hardly imagine
a less likely person to have made that invention. His name was John Napier.^2
The son of Sir Archibald Napier and his first wife, Janet Bothwell, John was born in 1550 (the exact date is unknown) at his family's estate, Merchiston Castle, near Edinburgh, Scotland. Details of
his early life are sketchy. At the age of thirteen he was sent to the University of St. Andrews, where he studied religion. After a sojourn abroad he returned to his homeland in 1571 and married
Elizabeth Stirling, with whom he had two children. Following his wife's death in 1579, he married Agnes Chisholm, and they had ten more children. The second son from this marriage, Robert, would
later be his father's literary executor. After the death of Sir Archibald in 1608, John returned to Merchiston, where, as the eighth laird of the castle, he spent the rest of his life.^3
Napier's early pursuits hardly hinted at future mathematical creativity. His main interests were in religion, or rather in religious activism. A fervent Protestant and staunch opponent of the papacy,
he published his views in A Plaine Discovery of the whole Revelation of Saint John (1593), a book in which he bitterly attacked the Catholic church, claiming that the pope was the Antichrist and
urging the Scottish king James VI (later to become King James I of England) to purge his house and court of all "Papists, Atheists, and Newtrals."^4 He also predicted that the Day of Judgment would
fall between 1688 and 1700. The book was translated into several languages and ran through twenty-one editions (ten of which appeared during his lifetime), making Napier confident that his name in
history--or what little of it might be left--was secured.
Napier's interests, however, were not confined to religion. As a landowner concerned to improve his crops and cattle, he experimented with various manures and salts to fertilize the soil. In 1579 he
invented a hydraulic screw for controlling the water level in coal pits. He also showed a keen interest in military affairs, no doubt being caught up in the general fear that King Philip II of Spain
was about to invade England. He devised plans for building huge mirrors that could set enemy ships ablaze, reminiscent of Archimedes' plans for the defense of Syracuse eighteen hundred years earlier.
He envisioned an artillery piece that could "clear a field of four miles circumference of all living creatures exceeding a foot of height," a chariot with "a moving mouth of mettle" that would
"scatter destruction on all sides," and even a device for "sayling under water, with divers and other stratagems for harming of the enemyes"--all forerunners of modem military technology.^5 It is not
known whether any of these machines was actually built.
As often happens with men of such diverse interests, Napier became the subject of many stories. He seems to have been a quarrelsome type, often becoming involved in disputes with his neighbors and
tenants. According to one story, Napier became irritated by a neighbor's pigeons, which descended on his property and ate his grain. Warned by Napier that if he would not stop the pigeons they would
be caught, the neighbor contemptuously ignored the advice, saying that Napier was free to catch the pigeons if he wanted. The next day the neighbor found his pigeons lying half-dead on Napier's lawn.
Napier had simply soaked his grain with a strong spirit so that the birds became drunk and could barely move. According to another story, Napier believed that one of his servants was stealing some of
his belongings. He announced that his black rooster would identify the transgressor. The servants were ordered into a dark room, where each was asked to pat the rooster on its back. Unknown to the
servants, Napier had coated the bird with a layer of lampblack. On leaving the room, each servant was asked to show his hands; the guilty servant, fearing to touch the rooster, turned out to have
clean hands, thus betraying his guilt.^6
All these activities, including Napier's fervent religious campaigns, have long since been forgotten. If Napier's name is secure in history, it is not because of his best-selling book or his
mechanical ingenuity but because of an abstract mathematical idea that took him twenty years to develop: logarithms.
* * *
The sixteenth and early seventeenth centuries saw an enormous expansion of scientific knowledge in every field. Geography, physics, and astronomy, freed at last from ancient dogmas, rapidly changed
man's perception of the universe. Copernicus's heliocentric system, after struggling for nearly a century against the dictums of the Church, finally began to find acceptance. Magellan's
circumnavigation of the globe in 1521 heralded a new era of marine exploration that left hardly a corner of the world unvisited. In 1569 Gerhard Mercator published his celebrated new world map, an
event that had a decisive impact on the art of navigation. In Italy Galileo Galilei was laying the foundations of the science of mechanics, and in Germany Johannes Kepler formulated his three laws of
planetary motion, freeing astronomy once and for all from the geocentric universe of the Greeks. These developments involved an ever increasing amount of numerical data, forcing scientists to spend
much of their time doing tedious numerical computations. The times called for an invention that would free scientists once and for all from this burden. Napier took up the challenge.
We have no account of how Napier first stumbled upon the idea that would ultimately result in his invention. He was well versed in trigonometry and no doubt was familiar with the formula
sin A · sin B = 1/2[cos(A - B) - cos(A + B)]
This formula, and similar ones for cos A · cos B and sin A · cos B, were known as the prosthaphaeretic rules, from the Greek word meaning "addition and subtraction." Their importance lay in the fact
that the product of two trigonometric expressions such as sin A sin B could be computed by finding the sum or difference of other trigonometric expressions, in this case cos(A - B) and cos(A + B).
Since it is easier to add and subtract than to multiply and divide, these formulas provide a primitive system of reduction from one arithmetic operation to another, simpler one. It was probably this
idea that put Napier on the right track.
A second, more straightforward idea involved the terms of a geometric progression, a sequence of numbers with a fixed ratio between successive terms. For example, the sequence 1, 2, 4, 8, 16, . . .
is a geometric progression with the common ratio 2. If we denote the common ratio by q, then, starting with 1, the terms of the progression are 1, q, q^2, q^3, and so on (note that the nth term is q^
n-1). Long before Napier's time, it had been noticed that there exists a simple relation between the terms of a geometric progression and the corresponding exponents, or indices, of the common ratio.
The German mathematician Michael Stifel (1487-1567), in his book Arithmetica integra (1544), formulated this relation as follows: if we multiply any two terms of the progression 1, q, q^2, . . . ,
the result would be the same as if we had added the corresponding exponents.^7 For example, q^2 · q^3 = (q · q) · (q · q · q) = q · q · q · q · q = q^5, a result that could have been obtained by
adding the exponents 2 and 3. Similarly, dividing one term of a geometric progression by another term is equivalent to subtracting their exponents: q^5/q^3 = (q · q · q · q · q)/(q · q · q) = q · q =
q^2 = q^5-3. We thus have the simple rules q^m · q^n = q^m+n and q^m/q^n = q^m-n.
A problem arises, however, if the exponent of the denominator is greater than that of the numerator, as in q^3/q^5; our rule would give us q^3-5 = q^-2, an expression that we have not defined. To get
around this difficulty, we simply define q^-n to be 1/q^n, so that q^3-5 = q^-2 = 1/q^2, in agreement with the result obtained by dividing q^3 by q^5 directly.^8 (Note that in order to be consistent
with the rule q^m/q^n = q^m-n when m = n, we must also define q^0 = 1.) With these definitions in mind, we can now extend a geometric progression indefinitely in both directions: . . . . q^-3, q^-2,
q^-1, q^0 = 1, q, q^2, q^3, . . . . We see that each term is a power of the common ratio q, and that the exponents . . . , -3, -2, -1, 0, 1, 2, 3, ... form an arithmetic progression (in an arithmetic
progression the difference between successive terms is constant, in this case 1). This relation is the key idea behind logarithms; but whereas Stifel had in mind only integral values of the exponent,
Napier's idea was to extend it to a continuous range of values.
His line of thought was this: If we could write any positive number as a power of some given, fixed number (later to be called a base), then multiplication and division of numbers would be equivalent
to addition and subtraction of their exponents. Furthermore, raising a number to the nth power (that is, multiplying it by itself n times) would be equivalent to adding the exponent n times to
itself--that is, to multiplying it by n--and finding the nth root of a number would be equivalent to n repeated subtractions--that is, to division by n. In short, each arithmetic operation would be
reduced to the one below it in the hierarchy of operations, thereby greatly reducing the drudgery of numerical computations.
Let us illustrate how this idea works by choosing as our base the number 2. Table 1.1 shows the successive powers of 2, beginning with n = -3 and ending with n = 12. Suppose we wish to multiply 32 by
128. We look in the table for the exponents corresponding to 32 and 128 and find them to be 5 and 7, respectively. Adding these exponents gives us 12. We now reverse the process, looking for the
number whose corresponding exponent is 12; this number is 4,096, the desired answer. As a second example, suppose we want to find 4^5. We find the exponent corresponding to 4, namely 2, and this
time multiply it by 5 to get 10. We then look for the number whose exponent is 10 and find it to be 1,024. And, indeed, 4^5 = (2^2)^5 = 2^10 = 1,024.
TABLE 1.1 Powers of 2
│ n │-3 │-2 │-1 │0│1│2│3│4 │5 │6 │ 7 │ 8 │ 9 │ 10 │ 11 │ 12 │
│2^n│1/8│1/4│1/2│1│2│4│8│16│32│64│128│256│512│1,024 │2,048 │4,096 │
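The bookkeeping behind Table 1.1 can be sketched in a few lines of Python (my illustration, not part of the text): two lookup tables turn multiplication into the addition of exponents, exactly as described above.

```python
# A tiny "table of logarithms" to base 2, mirroring Table 1.1:
# multiplication becomes addition of exponents.
exponents = {2**n: n for n in range(-3, 13)}   # number -> exponent
powers = {n: 2**n for n in range(-3, 25)}      # exponent -> number

def multiply(a, b):
    """Multiply two table entries by adding their exponents."""
    return powers[exponents[a] + exponents[b]]

print(multiply(32, 128))         # 4096, as in the text
print(powers[exponents[4] * 5])  # 4^5 = 1024
```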
Of course, such an elaborate scheme is unnecessary for computing strictly with integers; the method would be of practical use only if it could be used with any numbers, integers, or fractions. But
for this to happen we must first fill in the large gaps between the entries of our table. We can do this in one of two ways: by using fractional exponents, or by choosing for a base a number small
enough so that its powers will grow reasonably slowly. Fractional exponents, defined by q^m/n = the nth root of q^m,
were not yet fully known in Napier's time,^9 so he had no choice but to follow the second option. But how small a base? Clearly if the base is too small its powers will grow too slowly, again making
the system of little practical use. It seems that a number close to 1, but not too close, would be a reasonable compromise. After years of struggling with this problem, Napier decided on .9999999, or
1 - 10^-7.
But why this particular choice? The answer seems to lie in Napier's concern to minimize the use of decimal fractions. Fractions in general, of course, had been used for thousands of years before
Napier's time, but they were almost always written as common fractions, that is, as ratios of integers. Decimal fractions--the extension of our decimal numeration system to numbers less than 1--had
only recently been introduced to Europe,^10 and the public still did not feel comfortable with them. To minimize their use, Napier did essentially what we do today when dividing a dollar into one
hundred cents or a kilometer into one thousand meters: he divided the unit into a large number of subunits, regarding each as a new unit. Since his main goal was to reduce the enormous labor involved
in trigonometric calculations, he followed the practice then used in trigonometry of dividing the radius of a unit circle into 10,000,000 or 10^7 parts. Hence, if we subtract from the full unit its 10
^7th part, we get the number closest to 1 in this system, namely 1 - 10^-7 or .9999999. This, then, was the common ratio ("proportion" in his words) that Napier used in constructing his table.
And now he set himself to the task of finding, by tedious repeated subtraction, the successive terms of his progression. This surely must have been one of the most uninspiring tasks to face a
scientist, but Napier carried it through, spending twenty years of his life (1594-1614) to complete the job. His initial table contained just 101 entries, starting with 10^7 = 10,000,000 and followed
by 10^7(1 - 10^-7) = 9,999,999, then 10^7(1 - 10^-7)^2 = 9,999,998, and so on up to 10^7(1 - 10^-7)^100 = 9,999,900 (ignoring the fractional part .0004950), each term being obtained by subtracting
from the preceding term its 10^7th part. He then repeated the process all over again, starting once more with 10^7, but this time taking as his proportion the ratio of the last number to the first in
the original table, that is, 9,999,900 : 10,000,000 = .99999 or 1 - 10^-5. This second table contained fifty-one entries, the last being 10^7(1 - 10^-5)^50 or very nearly 9,995,001. A third table
with twenty-one entries followed, using the ratio 9,995,001: 10,000,000; the last entry in this table was 10^7 x .9995^20, or approximately 9,900,473. Finally, from each entry in this last table
Napier created sixty-eight additional entries, using the ratio 9,900,473 : 10,000,000, or very nearly .99; the last entry then turned out to be 9,900,473 x .99^68, or very nearly 4,998,609--roughly
half the original number.
Today, of course, such a task would be delegated to a computer; even with a hand-held calculator the job could done in a few hours. But Napier had to do all his calculations with only paper and pen.
One can therefore understand his concern to minimize the use of decimal fractions. In his own words: "In forming this progression [the entries of the second table], since the proportion between
10000000.00000, the first of the Second table, and 9995001.222927, the last of the same, is troublesome; therefore compute the twenty-one numbers in the easy proportion of 10000 to 9995, which is
sufficiently near to it; the last of these, if you have not erred, will be 9900473.57808."^11
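The construction above is easy to check with modern floating-point arithmetic. The short computation below recomputes the last entry of each of the four progressions described in the text (a sketch; Napier, working by repeated subtraction, carried slightly different rounding):

```python
# Last entries of Napier's four progressions, by direct exponentiation.
first = 1e7 * (1 - 1e-7) ** 100   # first table:  101 entries, ratio 1 - 10^-7
second = 1e7 * (1 - 1e-5) ** 50   # second table:  51 entries, ratio 1 - 10^-5
third = 1e7 * 0.9995 ** 20        # third table:   21 entries, ratio 0.9995
fourth = 9900473 * 0.99 ** 68     # final stage:   68 entries, ratio 0.99
```

Evaluating these reproduces, to within Napier's rounding, the values quoted above: roughly 9,999,900; 9,995,001; 9,900,473; and 4,998,609.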
Having completed this monumental task, it remained for Napier to christen his creation. At first he called the exponent of each power its "artificial number" but later decided on the term logarithm,
the word meaning "ratio number." In modern notation, this amounts to saying that if (in his first table) N = 10^7(1 - 10^-7)^L, then the exponent L is the (Napierian) logarithm of N. Napier's
definition of logarithms differs in several respects from the modern definition (introduced in 1728 by Leonhard Euler): if N = b^L, where b is a fixed positive number other than 1, then L is the
logarithm (to the base b) of N. Thus in Napier's system L = 0 corresponds to N = 10^7 (that is, Nap log 10^7 = 0), whereas in the modern system L = 0 corresponds to N = 1 (that is, log[b]l = 0). Even
more important, the basic rules of operation with logarithms--for example, that the logarithm of a product equals the sum of the individual logarithms--do not hold for Napier's definition. And
lastly, because 1 - 10^-7 is less than 1, Napier's logarithms decrease with increasing numbers, whereas our common (base 10) logarithms increase. These differences are relatively minor, however, and
are merely a result of Napier's insistence that the unit should be equal to 10^7 subunits. Had he not been so concerned about decimal fractions, his definition might have been simpler and closer to
the modern one.^12
In hindsight, of course, this concern was an unnecessary detour. But in making it, Napier unknowingly came within a hair's breadth of discovering a number that, a century later, would be recognized
as the universal base of logarithms and that would play a role in mathematics second only to the number pi. This number, e, is the limit of (1 + 1/n)^n as n tends to infinity.^13
Princeton University Press | {"url":"http://press.princeton.edu/chapters/s5342.html","timestamp":"2014-04-17T16:13:45Z","content_type":null,"content_length":"27958","record_id":"<urn:uuid:0d745029-7c22-4098-92ac-46eca7a892dd>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
3D Polyhedron Shapes - Facts about Cubes, Pyramids, Tetrahedron, Dodecahedron
• Although there isn't always an agreed upon definition, a polyhedron is described as a 3-dimensional geometric solid with flat faces and straight edges, such as a regular dodecahedron, which
features 12 pentagonal faces and would be very difficult to use as a soccer ball.
• A regular polyhedron has regular polygon faces (a square or equilateral triangle for example) that are organized the same way around each point (vertex). Examples of regular polyhedrons include
the tetrahedron and cube.
• A cube has 6 faces, 8 points (vertices) and 12 edges.
• 11 different ‘nets’ can be made by folding out the 6 square faces of a cube in a range of ways.
• A rectangular cuboid is similar to a cube, but its edges can have up to 3 different lengths. The rectangular cuboid shape can often be seen in boxes.
• A tetrahedron features 4 triangular faces, with 3 meeting at each point (vertex).
• In geometry, a pyramid is a polyhedron that connects a polygon base (such as a triangle or square) to a point (apex) using triangles. The great pyramids that the Ancient Egyptians built many
years ago are a rough example of a square pyramid (they have a square base). A triangular pyramid is also known as a tetrahedron.
• An octahedron has eight faces made from equilateral triangles, with 4 of them meeting at each point (vertex).
• An icosahedron has 20 faces made from identical equilateral triangles. It also has 30 edges and 12 points (vertices).
• A parallelepiped is a three dimensional polyhedron made from 6 parallelograms.
• By definition, curved 3D shapes such as cylinders, cones and spheres are not polyhedrons.
• Now that you're an expert on 3D polyhedron shapes, try learning about triangles, squares, quadrilaterals and other 2D polygon shapes. | {"url":"http://www.kidsmathgamesonline.com/facts/geometry/3dpolyhedronshapes.html","timestamp":"2014-04-16T15:59:41Z","content_type":null,"content_length":"18788","record_id":"<urn:uuid:d0839cee-b40b-46d3-8b20-218ff4df994f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
Scheme sum of list
First off, this is homework, but I am simply looking for a hint or pseudocode on how to do this.
I need to sum all the items in the list, using recursion. However, it needs to return the empty set if it encounters something in the list that is not a number. Here is my attempt:
(DEFINE sum-list
  (LAMBDA (lst)
    (IF (OR (NULL? lst) (NOT (NUMBER? (CAR lst))))
        '()
        (+ (CAR lst) (sum-list (CDR lst))))))
This fails because it can't add the empty set to something else. Normally I would just return 0 if its not a number and keep processing the list.
recursion lisp scheme
5 Answers
I'd go for this:

(define (mysum lst)
  (let loop ((lst lst) (accum 0))
    (cond
      ((empty? lst) accum)
      ((not (number? (car lst))) '())
      (else (loop (cdr lst) (+ accum (car lst)))))))
It took me a long time to figure out how this one works, especially with the double lst on the second line. I could only envision a two parameter version. – rem45acp Feb 7 '12 at 20:40
You can substitute (lst lst) with (lst2 lst), and change each occurrence of lst below that with lst2; that might make it clearer. Basically it creates a local variable lst2 that is initialized with the value of the original parameter, lst. – uselpa Feb 7 '12 at 20:51
I suggest you use and return an accumulator for storing the sum; if you find a non-number while traversing the list you can return the empty list immediately, otherwise the recursion
continues until the list is exhausted.
Something along these lines (fill in the blanks!):
(define sum-list
  (lambda (lst acc)
    (cond ((null? lst) ???)
          ((not (number? (car lst))) ???)
          (else (sum-list (cdr lst) ???)))))
(sum-list '(1 2 3 4 5) 0)
> 15
(sum-list '(1 2 x 4 5) 0)
> ()
I like the idea of the two parameters, but can this really be done with only one parameter? – rem45acp Feb 5 '12 at 18:27
it can be, but the best way that actually will work that I can come up with requires 2 if statements and a let. Using the two-parameter version (perhaps with a wrapper function that simply calls (inner-function x 0)) is probably the best way. Additionally, if you have run into tail-recursion yet, the two parameter version is tail-recursive, while any 1-parameter version won't be. – Retief Feb 5 '12 at 21:31
Your issue is that you need to use cond, not if - there are three possible branches that you need to consider. The first is if you run into a non-number, the second is when you run into the end of the list, and the third is when you need to recurse to the next element of the list. The first issue is that you are combining the non-number case and the empty-list case, which need to return different values. The recursive case is mostly correct, but you will have to check the return value, since the recursive call can return an empty list.
If I separate the non-number case and empty list case, I was thinking to return 0 for the empty list and '() for non-number. But again, when the recursion is called its going to try to
add the empty list to what it accumulated before, producing a vm-exception like this: (+ 3 '()). I guess the only way to do this is with two parameters. – rem45acp Feb 5 '12 at 17:57
@rem45acp - good catch - you will have to check the return value before adding the current value. You probably could combine the "currently at a non-number" case and the "non-number
returned from recursive call" case, if you didn't mind processing the rest of the list unnecessarily. – Retief Feb 5 '12 at 21:18
Because I'm not smart enough to figure out how to do this in one function, let's be painfully explicit:
#lang racket
; This checks the entire list for numericness
(define is-numeric-list?
  (lambda (lst)
    (cond
      ((null? lst) true)
      ((not (number? (car lst))) false)
      (else (is-numeric-list? (cdr lst))))))
; This naively sums the list, and will fail if there are problems
(define sum-list-naive
  (lambda (lst)
    (cond
      ((null? lst) 0)
      (else (+ (car lst) (sum-list-naive (cdr lst)))))))
; This is a smarter sum-list that first checks numericness, and then
; calls the naive version. Note that this is inefficient, because the
; entire list is traversed twice: once for the check, and a second time
; for the sum. Oscar's accumulator version is better!
(define sum-list
  (lambda (lst)
    (cond
      ((is-numeric-list? lst) (sum-list-naive lst))
      (else '()))))
(is-numeric-list? '(1 2 3 4 5))
(is-numeric-list? '(1 2 x 4 5))
(sum-list '(1 2 3 4 5))
(sum-list '(1 2 x 4 5))
Welcome to DrRacket, version 5.2 [3m].
Language: racket; memory limit: 128 MB.
I suspect your homework is expecting something more academic though.
Try making a "is-any-nonnumeric" function (using recursion); then you just (or (is-any-numeric list) (sum list)) tomfoolery.
Fast Discriminative Stochastic Neighbor Embedding Analysis
Computational and Mathematical Methods in Medicine
Volume 2013 (2013), Article ID 106867, 14 pages
Research Article
Fast Discriminative Stochastic Neighbor Embedding Analysis
School of Computer Science and Technology, Zhejiang University of Technology, Hangzhou 310023, China
Received 9 February 2013; Accepted 22 March 2013
Academic Editor: Carlo Cattani
Copyright © 2013 Jianwei Zheng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
Feature extraction is important for many applications in biomedical signal analysis and living system analysis. A fast discriminative stochastic neighbor embedding analysis (FDSNE) method for feature extraction is proposed in this paper by improving the existing DSNE method. The proposed algorithm adopts an alternative probability distribution model constructed from each sample's nearest neighbors among the interclass and intraclass samples. Furthermore, FDSNE is extended to nonlinear scenarios using the kernel trick, yielding the kernel-based methods FKDSNE1 and FKDSNE2. FDSNE, FKDSNE1, and FKDSNE2 are evaluated in three aspects: visualization, recognition, and elapsed time. Experimental results on several datasets show that, compared with DSNE and MSNP, the proposed algorithms not only significantly enhance the computational efficiency but also obtain higher classification accuracy.
1. Introduction
In recent years, dimensional reduction which can reduce the curse of dimensionality [1] and remove irrelevant attributes in high-dimensional space plays an increasingly important role in many areas.
It promotes the classification, visualization, and compression of the high dimensional data. In machine learning, dimension reduction is used to reduce the dimension by mapping the samples from the
high-dimensional space to the low-dimensional space. There are many purposes of studying it: firstly, to reduce the amount of storage, secondly, to remove the influence of noise, thirdly, to
understand data distribution easily, and last but not least, to achieve good results in classification or clustering.
Currently, many dimensional reduction methods have been proposed, and they can be classified variously from different perspectives. Based on the nature of the input data, they are broadly categorized
into two classes: linear subspace methods which try to find a linear subspace as feature space so as to preserve certain kind of characteristics of observed data, and nonlinear approaches such as
kernel-based techniques and geometry-based techniques; from the class labels’ perspective, they are divided into supervised learning and unsupervised learning; furthermore, the purpose of the former
is to maximize the recognition rate between classes while the latter is for making the minimum of information loss. In addition, judging whether samples utilize local information or global
information, we divide them into local method and global method.
We briefly introduce several existing dimensional reduction techniques. In the main linear techniques, principal component analysis (PCA) [2] aims at maximizing the variance of the samples in the
low-dimensional representation with a linear mapping matrix. It is global and unsupervised. Different from PCA, linear discriminant analysis (LDA) [3] learns a linear projection with the assistance
of class labels. It computes the linear transformation by maximizing the amount of interclass variance relative to the amount of intraclass variance. Based on LDA, marginal fisher analysis (MFA) [4],
local fisher discriminant analysis (LFDA) [5], and max-min distance analysis (MMDA) [6] are proposed. All of the three are linear supervised dimensional reduction methods. MFA utilizes the intrinsic
graph to characterize the intraclass compactness and meanwhile uses the penalty graph to characterize interclass separability. LFDA introduces locality into the LDA algorithm and is particularly useful when each class consists of several separate clusters. MMDA maximizes the minimum pairwise distance between classes.
To deal with nonlinear structural data, which can often be found in biomedical applications [7–10], a number of nonlinear approaches have been developed for dimensional reduction. Among these
kernel-based techniques and geometry-based techniques are two hot issues. Kernel-based techniques attempt to obtain the linear structure of nonlinearly distributed data by mapping the original inputs
to a high-dimensional feature space. For instance, kernel principal component analysis (kernel PCA) [11] is the extension of PCA using kernel tricks. Geometry-based techniques, in general, are known
as manifold learning techniques such as isometric mapping (ISOMAP) [12], locally linear embedding (LLE) [13], Laplacian eigenmap (LE) [14], Hessian LLE (HLLE) [15], and local tangent space alignment
(LTSA) [16]. ISOMAP is used for manifold learning by computing the pairwise geodesic distances for input samples and extending multidimensional scaling. LLE exploits the linear reconstructions to
discover nonlinear structure in high-dimensional space. LE first constructs an undirected weighted graph, and then recovers the structure of manifold by graph manipulation. HLLE is based on sparse
matrix techniques. As for LTSA, it begins by computing the tangent space at every point and then optimizes to find an embedding that aligns the tangent spaces.
Recently, stochastic neighbor embedding (SNE) [17] and extensions thereof have become popular for feature extraction. The basic principle of SNE is to convert pairwise Euclidean distances into
probabilities of selecting neighbors to model pairwise similarities. As an extension of SNE, t-SNE [18] uses Student's t-distribution to model pairwise dissimilarities in low-dimensional space, and it
alleviates the optimization problems and the crowding problem of SNE by the methods below: (1) it uses a symmetrized version of the SNE cost function with simpler gradients that was briefly
introduced by Cook et al. [19], and (2) it employs a heavy-tailed distribution in the low-dimensional space. Subsequently, Yang et al. [20] systematically analyze the characteristics of the
heavy-tailed distribution and the solutions to crowding problem. More recently, Wu et al. [21] explored how to measure similarity on manifold more accurately and proposed a projection approach called
manifold-oriented stochastic neighbor projection (MSNP) for feature extraction based on SNE and t-SNE. MSNP employs the Cauchy distribution rather than the standard Student's t-distribution used in t-SNE. In
addition, for the purpose of learning the similarity on manifold with high accuracy, MSNP uses geodesic distance for characterizing data similarity. Though MSNP has many advantages in terms of
feature extraction, there is still a drawback in it: MSNP is an unsupervised method and lacks the idea of class label, so it is not suitable for pattern identification. To overcome the disadvantage
of MSNP, we have done some preliminary work and presented a method called discriminative stochastic neighbor embedding analysis (DSNE) [22]. DSNE effectively resolves the problems above, but since it
selects all the training samples as their reference points, it has high computational cost and is thus computationally infeasible for the large-scale classification tasks with high-dimensional
features [23, 24]. On the basis of our previous research, we present a method called fast discriminative stochastic neighbor embedding analysis (FDSNE) to overcome the disadvantages of DSNE in this paper.
The rest of this paper is organized as follows: in Section 2, we introduce in detail the proposed FDSNE and briefly compare it with MSNP and DSNE in Section 3. Section 4 gives the nonlinear extension
of FDSNE. Furthermore, experiments on various databases are presented in Section 5. Finally, Section 6 concludes this paper and several issues for future works are described.
2. Fast Discriminative Stochastic Neighbor Embedding Analysis
Consider a labeled sample matrix $X = [x_1, x_2, \ldots, x_N]$, where each column $x_i$ is a $D$-dimensional sample and $x_j^{(c)}$ denotes the $j$th sample in the $c$th class; $C$ is the number of sample classes, $N_c$ is the number of samples in the $c$th class, and $N = \sum_{c=1}^{C} N_c$.
In fact, the basic principle of FDSNE is the same as that of t-SNE, which is to convert pairwise Euclidean distances into probabilities of selecting neighbors to model pairwise similarities [18]. Since DSNE selects all the training samples as its reference points, it has high computational cost and is thus computationally infeasible for large-scale classification tasks with high-dimensional features. So, following the KNN classification rule, we propose an alternative probability distribution function in which the label of a target sample is determined by its nearest neighbors. In this paper, two neighborhood sets are defined for each sample: its $k_1$-nearest neighbors from the same class and its $k_2$-nearest neighbors from the different classes in the transformed space. Mathematically, the joint probability $p_{ij}$ is given by formula (2), in which $d_{ij} = \|x_i - x_j\|$ is the Euclidean distance between two samples $x_i$ and $x_j$ and the parameter $\sigma$ is the variance parameter of the Gaussian, which determines the value of $p_{ij}$; the denominator in formula (2) sums over all of the reference points under selection from the same class or the different classes. In particular, the joint probability not only keeps the probability distribution matrix symmetric but also makes the probability values of the interclass data sum to 1, and the same for the intraclass data.
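Since formula (2) itself is not reproduced here, the sketch below illustrates the idea in code under stated assumptions: a Gaussian affinity kept only for each sample's $k_1$ nearest same-class neighbors and $k_2$ nearest different-class neighbors, symmetrized and normalized into a joint distribution. The function name and the single global normalization are illustrative choices, not the paper's exact definition.

```python
import numpy as np

def knn_joint_probabilities(X, labels, k1=3, k2=3, sigma=1.0):
    """Sketch of a neighbor-restricted joint probability: Gaussian
    affinities kept only for each sample's k1 nearest same-class and
    k2 nearest different-class neighbors, then symmetrized and
    normalized so all entries sum to 1."""
    n = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    keep = np.zeros((n, n), dtype=bool)
    for i in range(n):
        same = np.flatnonzero(labels == labels[i])
        same = same[same != i]                      # exclude the sample itself
        diff = np.flatnonzero(labels != labels[i])
        keep[i, same[np.argsort(d2[i, same])[:k1]]] = True
        keep[i, diff[np.argsort(d2[i, diff])[:k2]]] = True
    keep |= keep.T                                  # symmetric neighbor graph
    w = np.exp(-d2 / (2 * sigma ** 2)) * keep
    return w / w.sum()
```

A matrix built this way is symmetric, has a zero diagonal, and depends on only $k_1 + k_2$ neighbors per sample, which is the source of FDSNE's speed advantage over DSNE.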
For the low-dimensional representations, FDSNE uses counterparts $y_i$ and $y_j$ of the high-dimensional datapoints $x_i$ and $x_j$, and it is possible to compute a similar joint probability $q_{ij}$ (formula (3)). In what follows, we introduce the transformation by a linear projection $y_i = A^{\mathsf{T}} x_i$, so that $Y = A^{\mathsf{T}} X$. Then, by simple algebraic manipulation, formula (3) has an equivalent expression in terms of $A$ (formula (4)).
Note that all data share an intrinsic geometric distribution, intraclass and interclass samples alike, and the same distribution is required to hold in the feature space. Since the Kullback-Leibler divergence [25] is widely used to quantify the proximity of two probability distributions, we choose it to build our penalty function. Based on the above definitions, the cost function can be formulated as $C(A) = \mathrm{KL}(P\,\|\,Q) = \sum_{i}\sum_{j} p_{ij} \log (p_{ij}/q_{ij})$ (formula (5)).
In this work, we use the conjugate gradient method to minimize $C(A)$. In order to make the derivation less cluttered, we first define four auxiliary variables in formula (6). Then differentiating $C$ with respect to the transformation matrix $A$ gives the gradient in formula (7), which we adopt for learning.
Let $P$ be the $N \times N$ matrix with elements $p_{ij}$, and let $Q$ be the $N \times N$ matrix with elements $q_{ij}$. Note that $P$ and $Q$ are symmetric matrices; therefore $D_P$ can be defined as the diagonal matrix in which each entry is the column (or row) sum of $P$, and likewise $D_Q$ for $Q$, that is, $(D_P)_{ii} = \sum_j p_{ij}$ and $(D_Q)_{ii} = \sum_j q_{ij}$. With these definitions, the gradient expression (7) can be reduced to the compact form in formula (8).
Once the gradient is calculated, our optimization problem (5) can be solved by an iterative procedure based on the conjugate gradient method. The FDSNE algorithm can be described by the following steps.
Step 1. Collect the sample matrix $X$ with class labels, and set the nearest-neighborhood parameters $k_1$ and $k_2$, the variance parameter $\sigma$, and the maximum iteration count $T$.
Step 2. Compute the pairwise Euclidean distances for $X$ and compute the joint probability $p_{ij}$ by utilizing formula (2) and the class labels.
Step 3. Search for the solution in a loop: firstly, compute the joint probability $q_{ij}$ by utilizing formula (4); then, compute the gradient by utilizing formula (8); finally, update $A$ with a conjugate gradient step.
Step 4. Judge whether the objective converges to a stable solution or the iteration count reaches the maximum value $T$. If either condition is met, Step 5 is performed; otherwise, we repeat Step 3.
Step 5. Output $A$.
Hereafter, we refer to the proposed method as fast discriminative stochastic neighbor embedding analysis (FDSNE).
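Steps 1-5 above can be sketched in code. This is an illustrative implementation under stated assumptions: plain gradient descent stands in for the conjugate gradient step, a Gaussian similarity is assumed for the low-dimensional joint probability of formula (3) (which is not reproduced here), and the symmetric-SNE gradient $4\sum_j (p_{ij} - q_{ij})(y_i - y_j)$, chained through $Y = XA$, stands in for formulas (7)-(8).

```python
import numpy as np

def low_dim_q(Y):
    """Gaussian joint probability in the projected space (an assumption)."""
    d2 = np.sum((Y[:, None, :] - Y[None, :, :]) ** 2, axis=2)
    w = np.exp(-d2)
    np.fill_diagonal(w, 0.0)
    return w / w.sum()

def kl_cost(P, Q, eps=1e-12):
    """KL(P || Q) over entries where P is nonzero."""
    m = P > 0
    return float(np.sum(P[m] * np.log((P[m] + eps) / (Q[m] + eps))))

def fdsne_fit(X, P, d=2, lr=0.01, iters=40, seed=0):
    """Minimize KL(P || Q) over a linear projection A (D x d) by
    gradient descent (the paper uses conjugate gradient instead)."""
    rng = np.random.default_rng(seed)
    A = rng.normal(scale=1e-2, size=(X.shape[1], d))
    for _ in range(iters):
        Y = X @ A
        Q = low_dim_q(Y)
        R = P - Q
        # dC/dY_i = 4 * sum_j (p_ij - q_ij) * (y_i - y_j)
        G = 4.0 * ((np.diag(R.sum(axis=1)) - R) @ Y)
        A -= lr * (X.T @ G)          # chain rule through Y = X A
    return A
```

Each pass costs O(N^2) here because P and Q are dense; restricting P to the $k_1 + k_2$ nearest neighbors, as FDSNE does, is what reduces the per-iteration cost.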
3. Comparison with MSNP and DSNE
MSNP is derived from SNE and t-SNE; it is a linear method with nice properties, such as sensitivity to nonlinear manifold structure and convenience for feature extraction. Since the structure of MSNP is closer to that of FDSNE, we briefly compare FDSNE with MSNP and DSNE in this section.
FDSNE, MSNP, and DSNE use different probability distributions to determine the reference points. The difference can be explained in the following aspects.
Firstly, MSNP learns the similarity relationship of the high-dimensional samples by estimating a neighborhood distribution based on a geodesic distance metric, and the same distribution is required in feature space. Then a linear projection matrix is used to discover the underlying structure of the data manifold, which is nonlinear. Finally, a Kullback-Leibler divergence objective function is used to keep pairwise similarities in feature space. The probability distribution function of MSNP and its gradient used for learning are defined in terms of the geodesic distance $g_{ij}$ between $x_i$ and $x_j$ and the degree-of-freedom parameter $\gamma$ of the Cauchy distribution.
DSNE selects a joint probability to model the pairwise similarities of input samples with class labels. It also introduces a linear projection matrix, as MSNP does. The cost function is constructed to minimize the intraclass Kullback-Leibler divergence as well as to maximize the interclass KL divergence; DSNE has its own probability distribution function and gradient. Note that, on the basis of DSNE, FDSNE makes full use of the class labels, which not only keeps the probability distribution matrix symmetric but also makes the probability values of the interclass data and the intraclass data each sum to 1, so it can effectively overcome a large interclass confusion degree in the projected subspace.
Secondly, it is obvious that the selection of reference points in MSNP or DSNE involves all training samples, while FDSNE only uses the first $k_1 + k_2$ nearest neighbors of each sample. In other words, we propose an alternative probability distribution function to determine whether $x_i$ would pick $x_j$ as its reference point or not. The computation of the gradient during the optimization process mainly determines the computational cost of MSNP and DSNE, so their per-iteration complexity grows with the total number of training samples $N$; the per-iteration complexity of FDSNE grows only with $k_1 + k_2$, and since $k_1 + k_2 \ll N$, FDSNE is faster than MSNP and DSNE during each iteration.
4. Kernel FDSNE
As a bridge from linear to nonlinear, kernel method emerged in the early beginning of the 20th century and its applications in pattern recognition can be traced back to 1964. In recent years, kernel
method has attracted wide attention and numerous researchers have proposed various theories and approaches based on it.
The principle of the kernel method is a mapping of the data from the input space to a high-dimensional space $\mathcal{F}$, which we will refer to as the feature space, by a nonlinear function $\phi$. Data processing is then performed in the feature space, and this can be expressed solely in terms of inner products in the feature space. Hence, the nonlinear mapping need not be explicitly constructed but can be specified by defining the form of the inner product in terms of a Mercer kernel function $K(x_i, x_j) = \langle \phi(x_i), \phi(x_j) \rangle$.
Obviously, FDSNE is a linear feature dimensionality reduction algorithm, so the remainder of this section is devoted to extending FDSNE to a nonlinear scenario using kernel techniques. Let $K_{ij} = K(x_i, x_j)$, which allows us to compute the value of the inner product in $\mathcal{F}$ without having to carry out the map. It should be noted that we write $\phi_i$ for $\phi(x_i)$ for brevity in the following. Next, we express the transformation as $y_i = A^{\mathsf{T}} K_i$, where $K_i = (K(x_1, x_i), \ldots, K(x_N, x_i))^{\mathsf{T}}$ is a column vector. Based on this definition, the Euclidean distance between $y_i$ and $y_j$ in the embedding space is $\|A^{\mathsf{T}}(K_i - K_j)\|$. It is clear that the distance in the kernel embedding space is related to the kernel function $K$ and the matrix $A$.
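The kernel-space distance just described can be sketched as follows; the RBF kernel and the function names are illustrative assumptions, not the paper's notation.

```python
import numpy as np

def rbf_kernel(X, gamma=0.5):
    """Gram matrix K with K[i, j] = exp(-gamma * ||x_i - x_j||^2)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    return np.exp(-gamma * d2)

def kernel_embedding_distance(K, A, i, j):
    """||A^T (K_i - K_j)||: distance between samples i and j in the
    kernel embedding space, using only the kernel matrix and A."""
    diff = K[:, i] - K[:, j]
    return float(np.linalg.norm(A.T @ diff))
```

Because only columns of K and the matrix A appear, the nonlinear map itself never has to be evaluated, which is the property exploited by FKDSNE1 and FKDSNE2.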
In this section, we propose two methods to construct the objective function. The first strategy parameterizes the objective function by $A$. Firstly, we replace the Euclidean distance in formula (3) with the kernel-space distance $\|A^{\mathsf{T}}(K_i - K_j)\|$, so that the joint probabilities defined in the high-dimensional space can be written in terms of $A$ and the kernel matrix. Then we obtain the kernel objective by substituting these probabilities into formula (5). Finally, by the same argument as for formula (7), we obtain the gradient in formula (15).
In order to make formula (15) easier to comprehend, four auxiliary variables are defined in formula (16); meanwhile, the gradient expression (15) can be reduced to the compact form in formula (17), in which $P$ and $Q$ are the $N \times N$ matrices with elements $p_{ij}$ and $q_{ij}$, respectively. Note that $P$ and $Q$ are symmetric matrices; therefore $D_P$ can be defined as the diagonal matrix in which each entry is the column (or row) sum of $P$, and likewise $D_Q$ for $Q$.
For convenience, we name this kernel method as FKDSNE1.
Another strategy is to take the objective function directly in the embedding space $\mathcal{F}$, so that its gradient can be written as in formula (18); in this form, one factor can be regarded as a matrix with one vector in the $i$th column, another vector in the $j$th column, and all other columns zero.
This method is termed FKDSNE2. Note that the kernel matrix $K$ is constant. Furthermore, formula (18) shows that updating the projection in the optimization only means updating the matrix $A$; additionally, $\phi$ does not need to be computed explicitly. Therefore, we do not need to explicitly perform the nonlinear map to minimize the objective function. The per-iteration computational complexities of FKDSNE1 and FKDSNE2 differ, and it is obvious that FKDSNE2 is faster than FKDSNE1 during each iteration.
5. Experiments
In this section, we evaluate the performance of our FDSNE, FKDSNE1, and FKDSNE2 methods for feature extraction. Three sets of experiments are carried out on Columbia Object Image Library (COIL-20) (
http://www1.cs.columbia.edu/CAVE/software/softlib/coil-20.php), US Postal Service (USPS) (http://www.cs.nyu.edu/~roweis/data.html), and ORL (http://www.cam-orl.co.uk) face datasets to demonstrate
their good behavior on visualization, accuracy, and elapsed time. In the first set of experiments, we focus on the visualization of the proposed methods which are compared with that of the relevant
algorithms, including SNE [17], t-SNE [18], and MSNP [21]. In the second set of experiments, we apply our methods to a recognition task to verify their feature extraction capability and compare them with MSNP and DSNE [22]. Moreover, the elapsed time of FDSNE, FKDSNE1, FKDSNE2, and DSNE is compared in the third set of experiments. In particular, the Gaussian RBF kernel is chosen as the kernel function of FKDSNE1 and FKDSNE2, with the kernel width set to the variance of the training sample set.
5.1. COIL-20, USPS, and ORL Datasets
The datasets used in our experiments are summarized as follows.
COIL-20 is a dataset of gray-scale images of 20 objects. The images of each object were taken 5 degrees apart as the object is rotated on a turntable and each object has 72 images. The size of each
image is pixels. Figure 1 shows sample images from COIL-20 images dataset.
USPS handwritten digit dataset includes 10 digit characters and 1100 samples in total. The original data format is of pixels. Figure 2 shows samples of the cropped images from USPS handwritten digits
ORL consists of gray images of faces from 40 distinct subjects, with 10 pictures for each subject. For every subject, the images were taken with varied lighting condition and different facial
expressions. The original size of each image is pixels, with 256 gray levels per pixel. Figure 3 illustrates a sample subject of ORL dataset.
5.2. Visualization Using FDSNE, FKDSNE1, and FKDSNE2
We apply FDSNE, FKDSNE1, and FKDSNE2 to visualization task to evaluate their capability of classification performance. The experiments are carried out, respectively, on COIL-20, USPS, and ORL
datasets. For the sake of computational efficiency as well as noise filtering, we first adjust the size of each image to pixels on ORL, and then we select five samples from each class on COIL-20,
fourteen samples from each class on USPS, and five samples from each class on ORL.
The experimental procedure is to extract a 20-dimensional feature for each image with FDSNE, FKDSNE1, and FKDSNE2, respectively, and then to evaluate the quality of the features through a visual presentation of the first two feature dimensions.
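A minimal sketch of this procedure (our own illustration: the projection here is random, whereas in the paper it would be the matrix learned by FDSNE/FKDSNE1/FKDSNE2):

```python
import numpy as np

# Hypothetical setup: X holds one flattened image per row, and A is a
# learned d x 20 projection (random here; in the paper A comes from
# FDSNE/FKDSNE1/FKDSNE2 training).
rng = np.random.default_rng(0)
n, d = 100, 1024
X = rng.standard_normal((n, d))
A = rng.standard_normal((d, 20))

Y = X @ A        # the 20-dimensional feature of each image
xy = Y[:, :2]    # first two feature dimensions, used for the scatterplot
# A scatterplot such as plt.scatter(xy[:, 0], xy[:, 1], c=labels) would
# then give the kind of visual presentation shown in Figures 4-6.
```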
FDSNE, FKDSNE1, and FKDSNE2 are compared with three well-known visualization methods for detecting classification performance: (1) SNE, (2) t-SNE, and (3) MSNP. The parameters are set as follows: the k-nearest neighborhood parameter of the FDSNE, FKDSNE1, and FKDSNE2 methods is (let denote the number of training samples in each class), ; for SNE and t-SNE, the perplexity parameter is and the iteration number is ; for MSNP, the degree of freedom of the Cauchy distribution is and the iteration number is 1000 as well.
Figures 4, 5, and 6 show the visual presentation results of FDSNE, FKDSNE1, FKDSNE2, SNE, t-SNE, and MSNP, respectively, on the COIL-20, USPS, and ORL datasets. The visual presentation is represented as a scatterplot in which a different color denotes a different class. The figures reveal that the three nearest-neighbor-based methods, that is, FDSNE, FKDSNE1, and FKDSNE2, give considerably better classification results than SNE, t-SNE, and MSNP on all datasets, for the separation between classes is quite obvious. In particular, SNE and t-SNE not only get less separation for
the interclass data but also produce larger intraclass scatter. For MSNP, it has smaller intraclass scatter, but there exists an overlapping phenomenon among classes. With regard to FDSNE, FKDSNE1,
and FKDSNE2, we can find from the figures that FKDSNE1 shows the best classification performance among all the algorithms on ORL face dataset, while not on the other two datasets COIL-20 and USPS;
thereinto, the classification performance of FKDSNE1 is inferior to FDSNE on COIL-20 while on USPS it is inferior to FKDSNE2. In addition, the clustering qualities and separation degree of FKDSNE1
and FKDSNE2 are obviously better than that of FDSNE.
5.3. Recognition Using FDSNE, FKDSNE1, and FKDSNE2
In this subsection, we apply FDSNE, FKDSNE1, and FKDSNE2 to the recognition task to verify their feature extraction capability. Nonlinear dimensionality reduction algorithms such as SNE and t-SNE lack an explicit projection matrix for out-of-sample data, which means they are not suitable for recognition. So we compare the proposed methods with DSNE and MSNP, both of which are linear methods that were shown to outperform existing feature extraction algorithms such as SNE, t-SNE, LLTSA, LPP, and so on in [21, 22]. The procedure of recognition is described as follows: firstly, divide
dataset into training sample set and testing sample set randomly; secondly, the training process for the optimal matrix or is taken for FDSNE, FKDSNE1 and FKDSNE2; thirdly, feature extraction is
accomplished for all samples using or ; finally, a testing image is identified by a nearest neighbor classifier. The parameters are set as follows: the k-nearest neighborhood parameter , in FDSNE, FKDSNE1, and FKDSNE2 is , ; for DSNE, the perplexity parameter is and the iteration number is ; for MSNP, the degree of freedom of the Cauchy distribution is determined by cross validation and the iteration number is 1000 as well.
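The projection-and-classify part of this procedure (the third and final steps) can be sketched as follows. This is our own illustration: the learned matrix W would come from FDSNE/FKDSNE training, which is not reproduced here, so the example simply uses a placeholder W.

```python
import numpy as np

def one_nn_accuracy(W, X_train, y_train, X_test, y_test):
    """Project all samples with a learned matrix W, then classify each
    test sample by its nearest training neighbour in feature space."""
    F_train, F_test = X_train @ W, X_test @ W
    hits = 0
    for f, y in zip(F_test, y_test):
        j = np.argmin(((F_train - f) ** 2).sum(axis=1))
        hits += int(y_train[j] == y)
    return hits / len(y_test)

# Tiny synthetic example; W is a placeholder identity matrix here.
X_train = np.array([[0.0, 0.0], [10.0, 10.0]])
y_train = np.array([0, 1])
X_test = np.array([[1.0, 0.0], [9.0, 10.0]])
y_test = np.array([0, 1])
acc = one_nn_accuracy(np.eye(2), X_train, y_train, X_test, y_test)
```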
Figure 7 demonstrates the effectiveness of different subspace dimensions for COIL-20 ((a): , (b): ). Figure 8 is the result of the experiment in USPS ((a): , (b): ), and Figure 9 shows the
recognition rate versus subspace dimension on ORL ((a): , (b): ). The maximal recognition rate of each method and the corresponding dimension are given in Table 1, where the number in bold stands for
the highest recognition rate. From Table 1, we can find that FKDSNE1 and FKDSNE2 outperform MSNP, DSNE, and FDSNE on COIL-20, USPS, and ORL. As can be seen, FKDSNE1 and FKDSNE2 enhance the maximal
recognition rate for at least 2% compared with other three methods. Besides, FKDSNE1 and FKDSNE2 achieve considerable recognition accuracy when feature dimension is 20 on the three datasets. It
indicates that FKDSNE1 and FKDSNE2 grasp the key character of face images relative to identification with a few features. Though the maximal recognition rate of DSNE and FDSNE is closer to that of
FKDSNE1 and FKDSNE2 on ORL dataset, the corresponding dimension of FKDSNE1 and FKDSNE2 is 20 while that of DSNE and FDSNE exceeds 30. From the essence of dimensional reduction, this result
demonstrates that FDSNE and DSNE are inferior to FKDSNE1 and FKDSNE2.
5.4. Analysis of Elapsed Time
In this subsection, we further compare the computational efficiency of DSNE, FDSNE, FKDSNE1, and FKDSNE2. The algorithm MSNP is not compared since its recognition rate is obviously worse than the other algorithms. The parameters of the experiment are the same as in Section 5.3. Figures 10, 11, and 12, respectively, show the elapsed time of the four algorithms under different subspace dimensions on the
three datasets. It can be observed from the figures that FKDSNE2 has the lowest computational cost among the four algorithms while DSNE is much inferior to other nearest-neighbor-based algorithms on
all datasets. In particular, on the COIL-20 dataset, FKDSNE2 runs more than 2 times faster than DSNE. As for DSNE and FDSNE, the former is obviously slower than the latter. Besides,
for the two kernel methods, FKDSNE2 is notably faster than FKDSNE1, which confirms our discussion in Section 4.
Furthermore, kernel-based algorithms FKDSNE1 and FKDSNE2 can effectively indicate the linear structure on high-dimensional space. Their objective function can achieve better values on desirable
dimensions. For instance, Figure 13 illustrates the objective function value of MSNP, DSNE, FDSNE, FKDSNE1, and FKDSNE2 versus the iteration number on the ORL dataset. It can be found that FKDSNE1 and FKDSNE2 are close to the convergence value while FDSNE and DSNE only achieve and MSNP achieves when the iteration number is 400. It means that FKDSNE1 and FKDSNE2 reach a more precise objective function value with fewer iterations than DSNE and FDSNE; that is to say, FKDSNE1 and FKDSNE2 can achieve the same value using forty percent of the elapsed time of DSNE and FDSNE.
6. Conclusion
On the basis of DSNE, we present a method called fast discriminative stochastic neighbor embedding analysis (FDSNE) which chooses the reference points in the k-nearest neighbors of the target sample from
the same class and the different classes instead of the total training samples and thus has much lower computational complexity than that of DSNE. Furthermore, since FDSNE is a linear feature
dimensionality reduction algorithm, we extend FDSNE to a nonlinear scenario using techniques of kernel trick and present two kernel-based methods: FKDSNE1 and FKDSNE2. Experimental results on
COIL-20, USPS, and ORL datasets show the superior performance of the proposed methods. Our future work might include further empirical studies on the learning speed and robustness of FDSNE by using
more extensive, especially large-scale, experiments. It also remains important to investigate acceleration techniques in both initialization and long-run stages of the learning.
Acknowledgments
This project was partially supported by Zhejiang Provincial Natural Science Foundation of China (nos. LQ12F03011 and LQ12F03005).
References
1. E. Cherchi and C. A. Guevara, “A Monte Carlo experiment to analyze the curse of dimensionality in estimating random coefficients models with a full variance-covariance matrix,” Transportation Research B, vol. 46, no. 2, pp. 321–332, 2012.
2. M. Turk and A. Pentland, “Eigenfaces for recognition,” Journal of Cognitive Neuroscience, vol. 3, no. 1, pp. 71–86, 1991.
3. S. Yan, D. Xu, B. Zhang, H.-J. Zhang, Q. Yang, and S. Lin, “Graph embedding and extensions: a general framework for dimensionality reduction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 29, no. 1, pp. 40–51, 2007.
4. P. N. Belhumeur, J. P. Hespanha, and D. J. Kriegman, “Eigenfaces versus fisherfaces: recognition using class specific linear projection,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 19, no. 7, pp. 711–720, 1997.
5. M. Sugiyama, “Dimensionality reduction of multimodal labeled data by local fisher discriminant analysis,” Journal of Machine Learning Research, vol. 8, pp. 1027–1061, 2007.
6. W. Bian and D. Tao, “Max-min distance analysis by using sequential SDP relaxation for dimension reduction,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 5, pp. 1037–1050, 2011.
7. Z. Teng, J. He, et al., “Critical mechanical conditions around neovessels in carotid atherosclerotic plaque may promote intraplaque hemorrhage,” Atherosclerosis, vol. 223, no. 2, pp. 321–326, 2012.
8. Z. Teng, A. J. Degnan, U. Sadat et al., “Characterization of healing following atherosclerotic carotid plaque rupture in acutely symptomatic patients: an exploratory study using in vivo cardiovascular magnetic resonance,” Journal of Cardiovascular Magnetic Resonance, vol. 13, article 64, 2011.
9. C. E. Hann, I. Singh-Levett, B. L. Deam, J. B. Mander, and J. G. Chase, “Real-time system identification of a nonlinear four-story steel frame structure-application to structural health monitoring,” IEEE Sensors Journal, vol. 9, no. 11, pp. 1339–1346, 2009.
10. A. Segui, J. P. Lebaron, and R. Leverge, “Biomedical engineering approach of pharmacokinetic problems: computer-aided design in pharmacokinetics and bioprocessing,” IEE Proceedings D, vol. 133, no. 5, pp. 217–225, 1986.
11. F. Wu, Y. Zhong, and Q. Y. Wu, “Online classification framework for data stream based on incremental kernel principal component analysis,” Acta Automatica Sinica, vol. 36, no. 4, pp. 534–542, 2010.
12. J. B. Tenenbaum, V. de Silva, and J. C. Langford, “A global geometric framework for nonlinear dimensionality reduction,” Science, vol. 290, no. 5500, pp. 2319–2323, 2000.
13. S. T. Roweis and L. K. Saul, “Nonlinear dimensionality reduction by locally linear embedding,” Science, vol. 290, no. 5500, pp. 2323–2326, 2000.
14. M. Belkin and P. Niyogi, “Laplacian eigenmaps for dimensionality reduction and data representation,” Neural Computation, vol. 15, no. 6, pp. 1373–1396, 2003.
15. H. Li, H. Jiang, R. Barrio, X. Liao, L. Cheng, and F. Su, “Incremental manifold learning by spectral embedding methods,” Pattern Recognition Letters, vol. 32, no. 10, pp. 1447–1455, 2011.
16. P. Zhang, H. Qiao, and B. Zhang, “An improved local tangent space alignment method for manifold learning,” Pattern Recognition Letters, vol. 32, no. 2, pp. 181–189, 2011.
17. G. Hinton and S. Roweis, “Stochastic neighbor embedding,” Advances in Neural Information Processing Systems, vol. 15, pp. 833–840, 2002.
18. L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” Journal of Machine Learning Research, vol. 9, pp. 2579–2605, 2008.
19. J. A. Cook, I. Sutskever, A. Mnih, and G. E. Hinton, “Visualizing similarity data with a mixture of maps,” in Proceedings of the 11th International Conference on Artificial Intelligence and Statistics, vol. 2, pp. 67–74, 2007.
20. Z. R. Yang, I. King, Z. L. Xu, and E. Oja, “Heavy-tailed symmetric stochastic neighbor embedding,” Advances in Neural Information Processing Systems, vol. 22, pp. 2169–2177, 2009.
21. S. Wu, M. Sun, and J. Yang, “Stochastic neighbor projection on manifold for feature extraction,” Neurocomputing, vol. 74, no. 17, pp. 2780–2789, 2011.
22. J. W. Zheng, H. Qiu, Y. B. Jiang, and W. L. Wang, “Discriminative stochastic neighbor embedding analysis method,” Computer-Aided Design & Computer Graphics, vol. 24, no. 11, pp. 1477–1484, 2012.
23. C. Cattani, R. Badea, S. Chen, and M. Crisan, “Biomedical signal processing and modeling complexity of living systems,” Computational and Mathematical Methods in Medicine, vol. 2012, Article ID 298634, 2 pages, 2012.
24. X. Zhang, Y. Zhang, J. Zhang, et al., “Unsupervised clustering for logo images using singular values region covariance matrices on Lie groups,” Optical Engineering, vol. 51, no. 4, Article ID 047005, 8 pages, 2012.
25. P. J. Moreno, P. Ho, and N. Vasconcelos, “A Kullback-Leibler divergence based kernel for SVM classification in multimedia applications,” Advances in Neural Information Processing Systems, vol.
16, pp. 1385–1393, 2003. | {"url":"http://www.hindawi.com/journals/cmmm/2013/106867/","timestamp":"2014-04-16T22:50:08Z","content_type":null,"content_length":"507224","record_id":"<urn:uuid:e98523e3-d62b-499c-a153-bf5f0afcea7f>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00384-ip-10-147-4-33.ec2.internal.warc.gz"} |
The sum of the measures of two exterior angles of a triangle is 264 degrees. What is the measure of the third exterior angle?
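A quick check, using the standard fact that the exterior angles of a triangle (one at each vertex) sum to 360 degrees:

```python
# The three exterior angles of a triangle sum to 360 degrees,
# so the third one is whatever the first two leave over.
third = 360 - 264
print(third)  # 96
```

So the third exterior angle measures 96 degrees.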
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50bcc853e4b0bcefefa08971","timestamp":"2014-04-20T08:44:51Z","content_type":null,"content_length":"39473","record_id":"<urn:uuid:ec4618ef-968b-4859-b7f8-3ff0cfd6e7b5>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00132-ip-10-147-4-33.ec2.internal.warc.gz"} |
Why Should Mathematicians Care About Philosophy of Mathematics?
Posted by: Alexandre Borovik | December 6, 2008
I will give a talk under that title at Magic Postgraduate Student Conference 2009.
Are mathematical objects invented or discovered? Questions of that type are inevitably asked and answered by mathematicians in the course of their work. In most cases, the answers are not
revealed publicly but retained for personal use. But even as questions like “what is a mathematical object?” are suppressed, their derivatives, like “What do we mean when we say that two objects
are identical or when we say that two objects are equivalent?” are unavoidable in any formal mathematical discourse.
At an informal level, the situation is even more puzzling. I quote a Fields Medal winner, Timothy Gowers:
“The following informal concepts of mathematical practice cry out to be explicated:
beautiful, natural, deep, trivial, “right”, difficult, genuinely, explanatory …”
Without doubt, you use these words when you talk to your colleagues about mathematics: can you explain their meaning?
Unfortunately, philosophy of mathematics as an academic discipline fails to fulfil its basic function: it does not help mathematicians to develop a conceptual framework for their normal,
day-to-day, professional discourse. In my talk, I will argue that mathematicians should take care of themselves and try to clarify “informal” aspects of their work.
Gian-Carlo Rota has some thought-provoking things to say about some of these aesthetic terms in his book Indiscrete Thoughts. Any thoughts on what he says?
By: Todd Trimble on December 6, 2008
at 5:47 pm
Posted in Uncategorized | {"url":"http://micromath.wordpress.com/2008/12/06/why-should-mathematicians-care-about-philosophy-of-mathematics/?like=1&source=post_flair&_wpnonce=bc4cf9c758","timestamp":"2014-04-16T20:12:58Z","content_type":null,"content_length":"57879","record_id":"<urn:uuid:8a191dc4-80ee-4989-a18a-cc71be7d079f>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Axial motion and scalar transport in stretched spiral vortices
Pullin, D. I. and Lundgren, T. S. (2001) Axial motion and scalar transport in stretched spiral vortices. Physics of Fluids, 13 (9). pp. 2553-2563. ISSN 1070-6631. http://resolver.caltech.edu/
See Usage Policy.
Use this Persistent URL to link to this item: http://resolver.caltech.edu/CaltechAUTHORS:PULpof01
We consider the dynamics of axial velocity and of scalar transport in the stretched-spiral vortex model of turbulent fine scales. A large-time asymptotic solution to the scalar advection-diffusion equation, with an azimuthal swirling velocity field provided by the stretched spiral vortex, is used together with appropriate stretching transformations to determine the evolution of both the axial velocity and a passive scalar. This allows calculation of the shell-integrated three-dimensional spectra of these quantities for the spiral-vortex flow. The dominant term in the velocity (energy) spectrum contributed by the axial velocity is found to be produced by the stirring of the initial distribution of axial velocity by the axisymmetric component of the azimuthal velocity. This gives a k^(-7/3) spectrum at large wave numbers, compared to the k^(-5/3) component for the azimuthal velocity itself. The spectrum of a passive scalar being mixed by the vortex velocity field is the sum of two power laws. The first is a k^(-1) Batchelor spectrum for wave numbers up to the inverse Batchelor scale. This is produced by the axisymmetric component of the axial vorticity but is independent of the detailed radial velocity profile. The second is a k^(-5/3) Obukhov-Corrsin spectrum for wave numbers less than the inverse Kolmogorov scale. This is generated by the nonaxisymmetric axial vorticity and depends on initial correlations between this vorticity and the initial scalar field. The one-dimensional scalar spectrum for the composite model is in satisfactory agreement with experimental measurements.
Item Type: Article
Additional Information: Copyright © 2001 American Institute of Physics. Received 29 November 2000; accepted 10 May 2001. D.I.P. was supported in part by the National Science Foundation under Grant No. CTS-9978551.
Subject Keywords: vortices; turbulence
Record Number: CaltechAUTHORS:PULpof01
Persistent URL: http://resolver.caltech.edu/CaltechAUTHORS:PULpof01
Alternative URL: http://dx.doi.org/10.1063/1.1388207
Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 2086
Collection: CaltechAUTHORS
Deposited By: Tony Diaz
Deposited On: 07 Mar 2006
Last Modified: 26 Dec 2012 08:47
Repository Staff Only: item control page | {"url":"http://authors.library.caltech.edu/2086/","timestamp":"2014-04-19T09:28:09Z","content_type":null,"content_length":"21717","record_id":"<urn:uuid:277aff5a-faf8-4881-8dbf-1a8aaed5774e>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00310-ip-10-147-4-33.ec2.internal.warc.gz"} |
Obvious but in need of confirmation!!
March 31st 2006, 11:21 PM #1
Is it possible to have a power raised to a power E.g (Its attached)
I am in year nine N.S.W sydney mathematics, stage 5.2-5.3 I think, and in a disadvantaged school ( Chester Hill High to be exact, it has an Intensive English Centre in which students with little
kowledge of english are separated and taught, then released into the rest of the school community i.e the one im in.)
That is the same as (2^3)^4.
Meaning, the fourth power of 2-cubed.
And that is simplified into
(2^3)^4 = 2^(3·4)
= 2^12 -----------answer.
Another way of looking at it,
(2^3)^4 = (2^3)(2^3)(2^3)(2^3)
= 2^(3+3+3+3)
= 2^12.
It's quite possible but you have to think a little carefully about what you mean by $2^{3^4}$, which I'll write on one line as 2^3^4. Do you mean (2^3)^4, ie take 2, raise that to the power 3,
then raise the result (8) to the power 4, giving 4096; or do you mean 2^(3^4), ie take the 4th power of 3, then take that power (81) of 2 giving 2417851639229258349412352. These are different in
general, and a way of saying that is that the ^ operator is not associative. You have actually seen this distinction before: addition and multiplication are associative, that is $a+(b+c) = (a+b)
+c$ and $a \times (b \times c) = (a \times b) \times c$, whereas subtraction and division are not.
Anyway it wouldn't really matter which way of inserting the brackets you chose, provided that everyone else understood the same thing by it, were it not for one thing. The first way of reading $a
^{b^c}$, as $\left(a^b\right)^c$ has another expression, since it is just $a^{(b\times c)}$. Since we don't really need two ways of writing the same thing, it makes sense for everyone to agree
that $a^{b^c}$ should mean $a^{\left(b^c\right)}$.
So the short answer is: yes, $a^{b^c} = a^{\left(b^c\right)}$.
I just read that and went huh?
Anyway, I think you went too deep into the matter. I was doing a question and looked in the textbook for the answer (not to cheat, more like see the answer and try to relate it to other questions), and the answer was exactly like the image I posted before. I asked my teacher about it, and she said it was too advanced and I don't have to worry about it now. I still don't get it, could you please show this in more depth (if possible)? Thanks in advance.
I think what the others are trying to say is that there are two ways to interpret the expression you wrote:
1) $2^{(3^4)}=2^{81}$
2) $(2^3)^4=2^{12}$
Both expressions are possible and there is apparently no convention in place to say which one you should be using.
Using a calculator, if you punch in those numbers >>
2^3^4, you'll get 2^12
Unless it indicates that there are "()" in between the powers you're pretty much set.
2^a^b = 2^(a*b)
2^3^4 = 2^(3*4) = 2^(12)
Now if the brackets do exist then it's different:
2^(3^4) = 2^(3*3*3*3) = 2^(81)
Well, I hope that makes it a little clearer; it's pretty much the same thing the others are saying. Hope it helped.
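For what it's worth, programming languages also have to pick one of the two conventions. In Python, for example, the `**` operator is right-associative, matching the mathematical convention for power towers:

```python
# Python's ** operator associates to the right, so 2**3**4 is read as
# 2**(3**4) = 2**81, not (2**3)**4 = 2**12.
a = 2 ** 3 ** 4        # 2**81 = 2417851639229258349412352
b = (2 ** 3) ** 4      # 2**12 = 4096
print(a == 2 ** 81, b == 2 ** 12)  # True True
```

So a calculator that evaluates 2^3^4 as 2^12 is applying the left-to-right reading, not the usual top-down one.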
I understand now! Thanks for helping me, and "mm goy sai" (thanks a lot)!
Jul 2005 | {"url":"http://mathhelpforum.com/math-topics/2411-obvious-but-need-confirmation.html","timestamp":"2014-04-19T00:19:21Z","content_type":null,"content_length":"53292","record_id":"<urn:uuid:55d16152-6229-4674-ace1-85f5a7001fe5>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00031-ip-10-147-4-33.ec2.internal.warc.gz"} |
Elementary tools for proving congruences of modular forms
My impression is that the specialists in the field use geometric modular forms when proving congruences of modular forms. While this is probably the right way, I don't think I will be able to get a
working knowledge of this point of view fast enough, as this is for an undergraduate summer research project.
Do you know of any references that of anything that involves congruences of modular forms being proved by elementary means? Mostly, I'm looking for examples of methods and tools rather than a
specific theorem.
More specifically, I'm interested in congruences between modular forms of different weight and the same level. The only fact I know in this case is that the Eisenstein series $E_{p-1}\equiv 1\pmod{p}
$, and multiplying by this Eistenstein series give you equivalences between modular forms of different weight. Also (though much less trivial), the converse is also true. However, with only this
fact, it seems the only hope of proving anything is to come up with very explicit formulas for what is happening.
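To make the E_{p-1} trick concrete, the standard statement can be written out as follows (this uses nothing beyond the facts already quoted above; it holds for primes p ≥ 5, by von Staudt-Clausen p divides the denominator of B_{p-1}, so the non-constant coefficients vanish mod p):

```latex
E_{p-1} \;=\; 1 \;-\; \frac{2(p-1)}{B_{p-1}} \sum_{n \ge 1} \sigma_{p-2}(n)\, q^n
\;\equiv\; 1 \pmod{p},
\qquad
f \in M_k \;\Longrightarrow\; f \cdot E_{p-1} \in M_{k+p-1},
\quad f \cdot E_{p-1} \equiv f \pmod{p},
```

the last congruence being a congruence of q-expansions. Iterating it, forms of weights k and k + m(p-1) can be congruent mod p, which is exactly the weight-changing, level-preserving phenomenon in question.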
(I know there is a short paper by Serre that determines the structure of the ring of modular forms, under the full SL_2(Z), reduced mod p http://math.bu.edu/people/potthars/writings/serre-1.pdf.
However, this does not generalize to modular forms of a given level, since it uses the structure of the ring of modular forms under the full modular group.
A couple of other papers I've found are "Congruences between systems of eigenvalues of modular forms" and "A study of the local components of the Hecke algebra mod l" by Jochnowitz, but I've only
just started reading.
This was originally posted on stackexchange: http://math.stackexchange.com/questions/420509/elementary-tools-for-proving-congruences-of-modular-forms.)
nt.number-theory modular-forms
Don't you have a mentor for the summer project who can give you advice? – user29720 Jun 14 '13 at 22:29
1 Yes, I have already asked for references and tools, which is how I know about E_{p-1} and Serre's expository paper. However, this is all that we have in terms of elementary references. My mentor
is a topologist and knows of an approach using topological modular forms, but wonders if there is a more direct way. Until I come up with something concrete, I think, other than asking clarifying
questions, all the advice that will be given has been given. – Dtseng Jun 14 '13 at 22:50
2 See E. Ghate, An introduction to congruences between modular forms math.tifr.res.in/%7Eeghate/basics.dvi – François Brunault Jun 15 '13 at 11:11
1 Answer
To understand congruences between modular forms, one needs first to understand the abstract notion of congruences between elements of modules with (Hecke) operators acting on them. For this, Ghate's note, as recommended by François, is a very good introduction.

But then I assume you want to go further, and understand how people prove, or use, congruences between modular forms. The problem with the traditional definition of modular forms as holomorphic functions on the upper half plane is that obviously, this is an analytic, not algebraic, definition, and that therefore some (hard) work is needed to reveal the arithmetic nature of modular forms, in particular to study congruences between them. This hard work generally involves some serious algebraic geometry, such as defining, constructing and studying moduli spaces of elliptic curves with various structures over an arithmetic base, etc. This is likely to be overwhelming for an undergraduate student. Yet to seriously understand the aspect you are mentioning (congruences between modular forms of the same level but various weights), this geometric machinery is ultimately what one needs.
An alternative is to start with a different kind of object, somehow related to modular forms, but with a more direct connection to arithmetic. I am thinking of either "modular symbols" or "modular forms over quaternion algebras".
Let me just mention the second here. Let $D$ be a quaternion algebra over $\mathbb Q$ which is ramified at infinity (that is $D \otimes \mathbb R = \mathbb H$) and $G=D^\ast$ the algebraic group over $\mathbb Q$ of its invertible elements. Define a "level" $K$ as a compact open subgroup of $G(\mathbb A_f)$, and a "modular form of weight 2 and level $K$" as simply a function from $G(\mathbb A_f)$ to $\mathbb C$ which is left-invariant by $K$ and right-invariant by $G(\mathbb Q)$. You get this way a notion which has a lot of analogy with modular forms: they form a finite-dimensional vector space, which has a natural structure over $\mathbb Z$ (just take the same functions with values in $\mathbb Z$), a natural action of Hecke operators, etc. Actually this is more than just a simple analogy: a deep theorem of Jacquet-Langlands tells you that the space of "modular forms of weight 2 for $D$" defined as above is actually a big subspace of the space of traditional modular forms of weight 2, in a way which respects the Hecke operators. But you don't need to understand fully this theorem, let alone its difficult proof, to use it as a motivation to study modular forms over $D$, and their congruences.
Now it turns out that certain theorems about congruences between modular forms are much simpler to prove (but still non-trivial) for their $D$-counterparts. One example of this is the famous "Ribet level-raising theorem", which is quite difficult to prove for classical modular forms but is relatively simple and very beautiful to prove for modular forms for $D$. Understanding this theorem may be a worthwhile goal for a summer project of a very good and very motivated undergrad. One reference is section 1 of Taylor's early and very deep paper "On Galois representations associated to Hilbert modular forms" in Inventiones. Unfortunately it is not extremely reader-friendly, as the main ideas (basically some computations on the Bruhat-Tits tree) are not explicitly explained.
Not the answer you're looking for? Browse other questions tagged nt.number-theory modular-forms or ask your own question. | {"url":"http://mathoverflow.net/questions/133798/elementary-tools-for-proving-congruences-of-modular-forms","timestamp":"2014-04-19T04:36:07Z","content_type":null,"content_length":"58323","record_id":"<urn:uuid:9597ba92-37da-4ba1-959e-87390c98d742>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00355-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding the cubic function rule
Hi, I would like some help on finding the expression for the following cubic curve. Hope someone can help. P.S
1. The general equation of a cubic function is: $f(x)=ax^3+bx^2+cx+d$ 2. Since the graph passes through the origin you already know that d = 0. 3. Plug in the coordinates of the given points into the
general equation. You'll get a system of 3 equations: $\left|\begin{array}{l}3 = 8a+4b+2c \\ \frac34 = a+b+c \\ -3=-8a+4b-2c \end{array}\right.$ 4. Solve for (a, b, c)
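As a check on step 4, the 3×3 system above can be solved numerically. This is a minimal sketch using Gaussian elimination; the `solve3` helper name is just illustrative.

```python
# Solve the 3x3 linear system from steps 3-4 above:
#   8a + 4b + 2c =  3
#    a +  b +  c =  3/4
#  -8a + 4b - 2c = -3
def solve3(A, b):
    """Gauss-Jordan elimination with partial pivoting for a 3x3 system."""
    n = 3
    M = [row[:] + [rhs] for row, rhs in zip(A, b)]  # augmented matrix
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]             # pivot for stability
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

a, b, c = solve3([[8, 4, 2], [1, 1, 1], [-8, 4, -2]], [3.0, 0.75, -3.0])
# a = 0.25, b = 0, c = 0.5, so the curve is f(x) = 0.25x^3 + 0.5x
```

This gives $f(x)=\frac14 x^3+\frac12 x$, which indeed passes through $(2,3)$, $(1,\frac34)$, and $(-2,-3)$.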
Thanks, I think I know how to work it out now.
East Bangor, PA Math Tutor
Find an East Bangor, PA Math Tutor
...Mostly, I will tutor in physics, mathematics and even chemistry. Contact me if you have any questions at all and I will be more than happy to answer them! I have a Bachelor of Science degree in
Physics with a minor in mathematics. I spent one year as a one-on-one tutor for introductory physics c...
16 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I got married, and we moved to this area for my husband's work. While my degree is in Biology, I also have experience in Mathematics. I am a very patient person, and I love helping people
through difficult material.
35 Subjects: including prealgebra, anatomy, botany, nursing
...Additionally, I have taught History, Social Studies and Science with great success for my students. I create engaging, fun and easy to understand lessons that are individually tailored to meet
the needs of my students with varying skills and ability levels. I have taught students from Kindergarten through tenth grade.
16 Subjects: including prealgebra, reading, writing, grammar
...I myself have struggled in math, so that makes me patient and understanding. I believe in helping students do the work for themselves. I have very simple methods, and I draw a lot of pictures
and incorporate real-world examples in my tutoring sessions.
14 Subjects: including algebra 1, algebra 2, Microsoft Excel, geometry
...I will not offer to, and I do not expect to be asked to, teach more advanced topics of Chemistry such as those one might find in college. I earned a 730 in Critical Reading and an 800 in
Writing on the SAT. This included an 11 out of 12 on the essay, which requires concise, mature vocabulary (n...
28 Subjects: including algebra 1, algebra 2, American history, calculus
Related East Bangor, PA Tutors
East Bangor, PA Accounting Tutors
East Bangor, PA ACT Tutors
East Bangor, PA Algebra Tutors
East Bangor, PA Algebra 2 Tutors
East Bangor, PA Calculus Tutors
East Bangor, PA Geometry Tutors
East Bangor, PA Math Tutors
East Bangor, PA Prealgebra Tutors
East Bangor, PA Precalculus Tutors
East Bangor, PA SAT Tutors
East Bangor, PA SAT Math Tutors
East Bangor, PA Science Tutors
East Bangor, PA Statistics Tutors
East Bangor, PA Trigonometry Tutors | {"url":"http://www.purplemath.com/East_Bangor_PA_Math_tutors.php","timestamp":"2014-04-18T11:15:15Z","content_type":null,"content_length":"24015","record_id":"<urn:uuid:aac330c8-2918-4d88-a7b8-2fb33583e36c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Copyright © University of Cambridge. All rights reserved.
'Straw Scaffold' printed from http://nrich.maths.org/
Why do this problem?
Building a model scaffold will require students to work in groups. To be successful, a group will need:
• team work
• good communication
• leadership
• self-discipline
They will also need to plan their scaffold, testing it as they go to see if their design is fit for purpose.
These are all skills required in Design Technology.
This activity is based on a resource from Richard Hall and Michael Acheson, two of the teachers involved in the very successful STEM teacher inspiration days, 2011-12. You may also be interested in
the resources
used in the Dragster workshop on TI day 2.
The practical activity will help students to develop an understanding of forces which will help them in both Maths and Science later on. Groups should also assess which structure can bear the
greatest volume of water for the least number of straws. To do this, they will need to discuss how they are going to calculate the volume-to-number-of-straws ratio to ensure that there is a common
criterion in the class.
Possible approach
This would be an ideal lesson for a visit from one of the Design Technology department, who could lead an initial discussion about how to design and test the scaffold.
For an hour lesson, a suitable division of time would be 40 minutes to test, plan and build the scaffold, and 20 minutes for assessment and discussion.
One way to avoid too much mess is to give groups a beaker and small cubes to test their structure, and only make water available at the final testing.
Key questions
• How can you compare scaffolds so you can work out which fulfils the brief best?
• What features characterised successful scaffolds?
• What features characterised successful groups?
Possible extension
• Finding the maximum amount of water that can be supported by a given scaffold.
• Deciding on whether there are redundant straws in the structures.
Possible support
All groups should be able to get started at least. Suggest to a group which is struggling that they start with a smaller scaffold then gradually add height, testing for stability and for
load-bearing at each stage. | {"url":"http://nrich.maths.org/8847/note?nomenu=1","timestamp":"2014-04-21T12:31:00Z","content_type":null,"content_length":"5458","record_id":"<urn:uuid:21badea6-cd61-4c25-80c3-259586a92512>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00464-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical logic: Algorithm, Set theory, List of mathematical symbols, Surreal number, Entscheidungsproblem, Recursion
We strive to deliver the best value to our customers and ensure complete satisfaction for all our textbook rentals.
As always, you have access to over 5 million titles. Plus, you can choose from 5 rental periods, so you only pay for what you’ll use. And if you ever run into trouble, our top-notch U.S. based
Customer Service team is ready to help by email, chat or phone.
So sorry. Try back another time as our inventory fluctuates daily.
Since launching the first textbook rental site in 2006, BookRenter has never wavered from our mission to make education more affordable for all students. Every day, we focus on delivering students
the best prices, the most flexible options, and the best service on earth. On March 13, 2012 BookRenter.com, Inc. formally changed its name to Rafter, Inc. We are still the same company and the same
people, only our corporate name has changed. | {"url":"http://www.bookrenter.com/mathematical-logic-algorithm-set-theory-list-of-mathematical-symbols-surreal-number-entscheidungsproblem-recursion-wikipedia-1157608922-9781157608929","timestamp":"2014-04-19T11:27:21Z","content_type":null,"content_length":"26944","record_id":"<urn:uuid:8e825fe7-5cdf-4a5f-9342-c2f4d97057c8>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00080-ip-10-147-4-33.ec2.internal.warc.gz"} |
Particle Swarm Optimization and Bacterial Foraging Optimization Techniques for Optimal Current Harmonic Mitigation by Employing Active Power Filter
Applied Computational Intelligence and Soft Computing
Volume 2012 (2012), Article ID 897127, 10 pages
Research Article
Department of Electrical Engineering, National Institute of Technology, Rourkela 769008, India
Received 13 May 2011; Revised 31 July 2011; Accepted 29 August 2011
Academic Editor: Shyi-Ming Chen
Copyright © 2012 Sushree Sangita Patnaik and Anup Kumar Panda. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution,
and reproduction in any medium, provided the original work is properly cited.
Conventional mathematical-modeling-based approaches are incompetent to solve electrical power quality problems, as the power system network represents a highly nonlinear, nonstationary, complex
system that involves a large number of inequality constraints. In order to overcome the various difficulties encountered in the power system, such as harmonic current, unbalanced source current,
and reactive power burden, the active power filter (APF) emerged as a potential solution. This paper proposes the implementation of particle swarm optimization (PSO) and bacterial foraging optimization (BFO)
algorithms which are intended for optimal harmonic compensation by minimizing the undesirable losses occurring inside the APF itself. The efficiency and effectiveness of the implementation of two
approaches are compared for two different supply conditions. The total harmonic distortion (THD) in the source current, which is a measure of APF performance, is reduced drastically to nearly 1% by
employing BFO. The results demonstrate that BFO outperforms the conventional and PSO-based approaches by ensuring excellent functionality of the APF and rapid suppression of harmonics in the source
current even under unbalanced supply.
1. Introduction
Introduced by Kennedy and Eberhart in the year 1995 [1], particle swarm optimization (PSO) has emerged as a proficient stochastic approach of evolutionary computation. Since then it has been employed
in various fields of applications and research and is successful in yielding an optimized solution. This algorithm mimics the social behavior executed by the individuals in a bird flock or fish
school while searching for the best food location (the global optimum). The PSO algorithm depends neither upon the initial condition nor on gradient information. Since it depends only on the value of
the objective function, the algorithm is computationally inexpensive and simple to implement. The low CPU and memory requirement is another advantage. However, some experimental results
show that the local search ability around the optima is very poor though the global search ability of PSO is quite good [2–4]. This results in premature convergence in problems where multiple optima
exist, and hence the performance is degraded.
The bacterial foraging optimization (BFO) proposed by Passino in the year 2002 [5] is based on natural selection that tends to eliminate animals with poor foraging strategies. After many generations,
poor foraging strategies are eliminated while only the individuals with a good foraging strategy survive, signifying survival of the fittest. BFO formulates the foraging behavior exhibited by E. coli
bacteria as an optimization problem. Over certain real-world optimization problems, BFO has been reported to outperform many powerful optimization algorithms in terms of convergence speed and final
accuracy [6–8].
The power system is continuously being subjected to huge disturbances due to the proliferation of a large number of nonlinear loads such as power electronic converters, arc furnaces, fluorescent
lights, motor drives, saturated transformers, switched mode power supplies, computers, and other domestic and industrial electronic loads. Though the active power filter (APF) is efficient enough to
compensate for these disturbances, optimal load compensation by the APF is always desirable.
This paper exploits the conventional, PSO, and BFO approaches to optimize the shunt APF performance for optimal load compensation, and the results demonstrate that the APF implementing BFO converges
to the global optimum solution faster than the APFs employing the conventional method and PSO.
2. Particle Swarm Optimization
The mechanism of PSO is initialized with a group of randomly dispersed particles assigned with some arbitrary velocities. The particles fly in the $n$-dimensional problem space, cluster together, and
finally converge to a global optimum area. The movement of particles in the search space is in accordance with the flying experience of the individual and its neighboring particles in the swarm
population (swarm intelligence). Let the $i$th particle in the swarm be at position $x_i^k$, moving with a velocity $v_i^k$. Then the position and velocity of the particle at the next iteration will
be $x_i^{k+1}$ and $v_i^{k+1}$, respectively, which is illustrated in Figure 1 and can be given mathematically as

$v_i^{k+1} = w\,v_i^{k} + c_1 r_1\,(p_{best,i} - x_i^{k}) + c_2 r_2\,(g_{best} - x_i^{k}),$  (1)

$x_i^{k+1} = x_i^{k} + v_i^{k+1}.$  (2)

In the above expressions, the parameter $w$ is known as the inertia constant that maintains a balance between the local and global search. $c_1$ and $c_2$ are acceleration constants. $r_1$ and $r_2$
are two independently generated random numbers which are uniformly distributed in the interval $[0,1]$. $p_{best,i}$ represents the coordinates of the best location discovered as yet by the $i$th
particle (local optimum), whereas the coordinates of the best location discovered thus far by the entire swarm (global optimum) are stored in $g_{best}$ (Figure 2).
The exploration of new search space depends upon the value of the inertia constant $w$. Therefore, Eberhart and Shi proposed a modified $w$ that decreases linearly with the successive iterations [9],
which can be given as

$w = w_{max} - \dfrac{w_{max} - w_{min}}{k_{max}}\,k.$  (3)

Here $k$ is the generation index representing the current number of evolutionary generations, $k_{max}$ is the predefined maximum number of generations, and $w_{max}$ and $w_{min}$ are the maximal
and minimal weights. Initially the value of $w$ is 0.9 in order to allow the particles to find the global optimum neighborhood faster. The value of $w$ is set to 0.4 upon finding the optima so that
the search is shifted from exploratory mode to exploitative mode. The search process terminates when there is no further improvement in the global optimum solution or the number of iterations
executed becomes equal to its maximum predefined value. The entire process of PSO is represented as a flowchart in Figure 3.
2.1. Iterative Algorithm for PSO
Step 1. Initialize the size of the swarm, the dimension of the search space, the maximum number of iterations, and the PSO constants $w$, $c_1$, and $c_2$. Define the random numbers $r_1$ and $r_2$.
Find out the current fitness of each particle in the population.

Step 2. Assign the particles some random initial positions $x_i^0$ and velocities $v_i^0$. Set the counter for iteration to zero. For the initial population, the local best fitness of each particle
is its own fitness value, and the local best position of each particle is its own current position, that is, $f_{pbest,i} = f(x_i^0)$ and $p_{best,i} = x_i^0$.

Step 3. The global best fitness value is calculated by $f_{gbest} = \min_i f_{pbest,i}$. The position corresponding to the global best fitness is the global best position $g_{best}$.

Step 4. Update the particle velocity and particle position for the next iteration by (1) and (2).

Step 5. By setting $k = k + 1$, increment the iteration counter.
Find out the current fitness of each particle.
If current fitness < local best fitness, set $f_{pbest,i} = f(x_i^k)$ and $p_{best,i} = x_i^k$.

Step 6. After calculating the local best fitness of each particle, the current global best fitness for the $k$th iteration is determined by $\min_i f_{pbest,i}$. If current global best fitness <
global best fitness, then set $f_{gbest} = \min_i f_{pbest,i}$.
The position corresponding to the global best fitness is assigned to $g_{best}$.

Step 7. Repeat Steps 5 and 6 until $k$ is equal to the maximum number of iterations defined in Step 1 or there is no improvement in the global best fitness value.

Step 8. Terminate the iterative algorithm when there cannot be any further execution of iterations.
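As a concrete illustration of Steps 1-8, here is a minimal Python sketch of the PSO loop with the velocity/position updates of (1)-(2) and a linearly decreasing inertia weight. The sphere test function, swarm size, bounds, and parameter values are illustrative assumptions, not the paper's settings.

```python
import random

def pso(f, dim, n_particles=20, iters=100,
        w_max=0.9, w_min=0.4, c1=2.0, c2=2.0, bounds=(-5.0, 5.0)):
    """Minimise f over a box using the PSO updates of Eqs. (1)-(2)."""
    lo, hi = bounds
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                    # local best positions
    pbest_f = [f(xi) for xi in x]                  # local best fitnesses
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]       # global best (Step 3)
    for k in range(iters):
        w = w_max - (w_max - w_min) * k / iters    # decreasing inertia weight
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (gbest[d] - x[i][d]))   # Eq. (1)
                x[i][d] += v[i][d]                             # Eq. (2)
            fx = f(x[i])
            if fx < pbest_f[i]:                    # Step 5: update local best
                pbest[i], pbest_f[i] = x[i][:], fx
                if fx < gbest_f:                   # Step 6: update global best
                    gbest, gbest_f = x[i][:], fx
    return gbest, gbest_f
```

For example, minimising the 2-D sphere function with this sketch drives the best fitness close to zero within the iteration budget.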
3. Bacterial Foraging Optimization
BFO is a nongradient optimization method inspired by the foraging strategy used by E. coli bacteria, which maximizes their energy intake per unit time spent in foraging. The four
principal mechanisms observed in the bacteria are chemotaxis, swarming, reproduction, and elimination-dispersal.
The flowchart of BFO algorithm which mimics the above four mechanisms is presented in Figure 4.
3.1. Chemotaxis
The movement of E. coli bacteria in the human intestine in search of a nutrient-rich location away from a noxious environment is accomplished with the help of the locomotory organelles known as
flagella, by chemotactic movement in either of two ways, that is, swimming (in the same direction as the previous step) or tumbling (in an absolutely different direction from the previous one).
Suppose $\theta^i(j,k,l)$ represents the $i$th bacterium at the $j$th chemotactic, $k$th reproductive, and $l$th elimination-dispersal step. Then the chemotactic movement of the bacterium may be
mathematically represented by (10). In the expression, $C(i)$ is the size of the unit step taken in the random direction, and $\Delta(i)$ indicates a vector in an arbitrary direction whose elements
lie in $[-1, 1]$:

$\theta^i(j+1,k,l) = \theta^i(j,k,l) + C(i)\,\dfrac{\Delta(i)}{\sqrt{\Delta^T(i)\,\Delta(i)}}.$  (10)
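A small sketch of one tumble-and-swim move in the spirit of (10); the step size `C`, the swim length `n_swim`, and the test function are illustrative assumptions.

```python
import math
import random

def tumble(theta, C):
    """Eq. (10)-style move: step of size C along a random unit direction Delta."""
    delta = [random.uniform(-1.0, 1.0) for _ in theta]
    norm = math.sqrt(sum(d * d for d in delta))
    return [t + C * d / norm for t, d in zip(theta, delta)]

def chemotact(f, theta, C, n_swim=4):
    """Tumble once, then keep swimming in the same direction while the
    cost J = f(theta) keeps improving (at most n_swim extra steps)."""
    best = f(theta)
    delta = [random.uniform(-1.0, 1.0) for _ in theta]
    norm = math.sqrt(sum(d * d for d in delta))
    step = [C * d / norm for d in delta]          # C(i) * Delta / ||Delta||
    for _ in range(1 + n_swim):                   # first move = tumble, rest = swim
        cand = [t + s for t, s in zip(theta, step)]
        fc = f(cand)
        if fc < best:                             # keep moving while improving
            theta, best = cand, fc
        else:
            break
    return theta, best
```

Since only improving moves are accepted, the returned cost is never worse than the starting cost, mirroring the greedy swim behaviour.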
3.2. Swarming
This group behavior is seen in several motile species of bacteria, where the cells, when stimulated by a high level of succinate, release an attractant, aspartate. This helps them propagate
collectively as concentric patterns of swarms with high bacterial density while moving up the nutrient gradient. The cell-to-cell signaling in a bacterial swarm via attractant and repellant may be
modeled as per (11), where $J_{cc}(\theta, P(j,k,l))$ specifies the objective function value to be added to the actual objective function that needs to be optimized, so as to present a time-varying
objective function, $S$ indicates the total number of bacteria in the population, $p$ is the number of variables to be optimized, and $\theta = [\theta_1, \theta_2, \ldots, \theta_p]^T$ is a point in
the $p$-dimensional search domain. The coefficients $d_{attractant}$, $w_{attractant}$, $h_{repellant}$, and $w_{repellant}$ are the measures of the quantity and diffusion rate of the attractant
signal and the repellant effect magnitude, respectively:

$J_{cc}(\theta, P(j,k,l)) = \sum_{i=1}^{S}\Big[-d_{attractant}\exp\Big(-w_{attractant}\sum_{m=1}^{p}(\theta_m-\theta_m^i)^2\Big)\Big] + \sum_{i=1}^{S}\Big[h_{repellant}\exp\Big(-w_{repellant}\sum_{m=1}^{p}(\theta_m-\theta_m^i)^2\Big)\Big].$  (11)
3.3. Reproduction
The fitness value for the $i$th bacterium after travelling $N_c$ chemotactic steps can be evaluated by the following equation:

$J^i_{health} = \sum_{j=1}^{N_c+1} J(i,j,k,l).$  (12)

Here $J^i_{health}$ represents the health of the $i$th bacterium. The least healthy bacteria, constituting half of the bacterial population, are eventually eliminated, while each of the healthier
bacteria asexually splits into two, which are then placed in the same location. Hence, ultimately the population remains constant.
3.4. Elimination and Dispersal
The BFO algorithm makes some bacteria get eliminated and dispersed with probability $P_{ed}$ after $N_{re}$ reproductive events, to ensure that the bacteria do not get trapped in a local optimum
instead of the global optimum.
4. Active Power Filter
The shunt APF is intended to be used not merely for compensation of current harmonics but also for unbalance in the source current generated due to nonlinear loads. It injects filter-generated
current harmonics of equal magnitude and opposite phase as the load current harmonics at the point of common coupling (PCC) between the source and the load as illustrated in Figure 5. The APF
comprising a three-phase pulse-width-modulation-(PWM-) based voltage source inverter (VSI) employing various control schemes has gained wide recognition [10]. For proper functioning of the APF, it is
crucial to design an appropriate control scheme. In the conventional instantaneous active and reactive power ($p$–$q$) method, the entire reactive power and the oscillating component of active power
are used for generation of the reference compensation currents [11]. The multiplication of instantaneous load currents and voltages while calculating the instantaneous powers caused amplification of
the harmonic content, leading to imprecise harmonic compensation. Later, the instantaneous active and reactive current component ($i_d$–$i_q$) method was proposed to replace the $p$–$q$ method, as it
brings down the total harmonic distortion (THD) in the supply current below 5% so as to satisfy the IEEE-519 standard even under nonideal supply voltage [12, 13].
In this paper, the performance of the APF with conventional control is improved by means of the PSO and BFO algorithms, since the conventional approach becomes complex to implement as the power
system represents a highly nonlinear and nonstationary system. Moreover, conventional control yields inadequate results at every operating point except the one at which it is designed to be
operated [14, 15].
4.1. Shunt APF System Configuration
The system configuration of a 3-phase 3-wire shunt APF is depicted in Figure 5. The filter performance is studied under ideal and unbalanced supply conditions. Here the nonlinear load consists of a
diode rectifier with a load on the dc side. The APF comprises a VSI with hysteresis PWM current control. The controller for the APF is designed using the $i_d$–$i_q$ control scheme. Inputs to the
controller are the three-phase load currents and the dc-link capacitor voltage of the inverter. The gains of the PI controller used for dc-link voltage regulation are optimized using the
conventional, PSO, and BFO techniques. The output of the controller is the reference compensation current template. The actual filter currents and the reference compensation currents are compared in
a hysteresis comparator, giving the current pulses for the switching actions to be carried out in the switching devices (IGBTs) of the VSI. Finally, the filter-generated compensating current is
injected into the power system at the PCC to ensure that a sinusoidal and compensated current is drawn from the utility.
4.2. Optimization Problem for dc-Link Voltage Regulation
While operating under steady-state condition of the shunt APF, the VSI should neither absorb nor deliver active power. So the main concern lies in getting an optimally tuned PI controller which
satisfies the conditions of dynamics and stability together in order to make the dynamics of inverter dc-link voltage sufficiently low. In various conventional PI controller-tuning methods such as
Ziegler-Nichols method, the recommended settings are empirical in nature and require extensive experimentation. Hence, there is always scope for improving the tuning of PI controller that yields
suitable values of the proportional gain $K_p$ and integral gain $K_i$ for which it gives a better settling time within tolerable limits of maximum overshoot. This purpose can be accomplished by
implementing PSO and BFO to minimize the deviation of $V_{dc}$ from its reference value. Maximum overshoot $M_p$, rise time $t_r$, and steady-state error $e_{ss}$ are the constraints that imply the
optimality of the PI controller. The objective here is to reduce the dc-link voltage deviation, which can be given by

$e(t) = V_{dc,ref} - V_{dc}(t).$  (13)

The performance criterion chosen in this paper is the integral square error (ISE), and the objective function to be optimized is estimated using (14), a weighted integral-square-error measure of
$e(t)$ over the transient interval, which represents a nonconstrained optimization problem. In the expression, $w_1$ and $w_2$ symbolize weighing factors, $t_0$ is the starting time, and $t_s$ is the
settling time of the transient. The significance of the weighing factors is that $w_1$ is used to overcome the steady-state voltage error and $w_2$ decides the values of $M_p$ and $t_r$. A large
value of $w_2$ results in less overshoot, whereas a smaller value of $w_2$ results in reduced settling time.
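As a small illustration of the ISE criterion, the integral of $e^2(t)$ over the transient can be approximated from sampled data. This sketch uses a plain Riemann sum on made-up samples; it is a stand-in illustration, not the paper's weighted form of (14).

```python
def ise(error_samples, dt):
    """Integral square error: approximate the integral of e^2(t) dt
    over the transient by a Riemann sum on uniformly sampled errors."""
    return sum(e * e for e in error_samples) * dt

# Example: constant error e(t) = 2 over 1 s sampled at dt = 0.1
cost = ise([2.0] * 10, 0.1)  # approximates integral of 4 dt over 1 s = 4
```

An optimizer (PSO or BFO) would evaluate such a cost for each candidate $(K_p, K_i)$ pair and keep the pair with the smallest value.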
4.3. Reference Compensation Current Extraction
The PI controller output signal represents the total active current required to maintain the dc-link voltage at a constant level and to compensate the losses in the APF due to the presence of
inductances and semiconductor switches. The three-phase load currents are tracked by current sensors, upon which Park's transformation (15) is performed in order to find the corresponding $d$–$q$
axis current components $i_d$ and $i_q$. According to the $i_d$–$i_q$ control strategy, the load should draw only the average value of the direct-axis component of the load current from the supply.
Here $\bar{i}_d$ and $\bar{i}_q$ indicate the fundamental frequency components of the currents $i_d$ and $i_q$, respectively. The oscillating components of the currents $i_d$ and $i_q$, that is,
$\tilde{i}_d$ and $\tilde{i}_q$, respectively, are filtered out by using a Butterworth low-pass filter with a cut-off frequency expressed in terms of the fundamental supply frequency. The currents
$\tilde{i}_d$ and $\tilde{i}_q$, along with the active current demanded by the PI controller, are utilized to generate the reference compensation current template in $d$–$q$ coordinates, followed by
inverse Park's transformation giving the three-phase compensating currents in the $a$–$b$–$c$ reference frame, as described in (16). The zero-sequence current is brought into play in order to make
the transformation matrix a square one.
In Figure 6, the entire scheme of reference current generation for the shunt APF using the $i_d$–$i_q$ method has been illustrated. This scheme does not require a phase-locked loop (PLL) as only
current quantities are involved; hence, synchronization between phase currents and voltages is not needed.
The reference signals thus obtained are compared with the actual compensating filter currents in a hysteresis comparator, as shown in Figure 5, where the actual current is forced to follow the
reference. A hysteresis-band current controller is used for instantaneous compensation by the APF on account of its easy implementation and quick response to fast current transitions. This
consequently provides the switching signals to trigger the IGBTs inside the inverter. Ultimately, the filter provides the necessary compensation for harmonics in the source current and reactive
power unbalance in the system.
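The abc→dq Park transformation used conceptually in the scheme above can be sketched as follows. The angle convention and the 2/3 amplitude-invariant scaling are assumptions here, since the paper's exact matrices are not reproduced.

```python
import math

def park(i_a, i_b, i_c, theta):
    """Amplitude-invariant abc -> dq transform (one common convention)."""
    i_d = (2.0 / 3.0) * (i_a * math.cos(theta)
                         + i_b * math.cos(theta - 2.0 * math.pi / 3.0)
                         + i_c * math.cos(theta + 2.0 * math.pi / 3.0))
    i_q = -(2.0 / 3.0) * (i_a * math.sin(theta)
                          + i_b * math.sin(theta - 2.0 * math.pi / 3.0)
                          + i_c * math.sin(theta + 2.0 * math.pi / 3.0))
    return i_d, i_q
```

For a balanced fundamental current aligned with the rotating frame, $i_d$ is the constant (dc) amplitude and $i_q$ is zero; load harmonics show up as the oscillating parts that the low-pass filter separates out.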
5. Simulation Results
The shunt APF load compensation capability is demonstrated by means of its (a) measure of harmonic compensation in the source current and (b) dynamic performance under ideal and unbalanced supply
conditions. The active filter performance is analyzed with the error between dc-link voltage and its reference value being regulated by the conventional PI controller followed by PSO- and BFO-based
optimally tuned PI controllers. A diode rectifier load with dc-side resistance and inductance, and ac-side resistance and inductance, has been considered as shown in Figure 5. The nature of the
dc-link voltage transient (maximum overshoot, settling time, etc.) is also observed. The simulation is performed taking parameter values as given in Table 1.
The THD of the source current is a measure of the effective value of harmonic distortion and can be calculated as per (17), in which $I_1$ is the RMS value of the fundamental frequency component of
the current and $I_n$ represents the RMS value of the $n$th order harmonic component of the current:

$\mathrm{THD} = \dfrac{\sqrt{\sum_{n=2}^{\infty} I_n^2}}{I_1}.$  (17)
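Equation (17) is straightforward to compute from the harmonic RMS magnitudes; the example values below are made up for illustration.

```python
import math

def thd(i_rms):
    """THD per Eq. (17): i_rms[0] is the fundamental I1,
    i_rms[1:] are the harmonic RMS magnitudes I2, I3, ..."""
    fundamental, harmonics = i_rms[0], i_rms[1:]
    return math.sqrt(sum(i * i for i in harmonics)) / fundamental

# e.g. I1 = 10 A, I2 = 3 A, I3 = 4 A -> THD = sqrt(9 + 16) / 10 = 0.5 (50%)
ratio = thd([10.0, 3.0, 4.0])
```

A THD of 20.61% thus means the combined harmonic RMS content is about one fifth of the fundamental.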
When the APF is not being operated, the load current is exactly reflected in the source current. The FFT (Fast Fourier transform) analysis of the source current before compensation shows the THD to
be equal to 20.61%. The load current (or source current before compensation) along with its FFT analysis has been shown in Figure 7.
5.1. Compensation of Harmonics in the Source Current
The simulation for ideal supply is carried out with the source voltage of 230V (RMS) which is perfectly balanced and sinusoidal as depicted in Figure 8. The compensation currents produced by APF
employing conventional, PSO, and BFO techniques under ideal supply are shown in Figure 9. This current is added to the load current at PCC so that the resulting source current becomes sinusoidal
after compensation. The corresponding source currents are also shown in the figure. For the source current obtained with each of the APF controllers, FFT analysis is performed, and the results show
that the THD obtained with the APF employing BFO is much lower than with the conventional and PSO-based APFs.
Though the supply voltage is always desired to be ideal, an unbalanced supply condition is sometimes produced in the power system, which may be due to the generators in the system, the presence of
unequal single-phase loads, blown fuses in one of the phases of a three-phase capacitor bank, or single-phasing conditions. For simulation with the unbalanced supply condition, the voltage in one of
the three phases is taken as 230 V (RMS), while in the rest of the phases it is 200 V (RMS). The unbalanced supply voltage simulation waveform is presented in Figure 10. From Figure 11, the
comparison of source current THDs obtained under unbalanced supply reveals that BFO is a better alternative compared to the other two in terms of current harmonic compensation, lowering the THD
further.
5.2. Dynamic Performance of APF
Convergence characteristics of the dc-link voltage $V_{dc}$ for both supply conditions and all the methods of APF control are shown in Figures 12 and 13. It is observed that the APF employing BFO for
PI controller tuning makes $V_{dc}$ reach its reference value of 800 V faster compared to the other alternatives. As a result, BFO provides speedy recovery of the active filter losses and, hence,
exhibits rapid suppression of the current harmonics. The indices for performance measurement of the APF, that is, settling time ($t_s$), maximum overshoot ($M_p$), and the source current THD values,
are listed in Tables 2 and 3 for ideal and unbalanced supply, respectively. They show that though BFO provides faster convergence to the global optimum solution, there is no compromise over the
maximum overshoot of $V_{dc}$, which is within the permissible value. A comparison of the THDs of the source current for both ideal and unbalanced supply conditions, without and with compensation by
the conventional, PSO-, and BFO-based APFs, is presented in Figure 14. Under ideal supply there is not much difference in the source current distortion obtained after compensation using the APFs with
any of the PI controller tuning methods. But under unbalanced supply, BFO has better command over the dc-link voltage, and hence ultimately the distortion in the source current is lowered
drastically.
6. Conclusion
The conventional methods used to find the coefficients of the PI controller are usually based on a linearized model, which may not give satisfactory results under transient conditions. Advanced
computational-intelligence-based optimization algorithms, PSO and BFO, have been implemented to tune the coefficients of the PI controller so as to enhance the performance of the power system under
balanced and unbalanced source voltage conditions. Simulation results were compared with conventional PI controller tuning. The dc-link voltage settles approximately within one cycle, and the
excursion in voltage is smaller compared to the conventional PI controller. Further, the THDs of the source current have improved significantly, which indicates the elimination of harmonics and
verifies the superior functionality of the PSO and BFO algorithms. Of the three approaches, the BFO-PI-based APF demonstrates excellent functionality, superior harmonic compensation capability, and
robustness.
Natural Sciences & Mathematics
Courses & Advising
Math 1150 Foundations Seminars
The MATH 1150 Math Foundations Seminars offer challenging and interesting mathematical topics with a computer science component that requires only high school mathematics. The seminar topics vary
with each class and they are designed for all students.
Winter 2014
MATH 1150 - Math Foundations Seminar
Title: Graph Theory in the Real World
Section 1, CRN 2477, 30 students, Meets TRF 12-12:50PM in BAUD 101
Instructor: Allegra Reiber
A graph in its simplest form is merely a collection of dots, called vertices, and a collection of line segments, called edges, running between some or all of the vertices. The graph could model the
walking paths on campus, the preferred pizza toppings of a group of friends, or an abstract mathematical relationship. During this course, we will study the concepts and results of graph theory, how
to solve problems related to graphs, and how solving graph theory problems helps us understand real world problems: scheduling, map coloring, postal delivery routes, amicable seating charts,
population life cycle analysis, DNA sequencing, and more. This is a hybrid course, meaning that some of the course meetings are face-to-face (Tuesday, Thursday, Friday), while in between face-to-face
meetings, you will engage with course content, complete assignments, and communicate with classmates and the instructor through our Blackboard course. No specific mathematics knowledge is presumed,
but strengths in reading, writing, and reasoning will be necessary to succeed with assignments. We will do some work in this class with Mathematica, a software package which is licensed to DU and
free to students, in order to compute and visualize with graphs for some online assignments.
Previously offered seminars
• 2, Infinity & Beyond (Carney, Ormes)
• Cryptography (Arias, Curran, Vojtechovsky)
• Games and Logic (Galatos)
• Graph Theory (Zenk)
• Graph Theory in the Real World (Locke, Reiber)
• Great Ideas in Mathematics (Trujillo)
• Heart of Mathematics (Pula)
• Intro to Random Walks on Graphs (Sobieczky)
• Logic and Games (Galatos)
• Mathematical Art (Gudder)
• Mathematics for Decision Making (Ormes)
• Mathematics in Art and Music (Dobrinen)
• Mathematics of Chance (Latremoliere)
• Mathematics of Chance and Gambling (Arias)
• Mathematics of Gambling (Arias, Gudder, Hagler)
• Mathematics of Games (Pavlov)
• Mathematics of Politics (Hagler)
• Mathematics of Voting (Hagler)
• Models of Computing (Ball)
• Non-Classical Logics (Galatos)
• Patterns and Symmetry (Ormes)
• Perspectives in Art (Dobrinen)
• Pi: The Story of a Number (Kinyon)
• The Geometry of the Universe (Latremoliere)
• Thinking Machines (Ball) | {"url":"http://www.du.edu/nsm/departments/mathematics/coursesandadvising/ainaturalrequirement/math1150foundationsseminars.html","timestamp":"2014-04-16T07:22:54Z","content_type":null,"content_length":"21590","record_id":"<urn:uuid:c3349faf-5f64-4eb6-94f8-dfa5c74facbe>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00344-ip-10-147-4-33.ec2.internal.warc.gz"} |
Definitive Closest Giordano's Clone as of 6/12/13?
I wanted to give you some real good shots of the crust so you can see how different it is vs the other Chicago places.
If you can dodge a wrench you can dodge a ball.
(I'm sure there must be a way for me to repost those pictures in this reply, rather than making people have to follow a link to see them, but I can't figure out how to do it. Can anyone give me a
clue how to do that?)
At the bottom of a pic click on the paper clip....a window comes up to file it....repost from file.
Nate, I feel like most of your pics show a crust that looks very similar to my crust on this pizza. Especially this pic. Are you seeing something different? (I'm sure there must be a way for me to repost those pictures in this reply, rather than making people have to follow a link to see them, but I can't figure out how to do it. Can anyone give me a clue how to do that?)
Not sure what that means, unless by "file it" you're saying it asks you if you want to save it. I know how to save a pic that's already here, then upload it. I'm trying to avoid saving and uploading
copies of pics that already exist here. Seems like it should be pretty easy. Am I missing something?
What I'm hoping to find out is if there is a way to do it without downloading the picture and then uploading a picture that already exists somewhere on this web site. With HTML, it's very easy to do
that because you just enter code that basically says "show this picture, which already exists in this specific place." I assume you can do that on these boards, too. But since the boards don't use
HTML, I have no idea how to do it.I already know where the picture files exist on this web site, which is why you can see the pictures when you follow my links a few posts back. I just don't know how
to tell the reply form to show the particular picture files in my reply. There's gotta be an easy way to do that.
Like I said their crust is totally original. Nothing else tastes and feels like it. You will have to try it for yourself and compare to your own.
pythonic, you gotta try his recipe and tell us/him where he needs to go!
shoot! I would do it but ive only had like 1 frozen one in the last 10yrs! from the looks of it, I think your pizzas look awesome!
I'm using a 10" x 2" seasoned aluminum pan. Even though it's not a true deep dish pan, it's not a bad pan. I have four tin-plated steel deep dish pans, but the three smallest ones (6", 9", and 12")
don't really work for stuffed pizza because they're only 1.5" deep. The 14" deep dish pan is 2" deep, but I haven't used it yet because I've had no need to make a 14" stuffed pizza so far. That would
make almost a 6 lb pizza! | {"url":"http://www.pizzamaking.com/forum/index.php?topic=25774.msg275242","timestamp":"2014-04-20T14:20:21Z","content_type":null,"content_length":"91645","record_id":"<urn:uuid:ae165733-e642-491b-b16b-8dc8ea75aeea>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00316-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ray Tracer
Monte-Carlo Ray Tracer
This is a Java ray tracer I wrote for a course on advanced rendering (COM S 517) taught by Kavita Bala, which I took during the first term of my master's degree. There was some basic code provided for us which loaded up the scene and saved the image. We implemented a bunch of stuff on top.
Here is a list of what we did.
BRDFs:
  • Lambertian reflector
  • Lambertian emitter
  • Ward (anisotropic)
  • Cook-Torrance
Acceleration structure:
  • KD-tree
Box-filter antialiasing:
  • Uniform multisampling
  • Stratified multisampling
  • N-Rooks multisampling
Direct illumination sampling:
  • Uniform sampling
  • (Emitter) Power sampling
  • Weighted balance sampling
Indirect illumination sampling:
  • Uniform sampling
  • Stratified sampling
  • Cosine weighted sampling
  • Stratified cosine weighted sampling
  • BRDF weighted sampling
Depth recursion stopping criterion:
  • Russian roulette
These were implemented directly from the original papers. Ward's paper contains a formula for generating BRDF weighted samples already, so we didn't have to sweat over that one. We did not implement
BRDF sampling for the Cook-Torrance BRDF.
Acceleration Structure
We used a KD-tree to improve the performance of the ray tracer. This is basically a binary tree where each cell is partitioned along the x-, y-, or z-axis depending on which split will provide better performance. The splitting heuristic I used weights the number of primitives in each child by the surface area of the child cell and divides by the surface area of the parent cell. This prevents boxes from becoming too small.
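The cost function described above can be sketched as follows (a simplified illustration of the surface-area-based heuristic, not the project's actual Java code; all names are invented here):

```python
# Sketch of the surface-area-based split cost described above.
# A box is given as (xmin, ymin, zmin, xmax, ymax, zmax).

def surface_area(box):
    dx, dy, dz = box[3] - box[0], box[4] - box[1], box[5] - box[2]
    return 2.0 * (dx * dy + dy * dz + dz * dx)

def split_cost(parent, left, right, n_left, n_right):
    """Weight each child's primitive count by the child's surface area,
    normalized by the parent's surface area; lower cost = better split."""
    return (surface_area(left) * n_left
            + surface_area(right) * n_right) / surface_area(parent)
```

A builder would evaluate this cost at candidate split planes on each axis and keep the cheapest one.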
Antialiasing

We implemented basic antialiasing by running a box filter over every pixel (simple averaging). However, we implemented various pixel sampling strategies. The N-Rooks sampling strategy breaks the pixel into a grid of N² boxes and takes N samples such that each sample is the only sample in its row and column.
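An N-rooks sampler of the kind described can be sketched like this (illustrative Python, not the project's Java code):

```python
import random

def n_rooks_samples(n, rng=random):
    """n jittered samples in the unit pixel, exactly one per row and
    one per column of an n-by-n grid (like n non-attacking rooks)."""
    cols = list(range(n))
    rng.shuffle(cols)                    # random column for each row
    return [((col + rng.random()) / n,   # x, jittered within its column
             (row + rng.random()) / n)   # y, jittered within its row
            for row, col in enumerate(cols)]
```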
Direct Illumination
To speed up the ray tracer, we compute direct and indirect illumination separately, both using Monte Carlo sampling. We sampled the lights uniformly or based on the total power emitted by them. We could also compute the direct lighting as part of the indirect lighting computation, but that slows down the rendering quite a bit; in fact, computing these separately is really a type of importance sampling. We also have a hybrid, "weighted-balance" scheme which alternately samples the lights or does importance sampling on the BRDF. This one is useful for sampling the specular direction of highly specular BRDFs.
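Power-based light selection amounts to sampling a discrete distribution; a sketch (invented names, not the original code):

```python
import random

def pick_light_by_power(lights, rng=random):
    """Choose a light with probability proportional to its power.
    `lights` is a list of (light, power) pairs; returns (light, pdf)
    so the estimator can divide the sampled contribution by the pdf."""
    total = sum(power for _, power in lights)
    u = rng.random() * total
    acc = 0.0
    for light, power in lights:
        acc += power
        if u < acc:
            return light, power / total
    return lights[-1][0], lights[-1][1] / total  # guard for rounding
```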
Indirect Illumination
We did quite a bit of work here, mixing stratified sampling with the cosine-based importance sampling strategies and a BRDF sampling strategy.
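For reference, cosine-weighted hemisphere sampling is commonly done by sampling a unit disk and projecting up onto the hemisphere (Malley's method); a sketch in Python, not the project's code:

```python
import math
import random

def cosine_weighted_direction(rng=random):
    """Direction on the hemisphere about +z with pdf cos(theta)/pi."""
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi),
            r * math.sin(phi),
            math.sqrt(max(0.0, 1.0 - u1)))  # z = cos(theta)
```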
Depth Recursion Stopping Criterion
Here we used Russian roulette, which is an unbiased stopping criterion that simply kills the ray or recurses with a (user) predefined probability.
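The unbiasedness comes from dividing the surviving path's contribution by the continuation probability. A toy illustration (estimating a geometric series that stands in for the light carried by deeper bounces; not the project's code):

```python
import random

def path_value(depth, rng, p_continue=0.5):
    """Russian-roulette estimate of sum over d >= depth of 0.5**d.
    Killing the path with probability 1 - p_continue and dividing the
    surviving recursion by p_continue keeps the estimator unbiased."""
    local = 0.5 ** depth            # stand-in for light at this bounce
    if rng.random() >= p_continue:
        return local                # path terminated here
    return local + path_value(depth + 1, rng, p_continue) / p_continue
```

Averaged over many runs this converges to the true sum, 2, even though every individual path terminates after finitely many bounces.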
Image-Based Rendering
To conclude the course, we did some image-based rendering. A GUI was provided to move the camera around, and we simply reprojected the ray tracing samples to the new camera location, interpolating to fill in the gaps.
Here are some of the images we produced along with sampling methods used to generate each one. Note the color bleeding when indirect illumination is enabled.
Settings for each image (the original page showed the images three per row; they are not reproduced here):

1. Rays/pixel: 1; antialiasing: none; direct illum.: 1 ray/point (center of emitter); indirect illum.: off; render time: 4.7s
2. Rays/pixel: 1; antialiasing: none; direct illum.: 1 ray/point (center of emitter); indirect illum.: off; render time: 3.4s
3. Rays/pixel: 1; antialiasing: none; direct illum.: 1 ray/point (uniform); indirect illum.: off; render time: 1.5s
4. Rays/pixel: 9; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: off; render time: 8.2s
5. Rays/pixel: 9; antialiasing: stratified; direct illum.: 1 ray/point (uniform); indirect illum.: off; render time: 8.3s
6. Rays/pixel: 1; antialiasing: none; direct illum.: 1 ray/point (uniform); indirect illum.: off; render time: 1.5s
7. Rays/pixel: 1; antialiasing: none; direct illum.: 1 ray/point (uniform); indirect illum.: off; render time: 1.4s
8. Rays/pixel: 1; antialiasing: none; direct illum.: 49 rays/point (uniform); indirect illum.: off; render time: 21.1s
9. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: off; render time: 42.2s
10. Rays/pixel: 1; antialiasing: none; direct illum.: 16 rays/point (uniform); indirect illum.: off; render time: 10.9s
11. Rays/pixel: 1; antialiasing: none; direct illum.: 16 rays/point (uniform); indirect illum.: off; render time: 11.0s
12. Rays/pixel: 1; antialiasing: none; direct illum.: 16 rays/point (power); indirect illum.: off; render time: 10.3s
13. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: 1 ray/point (uniform); render time: 85.2s
14. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: 1 ray/point (cosine); render time: 83.4s
15. Rays/pixel: 9; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: off; render time: 8.5s
16. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: 1 ray/point (uniform); render time: 84.1s
17. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: 1 ray/point (BRDF); render time: 85.0s
18. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: 1 ray/point (BRDF); render time: 87.9s
19. Rays/pixel: 200; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: 1 ray/point (BRDF); render time: 369.3s
20. Rays/pixel: 49; antialiasing: uniform; direct illum.: off; indirect illum.: 1 ray/point (uniform); render time: 45.8s
21. Rays/pixel: 49; antialiasing: uniform; direct illum.: off; indirect illum.: 1 ray/point (uniform); render time: 47.4s
22. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (uniform); indirect illum.: 1 ray/point (uniform); render time: 52.7s
23. Rays/pixel: 49; antialiasing: uniform; direct illum.: off; indirect illum.: 1 ray/point (BRDF); render time: 36.4s
24. Rays/pixel: 49; antialiasing: uniform; direct illum.: 1 ray/point (weighted balance); indirect illum.: 1 ray/point (BRDF); render time: 85.8s
25. Rays/pixel: 1; antialiasing: none; direct illum.: 100 rays/point (uniform); indirect illum.: off; render time: 75.9s
Queens domination
Using stacks, we have to make a program that takes in an NxN board, then returns the minimum number of queens needed to threaten every single square on said NxN board. I'm pretty sure he doesn't want us to use discrete mathematics, so I simply can't use permutation to find the solution. The only trouble I'm having with this is that I have no idea how to even start it. What would be a good algorithm to do this? I imagine that back-tracking would suffice, because we are not allowed to use recursion.
This is not a class assignment; the teacher threw it at us and it is purely optional. I think it would make a good base for future programs if I run into the same type of problem, or something similar, in the future. Something I can look back on and be like "Oh! That could work!"
Ranch Hand
Will Sobczak wrote: I imagine that back-tracking would suffice, because we are not allowed to use recursion.
Isn't back-tracking recursive? I don't know but you make me think.
Akhilesh Trivedi wrote:
Will Sobczak wrote: I imagine that back-tracking would suffice, because we are not allowed to use recursion.
Isn't back-tracking recursive? I don't know but you make me think.
See, I thought so too, but the teacher said it wasn't. I don't know exactly what he meant, but any method will do, and I'll try to spruce it up best I can. I just need to get started on
it. I have no idea how to work it.
I have the main, and the ArrayStack class, but now I just need the QueenCover class, and I have no idea how to do this!!!
Will Sobczak wrote: I have the main, and the ArrayStack class, but now I just need the QueenCover class, and I have no idea how to do this!!!
Then forget Java and concentrate on the problem.
My suggestion would be to turn off your computer and get out a chess set or, failing that, a pencil and paper, and do it yourself. That should give you solutions for everything up to n=8, and you may find a pattern.
You also know some things about the problem already:
1. Any board where n < 4 can be covered with one queen.
2. Any board where n >= 4 can be covered with M=n-1 queens, by putting a Queen on every square of a diagonal except one corner.
So the challenge is to find a solution that is better than M=n-1 (or to prove that it can't be done).
I've never tried the problem before, but I think I'd probably use a heuristic approach, perhaps:
1. For each square on an empty board, calculate the number of squares threatened by a queen placed in that position.
2. Place a queen in the square that threatens most other squares (or one of them, if there's more than one possibility).
3. Repeat steps 1 and 2, but don't include any squares that have already been threatened or are already occupied, and don't allow any square that is already occupied to be used again.
4. Keep going until the total number of squares threatened or occupied == n * n.
Just one way; and I suspect that the real problem will then be proving that the resulting solution is optimal.
Edit: In addition, any board n >= 4 where n is odd can be covered with M=n-2 queens, by putting a Queen on every square of its diagonal except either corner, since the centre square covers both long diagonals.
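The greedy heuristic sketched in the steps above can be written directly (Python for brevity, though the thread is about Java; it is iterative, so no recursion is needed, and being greedy it gives an upper bound on the true minimum, not a proof of optimality):

```python
def covers(q, s):
    """True if a queen on square q attacks or occupies square s."""
    (r1, c1), (r2, c2) = q, s
    return r1 == r2 or c1 == c2 or abs(r1 - r2) == abs(c1 - c2)

def greedy_queen_cover(n):
    """Repeatedly place a queen on the uncovered square that covers
    the most still-uncovered squares, until the board is covered."""
    uncovered = {(r, c) for r in range(n) for c in range(n)}
    queens = []
    while uncovered:
        best = max(uncovered,
                   key=lambda q: sum(covers(q, s) for s in uncovered))
        queens.append(best)
        uncovered = {s for s in uncovered if not covers(best, s)}
    return queens
```

Since every new queen must sit on an uncovered square, the placed queens are pairwise non-attacking, so this never uses more than n queens.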
hi mukesh,
Here's an outline of a way to prove this. See the diagram below.
There's no right angle to get tan A easily, so I used the sine and cosine rules:
Put these together to get tan A and simplify.
Work on this expression for tan A, making use of the following:
After much simplification you can get this equal to -2tanB, from which the required result follows.
It's a tough one so expect it to take 2 or 3 pages. If you get stuck post back where you've got to, and I'll compare your answer with mine. | {"url":"http://www.mathisfunforum.com/post.php?tid=19732&qid=277208","timestamp":"2014-04-20T06:13:22Z","content_type":null,"content_length":"20482","record_id":"<urn:uuid:4aec5193-8202-4a82-a4fd-e72732591a75>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
Holomorphic function on an unbounded domain
April 4th 2011, 03:30 PM #1
Junior Member
Apr 2011
I have a problem in understanding the following question.
Q. Show by means of an example that a non-constant holomorphic function on an unbounded domain need not achieve its maximum modulus on the boundary of that domain.
Answer: here we assume that the boundary is not empty.
The Maximum Modulus Principle says that
"|f(z)| can only achieve its maximum value on the boundary unless it is constant"
So I am wondering how I can connect the MMP with the above question.
Any help will be appreciated... Thanks
i need some help as soon as possible!
Suppose you take the domain to be the right-hand half-plane. The boundary is then the imaginary axis, and the function $e^z$ is bounded there.
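Spelling out why this example works (standard facts, not part of the original reply): writing $z = x + iy$,

```latex
|e^{z}| = |e^{x+iy}| = e^{x}\,|e^{iy}| = e^{x} = e^{\operatorname{Re} z}.
```

On the boundary $\operatorname{Re} z = 0$ this is identically $1$, while inside the half-plane $e^{x} \to \infty$ as $x \to +\infty$. So the modulus is certainly not maximized on the boundary; in fact there is no maximum at all, which is possible only because the domain is unbounded.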
Discrete Prob
May 3rd 2011, 10:06 AM #1
MHF Contributor
Mar 2010
This question was on my first probability test and I got the first part wrong. Therefore, could someone show me how to do it?
One half percent of the population has a particular disease. A test is developed for the disease. The test gives false positives 3% of the time and a false negative 2% of the time.
What is the probability that Joe tests positive?
I put 98% but that is wrong.
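For reference, the standard computation (not posted in the thread) uses the law of total probability. A 2% false-negative rate means P(positive | disease) = 0.98, and the 3% false-positive rate is P(positive | no disease) = 0.03:

```python
p_disease = 0.005            # one half percent of the population
p_pos_given_disease = 0.98   # 1 - 0.02 false-negative rate
p_pos_given_healthy = 0.03   # false-positive rate

# P(+) = P(D) P(+|D) + P(not D) P(+|not D)
p_positive = (p_disease * p_pos_given_disease
              + (1 - p_disease) * p_pos_given_healthy)
print(round(p_positive, 5))  # 0.03475
```

The naive answer 98% is only P(+|D), the chance a diseased person tests positive, not the overall chance that Joe tests positive.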
In some cases, more than one variable is used in a single expression. For example, the expression ABC̄D is spoken "A AND B AND NOT C AND D."

POSITIVE AND NEGATIVE LOGIC

To this point, we have been dealing with one type of LOGIC POLARITY, positive. Let's further define logic polarity and expand to cover in more detail the differences between positive and negative logic. Logic polarity is the type of voltage used to represent the logic 1 state of a statement. We have determined that the two logic states can be represented by electrical signals. Any two distinct voltages may be used. For instance, a positive voltage can represent the 1 state, and a negative voltage can represent the 0 state. The opposite is also true.

Logic circuits are generally divided into two broad classes according to their polarity: positive logic and negative logic. The voltage levels used and a statement indicating the use of positive or negative logic will usually be specified on logic diagrams supplied by manufacturers.

In practice, many variations of logic polarity are used; for example, from a high-positive to a low-positive voltage, or from positive to ground; or from a high-negative to a low-negative voltage, or from negative to ground. A brief discussion of the two general classes of logic polarity is presented in the following paragraphs.

Positive Logic

Positive logic is defined as follows: if the signal that activates the circuit (the 1 state) has a voltage level that is more POSITIVE than the 0 state, then the logic polarity is considered to be POSITIVE. Table 2-2 shows the manner in which positive logic may be used.

Table 2-2. Examples of Positive Logic

As you can see, in positive logic the 1 state is at a more positive voltage level than the 0 state.

Negative Logic

As you might suspect, negative logic is the opposite of positive logic and is defined as follows: if the signal that activates the circuit (the 1 state) has a voltage level that is more NEGATIVE than the 0 state, then the logic polarity is considered to be NEGATIVE. Table 2-3 shows the manner in which negative logic may be used.
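The two definitions can be captured in a few lines (an illustrative sketch, not from the training manual):

```python
def logic_one_level(v_a, v_b, polarity):
    """Given the two voltage levels a circuit uses, return the level
    that represents logic 1 under the stated polarity convention."""
    hi, lo = max(v_a, v_b), min(v_a, v_b)
    return hi if polarity == "positive" else lo
```

Note that the levels need not straddle zero: positive logic between -10 V and -2 V still assigns the 1 state to the more positive level, -2 V.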
NASA - Space Math I Educator Guide
Space Math I Educator Guide
The information in this document was accurate as of the original publication date.
These activities comprise a series of 20 practical mathematics applications in space science. This collection of activities is based on a weekly series of space science problems distributed to
teachers during the 2004-2005 school year. The problems in this booklet investigate space weather phenomena and math applications such as solar flares, satellite orbit decay, magnetism, the
Pythagorean Theorem, order of operations and probability. The problems are authentic glimpses of modern engineering issues that arise in designing satellites to work in space. Each word problem has
background information providing insight into the basic phenomena of the sun-Earth system, specifically space weather. The one-page assignments are accompanied by teacher pages with answer keys.
Note: This collection was formerly published as the Extra-Credit Problems in the Space Science Educator Guide.
Space Math I
[3MB PDF file]
Individual sections: Introductory Pages Problem 1, Aurora Timeline Problem 2, Aurora Drawing Problem 3, Radiation Effects Problem 4, Solar Flares and CMEs Problem 5, Do big sunspots make big solar
flares? Problem 6, Solar Storms and Satellite Orbit Decay Problem 7, Solar Electricity Problem 8, Solar Power Decay Problem 9, Space Weather Crossword Problem 10, Bird's-eye Look at the Sun-Earth
System Problem 11, The Height of an Aurora Problem 12, Earth's Wandering Magnetic Pole Problem 13, The Plasmasphere Problem 14, Magnetic Storms Problem 15, The Coronal Mass Ejection Problem 16,
Plasma Clouds Problem 17, Applications of Pythagorean Theorem to Magnetism Problem 18, Magnetic Forces and Particle Motion Problem 19, The Solar Wind and the Bow Shock Problem 20, Kinetic Energy and
Voltage More booklets in this series: Space Math II Space Math III Space Math IV Space Math V Space Math VI Space Math VII Adventures in Space Science Mathematics Algebra 2 With Space Science
Applications Astrobiology Math Black Hole Math Earth Math Electromagnetic Math Exploring Planetary Moons Exploring Stars in the Milky Way Exploring the Lunar Surface Exploring the Milky Way Image
Scale Math Lunar Math Magnetic Math Mars Math Radiation Math Remote Sensing Math Solar Math Space Weather Math Transit Math | {"url":"http://www.nasa.gov/audience/foreducators/topnav/materials/listbytype/Space_Math_I.html","timestamp":"2014-04-19T04:27:35Z","content_type":null,"content_length":"25383","record_id":"<urn:uuid:828aab69-abd1-4432-b33a-b2985bf421fe>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00159-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
find the derivative of y with respect to x:
y, July 16
An augmented cube, the neglected polyhedron: the rhombic dodecahedron (r.d.)
Derivation of the r.d. from the cube
Why are pairs of faces coplanar? I.e., why do the triangles actually form rhombi?
Determine dimensions (lengths of edges, face diagonals, "space diagonals") of the cube and then of the r.d.
Determine volume of the r.d. and relate to the cube
What polygons are the intersections of the r.d. and planes?
Which Platonic solids can be inscribed in the r.d.?
What is the dual of the r.d.?
(Advanced) One can fill space with r.d.'s. Then the intersection with a plane will give a tiling of the plane, à la Philip's course. What tilings arise this way?
An Off-Line Signature Verification
Digital Signal and Image Processing Laboratory, Institute for Informatics and Automation Problems, Yerevan, Armenia
An off-line signature verification system attempts to authenticate the identity of an individual by examining his/her handwritten signature after it has been successfully extracted from, for example, a cheque, a debit or credit card transaction slip, or any other legal document. In this paper a novel off-line signature verification system is presented which selects 120 feature points from the geometric center of the signature and compares them with the already trained feature points.
Keywords: offline signature, geometric center, feature point, FAR (false acceptance rate)
Journal of Computer Sciences and Applications, 2013 1 (2), pp 23-26.
DOI: 10.12691/jcsa-1-2-2
Received January 01, 2012; Revised April 20, 2013; Accepted April 23, 2013
© 2013 Science and Education Publishing. All Rights Reserved.
Cite this article:
• Khachaturyan, Vahe. "An Off-Line Signature Verification." Journal of Computer Sciences and Applications 1.2 (2013): 23-26.
• Khachaturyan, V. (2013). An Off-Line Signature Verification. Journal of Computer Sciences and Applications, 1(2), 23-26.
• Khachaturyan, Vahe. "An Off-Line Signature Verification." Journal of Computer Sciences and Applications 1, no. 2 (2013): 23-26.
1. Introduction
Signature verification is an important research area in the field of personal authentication. The recognition of human handwriting matters for improving the interface between human beings and computers (e.g. [1-8]). If the computer is intelligent enough to understand human handwriting, it can provide a more attractive and economical man-computer interface. In this area the signature is a special case that provides a secure means for authentication, attestation, and authorization in many high-security environments. The objective of a signature verification system is to discriminate between two classes, the original and the forgery, which are related to intra- and inter-personal variability (e.g. [1]). The variation among signatures of the same person is called intra-personal variation. The variation between originals and forgeries is called inter-personal variation.
Signature verification differs from character recognition, because a signature is often unreadable; it is rather an image with particular curves that represent the writing style of the person. A signature is a special case of handwriting and is often just a symbol. So it is wise, and necessary, to treat a signature as a complete image with a particular distribution of pixels representing a particular writing style, and not as a collection of letters and words (e.g. [7]).
A signature verification system, and the techniques used to solve this problem, can be divided into two classes: online and off-line (e.g. [9]). In an online system, signature data can be obtained from an electronic tablet, and in this case dynamic information about the writing activity, such as the speed of writing, the pressure applied and the number of strokes, is available (e.g. [4, 5, 6]). In off-line systems, signatures written on paper, as has been done traditionally, are converted to electronic form with the help of a camera or a scanner, and obviously the dynamic information is not available. In general, the dynamic information represents the main writing style of a person. Since the volume of information available is smaller, signature verification using off-line techniques is relatively more difficult (e.g. [2, 3]).
Our work is concerned with techniques of off-line signature verification. The static information derived in an off-line signature verification system may be global, structural, geometric or statistical. We deal with off-line signature verification based on the geometric center, which is useful in separating skilled forgeries from originals. The algorithms used have given improved results compared to previously proposed algorithms based on the geometric center.
2. Feature Extraction
The geometric features are based on two sets of points in the 2-dimensional plane (e.g. [7]). Vertical splitting of the image yields sixty feature points (v1, v2, v3, …, v60) and horizontal splitting yields sixty feature points (h1, h2, h3, …, h60). These feature points are obtained relative to a central geometric point of the image. The centered image is scanned from left to right and the total number of black pixels is calculated, and then again from top to bottom. The image is then divided into two halves, with respect to the number of black pixels, by a vertical and a horizontal line, which intersect at a point called the geometric center. With reference to this point we extract 120 feature points: 60 vertical and 60 horizontal feature points for each signature image. We settled on 120 feature points after many tests: if this number is increased for the current image size, the sub-images become too small to split further. In this case 120 is the optimal number for our algorithm.
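As an illustration, the geometric-center computation described above can be sketched in a few lines. This is a minimal sketch, not the paper's code: the function name, the NumPy usage, and the tie-breaking at the half-count are our assumptions.

```python
import numpy as np

def geometric_center(img):
    """Return (row, col) of the point that splits the black pixels of a
    binary image into two equal halves, both row-wise and column-wise.
    img: 2-D array with 1 = black (ink) pixel, 0 = white background."""
    total = img.sum()
    if total == 0:
        return img.shape[0] // 2, img.shape[1] // 2
    # Cumulative ink counts down the rows and across the columns.
    row_cum = np.cumsum(img.sum(axis=1))
    col_cum = np.cumsum(img.sum(axis=0))
    # First index where the cumulative count reaches half the total ink.
    r = int(np.searchsorted(row_cum, total / 2))
    c = int(np.searchsorted(col_cum, total / 2))
    return r, c
```

The returned (row, column) pair is the intersection of the vertical and horizontal splitting lines used in Sections 2.3 and 2.4.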
2.1. Processing of the Signature
The geometric features proposed here are based on two sets of points in the two-dimensional plane. Each set has sixty feature points, which represent the stroke distribution of the signature pixels in the image. These sixty feature points are calculated with respect to the geometric center; vertical splitting and horizontal splitting are the two main steps used to retrieve them. Before finding the feature points we have to make some adjustments to the signature image (e.g. [1]). The processing of the signature is discussed below.
2.2. Moving Signature into the Center of Image
The signature is moved to the center by placing the signature image in a fixed, calculated frame and removing the unnecessary white spaces without affecting the signature, so that the image lies in the middle of the frame. For this we first divide the whole frame of the signature into 10×10 square blocks, row-wise and column-wise, and find the variance of each block (the signature is considered to be binary, consisting of only black and white pixels). If a square block has zero variance we remove that square; otherwise we retain it. Thus the squares of unnecessary white space are removed, and the image is then restored in the fixed frame as shown in Figure 1.
2.3. Feature Points Based on Vertical Splitting
Sixty feature points are obtained based on vertical splitting w.r.t. the central feature point. The procedure for finding vertical feature points is given below:
Input: Static signature image after moving it to the center of the fixed sized frame.
Output: Vertical feature points: v1,v2,…,v59,v60.
The steps are:
1) Split the image with a vertical line passing through the geometric center (v0) which divides the image into two halves: Left part and Right part.
2) Find geometric centers v1 and v2 for left and right parts correspondingly.
3) Split the left and right parts with horizontal lines through v1 and v2 to divide the two parts into four parts: Top-left, Bottom-left and Top-right, Bottom-right, from which we obtain v3, v4 and v5, v6.
4) We again split each part of the image through their geometric centers to obtain feature points v7,v8,v9,…, v13,v14.
5) Then we split each of the parts once again to obtain all the sixty vertical feature points (as shown in Figure 2).
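The five steps above amount to a recursive, axis-alternating split through geometric centers. The following is a hedged sketch: the exact indexing of the 60 points and the handling of empty sub-images follow the paper's Figure 2, which is not reproduced here, so those details are our guesses.

```python
import numpy as np

def geometric_center(img):
    """Point splitting the ink of a binary image into equal halves."""
    total = img.sum()
    if total == 0:
        return img.shape[0] // 2, img.shape[1] // 2
    r = int(np.searchsorted(np.cumsum(img.sum(axis=1)), total / 2))
    c = int(np.searchsorted(np.cumsum(img.sum(axis=0)), total / 2))
    return r, c

def split_features(img, depth, axis=1, origin=(0, 0)):
    """Recursively split img through its geometric center, alternating
    between vertical (axis=1) and horizontal (axis=0) cuts, collecting
    the centers of all sub-images as feature points (absolute coords)."""
    if depth == 0:
        return []
    r, c = geometric_center(img)
    points = [(origin[0] + r, origin[1] + c)]
    if axis == 1:   # vertical cut through column c: left / right halves
        parts = [(img[:, :c], origin),
                 (img[:, c:], (origin[0], origin[1] + c))]
    else:           # horizontal cut through row r: top / bottom halves
        parts = [(img[:r, :], origin),
                 (img[r:, :], (origin[0] + r, origin[1]))]
    for sub, o in parts:
        if sub.size:
            points += split_features(sub, depth - 1, 1 - axis, o)
    return points
```

With `axis=1` this reproduces the vertical-splitting order of steps 1-5; starting with `axis=0` gives the horizontal variant of the next subsection. The total point count depends on the splitting depth and on whether the root center is counted.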
2.4. Feature Points Based on Horizontal Splitting
Sixty feature points are obtained based on horizontal splitting w.r.t. the central feature point. The procedure for finding horizontal feature points is given below:
Input: Static signature image after moving it to the center of the fixed sized frame.
Output: Horizontal feature points: h1,h2,…,h59,h60.
The steps are:
1) Split the image with a horizontal line passing through the geometric center (h0) which divides the image into two halves: Top part and Bottom part.
2) Find geometric centers h1 and h2 for top and bottom parts correspondingly.
3) Split the top and bottom parts with vertical lines through h1 and h2 to divide the two parts into four parts: Left-top, Right-top and Left-bottom, Right-bottom, from which we obtain h3, h4 and h5, h6.
4) We again split each part of the image through their geometric centers to obtain feature points h7,h8,h9,..., h13,h14.
5) Then we split each of the parts once again to obtain all sixty horizontal feature points.
3. Classification
In this paper the features are based on geometric properties, so we use the Euclidean distance model for classification. This is the simple distance between a pair of vectors of size n; here the vectors are feature points, so the size of each vector is 2. How to calculate the distance using the Euclidean distance model is described in the following section. In the threshold calculation these distances are used.
A. Euclidean distance model
Let A(a1, a2, …, an) and B(b1, b2, …, bn) be two vectors of size n. We can calculate the distance d by using equation 1:

$d = \sqrt{\sum_{i=1}^{n} (a_i - b_i)^2} \quad (1)$

In our application, the vectors are feature points on the plane, so d is simply the distance between two points.
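For completeness, equation 1 in code, assuming the standard Euclidean distance that the text describes:

```python
import math

def euclidean(a, b):
    """Equation 1: Euclidean distance between two equal-length vectors.
    For this paper's feature points, a and b are 2-D coordinates."""
    return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
```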
4. Threshold
We calculate individual thresholds for vertical splitting and horizontal splitting. Here we propose one method for threshold selection. Figure 3 shows the variations in a single corresponding feature point across the training signatures. Let n be the number of training signatures and x1, x2, …, xn the corresponding single feature points of the training signatures (taking one corresponding feature point from each signature). Xmedian is the median of these n features from the n signatures.
Let d1, d2, …, dn be the distances defined here, where di is the Euclidean distance between xi and Xmedian.
The two main parameters used in the threshold calculation are davg and s. Equations 3 and 4 show the calculation of these two parameters:

$d_{avg} = \frac{1}{n}\sum_{i=1}^{n} d_i \quad (3)$

$s = \sqrt{\frac{1}{n}\sum_{i=1}^{n} (d_i - d_{avg})^2} \quad (4)$

In this way an average distance (davg) and a standard deviation (s) are obtained for each of the sixty feature points of both the vertical and the horizontal splitting. Equation 5 gives the main formula for the threshold calculation.
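A sketch of the two parameters davg and s for one feature point. Two details are our assumptions rather than the paper's: the median of 2-D points is taken coordinate-wise, and the standard deviation uses the population form.

```python
import math
from statistics import median

def point_distances(xs):
    """d_i for one feature point observed across n training signatures:
    distance of each observed point from the (coordinate-wise) median."""
    mx = median(p[0] for p in xs)
    my = median(p[1] for p in xs)
    return [math.hypot(p[0] - mx, p[1] - my) for p in xs]

def avg_and_std(ds):
    """Equations 3 and 4: average distance d_avg and (population)
    standard deviation s of the distances d_1, ..., d_n."""
    d_avg = sum(ds) / len(ds)
    s = math.sqrt(sum((d - d_avg) ** 2 for d in ds) / len(ds))
    return d_avg, s
```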
5. Experiments and Results
For the experiment we took 27 original signatures from each person and selected 9 for training. These original signatures were collected on different days. Forgeries were produced by three persons, 9 from each. In total, 18 originals and 27 forgeries are tested for each person's signature. There are two thresholds (one based on vertical splitting and another based on horizontal splitting) for each person's signature.
A. Training
Let n signatures be taken for training from each person. There are 120 feature points for each original signature: 60 are taken by vertical splitting (Section 2.3) and 60 by horizontal splitting (Section 2.4). Individual thresholds and patterns are calculated for vertical splitting and for horizontal splitting. The pattern points based on vertical splitting are shown below, where vi;1, vi;2, …, vi;60 are the vertical-splitting features of the i-th training signature sample; the threshold based on vertical splitting follows. In equation 9, vdavg,i is the same as the average distance defined in Section 4.
Similarly, hi;1, hi;2, …, hi;60 are the horizontal-splitting features of the i-th training signature sample, and the threshold based on horizontal splitting is defined in the same way.
We store the pattern points and thresholds of both the horizontal and the vertical splitting; these values are used in testing.
B. Testing
When a new signature comes in for testing, we calculate its vertical-splitting and horizontal-splitting features. The feature points based on vertical splitting are vnew;1, vnew;2, …, vnew;60. The distances between the new signature's features and the pattern feature points based on vertical splitting are shown below.
For classification of the new signature we calculate vdistance and compare it with vthreshold. If vdistance is less than or equal to vthreshold, the new signature is acceptable by vertical splitting.
The feature points based on horizontal splitting are hnew;1, hnew;2, hnew;3, …, hnew;60. The distances between the new signature's features and the pattern feature points based on horizontal splitting are shown below.
For classification of the new signature we calculate hdistance and compare it with hthreshold. If hdistance is less than or equal to hthreshold, the new signature is acceptable by horizontal splitting.
The features of the new signature have to satisfy both the vertical-splitting and the horizontal-splitting thresholds.
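The decision rule can be sketched as follows. How the sixty per-point distances are aggregated into vdistance and hdistance is not specified above, so summing them here is an assumption on our part.

```python
import math

def total_distance(new_pts, pattern_pts):
    """Aggregate distance between a test signature's feature points and
    the stored pattern points (sum of per-point Euclidean distances --
    an assumed aggregation, not necessarily the paper's)."""
    return sum(math.dist(p, q) for p, q in zip(new_pts, pattern_pts))

def accept(new_v, new_h, pat_v, pat_h, v_threshold, h_threshold):
    """A test signature must pass BOTH the vertical-splitting and the
    horizontal-splitting checks to be accepted."""
    return (total_distance(new_v, pat_v) <= v_threshold
            and total_distance(new_h, pat_h) <= h_threshold)
```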
C. Results
The False Acceptance Rate (FAR) and the False Rejection Rate (FRR) are the two parameters used for measuring the performance of any signature verification method: FAR is the percentage of forgeries that are accepted (equation 14), and FRR is the percentage of genuine signatures that are rejected (equation 15).
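In code, using the standard definitions of these rates (percentage of forgeries accepted and percentage of originals rejected):

```python
def far(num_forgeries_accepted, num_forgeries_tested):
    """False Acceptance Rate, in percent (standard definition)."""
    return 100.0 * num_forgeries_accepted / num_forgeries_tested

def frr(num_originals_rejected, num_originals_tested):
    """False Rejection Rate, in percent (standard definition)."""
    return 100.0 * num_originals_rejected / num_originals_tested
```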
6. Conclusion
The algorithm, based on 120 feature points, is more efficient and gives more accurate results than the existing techniques, and it holds up against skilled forgeries. We compared our algorithms with other techniques based on feature extraction and with techniques based on polar and Cartesian coordinates. Since our algorithm takes 120 feature points into the threshold calculations, a small variation in a signature results in a large change in the threshold distances from the geometric center. Therefore in our algorithm the FRR and FAR values are decreased.
[1] Marc J.J. Brault and R. Plamondon, "Segmenting Handwritten Signatures at Their Perceptually Important Points", IEEE Trans. Pattern Analysis and Machine Intelligence, 15(9): 953-957, Sept. 1993.
[2] J. Edson, R. Justino, F. Bortolozzi and R. Sabourin, "A comparison of SVM and HMM classifiers in the off-line signature verification", Pattern Recognition Letters, 26: 1377-1385, 2005.
[3] J. Edson, R. Justino, A. El Yacoubi, F. Bortolozzi and R. Sabourin, "An off-line Signature Verification System Using HMM and Graphometric Features", DAS 2000, pp. 211-222, Dec. 2000.
[4] B. Fang, C.H. Leung, Y.Y. Tang, K.W. Tse, P.C.K. Kwok and Y.K. Wong, "Off-line signature verification by the tracking of feature and stroke positions", Pattern Recognition, 36(1): 91-101, 2003.
[5] Miguel A. Ferrer, Jesus B. Alonso and Carlos M. Travieso, "Off-line Geometric Parameters for Automatic Signature Verification Using Fixed-Point Arithmetic", IEEE Trans. Pattern Analysis and Machine Intelligence, 27(6), June 2005.
[6] R. Plamondon and S.N. Srihari, "Online and Offline Handwriting Recognition: A Comprehensive Survey", IEEE Trans. Pattern Analysis and Machine Intelligence, 22(1): 63-84, Jan. 2000.
[7] J. Edson, R. Justino, F. Bortolozzi and R. Sabourin, "An off-line signature verification using HMM for Random, Simple and Skilled Forgeries", Sixth International Conference on Document Analysis and Recognition, pp. 1031-1034, Sept. 2001.
[8] J. Edson, R. Justino, F. Bortolozzi and R. Sabourin, "The Interpersonal and Intrapersonal Variability Influences on Off-line Signature Verification Using HMM", Proc. XV Brazilian Symp. Computer Graphics and Image Processing, pp. 197-202, Oct. 2002.
[9] A. Zimmer and L.L. Ling, "A Hybrid On/Off Line Handwritten Signature Verification System", Seventh International Conference on Document Analysis and Recognition, 1: 424-428, Aug. 2003.
Knowledge of the Reasoned Fact
Posted by David Corfield
In a comment I raised the question of what to make of our expectation that behind different manifestations of an entity there is one base account, of which these manifestations are consequences.
If I point out to you three manifestations of the normal distribution - central limit theorem; maximum entropy distribution with fixed first two moments; approached by distribution which is the
projection onto 1 dimension of a uniform distribution over the $n$-sphere of radius $\sqrt{n}$ as $n$ increases - it’s hard not to imagine that there’s a unified story behind the scenes.
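The third manifestation is easy to check numerically: sampling uniformly from the sphere of radius $\sqrt{n}$ and projecting onto one coordinate gives something very close to a standard normal. A quick illustrative simulation, not part of the argument:

```python
import numpy as np

rng = np.random.default_rng(0)
n, samples = 200, 20_000

# Uniform points on the (n-1)-sphere of radius sqrt(n): draw Gaussian
# vectors and rescale each one to that radius.
x = rng.standard_normal((samples, n))
x *= np.sqrt(n) / np.linalg.norm(x, axis=1, keepdims=True)

# Project onto one dimension: take the first coordinate.
coord = x[:, 0]
print(coord.mean(), coord.var())  # both close to 0 and 1 respectively
```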
Perhaps it would encourage a discussion if I put things in a contextual setting.
On page 51 of Mathematical Kinds, or Being Kind to Mathematics, I mention the idea of Aronson, Harré and Way (Realism Rescued: How Scientific Progress is Possible, Duckworth, 1994), that one of the
crucial functions of sciences is to organise entities into a hierarchy of kinds. One important idea here, found in other writings of Rom Harré, is that the use of first-order logic by logical
positivists and then logical empiricists to analyse scientific reasoning has been a disaster. What these authors realised was that something important had been lost from earlier Aristotelian
In the Posterior Analytics, Aristotle claims that are four kinds of thing we want to find out:
• Whether a thing has a property
• Why a thing has a property
• Whether something exists
• What kind of thing it is
As you can see, these four types come in two pairs, the second of the pair asking a deeper question than the first. Indeed, the second and fourth questions ask about the cause of something and its
properties, cause being taken in Aristotle’s broad way. In fact, this is broad enough that he has no qualm in discussing examples from astronomy (‘Why do planets not twinkle? Because they are
near.’), mathematics (‘Why is the angle in a semicircle a right-angle?’) and everyday life (‘Why does one take a walk after supper? For the sake of one’s health. Why does a house exist? For the
preservation of one’s goods.’). This talk of cause in mathematics continued for many centuries. For example, as we learn from Mancosu’s book, Philosophy of Mathematics and Mathematical Practice in
the Seventeenth Century (OUP, 1996), mathematicians considered that giving a proof of a result by a reductio argument was not to give its ‘cause’.
The salient distinction here is that of Aristotle between ‘knowledge of the fact’ and ‘knowledge of the reasoned fact’. Aristotle gives this example to illustrate the difference:
Planets are near celestial bodies.
Near celestial bodies don’t twinkle.
Therefore, planets don’t twinkle.
Planets don’t twinkle.
Celestial bodies which don’t twinkle are near.
Therefore, planets are near.
To give an easy example of Aristotle’s distinction in mathematics:
$n$ is an even number.
Even numbers expressed in base 10 end in 0,2,4,6 or 8.
$n$ ends in 0,2,4,6 or 8.
$n$ expressed in base 10 ends in 0,2,4,6 or 8.
Numbers expressed in base 10 which end in 0,2,4,6 or 8 are even.
$n$ is even.
It’s not that the second syllogism is wrong, but rather that it hasn’t got the ‘causal’ ordering correct. Explanation is about tapping into the proper hierarchical organisation of entities. Your
ability to do this is what needs to be tested, as here in this account of an MIT mathematics oral examination (in particular, the Symplectic Topology section).
The idea that mathematics has a causal/conceptual ordering seems to be yet more radically lost to us than the counterpart idea in philosophy of science. It’s at stake in the example I give in my
paper Dynamics of Mathematical Reason (pp. 11-13), where singular cohomology goes from its being a cohomology theory which happens to satisfy the excision and other axioms, to its being a cohomology
theory because it satisfies the excision and other axioms.
Now, should we expect convergence to a single ordering?
Posted at October 23, 2006 1:25 PM UTC
Re: Knowledge of the Reasoned Fact
[M]athematicians [in the 17th century] considered that giving a proof of a result by a reductio argument was not to give its ‘cause’.
So only constructive proofs give causes?
This makes sense if we recall the computational content of constructive proofs; they can be turned systematically into computer programs. For example, a constructive proof of the infinitude of primes
automatically (at least if you write it out formally in type theory) gives an algorithm that, given a finite list of natural numbers (including being given its length), calculates a prime number that
is not on the list. More interestingly, a constructive proof of the uncountability of the continuum, applied to a (computable) enumeration of the algebraic numbers, automatically computes a
(computable) transcendental number. So the “reason” that transcendental numbers exist is that we have a way to construct them.
However, I don’t believe that this can be all that there is to it. Consider a reductio proof of Lagrange’s Theorem that every natural number may be decomposed as the sum of four squares. This proof
can easily be made constructive by checking that there are only finitely many possibilities (given any specific number) for such a decomposition (since the summands can never be larger than the
desired sum). However, the algorithm that results from this step is terribly inefficient; it simply searches through all possible summands until a solution is found! Surely the “cause” (whatever that
means) of this theorem must be something more than that, if you search through the possibilities, you eventually succeed. You still want to ask why you will succeed, and the reductio, even
constructivised, gives no help.
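The "search through all the possibilities" algorithm described above is easy to write down, which makes its inefficiency vivid (a sketch; the names are ours):

```python
from itertools import product

def four_squares(n):
    """The inefficient search extracted from the constructivised proof:
    try every 4-tuple of summands up to sqrt(n) until one works.
    Lagrange's theorem guarantees the search always succeeds."""
    bound = int(n ** 0.5) + 1
    for a, b, c, d in product(range(bound), repeat=4):
        if a * a + b * b + c * c + d * d == n:
            return a, b, c, d
```

It terminates and is correct, but it gives no insight into *why* a decomposition always exists.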
Much more satisfying is a proof (the usual one is already constructive, not relying on reductio) using unique factorisation in the quaternions. (How close is this to Lagrange’s original proof? I
don’t know.) I’m not sure that this really deserves to be called “cause”, but it strikes me as much more of an “explanation” at least!
(Personally, I don’t believe in causation. I’ve never been convinced by any philosopher, from Aristoteles to David Lewis, that the subject belongs in metaphysics (as opposed to epistemology) at all.
I believe in a universe described by relativity theory with a low-entropy big bang, but the consequences of this fall far short of most people’s understanding of cause and effect; it certainly
doesn’t apply to the theorem above!)
Posted by: Toby Bartels on October 23, 2006 7:58 PM
Re: Knowledge of the Reasoned Fact
I’ve never been convinced by any philosopher, from Aristoteles to David Lewis, that the subject [causality] belongs in metaphysics (as opposed to epistemology) at all.
Sounds like your view is close to my friend Jon Williamson, who holds a view he terms ‘epistemic causality’ (see some papers here): “Causality, then, is a feature of our epistemic representation of
the world, rather than of the world itself.”
Remember that what is translated as ‘cause’ in Aristotle had a different range of connotations from today’s usage. This must be so for him to have described physical and mathematical ‘causation’ as
though they were cases of the same thing.
Posted by: David Corfield on October 23, 2006 8:35 PM
Re: Knowledge of the Reasoned Fact
I find the idea of epistemic causality convincing, because causality seems to me like a kind of convenient shorthand, or maybe a pedagogical device. When we talk about physical causality, like “The
glass fell off the table because I pushed it”, we’re already bringing in plenty of epistemic constructs. Adding the judgement that one condition, described in terms of those constructs, is the cause
of a later one, seems again to be an epistemic act.
A lower-level (but still theory-laden) description of the same scenario might say something of the form “the reason the configuration of matter-fields in region X of spacetime is such-and-such… is
that the configuration on a Cauchy surface in the past light-cone of X was so-and-so”. If the glass takes about a second to fall, that Cauchy surface in the past will be huge, certainly including the
whole of the Earth - but saying that everything in the whole world is the cause of any act taking more than a second is not good pedagogy, or helpful for normal thinking. Moreover (assuming physical
determinism - more theory), it would be an equally good explanation to look at the future light-cone, so this isn’t much like a causal description.
Going from this low-level physics description to the much clearer causal story about the glass being pushed requires “registering” certain parts of the field as objects (I’m getting this language
from Brian Cantwell Smith), and then identifying which features of the past history are the salient ones in describing what those objects do. And both registration and saliency strike me as having a
very epistemic flavour.
It seems like every causal story similarly involves throwing out a lot of distracting information and focusing on salient facts, even though one can almost always imagine some (low-probability)
scenario where the salient “causes” are the same, but the effects are different (e.g. a bird decides to fly through the room, and just happens to knock the glass back up onto the table), so any
causal story would only be a rough sketch, with the implied proviso, “and everything else can be ignored”.
For mathematical objects, where we don’t have time dependence, I guess I’d have to take the same position. A number n being even is connected to all sorts of facts about n, but only some of them feel
clear as “causes” of that fact. Just like before, that feels to me like a decision, rather than some kind of metaphysical given.
To make this analogy between a concrete occurrence and a logical deduction, I suppose I’m taking it for granted that concepts like “number” and “even” are epistemic structures we build to organize
real-world experiences, just like “glass” and “table” (and “matter-field” and “Cauchy surface” for that matter). So then saying “n is even” is really making a blanket statement about a bunch of,
still concrete, features of the world. Lumping those together as similar through a process of abstraction seems again to be an epistemic act. I’m open to the idea that numbers and evenness (or indeed
tables) are actually Platonic forms, or some such idealist position - but I’ve never been persuaded of it yet.
Posted by: Jeff Morton on October 24, 2006 3:22 AM
Re: Knowledge of the Reasoned Fact
A number n being even is connected to all sorts of facts about n, but only some of them feel clear as “causes” of that fact.
I think this puts into words some of my uneasiness about that section of the original post.
I think it gets a bit easier from the structuralist viewpoint, where the natural number system is just the structure encoded by the Peano axioms. Divisibility properties (like evenness) are an extra
layer of structure on top of these axioms: we can use any model of the natural number structure to construct a model of the ring structure. Decimal expansions are a further – and a more arbitrary –
layer of structure. Causality seems to flow from divisibility properties to divisibility tests because the former structure is more primitive than the latter.
Of course, this assumes that my viewpoint on which structure is more primitive is an objective one.
Posted by: John Armstrong on October 24, 2006 4:20 AM
Re: Knowledge of the Reasoned Fact
Dear David,
It appears to me that for some aspects of the issues you raise it would make sense to consider
a) Very basic mathematical facts that we learn in school or early college
b) Naive understanding (by children/students) of mathematical phenomena, and of causality and reasons for them.
(Perhaps somewhat like Chomskian linguistics.)
e.g. (school stuff, ask children, compare to what professionals say) what is the reason for
1. 5+8 = 8+5
2. 6*5 =5*6
(Is 1. part of the reason for 2.?? is there a common reason??)
3. 42 - 19 = 23.
High school:
4. why a^b is not equal b^a ?
5. The sum of angles in a triangle 180 degrees.
6. square root of 2 is not a rational number
7. A continuous function that takes a positive value at a and a negative at b takes 0 in between.
And a question that bothered me as a student and I never heard any satisfying answer to since:
8. Why is it that a real function that has a derivative at every point may fail to have a second derivative, but a complex function, once it has a derivative at every point, has second (and third, etc.) derivatives?
Anyway, this looks like a good place to compare insights of laymen and “professionals” and to consider concretely some examples like suggested above.
Posted by: Gina on October 24, 2006 4:19 AM
Re: Knowledge of the Reasoned Fact
That’s a great idea, but for the reality of the situation. I don’t know the XML tag for pessimism, so I’ll just take it as understood.
1. 5+8 = 8+5
2. 6*5 =5*6
(Is 1. part of the reason for 2.?? is there a common reason??)
In much of the country, the “why” is “because that’s what the times tables say.” It’s all arithmysticism.
3. 42 - 19 = 23.
Kids may be able to muster up, “because 19+23 = 42”.
4. why a^b is not equal b^a ?
5. The sum of angles in a triangle 180 degrees.
I think far too few high school students know these, let alone have any explanation. In the first case, I can attest that college students (Ivy leaguers, no less) don’t know it. In the second, it’s
one of those things most people can quote but few can justify beyond an appeal to authority in the form of a high school geometry teacher.
6. square root of 2 is not a rational number
Those who understand what a rational number is can probably say something halfway sensible. Those few.
7. A continuous function that takes a positive value at a and a negative at b takes 0 in between.
Here’s the first one that I think even has a naïve understanding.
8. Why is it that a real function that has a derivative at every point may fail to have a second derivative, but a complex function, once it has a derivative at every point, has second (and third, etc.) derivatives?
And here I think all hope is lost. Nobody gets to the point of even learning the definition of a complex derivative without leaving the intuitive/naïve realm.
Chomskian linguistics works because everyone uses language and it seems perfectly natural to ask “why” questions about it. By and large most people are content to think of mathematics at even the
most basic level as some sort of esoterica. The only reason any of it is true for most people is that that’s what they were taught, if they remember it at all.
Posted by: John Armstrong on October 24, 2006 5:33 AM
Re: Knowledge of the Reasoned Fact
Gina asked about the following:
1. 5+8 = 8+5
I think I have an intuitive justification for this, in terms of piles of pebbles, which one could reasonably call “naïve”. For this,
4. why a^b is not equal b^a ?
I’m not so sure. It’s easy to give an example where $a^b \neq b^a$ which a schoolchild could check, but I don’t have much confidence in my ability to explain why such examples must exist.
Whether that matters, of course, depends upon the intellectual curiosity of the schoolchild. We’re in luck if she asks me a question which stumps me entirely!
5. The sum of angles in a triangle 180 degrees.
Thanks to Project Mathematics!, I always imagine extending the triangle’s sides to make the vertical angles, and then shrinking the entire triangle down to a point…
I side with John Armstrong on points six and eight. As to this one,
7. A continuous function that takes a positive value at a and a negative at b takes 0 in between.
I think we have segued into the realm of assertions which are easy to support with a pencil sketch but explode into subtleties as soon as you try to introduce any notion of rigor. How can we explain
what a “continuous function” is? A textbook might do so as follows:
In everyday speech, a ‘continuous’ process is one that proceeds without gaps or interruptions or sudden changes. Roughly speaking, a function $y = f(x)$ is continuous if it displays similar
behavior, that is, if a small change in $x$ produces a small change in the corresponding value $f(x)$.
In fact, a textbook does do so, specifically G. F. Simmons’s Calculus with Analytic Geometry (McGraw-Hill, 1985). This statement is “rather loose and intuitive, and intended more to explain than to
define.” To give a real definition, we break out the machinery of limits and begin employing deltas and epsilons, as Tom Lehrer describes here:
• Tom Lehrer, “There’s a Delta for Every Epsilon”, performed for Irving Kaplansky’s 80th birthday celebration (19 March 1997).
However, as Keith Devlin has written,
With limits defined in this way, the resulting definition of a continuous function is known as the Cauchy–Weierstrass definition, after the two nineteenth century mathematicians who developed it.
The definition forms the bedrock of modern real analysis and any standard “rigorous” treatment of calculus. As a result, it is the gateway through which all students must pass in order to enter
those domains. But how many of us manage to pass through that gateway without considerable effort? Certainly, I did not, and neither has any of my students in twenty-five years of university
mathematics teaching. Why is there so much difficulty in understanding this definition? Admittedly the logical structure of the definition is somewhat intricate. But it’s not that complicated.
Most of us can handle a complicated definition provided we understand what that definition is trying to say. Thus, it seems likely that something else is going on to cause so much difficulty,
something to do with what the definition means. But what, exactly?
Devlin advances the idea that going from the intuitive statement — “a line you can draw without picking up your pencil” — to the Cauchy–Weierstrass definition is not just a matter of refinement or
increasing the “rigor”, but instead a fundamental change from a dynamic to a static view:
Let’s start with the intuitive idea of continuity that we started out with, the idea of a function that has no gaps, interruptions, or sudden changes. This is essentially the conception Newton
and Leibniz worked with. So too did Euler, who wrote of “a curve described by freely leading the hand.” Notice that this conception of continuity is fundamentally dynamic. Either we think of the
curve as being drawn in a continuous (sic) fashion, or else we view the curve as already drawn and imagine what it is like to travel along it. […] When we formulate the final Cauchy–Weierstrass
definition, however, by making precise the notion of a limit, we abandon the dynamic view, based on the idea of a gapless real continuum, and replace it by an entirely static conception that
speaks about the existence of real numbers having certain properties. The conception of a line that underlies this definition is that a line is a set of points. The points are now the fundamental
objects, not the line. This, of course, is a highly abstract conception of a line that was only introduced in the late nineteenth century, and then only in response to difficulties encountered
dealing with some pathological examples of functions.
When you think about it, that’s quite a major shift in conceptual model, from the highly natural and intuitive idea of motion (in time) along a continuum to a contrived statement about the
existence of numbers, based on the highly artificial view of a line as being a set of points. When we (i.e., mathematics instructors) introduce our students to the “formal” definition of
continuity, we are not, as we claim, making a loose, intuitive notion more formal and rigorous. Rather, we are changing the conception of continuity in almost every respect. No wonder our
students don’t see how the formal definition captures their intuitions. It doesn’t. It attempts to replace their intuitive picture with something quite different.
All of these passages have been quoted from the following:
• Keith Devlin, “Will the real continuous function please stand up?”, Devlin’s Angle (MAA Online: May 2006).
In other words, the example which seems easiest to demonstrate intuitively and pictorially leads you to real issues when you try to formalize it.
Posted by: Blake Stacey on October 24, 2006 3:10 PM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
One of the subtleties mentioned is that the Cauchy-Weierstrass definition of continuity also implicitly takes on the job of capturing the completeness property of the real numbers, which the original
intuition doesn’t try to do. One can imagine (as many did, pre-Pythagoras) that all time quantities are rational numbers. In which case, if a continuous curve is one that can be drawn through time
without any jumps, then statement 7 is false.
Posted by: Jeff Morton on October 25, 2006 12:21 AM | Permalink | Reply to this
[T]he Cauchy-Weierstrass definition of continuity also implicitly takes on the job of capturing the completeness property of the real numbers […].
Devlin’s article mentions this towards the end:
[The] epsilon-delta statement […] does not eliminate (all) the vagueness inherent in the intuitive notion of continuity. Indeed, it doesn’t address continuity at all. Rather, it simply formalizes
the notion of “correspondingly” in the relation “correspondingly close.” In fact, the Cauchy-Weierstrass definition only manages to provide a definition of continuity of a function by assuming
continuity of the real line!
(I’m not sure if you meant to allude to this, Jeff, so I’m making it explicit anyway.)
Jeff again:
One can imagine […] that all time quantities are rational numbers. In which case, if a continuous curve is one that can be drawn through time without any jumps, then statement 7 is false.
For example (while I’m making things explicit), let f(x) be x^2 − 2.
More fully, Jeff wrote:
One can imagine that all time quantities are rational numbers.
But of course, it was not until Pythagoras that anybody knew that my function f does not cross the real line! (Still, one can reasonably imagine thus even after Pythagoras.)
Actually, it seems pretty ironic (to me) that #7, while seeming to many the most intuitive, is also the most doubtful! Besides frankly different mathematical interpretations of the intuition of
continuity (like using rational numbers instead of real numbers), it’s also possible to understand the same mathematical statement differently.
Since I do constructive mathematics sometimes, #7 jumped out to me as (potentially) wrong. To be precise, it is not provable in a neutral constructive setting (like Errett Bishop’s constructive
analysis), and it is flatly refutable in more restricted constructive settings (including both Brouwer’s intuitionistic analysis and the Russian school of constructive recursive analysis, for
different reasons).
This is interesting (to me, in part) because of all the discussion of static vs dynamic intuitions. The BHK interpretation of constructive logic gives the uniform continuity of a function (which is a
bit simpler than pointwise continuity) a dynamic flavour, albeit not the original dynamic intuition of tracing out a path.
To be precise, it invites us to view the ∀ε∃δ statement as the description of a process of transformation (or at least, the claim that one exists) that, given a natural number n (an upper bound on 1/
ε), returns a natural number m (giving δ as 1/m). That is, once we decide how closely we want to approximate the value of the function, we apply this transformation to determine how closely we must
approximate the argument. The dynamic nature here is not between points on the curve itself but rather between our uses of the function for measurement and calculation.
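As a sketch of this constructive reading (my own aside, not part of the comment; the function f(x) = x^2 on [0, 1] and the name `modulus` are illustrative choices), the "process of transformation" is literally a function from n to m:

```python
# Constructive/BHK-style modulus of uniform continuity for f(x) = x^2 on [0, 1]:
# given n (so epsilon = 1/n), return m (so delta = 1/m) such that
# |x - y| < 1/m  implies  |x^2 - y^2| < 1/n.
# Since |x^2 - y^2| = (x + y)|x - y| <= 2|x - y| on [0, 1], m = 2n suffices.

def modulus(n: int) -> int:
    """Epsilon-index in, delta-index out: the transformation itself."""
    return 2 * n
```

Deciding how closely we want the value, applying `modulus`, and reading off how closely we must pin down the argument is exactly the dynamic described above.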
Posted by: Toby Bartels on October 27, 2006 4:08 AM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
Dear all,
I agree with John that a Chomskian study of children’s learning, reasoning and insights about counting, arithmetic and mathematics will have limited scope compared to the similar study for
languages. It may still be useful. Besides the philosophical issues of mathematical causality, it can be relevant to the understanding of dyscalculia, i.e. learning disabilities related to mathematics.
I would conjecture that children usually understand why sum and product are commutative and the meaning of subtraction, but the algorithms for arithmetic operations on 2-digit numbers obscure this
understanding. (This was the point behind 42-19.)
Moving to naive or intuitive reasoning/causality for higher mathematics, the many ways to understand continuous functions are fascinating. I would be happy to hear if there is a “reason” or
“intuition” behind the miraculous difference between real functions which have a derivative at every point and complex functions with the same property.
Posted by: Gina on October 25, 2006 8:35 PM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
Gina writes:
Why is it that a real function that has a derivative at any point may fail to have second derivative but a complex function once having a derivative at every point has second (and third etc)
derivatives?? I would be happy to hear if there is a “reason” or “intuition” behind the miraculous difference between real functions which have a derivative at every point and complex functions
with the same property.
There certainly is a reason, and when I teach complex analysis I try to explain it.
After all, this is one of the biggest pleasant surprises in mathematics. In real analysis you have to pay extra for each derivative: for example, most functions that are 37 times differentiable do
not have a 38th derivative. But in complex analysis being differentiable once ensures a function is infinitely differentiable! It’s as if you bought a doughnut at a local diner and they promised you
free meals for the rest of your life! And some people say there’s no such thing as a free lunch….
So, we have to understand this seeming miracle.
The first fact I try to impress on my students is that for a differentiable function $f$ on the complex plane, the amount $f(x + i y)$ changes when you change $y$ a teeny bit is $i$ times the amount
$f(x + i y)$ changes when you change $x$ a teeny bit. Why? Because a tiny step north is $i$ times a tiny step east, and the derivative is a linear approximation to $f$.
That’s simple enough. But, the fact that a step north is $i$ times a step east makes linearity far more powerful on the complex plane than on the real line, where you could only take, say, 2 steps
east, or -3 steps east. Now, just to be differentiable, a function must satisfy a differential equation:
${\partial f \over \partial y} = i {\partial f \over \partial x}$
the Cauchy-Riemann equation.
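A quick numerical illustration (my own aside, not part of the comment; central finite differences stand in for the "teeny bits"): a holomorphic function satisfies this equation up to discretization error, while a non-holomorphic one such as z -> conj(z) visibly fails it.

```python
import numpy as np

def d_dx(f, z, h=1e-6):
    """Change in f per small step east, via a central difference."""
    return (f(z + h) - f(z - h)) / (2 * h)

def d_dy(f, z, h=1e-6):
    """Change in f per small step north."""
    return (f(z + 1j * h) - f(z - 1j * h)) / (2 * h)

def cr_defect(f, z):
    """How badly f violates  df/dy = i * df/dx  at the point z."""
    return abs(d_dy(f, z) - 1j * d_dx(f, z))
```

For f(z) = z^3 + e^z the defect at a sample point is on the order of the finite-difference error, while for conjugation it is 2 (its d/dx is 1 and its d/dy is -i).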
This makes all sorts of great things happen. For one thing, it means we can’t change $f$ one place without changing it lots of other places: if we mess with it in a tiny neighborhood, it won’t
satisfy the Cauchy-Riemann equation at the edge of that neighborhood anymore.
So, differentiable functions on the complex plane are not “floppy” the way differentiable functions on the real line are. You can’t wiggle them around over here without having an effect over there.
In fact, if you know one of these functions around the edge of a disk, you can solve the Cauchy-Riemann equation to figure out what it equals in the middle! So, such a function is like a drum-head:
if you take your fingers and press the drum-head down near the rim, the whole membrane is affected.
Indeed, the height of a taut drum-head satisfies the Laplace equation, which also holds for any function satisfying the Cauchy-Riemann equation:
$({\partial^2 \over \partial x^2} + {\partial^2 \over \partial y^2}) f = ({\partial \over \partial x} + i {\partial \over \partial y})({\partial \over \partial x} - i {\partial \over \partial y})f = 0$
So, the analogy is not a loose one: you really can understand what complex-analytic functions look like - well, at least their real and imaginary parts - by looking at elastic membranes.
And if you do this, one thing you’ll note is that such membranes are really, really smooth. One way to think of it is that they’re minimizing energy, so any “unnecessary wiggliness” is forbidden. We
can make this precise by noting that the Laplace equation follows from a principle of minimum energy, where energy is
$\int\int |\nabla f|^2 \; dx \, dy$
So, the reason why a once differentiable complex function is automatically smooth is that:
1) north is $i$ times east
2) to be differentiable, a function on the complex plane must satisfy a differential equation
3) this equation makes the function act like an elastic membrane.
This is a remarkable combination of insights, none particularly complicated, but fitting together in a wonderful way.
Posted by: John Baez on October 26, 2006 1:58 AM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
Many thanks, John for this beautiful reason. It looks very appealing and quite different from the way I remembered it. Now, I wonder if your explanation (which is very inspiring) will qualify for
being a “causation” (of the kind David asked about) even on a heuristic level. To examine it, we should ask: Does such a reason apply in other cases? Namely, is there any (non trivial) example, or
even perhaps a whole large class of examples, where your point 3 applies: a differential equation that forces every differentiable solution to have derivatives of any order.
Posted by: Gina on October 26, 2006 1:00 PM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
How perturbable are the Cauchy-Riemann equations? What other local conditions on partial derivatives force global solutions in interesting ways?
Is there something special at play because things are easily expressible in terms of the complex field? Does quaternionic analysis force itself upon you in a similar way?
Posted by: David Corfield on October 28, 2006 8:57 PM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
David wrote:
How perturbable are the Cauchy-Riemann equations? What other local conditions on partial derivatives force global solutions in interesting ways?
A lot of basic stuff about existence and smoothness of solutions generalizes from the Cauchy-Riemann equation and the Laplace equation to any “elliptic” PDE, as sketched here. This is one reason Atiyah
and Singer were able to generalize the Riemann-Roch theorem from the Cauchy-Riemann operator to all elliptic operators.
Is there something special at play because things are easily expressible in terms of the complex field? Does quaternionic analysis force itself upon you in a similar way?
… there certainly are special features of the Cauchy-Riemann equations, coming from its intimate connection to the complex numbers!
By comparison, quaternionic analysis has been a bust. Several obvious ways of generalizing the concept of analytic function from the complex to quaternionic case give really pathetic results. The
good way is due to Fueter. It works not just for quaternions but also Clifford algebras. However, much to my shame, I’ve never really spent the time needed to learn it! And, few other people seem to
know it. I can’t tell if it’s unjustly neglected, or really not very interesting.
The last link above leads to a description and excerpt of Tony Sudbery’s paper “Quaternionic Analysis” - by far the best thing to read on this subject. It also has a link to his paper… which, alas,
no longer works!
And now I’ve lost my copy.
Posted by: John Baez on October 29, 2006 8:29 AM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
Gina wrote:
Many thanks, John for this beautiful reason. It looks very appealing and quite different from the way I remembered it. Now, I wonder if your explanation (which is very inspiring) will qualify for
being a “causation” […] Namely, is there any (non trivial) example, or even perhaps a whole large class of examples, where your point 3 applies: a differential equation that forces every
differentiable solution to have derivatives of any order?
I’m glad you enjoyed my little explanation. It’s sad that most classes on complex analysis don’t explain this stuff.
Yes, there is a vast class of partial differential equations (PDE) such that any solution automatically has derivatives of arbitrarily higher order! These are the so-called elliptic differential
equations. If you talk to experts on PDE, you’ll find they often prefer to concentrate on one of these three kinds:
• Elliptic: here the classic example is the Laplace equation ${\partial^2 f \over \partial x^2} + {\partial^2 f \over \partial y^2} = 0$ Elliptic equations often describe static equilibrium.
• Hyperbolic: here the classic example is the wave equation ${\partial^2 f \over \partial x^2} - {\partial^2 f \over \partial t^2} = 0$ Hyperbolic equations often describe waves.
• Parabolic: here the classic examples are the heat equation ${\partial f \over \partial t} - {\partial^2 f \over \partial x^2} = 0$ and Schrödinger’s equation ${\partial f \over \partial t} + i {\partial^2 f \over \partial x^2} = 0$ Parabolic equations often describe diffusion.
The techniques for dealing with the three kinds are very different. They have completely different personalities. Elliptic PDE are the easiest to prove lots of powerful results about, in part because
“elliptic regularity” guarantees that solutions are smooth.
To see if a linear PDE is elliptic, you write it down like this: $((4 + \sin x){\partial^4 \over \partial x^4} + {\partial^4 \over \partial y^4} - x^2 {\partial \over \partial x}) f = 0$ and peel off the differential operator involved: $(4 + \sin x){\partial^4 \over \partial x^4} + {\partial^4 \over \partial y^4} - x^2 {\partial \over \partial x}$ Then you replace the partial derivatives ${\partial \over \partial x}, {\partial \over \partial y}$ by new variables, say $p_x, p_y$. You get a function called the symbol of your PDE: $(4 + \sin x)p_x^4 + p_y^4 - x^2 p_x$ The order of your PDE is the highest number of partial derivatives that show up. In the example above, the order is 4.
To see if your PDE is elliptic, just look at what happens to its symbol as the vector $p = (p_x,p_y)$ goes to infinity in any direction. If the symbol always grows roughly like $|p|^k$ where $k$ is
the order of your PDE, your PDE is elliptic.
In the example above, the symbol indeed grows like $|p|^4$ as $p$ goes to infinity in any direction. So, it’s an elliptic PDE. So, any solution is automatically smooth: it has partial derivatives of
arbitrarily high order!
If you have some spare time, you might convince yourself that the wave equation, heat equation and Schrödinger’s equation are not elliptic. In these examples the order is 2, but there are some
directions where the symbol does not grow like $|p|^2$.
(I’ve given examples of PDE involving just two variables, $x$ and $y$. Everything I said works for more variables, too.)
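The recipe above can be mechanized in a crude way. The sketch below (my own, with simplifying assumptions: constant coefficients, two variables, and only the top-order part of the symbol, which is homogeneous of degree equal to the order) just scans directions on the unit circle, since a homogeneous symbol grows like $|p|^k$ in a direction exactly when it is nonzero there:

```python
import numpy as np

def looks_elliptic(top_symbol, n_dirs=720):
    """Scan unit directions: a top-order (homogeneous) symbol grows like
    |p|^order in a direction iff it is nonzero on the unit circle there."""
    theta = np.linspace(0, 2 * np.pi, n_dirs, endpoint=False)
    vals = np.abs(top_symbol(np.cos(theta), np.sin(theta)))
    return bool(vals.min() > 1e-9)

laplace  = lambda px, py: px**2 + py**2   # elliptic: never vanishes on the circle
wave     = lambda px, py: px**2 - py**2   # hyperbolic: vanishes where px = +-py
heat_top = lambda px, py: -px**2          # heat equation's top-order part: vanishes at px = 0
```

This reproduces the classification claimed above: the Laplace symbol passes, while the wave and heat symbols fail in the bad directions.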
In my youth I mainly did hyperbolic PDE, since I liked physics and especially waves. I looked down on the elliptic folks for working on static phenomena like bubbles and taut drumheads - no life,
just sitting there, perfectly smooth. But, elliptic PDE certainly have their charm.
Posted by: John Baez on October 29, 2006 7:57 AM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
Thanks a lot, John, for this beautiful explanation. ( I suppose your “cause” for the many derivatives phenomenon will satisfy David.) Moreover, I wish students who have the impression that complex
analysis is a piece of heaven while PDE is down to earth “applied” stuff will read it.
Posted by: Gina on October 30, 2006 11:32 PM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
I would conjecture that children usually understand why sum and product are commutative and the meaning of subtraction, but the algorithms for arithmetic operations on 2-digit numbers obscure this understanding. (This was the point behind 42-19.)
Children “understand” that 2+3 = 3+2, because they can see it on their fingers and the notion of the order they put their fingers up in is out of mind. “How could it be any other way?” they think.
What I’m not so sure they get is a truly general notion of number, and that “addition” is something abstracted from disjoint union of sets.
Okay, I don’t think they think in those terms. What I mean is that numbers to them are purely instantiated in collections of objects. If they don’t have a specific collection of 32 things to think of
as representing “32”, they just don’t think about “32”. The biggest hurdle is to get the student beyond instantiating one number in a set (hold up two fingers) and then another in another set (hold
up three more) and counting the result (five fingers). From what I’ve seen, both in talking to the few kids I come into contact with and in talking to adults long past these subjects, almost half the
time the best we manage it to teach the decimal algorithms for addition and any real meaning of the numbers is lost. At this point, a number is its decimal expansion, rather than any abstract notion
at all.
Posted by: John Armstrong on October 26, 2006 3:00 AM | Permalink | Reply to this
1. 5+8 = 8+5
2. 6*5 = 5*6
4. Why is a^b not equal to b^a?
I think that already commutativity of multiplication is subtler than commutativity of addition, so it’s not so surprising that commutativity of exponentiation breaks down entirely.
A + B means (the number of ways) to either choose “left” and do (pick a number less than) A or choose “right” and do B. To see that A + B is equivalent to B + A, you are simply (quite literally in my
formulation!) swapping left and right.
A × B means to do A and then do B. To see that A × B is equivalent to B × A, you have to swap before and after, which is not always possible. In this case it is, but only because A and B are
independent (both given before your activity begins). In $\sum _ { a : A } B _ a$ (of which A × B is a special case), no commutativity is possible in general, since in general B now depends on A.
While $\sum _ { a : A } B _ a$ means to do A and then do the appropriate version of B, $\prod _ { a : A } B _ a$ means to wait for me to do A and then do an appropriate version of B yourself. So even
when B is independent of A (as in A^B —whoops, I mean B^A!), there is no reason to suspect commutativity, since you’re not even doing the same thing.
So we move from swapping left and right (easy), to swapping before and after (possible in the independent case, but not in general), to swapping input and output (impossible).
Posted by: Toby Bartels on October 27, 2006 12:15 AM | Permalink | Reply to this
Re: Commutativity.
I think these interpretations of the “meaning” of arithmetic operations are interesting, and definitely worthy of consideration. I also think that they move even further from the question, which is
on how students learning these arithmetic operations think about them.
Posted by: John Armstrong on October 27, 2006 1:02 AM | Permalink | Reply to this
Re: Commutativity.
John Armstrong wrote in reply to me:
[Your post] move[s] even further from the question, which is on how students learning these arithmetic operations think about them.
Is that the topic of this thread? I had taken the topic to be causality in mathematics, but I guess that it’s for David to decide. (To be sure, Gina considered that topic in the context of how
students think.)
The New Math curriculum was supposed to modernise mathematics teaching by bringing in set theory from the early stages, but I don’t think that this was done seriously. However, arithmetic operations
on natural numbers (that is, whole numbers, including zero) certainly could be taught along these lines.
Posted by: Toby Bartels on October 27, 2006 3:10 AM | Permalink | Reply to this
Re: Commutativity.
Not the thread as a whole, no, but Gina’s point in raising those particular examples was to suggest a parallel to Chomskian linguistics: ask children learning the concepts “why” they are true and
that will give you insight into their epistemic causality. The idea is that the first stabs of an unsophisticated observer towards an explanation contain a deep insight into how the human mind
processes the concepts. This is to be contrasted with the viewpoint of an expert who has already thought long and hard about the nature of the subject, and who cannot simply “unknow” that knowledge.
Posted by: John Armstrong on October 27, 2006 3:50 AM | Permalink | Reply to this
Re: Commutativity.
My idea for this thread was to see whether a ‘realist’ notion of something quasi-causal going on in mathematics might be made to work. In that such quasi-causal accounts often need centuries of
effort to uncover, as have their physical counterparts, I wasn’t thinking that we’d make contact with children’s early encounters with arithmetic, even if we happen to talk about addition and
multiplication, as Toby does.
Having said this, in that the causal accounts strip down concepts to their bare bones (ur-concepts?), it is possible that children’s modes of thinking might make contact with aspects of them.
Nevertheless, my own interest is in the long-term disciplinary quest.
Posted by: David Corfield on October 28, 2006 9:15 PM | Permalink | Reply to this
Re: Commutativity.
I learned the New Math in grade school, so I really learned commutativity of addition and multiplication in terms of natural isomorphisms between sets: $S \sqcup T \cong T \sqcup S$ and $S \times T \cong T \times S$ Of course they didn’t talk about “natural isomorphisms” - they just showed pictures of how it works. It’s very obvious stuff (which I am not explaining here).
The New Math may not have helped everyone, but it helped me. Basically they undid the mistake of decategorifying arithmetic. This helped me see that basic arithmetic was about sets of things, not
just abstract “numbers”.
I sometimes wonder if, much later, this helped me understand categorification.
Posted by: John Baez on October 29, 2006 8:41 AM | Permalink | Reply to this
Re: Commutativity.
The New Math foundered on many shoals, but its heart was in the right place.
Posted by: John Armstrong on October 29, 2006 1:21 PM | Permalink | Reply to this
Re: Knowledge of the Reasoned Fact
I’m stuck out in the wilds of cyberspace at the moment with the slowest connection known to man, so am finding it hard to keep up with cafe news. There’s something attractive I think to the epistemic
view of causality in that it gives an explanation for why mathematical and scientific reasoning share so many features. The question that intrigues me is whether there is a ‘best’ way of organising a
field. Even if many accounts were needed for pedagogical purposes, this wouldn’t rule out the notion. It is possible that the understanding of the ‘best’ organisation is only available to one who has
worked very hard at a field for many years, acquiring skill and understanding on the way.
To give an example, one would hardly teach a first-year student about adjoint functors to explain why the underlying set of the product of groups is isomorphic to the product of underlying sets, and
yet I find it plausible to think that whichever direction mathematics takes, so long as it doesn’t degenerate, it will understand that right adjoints preserving products is at stake, even if this is
seen as only a small part of a larger picture.
Posted by: David Corfield on October 26, 2006 10:38 AM | Permalink | Reply to this
Pi R Round
We're all familiar with the formula for the area of a circle: A = πr².
Who can give a formula for the area of a circle that does not mention its radius?
Circumference squared divided by 4pi
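A quick numerical sanity check of that answer (my own sketch; the radius below is used only to manufacture a circumference to test against):

```python
import math

r = 2.5                                # any radius, used only to fabricate C
C = 2 * math.pi * r                    # the circumference
area_from_C = C**2 / (4 * math.pi)     # radius-free formula from the post
```

Algebraically, C²/(4π) = (2πr)²/(4π) = πr², so the two agree for every radius.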
Can you prove that, in a perfectly elastic head-on collision between two bowling balls of the same mass approaching at two different speeds, they simply rebound with their speeds exchanged?
Hint: Both momentum and kinetic energy are preserved in an elastic collision.
It might help to pick two specific speeds since you will get two answers, one of which you must show is to be discarded.
To make the problem a little more challenging, how about an area-of-a-circle formula that makes no reference to either the radius, or the circumference?
a = pi * (diameter/2)^2
Can you prove that, in a perfectly elastic head-on collision between two bowling balls of the same mass approaching at two different speeds, they simply rebound with their speeds exchanged?
Hint: Both momentum and kinetic energy are preserved in an elastic collision.
It might help to pick two specific speeds since you will get two answers, one of which you must show is to be discarded.
The bottom expression is for v2'; it is either v1 or v2. v2' = v2 is the trivial solution (no impact happened); back-substituting shows v1' = v2.
(ignore the detour I took to remember how the quadratic equation goes)
I don't think that you quite have it. Try it with V1 =5 and V2=2 say. I think it will be much easier. You need to state what the new V1 and V2 are. You will get another incorrect solution
which you must show is extraneous.
I don't think that you quite have it. Try it with V1 =5 and V2=2 say. I think it will be much easier. You need to state what the new V1 and V2 are. You will get another incorrect solution
which you must show is extraneous.
Admittedly my chicken scratches don't exactly flow, but the result is:
Pre-impact velocity of balls: v1, v2
Post-impact velocity of balls: v1', v2'
There are two solutions
Solution 1: v1' = v1, v2' = v2
Solution 2: v1' = v2, v2' = v1
Solution 1 says that neither ball changes velocity. That conserves kinetic energy and momentum but is not the solution we are interested in by inspection because it does not describe an impact
Solution 2 is therefore what we want. It shows the swapping of velocities between the balls.
I leave it as an exercise for the reader to plug in specific numbers.
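Taking up that exercise (my own sketch, not part of the thread): eliminating v2' with the momentum equation turns the energy equation into a quadratic in v1', whose roots are precisely the two solutions above, and the extraneous "no collision" root can be discarded programmatically.

```python
def post_collision(v1, v2):
    """Equal-mass, 1-D, perfectly elastic collision.
    Momentum: v1 + v2 = v1p + v2p.  Energy: v1^2 + v2^2 = v1p^2 + v2p^2.
    Substituting v2p = v1 + v2 - v1p into the energy equation gives
        2*v1p**2 - 2*(v1 + v2)*v1p + 2*v1*v2 = 0,
    whose roots are v1p = v1 (extraneous: nothing happened) and v1p = v2."""
    a, b, c = 2.0, -2.0 * (v1 + v2), 2.0 * v1 * v2
    disc = (b * b - 4 * a * c) ** 0.5
    roots = [(-b + disc) / (2 * a), (-b - disc) / (2 * a)]
    v1p = next(r for r in roots if abs(r - v1) > 1e-9)  # drop the trivial root
    v2p = v1 + v2 - v1p
    return v1p, v2p
```

With v1 = 5 and v2 = 2 this returns (2, 5): the speeds are exchanged. (The sketch assumes v1 != v2; at equal speeds the two roots coincide.)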
The math must show that the exchange solution is correct. The math must show what happens to the velocities and therefore that the other solution does not describe a collision at all and is consequently extraneous. You cannot use the a priori idea that the collision must exchange velocities since that is what you are trying to prove.
I am always amazed that Math accurately describes what physically happens in nature--it's the ultimate truth.
Can you prove that there are two firing angles that achieve the same range in ballistics? Assume the same muzzle velocity; ignore air friction and the Coriolis effect.
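A sketch for the ballistics question (mine, not the poster's; it assumes the standard flat-ground, no-drag range formula R = v² sin(2θ)/g): since sin(2θ) = sin(180° − 2θ), the angles θ and 90° − θ give the same range, so every reachable range below the 45-degree maximum has exactly two firing solutions.

```python
import math

def flight_range(v, theta_deg, g=9.81):
    """Flat-ground, no-drag projectile range."""
    return v**2 * math.sin(2 * math.radians(theta_deg)) / g

# Complementary firing angles land in the same place:
low_angle_range  = flight_range(100, 30)
high_angle_range = flight_range(100, 60)
```

The 45-degree shot is the unique maximum; every shorter range is hit by one low and one high trajectory.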
Thank you to everyone who responded to the topic of an alternate formula for the area of a circle. Dividing the diameter in half in a formula for the area of a circle seems to refer to the
radius. It is possible to get the area without invoking either the radius, diameter, or circumference of a circle. clue: It should take either ten minutes or two hours, as measured by the
... I am always amazed that Math accurately describes what physically happens in nature--it's the ultimate truth.
Yes and no...
Yes, in that Maths is designed to be self-consistent, and so it is its own "Truth", by design.
With regards to nature, it just so happens that we can accurately match some of the mathematical truths to what we observe in Nature.
Meanwhile, Nature will do its own thing regardless of what our Maths might describe...
Having said/typed all that, so far we have found some of our Maths to describe and predict what Nature does, right down to some fantastic levels of precision and accuracy...
Keep searchin',
Having said/typed all that, so far we have found some of our Maths to describe and predict what Nature does, right down to some fantastic levels of precision and accuracy...
Sshhhh silly! If ID hears you he'll start banging on about intelligent design again .......
OK, so not exactly a mathematical approach...
Take a cylinder of known height, measure its volume using a displacement technique, and so since
volume = area x height
and we now know both height and volume we can very simply calculate the area as being
area = volume / height
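In numbers (entirely hypothetical measurements, just to show the arithmetic; the radius below is known only to us, not to the experimenter, and is used solely to fabricate the displaced volume):

```python
import math

height = 10.0                             # cm, measured with a ruler
hidden_r = 3.0                            # cm, never measured directly
volume = math.pi * hidden_r**2 * height   # cm^3, read off as displaced water

area = volume / height                    # the circle's area, radius-free
```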
Bob Smith
You could probably use Archimedes "Method of Exhaustion" as well
Raritan, NJ Calculus Tutor
Find a Raritan, NJ Calculus Tutor
...For me, tutoring is first a career and second a job, so the satisfaction of the student is always more valuable than being paid for doing so. Please feel free to reach out to me with your
tutoring needs and we can discuss together how I can best be of support to you.I took AP Calculus BC as a ju...
15 Subjects: including calculus, English, Chinese, GRE
I have been a tutor for several years now. My specialties include Mathematics, from basic math to Calculus. I am certified in teaching secondary Math; a copy of my certification is available upon request. I hold a BA in Math.
14 Subjects: including calculus, Spanish, French, physics
...I have also taught and tutored MATH at different levels (in high school and at university). I have a PhD in physics, so I used calculus and precalculus throughout my education and career. Both calculus and precalculus are like a mother language for physicists. I also taught ca...
9 Subjects: including calculus, physics, algebra 1, algebra 2
While pursuing my Bachelor's in physics, I started tutoring math and physics to primary, secondary, high-school and undergraduate students. After my graduation, I worked for 1 year as a
high-school teacher, teaching geometry to 6th graders and physics to 9th and 10th graders, which I then left to pursue my PhD. During my PhD I assisted the Applied Statistics class as part of my
teaching duties.
20 Subjects: including calculus, Spanish, physics, geometry
...This has significantly helped students with the fabricated “intimidation factor” of orgo as well as with the learning. RESULTS: My students approach the next level of their education much more
prepared than their peers in not only the class I tutored them in, but in related classes I have introd...
34 Subjects: including calculus, chemistry, physics, geometry
Explicit map for Scholz reflection principle
The question is about the specific case of reflection theorems (copied straight from Franz Lemmermeyer's "Class Groups of Dihedral Extensions"):
Let $k^+ = \mathbb{Q}(\sqrt{m})$ with $m\in \mathbb{N}$, and put $k^- = \mathbb{Q}(\sqrt{-3m})$; then the 3-ranks $r_3^+$ and $r_3^-$ of $Cl(k^+)$ and $Cl(k^-)$ satisfy the inequalities $r_3^+ \
le r_3^- \le r_3^+ + 1$.
The proofs I have seen either use p-adic arguments or Galois actions.
Is there an explicit surjective map from $Cl(k^-)[3]$ to $Cl(k^+)[3]$ that might, as the theorem suggests, have kernel of size 3?
At the least, an algorithm for such a map?
nt.number-theory class-field-theory computational-number-theo
2 Answers
One way to think of the reflection principle, similar to what you are proposing in your question, is as a relation between the index 3 subgroups of $Cl(k^{+})$, which I'll call $I_{3}(m)$, and the subgroups of $Cl(k^{-})$ of order 3, which I'll call $S_{3}(-3m)$. It is not difficult to see that $$|S_{3}(-3m)|=\frac{3^{r_{3}^{-}}-1}{2}$$ and that $$|I_{3}(m)|=\frac{3^{r_{3}^{+}}-1}{2},$$ hence any injective map $$ \Phi_{m}: I_{3}(m) \rightarrow S_{3}(-3m)$$ would yield $r_{3}^{+}\leq r_{3}^{-}$. It is a result of Hasse that the set $I_{3}(m)$ is in bijection with the isomorphism classes of cubic fields of discriminant $m$ (notice that here I'm assuming that $m$ is fundamental, i.e., $m=disc(k^{+})$), hence what we are looking for is a map $\Phi$ that takes a cubic field $K$ and produces a subgroup of $Cl(k^{-})$ of order $3$. In other words, given a cubic field $K$ of discriminant $m$ we need to associate to it a primitive, binary quadratic form of discriminant $-3m$ with the extra condition that the form has order 3 under Gauss composition. To shorten the exposition I'll assume $(3,m)=1$; however, all that I'm saying can be worked out in full generality. One natural way to define $\Phi_{m}$ is as follows: Let $O_{K}^{0}$ be the set of integral elements in $K$ with zero trace, and let $q_{K}(x):=Tr(x^{2})/2$. Then one can show that $q_{K}(x)$ is a primitive, binary quadratic form of discriminant $-3m$. Moreover, as an element of the class group, $q_{K}^{2}$ has order $3$. It is possible to show that the map $\Phi_{m}$ sending $K$ to the group generated by $q_{K}^{2}$ is injective, so the result follows.
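A quick sanity check of the counting behind $|S_{3}|=(3^{r}-1)/2$, under the simplifying assumption that the 3-torsion subgroup is elementary abelian, $Cl[3]\cong(\mathbb{Z}/3)^{r}$, can be done by brute force (my sketch, not part of the original answer): each order-3 subgroup contains exactly two nonzero vectors, $v$ and $2v$.

```python
from itertools import product

def order3_subgroups(r):
    """Count subgroups of order 3 in (Z/3Z)^r: each is {0, v, 2v},
    so we pair every nonzero vector with its negative."""
    seen, count = set(), 0
    for v in product(range(3), repeat=r):
        if any(v) and v not in seen:
            count += 1
            seen.add(v)
            seen.add(tuple((2 * x) % 3 for x in v))
    return count

print([order3_subgroups(r) for r in range(1, 5)])   # [1, 4, 13, 40]
print([(3 ** r - 1) // 2 for r in range(1, 5)])     # [1, 4, 13, 40]
```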
All the above results should be appearing at some point soon in ANT, but I can email you a copy of the article if you are curious about the details.

Added: In response to Alex's comment, I should say that the other inequality can also be derived with the same ideas I explained above. Now you start with $I_{3}(-3m)$ and notice that $$|I_{3}(-3m)|=\frac{3^{r_{3}^{-}}-1}{2}.$$ Moreover, $S_{3}(-3(-3m))=S_{3}(m)$, hence by using the trace you get a map $$ \Phi_{-3m}: I_{3}(-3m) \rightarrow S_{3}(m).$$ The difference here is that this map is not injective, but it can be shown that roughly the map is 3-to-1, hence the other inequality. So, summarizing: the Scholz reflection principle is a relation between index 3 subgroups in one class group and subgroups of order 3 in the other, and one way to make this relation explicit is via the trace form.

One place to see the difference in behavior between $\Phi_{m}$ and $\Phi_{-3m}$ is, as Professor Lemmermeyer already pointed out, Bhargava's first paper on Higher composition laws, more specifically Corollary 15. Another place to look at this is J. W. Hoffman and J. Morales, Arithmetic of binary cubic forms, Enseign. Math. (2) 46, 2000, 61-94.
That's very nice, Guillermo! I imagine that it will be not easy to bound the co-kernel, to get the other inequality? – Alex B. Oct 3 '10 at 13:12
Guillermo has since posted his nice paper to the arXiv: arxiv.org/abs/1104.4598 – Frank Thorne Dec 15 '12 at 17:11
Leopoldt's reflection theorem, of which Scholz's result (discovered independently by Reichardt) is a special case, bounds the sizes of certain eigenspaces of the class group of abelian
extensions. I see the main reason for their existence in the fact that abelian extensions over fields containing the appropriate roots of unity are Kummer extensions. Anyway what you are
looking for is not so much a map between class groups of different fields as a map between eigenspaces of the class group of one single field, which can be pulled down to subfields in some
If you're interested in an explicit map in Scholz's case, you should have a look at Bhargava's excellent Higher composition laws. I: A new view on Gauss composition, and quadratic
generalizations. There he mentions a map defined in terms of binary quadratic forms studied already by Eisenstein, which I think might have something to do with the question you're asking.
I always wanted to study this part in detail, but haven't yet found time to do so. If you come to understand Eisenstein's result before I do, let me know :-)
Another Question, regarding Quadratic Formula??
September 29th 2013, 01:45 PM #1
Sep 2013
Another Question, regarding Quadratic Formula??
Attachment 3275
Is it correct to use the quadratic formula for this derivative? I am not getting the right answer of $2 \pm 2\sqrt{3}$.
Help would be much appreciated.
Re: Another Question, regarding Quadratic Formula??
Make sure you get the right signs for $a$, $b$ and $c$.
Last edited by TwoPlusTwo; September 29th 2013 at 03:29 PM.
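For reference, the formula being applied is the quadratic formula; since the actual coefficients come from the attached derivative, the quadratic shown alongside it is only an illustrative one that happens to have the stated roots:

```latex
x = \frac{-b \pm \sqrt{b^{2} - 4ac}}{2a},
\qquad\text{e.g.}\quad
x^{2} - 4x - 8 = 0
\;\Longrightarrow\;
x = \frac{4 \pm \sqrt{16 + 32}}{2} = \frac{4 \pm 4\sqrt{3}}{2} = 2 \pm 2\sqrt{3}.
```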
September 29th 2013, 03:24 PM #2
Junior Member
Sep 2010 | {"url":"http://mathhelpforum.com/calculus/222395-another-question-regarding-quadratic-formula.html","timestamp":"2014-04-19T08:51:12Z","content_type":null,"content_length":"34176","record_id":"<urn:uuid:7ad1072e-1ae5-4584-a8a3-4422403e9b8e>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00495-ip-10-147-4-33.ec2.internal.warc.gz"} |
Special Programs - Fast Track Calculus
Differential, integral, and multivariable calculus is offered during the summer (late July through late August) for selected members of our entering freshman class who have demonstrated outstanding
ability in mathematics and studied a year of calculus during high school. Participants are expected to have scored at least 700 on the mathematics portion of the SAT or 31 on the mathematics portion
of the ACT. Students, who have a 680 mathematics score and at least a 700 verbal score on the SAT, or a 30 mathematics score and at least a 31 verbal score on the ACT have also been admitted to the
program. Participants who successfully complete Fast Track Calculus (graded on a pass/fail basis) satisfy Rose-Hulman's freshman Calculus requirement, are awarded 15 quarter hours of credit toward
graduation, and begin their college careers as "mathematical sophomores."
Admission to Fast Track Calculus is competitive. Interested students should contact the Head of the Mathematics Department or Director of Fast Track Calculus.
Fast Track Calculus is graded on a pass/fail basis. For course details, see the MAFTC course in the section on mathematics course descriptions. For information on the program and application
procedures see the Fast Track Calculus site. | {"url":"http://www.rose-hulman.edu/math/programs/FTC.php","timestamp":"2014-04-21T02:44:14Z","content_type":null,"content_length":"11947","record_id":"<urn:uuid:ba6cc7d9-b677-4cd9-823c-aaf438d0b204>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00456-ip-10-147-4-33.ec2.internal.warc.gz"} |
Statistics in Analytical Chemistry: Part 39—Inexact Replicates: Example
The previous installment (Part 38, American Laboratory, May 2010) outlined the regression-diagnostic steps needed when replicates are not exact. This article will illustrate the protocol using the
nitrite data that have been used throughout this series (please refer to Part 38 for details on each step). To avoid an “overload” of plots and numbers, only the results for the middle concentration
(62.5 ppt) will be illustrated for Steps 1 and 3.
Step 1: Check responses for trends within each group of concentrations
Inspection for trends showed none. For 62.5 ppt, the plot and p-value (for the line’s slope) are shown in Figure 1.
Table 1 - Listing of target concentrations and the means of the actual concentrations achieved
Step 2: Calculate the mean concentration for each target-concentration group
The mean values are shown in Table 1.
Step 3: Scale the actual responses to each mean
When this step is performed, the slope of the line is 83.50, which is the dy value needed to scale the actual peak areas (PAs) to each concentration’s mean. In Figure 2, the scaled PAs for the
62.5-ppt target concentration all plot at the mean concentration (62.94 ppt). Each raw-data point and its corresponding scaled value have the same marker shape/color.
Step 4: Model the standard deviation
The results of the standard deviation modeling are shown in Figure 3. The p-value for the slope is 0.0088, meaning that the slope is statistically significant. (See Part 8, American Laboratory, Nov
2003, for a discussion of the fundamentals of this type of modeling and calculation of weights.) Thus, weighted least squares (WLS) is needed for the fitting technique.
When Step 3 is repeated, using WLS to fit the line, the slope is 83.56. When ordinary least squares (OLS) was used originally, the slope was 83.50. This difference results in an insignificant change
in the scaled responses and can be ignored.
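The weighted fit used in Steps 4 through 6 is simple enough to write in closed form. The sketch below uses invented concentrations, peak areas, and standard-deviation-model coefficients (the article's raw nitrite data are not reproduced here); the weights are 1/s^2, with s taken from the fitted standard-deviation line:

```python
def wls_line(x, y, w):
    """Weighted least-squares fit of y = a + b*x (closed form)."""
    W = sum(w)
    xb = sum(wi * xi for wi, xi in zip(w, x)) / W
    yb = sum(wi * yi for wi, yi in zip(w, y)) / W
    b = (sum(wi * (xi - xb) * (yi - yb) for wi, xi, yi in zip(w, x, y))
         / sum(wi * (xi - xb) ** 2 for wi, xi in zip(w, x)))
    return yb - b * xb, b

# Invented calibration data with a slope near 83.5 (peak area vs. ppt):
x = [12.5, 25.0, 62.5, 125.0, 250.0]
y = [1050.0, 2095.0, 5230.0, 10440.0, 20880.0]

# Invented standard-deviation model s = c0 + c1*x; weights are 1/s^2:
s = [5.0 + 0.04 * xi for xi in x]
w = [1.0 / si ** 2 for si in s]

intercept, slope = wls_line(x, y, w)
print(round(slope, 2))   # close to 83.5 for these made-up numbers
```

Note that with equal weights this reduces to ordinary least squares, which is consistent with the article's observation that the OLS and WLS slopes differ only slightly when the weights vary modestly.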
Step 5: Test the proposed model for the actual data
Figure 4 shows the plot of the actual PAs versus the actual concentrations; the proposed model is a straight line and WLS fitting (using the weight from Step 4) is used. The residual pattern is also
shown; the distribution of points appears to be random about the zero line. A lack-of-fit (LOF) test will allow a formal decision on the adequacy of the model.
Step 6: Perform an LOF test, using the scaled responses and mean concentrations
The p-value for this test is 0.8246, supporting the conclusion (in Step 5) that a straight line is an adequate model.
In Part 12 (American Laboratory, Jul 2004), the slope was found to be insignificant, but only barely so; the actual p-value was 0.0109. Figure 5 shows the plots (with prediction intervals) for the
two regressions. The differences are only slight.
Mr. Coleman is an Applied Statistician, Alcoa Technical Center, MST-C, 100 Technical Dr., Alcoa Center, PA 15069, U.S.A.; e-mail: david.coleman@alcoa.com. Ms. Vanatta is an Analytical Chemist, Air
Liquide-Balazs™ Analytical Services, 13546 N. Central Expressway, Dallas, TX 75243-1108, U.S.A.; tel.: 972-995-7541; fax: 972-995-3204; e-mail: lynn.vanatta@airliquide.com. | {"url":"http://www.americanlaboratory.com/914-Application-Notes/1113-Part-39-Inexact-Replicates-Example/","timestamp":"2014-04-18T20:14:20Z","content_type":null,"content_length":"41767","record_id":"<urn:uuid:fe662b05-abd0-42f1-8d80-c9e3c6c581d8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Inclusion of logarithmic de-Rham complex into differentials
Let $X$ be a complex manifold and $D$ a normal crossing divisor. Let $U = X - D$ and $j: U \rightarrow X$ the natural map. Voisin observes that there is a natural inclusion $$\Omega^k_X(\log D) \
subset j_*\Omega_U^k$$ Why is this so? Certainly, by adjointness of $j^{-1}, j_*$ we get a natural map $\Omega^k_X(\log D) \rightarrow j_*\Omega_U^k$. It's not obvious to me that this map is
injective at the level of stalks. Basically I have two questions:
1.) Is it in fact obvious that the natural map produced by adjointness is injective at the level of stalks? (Does this follow from more general "sheaf theory" theorems?)
2.) Are you able to see in a more obvious way that a map $\Omega^k_X(\log D) \rightarrow j_*\Omega_U^k$ exists, and is an inclusion of sheaves "geometrically", that is, without using adjointness of
$j^{-1}, j_*$?
hodge-theory sheaf-theory complex-manifolds ag.algebraic-geometry
$j_*\Omega^k_U$ is the sheaf of differential forms which are holomorphic on $U$. You have an inclusion $\Omega^k_X(*D)\subset j_*\Omega^k_U$ of forms meromorphic along $D$. Furthermore, $\Omega^k_X(\log D)\subset \Omega^k_X(*D)$ as forms having a first-order pole along $D$. – Pavel Safronov Sep 27 '12 at 1:11
Why do you know that $j_*\Omega_U^k$ is the sheaf of $C^\infty$ differential $k$-forms on $X$ that are holomorphic on $U$? – LMN Sep 27 '12 at 1:17
Sorry, I didn't understand your question at first. Forms in $j_*\Omega^k_U$ are definitely not $C^\infty$ on $X$, since they are not even defined at $D$. Let me try to be more precise. The kernel
of $\Omega^k_X(*D)\rightarrow j_*\Omega^k_U$ consists of meromorphic forms on $X$ which vanish on $U$. Since they are zero on an open set, they are zero on the whole $X$. – Pavel Safronov Sep 27
'12 at 1:40
Great, thanks! If you reply below, I'll be happy to accept your response as an answer. – LMN Sep 27 '12 at 2:07
1 Answer
Answers to the numbered questions:
1. Yes, it is in fact obvious that the natural map you describe is injective, because it is injective, in fact an isomorphism, on $U$ which is dense in $X$ and $\Omega_X^k(\log D)$ is
locally free (it would be enough that it is torsion-free).
2. I would actually say that the adjointness you are using is both geometric and obvious. In other words, your map is simply the restriction of logarithmic differentials from an open set $V\subseteq X$ to $U\cap V$. (Note that by the definition of $U$, $U\cap V\neq\emptyset$): $$ \Gamma(V,\Omega_X^k(\log D))\to \Gamma(U\cap V, \Omega_X^k(\log D))\simeq \Gamma (U\cap V,
\Omega_U^k)\simeq \Gamma (V, j_*\Omega_U^k). $$
Thanks for your comments Sándor! – LMN Sep 27 '12 at 3:40
Oakton ACT Tutor
Find an Oakton ACT Tutor
...I have taken an 8th grader who did not know her addition facts and had her ready for algebra in one school year. I have also taught algebra and geometry to a gifted 3rd grader. I work with
parents so life-long learning habits are established.
32 Subjects: including ACT Math, reading, English, chemistry
...I invest in each and every one of my students and try to be flexible, accommodating and available. My goal is for each of my students to not only feel successful but BE successful. Feel free to
read my regular blog posts on math education.Having a strong understanding of Algebra 1 is quintessential for a student's success in higher mathematics.
24 Subjects: including ACT Math, reading, calculus, geometry
...I currently work as a professional economist. Though I am located in Arlington, Virginia, I am happy to travel to meet students, particularly to areas that are easily accessible via Metro.I
work as a professional economist, where I utilize econometric models and concepts regularly using both STA...
16 Subjects: including ACT Math, calculus, statistics, geometry
...I have found that this is a barrier for quite a few students. I have developed a number of methods for getting students past this barrier. I have picked up some of these methods from full time
Chemistry teachers, and have developed others myself.
13 Subjects: including ACT Math, chemistry, physics, calculus
...Actively facilitate activities, such as support services for children and families, fund-raising for human preservation and environmental protection and awareness, and obtain social-,
research-based-, and educational- contracts through intellectual, economic, social actions and resources. 2003 ...
64 Subjects: including ACT Math, chemistry, English, reading | {"url":"http://www.purplemath.com/Oakton_ACT_tutors.php","timestamp":"2014-04-19T23:22:19Z","content_type":null,"content_length":"23515","record_id":"<urn:uuid:0bdf957a-ab9c-41f6-b335-41cf9a254828>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00001-ip-10-147-4-33.ec2.internal.warc.gz"} |
TeX Resources on the Web
Additions and corrections are always welcome, please email webmaster@tug.org. (In fact, we are seeking a volunteer to do a systematic review and reorganization of this page; please contact us if you
are interested. A list of other tasks in TeX community is also available.)
If you have a general question, start with the TeX Frequently Asked Questions. If it doesn't help, try the visual FAQ.
Introductions to the TeX world:
General TeX help:
If you have questions not answered by the above, read on for more documentation links. The most widely used general help forums for TeX are (no guarantees, this is all done by volunteers):
LaTeX documentation:
LaTeX tutorials:
LaTeX templates:
All of these collections would welcome additions and corrections.
LaTeX reference:
LaTeX for particular fields:
Writing new LaTeX packages, classes, and styles:
Books on LaTeX:
Online references for other TeX-related software:
Plain TeX:
Overall TeX system:
Presentations about TeX:
The TeX Family in 2009 article is available online, originally published in AMS Notices magazine.
See also the list of TeX journals and publications, and the AMS lists of TeX resources and TeX-related publications.
Finally, the TeX category in the Open Directory Project has a large list of links.
Some notable TeX implementations that are entirely, or least primarily, free software:
The AMS also maintains a list of freeware and shareware TeX implementations.
If you want to inspect Knuth's own sources for educational or other such purposes, without any of the scaffolding and enhancements that have come to surround them in modern systems, you can get them
from Stanford; the material is also mirrored on CTAN.
TeX engines and extensions
LaTeX, the biggest and most widely used TeX macro package.
ConTeXt, Hans Hagen's powerful, modern, TeX macro package; a serious contender for those wanting a production-quality publishing system. Integrated support for XML, MetaPost, and much more. The
ConTeXt Garden Wiki is a good place to start. Also, Aditya Mahajan writes regular introductory ConTeXt articles for TUGboat: fonts, tables, tables II, indentations, Unicode/OpenType math, conditional
processing (modes), paper setup, images. Dave Walden has also written on ConTeXt: Trying ConTeXt and A bigger experiment.
Free editors and front-ends (see also vendors below):
Packages and programs for making slide presentations:
Packages and programs dealing with graphics:
PSTricks graphics:
PGF/TikZ graphics:
Xy-pic graphics:
Other programs for creating graphics:
Formats and large macro packages:
AMS-TeX and AMS-LaTeX, the American Mathematical Society's TeX packages
EDMAC, Dominik Wujastyk and John Lavagnino's package for typesetting critical editions in plain TeX
Eplain, extended plain format
LaTeX 3, new work from the LaTeX developers (news).
The REVTeX package
Shyster, James Popple's case-based legal expert system which produces LaTeX output.
DVI drivers:
PDF viewers (concentrating on free software):
Excalibur, the Mac TeX-aware spell checker
Kdissert, a writing tool to help structure ideas and concepts (for KDE).
OpenOffice math plugin that allows writing LaTeX formulas in OpenOffice documents.
PerlTeX, Perl programming plus TeX typesetting.
PerlTeX: Defining LaTeX macros using Perl, an article by Scott Pakin, author of PerlTeX.
Programming with PerlTeX, an article by Andrew Mertz and William Slough using graduated examples.
ProofCheck, a system for writing mathematical proofs in a directly (La)TeXable format.
PyTeX, Python programming plus TeX typesetting.
stepTeX, porting the famous NeXTStep TeX previewer
TechWriter Pro Used in connection with EasiWriter, TechWriter provides an equation editor which exports to HTML, as a TeX file, and has Java support.
LaTeX Generator, for making LaTeX template documents (in German).
preview-latex, WYSIWYGish in-line previews right in your Emacs source buffer
texd, TeX as a daemon with a callable interface, written in Python.
TeXmacs, a WYSIWYG editor for typing technical and mathematical text.
TeXoMaker, free software for teachers to create and manage exercise sheets in LaTeX.
MathType and the Equation Editor in MS Word. MathType is a WYSIWYG equation editor that outputs TeX.
Label & card printing resources with TeX and LaTeX, a discussion of packages to print labels, envelopes, etc.
Multi-lingual typesetting in scripts and languages around the world:
BibTeX and bibliographies
BibTeX 101, an introduction to BibTeX by Oren Patashnik.
Massive bibliography collection, from Nelson Beebe, including bibnet and the TUG bibliography archive, both of which are mirrored on tug.org.
Tame the BeaST: The B to X of BibTeX, a comprehensive BibTeX manual by Nicolas Markey.
Brief BibTeX description, from Norm Walsh.
Aigaion, a php-based bibliography management system based on BibTeX.
ebib, BibTeX database manager for Emacs.
gbib, a BibTeX manager for GNU/Linux, including integration with LyX.
JabRef, Java-based GUI for managing BibTeX databases.
Pybliographer, a BibTeX tool which can be used for searching, editing, reformatting, etc. It provides Python classes, has a graphical GNOME interface, and references can be inserted directly into LyX
(version 1.0.x running on the GNOME desktop.
BibDB, a BibTeX Database Manager (DOS and Windows) by Eyal Doron.
BibEdit, program for editing BibTeX files under Windows NT and 98.
BibTeXMng, a BibTeX manager for Windows.
HotReference.com, a community site for sharing bibliography citations and article reviews, with BibTeX support.
More web-related projects: | {"url":"http://www.tug.org/interest.html","timestamp":"2014-04-20T08:46:16Z","content_type":null,"content_length":"55789","record_id":"<urn:uuid:0b2bf404-0753-482b-be57-ad51ed2598c0>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Topic: Rotation matrix in N dimensions
Replies: 7 Last Post: Feb 16, 2013 2:51 PM
Re: Rotation matrix in N dimensions
Posted: Feb 16, 2013 2:51 PM
Below is a link to "Homogeneous Transformation Matrices" which gives explicit methods to construct a homogeneous matrix that effects an n-dimensional rotation R given:
(1) an n+1 x 2 matrix h representing the invariant n-2 dimensional flat of the rotation, and the angle of rotation r about the invariant n-2 dimensional flat.
(2) two n+1 by 1 matrices h1 and h2 representing oriented intersecting hyperplanes, where R carries h1 to h2.
Link: http://www.silcom.com/~barnowl/HTransf.htm
Note: n in the link means n+1 here.
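The link gives the homogeneous-matrix construction; as a complementary sketch (a standard formula, not the one from the note), a rotation of R^n leaving an (n-2)-dimensional flat invariant can be built directly from an orthonormal basis u, v of the plane of rotation:

```python
import math

def plane_rotation(u, v, theta):
    """n x n rotation by theta in the plane spanned by the orthonormal
    vectors u and v (rotates u toward v for theta > 0):
        R = I + sin(t)(v u^T - u v^T) + (cos(t) - 1)(u u^T + v v^T)
    Everything orthogonal to the u-v plane is left fixed."""
    n = len(u)
    s, c = math.sin(theta), math.cos(theta)
    return [[(1.0 if i == j else 0.0)
             + s * (v[i] * u[j] - u[i] * v[j])
             + (c - 1.0) * (u[i] * u[j] + v[i] * v[j])
             for j in range(n)] for i in range(n)]

def matvec(R, x):
    return [sum(rij * xj for rij, xj in zip(row, x)) for row in R]

# A quarter turn in the e1-e2 plane of R^4 carries e1 to e2:
R = plane_rotation([1, 0, 0, 0], [0, 1, 0, 0], math.pi / 2)
print([round(c, 6) for c in matvec(R, [1, 0, 0, 0])])   # [0.0, 1.0, 0.0, 0.0]
```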
Date Subject Author
7/17/03 Rotation matrix in N dimensions Shivani Agarwal
7/18/03 Re: Rotation matrix in N dimensions anonymous
7/18/03 Re: Rotation matrix in N dimensions Polycell
7/19/03 Re: Rotation matrix in N dimensions anonymous
7/19/03 Re: Rotation matrix in N dimensions Polycell
9/20/06 Re: Rotation matrix in N dimensions Carl
3/1/09 Re: Rotation matrix in N dimensions fenfen
2/16/13 Re: Rotation matrix in N dimensions Daniel VanArsdale | {"url":"http://mathforum.org/kb/message.jspa?messageID=8349412","timestamp":"2014-04-16T14:09:02Z","content_type":null,"content_length":"24655","record_id":"<urn:uuid:21eb2919-eea9-4fdb-a618-6d90a6f80f2e>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00611-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solid Angle
September 1st 2011, 11:45 AM #1
Sep 2011
I was reading my astrophysics textbook and came across solid angles. I'm not sure I fully understand; for example, there was a problem in the book that went as follows.
The attached "math.jpg" shows a light source (yellow) in the centre of an arc. The problem is 2D, but the arc is rotated about the x axis to form a 3D sphere. I have the flux (F photons/sec)
crossing the red line (height h cm, zero thickness). But how do I translate that into the flux crossing the entire area after it's been rotated to be 3D?
I guess it would form a cone shape and I want the area of the face of that cone or something. So do I need to multiply the flux by the solid angle? In this case would that be 4*pi*r^2?
"math2.jpg" would represent the same problem, but just showing the cone bit. So again I have the flux crossing line of length h and zero thickness. And I want the flux that would come out of the
entire cone.
I hope that makes sense. I would really appreciate any help please.
Thank you. | {"url":"http://mathhelpforum.com/advanced-math-topics/187095-solid-angle.html","timestamp":"2014-04-19T03:08:52Z","content_type":null,"content_length":"30416","record_id":"<urn:uuid:a1705d67-ea11-440e-a18d-b3ceacbcf513>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00541-ip-10-147-4-33.ec2.internal.warc.gz"} |
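For what it's worth, the standard bookkeeping for an isotropic point source goes through the solid angle, not an area (so the relevant factor is not 4*pi*r^2): a cone of half-angle alpha subtends Omega = 2*pi*(1 - cos(alpha)) steradians, and the fraction of all emitted photons passing through it is Omega/(4*pi). Here alpha is assumed to satisfy sin(alpha) = (h/2)/r for a chord of height h on an arc of radius r; a sketch with made-up numbers:

```python
import math

def cone_solid_angle(half_angle):
    """Solid angle (steradians) of a right circular cone of given half-angle."""
    return 2.0 * math.pi * (1.0 - math.cos(half_angle))

def rate_through_cone(total_rate, half_angle):
    """Photons/sec through the cone, assuming an isotropic point source."""
    return total_rate * cone_solid_angle(half_angle) / (4.0 * math.pi)

# Made-up geometry: a chord of height h = 2 cm on an arc of radius r = 5 cm
h, r = 2.0, 5.0
alpha = math.asin((h / 2.0) / r)

print(cone_solid_angle(alpha))         # ~0.127 steradians
print(rate_through_cone(1e6, alpha))   # photons/sec out of 10^6 emitted
```

Sanity checks: a half-angle of pi gives the full sphere (Omega = 4*pi, all photons), and pi/2 gives a hemisphere (half of them).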
A Universe Out of Chaos
How did the universe come to be? We don’t know yet, of course, but we know enough about cosmology, gravitation, and quantum mechanics to put together models that stand a fighting chance of
capturing some of the truth.
Stephen Hawking‘s favorite idea is that the universe came out of “nothing” — it arose (although that’s not really the right word) as a quantum fluctuation with literally no pre-existing state. No
space, no time, no anything. But there’s another idea that’s at least as plausible: that the universe arose out of something, but that “something” was simply “chaos,” whatever that means in the
context of quantum gravity. Space, time, and energy, yes; but no order, no particular arrangement.
It’s an old idea, going back at least to Lucretius, and contemplated by David Hume as well as by Ludwig Boltzmann. None of those guys, of course, knew very much of our modern understanding of
cosmology, gravitation, and quantum mechanics. So what would the modern version look like?
That’s the question that Anthony Aguirre, Matt Johnson and I tackled in a paper that just appeared on arxiv. (Both of my collaborators have also been guest-bloggers here at CV.)
Out of equilibrium: understanding cosmological evolution to lower-entropy states
Anthony Aguirre, Sean M. Carroll, Matthew C. Johnson
Despite the importance of the Second Law of Thermodynamics, it is not absolute. Statistical mechanics implies that, given sufficient time, systems near equilibrium will spontaneously fluctuate
into lower-entropy states, locally reversing the thermodynamic arrow of time. We study the time development of such fluctuations, especially the very large fluctuations relevant to cosmology.
Under fairly general assumptions, the most likely history of a fluctuation out of equilibrium is simply the CPT conjugate of the most likely way a system relaxes back to equilibrium. We use this
idea to elucidate the spacetime structure of various fluctuations in (stable and metastable) de Sitter space and thermal anti-de Sitter space.
It was Boltzmann who long ago realized that the Second Law, which says that the entropy of a closed system never decreases, isn’t quite an absolute “law.” It’s just a statement of overwhelming
probability: there are so many more ways to be high-entropy (chaotic, disorderly) than to be low-entropy (arranged, orderly) that almost anything a system might do will move it toward higher entropy.
But not absolutely anything; we can imagine very, very unlikely events in which entropy actually goes down.
In fact we can do better than just imagine: this has been observed in the lab. The likelihood that entropy will increase rather than decrease goes up as you consider larger and larger systems. So if
you want to do an experiment that is likely to observe such a thing, you want to work with just a handful of particles, which is what experimenters succeeded in doing in 2002. But Boltzmann teaches
us that any system, no matter how large, will eventually fluctuate into a lower-entropy state if we wait long enough. So what if we wait forever?
It’s possible that we can’t wait forever, of course; maybe the universe spends only a finite time in a lively condition like we see around us, before settling down to a truly stable equilibrium that
never fluctuates. But as far as we currently know, it’s equally reasonable to imagine that it does last forever, and that it is always fluctuating. This is a long story, but a universe dominated by a
positive cosmological constant (dark energy that never fades away) behaves a lot like a box of gas at a fixed temperature. Our universe seems to be headed in that direction; if it stays there, we
will have fluctuations for all eternity.
Which means that empty space will eventually fluctuate into — well, anything at all, really. Including an entire universe.
This basic story has been known for some time. What Anthony and Matt and I have tried to add is a relatively detailed story of how such a fluctuation actually proceeds — what happens along the way
from complete chaos (empty space with vacuum energy) to something organized like a universe. Our answer is simple: the most likely way to go from high-entropy chaos to low-entropy order is exactly
like the usual way that systems evolve from low entropy to high, just played backward in time.
Here is an excerpt from the paper:
The key argument we wish to explore in this paper can be illustrated by a simple example. Consider an ice cube in a glass of water. For thought-experiment purposes, imagine that the glass of
water is absolutely isolated from the rest of the universe, lasts for an infinitely long time, and we ignore gravity. Conventional thermodynamics predicts that the ice cube will melt, and in a
matter of several minutes we will have a somewhat colder glass of water. But if we wait long enough … statistical mechanics predicts that the ice cube will eventually re-form. If we were to see
such a miraculous occurrence, the central claim of this paper is that the time evolution of the process of re-formation of the ice cube will, with high probability, be roughly equivalent to the
time-reversal of the process by which it originally melted. (For a related popular-level discussion see <a href="http://blogs.discovermagazine.com/cosmicvariance/2010/03/16/
from-eternity-to-book-club-chapter-ten/" From Eternity to Here, ch. 10.) The ice cube will not suddenly reappear, but will gradually emerge over a matter of minutes via unmelting. We would
observe, therefore, a series of consecutive statistically unlikely events, rather than one instantaneous very unlikely event. The argument for this conclusion is based on conventional statistical
mechanics, with the novel ingredient that we impose a future boundary condition — an unmelted ice cube — instead of a more conventional past boundary condition.
Let’s imagine that you want to wait long enough to see something like the Big Bang fluctuate randomly out of empty space. How will it actually transpire? It will not be a sudden WHAM! in which
nothingness turns into the Big Bang. Rather, it will be just like the observed history of our universe — just played backward. A collection of long-wavelength photons will gradually come together;
radiation will focus on certain locations in space to create white holes; those white holes will spit out gas and dust that will form into stars and planets; radiation will focus on the stars, which
will break down heavy elements into lighter ones; eventually all the matter will disperse as it contracts and smooths out to create a giant Big Crunch. Along the way people will un-die, grow younger,
and be un-born; omelets will convert into eggs; artists will painstakingly remove paint from their canvases onto brushes.
Now you might think: that’s really unlikely. And so it is! But that’s because fluctuating into the Big Bang is tremendously unlikely. What we argue in the paper is simply that, once you insist that
you are going to examine histories of the universe that start with high-entropy empty space and end with a low-entropy Bang, the most likely way to get there is via an incredible sequence of
individually unlikely events. Of course, for every one time this actually happens, there will be countless times that it almost happens, but not quite. The point is that we have infinitely long to
wait — eventually the thing we’re waiting for will come to pass.
And so what?, you may very rightly ask. Well for one thing, modern cosmologists often imagine enormously long-lived universes, and events like this will be part of them, so they should be understood.
More concretely, we are of course all interested in understanding why our actual universe really does have a low-entropy boundary condition at one end of time (the end we conventionally refer to as
“the beginning”). There’s nothing in the laws of physics that distinguishes between the crazy story of the fluctuation into the Big Crunch and the perfectly ordinary story of evolving away from the
Big Bang; one is the time-reverse of the other, and the fundamental laws of physics don’t pick out a direction of time. So we might wonder whether processes like these help explain the universe in
which we actually live.
So far — not really. If anything, our work drives home (yet again!) how really unusual it is to get a universe that passes through such a low-entropy state. So that puzzle is still there. But if
we’re ever going to solve it, it will behoove us to understand how entropy works as well as we can. Hopefully this paper is a step in that direction.
In the second to last paragraph on page three the first sentence reads “[t]his story seem surprising not because the net result is unlikely, but because it consists of such a large number of
individually unlikely events.”
I think the third word should be “seems”.
How’s that for a deep, thoughtful comment on the substance of the paper?
• http://blogs.discovermagazine.com/cosmicvariance/sean/
Oh no! This changes everything!
Thanks for the catch.
No problem at all. When the revision is posted, as now it must be, I expect to be listed in the acknowledgments
• http://calamitiesofnature.com
During the very long time needed to wait for an ice cube to “unmelt” won’t there be many instances where the ice cube starts to unmelt but the unmelting doesn’t go to completion?
Similarly, if our Universe was formed in a random fluctuation like you describe, wouldn’t it be more likely that the Universe only proceeded partially toward a big crunch rather all the way to a
big crunch?
• http://blogs.discovermagazine.com/cosmicvariance/sean/
Tony, yes, that is exactly right. Which is why the idea that the universe is a fluctuation is very hard to make work — it’s very difficult to see why it would be such a big fluctuation.
This intuitive notion becomes a little tricky when gravity and curved spacetime are involved; the prospect of inflation in the early universe confuses things a bit. One of our motivations was to
un-confuse things as much as we can. And our tentative conclusion is that inflation doesn’t really help in this particular case.
Tony: yes to both. In fact this is, in essence, the major reason why a fluctuation is not a viable explanation for the universe we see.
Well Sean, I’ve come to the conclusion that you’re a bit of a one-trick pony.
I’ve read your book From Eternity twice. Your obsession with statistical mechanics and the 2nd law of thermodynamics assumes too much. You make wild leaps from small humps, my friend. Waiting forever doesn’t mean every possibility will happen. Entropy is an easy subject to get stuck in, and to confuse the listener with, when discussing the origins of the universe and analysing time. Sorry, I just distrust your logic.
Change the record or at least make a better job of your argument.
Kind regards
I think instead of fluctuations we had a transfer of matter from the preceding universe. This matter was of E8 symmetry (see papers of Lisi et al.); this matter had very low entropy.
• http://j.mp/drb123p
Thanks For Your Comments – Seems Your Proposal Is That The Universe Did NOT Come From “Nothing” – But From “Something” (“Chaos” and/or “Fluctuations”) Instead – Is This Type Of “Somethingness”
*Always* Present? – And Related Perhaps – Is True Absolute “Nothingness” *Always* NOT Present? – In Any Case – Thanks For Your Comments – And – Enjoy!
It actually might be a good idea to believe in a creator: that way we can leave it up to him/her to figure out how the universe they found themselves in, before they created this one, came to be
i.e., it puts off all the brain-wracking onto them!
There has been lots of thought and discussion about how the universe came about and what there was before the Big Bang, but I haven’t seen anything on how the “laws of physics” came about. They
seem to be constant and unchanging. We just assume that they have always been so. I know we can see back in time to nearly the beginning (edge?) of the universe and the physical “constants” seem
to be constant (taking into account general relativity), but how do we know that the “laws of physics” remained the same through the Big Crunch?
Just wondering.
So if we go along with the idea that the big bang arose from a fluctuation, why do we think that fluctuation actually proceeded all the way to the big bang? Isn’t it more likely that the
fluctuation “almost” got there, and then reversed? Or is that even less likely then going all the way to the big bang and proceeding from there? Am I even making sense?
“we impose a future boundary condition — an unmelted ice cube — instead of a more conventional past boundary condition.”
…doesn’t this mean the future is definitely an unmelted ice cube if this is also the boundary condition? My understanding is that, regarding any theory of the universe, the only interesting parts are the boundary conditions at time zero and approaching infinity, i.e., what happens at those points to the universe. So how can you derive a theory by just assuming certain boundary conditions when it’s the boundary conditions we are trying to understand in the first place?
I’ve always liked this idea, but what about Boltzmann Brains? If you are going to use the 2nd law to prove anything here you are going to have to use statistics. Don’t the statistics
overwhelmingly say that I (as a conscious observer) should be a Boltzmann Brain?
So … of all histories that include a low-probability bottleneck in the future, the highest probability history is one which gradually approaches that low-probability bottleneck? Reminds me very
vaguely of “climbing Mt Improbable.”
Hi Sean,
I just have a question regarding this general idea of the universe from a high-entropy state. If the universe is the result of a stochastic fluctuation we would expect this to be the minimal
fluctuation, as this is much more probable than anything else. Now we can use anthropic reasoning here to say that it is the minimal fluctuation needed to create scientists to observe it. However,
judging by the vastness of the universe, this fluctuation is much more gratuitous (this is of course the “Boltzmann’s brain paradox”). Does modern cosmology have anything to say about this
improbable state of the universe? In other words, is there any reason at all to believe that the universe is a minimal fluctuation?
Concerning #11: the laws of physics have only one chance to change, at each big bang, and E8 symmetry is involved in setting the rules for the next universe. In this way we can have an evolving universe with anthropic characteristics, which has long been a mystery.
“the idea that the universe is a fluctuation is very hard to make work — it’s very difficult to see why it would be such a big fluctuation.”–Sean
@Sean. Not to get all William-Blakean, but if we’re inside that fluctuation then isn’t that fluctuation only “big” from our perspective? While outside it, within the mother universe that our
daughter universe fluctuated from, that fluctuation might be less than a nanoparticle grain of sand?
• http://skepticsplay.blogspot.com
I was imagining a world in which there are fossils, but the fossils are of creatures that clearly couldn’t have evolved by natural selection. Such a world would be unlikely to come about from a
big bang initial state, and would therefore be unlikely to evolve into a big bang state. But then, everything is unlikely to evolve into a big bang state because it’s such low entropy. If we’re
already positing a path from chaos to our universe, is it really so unlikely that the path will happen to include fossils of non-evolved animals?
If I understand your description of the paper, the answer is, “Yes, it is much more unlikely.” Do I understand correctly?
Been mulling this. You’re arguing that the probability of some history leading up to the Big Bang, conditional upon there being a Big Bang, is maximized when that history looks like a
time-reversed Big Bang. Right?
I’m saying we may have had millions of big bangs, recycling the same immortal matter over and over again, recycling it through an E8 symmetry entity each time. The laws of physics can change only while the E8 symmetry is controlling things. In this way the physical laws could gradually change over time (evolve), resulting in the anthropic universe we observe.
So if our current Universe is one of those fluctuations going backward to the Big Bang (but of course we perceive time the other way) AND it is much more likely to not make it all the way back to
the Big Bang (a larger fluctuation), doesn’t that mean the creationists might be right?! The Universe was much more likely “created” Just So in order to fool us; it really “started” (ends) at
some point well “after” the Big Bang.
Regarding Hawking’s idea – I fail to understand how the “laws of physics” could “allow” the universe to come into existence from nothing.
The laws of physics are not transcendent entities – they are just the properties of the various constituents of the universe.
But nothing would imply no universe and hence no “laws of physics” to allow anything.
Can a Boltzmann brain be conscious? Who observes the states of the Boltzmann brain? Who observes the observer of the states of the Boltzmann brain?
Is there a transcendent timeless reality out there from which our universe emerged? Could this explain the existence of our universe? What can we discover about this transcendent reality? Do we
have direct experience of this transcendent reality?
In Tegmark’s Ultimate Ensemble of all possible mathematical structures, what is the spotlight in the darkness shining upon a particular mathematical structure actualizing it in the sea of
potential mathematical structures? If all that can exist exists, why are we not all?
• http://www.jonstraveladventures.blogspot.com
“Along the way people will un-die, grow younger, and be un-born”
and along this path which way will their memories point? To what is (from our perspective) going to happen (this seems somewhat acausal – memories created before the event occurs), or the other
way around, in which case they remember their ‘old’ years and look forward to their ‘young’ years – all seems pretty strange to me. If it’s the former then we could just as well be in the
devolving universe state, but wouldn’t know because all we remember is what is to come, which we think is our past.
If indeed we are in the devolving universe state and heading in the direction of what we think of as the big bang, then there would be a very large chance that this fluctuation will stop pretty
soon and we’ll start going back in the ‘normal’ direction – it seems that in this scenario you never actually need to reach the big bang: The ice cube can half freeze, and then start melting
If this is the case, then the big bang need never have happened, but we could just have appeared to have come from it.
My thought is that the fault in the argument is that the progress towards a lower-entropy state should take us through the state with complex structures we see around us, i.e., the egg coming back together etc. Surely there are simpler paths to a low-entropy state than civilisation undoing itself in perfect unison (though this may be precisely what the US administration is attempting).
Following up on Jonathan’s comment, and Sean’s statement that artists will ‘painstakingly remove paint from their canvases’ in this scenario:
It seems to me that these hypothetical artists will believe that they are actually applying paint to their canvases, not removing it. The reason is that their neurons etc. are simply following a
time-reversal of what we consider ‘normal’, and thus their consciousness at any instant in time is identical whether cosmological entropy is increasing or decreasing. They will not remember the
‘past’, but only their ‘future’.
Extrapolating this a bit, it seems like this reasoning implies that observers have *no way* of judging whether the universal ice cube is in the process of melting or freezing. Therefore this
theory must be necessarily untestable by conscious beings, and thus it is truly a philosophical rather than a physical effort at understanding the world.
As an alternative to Quantum Theory there is a new theory that describes and explains the mysteries of physical reality. While not disrespecting the value of Quantum Mechanics as a tool to
explain the role of quanta in our universe. This theory states that there is also a classical explanation for the paradoxes such as EPR and the Wave-Particle Duality. The Theory is called the
Theory of Super Relativity and is located at Super Relativity Website. This theory is a philosophical attempt to reconnect the physical universe to realism and deterministic concepts. It explains
the mysterious.
If the universe is cyclical and each cycle is larger than the previous cycle, then moving towards the low entropy of a big crunch is also moving towards the higher entropy of the next cycle.
Ian, I don’t think that follows. Of all possible histories, any which go through a big crunch are lower entropy than any which increase entropy indefinitely. The universe will tend to shy away
from low-entropy states (by the definition of low entropy), so any history which goes through a big crunch is less probable than a history which does not.
I understand the reasoning, but why are you making the point that the universe should always be the same? If I just displace or remove some particles it still kind of looks like our universe. Even statistical physics tells us that you should be looking at ensembles of universes. Maybe the number of our-universe-like universes is just quite large.
• http://rowingpresents.com
Looked at from the perspective of a closed system, if we leave an ice cube (low-entropy) in a glass of water at room temperature, in a few minutes it will melt (high-entropy) and cool the water
inside. If we then remove a few milliliters of water from the glass (high-entropy) and freeze it back into an ice cube (low-entropy), we have successfully reversed the second law of
thermodynamics within the system and can begin the process again. (Also note: freezing is equivalent to ‘unmelting,’ and less of an awkward term). If the above accurately describes a
‘small-scale’ closed system, then it only makes sense that a ‘large-scale’ closed system, such as the universe, works the same way.
To me, the only issue arises with the concepts of equilibrium, and closed or open system. For example, if two systems are in thermal equilibrium, then their temperatures are the same. Thus if the
definition of an open system is that matter may flow in and out of the system boundaries, then when two systems are in thermal equilibrium, it must also be equivalent to say that this
equilibrated system is no longer open, but closed because there is no system boundary between them.
The universe is equivalent to an ice cube melting in a glass of room temperature water, or a lake freezing over in winter and then thawing again in spring; the universe must be a self-regulating
process of continual melting and freezing.
Sean, I look forward to your comments if you have any.
• http://www.jonstraveladventures.blogspot.com
Tyler, the problem with your argument is that in the re-freezing you are not considering the thing that you are using to re-freeze the water. If you include this then the entropy will increase as
you put in energy to freeze the water – your machine will use fuel, will heat up, etc. If you don’t include this, then you are not dealing with a closed system.
Sean (or anyone!) – I’m reading ‘From Eternity to Here’ at the moment and I’ve been niggled by something since Chapter 10. This post has increased my niggle. The post (and some comments) seem to
hint at what I’m thinking, but I haven’t seen it stated explicitly (I’m not a physicist, so more than likely I’m missing something). So: if the arrow of time is determined by the second law (and
specifically the fact that memory is a coherent concept only under the assumption of a lower entropy past) then how does it make sense to talk about a fluctuation from maximum entropy at all? You
cannot describe a fluctuation without a dimension of time (as in the horizontal axis of figure 54 in the book), but the initial part of the fluctuation involves entropy decreasing in the forward
direction of time, which can’t happen by definition. In this context does it make any sense to ask how long you need to wait for a maximum entropy universe to experience a fluctuation of the
magnitude that would lead to what we observe? Surely in a uniformly high entropy universe there is no such thing as time?
Reply to (32): Neal: When a star collapses before going supernova, the total entropy of the star system doesn’t decrease. Maybe big crunches are similar to stellar collapses in that space doesn’t
collapse and the entropy of the universe doesn’t actually decrease when moving through them.
Interesting, and in some ways similar both to Nietzsche’s eternal recurrence and Asimov’s last question (just to pick two). Of course, those didn’t make it to arxiv.
• http://vacua.blogspot.com
If a universe really were to revert to its initial state via the same stages by which it got to its endpoint, the conscious animals in that universe would not experience themselves living
backwards. They would still remember the future, not the past, assuming, of course, that mental states are absolutely determined by physical ones. So how exactly do you propose to distinguish the
road up and the road down in your story? Maybe things are running backwards right now.
• http://blogs.discovermagazine.com/cosmicvariance/sean/
As people have noted (and I’ve said many times myself, although not in this post), the “backwards-living” people in the universe we describe wouldn’t think they were living backwards at all. We
always remember the direction in which entropy was lower, so their evolution is internally indistinguishable from an ordinary Big Bang.
The only difference, therefore, is external. We are talking about processes that happen in a universe that lasts forever. Inside that universe, there will inevitably be universe-creating
fluctuations like this (as well as an enormously larger number of smaller fluctuations), and then these fluctuations will decay back to equilibrium. It only makes sense to say that the arrow of
time is “backwards” in any one region when we’re comparing it to other regions.
However, the fact that there are many much smaller fluctuations tends to imply that this is not the right story of the universe. (If it were, we would probably live in a much smaller
fluctuation.) So either the universe is not eternal, or something else, as we briefly touch on in the paper.
Stephen Hawking’s favorite idea is that the universe came out of “nothing” — it arose (although that’s not really the right word) as a quantum fluctuation with literally no pre-existing state. No
space, no time, no anything.
These ideas are receiving a lot more attention in the popular media and press, and I think that a few pointers to the technical ideas that motivate them are necessary. So here’s some scientific
background and links on universe ex nihilo theories, a background that isn’t presented widely enough, even at scienceblogs that address the subject specifically.
Guth’s Inflationary Universe is a must-read, in which Guth explains ex nihilo theories with the colorful statement:
The question of the origin of the matter in the universe is no longer thought to be beyond the range of science—everything can be created from nothing … it is fair to say that the universe is
the ultimate free lunch.
Guth provides technical reasons for this claim:
Now we can return to a key question: How is there any hope that the creation of the universe might be described by physical laws consistent with energy conservations? Answer: the energy
stored in the gravitational field is represented by a negative number! … The immense energy that we observe in the form of matter can be canceled by a negative contribution of equal
magnitude, coming from the gravitational field. There is no limit to the magnitude of energy in the gravitational field, and hence no limit to the amount of matter/energy it can
cancel. For the reader interested in learning why the energy of a gravitational field is negative, the argument is presented in Appendix A.
Guth goes on to explain a simple argument for all this that if you grasp, you will understand a fact of gravity that evaded Newton. Unfortunately, Google books doesn’t have Appendix A online.
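As a rough plausibility check on this cancellation (my own back-of-the-envelope numbers, not Guth's), one can compare the Newtonian self-gravitational energy −GM²/R of the observable universe against its rest energy Mc². The two come out within an order of magnitude of each other, which is what makes an exact relativistic cancellation not obviously absurd. The values of M and R below are crude assumed estimates.

```python
G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8     # speed of light, m/s
M = 1e53        # kg: assumed rough mass of the observable universe
R = 4.4e26      # m: assumed rough radius of the observable universe

rest_energy = M * c**2          # positive energy locked in matter
grav_energy = -G * M**2 / R     # Newtonian gravitational self-energy (negative)
ratio = abs(grav_energy) / rest_energy

print(f"rest energy:               {rest_energy:.1e} J")
print(f"gravitational self-energy: {grav_energy:.1e} J")
print(f"|grav| / rest:             {ratio:.2f}")
```

This is Newtonian heuristics only; Guth's actual argument (his Appendix A) concerns the energy of the gravitational field in general relativity, where the cancellation can be exact.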
Guth’s technical explanation above is what is meant by the nontechnical, poetic description, like Hawking’s: “Because there is a law like gravity, the universe can and will create itself from nothing.”
Here are some pointers to a quick technical explanation of the creation of a universe from literally nothing subject to the laws of quantum mechanics.
A technical account of the universe ex nihilo, following Vilenkin, “Creation of universes from nothing”. Physics Letters B Volume 117, Issues 1-2, 4 November 1982, Pages 25–28. Available here.
1. Observe the Friedmann–Lemaître–Robertson–Walker metric for universal expansion:
ds² = dt² – a(t)²|dx|²
This is the space-time geometry with the spatial scale term a(t) describing the growth/contraction of the universe. This is Vilenkin’s equation (2).
2. Solve the evolution equation:
a(t) = (1/H)cosh(Ht)
where H² = (8π/3)Gρ, with H the Hubble parameter.
This is Vilenkin’s equation (3). So far, there is no explanation of a universe from nothing because the de Sitter space isn’t nothing, as everyone agrees.
3. Observe that at t = 0, the physics has the same form as a potential barrier, for which it is known that quantum tunneling is possible. The description of quantum tunneling involves a
transformation t → it, with i² = –1.
Now the evolution equation is
a(t) = (1/H)cos(Ht) [the cosine "cos", not the hyperbolic cosine "cosh"]
valid for |t| < π/(2H). This is Vilenkin’s equation (5). Space-time is simply the 4-sphere, a compact, i.e., bounded space. At the scale a(t) = 0, this space is literally nothing. No space-time, no
energy, no particles. Nothing. The interpretation of (5) is quantum tunneling from literally nothing to de Sitter space, the universe as we know it. See Figure 1a in Vilenkin’s paper for a
depiction of the creation of the universe from nothing using this explanation.
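A quick numerical sanity check on steps 2 and 3 (my own sketch, not part of Vilenkin's paper): the closed-slicing de Sitter solution obeys ȧ² = H²a² − 1, and after t → iτ the Euclidean version obeys a′² = 1 − H²a². The cosh and cos expressions above satisfy these identically, and they match smoothly at t = 0, where a = 1/H and ȧ = 0.

```python
import math

H = 0.7  # any positive value works for this check (units arbitrary)

def a_lor(t):    return math.cosh(H * t) / H   # Vilenkin's eq. (3)
def da_lor(t):   return math.sinh(H * t)

def a_euc(tau):  return math.cos(H * tau) / H  # Vilenkin's eq. (5), t -> i*tau
def da_euc(tau): return -math.sin(H * tau)

# Lorentzian branch: adot^2 = H^2 a^2 - 1
for t in (0.0, 0.5, 1.3, 2.0):
    assert abs(da_lor(t)**2 - (H**2 * a_lor(t)**2 - 1)) < 1e-12

# Euclidean branch, valid for |tau| < pi/(2H): a'^2 = 1 - H^2 a^2
for tau in (0.0, 0.3, 1.0):
    assert abs(da_euc(tau)**2 - (1 - H**2 * a_euc(tau)**2)) < 1e-12

# the two branches glue smoothly at t = tau = 0: a = 1/H, adot = 0
assert math.isclose(a_lor(0), 1 / H) and da_lor(0) == 0.0
print("cosh/cos solutions check out; smooth match at t = 0")
```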
Vilenkin says in the paper, “A cosmological model is proposed in which the universe is created by quantum tunneling from literally nothing into a de Sitter space. After the tunneling, the model
evolves along the lines of the inflationary scenario. This model does not have a big-bang singularity and does not require any initial or boundary conditions. … In this paper I would like to
suggest a new cosmological scenario in which the universe is spontaneously created from literally nothing, and which is free from the difficulties I mentioned in the preceding paragraph. This
scenario does not require any changes in the fundamental equations of physics; it only gives a new interpretation to a well-known cosmological solution. … The concept of the universe being
created from nothing is a crazy one. To help the reader make peace with this concept, I would like to give an example of a compact instanton in a more familiar setting. …”
This is what physicists mean by “nothing”. Nonexistent space-time, subject to the laws of quantum mechanics.
Guth provides a nontechnical explanation:
Alexander Vilenkin … suggested that the universe was created by quantum processes starting from “literally nothing,” meaning not only the absence of matter, but the absence of space and time
as well. This concept of absolute nothingness is hard to understand, because we are accustomed to thinking of space as an immutable background which could not possibly be removed. Just as a
fish could not imagine the absence of water, we cannot imagine a situation devoid of space and time. At the risk of trying to illuminate the abstruse with the obscure, I mention that one way
to understand absolute nothingness is to imagine a closed universe, which has a finite volume, and then imagine decreasing the volume to zero. In any case, whether one can visualize it or
not, Vilenkin showed that the concept of absolute nothingness is at least mathematically well-defined, and can be used as a starting point for theories of creation.
• http://www.loujost.com
I have a problem with the often-repeated idea that if the universe lasts an infinite amount of time, every possible state will happen. Doesn’t this assume the cardinality of the set of all
possible states is the same as the cardinality of time?
• http://www.loujost.com
“The point is that we have infinitely long to wait — eventually the thing we’re waiting for will come to pass.”
Even if the cardinalities of the set of all possible times and the set of all possible states were the same, it is still not clear to me that, given an infinite amount of time, every state will
occur. Consider a hotel with an infinite number of rooms. (Mathematicians spend an inordinal amount of time in this hotel when they think about infinity.) Suppose it has an infinitely long stream
of customers. The receptionist can choose to give every new guest an odd-numbered room. Even though there are an infinite number of guests, the hotel will never need to use the even-numbered
rooms, as the hotel will never fill up. (I think this example is given in a book called “Is God a mathematician” by Mario Livio.) If the rooms represent states, this suggests that not all states
need to be filled, even given an infinite amount of time.
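The room-assignment rule in this example is easy to make concrete (a trivial sketch of my own; a finite check standing in for the infinite claim):

```python
def room_for(guest):
    """Assign guest n (n = 0, 1, 2, ...) to odd-numbered room 2n + 1."""
    return 2 * guest + 1

# however many guests arrive, the even-numbered rooms are never used
occupied = {room_for(n) for n in range(1_000_000)}
assert all(room % 2 == 1 for room in occupied)
print("after a million guests, room 2 is still empty:", 2 not in occupied)
```

An infinite supply of guests therefore does not force every room, i.e., every state, to be visited; "infinitely many events" and "all possible events" are different claims.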
Sean, I don’t get why we should consider “backwards-living” peoples in the first place. This would implicate a “backward natural selection”, despite this makes very little difference in the total
entropy. Am I wrong to think that a sterile universe would be far more likely for the backward direction?
Usually people talk about the idea that in an infinite Universe (in space and time), anything that can happen with non-zero probability will happen. Some things have zero probability of
occurring, and those “states” will not happen. So if there is a law of nature that says no occupation of even-numbered rooms, then indeed, they will not be occupied.
So for example, even if the Universe is infinite, we don’t expect there to be some part of it where, e.g., like electric charges attract.
But since there is some non-zero probability of you existing and posting a comment here, and there likely is a non-zero probability of you having done the same thing except with one more
sentence, it is quite plausible that in an infinite Universe, that alter-Lou exists.
• http://www.loujost.com
But Trevor, ANY particular state has a probability of zero, if states have a one-to-one correspondence with the points on a number line. And again, if the state space has higher cardinality than
the time dimension, it is actually impossible that all states will be reached, even given an infinite amount of time.
• http://www.loujost.com
And Trevor, if you do not accept that the probability is zero of any particular state, I can use your reasoning to show that the hotel example works even if the receptionist does not enforce any
rule about room numbers. If the rooms are filled at random and there is no rule about what room a guest receives, you would say that there is a non-zero probability that the first room is not
taken (since this is not prohibited by any law in this new example). There are an infinite number of arrangements of guests in which the first room is not taken. Therefore you cannot argue that
all rooms must be taken (or all states must occur) just because there are an infinite number of guests (or an infinite amount of time in the universe). I think Sean’s statement that “eventually,
the thing we’re waiting for will come to pass” is not true.
“Mathematicians spend an inordinal amount of time in this hotel when they think about infinity.”
I see what you did there.
• http://www.rohanmedia.co.uk
Sweet. Every girl that ever dumped me is inevitably going to desperately claw her way to get me back. Can’t wait!
• http://www.loujost.com
Neal, I am impressed. I didn’t expect anybody would notice….it was just my very lame attempt at mathematical humor….
• http://www.loujost.com
Here is another way to see my point that not all states need to occur after an infinite amount of time. Suppose all possible states can be represented as points on a plane. There are an infinite
number of such points. Now imagine an infinitely long path drawn on this plane, representing the succession of states actually occupied by our universe. This path can be infinitely long without
going through most of the points on the plane. Indeed, it can even be infinitely long without crossing itself. So I do not see how the mere fact of infinite time obliges the universe to occupy
all possible states, or to repeat previous states. This appears to be an abuse of the concept of infinity.
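(A quick sketch of this point — an illustration of the math, with the path chosen arbitrarily: a path on the integer lattice can run forever along the diagonal, so it never visits any off-diagonal point and never repeats a state.)

```python
# Sketch: an arbitrarily long path through an unbounded state space that
# never visits most states and never repeats. The path walks the diagonal
# of the integer lattice, so every off-diagonal point is missed forever.

def diagonal_path(steps):
    """Yield the first `steps` states of the unending path (0,0), (1,1), (2,2), ..."""
    for n in range(steps):
        yield (n, n)

visited = set(diagonal_path(1_000_000))   # a million steps; any length behaves the same way
print((0, 1) in visited)    # False -- the point (0, 1) is never reached
print(len(visited))         # 1000000 -- all states distinct, so the path never repeats
```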
• http://j.mp/drb123p
Hawking: “Because there is a law like gravity, the universe can and will create itself from nothing.”
Smith: “…explanation of the creation of a universe from literally nothing subject to the
laws of quantum mechanics.”
Smith: “This is what physicists mean by “nothing”. Nonexistent space-time, subject to the
laws of quantum mechanics.”
Yes, But Are These Wordings *Really* Describing “Absolute Nothingness” As Noted (or
implied?) – Or Something *Less* (or more?) Than “Absolute Nothingness” – After All,
Conditions Are Noted (“law of gravity,” “laws of quantum mechanics,” and the like) As,
Perhaps, Pre-Existing(?) When Discussing “Absolute Nothingness” – Wouldn’t “Absolute
Nothingness” Be Entirely Un-Conditional, And Without Such Pre-Existing (original/initial/a
priori?) Conditions Also? – If Not, Aren’t Such Noted Conditions “Something”? – And Maybe,
Require An Explanation Of Their Beginnings From True “Absolute Nothingness” As Well?
In Any Case – Enjoy!
I think whether Mr. Jost is correct or not depends on the definition of possible events, and how the number of them changes over time. I think of an event which has non-zero probability as one
which has a finite probability of occurring in a finite amount of time. Under that definition, it seems to me at first that all non-zero-probability events would have to occur within infinite
time. This may be a naive way of defining probabilities, however. At any time in this universe, (as far as I know) there are a finite number of particles and a finite amount of energy, which can
only exist in a finite number of quantum states, so there is not an infinite set of events – yet. But maybe the number of different “possible” events is growing faster than the amount of time is
increasing, so the probability of a specific event is decreasing with time.
• http://www.loujost.com
JimV, if space and time are continuous variables, then the set of all possible position states is uncountably infinite and has the cardinality of the set of real numbers. This implies any
particular state has measure 0, and hence zero probability.
Certainly you are right that any physical constraints that make the set of states finite will change the argument.
This is easy to solve. Nothing is unstable. Since its ‘nothing’ there is nothing to confine what can and will arise from it. Anything can come out of nothing. So this nothingness quantumagically
fluctuated and God came out of it and then decided to create the Universe as we see it.
• http://j.mp/drb123p
This is easy to solve. Nothing is unstable. Since its ‘nothing’ there is nothing to confine what can and will arise from it. Anything can come out of nothing. So this nothingness quantumagically
fluctuated and God came out of it and then decided to create the Universe as we see it.
@gnome – Thanks For Your Comments – *Really* Enjoyed Your “quantumagical fluctuated” Phrasing – Nonetheless, Even Positing “Nothing is unstable” May Be Setting A Condition (or property of sorts?)
And Perhaps, May Need Some Explaining? – Seems That Descriptions Of “Nothing,” Even Those Noted Earlier In This Blog-Thread (by Smith/Guth/Vilenkin) As “Absolute Nothingness,” Actually Contains
“Something” (noted or implied) (ie, “law of gravity,” “laws of quantum mechanics,” “quantum gravity,” “fluctuations,” “chaos,” “low/high entropy,” etc) Instead – The “Spontaneous Creation” Of
“Something” From “Nothing” To, In Hawking’s Poetic Phrasing, “light the blue touch paper and set The universe going” Seems To Be Getting A Bit “curiouser and curiouser”? – In Any Case – Thanks
Again – And – Enjoy!
I like these ideas. That was really nice to read the whole paper, get it, and learn so much… sections III and IV are very beautiful. Love the math.
Hi Dennis,
I was being tongue-in-cheek and using terminology I’ve heard before loosely, I wasn’t being literally serious. A “true” nothing would really be devoid of all properties and that is why no limits
can be set on what could/would emerge from it, if anything. Part of the problem is that it’s difficult to talk about nothing without assigning some properties to it.
The main reason for my very tongue-in-cheek statement above was to call attention to how speculative and esoteric some cosmologists have become. I was just joining in the speculation game.
Anyhow, I think in these cases we’re reaching well beyond good empirical science. And scientists look down on metaphysics.
• http://j.mp/drb123p
@gnome – Thanks For Your Latest Comments – I *Entirely* Agree With Your Thinking – Also, I Was Well Aware, And *Thoroughly Enjoyed*, Your Earlier Tongue-In-Cheek Statements But, Nonetheless, Took
The Occasion To Try And Add A Bit More To The Main Discussion – Seems We’re *Very* Similar In Our Viewpoints – Thanks Again For Your Comments – And – Enjoy!
Sean is utterly confused about entropy and thermodynamics and time. His confusion seems to be the origin of all these papers, rather than there being a real physical issue to solve.
If the universe is in a pure state, and the laws are unitary, then BY DEFINITION the universe BEGAN in a state of ZERO ENTROPY. That is the thermodynamic definition of the BEGINNING. Sean thinks
that the universe should have begun with lots of entropy. But that is nonsense. If the universe was born with high entropy, it would have been born in a highly mixed state, which contradicts the
assumption (that most people agree with) that the universe is in a pure state. Instead the entropy at the beginning was of course zero. And its entropy grew over time as we lost track of the
details of the micro-state of the universe. That is how entropy works. Sean is utterly confused to think that the universe should have begun with high entropy – that's thermodynamically a
contradiction in terms.
By the way, if one imposes an (unphysical) future low entropy boundary condition, as Sean has done in this paper, then I think everyone would agree that the evolution from high to low would be
the time reversed version of low to high – how is this a new result? I thought this is obvious. Am I missing the novelty here?
Speaking of confusion, why is Lou Jost so confused about infinity?
• http://www.loujost.com
Hi Fireworks,
I don’t know, why don’t you tell me? My point was that an infinitely long path does not have to cross every point in an infinite multi-dimensional state space. That seems obvious, and it is easy
to give simple examples of curves that are infinitely long and yet do not cover an unbounded plane.
Just watched Hawkin’s Discovery special. Please bear in mind, I’m not a scientist. But the Hawkin’s explanation leaves out “the spark” (order) element. What triggered the Bang? Whatever it was,
it’s occurrence might have been in/with time. Not before. Therefore, I think a conception of chaos previous to the bang is more consistent with the behavior of the universe as we know it. So yes,
Dr. Carroll, I look forward to your posts on the subject. You have a fan rooting for you in Panama. Central America.
• http://autodynamicslborg.blogspot.com/
The Carezani’s Cosmology elevating the Mass Decay-Energy absorption to a Universal Law prove that the Thermodynamics seconf law is incorrect. As consequence the Universe’s Entropy is constant
because energy absorption is creating low entropy. See the whole blog at: http://autodynamicslborh.blogspot.com/
Lucy Haye Ph. D.
SAA’s representative.
@Lou Jost 63.
in any useful statement about a continuous phase, one must coarse grain. This leads to a countable set of allowed states. If the probability for each is non-zero (e.g., they are all equally
likely), then each allowed state will occur if one waits long enough. So what are you so confused about?
• http://www.loujost.com
Fireworks, look at my Comment 55. I was very clear that if you add physical assumptions such as your “coarse-grain” assumption, the argument changes. My point was a mathematical one about
infinity, and you said I was confused about that. I stand by my point that an infinitely long path through an unbounded state space does not have to pass through every point in the space.
I think that coarse-graining also does not escape my point. If the state space (which has many dimensions) is unbounded, even if it is coarse-grained, most infinitely-long paths will not cover
every grain.
• http://www.loujost.com
Even in a bounded, finite universe, the allowed wavelengths of a particle are (countably) infinite. The energy states of bound particles are also countably infinite. Therefore even with
reasonable physical conditions, an infinitely long path through state space will not hit every point in that space. So even with realistic physical conditions, Sean’s statement that “eventually,
the thing we’re waiting for will come to pass” is not true.
@ 37 Bill Davis
Your thoughts are mine – but no one answered. Your clarity is admirable, and you stated better than I could have. Salute. –Dan
@Lou Jost 68.
apparently you don’t understand bound states or free particles. There are 2 kinds of states: (1) those whose energy spectrum goes up to infinity, such as the harmonic oscillator or free
particles, and (2) those whose energy spectrum approaches a finite fixed point, such as the hydrogen atom. In both cases the number of states naively appears infinite, but under realistic
physical conditions, it is finite.
In (1), a realistic physical condition is that the universe only has a finite amount of energy, so there is a maximum energy that individual states can carry. This truncates the available states
to a finite set. This is true for bound states or free particles. And even if you allow arbitrary energy to the whole universe, individual particles cannot meaningfully have energies greater than
the Planck scale; such sub-Planckian wavelengths cannot be resolved.
In (2), a realistic physical condition is that we cannot discern the difference between the asymptotically high orbital states, which all have asymptotically similar energies. So the
asymptotically high orbital states are all grouped into one common state (that we might call “loosely bound”). This is the effect of coarse graining.
An all encompassing way of saying this, is that the known finiteness of the entropy of the universe, restricts the number of effectively dissimilar states to a finite set. This appears to
disprove your point.
• http://www.loujost.com
I am not sure….given that the temporal evolution of the universe is chaotic (tiny variations in conditions can lead to macroscopically different trajectories), coarse graining may not be
appropriate. Even if the energy of the universe is finite, any of the infinite number of higher-energy states could be occupied for very short periods of time. So it appears to me that there are
still a countably infinite number of states available, and that coarse-graining is inappropriate in a chaotic universe. Which means the universe will never repeat itself.
• http://skepticsplay.blogspot.com
@Lou Jost and interlocutors,
Mathematically speaking, it is perfectly possible to have an infinite path that does not hit every point. The path just needs to go in a loop that does not pass through every state. If there are
an infinite number of states, we could also have a non-repeating path that does not hit every point, just as the sequence of odd numbers is infinite, non-repeating, and fails to hit every natural
number. I am not convinced that coarse-graining leads to a finite set of states (it depends on how the coarse-graining is done), but it doesn’t even matter here.
But neither is it clear to me that Sean was making a claim that the trajectory of the universe must go through every state. I think it is simply highly probable that the any given state of the
universe will eventually reach at least one low entropy state.
• http://www.loujost.com
@miller, the reason I interpreted Sean that way was because of this statement: “The point is that we have infinitely long to wait — eventually the thing we’re waiting for will come to pass.” Or
again, “But if we wait long enough … statistical mechanics predicts that the ice cube will eventually re-form.” I do not think this really follows from the properties of infinity, if state space
is unbounded or if it is continuous in some dimensions, and if the universe is chaotic so that we cannot do coarse-graining.
• http://skepticsplay.blogspot.com
Yes, I have a different interpretation of the same statement. *shrugs*
• http://www.loujost.com
@Miller, how did you interpret the statement about the ice cube reforming if we wait long enough?
There is no infinite amount of time for the ice cube to eventually reform. Protons decay and the O and H will decompose in a finite amount of time.
@Lou, there must be an infinite number of paths that do not ever cover all points in an unbounded state space?
The question of retracing backwards through time, or time reversal/entropy decreasing and passing through everything, including ourselves, eggs uncooking, broken glass reforming, ice cubes
refreezing, etc., is not even worth considering. The probability of any one possible path reversing all events approaches zero as the number of possible paths approaches infinity(!?); the
statistical likelihood is an infinitely regressing series whose limiting factor is zero, or one over infinity for any one path. And as Lou stated (and I agree), even an infinite number of
scenarios may not, almost certainly will not, follow the path of time/entropy reversal, which still doesn’t even consider the infinite number of reversals possible, of which just one would obtain
where the ‘movie’ of existence is played backwards.
In any event (lol), I want to add that the universe, or even the possibility of the universe, arising out of 'nothing', is itself impossible, as 'nothing' means the lack of everything – including time.
I'm not sure that a state of nothingness is even theoretically possible, as there is no such thing as time (or the possibility of it) for the state of nothingness to 'exist'.
Now, ‘nothing’ can only exist instantaneously, ie. for zero time. There is no causal relationship possible without time. There is no such place as ‘the beginning’ to extrapolate backwards to, nor
‘the end,’ with zero entropy, to extrapolate forwards to.
The universe is inevitably proceeding to a state of equilibrium, maximum entropy, chaos. We intuitively know that when things get too chaotic, that is when all hell breaks loose. There is
confusion, upset, noise, and then, Bang! The universe just doesn’t know when to quit, does it?
Human history will be changed by that document, if people read it.
It basically unites Hawking's variance theory…..with Chris Langan's CTMU theory….
Chaos and entropy leads to order…and ordercor rule based systems are unstable. By design so they evolve Into higher order omplexity, or intelligence…
Read that document guys…..You will never see the world the same again.
I would be interested in seeing what Dr. Hawking has to say about this. the tautological fractal reality.
Your awareness is increasing by the day. Good progress.
Nonsense…there are no laws…with sufficient awareness…there are always tricks….you can implement as the programmer, by "confusing" reality at the extremes.
Someone make hawking read this thread. He’s got to read it.
• Pingback: Science is so much cooler when you aren’t afraid of it | Unsettled Christianity
• Pingback: A Universe Out of Chaos – Discover Magazine (blog) | My Blog
• Pingback: The second law of thermodynamics and the history of the universe | cartesian product | {"url":"http://blogs.discovermagazine.com/cosmicvariance/2011/08/03/a-universe-out-of-chaos/","timestamp":"2014-04-16T19:11:44Z","content_type":null,"content_length":"197280","record_id":"<urn:uuid:92e065f8-c7ac-4f55-b560-e00fb8ab20de>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00403-ip-10-147-4-33.ec2.internal.warc.gz"} |
Raritan, NJ Calculus Tutor
Find a Raritan, NJ Calculus Tutor
...For me, tutoring is first a career and second a job, so the satisfaction of the student is always more valuable than being paid for doing so. Please feel free to reach out to me with your
tutoring needs and we can discuss together how I can best be of support to you.I took AP Calculus BC as a ju...
15 Subjects: including calculus, English, Chinese, GRE
I have been a tutor for several years now. My specialties include Mathematics, from basic math to Calculus. I am certified in teaching secondary Math (copy of certification available upon request).
I hold a BA in Math.
14 Subjects: including calculus, Spanish, French, physics
...I also taught and tutor MATH at different levels (in high school and the university). I have a PhD in physics so I had to use calculus and precalculus during all my education and also during
my career. Both calculus and precalculus are like our mother language for physicists. I also taught ca...
9 Subjects: including calculus, physics, algebra 1, algebra 2
Since pursuing my Bachelor's in physics I started tutoring math and physics to primary, secondary, high-school and undergraduate students. After my graduation, I worked for 1 year as a
high-school teacher, teaching geometry to 6th graders and physics to 9th and 10th graders, which I then left to pursue my PhD. During my PhD I assisted the Applied Statistics class as part of my
teaching duties.
20 Subjects: including calculus, Spanish, physics, geometry
...This has significantly helped students with the fabricated “intimidation factor” of orgo as well as with the learning. RESULTS: My students approach the next level of their education much more
prepared then their peers in not only the class I tutored them in, but in related classes I have introd...
34 Subjects: including calculus, chemistry, physics, geometry
Raritan, NJ Trigonometry Tutors | {"url":"http://www.purplemath.com/Raritan_NJ_calculus_tutors.php","timestamp":"2014-04-16T07:58:05Z","content_type":null,"content_length":"24108","record_id":"<urn:uuid:802aae8f-ecc1-4c64-b8bc-ee736fab515d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculus II For Dummies
Cheat Sheet
By its nature, Calculus can be intimidating. But you can take some of the fear of studying Calculus away by understanding its basic principles, such as derivatives and antiderivatives, integration,
and solving compound functions. Also discover a few basic rules applied to Calculus, like Cramer's Rule and the Constant Multiple Rule, and you'll be on your way to acing the exam.
The Most Important Derivatives and Antiderivatives to Know
The table below shows you how to differentiate and integrate 18 of the most common functions. As you can see, integration reverses differentiation, returning the function to its original state, up to
a constant C.
The Riemann Sum Formula For the Definite Integral
The Riemann Sum formula provides a precise definition of the definite integral as the limit of an infinite series. The Riemann Sum formula is as follows:

$$\int_a^b f(x)\,dx = \lim_{n \to \infty} \sum_{i=1}^{n} f(a + i\,w)\,w, \qquad w = \frac{b-a}{n}$$

Below are the steps for building this formula, starting from an approximation of the integral by six rectangles:

$$\int_a^b f(x)\,dx \approx h_1 w + h_2 w + h_3 w + h_4 w + h_5 w + h_6 w$$

1. Increase the number of rectangles (n) to create a better approximation:

$$\int_a^b f(x)\,dx \approx h_1 w + h_2 w + \cdots + h_n w$$

2. Simplify this formula by factoring out w from each term:

$$\int_a^b f(x)\,dx \approx w\,(h_1 + h_2 + \cdots + h_n)$$

3. Use the summation symbol to make this formula even more compact:

$$\int_a^b f(x)\,dx \approx w \sum_{i=1}^{n} h_i$$

The value w is the width of each rectangle:

$$w = \frac{b-a}{n}$$

Each h value is the height of a different rectangle:

$$h_i = f(a + i\,w)$$

So here is the Riemann Sum formula for approximating an integral using n rectangles:

$$\int_a^b f(x)\,dx \approx w \sum_{i=1}^{n} f(a + i\,w)$$

4. For a better approximation, use the limit $n \to \infty$ to allow the number of rectangles to approach infinity:

$$\int_a^b f(x)\,dx = \lim_{n \to \infty} w \sum_{i=1}^{n} f(a + i\,w)$$
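The limiting process described in these steps is easy to check numerically. A small sketch (the function sin x on [0, π], whose integral is exactly 2, is an arbitrary choice):

```python
import math

def riemann_sum(f, a, b, n):
    """Approximate the integral of f on [a, b] with n right-endpoint rectangles."""
    w = (b - a) / n                                        # width of each rectangle
    return w * sum(f(a + i * w) for i in range(1, n + 1))  # w * (h_1 + ... + h_n)

# The integral of sin(x) on [0, pi] is exactly 2; the sums converge toward it.
for n in (6, 60, 600, 6000):
    print(n, riemann_sum(math.sin, 0, math.pi, n))
```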
Integration by Parts with the DI-agonal Method
The DI-agonal method is basically integration by parts with a chart that helps you organize information. This method is especially useful when you need to integrate by parts more than once to solve a
problem. Use the following table for integration by parts using the DI-agonal method:
The Sum Rule, the Constant Multiple Rule, and the Power Rule for Integration
When you perform integration, there are three important rules that you need to know: the Sum Rule, the Constant Multiple Rule, and the Power Rule.
The Sum Rule for Integration tells you that it's okay to integrate long expressions term by term. Here it is formally:

$$\int [f(x) + g(x)]\,dx = \int f(x)\,dx + \int g(x)\,dx$$

The Constant Multiple Rule for Integration tells you that it's okay to move a constant outside of an integral before you integrate. Here it is expressed in symbols:

$$\int c\,f(x)\,dx = c \int f(x)\,dx$$

The Power Rule for Integration allows you to integrate any real power of x (except –1). Here's the Power Rule expressed formally:

$$\int x^n\,dx = \frac{x^{n+1}}{n+1} + C$$

where n ≠ –1
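As a quick sanity check (a sketch; the exponent n = 3 and test point x = 2 are arbitrary choices), numerically differentiating the Power Rule antiderivative x^(n+1)/(n+1) should return x^n:

```python
def power_rule_antiderivative(x, n):
    """Antiderivative of x**n by the Power Rule (n != -1): x**(n+1)/(n+1)."""
    return x ** (n + 1) / (n + 1)

def derivative(g, x, h=1e-6):
    """Central-difference approximation of g'(x)."""
    return (g(x + h) - g(x - h)) / (2 * h)

n, x = 3, 2.0
recovered = derivative(lambda t: power_rule_antiderivative(t, n), x)
print(recovered, x ** n)   # both are approximately 8.0
```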
How to Solve Integrals with Variable Substitution
In Calculus, you can use variable substitution to evaluate a complex integral. Variable substitution allows you to integrate when the Sum Rule, Constant Multiple Rule, and Power Rule don’t work.
1. Declare a variable u, set it equal to an algebraic expression that appears in the integral, and then substitute u for this expression in the integral.
2. Differentiate u to find du/dx, and then isolate all x variables on one side of the equal sign.
3. Make another substitution to change dx and all other occurrences of x in the integral to an expression that includes du.
4. Integrate by using u as your new variable of integration.
5. Express this answer in terms of x.
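A numerical spot-check of the procedure (a sketch; the integrand 2x·cos(x²), where the substitution u = x² works, is an arbitrary choice): the substitution gives $\int 2x\cos(x^2)\,dx = \sin(x^2) + C$, so the definite integral from 0 to b must equal sin(b²).

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Midpoint-rule numerical integral, used only to check the answer."""
    w = (b - a) / n
    return w * sum(f(a + (i + 0.5) * w) for i in range(n))

# u = x**2, du = 2x dx  turns  the integral of 2x*cos(x**2)  into
# the integral of cos(u) du = sin(u) + C.
b = 1.5
numeric = midpoint_integral(lambda x: 2 * x * math.cos(x ** 2), 0, b)
exact = math.sin(b ** 2)
print(numeric, exact)   # the two values agree to many decimal places
```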
How to Use Integration by Parts
When doing Calculus, the formula for integration by parts gives you the option to break down the product of two functions to its factors and integrate it in an altered form. To use integration by
parts in Calculus, follow these steps:
1. Decompose the entire integral (including dx) into two factors.
2. Let the factor without dx equal u and the factor with dx equal dv.
3. Differentiate u to find du, and integrate dv to find v.
4. Use the formula:

$$\int u\,dv = u\,v - \int v\,du$$
5. Evaluate the right side of this equation to solve the integral.
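A quick check of the formula on a standard example (a sketch; the choice u = x, dv = eˣ dx is arbitrary): $\int x e^x\,dx = x e^x - \int e^x\,dx = (x - 1)e^x + C$.

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Midpoint-rule numerical integral, used only to check the answer."""
    w = (b - a) / n
    return w * sum(f(a + (i + 0.5) * w) for i in range(n))

# u = x, dv = e^x dx  =>  du = dx, v = e^x, so the antiderivative is (x - 1)*e^x + C.
numeric = midpoint_integral(lambda x: x * math.exp(x), 0, 1)
exact = (1 - 1) * math.exp(1) - (0 - 1) * math.exp(0)   # evaluate (x - 1)e^x at 1 and at 0
print(numeric, exact)   # both close to 1.0
```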
How to Solve Compound Functions Where the Inner Function Is ax + b
Some integrals of compound functions f (g(x)) are easy to do quickly in Calculus. These include compound functions for which you know how to integrate the outer function f, and the inner function g(x
) is of the form ax + b — that is, it differentiates to a constant.
In this case, if F is an antiderivative of f, then $\int f(ax + b)\,dx = \frac{1}{a}F(ax + b) + C$. For example, $\int \cos(3x + 2)\,dx = \frac{1}{3}\sin(3x + 2) + C$.
Solve Compound Functions Where the Inner Function Is ax
When figuring Calculus problems, some integrals of compound functions f (g(x)) are easy to do quickly. These include compound functions for which you know how to integrate the outer function f, and
the inner function g(x) is of the form ax — that is, it differentiates to a constant.
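Concretely, if F is an antiderivative of f, then $\int f(ax)\,dx = \frac{1}{a}F(ax) + C$. A numerical spot-check (a sketch; the integrand e^(4x) is an arbitrary choice):

```python
import math

def midpoint_integral(f, a, b, n=100_000):
    """Midpoint-rule numerical integral, used only to check the answer."""
    w = (b - a) / n
    return w * sum(f(a + (i + 0.5) * w) for i in range(n))

# The inner function 4x differentiates to a constant, so the antiderivative of
# e^(4x) is e^(4x)/4 + C, and the definite integral from 0 to 1 is (e^4 - 1)/4.
numeric = midpoint_integral(lambda x: math.exp(4 * x), 0, 1)
exact = (math.exp(4) - 1) / 4
print(numeric, exact)   # both close to 13.3995
```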
Here are some examples: | {"url":"http://www.dummies.com/how-to/content/calculus-ii-for-dummies-cheat-sheet.html","timestamp":"2014-04-17T11:31:58Z","content_type":null,"content_length":"54991","record_id":"<urn:uuid:e423c61f-5e56-4d6c-a414-dc33b67839cb>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: Why LL(1) Parsers do not support left recursion?
Hans-Peter Diettrich <DrDiettrich1@aol.com>
25 Jul 2006 00:40:16 -0400
From: Hans-Peter Diettrich <DrDiettrich1@aol.com>
Newsgroups: comp.compilers
Date: 25 Jul 2006 00:40:16 -0400
Organization: Compilers Central
References: 06-07-059 06-07-065
Keywords: parse
Posted-Date: 25 Jul 2006 00:40:16 EDT
SM Ryan schrieb:
> # 1. LL parsers cannot handle left recursion, whereas LR parsers cannot
> # handle right recursion.
> A left recursive grammar is not an LL(k) grammar, but the grammar can
> be mechanically transformed to rid it of left recursion. The resulting
> grammar might still not be LL(k).
> LR(k) can handle any deterministic grammar. Left recursive productions
> only need one production on the stack; right recursion piles up the
> entire recursive nest on the stack and then reduces all of them at
> once: right recursion requires a deeper stack.
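The mechanical transformation described above — rewriting A → Aα | β as A → βA′, A′ → αA′ | ε — can be sketched roughly like this (the representation of productions is invented for illustration):

```python
def eliminate_left_recursion(nt, productions):
    """Rewrite immediately left-recursive rules for nonterminal `nt`.

    A -> A a | b   becomes   A -> b A',  A' -> a A' | (empty).
    Each production is a list of symbols; returns a dict of new rules.
    """
    recursive = [p[1:] for p in productions if p and p[0] == nt]   # the alphas
    rest = [p for p in productions if not p or p[0] != nt]         # the betas
    if not recursive:
        return {nt: productions}                                    # nothing to do
    tail = nt + "'"
    return {
        nt:   [beta + [tail] for beta in rest],                     # A  -> b A'
        tail: [alpha + [tail] for alpha in recursive] + [[]],       # A' -> a A' | eps
    }

# expr -> expr '+' term | term   becomes   expr -> term expr';  expr' -> '+' term expr' | eps
print(eliminate_left_recursion("expr", [["expr", "+", "term"], ["term"]]))
```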
Thanks for your explanations, but I'm still not fully convinced ;-)
So far I have only used Coco/R to create recursive descent parsers, so
I had to handle all non-LL(1) cases manually. Perhaps in doing so I
applied some modifications to the grammar, e.g. when I got my C parser
working without problems despite left-recursive rules.
That's why I thought that LR parsers have similar problems with right
recursion (epsilon moves?), where a parser generator would also have
to apply some built-in rules in order to resolve these problems. But
this is only a feeling; I'm not very familiar with LR parsers, because
I couldn't yet find a working parser generator with Pascal output.
At least I understand now, that right recursion should be *avoided* in
LR grammars, in order to keep the stack size low.
> # 2. Most languages are ambiguous at the syntax level, so that implicit
> # disambiguation (longest match...) or explicit semantical constraints
> # must be introduced. (see: dangling else...).
> Only poorly designed programming languages are ambiguous. (Natural
> languages are ambiguous and don't use deterministic language theory.)
The argument list of a subroutine is nothing but a list, which can be
expressed using either left or right recursion in a grammar.
Perhaps I misused the term "ambiguous" here, when I meant that different
parse trees can be constructed for a sentence of a language?
> Many programming language grammars are ambiguous because the people
> writing the grammars are lazy and/or uneducated in these matters. The
> dangling else nonproblem was solved some 40 years ago. Anyone who
> thinks this is still a problem is just too lazy to write a well known
> solution.
The dangling else problem can be solved by adding implicit general rules
to the interpretation of a language (or grammar?). Of course there exist
ways to prevent the occurrence of such problems, just in the language
specification. But AFAIR it's impossible to prove, in the general case,
that a language is unambiguous.
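The usual implicit rule for the dangling else — an else binds to the nearest unmatched if — is easy to demonstrate with a toy recursive-descent parser (the grammar and token format are invented for illustration):

```python
# Toy grammar:  stmt := 'if' COND stmt ('else' stmt)? | ID
# The parser greedily consumes an 'else' right after the then-branch, so the
# else always attaches to the nearest (innermost) unmatched 'if'.

def parse_stmt(tokens, i=0):
    """Return (parse_tree, next_token_index)."""
    if tokens[i] == 'if':
        cond = tokens[i + 1]
        then_branch, i = parse_stmt(tokens, i + 2)
        if i < len(tokens) and tokens[i] == 'else':   # longest-match disambiguation
            else_branch, i = parse_stmt(tokens, i + 1)
            return ('if', cond, then_branch, else_branch), i
        return ('if', cond, then_branch, None), i
    return tokens[i], i + 1                            # a plain statement

tree, _ = parse_stmt(['if', 'a', 'if', 'b', 's1', 'else', 's2'])
print(tree)   # ('if', 'a', ('if', 'b', 's1', 's2'), None): the else pairs with the inner if
```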
> # 3. Only LL(1) recursive descent parsers are readable, that's why no
> # LL(k) parser generators exist, in contrast to LR(k) parser generators.
> Recursive descent is not LL(k). Recursive descent is an implementation
> technique not a language class.
Okay, but what's the relationship between leftmost/rightmost derivation
and a language?
> There are recursive ascent parsers for
> LR(k) grammars. LR(k) parsers can be written as recursive finite state
> transducers, with right recursion and embedding requiring recursion
> and left recursion merely looping; if a language uses left recursion
> only (type iii), the LR(k) state diagram is easily convertible to a
> finite transducer for the language.
I'm not sure what you want to tell me. AFAIR LR(k) (languages?
grammars?) can be transformed into LR(1), for which a finite state
machine can be constructed easily. I assume that a transformation from
LL(k) into LL(1) would be possible as well, using essentially the same
transformation algorithms.
My point is that table driven parsers are unreadable, due to the lost
relationship between the parser code and the implemented language or
grammar. Do there exist LR parsers or parser generators at all, which
preserve the relationship between the parser code and the grammar?
> # 4. When at least one terminal must be consumed, before a recursive
> # invocation of a rule, no infinite recursion can occur. (every loop will
> # terminate after all terminals in the input stream have been consumed)
> Confusing implementation with language class. Any grammar that
> includes a rule such as A -> A | empty is ambiguous therefore
> nondeterministic therefore not LR(k) therefore not LL(k).
I wanted to present a proof of whether a given grammar is (non-?)
deterministic, regardless of the reason for that property.
> There are many other things that keep a grammar or language from
> being LL(k); LL(k) does not include all deterministic languages.
> # Ad (1+2): We should keep grammars apart from languages. Most languages
> # require recursive grammars, but allow for both left or right recursive
> # grammars.
> A language definition that does not depend on vague handwaving (one or
> two such definitions actually exist) bases the semantics on the parse
> tree. Since right and left recursion build different parse trees, this
> issue is very important in definitions with riguous semantics.
With regards to programming languages, there exist many constructs that
do not impose or require the construction of an specific (unique) parse
tree. As long as the parser must not know about the definition of an
identifier, the placement of the definitions in an parse tree is
irrelevant, when it doesn't change the semantics of the language
(visibility constraints...).
> # Languages with "expr '+' expr" or "list ',' list" can be parsed in
> # multiple ways. Unless there exist additional (semantical) restrictions,
> # correct and equivalent left or right recursive grammars can be
> # constructed for such languages.
> Or just write an unambiguous production. It's not any harder to do it
> right than to do it lazy and wrong.
> If the semantics of a subtract production are the value of the right
> subtree is subtracted from the value of left subtree, then
> 3 - 2 - 1
> with left recursion is
> = (3 - 2) - 1 = 1 - 1 = 0
> with right recursion is
> = 3 - (2 - 1) = 3 - 1 = 2
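The two readings correspond to a left fold versus a right fold over the operand list — a quick sketch:

```python
from functools import reduce

nums = [3, 2, 1]

# Left-recursive parse tree: ((3 - 2) - 1)
left_assoc = reduce(lambda acc, x: acc - x, nums)

# Right-recursive parse tree: (3 - (2 - 1))
def fold_right(xs):
    return xs[0] if len(xs) == 1 else xs[0] - fold_right(xs[1:])

right_assoc = fold_right(nums)

print(left_assoc, right_assoc)   # 0 2 -- different values, because '-' is not associative
```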
This is a property of the asymmetric subtraction operation, which
doesn't apply to the symmetric addition or multiplication operations. Of
course it's a good idea to enforce a unique sequence of *numerical*
operations in program code, whereas in mathematical formulas such
additional restrictions should *not* be built into a grammar.
> Rather different answers but unless your software is controlling a
> satellite to Venus, I guess sloppiness can be repaired in the next
> patch release.
No doubt, but IMO you want to introduce more restrictions than required.
A compiler is allowed to apply certain *valid* transformations to a
parse tree, so I cannot see a reason why any grammar or parser
for that language must enforce the construction of one-and-only-one
valid parse tree.
> Algol-60 used left recursion except exponentiation so that the value
> could be determined from the parse tree without a lot of misreadable
> prose.
> # And when a human is allowed to disambiguate a grammar for such a
> # language himself, a parser generator should be allowed to do the same ;-)
> Why bother inserting ambiguity and then remove it again with obscure
> rules? Eschew ambiguity from the onset.
Here you're talking about the construction of languages, not about the
construction of parsers ;-)
Search the comp.compilers archives again. | {"url":"http://compilers.iecc.com/comparch/article/06-07-071","timestamp":"2014-04-16T16:05:48Z","content_type":null,"content_length":"15543","record_id":"<urn:uuid:d862d1f4-758b-41f3-b817-e9932b782d03>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00597-ip-10-147-4-33.ec2.internal.warc.gz"} |
, 2013
"... We present a general framework for representing cryptographic protocols and analyzing their security. The framework allows specifying the security requirements of practically any cryptographic
task in a unified and systematic way. Furthermore, in this framework the security of protocols is preserved ..."
Cited by 611 (34 self)
We present a general framework for representing cryptographic protocols and analyzing their security. The framework allows specifying the security requirements of practically any cryptographic task
in a unified and systematic way. Furthermore, in this framework the security of protocols is preserved under a general protocol composition operation, called universal composition. The proposed
framework with its security-preserving composition operation allows for modular design and analysis of complex cryptographic protocols from relatively simple building blocks. Moreover, within this
framework, protocols are guaranteed to maintain their security in any context, even in the presence of an unbounded number of arbitrary protocol instances that run concurrently in an adversarially
controlled manner. This is a useful guarantee, that allows arguing about the security of cryptographic protocols in complex and unpredictable environments such as modern communication networks.
, 1997
"... A group signature scheme allows members of a group to sign messages on the group's behalf such that the resulting signature does not reveal their identity. Only a designated group manager is
able to identify the group member who issued a given signature. Previously proposed realizations of group sig ..."
Cited by 264 (26 self)
A group signature scheme allows members of a group to sign messages on the group's behalf such that the resulting signature does not reveal their identity. Only a designated group manager is able to
identify the group member who issued a given signature. Previously proposed realizations of group signature schemes have the undesirable property that the length of the public key is linear in the
size of the group. In this paper we propose the first group signature scheme whose public key and signatures have length independent of the number of group members and which can therefore also be
used for large groups. Furthermore, the scheme allows the group manager to add new members to the group without modifying the public key. The realization is ba...
- Journal of Cryptology , 1991
"... We show how a pseudo-random generator can provide a bit commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the
assumption of the existence of pseudo-random generators suffices to assure amortized O(1) bits of communicat ..."
Cited by 228 (15 self)
We show how a pseudo-random generator can provide a bit commitment protocol. We also analyze the number of bits communicated when parties commit to many bits simultaneously, and show that the
assumption of the existence of pseudo-random generators suffices to assure amortized O(1) bits of communication per bit commitment.
- SIAM Journal on Computing , 1990
"... : The wide applicability of zero-knowledge interactive proofs comes from the possibility of using these proofs as subroutines in cryptographic protocols. A basic question concerning this use is
whether the (sequential and/or parallel) composition of zero-knowledge protocols is zero-knowledge too. We ..."
Cited by 190 (14 self)
: The wide applicability of zero-knowledge interactive proofs comes from the possibility of using these proofs as subroutines in cryptographic protocols. A basic question concerning this use is
whether the (sequential and/or parallel) composition of zero-knowledge protocols is zero-knowledge too. We demonstrate the limitations of the composition of zeroknowledge protocols by proving that
the original definition of zero-knowledge is not closed under sequential composition; and that even the strong formulations of zero-knowledge (e.g. black-box simulation) are not closed under parallel
execution. We present lower bounds on the round complexity of zero-knowledge proofs, with significant implications to the parallelization of zero-knowledge protocols. We prove that 3-round
interactive proofs and constant-round Arthur-Merlin proofs that are black-box simulation zeroknowledge exist only for languages in BPP. In particular, it follows that the "parallel versions" of the
first interactive proo...
- SIAM J. COMPUTING , 1991
"... This paper investigates the possibility of disposing of interaction between prover and verifier in a zero-knowledge proof if they share beforehand a short random string. Without any assumption,
it is proven that noninteractive zero-knowledge proofs exist for some number-theoretic languages for which ..."
Cited by 188 (19 self)
This paper investigates the possibility of disposing of interaction between prover and verifier in a zero-knowledge proof if they share beforehand a short random string. Without any assumption, it is
proven that noninteractive zero-knowledge proofs exist for some number-theoretic languages for which no efficient algorithm is known. If deciding quadratic residuosity (modulo composite integers
whose factorization is not known) is computationally hard, it is shown that the NP-complete language of satisfiability also possesses noninteractive zero-knowledge proofs.
, 1989
"... We present strong evidence that the implication, "if one-way permutations exist, then secure secret key agreement is possible" is not provable by standard techniques. Since both sides of this
implication are widely believed true in real life, to show that the implication is false requires a new m ..."
Cited by 162 (0 self)
We present strong evidence that the implication, "if one-way permutations exist, then secure secret key agreement is possible" is not provable by standard techniques. Since both sides of this implication are widely believed true in real life, to show that the implication is false requires a new model. We consider a world where all parties have access to a black box for a randomly selected permutation. Being totally random, this permutation will be strongly one-way in a provable, information-theoretic way. We show that, if P = NP, no protocol for secret key agreement is secure in such a setting. Thus, to prove that a secret key agreement protocol which uses a one-way permutation as a black box is secure is as hard as proving P ≠ NP. We also obtain, as a corollary, that there is an oracle relative to which the implication is false, i.e., there is a one-way permutation, yet secret-exchange is impossible. Thus, no technique which relativizes can prove that secret exchange can be based on any one-way permutation. Our results present a general framework for proving statements of the form, "Cryptographic application X is not likely possible based solely on complexity assumption Y."
- Journal of Cryptology , 1995
"... Constant-round zero-knowledge proof systems for every language in NP are presented, assuming the existence of a collection of claw-free functions. In particular, it follows that such proof
systems exist assuming the intractability of either the Discrete Logarithm Problem or the Factoring Problem for ..."
Cited by 157 (8 self)
Constant-round zero-knowledge proof systems for every language in NP are presented, assuming the existence of a collection of claw-free functions. In particular, it follows that such proof systems
exist assuming the intractability of either the Discrete Logarithm Problem or the Factoring Problem for Blum Integers.
, 1994
"... The views and conclusions in this document are those of the authors and do not necessarily represent the official policies or endorsements of any of the research sponsors. How do we build
distributed systems that are secure? Cryptographic techniques can be used to secure the communications between p ..."
Cited by 150 (8 self)
The views and conclusions in this document are those of the authors and do not necessarily represent the official policies or endorsements of any of the research sponsors. How do we build distributed
systems that are secure? Cryptographic techniques can be used to secure the communications between physically separated systems, but this is not enough: we must be able to guarantee the privacy of
the cryptographic keys and the integrity of the cryptographic functions, in addition to the integrity of the security kernel and access control databases we have on the machines. Physical security is
a central assumption upon which secure distributed systems are built; without this foundation even the best cryptosystem or the most secure kernel will crumble. In this thesis, I address the
distributed security problem by proposing the addition of a small, physically secure hardware module, a secure coprocessor, to standard workstations and PCs. My central axiom is that secure
coprocessors are able to maintain the privacy of the data they process. This thesis attacks the distributed security problem from multiple sides. First, I analyze the security properties of existing
system components, both at the hardware and
, 1992
"... In this note, we present new zero-knowledge interactive proofs and arguments for languages in NP. To show that x ∈ L, with an error probability of at most 2^-k, our zero-knowledge proof system requires O(|x|^c1) + O(lg^c2 |x|)k ideal bit commitments, where c1 and c2 depend only on L. This construction ..."
Cited by 146 (2 self)
In this note, we present new zero-knowledge interactive proofs and arguments for languages in NP. To show that x ∈ L, with an error probability of at most 2^-k, our zero-knowledge proof system requires O(|x|^c1) + O(lg^c2 |x|)k ideal bit commitments, where c1 and c2 depend only on L. This construction is the first in the ideal bit commitment model that achieves large values of k more efficiently than by running k independent iterations of the base interactive proof system. Under suitable complexity assumptions, we exhibit zero-knowledge arguments that require O(lg^c |x|)kl bits of communication, where c depends only on L, and l is the security parameter for the prover. This is the first construction in which the total amount of communication can be less than that needed to transmit the NP witness. Our protocols are based on efficiently checkable proofs for NP [4].
, 2001
"... We propose a new security measure for commitment protocols, called Universally Composable ..." | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=159040","timestamp":"2014-04-17T05:30:44Z","content_type":null,"content_length":"36719","record_id":"<urn:uuid:5cc4d5ac-3f14-4532-a31a-0247d483b862>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00630-ip-10-147-4-33.ec2.internal.warc.gz"} |
Columbia Theory Seminar, Fall 2008
For Fall 2008, the usual time for the meetings will be Fridays at 11:00am in the CS conference room, CSB 453. Abstracts for talks are given below the talk schedule.
Friday, November 7, 11:00am, CSB 453: Ragesh Jaiswal: New Proofs of New Direct Product Theorems
Talk Abstracts
Friday, September 26:
Approximation norms and duality for communication complexity lower bounds
Troy Lee
Columbia University
Abstract: We will discuss a general framework for showing lower bounds on communication complexity based on matrix norms and approximation norms, the minimum norm of a matrix entrywise close to the
target matrix. An advantage of this approach is that one can use duality theory to obtain a lower bound quantity phrased as a maximization problem, which can be more convenient to work with in
showing lower bounds.
Time permitting, we will discuss two applications of this approach.
1. One of the strongest lower bound techniques for randomized and quantum communication complexity is approximation rank---the minimum rank of a matrix which is entrywise close to the target matrix.
We show that an approximation norm known as gamma_2 is polynomially related to approximation rank. This gives a polynomial time algorithm to approximate approximation rank, and also shows that
approximation rank lower bounds quantum communication with shared entanglement, as gamma_2 does.
2. We show how the norm framework naturally generalizes to the case of multiparty complexity and how an approximation norm for multiparty number-on-the forehead complexity was recently used, in
combination with techniques developed by Sherstov and Chattopadhyay, to show nontrivial lower bounds on the disjointness function for up to c log log n players, c <1.
This talk surveys joint work with Adi Shraibman.
Monday, October 6:
Succinct Approximation of Convex Pareto Curves
Ilias Diakonikolas
Columbia University
Abstract: We study the approximation of multiobjective optimization problems. We propose the concept of $\epsilon$-convex Pareto ($\epsilon$-CP) set as the appropriate one for the convex setting, and
observe that it can offer arbitrarily more compact representations than $\epsilon$-Pareto sets in this context.
We characterize when an $\epsilon$-CP can be constructed in polynomial time in terms of an efficient routine $\textrm{Comb}$ for optimizing (exactly or approximately) monotone linear combinations of
the objectives.
We investigate the problem of computing minimum size $\epsilon$-convex Pareto sets, both for discrete (combinatorial) and continuous (convex) problems, and present general algorithms using a $\textrm
{Comb}$ routine. For bi-objective problems, we show that if we have an exact $\textrm{Comb}$ optimization routine, then we can compute the minimum $\epsilon$-CP for continuous problems (this applies
for example to bi-objective Linear Programming and Markov Decision Processes), and factor 2 approximation to the minimum $\epsilon$-CP for discrete problems (this applies for example to bi-objective
versions of polynomial-time solvable combinatorial problems such as Shortest Paths, Spanning Tree, etc.). If we have an approximate $\textrm{Comb}$ routine, then we can compute factor 3 and 6
approximations respectively to the minimum $\epsilon$-CP for continuous and discrete bi-objective problems. We consider also the case of three and more objectives and present some upper and lower bounds.
Joint work with Mihalis Yannakakis.
Monday, October 20:
Optimization Problems in Social Networks
David Kempe
Abstract: A social network - the graph of relationships and interactions within a group of individuals - plays a fundamental role as a medium for the spread of information, ideas, influence, or
diseases among its members. An idea or innovation will appear, and it can either die out quickly or make significant inroads into the population. Similarly, an infectious disease may either affect a
large share of the population, or be confined to a small fraction.
The collective behavior of individuals and the spread of diseases in a social network have a long history of study in sociology and epidemiology. In this talk, we will investigate graph-theoretic
optimization problems relating to the spread of information or diseases. Specifically, we will focus on two types of questions: influence maximization, wherein we seek to identify influential
individuals to start a cascade of an innovation to maximize the expected number of eventual adopters; and infection minimization, wherein we seek to remove nodes so as to keep a given infected
component small.
We will present constant factor and bicriteria algorithms for versions of these problems, and also touch on many open problems and issues regarding competition among multiple innovators.
(This talk represents joint work with Jon Kleinberg, Eva Tardos, Elliot Anshelevich, Shishir Bharathi, Ara Hayrapetyan, Martin Pal, Mahyar Salek, and Zoya Svitkina.)
Wednesday, October 29:
Affine Dispersers from Subspace Polynomials
Eli Ben-Sasson
Abstract: This talk describes new explicit constructions of dispersers for affine sources of dimension below the notorious n/2 threshold. The main novelty in our construction lies in the method of
proof which relies (solely) on elementary properties of linearized polynomials. In this respect we differ significantly from previous solutions to the problem, due to [Barak et al. 2005] and
[Bourgain 2007]. These two breakthrough results used recent sum-product theorems over finite fields, whereas our analysis relies on properties of linearized polynomials that have been well-known
since the work of Ore in the 1930's.
Definition of affine dispersers: A disperser for affine sources of dimension d is a function Disp:F_2^n -> F_2 that is nonconstant on every affine space of dimension > d. Formally, for every affine S
\subset F_2^n, dim(S)>d we have {Disp(s): s in S}={0,1}.
Joint work with Swastik Kopparty.
Friday, October 31:
Reach for A*: an Efficient Point-to-Point Shortest Path Algorithm
Andrew Goldberg
Microsoft Research
Abstract: We study the point-to-point shortest path problem in a setting where preprocessing is allowed. The two main techniques we address are ALT and REACH. ALT is A* search with lower bounds based
on landmark distances and triangle inequality. The REACH approach of Gutman precomputes locality values on vertices and uses them to prune the search.
We improve on REACH in several ways. In particular, we add shortcut arcs which reduce vertex reaches. Our modifications greatly reduce both preprocessing and query times. Our algorithm combines in a natural way with ALT, yielding significantly better query times and allowing a wide range of time-space trade-offs.
The resulting algorithms are quite practical for our motivating application, computing driving directions, both for server and for portable device applications. The ideas behind our algorithms are
elegant and may have other applications.
(Joint work with Haim Kaplan and Renato Werneck)
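The landmark lower bound that ALT relies on follows from the triangle inequality: for a landmark with precomputed shortest-path distances, d(v,t) >= |d(L,t) - d(L,v)| in an undirected graph. A minimal sketch of the idea on a toy graph (illustrative only, not the talk's implementation, which also uses reaches and shortcuts):

```python
import heapq

def dijkstra(graph, src):
    # Standard Dijkstra: shortest distances from src to every vertex.
    dist = {v: float("inf") for v in graph}
    dist[src] = 0
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in graph[u]:
            if d + w < dist[v]:
                dist[v] = d + w
                heapq.heappush(pq, (dist[v], v))
    return dist

def alt_search(graph, landmark_dists, s, t):
    # A* with h(v) = max over landmarks |d(L,t) - d(L,v)|, a valid
    # (and consistent) lower bound on d(v,t) in an undirected graph.
    def h(v):
        return max(abs(ld[t] - ld[v]) for ld in landmark_dists)
    dist = {s: 0}
    pq = [(h(s), s)]
    while pq:
        f, u = heapq.heappop(pq)
        if u == t:
            return dist[t]
        if f > dist[u] + h(u):  # stale queue entry
            continue
        for v, w in graph[u]:
            nd = dist[u] + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd + h(v), v))
    return float("inf")

# Tiny undirected example: adjacency lists of (neighbor, weight).
G = {
    "a": [("b", 1), ("c", 4)],
    "b": [("a", 1), ("c", 2), ("d", 5)],
    "c": [("a", 4), ("b", 2), ("d", 1)],
    "d": [("b", 5), ("c", 1)],
}
landmarks = [dijkstra(G, "a"), dijkstra(G, "d")]
print(alt_search(G, landmarks, "a", "d"))  # 4 (path a-b-c-d)
print(dijkstra(G, "a")["d"])               # 4 (same answer)
```

On a graph this small the heuristic buys nothing; on road networks the same bound prunes most of the search space.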
Friday, November 7:
New Proofs of New Direct Product Theorems
Ragesh Jaiswal
Columbia University
Abstract: Direct Product Theorems are formal statements of the intuition: "if solving one instance of a problem is hard, then solving multiple instances is even harder". For example, a Direct Product
Theorem with respect to bounded size circuits computing a function is a statement of the form: "if a function f is hard to compute on average for small size circuits, then f^k(x_1, ..., x_k) = f
(x_1), ..., f(x_k) is even harder on average for certain smaller size circuits". The proof of the such a statement is by contradiction: we start with a circuit which computes f^k on some
non-negligible fraction of the inputs and then use this circuit to construct another circuit which computes f on almost all inputs. By viewing such a constructive proof as decoding certain
error-correcting code, it was independently observed by Trevisan and Impagliazzo that constructing a single circuit is not possible in general. Instead, we can only hope to construct a list of
circuits such that one of them computes f on almost all inputs. This makes the list size an important parameter of the Theorem which can be minimized. We achieve optimal value of the list size which
is a substantial improvement compared to previous proofs of the Theorem. In particular, this new version can be applied to uniform models of computation (e.g., randomized algorithms) whereas all
previous versions applied only to nonuniform models (e.g., circuits).
Consider the following stronger and a more general version of the previous Direct Product Theorem statement: "if a problem is hard to solve on average, then solving more than the expected fraction of
problem instances from a pool of multiple independently chosen instances becomes even harder". Such statements are useful in cryptographic settings where the goal is to amplify the gap between the
ability of legitimate users to solve a type of problem and that of attackers. We call such statements "Chernoff-type Direct Product Theorems" and prove such a statement for a very general setting.
Contact rocco-at-cs.columbia.edu if you want to volunteer to give a talk (especially encouraged for students!). The talk can be about your or others' work. It can be anything from a polished
presentation of a completed result, to an informal black-board presentation of an interesting topic where you are stuck on some open problem. It should be accessible to a general theory audience. I
will be happy to help students choose papers to talk about. There is a mailing list for the reading group. General information about the mailing list (including how to subscribe yourself to it) is
available here. If you want to unsubscribe or change your options, send email to theoryread-request@lists.cs.columbia.edu with the word `help' in the subject or body (don't include the quotes), and
you will get back a message with instructions.
Comments on this page are welcome; please send them to rocco-at-cs.columbia.edu
Last updated 09/02/2008. | {"url":"http://www.cs.columbia.edu/theory/f08-theoryread.html","timestamp":"2014-04-18T13:13:13Z","content_type":null,"content_length":"12716","record_id":"<urn:uuid:bd7fcede-79c6-4a67-916c-1bd43903d8f7>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00292-ip-10-147-4-33.ec2.internal.warc.gz"} |
A quick question about gear ratios and how to find them
Ilyo: I am still not sure what you are trying to figure out, but without taking the drive gears apart, Danger's recommendation to count the turns of the driving and driven shaft is the easiest way to
figure gear ratio. If your 180 rpm motor drives something at 90 rpm your gear ratio is 2-1. If it drives it at 360 rpm it is 1-2. Diameter and circumference only come into play to determine mph, or
distance over time.
Regarding voltage and wattage for an electric motor: A motor will turn a certain amount of RPMs depending on its design, meaning how it is wired, for a given voltage. Double the voltage and you will
double the RPM, disregarding losses for friction, resistance, etc. (Some motors are designed to accept 110v or 220v, but the connections are different.) Wattage is a different calculation and is not relevant to speed except in an indirect way. It is the product of volts times amps and is a measure of the power used. (Ohm's law. Your 100-watt light bulb at 110 volts draws a touch less than one amp.) For your motor, double the volts and the amps will be halved, and the wattage will remain the same. To repeat, this depends on the original design of the motor and I am assuming you are not changing the
wiring or hookups. (Resistance also gets involved, but, again, I am assuming you are not changing the motor.) Be careful about increasing or decreasing the voltage because the motor may not be
mechanically or electrically designed to handle the extra loads imposed. There are also issues of direct and alternating current involved in motors. Don't substitute one for the other. I am really
curious about what you want to accomplish. PatC | {"url":"http://www.physicsforums.com/showthread.php?p=2157648","timestamp":"2014-04-17T12:45:45Z","content_type":null,"content_length":"70230","record_id":"<urn:uuid:f19c772a-18af-4305-bbf3-1159fdbc1d35>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00385-ip-10-147-4-33.ec2.internal.warc.gz"} |
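The ratio and power arithmetic above can be checked numerically (a quick illustrative sketch):

```python
def gear_ratio(driving_rpm, driven_rpm):
    # Ratio expressed as driving turns per driven turn.
    return driving_rpm / driven_rpm

def power_watts(volts, amps):
    # Electrical power: P = V * I.
    return volts * amps

print(gear_ratio(180, 90))    # 2.0 -> a 2:1 reduction
print(gear_ratio(180, 360))   # 0.5 -> a 1:2 step-up
print(round(100 / 110, 3))    # 0.909 A drawn by a 100 W bulb at 110 V
print(round(power_watts(110, 100 / 110), 6))  # 100.0 W either way
```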
One Method Of Pitching A Softball Is Called The
help with homework problem
One method of pitching a softball is called the "windmill" delivery method, in which the pitcher's arm rotates through approximately 360° in a vertical plane before the 198-gram ball is released at the lowest point of the circular motion. An experienced pitcher can throw a ball with a speed of 95.0 mi/h. Assume that the angular acceleration is uniform throughout the pitching motion, and take the distance between the softball and the shoulder joint to be 74.6 cm.
(a) Determine the angular speed of the arm in rev/s at the instant of release.
(b) Find the value of the angular acceleration in rev/s^2, and the radial and tangential acceleration of the ball just before it is released.
(c) Determine the force exerted on the ball by the pitcher's hand (both radial and tangential components) just before it is released.
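One way to work the numbers (an illustrative solution sketch, not the site's posted answer; it assumes the usual sign convention that at the lowest point the radial direction points up toward the shoulder, so the hand supplies the centripetal force plus the ball's weight):

```python
import math

m = 0.198                        # kg
r = 0.746                        # m
v = 95.0 * 1609.344 / 3600.0     # 95.0 mi/h in m/s, ~42.47
g = 9.80                         # m/s^2

# (a) angular speed at release
omega = v / r                        # rad/s
omega_rev = omega / (2 * math.pi)    # ~9.06 rev/s

# (b) uniform angular acceleration over one full revolution:
# omega^2 = 2 * alpha * theta, with theta = 2*pi rad
alpha = omega**2 / (2 * 2 * math.pi)  # rad/s^2
alpha_rev = alpha / (2 * math.pi)     # ~41.0 rev/s^2
a_radial = v**2 / r                   # ~2418 m/s^2
a_tangential = alpha * r              # ~192 m/s^2

# (c) radial force = centripetal force plus weight (at the lowest point);
# tangential force = m * a_tangential
F_radial = m * a_radial + m * g       # ~481 N
F_tangential = m * a_tangential       # ~38 N

print(round(omega_rev, 2), round(alpha_rev, 1))
print(round(a_radial), round(a_tangential))
print(round(F_radial, 1), round(F_tangential, 1))
```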
Hidden within Maxwell’s equations is the speed of light …
February 14, 2012 Posted by News under Cosmology, News, Physics 2 Comments
A reader writes, re the seven equations that rule our world …
My favorite are Maxwell’s equations with the wave equation and how all of them together explain light. Even hidden within them is the speed of light, as if the speed is an integral part of it’s
being! Why is it that specific number? My last two semesters were on those equations and they still blow me away and show how ordered the universe is, even down to light. And the fact that God
has allowed us minds to comprehend it is even more amazing.
Reminds some of us of our old high school science teacher, who liked to wear a tee shirt to teacher conferences* on which one of Maxwell's equations was written, ending "Let there be light!"
* He wore, as was expected in those days, a suit to school, under his lab coat. And no one dared tell him he couldn’t wear the tee shirt, in settings where business dress was not required.
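The "hidden" constant is easy to exhibit: the wave equation that falls out of Maxwell's equations propagates at c = 1/sqrt(mu0 * eps0), and plugging in the vacuum permeability and permittivity recovers the measured speed of light (a quick check using the classical value of mu0 and the CODATA value of eps0):

```python
import math

mu0 = 4 * math.pi * 1e-7    # vacuum permeability, T*m/A (classical defined value)
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m (CODATA)

c = 1.0 / math.sqrt(mu0 * eps0)
print(c)   # ~2.99792458e8 m/s
```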
2 Responses to Hidden within Maxwell’s equations is the speed of light …
1. General Relativity too!
Maxwell’s equations
Excerpt: Einstein dismissed the aether as unnecessary and concluded that Maxwell’s equations predict the existence of a fixed speed of light, independent of the speed of the observer, and as
such he used Maxwell’s equations as the starting point for his special theory of relativity (e=mc^2). In doing so, he established the Lorentz transformation as being valid for all matter and
not just Maxwell’s equations. Maxwell’s equations played a key role in Einstein’s famous paper on special relativity; for example, in the opening paragraph of the paper, he motivated his
theory by noting that a description of a conductor moving with respect to a magnet must generate a consistent set of fields irrespective of whether the force is calculated in the rest frame
of the magnet or that of the conductor.[31] General relativity has also had a close relationship with Maxwell’s equations. For example, Theodor Kaluza and Oskar Klein showed in the 1920s that
Maxwell’s equations can be derived by extending general relativity into five dimensions. This strategy of using higher dimensions to unify different forces remains an active area of research
in particle physics.
James Clerk Maxwell and the Christian Proposition
Excerpt: The minister who regularly visited him in his last weeks was astonished at his lucidity and the immense power and scope of his memory, but comments more particularly,[20] … his
illness drew out the whole heart and soul and spirit of the man: his firm and undoubting faith in the Incarnation and all its results; in the full sufficiency of the Atonement; in the work of
the Holy Spirit. He had gauged and fathomed all the schemes and systems of philosophy, and had found them utterly empty and unsatisfying – “unworkable” was his own word about them – and he
turned with simple faith to the Gospel of the Saviour.
2. I like this quote from the ’7 Equations that rule our world’ article:
There is (or may be) one equation, above all, that physicists and cosmologists would dearly love to lay their hands on: a theory of everything that unifies quantum mechanics and relativity.
The best known of the many candidates is the theory of superstrings. But for all we know, our equations for the physical world may just be oversimplified models that fail to capture the deep
structure of reality. Even if nature obeys universal laws, they might not be expressible as equations.
Well, to throw somewhat of a damper on the enthusiasm for finding a mathematical equation that is a 'theory of everything': first, String Theory and M-theory are both infamous for their lack of empirical confirmation;
‘What is referred to as M-theory isn’t even a theory. It’s a collection of ideas, hopes, aspirations. It’s not even a theory and I think the book is a bit misleading in that respect. It gives
you the impression that here is this new theory which is going to explain everything. It is nothing of the sort. It is not even a theory and certainly has no observational (evidence),,, I
think the book suffers rather more strongly than many (other books). It’s not a uncommon thing in popular descriptions of science to latch onto some idea, particularly things to do with
string theory, which have absolutely no support from observations.,,, They are very far from any kind of observational (testability). Yes, they (the ideas of M-theory) are hardly science.” –
Roger Penrose – former close colleague of Stephen Hawking – in critique of Hawking’s new book ‘The Grand Design’; The exact quote is in this following video clip:
Roger Penrose Debunks Stephen Hawking’s New Book ‘The Grand Design’ – video
String Theory Fails Another Test, the “Supertest” – December 2010
Excerpt: It looks like string theory has failed the “supertest”. If you believe that string theory “predicts” low-energy supersymmetry, this is a serious failure.
Second, Godel, who is considered by many to be the preeminent mathematician of the 20th century, showed that one cannot develop a ‘complete’ mathematical theory of everything without including
God as the basis of that mathematical ‘theory of everything’;
THE GOD OF THE MATHEMATICIANS – DAVID P. GOLDMAN – August 2010
Excerpt: we cannot construct an ontology that makes God dispensable. Secularists can dismiss this as a mere exercise within predefined rules of the game of mathematical logic, but that is
sour grapes, for it was the secular side that hoped to substitute logic for God in the first place. Gödel’s critique of the continuum hypothesis has the same implication as his incompleteness
theorems: Mathematics never will create the sort of closed system that sorts reality into neat boxes.
Godel also had this to say:
The God of the Mathematicians – Goldman
Excerpt: As Gödel told Hao Wang, “Einstein’s religion [was] more abstract, like Spinoza and Indian philosophy. Spinoza’s god is less than a person; mine is more than a person; because God can
play the role of a person.” – Kurt Gödel – (Gödel is considered by many to be the greatest mathematician of the 20th century)
And when we realize that God is required to bring ‘completeness’ to mathematics, and if we allow the possibility that God might have actually become incarnate in Christ, just as is resolutely
claimed in Christianity, then we find that a very credible, empirically backed, reconciliation between General Relativity and Quantum Mechanics ‘naturally’ jumps out at us:
General Relativity, Quantum Mechanics, Entropy, and The Shroud Of Turin – updated video
Centrality of Each Observer In The Universe and Christ’s Very Credible Reconciliation Of General Relativity and Quantum Mechanics
Verse and Music:
Philippians 2: 5-11
Let this mind be in you, which was also in Christ Jesus: Who, being in the form of God, thought it not robbery to be equal with God: But made himself of no reputation, and took upon him the
form of a servant, and was made in the likeness of men: And being found in fashion as a man, he humbled himself, and became obedient unto death, even the death of the cross. Wherefore God
also hath highly exalted him, and given him a name which is above every name: That at the name of Jesus every knee should bow, of things in heaven, and things in earth, and things under the
earth; And that every tongue should confess that Jesus Christ is Lord, to the glory of God the Father.
“In Christ Alone” / scenes from “The Passion of the Christ”
A primer on Karnaugh maps
Simplification can actually reduce the number of logic gates you need. And Karnaugh maps are a good simplification tool.
Last time, I introduced the ideas behind symbolic logic, logic circuits, and so on. This month, I'll talk more about minimizing logic equations; but before we start, I'd like to make a couple of comments.
Electronics Workbench
First, last month I mentioned the computer program, Electronics Workbench. I wondered out loud if the company that makes it was still around. A Google search reveals that they are (
www.electronicsworkbench.com). Their current product, MultiSim, seems to fill the niche formerly occupied by their Windows product. Unfortunately, a sales rep told me that they now concentrate only
on industrial customers and no longer sell individual licenses. For the record, I invited their sales manager to write a paragraph explaining their current product line, but got no reply. I guess
that's because I'm not a corporation. Should you be interested in tinkering with electronics sims, better choose one of the others.
Second, I should explain where I'm going with this new focus on electronics. Some of you may find electronics in general and digital electronics in particular interesting, and there's certainly
nothing wrong with that. Some of us still get that nostalgic glow when we smell rosin-core solder, so if I pique your interest, that's fine. But it's not my intention to turn you all into electronics
wizards. Rather, my focus here is on the use of symbolic logic. It's my view that a little knowledge of the rules and techniques of symbolic logic will be a nice tool to add to your toolbox and will
help you write better software.
NAND or NOR?
Finally, reader Hank Schultz e-mailed to chide me for using the terms NAND and NOR too loosely. At first, I thought Hank was being too critical. I distinctly remembered using symbols for Fairchild's
RTL logic gates as both NAND and NOR gates.
Here's the deal. Last month, I gave you DeMorgan's theorem, which I'll reproduce here for convenience: the complement of A·B is Ā + B̄, and the complement of A + B is Ā·B̄.
From this, it's easy to see that a logic gate designed to do the AND function can just as easily do the OR function; it's simply a matter of defining which voltage high or low you define as
representing a 1 (true) or 0 (false). I showed you symbols for each usage, for both RTL and TTL logic families (though the former is now mostly useful only to historians).
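Since there are only four input combinations, DeMorgan's duality is easy to confirm exhaustively. A quick sketch (mine, not from the article):

```cpp
#include <cassert>

// Exhaustively confirm DeMorgan's theorem over all four input pairs:
// NOT(a AND b) == (NOT a) OR (NOT b), and NOT(a OR b) == (NOT a) AND (NOT b).
bool demorganHolds()
{
    for (int a = 0; a <= 1; ++a)
        for (int b = 0; b <= 1; ++b)
        {
            if (!(a && b) != (!a || !b)) return false; // NAND vs. OR of inverts
            if (!(a || b) != (!a && !b)) return false; // NOR  vs. AND of inverts
        }
    return true;
}
```

This is exactly why the same physical gate serves as a NAND or a NOR, depending only on which voltage level you call "true."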
By convention, any signal that has a little circle at its connection is considered to be inverted. With TTL NAND gates, the inputs work correctly if a high voltage is 1. The circle is on the output.
I showed the circles on the input for the RTL gate.
Well, maybe I used the gates that way, but Fairchild never did. The URL www.wps.com/archives/solid-state-datasheets/Datasheets/Fairchild-uL900-914/1.JPG shows an original Fairchild data sheet,
copyrighted 1966. Their diagram clearly shows the μL914 as a NOR gate, not a NAND. They were consistent in that depiction. Figure 1 shows the symbol they used, which is perfectly consistent with
modern practice.
Figure 1: Equivalent logic
In my defense, consider Fairchild's own words: "The μL914 device is a dual two-input gate. Each gate performs the NAND/NOR logical function using RTL circuitry."
Ok, maybe they didn't use the NAND symbol for the 914, but they certainly agreed that one could use it as a NAND gate. And I did.
Minimal is good
Let's get back to logic minimization, with an example. X is a function of four variables, A through D. Remember, in this notation, "+" stands for "or," and "." stands for "and." Also remember, a bar
over a symbol or expression stands for "not."
Looks pretty messy, doesn't it? Nevertheless, this is often the kind of equation one gets after considering what a certain logical behavior must be. Let's see if we can simplify the equation by
factoring. We'll start by factoring out terms in A and
Note that the term
A little more factoring gives:
Now, C has to be either true or not true; it can't be something else:

C + C̄ = 1
And our equation becomes:
It's not obvious how we can simplify further, but perhaps we can. Let's try expanding the terms again, and factoring in a different way. We get:
Now, DeMorgan's theorem says:
The first factored term in Equation 8 becomes:
Sometimes it helps to multiply a term by 1, in the form of Equation 6:
Now we can combine the two terms that have D as a factor, to get:
In the first term, we have a thing
Equation 12 now becomes:
When all else fails, try expanding again:
Now look at the first and fourth terms. They factor to give, simply, D:
At this point, we can see that we have one term involving D alone, plus a lot of terms involving D̄.
Inside the parentheses, we have one term involving A alone, plus a lot of terms involving Ā.
There's that familiar form of Equation 6 again. Simplifying from here gives:
Are you kidding me? After all that work, the equation reduces to nothing but a single constant?
Yep, that's what it does. You might have guessed it's because I designed the equation that way. Even so, I wasn't faking the simplification; I needed every step to get to the final form.
Admittedly, this is an extreme case. But it serves to illustrate a very important point: simplifying logic equations can be important. If we were to implement Equation 2 directly, using hardware
logic gates, we'd need 27 gates. After simplifying, we need none at all; only a single wire from the high (plus, true, 1) supply to X. Perhaps, if X feeds into some other equation, we'd get even
further simplifications there.
Now you can see why my friend's computer program that simplified logic automatically was so valuable. Equation 2, as complicated as it is, is only one equation with four inputs. Real systems might
have hundreds of such equations, with dozens of inputs. If you're blessed with so much intuition that you could look at Equation 2 and see immediately that its simplification is trivial, you'll go
far. Most of us have trouble even finding the best simplification. I had trouble doing this myself, and I'm the one who wrote down the equation in the first place. One wrong turn in the process, and
we might easily have missed the key step.
I mentioned last month that there's more than one way to represent a given logical relationship. The most fundamental one is the truth table, in which you write down the desired output for all
possible sets of inputs. The other is the logic equation and a third is an electronic circuit mechanizing the relationship. In this particular example, the truth table would have been the one to use.
The truth table would have shown 1s for every single set of inputs, so the logic's trivial nature would have been obvious.
We don't, however, usually get to choose the form of the problem as it's given to us. We can convert from one form to another, but the advantage of one form over the other is rarely so clear. What we
need, in general, are techniques that lead us to minimizations that aren't so obvious.
The Karnaugh map
One such technique is called the Karnaugh map, and it's one of the neatest approaches you'll ever see. I remember when I first saw it in a GE manual, I felt as though I'd found the pot of gold at the
end of the rainbow.
Here's how it works. First, for four inputs (the case where the Karnaugh map works best), make a 4x4 array as in Figure 2.
Figure 2: The basic Karnaugh map
Two things are noteworthy about this map. First, we've arranged the 16 possible values of the four inputs as a 4x4 array, with two bits encoding each row or column.
The second and key feature is the way we number the rows and columns. They aren't in binary sequence, as you might think. As you can see, they have the sequence 00, 01, 11, 10. Some of you may
recognize this as a Gray code.
Why this particular sequence? Because the codes associated with any two adjacent rows or columns represent a change in only one variable. In a true binary counting code, sometimes several digits can
change in a single step; for example, the next step after 0x1111 is 0x10000. Five output signals must change values simultaneously. In digital circuits, this can cause glitches if one gate delay is a
bit faster or slower than another. The Gray code avoids the problem. It's commonly used in optical encoders.
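The Gray code used for the map's row and column labels can be generated with the standard bit trick g = i ^ (i >> 1). This sketch (mine, not the article's) builds the 2-bit sequence 00, 01, 11, 10 and checks the single-bit-change property:

```cpp
#include <vector>
#include <cassert>

// Generate an n-bit Gray-code sequence: code i is i XOR (i >> 1).
std::vector<unsigned> grayCode(unsigned bits)
{
    std::vector<unsigned> seq;
    for (unsigned i = 0; i < (1u << bits); ++i)
        seq.push_back(i ^ (i >> 1));
    return seq;
}

// Count how many bits differ between two codes (1 for Gray neighbors).
int bitsChanged(unsigned a, unsigned b)
{
    unsigned x = a ^ b;
    int n = 0;
    while (x) { n += x & 1u; x >>= 1; }
    return n;
}
```

Note that the sequence also wraps: the last code differs from the first in only one bit, which is why groups on a Karnaugh map may wrap around the edges.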
Suppose the output value for two adjacent cells is the same. Since only one input variable is changed between the two cells, this tells us that the output doesn't depend on that input. It's a "don't
care" for that output.
Figure 3: Looking for patterns
Look at Figure 3. Group X is true for inputs ABCD = 0100 and 1100. That means that it doesn't depend on A, and we can write:

X = B·C̄·D̄
Similarly, Group Y doesn't depend on B or C. Its value is:
Note that the groupings don't necessarily have to be in contiguous rows or columns. In Group Z, the group wraps around the edges of the map.
If we can group cells by twos, we eliminate one input. By fours, two inputs, and so on. If the cell associated with a given output is isolated, it depends on all four inputs, and no minimization is
The Karnaugh map gives us a wonderful, graphical picture that lets us group the cells in a near-optimal fashion. In doing so, we minimize the representation. Neat, eh?
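The Group X claim above can be checked mechanically: the term B·C̄·D̄ should be true at exactly the two cells ABCD = 0100 and 1100, whatever the value of A. A quick sketch (mine):

```cpp
#include <cassert>

// Evaluate the candidate group term B AND (NOT C) AND (NOT D).
// A is accepted but deliberately unused: the group doesn't depend on it.
bool groupTerm(int A, int B, int C, int D)
{
    (void)A; // the two-cell group eliminated this input
    return B && !C && !D;
}

// Count the cells of the 16-cell map where the term is true, confirming
// each one has B=1, C=0, D=0 (A free). Returns -1 on any mismatch.
int countTrueCells()
{
    int count = 0;
    for (int n = 0; n < 16; ++n)
    {
        int A = (n >> 3) & 1, B = (n >> 2) & 1, C = (n >> 1) & 1, D = n & 1;
        if (groupTerm(A, B, C, D))
        {
            ++count;
            if (!(B == 1 && C == 0 && D == 0)) return -1;
        }
    }
    return count;
}
```

A pair of adjacent cells always behaves this way: one input drops out, leaving a three-variable product term.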
Figure 4: Equation 2, mapped
Now let's see how Equation 2 plots onto the Karnaugh map, as shown in Figure 4. To make it easier to see, I'll assign a different lowercase letter to each term of the equation:
In this case, the individual groups don't matter much. All that counts is the largest group we can identify, which is of course the entire array of 16 cells. The output X is true for all values of
the inputs, so all four are don't-care variables.
As you can see, using a Karnaugh map lets us see the results and, usually, the optimal grouping at a glance. It might seem that drawing a bunch of graphs is a tedious way to minimize logic, but after
a little practice, you get where you can draw these simple arrays fast and see the best groupings quickly. You have to admit, it's a better approach than slogging through Equations 2 through 19.
The decade counter
Next, I'd like to show you a classic problem that we used to think was really important, at least until we got chips that did all the work for us. It's the decade counter. In this problem, we have
four inputs and four outputs, which happen to be the next state of the four inputs. I'll denote the current states by the upper case letters, A through D, and the output (next value) states,
lowercase a through d. Table 1 gives the truth table.
Note that, although a 4-bit number can encode 16 unique values, there are only 10 rows in the truth table. The other six rows aren't needed because the counter should never be in those states. When
we draw the Karnaugh maps for this truth table, the unused cells will show up as six "don't-care" states. Quite literally, we don't care what value is output for those cases, since the inputs should
never occur.
I should note in passing that if we truly implement this design, we had danged well better make sure that the don't-care states really never, ever occur. This may require some care when things power up.
Figure 5: Looking for patterns in decade counters
For the decade counter, we'll need four Karnaugh maps as shown in figure 5, one for each output value. In the figures, note how I let the groupings spill over into the don't-care areas if it will let
me make larger groups.
The logic equations are:
These are by no means the only possible choices of terms, but we have reasonable assurance that they are, if not optimal, at least nearly so. That's because we used the groupings of the Karnaugh map
to get the largest groups we could.
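The figure with the equations didn't survive in this copy, so the set below is my own reading of typical K-map groupings for Table 1; it may differ from the groupings chosen in the article, but any correct set must agree with the truth table. A brute-force check against next-state = (n + 1) mod 10, with A as the most significant bit:

```cpp
#include <cassert>

// One candidate set of next-state equations for the decade counter.
// n encodes the current state ABCD (A = MSB); returns the next state.
int nextState(int n)
{
    int A = (n >> 3) & 1, B = (n >> 2) & 1, C = (n >> 1) & 1, D = n & 1;
    int a = (B && C && D) || (A && !D);               // a = B.C.D + A.notD
    int b = (B && !C) || (B && !D) || (!B && C && D); // b = B.notC + B.notD + notB.C.D
    int c = (C && !D) || (!A && !C && D);             // c = C.notD + notA.notC.D
    int d = !D;                                        // the LSB simply toggles
    return (a << 3) | (b << 2) | (c << 1) | d;
}
```

Running all ten legal states through these equations confirms they count 0, 1, ..., 9, 0; the six illegal states are exactly the don't-cares the groupings were allowed to absorb.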
An exercise for the student
As another example, I'll give you a problem and its solution. This one encodes the output of a decade counter, to drive a seven-segment display.
Figure 6: The seven-segment display
The standard nomenclature for this display is shown in figure 6. It assigns the letters a through g to the segments. Given any combination of 4-bit numbers, our goal is to light the combination of
the seven segments in such a way as to display the proper decimal digit.
The truth table is shown in Table 2. Note that, as in Table 1, I'm using A as the most significant bit of the binary number.
Table 2: Seven-segment decoder
A B C D a b c d e f g
As I said, I was going to show you how to generate the equations for the decoder. In retrospect, though, I think it's much better to involve a little audience participation. You have the truth
table, you have the technology. You can use the practice, so how about you solve the problem instead of me?
You're going to need seven different Karnaugh maps; one for every segment of the display. Remember, we're talking decade counter here, so don't try to generate the remaining hex digits. Because only
10 of the 16 states are going to occur, you can use the remaining six as don't-care states, as I did in Figures 5a through 5d.
One hint: look for similar patterns in each of the seven maps. If you can minimize the logic the same way on more than one map, it would lower the total gate count for a practical implementation.
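As a spot check for one of the seven maps: segment-lighting conventions vary (especially for 6, 7, and 9), but under the common convention where segment a is lit for every digit except 1 and 4, the six don't-care states let segment a minimize to a = A + C + B·D + B̄·D̄. That derivation is mine, not necessarily the article's answer; here is a brute-force verification:

```cpp
#include <cassert>

// Segment 'a' of a seven-segment display, per one common convention:
// lit for 0, 2, 3, 5, 6, 7, 8, 9 and dark for 1 and 4.
bool segmentAExpected(int digit)
{
    return digit != 1 && digit != 4;
}

// Candidate minimized expression, exploiting don't-cares for 10..15:
// a = A + C + B.D + notB.notD, with A as the most significant bit.
bool segmentAMinimized(int digit)
{
    int A = (digit >> 3) & 1, B = (digit >> 2) & 1,
        C = (digit >> 1) & 1, D = digit & 1;
    return A || C || (B && D) || (!B && !D);
}
```

The check only needs to pass for digits 0 through 9; what the expression does for 10 through 15 is irrelevant, which is precisely what made the four-term grouping possible.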
Next month, I'll show you my solution and share a personal tale about how I tried to build my own displays. I'll also show you the more powerful method by Quine and McCuskey.
See you then.
Jack Crenshaw is a senior software engineer at Spectrum-Astro and the author of Math Toolkit for Real-Time Programming, from CMP Books. He holds a PhD in physics from Auburn University. E-mail him at
Reader response
K-maps are the most underutilized tools in the industry. You make a brief reference to Gray codes and problems with race conditions. It is worth pointing out that one of the most powerful uses of
K-maps is the ability to resolve race conditions (particularly those caused by propagation delays in the complement of a signal) by adding redundant logic terms that "overlap" adjoining groups. It is
almost impossible to identify and resolve these types of glitches with other reduction methods with which I'm familiar.
Pete Jungwirth
Firmware Engineer
Renaissance Learning
While Karnaugh maps (even multidimensional ones) were a good tool, we let the synthesis tool take care of the logic minimization for us these days.
Mike Murphree
I found your site searching on the internet for Karnaugh maps, and I found it a worthy explanation to stop and read through. What I would like to know, is what is your solution for the above 8
segment display decoder. I'm in the process of figuring out how to make such a display with 16 rows to display the a, b, c, d, e, and f, too. I would be interested in seeing what you came up with as
Karnaugh maps. Your site helped me a lot, and I'd like to return to find other topics about decoders and circuits and such to learn about. Thanks for your input!
Suzanne Rogers
Never forget that K-maps don't always work! Whereas Boolean algebra can always find a minimum expression, K-maps cannot be expected to do so. Assuming that a technique that simplifies calculations
will always work is one of the biggest errors one can make. | {"url":"http://www.embedded.com/electronics-blogs/programmer-s-toolbox/4024897/A-primer-on-Karnaugh-maps","timestamp":"2014-04-21T02:14:05Z","content_type":null,"content_length":"84020","record_id":"<urn:uuid:1a35e0b0-bf0c-454f-8c22-41eeb204fcbc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00391-ip-10-147-4-33.ec2.internal.warc.gz"} |
Differential Equation system Problem
March 2nd 2008, 11:13 AM #1
Junior Member
Mar 2008
Differential Equation system Problem
Can anybody help?
A system is modelled by Tx' + x = Ky
given the following results deduce the parameters T and K
. When the input y(t) of 5 units was applied the output x(t) ultimately settled with a value of 20 units
. when a sinusoidal input was applied at an input frequency of 10 rad/s the output lagged behind the input by exactly -45 degrees.
ii) if y(t) was a step change drive, at what time would x(t) reach half of its final steady state value ( T & K deduced previously)
$T \frac{dx}{dt} + x = 5K$ has the solution $x = 20$.
$T \frac{dx}{dt} + x = K \sin (10t)$ has the solution $x = A \sin \left( 10\left( t - \frac{\pi}{4} \right) \right)$.
Use the above to get the value of T and K.
I have no idea on where to even start this one. Has anybody got any pointers? I assume you end up with a simultaneous equation of some form. But i'm not sure how to get there.
You substitute the solutions given into the DE's and solve for what you don't know:
Sub x = 20 into the first:
20 = 5K => K = 4.
Sub K = 4 into the second:
$T \frac{dx}{dt} + x = 4 \sin (10t)$
Now sub $x = A \sin \left( 10\left( t - \frac{\pi}{4} \right) \right) =$(expand using compound angle formula) into the DE. Equate coefficients of sin and cos to get two simultaneous equations in
A and T. Solve those equations to get the value of A and T.
My knowledge of the compound angle formula is a bit rusty. So i've cheated and used my calculator. This has returned a very long expansion.
A(sqrt2/2 sin(t) - sqrt2/2 Cos(t)) x {512(sqrt2/2 cos(t) + sqrt2/2 sin(t))^9 - 1024(sqrt2/2 Cos(t) + sqrt2/2 sin(t))^7 + 672(sqrt2/2 Cos(t) + sqrt2/2 sin(t))^5 - 160(sqrt2/2 Cos(t) + sqrt2/2 sin
(t))^3 + 10(sqrt2/2 Cos(t) + sqrt2/2 sin(t))}
this is not the smallest expresion to put into a DE. is my calculator right? i suspect it is! as i've tested it out on some smaller equations, or is there an easier method of solving the original
Actually I was anticipating you'd do something like the following:
$\sin \left( 10\left( t - \frac{\pi}{4} \right) \right)$
$= \sin \left( 10 t - \frac{10 \pi}{4} \right)$
$= \sin \left( 10 t - \frac{5 \pi}{2} \right)$
$= \sin \left( 10 t - \frac{\pi}{2} \right)$
$= - \cos (10 t)$.
You could also use symmetry and the complementary angle formula to get the last line.
Substituting a solution where the argument of the trig function is 10t is pretty essential since the term on the right hand side is a a trig with an argument of 10t. Hard to justify equating
coefficients of sin and cos on each side if they are sining and cosing different arguments ......
I wasn't far off then!!!
ok so X=-Acos(10t)
X'=10Asin(10t) (hopefully)
putting these into the DE
10AT Sin(10t) - A Cos(10t) = 4Sin(10t)
I'm not confident this is correct or where i'm headed as trig tends to do stange things!
Well, there's a problem now because you require:
10AT = 4 AND A = 0 .......
I'm not sure how you'll resolve this - perhaps get clarification on the original question .....?
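For what it's worth (my own sketch, not from the thread): the contradiction goes away if the assumed steady state keeps a 45-degree phase lag rather than a time shift of pi/4, i.e. x = A sin(10t - pi/4). For a first-order lag T x' + x = K sin(wt), the phase lag is arctan(wT), so a 45-degree lag at w = 10 forces 10T = 1, giving T = 0.1; the step result x = 20 at y = 5 already gave K = 4, and matching amplitudes gives A = 4/sqrt(2). A numerical residual check of those values, plus part (ii):

```cpp
#include <cmath>
#include <cassert>

// Residual of T x' + x - 4 sin(10 t) for the trial steady state
// x(t) = A sin(10 t - pi/4), with T = 0.1 and A = 4 / sqrt(2).
double residual(double t)
{
    const double T   = 0.1;
    const double A   = 4.0 / std::sqrt(2.0);
    const double phi = -std::atan(1.0); // -pi/4, a 45-degree lag
    double x  = A * std::sin(10.0 * t + phi);
    double xp = 10.0 * A * std::cos(10.0 * t + phi); // dx/dt
    return T * xp + x - 4.0 * std::sin(10.0 * t);
}

// Part (ii): the step response x(t) = 20 (1 - exp(-t/T)) reaches half
// its final value when exp(-t/T) = 1/2, i.e. at t = T ln 2.
double halfRiseTime()
{
    return 0.1 * std::log(2.0); // about 0.069 s
}
```

The residual vanishes at every sample point, so T = 0.1 and K = 4 are consistent with both observations, and half of the steady-state value is reached at roughly 0.069 seconds.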
Combinatorial Games
Erik Demaine's Combinatorial Games Page
Recently I have become quite interested in combinatorial game theory, particularly algorithmic combinatorial game theory. In both settings, the object of interest is a combinatorial game, which
usually involves complete information, with no hidden cards and no randomness--a pure strategy game. In general, combinatorial game theory is a suite of techniques for analyzing such games. The
algorithmic side asks for efficient algorithms for playing games optimally, or for computational intractability results suggesting that no such efficient algorithms exist.
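As a concrete taste of the algorithmic side (my example, not from this page): Nim is the textbook combinatorial game with an efficient optimal strategy. By Bouton's theorem, a position is a loss for the player to move exactly when the XOR ("nim-sum") of the pile sizes is zero:

```cpp
#include <vector>
#include <cassert>

// Bouton's theorem: the player to move wins (under optimal play)
// iff the XOR of all pile sizes is nonzero.
bool firstPlayerWins(const std::vector<int>& piles)
{
    int nimSum = 0;
    for (int p : piles) nimSum ^= p;
    return nimSum != 0;
}
```

This constant-work-per-pile test is the kind of "efficient algorithm for playing optimally" that algorithmic combinatorial game theory looks for; hardness results show that many of the games below admit no such shortcut unless, say, P = PSPACE.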
Survey Paper
I recently completed a survey paper about results in algorithmic combinatorial game theory, plus a short introduction to combinatorial game theory. Two versions of ``Playing Games with Algorithms:
Algorithmic Combinatorial Game Theory'' are available: the full draft (recommended), and a shorter version appearing in the 26th Symposium on Mathematical Foundations in Computer Science.
How Many Players?
One main categorization of combinatorial games is how many players are involved in play. There is a large body of work on two-player games. In particular, the book Winning Ways by Berlekamp, Conway,
and Guy builds a beautiful theory for classifying games.
Another type of combinatorial game is a one-player game, also called a puzzle. Many games in real life are essentially one-player. One-player games also arise naturally when examing a portion of a
two-player game.
The final main type of combinatorial game is a zero-player game. The main example of such a game is a cellular automaton such as John H. Conway's Game of Life.
My Research
I am particularly excited by one-player combinatorial games, and would like to advocate their study. Here are some combinatorial puzzles we have analyzed (more information to come soon):
• Triangulation games: A variety of geometric games involving the construction, transformation, or marking of the edges of a planar triangulation. We give polynomial-time winning strategies in
several cases.
• Tetris: The classic computer game in which tetrominoes (pieces made up of 4 unit squares) fall one at a time into a rectangular board, the player can slide the piece left or right during the
fall, and any completely filled rows are erased. We prove that the offline (perfect-information) version with a generalized board is NP-complete under many variations on the rules and goal of the
• Clobber: A recently invented two-player perfect-information game in which players alternate capturing an opponent's stone with a horizontally or vertically adjacent stone of theirs, and the goal
is to move last. We analyze the solitaire version of Clobber where one player controls both sides, and the goal is to remove as many stones as possible.
• Sliding blocks and NCL: Sliding-block puzzles consist of a collection of rectangular blocks in a rectangular box, and the goal is to move one piece to a particular location. We prove that these
puzzles are PSPACE-complete even for 1-by-2 blocks, and when the goal is just to move one piece at all. We also prove several other puzzles PSPACE-complete using a general model called
Nondeterministic Constraint Logic.
• Pushing blocks: Puzzles involving a robot walking around in a square grid, pushing square-block obstacles subject to various rules, in order to reach a specified goal position.
• Clickomania: A puzzle game in which the player clicks on a connected group of two or more blocks of a common color, and blocks above fall down to take their place. The goal is to remove as many
blocks as possible.
• Phutball: A two-player game by Conway involving one black stone (the ball) and several white stones placed on a grid. In each turn, a player can drop a white stone on an empty grid point, or
"kick" the black stone multiple times over sequences of white stones (horizontally, vertically, or diagonally) and remove those white stones. We analyze the complexity of deciding "mate in 1",
which is a combinatorial puzzle.
• Moving coins: A wide variety of coin-sliding and coin-moving puzzles can be solved in polynomial time, leading to several new puzzles that are difficult for humans to solve but are guaranteed to
be solvable by algorithmic methods.
• Black box
You may also be interested in some puzzles we have designed for fun.
(See also the survey paper mentioned at the top of this page.)
There are several other webpages on the topic of combinatorial games.
Off the web, the classic and most complete reference on combinatorial game theory is Winning Ways, by Elwyn R. Berlekamp, John H. Conway, and Richard K. Guy. The two-volume book was published by
Academic Press (London) in 1982. Unfortunately, it is currently out-of-print, but work is underway to publish a second edition. Another excellent reference on combinatorial game theory, with a more
formal mathematical slant, is On Numbers and Games by John H. Conway, also published by Academic Press (London), 1976. | {"url":"http://erikdemaine.org/games/","timestamp":"2014-04-16T13:03:17Z","content_type":null,"content_length":"7910","record_id":"<urn:uuid:be832eae-cce6-413a-a646-ee6e8cf3221a>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00612-ip-10-147-4-33.ec2.internal.warc.gz"} |
Help with Craps.. Still not Understandin
We must create a craps game...
The teacher suggest we use the following as a plan to attack:
0. Use two global variables to represent the dice.
1. Define local variables
3. Lost Flag [boolean]
4. Won Flag [boolean]
5. Holds point "target" which has to be re-rolled to win [int]
6. Holds value of re-rolls [int]
7. seed random number generator
8. generate first die roll [store result to point variable]
9. show results to console window
10. check if lost on first roll [set lost flag]
11. check if won on first roll [set won flag]
12. while we haven’t won or lost loop to re-roll until point is made or we crap out
0. display point
1. hold console window open
2. roll and show dice [store result to point variable]
3. check if roll is equal to point [set won flag]
4. check if we crapped out [set lose flag]
8. display loss status if lost
9. display won status if won
My Code is Nothing like that: The following is my code:
//Christy Windham

#include <iostream>
#include <ctime>
#include <cstdlib>
using namespace std;

// Returns a random integer from a to b inclusive.
int GetRandom(int a, int b)
{
    int r;
    r = a + rand() % (b - a + 1);
    return (r);
}

int main()
{
    int dieOne = 0;
    int dieTwo = 0;
    int roll = 0;
    int point = 0;
    int win = 0;
    int loss = 0;

    srand((unsigned int)time(NULL));

    dieOne = GetRandom(1, 6); //Creates dice
    dieTwo = GetRandom(1, 6);
    roll = dieOne + dieTwo;
    cout<<dieOne<<" plus "<<dieTwo<<" = "<<roll<<endl; //Prints random dice numbers to screen

    if(roll==7 || roll==11) //Win on first roll
        win++;
    else if (roll==2 || roll==3 || roll==12) // Lose on first roll
        loss++;
    else if (roll==4 || roll==5 || roll==6 || roll==8 || roll==9 || roll==10)
    {
        point=roll; //point set
        do
        {
            dieOne=GetRandom(1, 6); //rolls dice again
            dieTwo=GetRandom(1, 6);
            roll = dieOne + dieTwo;
            cout<<dieOne<<" plus "<<dieTwo<<" = "<<roll<<endl;
            if(roll==point) //wins if player rolls point
                win++;
            else if(roll==7) //lose if player rolls 7
                loss++;
            //nothing otherwise
        }while(roll!=point && roll!=7); //keep going until point or 7 is rolled
    }

    if(win > 0) //display result
        cout<<"You won!"<<endl;
    else
        cout<<"You lost."<<endl;

    return 0;
}
That is nothing like what he wants, though. First, I must make a function to create random numbers when the dice are rolled. For instance, if a number is passed through that function it will give me a
random result.
He wants this in the program basically:
Show that you can create a function that generates a random number from
1 to n and return the generated random number. Also show that you can display
the random number returned by this function to the console window.
HELP ME!!
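A minimal sketch of the kind of helper the assignment describes (the function name is mine; seed once with srand, then call it as often as you like):

```cpp
#include <cstdlib>  // rand, srand
#include <iostream> // cout

// Return a pseudo-random integer from 1 to n inclusive.
int randomOneToN(int n)
{
    return rand() % n + 1;
}

// Show one generated value on the console window.
void showRoll(int n)
{
    std::cout << "random 1.." << n << ": " << randomOneToN(n) << std::endl;
}
```

Note that rand() % n has a slight bias toward small values unless n divides RAND_MAX + 1 evenly; for a homework die roll that bias is negligible.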
Math Games
If you are the parent of a child who has just finished second grade and will be going into the third grade in a month or two, how are you going to keep his/her math skills honed so that he/she is
completely ready for third grade math? As an elementary math specialist, I have found that math games are perfect summer skill sharpeners.
By the end of grade two, students should:
• understand place value and number relationships in addition and subtraction
• use simple concepts of multiplication
• be able to measure quantities with appropriate units
• classify shapes and see relationships among them by paying attention to their geometric attributes
• collect and analyze data and verify the answers.
As a veteran second grade teacher, I have found that understanding place value and number relationships is the most important skill that second graders need to practice and master.
The following games are two of many second grade/third grade games that help children understand and have fun with place value:
Who Will Win?
What you need:
2 players
1 die
deck of cards, 10s and face cards removed
counters (pennies, paperclips, etc.)
Player #1 takes a card and turns it over for all to see. Player #2 does the same. Player #1 takes a second card and turns it over for all to see. Player #2 does the same. Each player uses his/her two
cards to make a two-digit number. Players say their numbers out loud. Player #1 rolls the die to determine who will earn a counter.
1,3,5 odd roll – the lower number earns a counter
2,4,6 even roll – the higher number earns a counter.
Players continue building numbers and alternating the throw of the die. The first player to accumulate 10 counters is the winner.
Get Close to 100
What you need:
2 – 4 players
deck of cards, 10s and face cards removed
paper and pencils for each player
The object of the game is to make a two-digit addition problem that comes as close to 100 as possible.
Shuffle cards and place them face down in a pile.
Player #1 turns over 4 cards and moves the cards around until he/she has created a two-digit addition problem whose sum will be as close to 100 as he/she can make it. Player #1 records this problem
on his/her recording sheet. Player #2 checks for addition accuracy.
Example: Player #1 draws a 4, a 7, a 2, and a 5. He/she moves the cards around until she/he decides that 47 + 52 = 99 is the closest that he/she can get.
Player # 2 draws four cards and does the same.
The points for each round are the difference between their sum and 100.
Example: A sum of 95 scores 5 points and so does a sum of 105.
Players compare scores at the end of this first round. They put their four cards in a discard pile and player #2 begins first and turns over four more cards for the second round.
After six rounds, players total their points and the player with the lowest score wins.
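Finding the best arrangement of the four cards is a nice little exhaustive-enumeration exercise. Here is a hypothetical Python sketch (function name mine) that tries every ordering and keeps the sum closest to 100:

```python
from itertools import permutations

def best_close_to_100(cards):
    """Best (sum, points) over all two-digit + two-digit arrangements of
    four card values; points are the distance from 100 (lower is better)."""
    best = None
    for a, b, c, d in permutations(cards):
        s = (10 * a + b) + (10 * c + d)
        if best is None or abs(s - 100) < abs(best - 100):
            best = s
    return best, abs(best - 100)

# The article's example hand: 4, 7, 2, 5 gives 47 + 52 = 99, scoring 1 point.
print(best_close_to_100([4, 7, 2, 5]))  # (99, 1)
```

So for the hand in the example above, 99 really is the best a player can do.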
Math Games for First Graders
I think that every topic in mathematics is fun if you master it and understand it. Since fun has always been a part of my teaching philosophy, I have found that math games are an enormously engaging and effective way to help children on their way to understanding and mastery in math. The greatest benefit of math games is the way they improve students’ basic arithmetic and problem-solving skills.
By the end of first grade, students should know or be able to do the following:
• understand and use the concept of ones and tens in the place value number system
• add and subtract small numbers with ease
• measure with simple units
• locate objects in space.
• describe data
• analyze and solve simple problems.
I have taught first grade for many years. If I could pick one math skill that I think is the most important skill for first graders to master, it would be the ability to know (without counting on
fingers) all the addition facts to 10. Counting on fingers is a good beginning strategy, but children need to have all the facts in long-term memory and be able to recall them automatically.
What are all the facts that add to 10? (10+0, 9+1, 8+2, 7+3, 6+4, 5+5, 4+6, 3+7, 2+8, 1+9, 0+10)
What are all the facts that add to 9? 8? 7? 6? 5? 4? 3? 2?
The following game is one of many that help children master these basic addition skills, while having fun:
Add-em Up
What you need:
2 players
Add-em Up game board for each player – each player writes the numbers 2-12 horizontally at the bottom of their papers.
2 dice
Counters – paper clips, pennies, etc.
Players place a counter above each number.
Player #1 rolls the dice and adds the 2 numbers. He/she may then remove the counter over the sum from the game board or the counters over any 2 numbers that add up to that same sum.
Example: Player #1 rolls a 3 and a 4. He/she may remove the counter above the 7 or the counters above any combination for 7, such as 2 & 5 or 3 & 4.
Players take turns rolling the dice and removing counters. When a player cannot remove counters that match the sum rolled or a combination, he/she loses that turn.
Play continues until neither player can remove counters. The player with the most counters removed wins.
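The interesting decision each turn is which counters a dice sum lets you remove. A hypothetical Python helper (names mine) can list the legal options:

```python
from itertools import combinations

def removal_options(remaining, roll):
    """Legal removals for a dice sum: the sum itself (if still covered),
    or any two still-covered numbers that add up to it."""
    options = []
    if roll in remaining:
        options.append((roll,))
    options += [p for p in combinations(sorted(remaining), 2) if sum(p) == roll]
    return options

board = set(range(2, 13))         # counters start on the numbers 2 through 12
print(removal_options(board, 7))  # [(7,), (2, 5), (3, 4)]
```

Since the board runs 2 through 12, the pair options for a roll of 7 are 2 & 5 and 3 & 4.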
Fun and (Math) Games!
“Saturday School A Success At Lincoln Elementary,” reads the headline from Madison, Wisconsin. Even on a Saturday, and even on a day that felt like summer, dozens of students at one elementary school spent the morning in class.
Every Saturday since the end of January, about 100 students have gathered for about two hours a week to get a little extra work done and to do so while having a little bit of fun. It is easy to
assume that kids would want to be anywhere but school on a weekend morning, but this program is proving to be different. Instead of traditional instruction, students learn through playing games.
It seems somehow sad to me that kids are allowed to have fun with math only on Saturdays. Why isn’t math engaging, challenging, and fun all the time? As a veteran elementary teacher, I do understand
that teachers feel like they don’t have enough time to teach all of the content within the course of a school year. Why on earth would they ever want to add more material in the form of math games
when they can’t seem to finish the assigned math textbook? Turns out that making time to incorporate math games in the classroom can lead to rich results. I’ve been using games to teach mathematics
for many years, and here are some of the significant benefits of doing so:
Benefits of Using Math Games in the Classroom
• Meets Mathematics Standards
• Easily Linked to Any Mathematics Textbook
• Offers Multiple Assessment Opportunities
• Meets the Needs of Diverse Learners (UA)
• Supports Concept Development in Math
• Encourages Mathematical Reasoning
• Engaging (maintains interest)
• Repeatable (reuse often & sustain involvement)
• Open-Ended (allows for multiple approaches & solutions)
• Easy to Prepare
• Easy to Vary for Extended Use & Differentiated Instruction
• Improves Basic Skills
• Enhances Number and Operation Sense
• Encourages Strategic Thinking
• Promotes Mathematical Communication
• Promotes Positive Attitudes Toward Math
• Encourages Parent Involvement
Pick a skill that your students need to practice. One of the big ones is subtraction at any level. Kindergarteners through 6th graders find subtraction to be a challenge. Here’s a great double-digit
subtraction game:
500 Shakedown
What you need:
2 players
2 dice
paper and pencil for each player
Each player starts with 500 points.
Player #1 rolls the dice and makes the biggest two-digit number he/she can. Now he/she subtracts this number from 500.
Example: Player #1 rolls a 2 and a 4 and makes 42. Now he/she subtracts 42 from 500.
Player #2 rolls the dice and does the same. Players continue to alternate turns. The first person to reach 0 wins.
There’s only one complication! When you throw a 1, the rules change. You don’t subtract. Instead you make the smallest two-digit number you can and add.
Example: If the player throws a 1 and a 5, the smallest two-digit number is 15. So he/she adds 15 to the total.
Variation: Start with 5,000 points and use three dice or start with 50,000 and use 4 dice.
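The turn rule, including the add-on-a-1 twist, fits in a few lines of Python. A hypothetical sketch (the function name is mine; exactly how players handle landing on or below 0 is left to them):

```python
def shakedown_turn(score, d1, d2):
    """One 500 Shakedown turn: normally subtract the biggest two-digit
    number you can make; if a 1 is rolled, add the smallest instead."""
    hi, lo = max(d1, d2), min(d1, d2)
    if 1 in (d1, d2):
        return score + (10 * lo + hi)  # smallest two-digit number
    return score - (10 * hi + lo)      # biggest two-digit number

print(shakedown_turn(500, 2, 4))  # 500 - 42 = 458
print(shakedown_turn(458, 1, 5))  # 458 + 15 = 473
```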
Summer Math for the Fun of It!
Summer is coming. What are you going to do to keep your child’s math skills from losing ground? Research has shown that there is clearly a case for “use it or lose it” with math. Teachers know that students return to school in the fall with a 1 to 2 month loss in math skills. Not good, and definitely not necessary.
Carrie Launius, a veteran teacher, has this to say in her article titled “Keeping Kids Busy During Summer”: “Card games like solitaire are very good for kids to practice mental math and math thinking, as well as Gin, Rummy, or Spades.”
It is essential that, over the summer vacation, parents create active and memorable learning experiences for their children in math. “Children learn more effectively when information is presented through the use of active learning experiences instead of passive ones,” reports Marilyn Curtian-Phillps, M. Ed.
Parents often get caught up in having their child do workbook pages from some expensive book that they order or buy from a teacher store. Just give them authentic, real world experiences where
learning can take place naturally. Math games are much more appropriate and engaging than workbooks, dittos, or even flashcards.
Children throw themselves into playing games the way they never throw themselves into filling out workbook pages or dittos. And games can help children learn almost everything they need to master in
elementary math. Good, child-centered games are designed to take the boredom and frustration out of the repetitive practice necessary for children to master important math skills and concepts.
Playing math games is even more beneficial than spending the same amount of time drilling basic facts using flash cards. Not only are games a lot more fun, but the potential for learning and
reasoning about mathematics is much greater, as well. In a non-threatening game format, children will be more focused and retention will be greater.
Math games for kids and families are the perfect way to reinforce, sharpen, and extend math skills over the summer. They are one of the most effective ways that parents can develop their child’s math
skills without lecturing or applying pressure. When studying math, there’s an element of repetition that’s an important part of learning new concepts and developing automatic recall of math facts.
Number facts (remember those times tables?) can be boring and tedious to learn and practice. A game can generate an enormous amount of practice – practice that does not have kids complaining about
how much work they are having to do. What better way can there be than an interesting game as a way of mastering them?
Here’s an example of a great game for children who need to sharpen their multiplication skills:
Salute Multiplication
What you need:
2 players
deck of cards, face cards removed
Shuffle deck and place face down in a pile.
Player #1 turns over the top card and places it face up on the table for all to see.
Player #2 draws a card and does not look at it. Player 2 holds the card above his or her eyes so that player #1 can see it, but he can’t.
Player #1 multiplies the 2 cards mentally and says the product out loud.
Player #2 listens and decides what his or her card must be and says that number out loud.
Example: Player #1 turns over a 6 for all to see. Without looking at it, player #2 puts a 4 on his forehead. Player #1 mentally multiplies 6 x 4 and says, “24”. Player #2 must figure out 6 x ? = 24.
Both players decide if the response is correct. If it is, player #1 gets 1 point.
Players reverse roles and play continues until one player has 10 points.
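Player #2’s deduction is a missing-factor problem. As a trivial Python sketch (name mine):

```python
def hidden_card(visible, product):
    """Deduce the forehead card from visible * hidden == product."""
    hidden, remainder = divmod(product, visible)
    assert remainder == 0, "the product must be a multiple of the visible card"
    return hidden

print(hidden_card(6, 24))  # the article's example: 6 x ? = 24, so the card is 4
```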
Math Games – a Great Summer Skill Sharpener!
Many of the math computational skills which generally are not practiced over the summer are simply forgotten. Parents can help their children retain and sharpen their mathematics skills this summer by doing and supporting math at home.
Math games offer targeted practice in math fundamentals. Games can, if you select the right ones, help children learn almost everything they need to practice and master in elementary math. Good,
child-centered games are designed to take the boredom and frustration out of the repetitive practice necessary for children to master important math skills and concepts.
The following dice game gives first graders, second graders, and third graders practice with addition and subtraction.
Get Close to 105
What you need:
2 or more players
3 dice
pencils and paper for everyone
The object of this game is to get a final score closer to 105 than any other player.
Player #1 rolls the dice, adds them together, and puts the sum as his/her score for that round.
Player #2 rolls the dice, and does the same as player #1.
At the end of 10 rounds (and everyone has to take 10 rounds), the player with the score closest to 105 wins the game.
Variation: Players can make the goal number anything they want, such as 147, etc. Is there a target score that will be too high for three dice and 10 rounds? A question for the kids, not the parents.
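If you want to see how totals come out, the base game is easy to simulate in Python. A hypothetical sketch (names mine; the random seed is arbitrary):

```python
import random

def play_rounds(rng, rounds=10):
    """One player's total after the required 10 rounds of three dice."""
    return sum(sum(rng.randint(1, 6) for _ in range(3)) for _ in range(rounds))

rng = random.Random(1)
scores = [play_rounds(rng) for _ in range(2)]
winner = min(range(2), key=lambda i: abs(scores[i] - 105))
print(scores, "closest to 105: player", winner + 1)
```

Each round adds between 3 and 18 to the total, which is a useful starting point for the kids’ question above.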
Never forget that games are supposed to be fun! If pleasure is not connected to the game, children will be unwilling to play and little learning will take place.
A Math Game for First, Second, and Third Graders
When working with first graders, second graders, and sometimes even third graders, I have found that when asked, “How much is your number + 10 (e.g., 23 + 10)”, they struggle to know the answer and
end up counting on their fingers. Counting on fingers is a good beginning strategy, but as children gain in number sense, fingers should no longer be necessary. The same is true if I ask, “How much
is your number -10?”
A major learning goal for students in the primary grades is to develop an understanding of properties of, and relationships among, numbers. Building on students’ intuitive understandings of patterns
and number relationships, teachers can further the development of this one aspect of number concepts and logical reasoning by using a math game - Tens and Ones.
Tens and Ones
What you need:
2 players
0-99 chart for each player (find one and download it from the internet or have your child make one using a 10×10 grid)
1 counter (button, paper clip, rock, etc.) for each player
1 regular die with instructions for rolling (following)
Roll 1 or 2 – +10
Roll 3 or 4 – +1
Roll a 5 – -1
Roll a 6 – -10
Each player places a counter on the zero on his/her own 0-99 chart. Players take turns rolling the die.
Player #1 rolls the die and moves his/her counter according to the roll on his/her 0-99 chart. Player #1 checks to make sure that player #2 agrees and then hands the die to player #2.
Player #2 follows the same steps as player #1 using his/her own 0-99 chart.
It may be visually helpful to have the child roll the die, leave the counter where it is and then count on using his finger. When he/she reaches +10, the player will then be able to see that he/she
is exactly one row down from where he/she started. Then the counter can be moved to the new spot.
The winner is the first player to move his/her counter to 99. To win a player must land on 99 exactly. For example, if a player lands on 90 and rolls a +10 on the next turn, the player must pass, as
there are only nine boxes from 90 to 99. Players may not move their counters past 99 and off the chart.
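The movement rule, including the land-exactly rule, can be sketched in Python (an illustrative helper; the names are mine):

```python
MOVES = {1: +10, 2: +10, 3: +1, 4: +1, 5: -1, 6: -10}

def move(position, roll):
    """Apply one Tens and Ones roll; a move that would leave the
    0-99 chart is a pass, so the counter stays put."""
    new = position + MOVES[roll]
    return new if 0 <= new <= 99 else position

print(move(23, 1))  # +10 lands on 33, one row down on the chart
print(move(90, 2))  # +10 would overshoot 99, so the player passes
print(move(0, 5))   # -1 would fall off the chart, so the player passes
```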
Basic Math Skills and Meaningful Jobs
Being able to read, write, and do basic math is a requirement for almost any meaningful job these days. The reason we have to spend so many resources on remedial work, whether that be at universities, community colleges, or other adult education programs, is that some adults did not learn their basic math facts when their young minds were most capable of learning.
That is true today, and it will be true in the future. In order for your child to have success with more advanced math, and be prepared for a future with a meaningful job, it is essential that they
memorize their basic math facts to the level of automaticity.
Your child is introduced to basic math concepts such as counting and simple adding in kindergarten.
First graders and second graders should have addition and subtraction combinations to 20 at their fingertips.
Third graders and fourth graders need to master the multiplication tables to 12×12 and the related division facts.
The exact order and manner in which math facts and concepts are introduced varies with the curriculum your child’s school uses and math standards, which can vary from state to state, but the above is
a general guide.
Essentially, your child should demonstrate mastery of these types of facts by the end of fourth grade in order to be prepared for the challenges of more advanced math. It may come quickly for your
child, or it may take time, but through focused practice, they will be able to increase their proficiency.
This can be achieved through skill and drill repetition (dittos, workbook pages, timed tests, and/or flashcards) which is usually extremely boring and tedious. There is another more effective,
creative, and fun method. Math games! Games are engaging (maintain interest); dittos, workbook pages, or flash cards rarely are.
Parents can offer greater opportunities for their child to succeed in math if they support the learning of the basics at home. Games fit the bill wonderfully!
Math games for kids and families are the perfect way to reinforce and extend the skills children learn at school. They are one of the most effective ways that parents can develop their child’s math
skills without lecturing or applying pressure. When studying math, there’s an element of repetition that’s an important part of learning new concepts and developing automatic recall of math facts.
Number facts (remember those times tables?) can be boring and tedious to learn and practice. A game can generate an enormous amount of practice – practice that does not have kids complaining about
how much work they are having to do. What better way can there be than an interesting game as a way of mastering them?
Games are fun and create a context for developing children’s mathematical reasoning. Through playing and analyzing games, children also gain computational fluency by describing more efficient
strategies and discussing relationships among numbers.
Games teach or reinforce many of the skills that a formal curriculum teaches, plus a skill that math homework sometimes, mistakenly, leaves out – the skill of having fun with math, of thinking hard
and enjoying it.
Memorizing the Basic Facts with Math Games
Frank L. Palaia, PhD, is a science teacher in the Lee County School District and at Edison State College. As a guest columnist for the News-Press.com of Ft. Myers, Florida, he had this to say about students in his classes: “Most students today have not memorized basic math facts in elementary and middle school. Each year there will be otherwise intelligent junior or senior students in my high-school classes who ask a question like, ‘What is eight times seven?’”
As an elementary math specialist, I see that children no longer memorize their addition facts or multiplication tables. With the math curriculum as extensive as it is, teachers cannot afford to take
the time to ensure that students learn the basic facts (sad, but true).
Parents are partners in the process, and can offer greater opportunities for their child to succeed in math if they support the learning of the basics at home. Math games fit the bill wonderfully!
Math games for kids and families are the perfect way to reinforce and extend the skills children learn at school. They are one of the most effective ways that parents can develop their child’s math
skills without lecturing or applying pressure. When studying math, there’s an element of repetition that’s an important part of learning new concepts and developing automatic recall of math facts.
Number facts (I’m sure you remember memorizing those times tables?) can be boring and tedious to learn and practice. A game can generate an enormous amount of practice – practice that does not have
kids complaining about how much work they are having to do. What better way can there be than an interesting game as a way of mastering them?
Games are fun and create a context for developing children’s mathematical reasoning. Through playing and analyzing games, children also gain computational fluency by describing more efficient
strategies and discussing relationships among numbers.
First graders and second graders need to have the addition facts to 10 in long-term memory. When they hear 6+4, they immediately know (without counting fingers) that the answer is 10. Using fingers
to count is a good, early strategy but with practice, those facts should be automatic.
Third graders and fourth graders need to have all of the multiplication facts to automaticity.
Methods such as flash cards, dittos, and workbook pages stress rote memorization of basic number facts and are usually boring and do not require learners to participate actively in thought and
reflection. They do not go easily or quickly into long-term memory.
Games teach or reinforce many of the skills that a formal curriculum teaches, plus a skill that math homework sometimes, mistakenly, leaves out – the skill of having fun with math, of thinking hard
and enjoying it.
Math Games and At-Risk Kids
As an elementary mathematics specialist, I work in K-6 classrooms all the time. Time after time teachers ask the same question, “How do I help floundering students who lack basic math skills?” In
every class there are a handful of students who are at risk of failure in math.
What can be done for such students? How can we help children become proficient at the basic skills?
Struggling math students typically need a great deal of practice. Math games can be an effective way to stimulate student practice.
First graders and second graders need to have the addition facts to 10 in long-term memory. When they hear 6+4, they immediately know (without counting fingers) that the answer is 10. Using fingers
to count is a good, early strategy but with practice, those facts should be automatic.
Family Fact Feud is a great game for achieving that goal.
What you need:
2 players
deck of cards, face cards removed
Players sit side by side (not across from each other)
Teacher/parent decides the particular fact to practice (i.e. +1, +2, +3, etc.) Once the constant addend is determined, that card is placed between the two players. Players then divide the cards
evenly between themselves. Each player turns over one card and adds that card to the constant addend in the middle. The player with the highest sum collects both cards. Players must verbalize the
math sentence.
Teacher/parent decides the constant addend will be +1.
Player #1 turns over a 5 and says, “5 + 1 = 6”.
Player #2 turns over an 8 and says, “8 + 1 = 9”.
Player #2 collects both cards.
In the event of a tie (both players have the same sum), each player turns over one more card and adds this card to the constant addend. The player with the greatest sum takes all four cards.
When the deck is finished up, players count their cards. The player with the most cards is the winner.
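The comparison at the heart of each round is small enough to sketch in Python (a hypothetical helper, names mine; the tie-breaker flip is left out):

```python
def feud_round(card1, card2, constant=1):
    """Family Fact Feud: add the constant to each card and compare;
    returns the winning player, or None on a tie."""
    s1, s2 = card1 + constant, card2 + constant
    if s1 == s2:
        return None  # tie: each player turns over one more card
    return 1 if s1 > s2 else 2

print(feud_round(5, 8))  # "5 + 1 = 6" vs "8 + 1 = 9": player 2 collects
```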
Third graders and fourth graders need to have all of the multiplication facts to automaticity.
Multiplication Fact Feud is great for that.
What you need:
2 players
deck of cards, face cards removed
Teacher/parent decides the particular multiplication fact to practice (i.e. x7, x4, x8, etc.) Once the constant factor is determined, that card is placed between the two players. Players then divide
the remaining cards evenly between themselves.
Each player turns over one card and multiplies that card by the constant in the middle. Players must verbalize their math sentence. The player with the highest product collects both cards.
Teacher/parent selects x5 as the constant.
Player #1 draws a 4 and says, “4 x 5 = 20”.
Player #2 draws a 7 and says, “7 x 5 = 35”.
Player #2 would collect both cards.
In the event of a tie (i.e. both players have the same product), each player turns over one more card and multiplies that by the constant factor. The player with the highest product wins all four cards.
When the cards are all used up, the player with the most cards wins the game.
The Perfect Math Game!
Are you looking for creative and engaging ways to help your students/children learn basic math concepts and skills?
Teachers and parents often ask for suggestions about activities to do with their children at school and at home to help further their mathematical understanding. I’ve been teaching math to children
for many years, and I’ve found that math games are, from a teacher’s and a parent’s point of view, wonderfully useful. Math games put children in exactly the right frame of mind for learning.
Children are normally very eager to play games. They relax when they play, and they concentrate. They don’t mind repeating certain facts or procedures over and over.
Children throw themselves into playing games the way they never throw themselves into filling out workbook pages or dittos. And games can help children learn almost everything they need to master in
elementary math. Good, child-centered games are designed to take the boredom and frustration out of the repetitive practice necessary for children to master important math skills and concepts.
Playing math games is even more beneficial than spending the same amount of time drilling basic facts using flash cards. Not only are games a lot more fun, but the potential for learning and
reasoning about mathematics is much greater, as well. In a non-threatening game format, children will be more focused and retention will be greater.
Math games are the perfect way to develop, reinforce, and extend children’s math skills without lecturing or applying pressure. When studying math, there’s an element of repetition that’s an
important part of learning new concepts and developing automatic recall of math facts. Number facts (remember those times tables?) can be boring and tedious to learn and practice. A game can generate
an enormous amount of practice – practice that does not have kids complaining about how much work they are having to do. What better way can there be than an interesting game as a way of mastering them?
One of the most effective and engaging math games is War. It has many variations. Give one or more of the following a try:
More or Less
Many of you may know this game as “War”. For mathematical purposes, I think it is more appropriate to call it “More” or “Less”.
What you need:
2 players
1 deck of cards
Shuffle cards well and deal them face-down equally to all players. Players do not look at their cards. All players turn over their top card at the same time. The player with the greatest number
(More) collects all the cards. In the event of a tie, players turn over one more card and put it on top of their first card. The player with the biggest number takes all four cards.
Each player might add the two cards together and the player with the biggest total would take all four cards. Or the biggest number on the second card turned over could be the winner. You decide what
is most appropriate.
You follow the same rules to play “Less”. The player with the smallest number wins the cards.
• Addition War – Each player turns over two cards and adds them together. The player with the greatest sum or the smallest sum (you decide which) wins all four cards.
• Addition War (3, 4, 5, etc. addends) – Each player turns over three cards and adds them together.
• Subtraction War – Each player turns over two cards and subtracts the smaller number from the larger number. The player with the smallest or greatest difference (you decide which) wins.
• Addition and Subtraction War – Each player turns over two cards and adds them together. Then each player turns over one more card and subtracts it from their sum. The player with the greatest or
smallest difference wins. I like this game because it involves the use of two operations.
• Product War – Turn up two cards and multiply.
• Product War II– Turn up three (or more) cards and multiply.
• Product War (advanced) – Each player turns up three cards and moves them around and arranges them in a problem where a two-digit number is multiplied by a one-digit number. The player with the greatest or least product (you decide) wins.
• Division War – Each player turns up three cards and moves them around and arranges them in a problem where a two-digit number is divided by a one-digit number. The player with the least or greatest quotient (you decide) wins.
• Fraction War – Each player turns up two cards and uses the larger card as the numerator and the smaller card as the denominator (or vice versa, whichever you choose). The player with the greatest or least fraction (you choose) wins.
• Integer Addition War – Each player takes two cards and adds them together. Red cards are negative (I’m in the red), and black cards are positive. The greatest sum wins. | {"url":"http://www.mathgamesandactivities.com/tag/second-grade-math-games/","timestamp":"2014-04-18T15:39:06Z","content_type":null,"content_length":"68236","record_id":"<urn:uuid:17b8e9d0-f1a3-4d7b-9ef2-d6379908b62f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
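Several of these variations reduce to “build a value from your cards, then compare.” As one illustration, here is a hypothetical Python sketch of Fraction War using exact fractions (names mine; larger card over smaller, one of the two choices described above):

```python
from fractions import Fraction

def fraction_war(cards1, cards2, biggest_wins=True):
    """Each pair of cards becomes larger/smaller (a fraction >= 1);
    compare exactly; returns the winner, or None on a tie."""
    f1 = Fraction(max(cards1), min(cards1))
    f2 = Fraction(max(cards2), min(cards2))
    if f1 == f2:
        return None  # e.g. 8/4 equals 6/3: flip again
    if biggest_wins:
        return 1 if f1 > f2 else 2
    return 1 if f1 < f2 else 2

print(fraction_war((8, 3), (7, 4)))  # 8/3 beats 7/4: player 1
```

Using exact fractions sidesteps any rounding questions when two hands are very close.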
Design A Binary Counter Which Counts Down From ... | Chegg.com
Design a binary counter which counts down from 7 to 2 and then resets. Use Don't Cares for unused states. Fill in the truth table, draw the state diagram, and calculate the next-state logic. Use DFFs for the state memory.
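Before filling in the truth table, it can help to pin down the intended state sequence. A hypothetical Python sketch (not a DFF design; the don't-care states 0 and 1 are sent to 7 arbitrarily here):

```python
def next_state(q):
    """Down counter over the states 7..2: decrement, wrapping 2 back
    to 7. Unused states (0, 1, and anything above 7) go to 7 here."""
    if q <= 2 or q > 7:
        return 7
    return q - 1

sequence = [7]
for _ in range(6):
    sequence.append(next_state(sequence[-1]))
print(sequence)  # [7, 6, 5, 4, 3, 2, 7]
```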
Electrical Engineering | {"url":"http://www.chegg.com/homework-help/questions-and-answers/design-binary-counter-counts-7-2-resets-use-don-t-cares-unused-states-fill-truth-table-dra-q1000780","timestamp":"2014-04-17T22:49:11Z","content_type":null,"content_length":"21222","record_id":"<urn:uuid:78c1763f-53d1-46bb-ae9e-b90b1e90e4d0>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tewksbury Prealgebra Tutor
...I am good at explaining subjects in simple English in many different ways so that it's simple and fast for someone to learn. For instance, in standardized test preparation, students who can
quickly stop doing old practices have told me that they achieved their target scores after only one tutoring lesson. English and writing students usually advance two grade levels within three
55 Subjects: including prealgebra, reading, English, writing
...After graduation, I became a full-time math tutor with MATCH Education at Lawrence High School. I tutored math foundations, pre-algebra and algebra 1 & 2. My students were mainly Spanish
speakers and this gave me a chance to practice my Spanish skills.
10 Subjects: including prealgebra, calculus, geometry, algebra 1
...Taught Algebra II as a separate course, and also as part of the pre-calculus courses taught in long term substitute assignments. Taught this to freshmen and sophomores at Reading High School in
long term temp assignments (usually maternity leaves). This subject is taught at the middle school le...
8 Subjects: including prealgebra, geometry, algebra 1, trigonometry
...Thank you for helping open her eyes and widening her options. ... I cannot express what your tutoring has done for her confidence in this pursuit of bettering her SAT scores." "Thank you so
much for your efforts with B.! We think you have made a real difference, and taught him how to effectivel...
38 Subjects: including prealgebra, reading, English, writing
...Having an enthusiastic tutor in those subjects would have helped me tremendously. I'd like to share my enthusiasm for the subject and help your child work towards their goals. My methods are
suited to each child's needs.
9 Subjects: including prealgebra, geometry, algebra 1, algebra 2 | {"url":"http://www.purplemath.com/Tewksbury_prealgebra_tutors.php","timestamp":"2014-04-21T13:04:33Z","content_type":null,"content_length":"24013","record_id":"<urn:uuid:acfcca58-62dd-45ea-8e6c-a6670decd066>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: st: biprobit: test for difference in marginal effects
Re: st: biprobit: test for difference in marginal effects
From May Boggess <mboggess@stata.com>
To statalist@hsphsun2.harvard.edu
Subject Re: st: biprobit: test for difference in marginal effects
Date 04 Nov 2004 13:38:05 -0600
On Thursday, Wenhui Wei wrote:
> I'm running a biprobit model (two treatment modes) and get two sets of
> marginal effects by the MFX command.
> My question is: for a certain variable, for example, gender (with male
> as the base), how to test whether its estimated marginal effects are
> significantly different in the 2 equations, i.e., whether the marginal effect of
> female in treatment mode 1 is significantly different from that in
> treatment mode 2?
> The test command only tests for the difference in the estimated
> coefficients, not marginal effects.
We can do this by saving the marginal effects and their covariance
matrix as estimation results and then use -test-. Let's begin with an
example we can work with:
sysuse auto
set seed 12345
gen y1=uniform()>0.5
gen y2=uniform()>0.5
biprobit y1 y2 mpg for
I am interested in the marginal effect for for=1 and for=0.
I am going to use a matrix to pass the numbers into the -at()-
option of -mfx-:
matrix A1=(20, 1)
mfx, var(mpg) at(A1) tr(2)
mat m1=e(Xmfx_dydx)
mat D1=(.32062124, .00432451, .00432453, .11510433, -.0019256, -.0019256, -.00114118)
When you use the -tracelvl()- option on -mfx- you get to see
the second derivatives that are calculated so -mfx- can use the
delta method to get the standard error of the marginal effect.
-mfx- doesn't save those guys in a matrix for us, so we'll have to copy
and paste them in by hand. Same thing for the marginal effect at for=0:
matrix A0=(20, 0)
mfx, var(mpg) at(A0) tr(2)
mat m0=e(Xmfx_dydx)
mat D0= (.28952027, 0, .00463938, .19246524, 0, -.00111842, .00006737)
mat D=D1\D0
mat list D
Now, to get the covariance matrix for these two marginal effects,
I need the covariance matrix for the coefficients of the model:
mat V=e(V)
Then multiply:
mat COV=D*V*D'
mat rownames COV = m1 m0
mat colnames COV = m1 m0
mat list COV
To be able to test if the two marginal effects are the same,
I want to use a Wald test. This is done in Stata using the -test- command.
I can take advantage of that by posting the marginal effects
and the covariance matrix as estimation results. Before I do so,
I have to make sure the rows and columns are labeled appropriately,
which means matching the names on the covariance matrix COV:
mat b=[m1[1,1],m0[1,1]]
mat colnames b = m1 m0
mat list b
eret post b COV
eret display
mat list e(b)
mat list e(V)
You can see the marginal effects and the covariance matrix are now
stored in the e() results. So now we can use test:
test _b[m1]=_b[m0]
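For readers outside Stata: once the marginal effects and their covariance matrix are posted, the single-restriction Wald statistic that -test- reports is simple arithmetic. A hypothetical Python sketch (the function name and the numbers are mine, purely illustrative):

```python
def wald_equality(b, V):
    """Wald chi-square (1 df) for H0: b[0] == b[1], given the 2x2
    covariance matrix V of the two estimates."""
    diff = b[0] - b[1]
    var = V[0][0] + V[1][1] - 2.0 * V[0][1]
    return diff**2 / var

# Made-up marginal effects and covariance, just to show the arithmetic:
b = [0.030, 0.012]
V = [[0.0004, 0.0001],
     [0.0001, 0.0003]]
print(wald_equality(b, V))  # (0.018)^2 / 0.0005, about 0.648
```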
* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2004-11/msg00162.html","timestamp":"2014-04-18T18:48:10Z","content_type":null,"content_length":"8014","record_id":"<urn:uuid:99beeb62-4f58-4897-8164-8d6c40051fce>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00049-ip-10-147-4-33.ec2.internal.warc.gz"} |