[Numpy-discussion] object dtype questions
Ernest Adrogué eadrogue@gmx....
Sun Sep 6 06:32:52 CDT 2009
5/09/09 @ 11:22 (-0600), thus spake Mark Wendell:
> For example:
> Say that C is a simple python class with a couple attributes and methods.
> a = np.empty( (5,5), dtype=object)
> for i in range(5):
>     for j in range(5):
>         a[i,j] = C(var1,var2)
> First question: is there a quicker way than above to create unique
> instances of C for each element of a?
You can achieve the same with
a = np.array([C(var1, var2) for i in range(25)], dtype=object).reshape((5,5))
but it takes about the same time on my computer.
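For readers following along, the two construction approaches from the thread can be run side by side. The class C below is a hypothetical stand-in (the thread never shows its definition), and the snippet uses Python 3 syntax:

```python
import numpy as np

# Hypothetical stand-in for the class C from the thread.
class C:
    def __init__(self, var1, var2):
        self.attribute1 = var1
        self.var2 = var2

# Approach 1: fill an empty object array element by element.
a = np.empty((5, 5), dtype=object)
for i in range(5):
    for j in range(5):
        a[i, j] = C(i, j)

# Approach 2: build from a list comprehension, then reshape.
b = np.array([C(k, k) for k in range(25)], dtype=object).reshape((5, 5))

print(a.shape, b.shape, type(b[0, 0]).__name__)  # (5, 5) (5, 5) C
```

Either way, every cell holds a distinct C instance; the list-comprehension form just trades the explicit double loop for a single pass plus a reshape.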
> for i in range(5):
>     for j in range(5):
>         a[i,j].myMethod(var3,var4)
>         print a[i,j].attribute1
> Again, is there a quicker way than above to call myMethod or access attribute1?
I think you can use a ufunc:
def foo(x):
    print x.attribute1

ufoo = np.frompyfunc(foo, 1, 0)
Don't know if it is any faster though.
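As a rough sketch of the frompyfunc suggestion (with a hypothetical class C, and using nout=1 rather than the 0 of the snippet above so the wrapped callable hands back an object array):

```python
import numpy as np

class C:  # hypothetical stand-in for the class in the thread
    def __init__(self, var1, var2):
        self.attribute1 = var1 + var2
    def myMethod(self, var3, var4):
        self.attribute1 += var3 * var4

a = np.array([C(1, 2) for _ in range(25)], dtype=object).reshape((5, 5))

# Wrap plain Python callables as ufuncs: frompyfunc(func, nin, nout).
call_method = np.frompyfunc(lambda obj: obj.myMethod(3, 4), 1, 1)
call_method(a)  # invokes myMethod on every element

get_attr = np.frompyfunc(lambda obj: obj.attribute1, 1, 1)
print(get_attr(a)[0, 0])  # 15, i.e. (1 + 2) + 3 * 4
```

Whether this beats the explicit double loop is worth timing; frompyfunc still calls into Python once per element, so the gain is mostly notational.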
More information about the NumPy-Discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-September/045029.html","timestamp":"2014-04-17T21:24:23Z","content_type":null,"content_length":"3630","record_id":"<urn:uuid:803c75bf-5972-47fd-a20b-1985ce24cb21>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00231-ip-10-147-4-33.ec2.internal.warc.gz"}
Westford, MA Math Tutor
Find a Westford, MA Math Tutor
...When I teach the subject, I emphasize connections between theory and computation, a link which too many courses obscure, tending as they do to focus on one to the exclusion of the other.
Engineers need to know enough of the theory of linear algebra to understand its broad range of applications, ...
47 Subjects: including discrete math, ACT Math, logic, linear algebra
...I later moved to Spain to study for my Master’s at Saint Louis University Madrid, where I graduated with distinction. I currently have my preliminary teacher’s certification to teach Spanish
and History in Massachusetts. I believe that my unique life and academic experiences make me an excellent Spanish tutor.
14 Subjects: including prealgebra, Spanish, Italian, grammar
...I am quite proficient at C++: my undergraduate degree at Harvard was in Computer Science, taught in C++, and I then worked 2 years as a C++ programmer before becoming a teacher.
I am qualified to teach C since it is a simplified version of C++. I have an undergraduate degree in Co...
19 Subjects: including discrete math, algebra 1, algebra 2, calculus
...I am currently teaching two homeschooled teenagers (14 and 16) in pre-algebra concepts. I meet with them on Tuesdays and Thursdays for 2 hours each day, instructing them in math concepts such as
fractions, percents, adding and subtracting, multiplying and dividing, absolute values, etc. I will also be covering material up to Algebra II and Pre-Calculus.
26 Subjects: including statistics, precalculus, ACT Math, algebra 1
I am a senior chemistry major and math minor at Boston College. In addition to my coursework, I conduct research in a physical chemistry nanomaterials lab on campus. I am qualified to tutor
elementary, middle school, high school, and college level chemistry and math, as well as SAT prep for chemistry and math. I am a chemistry major at Boston College.
13 Subjects: including trigonometry, algebra 1, algebra 2, biology
Related Westford, MA Tutors
Westford, MA Accounting Tutors
Westford, MA ACT Tutors
Westford, MA Algebra Tutors
Westford, MA Algebra 2 Tutors
Westford, MA Calculus Tutors
Westford, MA Geometry Tutors
Westford, MA Math Tutors
Westford, MA Prealgebra Tutors
Westford, MA Precalculus Tutors
Westford, MA SAT Tutors
Westford, MA SAT Math Tutors
Westford, MA Science Tutors
Westford, MA Statistics Tutors
Westford, MA Trigonometry Tutors
{"url":"http://www.purplemath.com/westford_ma_math_tutors.php","timestamp":"2014-04-18T00:27:59Z","content_type":null,"content_length":"23935","record_id":"<urn:uuid:0fb4508d-6660-4642-bf23-631e7222d53a>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00192-ip-10-147-4-33.ec2.internal.warc.gz"}
Palos Verdes Peninsula Algebra 2 Tutor
...These are the essentials of course, and practice makes perfect. Raising SAT scores is as much about practice as it is learning new skills. Students can practice on their own time so that we can
make the most of tutoring time (saves money too). With the combination of personal practice and clear teaching, students begin to recognize each SAT problem.
24 Subjects: including algebra 2, Spanish, physics, writing
...I am searching for students whom I can tutor using my own experiences. I haven't worked anywhere before; however, I used to tutor at school, for my friends and family.
8 Subjects: including algebra 2, geometry, algebra 1, Microsoft Excel
I love school and I love helping people. When you put those two together, tutoring is the perfect job. I think education is important, so giving students a little assistance to help them achieve
the grade they want comes naturally to me.
47 Subjects: including algebra 2, chemistry, English, reading
...The most common usage error is their, they're, and there. "There comes a time when people need to evaluate their goals and decide where they're going in life" demonstrates correct usage. It is
important to remember that an apostrophe in a pronoun indicates a contraction, not a possessive. Examples of redundancies are the words "of" after "off" and "out" after "separate."
34 Subjects: including algebra 2, English, chemistry, physics
...I am mainly focused on assisting students in the middle/high school and college in Biology and Math from Algebra to Calculus. I am a graduate from the University of Arizona with Biomedical
Engineering and Molecular Biology degrees. I am very good at math and science and excelled in those courses throughout my education.
14 Subjects: including algebra 2, calculus, geometry, precalculus
{"url":"http://www.purplemath.com/Palos_Verdes_Peninsula_Algebra_2_tutors.php","timestamp":"2014-04-16T04:35:12Z","content_type":null,"content_length":"24680","record_id":"<urn:uuid:cbb712c6-598d-45bc-90ab-024cb69df4fd>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
Pipe Miter Formula
Pipe Miter Formula PDF
When insulating elbows in a pipe run, it is sometimes desired to miter cut standard pipe insulation rather than use diaper inserts, or preformed insulation inserts may be unavailable. To estimate the
length of pipe insulation required to
Formula 1 - Pipe Contraction Formula 1 is used to calculate the K value for pipe contractions. ... plug, foot, and swing check valves as well as tees and miter bends. K = f T (L/D) Formula 10 Formula
11 - Fixed K Value Formula 11 is used to enter a valve or fitting with a fixed K value.
¾Straight Pipe ¾Fittings Pipe Bends Miter Bends Reducers ¾Fabricated Branch Connections ¾Flanges and Blanks ¾Other Components. ... formula ¾Ratings given in a component standard ¾Ratings same as
straight seamless pipe ¾Qualification by calculation plus
Friction Losses in Pipe Fittings Resistance Coefficient K (use in formula hf = Kv²/2g) Fitting LD Nominal Pipe Size ½ ¾ 1 1¼ 1½ 2 2½-3 4 6 8-10 12-16 18-24
Steel Pipe — A Guide for ... Elbows and Miter End Cuts, 122 Reducers, 130 Bolt Hole Position, 130 Design of Wye Branches, Laterals, Tees, and Crosses, 130 ... 3-3 Solution of Manning flow formula for
n = 0.011, 32 3-4 Moody diagram for friction in pipe, 40
Pressure drop in fittings.... Head Loss in Fittings is frequently expressed as the equivalent length of pipe that is added to the straight run of pipe as shown below.
Preferred Installation Manual PIPE AND EQUIPMENT INSULATION Return to table of contents Single Wrap Pipe 1. Cut aerogel blanket to length required for a
Chapter 6 Design of PE Piping Systems 160 Pressure Rating for Fuel Gas Pipe Compared to other common thermoplastic pipes, PE pipe can be used over a broader
mathematical formula, refer to appendix II of this text. Circumference Rule Another method of determining circumference is ... in joining round pipe sections. SHEET-METAL DUCT SYSTEMS With the advent
of high-tech equipment, such as
4 Resource # e-07-124 08/09 Concrete Pipe Joints Your Best Choice Strength and Performance — Concrete pipe joints accommodate movement and horizontal and vertical
Figure 3-60.—Miter cutting. PIPE BENDING Any piping system of consequence will have bends in it. ... The formula to determine the number of wrinkles is to divide the degrees per wrinkle required into
the degrees of the bend required.
How to know what size piping your Compressed Air System needs . Figuring the correct pipe size for your compressed air distribution system is an important task.
Mitered inlets; normal or skewed, formed by a miter cut through the pipe on a plane inclined to the horizontal and conforming to the side slope of the fill. b. c. Stepped mitered inlets, where the
miter cut begins above the invert and ends below the crown.
Knox County Tennessee Stormwater Management Manual Volume 2 (Technical Guidance) Page 1-5 RCP and fully coated corrugated metal pipe can be used in all other cases.
conversion fitting, from a flat plenum or duct into Spiral pipe greatly reduces turbulence and noise. The pressure drop characteristics are superior to any other design. Standard Radius Bellmouth
VISTA IRRIGATION DISTRICT STEEL PIPE, MORTAR LINED AND MORTAR COATED REV. 3/99 02400-1 ... the pipe shall be checked by the following formula: Defl x = DKWr³ / (EI + 0.0614 E’r³) ... times the pipe
diameter and the maximum miter angle on each section of
d is internal diameter, f is the friction factor, L is length of pipe. ... Bends with angle less than or greater than 90 degrees are catered for using the formula beneath the table. Miter bends are
currently treated in the same manner as given above for normal bends. Page 2 of 3
FDM 13-25 Attachment 35.5 Capacity and Velocity Diagram for Circular Concrete Pipe Flowing ... Nomograph based on Manning’s formula for circular pipes flowing full in ... “Urban Storm Drainage” FDM
13-25 Attachment 35.7 Loss Coefficients for Miter Bends August 8, 1997 ...
pipe to get from point to point straight line distance, which is excellent from a piping material and pressure loss point of view. 1. INTRODUCTION The basic concept of a geothermal piping design is
to safely and economically transport steam, brine,
Fundamentals of Orifice Meter Measurement page 5 b. Eccentric Orifice Plates The eccentric plate has a round opening (bore) tangent to the inside wall of the pipe.
The purpose of this study is to assess the seismic performance of steel fabricated pipe elbows called Miter ... In order to obtain the seismic performance of Miter bend, the following formula was
developed as shown in Eq.(11) ...
LANL Engineering Standards Manual PD342 Chapter 17 Pressure Safety Section D20-B31.3-G, ASME B31.3 Process Piping Guide Rev. 2, 3/10/09 1 of 168
Elbows, pipe bends, coils, and miter bends are presented in Chapter 15. The intricacies of converging and diverging flow through pipe junctions ... to flesh out Prandtl’s smooth pipe friction factor
formula and Theodor von Kármán’s complete turbulence formula (1930).
sizing and capacities of gas piping 2003 international fuel gas code® 131 table a.2.2—continued equivalent lengths of pipe fittings and valves
mining the pipe sizing for a fuel gas piping system is to make ... Small size socket-welding fittings are equivalent to miter elbows and miter tees. 4. ... formula based on Boyle’s and Charles’ law
for determining
The quantities within the chart were established from the Hazen & Williams formula. Pipe Size High Pressure Municipal Service Based on 60 PSI at Source and Maintaining 40 PSI at Head ... Cut pipe
square using a miter box or plastic pipe cutting tool which does not flare up diameter at end of the ...
Pipe Flow. A Practical and Comprehensive Guide ... 8.4.1 Moody’s Approximate Formula 79 ... 15.3 Miter Bends 168 15.4 Coupled Bends 169 15.5 Bend Economy 169 References 174 Further Reading 174 16
TEES 177 16.1 Diverging Tees 178
ods cannot evaluate an entire run of pipe as can NDT, they can give a fair evaluation of the mill setup, steel quality, and welding and normalizing practice. The following is a brief review of
destructive testing commonly used in the production
Steel Pipe—A Guide for Design and Installation, 4th ed. (December 2013) Chapter 1 ... 17. On page 122, last line of paragraph 3 under Elbows and Miter End Cuts should read “ ... 3-2 Solution of
Scobey flow formula for K s = 0.36, 30
Pipe Size Minimum Radius Pipe Size Minimum Radius 27-inch ... mitered joints. (i.e., using only the joint deflection of the bevel or miter and not allowing the opening of the joint). ... determine
the minimum allowable radius, see Formula "A", in Part One, Section 13 ...
Piping consists of pipe, flanges, bolting, gaskets, valves, relief devices, fittings and the pressure-containing parts of other piping components. It also includes hangers and supports, and other
equipment items ... The formula for calculating the
and Pipe Fittings with the EBS (Ece Boru Sistemleri-Ece Pipe Systems) brand name. All of its products are up-to-date and ... 11,25° bend-single miter 0.09 15° bend-single miter 0.20 22,50°
bend-single miter 0.12 30° bend-single miter 0.29
For a sparger consisting of a large pipe having small holes ... This is a form of the Fanning or Darcy formula with friction factor ... [table of equivalent lengths for miter 90° bends, sudden and
standard reductions, given as equivalent L in terms of small d]
A-1 Install Equation Editor and double-click here to view equation. APPENDIX I. VALUES OF C IN HAZEN WILLIAMS EQUATION TYPE OF PIPE Condition C
This formula works with both English and Metric measurements. Do not mix °F with °C. INSTALLATION GUIDELINES ... CUT PIPE 90° Miter Saw. 1 PVC & CPVC Schedule 80 EXPANSION JOINT / REPAIR
COUPLING Installation Instructions EJ-3SP-0508
that will be imposed on the pipe after installation. 1.2. Design Formula for Steel Pipe The design pressure for steel pipe is determined in accordance with the following formula: ... A miter joint on
steel pipe to be operating at a pressure less than 100
Tools • Safety glasses and power miter saw: carbide saw blade with 80 teeth or more recommended. • Miter box and hand saw: Limited angle adjustment (not recommended...
PIPE SPECIFICATIONS.51 .55 Steel Pipe ... Was the pipeline designed in accordance with this formula: P = (2St/D) x F x E x T ... .233 Miter joints (consider pipe alignment) .235 Are welding surfaces
clean, free of foreign ...
angle. For tight mitered joints, nail and PVC pipe cement should both be used. Cutting and nailing: Use standard wood working equipment for cutting. ... adhere PVC miter and scarf joints. Its
premium formula provides a permanent durable bond that prevents joint separation.
These large diameter pipe fabrications need to be engineered for ... Such zones are at elbow miter fusion joints - at the branch of the fusion joint of the branch to the main in a fabricated tee. The
... Formula: A² = roll² + set²
We would like to show you a description here but the site won’t allow us.
Wherever a non-reinforced miter-branch is satisfactory to withstand the internal pressure-temperature requirements, other factors point to the advisable use of the Weldolet fittings as follows: ... Pipe
schedule numbers and weight designations are
To irradiate uprightness intersection pipe welds with x-rays, the base metal on both sides of the seam meet in a miter joint, the ray beam and the surface of the detected region is not ... Following
is the experienced formula of a flat oval perimeter’s approximate calculation: L=k π〖3 ...
and pipe walls, change of flow direction, elevation change, change in fluid ... SUPERLIT engineers present below formula for hydraulic radius calculation, ... single miter 90° elbow, double miter 90°
elbow, triple miter 180° return bend
Instruction: Major topics covered in the course are: Use of the 3-step algebraic miter formula; Estimation of insulation material for tanks, vessels, ... insulation materials to all pipe sizes, pipe
fittings and connectors; construct and apply
192.233 Miter joints. 192.235 Preparation for welding. 192.241 Inspection and test of welds. ... §192.105 Design formula for steel pipe. (a) The design pressure for steel pipe is determined in
accordance with the following formula:
be 2.5 times the pipe diameter and the maximum miter angle on the elbow shall not exceed 11 1/4 degrees. If elbow radius is less diameter, stresses sha. ... lowing a friction factor of 0.31 in
Formula 13-6 for tape-coated pipe. C. MECHANICAL COUPLINGS . 1.
PIPE SIZE [table of wall thicknesses and weights per foot] ... • Miter cutting • Roll grooving • "J" Bevels ... • Tagging • Export compliance • Marking Note: Weight per foot is calculated using the
following formula: (O.D. - Wall) x Wall x 10.68 = Weight per Foot. Actual ...
polyolefin pipe and fittings? Reply: No. Electrofusion joints are not listed; see para. A304.7.2. 153. ... Is 1.5 in. of incomplete penetration in any 6 in. weld length for girth and miter groove
welds acceptable for normal fluid service? Reply (3): Yes, per acceptance criteria listed in Table ...
* Miter joint capability handles up to 8” IPS and . ... lowing formula: Where OD = Outside diameter (actual pipe diameter) ... Bring pipe ends together, applying force equal to or greater than the
fusion force to be used. Make .
{"url":"http://ebookilys.org/pdf/pipe-miter-formula","timestamp":"2014-04-18T18:25:37Z","content_type":null,"content_length":"41032","record_id":"<urn:uuid:ad5a3417-6aa5-428c-a3e3-373c0a5f3660>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00187-ip-10-147-4-33.ec2.internal.warc.gz"}
Scientific Notation
Scientific notation is used to express very large or very small numbers. A number in scientific notation is written as the product of a number (integer or decimal) and a power of 10. The number
has one digit to the left of the decimal point. The power of ten indicates how many places the decimal point was moved.
The fraction 6/1000000 written in scientific notation would be 6x10^-6 because the denominator was decreased by 6 decimal places.
The fraction 65/1000000 written in scientific notation would be 6.5x10^-5 because the denominator was decreased by 6 decimal places. One of the decimal places changed the numerator from 65 to 6.5, which is why the power is -5 rather than -6.
A fraction smaller than 1 can be converted to scientific notation by decreasing the power of ten by one for each decimal place the denominator is decreased by.
Scientific notation numbers may be written in different forms. The number 6.5x10^-7 could also be written as 6.5e-7.
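The worked examples above can be checked with Python's exponent ("e") format, which emits scientific notation directly:

```python
# 6/1000000 has six zeros in the denominator, so the exponent is -6.
print("{:e}".format(6 / 1000000))     # 6.000000e-06
# For 65/1000000, one decimal place turns 65 into 6.5, leaving -5.
print("{:.1e}".format(65 / 1000000))  # 6.5e-05
```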
{"url":"http://www.aaamath.com/B/g6_71lx1.htm","timestamp":"2014-04-19T04:49:54Z","content_type":null,"content_length":"7410","record_id":"<urn:uuid:eb05340a-4090-4bb1-878b-f6e3f303c736>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00630-ip-10-147-4-33.ec2.internal.warc.gz"}
Fundamental Research and Development International :: Fundamental Journal of Mathematical Physics
About the Journal
Fundamental Journal of Mathematical Physics is a peer-reviewed international journal of physics covering its mathematical aspects. The journal covers a broad spectrum of physics, with emphasis on the
areas where mathematics and physics both play a significant role. The readership of the journal includes researchers from different disciplines of mathematics, physics, and engineering.
Aim and Scope
The aim of the journal is to provide readers and the research community with unhindered, continuous access to up-to-date research in the field of mathematically rigorous physics. Salient features of the
journal include rigorous peer review and fast publication of accepted work, so that it reaches readers and researchers in a timely manner. The journal covers a broad area of physics and mathematics and
includes Algebras, Application of C*-algebras, Asymptotic Methods, Calculus of Variations, Celestial Mechanics, Cellular Automata, Classical Mechanics, Computer-assisted Proofs, Condensed Matter
Physics, Conformal Field Theory, Connections Between the Spectrum of the Laplacian and Geometry, Control Theory, Critical Phenomena, Dynamical Systems, Electromagnetic Theory (Mathematical Aspects),
Ergodic Theory, Fluid Mechanics (Navier-Stokes Equations, Models of Turbulence), Fourier Analysis, Gauge Field Theory, Gravitation Theory (Classical and Quantum), Group Theory, Hamiltonian Dynamical
Systems, Hamiltonian Mechanics, Hopf Algebras, Interacting Particle Systems, KAM Theory and Hamiltonian Dynamics via Variational as well as Local Methods, Kinetic Equations, Kinetic Theory, Knot
Theory, Linear and Partial Differential Equations, Many-Body Theory, Mathematical Methods in Condensed Matter Physics, Mathematical Problems in Solid State Physics, Methods in Mathematical Physics,
Modern Differential and Algebraic Geometry and Topology, Non-commutative Geometry, Non-commutative Geometry to Low Energy Physics, Non-equilibrium Statistical Mechanics, Nonlinear Differential
Difference Equations, Nonlinear Elliptic Equations, Nonlinear Partial Differential Equations, Non-linear Theory, Nonlinear Waves, Operator Algebras, Percolation Models, Perfect Simulation,
Perturbation Theory for ODE, Potential Theory, Quantum Chaos, Quantum Computing, Quantum Dynamics, Quantum Field Theory (Algebraic and Constructive), Quantum Mechanics, Random Process Theory, Random
Schrodinger Operators, Renormalization, Representations of Lie Groups, Rigorous Atomic Physics, Scattering Theory (Classical and Quantum), Schrödinger Equation (Mathematical Properties), Schrodinger
Operators, Semiclassical Analysis, Semiclassical Methods in Quantum Mechanics, Smooth Ergodic Theory, Spectral Theory, Statistical Mechanics (Equilibrium and Nonequilibrium), Stochastic Processes,
String and Brane Theory, Supersymmetry, Symmetries, Symplectic Dynamics, Symplectic Geometry, Vector Analysis. State-of-the-art survey articles of current significance are also welcome.
Fundamental Journal of Mathematical Physics is a bimonthly journal published in three volumes per year, each having two issues, appearing in February, April, June, August, October and December in
printed as well as online versions.
Submission Procedure
Papers prepared as .pdf or .doc files may be submitted through e-mail directly to the Editorial Head Office at bdtiwari@frdint.com with a letter of submission. Articles received are immediately
brought to the referees/members of the Editorial Board for their opinion. In case of a clear recommendation for publication, the paper is accommodated in a forthcoming issue. The paper should contain
the authors’ names, affiliations/addresses, e-mail addresses, an abstract showing clearly the purpose of the work, and 3-4 keywords. References should be given at the end of the paper.
Papers in duplicate may also be submitted through postal mail. Two hard copies of a paper along with a letter of submission may be sent at the following address:
Fundamental Research and Development International
141/2B/1, Omgayatri Nagar
211 004 Allahabad, INDIA
Authors are requested to make sure that the submitted article is neither published nor simultaneously submitted or accepted elsewhere for publication. Copyright of the published articles will remain
with the FRDI.
One set of galley proofs of a paper will be sent to the submitting author for corrections after the paper is accepted for publication. Authors are requested to check the proofs very
carefully before returning them to the Editorial Head Office. Corrected proofs may be returned electronically via e-mail.
Processing Charges
Authors of accepted papers are requested to arrange processing charges for their papers @ USD 40 per print page for authors from USA and Canada and @ EURO 30 per print page for authors from the rest
of the world from their research grants/institutions. However, for authors in India this charge is INR 800 per print page.
A set of twenty-five reprints of a paper is provided to the authors free of charge. Additional sets of reprints may also be ordered during proof correction.
{"url":"http://www.frdint.com/fundamental_journal_mathematical_physics.html","timestamp":"2014-04-20T11:18:34Z","content_type":null,"content_length":"13004","record_id":"<urn:uuid:f5312849-bf5b-4e29-bb88-036be204bda9>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
Big Data Graphs: Loopy Lattices
The content of this article was originally co-authored by Marko Rodriguez and Bobby Norton over at the Aurelius blog.
A lattice is a graph that has a particular, well-defined structure. An nxn lattice is a 2-dimensional grid where there are n edges along its x-axis and n edges along its y-axis. An example 20x20
lattice is provided in the two images above. Note that both images are the “same” 20x20 lattice. Irrespective of the lattice being “folded,” both graphs are isomorphic to one another (i.e. the
elements are in one-to-one correspondence with each other). As such, what is important about a lattice is not how it is represented on a 2D plane, but what its connectivity pattern is. Using the R
statistics language, some basic descriptive statistics are computed for the 20x20 lattice named g.
~$ R
R version 2.13.1 (2011-07-08)
Copyright (C) 2011 The R Foundation for Statistical Computing
> library(igraph)  # provides V(), E(), degree(); g is the 20x20 lattice
> length(V(g))
[1] 441
> length(E(g))
[1] 840
> hist(degree(g), breaks=c(0,2,3,4), freq=TRUE, xlab='vertex degree', ylab='frequency', cex.lab=1.25, main='', col=c('gray10','gray40','gray70'), labels=TRUE, axes=FALSE, cex=2)
The degree statistics of the 20x20 lattice can be determined analytically. There must exist 4 corner vertices, each having a degree of 2. There must be 19 vertices along every side that each have a
degree of 3; given that there are 4 sides, there are 76 vertices with degree 3 (19 x 4 = 76). Finally, there exist 19 rows of 19 vertices in the inner square of the lattice that each have a degree
of 4, and therefore there are 361 degree-4 vertices (19 x 19 = 361). The code snippet above plots the 20x20 lattice’s degree distribution, confirming the aforementioned derivation. The 20x20 lattice
has 441 vertices and 840 edges. In general, the number of vertices in an nxn lattice will be (n+1)(n+1) and the number of edges will be 2(n² + n) = 2n(n+1).
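The derivation can be sanity-checked without any graph library by building the grid coordinates directly (a sketch in plain Python, independent of the R/igraph session above):

```python
from collections import Counter

n = 20  # a 20x20 lattice
vertices = [(x, y) for x in range(n + 1) for y in range(n + 1)]
edges = []
for x in range(n + 1):
    for y in range(n + 1):
        if x < n:
            edges.append(((x, y), (x + 1, y)))  # horizontal edge
        if y < n:
            edges.append(((x, y), (x, y + 1)))  # vertical edge

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

print(len(vertices), len(edges))                 # 441 840
print(sorted(Counter(degree.values()).items()))  # [(2, 4), (3, 76), (4, 361)]
```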
Traversals Through a Directed Lattice
Suppose a directed lattice in which every edge points either to the vertex to the right of, or the vertex below, its tail vertex. In such a structure, the top-left corner vertex has only outgoing
edges. Similarly, the bottom-right corner vertex has only incoming edges. An interesting question that can be asked of a lattice of this form is:
“How many unique paths exist that start from the top-left vertex and end at the bottom-right vertex?”
0 -> 1 -> 3
0 -> 2 -> 3
As diagrammed above, these paths can be manually enumerated by simply drawing the paths from top-left to bottom-right without drawing the same path twice. When the lattice becomes too large to
effectively diagram and manually draw on, a computational technique can be used to determine the number of paths. It is possible to construct a lattice using Blueprints’ TinkerGraph
and traverse it using Gremlin. In order to do this for a lattice of any size (any n), a function is defined named generateLattice(int n).
def generateLattice(n) {
  g = new TinkerGraph()
  // total number of vertices
  max = Math.pow((n+1), 2)
  // generate the vertices
  (1..max).each { g.addVertex() }
  // generate the edges
  g.V.each {
    id = Integer.parseInt(it.id)
    right = id + 1
    if (((right % (n + 1)) > 0) && (right <= max)) {
      g.addEdge(it, g.v(right), '')
    }
    down = id + n + 1
    if (down < max) {
      g.addEdge(it, g.v(down), '')
    }
  }
  return g
}
An interesting property of the “top-to-bottom” paths is that they are always the same length. For the 1x1 lattice previously diagrammed, this length is 2. Therefore, the bottom right vertex can be
reached after two steps. In general, the number of steps required for an nxn lattice is 2n.
gremlin> g = generateLattice(1)
==>tinkergraph[vertices:4 edges:4]
gremlin> g.v(0).out.out.path
==>[v[0], v[2], v[3]]
==>[v[0], v[1], v[3]]
gremlin> g.v(0).out.loop(1){it.loops <= 2}.path
==>[v[0], v[2], v[3]]
==>[v[0], v[1], v[3]]
A 2x2 lattice is small enough where its paths can also be enumerated. This enumeration is diagrammed above. There are 6 unique paths. This can be validated in Gremlin.
gremlin> g = generateLattice(2)
==>tinkergraph[vertices:9 edges:12]
gremlin> g.v(0).out.loop(1){it.loops <= 4}.count()
==>6
gremlin> g.v(0).out.loop(1){it.loops <= 4}.path
==>[v[0], v[3], v[6], v[7], v[8]]
==>[v[0], v[3], v[4], v[7], v[8]]
==>[v[0], v[3], v[4], v[5], v[8]]
==>[v[0], v[1], v[4], v[7], v[8]]
==>[v[0], v[1], v[4], v[5], v[8]]
==>[v[0], v[1], v[2], v[5], v[8]]
If a 1x1 lattice has 2 paths and a 2x2 lattice has 6 paths, how many paths does a 3x3 lattice have? In general, how many paths does an nxn lattice have? Computationally, with Gremlin, these paths can
be traversed and counted. However, there are limits to this method. For instance, try using Gremlin’s traversal style to determine all the unique paths in a 1000x1000 lattice. As will soon become
apparent, it would take the age of the universe for Gremlin to realize the solution. The code below demonstrates Gremlin’s calculation of path counts up to lattices of size 10x10.
gremlin> (1..10).collect{ n ->
gremlin> g = generateLattice(n)
gremlin> g.v(0).out.loop(1){it.loops <= (2*n)}.count()
gremlin> }
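The same counts can be reproduced far more cheaply with dynamic programming: the number of paths from a given cell is the sum of the counts after a right move and after a down move. A memoized sketch in Python:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def paths(right, down):
    """Paths to the bottom-right corner with `right` and `down` moves remaining."""
    if right == 0 or down == 0:
        return 1  # only one way: march straight along the remaining edge
    return paths(right - 1, down) + paths(right, down - 1)

print([paths(n, n) for n in range(1, 11)])
# [2, 6, 20, 70, 252, 924, 3432, 12870, 48620, 184756]
```

Unlike the traversal, this counts paths without enumerating them, so lattices far beyond 10x10 return immediately (for very large n, an iterative table avoids Python's recursion limit).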
A Closed Form Solution and the Power of Analytical Techniques
In order to know the number of paths through any arbitrary nxn lattice, a closed form equation must be derived. One way to determine the closed form equation is to simply search for the sequence on
Google. The first site returned is the Online Encyclopedia of Integer Sequences. The sequence discovered by Gremlin is called A000984 and there exists the following note on the page:
“The number of lattice paths from (0,0) to (n,n) using steps (1,0) and (0,1). [Joerg Arndt, Jul 01 2011]”
The same page states that the general form is “2n choose n.” This can be expanded out to its factorial representation (e.g. 5! = 5 * 4 * 3 * 2 * 1) as diagrammed below.
Given this closed form solution, an explicit graph structure does not need to be traversed. Instead, a combinatoric equation can be evaluated for any n. A directed 20x20 lattice has over 137 billion
unique paths! This number of paths is simply too many for Gremlin to enumerate in a reasonable amount of time.
> n = 20
> factorial(2 * n) / factorial(n)^2
[1] 137846528820
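In Python the same closed form is available directly (math.comb, Python 3.8+), alongside the factorial form used in the R snippet:

```python
import math

n = 20
print(math.comb(2 * n, n))                              # 137846528820
print(math.factorial(2 * n) // math.factorial(n) ** 2)  # 137846528820
```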
A question that can be asked is: “How does 2n choose n explain the number of paths through an nxn lattice?” When counting the number of paths from vertex (0,0) to (n,n), where only down and right
moves are allowed, there have to be n moves down and n moves right. This means there are 2n total moves, and the path is completely determined by choosing which n of those 2n moves are down (the
remaining n moves are forced to be right). Thus, the total number of paths is “2n choose n.” This same integer sequence is also found in another seemingly unrelated problem (provided by the same web page).
“Number of possible values of a 2*n bit binary number for which half the bits are on and half are off. – Gavin Scott, Aug 09 2003”
Each path can be written as a sequence of letters that contains n Ds and n Rs, where down twice then right twice would be DDRR. This maps the “lattice problem” onto the “binary string of length 2n
problem.” Both problems are essentially realizing the same behavior via two different representations.
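The correspondence is easy to verify by brute force for small n: count the length-2n strings over {D, R} with exactly n Ds and compare against 2n choose n.

```python
import math
from itertools import product

for n in range(1, 6):
    # All 2^(2n) strings of Ds and Rs, keeping those with exactly n Ds.
    count = sum(1 for s in product("DR", repeat=2 * n) if s.count("D") == n)
    assert count == math.comb(2 * n, n)
    print(n, count)  # 1 2, 2 6, 3 20, 4 70, 5 252
```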
Plotting the Growth of a Function
It is possible to plot the combinatorial function over the sequence 1 to 20 (left plot below). What is interesting to note is that when the y-axis of the plot is set to a log-scale, the plot is a
straight line (right plot below). This means that the number of paths in a directed lattice grows exponentially as the size of the lattice grows linearly.
> factorial(2 * seq(1,n)) / factorial(seq(1,n))^2
[1] 2 6 20 70 252 924
[7] 3432 12870 48620 184756 705432 2704156
[13] 10400600 40116600 155117520 601080390 2333606220 9075135300
[19] 35345263800 137846528820
> x <- factorial(2 * seq(1,n)) / factorial(seq(1,n))^2
> plot(x, xlab='lattice size (n x n)', ylab='total number of paths', cex.lab=1.4, cex.axis=1.6, lwd=1.5, cex=1.5, type='b')
> plot(x, xlab='lattice size (n x n)', ylab="total number of paths", cex.lab=1.4, cex.axis=1.6, lwd=1.5, cex=1.5, type='b', log='y')
It is wild to think that a 20x20 lattice, with only 441 vertices and 840 edges, has over 137 billion unique directed paths from top-left to bottom-right. It’s this statistic that makes it such a
loopy lattice! Anyone using graphs should take heed. The graph data structure is not like its simpler counterparts (e.g. the list, map, and tree). The connectivity patterns of a graph can yield
combinatorial explosions. When working with graphs, it’s important to understand this behavior. It’s very easy to run into situations, where if all the time in the universe doesn’t exist, then
neither does a solution.
This exploration was embarked on with Dr. Vadas Gintautas. Vadas has published high-impact journal articles on a variety of problems involving biological networks, information theory, computer
vision, and nonlinear dynamics. He holds a Ph.D. in Physics from the University of Illinois at Urbana Champaign.
Finally, this post was inspired by Project Euler. Project Euler is a collection of math and programming challenges. Problem 15 asks, “How many routes are there through a 20x20 grid?”
Published at DZone with permission of Marko Rodriguez, author and DZone MVB. (source)
|
{"url":"http://architects.dzone.com/articles/big-data-graphs-loopy-lattices","timestamp":"2014-04-17T12:45:31Z","content_type":null,"content_length":"74810","record_id":"<urn:uuid:64c856b5-81d3-4e3a-b5bf-701b48d2357f>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00494-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Metuchen Algebra 2 Tutor
Find a Metuchen Algebra 2 Tutor
...I have been teaching science and math for over four years at a one-on-one level, as well as at the collegiate level at Rutgers University. I am patient and cater to my students' needs. I use
diagrams and a systematic approach towards solving problems in my teaching, which allows my students to grasp the deeper concepts, rather than just solve the problem.
39 Subjects: including algebra 2, chemistry, writing, calculus
...I am open to helping anyone that I can. As a result of work commitments throughout the state, I am fairly flexible with location. I look forward to hearing from you and scheduling a tutoring session!
49 Subjects: including algebra 2, Spanish, English, writing
...No two students are alike, and I've failed with some kids before - in the classes we were teaching in Brooklyn, hard as I tried, some kids would lose focus and made little progress. Overall, I
prefer a non-stressful approach to establish a baseline from which to go on with each individual student. Preparation is also key and having the right material to work on is very important.
9 Subjects: including algebra 2, algebra 1, precalculus, trigonometry
...I am here to help you with the learning part and to show you that science can be fun. I am looking forward to becoming your tutor!Genetics is a fun subject that I love tutoring. I have an
extensive background in genetics and molecular biology from my graduate and undergraduate studies.
19 Subjects: including algebra 2, chemistry, physics, geometry
...I am preparing for Exam 3 (Models for Financial Mathematics). I’m also capable in tutoring in areas including: Micro and Macro Economics, Chemistry, Calculus, and Algebra. I had a great deal of
working with a math tutor at a young age, as my father was a math tutor. All of these one-on-one lessons gave me insights on how to be an effective tutor.
8 Subjects: including algebra 2, chemistry, calculus, algebra 1
|
{"url":"http://www.purplemath.com/metuchen_nj_algebra_2_tutors.php","timestamp":"2014-04-21T07:45:42Z","content_type":null,"content_length":"24043","record_id":"<urn:uuid:5512e13b-e961-43d8-b3cb-c56e0990e956>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Show that Z=E mod 9, where E is the sum of digits of r (in dec. repres. of r)
April 8th 2011, 03:06 PM
I have the following problem and I don't know how to go about this... I would really appreciate if you could give me a hand. The problem says:
"Let $r \in \mathbb{N}$. Show that
$r \equiv \Sigma \pmod 9$,
where $\Sigma$ is the sum of digits of $r$ (in decimal representation of r)."
April 8th 2011, 03:33 PM
I have the following problem and I don't know how to go about this... I would really appreciate if you could give me a hand. The problem says:
"Let $r \in \mathbb{N}$. Show that
$r \equiv \Sigma \pmod 9$,
where $\Sigma$ is the sum of digits of $r$ (in decimal representation of r)."
If $n=A_k\times 10^k +A_{k-1}\times 10^{k-1}+\ldots+A_1\times 10+A_0$ , and
since $10^m=1\!\!\pmod 9\,,\,\,\forall\,m\in\mathbb{N}$ , we get
$n=\sum\limits^k_{i=0}A_i\times 10^i=\sum\limits^k_{i=0}A_i\!\!\pmod 9$
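As a numeric sanity check (an addition to the thread, not part of the original proof), the congruence is easy to verify by brute force:

```python
def digit_sum(r):
    # sum of the decimal digits of r
    return sum(int(d) for d in str(r))

# r is congruent to digit_sum(r) mod 9 for every natural r;
# check the first few thousand values
assert all(r % 9 == digit_sum(r) % 9 for r in range(1, 5000))
print("congruence holds for all r < 5000")
```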
|
{"url":"http://mathhelpforum.com/number-theory/177292-show-z-e-mod-9-where-e-sum-digits-r-dec-repres-r-print.html","timestamp":"2014-04-20T12:41:41Z","content_type":null,"content_length":"6866","record_id":"<urn:uuid:3b89f086-165d-4211-81e2-b94a8cf00910>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00142-ip-10-147-4-33.ec2.internal.warc.gz"}
|
z-score help
June 24th 2010, 03:31 PM #1
Jun 2010
Find the z-score for having area 0.07 to its right under the standard normal curve, that is, find z 0.07.
I have the choices of 1.45, 1.26, 1.48, and 1.39 but I can't seem to find any of those values on the chart. Which would be the right one.
you can plug those numbers into
Free Cumulative Area Under the Standard Normal Curve Calculator
or use
there are lots of tables out there
June 24th 2010, 05:06 PM #2
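For reference, the same lookup can be done programmatically; this Python sketch is an addition to the thread, not part of it:

```python
from statistics import NormalDist

# z_{0.07}: the point with area 0.07 to its right under the
# standard normal curve, i.e. the 93rd percentile
z = NormalDist().inv_cdf(1 - 0.07)
print(round(z, 2))  # 1.48
```

which confirms 1.48 among the answer choices.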
|
{"url":"http://mathhelpforum.com/statistics/149296-z-score-help.html","timestamp":"2014-04-18T09:23:53Z","content_type":null,"content_length":"32274","record_id":"<urn:uuid:9409d1e6-d130-48b5-ace7-f59f12c65af9>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Essentials of College Algebra with Modeling and Visualization plus MyMathLab with Pearson eText...
Why Rent from Knetbooks?
Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option
for you. Simply select a rental period, enter your information and your book will be on its way!
Top 5 reasons to order all your textbooks from Knetbooks:
• We have the lowest prices on thousands of popular textbooks
• Free shipping both ways on ALL orders
• Most orders ship within 48 hours
• Need your book longer than expected? Extending your rental is simple
• Our customer support team is always here to help
|
{"url":"http://www.knetbooks.com/bk-detail?isbn=9780321756015","timestamp":"2014-04-20T21:26:55Z","content_type":null,"content_length":"33813","record_id":"<urn:uuid:de80322d-8588-4f11-b121-7c6e6cfc9b0c>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Two Open Problems for the Subgroup-Reduction based Dedekindian HSP Algorithm
By fredw on Thursday, August 5 2010, 18:41 - Permalink
One of my main ideas after having studied the Hidden Subgroup Problem is that subgroup simplification is likely to play an important role. Basically, rather than directly finding the hidden subgroup $H$ by working on the whole initial group $G$, we first only try to get partial information on $H$. This information allows us to move to a simpler HSP problem, and we can iterate this process until the complete determination of $H$. Several reductions of this kind exist, and I think the sophisticated solution to HSP over 2-nil groups illustrates well how this technique can be applied.
Using only subgroup reduction, I've been able to design an alternative algorithm for the Dedekindian HSP i.e. over groups that have only normal subgroups. Recall that the standard Dedekindian HSP
algorithm is to use Weak Fourier Sampling, measure a polynomial number of representations ${\rho }_{1},\dots ,{\rho }_{m}$ and then get with high probability the hidden subgroup as the intersection
of kernels $H=\bigcap _{i}\mathrm{Ker}{\rho }_{i}$. When the group is dedekindian, we always have $H\subseteq \mathrm{Ker}{\rho }_{i}$. Hence my alternative algorithm is rather to start by measuring
one representation ${\rho }_{1}$, move the problem to HSP over $\mathrm{Ker}{\rho }_{1}$ and iterate this procedure. I've been able to show that we reach the group $H$ after a polynomial number of
steps, the idea being that when we measure a non-trivial representation, the size of the underlying group is at least halved. One difficulty of this approach is to determine the structure of the new group $\mathrm{Ker}{\rho }_{i}$ so that we can work on it. However, for the cyclic case this determination is trivial, and for the abelian case I've used the group decomposition algorithm, based on the cyclic HSP. Finally, I have two open questions:
1. Can my algorithm work for the Hamiltonian HSP i.e. over non-abelian dedekindian groups?
2. Is my algorithm more efficient than the standard Dedekindian HSP?
For the first question, I'm pretty sure that the answer is positive, but I admit that I've not really thought about it. For the second one, it depends on what we mean by more efficient. The
decomposition of the kernel seems to increase the time complexity but since we are working on smaller and smaller groups, we decrease the space complexity. However, if we are only considering the
number of samples, my conjecture is that both algorithms have the same complexity and more precisely yield the same Markov process. In order to illustrate this, let's consider the cyclic case $G=\mathbb{Z}_{18}$ and $H=6\mathbb{Z}_{18}$. The Markov chain of my alternative algorithm is given by the graph below, where the edge labels are of course probabilities and the node labels are the underlying group. We start at $G=\mathbb{Z}_{18}$ and want to reach $H\cong \mathbb{Z}_{3}$.
One might think that moving to smaller and smaller subgroups will be faster than the standard algorithm, which always works on $\mathbb{Z}_{18}$. However, it turns out that the Markov chain of the standard algorithm is exactly the same. The point being that while it is true that working over $\mathbb{Z}_{9}$ or $\mathbb{Z}_{6}$ provides fewer possible samples (3 and 2 respectively, instead of 6 for $\mathbb{Z}_{18}$), the distribution of "good" and "bad" samples is the same and thus we get the same transition probabilities. I guess it would be possible to formalize this for a general cyclic group. The general abelian case seems more difficult, but I'm sure that the same phenomenon can be observed.
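To make the $\mathbb{Z}_{18}$ example concrete, here is a small classical simulation of the subgroup-reduction walk. This is my own sketch: it simulates only the transition structure of the chain (not the quantum sampling itself), using the fact that for a cyclic group $\mathbb{Z}_m$ with a hidden subgroup of size 3, weak Fourier sampling returns a uniform element $k$ of $3\mathbb{Z}_m$, and the algorithm descends to the kernel $\mathbb{Z}_{\gcd(k,m)}$:

```python
import random
from math import gcd

def hsp_walk(rng, n=18, h_size=3):
    # Subgroup-reduction walk on a cyclic group Z_m with a hidden subgroup
    # of h_size elements. Weak Fourier sampling returns a uniform element
    # k of H-perp = h_size * Z_m; we then descend to the kernel of that
    # character, which is isomorphic to Z_gcd(k, m). The trivial sample
    # k = 0 gives no information, since gcd(0, m) = m.
    m, steps = n, 0
    while m > h_size:
        k = h_size * rng.randrange(m // h_size)
        m = gcd(k, m)
        steps += 1
    return steps

rng = random.Random(42)
walks = [hsp_walk(rng) for _ in range(5000)]
print(sum(walks) / len(walks))  # empirically about 2.3 samples on average
```

The walk only ever visits $\mathbb{Z}_{18}$, $\mathbb{Z}_{9}$, $\mathbb{Z}_{6}$ and $\mathbb{Z}_{3}$, matching the graph described above, and the empirical mean of about 2.3 samples agrees with solving the chain's expected hitting time exactly.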
|
{"url":"http://www.maths-informatique-jeux.com/blog/frederic/?post/2010/08/05/Two-Open-Problems-for-the-Subgroup-Reduction-based-Dedekindian-HSP-Algorithm","timestamp":"2014-04-20T18:24:21Z","content_type":null,"content_length":"24859","record_id":"<urn:uuid:10d05f9d-7ff6-41a5-b083-de707485af2b>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00637-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Post a reply
Topic review (newest first)
2006-01-07 04:26:18
rickyoswaldiow wrote:
Consider this: you have a bag of sweets containing 10 whole sweets but lots of shrapnel in the bottom of the bag. The shrapnel could make up 3/4 of a sweet yet you'd still say you have 10 sweets.
Then in this case, are you not rounding down even though you have more than .5? I don't think you can "prove" which way to round, people just do it to simplify a number, it's not actually a
mathematical theory (or something).
That's an interesting point. Most of our arguments so far have been theoretical, but in practical situations it is usually obvious which method to use.
If rickyoswaldiow's sweety shrapnel could be put together to make 2 whole sweets, it would still only count as 10. So, the rule there is to ignore any fractions.
Another example would be that if a factory that sells cans decides to combine them into 6-packs, you would divide by 6 and round down, even if you had 5 spare.
Conversely, if a bus can hold 25 people then to work out how many buses you need to hold a certain amount of people, you would have to round up all the time, even if the last bus will only have 1
person on it.
But for the theoretical side of it, we should just say that 0.5 rounds up to 1 because it is convention and if you try anything else it will be seen as wrong, even if you don't believe it is. The
2006-01-06 17:50:01
Over a large sampling, though, you could choose a method that would lead to the least bias.
So, apply the rounding method to suit the data.
2006-01-06 17:23:53
Consider this: you have a bag of sweets containing 10 whole sweets but lots of shrapnel in the bottom of the bag. The shrapnel could make up 3/4 of a sweet yet you'd still say you have 10 sweets.
Then in this case, are you not rounding down even though you have more than .5? I don't think you can "prove" which way to round, people just do it to simplify a number, it's not actually a
mathematical theory (or something).
2006-01-05 20:42:39
2006-01-05 14:40:59
I think people like to round up becuase the most public use of rounding is related to money handling. rounding up = more $$$.
VR Hawks
2006-01-05 01:45:31
mathsyperson has some meaning, but why should we round it up/down when we don't have a straight answer?
2006-01-05 01:37:52
In an old book it's said that
<√x>=[√([√x]+x)], where [x] is floor[x].
2006-01-05 01:20:06
MathsIsFun wrote:
Take a random bill. Estimate it by rounding off the cents (or pence, or centavos or whatever).
Example: 3.45, 12.07, 6.68, ...
Rounded: 3,12,7 ...
What method will have the least error? (There are 100 possible cent values from 00 to 99)
That depends on the prices. Most prices tend to be biased towards the high end to make people think things are cheaper than they are. eg, £3.99 etc.
So, that would mean that rounding would give a higher value a than the actual one, so to compensate it would be better to round £3.50 downwards. That doesn't prove anything though.
2006-01-05 00:00:18
Or the best answer:
If a number is .5 we just don't round it.
2006-01-04 23:54:49
Take a random bill. Estimate it by rounding off the cents (or pence, or centavos or whatever).
Example: 3.45, 12.07, 6.68, ...
Rounded: 3,12,7 ...
What method will have the least error? (There are 100 possible cent values from 00 to 99)
2006-01-04 21:57:41
And in my mathematica help i read something very strange:
It rounds .5 to nearest even integer!!!
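Python 3's built-in round uses the same "round half to even" rule (banker's rounding), which keeps the rounding error unbiased over many values; a quick check, added here for illustration:

```python
# Python 3 rounds exact halves to the nearest even integer
# ("banker's rounding"); 0.5, 1.5, ..., 5.5 are all exact floats
halves = [x + 0.5 for x in range(6)]
print([round(h) for h in halves])  # [0, 2, 2, 4, 4, 6]
```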
2006-01-04 04:33:42
To put another nail in the coffin...
If you think that 4.0 rounds down to 4, then it is also true that 5.0 rounds up to 5. So you would have:
4.0 .1 .2 .3 .4 .5 .6 .7 .8 .9 5.0
4.0, .1, .2, .3, and .4 round down
4.5, .6, .7, .8, .9, and 5.0 round up
So there are still 5 numbers that round down, and 6 numbers that round up.
2006-01-03 22:12:33
I actually think
<m.00000> = m, so <m.50000> = m+1. I round it up.
2006-01-03 22:10:49
It depends on what you round.
If it's student mark and you are good, you'll round it up.
2006-01-03 21:55:58
I was always taught that .5 rounded up, and I've never heard anyone disagree with that up to now.
|
{"url":"http://www.mathisfunforum.com/post.php?tid=1576&qid=22419","timestamp":"2014-04-19T05:05:20Z","content_type":null,"content_length":"23556","record_id":"<urn:uuid:933b6166-53be-4703-a2c1-b0cfa750b52c>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00563-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[Tutor] Simple RPN calculator
Brian van den Broek bvande at po-box.mcgill.ca
Sun Dec 5 17:40:38 CET 2004
Kent Johnson said unto the world upon 2004-12-05 06:55:
> RPN reverses the order of operator and operand, it doesn't reverse the
> whole string. So in Polish Notation 2 + 3 is +23 and (2 + 3) - 1 is
> -+231; in RPN they become 23+ and 23+1-
> Kent
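Kent's two examples can be checked mechanically; the following stack-based evaluator is an illustrative addition, not part of the original message:

```python
import operator

OPS = {'+': operator.add, '-': operator.sub,
       '*': operator.mul, '/': operator.truediv}

def eval_rpn(expr):
    # evaluate an RPN string of single-digit operands and binary operators
    stack = []
    for ch in expr:
        if ch in OPS:
            b, a = stack.pop(), stack.pop()  # note the operand order
            stack.append(OPS[ch](a, b))
        elif ch.isdigit():
            stack.append(int(ch))
    return stack.pop()

print(eval_rpn('23+'))    # 5
print(eval_rpn('23+1-'))  # 4
```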
Hi all,
Thanks Kent, that is what I had assumed it would be by analogy to Polish
notation in logic. Somewhere on the thread, I though it had been
asserted that all opps and operands were separated. For a bit there, I
thought I'd gone all goofy :-) So, thanks for clearing that up.
Thanks also for the other interesting posts on the thread.
Largely off-topic things follow:
One other advantage, at least from the logicians perspective is that
standard "infix" notation is only able to comfortably deal with binary
and unary operations (operations that have 2 or 1 arguments). For
arithmetic, where you can do everything with zero, successor,
multiplication, and addition, that isn't so important. But notice that
general function notation, in Python and in math, is also Polish -- to
write a 4 placed function that takes, say, the greatest common divisor
of two numbers, and the least common multiple of two others, and tells
you if the first divides the second, you've got to write:
DividesGcdLcm(a, b, c, d)
So, Polish notation makes manifest the conceptual similarity between the
addition -- ADD(a,b) -- 2-placed function and arbitrary n-placed functions.
This also helps out a lot in some of the areas where formal logic and
formal semantics for natural languages bleed into each other. At a cost
of patience, all truth functions can be expressed in terms of the "not
both" truth function, so polyadic truth-functions past binary don't
really need Polish notation.
But, when you consider the quantifiers ('for everything . . .' and
'there is at least on thing . . . '), standard ones are one-placed (with
a given universe of discourse set assumed). In the 1950's and 1960's
mathematicians began exploring generalizations of the quantifier notion.
There have, since the 1980's, been a sizable group of linguists who
argue that natural language quantification is almost always 2 or higher
placed. After two places, this too needs Polish notation (or heroic and
ugly conventions).
Brian vdB
More information about the Tutor mailing list
|
{"url":"https://mail.python.org/pipermail/tutor/2004-December/033816.html","timestamp":"2014-04-17T18:04:49Z","content_type":null,"content_length":"4730","record_id":"<urn:uuid:7dc3f2e2-a616-4dff-abaf-ee29b8557b7e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Physics Forums - View Single Post - Calculating CFM from PSI and C.S.A.
With this formula, are we supposed to assume that velocity is in seconds? Obviously, I do not know what the background and proofs behind this formula are. And in my ignorance, I can't feel
comfortable assuming what the units of a variable are expressed in.
|
{"url":"http://www.physicsforums.com/showpost.php?p=3528033&postcount=14","timestamp":"2014-04-19T02:25:06Z","content_type":null,"content_length":"6897","record_id":"<urn:uuid:4541c85e-6812-418e-9fa2-2c5ea765c380>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The Computational Life
I regret to inform that I will no longer be updating this blog. However, if you're interested in any further writing I might do on evolution, programming, and the like, I encourage you to follow me
on Twitter. I'll post updates there.
Follow @elirm
Disclaimer: All characters depicted in the following post are entirely a work of the author's (sleep-deprived) imagination. All models are wrong and probably not useful.
[Embedded video: The Daily Show with Jon Stewart, "Exclusive - An Elegant Message to the 47%"]
Political rant
At a fundraiser in May, Republican presidential candidate Mitt Romney (paraphrased above) characterized almost half of the American populace as hopeless plebians desperately attached to the sore teat
of the government. As a generally left-leaning graduate student dependent on federal research grants for income, I can only assume I'm included within this group. Maybe I should feel insulted, but
I'd rather understand the worldview that allows a presidential candidate to become a Simpsons-style caricature of himself and still garner support from half the country. This could lead to a
discussion of zero-sum games or the psychology of authoritarianism. For now, I want to focus on something more fundamental and (seemingly) nonpartisan - the notion of causality.
As far as I can tell from the rhetoric, the current Conservative narrative goes something like this:
Once upon a time there were two brothers, Lefty and Righty. Every day after school Righty would invest time into his lemonade stand, "Trickle-down treats", which he built with his own hands using
personally obtained venture capital. By age 10 he had successfully incorporated all neighborhood lemonade stands into his company, allowing him to create 50 jobs for area children. He eventually
went on to additional fame and fortune as the CEO of bootstrapped bootstraps, the finest purveyor of self-replicating footwear accessories. Meanwhile, Lefty delved into Marxist literature,
writing diatribes on dialectical materialism whilst demanding the occasional handouts from Righty and thus depriving his brother from his grand ambition: opening the Grover Norquist bathtub
Essentially, the message is that hard work leads to success; therefore, success comes from hard work. Likewise, laziness leads to economic hardship and dependency, meaning that poor people are lazy,
entitled fuckwads.
This notion of economic causality is actually a very comforting thought, perhaps increasing the appeal of the right beyond their legitimate economic base. After all, most people (rightfully) believe
they work hard and deserve the American dream - a house, two cars, and plenty of useless Chinese-manufactured crap to regift to their 2.5 kids when they move to Florida in their old age.
Yet the world is not causal. As great sages have said before me, "shit happens". Kids happen. Unexpected bills happen. Market changes happen. Still, many hold onto the notion that hard work
guarantees success. Fortunately, as someone who can program, I'm in the position to play armchair economist and ask what happens when the only factors distinguishing the economic success of one
person from another are the random vicissitudes of life. Now let's play make-believe with the economy!
Simulating the economy
Here's my scenario: In the beginning there are 500 families, each with 80 dollars to their names. I then simulate their economic histories through the course of 50,000 days (about 137 years), so
there are multiple generations here. On each day of the simulation, I guarantee every family an income of 80 dollars. However, each family also has daily expenses that are sampled from a normal
distribution, where the mean of the distribution is given by the following, where $x$ is total savings up to that point: $$ f(x) = \frac{80}{1+(0.01)0.995^{-0.01x}} $$ Basically, this function means
that families with very low savings are spending almost all of their incomes on expenses, while wealthier families devote a much smaller portion toward expenses. I also set the standard deviation of
the expense distribution to one-fifth of the income. This allows for families to lose or gain money with each passing day, though they do have a slight positive edge on average. Additionally,
families with positive savings (those who aren't in debt) invest their savings, which yields returns (or losses) through a normal distribution with mean 0 and a standard deviation equal to 1% of the
accumulated wealth of the family.
Now, I know the model is not taking into account many aspects of economic life - rare extreme disastrous events (black swans to use Taleb's terminology), loan interest, the idiotic spending habits of
certain individuals. However, the main point I want to address is what happens when everyone follows the same (not unrealistic) economic strategy. My Python code is below. It runs the simulation,
plots the economics histories, and plots the fraction of wealth accumulated by the top 20% and top 5% of the population over time. I encourage you to try it for yourself and modify it if you want. If
you think I'm being a total idiot, I encourage you to show me a better way!
from scipy import stats
import numpy as np
import matplotlib.pyplot as plt

# Run the simulation
histories = [np.zeros(50000) for x in xrange(500)]
income = 80.
expense_func = lambda x: 1./(1. + 0.01*0.995**(-0.01*x))
for his in histories:
    his[0] = income
for i in xrange(1, 50000):
    print i
    for his in histories:
        # daily savings: income minus normally distributed expenses
        saved = income - stats.norm.rvs(loc=expense_func(his[i-1])*income, scale=income/5.)
        if his[i-1] > 0:
            # families out of debt also see investment returns:
            # mean 0, standard deviation of 1% of accumulated wealth
            his[i] = his[i-1] + saved + stats.norm.rvs(loc=0., scale=0.01*his[i-1])
        else:
            his[i] = his[i-1] + saved

# Plot money vs. time
for his in histories:
    plt.plot(np.arange(0, 50000, 50), his[np.arange(0, 50000, 50)])
plt.show()

# Fraction owned by top 20% (richest 100 of the 500 families) over time
wealth_frac = []
for i in xrange(50000):
    wealth = sorted([his[i] for his in histories])
    wealth_frac.append(sum(wealth[-100:]) / sum(wealth))
wealth_frac = np.array(wealth_frac)

# Plot out fraction owned by top 20% over time
plt.plot(wealth_frac)
plt.ylabel('Ownership by top 20%')
plt.show()

# Fraction owned by top 5% (richest 25 families) over time
wealth_frac = []
for i in xrange(50000):
    wealth = sorted([his[i] for his in histories])
    wealth_frac.append(sum(wealth[-25:]) / sum(wealth))
wealth_frac = np.array(wealth_frac)
plt.plot(wealth_frac)
plt.ylabel('Ownership by top 5%')
plt.show()
In the end the richest family ended up with 38,281,980 dollars, while the poorest had only 1,528 despite the fact that they pursued the same economic strategies. I can only assume the richest
family - represented by the black line in the first figure - would be lauded for their success, a testament to the value of hard work. After all, Grandpa Richy McRicherton started with 80 dollars in
his pocket and ended up a multi-millionaire. Meanwhile, the poorer families might be decried for their use of public services and desire to fund them.
Obviously this wasn't totally realistic, since no one ended up in debt in the end, but the inequalities stand out over repeated simulations. A couple issues - although the figures appear to show that
the top 20% started with 50% of the wealth, this is actually due to a sharp upswing in inequality at the beginning of the simulation as the families depart from the equal starting values. Curiously,
the top 20% reaches a high of owning approximately 80% of the wealth, which falls in line with the famous Pareto principle, but this may just be a coincidence. I would need to run the simulation for
longer to see if the top 20% continues to account for approximately 65% of the wealth or if there is another longer term trend. Either way, enough inequality occurs to allow the top 5% to control
between 35% and 40% of the total wealth in the end, and a few different runs with varying parameters show this to be robust to changes in the values used in the simulation.
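One compact way to quantify this kind of inequality is the share of wealth held by the top fraction of the population. The sketch below is my own addition; the Pareto draw is just an illustrative stand-in for a final wealth distribution, not output from the simulation above:

```python
import numpy as np

def top_share(wealth, frac):
    # fraction of total wealth held by the richest `frac` of the population
    w = np.sort(np.asarray(wealth, dtype=float))
    k = max(1, int(round(frac * len(w))))
    return w[-k:].sum() / w.sum()

rng = np.random.default_rng(0)
# shape ~1.16 is the textbook "80/20" Pareto distribution
wealth = rng.pareto(1.16, size=10_000) + 1.0
print(top_share(wealth, 0.20), top_share(wealth, 0.05))
```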
So what can we conclude from this excercise (assuming my simulation wasn't totally naive)? 1.) inequality is the natural tendency for unconstrained economic systems and 2.) the causal economic
storyline is bullshit.
John Galt can kiss my ass
I highly recommend reading Michael Lynch's Origins of Genome Architecture, if you haven't already. In short, it provides a bridge from understanding the fundamentals of population genetics -
particularly genetic drift, mutation, selection, and recombination - to understanding how the quintessential features of eukaryotic genomes may have come about. Even if you don't completely buy into
his non-adaptationist paradigm, I believe his thinking provides an excellent basis for forming null hypotheses when testing evolutionary scenarios. However, it has also shaped my thinking on some
decidedly non-biological concepts - in particular, how bureaucracies become the mazes of paper-shuffling complexity we know and love. The analogy isn't perfect (no populations, non-random
duplications), but I thought it was worth writing down. Here's how I see it, as told through the parable of Bob the accountant:
Meet Bob
Bob works at the University of HappyFunVille where he is universally loved by graduate students and faculty alike. He keeps track of the cash flow in the Department of Totally Fund-able
Research. When members of the department come back from trips, they fill out a single small form that lists expenses and a funding string. They usually get their reimbursement within a week.
The University implements a computerized system to keep track of accounts across the whole campus. They announce that this will herald a new golden age of efficiency, as promised by Interfacial
Solutions, the contractor hired to implement the software. Bob now has a new job function. He must provide an entirely new set of information to the software for travel reimbursement on top of
maintaining consistent records for the Department. Travelers must now fill out a pre-authorization form with expected expenses a week before traveling in addition to the reimbursement form after
coming back. Bob occasionally loses one of the forms, and reimbursement now takes an average of two weeks.
Duplication, Subfunctionalization, and Escape from Adaptive Conflict
On a whim, the Dean favors the Department of Totally Fund-able Research and provides funding to hire a new staff person, Bob II. In fact, Bob II ends up with the same job title and description as Bob
I. Bob I complains to the chair of the department that Bob I is redundant, so the chair assigns Bob I to his old task of keeping the intra-department financial records while giving Bob II the task of
dealing with the University's computerized reimbursement system. Bob I enjoys having less work to do, and Bob II excels at his specialized job. However, the Bob's don't coordinate their schedules, so
no work gets done when either Bob goes on vacation. Mistakes in the reimbursement process, though rare from each Bob, now occur twice as often in total. Reimbursement now takes an average of 3 weeks.
The department switches all their records to the computerized system, eliminating Bob I's remaining function in the department. However, the University lacks a good mechanism for laying him off, so
he just sits around accumulating the signs of old age until the day when he's eliminated in a massive budget cut.
The Dean diverts increasing resources to the computerized record center, allowing them to hire 3 new people in the reimbursement department, each of whom specializes in a separate task performed by their predecessor and each of whom has a non-zero probability of losing a form or making a typographical error. Reimbursements now take 6 months. The Dean proposes hiring new staff to solve the problem.
I had some great math teachers growing up, but they left out some of the techniques I use on a daily basis nowadays. I have a particular fondness for algorithms that rely on randomness to generate
their solutions. True, they usually aren't the most efficient things in the world, but I think evolutionary biologists have an aesthetic predisposition to solutions with stochasticity at their cores.
Also, they often work when nothing else will, including my will to perform symbolic integration.
A couple weeks ago I needed to calculate Bayes factors to evaluate whether the genotypes in a controlled cross were segregating according to Mendelian expectations. It was a bit more complicated than
that, but the gist is that I wanted to evaluate the likelihood that the genotypes were segregating 1:2:1 relative to the marginal likelihood that the genotype probabilities were completely unknown.
Let's say that for a particular locus, I obtained perfect Mendelian segregation for double heterozygous parents in 100 offspring, meaning there were 25 AA, 50 Aa, and 25 aa individuals. I can obtain
the likelihood for that particular data under the model of double heterozygous parents using the multinomial distribution. That is, I could calculate the likelihood as follows: $$ P(D \mid p=0.25, q=0.5) = \frac{100!}{25!\,50!\,25!}(0.25)^{25}(0.5)^{50}(0.25)^{25} = 0.00893589 $$
That's easy enough. However, the marginal likelihood for the multinomial distribution under the prior probability that the genotype probabilities are completely unknown (a uniform prior) would be
calculated using the following integral: $$ \int_0^1 \int_0^{1-q} \frac{100!}{25!50!25!} p^{25} q^{50}(1-p-q)^{25} dp dq $$ I was not going to try and integrate that, so I decided to let randomness
do the work for me using Monte Carlo integration. This is a fairly naive technique in which I let the computer randomly select points in the domain under consideration (in this case between 0 and 1
along both the $p$ and $q$ dimensions) and then average. This average then approaches the integral as the number of sampled points approaches infinity. Also, in this case my multidimensional volume
($V$) is 1, as both axes of integration are of size 1. However, in general I would have to correct for the volume of the multidimensional domain of integration. In math-speak, I'm saying the
following is true: $$ \int f(x)dx = \lim_{n \to \infty} V \frac{1}{n}\sum_{i=1}^nf(x_i) $$ where x could be of any dimension.
Below, I use Python to calculate the marginal likelihood of the multinomial using Monte Carlo integration with 1,000,000 random samples. In practice, I generally settle on around 100,000 for
efficiency's sake, but more iterations will give a more precise answer. Here is the answer from Wolfram Alpha. I estimated the answer twice, and my results closely bracketed the actual solution.
import numpy as np
from numpy import array, log, exp, random
from scipy.special import gammaln
## Integrates over the two free parameters in the multinomial formula
## Helper function to calculate the factorial in order to get the multinomial probability
# @param x The argument for the factorial
# @return The logarithm of the factorial
def log_factorial(x):
    return gammaln(array(x)+1)
## Helper function to get the multinomial probability
# @param n The total number of individuals
# @param xs A list of the numbers for each category
# @param ps A list of the probabilities for each category
# @return The multinomial probability
def multinomial(n, xs, ps):
    xs, ps = array(xs), array(ps)
    result = log_factorial(n) - sum(log_factorial(xs)) + sum(xs * log(ps))
    return exp(result)
# @param n_genos The number of genotyped individuals at the locus
# @param n_h1 The number of individuals homozygous for allele 1
# @param n_he The number of individuals heterozygous
# @param mc_steps The number of steps to use for monte-carlo integration over the multinomial formula
# in order to generate the marginal likelihood under a uniform prior
def mc_integrate_multinomial(n_genos,n_h1,n_he,mc_steps=10000):
    p_H1 = random.rand(mc_steps)
    p_He = random.rand(mc_steps)
    samps = []
    for i in xrange(mc_steps):
        # points outside the simplex (p + q > 1) have zero likelihood,
        # but must still be counted so the average covers the unit square (V = 1)
        if p_H1[i] + p_He[i] > 1.:
            samps.append(0.)
            continue
        probs = [p_H1[i], p_He[i], 1. - p_H1[i] - p_He[i]]
        samps.append(multinomial(n_genos, [n_h1, n_he, n_genos - n_h1 - n_he], probs))
    return np.mean(samps)
>>> mc_integrate_multinomial(100,25,50,mc_steps=1000000)
>>> mc_integrate_multinomial(100,25,50,mc_steps=1000000)
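Incidentally, this particular integral also has a closed form that the Monte Carlo estimate can be checked against: the integrand is a Dirichlet kernel, so the factorials cancel down to $n!/(n+2)!$. A quick sketch of that check (the function name here is mine, not from the post):

```python
import numpy as np
from scipy.special import gammaln

# Closed-form marginal likelihood of a 3-category multinomial under a
# uniform prior on (p, q): the Dirichlet integral gives
#   C(n; x1, x2, x3) * x1! x2! x3! / (n + 2)!  =  n! / (n + 2)!
# (computed in log space to avoid overflowing the factorials).
def marginal_likelihood(n, xs):
    xs = np.array(xs)
    log_coeff = gammaln(n + 1) - np.sum(gammaln(xs + 1))     # multinomial coefficient
    log_integral = np.sum(gammaln(xs + 1)) - gammaln(n + 3)  # Dirichlet normalizer
    return np.exp(log_coeff + log_integral)

print(marginal_likelihood(100, [25, 50, 25]))  # 1/(101*102), about 9.707e-05
```

With n = 100 this is 1/(101 * 102), which the two Monte Carlo runs above should bracket.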
There are better ways to do this - namely through importance sampling or MCMC. However, if most of the function's volume is not too constrained, plain Monte Carlo integration works well enough. In
future posts, I will discuss MCMC more thoroughly, especially with respect to Bayesian analysis.
Ten years ago we saw the culmination of two decades' worth of work in the publication of the human genome, a bloated mess of As, Cs, Gs, and Ts half-full of seemingly repetitive nonsense obtained at
the price of a dollar per precious base pair. Then, the media fed the public's collective chagrin as they announced the paltry figure of 1% deemed "functional" at the time. That word - functional -
only encompassed those portions of the genome known or predicted to be translated into proteins or transcribed into the limited suite of RNAs with well-defined catalytic roles at the time. Over the
last decade - using a set of model systems from the weedy thale cress to the diminutive fruit fly - that word has evolved to encompass an alphabet soup of specialized RNAs, regulatory binding sites,
and activity-modifying marks that have yielded ever-increasing insight into the dynamics of the eukaryotic genome. Of course each small step in that pursuit hardly merited the front cover of multiple
high-profile journals. And so the ENCODE project bid the scientific community to hold its collective breath until this past Wednesday, when the fleet of ENCODE publications sailed forth into public
view with a large "80% functional" above each masthead.
The wet dream of comparative genomics
Yes, 80% of the human genome was found to either produce some RNA, or sometimes bind to a regulatory protein, or contain marks indicative of transcriptionally active regions. The authors also
demonstrated that, on average, most of these regions are less variable than expected if selection were not acting to constrain them. This does imply that certain subsets of these elements do have
biochemical roles with enough impact on reproductive success to overwhelm the stochastic fluctuation of variant frequencies from generation to generation and that the data sets were large enough to
detect these subsets. Yet the 80% figure implies, at least to the average person, that the overwhelming majority of our genomes cannot sustain mutation without a non-negligible impact on fitness,
which would be extraordinary, but for now remains disingenuous.
The New York Times used a Google Maps analogy for ENCODE, and I thought this could fit quite well. I could envision the human genome as a major metropolis, Detroit perhaps. The downtown still
contains the bastions of yesterday's economic glory, without which the city might completely turn to shambles. As someone viewing a Google map of Detroit, I could easily posit that these buildings
still serve valuable economic and governmental functions, similar to how the ENCODE elements conserved across class Mammalia likely encode important developmental and housekeeping functions within
humans. However, as I pan over the extremities of the city, I would find houses in various states of disrepair. Certainly, from my vantage point many would have all the characteristics of functional
domiciles - roofs, driveways, fences - to discriminate them from rural areas of the country without human presence. Yet if I were to explore those areas on the ground level, I would find many of the
homes abandoned. Granted, this wouldn't stop the occasional transient homeless person from squatting for certain periods of time; but that hardly meets the usual definition of functional.
The bleaker reality of population genetics
This is how I suspect much of the human genome works. Most of it is capable of binding the occasional transcription factor, transcribing the odd RNA, and accepting contextual epigenetic markings. On
rare occasions the insert of a duplicated gene or the local rearrangement of the DNA may provide opportunities for sections with previously transient but useless biochemical activity to take
regulatory roles that have non-negligible effects on fitness and eventually become conserved, much like how the introduction of a successful business into a dying community can drive the
revitalization of existing infrastructure. However, like the successful business and its surrounding region, the conserved genomic elements wink in and out of existence on a longer timescale.
I am excited by the genomic "Google map" that the ENCODE project has provided us, and I am sure it will lend considerable insight into human disease when combined with the theoretical power of
population genetics. I just don't think the authors should imply that everything with the characteristics of function is presently important.
Ian Carroll helpfully reminded me that I had neglected the easiest (and most computationally efficient) way of calculating the stationary distribution of the Markov model in the previous post, and I
would be remiss if I didn't demonstrate it and the logic behind its use. This time we take advantage of the definition of an eigenvector. Recall that our goal is to find the probability distribution
of states that, when left-multiplied by the transition probability matrix, results in the same distribution. In math-speak, this is the following: $$ {\bf v^TP = v^T} $$
Now, with that in mind, recall (or learn for the first time if you're one of today's lucky 10,000) that the eigenvector ${\bf u}$ and corresponding eigenvalue $\lambda$ for some square matrix ${\bf
A}$ are defined as follows: $$ {\bf Au} = \lambda {\bf u} $$
More specifically, these are called the right eigenvectors/eigenvalues, since they are defined when the vector is multiplied on the right. We can also define the left eigenvectors/eigenvalues as the
${\bf u}$, $\lambda$ combos that satisfy the following: $$ {\bf u^TA} = \lambda {\bf u^T} $$ Now, if we plug in our stationary distribution vector ${\bf v}$ for ${\bf u}$, our transition matrix ${\bf
P}$ for ${\bf A}$, and let $\lambda = 1$, we get the familiar ${\bf v^TP = v^T}$. Also, finding the left eigenvalues and eigenvectors for a matrix is the same as finding the right-side versions for
the transpose of that matrix. This means that, computationally, we can take the eigen decomposition of ${\bf P^T}$, find the column in the ${\bf U}$ matrix corresponding to an eigenvalue of 1,
normalize that vector (since it can be scaled by any arbitrary constant), and that will give the stationary distribution. I demonstrate this below:
import numpy as np
from numpy.linalg import eig
transition_mat = np.matrix([
S,U = eig(transition_mat.T)
stationary = np.array(U[:,np.where(np.abs(S-1.) < 1e-8)[0][0]].flat)
stationary = stationary / np.sum(stationary)
>>> print stationary
[ 0.34782609 0.32608696 0.30434783 0.02173913]
We get the same stationary distribution as before without having to perform an extra exponentiation or matrix multiplication! The glories of the eigen decomposition never cease to amaze!
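For completeness, here is a self-contained version of the recipe. The transition matrix below is an arbitrary 3-state example made up for illustration; it is not the 4-state matrix from the previous post:

```python
import numpy as np
from numpy.linalg import eig

# A made-up 3-state transition matrix (not the one from the earlier post)
# to exercise the eigenvector recipe end to end.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])

# Right eigenvectors of P^T are left eigenvectors of P.
S, U = eig(P.T)
v = np.real(U[:, np.argmin(np.abs(S - 1.0))])
v = v / np.sum(v)  # eigenvectors are only defined up to scale

# Sanity check: the distribution is unchanged by one step of the chain.
assert np.allclose(np.dot(v, P), v)
print(v)  # approximately [0.6, 0.3, 0.1]
```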
Jerry Coyne's blog, Why Evolution is True, reported today on a free homeschool course offered by Cambridge University (edit: the Faraday Institute, which apparently only rents space from Cambridge)
that purports to profess the inherent compatibility between science and Christianity. As a scientist with an interest in promoting scientific literacy among all people, my initial reaction is that
anything wrapped up for fundamentalist consumption without the addition of incestuous clones riding dinosaurs around an ecologically unsustainable garden is probably a good thing. After all, such a
course might provide the right frame for at least opening up a discussion with the scientific community that doesn't devolve into the tired tropes of the intelligent design community such as
irreducible complexity and the inadequacy of "microevolution".
The problem I have with the course and with others who follow the same line of thinking is neatly encapsulated by this quote from theologian Alister McGrath, which appears in the course materials:
I think it's fair to say that nothing that we observe in nature, for example, its regularity, or indeed these remarkable anthropic phenomena, prove that there is a God, but the really important
point is they are absolutely consistent with belief in God, and therefore I'd like to suggest that we don't think about nature proving that there is a God; that's how an earlier generation might
have approached this. For me the really important thing is that the world as we observe it corresponds with what Christians would say the world ought to be like, that there's a correspondence
between the theory and the observation.
He is right in saying that nothing in nature could prove there is a God, but in identifying a harmonious correspondence between Christian theory and observation, he says precisely nothing. Faith, by
definition, exists in the absence of evidence, and without any way to falsify Christian doctrine based on observable phenomena, there is no basis for saying that it is ever consistent with
observation. This is the fundamental divide between science and faith. Scientists make predictions that must be testable and falsifiable. If a scientist's predictions fail, that person will
(hopefully) suck it up and go back to the metaphorical or literal drawing board. Religions, on the other hand, behave more like highly unethical scientists - celebrating whenever the stochasticity of
the universe happens to occasionally fall in line with their vague predictions and chalking the rest up to God's mysterious ways.
This is not to say that scientists cannot have faith. There are scientists who genuinely believe their faith gives them strength, inner peace, etc. We are all capable of some irrationality (read:
belief without evidence), and that is not always a bad thing. But injecting faith into science through "theistic evolution" and the like will never be science and, if it were, Faith would no longer
be faith.
Noise Attenuation
In this section: Coherent Noise, Random Noise, Swell Noise, De-Spiking, Acquisition Noise, Seismic Interference, Footprint Removal, Receiver Motion Correction.
See also: Refine (high frequency enhancement)
Spectrum offers a full range of methods to attenuate different types of seismic noise – the noise is classified into two categories – random noise and coherent noise. Some of the methods used to
attenuate seismic noise are described below.
Coherent Noise
Coherent noise includes linear noise types, reverberations and multiples. Two types of coherent noise that require attention are guided waves and side scattered energy. Guided waves are trapped in a
water layer or low velocity near surface layer and are dispersive – each frequency component propagates with a different velocity. Side scattered energy tends to have a large moveout range depending
on the position of the scatterer acting as a point source with respect to the position of the recording cable.
Side scattered energy is usually observed on common shot gathers where they are identified by their varying moveout. They can also be observed as linear noise on stacked sections and on timeslices.
Attenuation of energy associated with side scatterers is carried out in the F-K or tau-P domain. A linear event on a shot record will map to a radial line in the FK/FKK domain and to a point in the
tau-P domain making them easy to filter/mute in either domain.
FK filters can be specified as polygons or fans and can be applied in either the X-T or FK domain and are specified in terms of F and K co-ordinates. The fans are defined by “pass” and “reject”
slopes in ms per trace. The input data can be any multi-trace group such as shot records, receiver gathers or CDP gathers. The standard FK filter is a 2D filter – FKK filtering applies a full 3D
filter – that is a function of frequency and two wave numbers; Kx and Ky.
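As a rough illustration of the masking idea (a toy sketch, not Spectrum's implementation), a minimal F-K dip filter can zero wavenumber-frequency samples whose apparent velocity falls below a cutoff, rejecting slow linear events while passing flat reflections:

```python
import numpy as np

# Toy F-K dip (fan) filter: zero samples whose apparent velocity
# |f / k| is below v_min, i.e. steeply dipping (slow) linear energy.
def fk_dip_filter(data, dt, dx, v_min):
    nt, nx = data.shape
    D = np.fft.fft2(data)
    f = np.fft.fftfreq(nt, d=dt)[:, None]   # temporal frequency, Hz
    k = np.fft.fftfreq(nx, d=dx)[None, :]   # spatial wavenumber, 1/m
    reject = np.abs(f) < v_min * np.abs(k)  # apparent velocity below cutoff
    D[reject] = 0.0
    return np.real(np.fft.ifft2(D))
```

A monochromatic plane wave with apparent velocity below v_min maps to rejected (f, k) bins and is removed, while a flat event (k = 0) passes untouched.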
Linear tau-p filtering can be used to attenuate noise by dividing the tau-p domain into pass and reject zones, similar to the FK filtering. The input data can be common shot, common receiver, common
midpoint or common reflection point gathers. Spectrum’s Radon is resistant to spatial aliasing and can therefore reduce the need for trace interpolation before noise removal. Our radon algorithm also
honours the true offset of the data, so that linear noise can be accurately attenuated even in irregularly sampled datasets.
Random Noise
Random noise can be categorised as noise in the spatial and temporal directions that is uncorrelated from trace to trace. It is usually stronger at late times rather than early times in recorded
seismic data, and in general, filtering in a time variant manner can be used to attenuate most of this random noise. Another powerful process that attenuates much random noise is conventional CMP
stacking, which significantly reduces the uncorrelated noise within the data. While time variant filtering may reduce the noise at later times, it does not necessarily attenuate the noise from trace
to trace. One of the best methods to reduce random noise is based on spatial prediction filtering such as FX deconvolution.
FX deconvolution is used to attenuate spatially random noise by enhancing the spatially predictable components of the seismic trace spectra using a Wiener deconvolution in the FX domain. A prediction
filter is calculated and applied to the data. Energy that is spatially predictable is passed and the un-predictable energy, classed as noise, is removed. The key to this method is the idea that
reflection signals on the seismic are coherent, and the signal spectrum on any trace segment can be predicted. The aim of FX deconvolution is to provide faithful transmission of all signals,
preservation of the signal character and removal of noise.
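The prediction idea can be illustrated with a deliberately simplified sketch (a single-lag complex predictor per frequency; production FX deconvolution uses longer Wiener filters and spatial windowing):

```python
import numpy as np

# Stripped-down FX prediction: for each temporal frequency, fit a
# one-coefficient least-squares predictor of each trace's spectrum
# from its neighbour, and keep only the predicted (spatially
# coherent) part. Unpredictable energy is suppressed.
def fx_predict(data):
    spec = np.fft.rfft(data, axis=0)            # shape: (frequency, trace)
    out = np.zeros_like(spec)
    for i in range(spec.shape[0]):
        c = spec[i]
        den = np.vdot(c[:-1], c[:-1]).real      # sum |c[x-1]|^2
        a = np.vdot(c[:-1], c[1:]) / den if den > 0 else 0.0
        out[i, 0] = c[0]                        # trace 0 has no predecessor
        out[i, 1:] = a * c[:-1]                 # predicted spectrum
    return np.fft.irfft(out, n=data.shape[0], axis=0)
```

A linear event has a constant trace-to-trace phase shift at every frequency, so it is predicted exactly; spatially random noise yields a near-zero predictor and is largely removed.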
Swell Noise
Swell noise is often reduced with an FX based attenuation routine – detection and repair of high amplitude windows of a trace by FX projection filtering. However Spectrum has developed a module,
NOISERM, which performs frequency dependent noise removal that can achieve attenuation of swell noise without reducing low frequency signal.
Each pre-stack ensemble is transformed into frequency-time (FT) space using an STFT [Short Time Fourier Transform] algorithm. The transform is separated into amplitude and phase components for each
frequency sub-band, and then the median spectral amplitude within each requested frequency sub-band is calculated for the ensemble.
Each sample within each frequency sub-band is compared against a median amplitude threshold – if the sample amplitude exceeds this threshold then it is replaced with a median amplitude value from its
neighbouring traces.
The inverse STFT is then applied and the balanced ensemble is passed on. If the noise is isolated to a small set of frequency sub-bands or a subset of times, then the data which is analysed for noise
removal can be limited by the user. The threshold determination and threshold violation search will then be constrained to only those sub-bands and times requested.
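The sub-band median-threshold scheme can be sketched as follows (a toy single-window version, not the NOISERM implementation, which works on STFT sub-bands of pre-stack ensembles):

```python
import numpy as np

# Toy frequency-dependent median threshold: per frequency bin,
# amplitudes exceeding k times the median amplitude across the
# ensemble's traces are replaced by that median, keeping the
# original phase of each trace.
def median_threshold(ensemble, k=3.0):
    spec = np.fft.rfft(ensemble, axis=1)        # one spectrum per trace
    amp, phase = np.abs(spec), np.angle(spec)
    med = np.median(amp, axis=0)                # median amplitude per bin
    amp = np.where(amp > k * med, med, amp)     # clamp swell-noise outliers
    return np.fft.irfft(amp * np.exp(1j * phase), n=ensemble.shape[1], axis=1)
```

A trace hit by a strong low-frequency burst is pulled back toward the ensemble's median spectral level, while unaffected traces pass through unchanged.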
De-Spiking
Spikes are impulsive noise bursts within the data and removal of them generally involves comparing amplitude values at a particular sample or trace with the neighbouring samples or traces. If the
absolute amplitude values of the sample or trace under analysis, exceed those of the comparison sample or trace by a defined ratio, then the sample can either be zeroed or the amplitude value of the
sample reset to a defined value.
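A minimal sketch of that comparison, assuming a sliding-window median and clipping rather than zeroing (the parameter names are made up for illustration):

```python
import numpy as np

# Toy de-spiker: each sample is compared with the median absolute
# amplitude of its window of neighbours (self excluded); samples
# exceeding the defined ratio are clipped back to the threshold,
# preserving their sign.
def despike(trace, half_win=5, ratio=4.0):
    out = np.asarray(trace, dtype=float).copy()
    n = len(out)
    for i in range(n):
        lo, hi = max(0, i - half_win), min(n, i + half_win + 1)
        neigh = np.delete(np.abs(out[lo:hi]), i - lo)  # neighbours only
        thresh = ratio * np.median(neigh)
        if np.abs(out[i]) > thresh:
            out[i] = np.sign(out[i]) * thresh
    return out
```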
Acquisition Noise
Coherent noise also exists in land seismic datasets in the form of dispersive Rayleigh waves known as groundroll. This type of noise is characterised by having a low velocity, large amplitudes and
low frequencies and can dominate the reflection energy in the recorded data. Attenuation of ground roll is usually carried out in the F-X domain where frequency dependent trace mixing is performed
followed by horizontal correlation filtering at each frequency for the specified surface wave velocity. The filtered data is returned to the T-X domain and frequencies above a user defined cut-off
are left unchanged.
Seismic Interference
Seismic interference is caused by other acquisition vessels or field crews shooting at the same time within the vicinity. The resulting noise observed on the seismic can be complicated especially
since they are observed over a long distance and they can exhibit dispersion. A number of methods can be used to eliminate such seismic interference noise:
F-X attenuation can be applied in the domain in which the noise is randomized, such as the common-offset domain. The noise trains are isolated and removed within the affected frequency bands. If the
location of the noise source is known then tau-P attenuation can be used: the noise can be modelled and subtracted in the tau-P domain.
Footprint Removal
Footprint filtering works on a time slice basis. The time slices are typically obtained from stack data and converted to the Kx,Ky domain by means of a 2D Fourier transform. On a Kx,Ky amplitude
spectrum of a time slice, the acquisition footprint shows itself as a repetitive pattern relating directly to the shot and receiver line spacing. Filters can be applied to the data in the Fourier
domain and the filtered data transformed back to the x,y domain using an inverse 2D FFT.
Receiver Motion Correction
Receiver motion introduces a time-variant spatial shift into the data. Source motion converts the effects of the source signature from a single-channel convolution in time to a multi-channel
convolution in time and space. For marine vibroseis acquisition both source and receiver motion is important. In 3D marine data receiver motion alone can produce significant artefact.
--- Library to support meta-programming in Curry.
--- This library contains a definition for representing FlatCurry programs
--- in Haskell (type "Prog").
--- @author Michael Hanus
--- @version September 2003
--- Version for Haskell (slightly modified):
--- December 2004, Martin Engelke (men@informatik.uni-kiel.de)
--- Added part calls for constructors, Bernd Brassel, August 2005
module Curry.FlatCurry.Type (Prog(..), QName, Visibility(..),
TVarIndex, TypeDecl(..), ConsDecl(..), TypeExpr(..),
OpDecl(..), Fixity(..),
FuncDecl(..), Rule(..),
CaseType(..), CombType(..), Expr(..), BranchExpr(..),
Pattern(..), Literal(..),
readFlatCurry, readFlatInterface, readFlat,
writeFlatCurry) where
import Curry.Files.PathUtils (writeModule,maybeReadModule)
import Data.List (intersperse)
import Data.Char (isSpace)
import Control.Monad (liftM)
-- Definition of data types for representing FlatCurry programs:
-- =============================================================
--- Data type for representing a Curry module in the intermediate form.
--- A value of this data type has the form
--- <CODE>
--- (Prog modname imports typedecls functions opdecls translation_table)
--- </CODE>
--- where modname: name of this module,
--- imports: list of modules names that are imported,
--- typedecls, opdecls, functions, translation of type names
--- and constructor/function names: see below
data Prog = Prog String [String] [TypeDecl] [FuncDecl] [OpDecl]
deriving (Read, Show, Eq)
--- The data type for representing qualified names.
--- In FlatCurry all names are qualified to avoid name clashes.
--- The first component is the module name and the second component the
--- unqualified name as it occurs in the source program.
type QName = (String,String)
--- Data type to specify the visibility of various entities.
data Visibility = Public -- public (exported) entity
| Private -- private entity
deriving (Read, Show, Eq)
--- The data type for representing type variables.
--- They are represented by (TVar i) where i is a type variable index.
type TVarIndex = Int
--- Data type for representing definitions of algebraic data types.
--- <PRE>
--- A data type definition of the form
--- data t x1...xn = ...| c t1....tkc |...
--- is represented by the FlatCurry term
--- (Type t [i1,...,in] [...(Cons c kc [t1,...,tkc])...])
--- where each ij is the index of the type variable xj
--- Note: the type variable indices are unique inside each type declaration
--- and are usually numbered from 0
--- Thus, a data type declaration consists of the name of the data type,
--- a list of type parameters and a list of constructor declarations.
--- </PRE>
data TypeDecl = Type QName Visibility [TVarIndex] [ConsDecl]
| TypeSyn QName Visibility [TVarIndex] TypeExpr
deriving (Read, Show, Eq)
--- A constructor declaration consists of the name and arity of the
--- constructor and a list of the argument types of the constructor.
data ConsDecl = Cons QName Int Visibility [TypeExpr]
deriving (Read, Show, Eq)
--- Data type for type expressions.
--- A type expression is either a type variable, a function type,
--- or a type constructor application.
--- Note: the names of the predefined type constructors are
--- "Int", "Float", "Bool", "Char", "IO", "Success",
--- "()" (unit type), "(,...,)" (tuple types), "[]" (list type)
data TypeExpr =
TVar TVarIndex -- type variable
| FuncType TypeExpr TypeExpr -- function type t1->t2
| TCons QName [TypeExpr] -- type constructor application
deriving (Read, Show, Eq) -- TCons module name typeargs
--- Data type for operator declarations.
--- An operator declaration "fix p n" in Curry corresponds to the
--- FlatCurry term (Op n fix p).
--- Note: the constructor definition of 'Op' differs from the original
--- PAKCS definition using Haskell type 'Integer' instead of 'Int'
--- for representing the precedence.
data OpDecl = Op QName Fixity Integer deriving (Read, Show, Eq)
--- Data types for the different choices for the fixity of an operator.
data Fixity = InfixOp | InfixlOp | InfixrOp deriving (Read, Show, Eq)
--- Data type for representing object variables.
--- Object variables occurring in expressions are represented by (Var i)
--- where i is a variable index.
type VarIndex = Int
--- Data type for representing function declarations.
--- <PRE>
--- A function declaration in FlatCurry is a term of the form
--- (Func name arity type (Rule [i_1,...,i_arity] e))
--- and represents the function "name" with definition
--- name :: type
--- name x_1...x_arity = e
--- where each i_j is the index of the variable x_j
--- Note: the variable indices are unique inside each function declaration
--- and are usually numbered from 0
--- External functions are represented as (Func name arity type (External s))
--- where s is the external name associated to this function.
--- Thus, a function declaration consists of the name, arity, type, and rule.
--- </PRE>
data FuncDecl = Func QName Int Visibility TypeExpr Rule
deriving (Read, Show, Eq)
--- A rule is either a list of formal parameters together with an expression
--- or an "External" tag.
data Rule = Rule [VarIndex] Expr
| External String
deriving (Read, Show, Eq)
--- Data type for classifying case expressions.
--- Case expressions can be either flexible or rigid in Curry.
data CaseType = Rigid | Flex deriving (Read, Show, Eq)
--- Data type for classifying combinations
--- (i.e., a function/constructor applied to some arguments).
--- @cons FuncCall - a call to a function all arguments are provided
--- @cons ConsCall - a call with a constructor at the top,
--- all arguments are provided
--- @cons FuncPartCall - a partial call to a function
--- (i.e., not all arguments are provided)
--- where the parameter is the number of
--- missing arguments
--- @cons ConsPartCall - a partial call to a constructor along with
--- number of missing arguments
data CombType = FuncCall
| ConsCall
| FuncPartCall Int
| ConsPartCall Int deriving (Read, Show, Eq)
--- Data type for representing expressions.
--- Remarks:
--- <PRE>
--- 1. if-then-else expressions are represented as function calls:
--- (if e1 then e2 else e3)
--- is represented as
--- (Comb FuncCall ("Prelude","if_then_else") [e1,e2,e3])
--- 2. Higher order applications are represented as calls to the (external)
--- function "apply". For instance, the rule
--- app f x = f x
--- is represented as
--- (Rule [0,1] (Comb FuncCall ("Prelude","apply") [Var 0, Var 1]))
--- 3. A conditional rule is represented as a call to an external function
--- "cond" where the first argument is the condition (a constraint).
--- For instance, the rule
--- equal2 x | x=:=2 = success
--- is represented as
--- (Rule [0]
--- (Comb FuncCall ("Prelude","cond")
--- [Comb FuncCall ("Prelude","=:=") [Var 0, Lit (Intc 2)],
--- Comb FuncCall ("Prelude","success") []]))
--- 4. Functions with evaluation annotation "choice" are represented
--- by a rule whose right-hand side is enclosed in a call to the
--- external function "Prelude.commit".
--- Furthermore, all rules of the original definition must be
--- represented by conditional expressions (i.e., (cond [c,e]))
--- after pattern matching.
--- Example:
--- m eval choice
--- m [] y = y
--- m x [] = x
--- is translated into (note that the conditional branches can be also
--- wrapped with Free declarations in general):
--- Rule [0,1]
--- (Comb FuncCall ("Prelude","commit")
--- [Or (Case Rigid (Var 0)
--- [(Pattern ("Prelude","[]") []
--- (Comb FuncCall ("Prelude","cond")
--- [Comb FuncCall ("Prelude","success") [],
--- Var 1]))] )
--- (Case Rigid (Var 1)
--- [(Pattern ("Prelude","[]") []
--- (Comb FuncCall ("Prelude","cond")
--- [Comb FuncCall ("Prelude","success") [],
--- Var 0]))] )])
--- Operational meaning of (Prelude.commit e):
--- evaluate e with local search spaces and commit to the first
--- (Comb FuncCall ("Prelude","cond") [c,ge]) in e whose constraint c
--- is satisfied
--- </PRE>
--- @cons Var - variable (represented by unique index)
--- @cons Lit - literal (Integer/Float/Char constant)
--- @cons Comb - application (f e1 ... en) of function/constructor f
--- with n<=arity(f)
--- @cons Free - introduction of free local variables
--- @cons Or - disjunction of two expressions (used to translate rules
--- with overlapping left-hand sides)
--- @cons Case - case distinction (rigid or flex)
data Expr = Var VarIndex
| Lit Literal
| Comb CombType QName [Expr]
| Free [VarIndex] Expr
| Let [(VarIndex,Expr)] Expr
| Or Expr Expr
| Case CaseType Expr [BranchExpr]
deriving (Read, Show, Eq)
--- Data type for representing branches in a case expression.
--- <PRE>
--- Branches "(m.c x1...xn) -> e" in case expressions are represented as
--- (Branch (Pattern (m,c) [i1,...,in]) e)
--- where each ij is the index of the pattern variable xj, or as
--- (Branch (LPattern (Intc i)) e)
--- for integers as branch patterns (similarly for other literals
--- like float or character constants).
--- </PRE>
data BranchExpr = Branch Pattern Expr deriving (Read, Show, Eq)
--- Data type for representing patterns in case expressions.
data Pattern = Pattern QName [VarIndex]
| LPattern Literal
deriving (Read, Show, Eq)
--- Data type for representing literals occurring in an expression
--- or case branch. It is either an integer, a float, or a character constant.
--- Note: the constructor definition of 'Intc' differs from the original
--- PAKCS definition. It uses Haskell type 'Integer' instead of 'Int'
--- to provide an unlimited range of integer numbers. Furthermore
--- float values are represented with Haskell type 'Double' instead of
--- 'Float'.
data Literal = Intc Integer
| Floatc Double
| Charc Char
deriving (Read, Show, Eq)
-- Reads a FlatCurry file (extension ".fcy") and returns the corresponding
-- FlatCurry program term (type 'Prog') as a value of type 'Maybe'.
readFlatCurry :: FilePath -> IO (Maybe Prog)
readFlatCurry fn
= do let filename = genFlatFilename ".fcy" fn
readFlat filename
-- Reads a FlatInterface file (extension ".fint") and returns the
-- corresponding term (type 'Prog') as a value of type 'Maybe'.
readFlatInterface :: String -> IO (Maybe Prog)
readFlatInterface fn
= do let filename = genFlatFilename ".fint" fn
readFlat filename
-- Reads a Flat file and returns the corresponding term (type 'Prog') as
-- a value of type 'Maybe'.
-- Due to compatibility with PAKCS, it is allowed to have a comment
-- at the beginning of the file enclosed in {- ... -}.
readFlat :: FilePath -> IO (Maybe Prog)
readFlat = liftM (fmap (read . skipComment)) . maybeReadModule
skipComment s = case dropWhile isSpace s of
'{':'-':s' -> dropComment s'
s' -> s'
dropComment ('-':'}':xs) = xs
dropComment (_:xs) = dropComment xs
dropComment [] = []
-- Writes a FlatCurry program term into a file.
writeFlatCurry :: String -> Prog -> IO ()
writeFlatCurry filename prog
= writeModule filename (showFlatCurry prog)
-- Shows a FlatCurry program in a more readable way.
showFlatCurry :: Prog -> String
showFlatCurry (Prog mname imps types funcs ops) =
"Prog "++show mname++"\n "++
show imps ++"\n ["++
concat (intersperse ",\n " (map (\t->show t) types)) ++"]\n ["++
concat (intersperse ",\n " (map (\f->show f) funcs)) ++"]\n "++
show ops ++"\n"
-- Add the extension 'ext' to the filename 'fn' if it doesn't
-- already exist.
genFlatFilename :: String -> FilePath -> FilePath
genFlatFilename ext fn
| drop (length fn - length ext) fn == ext
= fn
| otherwise
= fn ++ ext
|
{"url":"http://hackage.haskell.org/package/curry-base-0.2.6/docs/src/Curry-FlatCurry-Type.html","timestamp":"2014-04-23T19:35:26Z","content_type":null,"content_length":"50670","record_id":"<urn:uuid:9693b2ce-6511-4908-bfd9-5258867ef718>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
|
The third degree polynomial divided by the fifth degree polynomial can be written as an infinite
series as:
Then just integrate it term by term to get the indefinite integral in an infinite series form.
Starting with the 1/x^2 term the coefficients in the series repeat in blocks of 10 in the pattern
1,-1,1,0,0,-1,1,-1,0,0. That is what the 1/(x^10n) handles. So the next 6 terms in the series
would have exponents on the bottom of 12,13,14,17,18,19 and the next 6 would have
22,23,24,27,28,29 for the exponents on the bottom.
The signs in each block of six terms would continue to be 1, -1, 1, -1, 1, -1.
Has anyone got a closed form for the integral? I'd love to see it. If x^5+1 is divided by x-1 I get
x(x+1)(x-1)(x^2+1) + (x+1)/(x-1) but I haven't been able to use this to get a closed form.
Good luck with it!
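The series equation itself did not survive extraction. One candidate function consistent with everything stated — third degree over fifth degree, leading term 1/x^2, and the repeating block 1,-1,1,0,0,-1,1,-1,0,0 — is (x^3 - x^2 + x)/(x^5 + 1). That numerator is an assumption, not confirmed by the post; a quick sympy check of the pattern (substituting x = 1/t so that the coefficient of t^n is the 1/x^n coefficient):

```python
import sympy as sp

t = sp.symbols('t')
# Assumed function (x^3 - x^2 + x)/(x^5 + 1); with x = 1/t it becomes
# (t^2 - t^3 + t^4)/(1 + t^5), a Taylor series whose t^n coefficient
# is the coefficient of 1/x^n in the asymptotic expansion.
expansion = sp.series((t**2 - t**3 + t**4) / (1 + t**5), t, 0, 22).removeO()
coeffs = [expansion.coeff(t, n) for n in range(22)]
print(coeffs[2:22])  # two full blocks of the repeating pattern
```

Term-by-term integration of the printed series then gives the antiderivative described in the post.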
|
{"url":"http://www.mathisfunforum.com/post.php?tid=18496&qid=242110","timestamp":"2014-04-18T23:20:03Z","content_type":null,"content_length":"19666","record_id":"<urn:uuid:b10dcb8a-0630-45f9-ad4c-d5d981badd87>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Topological Metric Space with 3 elements
October 17th 2011, 09:46 AM #1
Super Member
Aug 2010
Topological Metric Space with 3 elements
Can you create a topological metric space from three points (the set X) in the plane? Obviously, d is defined.
A ball at each point is the point itself (satisfies def of neighborhood of a point).
Union of all the balls is X.
Intersection of any two balls is empty (contains no member of X) so there can't be an x in a ball which is a subset of the intersection.
Is it valid to conclude you can't create a topological metric space from a finite set consisting of three or more members? Or does a topological metric space have to have an infinite number of members?
EDIT Origin of the question: A metric is defined on a non-empty set, which could have a finite number of members. In a std text developments go on from there without ever considering the
possibility that the set is finite (three or more members to satisfy d), leaving me scratching my head.
Last edited by Hartlw; October 17th 2011 at 09:58 AM.
Re: Topological Metric Space with 3 elements
Re: Topological Metric Space with 3 elements
Perhaps another way of asking the question (generally, I know very little about it) is,
does topology deal only with continua,
or, does topology define a continuum?
EDIT: Probably should read "metric topology" instead of "topology."
Last edited by Hartlw; October 17th 2011 at 10:55 AM.
Re: Topological Metric Space with 3 elements
Thanks. That answers part of my implied question about utility of defining a metric space on a non-empty set.
But can you create a topological metric space from a finite (3 or more) number of elements? My question arose as I tried to work my way through the definiton of a topological metric space
starting with a metric space consisting of three points in the plane. My first post shows where I got stuck.
Re: Topological Metric Space with 3 elements
I don't understand what you mean by a topological metric space. Is it just a space with a topology?
Re: Topological Metric Space with 3 elements
if you use the topology induced by the discrete metric, and our set is {a,b,c}, here are the neighborhoods:
neighborhoods of a: {{a}, {a,b}, {a,c},{a,b,c}}
neighborhoods of b: {{b}, {a,b}, {b,c}, {a,b,c}}
neighborhoods of c: {{c}, {a,c}, {b,c}, {a,b,c}}
this is, in fact, the discrete topology on {a,b,c}, the power set of {a,b,c}. note that there are only 2 possible ε-balls for any point x:
N(x) = {y in {a,b,c} : d(x,y) < r} = {x}, when r ≤ 1, and N(x) = {y in {a,b,c} : d(x,y) < r} = {a,b,c}, when r > 1.
note that any two neighborhoods of x (whether x be a,b, or c) contain a neighborhood of x in their intersection....namely {x}.
(the notion that "two neighborhoods must contain a neighborhood in their intersection" is patently false: consider the usual topology on R
where a < b < c < d, the open interval (a,b) has null intersection with (c,d), and this is the "prototypical" metric space we are modelling
the entire concept on, with distance function d(x,y) = |y - x|. what IS true, is that the intersection of two open intervals containing x,
contains a third open interval containing x).
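The two-ball observation above can be checked mechanically; a minimal Python sketch (illustrative only) of the open ε-balls under the discrete metric on {a,b,c}:

```python
from itertools import product

X = {'a', 'b', 'c'}

def d(x, y):
    # discrete metric: 0 if the points coincide, 1 otherwise
    return 0 if x == y else 1

def ball(x, r):
    # open epsilon-ball of radius r around x
    return frozenset(y for y in X if d(x, y) < r)

# Only two distinct balls occur around any point: {x} when r <= 1, all of X when r > 1
distinct = {ball(x, r) for x, r in product(X, [0.5, 1.0, 2.0])}
print(sorted(sorted(b) for b in distinct))
```

The four distinct balls printed ({a}, {b}, {c}, and X itself) generate exactly the discrete topology described above.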
Re: Topological Metric Space with 3 elements
I feel as though it's important to point out that the only finite $T_1$ spaces (points are closed) are discrete. Thus, the only finite metric spaces are discrete.
|
{"url":"http://mathhelpforum.com/differential-geometry/190598-topological-metric-space-3-elements.html","timestamp":"2014-04-21T04:54:30Z","content_type":null,"content_length":"52824","record_id":"<urn:uuid:43557e8a-611d-4aa1-9c59-4336138492d1>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00203-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Lin Corporation has a single product whose selling price is $143 and whose variable expense is $58 per unit
Lin Corporation has a single product whose selling price is $143 and whose variable expense is $58 per unit. The company's monthly fixed expense is $41,400. Requirements: 1. Using the equation method, solve for the unit sales that are required to earn a target profit of $8,240. 2. Using the formula method, solve for the unit sales that are required to earn a target profit of $10,280.
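A sketch of the standard CVP target-profit computation for these numbers (unit sales = (fixed expenses + target profit) / contribution margin per unit) — illustrative, not part of the original problem statement:

```python
# Contribution margin per unit = selling price - variable expense per unit
price, variable, fixed = 143, 58, 41_400
cm_per_unit = price - variable  # 143 - 58 = 85

def units_for_target_profit(target_profit):
    # unit sales = (fixed expenses + target profit) / contribution margin per unit
    return (fixed + target_profit) / cm_per_unit

print(units_for_target_profit(8_240))   # requirement 1
print(units_for_target_profit(10_280))  # requirement 2
```

Both requirements come out to whole units (584 and 608), as the problem intends.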
|
{"url":"http://onlinesolutionproviders.com/tutorial/12052/lin-corporation-has-a-single-product-whose-selling-price-is-143-and-whose-variable-expense-is-58-p","timestamp":"2014-04-19T22:05:40Z","content_type":null,"content_length":"23022","record_id":"<urn:uuid:b47405b2-0296-41c5-b6ac-ba781a7bfd24>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
|
1. Introduction
All results and input files are available at
In the following (in no particular order), the updates for PDG 2007
with respect to PDG 2006 (web update) are discussed.
2. Changes to common parameters
We use the updated input parameters for mixing and lifetimes from the
HFAG PDG 2007 update of the Oscillations Sub-Group (common.param).
3. Additions
- Averages for Inclusive Semileptonic Branching Fraction for B+ -> Xl+nu
- Averages for Inclusive Semileptonic Branching Fraction for B0 -> X-l+nu
- Ratio of Semileptonic Branching Fractions for B+ to B0 -> X-l+nu
- Branching fractions of B -> pi0 l nu, B-> pi- l nu, B-> pi l nu
4. Changes
- New form factors ratio from BaBar for Exclusive |Vcb| F(1) and Exclusive |Vcb|G(1)
- New BaBar and Belle Breco results for:
o Inclusive Semileptonic (Partial) Branching Fraction for B+/B0 Admixture: B -> Xlnu
o Inclusive Semileptonic Branching Fraction for B+ -> Xl+nu
o Inclusive Semileptonic Branching Fraction for B0 -> X-l+nu
5. Feedback
Your critical comments will be much appreciated. Please send them to
lodovico AT slac.stanford.edu
|
{"url":"http://www.slac.stanford.edu/xorg/hfag/semi/pdg07/update.html","timestamp":"2014-04-20T13:48:10Z","content_type":null,"content_length":"1643","record_id":"<urn:uuid:7bd91fc8-60cd-4d71-afe4-b3fb07d965e5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Constructing a Pentagon
University of Toronto Mathematics Network
Question Corner and Discussion Area
Constructing a Pentagon
Asked by Ting Ting Wu, student, State College Area High School on January 27, 1998:
I want to know how to construct a pentagon. I have done it before, but I have forgotten how. I remember it is similar to constructing a hexagon, but a bit more difficult. Thanks.
There are several ways to do it. Unfortunately we are very short-staffed right now and cannot spare the resources to hunt down the easiest and most elegant construction. However, the following method
will work:
Constructing a pentagon is equivalent to dividing a circle (a full 360 degrees) up into five equal parts (angle 72 degrees each). The cosine of 72 degrees is (sqrt(5)-1)/4 (this can be found by
starting with the equation cos(5t) = cos(360 degrees) = 1, using trigonometric identities to write cos(5t) as a polynomial in cos(t), factoring and solving the resulting polynomial equation for cos(t)).
Therefore, this angle of 72 degrees can be constructed by building a right-angled triangle whose hypotenuse is 4 and whose adjacent side is of length sqrt(5)-1. This latter length can be constructed
by taking hypotenuse of a right triangle whose other sides have lengths 1 and 2, and subtracting length 1 from it.
The following procedure uses this idea to construct a pentagon:
Start with a circle C, with centre point O. Let P be a point on C. Draw the perpendicular bisector L to segment OP (bisecting it at point Q). Construct the midpoint R of OQ. (RQ is going to be our
unit length).
With centre Q and radius RQ, draw an arc intersecting L at point S. Draw segment OS. (This is the hypotenuse of a right triangle OQS whose other sides have length 1 and 2, so OS has length sqrt(5)).
With centre S and radius RQ (= QS), draw an arc intersecting OS at point T. (Now OT has length sqrt(5)-1).
Construct the line passing through point T at right angles to OT. Let it intersect the circle C at point U.
Now the triangle OTU has hypotenuse of length OU = radius of C = 4, and side length OT = sqrt(5)-1. Therefore, angle UOT is 72 degrees. Extend segment OT past S until it meets the circle C at point
V; you have now constructed two vertices (U and V) of the pentagon.
To construct the remaining vertices: with centre V and radius UV draw an arc intersecting C at point W. With centre W and the same radius, draw an arc intersecting C at point X. Finally, with centre
X and the same radius, draw an arc intersecting C at point Y. UVWXY will be a pentagon.
There are probably much more efficient ways to do it, but the above procedure will certainly work, for the reasons described. The procedure is illustrated below:
[ASCII diagram: circle C with centre O, the bisector L through Q, and the construction points R, S, T, U, V — the original plain-text figure did not survive extraction.]
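A quick numeric check of the construction's key fact — a sketch added here, not part of the original page:

```python
import math

# The construction rests on: cos(72 degrees) = (sqrt(5) - 1) / 4
cos72 = (math.sqrt(5) - 1) / 4

# Triangle OTU from the procedure: hypotenuse OU = 4 (the radius, with RQ = 1)
# and adjacent side OT = sqrt(5) - 1, so angle UOT = arccos(OT / OU).
angle_deg = math.degrees(math.acos((math.sqrt(5) - 1) / 4))
print(round(cos72, 6), round(angle_deg, 6))
```

The recovered angle is exactly 72 degrees, one fifth of the full circle, which is why the remaining vertices can be stepped off with equal arcs.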
This part of the site maintained by (No Current Maintainers)
Last updated: April 19, 1999
Original Web Site Creator / Mathematical Content Developer: Philip Spencer
Current Network Coordinator and Contact Person: Joel Chan - mathnet@math.toronto.edu
|
{"url":"http://www.math.toronto.edu/mathnet/plain/questionCorner/pentagon.html","timestamp":"2014-04-19T20:32:43Z","content_type":null,"content_length":"5259","record_id":"<urn:uuid:8f93e461-7d87-4e4d-bcef-f01e13efd259>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Proof of equality of finite sum, and trigonometric rational function
September 30th 2010, 01:11 PM #1
Sep 2010
Proof of equality of finite sum, and trigonometric rational function
First of all I'm not sure if this is the right place to post this question. It's for a partial differential equations course, but it doesn't seem to have anything to do with differential
equations directly.
I need to prove that 1 + 2 * Sum(n=1,N)(cos(nx)) = sin((N + (1/2))x)/sin((1/2)x).
I have no idea where to start, other than that we are studying Fourier series. Can someone please help me figure out how to get started?
$S = e^{ia} + e^{i2a} + \cdots + e^{ina} = \frac{e^{i(n+1)a} - e^{ia}}{e^{ia} - 1}$
Your sum is Re(S).
Last edited by zzzoak; September 30th 2010 at 02:05 PM.
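A numerical spot-check of the identity to be proved (the Dirichlet-kernel formula) — a sketch, not part of the thread:

```python
import math

def lhs(N, x):
    # 1 + 2 * sum_{n=1}^{N} cos(n x)
    return 1 + 2 * sum(math.cos(n * x) for n in range(1, N + 1))

def rhs(N, x):
    # sin((N + 1/2) x) / sin(x / 2)
    return math.sin((N + 0.5) * x) / math.sin(0.5 * x)

for N, x in [(1, 0.3), (6, 0.7), (10, 2.0)]:
    print(N, x, lhs(N, x), rhs(N, x))
```

The two sides agree to machine precision at every sample point, which is a useful sanity check before attempting the proof via the geometric sum of exponentials.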
|
{"url":"http://mathhelpforum.com/differential-equations/158000-proof-equality-finite-sum-trigonometric-rational-function.html","timestamp":"2014-04-17T04:14:45Z","content_type":null,"content_length":"32659","record_id":"<urn:uuid:c5196d83-6ab0-4903-9634-2b4c00011d95>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00397-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Emmaus Statistics Tutor
I have known since second grade that I wanted to be a math teacher. I was the kid on the playground using recess to teach her friends to subtract. After graduating from Lehigh University with a BA
in Mathematics and M.Ed. in Secondary Education, I spent six years teaching high school math.
12 Subjects: including statistics, calculus, geometry, algebra 1
My knowledge of economics and mathematics stems from my master's degree in economics from Lehigh University. I specialize in micro- and macroeconomics, from an introductory level up to an advanced
level. I have master's degree work in labor economics, financial analysis and game theory.
19 Subjects: including statistics, calculus, geometry, GRE
...However, most people who are familiar with Excel aren't fully aware of all that MS Excel can do. Maybe that's you. You'd be surprised about the things that you can do using Excel that you're
currently doing by hand or using other inappropriate software (such as the familiar sibling to Excel, MS Word). Take a comprehensive look at the features and utilities that this MS Word offers. 1.
27 Subjects: including statistics, calculus, geometry, algebra 1
I am an experienced math professor eager to help you understand math. I believe math is a perfectly understandable science. I'm a retired electrical engineer with a master's degree, and I enjoy
teaching and love math.
11 Subjects: including statistics, physics, probability, ACT Math
...I look forward to meeting you and helping you to achieve your educational goals.Learn Algebra 1 right and it will go a long way toward making subsequent math courses easy. Fail to learn it
right and you will be in big trouble the rest of the way. I teach it right.
23 Subjects: including statistics, English, calculus, algebra 1
|
{"url":"http://www.purplemath.com/emmaus_statistics_tutors.php","timestamp":"2014-04-20T11:25:20Z","content_type":null,"content_length":"23882","record_id":"<urn:uuid:cb24bedc-705b-4939-8972-b178ac81b51f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00022-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Review of Gravitation
The planets orbit the sun in:
The eccentricity of an ellipse with semimajor axis length 10 meters and semiminor axis length 5 meters is:
A planet sweeps out an area equal to half the total area of its orbit in 180 days. Its orbital period is:
If the circular orbit of a planet is reduced in radius to 1/8th of its original value, its new period is what fraction of its old period, T_old?
Which of the following orbits are possible around the earth:
Various orbits around the earth.
An eccentricity of zero means the orbit is:
An eccentricity of 1.5 means the orbit is:
If a satellite is orbiting the earth at radius 6×10^6 meters above the center of the earth. What is its period? (The mass of the earth is 5.98×10^24 kilograms).
Kepler's Second Law is an expression of:
The direction of the force due to gravity between two point objects located at positions r_1 and r_2 is given by:
The gravitational force that a spherical shell of mass 102 kilograms exerts on a 5 kilogram mass at its center is:
If you can throw a ball 10 meters into the air on earth, approximately how high would you be able to throw it on the moon (mass of the moon is 7.35×10^22 kilograms and its radius is 1700 kilometers).
A planet of mass 6×10^25 kilograms, orbiting the sun (mass = 1.99×10^30 kilograms) has a velocity of 50 kilometers/sec. What is its rate of change of angular momentum at this point:
The correct ordering of the forces in terms of their relative strength, beginning with the strongest is: I) Gravity II) Strong Nuclear Force III) Electromagnetic Force IV) Weak Nuclear Force.
A particle of mass 2 kilograms orbits the sun (mass = 1.99×10^30 ) parabolically. Its velocity very far away from the sun is:
Two stars of equal mass exert a gravitational force on one another. This will cause them to:
If it is desirable to give a rocket the maximum tangential velocity on its launch, then the best launch site would be:
The total energy of a circular orbit is:
Which of these can not be deduced from Kepler's Laws: I) orbits may be circular, II) The force between two planets goes as 1/r ^2 , III) Gravity is a conservative force.
Which of the following has the smallest escape velocity:
The gravitational potential energy of a mass 100 kilometers from the earth, compared to the same mass at the surface is:
The gravitational potential at a point 100 kilometers above the earth's surface is:
Put these orbits in order of increasing energy: I) parabola II)cicle III) hyperbola IV) ellipse
The only orbit to have a positive total energy is:
A satellite in low-earth orbit (r ≈ r_e) has a period of:
The tides are caused by which of the following: I) The moon II) The sun III) The rotation of the earth.
The escape velocity from the earth is approximately (mass of the earth is 5.98×10^24 kilograms and its radius is 6.38×10^6 meters):
I) Newton, II) Einstein, III) Copernicus, IV) Kepler. The correct order in which these scientists lived is:
Which of the following is true of the gravitational force between the earth and the moon: I) equal and opposite II) zero at some point between the two.
The minimum radius of orbit for a planet with ε = 0.75 is:
What is the semimajor axis length of an elliptical earth orbit with apogee 1.4×10^7 meters and perigee 1.2×10^7 meters?
The gravitational force between two objects is: I) Conservative II) Independent of intervening objects III) Central.
Gravitational field between two masses
In the arrangement shown in the figure, a mass exactly half way between the two masses shown would have a gravitational potential energy of:
A planet in an elliptical orbit travels fastest at its:
An experiment to determine G was successfully performed by:
Which of these is an expression of angular momentum: I) II) mvr sinθ III) rv sinθ .
Escape velocity depends on:
The total energy of Io, orbiting Jupiter at a distance of 4.22×10^6 meters is (mass of Jupiter is 1.9 ×10^27 kilograms, mass of Io is 8.9×10^22 kilograms):
Which of the following is an expression of the principle of equivalence?
To which of the following could Kepler's Laws be applied? I) a comet, II) a binary star system, III) a projectile on the earth.
The gravitational potential energy can be defined as:
For a circular orbit, which of the following is true for the kinetic energy, T?
An 80 kilogram astronaut is in a spaceship accelerating upwards 4.9 m/s ^2 far away from any star or planet. His apparent weight as measured by some scales in the spaceship would be:
Assuming the earth is in a circular orbit about the sun, what is the speed of the earth (the mass of the sun is 1.99×10^30 kilograms and the earth- sun distance is 150×10^9 meters).
If the ratio of a planet's perihelion distance to aphelion distance is 0.75, then the ratio of its greatest to least velocity on its orbit is:
If the orbits of Venus and the Earth are both circular, with the period of the earth orbit 1 year and the period of Venus 0.615 years, what is the ratio of the earth's orbit to Venus's orbit?
The correct ordering, in terms of increasing numerical value of the kinetic, potential and total energies for a circular orbit is:
A binary star system has two stars of masses m [1] and m [2] , orbiting about their common center of mass with radii r [1] and r [2] respectively. What is the square of the period of their orbit?
|
{"url":"http://www.sparknotes.com/physics/gravitation/review/quiz.html","timestamp":"2014-04-19T07:11:38Z","content_type":null,"content_length":"114281","record_id":"<urn:uuid:293fad4e-5e91-44d9-bffd-614715d3fc24>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00034-ip-10-147-4-33.ec2.internal.warc.gz"}
|
etd AT Indian Institute of Science: Hydrologic Impacts Of Climate Change : Uncertainty Modeling
Please use this identifier to cite or link to this item: http://hdl.handle.net/2005/546
Title: Hydrologic Impacts Of Climate Change : Uncertainty Modeling
Authors: Ghosh, Subimal
Advisors: Mujumdar, P P
Keywords: Climate Change; General Circulation Models (GCMs); Climate - Circulation Model; Climate Change - Statistical Methods; Climate Impact Assessment; Climate Model; Atmospheric Circulation Model; Modeling GCM; Fuzzy Clustering; Streamflow Prediction; Kernel Functions; Vector Machine
Submitted: Jul-2007
Series/Report no.: G21522
Abstract: General Circulation Models (GCMs) are tools designed to simulate time series of climate variables globally, accounting for effects of greenhouse gases in the atmosphere. They attempt to
represent the physical processes in the atmosphere, ocean, cryosphere and land surface. They are currently the most credible tools available for simulating the response of the global
climate system to increasing greenhouse gas concentrations, and to provide estimates of climate variables (e.g. air temperature, precipitation, wind speed, pressure etc.) on a global
scale. GCMs demonstrate a significant skill at the continental and hemispheric spatial scales and incorporate a large proportion of the complexity of the global system; they are, however,
inherently unable to represent local subgrid-scale features and dynamics. The spatial scale on which a GCM can operate (e.g., 3.75° longitude x 3.75° latitude for Coupled Global Climate
Model, CGCM2) is very coarse compared to that of a hydrologic process (e.g., precipitation in a region, streamflow in a river etc.) of interest in the climate change impact assessment
studies. Moreover, accuracy of GCMs, in general, decreases from climate related variables, such as wind, temperature, humidity and air pressure to hydrologic variables such as
precipitation, evapotranspiration, runoff and soil moisture, which are also simulated by GCMs. These limitations of the GCMs restrict the direct use of their output in hydrology. This
thesis deals with developing statistical downscaling models to assess climate change impacts and methodologies to address GCM and scenario uncertainties in assessing climate change
impacts on hydrology. Downscaling, in the context of hydrology, is a method to project the hydrologic variables (e.g., rainfall and streamflow) at a smaller scale based on large scale
climatological variables (e.g., mean sea level pressure) simulated by a GCM. A statistical downscaling model is first developed in the thesis to predict the rainfall over Orissa
meteorological subdivision from GCM output of large scale Mean Sea Level Pressure (MSLP). Gridded monthly MSLP data for the period 1948 to 2002, are obtained from the National Center for
Environmental Prediction/ National Center for Atmospheric Research (NCEP/NCAR) reanalysis project for a region spanning 150 N -250 N in latitude and 800 E -900 E in longitude that
encapsulates the study region. The downscaling model comprises of Principal Component Analysis (PCA), Fuzzy Clustering and Linear Regression. PCA is carried out to reduce the
dimensionality of the larger scale MSLP and also to convert the correlated variables to uncorrelated variables. Fuzzy clustering is performed to derive the membership of the principal
components in each of the clusters and the memberships obtained are used in regression to statistically relate MSLP and rainfall. The statistical relationship thus obtained is used to
predict the rainfall from GCM output. The rainfall predicted with the GCM developed by CCSR/NIES with B2 scenario presents a decreasing trend for non-monsoon period, for the case study.
Climate change impact assessment models developed based on downscaled GCM output are subjected to a range of uncertainties due to both ‘incomplete knowledge’ and ‘unknowable future
scenario’ (New and Hulme, 2000). ‘Incomplete knowledge’ mainly arises from inadequate information and understanding about the underlying geophysical process of global change, leading to
limitations in the accuracy of GCMs. This is also termed as GCM uncertainty. Uncertainty due to ‘unknowable future scenario’ is associated with the unpredictability in the forecast of
socio-economic and human behavior resulting in future Green House Gas (GHG) emission scenarios, and can also be termed as scenario uncertainty. Downscaled outputs of a single GCM with a
single climate change scenario represent a single trajectory among a number of realizations derived using various GCMs and scenarios. Such a single trajectory alone can not represent a
future hydrologic scenario, and will not be useful in assessing hydrologic impacts due to climate change. Nonparametric methods are developed in the thesis to model GCM and scenario
uncertainty for prediction of drought scenario with Orissa meteorological subdivision as a case study. Using the downscaling technique described in the previous paragraph, future
rainfall scenarios are obtained for all available GCMs and scenarios. After correcting for bias, equiprobability transformation is used to convert the precipitation into Standardized
Precipitation Index-12 (SPI-12), an annual drought indicator, based on which a drought may be classified as a severe drought, mild drought etc. Disagreements are observed between different
predictions of SPI-12, resulting from different GCMs and scenarios. Assuming SPI-12 to be a random variable at every time step, nonparametric methods based on kernel density estimation
and orthonormal series are used to determine the nonparametric probability density function (pdf) of SPI-12. Probabilities for different categories of drought are computed from the
estimated pdf. It is observed that there is an increasing trend in the probability of extreme drought and a decreasing trend in the probability of near normal conditions, in the Orissa
meteorological subdivision. The single valued Cumulative Distribution Functions (CDFs) obtained from nonparametric methods suffer from limitations due to the following: (a) simulations
for all scenarios are not available for all the GCMs, thus leading to a possibility that incorporation of these missing climate experiments may result in a different CDF, (b) the method
may simply overfit to a multimodal distribution from a relatively small sample of GCMs with a limited number of scenarios, and (c) the set of all scenarios may not fully compose the
universal sample space, and thus, the precise single valued probability distribution may not be representative enough for applications. To overcome these limitations, an interval
regression is performed to fit an imprecise normal distribution to the SPI-12 to provide a band of CDFs instead of a single valued CDF. Such a band of CDFs represents the incomplete
nature of knowledge, thus reflecting the extent of what is ignored in the climate change impact assessment. From imprecise CDFs, the imprecise probabilities of different categories of
drought are computed. These results also show an increasing trend of the bounds of the probability of extreme drought and decreasing trend of the bounds of the probability of near normal
conditions, in the Orissa meteorological subdivision. Water resources planning requires the information about future streamflow scenarios in a river basin to combat hydrologic extremes
resulting from climate change. It is therefore necessary to downscale GCM projections for streamflow prediction at river basin scales. A statistical downscaling model based on PCA, fuzzy
clustering and Relevance Vector Machine (RVM) is developed to predict the monsoon streamflow of Mahanadi river at Hirakud reservoir, from GCM projections of large scale climatological
data. Surface air temperature at 2m, Mean Sea Level Pressure (MSLP), geopotential height at a pressure level of 500 hecto Pascal (hPa) and surface specific humidity are considered as the
predictors for modeling Mahanadi streamflow in monsoon season. PCA is used to reduce the dimensionality of the predictor dataset and also to convert the correlated variables to
uncorrelated variables. Fuzzy clustering is carried out to derive the membership of the principal components in each of the clusters and the memberships thus obtained are used in RVM
regression model. RVM involves fewer number of relevant vectors and the chance of overfitting is less than that of Support Vector Machine (SVM). Different kernel functions are used for
comparison purpose and it is concluded that heavy tailed Radial Basis Function (RBF) performs best for streamflow prediction with GCM output for the case considered. The GCM CCSR/NIES
with B2 scenario projects a decreasing trend in future monsoon streamflow of Mahanadi which is likely to be due to high surface warming. A possibilistic approach is developed next, for
modeling GCM and scenario uncertainty in projection of monsoon streamflow of Mahanadi river. Three GCMs, Center for Climate System Research/ National Institute for Environmental Studies
(CCSR/NIES), Hadley Climate Model 3 (HadCM3) and Coupled Global Climate Model 2 (CGCM2) with two scenarios A2 and B2 are used for the purpose. Possibilities are assigned to GCMs and
scenarios based on their system performance measure in predicting the streamflow during years 1991-2005, when signals of climate forcing are visible. The possibilities are used as weights
for deriving the possibilistic mean CDF for the three standard time slices, 2020s, 2050s and 2080s. It is observed that the value of streamflow at which the possibilistic mean CDF reaches
the value of 1 reduces with time, which shows reduction in probability of occurrence of extreme high flow events in future and therefore there is likely to be a decreasing trend in the
monthly peak flow. One possible reason for such a decreasing trend may be the significant increase in temperature due to climate warming. Simultaneous occurrence of reduction in Mahandai
streamflow and increase in extreme drought in Orissa meteorological subdivision is likely to pose a challenge for water resources engineers in meeting water demands in future.
URI: http://hdl.handle.net/2005/546
Appears in Civil Engineering (civil)
Items in etd@IISc are protected by copyright, with all rights reserved, unless otherwise indicated.
Toward Optimal Broadcast in a Star Graph Using Multiple Spanning Trees
May 1997 (vol. 46 no. 5)
pp. 593-599
ASCII Text
Yu-Chee Tseng, Jang-Ping Sheu, "Toward Optimal Broadcast in a Star Graph Using Multiple Spanning Trees," IEEE Transactions on Computers, vol. 46, no. 5, pp. 593-599, May, 1997.
BibTeX
@article{ 10.1109/12.589231,
author = {Yu-Chee Tseng and Jang-Ping Sheu},
title = {Toward Optimal Broadcast in a Star Graph Using Multiple Spanning Trees},
journal ={IEEE Transactions on Computers},
volume = {46},
number = {5},
issn = {0018-9340},
year = {1997},
pages = {593-599},
doi = {http://doi.ieeecomputersociety.org/10.1109/12.589231},
publisher = {IEEE Computer Society},
address = {Los Alamitos, CA, USA},
}
RIS (RefWorks/ProCite/RefMan/EndNote)
TY - JOUR
JO - IEEE Transactions on Computers
TI - Toward Optimal Broadcast in a Star Graph Using Multiple Spanning Trees
IS - 5
SN - 0018-9340
SP - 593
EP - 599
A1 - Yu-Chee Tseng,
A1 - Jang-Ping Sheu,
PY - 1997
KW - All-to-all broadcast
KW - collective communication
KW - multi-computer networks
KW - one-to-all broadcast
KW - parallel architecture
KW - routing
KW - star graph.
VL - 46
JA - IEEE Transactions on Computers
ER -
Abstract—In a multicomputer network, sending a packet typically incurs two costs: start-up time and transmission time. This work is motivated by the observation that most broadcast algorithms in the
literature for the star graph networks only try to minimize one of the costs. Thus, many algorithms, though claimed to be optimal, are only so when one of the costs is negligible. In this paper, we
try to optimize both costs simultaneously for four types of broadcast problems: one-to-all or all-to-all broadcasting in an n-star network with either one-port or all-port communication capability.
As opposed to earlier solutions, the main technique used in this paper is to construct from a source node multiple spanning trees, along each of which one partition of the broadcast message is transmitted.
Index Terms:
All-to-all broadcast, collective communication, multi-computer networks, one-to-all broadcast, parallel architecture, routing, star graph.
A Million Random Digits with 100,000 Normal Deviates
Since the RAND Corporation seems to take the position that their canonical table of random digits is their property,^1 here is a drop-in replacement. The formatting and statistical properties of
these digits are identical, but the numbers themselves are independently generated.
• One Million Random Digits (digits.txt)
• 100,000 Normal Deviates (deviates.txt) — original version here, independently derived
□ 100,000 more normal deviates (deviates2.txt) — second version, derived from the above random digits using the same method as RAND (see note below regarding bias)
Unlike the RAND Corporation, I concede that there is no creative content in these tables and therefore they are uncopyrightable. I further dedicate any copyright interest I may have in these tables
to the public domain (although I do not believe that there is any copyright interest for me to relinquish). You may redistribute, modify, or use these tables for any purpose whatsoever.
The random digits were generated by taking bytes off of /dev/urandom on an iMac G4 running Mac OS 10.4.8, discarding those greater than 249, and keeping the last digit. Finally, the resultant million
digits were added modulo 10 to the canonical RAND digits, ensuring that they are no less random.
The original normal deviates (deviates.txt) were computed with a different technique from RAND and are not related to the digits. The entropy source for these was /dev/urandom on the same machine,
XOR'ed against random bytes from random.org. Slightly more than 8 bytes of entropy are used per deviate in a simple rejection technique. Random integers are converted to double-precision floats in
the range [0,1) by dividing the thirty-two bit integers by 2^32. Darts are thrown at the normal probability density function from −5 to 5 by selecting a pair (x, y) = (10·random_double() − 5, random_double()). If the dart lands beneath the curve, i.e. y < (1/√(2π))·e^(−x²/2), the x value is kept as the deviate; otherwise, it is discarded. There is no detectable bias from eliminating the tails beyond 5 standard deviations because no deviates this extreme can be expected in a sample size of 100,000.
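The rejection step can be sketched in a few lines of Python (my own illustrative reimplementation, not the original script; the function name and seed are mine):

```python
import math
import random

def normal_deviate(rng):
    """One standard-normal deviate via the dart-throwing rejection scheme:
    throw (x, y) uniformly over [-5, 5) x [0, 1) and keep x whenever the
    dart lands beneath the normal density curve."""
    while True:
        x = 10.0 * rng.random() - 5.0          # x uniform on [-5, 5)
        y = rng.random()                        # y uniform on [0, 1)
        if y < math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi):
            return x

rng = random.Random(20061002)                   # fixed seed for repeatability
sample = [normal_deviate(rng) for _ in range(20000)]
mean = sum(sample) / len(sample)
std = math.sqrt(sum((v - mean) ** 2 for v in sample) / (len(sample) - 1))
```

With a large sample the mean and standard deviation come out near 0 and 1, consistent with the Shapiro–Wilk checks reported below.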
The second set of normal deviates (deviates2.txt) was derived from half of the random digits using the same method as the RAND deviates. Specifically, a five-digit block D is used to compute one deviate via the formula √2 erf⁻¹(2(D + 0.5)·10⁻⁵ − 1), where erf⁻¹ is the inverse error function. The left-hand five columns of the digits are used to compute the deviates in the following manner: the first 10,000 rows of the first column of digits are used to compute the first column of deviates, the second half of the first column of digits is used to compute the second column of deviates, and so on (column 2 of digits yields columns 3 and 4 of deviates, etc.).
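The RAND mapping can be reproduced with the Python standard library alone, since √2 erf⁻¹(2p − 1) is exactly the standard normal inverse CDF Φ⁻¹(p). This is a sketch I wrote for illustration; `rand_deviate` is my own name for it:

```python
from statistics import NormalDist

def rand_deviate(d):
    """Map a five-digit block D in 0..99999 to a deviate via RAND's rule
    sqrt(2) * erfinv(2*(D + 0.5)*1e-5 - 1), rewritten as Phi^{-1}(p)
    with p = (D + 0.5) * 1e-5."""
    return NormalDist().inv_cdf((d + 0.5) * 1e-5)

# Reproduces the endpoints of the integer -> deviate map tabulated below.
lo = rand_deviate(0)        # about -4.417
hi = rand_deviate(99999)    # about  4.417
```

Evaluating it on the blocks 00000–00010 and 99989–99999 reproduces the tabulated values, making the gaps in the tails easy to see.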
Statistical properties
The distribution of the random digits (tabulated counts omitted here) has a Χ² value of 8.37 with 9 degrees of freedom, corresponding to a probability of 0.497.
Original deviates
The original normal deviates have been extensively tested with the Shapiro-Wilk test for normality. No biases have been found. The sample mean is -0.005 (P=0.11) and standard deviation is 1.0008 (P=
Second, RAND-method deviates
Use of the second set of deviates, calculated using the same method as the RAND Corporation for their 100,000 deviates, cannot be recommended except where identical statistical properties to the RAND
table or its correlation to the random digits is required. The reason is that the RAND deviates contain a readily-apparent bias in the tails, because the five digit integer input does not contain
enough significant digits to cover the 4 significant digits of the output. While RAND did perform a separate analysis on the tails of their deviates, it was not careful enough to uncover the bias
which becomes immediately apparent from looking at the integer → deviate map:
D → √2 erf⁻¹(2(D + 0.5)·10⁻⁵ − 1)
00000 → -4.417
00001 → -4.173
00002 → -4.056
00003 → -3.976
00004 → -3.916
00005 → -3.867
00006 → -3.826
00007 → -3.791
00008 → -3.760
00009 → -3.732
00010 → -3.707
99989 → 3.707
99990 → 3.732
99991 → 3.760
99992 → 3.791
99993 → 3.826
99994 → 3.867
99995 → 3.916
99996 → 3.976
99997 → 4.056
99998 → 4.173
99999 → 4.417
With a sample size of 100,000 deviates, each 5-digit block is expected to occur once on average. The result is that each of the above deviates is produced on average once in a sample of 100,000
deviates generated by this method—and no others in this range. So both the RAND deviates and deviates2.txt contain multiple instances of 4.417, as well as other deviates listed above, but no deviates
in the gaps from 4.174-4.416, 4.057-4.172, etc., an obvious bias. Since 4.3... is expected but can never be emitted, the RAND technique using only 5 digits of input has detectable bias at the tails
even at one decimal place. The RAND analysis of the tails (Table 8 of the introduction) looks only at bins with an expected value of 25, which is not sensitive enough to detect this bias.
[1] See this excerpt from an email exchange between myself and the RAND Corporation.
From: XXXXXX
To: Nathan Kennedy
Sent: October 3, 2006 8:52 AM
Subject: RE: Question regarding A Million Random Digits
You are
incorrect in your assumption that you this material is not subject to
copyright law. RAND is the copyright holder, and we do not grant the
right for others to distribute the info.
-----Original Message-----
From: Nathan Kennedy
Sent: Monday, October 02, 2006 11:02 PM
To: XXXXXX, RAND Corporation
Subject: Question regarding A Million Random Digits
Dear XXXXXX,
I have a question regarding the RAND Corporation publication "A Million
Random Digits with 100,000 Normal Deviates."
I would like to utilize these tables and redistribute them on the
Internet. It is my understanding that the numerical tables of random
digits and normal deviates themselves contain no creative content and
therefore are not subject to copyright or the property of RAND
Corporation (obviously the prefatory materials are, and these would be excluded).
However, before I widely redistribute these tables I wanted to make sure
that this is in line with the RAND Corporation's view, and that the RAND
Corporation will not challenge any third party's right to utilize these
tables as they see fit.
Thank you for this useful resource and for looking into this,
Nathan Kennedy
Math Forum Discussions
Topic: looking for example of closed set that is *not* complete in a metric space
Replies: 26 Last Post: Feb 3, 2013 11:06 AM
Re: looking for example of closed set that is *not* complete in a metric space
Posted by fom (1,969 posts; registered 12/4/12) on Feb 1, 2013, 3:32 PM
On 2/1/2013 12:09 PM, pepstein5@gmail.com wrote:
> On Friday, February 1, 2013 4:52:55 PM UTC, peps...@gmail.com wrote:
>> On Friday, February 1, 2013 4:37:40 PM UTC, Daniel J. Greenhoe wrote:
>>> Let (Y,d) be a subspace of a metric space (X,d).
>>> If (Y,d) is complete, then Y is closed with respect to d. That is,
>>> complete==>closed.
>>> Alternatively, if (Y,d) is complete, then Y contains all its limit
>>> points.
>>> Would anyone happen to know of a counterexample for the converse? That
>>> is, does someone know of any example that demonstrates that
>>> closed --> complete
>>> is *not* true? I don't know for sure that it is not true, but I might
>>> guess that it is not true.
>>> Many thanks in advance,
>>> Dan
>> You need to understand that "closed" and "open" don't characterize topologies.
Actually, it is precisely the distinction of "open" *or* "closed" as
an arbitrary label on a collection of subsets satisfying the axioms
which characterizes a topology.
Using a metric to govern that specification is what makes a
topological space a metric space.
But, Paul is correct in his observations that you are conflating
two different notions of closedness.
Y would always be closed as a topological space in its own right.
That is a property of the defining axioms.
Whether or not Y is closed in X as a subset of X is a
characteristic of the specification of closed sets in X.
Completion of an incomplete space is a logical type operation.
So, for example, there are "gaps" in the system of rational
numbers. One can, assuming completed infinities, define
infinite sets of rational numbers corresponding to the
elements of a Cauchy sequence. When the limit of the
sequence is, itself, a rational number, that infinite
set becomes a representation of that rational number in
the complete space whose "numbers" are equivalence classes
of Cauchy sequences sharing the same limit. When the
limit of a Cauchy sequence does not exist as a rational
number, that Cauchy sequence becomes a representative
of the equivalence class of Cauchy sequences that cannot
be differentiated from that representative using the
order relation between the rational numbers of the
underlying set. These "numbers" have no corresponding
rational number as a limit and are, therefore,
distinguished as a different logical type in the *new*,
completed space.
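As a concrete illustration of such a "gap" (my own sketch, not from the thread): Newton's iteration for √2 produces a sequence that lives entirely in the rationals and is Cauchy there, yet its would-be limit is not rational, so the sequence only converges in the completion.

```python
from fractions import Fraction

# Newton iterates for sqrt(2): x_{n+1} = (x_n + 2/x_n) / 2.
# Every iterate is an exact rational, so the sequence stays inside Q.
x = Fraction(1)
seq = []
for _ in range(6):
    x = (x + 2 / x) / 2
    seq.append(x)

# Cauchy behaviour: gaps between consecutive terms shrink rapidly ...
gaps = [abs(b - a) for a, b in zip(seq, seq[1:])]

# ... but the limit would have to satisfy x^2 = 2, which no rational
# does, so within Q the limit simply does not exist.
residual = abs(seq[-1] ** 2 - 2)
```

The sequence begins 3/2, 17/12, 577/408, ..., every term an exact Fraction, while the residual |x² − 2| collapses toward zero.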
Apparently, Cauchy had been very careful not to
speak of these sequences as converging to a point
in the underlying set. But most authors had not
been so careful. Ultimately, it became the
essential distinction for Cantor and he used
it for the definition of a real number in
preference to Dedekind cuts.
The purpose for such care in this construction
is the identity relation. A full construction of
the reals from the natural numbers preserves the
order relation of the naturals across the type
hierarchy. Thus, the order relation of the integers
is inherited from the naturals and the order relation
of the rationals is inherited from the integers.
That there are "gaps" in the rationals follows
from the solution of polynomials that require
irrational roots. But, between any two distinct
given rationals, one can find a third rational
different from the given pair. Defining a
complete space from the rationals fills these
gaps while preserving the order relation. In
turn, trichotomy on the rationals is inherited
by the reals of the new space and the identity
relation on the reals is established.
To call a subset of a complete space a dense
subset is to say that such a logical type
construction could be made from that subset
to recover the original space. The "closeness"
of a dense subset to its defining space is
expressed by the fact that it has non-empty
intersection with every open set of the space.
I think I got all of that right. But, there
are far more knowledgeable topologists
in this forum.
Emmaus Statistics Tutor
I have known since second grade that I wanted to be a math teacher. I was the kid on the playground using recess to teach her friends to subtract. After graduating from Lehigh University with a BA
in Mathematics and M.Ed. in Secondary Education, I spent six years teaching high school math.
12 Subjects: including statistics, calculus, geometry, algebra 1
My knowledge of economics and mathematics stems from my master's degree in economics from Lehigh University. I specialize in micro- and macroeconomics, from an introductory level up to an advanced
level. I have master's degree work in labor economics, financial analysis and game theory.
19 Subjects: including statistics, calculus, geometry, GRE
...However, most people who are familiar with Excel aren't fully aware of all that MS Excel can do. Maybe that's you. You'd be surprised about the things that you can do using Excel that you're
currently doing by hand or using other, less appropriate software (such as Excel's familiar sibling, MS Word). Take a comprehensive look at the features and utilities that MS Excel offers.
27 Subjects: including statistics, calculus, geometry, algebra 1
I am an experienced math professor eager to help you understand math. I believe math is a perfectly understandable science. I'm a retired electrical engineer with a master's degree, and I enjoy
teaching and love math.
11 Subjects: including statistics, physics, probability, ACT Math
...I look forward to meeting you and helping you to achieve your educational goals.Learn Algebra 1 right and it will go a long way toward making subsequent math courses easy. Fail to learn it
right and you will be in big trouble the rest of the way. I teach it right.
23 Subjects: including statistics, English, calculus, algebra 1
Here's the question you clicked on:
What is the simplest form of the expression? sqrt20+sqrt45-sqrt5
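For reference (worked out here, since no answer survives on the page), factoring out perfect squares gives:

```latex
\sqrt{20}+\sqrt{45}-\sqrt{5}
  = 2\sqrt{5}+3\sqrt{5}-\sqrt{5}
  = 4\sqrt{5}
```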
Easy Lagrangian interpolation
As everybody knows, the formula for Lagrangian interpolation though points $(x_i,y_i)$, where all the $x_i$ values are distinct, is given by
$\displaystyle{P(x)=\sum_{i=0}^n\left(\prod_{j\ne i}\frac{x-x_j}{x_i-x_j}\right)y_i}$
It is easy to see that this works, as the product is equal to zero if $x=x_k\ne x_i$ and is equal to one if $x=x_i$.
This isn’t particularly easy to program, and in fact methods such as Newton’s divided differences or Neville’s algorithm are used to obtain the polynomial.
But a method introduced by Gabor Szegö provides a very simple form of Lagrange’s polynomial.
Start with defining $\pi(x)=\prod_{j=0}^n(x-x_j)$, whose derivative is
$\displaystyle{\frac{d}{dx}\pi(x)=\sum_{i=0}^n\Biggl(\prod_{\substack{0\le j\le n\\ j\ne i}}(x-x_j)\Biggr)}.$
This is obtained immediately from the product rule for arbitrary products, and using the fact that the derivative of each individual term of $\pi(x)$ is one. This means that in particular
$\displaystyle{\pi'(x_i) = \prod_{\substack{0\le j\le n\\ j\ne i}}(x_i-x_j)}$
because all other terms of $\pi'(x)$ contain an $(x-x_i)$ term and therefore produce zero for $x=x_i$.
This means that the coefficient of $y_i$ in the Lagrangian polynomial can be written as
$\displaystyle{\frac{\pi(x)}{(x-x_i)\,\pi'(x_i)}}$
and so the polynomial can be written as
$\displaystyle{P(x)=\sum_{i=0}^n\frac{\pi(x)}{(x-x_i)\,\pi'(x_i)}\,y_i}.$
Here is an example, with $x=-2,1,3,4$ and $y=-3,-27,-23,3$. First, Maxima:
(%i1) xs:[-2,1,3,4];
(%i2) ys:[-3,-27,-23,3];
(%i3) define(pi(x),product(x-xs[i],i,1,4));
(%i4) define(pid(x),diff(pi(x),x));
(%i5) define(P(x),sum(pi(x)/(x-xs[i])/pid(xs[i])*ys[i],i,1,4));
(%i6) ratsimp(P(x));
(%o6) x^3-11*x-17
Now Sage:
sage: xs = [-2,1,3,4]
sage: ys = [-3, -27, -23, 3]
sage: pi(x) = prod(x-i for i in xs); pi(x)
(x - 4)*(x - 3)*(x - 1)*(x + 2)
sage: pid(x) = diff(pi(x),x);
sage: P(x) = sum(pi(x)/(x-i)/pid(i)*j for (i,j) in zip(xs,ys))
sage: P(x).collect(x)
x^3 - 11*x - 17
And finally, on the TI-nspire CAS calculator:
$\displaystyle{{\rm Define}\quad q(x)=\prod_{i=1}^n(x-xs[i])}$
$\displaystyle{qs:={\rm seq}\left(\left.\frac{d}{dx}(q(x))\right|x=xs[i],i,1,n\right)}$
$\displaystyle{{\rm Define }\quad p(x)=\sum_{i=1}^n\left(\frac{q(x)}{(x-xs[i]) \cdot qs[i]}ys[i]\right)}$
$p(x)\hspace{20mm} x^3-11\cdot x-17$
Note that all the mathematical symbols and programming constructs are available through the calculator’s menus.
Notice that this approach requires no programming with loops or conditions at all – it’s all done with simple sums and products.
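For completeness, here is the same construction in plain Python (my own sketch; the names are mine, and evaluation is only valid away from the interpolation nodes, where the formula would divide by zero):

```python
def lagrange(xs, ys):
    """Build P(x) = sum_i pi(x) / ((x - x_i) * pi'(x_i)) * y_i,
    the Szego form of the Lagrange interpolating polynomial."""
    n = len(xs)
    # pi'(x_i) = prod_{j != i} (x_i - x_j), from the derivative identity above
    pid = [1.0] * n
    for i in range(n):
        for j in range(n):
            if j != i:
                pid[i] *= xs[i] - xs[j]

    def P(x):
        # pi(x) = prod_j (x - x_j); note x must not equal any node x_i
        pi = 1.0
        for xj in xs:
            pi *= x - xj
        return sum(pi / ((x - xs[i]) * pid[i]) * ys[i] for i in range(n))

    return P

P = lagrange([-2, 1, 3, 4], [-3, -27, -23, 3])   # P(x) = x^3 - 11x - 17
```

Evaluating P at, say, 0, 2 and 5 returns −17, −31 and 53, matching x³ − 11x − 17 from the examples above.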
One response to “Easy Lagrangian interpolation”
1. Placing this in a program editor would make it even easier to use in the Nspire CAS. Thanks for the syntax of how to do it though.
use instead a program editor, with the two inputs as the lists
also use n:= count(xs) to determine the length rather than typing a new length every time.
for some reason when I used the input variables directly it wouldn’t work, so I redefined them and it did. Maybe you can make it better, but here is the code for that program which will work on
any two lists of the same length (Crashes if y is shorter)
Define laginterp(xls,yls)=
:Define q(x)=∏(x-xs[i],i,1,n)
:Define p(x)=∑(((q(x))/((x-xs[i])*qs[i]))*ys[i],i,1,n)
:Disp p(x)
This entry was posted in Computation, Maths teaching, Maxima, Sage.
Affinity laws, positive displacement pumps
Subject: The affinity laws for rotary, positive displacement pumps 13-6
The affinity laws accurately predict the effect of changing the speed of a centrifugal or rotary pump, and they also do a fairly good job of predicting the effect of changing the diameter of a
centrifugal pump. In another paper (02-01) we discussed the affinity laws as they apply to centrifugal pumps, but in this paper we'll look at their use with rotary pumps.
Rotary pump designs include: gear, vane, lobe, progressive cavity, screw, etc. They are more commonly know as positive displacement (PD) pumps and act very different than centrifugal pumps:
• PD pumps do not have a best efficiency point (B.E.P).
• There is no impeller shape (specific speed) to consider.
• There is no system curve to match.
• Their capacity is a constant even if the head changes.
• Unlike a centrifugal pump, if you were going to fill a tank with a PD pump you would fill the tank from the bottom rather than the top to save energy costs.
Take a look at the following two curves. The one on the left describes a centrifugal pump curve with the curve shape determined by the "specific speed" number of the impeller. The curve on the right
describes the curve we get with a typical Rotary Pump.
(Curves omitted. In both plots the vertical axis is H = head in feet or meters and the horizontal axis is Q = capacity in gpm or m³/hr.)
What happens when you change the speed of each of these type pumps? We will look at what happens when you double the speed and change from 1750 rpm to 3500 rpm. This is a drastic change in speed, but
not uncommon.
If you are using a variable speed motor, pulley arrangement or gear box the speed change might not be as dramatic, but the formulas you will be using remain the same.
NEW SPEED / OLD SPEED = A NUMBER, e.g.
3500 rpm / 1750 rpm = 2, or
1500 rpm / 3000 rpm = 0.5
First we'll take a look at what happens with a centrifugal pump when you double the speed. In the metric system I will show what happens when you cut the speed in half:
The capacity or amount of fluid you are pumping varies directly with this number.
• Example: 100 Gallons per minute x 2 = 200 Gallons per minute
• Or in metric units: 50 Cubic meters per hour x 0,5 = 25 Cubic meters per hour
The head varies by the square of the number.
• Example : A 50 foot head x 4 (2^2) = 200 foot head
• Or in metric: A 20 meter head x 0,25 ( 0,5^2) = 5 meter head
The horsepower required changes by the cube of the number.
• Example : A 9 Horsepower motor was required to drive the pump at 1750 rpm.. How much is required now that you are going to 3500 rpm?
□ We would get: 9 x 8 (2^3) = 72 Horse power is now required.
• Likewise if a 12 kilowatt motor were required at 3000 rpm and you decreased the speed to 1500 the new kilowatts required would be: 12 x 0,125 (0,5^3) = 1,5 kilowatts required for the lower rpm.
The NPSH required varies by approximately the square of the speed
• Example 9 feet x 2^2 = 36 feet
• Or in metric 3 meters x 2^2 = 12 meters
Lets compare that to the rotary, positive displacement pump. Again we will double the speed, and in the metric units we will cut the speed in half:
The capacity or amount of fluid you are pumping varies directly with this number.
• Example: 100 Gallons per minute x 2 = 200 Gallons per minute
• Or in metric: 50 Cubic meters per hour x 0,5 = 25 Cubic meters per hour
There is no direct change in head with a change in speed. The pump generates whatever head or pressure that is necessary to pump the capacity.
The horsepower required changes by the number
• Example : A 9 Horsepower motor was required to drive the pump at 1750 rpm.. How much is required now that you're going to 3500 rpm?
□ We would get: 9 x 2 = 18 Horse power is now required.
• Or in metric units, if a 12 kilowatt motor were required at 3000 rpm. and you decreased the speed to 1500 the new kilowatts required would be: 12 x 0,5 = 6,0 kilowatts required for the lower rpm.
The NPSH required varies by the square of the speed
• Example 9 feet x 2^2 = 36 feet
• Or in metric, 3 meters x 0,25 ( 0,5^2) = 0,75 meters
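The two sets of speed laws can be captured in a small calculator (an illustrative sketch of my own; the function and key names are mine):

```python
def affinity(ratio, flow, power, head=None, pump="centrifugal"):
    """Scale pump performance for a speed change, ratio = new_rpm / old_rpm.

    Centrifugal: flow ~ ratio, head ~ ratio**2, power ~ ratio**3.
    Rotary (PD): flow ~ ratio, power ~ ratio; head is set by the system,
    not the speed, so it is left unchanged.
    (NPSHR scales roughly with ratio**2 in both cases.)
    """
    if pump == "centrifugal":
        return {"flow": flow * ratio,
                "head": None if head is None else head * ratio ** 2,
                "power": power * ratio ** 3}
    return {"flow": flow * ratio, "head": head, "power": power * ratio}

# Doubling speed, as in the 1750 -> 3500 rpm worked examples above:
cent = affinity(2, flow=100, power=9, head=50)
pd = affinity(2, flow=100, power=9, pump="rotary")
```

Here `cent` comes out as 200 gpm, 200 ft and 72 hp, and `pd` as 200 gpm and 18 hp — the same numbers as the worked examples in the text.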
Rotary pumps are often used with high viscosity fluids. There is a set of Affinity Laws for changes in viscosity, but unlike changes in speed the change in viscosity does not give you a direct change
in capacity, NPSH required, or horsepower. As an example: an increase in viscosity will increase the capacity because of less slippage, but twice the viscosity does not give you twice the gpm.
Since there are a variety of Rotary Pump designs operating over a wide range of viscosities, simple statements about changes in operating performance are hard to make, but the following relationships
are generally true.
Here are the Viscosity Affinity Laws for Rotary Pumps:
• Viscosity 1 > Viscosity 2 ⇒ gpm 1 > gpm 2
• Viscosity 1 > Viscosity 2 ⇒ BHP 1 > BHP 2
• Viscosity 1 > Viscosity 2 ⇒ NPSHR 1 > NPSHR 2
• Viscosity 1 > Viscosity 2 ⇒ no direct effect on differential pressure.
Lawncrest, PA Math Tutor
Find a Lawncrest, PA Math Tutor
...Conclusion: I hope I have piqued your interest with my years of experience and unique perspective. Please do not hesitate to contact me through Wyzant where we can discuss your writing needs
and availability. I look forward to working with you.I graduated from Villanova University in May 2013 with my B.S. in Mechanical Engineering.
37 Subjects: including algebra 1, ACT Math, reading, precalculus
...I have a bachelor's degree in secondary math education. During my time in college, I took one 3-credit course in Linear Algebra. At least three of the other fourteen math courses I took also
touched on topics from Linear Algebra.
11 Subjects: including algebra 1, algebra 2, calculus, geometry
...Tutoring sessions were held before, during and after school, and usually lasted 45 minutes to an hour. Student success determined how many sessions were needed, and student feedback was an
integral part of the program. An important part of the development and implementation of the program was n...
19 Subjects: including algebra 2, ACT Math, prealgebra, geometry
...I've taught third grade, seventh grade, and this upcoming semester I'll be teaching high school students. I make my own lesson plans, which is how I'm graded and receive feedback from the
teachers and my professor. With these different aspects, I feel that I have a great deal of experience and will be able to provide your children with the best quality service.
6 Subjects: including precalculus, prealgebra, algebra 1, study skills
...I take a slightly unorthodox approach in that I try to learn as much while tutoring as the students are learning. If I need to spend an evening constructing a flowchart of American political
parties so that the student and I can both be fluent in the topic, so be it! I'm looking forward to gett...
22 Subjects: including algebra 1, geometry, English, prealgebra
The Cocktail Party Version
Posted by John Baez
guest post by Jeffrey Morton
In this guest post, I thought I would step back and comment on the big picture of the motivation behind what I’ve been talking about on my own blog. I recently gave a talk at the University of Ottawa,
which tries to give some of the mathematical/physical context. It describes both “degroupoidification” and “2-linearization” as maps from spans of groupoids into (a) vector spaces, and (b) 2-vector
spaces. I will soon write a post setting out the new thing in case (b) that I was hung up on for a while until I learned some more representation theory. However, in this venue I can step even
further back than that.
Over the Xmas/New Year break, I was travelling about “The Corridor” (the densely populated part of Canada: London, where I live, is toward one end, and I visited Montreal, Ottawa, Toronto, Kitchener,
and some of the areas in between, to see family and friends). Between catching up with friends — who, naturally, like to know what I’m up to — and the New Year impulse to summarize, and the fact that
I’m applying for jobs these days, I’ve had occasion to think through the answer to the question “What do you work on?” on a few different levels. So what I thought I’d do here is give the “Cocktail
Party Version” of what it is I’m working on (a less technical version of my research statement, with some philosophical asides, I guess).
In The Middle
The first thing I usually have to tell people is that what I work on lives in the middle — somewhere between mathematics and physics. Having said that, I have to clear up the fact that I’m a
mathematician, rather than a physicist. I approach questions with a mathematician’s point of view — I’m interested in making concepts precise, proving facts about them rigorously, and so on. But I do
find it helps to motivate this activity to suppose that the concepts in question apply to the real world — by which I mean, the physical world.
(That’s a contentious position in itself, obviously. Platonists, Cartesian dualists, and people who believe in the supernatural generally don’t accept it, for example. For most purposes it doesn’t
matter, but my choice about what to work on is definitely influenced by the view that mathematical concepts don’t exist independently of human thought, but the physical world does, and the concepts
we use today have been selected — unconsciously sometimes, but for the most part, I think, on purpose — for their use in describing it. This is how I account for the supposedly unreasonable
effectiveness of mathematics — not really any more surprising than the remarkable effectiveness of car engines at turning gasoline into motion, or that steel girders and concrete can miraculously
hold up a building. You can be surprised that anything at all might work, but it’s less amazing that the thing selected for the job does it well.)
The physical world, however, is just full of interesting things one could study, even as a mathematician. Biology is a popular subject these days, which is being brought into mathematics departments
in various ways. This involves theoretical study of non-equilibrium thermodynamics, the dynamics of networks (of chemical reactions, for example), and no doubt a lot of other things I know nothing
about. It also involves a lot of detailed modelling and computer simulation. There’s a lot of profound mathematical engagement with the physical world here, and I think this stuff is great, but it’s
not what I work on. My taste in research questions is a lot more foundational. These days, the physical side of the questions I’m thinking about has more to do with foundations of quantum mechanics
(in the guise of 2-Hilbert spaces), and questions related to quantum gravity.
Now, recently, I’ve more or less come around to the opinion that these are related: that part of the difficulty of finding a good theory accommodating quantum mechanics and general relativity comes
from not having a proper understanding of the foundations of quantum mechanics itself. It’s constantly surprising that there are still controversies, even, over whether QM should be understood as an
ontological theory describing what the world is like, or an epistemological theory describing the dynamics of the information about the world known to some observer. (Incidentally — I’m assuming here
that the cocktail party in question is one where you can use the word “ontological” in polite company. I’m told there are other kinds.)
Furthermore, some of the most intractable problems surrounding quantum gravity involve foundational questions. Since the language of quantum mechanics deals with the interactions between a system and
an observer, applying it to the entire universe (quantum cosmology) is problematic. Then there’s the problem of time: quantum mechanics (and field theory), both old-fashioned and relativistic, assume
a pre-existing notion of time (either a coordinate, or at least a fixed background geometry), when calculating how systems (including fields) evolve. But if the field in question is the gravitational
field, then the right notion of time will depend on which solution you’re looking at.
Category Theory
So having said the above, I then have to account for why it is that I think category theory has anything to say to these fundamental issues. This being the cocktail party version, this has to begin
with an explanation of what category theory is, which is probably the hardest part. Not so much because the concept of a category is hard, but because as a concept, it’s fairly abstract. The odd
thing is, individual categories themselves are in some ways more concrete than the “decategorified” nubbins we often deal with. For example, finite sets and set maps are quite concrete: here are four
sheep, and here four rocks, and here is a way of matching sheep with rocks. Contrast that with the abstract concept of the pure number “four” — an element in the set of cardinalities of finite sets,
which gets addition and multiplication (abstractly defined operations) from the very concrete concepts of union and product (set of pairs) of sets. Part of the point of categorification is to restore
our attention to things which are “more real” in this way, by giving them names.
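The sheep-and-rocks picture decategorifies in a way that is easy to make concrete. A small sketch (the sets here are invented for the example): disjoint union of finite sets becomes addition of cardinalities, and the set of pairs becomes multiplication.

```python
from itertools import product

# Two concrete finite sets: four sheep and four rocks.
sheep = {"s1", "s2", "s3", "s4"}
rocks = {"r1", "r2", "r3", "r4"}

# A "way of matching sheep with rocks" is a bijection -- a morphism in FinSet.
matching = dict(zip(sorted(sheep), sorted(rocks)))

# Decategorification forgets the elements and keeps only cardinalities:
# disjoint union |A| + |B| becomes addition, the set of pairs |A x B|
# becomes multiplication.
union_size = len(sheep | rocks)                  # the sets are disjoint here
product_size = len(list(product(sheep, rocks)))  # set of pairs

print(union_size)    # 4 + 4 = 8
print(product_size)  # 4 * 4 = 16
```

The abstract number "four" is what is left after forgetting which particular matching of sheep with rocks witnessed the equality of cardinalities.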
One philosophical point about categories is that they treat objects and morphisms (which, for cocktail party purposes, I would describe as “relations between objects”) as equally real. Since I’ve
already used the word, I’ll say this is an ontological commitment (at least in some domain — here’s an issue where computer science offers some nicely structured terminology) to the existence of
relations as real. It might be surprising to hear someone say that relations between things are just as “real” as things themselves — or worse, more real, albeit less tangible. Most of us are used to
thinking of relations as some kind of derivative statement about real things. On the other hand, relations (between subject and object, system and observer) are what we have actual empirical evidence
for. So maybe this shouldn’t be such a surprising stance.
Now, there are different ways category theory can enter into this discussion. Just to name one: the causal structure of a spacetime (a history) is a category — in particular, a poset (though we might
want to refine that into a timelike-path category — or a double category where the morphisms are timelike and spacelike paths). Another way category theory may come in is as the setting for
representation theory, which comes up in what I’ve been looking at. Here, there is some category representing a specific physical system — for example, a groupoid which represents the pure states of
a system and their symmetries. Then we want to describe that system in a more universal way — for example, studying it by looking at maps (functors) from that category into one like Hilb, which isn’t
tied to the specific system. The underlying point here is to represent something physical in terms of the sort of symbolic/abstract structures which we can deal with mathematically. Then there’s a
category of such representations, whose morphisms (intertwiners in some suitably general sense) are ways of “changing coordinates” which get along with what’s important about the system.
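The "functor from a groupoid into Hilb" idea can be sketched at the smallest nontrivial scale. Everything here is invented for illustration: the group Z/2 viewed as a one-object groupoid, and a two-dimensional representation of it; functoriality just means the assigned matrices compose the way the group elements do.

```python
# A one-object groupoid: the group Z/2 = {0, 1} under addition mod 2.
elements = [0, 1]
compose = lambda g, h: (g + h) % 2

# A "functor into Hilb": assign each morphism (group element) a 2x2 matrix.
# Here: the identity and a swap, a representation of Z/2 on C^2.
I = [[1, 0], [0, 1]]
swap = [[0, 1], [1, 0]]
rep = {0: I, 1: swap}

def matmul(a, b):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Functoriality: rep(g . h) == rep(g) @ rep(h) for all g, h.
for g in elements:
    for h in elements:
        assert rep[compose(g, h)] == matmul(rep[g], rep[h])
print("functoriality holds")
```

An intertwiner between two such representations would then be a matrix commuting with the images of every group element, i.e. a "change of coordinates" that gets along with the symmetry.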
The Point
So by “The Point”, I mean: how this all addresses questions in quantum mechanics and gravity, which I previously implied it did (or could). Let me summarize it by describing what happens in the 3D
quantum gravity toy model developed in my thesis. There, the two levels (object and morphism) give us two concepts of “state”: a state in a 2-Hilbert space is an object in a category. Then there’s a
“2-state” (which is actually more like the usual QM concept of a state): this is a vector in a Hilbert space, which happens to be a component in a 2-linear map between 2-vector spaces. In particular,
a “state” specifies the geometry of space (albeit, in 3D, it does this by specifying boundary conditions only). A “2-state” describes a state of a quantum field theory which lives on that background.
Here is a Big Picture conjecture (which I can in no way back up at the moment, and reserve the right to second-guess): the division between “state and 2-state” as I just outlined it should turn out
to resolve the above questions about the “problem of time”, and other philosophical puzzles of quantum gravity. This distinction is most naturally understood via categorification.
(Maybe. It appears to work that way in 3D. In the real world, gravity isn’t topological — though it has a limit that is.)
Posted at February 4, 2009 1:03 AM UTC
Re: The Cocktail Party Version
Thanks for cross-posting this from my blog, John!
I originally wrote it to sum up what I had come up with, trying to describe as non-technically as I could the relation between category theory and quantum gravity. I wasn’t sure whether it would
translate out of that context - I’m pleased you thought it did. If my terribly vague conjecture seems off base to anyone, I’d also be pleased to hear about why.
Posted by: Jeffrey Morton on February 4, 2009 4:49 AM | Permalink | Reply to this
Re: The Cocktail Party Version
Thanks for letting us post your article. It hits smack dab in the middle of our interests here: math, physics and philosophy. Someone reading your papers might not notice the philosophical interests
underlying them — and that’s probably a good thing. But here at this café, we can get into that side of things.
I like the idea of ‘state versus 2-state’. If I understand this idea correctly, it’s really all about first specifying a Hilbert space as a hom-space inside a 2-Hilbert space, and then picking a
vector in that 2-Hilbert space. At the cocktail party level of precision you describe this as ‘first specifying a geometry of space, and then a state of a quantum field theory on this background’.
I’d instead describe it as ‘first specifying boundary conditions, and then a state of a quantum field theory with these boundary conditions’. Am I mixed up?
I’m not sure how much this will help with the problem of time when we go beyond 3d quantum gravity. But regardless, it seems like a sensible thing to study. Right now, I’d really enjoy seeing the
idea worked out very precisely for 3d quantum gravity. You’ve already done it for untwisted Dijkgraaf–Witten models, but replacing the finite gauge group by something like $SU(2)$ will make the
geometry and physics a lot more vivid — we’ll get particles with masses and spins showing up! It will also give us an excuse to dabble in more fun math, like coherent sheaves and infinite-dimensional
2-Hilbert spaces.
The whole program has already been worked out to some extent in topological open-closed string theory, but not using a bicategory of cobordisms — it would be nice to translate existing work into that framework.
It would also be nice to tackle some 2d conformal field theories that aren’t topological! We need to go beyond topological theories to see real physics, and this might be the easiest place to start.
Hmm, but now I’m talking like a math phys wonk instead of a typical party-goer. Maybe I should be saying stuff like “So what’s your ontology? I’m a Gemini, so of course I lean toward Platonism.”
Posted by: John Baez on February 4, 2009 4:56 PM | Permalink | Reply to this
Re: The Cocktail Party Version
Uh - ObCocktailParty: I’m a Wood Rabbit, Aries with Scorpio Rising, so naturally I don’t believe in astrology (though I hear it works even if you don’t believe in it) and lean toward
quasi-empiricism. (End ObCocktailParty).
I think you got the state/2-state idea, though I’d correct “vector in a 2-Hilbert space” by removing the “2”. (Although I notice there’s an unfortunate orientation mismatch in the notation… to be
consistent with “morphism/2-morphism”, probably states and 2-states should be named the other way around.)
You’re right in saying that in extended TQFT’s, choosing a basis 2-vector is about specifying boundary conditions rather than geometry - a general 2-vector gives some (direct) sum of such choices.
That’s the general picture for extended TQFT - it’s only in 3D (or the $G \rightarrow 0$ limit in 4D) that these are really the same for gravity.
It does seem that when there are local degrees of freedom for gravity, the hom-Hilbert spaces that appear as components of the 2-linear maps associated to spacelike slices will have some
decomposition, indexed by particular geometries - finer than the decomposition of a big Hilbert space into a 2-linear map (i.e. indexing by boundary conditions). So while your description is the
state-of-the-art as I understand it, I’m wondering if it’s possible to capture the finer decomposition in some way at the 2-state level (obviously not in a theory of cobordism representations, where
the objects are just boundaries, but in some other way). The motivation is pretty much as in the post.
On the subject of the open-closed string theories, I’m hoping to make the link with that clear in the final paper writing up what’s in my thesis. Basically, since that’s a 2D version of what I was
looking at in 3D, it’s a bit less well-motivated to use a 2-functor. The only objects are points - which have only one connection on them, and therefore the only 2-Hilbert space that ever appears is
just Hilb itself (one copy for each corner, anyway), and so the 2-linear maps that appear are most naturally just Hilbert spaces.
In any case, I agree that CFT is a good place to look next. I’ve had that question several times at talks. Adding conformal structure to the cobordisms means, among other things, that the monoidal
structure is more complicated - getting into fusion products and such - but it definitely seems like it should be possible to produce something like an “Extended CFT” as a monoidal 2-functor.
Another generalization I might point out is the following: we expect the extended TQFT associated to SU(2), once it’s rigorously defined (in particular, using infinite dimensional 2-Hilbert spaces,
which I’ve been thinking about a bunch recently), is expected to reproduce the Ponzano-Regge model. Bruce Bartlett asked a while back about Turaev-Viro model, partly because my factorization using
that $\Lambda$ 2-functor was bothering him. I think he’s right, in that doing this would seem to involve quantum groupoids (see, e.g. here). Of course, there is no such thing as a quantum groupoid -
just its category of representations, which destroys that factorization. On the other hand, thinking about the resulting 2-Hilbert spaces in terms of such a factorization still might be helpful…
Posted by: Jeffrey Morton on February 5, 2009 5:04 AM | Permalink | Reply to this
Re: The Cocktail Party Version
Jeffrey wrote:
Another generalization I might point out is the following: we expect the extended TQFT associated to SU(2), once it’s rigorously defined (in particular, using infinite dimensional 2-Hilbert
spaces, which I’ve been thinking about a bunch recently), is expected to reproduce the Ponzano-Regge model.
By ‘rigorously defined’, do you mean finite?
Posted by: Jamie Vicary on February 17, 2009 9:58 AM | Permalink | Reply to this
Re: The Cocktail Party Version
In your work, is there any notion of the evolution of a 2-state?
In perturbation theory, we have a free Hamiltonian, like a quadratic potential for a QHO, plus a perturbation. We can write the perturbation in terms of creation and annihilation operators. If we had
two QHOs, for instance, we could talk about a system perturbed by $(a^\star_1 a_2 + a^\star_2 a_1)$. Here, photons in each QHO can be emitted and absorbed by the other. The evolution of this system
is a sum over diagrams, where each diagram consists of free evolutions interspersed with “kicks” of the form above.
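Restricted to the one-excitation subspace, the exchange perturbation $(a^\star_1 a_2 + a^\star_2 a_1)$ is just a 2×2 Hamiltonian, and the photon-swapping it describes can be checked directly. This is only an illustrative sketch: units are chosen so that hbar and the coupling strength are 1, and the exact exponential is used rather than a diagram expansion.

```python
import math

# In the one-excitation subspace {|1,0>, |0,1>}, the coupling
# (a1* a2 + a2* a1) acts as the Pauli matrix sigma_x.
# Since sigma_x squared is the identity,
# exp(-i t sigma_x) = cos(t) I - i sin(t) sigma_x.
def evolve(c10, c01, t):
    c, s = math.cos(t), math.sin(t)
    return (c * c10 - 1j * s * c01,
            c * c01 - 1j * s * c10)

# Start with the single photon in oscillator 1.
c10, c01 = 1.0 + 0j, 0.0 + 0j

# After t = pi/2 the photon has been fully transferred to oscillator 2.
c10, c01 = evolve(c10, c01, math.pi / 2)
print(abs(c10) ** 2, abs(c01) ** 2)  # ~0.0 and ~1.0
```

The diagrammatic picture expands this same exponential as a sum over sequences of "kicks" separated by free evolutions.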
The objects of a symmetric monoidal category are finite tensor products of some base objects. The object $X^{\otimes n}$ is rather like having $n$ photons in the QHO labelled $X$. So just as we had a
Hilbert space whose states encoded how many photons of each kind we had, in a 2-QHO we have a 2-Hilbert space whose 2-states encode how many Hilbert spaces of each kind we have.
A morphism $f:X\to Y$ is like a perturbation $(a^\star_Y a_X)$. But it’s annihilating and creating a Hilbert space, not a photon. I imagine that the evolution of the 2-state would be a sum over
diagrams, where we have some kind of evolution from the “free 2-Hamiltonian”–one in which you have n base objects but no morphisms between them–punctuated by “kicks”, which are morphisms. A diagram
shows one possible composition of morphisms in the symmetric monoidal category.
But it’s unclear to me even what the free evolution would be like.
Posted by: Mike Stay on February 6, 2009 8:41 PM | Permalink | Reply to this
Re: The Cocktail Party Version
It might be surprising to hear someone say that relations between things are just as “real” as things themselves — or worse, more real, albeit less tangible. Most of us are used to thinking of
relations as some kind of derivative statement about real things. On the other hand, relations (between subject and object, system and observer) are what we have actual empirical evidence for. So
maybe this shouldn’t be such a surprising stance.
So, what do you think of relational quantum mechanics?
Posted by: Blake Stacey on February 4, 2009 2:36 PM | Permalink | Reply to this
Re: The Cocktail Party Version
I liked the relational interpretation as soon as I read Rovelli’s discussion of the EPR paradox in that interpretation, which fit very nicely with my own intuition, namely, that there is no paradox
because the fact of correlation only exists for people who can observe it. More broadly, the motivating intuitions - basically, take QM seriously without unnecessary extras like “hidden variables” or
“macroscopic systems” - seem very natural.
I’m not sure if anyone has explicitly described it in categorical language, but it seems pretty natural. There’s some category whose objects are subsystems of the world, and where the hom-set between
any two objects is the Hilbert space describing the correlations between them, where “states” live.
(I reiterate, this terminology is kind of unfortunate - rays in this Hilbert space probably should be called 2-states, so that objects in the category can be called 1-states, but this is in conflict
with established use).
My bias is to say that a good way to say this is to say relational QM amounts to working in a particular 2-Hilbert space, describing the universe. Though this is rather different from the
cobordism-world I’m more familiar with. since we’re looking at a category where objects are subsystems, not submanifolds with some codimension… Though it’s a bit more complicated since we have to
think about more than just containment relations, but also causal structure since the systems have to interact. Would this be like a causal site?
(Okay, as long as I’ve gone ahead and said that, I may as well say that if anyone thinks the phrase “working in a particular 2-Hilbert space” sounds like “working in a particular topos”, I generally
agree, but I kind of don’t want to get into that now.)
Posted by: Jeffrey Morton on February 5, 2009 5:28 AM | Permalink | Reply to this
Re: The Cocktail Party Version
This quote caught my eye also, and I think the reason is that it’s a bit imprecise. Empirical evidence usually means perceived by the senses, and that means talking about a perceived object, not one’s
relationship to the object. Category theory or Process Philosophy seem too abstract to be deemed empirical evidence, without denying the usefulness of conceptualizing in terms of relationships
between objects.
“Since mathematical reasoning dominated the heuristics of QFT, its interpretation is open in most areas which go beyond the immediate empirical predictions. Philosophical analysis might help to clarify its semantics. QFT taken seriously in its metaphysical implications seems to give a picture of the world which is at variance with central classical conceptions like particles and fields and even with some features of quantum mechanics (QM).”
SH: I don’t think J. Morton’s description of empirical relationship fits into the category of QFT’s empirically enabled predictions. If quarks and fields are irreducible with a correlation between
them, does the term relationship describe that correlation? Formal math/logical systems do not generate empirical evidence, which is acquired by experiment.
Posted by: Stephen Harris on February 5, 2009 6:10 AM | Permalink | Reply to this
Re: The Cocktail Party Version
That’s true. It was a highly imprecise statement. I meant it to indicate that every bit of empirical evidence is, necessarily, a datum about a relation (between subject and object). You say that
empirical evidence is about observing objects, but in the majority of my experiences, there has also been a subject involved. I’m not sure if that clarifies what I was trying to say. I’m also not
sure if there’s a substantive disagreement here or just a semantic one.
Also, I didn’t intend to imply that mathematics “generates” evidence, but if I did, I retract the implication.
I’d prefer (not being a Platonist) to say that math summarizes evidence - which is what my comment about its “unreasonable effectiveness” was about. Mathematics has been developed over several
millennia of experience with the real world, and is intimately connected with it as a result of probably several tens of millions of person-years of mental labour. So I’m not sure what this “formal
math” might be. We often use concepts without knowing where they came from, but they generally came from looking at reality.
Posted by: Jeffrey Morton on February 5, 2009 7:02 AM | Permalink | Reply to this
Re: The Cocktail Party Version
Jeffrey wrote:
One philosophical point about categories is that they treat objects and morphisms (which, for cocktail party purposes, I would describe as “relations between objects”) as equally real. Since I’ve
already used the word, I’ll say this is an ontological commitment (at least in some domain — here’s an issue where computer science offers some nicely structured terminology) to the existence of
relations as real. It might be surprising to hear someone say that relations between things are just as “real” as things themselves — or worse, more real, albeit less tangible. Most of us are
used to thinking of relations as some kind of derivative statement about real things. On the other hand, relations (between subject and object, system and observer) are what we have actual
empirical evidence for. So maybe this shouldn’t be
such a surprising stance.
Even after your response to my post, I still didn’t understand this paragraph. I should have been more specific: what was real as compared to “real”? What was being compared that was “more real”? So I
wasn’t going to post. But then I went to your blog and found JB’s comment:
This sentence could use a bit of copy-editing:
What it mean is, the claim that relations between things are just as “real” as things themselves, albeit maybe less tangible might be a sort of surprising if you’re used to thinking of
relations as some kind of derivative statement about real things.
Now your reply:
Jeffrey Morton wrote:
I’ve broken that clunky sentence into two. On reflection, I also made it a little bolder, hinting that morphisms are more “real”, or at least more fundamental, than objects (you can describe a
category entirely in terms of its morphisms, but not its objects, after all).
Now with this explanation I was able to reread the main paragraph quoted initially and understand what you meant. The reason I didn’t grasp your ideas from what you wrote is that we don’t share some
underlying philosophical assumptions such as comparing using the idea of realness.
Posted by: Stephen Harris on February 6, 2009 5:52 AM | Permalink | Reply to this
Interval in UK means Intermission in USA; Re: The Cocktail Party Version
Musicontologically, are Intervals as Real as Notes?
In music theory, as simplified at Wikipedia, “the term interval describes the relationship between the pitches of two notes.”
Intervals may be described as:
* vertical (or harmonic) if the two notes sound simultaneously
* linear (or melodic), if the notes sound successively.[1]
Interval class is a system of labelling intervals when the order of the notes is left unspecified, therefore describing an interval in terms of the shortest distance possible between its two pitch classes.
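The interval-class notion has a direct computational reading: reduce the distance between the two notes mod 12 and take the shorter way around the circle, so the result always lies between 0 and 6. A quick sketch (MIDI-style pitch numbers assumed for the examples):

```python
def interval_class(note_a, note_b):
    """Interval class: shortest distance between the two pitch classes,
    measured on the 12-tone circle, so the result is always 0..6."""
    d = (note_a - note_b) % 12
    return min(d, 12 - d)

# C (60) up to G (67) is a perfect fifth (7 semitones), but as an
# interval class it is identified with the perfect fourth: ic 5.
print(interval_class(60, 67))  # 5
print(interval_class(60, 65))  # 5 as well: C up to F, a perfect fourth
```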
Note that “The term ‘interval’ can also be generalized to other elements of music besides pitch. David Lewin’s Generalized Musical Intervals and Transformations uses interval as a generic measure of
distance in order to show musical transformations which can change, for instance, one rhythm into another, or one formal structure into another” [8].
The recent breakthrough research on Orbifolds in Music Theory cries out for Categorification, or n-Categorification. Note that John Horton Conway is coauthoring an Orbifolds and Symmetry book, although
I think that he gets by mathematically with genius, rather than n-Categories.
[1] Lindley, Mark/Campbell, Murray/Greated, Clive. “Interval”, Grove Music Online, ed. L. Macy (accessed 27 February 2007), grovemusic.com (subscription access).
[2] Roeder, John. “Interval Class”, Grove Music Online, ed. L. Macy (accessed 27 February 2007), grovemusic.com (subscription access).
[8] Lewin, David (1987). Generalized Musical Intervals and Transformations. New Haven: Yale University Press. Reprinted Oxford University Press, 2007.
Posted by: Jonathan Vos Post on February 6, 2009 5:41 PM | Permalink | Reply to this
Re: The Cocktail Party Version
I think I see the source of the confusion, which appears to be the word “real”. I was using it loosely since this was, after all, a “Cocktail Party Version”. On the whole, when trying to be rigorous,
I prefer to avoid this concept entirely, since I’m not sure what, if anything, it refers to. But this viewpoint hasn’t really sunk into my habitual ways of using language.
I’m glad you did choose to reply - it’s always useful to understand the source of confusions that come up.
Posted by: Jeffrey Morton on February 6, 2009 7:19 PM | Permalink | Reply to this
Re: The Cocktail Party Version
In fact, I notice on closer inspection that this version of my original post doesn’t include one of the links in the original, namely the one to the Wikipedia article on ontologies in computer
science. Specifically, I linked to the notion of domain ontologies, mentioning that this gives a nice terminology.
The CS notion of an “Ontology” is as a kind of data structure or other formal representation of the basic concepts in some domain of discourse, and their relationships. We could also use the word
“theory” for “domain of discourse”: a physical theory involves making certain ontological commitments. That is, identifying what kinds of things the theory makes statements about, and including them
in the ontology of the theory.
This is part of what I think is interesting about quantum theories, whether QM, QFT, or what-have-you, even before you get to the (also interesting) notions of Hilbert spaces, expectation values, von
Neumann algebras. Namely, one of the very first ontological commitments of a quantum theory is to respect the fact that it is trying to describe empirical data, and empirical data is always
relational. Thus, what the theory actually predicts are expectation values for observables, say as an inner product between a “state” and a “costate” (which describes an observation). So, for
example, in one formalism, these features of the ontology are written as “bras” and “kets” - and what’s meaningful, empirically (and here we get into the connection between the ontology and the
epistemology of the theory), is a combination of a bra with a ket.
Indeed, both bras and kets can be represented as morphisms in a categorical formulation (for example, as described by Abramsky and Coecke). On the other hand, the inner product itself can be
interpreted (in a categorified setting) as a hom-set (i.e. collection of morphisms) in its own right. So a category-theoretic formulation of a quantum theory seems to make plenty of ontological
commitments to morphisms, but not many to objects.
Which makes sense, in that you can describe the theory of categories without making any ontological commitments to objects.
I guess that’s a more precise way of expressing the thought behind that clunky sentence.
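One way to make the bra/ket-as-morphisms reading concrete: a ket is a linear map C → H, a bra a map H → C, and their composite C → C is the inner product whose square gives the Born-rule probability. A minimal sketch with an invented two-dimensional example:

```python
# H = C^2, represented as pairs of complex numbers.
psi = (1 / 2 ** 0.5, 1 / 2 ** 0.5)   # |psi>, an equal superposition
phi = (1.0, 0.0)                      # |phi>, a basis state

# A ket is a morphism C -> H: scale the fixed vector by a scalar.
def ket(z):
    return (z * psi[0], z * psi[1])

# A bra is a morphism H -> C: pair against phi (conjugate-linear slot).
def bra(v):
    return phi[0].conjugate() * v[0] + phi[1].conjugate() * v[1]

# The empirically meaningful thing is the composite C -> C:
amplitude = bra(ket(1.0))      # <phi|psi> = 1/sqrt(2)
print(abs(amplitude) ** 2)     # Born-rule probability, 0.5
```

Nothing in this computation ever needed the objects C and H as anything more than types for the two morphisms, which is the point about ontological commitments above.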
Posted by: Jeffrey Morton on February 6, 2009 7:56 PM | Permalink | Reply to this
Re: The Cocktail Party Version
Jeffrey wrote:
In fact, I notice on closer inspection that this version of my original post doesn't include one of the links in the original, namely the one to the Wikipedia article on ontologies in computer science.
Fixed. I'd inserted the links by hand, and missed this one.
Posted by: John Baez on February 10, 2009 6:19 AM | Permalink | Reply to this
Re: The Cocktail Party Version
This is how I account for the supposedly unreasonable effectiveness of mathematics – not really any more surprising than the remarkable effectiveness of car engines at turning gasoline into
motion, or that steel girders and concrete can miraculously hold up a building. You can be surprised that anything at all might work, but it’s less amazing that the thing selected for the job
does it well.)
Yes, if there’s some surprise to be had, it’s that there are humanly understandable means of capturing aspects of the universe, when there was no direct advantage for us as a species in capturing
those aspects.
Something I’d like to understand better is why Gauss already in the early nineteenth century is $q$-deforming mathematics (even using q as his symbol).
Is it that the space of ‘rich’ mathematics is very restricted, so no surprise that humans should stumble over parts which the universe uses, when the universe has little choice?
Posted by: David Corfield on February 5, 2009 8:56 AM | Permalink | Reply to this
Re: The Cocktail Party Version
David Corfield wrote:
Yes, if there’s some surprise to be had, it’s that there are humanly understandable means of capturing aspects of the universe, when there was no direct advantage for us as a species in capturing
those aspects.
I’m on the side of the unsurprised, even more than this: I suspect many animals have found it hugely beneficial as a species to have implicit knowledge of some basic inferential machinery. We didn’t
get to formalise this stuff until we had language, and parts of it are still a struggle rather than intuitive, but even young babies will stare in amazement at many basic contradictions of the
implicit rules that we use to reason about physical objects, and those rules are where logic and mathematics come from.
Posted by: Greg Egan on February 5, 2009 10:30 AM | Permalink | Reply to this
Re: The Cocktail Party Version
I suppose the question is why should the parts of the mathematics for quantum mechanics, whose understanding doesn’t seem to be something to be selected for, be so close to that which is useful to
count and measure middle-sized objects that we stumbled upon it before we needed it for physics.
John finds it odd:
One eerie thing about these modified versions of calculus is that people discovered them before quantum mechanics
Pretty much anything you can do with calculus, you can do with the q-calculus. There are q-integrals, q-trigonometric functions, q-exponentials, and so on. If you try books like this:
2) George E. Andrews, Richard Askey, Ranjan Roy, Special Functions, Cambridge U. Press, Cambridge, 1999.
you’ll see there are even q-analogues of all the special functions you know and love - Bessel functions, hypergeometric functions and so on. And like I said, the really weird thing is that people
invented them before their relation to quantum mechanics was understood.
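A minimal numerical illustration of the basic operator of the q-calculus, the Jackson q-derivative (sketch only; the values of q and x below are arbitrary):

```python
# Numerical check of the Jackson q-derivative, the basic operator of the
# q-calculus: D_q f(x) = (f(qx) - f(x)) / ((q - 1) x).
def q_derivative(f, x, q):
    return (f(q * x) - f(x)) / ((q - 1) * x)

# On f(x) = x^3 it gives [3]_q x^2 = (1 + q + q^2) x^2, which recovers the
# classical derivative 3x^2 as q -> 1.
q, x = 1.5, 2.0
lhs = q_derivative(lambda t: t ** 3, x, q)
rhs = (1 + q + q ** 2) * x ** 2
print(abs(lhs - rhs) < 1e-9)  # True
```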
Posted by: David Corfield on February 5, 2009 11:00 AM | Permalink | Reply to this
Re: The Cocktail Party Version
One eerie thing about these modified versions of calculus is that people discovered them before quantum mechanics
I find that plenty of people are studying plenty of structures with great enthusiasm whose true origin and meaning are clearly unknown. I find this eerie, too, but maybe in a different sense: it's not so hard to just fiddle around with structures and study axiom systems. What is harder is finding out where these naturally live.
Posted by: Urs Schreiber on February 5, 2009 12:19 PM | Permalink | Reply to this
Re: The Cocktail Party Version
David, John’s TWF Week 183 that you linked to is fascinating, but I’d love to know why:
Gauss […] wrote about a q-analogue of the binomial formula and other things.
Was he just messing about, or what? Does anyone know what the attraction was at the time?
Posted by: Greg Egan on February 5, 2009 11:33 AM | Permalink | Reply to this
Re: The Cocktail Party Version
Well, q is the successor to p, and p is a prime. So q is a power of a prime. Some people were interested in studying stuff over finite fields. The quantum 6j symbols and such work out just as nicely (or even more so) over finite fields. The representation theory is a symmetric monoidal category. So there are no interesting manifold invariants that come about that way.
Ramanujan proved a whole host of these quantum identities in the context of hypergeometric series. Clearly, his motivation would have been number-theoretic.
I guess the question is whether there are other contexts for xy = q yx from a Gaussian viewpoint.
Meanwhile, when I asked the woman at the cocktail party, “What’s your sign?”, she replied, “Taurus, what’s yours, negative?”
Posted by: Scott Carter on February 5, 2009 9:51 PM | Permalink | Reply to this
Re: The Cocktail Party Version
Greg wrote, of Gauss’ work on $q$-deformed special functions:
Was he just messing about, or what? Does anyone know what the attraction was at the time?
I still don’t know. I somehow doubt Gauss was ‘just messing about’, but a brief attempt to track down the history has led to just one small fact: q-deformed binomial coefficients are also known as
‘Gaussian coefficients’. Did Gauss invent these? If so, why did he do it?
Scott wrote:
Well, $q$ is the successor to $p$, and $p$ is a prime. So $q$ is a power of a prime. Some people were interested in studying stuff over finite fields.
But was one of these people Gauss, or not? The $q$-deformed version of the binomial coefficient ‘$n$ choose $k$’ counts the number of $k$-dimensional subspaces of an $n$-dimensional vector space over
the field with $q$ elements. But did Gauss know this? And is this why those coefficients are called Gaussian coefficients?
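That count is easy to check in small cases; here is a quick Python sketch of the $q$-deformed binomial coefficient (the helper name is mine):

```python
from math import comb, prod

def q_binomial(n, k, q):
    """Gaussian binomial coefficient [n choose k]_q.

    For a prime power q this counts the k-dimensional subspaces of an
    n-dimensional vector space over the field with q elements; at q = 1
    it degenerates to the ordinary binomial coefficient.
    """
    if not 0 <= k <= n:
        return 0
    if q == 1:  # the product formula below would divide by zero at q = 1
        return comb(n, k)
    # [n choose k]_q = prod_{i=0}^{k-1} (q^(n-i) - 1) / (q^(i+1) - 1)
    num = prod(q ** (n - i) - 1 for i in range(k))
    den = prod(q ** (i + 1) - 1 for i in range(k))
    return num // den

print(q_binomial(4, 2, 2))  # 35 two-dimensional subspaces of (F_2)^4
print(q_binomial(2, 1, 2))  # 3: the three lines through 0 in (F_2)^2
```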
Posted by: John Baez on February 10, 2009 6:55 AM | Permalink | Reply to this
Re: The Cocktail Party Version
If I recall correctly from a conversation with Henry Cohn a long time ago, Gauss used some sort of q-analogues to prove the sign of the quadratic Gauss sum. I'll try to dig up more if I remember; I think I still have Gauss's paper printed out somewhere…
Posted by: Noah Snyder on February 10, 2009 7:17 AM | Permalink | Reply to this
Re: The Cocktail Party Version
Posted by: John Baez on February 10, 2009 9:11 PM | Permalink | Reply to this
Re: The Cocktail Party Version
Here’s an MAA award-winning expository paper by Henry Cohn which in footnote 1 attributes the “Gaussian binomial” moniker to Gauss’s investigation of the sign of the quadratic Gauss sum. He cites a paper of Gauss’s, “Summatio quarumdam serierum singularium”, which is in Latin and doesn’t seem to be on the internet.
However, Section 2 of this Bulletin paper by Berndt and Evans appears to recap Gauss’s proof using q-binomial coefficients.
Posted by: Noah Snyder on February 11, 2009 8:01 PM | Permalink | Reply to this
Re: The Cocktail Party Version
I wonder if Gauss was interested in the interplay between “continuum” and “discrete” theories? For example, the Gaussian distribution is the continuum limit of a discrete random walk (involving binomial coefficients). “Calculus” on a lattice often gives rise to q-deformations.
Just a thought from left field…
Posted by: Eric on February 10, 2009 3:22 PM | Permalink | Reply to this
Gauss Sums; Re: The Cocktail Party Version
What did Gauss know and when did he know it?
This committee has subpoenaed Gauss to ask this, and some related questions. Members should notice that he failed to appear, per that subpoena. Now, we must presume him innocent until proven guilty.
Meantime, I have distributed copies of the book:
Gauss and Jacobi sums
B.C. Berndt, R.J Evans, K.S. Williams, 1998
and the papers:
Nonlinear Bayesian estimation using Gaussian sum approximations
D. Alspach, H. Sorenson - Automatic Control, IEEE Transactions on, 1972
Quadratic Gauss Sums
O.D. Mbodj - Finite Fields and Their Applications, Elsevier, 1998
We now call Bruce C. Berndt and Ronald J. Evans as witnesses.
Their prior testimony:
The Determination of Gauss Sums, AMERICAN MATHEMATICAL SOCIETY, 1981.
Posted by: Jonathan Vos Post on February 11, 2009 5:13 PM | Permalink | Reply to this
Gauss’s diary; Berndt and Evans; Re: Gauss Sums; Re: The Cocktail Party Version
The codebreakers at the NSA provide this analysis of an intercepted message on the topic in question.
On August 30, 1805, Gauss wrote in his diary [63, pp. 37, 57], “Demonstrate theorematis venustissimi supra 1801 Mai commemorati, quam per 4 annos et ultra omni contentione quaesiveramus, tandem
perfecimus.” (At length we achieved a demonstration of the very elegant theorem mentioned before in May, 1801, which we had sought for more than four years with all efforts.)
Gauss’s proof, which is elementary, was published in 1811 [64], [66, pp. 9-45,
It may be remarked that eigenvector decompositions for the finite Fourier transform (Schur’s matrix) have been given by McClellan and Parks [188] and by Morton [189]. Regarding §2.3, note that in
Eichler’s book [184, p. 137], a reciprocity formula for Gauss sums attached to quadratic forms is proved
from the transformation formula for the theta-function. In connection with the paragraph preceding (10.1), note that Joris [186] has shown how the functional equation for Dirichlet L-series can be
used to evaluate imprimitive Gauss sums in terms of primitive ones. Regarding the two paragraphs following (10.1), note that conductors of Gauss sums as Hecke characters have been investigated by
Schmidt [190]. To the list of papers near the end of §10 giving interesting generalizations of Gauss sums, one should add the papers of O’Meara [187] and Jacobowitz [185], which use Gauss sums over
lattices to classify local integral quadratic (respectively, hermitian) forms.
Also, Carlitz [183] has evaluated a character sum which generalizes a cubic Gauss sum over GF(2^n).
Using an estimate for generalized Kloosterman sums due to Deligne [40, p. 219], one can easily show that
$\sum_{\chi} G(\chi)^k = O\bigl(p^{(k+1)/2}\bigr)$
as $p \to \infty$, where $k$ is an arbitrary, fixed natural number and where the sum is over all characters $\chi \pmod{p}$. In particular, it follows that the arguments of the Gauss sums $G(\chi)$ are asymptotically uniformly distributed as $p \to \infty$ [191].
Posted by: Jonathan Vos Post on February 11, 2009 5:29 PM | Permalink | Reply to this
Wolfram Demonstrations Project
Vapor-Phase Solubility of Decane in Nitrogen as Functions of Temperature and Pressure
This Demonstration computes the solubility of a high-boiling fluid (n-decane) in supercritical nitrogen at various high pressures for user-set values of the temperature. Two approximate methods for solubility prediction are compared: (1) the ideal gas assumption (blue curve); and (2) real gas behavior, described by the virial equation of state (red curve). Here, the fugacity coefficients are computed using the virial equation.
The first snapshot corresponds to a temperature of 50 °C and shows excellent agreement with Figure 5-41 in [1]. The value of the virial coefficient at 50 °C given in [1], determined by fitting the model to experimental solubility data, compares favorably with that obtained in the present calculation at the same temperature.
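The qualitative effect being plotted can be sketched numerically. In this illustration every parameter value is invented for the sketch; none is a fitted value from [1]:

```python
from math import exp

# Rough sketch of the two estimates (ideal gas vs. virial) the
# Demonstration compares. All numbers below are assumptions.
R = 8.314        # J/(mol K)
T = 323.15       # K, i.e. 50 degrees C
P = 50e5         # Pa, total pressure (50 bar)
P_sat = 300.0    # Pa, assumed pure-decane vapor pressure at T

y_ideal = P_sat / P              # ideal-gas estimate of decane's vapor mole fraction

B = -1.0e-3                      # m^3/mol, assumed effective second virial coefficient
phi = exp(B * P / (R * T))       # crude fugacity coefficient for dilute decane
y_real = y_ideal / phi           # real-gas estimate

# A negative effective B gives phi < 1, i.e. the real-gas solubility is
# enhanced relative to the ideal-gas value -- the gap between the two curves.
print(phi < 1, y_real > y_ideal)  # True True
```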
[1] J. M. Prausnitz, R. N. Lichtenthaler, and E. Gomes de Azevedo,
Molecular Thermodynamics of Fluid-Phase Equilibria
, 3rd ed., Upper Saddle River, NJ: Prentice Hall, 1999.
Narberth Algebra Tutor
Find a Narberth Algebra Tutor
...In my graduate program I worked as a TA in undergraduate chemistry courses, where I taught students in recitation and office hours. I also undertook an independent research project at the
Wistar Institute, and have had my work published in a lead author paper. Please contact me if you want to improve your performance in science and math courses.
10 Subjects: including algebra 2, algebra 1, calculus, chemistry
...Each teacher received special training on how to aide students with a variety of differences, including ADD and ADHD. There and since I have worked with several students with ADD and ADHD both
in their math content areas and with executive skills to help them succeed in all areas of their life. I have tutored test taking for many tests, including the Praxis many times.
58 Subjects: including algebra 1, algebra 2, chemistry, reading
...Celce-Murica and D. Larsen-Freeman's THE GRAMMAR BOOK as a required text. Apart from this course, I like reviewing and revisiting grammar rules as it very useful in analyzing text and
particularly useful if one decides to study law.
12 Subjects: including algebra 1, algebra 2, geometry, chemistry
...The first step in music theory is simply being able to read music, the names of the lines and spaces. After that comes scales and keys. Once major and minor scales are understood, it's simply a
matter of memorizing the keys.
16 Subjects: including algebra 1, algebra 2, reading, English
...At first, learning a new language is difficult, but once you get the concepts and basics, learning the language becomes much easier. So I try to break each concept, mechanism and problem down to its bare parts and build an understanding so that each concept, mechanism and problem can be sol...
6 Subjects: including algebra 2, algebra 1, chemistry, prealgebra
Crypt::DH::GMP - Crypt::DH Using GMP Directly
use Crypt::DH::GMP;
my $dh = Crypt::DH::GMP->new(p => $p, g => $g);
my $val = $dh->compute_secret();
# If you want compatibility with Crypt::DH (it uses Math::BigInt)
# then use this flag
# You /think/ you're using Crypt::DH, but...
use Crypt::DH::GMP qw(-compat);
my $dh = Crypt::DH->new(p => $p, g => $g);
my $val = $dh->compute_secret();
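For readers who want to see the underlying arithmetic outside of Perl, here is a toy sketch in Python with textbook-sized numbers (p = 23, g = 5 is the classic illustration and is far too small for real use):

```python
# Toy sketch of the arithmetic behind pub_key and compute_secret.
# Real Diffie-Hellman needs a large safe prime, not p = 23.
p, g = 23, 5

a_priv, b_priv = 6, 15            # each party's secret exponent
a_pub = pow(g, a_priv, p)         # the value pub_key exposes, as an integer
b_pub = pow(g, b_priv, p)

# compute_secret: raise the peer's public value to your own secret exponent
shared_a = pow(b_pub, a_priv, p)
shared_b = pow(a_pub, b_priv, p)
print(shared_a, shared_b)         # 2 2 -- both sides derive the same secret
```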
Crypt::DH::GMP is a (somewhat) portable replacement to Crypt::DH, implemented mostly in C.
In the beginning, there was Crypt::DH. However, Crypt::DH suffers from a couple of problems:
Crypt::DH works with a plain Math::BigInt, but if you want to use it in production, you almost always need to install Math::BigInt::GMP or Math::BigInt::Pari because without them, the computation
that is required by Crypt::DH makes the module pretty much unusable.
Because of this, Crypt::DH might as well make Math::BigInt::GMP a hard requirement.
With or without Math::BigInt::GMP or Math::BigInt::Pari, Crypt::DH makes several round trip conversions between Perl scalars, Math::BigInt objects, and finally its C representation (if GMP/Pari
are installed).
Instantiating an object comes with a relatively high cost, and if you make many computations in one go, your program will suffer dramatically because of this.
These problems quickly become apparent when you use modules such as Net::OpenID::Consumer, which needs to make a few calls to Crypt::DH.
Crypt::DH::GMP attempts to alleviate these problems by providing a Crypt::DH-compatible layer, which, instead of doing calculations via Math::BigInt, directly works with libgmp in C.
This means that we've essentially eliminated 2 call stacks worth of expensive Perl method calls and we also only load 1 (Crypt::DH::GMP) module instead of 3 (Crypt::DH + Math::BigInt +
These add up to a fairly significant increase in performance.
Crypt::DH::GMP absolutely refuses to consider using anything other than strings as its parameters and/or return values. Therefore, if you would like to use Math::BigInt objects as your return values, you cannot use Crypt::DH::GMP directly. Instead, you need to be explicit about it:
use Crypt::DH;
use Crypt::DH::GMP qw(-compat); # must be loaded AFTER Crypt::DH
Specifying -compat invokes a very nasty hack that overwrites Crypt::DH's symbol table -- this then forces Crypt::DH users to use Crypt::DH::GMP instead, even if you are writing
my $dh = Crypt::DH->new(...);
By NO MEANS is this an exhaustive benchmark, but here's what I get on my MacBook (OS X 10.5.8, 2.4 GHz Core 2 Duo, 4GB RAM)
Benchmarking instatiation cost...
Rate pp gmp
pp 9488/s -- -79%
gmp 45455/s 379% --
Benchmarking key generation cost...
Rate gmp pp
gmp 6.46/s -- -0%
pp 6.46/s 0% --
Benchmarking compute_key cost...
Rate pp gmp
pp 12925/s -- -96%
gmp 365854/s 2730% --
Computes the key, and returns a string that is byte-padded two's complement in binary form.
Returns the pub_key as a string that is byte-padded two's complement in binary form.
Daisuke Maki <daisuke@endeworks.jp>
This program is free software; you can redistribute it and/or modify it under the same terms as Perl itself.
See http://www.perl.com/perl/misc/Artistic.html
Natural Numbers
Wednesday, February 20, 2013
Natural Numbers
The first kind of numbers humanity discovered were the Natural Numbers. Now, this statement needs to be explained.
First, what does “kind of numbers” mean? Well, there are many of them! Like, for instance, the numbers you use to count (one apple, two apples, three apples) aren’t the same as the numbers you use to
talk about quantum mechanics (such-and-such has an amplitude of √2(1+i)…)
Then there’s “discovered.” So that implies that the numbers are, somehow, “out there”? Well, that’s a philosophical issue, but I’d say that yes, yes they are. I mean, 2 apples plus 2 apples equals 4
apples, but it seems to me that the statement 2 + 2 = 4 is somehow deeper than that.
Anyway. There are the Natural Numbers. But what are they? How do we define them?
"Well, there’s 0, 1, 2, 3… you know, numbers."
Yes, but how would you explain these numbers to an alien species that doesn’t already know them?
"Uh… well…"
Of course you don’t need to do that, because in your head you already know what the Natural Numbers are. Still, it’d be nice if we had a set of rules that define them. So let’s do that.
1. 0 is a natural number.
Okay? That sounds like a good starting point. Now, we have an equality relation.
2. For every natural number n, n = n.
3. For every natural numbers m and n, if m = n then n = m.
4. For every natural numbers m, n and p, if m = n and n = p then m = p.
5. If m is a natural number and m = n, then n is a natural number.
Further, we have an operation called successorship, S(n).
6. For every natural number n, S(n) is also a natural number. (This statement is usually referred to as “the natural numbers are closed under S.”)
"Okay, so we have that 0 is a natural number, and S0 is a natural number. But since S0 is a natural number, SS0 is also a natural number. Okay! We’re done, aren’t we?"
Not quite.
7. For every natural numbers m and n, if S(m) = S(n) then m = n.
8. There is no natural number n such that S(n) = 0.
"Okay, now we’ve defined natural numbers, haven’t we?”
Indeed. But there’s a problem with the above. There are sequences that obey these rules and are not Natural Numbers.
"Huh? How is that?"
Well, the sequence …-2*, -1*, 0*, 1*, 2*,… obeys these rules nicely.
"Hold on… that doesn’t…"
Just look at it. 0 isn’t in it, but no one said it had to be. The numbers are also unique, and closed under S. And in fact, there’s an infinite number of infinite-in-both-directions sequences that
obey all these rules.
"Alright! So we can just say that… that any sequence that’s the natural numbers must have 0 or something…”
Yes, yes indeed.
9. If K is a set that contains 0 and is closed under S, then K contains all natural numbers.
There. Now we’re done. We’ve just pinpointed all of the Natural Numbers.
These rules are called the Peano Axioms, and they uniquely define the Natural Numbers. That is to say, if anyone ever asks you what the Natural Numbers are, you just tell them to follow these strict
rules, and they will create a sequence that is isomorphic to them.
Publications of Miklós Maróti
1. J. Kincses, G. Makay, M. Maróti, J. Osztényi and L. Zádori, A Special case of the Stahl conjecture, European Journal of Combinatorics 34 (2013), 502–511. (doi)
2. P. Marković, M. Maróti and R. McKenzie, Finitely related clones and algebras with cube-terms, Order 29 Issue 2 (2012), 345–359. (doi)
3. M. Maróti and L. Zádori, Reflexive digraphs with near-unanimity polymorphisms, Discrete Mathematics 312, Issue 15 (2012), 2316–2328. (doi)
4. G. Czédli and M. Maróti, On the height of order ideals, Mathematica Bohemica 135 (2010), 69–80.
□ E. K. Horváth, Z. Németh and G. Pluhár, The number of triangular islands on a triangular grid, Periodica Mathematica Hungarica, 58 (2009), no 1, 25–34.
5. L. Barto, M. Kozik, M. Maróti, R. McKenzie and T. Niven, Congruence modularity implies cyclic terms for finite algebras, Algebra Universalis 61 (2009), no. 3–4, 365–380. (doi)
□ V. Bárány, G. Gottlob and M. Otto, Querying the Guarded Fragment, in 25th Annual IEEE Symposium on Logic in Computer Science (LICS 2010), 1–10.
6. W. Dziobiak, J. Ježek and M. Maróti, Minimal varieties and quasivarieties of semilattices with one automorphism, Semigroup Forum 78 (2009), no. 2, 253–261. (doi)
7. L. Barto, M. Kozik, M. Maróti and T. Niven, CSP dichotomy for special triads, Proceedings of the AMS 137 (2009), 2921–2934. (doi)
□ B. Trotta, Residual properties of pre-bipartite digraphs, Algebra Universalis (14 November 2010), pp. 1–26.
□ V. Kolmogorov and S. Zivný, The complexity of conservative finite-valued CSPs, http://arxiv.org/abs/1008.1555v1.
□ P. Hell and A. Rafiey, Duality for min-max orderings and dichotomy for min cost homomorphisms, http://arxiv.org/abs/0907.3016v1.
□ V. Kolmogorov, A dichotomy theorem for conservative general-valued CSPs, http://arxiv.org/abs/1008.4035v1.
□ D. A. Cohen‚ P. Creed‚ P. G. Jeavons and S. Zivný, An algebraic theory of complexity for valued constraints: establishing a galois connection, University of Oxford, RR-10-16.
□ A. Atserias and M. Weyer, Decidable relationships between consistency notions for constraint satisfaction problems, Lecture Notes in Computer Science, 2009, vol. 5771/2009, 102–116.
8. W. Dziobiak, M. Maróti, R. McKenzie and A. Nurakunov, The weak extension property and finite axiomatizability for quasivarieties, Fundamenta Mathematicae 202 (2009), 199–223. (doi)
9. J. Ježek, T. Kepka and M. Maróti, The endomorphism semiring of a semilattice, Semigroup Forum 78 (2009), no. 1, 21–26. (doi)
10. G. Czédli and M. Maróti, Two notes on the variety generated by planar modular lattices, Order 26 (2009), no. 2, 109–117. (doi)
11. M. Maróti, The existence of a near-unanimity term in a finite algebra is decidable, Journal of Symbolic Logic 74 (2009), no. 3, 1001–1014. (doi)
□ B. Larose, C. Loten and C. Tardif, A characterisation of first-order constraint satisfaction problems, Proc. of the 21st IEEE Symposium on Logic in Computer Science (LICS), 2006, 201–210.
□ P. Idziak, P. Marković, R. McKenzie, M. Valeriote and R. Willard, Tractability and learnability arising from algebras with few subpowers, in Porc. 22nd IEEE Symposium on Logic in Computer
Science (LICS 2007), 213–224.
□ J. Foniok and C. Tardif, Adjoint functors and tree duality, Discrete Mathematics & Theoretical Computer Science, vol. 11, no. 2 (2009), 97–110.
□ C. Loten and C. Tardif, Near-unanimity polymorphisms on structures with finite duality, preprint (2008), 10 pages.
12. C. Carvalho, V. Dalmau, P. Marković and M. Maróti, CD(4) has bounded width, Algebra Universalis 60 (2009), no. 3, 293–307. (doi)
□ A. A. Bulatov, A. Krokhin and B. Larose, Dualities for constraint satisfaction problems, Lecture Notes in Computer Science, 2008, Volume 5250/2008, 93–124.
13. G. Czédli, M. Maróti and E. T. Schmidt, On the scope of averaging for Frankl's conjecture, Order 26 (2009), no. 1, 31–48. (doi)
□ E. K. Horváth, Z. Németh and G. Pluhár, The number of triangular islands on a triangular grid, Periodica Mathematica Hungarica, 58 (2009), no 1, 25–34.
14. M. Maróti and R. McKenzie, Existence theorems for weakly symmetric operations, Algebra Universalis 59 (2008), no. 3–4, 463–489. (doi)
□ L. Barto, M. Kozik and T. Niven, The CSP dichotomy holds for digraphs with no sources and no sinks (A positive answer to a conjecture of Bang-Jensen and Hell), SIAM Journal on Computing,
Volume 38 Issue 5, December 2008, 1782–1802.
□ L. Barto and M. Kozik, Constraint satisfaction problems of bounded width, Foundations of Computer Science, 2009, 595–603.
□ P. Hell and J. Nesetril, Colouring, constraint satisfaction, and complexity, Computer Science Review, vol. 2, no. 3, 143–163.
□ A. A. Bulatov, A. Krokhin and B. Larose, Dualities for constraint satisfaction problems, Lecture Notes in Computer Science, 2008, Volume 5250/2008, 93–124.
□ L. Barto, M. Kozik and T. Niven, Graphs, polymorphisms and the complexity of homomorphism problems, Proceedings of the 40th annual ACM symposium on Theory of Computing, 2008, 789–796.
□ P. Jonsson, A. Krokhin and F. Kuivinen, Hard constraint satisfaction problems have hard gaps at location 1, Theoretical Computer Science, vol. 410 (2009), no. 38–40, 3856–3874.
□ L. Barto and M. Kozik, New conditions for Taylor varieties and CSP, in 25th Logic in Computer Science (LICS 2010), 100–109.
□ J. Nesetril, M. H. Siggers and L. Zádori, A combinatorial constraint satisfaction problem dichotomy classification conjecture, European Journal of Combinatorics, vol. 31 (2010), 280–296.
□ C. Carvalho, V. Dalmau and A. Krokhin, CSP duality and trees of bounded pathwidth, Theoretical Computer Science, 2010.
□ M. Karimi and A. Gupta, Minimum cost homomorphisms to oriented cycles with some loops, Fifteenth Australasian Symposium on Computing: The Australasian Theory-Volume 94, 2009, 9–20.
□ M. M. Stronkowski and D. Stanovsky, Embedding general algebras into modules, Proc. Amer. Math. Soc., vol. 138, 2010, 2687–2699.
□ L. Egri, A. Krokhin, B. Larose and P. Tesson, The complexity of the list homomorphism problem for graphs, arXiv:0912.3802, 2009.
□ P. Hell and A. Rafiey, The dichotomy of list homomorphisms for digraphs, arXiv:1004.2908, 2010.
□ L. Barto and M. Kozik, Cyclic terms for SD(v) varieties revisited, Algebra Universalis, vol. 64, no. 1-2, 137–142
□ L. Zádori, Solvability of systems of polynomial equations over finite algebras, International Journal of Algebra and Computation, vol. 17, no. 4 (2007), 821–835.
□ V. Bárány, G. Gottlob and M. Otto, Querying the Guarded Fragment, in 25th Annual IEEE Symposium on Logic in Computer Science (LICS 2010), 1–10.
□ L. Barto and D. Stanovsky, Polymorphisms of small digraphs, submitted.
15. B. A. Davey, M. Jackson, M. Maróti and R. McKenzie, Principal and syntactic congruences in congruence-distributive and congruence-permutable varieties, J. Australian Math. Soc. 85 (2008), no. 1,
59–74. (doi)
□ A. Kelarev, J. Yearwood and P. Watters, Internet security applications of Gröbner-Shirshov bases, Asian-European Journal of Mathematics, vol. 3, no. 3 (2010), 435–442.
□ A. Kelarev, P. Watters and J. Yearwood, Rees matrix constructions for clustering of data, Journal of the Australian Mathematical Society, vol. 87, no. 3 (2009), 377–393.
16. M. Maróti, On the (un)decidability of a near-unanimity term, Algebra Universalis 57 (2007), no. 2, 215–237. (doi)
□ B. Larose, C. Loten and L. Zádori, A polynomial-time algorithm for near-unanimity graphs, J. Algorithms 55 (2005), no. 2, 177–191.
17. K. Adaricheva, M. Maróti, R. McKenzie, J. B. Nation and Eric R. Zenk, The Jónsson-Kiefer property, Special issue of Studia Logica in memory of Willem Johannes Blok 83 (2006), no. 1–3, 111–131. (
□ J. Cirulis, Nearlattices with an overriding operation, Order, online, 1–19.
□ M. Semenova, On lattices embeddable into lattices of algebraic subsets, Algebra Universalis, vol. 61, no. 3 (2009), 399–405.
18. J. Ježek, M. Maróti and R. McKenzie, Quasiequational theories of flat algebras, Czechoslovak Mathematical Journal 55 (2005), no. 3, 665–675. (doi)
□ B. A. Davey, M. Jackson, J. G. Pitkethly and M. R. Talukder, Natural dualities for semilattice-based algebras, Algebra Universalis, 57 (2007), 463–490.
□ M. Jackson and T. Stokes, Identities in the algebra of partial maps, Int. J. Algebra Comput. 16 (2006), 1131–1159.
□ E. W. Graczynska, On the problem of M-hyperquasivarieties, http://arxiv.org/abs/math/0609600v3.
□ M. Jackson, Flat algebras and the translation of universal Horn logic to equational logic, Journal of Symbolic Logic, vol. 73 (2008), 90–128.
19. M. Maróti and R. McKenzie, Finite basis problems and results for quasivarieties, Studia Logica 78 (2004) November, no. 1–2, 293–320. (doi)
□ K. A. Baker, G. F. McNulty and Ju Wang, An extension of Willard's finite basis theorem: congruence meet-semidistributive varieties of finite critical depth, Algebra Universalis 52 (2004), no.
2-3, 289–302.
□ R. Willard, An overview of modern universal algebra, Tutorial at the Logic Colloquium, Torino, 2004.
□ R. S. Madarász, From sets to universal algebras (in Serbian), Faculty of Sciences, University of Novi Sad Press, 2006.
□ M. Jackson, Residual bounds for compact totally disconnected algebras, Houston J. Math, vol. 34 (2008), 33–67.
□ D. Casperson and J. Hyndman, Primitive positive formulas preventing a finite basis of quasi-equations, International Journal of Algebra and Computation, vol. 19, no. 7 (2009), 925–935.
□ A. M. Nurakunov and M. M. Stronkowski, Quasivarieties with definable relative principal subcongruences, Studia Logica, vol. 92 (2009), 109–120.
20. R. Freese, J. Ježek, P. Jipsen, P. Marković, M. Maróti and R. McKenzie, The variety generated by order algebras, Algebra Universalis 47 (2002), no. 2, 103–138. (doi)
□ J. Berman and W. J. Blok, Algebras defined from ordered sets and the varieties they generate, Order 23 (2006), no. 1, 65–88.
□ C. Guido and P. Toto, Extended-order algebras, Journal of Applied Logic, vol. 6, no. 4 (2008), 609–626.
□ I. Dolinka and P. Dapic, Quasilinear varieties of semigroups, Semigroup Forum, vol. 79, no. 3 (2009), 445–450.
21. J. Ježek, P. Marković, M. Maróti and R. McKenzie, Equations of tournaments are not finitely based, Discrete Mathematics 211 (2000), no. 1–3, 243–248. (doi)
□ A. A. Bulatov, Combinatorial problems raised from 2-semilattices, J. Algebra 298 (2006), no. 2, 321–339.
□ J. D. Berman and G. H. Bordalo, Irreducible elements and uniquely generated algebras, Discrete Math. 245 (2002), no. 1-3, 63–79.
□ I. Bošnjak and R. Madarász, On power structures, Algebra and Discrete Mathematics 2 (2003), 14–35.
□ V. Müller, J. Nesetril and V. Rödl, Some recollections on early work with Jan Pelant, Topology and its Applications, vol. 156, no. 7 (2009), 1438–1443.
22. J. Ježek, P. Marković, M. Maróti and R. McKenzie, The variety generated by tournaments, Acta Univ. Carolinae 40 (1999), no. 1, 21–41.
23. M. Maróti, Semilattices with a group of automorphisms, Algebra Universalis 38 (1997), no. 3, 238–265. (doi)
1. Tutorial on the constraint satisfaction problem.^* Summer School on General Algebra and Ordered Sets, Nový Smokovec, Slovakia, September 2–7, 2012.
2. Directed graphs and Maltsev conditions. Summer School on General Algebra and Ordered Sets, Podlesí, Czech Republic, September 3–9, 2011.
3. Beyond bounded width and few subpowers.^* Workshop on Algebra and CSPs, Toronto, Canada, August 2–6, 2011.
4. Polymorphisms of reflexive digraphs.^* 2nd Int. Conf. on Order, Algebra and Logic, Krakow, Poland, June 6–10, 2011.
5. CSP reductions. Summer School on General Algebra and Ordered Sets, Malenovice, Czech Republic, September 4–10, 2010.
6. Minimal quasivarieties of semilattices with a group of automorphisms.^* Int. Conference on Algebras and Lattices (Jardafest), Prague, Czech Republic, June 21–25, 2010.
7. The constraint satisfaction problem for bounded width and Maltsev algebras.^* BLAST Conference, Las Cruces, USA, August 10–14, 2009.
8. Bounded relational width and congruence distributivity.^* 76th Workshop on General Algebra (AAA76), Linz, Austria, May 22–25, 2008.
9. Gumm terms imply cyclic terms for finite algebras. Conference on Algorithmic Complexity and Universal Algebra, Szeged, Hungary, July 16–20, 2007.
10. Existence of weakly symmetric operations.^* Workshop on Universal Algebra and the Constraint Satisfaction Problem, Nashville, USA, June 17–20, 2007.
11. Near-unanimity terms are decidable.^* Int. Conference on Order, Algebra, and Logics, in conjunction with the 22nd annual Shanks Lectures, Nashville, USA, June 12–16, 2007.
12. Radio interferometric positioning. ACM 3rd Conference on Embedded Networked Sensor Systems (SenSys), San Diego, USA, November 2–4, 2005.
13. The existence of a near-unanimity term in a finite algebra is decidable. 1101st AMS Meeting, Special Session on Universal Algebra and Order, Lincoln, USA, October 21–23, 2005.
14. The flooding time synchronization protocol. ACM 2nd Conference on Embedded Networked Sensor Systems (SenSys), Baltimore, USA, November 3–5, 2004.
15. Finite axiomatizability for quasivarieties. 999th AMS Meeting, Special Session on Universal Algebra and Lattice Theory, Nashville, USA, October 16–17, 2004.
16. Finite basis problems and results for quasivarieties.^* Logic Colloquium 2004, ASL European Summer Meeting, Torino, Italy, July 25–31, 2004.
17. Wireless sensor network based shooter localization. Center for Hybrid and Embedded Software Systems (CHESS), Berkeley, USA, April 27, 2004.
18. Timers and Time Synchronization. TinyOS Technology Exchange, Berkeley, USA, February 26, 2004.
19. Practical mathematical problems in embedded sensor networks. Workshop on the Fundamentals of Sensor Webs (FUSE), Berkeley, USA, May 9, 2003.
20. Middleware design in networked embedded systems. Center for Hybrid and Embedded Software Systems (CHESS), Berkeley, USA, May 8, 2003.
21. Distributed middleware services composition and synthesis technology. IEEE Aerospace Conference, Big Sky, USA, March 8–15, 2003.
22. The finite quasi-equational base problem.^* Conference on Universal Algebra and Lattice Theory, Szeged, Hungary, July 22–26, 2002.
23. The variety generated by tournaments. International Conference on Modern Algebra, Nashville, USA, May 21–24, 2002.
24. On the variety generated by tournaments. 963rd AMS Meeting, Special Session on Algebras, Lattices, Varieties, Columbia, USA, March 16–18, 2001.
25. The equational theory of residuated lattices. Workshop on Ordered Algebraic Structures, Nashville, USA, March 9–11, 2000.
26. Metaprogrammable toolkit for model-integrated computing. IEEE Conference and Workshop on Engineering of Computer-Based Systems, Nashville, USA, March 7–12, 1999.
27. Some undecidable problems in universal algebra. Conference for Graduate Students in Mathematics, Szeged, Hungary, December 16–18, 1998.
28. The variety generated by tournaments. 931st AMS Meeting, Special Session on Semigroups, Algorithms and Universal Algebra, Louisville, USA, March 20–21, 1998.
29. Semilattices with a group of automorphisms. Workshop on General Algebra and 12th Conference of Young Algebraists, Technical University of Dresden, Germany, February 7–9, 1997.
30. Semilattices with a commutative group of automorphisms. Universal Algebra and Lattice Theory, Szeged, Hungary, July 15–19, 1996.
The Analysis of a Fringe Heuristic for Binary Search Trees
- SIAM Journal on Computing, 1998
Cited by 50 (8 self)
Random binary search trees, b-ary search trees, median-of-(2k+1) trees, quadtrees, simplex trees, tries, and digital search trees are special cases of random split trees. For these trees, we offer a universal law of large numbers and a limit law for the depth of the last inserted point, as well as a law of large numbers for the height.
, 2002
Cited by 22 (10 self)
Cauchy-Euler differential equations surfaced naturally in a number of sorting and searching problems, notably in quicksort and binary search trees and their variations. Asymptotics of coefficients of
functions satisfying such equations has been studied for several special cases in the literature. We study in this paper the most general framework for Cauchy-Euler equations and propose an
asymptotic theory that covers almost all applications where Cauchy-Euler equations appear. Our approach is very general and requires almost no background on differential equations. Indeed the whole
theory can be stated in terms of recurrences instead of functions. Old and new applications of the theory are given. New phase changes of limit laws of new variations of quicksort are systematically
derived. We apply our theory to about a dozen of diverse examples in quicksort, binary search trees, urn models, increasing trees, etc.
- Random Structures and Algorithms , 1991
Cited by 20 (2 self)
Limit laws for several quantities in random binary search trees that are related to the local shape of a tree around each node can be obtained very simply by applying central limit theorems for m-dependent random variables. Examples include: the number of leaves (L_n), the number of nodes with k descendants (k fixed), the number of nodes with no left child, the number of nodes with k left descendants. Some of these results can also be obtained via the theory of urn models, but the present method seems easier to apply.
- Algorithmica, 2006
Cited by 13 (6 self)
We use large deviations to prove a general theorem on the asymptotic edge-weighted height H*_n of a large class of random trees for which H*_n ∼ c log n for some positive constant c. A graphical interpretation is also given for the limit constant c. This unifies what was already known for binary search trees [11], [13], random recursive trees [12] and plane oriented trees [23] for instance. New applications include the heights of some random lopsided trees [19] and of the intersection of random trees.
Cited by 12 (6 self)
Fringe analysis is a technique used to study the average behavior of search trees. In this paper we survey the main results regarding this technique, and we improve a previous asymptotic theorem. At the same time we present new developments and applications of the theory which allow improvements in several bounds on the behavior of search trees. Our examples cover binary search trees, AVL trees, 2-3 trees, and B-trees.
, 2001
Cited by 11 (6 self)
A fine analysis is given of the transitional behavior of the average cost of quicksort with median-of-three. Asymptotic formulae are derived for the stepwise improvement of the average cost of
quicksort when iterating median-of-three k rounds for all possible values of k. The methods used are general enough to apply to quicksort with median-of-(2t + 1) and to explain in a precise manner
the transitional behaviors of the average cost from insertion sort to quicksort proper. Our results also imply nontrivial bounds on the expected height, "saturation level", and width in a random
locally balanced binary search tree.
, 1997
Cited by 7 (2 self)
We consider here the probabilistic analysis of the number of descendants and the number of ascendants of a given internal node in a random search tree. The performance of several important algorithms
on search trees is closely related to these quantities. For instance, the cost of a successful search is proportional to the number of ascendants of the sought element. On the other hand, the
probabilistic behavior of the number of descendants is relevant for the analysis of paged data structures and for the analysis of the performance of quicksort, when recursive calls are not made on
small subfiles. We also consider the number of ascendants and descendants of a random node in a random search tree, i.e., the grand averages of the quantities mentioned above. We address these
questions for standard binary search trees and for locally balanced search trees. These search trees were introduced by Poblete and Munro and are binary search trees such that each subtree of size 3
is balanced; in oth...
- SIAM Journal on Computing, 2005
Cited by 3 (0 self)
Abstract. We solve the open problem of characterizing the leading constant in the asymptotic approximation to the expected cost used for random partial match queries in random k-d trees. Our approach is new and of some generality; in particular, it is applicable to many problems involving differential equations (or difference equations) with polynomial coefficients. Key words. k-d trees, partial-match queries, differential equations, average-case analysis of algorithms, method of linear operators, asymptotic analysis.
, 1995
Cited by 1 (0 self)
Random search trees have the property that their depth depends on the order in which they are built. They have to be balanced in order to obtain a more efficient storage-and-retrieval data structure. Balancing a search tree is time consuming. This explains the popularity of data structures which approximate a balanced tree but have lower amortised balancing costs, such as AVL trees, Fibonacci trees and 2-3 trees. The algorithms for maintaining these data structures efficiently are complex and hard to derive. This observation has led to insertion algorithms that perform local balancing around the newly inserted node, without backtracking on the search path. This is also called a fringe heuristic. The resulting class of trees is referred to as 1-locally balanced trees, in this note referred to as hairy trees. In this note a simple analysis of their behaviour is provided. Keywords: search trees, heuristic balancing, local balancing, fringe heuristic, hairy trees.
- Institute for Mathematical Sciences Lecture Notes Series, 2005
Cited by 1 (0 self)
Abstract. Under certain conditions, sums of functions of subtrees of a random binary search tree are asymptotically normal. We show how Stein’s method can be applied to study these random trees, and
survey methods for obtaining limit laws for such functions of subtrees.
Vector problem
May 3rd 2011, 02:55 PM
Q: an airplane is flying at an airspeed of 600 km/hr in a cross-wind that is 30 degrees west of north at a speed of 50 km/hr. In what direction should the plane fly to end up going due west?
It's been years since I did this, but I'm having a test and this problem will show up on it.
May 3rd 2011, 03:39 PM
air vector + wind vector = ground vector
let $\theta$ = angle relative to due West for the Air vector
using the method of components ...
$A_x + W_x = G_x$
$600\cos{\theta} + 50\cos(60) = G$
$A_y + W_y = G_y$
$600\sin{\theta} + 50\sin(60) = 0$
solve for $\theta$ using the 2nd equation. If you need to find the groundspeed, $G$, use the value found for $\theta$ and the 1st equation.
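Working the responder's component equations through numerically (a sketch; variable names are mine, and as above the 60-degree angles come from measuring the wind relative to due west):

```python
import math

air, wind = 600.0, 50.0

# G_y = 0:  600 sin(theta) + 50 sin(60 deg) = 0  ->  solve for theta
theta = math.asin(-wind * math.sin(math.radians(60)) / air)
theta_deg = math.degrees(theta)   # negative, i.e. slightly south of west

# Groundspeed from the x-equation: G = 600 cos(theta) + 50 cos(60 deg)
ground = air * math.cos(theta) + wind * math.cos(math.radians(60))

print(theta_deg, ground)   # about -4.14 degrees and about 623.4 km/h
```

So the plane should head roughly 4.1 degrees south of due west, giving a groundspeed of about 623 km/h.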
Minimization under non-linear constraints
There is a linear function of two variables that I am trying to minimize under an equality constraint. But, the constraint is non linear in the variables. Is there any technique to solve this? or can
I use approximations and linearize the constraint?
4 Answers
In addition to the tip about using Lagrange multipliers, take a look at http://en.wikipedia.org/wiki/Nonlinear_programming which has a small paragraph about methods for solving nonlinear optimization problems.

If you know (or can show) that the problem is convex and you want to learn techniques for convex nonlinear optimization, take a look at the following textbook: http://www.stanford.edu/~boyd/cvxbook/ (pdf available).
You need Lagrange multipliers (http://en.wikipedia.org/wiki/Lagrange_multipliers).
To be precise, the constraint equality is exponential in the two variables. Will Lagrange multipliers help here?
As I understand, your problem looks like this:
$\min_{x,y} \Phi=a_{1} x + a_{2}y$
s.t. $f(x,y) = 0$
where $f(x,y)$ looks something like this: $f(x,y) = a_{3} \exp{(a_{4}x)} + a_{5} \exp{(a_{6}y)}$
This looks like a nonconvex NLP that can be trivially solved using any NLP solver.
Or are you looking for a closed form solution?
Normally, the first thing I would try is to see if I can substitute constraints into the objective to transform the problem into an unconstrained one, but it looks like it's not
possible here.
As mentioned above, you can write the first-order optimality conditions for the above system and solve the resulting nonlinear system of equations.
$\nabla L(x,y,\lambda) = \nabla\Phi + \lambda \nabla f(x,y) = 0$
In your case, it would be:
$a_{1} + \lambda a_{3} a_{4} \exp{(a_{4}x)} = 0$
$a_{2} + \lambda a_{5} a_{6} \exp{(a_{6}y)} = 0$
$a_{3} \exp{(a_{4}x)} + a_{5} \exp{(a_{6}y)} = 0$
Solve the above system for $x,y,\lambda$. And bam! You're done.
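As a hedged illustration of "solve the above system": below is a hypothetical concrete instance — minimize x + y subject to exp(-x) + exp(-y) - 2 = 0 (a constant is added to the constraint so the problem has a bounded, unique stationary point, namely (x, y, λ) = (0, 0, 1)) — solved with a hand-rolled Newton iteration on the stationarity conditions. All coefficients and names here are invented for illustration, not taken from the question:

```python
import math

def F(x, y, lam):
    # Stationarity system for L = (x + y) + lam * (exp(-x) + exp(-y) - 2)
    return [1.0 - lam * math.exp(-x),            # dL/dx
            1.0 - lam * math.exp(-y),            # dL/dy
            math.exp(-x) + math.exp(-y) - 2.0]   # the constraint

def J(x, y, lam):
    # Jacobian of F with respect to (x, y, lam)
    ex, ey = math.exp(-x), math.exp(-y)
    return [[lam * ex, 0.0, -ex],
            [0.0, lam * ey, -ey],
            [-ex, -ey, 0.0]]

def solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, 3):
            f = M[r][c] / M[c][c]
            for k in range(c, 4):
                M[r][k] -= f * M[c][k]
    x = [0.0] * 3
    for r in range(2, -1, -1):
        x[r] = (M[r][3] - sum(M[r][k] * x[k] for k in range(r + 1, 3))) / M[r][r]
    return x

x, y, lam = 0.5, -0.3, 0.8           # arbitrary starting point
for _ in range(50):
    step = solve3(J(x, y, lam), [-v for v in F(x, y, lam)])
    x, y, lam = x + step[0], y + step[1], lam + step[2]
    if max(abs(v) for v in F(x, y, lam)) < 1e-12:
        break

print(x, y, lam)   # converges to roughly (0, 0, 1)
```

In practice one would hand the same system to an off-the-shelf NLP solver, but the sketch shows that "write the optimality conditions, then root-find" is all that is going on.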
Homework Help
Posted by Stanley on Thursday, January 24, 2013 at 11:13pm.
It is believed that at least 60% of voters from a certain region in Canada favor the free trade agreement (FTA). A recent poll indicated that out of 400 randomly selected individuals, 250 favored the
FTA. If we wished to perform a test to determine whether the proportion of those favoring the FTA is greater than 60%, at the 5% level of significance, we would:
Fail to reject H0 since the calculated value of the test statistic is 1.033 which is less than 1.645.
Fail to reject H0 since the calculated value of the test statistic is 1.033 which is less than 1.96.
Fail to reject H0 since the calculated value of the test statistic is 1.0204 which is less than 1.96.
Fail to reject H0 since the calculated value of the test statistic is 1.0204 which is less than 1.645.
Not need to test since everyone knows that FTA is good.
A seed company claims that 80% of the seeds of a certain variety of tomato will germinate if sown under normal growing conditions. A government inspector is interested in whether or not the
proportion of seeds germinating is living up to the company's claim. He randomly selects a sample of 200 seeds from a large shipment and tests the sample for percentage germination. If 155 of the 200
seeds germinate, then the calculated value of the test statistic used to test the hypothesis of interest is:
- .847
• Statistics - MathGuru, Friday, January 25, 2013 at 5:48pm
Using a formula for a binomial proportion one-sample z-test with your data included, we have:
z = .625 - .60 -->test value (250/400 = .625) minus population value (.60)
divided by the standard error √(p(1 - p)/n) = √((.60)(.40)/400) ≈ .0245
Calculating, z = 1.02
Critical value = 1.645 (one-tailed)
Test statistic for your second problem can be calculated the same way:
z = .775 - .80 -->test value (155/200 = .775) minus population value (.80)
divided by the standard error √(p(1 - p)/n) = √((.80)(.20)/200) ≈ .0283
Calculating, z = -0.884
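Both statistics can be reproduced in a few lines (a sketch of the standard one-sample proportion z-test, z = (p̂ - p₀) / √(p₀(1 - p₀)/n); the function name is mine):

```python
import math

def prop_z(successes, n, p0):
    # One-sample proportion z statistic
    p_hat = successes / n
    return (p_hat - p0) / math.sqrt(p0 * (1 - p0) / n)

z_fta = prop_z(250, 400, 0.60)    # about 1.02, less than 1.645 -> fail to reject H0
z_seed = prop_z(155, 200, 0.80)   # about -0.88

print(round(z_fta, 4), round(z_seed, 4))
```

This matches the worked values above: 1.02 for the FTA poll and -0.884 for the seed-germination sample.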
Carver, MA Math Tutor
Find a Carver, MA Math Tutor
...The other two are linear algebra and the stochastic systems (statistics), which come together in advanced courses. Everyone intending to pursue studies in basic science (including life
sciences), engineering or economics should have a good foundation in introductory calculus. I did not really b...
7 Subjects: including algebra 1, algebra 2, calculus, trigonometry
...I try to make my lessons hands on and interactive so students are able to learn new concepts and skills by physically doing the skill and driving their own creativity and curiosity. I went to
Rhode Island College and I have certifications in elementary education, mild/moderate special education ...
15 Subjects: including prealgebra, reading, grammar, special needs
...And I stress that it's OK to be honest, even negative if that represents the truth. If the student's answer is problematic, I discuss it with the parent. Once the tutoring starts, I work very
hard to find a way to make the work enjoyable for the student and not overwhelming.
10 Subjects: including geometry, algebra 1, algebra 2, Microsoft Excel
...I am confident in tutoring Elementary-High School Math, Science, and English, Including: Biology, General Chemistry (College), Organic Chemistry, Anatomy and Physiology, Human Genetics,
Pre-Algebra, Algebra, Statistics, Reading, Grammar, Writing, and Study Skills. I am also able to tutor in Basi...
13 Subjects: including prealgebra, chemistry, biology, anatomy
...Whether it be math, history, English or science, it’s got to be somewhere. I’ve had greatly artistic individuals laugh at my stick figures. Overall, I like working in a self-paced environment.
11 Subjects: including algebra 1, algebra 2, geometry, prealgebra
David Ginat
B.A., Computer Science, Technion - Israel Institute of Technology, Israel.
Ph.D., Computer Science, University of Maryland at College Park, USA.
Research Topics
Development of mathematical and algorithmic thinking.
Novice tendencies, misleading intuition, and learning from mistakes.
Creative encapsulation of colorful challenges in teaching.
Fields of Instruction
Computer science and mathematics education.
Didactics of algorithmics; Mathematical thinking through mathematical games;
Scientific contents as didactic tools; Learning from mistakes.
Computer science Olympiad challenges.
Ginat, D., “Fundamentals of Computer Science, vol 1 (of 2)” Textbook & Teacher’s guide, in Hebrew, Weizmann Institute of Science Pub, 1998.
Ginat, D., Haberman, B., Cohen, D., Katz, D., Miller, O., Menashe, E., “Design Patterns for Fundamentals of Computer Science” Textbook, in Hebrew, Tel-Aviv University Pub, 2001.
Ginat, D., Sleator, D.D., & Tarjan, R.E., A tight amortized bound for path reversal, Information Processing Letters, 31, (1), 1989 (pp. 3-5).
Ginat, D., Early algorithm efficiency with design patterns, Computer Science Education, 11, (2), 2001 (pp. 89-109).
Ginat, D., Loop invariants, exploration of regularities, and mathematical games, International Journal of Mathematical Education in Science and Technology, 32, (5), 2001 (pp. 635-651).
Ginat, D., Starting top-down, refining bottom-up, sharpening by zoom-in, SIGCSE Bulletin, 33, (4), 2001 (pp. 28-31).
Ginat, D., Gaining algorithmic insight via simplifying constraints, Journal of Computer Science Education, 2002 (pp. 41-47).
Ginat, D., Effective binary perspectives in algorithmic problem solving, Journal of Educational Resources in Computing, 2, (2), 2002 (pp. 1-12).
Ginat, D., Digit-distance mastermind, The Mathematical Gazette, November 2002 (pp. 437-442).
Ginat, D., Seeking or skipping regularities? Novice tendencies and the role of invariants, Informatics in Education, 2, (2), 2003 (pp. 211-222).
Ginat, D., Algorithmic patterns and the case of the sliding delta, to appear in the SIGCSE Bulletin.
Ginat, D. & Garcia, D., Ordering patterns and list inversions, to appear in the Journal of Computer Science Education.
Ginat, D., Decomposition diversity in computer science – beyond the top-down icon, to appear in the Journal of Computers in Mathematics and Science Teaching.
Ginat, D., Mathematical operators and ways of reasoning, to appear in The Mathematical Gazette.
Ginat, D., On novice loop boundaries and range conceptions, to appear in Computer Science Education.
Ginat, D., Shankar, A.U., & Agrawala, A.K., An efficient solution to the drinking philosophers, Lecture Notes in Computer Science - Distributed Algorithms, 3^rd International Workshop - WDAG, Nice,
France, Springer-Verlag Pub, 1989 (pp. 83-93).
Ginat, D., Loop invariants and mathematical games, Proc of the 26^th ACM Computer Science Education Symposium - SIGCSE, ACM Press, 1995 (pp. 263-267).
Ginat, D., Efficiency of algorithms for programming beginners, Proc of the 27^th ACM Computer Science Education Symposium - SIGCSE, ACM Press, 1996 (pp. 256-260).
Shifrony, E. & Ginat, D., Simulation game for teaching communication protocols, Proc of the 28^th ACM Computer Science Education Symposium - SIGCSE, ACM Press, 1997 (pp. 184-188).
Ginat, D. & Shifrony, E., Teaching recursion in procedural environment - how much should we emphasize the computing model?, Proc of the 30^th ACM Computer Science Education Symposium - SIGCSE, ACM
Press, 1999 (pp. 127-131).
Haberman, B. & Ginat, D., Distance learning model with local workshop sessions, applied to in-service teacher training, Proc of the 4^th Conference on Innovation and Technology in Computer Science
Education - ITiCSE, ACM Press, 1999 (pp. 64-67).
Ginat, D., Colorful examples for elaborating exploration of regularities in high-school CS1, Proc of the 5^th Conference on Innovation and Technology in Computer Science Education - ITiCSE , ACM
Press, 2000 (pp. 81-84).
Ginat, D., Misleading intuition in algorithmic problem solving, Proc of the 32^nd ACM Computer Science Education Symposium - SIGCSE, ACM Press, 2001 (pp. 21-25).
Ginat, D., Metacognitive awareness utilized for learning control elements in algorithmic problem solving, Proc of the 6^th Conference on Innovation and Technology in Computer Science Education -
ITiCSE, ACM Press, 2001 (pp. 81-84).
Ginat, D., On varying perspectives of problem decomposition, Proc of the 33^rd ACM Computer Science Education Symposium - SIGCSE, ACM Press, 2002 (pp. 331-335).
Ginat, D. & Wolfson, M., On limited views of the mean as a point of balance, Proc of the 26^th Conference of the International Group for Psychology of Mathematics Education - PME, 2002 (vol. 2, pp.
Ginat, D., The greedy trap and learning from mistakes, Proc of the 34^th ACM Computer Science Education Symposium - SIGCSE, ACM Press, 2003 (pp. 11-15).
Ginat, D., The novice programmers’ syndrome of design-by-keyword, Proc of the 8^th Conference on Innovation and Technology in Computer Science Education - ITiCSE, ACM Press, 2003 (pp. 154-157).
Scientific Column
Ginat, D., Placement calculations, Colorful Challenges Column, SIGCSE Bulletin, 32, (4), 2000 (pp. 20-21).
Ginat, D., Color conversion, Colorful Challenges Column, SIGCSE Bulletin, 33, (2), 2001 (pp. 20-21).
Ginat, D., Chain of permutations, Colorful Challenges Column, SIGCSE Bulletin, 33, (4), 2001 (pp. 20-21).
Ginat, D., Divisor games, Colorful Challenges Column, SIGCSE Bulletin, 34, (4), 2002 (pp. 28-29).
Ginat, D., Sorting and disorders, Colorful Challenges Column, SIGCSE Bulletin, 35, (2), 2003 (pp. 28-29).
Ginat, D., Board reconstruction, Colorful Challenges Column, SIGCSE Bulletin, to appear in December 2003.
Ginat, D., Allon, D., & Leonard, A.G., "A database-driven interactive shell for controlling output rendering devices", USA Patent number AT9-89-039X, IBM confidential, IBM Corporation, 1991.
1991 - IBM Patent award.
1994 - IBM Research division award, for contribution to the DSMIT project.
2001 - Tel-Aviv University, Science Education Department, Best lecturer award.
2003 - Tel-Aviv University, Rector’s best lecturer award.
2D Range and Neighbor Search
Chapter 43
2D Range and Neighbor Search
Matthias Bäsken
Table of Contents
43.1 Introduction
43.2 Example: Range Search
43.1 Introduction
Geometric queries are fundamental to many applications in computational geometry. The task is to maintain a dynamic set of geometric objects in such a way that certain queries can be performed
efficiently. Typical examples of queries are: find out whether a given object is contained in the set, find all objects of the set lying in a given area (e.g. rectangle), find the object closest to a
given point or find the pair of objects in the set lying closest to each other. Furthermore, the set should be dynamic in the sense that deletions and insertions of objects can be performed
In computational geometry literature one can find many different data structures for maintaining sets of geometric objects. Most of them are data structures that have been developed to support a
single very special kind of query operation. Examples are Voronoi diagrams for answering nearest neighbor searches, range trees for orthogonal range queries, partition trees for more general range
queries, hierarchical triangulations for point location and segment trees for intersection queries ....
In many applications, different types of queries have to be performed on the same set of objects. A naive approach to this problem would use a collection of the above mentioned data structures to
represent the set of objects and delegate every query operation to the corresponding structure. However, this is completely impractical since it uses too much memory and requires the maintenance of
all these data structures in the presence of update operations.
Data structures that are non-optimal in theory seem to perform quite well in practice for many of these queries. For example, the Delaunay diagram turns out to be a very powerful data structure for
storing dynamic sets of points under range and nearest neighbor queries. A first implementation and computational study of using Delaunay diagrams for geometric queries is described by Mehlhorn and
Näher in [MN00].
In this section we present a generic variant of a two dimensional point set data type supporting various geometric queries.
The CGAL::Point_set_2 class in this section is inherited from the two-dimensional CGAL Delaunay Triangulation data type.
The CGAL::Point_set_2 class depends on two template parameters T1 and T2. They are used as template parameters for the CGAL::Delaunay_triangulation_2 class CGAL::Point_set_2 is inherited from. T1 is
a model for the geometric traits and T2 is a model for the triangulation data structure that the Delaunay triangulation expects.
The CGAL::Point_set_2 class supports the following kinds of queries:
• circular range search
• triangular range search
• isorectangular range search
• (k) nearest neighbor(s)
For details about the running times see [MN00].
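The semantics of these queries can be illustrated independently of CGAL with a brute-force sketch (Python is used here for brevity; the function names are illustrative and not part of the CGAL API, and the points are a subset of those used in the example in the next section). The triangular case is analogous, with a point-in-triangle test in place of the disc or box test. The CGAL class provides the efficient, Delaunay-based versions of these operations.

```python
import math

def circular_range_search(points, center, radius):
    """Brute-force circular range search: all points within the disc."""
    return [p for p in points if math.dist(p, center) <= radius]

def iso_rect_range_search(points, lower_left, upper_right):
    """Brute-force isorectangular range search (axis-aligned box)."""
    (x1, y1), (x2, y2) = lower_left, upper_right
    return [p for p in points if x1 <= p[0] <= x2 and y1 <= p[1] <= y2]

def nearest_neighbor(points, query):
    """Point of the set closest to the query point."""
    return min(points, key=lambda p: math.dist(p, query))

pts = [(12, 14), (-12, 14), (2, 11), (5, 6), (11, 20)]
print(circular_range_search(pts, (5, 6), 6.0))       # [(2, 11), (5, 6)]
print(iso_rect_range_search(pts, (0, 0), (12, 15)))  # [(12, 14), (2, 11), (5, 6)]
print(nearest_neighbor(pts, (4, 10)))                # (2, 11)
```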
43.2 Example: Range Search
The following example program demonstrates the various range search operations of the two dimensional point set. First we construct a two dimensional point set $PSet$ and initialize it with a few
points. Then we perform circular, triangular and isorectangular range search operations on the point set.
File: examples/Point_set_2/range_search.cpp
#include <CGAL/Exact_predicates_inexact_constructions_kernel.h>
#include <CGAL/Point_set_2.h>
#include <list>
typedef CGAL::Exact_predicates_inexact_constructions_kernel K;
typedef CGAL::Point_set_2<K>::Vertex_handle Vertex_handle;
typedef K::Point_2 Point_2;
int main()
{
  CGAL::Point_set_2<K> PSet;
  std::list<Point_2> Lr;

  Point_2 p1(12,14);
  Point_2 p2(-12,14);
  Point_2 p3(2,11);
  Point_2 p4(5,6);
  Point_2 p5(6.7,3.8);
  Point_2 p6(11,20);
  Point_2 p7(-5,6);
  Point_2 p8(12,0);
  Point_2 p9(4,31);
  Point_2 p10(-10,-10);

  Lr.push_back(p1); Lr.push_back(p2); Lr.push_back(p3);
  Lr.push_back(p4); Lr.push_back(p5); Lr.push_back(p6);
  Lr.push_back(p7); Lr.push_back(p8); Lr.push_back(p9);

  // build the point set from the collected points
  PSet.insert(Lr.begin(), Lr.end());

  std::cout << "circular range search !\n";
  CGAL::Circle_2<K> rc(p5,p6);
  std::list<Vertex_handle> LV;
  PSet.range_search(rc, std::back_inserter(LV));

  std::list<Vertex_handle>::const_iterator it;
  for (it = LV.begin(); it != LV.end(); it++)
    std::cout << (*it)->point() << "\n";
  LV.clear();

  std::cout << "triangular range search !\n";
  PSet.range_search(p1, p2, p3, std::back_inserter(LV));
  for (it = LV.begin(); it != LV.end(); it++)
    std::cout << (*it)->point() << "\n";
  LV.clear();

  std::cout << "isorectangular range search !\n";
  Point_2 pt1 = p10;
  Point_2 pt3 = p3;
  Point_2 pt2 = Point_2(pt3.x(), pt1.y());
  Point_2 pt4 = Point_2(pt1.x(), pt3.y());
  PSet.range_search(pt1, pt2, pt3, pt4, std::back_inserter(LV));
  for (it = LV.begin(); it != LV.end(); it++)
    std::cout << (*it)->point() << "\n";

  return 0;
}
Reference Manual
|
{"url":"http://www.cgal.org/Manual/3.3/doc_html/cgal_manual/Point_set_2/Chapter_main.html","timestamp":"2014-04-17T18:32:15Z","content_type":null,"content_length":"11092","record_id":"<urn:uuid:06b737f0-f657-47f6-9911-3d1fdd9826de>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Radioactive decay and exponential laws
Issue 14
March 2001
In his article Light Attenuation and Exponential Laws in the last issue of Plus, Ian Garbett discussed the phenomenon of light attenuation, one of the many physical phenomena in which the exponential
function crops up. In this second article he describes the phenomenon of radioactive decay, which also obeys an exponential law, and explains how this information allows us to carbon-date artefacts
such as the Dead Sea Scrolls.
Radioactive Decay
In the previous article, we saw that light attenuation obeys an exponential law. To show this, we needed to make one critical assumption: that for a thin enough slice of matter, the proportion of
light getting through the slice was proportional to the thickness of the slice.
Exactly the same treatment can be applied to radioactive decay. However, now the "thin slice" is an interval of time, and the dependent variable is the number of radioactive atoms present, N(t).
Radioactive atoms decay randomly. If we have a sample of atoms, and we consider a time interval short enough that the population of atoms hasn't changed significantly through decay, then the
proportion of atoms decaying in our short time interval will be proportional to the length of the interval. We end up with a solution known as the "Law of Radioactive Decay", which mathematically is
merely the same solution that we saw in the case of light attenuation. We get an expression for the number of atoms remaining, N, as a proportion of the number of atoms N[0] at time 0, in terms of
time, t:
N/N[0] = e^-lt,
where the quantity l, known as the "radioactive decay constant", depends on the particular radioactive substance.
Again, we find a "chance" process being described by an exponential decay law. We can easily find an expression for the chance that a radioactive atom will "survive" (be an original element atom) to
at least a time t. The steps are the same as in the case of photon survival.
Mean lifetime of a Radioactive Atom
On average, how much time will pass before a radioactive atom decays?
This question can be answered using a little bit of calculus. Suppose that we invert our function for N/N[0] in terms of t, to get an expression for t as a function of N/N[0]. Once we have an
expression for t, a "definite integral" will give us the mean value of t (this is how "mean value" is defined).
From the equation above, taking logarithms of both sides we see that lt = -ln(N/N[0]) = ln(N[0]/N), so our equation for t is
t = (1/l)ln(N[0]/N).
For convenience, we'll now write F for N/N[0]. Note that that the domain of F is the interval from zero to 1, which corresponds to the interval of time from zero to infinity. Plotting t against F
with a value of l=1 gives the graph on the right.
To find <t>, the mean value of time of survival, all we have to do is find the integral
<t> = ∫ t dF (from F = 0 to 1) = (1/l) ∫ ln(1/F) dF (from F = 0 to 1) = 1/l,
which is a very tidy result! The mean lifetime of a radioactive atom is simply the reciprocal of the decay constant.
Incidentally, our formula for t gives us an easy way of finding the half-life, the time it takes for half the nuclei in a sample to decay. The half-life (often denoted t[1/2]) is just t(1/2) = (1/l)
ln(2). The equivalent thickness for the medium in radiation attenuation is known as "half-value thickness". Similarly, in a population which grows exponentially with time there is the concept of
"doubling time".
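Both relations can be checked numerically. The following is a minimal Python sketch (the value of the decay constant is arbitrary) that approximates the integral for <t> with a midpoint rule and verifies that N/N[0] really is 1/2 after one half-life:

```python
import math

lam = 0.5          # arbitrary decay constant l (per unit time)
n = 200_000        # midpoint-rule subdivisions

# Mean lifetime: <t> = (1/l) * integral from 0 to 1 of ln(1/F) dF,
# which should come out equal to 1/l.
integral = sum(math.log(n / (i + 0.5)) for i in range(n)) / n
mean_t = integral / lam
print(mean_t)      # close to 1/lam = 2.0

# Half-life: time at which N/N0 = 1/2
t_half = math.log(2) / lam
print(math.exp(-lam * t_half))   # 0.5, up to floating point
```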
Libby's Legacy
We started the first article by talking about carbon dating and the Dead Sea scrolls. Let's look further at the technique behind the work that led to Libby being awarded a Nobel prize in 1960.
Carbon 14 (C-14) is a radioactive element that is found naturally, and a living organism will absorb C-14 and maintain a certain level of it in the body. This is because there is carbon dioxide (CO
[2]) exchange in the atmosphere, which leads to constant turnover of carbon molecules within the body cells.
Once an organism dies there is no further CO[2] exchange, and so the ratio of C-14 to the far more common carbon isotope, C-12, will begin to decrease as the C-14 atoms decay, yielding nitrogen
(N-14) with the emission of an electron (or "beta particle") plus an anti-neutrino.
The ratio of C-14 to C-12 in the atmosphere's carbon dioxide molecules is about 1.3×10^-12, and this value is assumed constant for the main part of archaeological history since the formation of the
earth's atmosphere.
Knowing the level of activity of a sample of organic material enables us to deduce how much C-14 there is in the material at present. Since we also know the ratio of C-14 to C-12 originally, we can
find the time that has passed since carbon exchange ceased, that is, since the organic material "died".
In the case of the Dead Sea scrolls, important questions required answers. Were they forgeries? Did they really date from around the time of Christ? Before or after?
Using Libby's radiocarbon dating technique, the scrolls have been dated, using the linen coverings the scrolls were wrapped in. One scroll, the Book of Isaiah, has been dated at 1917BC ±275 years,
certainly long before the time of Christ. Some of the others are roughly contemporary with Christ.
Let's take a look at an example of how dates are calculated using Libby's method.
Suppose a linen sample of 1 gram is analysed in a counter. The activity is measured at approximately 11.9 decays per minute. We'll denote the magnitude of the rate of decay of the Carbon 14 nuclei as
R. This magnitude is equal to the rate that beta particles are detected. So
R(t) = 11.9 decays per minute ≈ 0.1983 decays per second.
Recall that the exponential law for the number of Carbon 14 nuclei present says that
N(t) = N[0]e^-lt,
and so
R = -dN/dt = lN[0]e^-lt,
which tells us that R=lN, and that the activity at t=0 (the time the linen was manufactured) is R(0) = lN[0].
Substituting gives us an exponential relation in terms of the measured activity:
R(t) = R(0)e^-lt.
Now the decay constant for Carbon-14 is l = 3.8394 × 10^-12 per second. This corresponds to a half life of 5,730 years.
We can calculate the number of Carbon-12 nuclei in 1 gram of carbon:
N(C-12) = (6.022×10^23)/12 ≈ 5.02×10^22.
Using the (living) ratio of C-14 to C-12, this implies that the original (t = 0) number of Carbon 14 nuclei was
N[0] = (1.3×10^-12)(5.02×10^22) ≈ 6.5221×10^10.
Now, rearranging the exponential activity law gives
t = (1/l)ln(R[0]/R(t)).
R[0] is simply (3.8394×10^-12)(6.5221×10^10) = 0.2504 decays per second. The measured rate is R(t) = 11.9 decays per minute = 0.1983 decays per second.
The value for t that results is
t = (1/(3.8394×10^-12)) ln(0.2504/0.1983) ≈ 6.08×10^10 seconds,
which is approximately 1929 years. This is an approximate age for some of the scrolls.
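The whole worked example fits in a few lines of Python. The constants are those quoted in the article; small differences in rounding (including the exact length of a year) move the answer by a couple of years around the quoted 1929:

```python
import math

AVOGADRO = 6.022e23
LAMBDA_C14 = 3.8394e-12        # decay constant, per second
C14_TO_C12 = 1.3e-12           # living-tissue isotope ratio
SECONDS_PER_YEAR = 3.156e7

n_c12 = AVOGADRO / 12.0        # C-12 nuclei in 1 gram of carbon
n0 = C14_TO_C12 * n_c12        # original number of C-14 nuclei
r0 = LAMBDA_C14 * n0           # activity at t = 0 (decays per second)
r_now = 11.9 / 60.0            # measured: 11.9 decays per minute

t = math.log(r0 / r_now) / LAMBDA_C14   # rearranged activity law
print(round(t / SECONDS_PER_YEAR))      # about 1926, close to the article's 1929
```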
In a similar way, dating charcoal found at Stonehenge gives ages of approximately 3798 ±275 years, and, when used on some of the oldest archaeological artefacts in the Americas, the technique gives
ages of approximately 25,000 years, corresponding to a time of significantly lower sea levels and supporting the theory that the very first humans in the "New World" crossed the Bering Straits by
foot from Siberia into Alaska.
Incidentally, other "larger time scale" radio-dating techniques exist, apart from Libby's radiocarbon method. Here isotopes with longer half lives are used, which enables dating of geological
formations and rocks. However, the essential ideas are analogous. For example, in lava form, molten lead and Uranium-238 (standard isotope) are constantly mixed in a certain ratio of their natural
abundance. Once solidified, the lead is "locked" in place and since the uranium decays to lead, the lead-to-uranium ratio increases with time. In this way, some of the oldest rocks have been measured
at approximately 3 billion years.
About the author
He graduated in 1977 with a BSc Honours in Applied Physics from the University of Lancaster, and obtained an MSc in Medical Physics from the University of Leeds in 1987.
He is interested in various theoretical aspects of radiation and radiological physics, with an interest in mathematical modelling in general.
Current research involves a theoretical description of X-ray beam spectra.
|
{"url":"http://plus.maths.org/content/radioactive-decay-and-exponential-laws","timestamp":"2014-04-19T04:30:39Z","content_type":null,"content_length":"39173","record_id":"<urn:uuid:a4ecf17e-5cd8-435a-8887-f625559dd022>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00061-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Maximum size rectangle that can fit inside a square
July 11th 2010, 05:57 PM
Maximum size rectangle that can fit inside a square
Hello all :)
Thanks for all the help ahead of time.
To put some story into place-i am refinishing/rebuilding a 125g saltwater fishtank system. It was a very nice custom built cabinet but came with some very small doors. I want to install a smaller
tank in the cabinet below for various purposes but want to maximize the size of the tank.
The cabinet has 2 doors (see here for pictures if you need http://i85.photobucket.com/albums/k6...Tank/tank2.jpg ). As i said, i want to maximize the size of the tank i am putting underneath. I
am looking to build a plexiglass tank as large as i can to hold as much water as possible under the main tank.
The window dimensions are 20" tall 16.5" across the top and i would have about 17" of depth before i hit the wall inside. Please dont worry about anything inside as it is currently empty.
Thanks ~!
July 12th 2010, 04:58 AM
Hello all :)
Thanks for all the help ahead of time.
To put some story into place-i am refinishing/rebuilding a 125g saltwater fishtank system. It was a very nice custom built cabinet but came with some very small doors. I want to install a smaller
tank in the cabinet below for various purposes but want to maximize the size of the tank.
The cabinet has 2 doors (see here for pictures if you need http://i85.photobucket.com/albums/k6...Tank/tank2.jpg ). As i said, i want to maximize the size of the tank i am putting underneath. I
am looking to build a plexiglass tank as large as i can to hold as much water as possible under the main tank.
The window dimensions are 20" tall 16.5" across the top and i would have about 17" of depth before i hit the wall inside. Please dont worry about anything inside as it is currently empty.
Thanks ~!
1. If I understand your problem correctly you are looking for the dimensions of the tank such that you can thread it through the hole into the cabinet and that it holds as much water as possible.
2. If so: The tank has to have a maximum height of 20''. Then the volume depends on the dimension of the base area.
3. The base area is calculated by:
$a = x \cdot y$
The lengths x and y are calculated by:
$x = 16.5 \cdot \sin(A)$
$y = \frac{17}{\sin(A)}$
$a = 16.5 \cdot \sin(A) \cdot \dfrac{17}{\sin(A)}= 16.5 \cdot 17$
4. If I didn't make one of my silly mistakes the volume of the tank is a constant. So you only can choose for a convenient angle A such that the construction isn't too hard.
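A short numerical sketch (the variable names are illustrative) confirms point 4: the base area x·y is the same for every tilt angle A, so with a 20'' height the capacity is fixed at 16.5 × 17 × 20 cubic inches however the tank is angled through the door. The real build would of course lose a little to wall thickness and clearance.

```python
import math

W, D, H = 16.5, 17.0, 20.0   # door width, cabinet depth, height (inches)

def base_area(a_rad):
    x = W * math.sin(a_rad)   # side seen through the doorway
    y = D / math.sin(a_rad)   # side limited by the cabinet depth
    return x * y

for deg in (30, 45, 60, 80):
    print(deg, base_area(math.radians(deg)))   # always 16.5 * 17 = 280.5

gallons = W * D * H / 231.0   # 231 cubic inches per US gallon
print(round(gallons, 1))      # about 24.3 gallons
```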
|
{"url":"http://mathhelpforum.com/geometry/150680-maximum-size-rectangle-can-fit-inside-square-print.html","timestamp":"2014-04-20T04:15:02Z","content_type":null,"content_length":"7553","record_id":"<urn:uuid:57cec33d-5b54-4a66-aab5-e0d0ac303b1f>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
|
How to Solve Limits with a Calculator
You can solve most limit problems by using your calculator. There are two basic methods. For example, say you want to evaluate the following limit:
lim (as x approaches 5) of (x^2 - 25)/(x - 5)
Method one
Here’s what you do. Take a number extremely close to 5 and plug it into x. If you have a calculator like a Texas Instruments TI-84, follow these steps:
1. Enter your number, say 4.9999, on the home screen.
2. Press the Sto (store) button, then the x button, and then the Enter button.
This stores the number into x.
3. Enter the function: (x^2 - 25)/(x - 5).
4. Hit Enter.
The result, 9.9999, is extremely close to a round number, 10, so that’s your answer.
5. For good measure, store 4.999999 into x.
Follow the procedure in Step 2.
6. Scroll back up to the function by hitting 2nd, Enter, 2nd, Enter.
7. Hit Enter one more time.
You get 9.999999 — even closer to 10. If you still have any doubts, try one more number.
8. Store 4.99999999 into x, scroll up to the function, and Hit Enter.
The result, 10, clinches it. (The function value at 4.99999999 isn’t actually 10, but it’s so close that the calculator rounds it off to 10.)
By the way, if you’re using a different calculator model, you can likely achieve the same result with the same technique or something very close to it.
Method two
The second calculator method is to produce a table of values:
1. In your calculator’s graphing mode, enter the following:
y1 = (x^2 - 25)/(x - 5)
2. Go to table set up and enter the limit number, 5, as the table start number.
3. Enter a small number, say 0.001, for ∆Tbl.
That’s the size of the x-increments in the table.
4. Hit the Table button to produce the table.
5. Scroll up so you can see a couple numbers less than 5.
You should see a table of values like the one in this table.
X Y
4.998 9.998
4.999 9.999
5 Error
5.001 10.001
5.002 10.002
5.003 10.003
Because y gets very close to 10 as x zeros in on 5 from above and below, 10 is the limit.
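The table can also be reproduced programmatically. The function below is reconstructed from the table values, which match (x^2 - 25)/(x - 5); it behaves exactly like the calculator, including the "Error" at x = 5:

```python
def f(x):
    # matches the table: equal to x + 5 everywhere except x = 5, where it is 0/0
    return (x**2 - 25) / (x - 5)

for x in (4.998, 4.999, 5.001, 5.002):
    print(x, f(x))           # y values close in on 10 from both sides

try:
    f(5)                     # undefined, the calculator's "Error" row
except ZeroDivisionError:
    print("x = 5 -> Error")
```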
These calculator techniques are useful for a number of reasons:
• Your calculator can give you the answers to limit problems that are impossible to do algebraically.
• It can solve limit problems that you could do with paper and pencil except that you’re stumped.
• For problems that you do solve on paper, you can use your calculator to check your answers.
Many calculus problems can be done algebraically, graphically, and numerically. When possible, use two or three of the approaches. Each approach gives you a different take on a problem and enhances
your grasp of the relevant concepts.
Use the calculator methods to supplement algebraic methods, but don’t rely too much on them. First of all, the calculator techniques won’t give you an exact answer unless the numbers your calculator
gives you are getting close to a number you recognize — like 9.99998 is close to 10, or 0.333332 is close to 1/3.
However, even when you don’t recognize the exact answer in such cases, you can still learn an approximate answer, in decimal form, to the limit question.
This limit equals zero, but you can’t get that result with your calculator.
|
{"url":"http://www.dummies.com/how-to/content/how-to-solve-limits-with-a-calculator.html","timestamp":"2014-04-19T21:24:27Z","content_type":null,"content_length":"57034","record_id":"<urn:uuid:b13f69f5-1baa-4f67-9112-376c64c98e78>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
|
16-XX Associative rings and algebras {For the commutative case, see 13-XX}
16Txx Hopf algebras, quantum groups and related topics
16T05 Hopf algebras and their applications [See also 16S40, 57T05]
16T10 Bialgebras
16T15 Coalgebras and comodules; corings
16T20 Ring-theoretic aspects of quantum groups [See also 17B37, 20G42, 81R50]
16T25 Yang-Baxter equations
16T30 Connections with combinatorics
16T99 None of the above, but in this section
|
{"url":"http://ams.org/mathscinet/msc/msc2010.html?t=16T15","timestamp":"2014-04-17T07:24:15Z","content_type":null,"content_length":"12652","record_id":"<urn:uuid:69fa599d-10fe-4d27-88c0-6b4886951670>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Colmar Manor, MD Geometry Tutor
Find a Colmar Manor, MD Geometry Tutor
I am a hydraulic/civil engineer by profession. For my undergraduate program, I was the recipient of the prize for an outstanding student. I have both classroom teaching and private tutoring
14 Subjects: including geometry, chemistry, calculus, physics
...In many math classrooms today, teachers show their students one way to solve a problem, and then the students simply mimic a series of steps. This approach does not promote conceptual
understanding! Students need to be able to think critically and creatively when they face a new problem.
16 Subjects: including geometry, English, calculus, GRE
...All 3 of my children took Algebra 2 when they were in high school. I was very successful as their tutor. I enjoy math and I am very patient.
15 Subjects: including geometry, chemistry, ASVAB, physics
...I look forward to working with your student, and to being a small part of them achieving their goals and becoming all that they are capable of. Thank You,PatrickI played football throughout
high school and played Division III football at Carnegie Mellon University. I played Wide Receiver, Defensive Back, and Quarterback.
30 Subjects: including geometry, reading, chemistry, physics
...I have helped, on more than one occasion, students who were struggling badly in their math courses but in the end they got a very well deserved grade. It is my hope to be able to help more
students achieve high grades in their math courses. I have a bachelor and masters degree in engineering, and scored 740 (out of 800) on the GRE quantitative.
35 Subjects: including geometry, calculus, physics, accounting
Related Colmar Manor, MD Tutors
Colmar Manor, MD Accounting Tutors
Colmar Manor, MD ACT Tutors
Colmar Manor, MD Algebra Tutors
Colmar Manor, MD Algebra 2 Tutors
Colmar Manor, MD Calculus Tutors
Colmar Manor, MD Geometry Tutors
Colmar Manor, MD Math Tutors
Colmar Manor, MD Prealgebra Tutors
Colmar Manor, MD Precalculus Tutors
Colmar Manor, MD SAT Tutors
Colmar Manor, MD SAT Math Tutors
Colmar Manor, MD Science Tutors
Colmar Manor, MD Statistics Tutors
Colmar Manor, MD Trigonometry Tutors
Nearby Cities With geometry Tutor
Bladensburg, MD geometry Tutors
Brentwood, MD geometry Tutors
Cottage City, MD geometry Tutors
Edmonston, MD geometry Tutors
Fairmount Heights, MD geometry Tutors
Garrett Park geometry Tutors
Hyattsville geometry Tutors
Mount Rainier geometry Tutors
North Brentwood, MD geometry Tutors
Riverdale Park, MD geometry Tutors
Riverdale Pk, MD geometry Tutors
Riverdale, MD geometry Tutors
Seat Pleasant, MD geometry Tutors
Somerset, MD geometry Tutors
University Park, MD geometry Tutors
|
{"url":"http://www.purplemath.com/Colmar_Manor_MD_Geometry_tutors.php","timestamp":"2014-04-16T04:47:52Z","content_type":null,"content_length":"24296","record_id":"<urn:uuid:78c6db77-c749-4c74-abe2-335718e306f7>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00583-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Trig identities
• one year ago
{"url":"http://openstudy.com/updates/50a71905e4b0129a3c8ffcb8","timestamp":"2014-04-19T10:22:10Z","content_type":null,"content_length":"51317","record_id":"<urn:uuid:a122ae31-32e7-4891-beed-9535f6ad3689>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Here's the question you clicked on:
Use Laplace transform to solve the following system: x''=x+3y y''=4x-(4e^t) x(0)=2; x'(0)=3 ;y(0)=1 ;y'(0)=2
Best Response
You've already chosen the best response.
Laplace to solve a system? A new one on me, can't wait to see the solution
Best Response
You've already chosen the best response.
...but you know how to take the laplace of these expressions, right?
Best Response
You've already chosen the best response.
yeah but i have no idea how to solve this system D=
Best Response
You've already chosen the best response.
me neither :( let me call for reinforcements: @JamesJ @Mr.Math @across @lalaly help with a DE system
Best Response
You've already chosen the best response.
transform both DEs\[s^{2} X(s) - sx(0) - x'(0) = X(s) + 3Y(s)\]\[s^{2}Y(s) - sy(0) - y'(0) = 4X(s)-\frac{4}{s-1}\]plug in the initial conditions, simplify and solve for X(s) and Y(s)
Best Response
You've already chosen the best response.
It's easy if you already know how to do the laplace transform. You'll be left, as exraven explained, with two equations in two variables \(X(s)\) and \(Y(s)\), solve for them and then take the
inverse laplace transform to each one separately.
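As a sketch (an illustration of the recipe, carried out in SymPy rather than by hand): plug the initial conditions into the transformed equations, solve the linear system for X(s) and Y(s), then invert. The closed forms in the comments follow from partial fractions.

```python
import sympy as sp

s = sp.symbols('s')
t = sp.symbols('t', positive=True)
X, Y = sp.symbols('X Y')

# Transformed equations with x(0)=2, x'(0)=3, y(0)=1, y'(0)=2 substituted:
#   s^2 X - 2s - 3 = X + 3Y
#   s^2 Y - s - 2  = 4X - 4/(s-1)
eq1 = sp.Eq(s**2 * X - 2*s - 3, X + 3*Y)
eq2 = sp.Eq(s**2 * Y - s - 2, 4*X - 4/(s - 1))

sol = sp.solve([eq1, eq2], [X, Y])
Xs = sp.simplify(sol[X])   # reduces to 1/(s-1) + 1/(s-2)
Ys = sp.simplify(sol[Y])   # reduces to 1/(s-2)

x_t = sp.inverse_laplace_transform(Xs, s, t)   # e^t + e^(2t), possibly times Heaviside(t)
y_t = sp.inverse_laplace_transform(Ys, s, t)   # e^(2t), possibly times Heaviside(t)
print(x_t, y_t)
```

So x(t) = e^t + e^(2t) and y(t) = e^(2t), which satisfy both differential equations and all four initial conditions.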
|
{"url":"http://openstudy.com/updates/4f86e639e4b0505bf08687c9","timestamp":"2014-04-17T06:53:59Z","content_type":null,"content_length":"40040","record_id":"<urn:uuid:cef87110-a0b1-49d6-8747-09916a9a4c2d>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
|
This is a collection of most of my papers in Computer Vision, given in chronological order.
There may be several different versions of many of the papers. The site is still under construction, so may be unsatisfactory in many ways (journal or conference names are generally missing, for example).
Multiple View Geometry in Computer Vision (Hartley/Zisserman 2000) : Cambridge University Press, 2000. Please visit this site for errata lists and sample chapters.
If you find any new errata (check list first), then email to az@robots.ox.ac.uk or Richard.Hartley@anu.edu.au
All images and diagrams from the book are available for download.
calibration/eccv94/calib.pdf Self Calibration from Multiple Views with a Rotating Camera
turbine/final/metrology-final.pdf X-ray Metrology for Quality Assurance (IEEE Robotics and Automation Conference, pp 1113-1119)
arpa-report/report.pdf Invariant and Calibration-Free Methods in Scene Reconstruction and Object Recognition.
tensor/journal/final/tensor3.pdf Lines and Points in Three Views and the Trifocal Tensor
tensor/journal/lines.pdf Lines and Points in Three Views and the Trifocal Tensor
tensor/lines.pdf Lines and Points in Three Views and the Trifocal Tensor
norway/relations.pdf Multilinear Relationships between Coordinates of Corresponding Image Points and Lines.
linear-pushbroom/paper.pdf A Computer Algorithm for Reconstructing a Scene from Two Satellite Images.
Fred-ECCV00/eccv00-final.pdf A Six Point Solution for Structure and Motion (ECCV 2000)
trilin/trilin.pdf Lines and Points in Three Views – An Integrated Approach
|
{"url":"http://users.cecs.anu.edu.au/~hartley/My-Papers.html","timestamp":"2014-04-17T09:34:11Z","content_type":null,"content_length":"29341","record_id":"<urn:uuid:c749bc96-5ac7-4a75-8097-6a9876c1d2c2>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Monthly Weather Review, 132, 662-669.
Implementing the weak temperature gradient approximation with full vertical structure
Daniel A. Shaevitz
Department of Atmospheric Sciences, University of California, Los Angeles
Adam H. Sobel
Department of Applied Physics and Applied Mathematics and Department of Earth and Environmental Sciences, Columbia University, New York, NY.
A two-column, nonrotating radiative-convective model is formulated in which the free-tropospheric temperature profiles of the two columns are assumed identical and steady, and the temperature
equation is used diagnostically to calculate the vertical velocities [the weak temperature gradient approximation (WTG)]. These vertical velocities and the continuity equation are then used to
calculate the horizontal velocities. No horizontal momentum equation is used. The present model differs from other two-column models that have used similar formulations in that here, both columns are
governed by the same laws, rather than different dynamical roles being assigned a priori to the "warm" and "cold" columns. The current formulation has the advantage that it generalizes trivially
to an arbitrary number of columns, a necessity for developing a 3D model under WTG. The two-column solutions compare reasonably well to those of the two-column model of Nilsson and Emanuel, which
uses a linear, nonrotating horizontal momentum equation and the same underlying radiative-convective code as the WTG model, modified to have significant viscosity only in a boundary layer near the
surface. The two solutions compare best in the limit of large horizontal domain size, behavior opposite to what has been found in models which lack an explicit boundary layer and have viscosity
throughout the troposphere. The difference is explained in terms of the circulation driven by boundary layer pressure gradients.
|
{"url":"http://www.columbia.edu/~ahs129/abstracts/2column.html","timestamp":"2014-04-17T07:47:27Z","content_type":null,"content_length":"2349","record_id":"<urn:uuid:02f50af3-cd05-4ad4-8eef-d72b0ca86792>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
|
On twins in the four-sphere. II.
Montesinos Amilibia, José María (1984) On twins in the four-sphere. II. Quarterly Journal of Mathematics , 35 (137). pp. 73-83. ISSN 0033-5606
Official URL: http://qjmath.oxfordjournals.org/content/35/1/73.full.pdf+html
E. C. Zeeman [Trans. Amer. Math. Soc. 115 (1965), 471–495; MR0195085 (33 #3290)] introduced the process of twist spinning a 1-knot to obtain a 2-knot (in S4), and proved that a twist-spun knot is
fibered with finite cyclic structure group. R. A. Litherland [ibid. 250 (1979), 311–331; MR0530058 (80i:57015)] generalized twist-spinning by performing during the spinning process rolling operations
and other motions of the knot in three-space. The first paper generalizes those results by introducing the concept of a twin. A twin W is a subset of S4 made up of two 2-knots R and S that intersect
transversally in two points. The prototype of a twin is the n-twist spun of K (that is, the union of the n-twist spun knot of K and the boundary of the 3-ball in which the original knot lies). The
exterior of a twin, X(W), is the closure of S4−N(W), where N(W) is a regular neighborhood of W in S4.
The first paper considers properties of X(W), and uses these to characterize the automorphisms of a 2-torus standardly embedded in S4, which extend to S4, and also to prove that any homotopy sphere
obtained by Dehn surgery on such a 2-torus is the real S4.
The second paper is devoted to the fibration problem, i.e. given a twin in S4, try to understand what surgeries in W give a twin W′ which has a component that is a fibered knot (as in the Zeeman
theorem). This approach yields alternative proofs of the twist-spinning theorem of Zeeman, and of the roll-twist spinning results of Litherland. New fibered 2-knots are produced through these
Item Type: Article
Uncontrolled Keywords: twins in the four-sphere; twist-spinning a one-knot; two-knot; rolling; n-twin; Dehn-surgeries; Gluck's homotopy sphere
Subjects: Sciences > Mathematics > Topology
ID Code: 17188
Deposited On: 23 Nov 2012 11:55
Last Modified: 23 Nov 2012 11:55
Repository Staff Only: item control page
|
{"url":"http://eprints.ucm.es/17188/","timestamp":"2014-04-16T19:25:22Z","content_type":null,"content_length":"27379","record_id":"<urn:uuid:7305561a-cd97-4730-9c31-2baf58192111>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] pythagoras

March 22nd 2009, 03:21 PM  #1
having trouble seeing where i'm going wrong.

169 = 6x^2 + 4x^2

text book has the answer as sqrt{13}

btw, when I wrap math tags around the selected text, a latex/syntax error comes up.

March 22nd 2009, 03:28 PM  #2
169 = 6x^2 + 4x^2 ... (3x)^2 = 9x^2, not 6x^2

The latex/syntax error comes up because the backslash {\} is only necessary for certain things.

March 22nd 2009, 03:44 PM  #3
The answers are $x = \pm\sqrt{13}$ because you can have either a positive square root or a negative one. If you're talking about a length it will always be positive when dealing with scalars.

Skeeter pointed out that $(3x)^2 = 3^2 \times x^2 = 9x^2$
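As a quick numeric sanity check of the corrected working (this is an editorial addition, not part of the thread): with legs 3x and 2x and hypotenuse 13, the equation is 169 = 9x² + 4x² = 13x², so x = √13.

```python
import math

# Right triangle with legs 3x and 2x and hypotenuse 13:
# 13^2 = (3x)^2 + (2x)^2 = 9x^2 + 4x^2 = 13x^2, so x^2 = 13.
x = math.sqrt(169 / 13)  # positive root, since x is a length
assert math.isclose(x, math.sqrt(13))
assert math.isclose((3 * x) ** 2 + (2 * x) ** 2, 169)
print(x)  # about 3.6056
```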
|
{"url":"http://mathhelpforum.com/algebra/80000-solved-pythagoras.html","timestamp":"2014-04-17T05:28:47Z","content_type":null,"content_length":"39078","record_id":"<urn:uuid:1565139e-413b-4643-93ee-31d7f892c048>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00298-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Find cos(x+y) for cos x = -4/5 and sin y = 5/13, with x and y being in quadrant 2.
this is what i have got so far.. |dw:1364147163106:dw|

that would give me an answer of .9995, which is not an option: a. .51  b. .97  c. -.51  d. -1.72  e. None of the above. Is the answer E, or have I done something wrong?

no, the answer is not e. You have applied a totally wrong concept. In cos(x + y), x and y are angles: x is not -4/5, but rather cos x = -4/5. Use: cos(x + y) = (cos x)(cos y) - (sin x)(sin y)

i just tried that and still didn't get an answer. I did: cos(-4/5) cos(-12/13) - sin(3/5) sin(5/13) = (.99)(.99) - (.01)(.006) = (.99) - (.00007) = .999702  @harsimran_hs4

you are making the same mistake; please have a look at the comment above. MIND IT: X IS THE ANGLE AND NOT -4/5. COS X = -4/5

oh alright. but i did it and got -.97 as an answer

just kidding, I got .51

so are you done, or is there still any doubt?

yes i am, thank you!

cool :)
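The value the thread converges on can be confirmed with the sum identity. A minimal sketch (an editorial addition: the quadrant reasoning below fills in sin x and cos y, which the thread uses implicitly):

```python
import math

# Given: cos x = -4/5 with x in quadrant II  => sin x = +3/5
# Given: sin y =  5/13 with y in quadrant II => cos y = -12/13
cos_x, sin_x = -4 / 5, 3 / 5
sin_y, cos_y = 5 / 13, -12 / 13

# Sum identity: cos(x + y) = cos x * cos y - sin x * sin y
result = cos_x * cos_y - sin_x * sin_y
assert math.isclose(result, 33 / 65)  # 48/65 - 15/65
print(round(result, 2))  # 0.51, i.e. choice (a)
```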
|
{"url":"http://openstudy.com/updates/514f3bd1e4b0ae0b658b2625","timestamp":"2014-04-20T08:33:35Z","content_type":null,"content_length":"130366","record_id":"<urn:uuid:4ccf8f45-18ce-4ee2-a326-04b603125fd9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Woodlynne, NJ Trigonometry Tutor
Find a Woodlynne, NJ Trigonometry Tutor
...I am a graduate of the College of William Mary (BA - Mathematics) and the NJ Institute of Technology (MS - Applied Science). But, my greatest qualifications come from years of experience in
the real world. I look forward to meeting you and helping you to achieve your educational goals.Learn Alg...
23 Subjects: including trigonometry, English, calculus, statistics
I have been a part time college instructor for over 10 years at a local university. While I have mostly taught all levels of calculus and statistics, I can also teach college algebra and
pre-calculus as well as contemporary math. My background is in engineering and business, so I use an applied math approach to teaching.
13 Subjects: including trigonometry, calculus, algebra 1, geometry
...These real life examples help me to share my love of learning math and science with the students I tutor. My students find out that learning science and math can be exciting – seeing whole new
ideas and worlds open up. My tutoring sessions bring math and physics to life.
10 Subjects: including trigonometry, calculus, physics, geometry
...I then changed my career and became a teacher. I currently teach high school level math, chemistry and physics at a private school. I love to teach and help people understand challenging
15 Subjects: including trigonometry, chemistry, physics, geometry
...Additionally, I was a leader and editor of my high school and college newspapers and I am skilled in editing written material and helping to develop stronger writing and reading skills. As of
this moment, I privately tutor one student in SAT Math and CLEP Math, and two students in Honor's Physic...
33 Subjects: including trigonometry, English, physics, French
Related Woodlynne, NJ Tutors
Woodlynne, NJ Accounting Tutors
Woodlynne, NJ ACT Tutors
Woodlynne, NJ Algebra Tutors
Woodlynne, NJ Algebra 2 Tutors
Woodlynne, NJ Calculus Tutors
Woodlynne, NJ Geometry Tutors
Woodlynne, NJ Math Tutors
Woodlynne, NJ Prealgebra Tutors
Woodlynne, NJ Precalculus Tutors
Woodlynne, NJ SAT Tutors
Woodlynne, NJ SAT Math Tutors
Woodlynne, NJ Science Tutors
Woodlynne, NJ Statistics Tutors
Woodlynne, NJ Trigonometry Tutors
|
{"url":"http://www.purplemath.com/woodlynne_nj_trigonometry_tutors.php","timestamp":"2014-04-18T16:06:00Z","content_type":null,"content_length":"24376","record_id":"<urn:uuid:10a2bfe0-cdd1-4fd4-ae6e-5f15f73b6ede>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Quantum Computing for High School Students
I know this seems corny. I'm a computer science grad student, coming to a high school class to give you a pep talk, and tell you about the many rewards of a career in math and science...
Well, that's not why I'm here. If it were up to me, I’d never think about science. I'd be a rock star or a football player. But it’s not up to me. And I'm going to tell you about science because my
interest is contagious, like the Ebola virus. So I’m sorry about that. It’s not my choice.
This is a chemistry class, right? To tell you the truth I don’t know a lot about chemistry. I guess you have atoms, with an itty-bitty nucleus on the inside, and this electron cloud outside, and they
stick together and make molecules, and there are rules, like, hydrogen only bonds to one thing, but carbon bonds to four things, at least most of the time—there are always exceptions to the
rules—Well, anyway, you know more about chemistry than I do.
But one thing I know is that what underlies chemistry is something called quantum mechanics. So let me ask—did you discuss quantum mechanics in this class? Is it in your textbook? Can I see your
Right. They always say something like, "People used to think that electrons were these particles that go around the nucleus like the Earth goes around the Sun. But now we know that actually an
electron has no definite position or velocity, and it's just this smear of probability wave, until you measure it, and it decides where it wants to be. But then you turn around and stop looking, and
it becomes a smear again."
I remember when I was in high school, thinking, what the hell does that mean? When we say an electron is a smear all over the place, isn't that just a fancy way of saying that it's somewhere, but we
don’t know where it is? Like, "Honey, where’d you put the car keys?" "Oh, they’re in a smear of probability wave all over the house."
So what's going on? If all quantum mechanics said was that we can’t know where the electron is—that all we know is that it has a 20% chance of being here, a 10% chance of being there, and so on—then
it wouldn’t be so strange. But what quantum mechanics says is stranger than that.
Let's say I was giving a weather forecast for tomorrow. What could I say? "There’s a 40% chance of showers, a 30% chance it will be partly cloudy..." What should the percentages add up to? Right,
100, assuming the events are mutually exclusive.
But could I ever say, "There's a –20% chance of rain tomorrow?" No? Why not?
Well, in quantum mechanics, instead of talking about probabilities, we talk about something called amplitudes. And amplitudes can be negative—they can go from –1 to 1. And to find the probability of
some event, you take the amplitude of the event, and you square it. What’s a negative number squared? Right, positive. So probabilities are still always from 0 to 1.
For example, if I were a quantum weather forecaster, I could say, "There's a 1/√2 amplitude of rain tomorrow, and a −1/√2 amplitude of sun." What's the square of 1/√2? 1/2. And of −1/√2? Also 1/2. So there's a half chance of rain, a half chance of sun. The probabilities add up to 1, and that makes sense.
Actually, amplitudes can also be complex numbers—did you learn about complex numbers? And to find the probability of an event, first you take the absolute value of the complex number, and then you
square it. So, suppose there’s an i/2 amplitude of rain tomorrow. Then what’s the probability? Right, 1/4. But from now on we'll ignore that detail.
But you might ask, what's the point of talking about things this way? Let me draw a picture:
Don't worry about the weird-looking brackets ("| ⟩"). That's called the Dirac ket notation; we use it to specify quantum states.
Say there's something about an electron we want to know, like whether it's spinning up or spinning down. What does that mean? I don’t know. But it’s not important. It’s just some property of the
electron. If you like, we want to know whether the electron is orange or purple.
Then we describe what we know by giving the amplitude that the electron is orange, and the amplitude that it's purple. And what is the sum of the squares of the amplitudes? Right, 1. So, if we had an
x-y plane, and we plotted x^2+y^2=1, what kind of shape would we get? Right, a circle.
Each radius of the circle corresponds to a possible state of the electron. And when we look at the electron, we force the radius to go either horizontal (orange) or vertical (purple). The closer it
is to orange, say, the more likely it is to jump to being completely orange, rather than completely purple. And if it jumps to orange, and then we look at it again (nothing having happened in
between) it will still be orange. So by the act of looking at it, we've changed the state.
It would be as though you're in bed at night, and there are monsters that sometimes take a pen and move it from one side of your night table to the other. So you get suspicious, and you turn on the
light, and—voila! The pen is just on this side. And you look again—still on that side! As if there never were any monsters.
So how do we know the monsters ever were there? Suppose that initially, we know the electron is orange. And then we do something to the electron—I dunno, shoot a laser beam at it. And that changes
the electron's state to point diagonally right and upwards—(|Orange⟩+|Purple⟩)/√2. If we look at it then, what will we see? Right, orange or purple, each with 1/2 probability.
But now suppose that, instead of looking at it, we do the same thing to it a second time—we shoot another laser beam at it. Does anyone have a coin? It would be as though we flipped this coin, and
then—without looking at the outcome—flipped it a second time. In the case of the coin, do we then know whether it’s heads or tails? Of course we don't.
But in the case of the electron, each time we shoot a laser beam at it, we rotate the radius 45 degrees counterclockwise. So we shoot once—half chance of being orange, half chance of being purple. We
shoot again—definitely purple! (What happens if we shoot a third time?)
Another way to understand what's going on is interference. The rule is,
|Orange⟩ → (|Orange⟩+|Purple⟩)/√2
|Purple⟩ → (|Orange⟩−|Purple⟩)/√2
So it goes,
The two purple paths interfere and cancel each other out, leaving only the orange paths. ("But I thought I had a half chance of being purple!" "Nope, sorry!")
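The cancellation can be checked directly by treating one "laser shot" as a linear map on amplitude vectors. A minimal sketch (an editorial addition; the matrix U below is just the stated rule written out, not code from any quantum library):

```python
import math

s = 1 / math.sqrt(2)
# Columns: where |Orange> and |Purple> are sent by one "laser shot".
U = [[s,  s],    # amplitude ending up on |Orange>
     [s, -s]]    # amplitude ending up on |Purple>

def apply(U, state):
    """One 2x2 matrix-vector product over amplitudes."""
    return [U[0][0] * state[0] + U[0][1] * state[1],
            U[1][0] * state[0] + U[1][1] * state[1]]

state = [1.0, 0.0]        # start definitely orange
state = apply(U, state)   # one shot: amplitudes (1/sqrt2, 1/sqrt2)
state = apply(U, state)   # second shot: the purple paths cancel
print(state)              # ~[1.0, 0.0] -> definitely orange again
assert math.isclose(state[0], 1.0) and abs(state[1]) < 1e-12
```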
You can start to see what's so weird about quantum mechanics. But if you have, say, 100 electrons instead of just one, then it gets even weirder. Because then, how many ways are there to color each
electron either orange or purple? Right, 2 × 2 × 2 ..., 100 times, or 2^100. 2^20 is already 1,048,576. 2^100 is a number with 31 digits.
And it turns out that, to specify the state of the system, you need to give an amplitude for each of those 2^100 possibilities. So what that means is that, in a sense, the universe is vastly bigger
than it looks. If I give you 100 electrons, you might think that it would only take 100, or 200, or 300 numbers to say everything there is to know about those electrons. But that's not true. It takes
about 2^100 numbers.
Everything I've said so far has been known, more or less, since the 1920's. Now I want to tell you what’s new in the last ten years, and what I’m doing research on. What's new is that we want to take
this quantum weirdness, and put it to work. We want to use it to build computers that can solve certain problems much faster than any computer today can.
Because think about it. What I've said means that, to keep track of what’s going on with only 100 particles, Nature, off to the side somewhere, has to keep track of about 2^100 numbers. So if Nature
is going to all that effort, why not take advantage of it? One of the first people to propose this was Richard Feynman, who you may have heard of.
The trouble is that as soon as we look at the electrons, we see only one state—this one's orange, this one's purple, etc.—like the monster that disappears when we turn the lights on. So if we want to
do a useful quantum computation, we have to set things up cleverly, so that states corresponding to wrong answers interfere and cancel each other out, leaving only (or mostly) states corresponding to
right answers. It's not obvious at all that you can do that, but it’s been discovered that for a few problems, you can.
Here's an example. What are the prime factors of 39? Right, 3 and 13. OK, what are the prime factors of 7,323,629? A bit harder, huh? It turns out that they’re 2161 and 3389. Now, after being told
that, is it easy to check whether that’s the right answer? Well, it's easy enough to multiply the numbers together and check what the product is. And as it turns out, there are also fast methods for
determining whether a number is prime or composite, but which (assuming it's composite) don't tell you what the prime factors are. So we could verify that 2161 and 3389 are prime.
But how would you find those numbers if you hadn't been told them? After 2000 years of mathematical effort, we still don't know of any method much better than just trying all possible divisors, one
after another. (We know of methods that are a little bit better.)
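The quoted factorization is easy to verify, and the brute-force "try all possible divisors" method described above fits in a few lines. A sketch (an editorial addition; the function name is my own; this is fine for a 7-digit number and hopeless for a 1000-digit one, which is the whole point):

```python
def trial_division(n):
    """Return the smallest prime factor of n by trying divisors in order."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d
        d += 1
    return n  # no divisor found: n itself is prime

n = 7323629
p = trial_division(n)
q = n // p
print(p, q)  # 2161 3389
assert p * q == n
# Both factors are themselves prime, as claimed in the text:
assert trial_division(p) == p and trial_division(q) == q
```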
Why is this problem important? Because of the sheer mathematical beauty of prime numbers? Well, how many of you have bought something from Amazon.com or eBay using a credit card? When you typed your
credit card number into the web page, it was encrypted, to prevent hackers from getting access to it. But think about it—how could it have been encrypted, if you've never met in private with anyone
from Amazon.com to agree on an encryption key? Well, in the 1970’s, an encryption system called RSA was invented that gets around this problem. The catch is that the security of RSA depends on the
assumption that finding the factors of enormous numbers (say with 1000 digits) is so hard that nobody will ever do it. If you discover a fast factoring method, then you can break RSA and steal
people's credit card numbers. Cool, huh? (Incidentally, it's no surprise that a good deal of the funding for quantum computing work comes from the Defense Department and the NSA.)
In 1994, this guy named Peter Shor discovered that, with a quantum computer, you could quickly find the factors of enormous numbers—and thereby break RSA. Now you might ask, "How much faster is
Shor's algorithm than classical algorithms? Ten times? 100 times?" But the point is that, as you go to larger and larger numbers, Shor's algorithm does better and better compared to any known
classical algorithm, until there’s just no comparison.
So the million dollar question is, can these quantum computers actually be built? Well, it's hard—mainly because the computer has to be shielded from interactions with the outside environment. But
there are experimentalists all around the world who are working on it. And they’ve succeeded in building extremely small quantum computers. Incidentally, a big part of what's involved in this is
chemistry ... i.e. synthesizing special-purpose molecules to do quantum computation. There was a big achievement about a year ago, when they got a quantum computer to determine that 15 equals 3 × 5.
Hey, 21 could be next.
Since Shor's algorithm, quantum algorithms have been found for a few other problems. But what my own work has focused on, mainly, is what quantum computers can't do. Why would anyone care about that?
Here’s how I think about it. If you prove that an ordinary classical computer can't solve a certain problem quickly, you might think, "yeah, but that's only because you’re not using a quantum
computer." But if you prove that a quantum computer can't do it, then at least given our current understanding of physics, you've established an ultimate limitation on the computational power of the
universe. And I think that's sort of cool.
One last thought. Going back to what I said about quantum mechanics, you might think that it makes no sense. Remember that an electron (say) is in this weird superposition of orange and purple, until
you look at it, at which time it makes up its mind which color to be. You might say, "What do you mean, until I look at it? The laws of physics aren't supposed to say, 'Things behave this way, until
a human looks.' They should apply equally well to anything, including my own brain!" (In the physics comedy show 'L'Universe,' one guy is juggling, and another is observing him with a clipboard, but
then he bonks the juggler on the head with the clipboard.)
There's a solution of sorts, but it's mind-blowing. Are you ready for it? OK. It's that, when you look at the electron, it's just an ordinary physical interaction involving the electron and your own
brain. And what happens is, the entire universe splits into two branches: one branch where you see an orange electron, and one where you see a purple electron. In the 'orange' branch, you see the
state as having jumped to orange, but that’s only because you have no contact with the parallel branch where it jumped to purple. So you can imagine that there are trillions of parallel you's, who
are going to different colleges, etc., and that there are parallel me's who are rock stars or football players instead of computer science grad students. But even if you accept that, I think that to
bridge the gap between that quantum multiverse view, and the world we actually experience (where definite things happen—at least to me!), will require some fundamental new ideas. That's it. Any
|
{"url":"http://www.scottaaronson.com/writings/highschool.html","timestamp":"2014-04-19T09:25:45Z","content_type":null,"content_length":"21002","record_id":"<urn:uuid:9f8ac2c9-6159-41ca-b1f1-365700698679>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Computing symmetric matrices
Computing symmetric matrices

kde-blabla — Sun Sep 08, 2013 10:30 pm
I want to compute a symmetric matrix A from a vector b by: A = b * b'.
Does the Eigen library automatically take into account that it does not need to do all the calculations to get A (because the symmetry repeats most of the matrix entries)?

Re: Computing symmetric matrices
jitseniesen — Mon Sep 09, 2013 7:06 am
No, not automatically. You need to specify this yourself. The .rankUpdate() member function performs the computation A = A + b * b' on the upper or lower triangular part of A (here, b' is the transpose of b, which I assume is the notation you use). So you can do:

Code: Select all
A.triangularView<Lower>().setZero();       // set lower triangular part of A to zero
A.selfadjointView<Lower>().rankUpdate(b);  // compute b * b' and put it in the lower triangular part
A.triangularView<StrictlyUpper>() = A.triangularView<StrictlyLower>().transpose();  // reflect the lower triangular part into the upper triangular part

Depending on your application, the last line might not be necessary.

Re: Computing symmetric matrices
kde-blabla — Mon Sep 09, 2013 3:48 pm
Thanks, that's exactly what I was looking for.
Do you have any indication of the speed difference between this method and the ordinary A = b * b'?

Re: Computing symmetric matrices
ggael — Tue Sep 10, 2013 8:11 am
You can expect a ×2 speedup for large matrices (>1000^2) and no speedup for small ones (around 30^2). Also, the rankUpdate version is not multi-threaded, so even for large matrices it can be slower if you enabled OpenMP.
|
{"url":"http://forum.kde.org/viewtopic.php?f=74&t=117406&p=291820","timestamp":"2014-04-18T18:20:10Z","content_type":null,"content_length":"33284","record_id":"<urn:uuid:0cc78b5f-3ca5-4be4-bb1d-0b1dde531cca>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Arizona City Algebra 2 Tutor
Find an Arizona City Algebra 2 Tutor
...The math courses I have assisted students in include, but are not limited to: algebra I and II, trigonometry, geometry, pre-calculus, calculus I, II, and III, vector calculus, ordinary
differential equations, linear algebra, and partial differential equations. The physics courses I have assisted...
26 Subjects: including algebra 2, chemistry, calculus, physics
...Kong is very enthusiastic about math and teaches it in a way that everyone will understand. He is both fun and serious about his work." "An inspiration. He touches the lives of everyone he
teaches and gives students a new confidence in themselves.
15 Subjects: including algebra 2, calculus, geometry, statistics
...All development projects I have worked on included system documentation, user's guide and training. Environments I have worked under include: Mainframes, Personal Computers, Local Area
Networks, and Mini Computers. I have conducted studies to analyze future computer needs and established the direction of the data processing section within the department.
12 Subjects: including algebra 2, algebra 1, Microsoft Excel, computer programming
...I have had experience tutoring all of those classes with the exception statistics, however, I am confident in my teaching abilities. I realize everyone learns differently and am very good at
explaining things multiple ways. I am also am a college football player and track athlete, and have tons...
28 Subjects: including algebra 2, chemistry, reading, calculus
...When it comes to career development, I use a similar approach to my regular tutoring approach. I take into account the your personality type and your learning style. You and I will look at your
work, life, and volunteer experiences and education.
29 Subjects: including algebra 2, reading, English, writing
|
{"url":"http://www.purplemath.com/arizona_city_az_algebra_2_tutors.php","timestamp":"2014-04-18T08:22:22Z","content_type":null,"content_length":"24022","record_id":"<urn:uuid:e2c214d9-233d-4161-93aa-b5d683b463d1>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00364-ip-10-147-4-33.ec2.internal.warc.gz"}
|
the encyclopedic entry of Chain Rule

In calculus, the chain rule is a formula for computing the derivative of the composition of two functions.

In intuitive terms, if a variable, y, depends on a second variable, u, which in turn depends on a third variable, x, then the rate of change of y with respect to x can be computed as the rate of change of y with respect to u multiplied by the rate of change of u with respect to x.
Informal discussion
For an explanation of notation used in this section, see Function composition.
The chain rule states that, under appropriate conditions,

$(f \circ g)'(x) = f'(g(x))\, g'(x),$

which in short form is written as

$(f \circ g)' = (f' \circ g) \cdot g'.$

Alternatively, in the Leibniz notation, the chain rule is

$\frac{dy}{dx} = \frac{dy}{du} \cdot \frac{du}{dx}.$

In integration, the counterpart to the chain rule is the substitution rule.
The chain rule in one variable may be stated more completely as follows. Let g be a real-valued function on (a, b) which is differentiable at c ∈ (a, b); and let f be a real-valued function defined on an interval I containing the range of g, with g(c) as an interior point of I. If f is differentiable at g(c), then

• $(f \circ g)(x)$ is differentiable at x = c, and
• $(f \circ g)'(c) = f'(g(c))\, g'(c).$
Example I

Suppose that a mountain climber ascends at a rate of 0.5 kilometers per hour. The temperature is lower at higher elevations; suppose the rate by which it decreases is 6 °C per kilometer. If one multiplies 6 °C per kilometer by 0.5 kilometer per hour, one obtains 3 °C per hour. This calculation is a typical chain rule application.
Example II

Consider the function f(x) = (x^2 + 1)^3. Since f(x) = h(g(x)) where g(x) = x^2 + 1 and h(x) = x^3, it follows from the chain rule that

$f'(x) = h'(g(x))\, g'(x) = 3(g(x))^2 (2x) = 3(x^2 + 1)^2 (2x) = 6x(x^2 + 1)^2.$

In order to differentiate the trigonometric function

$f(x) = \sin(x^2),$

one can write f(x) = h(g(x)) with h(x) = \sin x and g(x) = x^2. The chain rule then yields

$f'(x) = 2x \cos(x^2),$

since h'(g(x)) = \cos(x^2) and g'(x) = 2x.
Example III
Differentiate arctan(sin x).
$\frac{d}{dx}\arctan x = \frac{1}{1+x^2}$
Thus, by the chain rule,
$\frac{d}{dx}\arctan f(x) = \frac{f'(x)}{1+f(x)^2},$
and in particular,
$\frac{d}{dx}\arctan(\sin x) = \frac{\cos x}{1+\sin^2 x}.$
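This particular derivative can also be checked numerically (an illustrative check of my own, not part of the article):

```python
import math

def f(x):
    return math.atan(math.sin(x))

def f_prime(x):
    # cos x / (1 + sin^2 x), as derived above
    return math.cos(x) / (1 + math.sin(x)**2)

h = 1e-6
for x in (0.0, 0.7, 2.1):
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - f_prime(x)) < 1e-4
```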
Chain rule for several variables
The chain rule works for functions of more than one variable. Consider the function z = f(x, y) where x = g(t) and y = h(t), and g(t) and h(t) are differentiable with respect to t, then
$\frac{dz}{dt} = \frac{\partial z}{\partial x}\frac{dx}{dt} + \frac{\partial z}{\partial y}\frac{dy}{dt}.$
Suppose that each argument of z = f(u, v) is a two-variable function such that u = h(x, y) and v = g(x, y), and that these functions are all differentiable. Then the chain rule would look like:
$\frac{\partial z}{\partial x} = \frac{\partial z}{\partial u}\frac{\partial u}{\partial x} + \frac{\partial z}{\partial v}\frac{\partial v}{\partial x},$
$\frac{\partial z}{\partial y} = \frac{\partial z}{\partial u}\frac{\partial u}{\partial y} + \frac{\partial z}{\partial v}\frac{\partial v}{\partial y}.$
If we considered
$\vec r = (u, v)$
above as a vector function, we can use vector notation to write the above equivalently as the dot product of the gradient of f and a derivative of $\vec r$:
$\frac{\partial f}{\partial x} = \vec\nabla f \cdot \frac{\partial \vec r}{\partial x}.$
More generally, for functions of vectors to vectors, the chain rule says that the Jacobian matrix of a composite function is the product of the Jacobian matrices of the two functions:
$\frac{\partial(z_1,\ldots,z_m)}{\partial(x_1,\ldots,x_p)} = \frac{\partial(z_1,\ldots,z_m)}{\partial(y_1,\ldots,y_n)}\, \frac{\partial(y_1,\ldots,y_n)}{\partial(x_1,\ldots,x_p)}.$
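The one-parameter form of the multivariable rule is easy to verify numerically; the example function below (z = xy along x = cos t, y = sin t) is my own choice, not from the article:

```python
import math

def z(t):
    return math.cos(t) * math.sin(t)

def dz_dt(t):
    x, y = math.cos(t), math.sin(t)
    dz_dx, dz_dy = y, x                      # partials of z = x*y
    dx_dt, dy_dt = -math.sin(t), math.cos(t)
    return dz_dx * dx_dt + dz_dy * dy_dt     # multivariable chain rule

h = 1e-6
for t in (0.2, 1.0, 2.5):
    numeric = (z(t + h) - z(t - h)) / (2 * h)
    assert abs(numeric - dz_dt(t)) < 1e-6
```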
Proof of the chain rule
Let f and g be functions and let x be a number such that f is differentiable at g(x) and g is differentiable at x. Then by the definition of differentiability,
$g(x+\delta)-g(x) = \delta\, g'(x) + \epsilon(\delta)\delta,$
where ε(δ) → 0 as δ → 0. Similarly,
$f(g(x)+\alpha) - f(g(x)) = \alpha\, f'(g(x)) + \eta(\alpha)\alpha,$
where η(α) → 0 as α → 0.
$f(g(x+\delta))-f(g(x)) = f(g(x) + \delta g'(x) + \epsilon(\delta)\delta) - f(g(x)) = \alpha_\delta f'(g(x)) + \eta(\alpha_\delta)\alpha_\delta,$
where
$\alpha_\delta = \delta g'(x) + \epsilon(\delta)\delta.$
Observe that as δ → 0, α[δ]/δ → g′(x) and α[δ] → 0, and thus η(α[δ]) → 0. It follows that
$\frac{f(g(x+\delta))-f(g(x))}{\delta} \to g'(x)\, f'(g(x)) \text{ as } \delta \to 0.$
The fundamental chain rule
The chain rule is a fundamental property of all definitions of derivative and is therefore valid in much more general contexts. For instance, if E, F and G are Banach spaces (which includes
Euclidean space) and f : E → F and g : F → G are functions, and if x is an element of E such that f is differentiable at x and g is differentiable at f(x), then the derivative (the Fréchet
derivative) of the composition g o f at the point x is given by
$\mathrm{D}_x(g \circ f) = \mathrm{D}_{f(x)}(g) \circ \mathrm{D}_x(f).$
Note that the derivatives here are linear maps and not numbers. If the linear maps are represented as matrices (namely Jacobians), the composition on the right hand side turns into a matrix multiplication.
A particularly clear formulation of the chain rule can be achieved in the most general setting: let M, N and P be C^k manifolds (or even Banach-manifolds) and let
f : M → N and g : N → P
be differentiable maps. The derivative of f, denoted by df, is then a map from the tangent bundle of M to the tangent bundle of N, and we may write
$\mathrm{d}(g \circ f) = \mathrm{d}g \circ \mathrm{d}f.$
In this way, the formation of derivatives and tangent bundles is seen as a functor on the category of C^∞ manifolds with C^∞ maps as morphisms.
Tensors and the chain rule
See tensor field for an advanced explanation of the fundamental role the chain rule plays in the geometric nature of tensors.
Higher derivatives
Faà di Bruno's formula generalizes the chain rule to higher derivatives. The first few derivatives are
$\frac{d(f \circ g)}{dx} = \frac{df}{dg}\frac{dg}{dx}$
$\frac{d^2 (f \circ g)}{dx^2} = \frac{d^2 f}{dg^2}\left(\frac{dg}{dx}\right)^2 + \frac{df}{dg}\frac{d^2 g}{dx^2}$
$\frac{d^3 (f \circ g)}{dx^3} = \frac{d^3 f}{dg^3}\left(\frac{dg}{dx}\right)^3 + 3\,\frac{d^2 f}{dg^2}\frac{dg}{dx}\frac{d^2 g}{dx^2} + \frac{df}{dg}\frac{d^3 g}{dx^3}$
$\frac{d^4 (f \circ g)}{dx^4} = \frac{d^4 f}{dg^4}\left(\frac{dg}{dx}\right)^4 + 6\,\frac{d^3 f}{dg^3}\left(\frac{dg}{dx}\right)^2\frac{d^2 g}{dx^2} + \frac{d^2 f}{dg^2}\left\{4\,\frac{dg}{dx}\frac{d^3 g}{dx^3} + 3\left(\frac{d^2 g}{dx^2}\right)^2\right\} + \frac{df}{dg}\frac{d^4 g}{dx^4}.$
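The second-derivative formula can be spot-checked with a finite second difference; the choice f = sin, g(x) = x^2 below is my own illustration, not from the source:

```python
import math

# f(g) = sin g, g(x) = x^2; per the formula above,
# d2(f o g)/dx2 = f''(g) (g')^2 + f'(g) g''
def comp(x):
    return math.sin(x**2)

def second_deriv(x):
    return -math.sin(x**2) * (2*x)**2 + math.cos(x**2) * 2

h = 1e-4
for x in (0.5, 1.2):
    numeric = (comp(x + h) - 2*comp(x) + comp(x - h)) / h**2
    assert abs(numeric - second_deriv(x)) < 1e-4
```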
See also
External links
|
Summary: Practice Problems For Final Math 5B
Reminders about the final exam
· The final will be on Thursday, March 17 from 8-11am in your regular classroom
· Bring a photo ID to the exam
· There are NO calculators allowed
· You are allowed a 4 × 6 notecard with whatever you want written on both sides
· The final is worth 50% of your grade
· The final is cumulative (Chapters 1 - 7.4)
Disclaimer: The following problems and formulas are based on what I believe to be important,
which may or may not coincide with what will be covered on your final exam. This document should
in no way replace careful studying of your notes, previous midterms, homework, study guides, and
the textbook.
Chapter 1: Vectors, Matrices, and Applications
Formulas and Equations
One way to paramaterize a line that runs through the point p in the direction of v is:
p + tv,  t ∈ ℝ
If θ is the angle between the vectors v and w then:
v · w = ‖v‖ ‖w‖ cos(θ)
The vector projection of b onto a is:
proj_a(b) = ((a · b)/‖a‖²) a
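These definitions translate directly into code (a sketch using plain Python lists; not part of the study guide):

```python
import math

def dot(v, w):
    return sum(a*b for a, b in zip(v, w))

def norm(v):
    return math.sqrt(dot(v, v))

def angle(v, w):
    # from v . w = ||v|| ||w|| cos(theta)
    return math.acos(dot(v, w) / (norm(v) * norm(w)))

def proj(a, b):
    # vector projection of b onto a: (a . b / ||a||^2) a
    c = dot(a, b) / dot(a, a)
    return [c * ai for ai in a]

print(proj([1, 0], [3, 4]))                        # [3.0, 0.0]
print(round(math.degrees(angle([1, 1], [1, 0]))))  # 45
```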
|
Isosceles Triangle
A selection of articles related to isosceles triangle.
Original articles from our library related to the Isosceles Triangle. See Table of Contents for further available material (downloadable resources) on Isosceles Triangle.
Isosceles Triangle is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Isosceles Triangle books and related discussion.
Suggested Pdf Resources
Suggested News Resources
Suggested Web Resources
An isosceles triangle is a triangle with (at least) two equal sides. In the figure An isosceles triangle therefore has both two equal sides and two equal angles.
Definition and properties of isosceles triangles. Includes a cool math applet useful as a classroom activity and manipulative. AN OER resource.
Illustrated Math Dictionary - Definition of Isosceles Triangle.
Nov 8, 2007 For a complete lesson on the isosceles triangle theorem, go to http://www. yourteacher.
Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We
appreciate your suggestions and comments on further improvements of the site.
|
MathGroup Archive: February 2005 [00124]
Re: Product {for p=2 to infinity} (p^2+1)/(p^2-1)
• To: mathgroup at smc.vnet.net
• Subject: [mg53968] Re: [mg53937] Product {for p=2 to infinity} (p^2+1)/(p^2-1)
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Sat, 5 Feb 2005 03:16:29 -0500 (EST)
• References: <200502040912.EAA01094@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
Zak Seidov wrote:
>Dear Math gurus,
>I try to copy Math session:
>FullSimplify[Product[(p^2 + 1)/(p^2 - 1), {p, 2, Infinity}]]
>Sinh[Pi]/Pi,
>that is,
> Product {for p=2 to infinity} (p^2+1)/(p^2-1)=>
>sinh(pi)/pi=3.67608 -
>is it OK,
>or this should be 3/2?
>Please email me your help:
>seidovzf at yahoo.com
>Many thanks,
The result you get is consistent with a numeric approximation e.g. by
taking the product to 10^3.
Alternatively you can try to verify that the sum of logs is correct by
doing an indefinite sum. First we do the sum to infinity and recover the
log of your result.
In[5]:= Sum[Log[(p^2+1)/(p^2-1)], {p,2,Infinity}]
Out[5]= Log[2] - LogGamma[2 - I] - LogGamma[2 + I]
Fine so far. Now we do the indefinite summation, that is, sum to
arbitrary 'n'.
In[8]:= f[n_] = Sum[Log[p^2+1]-Log[p^2-1], {p,2,n}]
Out[8]= Log[2] - Log[Gamma[n]] + Log[Gamma[(1 - I) + n]] +
Log[Gamma[(1 + I) + n]] - Log[Gamma[2 + n]] - LogGamma[2 - I] -
LogGamma[2 + I]
We check that this agrees in the limit as n goes to infinity.
In[9]:= InputForm[ff = Limit[f[n], n->Infinity]]
((3*I)*Pi + Log[8] - 3*Log[-Gamma[2 - I]] - 3*Log[Gamma[2 + I]])/3
In[10]:= InputForm[FullSimplify[ff]]
Out[10]//InputForm= Log[Sinh[Pi]/Pi]
Finally one can ask whether the indefinite sum is correct. To show this
you would start with:
In[31]:= InputForm[g[n_] = FullSimplify[f[n]-f[n-1]]]

Out[31]//InputForm= -Log[Gamma[-I + n]] - Log[Gamma[I + n]] +
  Log[Gamma[(1 - I) + n]/(-1 + n^2)] + Log[Gamma[(1 + I) + n]]
You can now try various methods to confirm that this is your summand.
I'll leave the symbolic proof to others and just show a quick numerical check:
In[34]:= FullSimplify[g[5] - (Log[5^2+1] - Log[5^2-1])]
Out[34]= 0
Daniel Lichtblau
Wolfram Research
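As a quick numerical sanity check of the closed form discussed in this thread (my addition, not part of the archived message), a partial product converges to sinh(π)/π:

```python
import math

# partial product of (p^2+1)/(p^2-1) for p = 2..N
partial = 1.0
for p in range(2, 200000):
    partial *= (p*p + 1) / (p*p - 1)

closed_form = math.sinh(math.pi) / math.pi   # about 3.67608
assert abs(partial - closed_form) < 1e-3
print(partial, closed_form)
```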
|
Grade 11 - Physics - Gravitation - Concept of Gravitation
Acceleration due to gravity
The acceleration with which a body falls towards the earth is known as acceleration due to gravity.
The acceleration due to gravity g near the surface of the earth is about 9.8 m s⁻².
Case 1 :
Let the body of mass m be at a height h from the surface of the earth. If the radius of the earth is R, then as its distance r from the center of the earth is r = (R + h), the downward force on the body is given by
F = GMm/(R + h)²,
where M is the mass of the earth. If the body happens to be very close to the surface, h = 0, and the gravitational force on it is
F = GMm/R².
The acceleration generated due to this gravitational force is, therefore, given by
g = GM/R².
The above relation is true for all planets and other heavenly bodies.
(i) Mass of the earth
It is possible to calculate the mass of the earth using the above relation. As the radius of the earth is R ≈ 6.4 × 10⁶ m and g ≈ 9.8 m s⁻², putting G = 6.67 × 10⁻¹¹ N m² kg⁻², we get
M = gR²/G ≈ 6.0 × 10²⁴ kg.
(ii) Mean density of the earth
Suppose the mean density of the earth is ρ; then as M = (4/3)πR³ρ,
g = GM/R² = (4/3)πGρR, so ρ = 3g/(4πGR) ≈ 5.5 × 10³ kg m⁻³.
Above the surface
The gravitational force acting on a body of mass m when at a height h from the surface of the earth is
F = GMm/(R + h)²;
hence, the body is subjected to a downward acceleration given by
g_h = GM/(R + h)².
As g = GM/R², therefore g_h = g(R/r)², where r = (R + h).
NOTE: The above expression shows that the value of acceleration due to gravity decreases with height. It falls inversely with the square of the distance provided the distances are measured from the
center of the earth.
When h ≪ R (h is negligible compared to R), the relation can be further simplified by using the binomial theorem. Accordingly,
g_h ≈ g(1 − 2h/R).
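The formulas above are easy to evaluate numerically (the constants below are standard textbook values; the check of the binomial approximation is my own):

```python
G = 6.674e-11   # N m^2 kg^-2
M = 5.972e24    # kg, mass of the earth
R = 6.371e6     # m, mean radius of the earth

def g_at_height(h):
    # g_h = GM / (R + h)^2
    return G * M / (R + h)**2

g0 = g_at_height(0.0)

# binomial approximation for h << R: g_h ~ g0 (1 - 2h/R)
h = 1000.0
approx = g0 * (1 - 2*h/R)
assert abs(approx - g_at_height(h)) / g0 < 1e-6
print(round(g0, 2))   # about 9.82
```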
|
angle and velocity
February 13th 2010, 06:05 PM
angle and velocity
ive been trying this problem for about an hour now, but i cant seem to figure it out.
a cannon shoots a cannonball at an angle of 37 degrees from a cliff 40m high. with a horizontal velocity of 35.7m/s, find the angle the cannonball strikes the ground, and velocity it strikes the
February 14th 2010, 01:58 AM
ive been trying this problem for about an hour now, but i cant seem to figure it out.
a cannon shoots a cannonball at an angle of 37 degrees from a cliff 40m high. with a horizontal velocity of 35.7m/s, find the angle at which the cannonball strikes the ground, and the velocity with which it strikes the ground.

If the horizontal component of initial velocity is 35.7, then the vertical component is 35.7 tan(37). You can then calculate that the vertical velocity at each time t seconds after being fired is −9.8t + 35.7 tan(37) while the horizontal velocity is constant at 35.7.
The vertical height of the cannonball, taking its initial position as h=0 is $h= -4.9t^2+ 35.7tan(37)t$. The ground at the bottom of the cliff is at h= -40 so you can find the time the cannonball
hits the ground by solving $-4.9t^2+ 35.7 tan(37) t= -40$.
You can then calculate the velocity vector by putting that time into v_x= 35.7 and $v_y= -9.8t+ 35.7 tan(37)$.
The angle at which the cannonball hits the ground, measured from the horizontal, is given by $\tan^{-1}\left(\frac{|v_y|}{v_x}\right)$.
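Putting these steps together numerically (a sketch of my own; g = 9.8 m/s² as in the reply above):

```python
import math

g = 9.8
vx = 35.7                              # horizontal component, constant
vy0 = vx * math.tan(math.radians(37))  # initial vertical component

# height equation: -4.9 t^2 + vy0 t = -40 ; take the positive root
t = (vy0 + math.sqrt(vy0**2 + 4 * 4.9 * 40)) / (2 * 4.9)

vy = vy0 - g * t                                  # vertical velocity at impact
speed = math.hypot(vx, vy)
angle_below = math.degrees(math.atan2(-vy, vx))   # angle below horizontal
print(round(t, 2), round(speed, 1), round(angle_below, 1))
```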
|
Konisberg Solution
This is solved in the same way as the famous "Seven Bridges of Konigsberg" problem first solved by Euler. In that problem, the task was to find a closed path that crossed each of the seven bridges of
Konigsberg (now Kaliningrad, Russia) exactly once. For reasons given below, no such path existed. In this version, you cannot draw such a line without cheating by:
(1) drawing a line along one of the edges, or (2) inscribing the diagram on a torus, or (3) defining a line segment as the entire length of each straight line, or (4) adding a vertex on one of the
line segemnts, or (5) defining "crossing" as touching the endpoint of a segment.
The method for determining if paths exist in all similar problems is given below.
Turn each "room" into a point. Turn each line segment into a line connecting the two points representing the rooms it abuts. You should be able to see that drawing one continuous line across all
segments in your drawing is equivalent to traversing all the edges in the resulting graph. Euler's Theorem states that for a graph to be traversable, the number of vertices with an odd number of
edges proceeding from them must be either zero or two. For this graph, that number is four, and it cannot be traversed.
| 1 | 2 | 3 |
| 4 | 5 |
Number of edges proceeding from each vertex
1: 4 2: 5 (odd) 3: 4 4: 5 (odd) 5: 5 (odd) 6: 9 (odd)
To prove Euler's Theorem, think of walking along the graph from vertex to vertex. Each vertex must be entered as many times as it is exited, except for where you start and where you end. So, each
vertex must have an even number of edges, except possibly for two vertices. And if there are two vertices with an odd number of edges, the path must start at one and end at the other.
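Euler's criterion is easy to check mechanically. The edge list below is my own reconstruction of the five-room layout (vertices 1-5 are the rooms, 6 is the outside); it reproduces the degree table above:

```python
from collections import Counter

# Multigraph of the five-room puzzle: each edge is one wall segment.
edges = [(1, 6), (1, 6), (1, 2), (1, 4),
         (2, 6), (2, 3), (2, 4), (2, 5),
         (3, 6), (3, 6), (3, 5),
         (4, 6), (4, 6), (4, 5),
         (5, 6), (5, 6)]

degree = Counter()
for u, v in edges:
    degree[u] += 1
    degree[v] += 1

odd = sorted(v for v, d in degree.items() if d % 2)
print(sorted(degree.items()))  # [(1, 4), (2, 5), (3, 4), (4, 5), (5, 5), (6, 9)]
print(odd)                     # [2, 4, 5, 6] -> four odd vertices, no Euler path
```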
|
[Team] [Source Code] [News] [Miscellaneous]
Arithmetic Operations Simulation Library
The aim behind Arithmetic Operations Simulation Library is to develop an open source library to simulate heavy arithmetic operations efficiently (using shift, add and subtract arithmetic) in low cost
DSP systems, where often these are absent in initial development stages, and sometimes even later.
Another aim is to simulate floating point numbers/arithmetic using integers, and simulation of extended integer arithmetic in a way that it can easily be ported to different architectures.
Currently this library is being developed as a part of utility functions library that also contains string manipulation functions.
Project Coordinator : Sandeep Kumar (EMail)
Developer : Sandeep Kumar
20-05-2006 : First sourceforge release (V0.01a).
15-05-2006 : Project checked into Sourceforge CVS, with arithmetic and string-processing libraries separated under arithsim and stringlib sub-directories in utilfuncs package. Division and mod
functions now raise a preconfigured signal (currently SIGFPE) on encountering a zero denominator.
After I shared initial project ideas with Mr. Damodar Kulkarni (another visiting faculty at Pune University Computer Science Department), he suggested to include simulation of floating point
arithmetic using integers, also in project aims. Further discussion also brought a question - can we simulate extended integer arithmetic efficiently in a way that it can easily be ported to
different architectures?
12-05-2006 : Initial (V0.01) Freshmeat announcement of the project. It supported finding first 1/0 bit from left/right (MSB/LSB), multiplication, division and mod operations on two 16-bit unsigned
numbers and special case of division by 3, and O(n) ways of finding/counting alphabets in a string.
Brief discussions on logic and programming tricks used in utilfuncs are available here. In case you want to request features, report bugs, submit patches or just share your views on the project,
please visit project summary page at .
Page last updated on May 20, 2006.
|
The physics of personal income - physicsworld.com
Many attempts have been made to model the distribution of incomes in a society, but until now no formula had successfully described all salary levels and periods in history. The economist Pareto
proposed in 1897 that income distribution followed a simple power law – that is, the number of people earning a certain wage falls as that wage rises. This law is characterized by the ‘Pareto index’,
which is small if incomes are distributed unevenly across the population, and large if the spread is more equal.
But Pareto’s theory only holds for the top 1% of earners. The economist Gibrat later found that the incomes of the remaining 99% of earners follow a log-normal distribution – that is, the logarithms
of the incomes have a normal, symmetrical distribution. This relationship is defined by the ‘Gibrat index’, which is also small for uneven income distributions.
In order to combine these formulas, Souma analysed the salary data of over 80% of the working Japanese population. Employing techniques commonly used to model ‘many-body’ systems in condensed matter
physics, he has devised a formula – consisting of a log-normal curve with a power-law tail – that successfully describes the income distribution of the whole population.
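Souma's hybrid shape can be sketched as a piecewise density: log-normal below a crossover income and power-law above it, matched for continuity. The parameter values below are invented for illustration, not Souma's fitted indices:

```python
import math

mu, sigma = 0.0, 1.0   # log-normal (Gibrat) parameters for the body
alpha = 2.0            # Pareto index for the tail
x_c = 10.0             # crossover income

def lognormal_pdf(x):
    z = (math.log(x) - mu) / sigma
    return math.exp(-z*z/2) / (x * sigma * math.sqrt(2*math.pi))

# choose the tail prefactor so the two pieces agree at x_c
C = lognormal_pdf(x_c) * x_c**(alpha + 1)

def income_pdf(x):
    return lognormal_pdf(x) if x < x_c else C * x**(-(alpha + 1))
```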
The income details analysed by Souma dated from 1887 to 1998, which enabled him to study how the distribution evolved. Comparing his work with an earlier American study, Souma notes that the indices
in the new formula are almost identical in Japan and the US.
Souma does not attempt to explain why income follows the observed distribution, but he believes that the new model is a fundamental law of economics that could describe income distribution in all
societies at any point in history.
|
Bruce A. Wade
Office: EMS E447
Phone: (414) 229-5225
E-mail: wade@uwm.edu
Web: http://pantherfile.uwm.edu/wade/www/
Bruce A. Wade's CV
Educational Degrees
Ph.D., Wisconsin, 1987
M.A., Wisconsin, 1984
B.S., Wisconsin, 1982
Research Positions
Post-Doctoral Fellowship, Cornell University, 1987-1989
Research Interests
Numerical Analysis & Computational Mathematics
Industrial Mathematics
Selected Service, Projects, and Publications
• Founder & Director, Center for Industrial Mathematics, University of Wisconsin-Milwaukee (CIM)
• Co-Founder & Advisor, Applied Mathematics and Computer Science (AMCS) Degree, University of Wisconsin-Milwaukee
• Co-Founder & Co-General Chair, Computational and Mathematical Methods in Science and Engineering (CMMSE) conference.
• Former Associate Chair.
• Developed new courses Math 601, 602 (Advanced Engineering Mathematics I, II), 701, 702 (Industrial Mathematics I, II).
• Smoothing Schemes for Reaction-Diffusion Systems with Nonsmooth Data, Journal of Computational and Applied Mathematics, 223, 1, (2009), 374--386, with A. Q. M. Khaliq, J. Martín-Vaquero, and M.
• Numerical Solution of a Long-term Average Control Problem for Singular Stochastic Processes, Mathematical Methods of Operations Research, 66, (2007), 451--473, with P. Kaczmarke, S. T. Kent, G.
A. Rus, and R. H. Stockbridge.
• On Smoothing of the Crank-Nicolson Scheme and Higher order Schemes for Pricing Barrier Options, Journal of Computational and Applied Mathematics, 204, 1, (2007), 144--158, with A. Q. M. Khaliq,
M. Yousuf, J. Vigo-Aguiar, and R. Deininger.
Bruce Wade on MathSciNet (requires subscription)
|
Oxford Mathematics New Syllabus 6th Edition
New Syllabus Mathematics 6th Edition (authors - Teh Keng Seng, Loh Cheng Yee; consultant - Dr Yeap Ban Har) is a series of four textbooks. This title covers the ...
Free eBook and manual for Business, Education, Finance, Inspirational, Novel, Religion, Social, Sports, Science, Technology, Holiday, Medical, new syllabus mathematics ...
|
Geosynchronous station keeping
I'm having problems figuring out geosynchronous station keeping requirements due to triaxiality.
So far, I've gotten as far as finding the equilibrium points at about 75 degrees and 255 degrees, but I can't get from there to the station keeping requirements for satellites located at other
longitudes. The angular, or longitudinal, accleration should equal:
[tex]\ddot\lambda = -\left(\omega^2_E \left(\frac{R_E}{a}\right)^2\right) \left(-18J_{22} sin(2(\lambda - \lambda_{22}))\right)[/tex]
[tex]\lambda[/tex] is longitude with [tex]\lambda_{22}[/tex] being a constant that goes along with [tex]J_{22}[/tex] for one of the Earth's spherical harmonics due to triaxiality.
[tex]\omega_E[/tex] is the rotation rate of the Earth, [tex]R_E[/tex] is the radius of the Earth, and a is the orbit's semi-major axis.
There's another variation of this I found in NASA's TM-2001-210854, Integrated Orbit, Attitude, and Structural Control Systems Design for Space Solar Power Satellites, but it yields the same results.
They just merged the mean motion into the rest of the equation. Since the whole purpose of geosynchronous satellites is for the orbit's mean motion to match the Earth's rotation rate, I like the
version where its separate, better.
I found the equilibrium points by finding where angular acceleration equaled zero. If I convert this to linear acceleration, I think I should get the acceleration necessary to stay in the same place
for other longitudes. If projected over the course of one year, I get a maximum of around 5.203 meters/second/year, which is about 3 times too big.
Wertz and Larsen's Space Mission Analysis and Design just use the equation:
[tex]\Delta V = 1.715 \sin(2(\lambda - \lambda_s))[/tex]
Their equation does produce a realistic maximum. Unfortunately, 1.715 doesn't tell me anything.
Anyone know the missing link, here?
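For what it's worth, plugging standard constants into the quoted drift equation reproduces the ~5 m/s-per-year figure described above (the J22 and λ22 values below are the commonly tabulated ones; treat this as an order-of-magnitude sketch, not a resolution of the discrepancy):

```python
import math

omega_E = 7.2921159e-5    # rad/s, earth rotation rate
R_E = 6378.137e3          # m, equatorial radius
a = 42164.0e3             # m, geostationary semi-major axis
J22 = 1.7555e-6
lam22 = math.radians(-14.9)   # phase angle of the J22 term

def lambda_ddot(lam_deg):
    lam = math.radians(lam_deg)
    return -(omega_E**2) * (R_E / a)**2 * (-18.0 * J22 * math.sin(2*(lam - lam22)))

# worst-case tangential acceleration, naively integrated over one year
acc_max = max(abs(lambda_ddot(l) * a) for l in range(360))  # m/s^2
dv_per_year = acc_max * 365.25 * 86400
print(dv_per_year)   # roughly 5 m/s per year
```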
|
Dunn Loring Statistics Tutor
Find a Dunn Loring Statistics Tutor
I have a masters in economics and a strong math background. I have previously taught economics at the undergraduate level and can help you with microeconomics, macroeconomics, econometrics and
algebra problems. I enjoy teaching and working through problems with students since that is the best way ...
14 Subjects: including statistics, calculus, geometry, algebra 1
...I also have over 20 years of research experience in the social sciences, most recently in the fields of early education and health care. Whether you need help planning your research project,
cleaning and managing your data, entering your data, analyzing your data, writing up your results, or lea...
6 Subjects: including statistics, SPSS, Microsoft Excel, Microsoft Word
...This leads to deep, lasting knowledge, and more importantly, sharpened critical thinking and reasoning skills. The ultimate goal is that a student no longer needs me, because he or she has the
tools to reason through tough problems and figure it out by himself or herself. I am enthusiastic, flexible, and positive in my teaching.
6 Subjects: including statistics, biology, SAT math, SAT reading
...Math and sciences are my two strongest points. (I have scored above the 80th percentile on all standardized tests scores including SAT, ACT, Regents [New York standardized tests] and GRE.)
Also, I have thorough knowledge of architecture and construction. I worked in contracting and roofing fo...
16 Subjects: including statistics, geometry, algebra 1, algebra 2
...I have a bachelor and masters degree in engineering, and scored 740 (out of 800) on the GRE quantitative. I took AP calculus in high school and got a score of 5 (maximum) on the AP exam. I took
differential equations during my undergraduate program and passed with grade of A.
34 Subjects: including statistics, physics, calculus, geometry
|
DELHI UNIVERSITY
Syllabus for MCA Entrance Examination
Entrance Test shall have the following components: Mathematical Ability, Computer Science, Logical Reasoning, and English Comprehension
Syllabus for entrance test is given below:
Mathematics: Mathematics at the level of B. Sc. program of the University of Delhi. Computer Science: Introduction to Computer organization including data representation, Boolean circuits and
their simplification, basics of combinational circuits; C - programming: Data types including user defined data types, constants and variables, operators and expressions, control structures,
modularity: use of functions, scope, arrays.
Logical ability & English Comprehension: Problem-solving using basic concepts of arithmetic, algebra, geometry and data analysis. Correct usage of English Language and Reading comprehension.
The syllabus for the M.Sc. (Computer Science) Entrance Test would be as follows:
Computer Science
Discrete Structures: Sets, functions, relations, counting; generating functions, recurrence relations and their solutions; algorithmic complexity, growth of functions and asymptotic notations.
Programming, Data Structures and Algorithms: Data types, control structures, functions/modules, object-oriented programming concepts: sub-typing, inheritance, classes and subclasses, etc. Basic
data structures like stacks, linked list, queues, trees, binary search tree, AVL and B+ trees; sorting, searching, order statistics, graph algorithms, greedy algorithms and dynamic programming
Computer System Architecture: Boolean algebra and computer arithmetic, flip-flops, design of combinational and sequential circuits, instruction formats, addressing modes, interfacing peripheral
devices, types of memory and their organization, interrupts and exceptions.
Operating Systems: Basic functionalities, multiprogramming, multiprocessing, multithreading, timesharing, real-time operating system; processor management, process synchronization, memory
management, device management, file management, security and protection; case study: Linux.
Software Engineering: Software process models, requirement analysis, software specification, software testing, software project management techniques, quality assurance.
DBMS and File Structures: File organization techniques, database approach, data models, DBMS architecture; data independence, E-R model, relational data models, SQL, normalization and functional
Computer Networks: ISO-OSI and TCP/IP models, basic concepts like transmission media, signal encoding, modulation techniques, multiplexing, error detection and correction; overview of LAN/MAN/
WAN; data link, MAC, network, transport and application layer protocol features; network security.
Algebra: Groups, subgroups, normal subgroups, cosets, Lagrange’s theorem, rings and their properties, commutative rings, integral domains and fields, sub rings, ideals and their elementary
properties. Vector space, subspace and its properties, linear independence and dependence of vectors, matrices, rank of a matrix, reduction to normal forms, linear homogeneous and non-homogenous
equations, Cayley-Hamilton theorem, characteristic roots and vectors. De Moivre’s theorem, relation between roots and coefficient of nth degree equation, solution to cubic and biquadratic
equation, transformation of equations.
Calculus: Limit and continuity, differentiability of functions, successive differentiation, Leibnitz's theorem, partial differentiation, Euler's theorem on homogenous functions, tangents and
normals, asymptotes, singular points, curve tracing, reduction formulae, integration and properties of definite integrals, quadrature, rectification of curves, volumes and surfaces of solids of revolution.
Geometry: System of circles, parabola, ellipse and hyperbola, classification and tracing of curves of second degree, sphere, cones, cylinders and their properties.
Vector Calculus: Differentiation and partial differentiation of a vector function, derivative of sum, dot product and cross product, gradient, divergence and curl.
Differential Equations: Linear, homogenous and bi-homogenous equations, separable equations, first order higher degree equations, algebraic properties of solutions, Wronskian-its properties and
applications, linear homogenous equations with constant coefficients, solution of second order differential equations. Linear non-homogenous differential equations, the method of undetermined
coefficients, Euler’s equations, simultaneous differential equations and total differential equations.
Real Analysis: Neighborhoods, open and closed sets, limit points and Bolzano-Weierstrass theorem, continuous functions, sequences and their properties, limit superior and limit inferior of a
sequence, infinite series and their convergence. Rolle's theorem, mean value theorem, Taylor's theorem, Taylor's series, Maclaurin's series, maxima and minima, indeterminate forms.
Probability and Statistics: Measures of dispersion and their properties, skewness and kurtosis, introduction to probability, theorems of total and compound probability, Bayes' theorem, random
variables, probability distributions and density functions, mathematical expectation, moment generating functions, cumulants and their relation with moments, binomial, Poisson and normal
distributions and their properties, correlation and regression, method of least squares, introduction to sampling and sampling distributions like Chi-square, t and F distributions, tests of
significance based on t, Chi-square and F distributions.
*As per MCA Entrance Notification 2012
Note :-
The above information has been taken from the website of respective university/institute. Sanmacs India is no way responsible for the authenticity of data provided here in. For any discrepancy,
the student should contact respective university/institute.
|
{"url":"http://www.sanmacs.com/MCA/mca_entrance_training/mca_entrance_syllabus_of_universities/universities_offering_mca/delhi_university/du_mca_syllabus.htm","timestamp":"2014-04-19T09:35:00Z","content_type":null,"content_length":"20266","record_id":"<urn:uuid:1320b821-1d3e-449d-9206-c87a09f004cb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Intermediate Statistics and Econometrics
Intermediate Statistics and Econometrics: A Comparative Approach
Dale J. Poirier
The standard introductory texts to mathematical statistics leave the Bayesian approach to be taught later in advanced topics courses—giving students the impression that Bayesian statistics provide
but a few techniques appropriate in only special circumstances. Nothing could be further from the truth, argues Dale Poirier, who has developed a course for teaching comparatively both the classical
and the Bayesian approaches to econometrics. Poirier's text provides a thoroughly modern, self-contained, comprehensive, and accessible treatment of the probability and statistical foundations of
econometrics with special emphasis on the linear regression model.
Written primarily for advanced undergraduate and graduate students who are pursuing research careers in economics, Intermediate Statistics and Econometrics offers a broad perspective, bringing
together a great deal of diverse material. Its comparative approach, emphasis on regression and prediction, and numerous exercises and references provide a solid foundation for subsequent courses in
econometrics and will prove a valuable resource to many nonspecialists who want to update their quantitative skills.
The introduction closes with an example of a real-world data set—the Challenger space shuttle disaster—that motivates much of the text's theoretical discussion. The ten chapters that follow cover
basic concepts, special distributions, distributions of functions of random variables, sampling theory, estimation, hypothesis testing, prediction, and the linear regression model. Appendixes contain
a review of matrix algebra, computation, and statistical tables.
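To make the comparative theme concrete, here is a minimal sketch (my own illustration, not taken from the book) contrasting a classical maximum-likelihood estimate with a Bayesian posterior mean for a Bernoulli success probability; the Beta prior and the data are invented:

```python
# Illustrative sketch (not from the book): classical vs. Bayesian point
# estimates for a Bernoulli success probability. Data and prior are made up.

def mle(successes, trials):
    """Classical maximum-likelihood estimate: the sample proportion."""
    return successes / trials

def bayes_posterior_mean(successes, trials, alpha=1.0, beta=1.0):
    """Posterior mean under a conjugate Beta(alpha, beta) prior.
    With a uniform prior (alpha = beta = 1) this is Laplace's rule of succession."""
    return (successes + alpha) / (trials + alpha + beta)

s, n = 7, 10  # hypothetical data: 7 successes in 10 trials
print(mle(s, n))                   # 0.7
print(bayes_posterior_mean(s, n))  # (7+1)/(10+2) = 0.666...
```

With a uniform prior the two estimates differ only by the pseudo-counts the prior adds, which is one small way the classical and Bayesian answers can be compared side by side.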
User Review
The book is intended for first year graduate students and focuses on Bayesian econometrics. Having learnt Bayesian econometrics from Dale himself, straight from this book, the book does make a lot of
sense and brings out the problems with frequentist econometrics. Book has a lot of typos (blame the publisher), fortunately Dale has a list of all the typos, which one may get from his website.
Review: Intermediate Statistics and Econometrics: A Comparative Approach
User Review - Scott - Goodreads
Quite possibly the most incomprehensible and politicized statistics textbook ever written. The notation is horrendous, the explanations are pat, and the number of typographical errors/errata are ...
Read full review
Special Distributions 81
Distributions of Functions of Random Variables 143
Sampling Theory 165
Estimation 168
Hypothesis Testing 351
Prediction 405
The Linear Regression Model 445
Other Windows on the World 585
Appendix A Matrix Algebra Review I 619
Appendix B Matrix Algebra Review II 645
Computation 653
Statistical Tables 661
References 667
Author Index 699
Subject Index 705
Popular passages
Varian, HR (1975). A Bayesian approach to real estate assessment. In Studies in Bayesian Econometrics and Statistics in Honor of Leonard J. Savage (SE Feinberg and A.
|
{"url":"http://books.google.com/books?id=K52_YvD1YNwC&vq=discussion&dq=related:ISBN0470845678&source=gbs_navlinks_s","timestamp":"2014-04-17T04:57:49Z","content_type":null,"content_length":"129429","record_id":"<urn:uuid:70aa4382-5eca-42bd-973f-74b7a6910998>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Straight-Line Equations: Slope-Intercept Form
Straight-Line Equations:
Slope-Intercept Form (page 1 of 3)
Sections: Slope-intercept form, Point-slope form, Parallel and perpendicular lines
Straight-line equations, or "linear" equations, graph as straight lines, and have simple variable expressions with no exponents on them. If you see an equation with only x and y — as opposed to, say
x^2 or sqrt(y) — then you're dealing with a straight-line equation.
There are different types of "standard" formats for straight lines; the particular "standard" format your book refers to may differ from that used in some other books. (There is, ironically, no
standard definition of "standard form".) The various "standard" forms are often holdovers from a few centuries ago, when mathematicians couldn't handle very complicated equations, so they tended to
obsess about the simple cases. Nowadays, you likely needn't worry too much about the "standard" forms; this lesson will only cover the more-helpful forms.
I think the most useful form of straight-line equations is the "slope-intercept" form:
y = mx + b
This is called the slope-intercept form because "m" is the slope and "b" gives the y-intercept. (For a review of how this equation is used for graphing, look at slope and graphing.)
I like slope-intercept form the best. It is in the form "y=", which makes it easiest to plug into, either for graphing or doing word problems. Just plug in your x-value; the equation is already
solved for y. Also, this is the only format you can plug into your (nowadays obligatory) graphing calculator; you have to have a "y=" format to use a graphing utility. But the best part about the
slope-intercept form is that you can read off the slope and the intercept right from the equation. This is great for graphing, and can be quite useful for word problems.
(Copyright © Elizabeth Stapel 2000-2011 All Rights Reserved)
Common exercises will give you some pieces of information about a line, and you will have to come up with the equation of the line. How do you do that? You plug in whatever they give you, and solve
for whatever you need, like this:
• Find the equation of the straight line that has slope m = 4
and passes through the point (–1, –6).
Okay, they've given me the value of the slope; in this case, m = 4. Also, in giving me a point on the line, they have given me an x-value and a y-value for this line: x = –1 and y = –6.
In the slope-intercept form of a straight line, I have y, m, x, and b. So the only thing I don't have so far is a value for is b (which gives me the y-intercept). Then all I need to do is plug in
what they gave me for the slope and the x and y from this particular point, and then solve for b:
y = mx + b
(–6) = (4)(–1) + b
–6 = –4 + b
–2 = b
Then the line equation must be "y = 4x – 2".
What if they don't give you the slope?
• Find the equation of the line that passes through the points (–2, 4) and (1, 2).
Well, if I have two points on a straight line, I can always find the slope; that's what the slope formula is for: m = (2 – 4)/(1 – (–2)) = –2/3.
Now I have the slope and two points. I know I can find the equation (by solving first for "b") if I have a point and the slope. So I need to pick one of the points (it doesn't matter which one),
and use it to solve for b. Using the point (–2, 4), I get:
y = mx + b
4 = (–2/3)(–2) + b
4 = 4/3 + b
4 – 4/3 = b
12/3 – 4/3 = b
b = 8/3
...so y = (–2/3)x + 8/3.
On the other hand, if I use the point (1, 2), I get:
y = mx + b
2 = (–2/3)(1) + b
2 = –2/3 + b
2 + 2/3 = b
6/3 + 2/3 = b
b = 8/3
So it doesn't matter which point I choose. Either way, the answer is the same:
y = (–2/3)x + 8/3
As you can see, once you have the slope, it doesn't matter which point you use in order to find the line equation. The answer will work out the same either way.
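The procedure in this lesson can be condensed into a small sketch (my own illustration), assuming the two points have different x-values:

```python
# Given two points, compute the slope m with the slope formula, then solve
# y = m*x + b for the intercept b using either point (as the lesson shows,
# the choice of point does not matter).

def line_through(p, q):
    (x1, y1), (x2, y2) = p, q
    m = (y2 - y1) / (x2 - x1)   # slope formula (assumes x1 != x2)
    b = y1 - m * x1             # solve y1 = m*x1 + b for b
    return m, b

m, b = line_through((-2, 4), (1, 2))
print(m, b)  # -2/3 and 8/3, matching the worked example
```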
Top | 1 | 2 | 3 | Return to Index Next >>
Cite this article as: Stapel, Elizabeth. "Straight-Line Equations: Slope-Intercept Form." Purplemath. Available from
http://www.purplemath.com/modules/strtlneq.htm. Accessed
|
{"url":"http://www.purplemath.com/modules/strtlneq.htm?vm=r","timestamp":"2014-04-16T10:19:02Z","content_type":null,"content_length":"34582","record_id":"<urn:uuid:50f5112a-94f6-4973-8ef9-e608d67a5453>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00439-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SOLVED] Word Problem help?
December 14th 2009, 02:01 AM #1
Junior Member
Dec 2009
[SOLVED] Word Problem help?
A farmer wants to make a square pen to keep some of his animals in a confined area. If one side of the square is 15 feet, how much fencing will he need for the pen?
December 14th 2009, 02:07 AM #2
Senior Member
Jul 2009
Well a square has all sides the same. So if you are given one side, the rest should be the same.
Now there are 4 sides in a square. Have a think about what you would do to calculate 4 sides instead of 1.
December 14th 2009, 02:09 AM #3
Junior Member
Dec 2009
Oh, okay. I understand how to solve it now. Thanks so much.
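The hint above amounts to one multiplication; a minimal sketch:

```python
# A square's perimeter is 4 times one side.
side = 15            # feet, from the problem
perimeter = 4 * side
print(perimeter)     # 60 feet of fencing
```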
|
{"url":"http://mathhelpforum.com/algebra/120366-solved-word-problem-help.html","timestamp":"2014-04-20T10:21:42Z","content_type":null,"content_length":"33280","record_id":"<urn:uuid:18c311fc-61a4-4e9e-964f-002cae184747>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
|
MA651 Topology 1
Instructor: Nikolay S. Strigul
E-mail: nstrigul@stevens.edu
Office Hours: by appointment
Lectures: Tuesdays 06:15-08:45 pm, Lieb Building 120
Homework: There will be weekly homework assignments due every two weeks.
Exams, Quizzes: There will be Midterm and Final Exams. Every class will start with a quiz covering definitions and theorems.
● Quizzes: 20 %
● Homework assignments: 20 %
● Midterm: 25 %
● Final: 35 %
Course program Course program (PDF)
General comments:
MA651 is a graduate course in general topology. The major goal of this course is to cover basic topological ideas such as topological spaces and their products, continuous maps, compactness,
connectedness, and metrization. However, some additional material will also be covered in depth: for example, the first two lectures will be devoted to set theory. A lecture about real numbers
will demonstrate how topological and algebraic ideas relate. An additional two lectures illustrating how topology contributes to functional analysis and dynamical systems will be given provided
that the mandatory material is covered.
This course is targeted at graduate students in pure and applied mathematics. However MA651 is close to a self-contained course. Advanced undergraduate students are welcome if they are ready. To
illustrate what the word "ready" means I offer a citation from the introduction to the excellent book by Marc Zamansky: "However, in mathematics, as in many other disciplines, although previous
knowledge may be unnecessary, a certain attitude of mind is essential." Thus, although no official prerequisites are listed, students should be able to operate with abstract ideas.
Textbooks (students should buy only the first textbook by James Munkres)
1) Topology by James Munkres, 2nd ed.
2) Topology by James Dugundji
3) Linear Algebra and Analysis by Marc Zamansky
Some materials will be taken from the other books:
1) Fundamentals of General Topology : Problems and Exercises by A.V. Arkhangel'skii, V.I. Ponomarev
2) Introduction to Set Theory and General Topology by P.S. Aleksandrof
3) Topology by K. Kuratowski v.1
4) Elements of Mathematics: General Topology by Nicolas Bourbaki
5) Introduction to Set Theory by K. Hrbacek and T. Jech
6) Introduction to Real Analysis by A.N. Kolmogorov and S.V. Fomin
Course program:
Jan 17. Lecture 1. - Elementary set theory.
1. Symbolic logic notation.
2. Sets.
3. Boolean algebra.
4. Cartesian product.
5. Families of sets.
6. Power set.
7. Functions, or maps.
8. Binary relations. Equivalence relation.
9. Axiomatics.
Lecture 1.pdf Homework 1.pdf Quiz questions 1.pdf
Jan 24. Lecture 2. - Ordinals and cardinals. Well-ordered sets.
10. Ordering.
11. Zorn's Lemma, Zermelo's Theorem and Axiom of Choice.
12. Ordinals.
13. The concept of ordinal numbers.
14. Comparison of ordinal numbers.
15. Transfinite induction.
16. Cardinality of sets.
17. Finite, countable and uncountable sets.
18. General cartesian products.
Lecture 2.pdf Homework 2.pdf Quiz questions 2.pdf Quiz 1.pdf
Jan 31. Lecture 3. - Topological spaces.
19. Fundamental families. "Local" definition of topology.
20. "Global" definition of topology.
21. Basis for a given topology.
22. Topologizing of sets.
23. Open and closed sets.
24. Induced topology.
Lecture 3.pdf Homework 3.pdf Quiz questions 3.pdf Quiz 2.pdf
Feb 7. Lecture 4. - Continuity. Homeomorphisms. Limits.
25. Continuous maps.
26. Open maps and closed maps.
27. Homeomorphism.
28. Continuity from a "local" viewpoint.
28.1 The concept of a filter.
28.2 Limits in topological spaces.
28.3 Images of limits, sequences.
28.4 "Local" definition of a continuous map.
Lecture 4.pdf Homework 4.pdf Quiz questions 4.pdf Quiz 3.pdf
Feb 14. Lecture 5. - Cartesian product topology. Connectedness.
29. Cartesian product topology.
30. Slices in cartesian product topology.
31. Connectedness.
32. Application to real valued functions.
33. Components.
34. Local connectedness.
35. Path-connectedness.
Lecture 5.pdf Homework 5.pdf Quiz questions 5.pdf Quiz 4.pdf
Feb 21. No lecture (Stevens is on Monday schedule ) - takehome midterm exam.
Midterm Exam.pdf
Feb 28. Lecture 6. - Separation axioms.
36. Separation axioms.
37. Hausdorff spaces.
38. Regular spaces.
39. Normal spaces.
40. Urysohn's characterization of normality.
41. Tietze's characterization of normality.
Lecture 6.pdf Homework 6.pdf Quiz questions 6.pdf Quiz 5.pdf
Mar 7. Lecture 7. - Real numbers.
42. Algebraic laws.
43. The set of rational numbers.
43.1 The set Z of integers.
43.2 Definitions and properties of the set Q of rationals.
43.3 Topology on Q.
44. The construction of R and its fundamental properties.
44.1 Definition of R.
44.2 Addition, order, absolute value in R.
44.3 The field R.
44.4 The topology on R. The two fundamental properties.
45. The real line.
Lecture 7.pdf Homework 7.pdf Quiz questions 7.pdf Quiz 6.pdf
Mar 14. Spring recess. No lecture.
Mar 21. Lecture 8. - Compactness.
46. Compactness.
47. Compact subsets in R.
48. Compactness as a topological invariant.
49. Separation properties of compact spaces.
50. Tychonov Theorem.
51. Alexandroff compactification.
Lecture 8.pdf Homework 8.pdf Quiz questions 8.pdf Quiz 7.pdf
Mar 28. Lecture 9. - Compactness 2
52. Local compactness.
53. Countable compactness.
54. Sequential compactness.
55. First countable spaces.
56. Second countable, separable and Lindelöf spaces.
Lecture 9.pdf Homework 9.pdf Quiz questions 9.pdf Quiz 8.pdf
Apr 4. Lecture 10. - Metric spaces 1
57. Metrics on sets.
58. Topology induced by a metric.
59. Equivalent metrics. Isometries.
60. Continuity of distance.
61. Convergent sequences.
62. Compactness and covering theorems in metric spaces.
Lecture 10.pdf Homework 10.pdf Quiz questions 10.pdf Quiz 9.pdf
Apr 11. Lecture 11. - Metric spaces 2
63. Complete metric spaces.
63.1 Cauchy sequences.
63.2 Complete metrics and complete spaces.
64. The Baire property of complete metric spaces.
65. Completion of a metric space.
Lecture 11.pdf Homework 11.pdf Quiz questions 11.pdf Quiz 10.pdf
Apr 18. Lecture 12. - Mapping in metric spaces.
66. Uniform continuity.
67. Extension by continuity.
68. Contraction mapping.
68.1 The fixed point theorem.
68.2 Contraction mapping and differential equations.
Lecture 12.pdf Homework 12.pdf Quiz questions 12.pdf Quiz 11.pdf
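The fixed point theorem of section 68.1 (Banach's contraction principle) is easy to illustrate numerically; the sketch below is my own addition, not part of the course materials:

```python
import math

def fixed_point(f, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{n+1} = f(x_n); for a contraction on a complete metric
    space this converges to the unique fixed point (Banach's theorem)."""
    x = x0
    for _ in range(max_iter):
        x_next = f(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# cos is a contraction near [0, 1]; its fixed point is the Dottie number.
x = fixed_point(math.cos, 1.0)
print(x)  # approximately 0.739085...
```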
Apr 25. Lecture 13. - Review of homework assignments.
May 2. Lecture 14. - Review of the course. Application of topology to dynamical systems.
May 9. Final exam.
|
{"url":"http://personal.stevens.edu/~nstrigul/MA651/index_T.html","timestamp":"2014-04-19T19:34:23Z","content_type":null,"content_length":"14207","record_id":"<urn:uuid:8882f882-c482-4898-b8ad-4436050aaca7>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00176-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Thermodynamic entropy of system of any size.
Thanks for that link, I didn't know about that. I was just trying to satisfy my own curiosity.
The Sackur-Tetrode equation appears only to work under the assumption of the uncertainty principle, whereas I derived my formula without any QM assumptions. Therefore I don't think they can be
directly compared. However, this part:
He showed it to consist of stacking the entropy (missing information) due to four terms: positional uncertainty, momenta uncertainty, quantum mechanical uncertainty principle and the
indistinguishability of the particles
It is very similar to my own method (except for the QM part, obviously).
|
{"url":"http://www.physicsforums.com/showthread.php?p=3458302","timestamp":"2014-04-20T23:39:44Z","content_type":null,"content_length":"32650","record_id":"<urn:uuid:d1e456bc-63ab-4eb6-a059-02d2ad188ba9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Posted by Avilene on Friday, December 4, 2009 at 9:30pm.
A man launches his boat from point A on a bank of a straight river, 3 km wide, and wants to reach point B, 2 km downstream on the opposite bank, as quickly as possible. He could row his boat directly
across the river to point C and then run to B, or he could row directly to B, or he could row to some point D between B and C and then run to B. If he can row 6 km/h and run 8 km/h, where should he
land to reach B as soon as possible? (We assume that the speed of the water is negligible compared to the speed at which the man rows.)
• calculus - bobpursley, Friday, December 4, 2009 at 9:37pm
Draw the diagram. Label the points. Now label DB as x, and CD as 2-x.
So the rowing distance AD is sqrt(3^2+(2-x)^2)
so the time for the trip is
time = AD/6 + DB/8
= (1/6)sqrt(3^2+(2-x)^2) + x/8
take the derivative of time with respect to x, set to zero, and solve for x.
• calculus - Reiny, Friday, December 4, 2009 at 9:52pm
I assume you made a diagram according to your description.
Let CD = x km, 0 ≤ x ≤ 2
then DB = 2-x km
let AD = y
then y= (x^2 + 9)^(1/2)
time rowing = (x^2 + 9)^(1/2)/6
time running = (2-x)/8
Total Time = (x^2 + 9)^(1/2)/6 + (2-x)/8
d(Total Time)/dx = x/[6(x^2 + 9)^(1/2)] - 1/8 = 0 for a max/min of TT
This simplified to
8x = 6√(x^2+9)
4x = 3√(x^2+9) , I then squared both sides
16x^2 = 9x^2 + 81
7x^2 = 81
x = 9/√7 ≈ 3.4
but that is outside of our domain,
unless I made an arithmetic error.
Better check it.
So let's check our trivial routes:
all rowing: y = √13
time = √13/6 ≈ 0.6 hrs
combination rowing + running
time = 3/6 + 2/8 = .75 hours
So he should just row directly to B
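The conclusion above can be double-checked numerically; this sketch (my addition) uses Reiny's variable x = CD and grid-searches the total-time function on the domain 0 ≤ x ≤ 2:

```python
import math

def total_time(x):
    """Hours for rowing A->D (distance sqrt(x^2 + 9) at 6 km/h)
    plus running D->B (distance 2 - x at 8 km/h)."""
    return math.sqrt(x * x + 9) / 6 + (2 - x) / 8

# Evaluate on a fine grid over the allowed landing points.
xs = [2 * i / 10000 for i in range(10001)]
best_x = min(xs, key=total_time)
print(best_x, total_time(best_x))  # x = 2.0: row straight to B, about 0.601 h
```

Because the unconstrained critical point 9/√7 ≈ 3.4 lies outside [0, 2], the time function is decreasing on the whole domain and the minimum sits at the endpoint x = 2, agreeing with the analysis above.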
{"url":"http://www.jiskha.com/display.cgi?id=1259980243","timestamp":"2014-04-21T08:01:20Z","content_type":null,"content_length":"9928","record_id":"<urn:uuid:1c47940a-0316-4d7e-a49c-a1ddd1900203>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Epoetin Alfa - Parenteral Anaemia of chronic renal failure
Adult: In predialysis and haemodialysis: Initially, 50 iu/kg 3 times a wk as SC/IV Inj over at least 1 minute or as slow IV Inj over 5 minutes in patients who experience flu-like symptoms as side
effects. Doses may be increased by 25 units/kg at 4-wk intervals until the target is reached. In peritoneal dialysis: Initially, 50 units/kg SC twice a wk. Total wkly maintenance dose: Predialysis:
50-100 units/kg in 3 divided doses. Haemodialysis: 75-300 units/kg in 3 divided doses. Peritoneal dialysis: 50-100 units/kg IV in 2 divided doses.
Child: Haemodialysis: Initially, 50 units/kg given IV 3 times a wk. May be increased by 25 units/kg at 4-wk intervals until the target Hb conc is reached (9.5-11 g/mL). Total wkly maintenance dose:
Haemodialysis: >30 kg: 90-300 units/kg; 10-30 kg: 180-450 units/kg; <10 kg: 225-450 units/kg. To be given in 3 divided doses. Parenteral Anaemia in zidovudine-treated HIV-infected patients
Adult: Titrate dose for patients individually so as to achieve and maintain the lowest haemoglobin level sufficient to avoid the need for blood transfusion and not to exceed 12 g/dL. For patients
with serum erythropoietin levels ≤500 mUnits/mL who are receiving zidovudine dose ≤4200 mg/wk: Starting dose: 100 units/kg via SC/IV Inj 3 times a wk for 8 wk, may increase by 50-100 units/kg 3 times
a wk at 4-8 wk intervals according to response. Not recommended to administer doses >300 units/kg 3 times a wk. Intravenous Increase yield of autologous blood
Adult: 600 units/kg 2 times a wk starting 3 wk before surgery. May be used in conjunction with an iron supplement. Subcutaneous Anaemia related to non-myeloid malignant disease chemotherapy
Adult: Initially, 150 units/kg 3 times a wk or 450 units/kg once wkly, increased after 4-8 wk to 300 units/kg 3 times a wk if necessary. Stop treatment if response is still inadequate after 4 wk of
treatment at this higher dose. Subcutaneous To reduce the need for allogeneic blood transfusion
Adult: 600 units/kg once wkly starting 3 wk before surgery with the 4th dose given on the day of the surgery or 300 units/kg daily starting 10 days before surgery and for 4 days after.
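As illustrative arithmetic only (not clinical guidance), the paediatric haemodialysis maintenance table above can be expressed as a small helper; the behaviour at exactly 10 kg and 30 kg is my assumption, since the source's weight bands leave those boundaries ambiguous:

```python
# Illustrative arithmetic only, not clinical guidance. Sketches the total
# weekly maintenance-dose ranges implied by the paediatric haemodialysis
# table above (units/kg by weight band, given in 3 divided doses).
# Boundary handling at exactly 10 kg and 30 kg is an assumption.

def weekly_dose_range(weight_kg):
    """Return (low, high) total weekly units for a given body weight."""
    if weight_kg > 30:
        low, high = 90, 300     # units/kg for > 30 kg
    elif weight_kg >= 10:
        low, high = 180, 450    # units/kg for 10-30 kg
    else:
        low, high = 225, 450    # units/kg for < 10 kg
    return low * weight_kg, high * weight_kg

print(weekly_dose_range(20))  # (3600, 9000) units per week for a 20 kg child
```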
Adult: IV Increase yield of autologous blood 600 u/kg twice wkly starting 3 wk before surgery. IV/SC Anaemia of chronic renal failure Predialysis and haemodialysis: Initial: 50 iu/kg 3 times/wk. May
increase slowly. Peritoneal dialysis: Initial: 50 u/kg twice wkly. Total maintenance dose/wk: Predialysis: 50-100 u/kg in 3 divided doses; Haemodialysis: 75-300 u/kg in 3 divided doses; Peritoneal
dialysis: 50-100 u/kg in 2 divided doses. Anaemia in zidovudine-treated HIV-infected patients Initial: 100 u/kg 3 times/wk for 8 wk. May increase slowly. Not >300 u/kg 3 times/wk. SC Anemia related
to non-myeloid malignant disease chemotherapy Initial: 150 u/kg 3 times/wk, up to 300 u/kg 3 times/wk after 4-8 wk if needed. To reduce the need for allogenic blood tranfusion 600 u/kg once wkly
starting 3 wk before surgery w/ the 4th dose given on the day of surgery.
|
{"url":"http://www.igenericdrugs.com/?s=Epoetin%20Alfa","timestamp":"2014-04-21T04:32:08Z","content_type":null,"content_length":"18217","record_id":"<urn:uuid:6d07d77b-2d5f-4785-a4ae-1aa5e74167dd>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00267-ip-10-147-4-33.ec2.internal.warc.gz"}
|
An ellipse usually looks like a squashed circle.
It is defined by two special points called foci.
"F" is a focus, "G" is a focus,
and together they are called foci.
(pronounced "fo-sigh")
The distance from F to P to G is always the same value
In other words, when you go from point "F" to any point on the ellipse and then go on to point "G", you will always travel the same distance.
You Can Draw It Yourself
Put two pins in a board, put a loop of string around them, and insert a pencil into the loop. Keep the string stretched so it forms a triangle, and draw a curve ... you will draw an ellipse.
It works because the string naturally forces the same distance from pin-to-pencil-to-other-pin.
A Circle is an Ellipse
In fact a Circle is an Ellipse, where both foci are at the same point (the center).
In other words, a circle is a "special case" of an ellipse. Ellipses Rule!
An ellipse is the set of all points on a plane whose distance from two fixed points F and G add up to a constant.
Major and Minor Axes
The Major Axis is the longest diameter. It goes from one side of the ellipse, through the center, to the other side, at the widest part of the ellipse. And the Minor Axis is the shortest diameter (at
the narrowest part of the ellipse).
The Semi-major Axis is half of the Major Axis, and the Semi-minor Axis is half of the Minor Axis.
Area is easy, perimeter is not!
The area of an ellipse is:
π × a × b
where a is the length of the Semi-major Axis, and b is the length of the Semi-minor Axis. Be careful: a and b go from the edge to the center.
(Note: for a circle, a and b are equal to the radius, and you get π × r × r = πr^2, which is right!)
Perimeter Approximation
Rather strangely, the perimeter of an ellipse is very difficult to calculate, so I created a special page for the subject: read Perimeter of an Ellipse for more details.
But a simple approximation that is within about 5% of the true value (so long as a is not more than 3 times longer than b) is:
p ≈ 2π √((a^2 + b^2)/2)
Remember, this is only a rough approximation!
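Both formulas are easy to check in code. The perimeter formula used below, p ≈ 2π·sqrt((a² + b²)/2), is a common rough approximation, good to about 5% when a is not more than 3 times b:

```python
import math

def ellipse_area(a, b):
    """Exact area: pi * a * b (a, b are the semi-major and semi-minor axes)."""
    return math.pi * a * b

def ellipse_perimeter_approx(a, b):
    """Rough perimeter approximation, within ~5% when a <= 3b."""
    return 2 * math.pi * math.sqrt((a * a + b * b) / 2)

# Sanity check with a circle (a = b = r = 1), where both are exact:
print(ellipse_area(1, 1))              # pi = 3.14159...
print(ellipse_perimeter_approx(1, 1))  # 2*pi = 6.28318...
```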
A tangent is a line that just touches a curve at one point, without cutting across it. Here is a tangent to an ellipse:
Here is a cool thing: the tangent line has equal angles with the two lines going to each focus! Try bringing the two focus points together (so the ellipse is a circle) ... what do you notice?
Section of a Cone
You can also get an ellipse when you slice through a cone (but not too steep a slice, or you get a parabola or hyperbola).
In fact the ellipse is a conic section (a section of a cone) with an eccentricity between 0 and 1.
By placing an ellipse on an x-y graph (with its major axis on the x-axis and minor axis on the y-axis), the equation of the curve is:
x^2/a^2 + y^2/b^2 = 1
(similar to the equation of the hyperbola: x^2/a^2 − y^2/b^2 = 1, except for a "+" instead of a "−")
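Points built from the parametric form (a cos t, b sin t) satisfy this equation identically, which is easy to spot-check:

```python
import math

a, b = 4.0, 2.5
for t in [0.3, 1.2, 2.8, 5.0]:
    x, y = a * math.cos(t), b * math.sin(t)
    # left-hand side of x^2/a^2 + y^2/b^2 = 1
    print(round(x * x / (a * a) + y * y / (b * b), 12))  # 1.0 every time
```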
Re: dominator trees
13 Mar 1998 00:01:36 -0500
From comp.compilers
From: mal@bewoner.dma.be
Newsgroups: comp.compilers
Date: 13 Mar 1998 00:01:36 -0500
Organization: Compilers Central
References: 98-03-029
Keywords: theory, analysis
lkaplan@mips.complang.tuwien.ac.at wrote:
>Has anyone implemented the dominator tree algorithm by Dov Harel
>(described in the paper "A linear time algorithm for finding dominators
>in a flow graph and related problems")? I would be very interested in the
>exchange of ideas.
A paper that seems to contain a fairly detailed commentary on Harel's
algorithm can be found at
I haven't read it in full detail yet but they seem to claim Harel's
original algorithm was not linear but they present a modification that is.
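For readers who want something runnable for comparison, the straightforward iterative data-flow formulation of dominators is easy to implement correctly. It is worst-case quadratic, so it is not a substitute for Harel's linear-time algorithm, but it is a useful reference implementation; the flow graph below is made up for illustration:

```python
def dominators(succ, entry):
    """Compute dom[n], the set of nodes dominating n, by iterative
    data-flow analysis (worst-case quadratic, unlike Harel's method)."""
    nodes = list(succ)
    pred = {n: [] for n in nodes}
    for n in nodes:
        for s in succ[n]:
            pred[s].append(n)
    # start from the top: every node is assumed dominated by everything
    dom = {n: set(nodes) for n in nodes}
    dom[entry] = {entry}
    changed = True
    while changed:
        changed = False
        for n in nodes:
            if n == entry or not pred[n]:
                continue
            # n is dominated by itself plus whatever dominates all predecessors
            new = {n} | set.intersection(*(dom[p] for p in pred[n]))
            if new != dom[n]:
                dom[n] = new
                changed = True
    return dom

# a small made-up flow graph: entry -> a -> b, entry -> b, b -> c
cfg = {"entry": ["a", "b"], "a": ["b"], "b": ["c"], "c": []}
doms = dominators(cfg, "entry")
print(sorted(doms["c"]))  # ['b', 'c', 'entry'] -- 'a' does not dominate 'c'
```

Because `c` can be reached from `entry` without passing through `a`, only `entry` and `b` (and `c` itself) dominate it.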
Recovering a measure from its moments
Suppose we are given moments of a measure on the interval [0,1]. Is there some practical way to recover the measure itself? I am particularly interested in the case where the measure density is given
by some smooth positive function, and I would like to determine the behavior of this function near x=0.
What do you mean by recover the measure? If you simply want to integrate a continuous function with respect to the measure, then the question reduces to approximating your desired continuous
functions by polynomials effectively. You need a bound on the total variation to bound the error -- if your measure's positive, that's just the 0'th moment. In the positive case, you can also
approximate plenty of discontinuous from above and below with continuous functions. – Phil Isett Feb 25 '12 at 17:29
4 Answers
There are several approaches to practical resolution of the problem. The moment data can typically be interpreted as linear constraints on the density function that is recovered by
minimizing a functional. Depending on the application, the functional might be energy, Shannon entropy, or some other application-specific quantity.
Take a look at my answer to an earlier question. Despite the fact that the earlier question dealt with multivariate measures, the references in the answer should give you a good head start:
Multi-dimensional moment problem
If you only have a finite number of moments Maximum entropy methods are often a good way to proceed, as was mentioned in Budisic's answer.
If you have all the moments, thats equivalent to having the characteristic function
$$\hat{f}(\xi) = 1 + i\xi\mu_1 - \frac{1}{2!}\xi^2\mu_2 - \frac{i}{3!}\xi^3\mu_3 + \frac{1}{4!}\xi^4\mu_4 + \cdots$$
So in principle you could do the inverse Fourier transform and recover the distribution.
If doing the inverse transform is too challenging, and since you said you mostly care about the local behavior around 0, you might also try saying:
$$f(0) = \int_{-\infty}^{\infty} \hat{f}(\xi)\,d\xi$$ $$f'(0) = \int_{-\infty}^{\infty} \xi\hat{f}(\xi)\,d\xi$$
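As a concrete complement to the Fourier route: on $[0,1]$ the first $N$ moments determine the first $N$ coefficients of an expansion in shifted Legendre polynomials exactly, since each coefficient is a linear combination of moments (by orthogonality). The function names below are illustrative; the sketch assumes NumPy's `numpy.polynomial` classes:

```python
import numpy as np
from numpy.polynomial import Legendre, Polynomial

def density_from_moments(moments):
    """Given moments[j] = integral_0^1 x^j f(x) dx, return a truncated
    shifted-Legendre expansion of f as a callable numpy series."""
    N = len(moments)
    coeffs = np.zeros(N)
    for k in range(N):
        # k-th shifted Legendre polynomial P_k(2x - 1), in the power basis
        a = Legendre.basis(k, domain=[0, 1]).convert(kind=Polynomial).coef
        # orthogonality on [0, 1]: c_k = (2k + 1) * integral f(x) P_k(2x - 1) dx,
        # and that integral is just a linear combination of the given moments
        coeffs[k] = (2 * k + 1) * np.dot(a, moments[: len(a)])
    return Legendre(coeffs, domain=[0, 1])

# check with f(x) = 2x, whose moments are m_j = 2 / (j + 2)
f = density_from_moments([2.0 / (j + 2) for j in range(4)])
print(float(f(0.25)), float(f(0.5)))  # ~0.5 and ~1.0; exact for polynomial densities
```

For smooth densities the truncated series converges quickly, though the behavior right at the endpoint $x = 0$ asked about in the question is still the delicate part.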
An addendum to Kai's answer: the moments $m_n$ of the measure $\mu$ are the coefficients of the Laurent expansion at infinity of the Cauchy transform $$\hat \mu(z)=\int \frac{d\mu(t)}{z-t}
$$ of $\mu$ (just expand $(z-t)^{-1}$ there and substitute it in the integral):$$\hat \mu(z)=\sum_{n=0}^\infty \frac{m_n}{z^{n+1}}. $$ You can recover the original measure (at least its
absolutely continuous part) as the difference of the boundary values of $\hat \mu$ on $(0,1)$ (Sokhotski's theorem).
From the practical point of view, you could approximate $\hat \mu$ from a finite number of moments using the (diagonal, i.e. both numerator and denominator of degree $\leq n$) Padé approximants at infinity. This will be close to optimal, and the approximation converges uniformly in the whole complex plane except the interval $[0,1]$ with a geometric rate (although the
closer to the interval you are, the slower the rate of convergence is). Finally, the denominators $Q_n$ of these Padé approximants will be the orthogonal polynomials with respect to $\mu$.
An answer on your question is given in the following book:
[ST] J. A. Shohat and J. D. Tamarkin "The problem of moments".
See [ST], page 90 and what follows. In particular, the case of an absolutely continuous measure is considered on page 95.
Max Planck: Quantum Theory
Max Planck lectured on The Origin and Development of the Quantum Theory in German and an English translation was published by Methuen & Co in 1925. It is a fascinating lecture, for in it Planck shows
how his own thinking developed, and he relates some wrong paths that he followed.
The Origin and Development of the Quantum Theory
Max Planck
In this lecture I will endeavour to give a general account of the origin of the quantum theory, to sketch concisely its development up to the present, and to point out its immediate significance in
Looking back over the last twenty years to the time when the conception and magnitude of the physical quantum of action first emerged from the mass of experimental facts, and looking back at the long
and complicated path which finally led to an appreciation of its importance, the whole history of its development reminds me of the well-proved adage that "to err is human." And all the hard
intellectual work of an industrious thinker must often appear vain and fruitless, but that striking occurrences sometimes provide him with an irrefutable proof of the fact that at the end of all his
attempts, he does ultimately get one step nearer the truth. An indispensable hypothesis, though it does not guarantee a result, often arises from the pursuit of a definite object, the importance of
which is not lessened by initial ill-success.
For me, such an object has, for a long time, been the solution of the problem of the distribution of energy in the normal spectrum of radiant heat. Gustav Kirchhoff showed that, in a space bounded by
bodies at equal temperatures, but of arbitrary emissive and absorptive powers, the nature of the heat of radiation is completely independent of the nature of the bodies. Later, a universal function
was proved to exist, which depended only on temperature and wave-length, and was in no way related to the properties peculiar to any substance. The discovery of this remarkable function gave promise
of a deeper understanding of the relationship of energy to temperature, which forms the chief problem of thermo-dynamics, and, therefore, also of all molecular physics. There is no way at present
available for obtaining this function but to select from all the various kinds of bodies occurring in Nature any one of known emission and absorption coefficients, and to calculate the heat radiation
when the exchange of energy is stationary. According to Kirchhoff's theorem, this must be independent of the constitution of the body.
A body especially suited for this purpose appears to be Heinrich Hertz's oscillator, the laws of emission of which, for a given frequency, have recently been fully developed by Hertz. If a number of
such oscillators be placed in a space enclosed by reflecting walls, they will exchange energy one with another by taking up or emitting electro-magnetic waves, analogous with a sound source and
resonators, until finally stationary black radiation, so-called, obtains in the enclosure according to Kirchhoff's law. At one time I fostered the hope which seems to us rather naive in these days,
that the laws of classical electrodynamics, if applied sufficiently generally, and extended by suitable hypotheses, would be sufficient to explain the essential points of the phenomenon looked for,
and to lead to the desired goal. To this end, I first of all developed the laws of emission and absorption of a linear resonator in the widest possible way, in fact, by a roundabout way which I could
have avoided by using H A Lorentz's electron theory then complete in all fundamental points. But since I did not then fully believe in the electron hypothesis, I preferred to consider the energy
flowing across a spherical surface of a certain radius enclosing the resonator. This only deals with phenomena in vacuo, but the knowledge of these is enough to enable us to draw the necessary
conclusions about the energy changes of the resonator.
The result of this long series of investigations was the establishment of a general relation between the energy of a resonator of given period and the radiant energy of the corresponding region of
the spectrum in the surrounding field when the energy exchange is stationary. Some of these investigations could be proved by comparison with available observations, particularly the damping
measurements of Vilhelm Bjerknes, and this is a verification of the results. Thus the remarkable conclusion is reached that the relation does not depend on the nature of the resonator, in particular,
not upon its damping coefficient - a very gratifying and welcome circumstance to me, since it allowed the whole problem to be simplified in so far that the energy of radiation could be replaced by
the energy of the resonator. Thereby a system with one degree of freedom could be substituted for a complicated system with many degrees of freedom.
Indeed, this result was nothing but a step preparatory to starting on the real problem, which now appeared more formidable. The first attempt at solving the problem miscarried; for my original hope
proved false, namely, that the radiation emitted from the resonator would, in some characteristic way, be distinct from the absorbed radiation and thus give a differential equation, by solving which
it would be possible to derive a condition for the state of stationary radiation. The resonator only responded to the same rays as it emitted, and was not at all sensitive to neighbouring regions of
the spectrum.
My assumption that the resonator could exert a one-sided, i.e. irreversible, effect on the energy of the surrounding field of radiation, was strongly contradicted by Ludwig Boltzmann. His mature
experience led him to conclude that, according to the laws of classical mechanics, each phenomenon which I had considered, could operate in exactly the reverse direction. Thus, a spherical wave sent
out from a resonator may be reversed and proceed in ever-diminishing concentric spheres until it shrinks up at the resonator and is absorbed by it, and causes again the energy previously absorbed to
be emitted once more into space in the directions along which it had come. Even if, by introducing suitable limits, I could exclude from the hypothesis of "natural radiation" such singular phenomena
as spherical waves travelling inwards, all these analyses show clearly that an essential connecting link is still missing for the complete understanding of the problem.
No other course remained open to me but to attack the problem from the opposite direction, namely, through thermodynamics, with which I felt more familiar. Here I was helped by my previous researches
into the second law of thermodynamics, and I straightway conceived the idea of connecting the entropy and not the temperature of the resonator with the energy, indeed, not the entropy itself, but its
second differential coefficient with respect to energy, since this has a direct physical meaning for the irreversibility of the exchange of energy between resonator and radiation. Since at that time
I did not see my way clear to go any further into the dependence of entropy and probability, I could, first of all, only refer to results that had already been obtained. Now, in 1899, the most
interesting result was the law of energy distribution which had just been discovered by Wilhelm Wien. The experimental proof of this was undertaken by F Paschen at the Hochschule, Hanover, and by O
Lummer and E Pringsheim at the Reichsanstalt, Charlottenburg. This law represents the dependence of the intensity of radiation on temperature by means of an exponential function. Using this law to
calculate the relation between the entropy and energy of a resonator, the remarkable result is obtained, that R, the reciprocal of the differential coefficient referred to above, is proportional to
the energy. This exceedingly simple relation is a complete and adequate expression of Wien's law of distribution of energy; for the dependence upon wave-length is always given immediately as well as
the dependence upon energy by Wien's generally accepted law of displacements.
Since the whole problem deals with one of the universal laws of Nature, and since I believed then, as I do now, that the more general a natural law is, the simpler is its form (though it cannot
always be said with certainty and finality which is the simpler form), I thought for a long time that the above relation, namely, that R is proportional to the energy, should be considered as the
foundation of the law of distribution of energy. This idea soon proved to be untenable in the light of more recent results. While Wien's law was confirmed for small values of energy, i.e. for short
waves, O Lummer and E Pringsheim found large deviations in the case of long waves. Finally, the observations made by H Rubens and F Kurlbaum, with infra-red rays after transmission through fluorspar
and rock salt, showed a totally different relation, which, under certain conditions, was still very simple. In this case, R is proportional, not to the energy, but to the square of the energy, and
this relation is more accurate the larger the energies and wavelengths considered.
Thus, by direct experiment, two simple limits have been fixed for the function R, i.e. for small values of the energy it is proportional to the energy, for large values it is proportional to the
square of the energy. It was obvious that in the general case the next step was to express R as the sum of two terms, one involving the first power, the other the second power of the energy, so that
the first term was the predominating term for small values of the energy, the second term for large values. This gave a new formula for the radiation, which has stood the test of experiment fairly
satisfactorily so far. No final exact experimental verification has yet been given and a new proof is badly needed.
If, however, the radiation formula should be shown to be absolutely exact, it would possess only a limited value, in the sense that it is a fortunate guess at an interpolation formula. Therefore,
since it was first enunciated, I have been trying to give it a real physical meaning, and this problem led me to consider the relation between entropy and probability, along the lines of Boltzmann's
ideas. After a few weeks of the most strenuous work of my life, the darkness lifted and an unexpected vista began to appear.
I will digress a little. According to Boltzmann, entropy is a measure of physical probability, and the essence of the second law of thermo-dynamics is that in Nature, the more often a condition
occurs, the more probable it is. In Nature, entropy itself is never measured, but only the difference of entropy, and to this extent one cannot talk of absolute entropy without a certain
arbitrariness. Yet, the introduction of an absolute magnitude of entropy, suitably defined, is allowed, since certain general theorems can be expressed very simply by doing so. As far as I can see,
it is exactly the same with energy. Energy itself cannot be measured, but only a difference of energy. Therefore, one did not previously deal with energy, but with work, and Ernst Mach, who was
concerned to a great extent with the conservation of energy, but avoided all speculations outside the domain of observation, has always refrained from talking of energy itself. Similarly, at first in
thermo-chemistry, one considered heat of reaction, i.e. difference of energy, until William Ostwald emphatically showed that many involved considerations could be very much simplified, if one dealt
with energy itself instead of calorimetric values. The undetermined additive constant in the expression for energy was fixed later by the relativity theorem of the relation between energy and
As in the case of energy, we can define absolute value for entropy and consequently for physical probability, if the additive constant is fixed so that entropy and energy vanish simultaneously. (It
would be better to substitute temperature for energy here.) On this basis a comparatively simple combinatory method was derived for calculating the physical probability of a certain distribution of
energy in a system of resonators. This method leads to the same expression for entropy as was obtained from the radiation theory. As an offset against much disappointment, I derived much satisfaction
from the fact that Ludwig Boltzmann, in a letter acknowledging my paper, gave me to understand that he was interested in, and fundamentally in agreement with, my ideas.
For numerical applications of this method of probability we require two universal constants, each of which has an independent physical significance. The supplementary calculation of these constants
from the radiation theory shows whether the method is merely a numerical one or has an actual physical meaning. The first constant is of a more or less formal nature; it depends on the definition of
temperature. The value of this constant is 2/3 if temperature be defined as the mean kinetic energy of a molecule in an ideal gas, and is, therefore, a very small quantity. With the conventional
measure of temperature, however, this constant has an extremely small value, which is naturally closely dependent upon the energy of a single molecule, and an exact knowledge of it leads, therefore,
to the calculation of the mass of a molecule and the quantities depending upon it. This constant is frequently called Boltzmann's constant, though Boltzmann himself, to my knowledge, never introduced
it - a curious circumstance, explained by the fact that Boltzmann, as appears from various remarks by him, never thought of the practicability of measuring this constant exactly. Nothing can better
illustrate the impetuous advance made in experimental methods in the last twenty years than the fact that since then, not one only, but a whole series of methods have been devised for measuring the
mass of a single molecule with almost the same accuracy as that of a planet.
While, at the time that I carried out the corresponding calculations from the radiation theory, it was impossible to verify exactly the figure obtained, and all that could be achieved was to check
the order of magnitude; shortly afterwards, E Rutherford and H Geiger, succeeded in determining the value of the elementary electric charge to be 4.65 × 10^-10 electro-static units, by directly
counting α-particles. The agreement of this figure with that calculated by me, 4.69 × 10^-10, was a definite confirmation of the usefulness of my theory. Since then, more perfect methods have been
developed by E Regener, R A Millikan, and others, and have given a value slightly higher than this.
The interpretation of the second universal constant of the radiation formula was much less simple. I called it the elementary quantum of action, since it is a product of energy and time, and was
calculated to be 6.55 × 10^-27 erg sec. Though it was indispensable for obtaining the right expression for entropy for it is only by the help of it that the magnitude of the standard element of
probability could be fixed for the probability calculations - it proved itself unwieldy and cumbrous in all attempts to make it fit in with classical theory in any form. So long as this constant
could be considered infinitesimal, as when dealing with large energies or long periods of time, everything was in perfect agreement, but in the general case, a rift appeared, which became more and
more pronounced the weaker and more rapid the oscillations considered. The failure of all attempts to bridge this gap soon showed that undoubtedly one of two alternatives must obtain. Either the
quantum of action was a fictitious quantity, in which case all the deductions from the radiation theory were largely illusory and were nothing more than mathematical juggling. Or the radiation theory
is founded on actual physical ideas, and then the quantum of action must play a fundamental role in physics, and proclaims itself as something quite new and hitherto unheard of, forcing us to recast
our physical ideas, which, since the foundation of the infinitesimal calculus by Leibniz and Newton, were built on the assumption of continuity of all causal relations.
Experience has decided for the second alternative. That this decision should be made so soon and so certainly is not due to the verification of the law of distribution of energy in heat radiation,
much less to my special derivation of this law, but to the restless, ever-advancing labour of those workers who have made use of the quantum of action in their investigations.
The first advance in this work was made by A Einstein, who proved, on the one hand, that the introduction of the energy quanta, required by the quantum of action, appeared suitable for deriving a
simple explanation for a series of remarkable observations of light effects, such as Stokes's rule, emission of electrons, and ionization of gases. On the other hand, by identifying the energy of a
system of resonators with the energy of a rigid body, he derived a formula for the specific heat of a rigid body, which gives again quite correctly the variation of specific heat, particularly its
decrease with decrease of temperature. It is not my duty here to give even an approximately complete account of this work. I can only point out the most important characteristic stages in the
progress of knowledge.
We will now consider problems in heat and chemistry. As far as the specific heat of a solid body is concerned, Einstein's method, based on the assumption of a single characteristic oscillation of the
atom, has been extended by M Born and Th von Kármán to the case of various characteristic oscillations, more in agreement with practice. By greatly simplifying the assumptions regarding the nature of
the oscillations, P Debye obtained a comparatively simple formula for the specific heat of a solid body. This not only corroborates, particularly for low temperatures, the experimental values
obtained by W Nernst and his school, but also is in good agreement with the elastic and optical properties of the body. Further, quantum effects are very noticeable when considering the specific heat
of gases. W Nernst had shown at an early stage that the quantum of energy of an oscillation must correspond to the quantum of energy of a rotation, and accordingly expected that the energy of
rotation of a gas molecule would decrease with temperature. A Eucken's measurements of the specific heat of hydrogen verified this deduction, and the fact that the calculations of A Einstein and O
Stern, P Ehrenfest, and others have not yet been in satisfactory agreement can be ascribed to our incomplete knowledge of the form of the hydrogen molecule. The work of N Bjerrum, E von Bahr, H
Rubens, and G Hettner, etc., on absorption bands in the infrared rays, shows that there can be no doubt that the rotations of the gas molecules indicated by the quantum conditions do actually exist.
However, no one has yet succeeded in giving a complete explanation of these remarkable rotations.
Since all the affinity of a substance is ultimately bound up with its entropy, the theoretical calculation of entropy by means of quanta gives a method of attacking all problems in chemical affinity.
Nernst's chemical constant is a characteristic for the absolute value of the entropy of a gas. O Sackur calculated this constant directly by a combinatory method similar to my method with
oscillators, while O Stern and H Tetrode, by careful examination of experimental data of evaporation, determined the difference of the entropies of gaseous and non-gaseous substances.
The cases considered so far deal with thermo-dynamical equilibrium, which only give statistical mean values for a number of particles and long periods of time. This observation of electronic
impulses, however, leads directly to the dynamical details of the phenomena considered. The determination by James Franck and Gustav Hertz of the so-called resonance potential, or that critical
velocity, the minimum velocity which an electron must have to bring about the emission of a quantum of light by collision with a neutral atom, is as direct a method of measuring the quantum of action
as can be desired. Also, in the case of the characteristic radiation of the Röntgen spectrum discovered by C G Barkla, similar methods which gave very good results were developed by D L Webster, E
Wagner, and others.
The liberation of quanta of light by electronic impulses is the converse of the emission of electrons by projection of light, Röntgen or Gamma rays, and here, again, the quanta of energy determined
from the quantum of action and the frequency of oscillations play a characteristic part in the same way as we have seen above, in that the velocity of the electrons emitted does not depend on the
intensity of the radiation, but on the wavelength of the light emitted. From a quantitative point of view, also, Einstein's relations for light quanta mentioned above have been verified in every way,
particularly by R A Millikan, who determined the initial velocities of the emitted electrons, while the significance of the light quantum in causing photo-chemical reactions has been made clear by E
The results quoted above, collected from the most varied branches of physics, present an overwhelming case for the existence of the quantum of action, and the quantum hypothesis was put on a very
firm foundation by Niels Bohr's theory of the atom. This theory was destined, by means of the quantum of action, to open a door into the wonderland of spectroscopy, which had obstinately defied all
investigators since the discovery of spectral analysis. Once the way was made clear, a mass of new knowledge was obtained concerning this branch of science, as well as allied branches of physics and
chemistry. The first brilliant result was Balmer's series for hydrogen and helium, including the reduction of the universal Rydberg constants to pure numbers, by which the small difference between
hydrogen and helium was found to be due to the slower motion of the heavier atomic core. This led immediately to the investigation of other series in the optical and Röntgen spectra by means of
Ritz's useful combination principle, the fundamental meaning of which was now demonstrated for the first time.
In the face of these numerous verifications (which could be considered as very strong proofs in view of the great accuracy of spectroscopic measurements), those who had looked on the problem as a
game of chance were finally compelled to throw away all doubt when A Sommerfeld showed that - by extending the laws of distribution of quanta to systems with several degrees of freedom (and bearing
in mind the variability of mass according to the theory of relativity) - an elegant formula follows which must, so far as can be determined by the most delicate measurements now possible (those of F
Paschen), solve the riddle of the structure of hydrogen and helium spectra. This is an accomplishment in every way comparable with the famous discovery of the planet Neptune, whose existence and
position had been calculated by Le Verrier before it had been seen by human eye. Proceeding further along the same lines, P Epstein succeeded in giving a complete explanation of the Stark effect of
the electrical separation of the spectral lines, and P Debye in giving a simple meaning to the K-series of the Röntgen spectrum, investigated by Manne Siegbahn. Moreover, there followed a large
number of wider investigations, which explained more or less successfully the mystery of the structure of the atom.
In view of all these results - a complete explanation would involve the inclusion of many more well-known names - an unbiased critic must recognize that the quantum of action is a universal physical
constant, the value of which has been found from several very different phenomena to be 6.54 × 10^-27 ergs secs. It must seem a curious coincidence that at the time when the idea of general
relativity is making headway and leading to unexpected results, Nature has revealed, at a point where it could be least foreseen, an absolute invariable unit, by means of which the magnitude of the
action in a time space element can be represented by a definite number, devoid of ambiguity, thus eliminating the hitherto relative character.
Yet no actual quantum theory has been formed by the introduction of the quantum of action. But perhaps this theory is not so far distant as the introduction of Maxwell's light theory was from the
discovery of the velocity of light by Olaf Römer. The difficulties in the way of introducing the quantum of action into classical theory from the beginning have been mentioned above. As years have
elapsed, these difficulties have increased rather than diminished, and although the impetuous advance of research has dealt with some of them, yet the inevitable gaps remaining in any extension are
all the more painful to the conscientious and systematic worker. That which serves as the foundation of the law of action in Bohr's theory is made up of certain hypotheses which were flatly rejected,
without any question, a generation ago by physicists. That quite definite orbits determined by quanta are a special feature of the atom may be considered admissible, but it is less easy to assume
that the electrons, moving in these paths with a definite acceleration, radiate no energy. But that the quite sharply defined frequency of an emitted light quantum should be different from the
frequency of the emitted electrons must seem, at first sight, to a physicist educated in the classical school, an almost unreasonable demand on his imagination.
However, figures are decisive, and the conclusion is that things have been gradually reversed. At first a new foreign element was fitted into a structure, generally considered fixed, with as little
change as possible; but now the intruder, after gaining a secure place for itself, has taken the offensive, and today it is almost certain that it will undermine the old structure in some way or
other. The question is at what place and to what degree this will happen.
If a surmise be allowed as to the probable outcome of this struggle, everything seems to indicate that the great principles of thermo-dynamics, derived from the classical theory, will not only
maintain their central position in the quantum theory, but will be greatly extended. The adiabatic hypothesis of P Ehrenfest plays the same part in the quantum theory as the original experiments
played in the founding of classical thermodynamics. Just as Rudolf Clausius introduced, as a basis for the measure of entropy, the theorem that any two conditions of a material system are
transformable one to the other by reversible processes, so Bohr's new ideas showed the corresponding way to explore the problems opened up by him.
A question, from the complete answer to which we may expect far-reaching explanations, is what becomes of the energy of a light quantum after perfect emission? Does it spread out, as it progresses,
in all directions, as in Huygens's wave theory, and while covering an ever-larger amount of space, diminish without limit? Or does it travel along as in Newton's emanation theory like a projectile in
one direction? In the first case the quantum could never concentrate its energy in a particular spot to enable it to liberate an electron from the atomic influences; in the second case we would have
the complete triumph of Maxwell's theory, and the continuity between static and dynamic fields must be sacrificed, and with it the present complete explanation of interference phenomena, which have
been investigated in all details. Both these alternatives would have very unpleasant consequences for the modern physicist.
In each case there can be no doubt that science will be able to overcome this serious dilemma, and that what seems now to be incompatible may later be regarded as most suitable on account of its
harmony and simplicity. Until this goal is attained the problem of the quantum of action will not cease to stimulate research and to yield results, and the greater the difficulties opposed to its
solution, the greater will be its significance for the extension and deepening of all our knowledge of physics.
JOC/EFR April 2007
|
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Extras/Planck_quantum_theory.html","timestamp":"2014-04-19T19:34:58Z","content_type":null,"content_length":"30182","record_id":"<urn:uuid:adee429a-32a4-40c7-a710-f6473781666c>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Does Divisibility by 4 always work?
Date: 6/17/96 at 10:7:36
From: Anonymous
Subject: Divisibility by 4 - does it always work?
Dear Dr. Math:
I just read your web page on division tips. I have a question
regarding the divisibility by 4 rule. Your rule says that if the last
two digits in the number have a factor of four, the entire number is
divisible by 4. In your second example, this makes sense, but in your
first example the rule doesn't seem to apply. One hundred IS divisible
by 4, but I don't see how 4 / 00 gets a quotient of 25. Am I missing
something?
Please write soon.
Jessica Javor
Date: 6/17/96 at 17:30:0
From: Doctor Ethan
Subject: Re: Divisibility by 4 - does it always work?
Dear Jessica,
I am glad you are using that page, and yes, they all always work.
I will explain 4 a little more for you.
The idea is that if the last two digits are divisible by 4, then the
whole number will be. This trick doesn't tell you anything about
what the answer will be when you do the division - it just tells you
whether or not you will have a remainder.
Let's look at 100. The trick says to look at the last two digits.
As you pointed out they are 00. Is zero divisible by 4?
Remember, we need to look at 00 / 4 not 4 / 00.
and 00 / 4 = 0
Even though the answer is 0, it is still evenly divisible, so the
trick works for all numbers.
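As an aside (not part of the original answer), the trick works because 100 is
itself a multiple of 4, so the digits above the last two can never affect the
remainder. A few lines of Python confirm the rule:

```python
# The "last two digits" trick versus ordinary division by 4.
def rule_says_divisible(n):
    # n % 100 keeps just the last two digits of n
    return (n % 100) % 4 == 0

# Compare the trick against the real divisibility test for many numbers.
for n in range(10000):
    assert rule_says_divisible(n) == (n % 4 == 0)

print(rule_says_divisible(100))  # True: the last two digits are 00, and 00/4 = 0
```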
-Doctor Ethan, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
|
{"url":"http://mathforum.org/library/drmath/view/58504.html","timestamp":"2014-04-16T17:20:13Z","content_type":null,"content_length":"6354","record_id":"<urn:uuid:cdfedffe-081a-49ba-9dfd-98678119064f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
|
12/4 = __/5 wat is the blank
UnkleRhaukus, it may help her if you explained how you arrived at that solution.
12/4 = x / 5 4x = 60 x = 15
And does that make sense Snowball? Are you able to do one on your own? Thanks Directrix.
i think u multiply 12 and 5 and then get answer that doesn't make sense
Ok, I see. When you have an equation, you can do the same thing on both sides, it remains an equation. So if you add 10 to the right hand side, you can add 10 to the left hand side and it will
still be an equation. With me?
Now, we are trying to find out what is the number that is above 5, so we represent the blank by a x.
The next step is to simplify it. We want all the terms with x on one side of the = and all the terms without x on the other side.
I would cross multiply.
i am with msmer
\[12/4=3\] \[x/5=3\]\[x=15\]
Multiply both sides by 5. On the left you have 5 x (12/4) and on the other side you have 5 x (x/5). What do you get then? What happens when you multiply 5 into x/5?
This means: \[\frac{12}{4} = \frac{x}{5}\] To cross multiply, you multiply the numerator of one fraction with the denominator of the other fraction.
I like Unkle's solution as well.
5*12 = 4*x
why do it the hard way for?
60 = 4x
x = 15
GIVE ME THE EASY WAY ONE STEP AT A TIME OK DRAW IT
Ok, snowball, go back to my last post, Are you with me?
no draw it
unklerhaukus: for this particular problem, your solution would be easier because 12/4 simplifies to 3. However, in a case such as 13/4 where it does not simplify into a nice number, cross
multiplying will be much easier - I thought it would be good to show snowballhockey how to set that up.
MSMR's does look better.
i get the 60 part then after that u simplify that with 5
So you have three ways to do this. All are correct. You need to understand what Unkle did, what MSMR did and what I showed you. Can you try one on your own? We will help
um ok
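As an aside (not part of the thread), the cross multiplication described
above generalizes to any proportion a/b = x/c. A tiny Python sketch, where
solve_proportion is a made-up helper name:

```python
def solve_proportion(a, b, c):
    # Solve a/b = x/c for x: cross multiplying gives a*c = b*x, so x = a*c/b.
    return a * c / b

print(solve_proportion(12, 4, 5))  # 15.0, the answer the thread arrives at
```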
|
{"url":"http://openstudy.com/updates/4f331a3ee4b0fc0c1a0b57c1","timestamp":"2014-04-16T10:35:51Z","content_type":null,"content_length":"153752","record_id":"<urn:uuid:0cbbade8-5ea3-42f6-aa53-28565b64f84d>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00542-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Challenging Diophantine Equation
Date: 09/30/2004 at 21:14:07
From: Hannah
Subject: diophantine equations
Find all solutions of 2^m = 3^n + 5. It has only 2 solutions. I
thought I had the answer as I have shown it had 2 solutions but it
was pointed out I made an error in the calculations.
The case bothering me is when I have shown m has to be of form 4k+1
and n of the form 4K+3 for some k,K and further that K has to be even
to ensure 3^(4K+3)+5 is congruent to 0 mod 32.
What I'm trying to show is then although it is 0 mod 32, it has an odd
factor, but it is not proving that easy. What I get is that
3^(4K+3)+5=32m where m ends in 6 or 1, so I am close but can't quite
get the last part. Any pointers will be most appreciated.
Date: 10/26/2004 at 11:58:26
From: Doctor Vogler
Subject: Re: diophantine equations
Hi Hannah,
Thanks for writing to Dr. Math. I'm terribly sorry about the delay in
replying to you. You had a hard problem, and no one had a good
solution, so it went on by and was soon lost.
But I love Diophantine equations, so I kept thinking about it. And I
think I'm getting somewhere, so I'll tell you what I've done. I hope
it's not too late to give you a response.
You see, my first thought is to use modular arithmetic, since that's a
great way to solve these kinds of integer exponent problems. But it's
made difficult by the fact that there are already two solutions.
I'd like to say: Find some number M and consider all of the powers of
2 mod M, and all of the powers of 3 mod M, and find all of the pairs
whose difference is 5 mod M.
Of course, if you just pick any old M, then there will probably be
lots of solutions. But if you pick some M so that 2 and 3 have
(relatively) small order, then you get better results. For example,
take M = 511. Then the order of 2 is 9, and the order of 3 is 12, so
there are only 9 powers of 2 and only 12 powers of 3, and hence only
9*12 = 108 differences, and it turns out that only 2 of them equal 5.
So we can conclude that if
2^m = 3^n + 5 (mod 511)
then either
m = 3 mod 9 and n = 1 mod 12, or
m = 5 mod 9 and n = 3 mod 12.
But the problem is that (1, 3) and (3, 5) are solutions to the
equation. So no matter what M we choose, there will always be two
possible solutions.
But that is only true if M is not divisible by 2 or 3. The idea is to
realize that we must have something that distinguishes between small m
and n and large m or n. So we let M be a power of 2 or a power of 3.
For example, we want to prove that there are no solutions with n > 3,
so let's choose some M so that the 3^n term disappears, but not until
n > 3. So we choose M = 81, and consider
2^m = 3^n + 5 (mod 81).
Now, we wonder if there are solutions with n > 3. So we assume that
n > 3, and that changes our equation to
2^m = 5 (mod 81).
Well, 2 has order 54 mod 81, and all solutions to this equation have
m = 23 (mod 54).
The very important thing here is that this is not 3 or 5!
Similarly, we can choose M to be a power of 2, and we want the 2^m
term to disappear, but not until m > 5. So we let M = 64, and then
2^m = 3^n + 5 (mod 64).
Again, if we assume that m > 5, then 2^m is zero mod 64, so this
equation is
3^n = -5 = 59 (mod 64).
We find that all solutions to this equation have
n = 11 (mod 16).
Again, the important thing is that this number is not 1 or 3!
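As an aside (not part of the original reply), both congruence claims are
easy to confirm with Python's built-in three-argument pow:

```python
def order(a, m):
    # Multiplicative order of a modulo m (a and m must be coprime).
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

assert order(2, 81) == 54                                     # 2 has order 54 mod 81
assert [m for m in range(54) if pow(2, m, 81) == 5] == [23]   # 2^m = 5 (mod 81)
assert order(3, 64) == 16                                     # 3 has order 16 mod 64
assert [n for n in range(16) if pow(3, n, 64) == 59] == [11]  # 3^n = 59 (mod 64)
print("both congruences check out")
```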
Now we just have to find a big M like the ones I discussed earlier (so
that the orders of 2 and 3 are small) which contradicts one of these
two statements.
The trouble is that if the order of 3 is not divisible by 27 (half of
54), then the solution
m = 5 and n = 3
will fit our requirement for m. And if the order of 2 is not
divisible by 16, then that same solution will fit the requirement for
n. So we want the orders of 2 and 3 mod M to be pretty small, so that
there aren't lots and lots of solutions, but at least one of them has
to be big enough to be divisible by 27 or 16 (respectively).
So we look around for such an M. A computer search helps. And I
found M = 697. The order of 2 is 40, and the order of 3 is 16. I
checked all 40*16 pairs of a power of two and a power of 3, and the
only ones which have
2^m = 3^n + 5 (mod 697)
m = 3 mod 40 and n = 1 mod 16, and
m = 5 mod 40 and n = 3 mod 16.
Of course, neither of these has
n = 11 mod 16,
so there can be no other solutions.
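As an aside (not part of the original reply), the whole search modulo 697
takes only a few lines of Python to reproduce:

```python
M = 697
# Consistent with the stated orders: 2^40 and 3^16 are both 1 mod 697.
assert pow(2, 40, M) == 1 and pow(3, 16, M) == 1

solutions = [(m, n)
             for m in range(40) for n in range(16)
             if pow(2, m, M) == (pow(3, n, M) + 5) % M]

# Doctor Vogler reports that only (m, n) = (3, 1) and (5, 3) survive; in
# particular no surviving pair has n = 11 (mod 16), which closes the proof.
print(solutions)
```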
If you have any questions about this or need more help, please write
back and show me what you have been able to do, and I will try to
offer further suggestions.
- Doctor Vogler, The Math Forum
|
{"url":"http://mathforum.org/library/drmath/view/71960.html","timestamp":"2014-04-20T06:34:23Z","content_type":null,"content_length":"9640","record_id":"<urn:uuid:d1d2bd8c-7afc-4294-947f-c3c560e1d2c9>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
|
[SciPy-dev] Nonlinear constrained optimizer in SciPy
Jean-Sebastien Roy js at jeannot.org
Fri Apr 9 13:06:28 CDT 2004
Following an earlier exchange with eric jones, I would like to suggest
the addition of a non linear constrained optimizer to SciPy (a
functionality currently lacking).
In 2001, a colleague and I did a review on non-linear bound constrained
optimizer that only required the gradient of the function to be
optimized. We did our review mosty based upon softwares listed in H.
Mittlemann optimization guide (http://plato.la.asu.edu/guide.html), the
non linear programming faq (http://www-unix.mcs.anl.gov/otc/Guide/faq/)
and the NEOS guide (http://www-fp.mcs.anl.gov/otc/Guide/), plus a few
other sources (HARWELL notably).
We tested about 12 different optimizers for which the source code was
available, and the result, while foreseeable, was not very interesting:
performance is largely dependent on the problem, and tuning of the
solvers parameters can result in huge performance increases.
Nevertheless, in this restricted class of optimizers, a few emerged as
quite good on a variety of problems. Some of them would probably be good
candidates for inclusion in SciPy.
(Note: Very few codes come with a license explicitly allowing free
redistribution and inclusion in commercial software)
The Fortran source can be downloaded here:
We wrote to Nocedal, who did not oppose inclusion in our software, so
maybe we could ask him for inclusion in SciPy?
It was the simplest to use, iterations are fast, and it was quite robust.
Probably the first code I would try on any problem.
The Fortran source can be extracted from the scilab distribution.
See also:
An LBGFS code like the previous one, reasonnably simple to use, very
fast, very efficient.
The use in commercial software is forbidden by Scilab's licence, but we
may still ask the author, Claude Lemaréchal, who is a very nice person.
A bound constrained truncated newton code.
Stephen Nash did not oppose use in our code, and since this code
provided the best performance on our problem (which may not be the case
on other problem), I did a C version of it (with some modifications) and
recently added a Python interface. It's distributed with a BSD license,
so it could be readily integrated in SciPy (but I'm obviously biased).
You can download it here (look for TNC):
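(Editor's note: this C translation of TNC did make it into SciPy; it lives
on as the 'TNC' method of scipy.optimize. As a dependency-free toy
illustrating the problem class all these codes address, gradient-only
minimization under simple bounds, here is a naive projected-gradient sketch;
it is not the truncated-Newton algorithm itself.)

```python
def project(x, lo, hi):
    # Clip each coordinate back into its [lo, hi] box.
    return [min(max(xi, l), h) for xi, l, h in zip(x, lo, hi)]

def projected_gradient(grad, x, lo, hi, step=0.1, iters=500):
    # Take plain gradient steps, projecting back onto the box after each one.
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lo, hi)
    return x

# Minimize f(x, y) = (x - 3)^2 + (y + 1)^2 subject to 0 <= x, y <= 2.
grad = lambda p: [2 * (p[0] - 3), 2 * (p[1] + 1)]
sol = projected_gradient(grad, [1.0, 1.0], lo=[0, 0], hi=[2, 2])
print(sol)  # converges to [2, 0]: the free minimum (3, -1) clipped to the box
```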
A few more general codes:
SQP code with non linear constraints (including equality constraints).
While it was quite difficult to use in our problem, it works well, and
is very general.
The licence oppose commercial use, but we could ask the author, Peter
Spellucci about inclusion in SciPy.
Trust region newton code. Fast iterations. Sparse Hessian estimation
using finite differences, so it may be more difficult to use (since the
sparsity pattern should be computed).
Again, Jorge More must be asked about inclusion.
Other interesting codes include:
An interior point code, very general (non linear equality constraints,
We did not review it (it's quite recent). It comes with a free license,
so integration in SciPy should be possible. (Some options of IPOPT
require part of the HSL library, which cannot be used in commercial software.)
A derivative free optimisation code, which, like IPOPT, comes with a free
license. But it requires a non-linear gradient based optimizer and some
Another alternative for derivative free optimisation, which seems free.
There are probably other codes, better alternatives, which should be
considered for inclusion in SciPy. Any suggestions ?
More information about the Scipy-dev mailing list
|
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2004-April/002083.html","timestamp":"2014-04-18T12:00:27Z","content_type":null,"content_length":"7268","record_id":"<urn:uuid:f824cd58-49a2-40fa-8aee-c8676a6241f6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Fractals, in Layman's Terms
What is a fractal? This is a simple question with a very complicated (and very long) answer. A technical answer, while accurate, doesn't help much because it uses other fractalspeak jargon that few people
understand, so I won't even give that definition here.
The simple answer is that a fractal is a shape that, when you look at a small part of it, has a similar (but not necessarily identical) appearance to the full shape. Take, for example, a rocky
mountain. From a distance, you can see how rocky it is; up close, the surface is very similar. Little rocks have a similar bumpy surface to big rocks and to the overall mountain.
The first image is a Julia set, a very simple fractal type. I've highlighted a small box near the left side (it's a little faint). The portion of the image in that box is shown in the second image, below. That image
also has a small box highlighted, and that area is shown in full in the third image.
You can see in these images that smaller areas of the fractal shape look very much like the larger, full-size image. With Julia fractals, you can continue this enlarging ("zooming") process as
often as you like, and you will still see the same sort of details and shapes at very tiny sizes that you see on the full-size image. This is what is meant by self-similarity.
Now of course, with something so rigidly self-similar, there's not really much point in zooming in. After all, everything is the same; small detail looks like large detail. So while it's
interesting that fractals are self-similar, if this is all there is to it, there isn't much point.
Fortunately for fractal enthusiasts, that isn't all there is to it. Many fractal types get wildly different as you zoom in. They're still self-similar, but they're not rigidly self-similar.
This is what makes fractal exploration so intriguing. The features you see as you zoom are always changing, teasing you with a little bit of familiarity and tantalizing you with new and unexpected
twists. With just a single fractal shape, you can explore forever and never see everything it has to offer. The further you zoom, the more likely you are seeing something that nobody has ever seen
before. And with modern computers, it's very easy to zoom and zoom and zoom. With just a few clicks you can have zoomed so far that the original fractal image is larger than the sun.
The basic technique of these fractals can actually be explained without resorting to confusing mathematical equations and jargon. It's rather simple, really.
First, give every point on the screen a unique number. Now take that number and stick it into a formula; you'll get a result from the formula. Take that result and stick it back into the formula.
Keep doing this and watch what happens to the numbers you get. Color each point based on what happens.
That's it. Really that's it. Now, with most formulas it probably won't do much of interest, but with the formulas used in fractal creation, some interesting things happen. Sometimes the numbers
you get by feeding the results of a formula back into the formula (iterating) explode into enormous numbers that just keep getting bigger and bigger. Those points get colored one way. Other
times, the numbers "home in" on a number, getting closer and closer to it. They get colored a different way.
The interesting thing, and the reason fractals work at all, is that sometimes just a tiny little change in the number you start with can completely change what happens as you keep iterating the
number. And the boundary between numbers that explode and numbers that home in is complicated and twisted: it's the shape of the fractal.
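The recipe just described can be sketched in a few lines, using the classic
z -> z*z + c iteration (the article itself does not commit to a specific
formula):

```python
def escape_count(c, max_iter=100):
    # Iterate z -> z*z + c from z = 0; count steps until |z| "explodes" past 2.
    z = 0
    for i in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return i          # exploded: colored one way
    return max_iter           # never escaped: colored the other way

print(escape_count(0 + 0j))   # 100: stays at 0 forever, "homes in"
print(escape_count(2 + 0j))   # 1: explodes almost immediately
```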
Calculating fractals this way involves a lot of work. A small fractal image, perhaps only 640x480, contains over 300,000 points. Each of those points may require running a number through the fractal
formula more than 1,000 times. This means the formula has to be computed more than three hundred million times. And that's a mild example. Extreme images (such as poster-size fractals) can involve
more than one trillion calculations.
Fortunately for the impatient among us, modern computers are fast enough to do the job in a few minutes. Large fractals might take hours or days, but exploring fractals has never been easier.
As I stated at the outset, fractals are a huge topic. All I've really talked about here is one particular type of fractal (escape-time fractals), but there are many other types as well.
Unfortunately, the further you look into fractals, the more math you will need to know. There are very few fractal-related books or web pages that don't get into heavy mathematics. I've attempted
to assemble some pages here that will get you started on the mathematics behind fractals in an accessible fashion, but there is no hiding the fact that it is math.
Return to Background
Return to Information
Return to Entrance
|
{"url":"https://www.fractalus.com/info/layman.htm","timestamp":"2014-04-18T18:56:03Z","content_type":null,"content_length":"13731","record_id":"<urn:uuid:1b2e6511-735a-4637-9e39-c6e51d8ebf6d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Stoneham, MA Prealgebra Tutor
Find a Stoneham, MA Prealgebra Tutor
...Computer Programming is the creation of computer programs! Everything that we do with computers, from Word, to the internet, every page of the internet, games, etc. Everything on a computer or
even a handheld device (cell phone, etc.) has a program on it.
19 Subjects: including prealgebra, calculus, physics, algebra 2
My tutoring experience has been vast in the last 10+ years. I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor
a wide array of math courses.
36 Subjects: including prealgebra, English, reading, calculus
...SAT remains the most widely accepted standardized test for college admissions. Having existed for many decades, it is well understood, and excellent study materials are available. I use those
materials, along with my personal savvy about test-taking, to help improve students' scores.
44 Subjects: including prealgebra, chemistry, physics, writing
I am a licensed teacher and an experienced tutor who has worked with high school students for many years. I help with math - Algebra, Geometry, Pre-Calculus, Statistics. I am comfortable with
Standard, Honors, and AP curricula.
8 Subjects: including prealgebra, statistics, algebra 1, geometry
...Feel free to shoot me an email if you’re interested in studying with me, or if you have any questions. I look forward to hearing from you!I have academic experience in Prealgebra and the base
curriculum for Algebra 1, including the following: Expressions and Equations, Linear Equations, Represen...
38 Subjects: including prealgebra, English, chemistry, reading
|
{"url":"http://www.purplemath.com/Stoneham_MA_Prealgebra_tutors.php","timestamp":"2014-04-16T19:34:41Z","content_type":null,"content_length":"23967","record_id":"<urn:uuid:0ccd1d61-598f-46e8-a4fd-88bcc698a850>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Factoring Complicated Expressions - Concept
When asked to simplify expressions, sometimes we come across complicated expressions that are not easily factored by traditional methods. When factoring complex expressions, one strategy that we can
use is substitution. When an expression has complex terms, we can substitute a single variable, factor and then re-substitute the original term for the variable once we have completely factored the
Factoring complicated trinomials, okay? What I mean by complicated is something that sort of looks a little bit different than what we are used to. Okay?
The most common approach to doing a problem like this, at least what students want to do, is to foil everything out, combine like terms and then factor it down again. Okay? But I want to show you a
little bit of a shortcut that we can do in order to deal with this. Okay?
What we have is something squared minus something else times something plus 4, okay? And what we can actually do is make a substitution. And while I use u as my substitution variable, you
can do whatever you want, but I highly recommend not using whatever variable is in your problem, okay? Because if you use x then you're going to get confused about which x is which and it'll get all
confusing, okay. So if you just introduce a new variable for whatever you're dealing with.
Okay, by saying u is equal to 3x minus 2, I can then go back to this problem. 3x minus 2 squared just becomes u squared minus 4 times 3x minus 2 just becomes -4u and then the +4 is left down the end.
Okay, we know how to factor this now, okay. So all we've done is we've taken something that's kind of complicated by making a substitution, a u substitution, I've turned it into something easy. I
know how to factor this, this is just going to be u-2 quantity squared, okay? Be careful though because a lot of people want to end right here. They want to say, okay I factored it down, u-2. But if
you look at it, our initial variable was x. It doesn't really make any sense to introduce a different variable as our end product, so what we have to do is go back and take this u and plug it back
in, okay? So this u is 3x-2. So we end up with 3x-2-2. Combining like terms what we have then is 3x-4 quantity squared, okay? So factoring a fairly ugly thing by making a substitution makes your life
easier. Like I said, you could foil all this out if you wanted to but it's going to be a lot harder and you're more likely to make mistakes than if you just make a simple substitution.
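As an aside (not part of the transcript), the result is easy to spot-check
numerically; the identity (3x - 2)^2 - 4(3x - 2) + 4 = (3x - 4)^2 should hold
for every x:

```python
for x in [-2, -1, 0, 0.5, 1, 3, 10]:
    original = (3*x - 2)**2 - 4*(3*x - 2) + 4   # the expression before substitution
    factored = (3*x - 4)**2                      # the claimed factorization
    assert original == factored
print("substitution factoring checks out")
```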
The other sort of complicated thing I want to look at is, if we're dealing with negative exponents, okay? So say I want to factor this expression which has 3 different negative exponents. What I want
to do is draw a comparison. If I say 3x to the fourth minus 2x squared and I asked you to factor this. The first thing you want to do is to factor out the greatest common factor. You look for the
smallest power of x that you have, okay?
So in this case you factor out the x squared leaving you with 3x squared minus 2, okay? You take out the smallest power on x. What we're going to do with this one is exactly the same except that it's
sort of a weird thing to wrap your head around in that the smallest power of x is actually the most negative. If you think of a number line, your bigger numbers are on one end, your big negatives on the
other. The smaller numbers are towards those big negatives. So what you really have then is you're taking out your largest negative power. So in this case x to the negative eighth. Take that out and
then we are left with 3x. And then again, think about when we're multiplying bases we add our exponents.
So we want to figure out what goes here so that when we multiply these things together we end up with a -6. -8 plus what is equal to -6? That should be a 2, okay? So then we have a +3 and again,
x to the -8 times what is going to give us x to the -7? Again, when we multiply bases we add exponents. -8+1 is -7, so this is just going to be x, and then we end up with -20.
One way you can always check to make sure you did this right is the whole goal of factoring out the negative exponent is that every other exponent is going to be positive, okay? So I factor out the
-8, I'm left with the square, a single and a constant term. So I know that I did it right. If I still had negative exponents in here, something went wrong.
Now what we have is we took out this negative term and we have something we can factor, okay? For our purposes right now I'm not going to finish this problem. Hopefully you can see how to that but
what I wanted to talk about is just how you tackle a problem like this. You factor out to the largest negative possible leaving you with something that you can then factor.
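As an aside (not part of the transcript), the expression itself is only shown
on screen, not read aloud; reconstructing it from the spoken steps as
3x^-6 + 3x^-7 - 20x^-8, the factorization x^-8 (3x^2 + 3x - 20) checks out
numerically:

```python
for x in [1, 2, 0.5, -3]:
    original = 3*x**-6 + 3*x**-7 - 20*x**-8
    factored = x**-8 * (3*x**2 + 3*x - 20)
    assert abs(original - factored) < 1e-9   # equal up to float round-off
print("negative-exponent factoring checks out")
```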
factoring substitution negative exponents
|
{"url":"https://www.brightstorm.com/math/algebra-2/factoring/factoring-complicated-expressions/","timestamp":"2014-04-17T15:56:06Z","content_type":null,"content_length":"62961","record_id":"<urn:uuid:37762116-3595-4a59-b96c-94bf8efe6c49>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00603-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Westmont Math Tutor
Find a Westmont Math Tutor
...When I work with students, first I make sure that the student understands all related concepts; then I spend time teaching strategies to solve the problems assigned. I prepare students for quizzes,
tests, homework and ISAT or any other standardized tests. I also specialize in preparing students for ICTS(TAP) basic skills exam.
23 Subjects: including prealgebra, SAT math, algebra 1, algebra 2
...I have taken about eight courses in economics at the college level. I use economics a great deal in finance applications. Generally, I ask that students email me questions with which they need
help before we meet, just in case I need to review that particular material.
13 Subjects: including geometry, algebra 1, algebra 2, Microsoft Excel
...I took pleasure in helping students understand concepts and succeed. My best students are those that desire to learn and I seek to cultivate that attitude of growth and learning through a zest
and enthusiasm for learning. Eventually, I began teaching ACT Reading/English, began to teach math and reading to all ages, and eventually became a sought after subject tutor.
26 Subjects: including calculus, chemistry, geometry, ACT Math
...Students are more confident, parents are happier, and all are pleased with the report card results. I usually meet with students in the evening at our local library. I have also met with
students in their homes and occasionally at a local coffee shop.
13 Subjects: including trigonometry, algebra 1, algebra 2, biology
...I hope to ignite the same love of the subject that I have, but at the very least helping students gain a proficiency and mastery of the subject is very satisfying. I'm a patient person, good
listener and communicator. I work well with students from middle school through college and I can tutor ...
22 Subjects: including prealgebra, computer science, PHP, Visual Basic
|
{"url":"http://www.purplemath.com/westmont_math_tutors.php","timestamp":"2014-04-20T21:17:55Z","content_type":null,"content_length":"23731","record_id":"<urn:uuid:6fcec095-0fbf-4531-9833-3a5664af66f2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00614-ip-10-147-4-33.ec2.internal.warc.gz"}
|
Re: Robot Seismic Combinations
I'm working through some verification problems that I had from Risa 3d. I've recreated a bunch of them in Robot to ensure I'm setting up Robot correctly to get the results I'm expecting. That said,
this particular problem has me stumped. It's a dynamic problem, one which requires Response Spectrum results to be combined for multiple directions. I got the modal analysis to calculate exactly
the same as both Risa3d and Sap2000. I am having great difficulty setting up the combinations correctly so my base reactions match the expected results. I'm off by a factor of about 30. What / where
am I missing? The attached file includes my Robot file, my Risa3d file, and the PDF documenting the verification problem along with the expected results. I am making a mistake somewhere after the
modal calculations. Any help is much appreciated.
|
{"url":"http://forums.autodesk.com/t5/Robot-Structural-Analysis/Robot-Seismic-Combinations-Compare-to-Risa-and-Sap2000/m-p/3427019","timestamp":"2014-04-16T22:31:44Z","content_type":null,"content_length":"194653","record_id":"<urn:uuid:76475dae-69ad-44f7-9106-f2261c1cbd6a>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
|