- What is the slope of a line parallel to 5x + 4y + 9 = 0?
- The lines 30x - 5y + 11 = 0 and y = 6x + 14 are
- Find the value of k if the line ky = (k - 1)x + 3k is parallel to y = 2x + 5.
- What is the slope of a line parallel to 6x + 5y + 11 = 0?
- The lines 20x - 4y + 9 = 0 and y = 5x + 12 are
- In a plane, two lines parallel to a third line are
- In a plane, two lines perpendicular to a third line are
- The slopes of non-vertical parallel lines are
- Identify the correct statement.
  I. Vertical lines are perpendicular to each other.
  II. Vertical lines are parallel to each other.
  III. Vertical lines are not parallel to each other.
  IV. Vertical lines are intersecting.
- What is the slope of a line parallel to the x-axis?
- The number of lines parallel to a line and passing through a given point is/are
- Find the product of the slopes of the x-axis and y-axis.
- A(1, -3), B(5, 4), C(3, 2), D(2, 0) are the coordinates of 4 points in the x-y plane. Is AC↔ || BD↔?
- Which of the following lines is parallel to x = 3?
- The slope of a line parallel to 2x + 3y + 6 = 0 is
- A(4.5, 5), B(2, 5), C(1.5, -2), D(3, -2) are the coordinates of 4 points in the x-y plane. What can you say about the lines AB↔ and CD↔?
- What is the slope of a line parallel to y = 3?
- The lines x = 3 and x = 7 are
- Which of the following is not parallel to y = (3/5)x + 6?
- Find the equation of a line parallel to x = 3 and passing through (4, 5).
- Find the equation of the line parallel to y = 4 and passing through (6, 5).
- Find the equation of the line parallel to y = 2x + 3 and passing through (1, 2).
- What is the inclination of a line parallel to y = x - 3?
- What is the slope of a line parallel to the y-axis?
- A line AB↔ makes an angle of 60° with the positive x-axis. What is the slope of any line parallel to AB↔?
- If the line through (3, y) and (2, 7) is parallel to the line through (-1, 4) and (0, 6), then what is the value of y?
- Choose the equation of the line passing through the point (-5, 1) and parallel to the line joining the points A(7, -1) and B(0, 3).
- What is the value of p if the lines y = 3x + 7 and 2y + px = 3 are parallel to each other?
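Most of the questions above reduce to one fact: the line ax + by + c = 0 (with b ≠ 0) has slope -a/b, and two non-vertical lines are parallel exactly when their slopes are equal. A small illustrative sketch (the function names are my own, not from the worksheet):

```python
def slope(a, b):
    """Slope of the line a*x + b*y + c = 0; the constant c does not matter."""
    if b == 0:
        return None  # vertical line: slope is undefined
    return -a / b

def are_parallel(line1, line2):
    """Lines given as (a, b) coefficient pairs; parallel iff slopes match."""
    return slope(*line1) == slope(*line2)

# First question: 5x + 4y + 9 = 0 has slope -5/4.
# Second question: 30x - 5y + 11 = 0 has slope 6, the same as y = 6x + 14
# (rewritten as 6x - y + 14 = 0), so those two lines are parallel.
```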
Wolfram Demonstrations Project
Synthesis of Ethanol by Hydration of Ethylene
Consider the following gas-phase reaction to produce ethanol from ethylene: C2H4(g) + H2O(g) ⇌ C2H5OH(g). Take the initial number of moles of ethylene to equal 1. The initial number of moles of water is determined by the selected steam-to-ethylene molar ratio. Initially, there is no ethanol in the reaction vessel. This Demonstration plots the extent of reaction versus pressure, P, for user-set values of the temperature, T. It follows from the stoichiometry of the reaction that the extent of the reaction is equal to the fractional conversion of ethylene at equilibrium.
The reaction gas mixture is described by a virial equation of state, Z = Pv/(RT) = 1 + BP/(RT), where the mixture second virial coefficient B is computed from the pure-species and cross coefficients B_ij.
The blue curve is the plot for a reacting gas mixture that is an ideal solution (the fugacity coefficients are computed using a virial equation of state that does not include the cross-coefficient terms). The magenta curve is the plot for a reacting mixture of real gases (the calculation of the fugacity coefficients includes the cross-coefficient terms in the virial equation of state). Finally, the brown curve is the plot for a reacting mixture that behaves as an ideal gas mixture (all fugacity coefficients are set equal to unity). It is clear that the conversion approaches unity at low temperatures since the equilibrium constant is large for this exothermic reaction. Also, the conversions for all cases (ideal gas mixture, ideal solution, and real gas mixture) are identical at low pressures. Finally, since the sum of the stoichiometric coefficients is negative (ν = -1 for this reaction), increasing the pressure will lead to higher conversions (Le Chatelier's principle). The present calculations are valid only for the gas-phase reaction, which means that the pressure, P, cannot be higher than the approximate dew point pressure displayed in red in the extent of reaction versus pressure plot.
Finally, the equilibrium compositions, computed for the real gas mixture case, are also displayed in a separate plot. The blue, magenta, and brown curves correspond to the mole fraction of ethanol,
water, and ethylene at equilibrium, respectively.
The equilibrium constant satisfies K = Π_i (y_i φ̂_i P/P°)^(ν_i), where φ̂_i is the fugacity coefficient of species i in solution, y_i is its equilibrium mole fraction, ν_i is its stoichiometric coefficient, and P° is the standard-state pressure (1 bar).
1: For the ideal gas mixture, assume φ̂_i = 1.
2: For the ideal solution mixture, assume φ̂_i = φ_i, where φ_i is the fugacity coefficient of the pure species i. From the virial EOS, ln φ_i = B_ii P/(RT), where B_ii is the pure-species second virial coefficient.
3: For the real gas mixture case, the second-order cross virial coefficients must be used, and we have ln φ̂_k = (P/RT)[B_kk + (1/2) Σ_i Σ_j y_i y_j (2δ_ik - δ_ij)], where δ_ij = 2B_ij - B_ii - B_jj.
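For the ideal gas mixture case (brown curve), the equilibrium relation reduces to ε(1 + r - ε) / [(1 - ε)(r - ε)] = K·P/P°, where ε is the extent of reaction and r is the steam-to-ethylene ratio. A minimal sketch solving this by bisection; the numerical value of K used below is arbitrary for illustration and is not taken from the Demonstration:

```python
def extent_ideal_gas(K, P, r):
    """Solve eps*(1+r-eps) / ((1-eps)*(r-eps)) = K*P for the extent eps,
    for C2H4 + H2O <=> C2H5OH starting with 1 mol C2H4 and r mol H2O,
    ideal-gas mixture (all fugacity coefficients = 1, P in bar, P° = 1 bar)."""
    def f(eps):
        # f changes sign at the equilibrium extent: f(0) < 0, f -> +  near eps_max
        return eps * (1 + r - eps) - K * P * (1 - eps) * (r - eps)

    lo, hi = 0.0, min(1.0, r) - 1e-12
    for _ in range(200):  # bisection on [lo, hi]
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Consistent with Le Chatelier's principle, increasing P at fixed K and r raises the computed conversion.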
[1] J. M. Smith, H. C. Van Ness, and M. M. Abbott, Introduction to Chemical Engineering Thermodynamics, 7th ed., New York: McGraw-Hill, 2005.
The basics of FPGA mathematics | EE Times
1. <spreadsheet> In a spreadsheet, the intersection of a row, a column, and a sheet; the smallest addressable unit of data. A cell contains either a constant value or a formula that is used to
calculate a value. The cell has a format that determines how to display the value. A cell can be part of a range. A cell is usually referred to by its column (labelled by one or more letters from the
sequence A, B, ..., Z, AA, AB, ..., AZ, BA, BB, ..., BZ, ... ) and its row number counting up from one, e.g. cell B3 is in the second column across and the third row down. A cell also belongs to a
particular sheet, e.g. "Sheet 1".
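The column-labelling scheme described above (A, B, ..., Z, AA, AB, ...) is a bijective base-26 numbering, so a reference like "B3" can be converted to numeric coordinates. An illustrative sketch (the function name is my own):

```python
def parse_cell(ref):
    """Split a cell reference like 'B3' or 'AA12' into (column, row) numbers.
    Column letters are bijective base 26: A=1, ..., Z=26, AA=27, AB=28, ..."""
    letters = ''.join(ch for ch in ref if ch.isalpha()).upper()
    digits = ''.join(ch for ch in ref if ch.isdigit())
    col = 0
    for ch in letters:
        col = col * 26 + (ord(ch) - ord('A') + 1)
    return col, int(digits)

# parse_cell('B3') gives (2, 3): second column across, third row down.
```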
2. <networking> ATM's term for a packet.
Last updated: 2007-10-22
Copyright Denis Howe 1985
Summary: Large matchings in uniform hypergraphs and the conjectures of Erdős and Samuels

Noga Alon, Peter Frankl, Hao Huang, Vojtěch Rödl, Andrzej Ruciński, Benny Sudakov
In this paper we study conditions which guarantee the existence of perfect matchings and perfect fractional matchings in uniform hypergraphs. We reduce this problem to an old conjecture by Erdős on estimating the maximum number of edges in a hypergraph when the (fractional) matching number is given, which we are able to solve in some special cases using probabilistic techniques. Based on these results, we obtain some general theorems on the minimum d-degree ensuring the existence of perfect (fractional) matchings. In particular, we asymptotically determine the minimum vertex degree which guarantees a perfect matching in 4-uniform and 5-uniform hypergraphs. We also discuss an application to a problem of finding an optimal data allocation in a distributed storage system.
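A perfect matching in a k-uniform hypergraph is a set of pairwise disjoint edges covering every vertex exactly once. As a concrete illustration (this sketch is mine, not from the paper), checking that a candidate edge set is a perfect matching is straightforward:

```python
from itertools import chain

def is_perfect_matching(vertices, edges, k):
    """Check that `edges` is a perfect matching of a k-uniform hypergraph
    on `vertices`: every edge has k vertices, the edges are pairwise
    disjoint, and together they cover every vertex exactly once."""
    if any(len(edge) != k for edge in edges):
        return False
    covered = list(chain.from_iterable(edges))
    # no vertex repeated across edges, and all vertices covered
    return len(covered) == len(set(covered)) and set(covered) == set(vertices)
```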
1 Introduction
A k-uniform hypergraph, or a k-graph for short, is a pair H = (V, E), where V := V (H) is a finite
From Wikibooks, open books for an open world
It might seem strange, but counting is sometimes one of the most difficult things in mathematics. In fact, it would not be far from the truth to call combinatorics the art of arranging objects and counting them. Brute-force techniques, in which objects are counted by enumerating all possibilities, are usually doomed to fail in combinatorics, and we are forced to rely on various techniques and mathematical ideas. Sometimes, when even these ideas fail us, we have to be content with giving estimates or bounds on the number of objects to be counted.
Double Counting
In combinatorics, double counting, also called two-way counting, is a proof technique that involves counting the size of a set in two ways in order to show that the two resulting expressions for the size of the set are equal. We describe a finite set X from two perspectives, leading to two distinct expressions, and through the two perspectives we demonstrate that each is equal to |X|.
Let us look at two examples. The first is called the handshaking lemma and can be stated succinctly as:
At a convention, the number of delegates who shake hands an odd number of times is even.
To see this, let $D_1,\cdots,D_n$ be the delegates. Let $x_i\,$ be the number of times $D_i\,$ shakes hands and $y\,$ the number of handshakes that occur. Clearly the total number of handshake pairs, or the total number of times hands were extended, is $\sum_{i=1}^{n}x_i$. But counting another way, it is just $2y\,$, since each handshake entails two extended hands. So,

$\sum_{i=1}^{n}x_i = 2y.$

Now, how many odd $x_i\,$ can there be in the sum? If the number of odd $x_i\,$ were odd, then their total must have been odd too (say 2a+1). This, when added to the sum of the even $x_i\,$ (say 2b), would have given an odd number (2a+2b+1). But we just saw that $\sum_{i=1}^{n}x_i$ is even. So the number of odd $x_i\,$ is even. But that's just another way of saying that the number of delegates who shook hands an odd number of times is even.
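The handshaking lemma is easy to check empirically: simulate random handshakes and count delegates with an odd handshake count (a small sketch; the function name and parameters are my own):

```python
import random

def odd_handshakers(num_delegates, num_handshakes, seed=0):
    """Simulate random handshakes between pairs of delegates and return how
    many delegates shook hands an odd number of times; by the handshaking
    lemma this count is always even."""
    rng = random.Random(seed)
    counts = [0] * num_delegates
    for _ in range(num_handshakes):
        a, b = rng.sample(range(num_delegates), 2)  # two distinct delegates
        counts[a] += 1
        counts[b] += 1
    return sum(1 for c in counts if c % 2 == 1)
```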
Let's take a look at another example. We want to derive the formula for the sum of the first n natural numbers. Suppose we have an (n + 1)×(n + 1) square of points. The number of points on the
diagonal is exactly n + 1, and clearly the number of points S that are strictly above the diagonal equals the number of points strictly below the diagonal, so the total number of points in the square
is n + 1 + 2S. On the other hand, the total number of points in the square is (n + 1)^2, so
$(n+1)^2 = n+1+2S\,$,
$n(n+1) = 2S\,$,
$S = \sum_{k=1}^n{k} = \frac{n(n+1)}{2}.$
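The counting argument above can also be verified directly: count the lattice points strictly above the diagonal of the (n + 1)×(n + 1) grid and compare with n(n + 1)/2 (an illustrative sketch, not part of the proof):

```python
def points_above_diagonal(n):
    """Count lattice points strictly above the diagonal in an (n+1) x (n+1)
    grid of points; by the double-counting argument this equals 1 + 2 + ... + n."""
    return sum(1 for i in range(n + 1) for j in range(n + 1) if j > i)
```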
Orbits of Planets & Satellites
Planets in Circular Orbits
Uniform circular motion calculations for planets, and graph showing that a[c], and thus gravity, falls off as 1/R^2.

Escape Velocity and Circular Orbits
Calculation of escape velocity as well as velocity and period of circular orbit.

Circular Orbit Examples
Calculation of v, T for shuttle, moon, Earth, and Jupiter; v and T independent of mass.

Earth-Sun Gravitational System
Law of universal gravitation; velocity, potential, and kinetic energy for Earth's orbit; escape velocity for Earth.

Kepler's Laws
Statements of Kepler's three laws of planetary motion; numerical evidence for third law; consequences of third law.

Calculating Semi-major Axis and Period
Calculated from initial orbital conditions; example of Earth orbit solved explicitly.

Velocity at Apogee and Perigee
Calculation using conservation of angular momentum; velocity and position of apogee and perigee calculated for Earth orbit.

Changing Orbits
Qualitative description of change from circular to elliptical orbit for changing speed.

Passing a Ham Sandwich
Calculation of speed and trajectory for throwing a sandwich between two spacecraft in the same orbit; finding an infinite number of solutions. Computer simulation of several possible trajectories for the sandwich, including several failures.

Planetary Motion
Circular orbits; elliptical orbits with example; escape velocity; general planetary motion; kinetic energy and momentum of two-particle systems. Conservation of energy and momentum of orbiting bodies; characteristics of circular, elliptical, hyperbolic, and parabolic orbits; Kepler's Laws, with example.

Angular Momentum of Orbits
Equations for angular momentum of orbiting bodies; connection of angular momentum and rotational energy to equation of orbit.

Kepler's Laws and Planetary Motion
Kepler's laws defined; description of Kepler two-body problem; reduction of two-body problem and solution of one-body problem; energy diagram of circular, elliptic, parabolic, and hyperbolic orbits; equations for position, energy, and angular momentum of an orbiting body; properties of an ellipse; Kepler's equal-area law defined; Kepler's law for period of orbit.

Circular Orbits
Motion of spacecraft in orbit around a planet.

Period of the Moon
Modeling the orbit of the moon and finding its period.

Planet Orbiting a Star
Motion of a planet orbiting a star through a cloud of dust.

Synchronous Satellite
Finding the radius of the orbit of a synchronous satellite that circles the earth.

Gravitational Potential and Kinetic Energy
Energy required to change a satellite's orbit from circular to elliptical.

Satellite Launch
Finding initial velocity for satellite launched with given acceleration and angle.

Going to the Sun
7-part orbit problem; finding impulses to allow spacecraft to reach sun.

Very Elliptical Orbit
Short qualitative problem about when to fire engines for reentry in elliptical orbit.

Conservation of Energy
Motion of a small mass launched from the surface of the earth.

Elliptic Orbit
Motion of a satellite in an elliptical orbit around a planet.

Planetary Orbits
Elliptical orbit of a comet around the sun.

Binary Star System
5-part binary star problem; calculating F[g], a, T.

Elliptical Orbit
Speed and energy at apogee for elliptically orbiting satellite.

Bound Orbit
4-part elliptical orbit problem; finding apogee v, total energy, v[0].
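Several of the topics above (synchronous satellites, period of the moon, semi-major axis and period) rest on Kepler's third law, T = 2π·sqrt(a³/(GM)). A small sketch with rounded constants (not taken from any of the listed problems):

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24  # mass of Earth, kg

def orbital_period(semi_major_axis_m, central_mass_kg):
    """Kepler's third law: T = 2*pi*sqrt(a^3 / (G*M)), in seconds."""
    return 2 * math.pi * math.sqrt(semi_major_axis_m**3 / (G * central_mass_kg))

# A synchronous (geostationary) orbit at a ≈ 42,164 km gives T ≈ 24 hours.
```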
Prof. J.K. Langley, School of Mathematical Sciences, University of Nottingham, NG7 2RD.
PDF format .
All these notes are Copyright © J.K. Langley, but freely available for personal use.
In Autumn 2013-14 I will be lecturing the module G12MAN Mathematical Analysis:
This is the first part of the lecture notes for the module: pdf file (beamer).
This is the second part of the lecture notes for the module: pdf file (beamer).
These are the practice problems for the module: pdf file.
G12MAN 2013-14: provisional schedule.
2nd October (Week 2): first lecture 10.00
3rd October (week 2): problem class. Questions 1.1 to 1.3, plus 1.4 parts (a) and (b) (G11ACF revision)
Week 3: tutorials. Questions 1.4(c), 1.4(d), 1.5, 2.1, 2.2 (and, if time permits, 1.6)
17th October (week 4): problem class. Questions 2.3, 2.5, 2.8, 2.9
17th October (week 4): first non-assessed coursework. Questions 2.6, 2.10, 2.11.
Week 5: tutorials. Questions 3.2, 3.3, 3.4, 3.5.
31st October (Week 6): problem class. Questions 3.8, 3.9, 3.11, 3.12, 3.13.
31st October (week 6): second non-assessed coursework. Question 3.10.
Week 7: Thurs. 5.00 examples class to prepare for the in-class test.
Week 8: tutorials. Questions 3.19, 3.20, 3.24, 3.26(a), 3.26(b).
Week 8: in-class test (scheduled for Friday 15/11 at 5.00)
21st November (Week 9): problem class. Questions 3.26(c), 3.27, 3.28, 3.30.
Week 10: tutorials. Questions 4.2, 4.5, 4.6, 4.9.
Week 10: third non-assessed coursework. Questions 4.1, 4.4, 4.7, 4.8.
5th December (Week 11): problem class. Questions 4.10, 4.11, 4.12, 4.13, 4.14, 4.15.
12th December (Week 12) @ 4.00: Examples class. Professor Langley will go through some problems related to the last chapter and to revision.
In Spring 2013-14 I will be lecturing the module HG1M12 Engineering Mathematics 2:
This is the first part of the lecture notes for the module: pdf file (beamer).
This is the second part of the lecture notes for the module: pdf file (beamer).
This is the third part of the lecture notes for the module: pdf file (beamer).
For the module HG1M12 Prof. Langley will occasionally use additional hand-written notes: these can be found below.
Part 1 (introduction to vectors): pdf file.
Part 2 (scalar and vector products): pdf file.
Part 3 (triple product, components and straight lines): pdf file.
Part 4 (Examples on planes, and some (OPTIONAL) background material): pdf file.
Part 5 (Examples from 11/2/2014): pdf file.
Part 6 (Final part of the vectors section): pdf file.
Part 7 (First part of the calculus section): pdf file.
Part 8 (Second part of the calculus section): pdf file.
Part 9 (Least squares and estimating errors): pdf file.
Part 10 (Last part of the calculus section): pdf file.
Part 11 (First part of the ODEs section): pdf file.
Part 12 (Second part of the ODEs section): pdf file.
Part 13 (Third part of the ODEs section): pdf file.
Part 14 (Fourth part of the ODEs section): pdf file.
Part 15 (Fifth part of the ODEs section): pdf file.
Part 16 (Final part of the ODEs section): pdf file.
Mathematics Practice 1 - For SAT I, S
1. Rectangular cards, 3 inches by 5 inches, are cut from a rectangular sheet 3 feet by 5 feet. How many rectangular cards can be cut from the sheet?
2. The sum of five consecutive odd numbers is 485, what is the largest of the five numbers?
3. A woman spent 2/5 of her money. She lost 2/3 of the remainder and then had $8 left. With how much money did she start?
4. In three bowling games, Sally scores 137, 142, and 146. What score will Sally need in a forth game in order to have an average score of 145 for all four games?
5. The sum of 7 consecutive odd numbers is 525. What is the largest of the seven numbers?
6. K T V X Y U K A B C D T E F G H I V J K L M X O P Y Q R S T U U V W X Y Z The word "Attitude", using the code breaker, are translated into KTYYYYTKYYUTKYTV. What would "POSITIVE" be?
7. Amy, Betty, Cindy, and Dora are seated in a row on four seats numbered 1 to 4. Lisa looks at them and says:"Betty is next to Cindy.""Amy is between Betty and Cindy."However each one of the Lisa's
statement is false. Betty is actually sitting in seat 3. Who is in seat 2?
8. If Beth behaves, then her mom will treat her to ice-cream. If this statement is always true, which of the following statements is also always true?
9. The length of a rectangle is increased by 30% and its width is decreased by 30%. What is the percentage change in its area?
10. What is the product of 3/2 x 4/3 x 5/4 … x 2010/2009?
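Question 10 is a telescoping product: each numerator cancels the next factor's denominator, leaving 2010/2 = 1005. A quick check using exact rational arithmetic (an illustrative sketch, not part of the quiz):

```python
from fractions import Fraction

def telescoping_product(last_numerator):
    """Compute 3/2 * 4/3 * 5/4 * ... * last_numerator/(last_numerator - 1).
    Every numerator cancels the following denominator, so the result is
    last_numerator / 2."""
    product = Fraction(1)
    for k in range(3, last_numerator + 1):
        product *= Fraction(k, k - 1)
    return product

# telescoping_product(2010) is 2010/2 = 1005.
```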
STANDARDIZED INFECTION RATIO AND RATE/RATIO COMPARISONS
Adapted from “Methods of Comparing Nosocomial Infection Rates” by David H. Culver, PhD, Chief, Statistics and Information Systems Branch, Hospital Infections Program, Centers for Disease Control and Prevention; presented at a SHEA pre-conference workshop in April 1996.
In this handout we will discuss the comparison of surgical site infection (SSI) rates, define the Standardized Infection Ratio (SIR), and discuss the comparison of SIRs. The hypothesis testing
methods described here are general in the sense that they apply to the comparison of any two proportions. Hence, the same methods can
be used, for example, to perform internal comparisons of SSI rates or SIRs between surgeons or comparison of the SSI rates or SIRs of the same surgeon at two different points in time, keeping in mind that all comparisons must be done on risk stratified SSI data.
Comparing SSI Rates Within a Particular Procedure-Risk Category Of the SP Component
To illustrate the statistical methods, let's assume that we have been following cardiac surgeries (CARD) and coronary artery bypass grafts (CBGB) under the Surgical Patient (SP) component, and that the following report has been prepared on the SSI experience of the patients of a team of our cardiac surgeons, Team A:
Table 1: Infection Control Report, Team A

Procedure   Risk Category   SSI Rate (%)   NNIS Rate (%)
CARD        0,1             3.75           2.02
CARD        2,3             15.00          5.29
CBGB        0               10.00          1.59
CBGB        1               4.35           3.15
CBGB        2,3             8.33           5.76
All         --              5.50           ----
Team A performed 400 operations over a three-month time period, and their patients experienced 22 SSIs for an overall SSI rate of 5.5%, but we know that this overall SSI rate is not a comparative rate. In order to compare their rates with that of other cardiac surgical teams, individual surgeons, and with NNIS, we have partitioned their operations by procedure and risk index and calculated SSI rates in each procedure-risk category. In each of the
procedure-risk categories, notice that their SSI rate exceeds the pooled mean rate of NNIS. But also notice that the volume of surgery done by Team A over this short time span was quite low, less than 100 operations in all but one of the categories (CBGB-1). One or at most two fewer SSIs in any of these categories would have brought their SSI rate down to, or below the level of, the NNIS rate. If Team A's surgical volume had been ten times greater
(4,000 operations), and their rates were the same as in Table 1, then perhaps we would feel that there is compelling evidence that their rates exceed those of NNIS and signal a need for further investigation. However, based on the relatively small sample sizes at hand, can we draw such a conclusion?
To put it another way, if we were to present Table 1 to Team A and point out that their SSI rate following cardiac surgery on patients with fewer than two risk factors (CARD-0,1) was nearly double the NNIS rate (3.75% vs. 2.0%), their reaction might well be:
"3.75% -- so what!
That's only three months of surgery.
Over the long run, our rate is only 2%."
How can we respond to such a claim? Statisticians have developed, and epidemiologists use, a method for answering this question called a hypothesis test. Let us assume for the moment that this claim (hypothesis) is true and that among a large number of operations performed by Team A, perhaps several years' worth of surgical experience, their SSI rate in this procedure-risk category would indeed be 2%. The question posed by the epidemiologist is then the following: If we were to select randomly a sample of 80 procedures from this large pool of operations, what are the chances that three or more of those 80 operations would result in an SSI?
In other words, what is the probability that by chance alone we could obtain an infection rate (3/80 = 3.75%) as large or larger than the one experienced by Team A over the past quarter? In
short, just how unusual would Team A's SSI rate over the past quarter be, if their long-term rate is really only 2%? Figure 1 is
a graphical depiction of this hypothetical sampling experiment and the question posed by the epidemiologist. Figure 1.
The probability of obtaining three or more SSIs in a sample of 80 operations, based entirely upon the assumption that Team A's long-term rate is NOT different from that of NNIS (2%), is called the "p-value." If this p-value is very small, implying that the recent experience of Team A's patients was very unusual under this assumption, then we regard it as evidence that the claim of Team A is wrong. Indeed, the smaller the p-value the stronger is the evidence against the claim and in support of the conclusion that the long-term SSI rate of Team A must really be larger than the NNIS rate.
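The sampling question just described can be answered exactly with the binomial distribution (a sketch of the exact calculation; the handout instead develops a normal-approximation Z-test below, which gives a similar answer):

```python
from math import comb

def binomial_p_value(n, observed, p):
    """Pr(X >= observed) for X ~ Binomial(n, p): the chance of seeing at
    least `observed` SSIs in n operations if the true SSI rate is p."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(observed, n + 1))

# For Team A: Pr(3 or more SSIs in 80 operations when p = 0.02) ≈ 0.216,
# so three SSIs would not be unusual if the long-term rate really were 2%.
```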
How small must the p-value be before we conclude that the SSI rate of Team A is "significantly greater" than the NNIS rate? A p-value
less than 0.05 (1 in 20 chance) is often chosen as a convenient cut point for rejecting the claim of no difference between the rates (long-term rate of Team A vs. NNIS rate), but this choice is arbitrary. Thus, while we may use this convenient cut point to
illustrate the interpretation process, in practice we simply report the p-value and interpret its value as a measure of the strength of the evidence against the hypothesis or claim being tested.
How do we perform this hypothesis test and calculate the resulting p-value?
Let

    r = (i * 100) / n

be your rate (e.g., Team A's), and let

    R = (I * 100) / N

be the NNIS rate. Note: * means multiply by, or "times". In this notation:

    i = no. of SSIs in your rate
    n = no. of operations in your rate
    I = no. of SSIs in the NNIS rate
    N = no. of operations in the NNIS rate
Rates r and R are the two proportions that we wish to compare. Calculate the following Z-statistic:

    Z = ( |r - R| - [50 * (1/n + 1/N)] ) / sqrt[ P * (100 - P) * (1/n + 1/N) ]     (Formula 1)

where

    P = ((i + I) / (n + N)) * 100

is the result of pooling your rate with the NNIS rate. In the numerator of Formula 1, |r - R| is the absolute value of the difference between the two rates, i.e., ignore the sign (+ or -) of the difference in the rates. The second term in the numerator, [50 * (1/n + 1/N)], is called the continuity correction or Yates correction.
If the numerator of the Z-statistic ends up negative, i.e., |r - R| < [50 * (1/n + 1/N)], just set Z = 0.
If the null hypothesis (no significant difference between the rates) is true, the value of Z should be very small. Large values
of Z indicate strong evidence against the null hypothesis.
The value of Z calculated from Formula 1 is to be compared against the unit-normal distribution (also called the Z-curve, standardized normal curve, or bell curve) to obtain its associated p-value. The bell curve is called the reference distribution for the Z-test. The p-value is the area under the distribution to the right of the Z-statistic. These areas can be obtained from Table 2.
Table 2: Areas (Pr (Z > z)) Under the Unit-Normal Distribution
z 0.0x 0.1x 0.2x 0.3x 0.4x 0.5x 0.6x 0.7x 0.8x 0.9x 1.0x 1.1x 1.2x 1.3x 1.4x 1.5x 1.6x 1.7x 1.8x 1.9x 2.0x 2.1x 2.2x 2.3x 2.4x 2.5x 2.6x 2.7x 2.8x 2.9x 3.0x 3.1x 3.2x 3.3x 3.4x 3.5x 3.6x 3.7x 3.8x 3.9x 4.0x
x=0 0.500000 0.460172 0.420740 0.382089 0.344578 0.308538 0.274253 0.241964 0.211855 0.184060 0.158655 0.135666 0.115070 0.096800 0.080757 0.066807 0.054799 0.044565 0.035930 0.028717 0.022750 0.017864 0.013903 0.010724 0.008198 0.006210 0.004661 0.003467 0.002555 0.001866 0.001350 0.000968 0.000687 0.000483 0.000337 0.000233 0.000159 0.000108 0.000072 0.000048 0.000032
x=1 0.496011 0.456205 0.416834 0.378280 0.340903 0.305026 0.270931 0.238852 0.208970 0.181411 0.156248 0.133500 0.113139 0.095098 0.079270 0.065522 0.053699 0.043633 0.035148 0.028067 0.022216 0.017429 0.013553 0.010444 0.007976 0.006037 0.004527 0.003364 0.002477 0.001807 0.001306 0.000935 0.000664 0.000466 0.000325 0.000224 0.000153 0.000104 0.000069 0.000046 0.000030
x=2 0.492022 0.452242 0.412936 0.374484 0.337243 0.301532 0.267629 0.235762 0.206108 0.178786 0.153864 0.131357 0.111232 0.093418 0.077804 0.064255 0.052616 0.042716 0.034380 0.027429 0.021692 0.017003 0.013209 0.010170 0.007760 0.005868 0.004396 0.003264 0.002401 0.001750 0.001264 0.000904 0.000641 0.000450 0.000313 0.000216 0.000147 0.000100 0.000067 0.000044 0.000029
x=3 0.488034 0.448283 0.409046 0.370700 0.333598 0.298056 0.264347 0.232695 0.203269 0.176186 0.151505 0.129238 0.109349 0.091759 0.076359 0.063008 0.051551 0.041815 0.033625 0.026803 0.021178 0.016586 0.012874 0.009903 0.007549 0.005703 0.004269 0.003167 0.002327 0.001695 0.001223 0.000874 0.000619 0.000434 0.000302 0.000208 0.000142 0.000096 0.000064 0.000042 0.000028
x=4 0.484047 0.444330 0.405165 0.366928 0.329969 0.294599 0.261086 0.229650 0.200454 0.173609 0.149170 0.127143 0.107488 0.090123 0.074934 0.061780 0.050503 0.040930 0.032884 0.026190 0.020675 0.016177 0.012545 0.009642 0.007344 0.005543 0.004145 0.003072 0.002256 0.001641 0.001183 0.000845 0.000598 0.000419 0.000291 0.000200 0.000136 0.000092 0.000062 0.000041 0.000027
x=5 0.480061 0.440382 0.401294 0.363169 0.326355 0.291160 0.257846 0.226627 0.197663 0.171056 0.146859 0.125072 0.105650 0.088508 0.073529 0.060571 0.049471 0.040059 0.032157 0.025588 0.020182 0.015778 0.012224 0.009387 0.007143 0.005386 0.004025 0.002980 0.002186 0.001589 0.001144 0.000816 0.000577 0.000404 0.000280 0.000193 0.000131 0.000088 0.000059 0.000039 0.000026
x=6 0.476078 0.436441 0.397432 0.359424 0.322758 0.287740 0.254627 0.223627 0.194895 0.168528 0.144572 0.123024 0.103835 0.086915 0.072145 0.059380 0.048457 0.039204 0.031443 0.024998 0.019699 0.015386 0.011911 0.009137 0.006947 0.005234 0.003907 0.002890 0.002118 0.001538 0.001107 0.000789 0.000557 0.000390 0.000270 0.000185 0.000126 0.000085 0.000057 0.000037 0.000025
x=7 0.472097 0.432505 0.393580 0.355691 0.319178 0.284339 0.251429 0.220650 0.192150 0.166023 0.142310 0.121000 0.102042 0.085343 0.070781 0.058208 0.047460 0.038364 0.030742 0.024419 0.019226 0.015003 0.011604 0.008894 0.006756 0.005085 0.003793 0.002803 0.002052 0.001489 0.001070 0.000762 0.000538 0.000376 0.000260 0.000178 0.000121 0.000082 0.000054 0.000036 0.000024
x=8 0.468119 0.428576 0.389739 0.351973 0.315614 0.280957 0.248252 0.217695 0.189430 0.163543 0.140071 0.119000 0.100273 0.083793 0.069437 0.057053 0.046479 0.037538 0.030054 0.023852 0.018763 0.014629 0.011304 0.008656 0.006569 0.004940 0.003681 0.002718 0.001988 0.001441 0.001035 0.000736 0.000519 0.000362 0.000251 0.000172 0.000117 0.000078 0.000052 0.000034 0.000023
x=9 0.464144 0.424655 0.385908 0.348268 0.312067 0.277595 0.245097 0.214764 0.186733 0.161087 0.137857 0.117023 0.098525 0.082264 0.068112 0.055917 0.045514 0.036727 0.029379 0.023295 0.018309 0.014262 0.011011 0.008424 0.006387 0.004799 0.003573 0.002635 0.001926 0.001395 0.001001 0.000711 0.000501 0.000349 0.000242 0.000165 0.000112 0.000075 0.000050 0.000033 0.000022
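Rather than interpolating in Table 2, the tail areas can also be computed directly. The following is not part of the original handout — a minimal Python sketch (the helper name `upper_tail` is ours) using the standard library's complementary error function:

```python
import math

def upper_tail(z: float) -> float:
    """Return Pr(Z > z), the upper-tail area of the unit-normal distribution."""
    return 0.5 * math.erfc(z / math.sqrt(2.0))

# Spot-check a few entries of Table 2:
print(round(upper_tail(0.00), 6))  # 0.5
print(round(upper_tail(1.96), 6))  # 0.024998
print(round(upper_tail(2.01), 6))  # 0.022216
```

The same values appear in the z = 0.0x, 1.9x, and 2.0x rows of the table above.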
Example 1: CARD-0,1

    r = (3/80) * 100 = 3.75%        (i = 3,   n = 80)
    R = (103/5088) * 100 = 2.02%    (I = 103, N = 5088)

    P = ((3 + 103) / (80 + 5088)) * 100 = 2.05%

    Z = ( |3.75 - 2.02| - 50*(1/80 + 1/5088) ) / sqrt[ 2.05 * (100 - 2.05) * (1/80 + 1/5088) ]
      = (1.73 - 0.63) / sqrt(2.549)
      = 0.69

Hence, p-value = 0.25
Example 2: CARD-2,3

    r = (3/20) * 100 = 15.00%      (i = 3,  n = 20)
    R = (63/1191) * 100 = 5.29%    (I = 63, N = 1191)

    P = ((3 + 63) / (20 + 1191)) * 100 = 5.45%

    Z = ( |15.00 - 5.29| - 50*(1/20 + 1/1191) ) / sqrt[ 5.45 * (100 - 5.45) * (1/20 + 1/1191) ]
      = (9.71 - 2.54) / sqrt(26.1771)
      = 1.40

Hence, p-value = 0.08
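The two worked examples can be checked with a short script. This is not from the handout — a hedged Python sketch of Formula 1 (the function name `z_statistic` is illustrative). Note that carrying full precision gives about 0.68 for Example 1, where the handout's rounded intermediate values give 0.69:

```python
import math

def z_statistic(i, n, I, N):
    """Yates-corrected Z comparing rate i/n against NNIS rate I/N (Formula 1)."""
    r = i / n * 100.0
    R = I / N * 100.0
    P = (i + I) / (n + N) * 100.0                    # pooled rate
    num = abs(r - R) - 50.0 * (1.0 / n + 1.0 / N)    # continuity correction
    if num < 0:                                      # correction exceeds |r - R|
        return 0.0
    den = math.sqrt(P * (100.0 - P) * (1.0 / n + 1.0 / N))
    return num / den

print(round(z_statistic(3, 80, 103, 5088), 2))   # Example 1 (CARD-0,1): 0.68
print(round(z_statistic(3, 20, 63, 1191), 2))    # Example 2 (CARD-2,3): 1.4
```

The p-values then come from Table 2, as described above.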
Try calculating the Z-statistic and looking up the p-value yourself:

Example 3: CBGB-0

    r =           (i = ___, n = ___)
    R = 1.59%*    (I = 13,  N = 819)
    P =
    Z =

Hence, p-value =

*The NNIS data should be obtained from the latest published report.
Underlying Assumptions of the Z-Test

In order to use the Z-test, the data being compared must be normally distributed. In general, when the sample sizes (n, N) are greater than 30, this will be the case. We can easily check our data to see if they meet the condition for normality by displaying them in a 2x2 table and calculating the minimum expected cell frequency (explained below). If the minimum expected cell frequency is greater than 1, then we have evidence that our data are distributed normally and we can use the p-value obtained from the Z-test.
Displaying Data in a 2x2 Table

Whenever you compare two proportions (or percentages), the data can always be displayed in a 2x2 table format:

                    No. Observed w/ SSI    No. w/o SSI        No. of Oper.
    Hospital        i                      n - i              n      [Row Total]
    NNIS            I                      N - I              N      [Row Total]
    [Column Total]  i + I                  (n-i) + (N-I)      n + N  [Table Total]
The numbers in the four cells of this table (i, n-i, I, N-I) are called the observed cell frequencies. The assumptions (i.e., statistical theory) that underlie the Z-test of Formula 1 are valid unless one of the sample sizes (n, N) is so small that the expected frequency (e) of SSI among the four cells is less than 1. An easy way to check on this condition is to calculate the expected frequency for the smallest number in any cell of the table, i.e., the minimum expected frequency (emin), using the formula below:

    emin = (Row Total * Column Total) / Table Total
Once the value of this cell is known, the rest of the cells can be filled in since the marginals (row, column, and table totals) do not change.
Example 1: CARD-0,1

    Observed     No. w/ SSI    No. w/o SSI    No. of Oper.
    Hospital     3             77             80
    NNIS         103           4985           5088
    Total        106           5062           5168

    Example 1: emin = (80 * 106) / 5168 = 1.64

    Expected     No. w/ SSI    No. w/o SSI    No. of Oper.
    Hospital     1.64          78.36          80
    NNIS         104.36        4983.64        5088
    Total        106           5062           5168
Example 2: emin = (20 * 66) / 1211 = 1.09
Example 3: emin = (10 * 14) / 829 = 0.17

As you can see, the Z-test of Formula 1 is valid in Examples 1 and 2, but not in Example 3, because emin < 1.
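As a cross-check, the minimum expected cell frequency can be computed from the four observed counts. A Python sketch, not part of the handout (the function name is ours):

```python
def min_expected_frequency(i, n, I, N):
    """Smallest expected cell count in the 2x2 table: (row total * column total) / table total."""
    rows = [n, N]                          # row totals (operations per group)
    cols = [i + I, (n - i) + (N - I)]      # column totals (with / without SSI)
    total = n + N
    return min(r * c / total for r in rows for c in cols)

print(round(min_expected_frequency(3, 80, 103, 5088), 2))   # Example 1: 1.64
print(round(min_expected_frequency(3, 20, 63, 1191), 2))    # Example 2: 1.09
print(round(min_expected_frequency(1, 10, 13, 819), 2))     # Example 3: 0.17
```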
Fisher’s Exact Test

Fisher’s Exact Test is an alternative hypothesis testing procedure whose assumptions are always met. Therefore, it can always be used, even when the minimum expected cell frequency is less than 1. However, since the calculation of its p-value is beyond human patience, it requires us to use good statistical software. The reference distribution used in Fisher’s Exact Test is the hypergeometric distribution.
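For small tables, the one-tailed p-value can nonetheless be reproduced with a hypergeometric tail sum. The following is not part of the handout and is not a substitute for the software output it describes — a Python sketch under our own notation:

```python
from math import comb

def fisher_one_tailed(i, n, I, N):
    """Pr(X >= i), where X ~ hypergeometric: draw n of the n+N operations,
    i+I of which are infected overall (one-tailed Fisher's Exact p-value)."""
    K, total = i + I, n + N
    denom = comb(total, n)
    return sum(comb(K, k) * comb(total - K, n - k)
               for k in range(i, min(n, K) + 1)) / denom

# CBGB-0 from the report: 1 SSI in 10 operations vs. NNIS 13 in 819
print(round(fisher_one_tailed(1, 10, 13, 819), 2))  # 0.16
```

The result agrees with the Fisher p-value reported for CBGB-0 in Table 3 below.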
Let's return to our Infection Control Report for Team A. Here is a version of that report showing the minimum expected frequency (emin) and the p-values obtained from the Z-test and Fisher's Exact Test for each of the five procedure-risk categories:
Table 3: Infection Control Report (with p-values) -- Team A

    Procedure-Risk   SSI Rate (%)     SSI Rate (%)       Min Exp   p-value    p-value
    Category         Team A           NNIS               Freq      (Z-test)   (Fisher)
    CARD-0,1         3/80 = 3.75      103/5088 = 2.0     1.64      0.25       0.23
    CARD-2,3         3/20 = 15.00     63/1191 = 5.3      1.09      0.08       0.09
    CBGB-0           1/10 = 10.00     13/819 = 1.6       0.17      0.21       0.16
    CBGB-1           10/230 = 4.35    1010/32065 = 3.1   7.24      0.20       0.19
    CBGB-2,3         5/60 = 8.33      446/7745 = 5.8     3.47      0.28       0.26
    TOTAL            22/400 = 5.50    ---                ---       ---        ---
As you can see, none of these p-values is lower than the arbitrary cut point of 0.05, so we would say that none of Team A's SSI rates are "significantly greater" than the NNIS rates. Therefore, given
the number of operations performed by Team A during this quarter, we cannot conclude that their long-term rates are really larger than those of NNIS.
Use of Epi Info

There are two programs in Epi Info that are useful in implementing the methods of this section: EPITABLE and STATCALC. Both require you to enter the observed cell frequencies into a 2x2 table:

                No. Observed w/ SSI    No. w/o SSI    No. of Oper.
    Hospital    i                      n - i          n
    NNIS        I                      N - I          N
In the EPITABLE program, follow the path Probability ---> Fisher's Exact Test, and enter the observed frequencies into the cells of the 2x2 table. Press Calculate and the one- and two-tailed p-values of the Fisher's Exact Test are displayed; report the one-tailed p-value. The output for CBGB-0 is shown below:
Alternatively, from the EPITABLE program, follow the path Compare ---> Proportion ---> Percentage ---> choose 2 samples. Then enter the rates (r, R) and the sample sizes (n, N) into the appropriate boxes. When e < 5, the Yates corrected chi-square statistic is calculated and its associated p-value is displayed. The Yates corrected chi-square statistic is just the square of the Z-statistic (chi-square = Z^2) in Formula 1, and the p-value associated with this chi-square statistic is twice the p-value associated with the Z-statistic. Hence, divide the p-value by 2 and you will have the one-tailed p-value associated with the Z-statistic. The output for CARD-0,1 is shown below:

When e >= 5, the continuity correction in Formula 1 is ignored, and the uncorrected Pearson chi-square statistic and associated p-value are calculated and displayed. The output for CBGB-1 is shown next:
In the STATCALC program, choose Tables (2x2, 2xn) and enter the observed frequencies into the cells of the table. Press enter to calculate. You’ll note that many statistics are given, including the odds ratio and relative risk and their confidence intervals. Ignore these. Three chi-square statistics are listed: uncorrected, Mantel-Haenszel, and Yates corrected. As mentioned above, the Yates corrected chi-square is the square of the Z-statistic of Formula 1, and its p-value is twice the p-value associated with the Z-statistic. Therefore, simply divide the Yates corrected chi-square p-value by 2 and you’ll have the one-tailed p-value associated with the Z-statistic. When e < 5, the Fisher's Exact Test p-values are also calculated and displayed; use the one-tailed p-value. The output for CBGB-0 is shown below:
Other Applications of These Methods The Z-test of Formula 1 and Fisher's Exact Test are appropriate whenever two proportions (i.e., percentages) are being compared, provided a denominator-based sampling design, such as cohort
sampling, has been used to obtain the data. Consequently, these methods can be used to perform internal comparisons of SSI rates, such as comparison between two surgeons or between two time periods for the same surgeon. As always, keep in mind that such comparisons must be done for a specific procedure and risk category (i.e., only on the risk-stratified SSI rate).
In the ICU and HRN surveillance, the device utilization ratios are proportions, even though both the numerator and the denominator involve the counting of patient-days. As a result, the methods of
this section can be used to perform both external and internal comparisons of these measures of device utilization.
The Standardized Infection Ratio: A Useful Risk-Adjusted Summary Measure for Surgical Site Infections
Definition of the SIR

There is another tool available to us for comparing SSI rates called the Standardized Infection Ratio (SIR). To introduce the SIR, let’s return to Table 3, the infection control report for Cardiac Surgery Team A discussed earlier. For convenience, Table 3 is reproduced below:
Table 3: Infection Control Report (with p-values) -- Team A

    Procedure-Risk   Number    Number     SSI Rate   NNIS Rate   p-value
    Category         of SSIs   of Opers
    CARD-0,1         3         80         3.75       2.02        0.25
    CARD-2,3         3         20         15.00      5.29        0.08
    CBGB-0           1         10         10.00      1.59        0.16
    CBGB-1           10        230        4.35       3.15        0.20
    CBGB-2,3         5         60         8.33       5.76        0.28
    TOTAL            22        400        5.50       ----        ----
The best method of comparing the SSI rates of Team A with those of NNIS is to do so within each of the procedure-risk categories, as illustrated in this table. If any of Team A's rates had been
significantly higher than those of NNIS, we would have known immediately the type of procedure being performed and the group of patients for which the potentially excessive infections were
occurring. This would be a useful starting point for further investigation.
Another method of comparing Team A's experience with that of NNIS is to focus on the 22 infections that occurred collectively among their 400 patients. How many infections would we have "expected" to occur among these patients, taking into account the type and number of procedures performed (100 CARD and 300 CBGB) and the risk categories of the patients? We can calculate the "expected" number of SSIs as follows. Cardiac surgery was performed on 80 patients in risk category 0,1. According to the pooled NNIS rate, the risk of an SSI for each of these patients was 2.02%. Hence, the expected number of SSIs among these 80 patients was 80 * 0.0202 = 1.6. Multiplying the number of operations performed by Team A by the NNIS rate in each row, we get the last column of Table 4. Summing the numbers in the last column, we see that the expected number of SSIs among all 400 operations performed by Team A was 13.6.
Table 4: Infection Control Report -- Team A

    Procedure-Risk   Number    Number     SSI Rate   NNIS Rate   p-value   Expected
    Category         of SSIs   of Opers                                    # of SSIs
    CARD-0,1         3         80         3.75       2.02        0.25      1.6
    CARD-2,3         3         20         15.00      5.29        0.08      1.1
    CBGB-0           1         10         10.00      1.59        0.16      0.2
    CBGB-1           10        230        4.35       3.15        0.20      7.2
    CBGB-2,3         5         60         8.33       5.76        0.28      3.5
    TOTAL            22        400        5.50       ----        ----      13.6
The ratio of the observed number of SSIs that occurred (22) to the expected number (13.6) is called the Standardized Infection Ratio.
    SIR = Observed Number of SSIs / Expected Number of SSIs = 22 / 13.6 = 1.62
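The expected-count arithmetic behind Table 4 takes only a few lines to reproduce. This sketch is not part of the handout; like the handout, it rounds each category's expected count to one decimal before summing (summing the unrounded products gives about 13.5 and an SIR of about 1.63):

```python
# Number of operations and pooled NNIS rates (%) per procedure-risk category (Team A)
categories = {
    "CARD-0,1": (80, 2.02),
    "CARD-2,3": (20, 5.29),
    "CBGB-0":   (10, 1.59),
    "CBGB-1":   (230, 3.15),
    "CBGB-2,3": (60, 5.76),
}
observed = 22

# Expected SSIs per row: operations * (NNIS rate / 100), rounded as in Table 4
expected_by_row = [round(n_ops * rate / 100.0, 1) for n_ops, rate in categories.values()]
expected = sum(expected_by_row)       # 1.6 + 1.1 + 0.2 + 7.2 + 3.5 = 13.6
sir = round(observed / expected, 2)

print(round(expected, 1))  # 13.6
print(sir)                 # 1.62
```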
The Standardized Infection Ratio is deceptively simple. It is an easy-to-interpret summary measure of the SSI experience of an individual surgeon, service, or hospital. Values that exceed 1.0 indicate that more infections occurred than were expected (and by how much), whereas values that are less than 1.0 indicate the opposite. For Team A, the 22 SSIs represent 62% more than were expected. In calculating the expected number of SSIs, we account for the type of procedures performed and the distribution of patients by risk index, i.e., case mix. Hence, the SIR is a risk-adjusted summary measure and can be used for comparative purposes. In contrast, the overall SSI rate (22/400 = 5.5% for Team A) is NOT a comparative rate and should not be used for comparative purposes.
How can we use the SIR? First of all, it can be compared against its nominal value of 1.0 and a p-value calculated to determine if the observed number of SSIs significantly exceeds the expected number of SSIs, or whether the discrepancy between them might well be explained by chance alone. This is an external comparison, since the nominal value of 1.0 represents perfect conformity with the pooled mean rates of NNIS, i.e., the number of observed SSIs = the number of expected SSIs.
Comparing a Standardized Infection Ratio Against Its Nominal Value of 1.0

Let O = observed number of SSIs and E = expected number of SSIs, so that

    SIR = O / E

As illustrated for Team A, the expected number of SSIs is always calculated by multiplying the number of operations performed in each procedure-risk category by the NNIS rate of that procedure-risk category and summing the products. To test whether or not the SIR differs significantly from its nominal value of 1.0, there are two methods that can be used, a Z-test and the Poisson test.

Z-test

Let

    Z = 2 * ( sqrt(O) - sqrt(E) )     (Formula 2)
Ignore the sign (+ or -) of Z, i.e., if Z is negative, just drop the minus sign. Take the magnitude of Z and refer to the unit-normal distribution (Table 2) to obtain the p-value. If SIR > 1 (O > E), then the p-value indicates how strongly the data support the conclusion that the observed number of SSIs significantly exceeds the expected number of SSIs. Likewise, if SIR < 1 (O < E), then the p-value indicates how strongly the data support the conclusion in the opposite direction, that the observed number of SSIs is significantly below the expected number of SSIs.
For Team A, we get

    Z = 2 * ( sqrt(22) - sqrt(13.6) ) = 2.01

and the p-value = 0.02.
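Formula 2 and its p-value are easy to script. A Python sketch, not part of the handout (the function names are ours):

```python
import math

def sir_z(observed, expected):
    """Z-test of an SIR against its nominal value of 1.0 (Formula 2)."""
    return 2.0 * (math.sqrt(observed) - math.sqrt(expected))

def upper_tail(z):
    """Pr(Z > |z|) from the unit-normal distribution (replaces the Table 2 lookup)."""
    return 0.5 * math.erfc(abs(z) / math.sqrt(2.0))

z = sir_z(22, 13.6)              # Team A
print(round(z, 2))               # 2.01
print(round(upper_tail(z), 2))   # 0.02
```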
The number of SSIs that occurred (22) was significantly greater than would be expected based upon the aggregate experience of cardiac surgeons in NNIS hospitals. Hence, although none of the
five procedure-risk category comparisons resulted in a small enough p-value for us to conclude that any of Team A's category-specific rates differed from those of NNIS, collectively there is evidence that an excess of SSIs may be occurring among their patients. Obviously, the collective evidence stems from the fact that a total of 400 operations were performed and each of their five category-specific SSI rates exceeded the corresponding rate of NNIS.
Poisson Test in Epi Info

The Z-test of Formula 2 gives an approximate p-value, which should be good for all practical purposes unless the expected number of SSIs (E) is less than 1. An exact p-value can be obtained by performing another type of statistical test known as a Poisson test. Once again, the name of this test comes from the fact that the reference distribution used to obtain the p-value is the Poisson distribution.
If SIR > 1, then the p-value of the Poisson test is Pr(X >= O), where X is assumed to have a Poisson distribution with mean equal to E, the expected number of SSIs. When SIR < 1, the p-value is Pr(X <= O), where again X is assumed to have a Poisson distribution with mean equal to E.

It is easy to get the exact p-value of the Poisson test using Epi Info. In the EPITABLE program, follow the path Probability ---> Poisson and enter the value of O for the "observed number of events" and the value of E for the "expected number of events". The p-value is displayed as the "probability that the number of events found is" >= O (when O > E) or <= O (when O < E). For Team A, the Poisson test gives us a p-value of 0.02, which agrees with the Z-test (see output on the top of p. 26).
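The exact Poisson tail probability can also be accumulated directly rather than read from Epi Info. A Python sketch under our own function name, not part of the handout:

```python
import math

def poisson_upper_tail(o, e):
    """Exact Pr(X >= o) for X ~ Poisson(mean e), via the complement of Pr(X <= o-1)."""
    term = math.exp(-e)       # Pr(X = 0)
    cdf = 0.0
    for k in range(o):        # accumulate Pr(X = 0), ..., Pr(X = o-1)
        cdf += term
        term *= e / (k + 1)
    return 1.0 - cdf

print(round(poisson_upper_tail(22, 13.6), 2))  # Team A: 0.02
```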
Try calculating the Z-statistic and looking up the p-value for yourself:

Example 4: Surgeon B performed 50 cholecystectomies with the following SSI experience:

    Risk       #SSI   #Oper   Rate(1)   NNIS       Expected
    Category                            Rate(1)    #SSI
    0          1      20      5.0       0.86       ____
    1          2      20      10.0      1.81       ____
    2          0      0       ----      3.55       ____
    3          3      10      30.0      6.25       ____
    Total      6      50      12.0                 ____

    (1) # of SSIs per 100 operations

    SIR =
    Z =
    p-value =
    Poisson test: p-value =     (from the output on the bottom of p. 26)
Comparing Two Standardized Infection Ratios
The SIR is a very convenient summary measure of SSI experience. You can think of it as a risk-adjusted replacement for the "overall SSI rate" or the "clean wound SSI rate," neither of which is a comparative rate.
In addition to comparing individual SIRs against their nominal value of 1.0, two different SIRs can be compared. The SIRs of two
surgeons, or two surgical teams or two hospitals can be compared. Likewise, the SIRs of a surgeon, team, or hospital over two different time periods can be compared. It is important to note
that it does not matter that the case mix, i.e., the procedures performed and the distribution of patients into risk categories, may be vastly different for the SIRs that are to be compared. In
calculating the expected number of SSIs for each SIR, proper adjustment is made for differences in case mix.
Using the data of Example 5, let's calculate and compare the SIRs of two orthopedic surgeons. Dr. X performed spinal fusions (FUSN) and laminectomies (LAM), while Dr. Y did knee prostheses (KPRO) and limb amputations (AMP). Eight SSIs occurred among each of the surgeons' patients, with unadjusted overall rates of 6.7% for Dr. Y and 1.8% for Dr. X. After stratifying these patients according to the risk index and calculating the procedure-risk SSI rates, we can determine if there is a significant difference in their SSI rates.

Example 5: Compare the SIRs of Two Orthopedic Surgeons

    Procedure-Risk   Observed   #Oper   Rate(1)   NNIS      Expected
    Category         #SSIs              Rate(1)   #SSIs

    Dr. X
    FUSN-0           1          50      2.0       0.94      0.47
    FUSN-1,2,3       5          100     5.0       4.49      4.49
    LAM-0,1,2,3      2          300     0.7       1.16      3.48
    Total            OX = 8     450     1.8                 EX = 8.44

    Dr. Y
    KPRO-0,1         3          70      4.3       1.04      0.73
    KPRO-2,3         2          20      10.0      2.65      0.53
    AMP-0,1,2,3      3          30      10.0      4.48      1.34
    Total            OY = 8     120     6.7                 EY = 2.60

    (1) # of SSIs per 100 operations
For each surgeon, if we multiply the number of operations performed by the surgeon times the NNIS rate, we get the last column of the table in Example 5 (shown in bold for FUSN-0), i.e., the expected number of SSIs. Summing, we see that the expected numbers of SSIs for Dr. X and Dr. Y are EX = 8.44 and EY = 2.60. Calculating SIRs, we get

    SIRX = 8 / 8.44 = 0.95
    SIRY = 8 / 2.60 = 3.08
To compare the two SIRs, we can perform two tests: a Z-test, using a different formula, or a binomial test.

Z-test

To compare two SIRs to each other using a Z-test, use the following formula:

    Z = 2 * ( sqrt(SIRY) - sqrt(SIRX) ) / sqrt( 1/EY + 1/EX )     (Formula 3)

Using the data for Example 5, this yields

    Z = 2 * ( sqrt(3.08) - sqrt(0.95) ) / sqrt( 1/2.60 + 1/8.44 )

and p-value = 0.022 (from Table 2). Once again, this Z-test provides us with an approximate p-value.
Binomial Test in Epi Info

An exact p-value can be obtained by performing a binomial test using Epi Info or other statistical software. The name binomial test comes from the fact that the reference distribution is a binomial distribution rather than a unit-normal distribution. To perform this test, first find the larger of the two SIRs. In Example 5 it is SIRY = 3.08. Then the p-value of the binomial test comparing SIRX against SIRY is given by:

    p-value = Pr(U >= OY), where U ~ Binomial(OX + OY, p) and p = EY / (EX + EY)
That is, the reference distribution is a binomial with "size" equal to OX + OY and "event probability" equal to p = EY / (EX + EY). The p-value is the probability of OY or more "events" occurring.
In the EPITABLE program of Epi Info, follow the path Probability ---> Binomial and enter OY for the “numerator,” OX + OY for the “total observations,” and p * 100 for the expected percentage. The p-value is then displayed as the probability that the “number of cases” (i.e., ”events”) is greater than or equal to (>=) OY.
In Example 5, OY = 8, OY + OX = 16, and p * 100 = 2.60 / (2.60 + 8.44) * 100 = 23.55. The binomial test yields a p-value of 0.019, in close agreement with the Z-test. The Epi Info output is shown below:
Summary

In this handout we have explored how to compare risk-stratified SSI rates using a hand-calculation formula of the Z-test, and using Fisher’s Exact Test and the chi-square test in the Epi Info software. We have also learned about the SIR and how to compare it both to the nominal value of 1.0 and to another SIR using hand-calculated Z-test formulas. Finally, we learned how to make these comparisons using the Epi Info software: the Poisson Test for comparing an SIR to 1.0 and the Binomial Test for comparing two SIRs to each other.
Limited information is included on the underlying assumptions of the statistical tests and their p-values. It is recommended that
you consult a statistician or statistics text for further details.
Nikolai Ivanovich Lobachevsky (1792 - 1856)
Lobachevsky was born to a poor family in Kazan in Russia. He attended the local school on a scholarship, and later studied at the recently founded Kazan University. Like Daniel Bernoulli before him,
Lobachevsky originally intended to study medicine, but fortunately for mathematics he soon switched to mathematics and physics.
He quickly made excellent progress and within a few years was a professor, teaching a wide range of subjects including maths, physics and astronomy.
This period in his life was not entirely happy. Lobachevsky clashed frequently with the head of the University administration - curator M L Magnitski - over its conservative approach to modern
scientific and philosophic developments.
However, things changed with the succession of Tsar Nicholas I to the Russian throne. A new curator was appointed, bringing in a far more tolerant system, and Lobachevsky was appointed rector.
Lobachevsky was rector for almost 20 years, during which time the University went from strength to strength, both in its faculties and its student numbers. Lobachevsky can be given much of the credit
for this success.
He continued to teach an ever increasing selection of mathematical subjects, from mechanics to differential equations, from the calculus of variations to hydrodynamics.
But it is in the field of geometry that Lobachevsky left his mark. Lobachevsky’s work on geometry had really important implications for modern geometry - he along with Gauss can be said to be one of
the discoverers of non-Euclidean geometry.
Bolyai and Lambert both had the same opportunities as Lobachevsky but could not go the final mile and embrace the utterly new territory that lay at their feet. Lobachevsky could as did Gauss, though
Gauss did not publish his work. Riemann developed their work even further.
In 1837 Lobachevsky published his article Géométrie imaginaire, and a summary of his new geometry, Geometrische Untersuchungen zur Theorie der Parallellinien, was published in Berlin in 1840.
His major work, Geometriya completed in 1823, was not published in its original form until 1909.
He retired from the University in 1846, and fell ill after the death of his eldest son, eventually losing his sight as a result of the great stress. Sadly, Lobachevsky’s mathematical contributions
were not recognised during his lifetime, and he died a poor man, not knowing the importance of his work.
He is, however, immortalised in a song by the American mathematician, Tom Lehrer: as the song goes ‘Nikolai Ivanovich Lobachevsky was his name’.
Lobachevsky’s mathematics
The background to Non-Euclidean geometry is easy to describe.
Euclidean geometry is the geometry of flat surfaces, unlike, say, the geometry on the surface of a sphere, which not unnaturally is called Spherical geometry. Euclidean geometry starts off with a set of assumptions and basic ideas - called axioms and postulates. These are taken as given. Using them, Theorems are progressively discovered and proved. Each Theorem, when proved, may be used in the proof of a new Theorem. Here is an example of this at work.
Theorems already proved: alternate angles of a pair of parallel lines are equal; corresponding angles of a pair of parallel lines are equal;
Postulates - angles on a straight line sum to 180^\circ; lines may be extended indefinitely; through a given point may be drawn a line parallel to any other line;
THEOREM: The angles of a triangle add up to 180^\circ
PROOF: Take any triangle. Extend a side of the triangle, and draw a line parallel to another side as shown. Mark equal alternate and corresponding angles as shown:
Because of the properties of parallel lines (alternate angles, the angles marked \angle B; corresponding angles, the angles marked \angle A) there are two sets of equal angles as marked. The angles A, B and C now make a straight line and so

\angle A+\angle B+\angle C=180^\circ, as required. The proof is complete.
The fifth postulate is one of the ones we have just made use of: through a given point may be drawn a line parallel to a given line.
Notice that it says ‘a’ line, not lines - there is just one such parallel. On this simple point much would emerge.
Lobachevsky did not try to prove this fifth postulate as a theorem nor to disprove alternative versions of it. Instead he took the alternative forms and simply pursued the geometry that resulted.
There were only two alternatives - there was no parallel, or there was more than one such parallel.
The geometry was strange - although he didn’t know it at first, he was looking at the geometry on curved surfaces. On such surfaces triangles did not have angle sums of 180º. In the first case they
were less and in the second more. Lobachevsky categorised euclidean geometry - in which there was exactly one parallel line through a point, parallel to another line - as a special case of this more
general geometry.
He published his work on non-Euclidean geometry, the first account of the subject to appear in print, in 1829. It was published in the Kazan Messenger but rejected for publication when it was submitted to the St Petersburg Academy of Sciences.
Standard Error of a Binomial Distribution
March 31st 2011, 06:21 AM #1
Checked the FAQ and there is nothing that says 'no gambling problems', so this question might be ok. I am trying to create a soccer model using shots on target and 'conversion rate' (CR), which is how often shots on target are converted to goals. The difficulty I have is in determining how much significance to assign to each game's CR in creating my ratings. I'll use an example to explain.
Team A: 1 goal. 5 shots on target. CR = 0.2. Expected CR = 0.25
Team B: 2 goals. 10 shots on target. CR = 0.2. Expected CR = 0.25
I thought this was a Binomial distribution with p = expected CR and n = number of shots on target, with variance = n*p*(1-p), but my Standard Error results were rubbish: since Team B had more shots on target, its CR = 0.2 should be more accurate than the CR for Team A, because I have more samples, where a sample is a shot on target. Does that make sense? Then I tried a Bernoulli distribution, which worked better, where variance = p(1-p), standard deviation is the square root of the variance, and then I calculated a standard error by dividing the SD by the square root of n, which in this case was the number of shots on target.
Team A:
Variance = 0.25*0.75 = 0.1875
SD = SQRT(0.1875) = 0.433
SE = 0.433 / SQRT(5) = 0.19
Team B:
SE = 0.433 / SQRT(10) = 0.14
This looks a lot better to me, since the SE of the CR is lower for team B than it is for team A. Is this close to what I should be doing? The next part is to use the inverse of the SE to create a
value of 'significance' so I can weight the importance of the CR when estimating team strength in terms of CR. Where there are no shots on goal, the significance should be 0 since there is no CR
data to work with, and when there are many shots on goal, the significance should be greater.
variance = n*p*(1-p)
This is the correct formula for the variance of a Binomial(n,p) distribution, but I think you're misinterpreting it.
This is the variance of the number of goals if the model is correct, not the variance of your estimate of the conversion rate. You would expect the variability of the number of goals to increase
with the number of shots on target, which is what you've observed.
For the variance of your estimate, I'd think along these lines (not sure it's correct though):
Given that the true distribution is $X \sim Bin(n,p)$, and that the estimated conversion rate is $\hat{p} = X/n$:
$Var(\hat{p}|n) = Var(\frac{X}{n}|n) = \frac{Var(X|n)}{n^2} = \frac{np(1-p)}{n^2} = \frac{p(1-p)}{n}$
$sd(\hat{p}|n) = \sqrt{\frac{p(1-p)}{n}}$
Which is decreasing in n, as you expected. Whether or not this can be inverted to create a significance measure with a useful interpretation, I have no idea, I'm afraid.
Thanks for the response, that's similar to what I'm looking for. I say similar because I can't tell if it's correct or not, but it does the job!
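As a quick numerical footnote (my own check, not from the thread), the $\sqrt{p(1-p)/n}$ formula above can be evaluated for both teams, using the expected CR p = 0.25 from the example:

```python
import math

def se_conversion_rate(p, n):
    """Standard error of the estimated conversion rate p-hat = X/n
    when X ~ Binomial(n, p): sqrt(p*(1-p)/n)."""
    return math.sqrt(p * (1 - p) / n)

p = 0.25                            # expected conversion rate
se_a = se_conversion_rate(p, 5)     # Team A: 5 shots on target
se_b = se_conversion_rate(p, 10)    # Team B: 10 shots on target
print(round(se_a, 3), round(se_b, 3))  # 0.194 0.137
```

As expected, the team with more shots on target has the smaller standard error, so its observed CR deserves more weight.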
March 31st 2011, 09:55 AM #2
April 1st 2011, 08:15 AM #3
Measuring the distance of processes: RUB researchers develop new process in spectral analysis
28.10.2011 - (idw) Ruhr-Universität Bochum
Mathematics: milestone for better statistical models achieved
A milestone in the description of complex processes - for example the ups and downs of share prices - has been reached by mathematicians at the Ruhr-Universität Bochum. Researchers led by Prof. Dr.
Holger Dette (stochastics) have developed a new method in spectral analysis, which allows a classical mathematical model assumption, so-called stationarity, to be precisely measured and determined
for the first time. The approach also makes it possible to construct statistical tests that are considerably better and more accurate than previous methods. The researchers report on their results in
the prestigious Journal of the American Statistical Association.
Stationary or not stationary - that is the question
Take share prices as an example: almost all economic models and forecasting tools suffer from a false premise. They assume that the average fluctuation of individual prices and the dependence
characteristics between different shares do not change over time - in other words, that the development of share prices is stationary. This assumption mostly turns out to be wrong in times of
crisis because, for example, under normal market conditions many prices barely affect each other or not at all, whereas in a crash they almost all collapse together. This shows that such a process
is generally non-stationary.
The solution: a new distance measure
Bochum's stochasticians Prof. Dr. Holger Dette, M.Sc. Philip Preuß and Dr. Mathias Vetter found the key to the whole issue by constructing a distance measure between stationary and
non-stationary processes. "Just as we can determine distances on Earth between two places, we were able to measure the distances, or the intervals, between the processes," said Prof. Dette. The
measure is exactly 0 when the assumption of stationarity applies to the process. This distance can be estimated from the data and thus provides a reliable tool for the spectral analysis of
so-called time series, such as share prices or climate data. "The goal of statistical analyses of time series is always to understand the underlying dependencies in order to then deliver the most
accurate predictions possible for the future behaviour of these processes," said Prof. Dette.
Motivated by the financial crises
"Our research is strongly motivated by the recent financial crises. At that time, nearly all economic models and forecasts for loan losses failed because they do not take appropriate account of
extreme dependencies. In the long term, we aim to develop models and methods that predict such events better," said Dette. New methods of asymptotic statistics are crucial to this success and have
been researched for years by Bochum's mathematicians, funded by the German Research Foundation within the Collaborative Research Centre SFB 823 "Statistical modelling of nonlinear dynamic
processes" (host university: TU Dortmund University). Here, statisticians from Bochum work together with colleagues from TU Dortmund University on new statistical methods to statistically verify
frequently used model assumptions and to develop new and better models where appropriate.
Bibliographic record
Holger Dette, Philip Preuß, Mathias Vetter. A Measure of Stationarity in Locally Stationary Processes With Applications to Testing. Journal of the American Statistical Association Sep 2011, Vol. 106,
No. 495, 1113-1124. doi:10.1198/jasa.2011.tm10811
Further information
Prof. Dr. Holger Dette, Institute of Statistics, Faculty of Mathematics at the RUB, Tel. +49 234 32 28284, holger.dette@rub.de
Editor: Jens Wylkop
| {"url":"http://www.uni-protokolle.de/nachrichten/id/226440/","timestamp":"2014-04-20T21:10:20Z","content_type":null,"content_length":"13475","record_id":"<urn:uuid:56e46981-a9d5-46c6-884d-455974eb905d>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00189-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: March 2012 [00303]
Re: yg = \frac{{d(yv)}}{{dt}}, how to solve this differential equation.
• To: mathgroup at smc.vnet.net
• Subject: [mg125525] Re: yg = \frac{{d(yv)}}{{dt}}, how to solve this differential equation.
• From: Murray Eisenberg <murray at math.umass.edu>
• Date: Sat, 17 Mar 2012 02:52:11 -0500 (EST)
• Delivered-to: l-mathgroup@mail-archive0.wolfram.com
You presented the differential equation using LaTeX syntax, not
Mathematica syntax. This suggests you know essentially nothing about Mathematica.
To begin: since multi-character names are allowed in Mathematica, you
have to indicate multiplication explicitly, perhaps with just a space,
rather than juxtaposition. Thus:
y g
Second, the Mathematica notation for a function y of a variable t is y[t].
Third, one notation for taking the derivative of a function y of t is
just y'[t]. Another is D[y[t], t], and the latter is more convenient for
taking the derivative of a product such as that of y v:
D[y[t] v[t], t]
Now of course velocity is the derivative of position, so you really have
D[y[t] y'[t], t]
You can either let Mathematica figure out what that is or use the
Product Rule from calculus:
D[y[t] y'[t], t] == (y'[t])^2 + y[t] y''[t]
Note the double-equal sign == for indicating an equation.
Finally, use the Mathematica function DSolve to solve a differential
equation. In your example, this will be:
DSolve[g y[t] == D[y[t] y'[t], t], y[t], t]
You probably won't like the pair of solutions you obtain, as they will
be expressed as inverse functions of some rather complicated expressions
involving complex cube- and sixth-roots of -1 along with elliptic integrals.
You may have better luck with tractable solutions if you specify initial
conditions, but I doubt it. So you may have to try for numerical
solutions, using NDSolve.
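As an aside not in the original thread: the same ODE can be cross-checked numerically outside Mathematica. Writing v = y' turns g y = d(y v)/dt into y'' = (g y - (y')^2)/y, and substitution shows that y(t) = (g/6) t^2 is an exact solution. The sketch below (all names are mine) integrates the system with a hand-rolled RK4 and compares against that solution:

```python
g = 9.81  # gravitational acceleration

def rhs(t, u):
    # State u = (y, v) with v = y'; from g*y = d(y*v)/dt = v**2 + y*y''
    y, v = u
    return (v, (g * y - v**2) / y)

def rk4(f, t, u, h, steps):
    # Classic fourth-order Runge-Kutta integrator for a 2-component system
    for _ in range(steps):
        k1 = f(t, u)
        k2 = f(t + h/2, [u[i] + h/2 * k1[i] for i in range(2)])
        k3 = f(t + h/2, [u[i] + h/2 * k2[i] for i in range(2)])
        k4 = f(t + h, [u[i] + h * k3[i] for i in range(2)])
        u = [u[i] + h/6 * (k1[i] + 2*k2[i] + 2*k3[i] + k4[i]) for i in range(2)]
        t += h
    return u

# Exact solution y = (g/6) t^2; start just after t = 0 to avoid y = 0.
t0, h, steps = 0.1, 0.0009, 1000   # integrate from t = 0.1 to t = 1.0
y0, v0 = (g/6) * t0**2, (g/3) * t0
y_end, v_end = rk4(rhs, t0, [y0, v0], h, steps)
print(y_end, g/6)  # numerical vs. analytic value at t = 1
```

The numerical value at t = 1 agrees with g/6 to many digits, confirming the parabolic solution satisfies the equation.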
On 3/16/12 7:30 AM, Hongyi Zhao wrote:
> Hi all,
> I've a differential equation looks like following:
> yg = \frac{{d(yv)}}{{dt}}
> where, g is gravity acceleration, y is the displacement, and the v is
> velocity. Could you please give me some hints by using mathematica to
> solve it?
> Best regards
Murray Eisenberg murray at math.umass.edu
Mathematics & Statistics Dept.
Lederle Graduate Research Tower phone 413 549-1020 (H)
University of Massachusetts 413 545-2859 (W)
710 North Pleasant Street fax 413 545-1801
Amherst, MA 01003-9305 | {"url":"http://forums.wolfram.com/mathgroup/archive/2012/Mar/msg00303.html","timestamp":"2014-04-19T07:04:04Z","content_type":null,"content_length":"27307","record_id":"<urn:uuid:111ea53d-8eba-4994-b81b-996d29201760>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00219-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oscilloscope Math Functions Aid Hot-Swap Circuit Analysis
By: Dwight Larson, Senior Member of the Technical Staff
Abstract: Digital oscilloscopes are the norm in most engineering labs, but the chances are that you have not fully explored their features. Among the more interesting features of a digital
oscilloscope is the "math" channel, which can be applied in novel ways to simplify and expand the analysis of hot-swap and load-switching circuits. This application note shows how to connect the
oscilloscope's probes to a hot-swap circuit to obtain accurate values for MOSFET power dissipation and load capacitance. The MAX5976 hot-swap solution serves as the example device.
A similar version of this article appeared in the October 1, 2011 issue of Test & Measurement World magazine.
Among the more interesting features of a digital oscilloscope is the "math" channel, which can be applied in novel ways to simplify and expand the analysis of hot-swap and load-switching circuits.
With clever use, oscilloscope math functions enable the calculation of load capacitance or reveal the transient power dissipation in a MOSFET during startup or
. Math functions can yield detailed real-world information about hot-swap circuit parameters that are otherwise subject to approximations and estimates. Such information is invaluable, both for
design and for troubleshooting of hot-swap and load-switching circuits.
This application note shows how to connect the oscilloscope's probes to a hot-swap circuit to obtain accurate values for MOSFET power dissipation and load capacitance.
Oscilloscope Setup
For simplicity in this demonstration, we chose the MAX5976 hot-swap solution, which combines an internal MOSFET switching element with the current-sensing and driver circuitry necessary to
implement a complete power-switching circuit. (The following test method also applies to hot-swap control circuits built from discrete components.) By connecting scope probes to the hot-swap
circuit as shown in Figure 1, the oscilloscope can access the signals needed for calculations. Voltage probes connected to the input and output provide the voltage drop across the MOSFET; a
current probe offers the easiest way to sense load current through the MOSFET.
Figure 1. Scope probes connect to the MAX5976 and MAX5978 hot-swap circuits. These oscilloscope connections obtain waveforms that feed the scope's advanced math function.
Note that the same basic connections apply for a nonintegrated hot-swap circuit. Connect the input- and output-voltage probes before and after the MOSFET (internal to a MAX5976, external to a
MAX5978), and place the current probe in series with the circuit's current-sense resistor. To get an accurate measure of current flowing through the switch element itself, you should place the
current probe after the input bypass capacitance and before the output capacitance.
MOSFET Power Dissipation
Power dissipation in the switch element (typically an n-channel MOSFET) is the product of drain-to-source voltage (V[DS]) and drain current (I[D]). In our test setup, V[DS] is the difference
between channel 2 and channel 1, and I[D] is measured directly by the current probe. The oscilloscope used in this example (a Tektronix® DPO3034) has a math trace that is configured through an
advanced math menu (Figure 2).
Figure 2. This menu lets you edit math expressions in the advanced math function of a DPO3034 digital oscilloscope.
To measure power dissipation in the MOSFET, simply enter an expression that subtracts channel 1 from channel 2. Multiply the result by the current-probe signal. When the hot-swap circuit is enabled,
its output voltage rises toward the input potential at a particular dV/dt slew rate. The load-capacitance charging current (I[D]) flows through the MOSFET according to:
I[D] = C[OUT] × dV/dt
Capturing this startup event on the oscilloscope yields the waveforms of Figure 3a, for which output capacitance is 360µF and V[IN] = 12V. The MAX5976 limits inrush current to 2A. Note that the power
waveform is a decreasing ramp, starting at 12V × 2A = 24W and falling to 0W as the output rises to 12V. That behavior is exactly what we expect for a hot-swap circuit charging the load capacitance
with a constant current.
Figure 3a. The MOSFET power dissipation for the circuit of Figure 1 is shown (red trace) for C[OUT] = 360µF. Inrush current is clamped to 2A.
Power waveforms measured in this way can be used to determine whether the MOSFET is within its safe operating area (SOA), or to estimate the rise of its junction temperature by referring to
relevant charts in the MOSFET data sheet. Determining the waveform directly from actual measurements eliminates the error inherent in approximating power dissipation. Moreover, the power waveform
can be accurately captured during a startup event for which neither the inrush current nor the dV/dt is constant (Figure 3b).
Figure 3b. A power waveform is accurately captured during a startup in which neither the inrush current nor the dV/dt is constant. Here the inrush current is unclamped.
If the math function in your oscilloscope includes an integration operand, this waveform calculation can be taken one step further to show the total energy deposited in the MOSFET during any event
that results in significant power dissipation in the FET. Figure 4 applies the integration function to the MOSFET's power information.
Figure 4. Integration of power dissipation yields the total energy deposited in the MOSFET of Figure 1 during startup.
As in Figure 3a, C[OUT] is 360µF and the inrush current is clamped to 2A. Because the power waveform has a triangular shape with a startup duration of about 2ms, we expect about 24W/2 × 2ms = 24mJ of
energy to be converted to heat in the MOSFET. Indeed, the math channel's integral of power reaches almost exactly 24mWs (= 24mJ) of energy at the end of the startup event!
Obviously, this technique can also be applied to other transient conditions that affect the MOSFET, such as a shutdown and short-circuit or overload events. Such detailed power and energy information
can be used to make precise calculations of pulse duration and single-pulse power when checking the MOSFET's SOA and thermal characteristics.
Measuring Load Capacitance
Among the math functions of a digital oscilloscope, the integration operand can also be used to measure hot-swap load capacitance—provided that the resistive load current is small during startup.
Capacitance is the amount of charge stored per volt applied to the capacitor; charge is simply the time integral of current. Therefore, by integrating the hot-swap inrush current and dividing by the
output voltage, an oscilloscope's math function can measure the total load capacitance with surprising accuracy. In Figure 5a, the MAX5976 hot-swap circuit is enabled with three ceramic output
capacitors, each with a nominal value of 10µF. The math trace is initially meaningless because of the divide-by-zero problem before V[OUT] rises. However, when V[OUT] exceeds zero, the math
channel quickly converges to a measured capacitance of approximately 27µF. Note that the math function units for this integral are not represented properly—modern digital scopes are amazing, but
they still cannot read our minds or understand our intentions!
Figure 5a. Output capacitance measurement from Figure 1 with C[OUT] = 30µF.
Figure 5b repeats the experiment of Figure 5a, but with an additional aluminum electrolytic capacitor of nominal value 330µF added to the output. Note that as the startup event ends, the math
trace shows a measured output capacitance of approximately 360µF—almost exactly what we expect. Remember that a resistive load degrades the accuracy of these capacitance measurements by drawing
current that is not stored in the capacitor. For short-duration measurements, however, the results can still be very useful.
Figure 5b. Output capacitance measurement from Figure 1 with C[OUT] = 30µF + 330µF.
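The charge-integral idea behind Figures 5a and 5b can likewise be sketched in a few lines (my own illustration; the scope itself does this with its math channel). Integrating the inrush current gives the stored charge, and dividing by the output voltage recovers C:

```python
# Recover load capacitance as C = (integral of I dt) / V_OUT.
C_true = 360e-6   # the load capacitance we hope to "measure", F
I_D = 2.0         # constant inrush current, A
dt = 1e-6         # sample interval, s

charge = 0.0
v_out = 0.0
while v_out < 12.0:               # integrate until the output reaches V_IN
    charge += I_D * dt            # running integral of current, coulombs
    v_out = charge / C_true       # capacitor voltage Q/C

c_measured = charge / v_out       # what the scope's math channel computes
print(c_measured * 1e6, "uF")     # ~360 uF
```

As the note warns, any resistive load current folded into the integral would inflate the result, since that charge never reaches the capacitor.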
Tektronix is a registered trademark and registered service mark of Tektronix, Inc.
Related Parts
DS4560 12V Hot-Plug Switch
MAX5961 0 to 16V, Quad, Hot-Swap Controller with 10-Bit Current and Voltage Monitor
MAX5970 0V to 16V, Dual Hot-Swap Controller with 10-Bit Current and Voltage Monitor and 4 LED Drivers
MAX5978 0 to 16V, Hot-Swap Controller with 10-Bit Current, Voltage Monitor, and 4 LED Drivers
APP 4883: Nov 03, 2011
| {"url":"http://www.maximintegrated.com/app-notes/index.mvp/id/4883","timestamp":"2014-04-20T23:31:26Z","content_type":null,"content_length":"72945","record_id":"<urn:uuid:798c4ea8-8861-4281-a380-82f28a618b7c>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
MAT 540 Final Exam
1). Fractional relationships between variables are not permitted in the standard form of a linear program.
2). In an unbalanced transportation model, supply does not equal demand and one set of constraints uses ≤ signs.
3). Excel can be used to simulate systems that can be represented by both discrete and continuous random variables.
4). In a transshipment problem, items may be transported from destination to destination and from source to source.
5). In a total integer model, all decision variables have integer solution values.
6). A cycle is an up and down movement in demand that repeats itself in less than 1 year.
7). Using the maximin criterion to make a decision, you
8). Using the minimax regret criterion to make a decision,
9). A business owner is trying to decide whether to buy, rent, or lease office space and has constructed the following payoff table based on whether business is brisk or slow. If the probability of
brisk business is .40 and for slow business is .60, the expected value of perfect information is:
10). In a break-even model, if all of the costs are held constant, how does an increase in price affect the model?
11). Events that cannot occur at the same time in any trial of an experiment are:
12). Steinmetz furniture buys 2 products for resale: big shelves (B) and medium shelves (M). Each big shelf costs $100 and requires 100 cubic feet of storage space, and each medium shelf costs $50
and requires 80 cubic feet of storage space. The company has $25000 to invest in shelves this week, and the warehouse has 18000 cubic feet available for storage. Profit for each big shelf is $85 and
for each medium shelf is $75. What is the storage space constraint?
13). Steinmetz furniture buys 2 products for resale: big shelves (B) and medium shelves (M). Each big shelf costs $100 and requires 100 cubic feet of storage space, and each medium shelf costs $50
and requires 80 cubic feet of storage space. The company has $25000 to invest in shelves this week, and the warehouse has 18000 cubic feet available for storage. Profit for each big shelf is $85 and
for each medium shelf is $75. In order to maximize profit, how many big shelves (B) and how many medium shelves (M) should be purchased?
14). The following are Excel "Answer" and "Sensitivity" reports of a linear programming problem: The Answer Report: The Sensitivity Report: Which additional resources would you recommend to be increased?
15). The production manager for Beer etc. produces 2 kinds of beer: light (L) and dark (D). Two resources used to produce beer are malt and wheat. He can obtain at most 4800 oz of malt per week and
at most 3200 oz of wheat per week respectively. Each bottle of light beer requires 12 oz of malt and 4 oz of wheat, while a bottle of dark beer uses 8 oz of malt and 8 oz of wheat. Profits for light
beer are $2 per bottle, and profits for dark beer are $1 per bottle. What is the optimal weekly profit?
16). The owner of Black Angus Ranch is trying to determine the correct mix of two types of beef feed, A and B, which cost 50 cents and 75 cents per pound, respectively. Five essential ingredients
are contained in the feed, shown in the table below. The table also shows the minimum daily requirement of each ingredient.

Ingredient   % per lb in Feed A   % per lb in Feed B   Minimum daily requirement (lb)
1            20                   24                   30
2            30                   10                   50
3            0                    30                   20
4            24                   15                   60
5            10                   20                   40

The constraint for ingredient 3 is:
17). Let xij = gallons of component i used in gasoline j. Assume that we have two components and two types of gasoline. There are 8,000 gallons of component 1 available, and the demands for gasoline
types 1 and 2 are 11,000 and 14,000 gallons respectively. Write the supply constraint for component 1.
18). The Kirschner Company has a contract to produce garden hoses for a customer. Kirschner has 5 different machines that can produce this kind of hose. Write a constraint to ensure that if machine 4
is used, machine 1 will not be used.
19). If we are solving a 0-1 integer programming problem, the constraint x1 = x2 is a __________ constraint.
20). The following table represents the cost to ship from Distribution Center 1, 2, or 3 to Customer A, B, or C. The constraint that represents the quantity supplied by DC 1 is:
21). The assignment problem constraint x31+x32+x33+x34 ≤ 2 means
22). Professor Dewey would like to assign grades such that 15% of students receive As. If the exam average is 62 with a standard deviation of 13, what grade should be the cutoff for an A? (Round your
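Question 22 asks for the 85th percentile of a normal distribution (the top 15% receive As). A quick sanity check - mine, not part of the exam - using Python's standard library:

```python
from statistics import NormalDist

# Top 15% receive As, so the A cutoff is the 85th percentile of N(62, 13).
cutoff = NormalDist(mu=62, sigma=13).inv_cdf(0.85)
print(round(cutoff, 2))  # ~75.47, i.e. a cutoff of about 75
```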
23). Jack is considering pursuing an MS in Information Systems degree. He has applied to two different universities. The acceptance rate for applicants with similar qualifications is 30% for
University X and 60% for University Y. The decisions of each university have no effect on each other. This means that they are:
24). __________ moving averages react more slowly to recent demand changes than do __________ moving averages.
25). Consider the following graph of sales. Which of the following characteristics is exhibited by the data?
26). For the following frequency distribution of demand, the random number 0.8177 would be interpreted as a demand of:
27). A bakery is considering hiring another clerk to better serve customers. To help with this decision, records were kept to determine how many customers arrived in 10-minute intervals. Based on
100 ten-minute intervals, the following probability distribution and random number assignments were developed. Suppose the next three random numbers were .18, .89 and .67. How many customers would
have arrived during this 30-minute period?
28). Ford’s Bed & Breakfast breaks even if they sell 50 rooms each month. They have a fixed cost of $6500 per month. The variable cost per room is $30. For this model to work, what must be the
revenue per room? (Note: The answer is a whole dollar amount. Give the answer as a whole number, omitting the decimal point. For instance, use 105 to write $105.00).
29). Suppose that a production process requires a fixed cost of $50,000. The variable cost per unit is $10 and the revenue per unit is projected to be $50. Find the break-even point.
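Questions 28 and 29 both rest on the same break-even identity - fixed cost = volume × (price − variable cost) - which can be checked directly (this check is mine, not part of the exam):

```python
# Q28: 50 rooms break even against $6500 fixed cost and $30 variable
# cost per room, so each room must earn (price - 30) toward fixed costs.
price = 6500 / 50 + 30
print(price)  # 160.0

# Q29: break-even units with $50,000 fixed cost, $10 variable cost,
# and $50 revenue per unit.
units = 50000 / (50 - 10)
print(units)  # 1250.0
```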
30). Joseph is considering pursuing an MS in Information Systems degree. He has applied to two different universities. The acceptance rate for applicants with similar qualifications is 30% for
University X and 60% for University Y. What is the probability that Joseph will not be accepted at either university? (Note: write your answer as a probability, with two decimal places. If necessary,
round to two decimal places. For instance, a probability of 0.252 should be written as 0.25).
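Question 30 (like question 23) relies on independence: the rejection probabilities multiply. A one-line check, not part of the exam:

```python
# P(rejected by X) = 1 - 0.30, P(rejected by Y) = 1 - 0.60; the two
# decisions are independent, so the joint probability is the product.
p_neither = (1 - 0.30) * (1 - 0.60)
print(round(p_neither, 2))  # 0.28
```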
31). Consider the following linear program, which maximizes profit for two products, regular (R) and super (S):
MAX 50R + 75S
s.t. 1.2R + 1.6S ≤ 600 assembly (hours)
0.8R + 0.5S ≤ 300 paint (hours)
0.16R + 0.4S ≤ 100 inspection (hours)

Sensitivity Report:
Cell   Name                 Final Value   Reduced Cost   Objective Coefficient   Allowable Increase   Allowable Decrease
$B$7   Regular =            291.67        0.00           50                      70                   20
$C$7   Super =              133.33        0.00           75                      50                   43.75

Cell   Name                 Final Value   Shadow Price   Constraint R.H. Side    Allowable Increase   Allowable Decrease
$E$3   Assembly (hr/unit)   563.33        0.00           600                     1E+30                36.67
$E$4   Paint (hr/unit)      300.00        33.33          300                     39.29                175
$E$5   Inspect (hr/unit)    100.00        145.83         100                     12.94                40

A change in the market has increased the profit on the super product by $5. Total profit will increase by __________. Write your answer with two significant places after the decimal and do not
include the dollar "$" sign.
32). Tracksaws, Inc. makes tractors and lawn mowers. The firm makes a profit of $30 on each tractor and $30 on each lawn mower, and they sell all they can produce. The time requirements in the
machine shop, fabrication, and tractor assembly are given in the table.
Formulation: Let x = number of tractors produced per period, y = number of lawn mowers produced per period.
MAX 30x + 30y
subject to: 2x + y ≤ 60; 2x + 3y ≤ 120; x ≤ 45; x, y ≥ 0
The graphical solution is shown below. What is the shadow price for fabrication? Write your answer with two significant places after the decimal and do not include the dollar "$" sign.
33). Klein Kennels provides overnight lodging for a variety of pets. An attractive feature is the quality of care the pets receive, including well balanced nutrition. The kennel’s cat food is made by
mixing two types of cat food to obtain the “nutritionally balanced cat diet.” The data for the two cat foods are as follows:
Cat Food           Cost/oz   Protein (%)   Fat (%)
Partner's Choice   $0.20     45            20
Feline Excel       $0.15     15            30

Klein Kennels wants to be sure that the cats receive at least 5 ounces of protein and at least 3 ounces of fat per day. What is the optimal cost of this plan? Note: Please write your answer with
two significant places after the decimal and do not include the dollar "$" sign. For instance, $9.45 (nine dollars and forty-five cents) should be written as 9.45.
34). Find the optimal Z value for the following problem. Do not include the dollar “$” sign with your answer. Max Z = x1 + 6x2 Subject to: 17x1 + 8x2 ≤ 136 3x1 + 4x2 ≤ 36 x1, x2 ≥ 0 and integer
35). Suppose that x is normally distributed with a mean of 10 and a standard deviation of 3. Find P(x ≤ 6). Note: Round your answer, if necessary, to two places after the decimal. Please express your
answer with two places after the decimal.
36). Ms. James is considering four different opportunities, A, B, C, or D. The payoff for each opportunity will depend on the economic conditions, represented in the payoff table below.
Investment   Poor (S1)   Average (S2)   Good (S3)   Excellent (S4)
A            18          25             50          80
B            19          100            50          75
C            100         26             120         60
D            20          27             50          240

Suppose all states of the world are equally likely (each state has a probability of 0.25). What is the expected value of perfect information? Note: Report your answer as an integer, rounding to
the nearest integer, if applicable.
37). The local operations manager for the IRS must decide whether to hire 1, 2, or 3 temporary workers. He estimates that net revenues will vary with how well taxpayers comply with the new tax code.
The probabilities of low, medium, and high compliance are 0.20, 0.30, and 0.50 respectively. What is the expected value of perfect information? Do not include the dollar “$” sign with your answer.
The following payoff table is given in thousands of dollars (e.g. 50 = $50,000). Note: Please express your answer as a whole number in thousands of dollars (e.g. 50 = $50,000). Round to the nearest
whole number, if necessary.
38). The local operations manager for the IRS must decide whether to hire 1, 2, or 3 temporary workers. He estimates that net revenues will vary with how well taxpayers comply with the new tax code.
The probabilities of low, medium, and high compliance are 0.20, 0.30, and 0.50 respectively. What are the expected net revenues for the number of workers he will decide to hire? The following payoff
table is given in thousands of dollars (e.g. 50 = $50,000). Note: Please express your answer as a whole number in thousands of dollars (e.g. 50 = $50,000). Round to the nearest whole number, if
39). The following sales data are available for 2003-2008:

Year   Sales   Forecast
2003   7       7
2004   8       8.5
2005   12      10.5
2006   14      13
2007   16      15
2008   18      16

Calculate the MAPD and express it in decimal notation. Please express the result as a number with 4 decimal places. If necessary, round your result accordingly. For instance, 9.14677 should be
expressed as 9.1468.
40). Consider the following decision tree. The objective is to choose the best decision among the two available decisions A and B. Find the expected value of the best decision. Do not include the
dollar “$” sign with your answer. | {"url":"http://www.finalexamguide.com/MAT-540-Final-Exam-2-365.htm","timestamp":"2014-04-21T13:18:11Z","content_type":null,"content_length":"40084","record_id":"<urn:uuid:6125f250-4305-49f2-9740-561cecd52f97>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Graphing Calculator Software
A graphing calculator meant as a desktop replacement for the TI-83. A calculator / graphing tool written in Java. The goal is to create an open source, cross-platform, desktop graphing calculator
with functionality similar to a TI-83. Take Egors...
MagicPlot Calculator is a simple and easy-to-use formula calculator. MagicPlot Calculator is a free expression calculator from the MagicPlot graphing application. Even if you don't use MagicPlot
you can use our strong calculator. MagicPlot Calculator...
Software Terms: Data Processing, Fit, Fitting, Free Plotting Software, Graph, Graphing Program, Graphing Software, Gui, Multi-peak Fitting, Nonlinear Curve Fitting
DreamCalc is a virtual Scientific Graphing Calculator. You get the intuitive feel of using a real hand-held calculator on your PC or laptop! With DreamCalc, you'll be able to graph functions and plot list data more simply than ever. In fact, it is...
2.9 MB |
Shareware |
US$40 |
Category: Mathematics
Math software for students studying precalculus and calculus. Can be interesting for teachers teaching calculus. Math Center Level 2 consists of a Scientific Calculator, a Graphing Calculator 2D Numeric, a Graphing Calculator 2D Parametric, and...
OS: Windows
Software Terms: Cartesian, Coordinates, Graphing Calculator, Hyperbolic Functions, Inverse Functions, Mathematics, Parametric, Polar, Trigonometric Functions
2.7 MB |
Shareware |
US$25.99 |
Category: Mathematics
DreamCalc is a fully featured Graphing Calculator for Windows. With DreamCalc, you'll be able to graph functions and plot list data more simply than ever. In fact, it is a match for many dedicated graphing packages, but far easier to use. Unlike...
OS: Windows
Software Terms: Author Of Dreamcalc Dcg Graphing Calculator 4 6 1, Built, Business, Calculator, Constants, Conversions, Dreamcalc, Dreamcalc Dcg Graphing Calculator, Functions, Input
DreamCalc is a software application which provides a fully featured and convenient alternative to using a separate hand-held graphing calculator when you are working on your PC. Because it is
software, it runs alongside your other applications, allowing you to copy values, lists, expressions and graphs.
OS: Windows
Software Terms: Calculator, Enginee, Engineering Calculator, Function Graphing, Function Plot, Function Plotting, Graphing Calculator, Plot, Programming, Scientific Calculator
2.8 MB |
Freeware |
Category: Mathematics
Graph mathematical functions and equations in 2D or 3D with this advanced graphing calculator. - Easy graphing calculator - Beautiful user-interface - Plots functions: y=sin(x) - Plots equations: sin(x) = y2 - Easy and detailed customization of...
OS: Windows, XP, 2000, 98, Me
Software Terms: Advanced Calculator, Buy Calculator, Calculator, Download Calculator, Equation Calculator, Free Calculator, Math Calculator, Online Calculator, Scientific Calculator, Simple
5.4 MB |
Shareware |
US$19.95 |
Category: Mathematics
Fornux® PowerCalc-GX is both a powerful scientific calculator and a fast graphing calculator offering college students everything from high precision calculations to quality graphics helping you to
resolve any problem easily. Intelligent, fast and precise, Fornux® PowerCalc-GX is ultimately designed for ease of use with advanced logical, scientific, statistical, vector and linear algebra
OS: Windows, XP, 2000, 98, Me, NT
Software Terms: Algebra Calculator, Graphing Calculator, Linear Algebra, Linear Re, Math Calculator, Matrix, Scientific Calculator, Scientific Graphic Calculator, Scientific Graphing Calculator,
Statistics Calculator
2.6 MB |
Shareware |
US$45 |
Category: Mathematics
Derivative Calculator Level 2 for teachers and students. Calculates derivatives of high orders of standard functions (including hyperbolic). A handy, fast, reliable, precise tool if you need to
find symbolic and numerical derivatives of standard functions. Derivative Calculator Level 2 is programmed in C#. All calculations are done in double floating data type. The calculator...
OS: Windows
Software Terms: Calculator, Derivative, Differentiation, Function
1.3 MB |
Shareware |
US$54 |
Category: Mathematics
Derivative Calculator Real 36 for teachers and students. Calculates derivatives of high orders of standard functions (including hyperbolic). A handy, fast, reliable, precise tool if you need to
find symbolic and numerical derivatives of standard functions. Derivative Calculator Real 36 is programmed in C#. All calculations are done in proprietary data type. The calculator calculates...
OS: Windows
Software Terms: Calculator, Derivative, Differentiation, Function
1.3 MB |
Shareware |
US$57 |
Category: Mathematics
Derivative Calculator Real 45 for teachers and students. Calculates derivatives of high orders of standard functions (including hyperbolic). A handy, fast, reliable, precise tool if you need to
find symbolic and numerical derivatives of standard functions. Derivative Calculator Real 45 is programmed in C#. All calculations are done in proprietary data type. The calculator calculates...
OS: Windows
Software Terms: Calculator, Derivative, Differentiation, Function
2.6 MB |
Shareware |
US$51 |
Category: Mathematics
Derivative Calculator Real 27 for teachers and students. Calculates derivatives of high orders of standard functions (including hyperbolic). A handy, fast, reliable, precise tool if you need to
find symbolic and numerical derivatives of standard functions. Derivative Calculator Real 27 is programmed in C#. All calculations are done in proprietary data type. The calculator calculates...
OS: Windows
Software Terms: Calculator, Derivative, Differentiation, Function
2.6 MB |
Shareware |
US$61 |
Category: Mathematics
Taylor Calculator Real 27 for teachers and students. Calculates partial sums of Taylor series of standard functions (including hyperbolic). A handy, fast, reliable, precise tool if you need to find
symbolic and numerical Taylor polynomials of standard functions. Taylor Calculator Real 27 is programmed in C#. All calculations are done in double floating data type. The calculator...
OS: Windows
Software Terms: Calculator, Derivative, Differentiation, Function, Taylor Polynomial, Taylor Series
1.3 MB |
Shareware |
US$64 |
Category: Mathematics
Taylor Calculator Real 36 for teachers and students. Calculates partial sums of Taylor series of standard functions (including hyperbolic). A handy, fast, reliable, precise tool if you need to find
symbolic and numerical Taylor polynomials of standard functions. Taylor Calculator Real 36 is programmed in C#. All calculations are done in double floating data type. The calculator...
OS: Windows
Software Terms: Calculator, Derivative, Differentiation, Function, Taylor Polynomial, Taylor Series
2.6 MB |
Shareware |
US$55 |
Category: Mathematics
Taylor Calculator Level 2 for teachers and students. Calculates partial sums of Taylor series of standard functions (including hyperbolic). A handy, fast, reliable, precise tool if you need to find
symbolic and numerical Taylor polynomials of standard functions. Taylor Calculator Level 2 is programmed in C#. All calculations are done in double floating data type. The calculator...
OS: Windows
Software Terms: Calculator, Derivative, Differentiation, Function, Taylor Polynomial, Taylor Series | {"url":"http://www.bluechillies.com/software/graphing-calculator-software.html","timestamp":"2014-04-18T00:14:44Z","content_type":null,"content_length":"64767","record_id":"<urn:uuid:9b357488-1da4-4c77-9c80-6c736f9e5139>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00455-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts from March 2007 on The Unapologetic Mathematician
I seem to be the only one around who thinks this is hilarious, or even gets it. I am the biggest geek in the department of mathematics.
There is a special kind of function between rings, just like we have in groups. Given rings $R$ and $S$, a function $f:R\rightarrow S$ is called a homomorphism if it preserves all the ring structure.
The sort of odd thing here is that we’ve got two different kinds of rings to consider: those with and without identities. If we’re considering rings in general, we require that
• $f(r_1+r_2)=f(r_1)+f(r_2)$
• $f(r_1r_2)=f(r_1)f(r_2)$
but if we're restricting ourselves to rings with identities, we also require that

• $f(1)=1$

where the $1$ on the left is the identity of $R$, and the one on the right is the identity for $S$. If we have two rings with identities but we consider them as general rings there will be more
homomorphisms than if we consider them as rings with identity. It becomes important to pay a bit of attention to what kind of rings we’re really concerned with.
As an exercise, consider an arbitrary ring $R$ and see what ring homomorphisms exist from $\mathbb{Z}$ to $R$. If $R$ has an identity, which of these homomorphisms preserve the identity?
Oh, and I probably should mention this: all the terminology from groups comes along for the ride. An injective (one-to-one) ring homomorphism is a monomorphism. A surjective (onto) ring homomorphism
is an epimorphism. One that’s both is an isomorphism. A homomorphism from a ring to itself is an endomorphism, and an isomorphism from a ring to itself is an automorphism.
[EDIT: cleaned up LaTeX error and added comments at the end about terminology.]
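As a quick numerical sketch of the exercise above: reduction mod 6 gives an identity-preserving ring homomorphism from $\mathbb{Z}$ to $\mathbb{Z}_6$, and its defining properties can be spot-checked in a few lines (the function name `f` here is just for illustration):

```python
# Sketch: reduction mod 6 as a ring homomorphism Z -> Z/6Z.
# Checks the defining properties on a sample of integer pairs.

def f(n):
    return n % 6  # the map sending n to its residue mod 6

samples = [(a, b) for a in range(-10, 11) for b in range(-10, 11)]
assert all(f(a + b) == (f(a) + f(b)) % 6 for a, b in samples)  # preserves addition
assert all(f(a * b) == (f(a) * f(b)) % 6 for a, b in samples)  # preserves multiplication
assert f(1) == 1  # preserves the identity
print("reduction mod 6 is a homomorphism of rings with identity")
```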
I just got home from a long discussion with Dr. Zuckerman about this whole business. I’m not quite ready to say exactly what’s going on, but I want to correct a couple errors that I’ve made. Let it
not be said that I don’t admit when I’m wrong.
Firstly, in my little added remarks about the Monster group in my “Why We Care” post, I was oversimplifying. First of all, the $E_8$ lattice is not the Leech lattice. The Leech lattice lives in
24-dimensional space for one thing (doh). Basically, you put together three copies of the $E_8$ lattice and then tweak it a bit.
Putting them together I can explain. The simplest lattice is just the integers sitting inside the real line. If you move to the plane, the points with integer coordinates sit at the corners of the
squares in a checkerboard tiling of the plane. This is “adding two copies of the integer lattice”. For three copies of $E_8$, we want 24-tuples of numbers so the first eight, second eight, and third
eight are each the coordinates of a point in the $E_8$ lattice.
When you do this, it turns out there’s just enough room to squeeze in some more points to get a new lattice. That’s the Leech lattice. The Monster also isn’t quite just a group of symmetries of this
lattice, so there’s still a few more steps to go, but it’s definitely related. So the connection isn’t quite as close as I’d implied, but it’s there.
The other thing is about real forms. I'd forgotten that not every choice of "realification" of the Killing form gives a Lie group, and further that not every choice that does work gives a unique Lie group.
What is true is that to every real form $G(\mathbb{R})$ of a complex Lie group $G$, there’s a largest compact subgroup $K(\mathbb{R})$. This means that its ends curve back in on themselves like the
circle or the torus, and don’t run off to infinity like the line or the cylinder. Then we can “complexify” this group to get another complex group $K$ that’s really interesting to us. This group $K$
is a subgroup of $G$, which will be important. In particular, if we take the compact real form of $G$, its maximal compact subgroup is just itself, so its complexification $K$ is just $G$ back again.
Today I’m going to be talking to the graduate students about various topics relating to coloring knots. I think I’ll leave you with a little project to play with.
First, go to Bar-Natan’s table of knots. Notice how all the diagrams seem to be made up of arcs meeting up where one strand of the knot crosses under another. Pick a knot diagram and try to color
each arc either red, green, or blue, subject to the following rule: at any crossing, the three arcs that meet (two for the undercrossing strand and one for the overcrossing) must either be all the
same color or all different colors.
Which knots can you color using all three colors at least once? If that’s too easy for you, how many ways can you color a given knot? If that’s too easy for you, you’ve almost surely seen this
To get you started, I’ve tricolored the trefoil knot using all three colors.
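The coloring rule is easy to check by brute force. This sketch assumes one labeling of the standard trefoil diagram — three arcs numbered 0–2, with each crossing recorded as the triple of arcs meeting there (the over strand plus the two ends of the under strand):

```python
# Sketch: count tricolorings of the trefoil by brute force.
# The crossing list below assumes the standard trefoil diagram's arc labeling.
from itertools import product

arcs = 3
crossings = [(0, 1, 2), (1, 2, 0), (2, 0, 1)]

def valid(coloring):
    for triple in crossings:
        cols = {coloring[a] for a in triple}
        if len(cols) == 2:          # must be all the same (1) or all different (3)
            return False
    return True

all_colorings = [c for c in product(range(3), repeat=arcs) if valid(c)]
nontrivial = [c for c in all_colorings if len(set(c)) == 3]
print(len(all_colorings), len(nontrivial))   # 9 valid colorings, 6 using all three colors
```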
Sometime between dragging myself into my bed after the calculus exam and related activities last night and dragging myself back out of it in time to teach this morning, an article claiming to solve
triangular peg solitaire went up on the arXiv. I’ve obviously not had time to read it, so I don’t know how good it is, but the subject matter at least should be pretty generally accessible.
There are a number of different kinds of rings differentiated (sorry) by properties of their multiplications. Most of them lead into their own specialized areas of study. I mentioned that a ring may
or may not be commutative, and it may or may not have an identity, but there are a few more that will be useful.
One initially counterintuitive idea is that it’s entirely possible that a ring has “zero divisors”: two nonzero elements that multiply to give zero. Imagine starting with two copies of the integers,
$\mathbb{Z}$ and $\bar{\mathbb{Z}}$, writing elements of the second copy as integers with a bar over them. Now consider pairs of elements, one from each copy, $(a,\bar{b})$. Add pairs by adding the
two components, but multiply them like this:

$(a,\bar{b})\cdot(c,\bar{d})=(a\cdot c,\overline{a\cdot d+b\cdot c})$

Notice that the product of any two elements of $\bar{\mathbb{Z}}$ is zero! Weird. Eerie.

To be explicit: an element of this ring coming from $\bar{\mathbb{Z}}$ is $(0,\bar{a})$. We calculate the product:

$(0,\bar{a})\cdot(0,\bar{b})=(0\cdot 0,\overline{0\cdot b+a\cdot 0})=(0,\bar{0})$
So, any element $a$ for which there is a $b$ so that $ab=0$ is called a left zero divisor. Right zero divisors are defined similarly. If a ring has no zero divisors, so the product of two nonzero
elements is always nonzero, we call it an “integral domain”. The integers are just such an integral domain, fittingly enough.
Now if a ring has a multiplicative identity we can start talking about multiplicative inverses. We say an element $a$ has a left inverse $b$ if $ba=1$, or a right inverse $c$ if $ac=1$. If a ring has
both a left and a right inverse they're the same, since

$b=b1=b(ac)=(ba)c=1c=c$
In this case we call $a$ a unit and write its inverse as $a^{-1}$. We can also see that an element having a left (right) inverse cannot be a left (right) zero divisor:
$ax=0\Rightarrow x=1x=(ba)x=b(ax)=b0=0$
If every nonzero element of a ring is a unit, we call it a division ring.
In the case of commutative rings, all these distinctions between “left” and “right” (zero divisors, inverses, etc.) disappear, since multiplication doesn’t care about the order of the factors. We
actually have a special name for a commutative division ring: we call it a “field”, though everyone else in the world except the Belgians seems to call it a “(dead) body” (körper, corps, поле, test,
lichaam, …).
[EDIT: added explicit calculation verifying that elements from $\bar{\mathbb{Z}}$ in the example are zero-divisors.]
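Here is a minimal sketch of this example ring in code. It assumes the multiplication rule $(a,\bar{b})\cdot(c,\bar{d})=(ac,\overline{ad+bc})$ — the product of the two barred components is simply discarded — which is exactly what makes every element of $\bar{\mathbb{Z}}$ a zero divisor:

```python
# Sketch of the ring of pairs from the post: elements are (a, b), with b
# thought of as "b-bar".  Multiplication drops the product of the barred
# parts, so (0, a) * (0, b) = (0, 0): nonzero elements multiplying to zero.

class Pair:
    def __init__(self, a, b):
        self.a, self.b = a, b          # represents (a, b-bar)

    def __add__(self, other):
        return Pair(self.a + other.a, self.b + other.b)

    def __mul__(self, other):
        # (a, b-bar)(c, d-bar) = (a*c, (a*d + b*c)-bar); no b*d term.
        return Pair(self.a * other.a, self.a * other.b + self.b * other.a)

    def __eq__(self, other):
        return (self.a, self.b) == (other.a, other.b)

zero, one = Pair(0, 0), Pair(1, 0)
assert one * Pair(2, 7) == Pair(2, 7)   # (1, 0-bar) is the identity

x, y = Pair(0, 3), Pair(0, 5)           # nonzero elements from the barred copy
assert x * y == zero                    # zero divisors!
print("product of two nonzero elements is zero")
```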
As I mentioned before, the primal example of a ring is the integers $\mathbb{Z}$. So far we’ve got an ordered abelian group structure on the set of (equivalence classes of) pairs of natural numbers.
Now we need to add a multiplication that distributes over the addition.
First we'll figure out how to multiply natural numbers. This is pretty much as we expect. Remember that a natural number is either ${}0$ or $S(b)$ for some number $b$. We define

$a\cdot 0=0$

$a\cdot S(b)=(a\cdot b)+a$
where we’ve already defined addition of natural numbers.
Firstly, this is commutative. This takes a few inductions. First show by induction that ${}0$ commutes with everything, then show by another induction that if $a$ commutes with everything then so
does $S(a)$. Then by induction, every number commutes with every other. I’ll leave the details to you.
Similarly, we can use a number of inductions to show that this multiplication is associative — $(a\cdot b)\cdot c=a\cdot(b\cdot c)$ — and distributes over addition of natural numbers — $a\cdot(b+c)=a
\cdot b+a\cdot c$. This is extremely tedious and would vastly increase the length of this post without really adding anything to the exposition, so I’ll again leave you the details. I’m reminded of
something Jeff Adams said (honest, I’m not trying to throw these references in gratuitously) in his class on the classical groups. He told us to verify that the commutator in an associative algebra
satisfies the Jacobi identity because, “It’s long and tedious and doesn’t add much, but I had to do it when I was a grad student, so now you’re grad students and it’s your turn.”
So now these operations — addition and multiplication — of natural numbers make $\mathbb{N}$ into what some call a “semiring”. I prefer (following John Baez) to call it a “rig”, though: a “ring
without negatives”. We use this to build up the ring structure on the integers.
Recall that the integers are (for us) pairs of natural numbers considered as “differences”. We thus define the product
$(a,b)\cdot(c,d)=(a\cdot c+b\cdot d,a\cdot d+b\cdot c)$
Our life now is vastly easier than it was above: since we know addition and multiplication of natural numbers is commutative, the above expression is manifestly commutative. No work needs to be done!
Associativity is also easy: just set up both triple products and expand out, checking that each term is the same by the rig structure of the natural numbers. Similarly, we can check distributivity,
that $(1,0)$ acts as an identity, and that the product of two integers is independent of the representing pair of natural numbers.
Lastly, multiplication by a positive integer preserves order. If $a<b$ and $0<c$ then $ac<bc$. Together all these properties make the integers as we’ve defined them into a commutative ordered ring
with unit. The proofs of all these things have been incredibly dull (I actually did them all today just to be sure how they worked), but it’s going to get a lot easier soon.
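The recursive definitions above translate directly into code. This sketch builds natural-number multiplication from the successor rule, then multiplies integers represented as pairs $(a,b)$ standing for the difference $a-b$:

```python
# Sketch: multiplication of naturals by recursion on the successor,
# then multiplication of integers represented as pairs (a, b) ~ a - b.

def nat_mul(a, b):
    """a * 0 = 0;  a * S(b) = (a * b) + a."""
    if b == 0:
        return 0
    return nat_mul(a, b - 1) + a      # here b = S(b - 1)

def int_mul(p, q):
    """(a, b) * (c, d) = (a*c + b*d, a*d + b*c)."""
    (a, b), (c, d) = p, q
    return (nat_mul(a, c) + nat_mul(b, d), nat_mul(a, d) + nat_mul(b, c))

# (2, 5) represents -3 and (7, 3) represents 4; the product represents -12.
a, b = int_mul((2, 5), (7, 3))
assert a - b == -12
print(a - b)   # -12
```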
I want to tie up a few loose ends about Rubik’s group today.
We can fit Rubik’s group into a sequence that more clearly shows all the structure I’m talking about. Specifically, it’s a subgroup of the bigger group I mentioned back at the beginning. We can
restate the three restrictions as saying the maneuvers in Rubik’s group are those in the kernel of a certain homomorphism. So, first let’s write down the big group.
The unrestricted edge and corner groups are just wreath products, which I’ll write out as semidirect products. Without restrictions, these two groups are independent, so we just have a direct product
to give the unrestricted Rubik’s group.
$\bar{G}=\left(\mathbb{Z}_2^{12}\rtimes S_{12}\right)\times\left(\mathbb{Z}_3^8\rtimes S_8\right)$
I’ll write $(((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c))$ for a generic element of this group. Each part of this list corresponds to part of the expression for $\bar{G}$ above.
Now we want to add up all the edge flips and make them come out to zero. We can write this sum as a homomorphism:

$e(((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c))=e_1+e_2+...+e_{12}$

where the sum is taken in the group $\mathbb{Z}_2$. You should be able to verify that this actually is a homomorphism. Similarly, we want the sum of the total twists as a homomorphism:

$c(((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c))=c_1+c_2+...+c_8$

where the sum is taken in $\mathbb{Z}_3$.
Finally, the permutation condition uses the “signum” homomorphism from a symmetric group to $\mathbb{Z}_2$. It assigns the value ${}0$ to even permutations and the value $1$ to odd ones. We use it to
write the last restriction as a homomorphism:
$p(((e_1,e_2,...,e_{12}),\sigma_e),((c_1,c_2,...,c_8),\sigma_c))={\rm sgn}(\sigma_e)+{\rm sgn}(\sigma_c)$
Now we assemble our overall restriction homomorphism as the direct product of these three:
$f=e\times c\times p:\bar{G}\rightarrow\mathbb{Z}_2\times\mathbb{Z}_3\times\mathbb{Z}_2$
and get the short exact sequence:
$\mathbf{1}\rightarrow G\rightarrow\bar{G}\rightarrow^{f}\mathbb{Z}_2\times\mathbb{Z}_3\times\mathbb{Z}_2\rightarrow\mathbf{1}$
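Since Rubik's group is exactly the kernel of $f$, membership can be tested by evaluating the three component homomorphisms. A sketch of that test, assuming an element is given as the tuple above (edge flips, edge permutation, corner twists, corner permutation), with permutations written as tuples of images:

```python
# Sketch: test whether an element of the unrestricted group lies in the
# kernel of f = e x c x p, i.e. in Rubik's group G.

def sgn(perm):
    """Parity of a permutation given as a tuple of images: 0 if even, 1 if odd."""
    perm = list(perm)
    swaps = 0
    for i in range(len(perm)):
        while perm[i] != i:           # sort into place, counting transpositions
            j = perm[i]
            perm[i], perm[j] = perm[j], perm[i]
            swaps += 1
    return swaps % 2

def f(edges, sigma_e, corners, sigma_c):
    """(total flip mod 2, total twist mod 3, sgn(sigma_e) + sgn(sigma_c) mod 2)."""
    return (sum(edges) % 2, sum(corners) % 3, (sgn(sigma_e) + sgn(sigma_c)) % 2)

def in_rubiks_group(edges, sigma_e, corners, sigma_c):
    return f(edges, sigma_e, corners, sigma_c) == (0, 0, 0)

ident_e, ident_c = tuple(range(12)), tuple(range(8))

# The identity maneuver is certainly in G:
assert in_rubiks_group((0,) * 12, ident_e, (0,) * 8, ident_c)

# A single flipped edge violates the first restriction, so it is not:
assert not in_rubiks_group((1,) + (0,) * 11, ident_e, (0,) * 8, ident_c)
```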
Commenter Dan Hoey brought up where my fundamental operations come from. To be honest, these four are just ones I remember off the top of my head. He’s right, though, that there are systematic ways
of coming up with maneuvers that perform double-flips, double-twists, and $3$-cycles. I’ll leave you to read his comment and work out yourself that you can realize four such basic maneuvers as
commutators — products of elements of the form $m_1m_2m_1^{-1}m_2^{-1}$. This means that the commutator subgroup $\left[G,G\right]$ of Rubik’s group is almost all of $G$ itself. It just misses a
single twist. In fact, $G/\left[G,G\right]\cong\mathbb{Z}_2$ — Rubik’s group is highly non-abelian.
Incidentally, this approach to the cube is not the first one I worked out, but it’s far more elegant than my pastiche of particular tools. I picked it up back when I was at the University of Maryland
from a guy who had worked it out while he was at Yale as a graduate student back when the cube first came out: Jeff Adams.
I wanted to see how a book I’d checked out from our library treated a certain topic, hoping that it might have a theorem all ready for me to use. Unfortunately I didn’t remember the authors nor
exactly what it was called, but I did remember what it looked like. So I went to the library and tried to find it with no luck. As a fallback, I asked Paul.
Paul Lukasiewicz is our librarian, and has been around forever. People I know who were students here in the early ’80s thought of him as omniscient already. You can give him a title and author of any
book in the library and he can tell you what color it is off the top of his head.
Unfortunately it doesn’t work in reverse, so I was driven back to the online directory of the library system here to search through hundreds of books on the topic to find the one I remembered. They
really need to make those things searchable on appearance.
Dr. Adams just sent me a link to an explanation of the technical details for mathematicians in other fields, but it’s still somewhat readable.
I also have been reading the slides for Dr. Vogan's talk, The Character Table for $E_8$, or How We Wrote Down a 453,060 x 453,060 Matrix and Found Happiness. There's also an audio recording available (7MB mp3). Incidentally, I'd have gone for The Split Real Form of $E_8$, or How We Learned to Stop Worrying and Love the Character Table, but it's all good. This talk actually manages to be very
generally accessible, and includes all sorts of pretty pictures. Those of you who wanted more visuals than I provided in my rough overview might like to check that one out.
Together, these two are my core that, together with some input from Dr. Zuckerman I’ll be trying to break down into smaller chunks. I highly advise reading at least Vogan’s slides and preferably also
Adams’ notes.
I also want to respond to a comment basically asking, “so why the heck should we care about this?” It’s an excellent question, and yet another one the newspaper reports really glossed over without
taking seriously. I’ll admit that I glossed it over at first too, since I think this stuff is just too elegant not to love. Still, I’ve mulled this over not just as applies to these calculations, but
with regard to a lot of mathematics at this level (thus qualifying the “why we care” as a rant).
This sort of question from a non-mathematician almost always is looking for an engineering response. “What’s it good for?” means, “what can we build with it?” Honestly I have to say “not much”.
Representations of Lie groups do have their uses, though, and I can point out a few things they have already been good for.
As indicated in Dr. Vogan’s slides, representations of the one-dimensional Lie groups are concerned with change through time, particularly periodic changes. This means that they’re exceptionally good
at talking about periodic phenomena, like waves. Sound waves, light waves, electrical circuits, vibrating strings — they’re all one-dimensional waves. So what? So every time you use the graphic
equalizer on your stereo the electronics are taking the signal and performing a fast Fourier transform on it. This turns a function on the line (Lie group) into a function on the space of all
representations of the group; that’s the “unitary dual” that Dr. Adams refers to. Then you can adjust the periodic components and reconstruct a new function with much fatter bass, or whatever your
tastes are.
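The graphic-equalizer picture can be sketched with a toy transform. This is a from-scratch discrete Fourier transform (real equalizers use the fast Fourier transform and much longer signal windows), just to show the decompose–adjust–reconstruct loop:

```python
# Toy "equalizer": decompose a signal into its periodic components with a
# naive DFT, boost the lowest-frequency component, and rebuild the signal.
import cmath
import math

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

# A signal with a low-frequency and a higher-frequency component.
signal = [math.sin(2 * math.pi * t / 8) + 0.5 * math.sin(2 * math.pi * 3 * t / 8)
          for t in range(8)]

spectrum = dft(signal)
spectrum[1] *= 2   # double the lowest-frequency bin ("fatter bass")...
spectrum[7] *= 2   # ...and its conjugate bin, so the signal stays real
bass_boosted = [z.real for z in idft(spectrum)]

# Only the low-frequency part was doubled; the other component is untouched.
expected = [2 * math.sin(2 * math.pi * t / 8) + 0.5 * math.sin(2 * math.pi * 3 * t / 8)
            for t in range(8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(bass_boosted, expected))
```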
The same sorts of things can be done in higher dimensions. Similar techniques revealed that you can’t hear the shape of a drum — there are differently-shaped membranes that have the same vibrational
characteristics. What are “orbitals” of electrons around an atomic nucleus (hazy memories of chemistry)? They’re representations of the Lie group $SO(3,\mathbb{R})$!
So what can we do with $E_8$? Nothing right now, but there’s plenty we can do (and have done) with representation theory in general.
There’s another reason (beyond the intrinsic beauty of the ideas) to work out the Atlas: more data means more patterns, and more patterns means more interrelationships between seemingly-distinct
fields. Quite a few of the greatest theorems in recent years have been saying that this field of mathematics over here and that one over there are “really” the same thing. Everyone knows that Andrew
Wiles solved Fermat’s Last Theorem, but what he really did was show that some things in algebraic geometry (the study of solution sets of polynomials) called “elliptic curves” are deeply related to
functions with a certain sort of periodicity called “modular forms”. If, as David Corfield asserts, mathematics proceeds by “telling stories”, then each field’s stories become allegories for the
other. Hard questions in one area might be translated into questions we know how to solve in the other.
So how does having a lot of data like the Atlas around help out? Because we discover a lot of these relationships from similar patterns in the data, and in many cases (though I hate to admit it)
through the same numbers showing up over and over. As just one example, I present the Monstrous Moonshine conjecture. The Monster is a finite, simple group — no nontrivial proper normal subgroups, so it can't be broken down into even a semidirect product of smaller groups — of order (brace yourself)

$808{,}017{,}424{,}794{,}512{,}875{,}886{,}459{,}904{,}961{,}710{,}757{,}005{,}754{,}368{,}000{,}000{,}000$

That's $8\times10^{53}$ elements being juggled around in an intricate symmetry. People sat down and calculated its character table, very much a similar project to the current one about $E_8$. And
then there’s a certain special modular form called $j$ that just happens to be related to it. How so? John McKay happened to see the $j$-function written out like this:
$j(\tau) = \frac{1}{q} + 744 + 196884q + 21493760q^2 + ...$
So? So he’d also seen the dimensions of representations of the Monster, which start with $1$, $196883$, $21296876$, and continue. Every single coefficient in the function came from dimensions of
representations of the Monster! And it was conjectured that the pattern continued. In fact it did. Twenty-some years ago, Frenkel, Lepowsky, and Meurman constructed a representation of the Monster
that made it clear, and their results are still echoing. One of my colleagues graduated last year and went on to Harvard by studying exactly the same sorts of connections.
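McKay's observation is easy to verify for the first couple of coefficients: each early $q$-coefficient of $j$ is a small sum of the dimensions of Monster representations ($1$, $196883$, $21296876$, ...):

```python
# Sketch: McKay's observation.  The first few dimensions of irreducible
# representations of the Monster, and the first few q-coefficients of j
# (setting the constant term 744 aside), line up as simple sums.
monster_dims = [1, 196883, 21296876]

j_coeff_q1 = 196884       # coefficient of q   in j(tau)
j_coeff_q2 = 21493760     # coefficient of q^2 in j(tau)

assert j_coeff_q1 == monster_dims[0] + monster_dims[1]
assert j_coeff_q2 == monster_dims[0] + monster_dims[1] + monster_dims[2]
print("196884 = 1 + 196883;  21493760 = 1 + 196883 + 21296876")
```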
And how did it start? By recognizing patterns in a mountain of raw data about representations. What unsolved problems might be translatable into representation theory by reflections found through the
Atlas data? Maybe the Navier-Stokes equations, which would give a better understanding of fluid flows and aerodynamics. Maybe the Riemann hypothesis, which would lead to a better understanding of the
distributions of prime numbers, which would have an impact on modern cryptography. Who knows?
Oh, and one more thing. How did someone find the Monster in the first place? Well it turns out to be a group of symmetries of a certain collection of points tiling eight-dimensional space. What
collection of points? The “Leech lattice”. And you’ve already seen it: that picture of the $E_8$ root system in all the news reports is the basic cell, just like a square is the basic cell of a
checkerboard tiling of the plane. And it all comes back around again.
How the heck can you not care about this stuff?
[EDIT: I've found out I was wrong about how the Monster relates to $E_8$. More info in the link.]
• Recent Posts
• Blogroll
• Art
• Astronomy
• Computer Science
• Education
• Mathematics
• Me
• Philosophy
• Physics
• Politics
• Science
• RSS Feeds
• Feedback
Got something to say? Anonymous questions, comments, and suggestions at
• Subjects
• Archives | {"url":"http://unapologetic.wordpress.com/2007/03/","timestamp":"2014-04-17T12:39:36Z","content_type":null,"content_length":"87798","record_id":"<urn:uuid:b311e395-7562-4bf2-bc4a-d462c809d3d2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
Woodside, NY
New York, NY 10075
Yale Grad For Math, Science, Spanish, English and SAT Tutoring
...I have countless hours of experience as a peer tutor throughout high school and college, and have also worked with a variety of age groups (K-12). I specialize in biology/biochemistry (any
(geometry, pre-algebra, algebra I, algebra II, and...
Offering 10+ subjects including algebra 1, algebra 2 and geometry | {"url":"http://www.wyzant.com/geo_Woodside_NY_Math_tutors.aspx?d=20&pagesize=5&pagenum=5","timestamp":"2014-04-19T00:32:49Z","content_type":null,"content_length":"61515","record_id":"<urn:uuid:b40f566f-23ea-4355-8980-b0ad477ec04a>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00425-ip-10-147-4-33.ec2.internal.warc.gz"} |
WOOLY WORM LAB
Betty Ann Wonderly
This is an adaptation of a lab published by Ken House in American Biology Teacher Volume 48, #4, April 1986. This article tells how he uses the lab and gives many excellent tips on introducing the
lab and generating excitement.
There is also a bibliography that is useful. If you do not have a copy of a Chi Square chart you can find one in most any statistics textbook. The one I use came from the old BSCS Interaction of
Experiments and Ideas third edition.
I usually collect data and do calculations as a class then let them work in groups to answer the questions. Sometimes we collect the data as a class and then work in groups to do the calculations.
Either way works well. Depends on how much time you want to allow for the exercise.
In this lab you will study natural selection. During the exercise you will represent a predatory bird who feeds on wooly worms. The wooly worms are pieces of colored yarn that have been randomly
distributed over an area on the school grounds. Some of the "wooly worms" will blend into the habitat while others will be easy to find. The colored yarn will be tallied and a Chi-square test will be
used to determine if the wool pieces are collected randomly or by a selection process. It is hoped that this lab activity will add to your understanding of natural selection.
1. During a 4-5 minute "feeding" period you will collect as many wooly worms as possible in the collection area.
2. Return to the lab and list the various colors of "worms" in column A.
3. Tally the number of each kind of worm you "ate" and record the numbers in Table 1 under Column B (Observed Number)
4. Total this column of figures and then divide by the number of kinds of worms available. This will give the average number of worms of each kind that you would expect to find if you collected them
randomly. This value should be entered in the C column as Expected number.
5. If the wool pieces were collected randomly, then the number of each color collected should be nearly equal. If the collection is random, then the differences between observed numbers and expected
numbers could be attributed to chance. We can propose a Null Hypothesis that states that there will be no significant difference in the numbers of each color yarn collected. If the null
hypothesis is not supported by the data, then selection of some colors over others must have occurred.
6. You will use the Chi-square test to examine the differences between the number of worms expected and the actual number you collected. The Chi-square test will tell you if the differences between
what you collected and what was expected are too large to be attributed to chance alone. That is, does the variance from the expected fall within statistical limits and still support the null
hypothesis? The Chi-square test cannot prove or disprove a hypothesis, but it can provide you with a statement of probability concerning the original null hypothesis.
7. Using a calculator, determine the differences between observed and expected values. Enter this number in Column D. Square this difference and enter in Column E. Divide this squared difference by
the expected value from column C. Enter this value in Column F. Total up all the values in Column F to get the Chi-square value.
8. After the Chi-square value is determined, refer to a Chi-square Distribution Table. This table tells how much variance can be tolerated before the original null hypothesis must be accepted or
rejected. Most biologists agree that Chi-square values above the 0.05, or 5%, level of probability would tend to support the null hypothesis by indicating that the numbers of each color yarn
observed do not vary significantly from the expected. However, values at or below the 0.05 level of probability suggest that the numbers that you observed are not likely to result from chance
factors alone. Therefore, such observed numbers suggest that certain yarn colors are being selected over others. Thus, the original null hypothesis must be rejected.
9. Determine the level of probability (p) for your Chi-square value. The degrees of freedom used are always one less than the number of events or colors observed. If you used 11 colors then use 10
degrees of freedom to determine the level of probability.
10. Answer the following questions:
□ What was your original null hypothesis?
□ What was your Chi-square value for this experiment?
□ How many degrees of freedom were there for this experiment?
□ What was the probability that your null hypothesis was acceptable?
□ Did you accept or reject your null hypothesis?
□ What does this mean?
□ Which color worm had the best chance of being eaten?
□ Which color worm had the best chance of survival?
□ What will happen to the gene frequencies for the various colors of worms?
□ What determines which genes will be an advantage and which ones will be a disadvantage?
□ How does this experiment illustrate Darwin's theory of Natural Selection?
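The tally-and-divide procedure in steps 4 and 7 can be sketched in code. This is a hypothetical helper (the function name and the example counts are not part of the original lab):

```python
def chi_square(observed):
    """Chi-square statistic when every color has the same expected count (Columns B-F)."""
    expected = sum(observed) / float(len(observed))               # Column C
    return sum((o - expected) ** 2 / expected for o in observed)  # total of Column F

# e.g. 30 red, 10 green, and 20 blue "worms" collected (made-up counts)
print(round(chi_square([30, 10, 20]), 2))  # 10.0
```

As in step 9, the result is then compared against a Chi-square table at len(observed) - 1 degrees of freedom (here, 2).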
Math Forum Discussions
Topic: How to simplify an expression ...
Replies: 4 Last Post: Oct 3, 2012 10:29 PM
How to simplify an expression ...
Posted: Oct 2, 2012 2:47 PM
Dear Colleagues,
I've noticed through the years that many students had
considerable difficulty simplifying the expression below
on a test in Intermediate Algebra:
[ (6 r^8 t) /(-3 r^2) ]^3
I'm interested in seeing how you think students should
approach this problem, and detailed steps they should
take, along with precise assumptions/formulas they
should know and be able to apply correctly at each step.
Thank you in advance.
John Lee
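For reference, the usual chain of exponent rules, spot-checked with exact rational arithmetic (a sketch, not part of the original post; the helper names are made up):

```python
from fractions import Fraction

# [(6 r^8 t)/(-3 r^2)]^3
#   = (-2 r^(8-2) t)^3      divide the coefficients, subtract the exponents
#   = (-2)^3 (r^6)^3 t^3    power of a product
#   = -8 r^18 t^3           power of a power: multiply the exponents

def original(r, t):
    return ((6 * r**8 * t) / (-3 * r**2)) ** 3

def simplified(r, t):
    return -8 * r**18 * t**3

# exact Fraction arithmetic, so equality is a genuine check, not a float comparison
for r, t in [(Fraction(2), Fraction(3)), (Fraction(-1, 2), Fraction(5))]:
    assert original(r, t) == simplified(r, t)
print("matches: -8 r^18 t^3")
```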
Date Subject Author
10/2/12 How to simplify an expression ... DCJLEE@AOL.COM
10/2/12 Re: How to simplify an expression ... Dave L. Renfro
10/3/12 Re: How to simplify an expression ... Peter Duveen
10/3/12 Re: How to simplify an expression ... Dave L. Renfro
10/3/12 Re: How to simplify an expression ... GS Chandy
If lines y=mx+b and x=y+bm intersect at a degrees angle
Posted by Marcab (Verbal Forum Moderator) [#permalink] 16 Nov 2012, 02:45

Difficulty: 55% (medium). Question stats: 52% correct, based on 71 sessions.

If lines y = mx + b and x = y + bm intersect at a degrees angle (where a < 90), what is the value of angle a?

(1) m = 2
(2) m = b

Source: Jamboree
Spoiler: OA

Last edited on 16 Nov 2012, 04:26; edited 1 time in total. Renamed the topic.
Re: If lines y=mx+b [#permalink] 16 Nov 2012, 02:59
Posted by MacFauz (Moderator)

Marcab wrote:
If lines y=mx+b and x=y+bm intersect at a degrees angle (where a<90), what is the value of angle a?
(1) m=2
(2) m=b
Source: Jamboree

To find the angle between two lines, we need to know the slope of both lines. But as shown in the figure, this angle "a" can be either "x" or "y". Since we are given that a < 90, we can find out which angle is required, because x + y = 180. The slope of the second line is obviously 1. So the question is basically asking for the value of m.
1) Sufficient.
2) We get y = bx + b. b is still unknown. Insufficient.
The answer is hence A.

Attachment: untitled.JPG [ 3.39 KiB | Viewed 2039 times ]
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 23 Nov 2012, 05:12

Marcab wrote:
If lines y=mx+b and x=y+bm intersect at a degrees angle (where a<90), what is the value of angle a?
(1) m=2
(2) m=b
Source: Jamboree

We can find the angle of intersection between any 2 lines if we know the values of their individual slopes.
For x = y + bm, the slope is 1; for y = mx + b, the slope is "m".

(1) tan(a) = (m - 1)/(1 + 1*m); a < 90 when tan(a) is positive, and since m - 1 > 0 there is no need to add to or subtract from 180.
Since statement 1 gives m = 2, it is sufficient.

(2) Test y = x + 1, y = 2x + 2 and y = 3x + 3 against y = x - 1, y = x - 4, and y = x - 9:
y = x + 1 and y = x - 1 give 0 degrees;
y = 2x + 2 and y = x - 4 give a different value... INSUFFICIENT.
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 15 Feb 2013, 01:51
Posted by Sachin9 (Director)

Folks,
I understand A is sufficient to find the angle... but how do you find the angle?

hope is a good thing, maybe the best of things. And no good thing ever dies.
Who says you need a 700? Check this out: http://gmatclub.com/forum/who-says-you-need-a-149706.html#p1201595
My GMAT Journey: end-of-my-gmat-journey-149328.html#p1197992
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 15 Feb 2013, 02:21
Posted by Marcab (Verbal Forum Moderator)

m = tan x.
So x = tan inverse(m).
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 15 Feb 2013, 02:58
Posted by Sachin9 (Director)

Marcab wrote:
m = tan x.
So x = tan inverse(m).

But m is the slope of which of the two lines that intersect?
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 20 Feb 2013, 15:07

Sachin9 wrote:
Marcab wrote:
m = tan x.
So x = tan inverse(m).
But m is the slope of which of the two lines that intersect?

m is the slope of the line y = mx + b.
If you look at the equation of this line, you will find m to be the slope of the line and b to be the intercept on the y-axis (where x = 0). This is called the slope-intercept form of the line equation, and memorizing it will help you deal with such questions. The form of such lines is y = mx + b.
The other line, x = y + bm, can be written in a similar fashion as y = x - bm.
Going by the above stated form, since the coefficient of x is 1, the slope is 1. The y-intercept of the line is -bm.
Hope that clarifies your doubt.

Thanks,
Instructor at Aspire4GMAT
Visit us at http://www.aspire4gmat.com
Press Kudos if this helps!
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 05 Mar 2013, 17:25
Posted by abhisingla

Angle between 2 lines is m1-m2/1+m1m2 ..
We already know the slope of line x = y + bm.
Option A tells the slope of line A - so sufficient, but option B tells nothing, so not sufficient.
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 31 Jul 2013, 23:34
Posted by stne (Manager)

abhisingla wrote:
Angle between 2 lines is m1-m2/1+m1m2 ..
We already know the slope of line x = y + bm.
Option A tells the slope of line A - so sufficient, but option B tells nothing, so not sufficient.

How do we apply this formula?
In this question m1 = 2 and m2 = 1; does that mean the angle is \frac{2-1}{1+2\cdot 1} = \frac{1}{3}?
Is there a \tan^{-1} before this, so that the angle between two lines having slopes m1 and m2 is \tan^{-1}\frac{m1-m2}{1+m1m2}?
In short, how do we actually determine the angle between 2 lines given their slopes?

- Stne
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 31 Jul 2013, 23:48
Posted by Zarrolou

stne wrote:
How do we apply this formula?
In short, how do we actually determine the angle between 2 lines given their slopes?

We do not care about the actual measure of the angle.
When we have to determine the angle at which two lines intersect, the ONLY thing we have to know is the slope of each line.
With statement 1 we get (note that I consider only the slopes of the two equations):
line 1: y = 2x
line 2: y = x
Once we have those, we are able to determine the measure of the angles at the point of intersection. Those angles are fixed and do not change, hence we can determine their measure, but in this DS question we do not care about the actual number.
With statement 1, can you answer the question? YES, that's enough.
To determine the angle we would have to use formulas that are beyond the scope of the GMAT ("tan", for example), but the point here, as I said above, is not to find the measure.
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 01 Aug 2013, 00:04
Posted by stne (Manager)

Zarrolou wrote:
We do not care about the actual measure of the angle.
[...]
With statement 1, can you answer the question? YES, that's enough.
To determine the angle we would have to use formulas that are beyond the scope of the GMAT ("tan", for example), but the point here, as I said above, is not to find the measure.

That definitely helps! Other solutions involving tan kept me wondering whether it was indeed beyond scope or not. +1

- Stne
Re: If lines y=mx+b and x=y+bm intersect at a degrees angle [#permalink] 01 Aug 2013, 00:30

Marcab wrote:
If lines y=mx+b and x=y+bm intersect at a degrees angle (where a<90), what is the value of angle a?
(1) m=2
(2) m=b
Source: Jamboree

The correct answer is A.

Since the slope of y = mx + b is m, let it be m1,
and the slope of y = x - mb is 1, let it be m2.
Now tan(a) = (m1 - m2)/(1 + m1*m2),
i.e. tan(a) = (m - 1)/(1 + m*1) = (m - 1)/(m + 1).
Hence knowing the value of m will give us the answer.
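Pulling the thread together, the arithmetic the posters describe can be sketched as follows (a hypothetical helper written for illustration; as Zarrolou notes, the GMAT itself never requires computing an arctangent):

```python
import math

def acute_angle_between(m1, m2):
    """Acute angle, in degrees, between two non-perpendicular lines with slopes m1 and m2."""
    # tan(theta) = |(m1 - m2) / (1 + m1*m2)| gives the acute intersection angle
    return math.degrees(math.atan(abs((m1 - m2) / (1 + m1 * m2))))

# Statement (1): the slopes are m = 2 and 1, so tan(a) = 1/3
print(round(acute_angle_between(2, 1), 2))  # 18.43
```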
Coppell Statistics Tutor
Find a Coppell Statistics Tutor
I have a Ph.D. in engineering and I have passion to teach Mathematics and Science. I have gone through rigorous mathematics at college, grad school and during my Ph.D. I love to teach mathematics,
not to just solve problems, but also to make sure concepts and fundamentals are clear.
12 Subjects: including statistics, calculus, geometry, algebra 1
...I'd like to help you master your subject and keep or improve your current grades. I have learned how to work with and motivate many different personality and learning types. I tutor high school, introductory and first-year chemistry. I am NOT available to tutor organic chemistry or biochemistry.
4 Subjects: including statistics, chemistry, economics, vocabulary
...I look forward to working with you! Trigonometry, like Algebra and Geometry, is one of the base subjects for the higher-level math courses. As a math major, I had to be very grounded in these subjects.
41 Subjects: including statistics, chemistry, ASVAB, logic
...I work very hard to make learning meaningful and fun. As an educational psychologist, I have completed many hours of advanced coursework, and I am well-versed in the current research regarding
learning, memory, and instructional practices. I utilize this knowledge to identify underlying process...
39 Subjects: including statistics, chemistry, English, writing
...Because of the similarity of the problem-solving techniques, helping a student solve a problem in mechanics, for example, automatically helps them solve problems in another area such as electric circuits. I have often been told that I help students "until they have been helped." I have a Ph.D. in physics. I have been a scientist at a national lab as well as a physics professor.
25 Subjects: including statistics, chemistry, calculus, physics
Abstract for holden_tr161
Department of Engineering
Cambridge University Engineering Department Technical Report CUED/F-INFENG/TR161 + PhD Thesis
Sean Holden
September 1993
The study of connectionist networks has often been criticized for an overall lack of rigour, and for being based on excessively ad hoc techniques. Even though connectionist networks have now been the
subject of several decades of study, the available body of research is characterized by the existence of a significant body of experimental results, and a large number of different techniques, with
relatively little supporting, explanatory theory. This dissertation addresses the theory of generalization performance and architecture selection for a specific class of connectionist networks; a
subsidiary aim is to compare these networks with the well-known class of multilayer perceptrons.
After discussing in general terms the motivation for our study, we introduce and review the class of networks of interest, which we call Phi-networks, along with the relevant supervised training
algorithms. In particular, we argue that Phi-networks can in general be trained significantly faster than multilayer perceptrons, and we demonstrate that many standard networks are specific examples
of Phi-networks.
Chapters 3, 4 and 5 consider generalization performance by presenting an analysis based on tools from computational learning theory. In chapter 3 we introduce and review the theoretical apparatus
required, which is drawn from Probably Approximately Correct (PAC) learning theory. In chapter 4 we investigate the growth function and VC dimension for general and specific Phi-networks, obtaining
several new results. We also introduce a technique which allows us to use the relevant PAC learning formalism to gain some insight into the effect of training algorithms which adapt architecture as
well as weights (we call these self-structuring training algorithms). We then use our results to provide a theoretical explanation for the observation that Phi-networks can in practice require a
relatively large number of weights when compared with multilayer perceptrons. In chapter 5 we derive new necessary and sufficient conditions on the number of training examples required when training
a Phi-network such that we can expect a particular generalization performance. We compare our results with those derived elsewhere for feedforward networks of Linear Threshold Elements, and we extend
one of our results to take into account the effect of using a self-structuring training algorithm.
In chapter 6 we consider in detail the problem of designing a good self-structuring training algorithm for Phi-networks. We discuss the best way in which to define an optimum architecture, and we
then use various ideas from linear algebra to derive an algorithm, which we test experimentally. Our initial analysis allows us to show that the well-known weight decay approach to self-structuring
is not guaranteed to provide a network which has an architecture close to the optimum one. We also extend our theoretical work in order to provide a basis for the derivation of an improved version of
our algorithm.
Finally, chapter 7 provides conclusions and suggestions for future research.
(ftp:) holden_tr161.ps.Z (http:) holden_tr161.ps.Z
PDF (automatically generated from original PostScript document - may be badly aliased on screen):
(ftp:) holden_tr161.pdf | (http:) holden_tr161.pdf
Re: Latex
yeah, we can type matrix now!
\displaystyle to enlarge the []
\left[ to define left bracket
\begin{array}{c} to define matrix centered
1,0,0\\0,1,0\\0,0,1 entries and rows
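Putting those commands together, the 3x3 identity matrix would be typed as something like the following (note: within \begin{array}, the entries in a row are separated by & rather than commas, and the column spec {ccc} centers three columns):

```latex
\displaystyle \left[ \begin{array}{ccc}
1 & 0 & 0 \\
0 & 1 & 0 \\
0 & 0 & 1
\end{array} \right]
```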
Re: Latex
Give me time, I am still working on it.
(BTW can someone give me 30 hours in a day?)
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
I am making progress of sorts. The trouble is that I can't test the images without going "live".
So sometime over Easter I will have to do some development work "live" on the forum. So, if you notice things are weird then it is probably me
In which case you can hang around and laugh at my attempts, or come back later. Your choice.
Oh, and feel free to tell me of any weirdness you notice. I may know all about it, in which case I will probably say "I know", but if not then it may help me debug things.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
More Testing
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
And even more testing:
Well done MathIsFun!
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Latex
Thank you! It took a few days to sort out the interactions of the math tag, code tag and hide tag.
I *think* it is fine now, and have done lots of testing, but it is kinda impossible to test for everything ...
BTW I made one other small but very useful improvement
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
I noticed that improvement when I tried to copy your latex. You are talking about how you can click on latex and get to the code, right? Great idea.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Latex
More testing:
Works really well for lining up the equals sign too.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Real Member
Re: Latex
Can you do something like this?:
IPBLE: Increasing Performance By Lowering Expectations.
Real Member
Re: Latex
IPBLE: Increasing Performance By Lowering Expectations.
Re: Latex
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Latex
MIF you are so creative! How did you do it?
Re: Latex
Just persistence.
I use the standard latex that comes with linux, so the only creative thing I have done is to integrate it with the forum.
So when you type some latex, my software grabs the text, puts it into a file with some other standard commands, runs the latex command on the server, takes the results and then has to reformat them
for the forum, then copies them to the right location and hey presto you see the image!
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
Programme mastering, hmmm...
Re: Latex
Mastering? I wish!
More like stumbling around in the dark looking for the light switch. Eventually the light goes on.
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
Stumbling, of course, because you shot yourself in the foot
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Latex
What a great list!
I like Postscript: "foot bullets 6 locate loadgun aim gun shoot showpage"
And Java: "You locate the Gun class, but discover that the Bullet class is abstract, so you extend it and write ... and call the doShoot method on the instance of the Gun class ... and the instance
of Bullet is passed to the Foot. But this causes an IllegalHitByBullet exception to be thrown, and you die."
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
Re: Latex
This should be the one for Perl:
You write the algorithm to shoot yourself in the foot. A week later, when you go to run it, you realize there is a bug. Looking at the source code, you have no idea what it actually does anymore.
(Perl is often called a 'Write-only' language)
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: Latex
Re: Latex
Ahaaaa! A space can be added!
Try this one:
Re: Latex
is still unavailable.
Re: Latex
JCL is the "Job Control Language" for IBM Mainframes. It has statements like "//CTRLTABL DD DISP=SHR,DSN=SYS1.FOCUS.MSO.MSOPROF(CTRLTABL)"
I was advised by someone that the way to write JCL was to find someone else who had managed to do the same thing and copy it. And they were serious.
The entry in "shoot in the foot" for JCL is: "You shoot yourself in the head just thinking about it."
"The physicists defer only to mathematicians, and the mathematicians defer only to God ..." - Leon M. Lederman
188 helpers are online right now
75% of questions are answered within 5 minutes.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/users/zaqewwe/asked","timestamp":"2014-04-16T22:50:58Z","content_type":null,"content_length":"84060","record_id":"<urn:uuid:0257add1-8805-463b-b7cf-e7622556a4a4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00062-ip-10-147-4-33.ec2.internal.warc.gz"} |
Line Equations Videos
Finding help with line equations is now easier. Below is the list of all the video tutorials we have on line equations. Each video clip helps you better understand the subject matter, and seeing different problems makes you familiar with the topic. Our line equations video tutorials give a brief but to-the-point review of line equations, so you no longer need to feel overwhelmed by line equations homework and tests. Use TuLyn for help with line equations homework, math problems, and math tips.
Enjoy our video tutorials and improve your math grades.
Finding The Equation Of A Line Given Two Points Video Clip
Finding The Equation Of A Line Given Two Points 2 Video Clip
Finding The Equation Of A Line Given Two Points 3 Video Clip
Finding The Equation Of A Line Given Two Points 4 Video Clip
Finding The Equation Of A Line Given Two Points 5 Video Clip
Finding The Equation Of A Line Given Two Points 6 Video Clip
Finding The Equation Of A Line Given Two Points 7 Video Clip
Finding The Equation Of A Line Given Two Points 8 Video Clip
Finding The Equation Of A Line Given Two Points 9 Video Clip
Finding The Equation Of A Line Perpendicular To Another Line Video Clip
Midpoint Formula Video Clip
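As a companion to the "equation of a line given two points" clips, the standard procedure can be sketched in a few lines of code (a hypothetical helper, not from TuLyn's videos):

```python
def line_through(p1, p2):
    """Slope-intercept form y = mx + b of the non-vertical line through p1 and p2."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line x = %r has no slope-intercept form" % x1)
    m = (y2 - y1) / (x2 - x1)   # slope from the two points
    b = y1 - m * x1             # intercept, from y = mx + b at (x1, y1)
    return m, b

print(line_through((1, 2), (3, 6)))  # (2.0, 0.0), i.e. y = 2x
```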
[Python-Dev] sum()
Tim Peters tim.peters at gmail.com
Sat Mar 12 05:02:55 CET 2005
FYI, there are a lot of ways to do accurate fp summation, but in
general people worry about it too much (except for those who don't
worry about it at all -- they're _really_ in trouble <0.5 wink>).
One clever way is to build on the fact that whenever |x| and |y| are within a
factor of 2 of each other, x+y is exact in 754 arithmetic. So you
never lose information (not even one bit) when adding two floats with
the same binary exponent. That leads directly to this kind of code:
from math import frexp

class Summer:
    def __init__(self):
        self.exp2sum = {}

    def add(self, x):
        while 1:
            exp = frexp(x)[1]
            if exp in self.exp2sum:
                x += self.exp2sum.pop(exp)  # exact!
            else:
                break
        self.exp2sum[exp] = x  # trivially exact

    def total(self):
        items = self.exp2sum.items()
        items.sort()
        return sum((x for dummy, x in items), 0.0)
exp2sum maps a binary exponent to a float having that exponent. If
you pass a sequence of fp numbers to .add(), then ignoring
underflow/overflow endcases, the key invariant is that the exact
(mathematical) sum of all the numbers passed to add() so far is equal
to the exact (mathematical) sum of exp2sum.values(). While it's not
obvious at first, the total number of additions performed inside add()
is typically a bit _less_ than the total number of times add() is called.
More importantly, len(exp2sum) can never be more than about 2K. The
worst case for that is having one input with each possible binary
exponent, like 2.**-1000 + 2.**-999 + ... + 2.**999 + 2.**1000. No
inputs are like that in real life, and exp2sum typically has no more
than a few dozen entries.
total() then adds those, in order of increasing exponent == in order
of increasing absolute value. This can lose information, but there is
no bad case for it, in part because there are typically so few
addends, and in part because that no two addends have the same binary
exponent implies massive cancellation can never occur.
Playing with this can show why most fp apps shouldn't worry most
often. Example:
Get a million floats of about the same magnitude:
>>> xs = [random.random() for dummy in xrange(1000000)]
Sum them accurately:
>>> s = Summer()
>>> for x in xs:
... s.add(x)
No information has been lost yet (if you could look at your FPU's
"inexact result" flag, you'd see that it hasn't been set yet), and the
million inputs have been squashed into just a few dozen buckets:
>>> len(s.exp2sum)
22
>>> from pprint import pprint
>>> pprint(s.exp2sum)
{-20: 8.8332388070710977e-007,
-19: 1.4206079529399673e-006,
-16: 1.0065260162672729e-005,
-15: 2.4398649189794064e-005,
-14: 5.3980784313178987e-005,
-10: 0.00074737138777436485,
-9: 0.0014605198999595448,
-8: 0.003361820812962546,
-7: 0.0063811680318408559,
-5: 0.016214300821313588,
-4: 0.044836286041944229,
-2: 0.17325355843673518,
-1: 0.46194788522906305,
3: 6.4590200674982423,
4: 11.684394209886134,
5: 24.715676913177944,
6: 49.056084672323166,
10: 767.69329043309051,
11: 1531.1560084859361,
13: 6155.484212371357,
17: 98286.760386143636,
19: 393290.34884990752}
Add those 22, smallest to largest:
>>> s.total()
Add the originals, "left to right":
>>> sum(xs)
So they're the same to about 14 significant decimal digits. No good
if exact pennies are important, but far more precise than real-world measurements anyway.
How much does sorting first help?
>>> xs.sort()
>>> sum(xs)
Not much! It actually moved a bit in the wrong direction (which is
unusual, but them's the breaks).
Using decimal as a (much more expensive) sanity check:
>>> import decimal
>>> t = decimal.Decimal(0)
>>> for x in xs:
... t += decimal.Decimal(repr(x))
>>> print t
Of course <wink> Summer's result is very close to that.
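[Editorial note, not part of the original post: the rounding behavior discussed above is easy to demonstrate with math.fsum, which was added to Python in 2.6, a few years after this thread, and computes a correctly rounded sum of floats.]

```python
from math import fsum

# A small pathological case: naive left-to-right summation absorbs the
# 1.0 into 1e16 (whose spacing between adjacent floats is 2.0), and the
# subsequent subtraction cancels everything; fsum keeps the 1.0.
xs = [1e16, 1.0, -1e16]
print(sum(xs))   # naive left-to-right: 0.0
print(fsum(xs))  # correctly rounded:   1.0
```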
More information about the Python-Dev mailing list
Patent application title: ARCHITECTURAL PATTERN DETECTION AND MODELING IN IMAGES
Systems and methods are provided to facilitate architectural modeling. In one aspect, repetitive patterns are automatically detected and analyzed to generate modeled structural images such as
building facades. In another aspect, structural symmetry is analyzed to facilitate architectural modeling and enhanced image generation.
A method, comprising: extracting initial sample portions from images as potential repetitive patterns; retaining selected patterns from the potential repetitive patterns that are determined to be
actual repetitive patterns; clustering the actual repetitive patterns from sub-group domains into aggregated group domains by using information from a transformation domain and a spatial domain; and
extracting one or more shapes for the actual repetitive patterns based in part on the information from the transformation domain and the spatial domain.
The method of claim 1, further comprising determining a similarity map between a width portion centered on a corner point and at least one pixel to facilitate determining repetitive image patterns.
The method of claim 2, further comprising determining stationary points from the similarity map and modes of a density map to facilitate determining repetitive image patterns, where the stationary
points are potential similar points related to a sampling point.
The method of claim 1, further comprising determining a bounding box and constructing a grid in image space to facilitate determining repetitive image patterns.
The method of claim 1, further comprising classifying lattices into multiple groups having associated sub-groups, wherein regions of each sub-group are segmented to estimate a shape of a foreground object.
The method of claim 5, wherein the classifying includes classifying the lattices according to a hierarchical cluster, and further comprising translating between spatial domains before determining
distances between lattices or determining an inter-cluster distance.
The method of claim 6, further comprising employing a nearest neighbor rule to determine smaller or larger inter-cluster distances.
The method of claim 1, further comprising determining a centroid of a grid to facilitate repetitive pattern extraction.
The method of claim 8, further comprising determining a score by matching component elements with one or more templates to facilitate the repetitive pattern extraction.
A system, comprising: a detector to determine sampling points in an image and to generate similarity maps for the sampling points; a cluster component to determine image lattices and transform
lattices within the image in order to determine multiple repetitive patterns of arbitrary shapes; a rectangular analyzer to determine regions of interest for non-repetitive patterns within the image;
and a facade layout component to generate a set of disjoint structure regions to facilitate symmetry detection within the image.
The system of claim 10, further comprising a translation symmetry component to determine one or more sample points in an image to facilitate the symmetry detection within the image.
The system of claim 11, wherein the translation symmetry component includes a clustering component to yield one or more potential image lattices via construction of a transform lattice operating on
the one or more sample points within the image.
The system of claim 12, further comprising a tessellation component to determine similarity points associated with a lattice, wherein the similarity points are employed to construct horizontal and
vertical lines through centroids associated with the sample points within the image.
The system of claim 10, further comprising a rotational symmetry component to determine regions of translational symmetry that occur locally in facades of buildings, wherein the rotational symmetry
component determines a rotation center and a rotation angle to determine the regions of translational symmetry.
The system of claim 10, further comprising a facade analysis component to determine repetitive patterns and non-repetitive objects within a facade, wherein the repetitive patterns are determined via
distances between disjoint symmetry patterns and the non-repetitive objects are determined by analyzing texture information from image regions that are not associated with the repetitive patterns.
The system of claim 10, further comprising one or more grammar rules to facilitate automatic generation of a facade, wherein the grammar rules are associated with a computer generated architecture
("CGA") shape to facilitate symmetry detection within the image.
The system of claim 16, further comprising one or more contain rules that define which patterns or shapes are included within an area defined as a building facade.
The system of claim 17, further comprising one or more heuristic rules that define a synthetical layout for the building facade, wherein the heuristic rules are associated with copying an original
layout proportion, replacing structural regions by a multiple of a determined structural region, or copying an original facade layout by randomly adjusting one or more regions within the facade layout.
A tangible computer-readable medium, comprising: instructions for causing a computer to determine selected patterns from potential repetitive patterns and clustering the selected patterns as actual
repetitive patterns from a first domain into at least one other domain by utilizing information from a transformation domain and a spatial domain; instructions for causing a computer to extract one
or more shapes for the actual repetitive patterns based in part on the information from the transformation domain and the spatial domain; instructions for causing a computer to determine sampling
points in an image and to generate similarity maps for the sampling points; instructions for causing a computer to determine image lattices and transform lattices within the image in order to
determine one or more symmetry patterns; and instructions for causing a computer to generate a set of disjoint structure regions to facilitate symmetry detection within the image based in part on the
one or more symmetry patterns.
The computer-readable medium of claim 19, further comprising instructions for causing a computer to determine regions of interest for non-repetitive patterns within the image, wherein the
instructions include one or more translational symmetries and one or more rotational symmetries to determine the non-repetitive patterns.
CROSS REFERENCE TO RELATED APPLICATIONS [0001]
This application claims priority to U.S. Provisional Patent Application No. 61/282,370, entitled DETECTION METHOD OF REPETITIVE PATTERNS IN IMAGES, and filed on Jan. 29, 2010, the entirety of which
is incorporated herein by reference. This application also claims priority to U.S. Provisional Patent Application No. 61/344,093, entitled METHOD OF FACADE SYMMETRY DETECTION AND MODELING IN IMAGES,
and filed on May 21, 2010, the entirety of which is also incorporated herein by reference.
TECHNICAL FIELD [0002]
The subject disclosure relates generally to computer modeling and, more particularly, to applications where repetitive patterns and structural symmetry are detected and modeled to more accurately
visualize architectural structures.
BACKGROUND [0003]
There is an increasing demand for photo-realistic modeling of buildings and cities for applications including three-dimensional ("3D") map services, games, and movies. The modeling of buildings and
cities often reduces de facto to that of building facades. The current state of the art ranges from pure synthetic methods based on grammar rules and 3D scanning of street facades, to image-based
approaches which utilize either small or large numbers of images. This includes the problem of detecting translational repetitive patterns in an orthographic image, where there have been several
methods for regularity or symmetry analysis. These include Hough-transform-like methods, which vary in how they sample data space and where they vote. A recent framework discovers structural
regularity in 3D geometry using discrete groups of transformations, for example. The following provides a brief non-exhaustive discussion of architectural modeling techniques including symmetry
detection, facade modeling, and inverse procedural modeling.
Symmetry-in-3D techniques often employ a voting scheme in transformation space to detect partial and approximate symmetries of 3D shapes. These include symmetrization applications to discover structural regularity in 3D geometry using discrete groups of transformations. Another application provided a solution by manually creating models from 3D scanners and relied on manually specifying symmetry patterns. For symmetry in images, one computational model detects peaks in an autocorrelation ("AC") function of images to determine the periodicity of texture patterns.
One application mapped a 2D rotation symmetry problem to a Frieze detection problem and employed discrete Fourier transform for Frieze detection. Subsequently, a more general method to detect skewed
symmetry was proposed.
Another modeling technique detected and grouped selected elements in a graph structure based on intensity variation. One method identified salient peaks using Gaussian filters to iteratively smooth
the AC function, where translation vectors are determined by generalized Hough transforms. Another technique utilized edge detection to determine elements and successively grew patterns in a greedy manner, with a grouping strategy for translational grids based on maximum likelihood estimation. Yet another technique introduced a pair-wise local feature matching algorithm using key points at corresponding locations. One algorithm detects rotation symmetry centers and symmetry patterns from real images, but does not address all symmetry group properties. Another proposal was to test and
find the optimal hypotheses using a statistical model comparison framework. There are also methods that analyze near-regularity detection.
Facade modeling includes image-based methods, where images are employed as guides to interactively generate models of architectures. Many vision-based methods require registered multiple images. One
algorithm for structure detection in building facades utilized strong prior knowledge regarding facade structures to detect translational repetitive windows. Another technique relaxed the input
image to process strong perspectives, where repeated point features are grouped using chain-wise similarity measurements. Yet another technique employed a priori knowledge regarding grid patterns on
building facades, which is formulated as a Markov Random Field and discovered by Markov Chain Monte Carlo optimization.
Another method utilizes a variation of RANSAC-based planar grouping method to detect perspectively distorted lattices of feature points which allows identification of the main translation vectors of
the underlying repeated wallpaper pattern. An interactive system was employed to create a model from a single image by manually assigning the depth based on a painting metaphor. Another system used a
sketching approach in one or more images, where yet another system interactively recovered a 3D texture-mapped architecture model from a single image by employing constraints derived from shape symmetry.
Inverse procedural modeling includes L-system approaches for plant modeling, which are perhaps the most representative procedural approaches. Inverse modeling from images to extract rules is also
provided for tree modeling. For architecture modeling, Computer Generated Architecture ("CGA") shape software combined a set grammar with a split rule and produced detailed building geometries.
Although the design of grammar systems has been utilized, there is limited work on how to extract grammars from existing models as inverse modeling. One grammar extraction method uses a top-down
partition scheme to extract split rules from a rectified facade image. However, extracted grammar rules are limited to grid-like subdivisions. Recent approaches on inverse procedural modeling
recognize a vector picture and employ extracted rules to re-synthesize new pictures.
The above-described deficiencies of today's 3D modeling are merely intended to provide an overview of some of the problems of conventional systems, and are not intended to be exhaustive. Other
problems with conventional systems and corresponding benefits of the various non-limiting embodiments described herein may become further apparent upon review of the following description.
SUMMARY [0010]
The following presents a simplified summary in order to provide a basic understanding of some aspects disclosed herein. This summary is not an extensive overview. It is intended to neither identify
key or critical elements nor delineate the scope of the aspects disclosed. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is
presented later.
Systems and methods are provided to detect repetitive patterns and symmetry patterns in images to facilitate architectural modeling. In one aspect, a method is provided that includes extracting
initial sample portions from images as potential repetitive patterns. The method includes retaining selected patterns from the potential repetitive patterns that are determined to be actual
repetitive patterns. This includes clustering the actual repetitive patterns from sub-group domains into aggregated group domains by using information from a transformation domain and a spatial
domain. The method also includes extracting one or more shapes for the actual repetitive patterns based in part on the information from the transformation domain and the spatial domain.
In another aspect, a system is provided that includes a detector to determine sampling points in an image and to generate similarity maps for the sampling points. The system includes a cluster
component to determine image lattices and transform lattices within the image in order to determine multiple repetitive patterns of arbitrary shapes. A rectangular analyzer determines regions of
interest for non-repetitive patterns within the image and a facade layout component generates a set of disjoint structure regions to facilitate symmetry detection within the image.
In yet another aspect, a tangible computer-readable medium is provided. The computer-readable medium includes instructions for determining selected patterns from potential repetitive patterns and
clustering the selected patterns as actual repetitive patterns from a first domain into at least one other domain by utilizing information from a transformation domain and a spatial domain. This
includes instructions for extracting one or more shapes for the actual repetitive patterns based in part on the information from the transformation domain and the spatial domain. This also includes
instructions for determining sampling points in an image and to generate similarity maps for the sampling points and instructions for determining image lattices and transform lattices within the
image in order to determine one or more symmetry patterns. The computer-readable medium also includes instructions for generating a set of disjoint structure regions to facilitate symmetry detection
within the image based in part on the one or more symmetry patterns.
To the accomplishment of the foregoing and related ends, the subject disclosure then, comprises the features hereinafter fully described. The following description and the annexed drawings set forth
in detail certain illustrative aspects. However, these aspects are indicative of but a few of the various ways in which the principles disclosed herein may be employed. Other aspects, advantages and
novel features will become apparent from the following detailed description when considered in conjunction with the drawings.
BRIEF DESCRIPTION OF DRAWINGS [0015]
FIG. 1 is a schematic block diagram of a pattern-detection and modeling system.
FIG. 2 is a diagram of a method for detecting pattern sequences for recognition of patterns and facade modeling.
FIG. 3 is an ortho-rectified image of an example input for a repetitive pattern detection algorithm or component.
FIG. 4 illustrates an example of image rectification.
FIG. 5 illustrates an example of composed orthographic textures from input sequences.
FIG. 6 illustrates example surfaces that can be employed for detecting patterns.
FIG. 7 is a flow chart illustrating an example repetitive pattern detection process for architectural modeling.
FIG. 8 illustrates an example workflow of images for repetitive pattern detection.
FIG. 9 illustrates another example workflow of images for repetitive pattern detection.
FIG. 10 illustrates an example repetitive pattern detection results including nested repetitive pattern detection.
FIG. 11 illustrates an example of modeling results from composed images.
FIGS. 12-13 illustrate example modeling results for repetitive pattern detection.
FIG. 14 is a flow chart that illustrates an example facade symmetry detection process for architectural modeling.
FIGS. 15-21 illustrate various examples of facade symmetry detection for architectural modeling.
FIG. 22 illustrates an example facade symmetry detection system for architectural modeling.
FIG. 23 illustrates an example computer-readable medium of instructions for causing a computer to generate modeled images of buildings or other structures employing repetitive pattern and symmetry detection.
DETAILED DESCRIPTION [0031]
Systems and methods are provided to facilitate architectural modeling. In one aspect, a detection system and method for repetitive patterns in orthographic texture images is provided. Given an input
image, the method detects repetitive patterns in substantially any shape without prior knowledge on location, size, or shape of such patterns. This improves the performance of a city modeling system
by enabling building facade analysis, for example, to be more robust and efficient than previous systems and methods. The systems and methods are automated and have been demonstrated on various
scenarios and city-scale examples.
In another aspect, an automatic method of 3D modeling of building facades from images is provided. A facade symmetry detection component provides a fronto-parallel image, which automatically detects
multiple repetitive patterns of arbitrary shapes without prior knowledge. An automated facade analysis detects and recognizes architectural elements, and generates a facade layout based on detected
repetitive patterns and non-repetitive objects for creating 3D models of actual facades. An inverse procedural modeling of synthetical facades is also provided. Procedural rules of the facade model
are learned from images of facades and then utilized to generate a synthetic facade 3D model.
As used in this application, the terms "component," "system," "model," and the like are intended to refer to a computer-related entity, either hardware, a combination of hardware and software,
software, or software in execution. For example, a component may be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a
program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components may reside within a process and/or thread of
execution and a component may be localized on one computer and/or distributed between two or more computers. Also, these components can execute from various computer readable media having various
data structures stored thereon. The components may communicate via local and/or remote processes such as in accordance with a signal having one or more data packets (e.g., data from one component
interacting with another component in a local system, distributed system, and/or across a network such as the Internet with other systems via the signal).
Referring initially to FIG. 1, a pattern-detection and modeling system 100 is illustrated. The system 100 includes an image processor 110 that receives raw input images 120 and produces modeled
images 130 of architectural structures (or shapes). To generate such images 130, various detection components are provided to enhance the quality thereof. The input images 120 can include overhead
views of structures such as from satellite images or other views such as elevated or ground-level views (e.g., captured camera images) that are modeled to generate the images. Although not shown, the
system 100 can include other computer components for executing the various system components including a tangible computer-readable medium to store instructions for the respective execution of the
components and which is described in more detail below.
In one aspect, a repetitive pattern detection component 140 is provided to facilitate generation of the modeled images 130 by the image processor 110. This includes a method that detects translational
repetitiveness in an orthographic texture image of a building facade, for example. Due to restricted translational structures of interest and richness of texture information for similarity
computation, the method facilitates efficiency and robustness (e.g., more accurate images, more immune to incomplete or corrupted input data). The repetitive pattern detection component 140 provides
a suitable mapping from image space of similarity transformations to an auxiliary two-dimensional ("2D") space. This procedure can be followed by clustering of transformations yielding characteristic
lattice patterns for shapes containing regular structures.
The repetitive pattern detection component 140 can include multiple stages of processing. For example, one stage samples and clusters sampling patches and accumulates the transformation among patches
in transform space. Another stage estimates parameters of a generative model that produces image patterns and can be achieved by a global optimization procedure in transform space which is described
in more detail below. Yet another stage of the detection component 140 can include aggregating small repetitive patterns into several large repetitive patterns and estimating the boundary of each
repetitive pattern. Multiple types of repetitive patterns in different sizes and shapes can be detected concurrently, wherein nested repetitive patterns can also be detected. Such systems and methods that execute the repetitive pattern component 140 can be employed for improving the performance of city modeling, which requires accurate facade analysis, for example. Various illustrations and
examples of image-based city modeling are further described below.
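The accumulation of transformations among patches in transform space, described above, can be illustrated with a deliberately simplified sketch. The function name, the quantization step, and the toy grid below are hypothetical; a real system fits a full lattice model rather than voting for a single translation.

```python
from collections import Counter

def dominant_translation(points, quant=5.0):
    """Vote for the most frequent pairwise displacement among 2D points.

    A crude, illustrative stand-in for accumulating patch-to-patch
    transformations in transform space: displacements are quantized so
    that noisy measurements fall into the same bin, and the bin with the
    most votes approximates the repetition step of the pattern.
    """
    votes = Counter()
    for (x1, y1) in points:
        for (x2, y2) in points:
            dx, dy = x2 - x1, y2 - y1
            # count each unordered pair once, with a canonical sign
            if dx < 0 or (dx == 0 and dy <= 0):
                continue
            votes[(round(dx / quant) * quant,
                   round(dy / quant) * quant)] += 1
    return max(votes, key=votes.get)

# points on a noiseless 10-unit grid: the dominant step is (10, 0)
grid = [(x, y) for x in (0, 10, 20, 30) for y in (0, 10)]
print(dominant_translation(grid))
```

On noisy input the quantization step would be tuned to the expected localization error of the sampled patches.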
In another imaging processing aspect of the system 100, a symmetry detection component 150 is provided to facilitate analysis of the input images 120. This includes an automatic method to reconstruct
3D facade models of high visual quality from a single image (or a cluster of images/fragments of images). In general, symmetry is one of the most characteristic features of architectural design which
can be exploited for computer analysis. The symmetry detection component 150 includes a facade symmetry detection algorithm in a fronto-parallel image, which automatically detects multiple repetitive
patterns of arbitrary shapes via a facade analysis algorithm. This includes detecting and recognizing architectural elements to generate a facade layout for 3D models of real facades. Also, an
inverse procedural modeling of synthetic facades is provided by learning rules from facade images 120.
As noted, the input 120 includes a fronto-parallel image of a building facade. These images can be obtained in accordance with different procedures. For a single facade image, it can be rectified by
using available methods. For a sequence of registered images, a true orthographic texture of the facade is composed automatically. Starting with a rectified fronto-parallel image of facades, the
symmetry detection component 150 first detects Harris corner points to sample the image 120 and generate similarity maps for each of the sampling points. Through the construction of transform lattice
in the space of pair-wise transformations, the component 150 clusters the image lattices and transforms lattices to obtain multiple repetitive patterns of arbitrary shapes. Then, rectangular regions
of interest containing other architectural elements than the repetitive patterns are also detected. The facade layout is generated as a set of disjoint structure regions. Repetitive pattern and
non-repetitive elements are recognized through a database of architectural objects for final modeling. A complete example illustrating the different steps of computation for the symmetry detection
component 150 is shown in a sequence of images 200 of FIG. 2. Also, the component 150 converts the detected facade layout into procedural rules, which are then used to generate a synthetical facade.
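The similarity maps mentioned above can be illustrated with a brute-force normalized cross-correlation of one patch against every position in the image. The function name, patch size, and toy image are hypothetical, and a real detector would sample only at corner points and use a much faster correlation scheme.

```python
import numpy as np

def similarity_map(img, cy, cx, half):
    """Brute-force normalized cross-correlation of the patch centered at
    (cy, cx) against every valid patch position. Illustrative only: a
    production system would restrict sampling to detected corners and
    use an FFT-based or integral-image correlation."""
    t = img[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    t = t - t.mean()
    tn = np.sqrt((t * t).sum()) + 1e-12
    h, w = img.shape
    out = np.full((h, w), -1.0)
    for y in range(half, h - half):
        for x in range(half, w - half):
            p = img[y - half:y + half + 1, x - half:x + half + 1].astype(float)
            p = p - p.mean()
            pn = np.sqrt((p * p).sum()) + 1e-12
            out[y, x] = (t * p).sum() / (tn * pn)
    return out

# toy "facade": bright dots repeating every 4 columns along one row
img = np.zeros((9, 17))
img[4, ::4] = 1.0
sim = similarity_map(img, cy=4, cx=8, half=1)
# similarity peaks recur at the 4-column period of the pattern
print(sim[4, 4], sim[4, 8], sim[4, 12])
```

The local maxima of such a map are the "potential similar points" fed into the transform-lattice construction.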
The symmetry detection component 150 provides an integration of image-based and rule-based methods. These include developing a robust and efficient detection of repetitive patterns in facade texture
images; developing an efficient and accurate facade analysis method based on the detection and recognition of repetitive patterns and non-repetitive architectural elements; and improving CGA shape
grammar for buildings by introducing contain rules that overcome the weakness of the previous split rules for buildings. These grammar rules can be utilized to synthesize a synthetic but realistic facade.
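The contain rules just mentioned can be sketched in a toy form: a child shape is placed at a given region inside a parent facade rectangle, instead of exhaustively splitting the parent. All names and the example layout below are hypothetical; the actual CGA shape grammar is far richer.

```python
def contain(parent, children):
    """Toy 'contain'-style rule.

    parent:   (x, y, w, h) rectangle of the parent shape.
    children: list of (label, rx, ry, rw, rh) with coordinates given
              relative to the parent in [0, 1].
    Returns the child shapes placed in absolute coordinates.
    """
    x, y, w, h = parent
    return [(label, x + rx * w, y + ry * h, rw * w, rh * h)
            for (label, rx, ry, rw, rh) in children]

# hypothetical 20 x 10 facade with one door and two windows
facade = (0.0, 0.0, 20.0, 10.0)
layout = contain(facade, [("door",   0.45, 0.0, 0.10, 0.30),
                          ("window", 0.10, 0.5, 0.15, 0.25),
                          ("window", 0.75, 0.5, 0.15, 0.25)])
print(layout)
```

Unlike a split rule, which must partition the entire parent, a contain rule leaves the remaining wall area untouched, which is why it suits facades with sparse, irregularly placed elements.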
Before proceeding, it is noted that the system 100 will be described in conjunction with the various drawings that follow, where the drawings illustrate one or more example aspects of the image-generating capabilities of the system. From time to time during the discussion, subsequent drawings may be discussed in the context of the general system drawing shown in FIG. 1.
Referring briefly to FIG. 2, a sequence of images 200 is provided to illustrate an example pattern detection method described above with respect to FIG. 1. At 210, Harris points (e.g., represented as
red crosses) are determined as seed sample points. At 220, a thresholded similarity map is shown for a given seed sample point, e.g., marked by a box (or other shape) in the image 210, where potential similar points of the seed sample point appear as local maxima points displayed as red dots (or other indicia). At 230, a transform lattice fitting in transformation space is determined for the given seed sample point. At 240, a lattice is recovered in image space from the respective transform lattice for a given pre-determined seed sample point. At 250, color-coded sets of similar points from different seed sample points are determined. At 260, the two top-ranked repetitive patterns (the number may differ from two) are selected with a grouping of sample points. At 270, detection of repetitive
patterns and non-repetitive objects is illustrated, where recognition of patterns and objects is shown for example at 280 and an example facade layout is shown at 290. Before proceeding, it is noted
that the following description relates generally to repetitive pattern detection and symmetry detection, where concepts relating to repetitive pattern detection are described in FIGS. 3-13 and concepts relating to symmetry detection are described in relation to FIGS. 14-21, respectively.
FIG. 3 is an ortho-rectified image 300 of an example input for a repetitive pattern detection algorithm or component. The image 300 can be employed as standard input to a detection algorithm or
component as noted above. It is noted that a non-rectified image or image sequence may also be utilized as input. If the input is a single image of urban environment without rectification for
example, it can be rectified (processed for a desired image quality or parameter) before performing repetitive pattern detection. A building usually includes several facades or a facade with a large
span, which generally cannot be captured in a single image as shown at 400 of FIG. 4. In this example, an image sequence is used as input, and the ortho-rectified image on the facade plane is composed
by fetching texture from multiple views as shown at 500 of FIG. 5.
If the input is a single facade image such as the example 400 of FIG. 4, which can be downloaded from the web for example, a first act is to rectify the respective image into an ortho-rectified image. One
example of the rectification is as follows. First, the gradient operator for each pixel in the image is computed. The argument and magnitude of the resulting gradient vector indicate the orientation
and reliability of a local edge respectively. Then, a Hough linear transformation is applied on these potential edges. Since lines are mapped into points in the Hough space, reliable lines have
strong corresponding points and a set of lines can be automatically detected. Finally, the horizontal and vertical vanishing points are extracted by a RANSAC optimization based on these lines. The 2D
projective transformation that transfers these two vanishing points into infinite points can be employed to rectify the input image. The rectified image is shown at 410 of FIG. 4.
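The gradient-and-Hough portion of the rectification step above can be sketched in pure NumPy; this is a minimal illustration, not the full pipeline (the RANSAC vanishing-point extraction and the final projective warp are omitted, and the simple accumulator below stands in for a production Hough transform):

```python
import numpy as np

def edge_orientation(img):
    """Per-pixel gradient: the argument gives the local edge orientation
    and the magnitude its reliability, as described above."""
    gy, gx = np.gradient(img.astype(float))
    return np.arctan2(gy, gx), np.hypot(gx, gy)

def hough_accumulate(mag, n_theta=180, mag_thresh=1.0):
    """Minimal Hough line transform: every reliable edge pixel votes for
    the (rho, theta) lines through it; real lines become strong peaks."""
    h, w = mag.shape
    diag = int(np.ceil(np.hypot(h, w)))
    acc = np.zeros((2 * diag, n_theta))
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    for y, x in zip(*np.nonzero(mag > mag_thresh)):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        acc[np.round(rhos).astype(int) + diag, np.arange(n_theta)] += 1
    return acc, thetas

# A synthetic image with one vertical intensity edge: the strongest
# Hough peak corresponds to a near-vertical line, theta ~ 0 (mod pi).
img = np.zeros((32, 32))
img[:, 16:] = 10.0
theta_map, mag = edge_orientation(img)
acc, thetas = hough_accumulate(mag)
rho_i, theta_i = np.unravel_index(np.argmax(acc), acc.shape)
```

Because theta is defined modulo pi, a vertical edge may peak at theta near 0 or near pi; either way it maps to a vertical vanishing direction.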
When the input is an image sequence, the orthographic texture can be composed from multiple views. The following method can be used for the texture composition. The image sequence is first
reconstructed using a structure from motion ("SFM") algorithm to produce a set of semi-dense points and camera poses. Then, the reconstruction is partitioned into buildings and locally adjusted.
After that, for each building, obtain at least one reference plane. An inverse patch-based orthographic composition method is employed for facade modeling that efficiently composes a true
orthographic texture of the building facade. As shown at 500 of FIG. 5, an example input image sequence is provided whereas at 510 of FIG. 5, the composed orthographic texture on the reference plane
is illustrated. Furthermore, not only a reference plane but also an arbitrary developable surface can be employed as an initial surface for texture composition as shown in the example 600 of FIG. 6.
FIG. 7 illustrates a repetitive pattern detection process 700. In view of the example systems and components described above, example methodologies can be implemented in accordance with the disclosed
subject matter and can be better appreciated with reference to flowcharts described herein. For purposes of simplicity of explanation example methods are presented and described as a series of acts;
however, it is to be understood and appreciated that the various embodiments are not limited by the order of acts, as some acts may occur in different orders and/or concurrently with other acts from
that shown and described herein. For example, those skilled in the art will understand and appreciate that a methodology could alternatively be represented as a series of interrelated states or
events, such as in a state diagram, or interaction diagram. Moreover, not all illustrated acts may be required to implement a methodology in accordance with the subject specification. Additionally,
it should be further appreciated that the methodologies disclosed hereinafter and throughout this specification are capable of being stored on an article of manufacture to facilitate transporting and
transferring such methodologies to computers for execution by a processor or for storage in a memory.
In this aspect, the method 700 is employed to determine repetitive structure elements in ortho-rectified images, for example. At 710, extract initial sample patches in an image (or images) as
potential repetitive patterns. At 720, the repetitive pattern confirmation process retains the patterns which are more like a repetitive pattern by using transform space as auxiliary space. The
related parameters of the confirmed repetitive patterns are also computed. At 730, the repetitive patterns in small-scale are clustered into large repetitive patterns by using information from
transformation domain and spatial domain. At 740, the shape of the repetitive patterns can be extracted.
The following descriptions and drawings are now discussed in support of the method 700. As noted, extract sampling points at 710, for instance, utilizing Harris corners in the texture image as
sampling points of the entire facade texture. Harris corners are suitable for sampling because of the proven stability in detection. For each sampling point, compute a similarity map between a patch
of width w centered at the corner point and each pixel in the texture, using a similarity measurement such as the sum of squared difference (SSD). Using a mode seeking method, e.g., a mean-shift
method, locate stationary points of the similarity map from the modes of the density map. These stationary points are potential similar points of the sampling point.
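The similarity-map step above can be sketched in NumPy. This is a brute-force version for illustration (a real implementation would use FFT-based correlation), and the mean-shift mode seeking is replaced here by simply collecting the exact zeros of the map:

```python
import numpy as np

def ssd_similarity_map(image, center, w):
    """Similarity map between the patch of width w centered at `center`
    and every valid pixel, using the sum of squared differences (SSD);
    low values mean high similarity, 0 means an exact repeat."""
    r = w // 2
    cy, cx = center
    ref = image[cy - r:cy + r + 1, cx - r:cx + r + 1].astype(float)
    h, wd = image.shape
    ssd = np.full((h, wd), np.inf)
    for y in range(r, h - r):
        for x in range(r, wd - r):
            patch = image[y - r:y + r + 1, x - r:x + r + 1]
            ssd[y, x] = float(np.sum((patch - ref) ** 2))
    return ssd

# A synthetic facade texture: a 3x3 "window" repeated every 8 pixels.
tile = np.zeros((8, 8))
tile[2:5, 2:5] = 1.0
img = np.tile(tile, (3, 3))            # 24x24, windows on an 8x8 grid
ssd = ssd_similarity_map(img, (3, 3), 5)
similar = np.argwhere(ssd == 0.0)      # stationary points of the map
```

On this synthetic texture the zero set consists of the nine lattice translates of the seed patch, which is exactly the set of potential similar points the text describes.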
Employ the pairs of the stationary points and the sampling point, and for each pair (or substantially each pair), compute a translation and map it onto a 2D plane, which is the transform space of 2D
translations. This 2D transform space can be represented by a 2D array as an accumulator to receive the computed translations from the pairs. A mode seeking method such as the mean-shift can be used to
compute the mode points, which are used to fit a rectangular grid, referred to as a lattice, through the origin of the transform space. Estimate nx and ny, which are the numbers of points on the
lattice along x-axis and y-axis, respectively. The nx and ny restrict the boundary of the lattice fitting. Since the pixel position can be approximately quantized, the transform space is also
approximately quantized. A search is performed to estimate the nx and ny in the neighborhood around the point presenting identity transformation. The initial gx and gy are computed along with nx and
ny. Then, the gx and gy can be further optimized by searching a suitable fitting lattice in continuous value space. The search space may be restricted by the initial values of gx and gy. The
stationary points whose translations with the sampling point are on the fitted lattice are retained as the set of similar points SGc, to which the sampling points can also be added.
Compute the bounding box of the similar points, and construct a grid in the image space. Note that the term grid is used in image space and the term lattice in transform space. If the number of the similar
points on the constructed grid is larger than a threshold, e.g., 90% of all the similar points, the grid is confirmed for the given sampling point. Each sampling point thus yields a potential grid in
image space through a lattice in transform space, where similar computations can be performed for all sampling points in order to obtain many confirmed grids.
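The lattice-fitting and filtering steps above can be sketched as follows. This is a simplification under stated assumptions: translations are integer pixel offsets, the elementary translation is taken as the smallest positive offset (the text's mode seeking in transform space handles noisy or off-lattice outliers, which this shortcut does not):

```python
import numpy as np

def elementary_translation(seed, sim_pts):
    """Estimate the elementary translation (gy, gx) of the lattice as the
    smallest positive offset of a similar point from the seed; a stand-in
    for the mode-seeking/lattice-fitting described above. Assumes the
    similar points contain no off-lattice outliers."""
    gy = min(p[0] - seed[0] for p in sim_pts if p[0] > seed[0])
    gx = min(p[1] - seed[1] for p in sim_pts if p[1] > seed[1])
    return gy, gx

def on_lattice(seed, p, gy, gx, tol=1):
    """Retain a similar point only if its translation from the seed lies
    (within tol pixels) on the fitted lattice."""
    ty, tx = p[0] - seed[0], p[1] - seed[1]
    return (min(ty % gy, gy - ty % gy) <= tol and
            min(tx % gx, gx - tx % gx) <= tol)

seed = (3, 3)
pts = [(3, 11), (11, 3), (11, 11), (3, 19), (19, 11)]
gy, gx = elementary_translation(seed, pts)
kept = [p for p in pts if on_lattice(seed, p, gy, gx)]
```

Here all five points lie on the 8x8 lattice and are retained, while a point such as (8, 8), whose translation (5, 5) is far from any lattice point, would be rejected.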
The lattice estimation stage operates in transform space and a set of regular structures at the scale of the initial image patches are extracted. Merge the extracted small repetitive regions into
large-scale repetitive regions and find the optimal transformation for each large-scale repetitive region. To achieve this, classify the lattices into several groups. Then, further classify each
group into sub-groups according to the spatial information of the grids. Apply segmentation on the region of each sub-group and retrieve the estimated shape of the foreground object for each
sub-group. The segmentation in the spatial domain is essentially to deal with the inaccuracy of the estimated transformations from the small image patches.
As noted, classify the lattices into several groups. Each lattice l has an elementary translation El denoted as gx×gy, where gx and gy indicate the elementary translation along x-axis and y-axis,
respectively. The elementary translation El is the abstraction of each pair of grid and lattice and is useful to distinguish different repetitive patterns. Thus, the lattices are grouped according to
the elementary translation by using a clustering algorithm, e.g., a hierarchical clustering algorithm. Thus, define the distance between two lattices and the inter-cluster distance. There are several
methods to compute the distance between two lattices l1 and l2. For example, the distance between two lattices l1 and l2 can be defined as the L2-Norm of El1 and El2. To define the inter-cluster
distance, there may be many measurements. For example, the inter-cluster measurement between two clusters Ci and Cj can be defined as Equation 1:

D(Ci, Cj) = α·Min(Ci, Cj) + (1 − α)·Max(Ci, Cj)

where Min(Ci, Cj), Max(Ci, Cj) and α are the nearest neighbor rule, the furthest neighbor rule and the control factor in [0,1], respectively. Neighboring clusters with a smaller inter-cluster distance are more likely to be merged. In the merging process, a larger α favors the merging of smaller clusters and a smaller α favors the merging of larger clusters.
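Under the reconstruction of Equation 1 above (α mixing the nearest- and furthest-neighbor rules; the exact form is inferred from the surrounding description), the inter-cluster distance can be sketched as:

```python
import numpy as np

def lattice_distance(e1, e2):
    """Distance between two lattices: the L2 norm of the difference of
    their elementary translations El = (gx, gy)."""
    return float(np.hypot(e1[0] - e2[0], e1[1] - e2[1]))

def inter_cluster_distance(ci, cj, alpha):
    """Equation 1 (as reconstructed): alpha * Min(Ci, Cj) +
    (1 - alpha) * Max(Ci, Cj), with alpha in [0, 1]."""
    d = [lattice_distance(a, b) for a in ci for b in cj]
    return alpha * min(d) + (1 - alpha) * max(d)

# Two clusters of elementary translations: ci and cj share a near-8x8
# member, but cj also has a 16x8 outlier that the furthest-neighbor
# term penalizes.
ci = [(8.0, 8.0), (8.0, 7.9)]
cj = [(8.1, 8.0), (16.0, 8.0)]
```

With alpha = 1 the distance reduces to the nearest-neighbor rule (0.1 here); with alpha = 0 it becomes the furthest-neighbor rule, which keeps the clusters apart.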
In the spatial domain, different repetitive patterns can have similar elementary translations. Additionally, one-to-one correspondences between grids and lattices can be determined. Thus, classify each grid
group into sub-groups in spatial domain. A distance DG is defined as the distance between two grids. One process to define DG is as follows. For each confirmed grid Gc, a bounding box BGc is defined as the minimum rectangle region that contains its similar points SGc, from which the missing grid points MGc in the bounding box BGc can be computed. Thus, define the distance between two grids Gi and Gj as Equation 2:

DG(Gi, Gj) = Σ{sx ∈ SGi, mx ∈ Gj} Dp(sx, mx) / |SGi| + Σ{sy ∈ SGj, my ∈ Gi} Dp(sy, my) / |SGj|
where Dp computes the distance between two patches, e.g., the SSD. Therefore, compute the distances between each pair of confirmed grids. Then, another clustering process is employed to classify
small repetitive patterns into repetitive pattern groups.
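A sketch of the grid distance of Equation 2 follows. The pairing of similar points with the other grid's points is reconstructed here as a nearest-point match, an assumption made because the filed equation is partly illegible:

```python
import numpy as np

def patch_distance(p1, p2):
    """Dp: SSD between two equally sized patches."""
    return float(np.sum((np.asarray(p1, float) - np.asarray(p2, float)) ** 2))

def grid_distance(img, pts_i, pts_j, w):
    """Symmetrized average patch distance between the similar points of
    one grid and the nearest points of the other grid (Equation 2,
    as reconstructed)."""
    r = w // 2
    def patch(p):
        return img[p[0] - r:p[0] + r + 1, p[1] - r:p[1] + r + 1]
    def one_way(src, dst):
        total = 0.0
        for s in src:
            m = min(dst, key=lambda q: (q[0] - s[0]) ** 2 + (q[1] - s[1]) ** 2)
            total += patch_distance(patch(s), patch(m))
        return total / len(src)
    return one_way(pts_i, pts_j) + one_way(pts_j, pts_i)

# A non-uniform test image: identical grids give distance 0, while
# grids over different image content give a positive distance.
img = np.arange(256, dtype=float).reshape(16, 16)
```

The distance is zero only when each grid's patches are reproduced at the other grid's points, which is what makes it usable for clustering small patterns into repetitive pattern groups.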
In each repetitive pattern group, similar points from different grids but belonging to the same grid position are grouped together, and their centroid is computed. A tile centered at the centroid with the
same size as the cell of the lattice in transform space is constructed. Such a rectangular tile is expected to contain one individual element of the repetitive pattern. Use a segmentation method, for
instance the "GrabCut," within the tile to segment the foreground out of the background wall as the architecture element. Then, the median of the foreground segments of the same type is determined as
the representative shape of the architecture element.
The boundaries of the detected repetitive elements are noisy, arbitrary contours. Match each of these recovered elements against a set of predefined generic templates. Set up a template database T, in which each template t ∈ T has a Type, windows or doors for instance, and a shape st, parametrized by its bounding box.
Select the potential matching by the bounding boxes of the element and the template. Then compute a score for st and rsa as Equation 3:

D(st, rsa) / B(st), where D(st, rsa) = |st ∪ rsa| − |st ∩ rsa|

which is the difference of the binary masks of st and rsa, and B(st) is the length of the boundary of st measured in pixels. The match is established for the pair having the highest score. The position of the template t is refined by searching around a small neighborhood to snap it to the positions where the edge response is the strongest. Moreover, a set of 3D models can be associated with each template t, so that the most similar 3D model can be searched from a 3D template database.
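The mask-difference term of Equation 3 can be sketched with boolean masks; the boundary length is approximated here by counting boundary pixels via a 4-neighbour erosion, an assumption of this sketch:

```python
import numpy as np

def mask_difference(st, rsa):
    """D(st, rsa) = |st U rsa| - |st n rsa|: the area where the two
    binary masks disagree (their XOR)."""
    return int(np.logical_xor(st, rsa).sum())

def boundary_length(st):
    """B(st): number of mask pixels with at least one 4-neighbour
    outside the mask (assumes the mask does not touch the border)."""
    inner = st & np.roll(st, 1, 0) & np.roll(st, -1, 0) \
               & np.roll(st, 1, 1) & np.roll(st, -1, 1)
    return int((st & ~inner).sum())

def match_score(st, rsa):
    """Equation 3 (as reconstructed): D(st, rsa) / B(st)."""
    return mask_difference(st, rsa) / boundary_length(st)

st = np.zeros((10, 10), dtype=bool)
st[3:7, 3:7] = True                 # a 4x4 template mask
rsa = np.roll(st, 1, axis=0)        # the element, shifted down by one
```

A one-pixel vertical misalignment of the 4x4 masks disagrees on two rows of four pixels, giving D = 8 against a boundary of 12 pixels.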
One example is shown in FIG. 8 where a recomposed orthographic texture image is employed to demonstrate the above described processes. At 810, Harris points are determined as sampling points. At 820,
a thresholded similarity map for a given sampling point is determined. The potential similar points are a type of local maxima points. At 830, lattice fitting in transform space for the given
sampling point is illustrated. At 840, the grid in image space that is converted from lattice pattern in transform space for the given sampling point is illustrated. At 850, the sets of similar
points from different sampling points are illustrated. At 860, the two top ranked grids for the two repetitive patterns are selected. At 870, the binary segmentation of repetitive elements is
illustrated. At 880, the rough contours of the irregular elements are determined. At 890, regularized elements after shape analysis are illustrated.
FIG. 9 is another example to show process workflow. At 910, the input image is shown as a facade texture in low resolution from aerial view. At 920, the sampling points are illustrated. At 930,
confirmed grids are determined where points belonging to the same grid are represented in similar shades of gray. At 940, the result of repetitive patterns clustering is illustrated where the pattern
extraction result is shown at 950.
FIG. 10 shows example repetitive detection results in various input images 1010-1030 that are the results of a composed texture from multiple views at ground level. It is noted that the methods
described herein can detect nested repetitive patterns (e.g., repetitive patterns within repetitive patterns and so forth).
FIG. 11 illustrates another result of a composed image 1100, where types of repetitive patterns in different sizes and different shapes are detected. FIG. 12 is a collection of example images 1200
where each column represents an example. From top to bottom in the images 1200: existing approximate model, existing textured model, remodeled geometry and final result. FIG. 13 illustrates examples
1300 where each column represents an example. From top to bottom in the images 1300: existing approximate model, existing textured model, remodeled geometry and final result.
As noted above, examples in FIGS. 9, 10, and 11 include a low resolution facade texture from aerial view and the composed orthographic textures at ground level. The time complexity of repetitive
pattern detection is O(mwh), where m is the number of sampling points, and w and h are the width and height of the ortho-rectified image, respectively.
The above description relating to FIGS. 3-13 includes a detection method of repetitive patterns in images. This includes employing transform space to facilitate detection of the repetitive pattern in
images. There are various useful applications based on the repetitive pattern detection methods described herein. This includes automatic detection of repetitive patterns in natural and man-made
objects. The method can be utilized in substantially any image processing task and other computer vision tasks. The method can also be employed to improve the performance of urban modeling systems.
Such methods were tested on a typical example of the city of Pittsburgh for which both the texture of 3D models from aerial images served by e.g., Google Earth and the input images systematically
captured at ground level can be analyzed. For each building, a typical size of orthographic texture image is 1000×1000 pixels. The city modeling results are demonstrated using an example facade
analysis method depicted in FIGS. 12 and 13. Before proceeding, it is noted that the following description and drawings for FIGS. 14-21 relate to facade symmetry detection.
FIG. 14 is a flow chart that illustrates an example facade symmetry detection process 1400 for architectural modeling. At 1410, initial parameters are determined and formulated. In general,
repetitive patterns occur frequently in the facades of buildings. It is known that there are various symmetry groups in the plane, which are also referred to as wallpaper groups or crystallographic
groups. In the simplest instance of these symmetry groups, process the group H of translations generated by two independent translations X and Y. This is related to the fact that a group G is a
symmetry group if and only if the subgroup of translations H in G is of finite index. Thus computationally, the facade symmetry detection is decomposed into at least two steps: the first is the
detection of translational symmetry, the second is the discovery of intrinsic symmetries of the repeated object.
In general, any object can be transformed by the group H into an infinite array of such objects, which forms a repetitive pattern. If the object is a single point, the pattern is an array of points
called a two-dimensional `lattice.` The lines through the lattice points are two families of parallel lines, forming a `tessellation` of congruent parallelograms filling the plane without gaps or overlaps. There is generally a one-to-one correspondence between the tiles of the tessellation and the transformations in the group H, with the property that each transformation carries any point inside the original tile to a similarly situated point in the new tile. This typical parallelogram is referred to as a `fundamental region,` wherein the shape of the fundamental region is generally
not unique. The inverse problem is to rediscover such repetitive but finite patterns from a fronto-parallel view of the finite plane. More specifically, given a 2D fronto-parallel image of a building
facade, detect all patterns in the image, wherein each pattern is related by:
Equation 4: Pi = (e0; X×Y; x×y), where the pattern is generated x×y times by X×Y from the fundamental region e0. Each pattern thus tessellates a finite region of the image, but different patterns may overlap
in the image. One challenge is that both the patterns Pi and its number p are unknown. The Pi should explain the given image region, where 2D symmetry structures with respect to the 3D structures are
detected, and the richer texture information of images with respect to point clouds lead to a more efficient and powerful method.
At 1420 of FIG. 14, translational symmetries are determined. Harris corners are detected in the image via sampling, where the set C is used as seed sample points of the facade image to generate
more sample points. Assume that the parameter w is the minimum distance at which to detect the repetitive pattern. For each seed sample point si .di-elect cons. C, compute a similarity map D(x; y),
which is the sum of squared differences (SSD) between the patch of width w centered at the seed point si and the respective pixels of the image. Locate stationary points of the similarity map from
the modes of this density map using a mean-shift method. These detected stationary points, including the seed point si, form the similarity set of points Si of the seed si. This
process thus generates sets {Si|si .di-elect cons. C} of similar points to the seed points si .di-elect cons. C. Harris corners are suitable for seed sampling points and are rotation invariant. The
detection stability also reflects the natural image scale at which to detect symmetries.
For each set Si of similar points, draw all the pairs of points (pj, pk) where pj, pk ∈ Si. Compute a translation from each pair and map it onto a 2D plane, which is the transformation space of 2D translations. Because the space of pair-wise transformations is, in this 2D case, the space of translations, it is also a lattice Tm×n generated by Tx and Ty, which is called a `transform lattice.` The image lattice Xx×y and the transform lattice generally have the same group generators Tx = X and Ty = Y, but the sizes may differ.
One method, if viewed as a generalized Hough Transform, processes all pairs of points as the feature space, and the group of transformations as the transform space. Instead of seeking only peak
points in the transform space, a regular grid of peak points can be searched. The transform space of 2D translations is naturally represented by a 2D array as the accumulator to receive the computed
translations from the pairs. The transform space is quantized in a similar manner as the image space. Thus, identify the transform lattice of m×n with the generators Tx and Ty from such an array of accumulators. An exhaustive search method is feasible, given the simplicity of the transform space. The peak points of the accumulation array can be computed as the mode points
by the mean-shift method. The ranges of m and n can be estimated. The minimum is 1, and the maximum is the largest translation in that direction divided by w. From the estimated ranges, generate
possible discrete transform lattices mi×ni ∈ [1, mmax]×[1, nmax].
For each transform lattice mi×ni, compute accumulated distances of the detected peak points to the lattice. The transform lattice m×n that has the smallest accumulated distance is the estimated
transform lattice obtained. The set of similar points Si is now filtered by the estimated transform lattice m×n. A similar point is removed if its translation with the seed sample point is not close
to any of the transform lattice points (set by a predetermined threshold). From the remaining set of similar points, compute the bounding box of the set and construct an x×y lattice in the image space. An image lattice associated with the given seed sample point is retained if 90% (or another percentage) of the similar points are on the lattice.
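The candidate-lattice search and scoring above can be sketched as follows; taking the generators as the largest translation divided by the candidate count is a simplifying assumption of this sketch:

```python
import numpy as np

def lattice_fit_error(peaks, gx, gy):
    """Accumulated distance of the detected peak translations to the
    nearest points of the lattice generated by (gx, gy)."""
    err = 0.0
    for tx, ty in peaks:
        dx = min(tx % gx, gx - tx % gx)
        dy = min(ty % gy, gy - ty % gy)
        err += float(np.hypot(dx, dy))
    return err

def best_lattice(peaks, max_tx, max_ty, w):
    """Exhaustive search over discrete candidates m x n in
    [1, m_max] x [1, n_max]; m_max/n_max are the largest translation in
    each direction divided by w, as described in the text."""
    m_max = max(1, int(max_tx // w))
    n_max = max(1, int(max_ty // w))
    best, best_err = None, np.inf
    for m in range(1, m_max + 1):
        for n in range(1, n_max + 1):
            gx, gy = max_tx / m, max_ty / n
            err = lattice_fit_error(peaks, gx, gy)
            if err < best_err:
                best, best_err = (gx, gy), err
    return best, best_err

# Peak translations lying exactly on an 8x8 lattice.
peaks = [(8, 0), (0, 8), (8, 8), (16, 8), (24, 16)]
gens, err = best_lattice(peaks, max_tx=24, max_ty=16, w=5)
```

The candidate with zero accumulated distance, generators (8, 8), is the estimated transform lattice; on noisy peaks the smallest nonzero error wins instead.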
Each seed sample point thus yields one possible image lattice through the construction of a transform lattice. Perform the same for all seed sample points to obtain at most |C| image lattices.
These image lattices are grouped together if they are overlapping and if they have the same generators and same sizes. The image lattices having the highest numbers are considered the most likely
patterns. Similar points associated with the same lattice position from the same group of image lattices are clustered into the set of similar points belonging to the same tile. A rectangular
tessellation of the image region covered by the lattice is constructed by drawing horizontal and vertical lines through the centroid of the similar points of the same tile, separated by X and Y.
Within a tile, process similar points as the sampling points of the object of the tile, e.g., use the GrabCut method to segment the foreground object out of the background wall. Then, the median of the
foreground objects from different tiles is selected as the representative shape of the architecture element. An example illustrating the different symmetry detection steps is shown in FIG. 2
described above.
At 1430 of FIG. 14, rotational symmetries are determined. Patterns of rotational symmetries occur sometimes within the fundamental region of the translational symmetry, or occur locally in the
facades of buildings. In the plane, the group of rotations about a fixed point generates such a pattern. For a rotation angle of n degrees, obtain a circular pattern with rotational symmetry of order 360/n.
The detection of rotational symmetries can be adapted from that of translational symmetries. It can consist of at least two steps: the first is the localization of the rotation center, the second is
the determination of the rotation angle.
Start with a set of uniform random sampling points in the image region. Then, the SIFT descriptor is computed for each of the sampling points. The sampling points are then clustered to retain a few
of the largest clusters to create the sets of similar sampling points. For each set of similar points, use a Hough transform to yield an estimation of the rotation center by computing the circle center
of three points. The angle detection is carried out in the bounding box of the similar points. This step is similar to the translational symmetry detection described above, so only its differences are indicated here: the set of seed sample points C contains Harris corner and SIFT points, which are more redundant but more robust to scale variations and orientations.
The similarity map D(x; y) is computed at each pixel as the maximum of the SSD's computed at different angles. Compute the similarity by rotating the patch every five degrees (or other predetermined
angle), and retain the best matched direction in degree d. Then, compute again by rotating the degree from d-5 to d+5. The space of pair-wise transformation is now a one-dimensional rotation angle,
which is naturally accumulated in an array of 1800 (or another number of) cells. The generating angle θ is the interval between peak points.
At 1440 of FIG. 14, facade analysis is performed. From the detected repetitive patterns Pi, group the disjoint patterns of similar type (similar lattice and close generators) together. The distance between two disjoint symmetry patterns Pi and Pj of similar type is the maximum of the SSDs between the segmented foreground objects in each pair of corresponding tiles. Compute a distance between each pair of disjoint symmetry patterns, and consider that the pair is similar and possible to merge if the distance is less than Min(Var(Pi), Var(Pj)), where Var(P) is the variance in P. Utilize a hierarchical clustering method based on this distance to structure repetitive patterns.
In addition to the repetitive patterns Pi of a facade, also search for regions that contain non-repetitive architectural objects. Thus, employ a method to detect rectangular regions in the image. One difference is that the process 1400 uses
texture information in the image regions not yet covered by the repetitive patterns. All detected regions containing non-repetitive objects are merged into a hierarchy. Within each region, GrabCut
can be employed to segment the object out from the background.
In general, a facade includes a set of regions, each of which contains either repetitive patterns or non-repetitive architectural elements. Each region is generally represented by its bounding box,
where the regions are merged if their bounding boxes overlap. The remaining regions of non-repetitive elements are merged by using a bottom-up merging segmentation method, based on background texture
information. Thus, obtain a set of disjoint regions for the facade.
Extend each region into a structure region such that the structure regions form a partition of the facade. Thus, build up a hierarchical relationship of structure regions. The separating lines
between structure regions are vertical or horizontal. First, project and accumulate the gradients of X and Y direction of the image on the horizontal and vertical axis. The accumulated values are
normalized with the number of pixels along the direction of accumulation. Then, the partition process starts by dividing the image region R vertically into a set of horizontal regions Rh. If there is a local maximum response in the vertical direction that does not intersect a structure region, the region is partitioned at this line. Finally, each horizontal region rh ∈ Rh is further partitioned horizontally into a set of vertical regions Rv. Thus, a facade layout is a set of disjoint structure regions that partitions the facade, and each structure region generally contains a set of patterns and objects.
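The gradient-projection partition can be sketched minimally: separating lines are local maxima of the normalized accumulated gradient above a fraction of the peak (the structure-region intersection test is omitted, and the threshold `frac` is an assumption):

```python
import numpy as np

def split_positions(img, axis, frac=0.5):
    """Project and accumulate the gradient magnitude onto the given axis
    (0 = horizontal separating lines, 1 = vertical), normalize by the
    number of pixels accumulated, and return the local maxima above
    frac * peak as candidate separating lines."""
    gy, gx = np.gradient(img.astype(float))
    g = np.abs(gy) if axis == 0 else np.abs(gx)
    prof = g.sum(axis=1 - axis) / img.shape[1 - axis]
    if prof.max() == 0:
        return []                      # no edges in this direction
    return [i for i in range(1, len(prof) - 1)
            if prof[i] >= prof[i - 1] and prof[i] >= prof[i + 1]
            and prof[i] >= frac * prof.max()]

# A facade with one strong horizontal separator at row 10.
img = np.zeros((20, 20))
img[10:, :] = 10.0
```

On this image the horizontal profile peaks on the two rows straddling the intensity step, while the vertical profile is flat and yields no separating line.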
The intrinsic symmetries of the fundamental region can be detected in each tile of the tessellation or in each region of a non-repetitive object. If the tile is large enough and it is desired to
obtain patterns at a finer scale, the translational symmetry detection can be used by reducing the parameter w. The intrinsic rotational symmetries can be detected by the method of rotational
symmetries presented above within a tile. Otherwise, the intrinsic rotational and reflective symmetries are detected using a correlation-based method, for example, in a typical tile or in a combined
typical tile. The intrinsic symmetries of an object facilitate the recognition of the object, which is now described.
Each detected pattern and object can be recognized from a database of architectural elements T, in which each template t ∈ T has a Type (windows or doors for instance), real size, and
a detailed 3D geometric model. The templates of the database are non-parametric, so it is straight-forward for users to build a database and re-use existing models. Before recognition, each 3D
template is orthographically projected into a 2D template, which is smoothed by a Gaussian kernel whose size is set to one tenth (or other fraction) of the size of the 3D model. For an image object rsa, align it with each 2D template using bounding boxes; then a distance between the object rsa and a template st is defined to be Equation 5:

D(st, rsa) = (De(st, rsa) + β·Db(st, rsa)) / B(st)

where De(st, rsa) is the accumulated gradient difference in the intersection region, Db(st, rsa) = |st ∪ rsa| − |st ∩ rsa| is the accumulated difference of the binary masks of st and rsa, and B(st) is the length of the boundary of st measured in pixels. Finally, the object matches the template if the distance is the smallest over the templates.
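Equation 5, as reconstructed above, can be sketched with explicit De, Db, and boundary terms; the erosion-based boundary count and the flat gradient images are assumptions of this sketch:

```python
import numpy as np

def template_distance(st, rsa, grad_st, grad_rsa, beta=1.0):
    """Equation 5 (as reconstructed): (De + beta * Db) / B(st), where De
    accumulates gradient differences over the intersection region, Db is
    the binary-mask difference, and B(st) counts boundary pixels."""
    inter = st & rsa
    d_e = float(np.abs(grad_st - grad_rsa)[inter].sum())
    d_b = float(np.logical_xor(st, rsa).sum())
    inner = st & np.roll(st, 1, 0) & np.roll(st, -1, 0) \
               & np.roll(st, 1, 1) & np.roll(st, -1, 1)
    b = float((st & ~inner).sum())
    return (d_e + beta * d_b) / b

st = np.zeros((10, 10), dtype=bool)
st[3:7, 3:7] = True                 # a 4x4 template mask
g = np.zeros((10, 10))              # identical (flat) gradient images
```

A perfectly aligned object has distance 0; a one-pixel shift leaves the gradient term at zero (flat gradients) and is charged only through the mask term.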
The position of the recognized template t is snapped to the nearest and the strongest edge points in the neighborhood. In addition to the patterns and objects, the separating lines of the structure
regions often indicate architectural separators, which should be detected and recognized. Thus, in the neighborhood of each separating line, search for an elongated horizontal or vertical bar in the
database as the architectural separators.
At 1450 of FIG. 14, inverse procedural facade modeling is performed to facilitate architectural modeling. After the modeling of an actual facade, an inverse procedural modeling of synthetic facades is provided, where a Computer Generated Architecture ("CGA") shape grammar represents a facade with contain rules. Then, the procedural rules of the facade model are learned from images. Finally, to
prove the expressiveness of the grammar rules, a synthesis method is developed to generate a synthetic facade 3D model based on the extracted rules from images.
A grammar is denoted as G=(S; R), where S is the set of symbols and R is a set of production rules. A symbol s in S is either in the set of terminal symbol V or in the set of non-terminal symbol E,
where V ∩ E = ∅ and V ∪ E = S. Each symbol, terminal or non-terminal, is a geometric shape with geometric and numeric attributes. In one example, CGA shape can be employed as a basic grammar system. In
this manner, a production rule can be defined as Equation 6:

id: predecessor : cond → successor : prob

where id is the unique identifier of the rule, predecessor ∈ E is a symbol identifying a shape that is to be replaced with successor, and cond is a flag to indicate that the rule is selected with probability prob and applied if it is true.
A facade is generated from a given string str consisting of the symbols in S. An active shape P is selected from str and a production rule L with P on the left hand side is also selected. Then the
string Q on the right hand side of L substitutes P as the successor. Also, Q is left active and P inactive. This process runs recursively until the string str contains no more non-terminals. A
priority is assigned to each rule to induce more controls on the production, thus select the shape replacement with the rule of highest priority. Note, in CGA shape, when a shape is replaced, it is
not deleted, but marked as inactive.
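The derivation loop described above can be sketched as a toy production system. The rule tuple (id, predecessor, successor, priority) and the symbol names are illustrative assumptions, cond/prob are omitted, and unlike true CGA shape a replaced symbol is simply substituted rather than kept inactive:

```python
def derive(axiom, rules, terminals):
    """Repeatedly replace an active non-terminal using the applicable
    rule of highest priority, until only terminal symbols remain.
    Assumes every non-terminal has at least one rule."""
    shapes = list(axiom)
    while True:
        active = [s for s in shapes if s not in terminals]
        if not active:
            return shapes
        # candidate (rule, symbol) pairs; pick the highest-priority rule
        cand = [(r, s) for s in active for r in rules if r[1] == s]
        rule, sym = max(cand, key=lambda c: c[0][3])
        i = shapes.index(sym)
        shapes[i:i + 1] = rule[2]     # substitute the successor string

rules = [
    (1, "Facade", ["Floor", "Floor"], 2),   # a facade has two floors
    (2, "Floor", ["Window", "Wall"], 1),    # each floor: window + wall
]
result = derive(["Facade"], rules, {"Window", "Wall"})
```

The recursion terminates when the string contains no more non-terminals, mirroring the derivation process described in the text.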
A general contain rule specifies that an object contains other objects by Equation 7:

s→Contain(Num, r0, . . . , rNum-1){c0, . . . , cNum-1};

where Num is the number of components in s, ci is a shape symbol in S and ri is the placement configuration of the component ci. It generally does not have constraints on how many ascending and descending shapes there are. The ri can be further defined as ri = (dimi; boundi; visi; opi), where boundi is the bounding box of the region ri in dimi-dimensional space. To define the relationship between sibling nodes ci, define the priority for the visibility of ci as an integer visi, where a larger value indicates a higher priority for the visibility. In addition, opi is introduced to define the interaction between the node ci and other sibling nodes {cj}, e.g., overlaying and 3D Boolean operations.
A general contain rule can be specialized to a repetitive contain rule, which generates repetitive components. The repetitive contain rule can be defined as Equation 8:
s → Repeat(DIR; step; times; r_x){c_x};
where DIR can be "X", "Y", "Z", "XY", "XZ", "YZ", or "XYZ" to specify the direction of the repetitive pattern, step is a vector of steps in different directions, times is a vector of numbers of repetition in each direction, r_x is the initial region, and c_x is the repetitive shape.
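The expansion of a Repeat rule into placed copies can be sketched like so (a hypothetical helper; the source does not prescribe this implementation):

```python
from itertools import product

def expand_repeat(dirs, step, times, origin, shape):
    """Enumerate the placements produced by s -> Repeat(DIR; step; times; r_x){c_x}."""
    axes = "XYZ"
    # repeat only along the axes named in dirs; one copy along the others
    counts = [times[i] if axes[i] in dirs else 1 for i in range(3)]
    placements = []
    for i, j, k in product(range(counts[0]), range(counts[1]), range(counts[2])):
        offset = (origin[0] + i * step[0],
                  origin[1] + j * step[1],
                  origin[2] + k * step[2])
        placements.append((shape, offset))
    return placements

# a 3 x 2 grid of windows repeated in X and Y
cells = expand_repeat("XY", step=(2.0, 3.0, 0.0), times=(3, 2, 1),
                      origin=(0.0, 0.0, 0.0), shape="Window")
print(len(cells))  # → 6
```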
A subdivision technique defined by split rules is used to augment geometry details. However, split rules show limitations in two aspects: they often over-partition a facade, and the generated geometry is not unique. More complex analysis may be needed to consolidate the inconsistent splitting grammars extracted from images, and there is generally no mechanism for interactions between sibling symbols generated by split rules. Contain rules avoid these problems. First, the positions and regions of descending shapes are explicitly defined, so no splitting is generally required to gradually subdivide the ascending shapes. Secondly, users can define properties that are shared by descending shapes. Thirdly, contain rules can be extracted from bottom-up and/or top-down analysis.
The facade analysis produces a layout that is a set of disjoint structure regions of rectangular shape. The layout decomposes the input image into a few horizontal structure regions, each of which contains a few vertical structure regions, where each vertical structure region generally contains a region of objects. This description shows that the decomposition of a facade f can be naturally represented by a contain rule for a structure region of non-repetitive elements, and by a repeat rule for a structure region of repetitive patterns, as in Equation 9:
f → Contain(Num, r_0, . . . , r_{Num-1}){c_0, . . . , c_{Num-1}};
and, for each repetitive structure region c_i, Equation 10:
c_i → Repeat(DIR; step; times; r_x){c_x};
For each remaining structure region c_i, use a rule to contain its structure elements and background region if they exist.
The synthetical facade should preserve as much as possible the architectural characteristics of the original facade. First generate a layout, then synthesize its texture region-by-region. Given a
user-specified region, generate a layout for the synthetical region from the original layout of a real facade described by the rules. A layout is recursively a horizontal or a vertical partition,
thus process it as a horizontal partition without loss of generality. Use at least three heuristic rules to generate the synthetical layout:
Copy the original layout proportionally.
Copy the original layout by successively replacing the largest (or the smallest) structure region so far by a multiple of that structure region if possible.
Copy the original layout by randomly fixing some regions and replacing some others by a multiple of them.
Any remainder region resulting from the above operations is distributed to the largest structure region. After the global layout, fill in a synthetical structure region from an original structure region, and repeat proportionally each original pattern and object if they do not generate conflicting overlapped objects. If the synthesized patterns overlap, select the largest pattern and remove the smaller ones.
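The first heuristic, together with the remainder policy just described, can be sketched in one dimension with integer pixel lengths (an illustrative helper, not from the source):

```python
def proportional_layout(original, new_length):
    """Heuristic 1: copy the original layout proportionally.

    original is a list of structure-region lengths; any remainder left over
    from integer rounding is given to the largest structure region."""
    total = sum(original)
    scaled = [r * new_length // total for r in original]
    remainder = new_length - sum(scaled)
    scaled[scaled.index(max(scaled))] += remainder
    return scaled

print(proportional_layout([100, 305, 100], 600))  # → [118, 364, 118]
```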
Given an original exemplar region R_o with layout L_o and a synthetical layout L_s, synthesize the texture of the region R_s. Consider the one-dimensional horizontal case first, then extend to the general case. When the length of the synthesized region l_s is less than that of the original region l_{ro}, it is a shrinking operation. When l_s is larger than l_{ro}, it is an extension operation. For a horizontal exemplar region R_o, partition the region with the vertical boundaries of the bounding boxes of objects. The area between two neighboring objects S_i and S_{i+1} is the buffer region Br_i. For each pixel position t on the top of the strip and each pixel position b on the bottom of the strip, compute a separating path c_{t,b}. Pre-compute paths for the possible combinations of pixel positions. Each path c_{t,b} separates the buffer Br into a left part Br^l and a right part Br^r. One goal is to determine the cut position x and the connecting position y to shrink the original region.
Each possible x deduces a finite set of possible connectors y over the buffer regions Br. Process the possible x and y variables, and the possible paths c_v. A possible position v with cut c_v divides the buffer region into Br_v^l and Br_v^r; the cost of the division is measured by Equation 11:
E_c(c_v) = Σ_{p_v ∈ c_v} E_l(p_v) (α + 1 / max(HoG(p_v)))
where E_l(p_v) is the sum of the horizontal and vertical gradients at p_v, HoG(p_v) is the histogram of oriented gradients at each pixel, and α is a control factor. Here, the gradient is computed on the combined image of Br_v^l and Br_v^r with c_v as the boundary. The v with the minimum score is selected as the cutting position x and the connecting position y. The time complexity is O(l · n_b), where n_b is the number of buffer regions.
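A direct reading of Equation 11 can be sketched as follows; the gradient and HoG lookups are stubbed with hypothetical dictionaries keyed by pixel coordinates:

```python
def cut_cost(path_pixels, grad, hog, alpha=0.5):
    """Equation 11 sketch: E_c(c_v) = sum over pixels p_v on the path of
    E_l(p_v) * (alpha + 1 / max(HoG(p_v)))."""
    return sum(grad[p] * (alpha + 1.0 / max(hog[p])) for p in path_pixels)

# stub lookups for a two-pixel path (values chosen for a clean hand check)
grad = {(0, 0): 2.0, (1, 0): 4.0}
hog = {(0, 0): [0.1, 0.5], (1, 0): [0.2, 0.25]}
# 2*(0.5 + 1/0.5) + 4*(0.5 + 1/0.25) = 5 + 18
print(cut_cost([(0, 0), (1, 0)], grad, hog, alpha=0.5))  # → 23.0
```

In the real pipeline this score would be evaluated for every candidate position v, and the minimizer chosen as the cut.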
An extension operation is similar to the shrinking operation in that it searches for x and y, but the restriction changes to searching for the pair that extends the current buffer length. The operation is repeated until it at least reaches the required synthetical size; for any spill-over, the shrink operation is then run to obtain the required size. For a regular 2D layout, the synthesis can be obtained by a horizontal synthesis followed by a vertical synthesis, or vice versa. If a cut would go through an object, the region containing the object is partitioned into smaller, non-overlapping sub-regions in which the synthesis is carried out hierarchically, such that the cuts do not cross the objects.
For a typical image size of 1000×1000, the symmetry detection takes on average about one hour; about 85% of that time is spent computing the similarity map. In practice, the detection algorithm can be run on a down-sampled image of size 350×350, which takes about 15 minutes. Once the clustered similar points are obtained from the down-sampled image, the subsequent computation proceeds on the image at its original size. For each image, first roughly estimate a size scale in meters as an input parameter. Then fix the parameter w to a default value of, for example, 1.5 meters (or another number), and the quantized unit of the transform space to, for example, 0.3 meter.
An image of Palais de Versailles is shown in FIG. 15 at 1500 and is 2158×1682. Both the hierarchical symmetries and the boundaries between different neighboring symmetry patterns are detected. At
1510, an input image is shown. At 1520, a 3D geometry model is shown, and at 1530, a textured 3D model is shown.
A Brussels example 1600 in FIG. 16 is of resolution 1036×1021 and has multiple facades. At 1610, an example input image is shown. At 1620, clustering of repetitive patterns is illustrated. At 1630,
detection of repetitive patterns and non-repetitive elements is shown. At 1640, an example facade layout is shown. At 1650, a 3D geometry model is shown and at 1660, a textured 3D model is shown.
A Beijing example 1700 is shown in FIG. 17 of size 844×1129 and contains repetitive patterns in small scales. At 1710, an input image is shown. At 1720, symmetry detection is illustrated. At 1730, a
facade layout is shown. At 1740, a textured 3D model is shown. At 1750, a synthetical 3D geometry model is shown and at 1760, a textured synthetical 3D model is illustrated.
Other representative examples are shown at 1800 of FIG. 18. All results are automatically computed. The first column at 1810 shows example input images followed by the second column at 1820 which
shows symmetry detection. The column at 1830 shows a facade layout whereas the column at 1840 shows a 3D symmetry model followed by column 1850 which is textured 3D models.
FIG. 19 shows rotational symmetry examples 1900, where an input image is shown at 1910 and 1920, and rotational symmetry detection is illustrated at 1930 and 1940.
TABLE 1. The performance is categorized into three groups by the repeated times N of a repetitive pattern. The detection performs best for larger N thanks to a more reliable voting in the transform space.

        N > 10    3 <= N <= 10    N = 2    Overall
  E_r   91.4%     86.8%           67.2%    83.7%
  E_a   93.3%     87.1%           84.2%    88.9%
TABLE 2. The detection correctness for three kinds of symmetries. Translational symmetries are the most robust, and the rotational ones are the weakest, due to the increased complexity and difficulty of seeking rotation-invariant similarity.

        Translation    Rotation    Hierarchical
  E_r   89.2%          83.5%       80.2%
In general, define the detection rate E_r as the number of discovered repetitive patterns over the number of all repetitive patterns, where the ground-truth number is counted by inspection of the images. Define the accuracy rate of detection E_a as the number of correctly discovered patterns over the number of all discovered patterns. The overall E_r and E_a are 83.7% and 88.9%, for example. For tile segmentation, the method used 20 ground-truth segmentations, for example, where the overall pixel accuracy was 90.2%.
In one example, a database of about 300 architectural objects was created. The detection rate of non-repetitive objects for real facade modeling is about 76.1%, for example. The subject symmetry detection method avoids over-partitioning and is convertible into general contain rules. An example facade analysis is shown at 2000 of FIG. 20.
An example symmetry detection process typically lasts about 5 minutes for a region of 1000×1000. The parameter α used in the energy function is by default set to 0.5, for example. Some representative synthetic facade examples are shown at 2100 in FIG. 21, which includes examples of shrinking and extension and shows that the original characteristics of the facades are well preserved. The rules can also be employed to process hidden portions of a high-rise building, for example. In the example given in FIG. 20, rules can be merged from two (or more) different images of the same building in order to generate the complete model of the facade.
FIG. 22 illustrates an example modeling system 2200 for generating modeled images of buildings or other structures and shapes via symmetry detection. The system 2200 includes a detector 2210 to
determine sampling points in an image and to generate similarity maps for the sampling points. This includes a cluster component 2220 to determine image lattices and transform lattices within the
image in order to determine multiple repetitive patterns of arbitrary shapes. A rectangular analyzer 2230 determines regions of interest for non-repetitive patterns within the image and a facade
layout component 2240 generates a set of disjoint structure regions to facilitate symmetry detection within the image.
Other aspects that are not illustrated, but can be included with the modeling system 2200, include a translation symmetry component to determine one or more sample points in an image to facilitate the
symmetry detection within the image, wherein the translation symmetry component includes a clustering component to yield one or more potential image lattices via construction of a transform lattice
operating on the one or more sample points within the image. A tessellation component determines similarity points associated with a lattice, wherein the similarity points are employed to construct
horizontal and vertical lines through centroids associated with the sample points within the image. A rotational symmetry component determines regions of translational symmetry that occur locally in
facades of buildings, wherein the rotational symmetry component determines a rotation center and a rotation angle to determine the regions of translational symmetry.
A facade analysis component determines repetitive patterns and non-repetitive objects within a facade, wherein the repetitive patterns are determined via distances between disjoint symmetry patterns
and the non-repetitive objects are determined by analyzing texture information from image regions that are not associated with the repetitive patterns. One or more grammar rules can be provided with
the system 2200 to facilitate automatic generation of a facade, wherein the grammar rules are associated with a computer generated architecture ("CGA") shape to facilitate symmetry detection within
the image. This can include one or more contain rules that define which patterns or shapes are included within an area defined as a building facade. This also can include one or more heuristic rules
that define a synthetical layout for the building facade, wherein the heuristic rules are associated with copying an original layout proportion, replacing structural regions by a multiple of a
determined structural region, or copying an original facade layout by randomly adjusting one or more regions within the facade layout.
FIG. 23 illustrates an example computer-readable medium 2300 of instructions for causing a computer to generate modeled images of buildings or other structures employing repetitive pattern and
symmetry detection. The computer-readable medium 2300 includes instructions 2310 for causing a computer to determine selected patterns from potential repetitive patterns and clustering the selected
patterns as actual repetitive patterns from a first domain into at least one other domain by utilizing information from a transformation domain and a spatial domain. This includes instructions 2320
for causing a computer to extract one or more shapes for the actual repetitive patterns based in part on the information from the transformation domain and the spatial domain. This also includes
instructions 2330 for causing a computer to determine sampling points in an image and to generate similarity maps for the sampling points. Another aspect includes instructions 2340 for causing a
computer to determine image lattices and transform lattices within the image in order to determine one or more symmetry patterns. This also includes instructions 2350 for causing a computer to
generate a set of disjoint structure regions to facilitate symmetry detection within the image based in part on the one or more symmetry patterns. The computer-readable medium can also include
instructions for causing a computer to determine regions of interest for non-repetitive patterns within the image, wherein the instructions include one or more translational symmetries and one or more rotational symmetries to determine the non-repetitive patterns.
As it employed in the subject specification, the term "processor" can refer to substantially any computing processing unit or device comprising, but not limited to comprising, single-core processors;
single-processors with software multithread execution capability; multi-core processors; multi-core processors with software multithread execution capability; multi-core processors with hardware
multithread technology; parallel platforms; and parallel platforms with distributed shared memory. Additionally, a processor can refer to an integrated circuit, an application specific integrated
circuit (ASIC), a digital signal processor (DSP), a field programmable gate array (FPGA), a programmable logic controller (PLC), a complex programmable logic device (CPLD), a discrete gate or
transistor logic, discrete hardware components, or any combination thereof designed to perform the functions described herein. Processors can exploit nano-scale architectures such as, but not limited
to, molecular and quantum-dot based transistors, switches and gates, in order to optimize space usage or enhance performance of user equipment. A processor may also be implemented as a combination of
computing processing units.
In the subject specification, terms such as "data store," "data storage," "database," and substantially any other information storage component relevant to operation and functionality of a component,
refer to "memory components," or entities embodied in a "memory" or components comprising the memory. It will be appreciated that the memory components described herein can be either volatile memory
or nonvolatile memory, or can include both volatile and nonvolatile memory. By way of illustration, and not limitation, nonvolatile memory can include read only memory (ROM), programmable ROM (PROM),
electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM), which acts as external cache memory. By way of
illustration and not limitation, RAM is available in many forms such as synchronous RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM
(ESDRAM), Synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). Additionally, the disclosed memory components of systems or methods herein are intended to comprise, without being limited to
comprising, these and any other suitable types of memory.
Various aspects or features described herein may be implemented as a method, apparatus, or article of manufacture using standard programming and/or engineering techniques. The term "article of
manufacture" as used herein is intended to encompass a computer program accessible from any computer-readable device, carrier, or media. For example, computer readable media can include but are not
limited to magnetic storage devices (e.g., hard disk, floppy disk, magnetic strips . . . ), optical disks (e.g., compact disk (CD), digital versatile disk (DVD) . . . ), smart cards, and flash memory
devices (e.g., card, stick, key drive . . . ).
In addition, the term "or" is intended to mean an inclusive "or" rather than an exclusive "or." That is, unless specified otherwise, or clear from context, "X employs A or B" is intended to mean any
of the natural inclusive permutations. That is, if X employs A; X employs B; or X employs both A and B, then "X employs A or B" is satisfied under any of the foregoing instances. Moreover, articles
"a" and "an" as used in the subject specification and annexed drawings should generally be construed to mean "one or more" unless specified otherwise or clear from context to be directed to a
singular form.
What has been described above includes examples of systems and methods that provide advantages of the subject innovation. It is, of course, not possible to describe every conceivable combination of
components or methodologies for purposes of describing the claimed subject matter, but one of ordinary skill in the art may recognize that many further combinations and permutations of the various
embodiments described herein are possible. Furthermore, to the extent that the terms "includes," "has," "possesses," and the like are used in the detailed description, claims, appendices and
drawings such terms are intended to be inclusive in a manner similar to the term "comprising" as "comprising" is interpreted when employed as a transitional word in a claim.
Patent applications by Long Quan, Hong Kong CN
Patent applications by Peng Zhao, Hong Kong CN
Patent applications by THE HONG KONG UNIVERSITY OF SCIENCE AND TECHNOLOGY
Patent applications in class Feature extraction
| {"url":"http://www.faqs.org/patents/app/20130011069","timestamp":"2014-04-20T02:10:43Z","content_type":null,"content_length":"118222","record_id":"<urn:uuid:30ae499d-36da-47b5-a89a-6cb8bb7833bc>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
How much mortar
Zen83237 wrote:
"R" wrote in message
"Zen83237" wrote in message
Very quick question. How many bags of sand will I need to lay 80 bricks
for the base of a greenhouse.
How big are your bags of sand?
The normal Wickes size from my local builders merchant. I can't see that
it will need many.
I would have thought a barrow/mixer load would do that many bricks. That
usually works out at three bags of sand and half a cement...
/=================================================================\
| Internode Ltd -
| John Rumm - john(at)internode(dot)co(dot)uk |
\=================================================================/ | {"url":"http://www.diybanter.com/uk-diy/270135-how-much-mortar.html","timestamp":"2014-04-18T18:56:44Z","content_type":null,"content_length":"68435","record_id":"<urn:uuid:abc1de0d-9896-440c-93f3-9bd8e28c8ca0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: REMARKS ON A CONJECTURE ON CERTAIN INTEGER SEQUENCES
Abstract. The periodicity of sequences of integers (a_n)_{n∈Z} satisfying the inequalities
0 ≤ a_{n-1} + λa_n + a_{n+1} < 1 (n ∈ Z)
is studied for real λ with |λ| < 2. Periodicity is proved in case λ is the golden ratio; for other
values of λ, statements on possible period lengths are given. Further interesting results on the
morphology of periods are illustrated.
The problem is connected to the investigation of shift radix systems and of Salem numbers.
1. Introduction
In this note we will analyze the following conjecture raised in ([1], Conjecture 6.1):
Conjecture 1.1. Let λ ∈ R and assume that the sequence of integers (a_n)_{n∈Z} satisfies the inequalities
(1.1) 0 ≤ a_{n-1} + λa_n + a_{n+1} < 1 (n ∈ Z).
If |λ| < 2 then (a_n)_{n∈Z} is periodic.
The conjecture is supported by extensive computer experiments and by some theorems, which
we will collect below. It is trivially true for λ = -1, 0, 1.
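The computer experiments are easy to reproduce: since a_{n-1} and a_{n+1} are integers, inequality (1.1) determines a_{n+1} uniquely from its two predecessors, namely a_{n+1} = -⌊λa_n⌋ - a_{n-1}. The following sketch (our own helper, not from the paper) iterates this recurrence and reports the cycle length of consecutive pairs:

```python
import math

def find_period(lam, a0, a1, max_steps=10000):
    """Iterate a_{n+1} = -floor(lam * a_n) - a_{n-1} and return the period
    of the pair (a_n, a_{n+1}), or None if no repeat is seen."""
    seen = {}
    pair = (a0, a1)
    for n in range(max_steps):
        if pair in seen:
            return n - seen[pair]
        seen[pair] = n
        pair = (pair[1], -math.floor(lam * pair[1]) - pair[0])
    return None

golden = (1 + math.sqrt(5)) / 2
print(find_period(golden, 1, 1))  # → 5
```

Starting from (1, 1) with λ the golden ratio, the sequence runs 1, 1, -2, 3, -2, 1, 1, . . . with period 5, consistent with the periodicity result above.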
The conjecture seems to be interesting by itself, but there are also connections to other areas.
Firstly, let us recall the definition of a shift radix system. To a vector r ∈ R^d we associate the
mapping τ_r : Z^d | {"url":"http://www.osti.gov/eprints/topicpages/documents/record/549/1683881.html","timestamp":"2014-04-17T09:49:14Z","content_type":null,"content_length":"8351","record_id":"<urn:uuid:ae8905fe-89f1-4a26-b523-8bfec5c56d76>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Horn Logic
What is "Horn Logic"
Contributed by IgorMozetic (in progress)
Since the RIF charter mentions "Horn Logic" several times, this is an attempt to define the term more precisely.
I propose to define "Horn Logic" as a formal language with a corresponding inference procedure, the two together forming a formal system. The formal language allows one to express rules (and facts) and queries. The inference procedure computes answers to queries from the rules (and facts).
The formal language to express rules is that of a "definite program" [1] or "logic program" [2, 3] as defined in the Logic Programming community (and "definite goal" [1] or "query" [2] for queries, respectively). There is no need to repeat the precise syntax here, just some highlights: no negation in rules' conditions, no existentially quantified variables, and function symbols are allowed.
The "Horn Logic" language has declarative semantics defined by the least (or minimal) Herbrand model [1]. The language can express any computable function (or simulate the universal Turing machine) and is therefore undecidable. E.g., checking whether a set of Horn rules implies a fact is undecidable. In the presence of function symbols, terms of arbitrary depth can be constructed and thus the Herbrand model may be infinite.
Apart from the declarative, model-theoretic semantics, the "Horn Logic" language also has a procedural semantics. It is defined by an inference procedure called SLD-resolution (Linear resolution with Selection function for Definite clauses [1]). SLD-resolution is a sound and complete inference procedure, meaning that the declarative and procedural semantics coincide. However, due to undecidability, SLD-resolution might not terminate (when the Herbrand model is infinite and the query is not a logical consequence of the rules). The SLD-refutation procedure is underspecified in the sense that it leaves open the order in which atoms in the body are selected and the order in which clauses are tried. To guarantee soundness and completeness, the following conditions have to be met:
• fair search for matching clauses (e.g., depth-first is not fair)
• unification with occurs-check
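To make the procedure concrete, here is a minimal propositional SLD-refutation sketch using a breadth-first (hence fair) search over goal clauses. The rule representation is an illustrative assumption; unification is omitted because the example is propositional, so no occurs-check is needed:

```python
from collections import deque

def sld_prove(rules, goal):
    """Propositional SLD-resolution with breadth-first (fair) search.
    rules is a list of (head, [body atoms]); facts have empty bodies."""
    # each queue entry is a goal clause: a list of atoms still to be proved
    queue = deque([[goal]])
    while queue:
        goals = queue.popleft()
        if not goals:                # empty goal clause: refutation found
            return True
        selected, rest = goals[0], goals[1:]   # leftmost selection rule
        for head, body in rules:
            if head == selected:
                queue.append(body + rest)      # resolve and continue searching
    return False                     # finite search space exhausted

rules = [
    ("ancestor_tom_ann", ["parent_tom_bob", "ancestor_bob_ann"]),
    ("ancestor_bob_ann", ["parent_bob_ann"]),
    ("parent_tom_bob", []),
    ("parent_bob_ann", []),
]
print(sld_prove(rules, "ancestor_tom_ann"))  # → True
print(sld_prove(rules, "ancestor_ann_tom"))  # → False
```

As the undecidability remark above suggests, with recursive rules and an unprovable goal this search may run forever; the breadth-first queue only guarantees that a refutation is found whenever one exists.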
What is a "subset of standard Prolog"
By "standard Prolog" we indicate an operational computer language, as defined by the ISO standard. The ISO standard is apparently not freely available, but we found the following references to the
pre-final draft [4, 5].
By a "subset of standard Prolog" we mean a language which allows only user-defined predicates in the body of a Horn clause (no cut, negation, built-ins, meta- or extra-logical predicates).
In relation to "Horn Logic", a "subset of standard Prolog" is a logic program with clauses ordered sequentially, and atoms in the body ordered from left to right. "Standard Prolog" systems implement a deterministic resolution procedure in which the leftmost atom in the body is selected, and clauses are tried in the order in which they were specified. Such an implementation of SLD-resolution in a "standard Prolog" system does not preserve its completeness.
[1] J.W. Lloyd, Foundations of Logic Programming, Springer-Verlag, 1987.
[2] L. Sterling, E. Shapiro, The Art of Prolog, MIT Press, 1994.
[3] E. Dantsin, T. Eiter, G. Gottlob, A. Voronkov, Complexity and Expressive Power of Logic Programming, ACM Computing Surveys 33 (3), pp. 374-425, Sep. 2001.
[4] http://pauillac.inria.fr/~deransar/prolog/docs.html
[5] http://www.cs.cmu.edu/afs/cs/project/ai-repository/ai/lang/prolog/doc/standard/ | {"url":"http://www.w3.org/2005/rules/wg/wiki/Horn_Logic.html","timestamp":"2014-04-20T13:45:13Z","content_type":null,"content_length":"9119","record_id":"<urn:uuid:75ffdccc-cfe5-4028-b0ef-1ee4411cac80>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
Practical Optimization: A Gentle Introduction
John W. Chinneck
Systems and Computer Engineering
Carleton University
Ottawa, Ontario K1S 5B6
The chapters appearing below are draft chapters of an introductory textbook on optimization. The ultimate objective is the creation of a complete, yet compact, introductory survey text on the major
topics in optimization. The material is derived from the lecture notes used by the author in engineering courses at Carleton University, and reflects the design considerations for those courses:
The material is written at the introductory level, assuming no more knowledge than high school algebra. Most concepts are developed from scratch. I intend this to be a gentle introduction.
You will need an Adobe Acrobat reader to read the files. A free reader is downloadable from www.adobe.com
Comments and suggestions are actively sought. Please contact the author at chinneck@sce.carleton.ca. This is a draft, so some diagrams are hand-drawn, and there may be typos, etc. Further chapters
will be added as time permits. Please let me know of any corrections that are needed.
Browse the algorithm animations. These provide animated illustrations of many of the key concepts. The student developers of these animations were funded through an IBM Faculty Award to Prof.
Chinneck. Last revision: August 28, 2007.
Printing out the whole volume? Then add the front matter for a better look. Last revision: November 6, 2012.
Chapter 1: Introduction. An introduction to the process of optimization and an overview of the major topics covered in the course. Last revision: December 12, 2010.
Chapter 2: Introduction to Linear Programming. The basic notions of linear programming and the simplex method. The simplex method is the easiest way to provide a beginner with a solid understanding
of linear programming. Last revision: September 19, 2007.
Chapter 3: Towards the Simplex Method for Efficient Solution of Linear Programs. Cornerpoints and bases. Moving to improved solutions. Last revision: August 18, 2006.
Chapter 4: The Mechanics of the Simplex Method. The tableau representation as a way of illustrating the process of the simplex method. Special cases such as degeneracy and unboundedness. Last
revision: August 18, 2006.
Chapter 5: Solving General Linear Programs. Solving non-standard linear programs. Phase 1 LPs. Feasibility and infeasibility. Unrestricted variables. Last revision: August 18, 2006.
Chapter 6: Sensitivity Analysis. Simple computer-based sensitivity analysis. Last revision: August 18, 2006.
Chapter 7: Linear Programming in Practice. Mention of other solution methods such as revised simplex method and interior point methods. Mention of advanced techniques used in practice such as
advanced and crash start methods, infeasibility analysis, and modelling systems. Last revision: August 18, 2006.
Chapter 8: An Introduction to Networks. Some basic network concepts. The shortest route problem. The minimum spanning tree problem. Last revision: September 10, 2010.
Chapter 9: Maximum Flow and Minimum Cut. The maximum flow and minimum cut problems in networks. Last revision: August 18, 2006.
Chapter 10: Network Flow Programming. A surprising range of problems can be solved using minimum cost network flow programming, including shortest route, maximum flow and minimum cut, etc. Variations
such as generalized and processing networks are also briefly introduced. Last revision: October 23, 2012.
Chapter 11: PERT for Project Planning and Scheduling. PERT is a network-based aid for project planning and scheduling. Many optimization problems involve some aspect of the timing of activities that
may run sequentially or in parallel, or the timing of resource use. PERT diagrams help you to understand and formulate such problems. Last revision: October 23, 2012.
Chapter 12: Integer/Discrete Programming via Branch and Bound. Branch and bound is the basic workhorse method for solving many kinds of problems that have variables that can take on only integer,
binary, or discrete values. Last revision: October 23, 2012.
Chapter 13: Binary and Mixed-Integer Programming. These are specialized versions of branch and bound. A binary program has only binary variables (0 or 1 only). A mixed-integer program looks like a
linear program, except that some or all of the variables are integer-valued (or binary-valued), while others might be real-valued. Last revision: October 27, 2011.
Chapter 14: Heuristics for Discrete Search: Genetic Algorithms and Simulated Annealing. Some problems are just too big for branch and bound, in which case you must abandon the guarantee of finding
the optimum solution and instead opt for heuristic methods which can only guarantee to do fairly well most of the time. Genetic Algorithms and Simulated Annealing are two popular heuristic methods
for use on very large problems. Last revision: August 22, 2006.
Chapter 15: Dynamic Programming. This optimization technique builds towards a solution by first solving a small part of the whole problem, and then gradually incrementing the size in a series of
stages until the whole problem is solved. Efficiency results from combining the local solution for a stage with the optimum found for a previous stage. We look at the simplest deterministic discrete
cases. Last revision: September 6, 2011.
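As a small taste of the stagewise idea, here is a backward-recursion sketch (illustrative only, not taken from the chapter):

```python
# shortest path through stages: costs[stage][i][j] is the arc cost from
# node i at this stage to node j at the next stage
def stage_dp(costs):
    """Solve backwards: best[i] = minimum cost from node i to the end."""
    best = [0.0] * len(costs[-1][0])   # terminal-stage costs are zero
    for stage in reversed(costs):
        # combine each arc with the optimum already found for the next stage
        best = [min(c + best[j] for j, c in enumerate(row)) for row in stage]
    return best

costs = [
    [[2, 5], [4, 1]],   # stage 1: 2 nodes -> 2 nodes
    [[3], [6]],         # stage 2: 2 nodes -> 1 terminal node
]
print(stage_dp(costs))  # → [5.0, 7.0]
```

Note how each stage reuses the optimum of the previous stage instead of re-enumerating whole paths, which is exactly where the efficiency comes from.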
Chapter 16: Introduction to Nonlinear Programming (NLP). NLP is a lot harder than linear programming. We start by looking at the reasons for this. Next we look at the simplest method for solving the
simplest type of NLP: unconstrained problems that consist only of a nonlinear objective function. The method of steepest ascent/descent is described. Last revision: November 14, 2012.
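As a taste of the method of steepest descent, here is a minimal sketch (illustrative only; the chapter's own notation and examples may differ):

```python
def steepest_descent(grad, x, step=0.1, tol=1e-8, max_iter=1000):
    """Fixed-step steepest descent: move against the gradient until it vanishes."""
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) < tol:   # gradient (nearly) zero: stop
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# minimize f(x, y) = (x - 1)^2 + (y + 2)^2; gradient is (2(x - 1), 2(y + 2))
xmin = steepest_descent(lambda p: [2 * (p[0] - 1), 2 * (p[1] + 2)], [0.0, 0.0])
print([round(v, 4) for v in xmin])  # → [1.0, -2.0]
```

A fixed step length is the crudest choice; the chapter's discussion of line searches explains how the step is chosen more carefully in practice.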
Chapter 17: Pattern Search for Unconstrained NLP. What do you do if you don’t have access to gradient information? In that case you can use pattern search techniques (also known as derivative-free,
direct search, or black box methods). We look at the classic Hooke and Jeeves pattern search method. Last revision: February 24, 2014.
Chapter 18: Constrained Nonlinear Programming. Now that we have some idea of how to solve unconstrained NLPs, how do we deal with constrained NLPs? The first idea is to turn them into unconstrained
NLPs of course! This is done by using penalty and barrier methods which replace or modify the original objective function in ways that make feasible points attractive in the resulting unconstrained
problem. Last revision: March 6, 2014.
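A one-variable sketch of the penalty idea (Python; the example problem, the quadratic penalty, and the step-size rule are all illustrative assumptions): minimize f(x) = x^2 subject to x >= 1 by penalizing constraint violations. For each penalty weight mu the unconstrained minimizer is mu/(1+mu), which approaches the constrained optimum x = 1 as mu grows.

```python
def penalized_minimizer(mu, x=0.0, iters=1000):
    # Minimize x^2 + mu * max(0, 1 - x)^2 by gradient descent.
    step = 1.0 / (2 + 2 * mu)   # safe step size for this objective's curvature
    for _ in range(iters):
        g = 2 * x
        if x < 1:               # penalty is active only where the constraint is violated
            g -= 2 * mu * (1 - x)
        x -= step * g
    return x

xs = [penalized_minimizer(mu) for mu in (1, 10, 100, 1000)]
# xs climbs toward the constrained optimum x = 1 from the infeasible side.
```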
Chapter 19: Handling Equality Constraints in NLP. Equality constraints are the hardest to handle in nonlinear programming. We look at two ways of dealing with them: (i) the method of Lagrange, and
(ii) the Generalized Reduced Gradient (GRG) method. And we take a look at making linear approximations to nonlinear functions because we need that for the GRG method. Last revision: March 18, 2014.
Chapter 20: Function and Region Shapes, the Karush-Kuhn-Tucker (KKT) Conditions, and Quadratic Programming. Function and region shapes (convex, nonconvex, etc.) are important in understanding how
nonlinear solvers work. The KKT conditions are very useful in deciding whether a solution point is really a local optimum in a nonlinear program, and form the basis for some methods of solving
quadratic programs. Last revision: April 9, 2014.
Animations for coming NLP chapters:
· See animations NLP3, NLP4, NLP5, NLP6,
Last update April 9, 2014. | {"url":"http://www.sce.carleton.ca/faculty/chinneck/po.html","timestamp":"2014-04-21T07:04:07Z","content_type":null,"content_length":"19389","record_id":"<urn:uuid:fe2de93e-52b2-44f0-8e94-72aa96b6341b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Number of Distinct Prime Factors for Which oe(N
, 1995
"... We extend the method due originally to Loh and Niebuhr for the generation of Carmichael numbers with a large number of prime factors to other classes of pseudoprimes, such as Williams's
pseudoprimes and elliptic pseudoprimes. We exhibit also some new Dickson pseudoprimes as well as superstrong Dicks ..."
Cited by 2 (0 self)
Add to MetaCart
We extend the method due originally to Loh and Niebuhr for the generation of Carmichael numbers with a large number of prime factors to other classes of pseudoprimes, such as Williams's pseudoprimes
and elliptic pseudoprimes. We exhibit also some new Dickson pseudoprimes as well as superstrong Dickson pseudoprimes.
, 1997
"... We call a family of primes P normal if it contains no two primes p, q such that p divides q − 1. In this thesis we study two conjectures and their related variants. Giuga's conjecture is that $\sum_{k=1}^{n-1} k^{n-1} \equiv n-1 \pmod{n}$ implies n is prime. We study a group of eight varian ..."
Add to MetaCart
We call a family of primes P normal if it contains no two primes p, q such that p divides q − 1. In this thesis we study two conjectures and their related variants. Giuga's conjecture is that
$\sum_{k=1}^{n-1} k^{n-1} \equiv n-1 \pmod{n}$ implies n is prime. We study a group of eight variants of this equation and derive necessary and sufficient conditions for which they hold. Lehmer's
conjecture is that $\varphi(n) \mid n-1$ if and only if n is prime. This conjecture has been verified for up to 13 prime factors of n, and we extend this to 14 prime factors. We also examine the related
condition $\varphi(n) \mid n+1$, which is known to have solutions with up to 6 prime factors, and we extend the search to 7 prime factors. For both of these conjectures the set of prime factors of any
counterexample n is a normal family, and we exploit this property in our computations.
Series Solutions to Second Order Linear Differential Equations
We have fully investigated solving second order linear differential equations with constant coefficients. Now we will explore how to find solutions to second order linear differential equations whose
coefficients are not necessarily constant. Let
P(x)y'' + Q(x)y' + R(x)y = g(x)
Be a second order differential equation with P, Q, R, and g all continuous. Then x[0] is a singular point if P(x[0]) = 0, but Q and R do not both vanish at x[0]. Otherwise we say that x[0] is an
ordinary point. For now, we will investigate only ordinary points.
Find a solution to
y'' + xy' + y = 0,    y(0) = 0,    y'(0) = 1
Since the differential equation has non-constant coefficients, we cannot assume that a solution is in the form y = e^rx. Instead, we use the fact that the second order linear differential equation
must have a unique solution. We can express this unique solution as a power series
If we can determine the a[n] for all n, then we know the solution. Fortunately, we can easily take derivatives
Now we plug these into the original differential equation
We can multiply the x into the second term to get
We would like to combine like terms, but there are two problems. The first is that the powers of x do not match, and the second is that the summations begin at different indices. We will first deal with the
powers of x. We shift the index of the first summation by letting
u = n - 2, so that n = u + 2
We arrive at
Since u is a dummy variable, we can call it n instead to get
Next we deal with the second issue. The second summation begins at 1 while the first and third begin at 0. We deal with this by pulling out the 0^th term. We plug in 0 into the first and third series
to get
(0 + 2)(0 + 1)a[0+2]x^0 = 2a[2]
a[0]x^0 = a[0]
We can write the series as
The initial conditions give us that
a[0] = 0 and a[1] = 1
Now we equate coefficients. The terms in the series begin with the first power of x, hence the constant term gives us
2a[2] + a[0] = 0
Since a[0] = 0, so is a[2]. Now the coefficient in front of x^n is zero for all n. We have
(n + 2)(n + 1)a[n+2] + (n + 1)a[n] = 0
Solving for a[n+2] gives
a[n+2] = -a[n]/(n + 2)
We immediately see that
a[n] = 0
for n even. Now compute the odd a[n]
a[1] = 1    a[3] = -1/3    a[5] = 1/(3·5)    a[7] = -1/(3·5·7)
In general
a[2n+1] = (-1)^n/(3·5·7·...·(2n+1)) = 2^n(n!)(-1)^n/(2n + 1)!
The final solution is
This cannot be written in terms of elementary functions; however, a computer can graph it or calculate a value to as many decimal places as needed.
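For instance, the recursion a[n+2] = -a[n]/(n + 2) derived above generates the coefficients directly — a short Python sketch that also evaluates the partial sum:

```python
from math import factorial

# Recursion from the text: a[n+2] = -a[n]/(n + 2), with a0 = 0, a1 = 1.
N = 12
a = [0.0] * (N + 1)
a[1] = 1.0
for n in range(N - 1):
    a[n + 2] = -a[n] / (n + 2)

def y(x):
    # Partial sum of the series solution.
    return sum(c * x**k for k, c in enumerate(a))
```

The computed coefficients match the closed form a[2n+1] = 2^n (n!) (-1)^n / (2n+1)! above, and all even coefficients vanish.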
Find the first three nonzero terms of two linearly independent solutions to
xy'' + 2y = 0
Notice that 0 is a singular point of this differential equation. We will not be able to find a solution in the form Σ a[n]x^n, since the solution will not be differentiable at zero. Alternatively, we
find a solution in the form
This is the power series centered about x = 1, which is not a singular point. Now take derivatives
Plugging into the differential equation gives
x = (x - 1) + 1
and multiplying through gives
Let u = n - 1 in the first summation and u = n - 2 in the second; changing the index variable back to n then gives
Now plugging in n = 0 into the second and third series we get
Now we can equate coefficients to find
2a[2] + 2a[0] = 0
(n + 1)na[n+1] + (n + 2)(n + 1)a[n+2] + 2a[n] = 0
The first equation says that
a[2] = -a[0]
The recursion relationship says
a[n+2] = [-(n + 1)n a[n+1] - 2a[n]] / [(n + 2)(n + 1)]
We want to find two linearly independent solutions. To do this, we can choose the first two terms of the series. The easiest choices are
a[0] = 0 a[1] = 1 and a[0] = 1 a[1] = 0
Plugging the first pair, we get
a[0] = 0 a[1] = 1 a[2] = 0 a[3] = [-2(0) - 2(1)]/6 = -1/3
a[4] = [-2(3)(-1/3) - 2(0)]/12 = 1/6
Plugging in the second pair, we get
a[0] = 1 a[1] = 0 a[2] = -1 a[3] = [-2(-1) - 2(0)]/6 = 1/3
We can write
y[1] = (x - 1) - 1/3 (x - 1)^3 + 1/6 (x - 1)^4 + ...
y[2] = 1 - (x - 1)^2 + 1/3 (x - 1)^3 + ...
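Running the recursion with exact rational arithmetic reproduces these coefficients — a sketch (Python; the helper name and use of the fractions module are editorial choices):

```python
from fractions import Fraction

def coeffs(a0, a1, terms=5):
    # Recursion from the text: a[n+2] = (-(n+1)*n*a[n+1] - 2*a[n]) / ((n+2)*(n+1))
    a = [Fraction(a0), Fraction(a1)]
    for n in range(terms - 2):
        a.append((-(n + 1) * n * a[n + 1] - 2 * a[n]) / ((n + 2) * (n + 1)))
    return a

y1 = coeffs(0, 1)   # coefficients 0, 1, 0, -1/3, 1/6  of (x-1)^0, ..., (x-1)^4
y2 = coeffs(1, 0)   # coefficients 1, 0, -1, 1/3, 0
```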
e-mail Questions and Suggestions | {"url":"http://ltcconline.net/greenl/courses/204/powerlaplace/seriessolutions1.htm","timestamp":"2014-04-18T08:23:15Z","content_type":null,"content_length":"26900","record_id":"<urn:uuid:26c365ac-47d6-4daf-b243-d7b9b29c69a4>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00249-ip-10-147-4-33.ec2.internal.warc.gz"} |
Torrance, CA Math Tutor
Find a Torrance, CA Math Tutor
...For the last four years, I have also taught a graduate course in survey research, with content similar to that of the managerial inquiry course, at The Chicago School and was recently
nominated for Adjunct Faculty of The Year. My unique blend of professional experience and academic study make fo...
9 Subjects: including SPSS, reading, writing, piano
...I played Division I water polo at UC Santa Barbara from 2005-2009. I was a starter for 3 years and the team's 2nd leading scorer in my final year. The only people better qualified to teach
water polo than me are Olympians.
26 Subjects: including linear algebra, physics, precalculus, swimming
...I use object lessons and clear explanation to help you get past your stumbling blocks and get the confidence that comes from real understanding. Already rockin' the math? I'll help you drive
ahead to score that scholarship!
24 Subjects: including calculus, SAT math, grammar, prealgebra
...Recently, I helped my older sister who is finishing her senior year in college and is taking upper division courses for her major with a paper she was assigned. She was forewarned that her TA
was a very harsh grader and generally known as just not a nice guy; with my help, she received an A- on ...
22 Subjects: including calculus, trigonometry, algebra 1, algebra 2
...I love to see my kids succeed and with some hard work and determination by all parties I know we will succeed. With good study skills and work habits the future is limitless.For the past
eighteen years, I have been teaching 1-12th grades (and Kindergarten on occasion) at a private Christian scho...
25 Subjects: including prealgebra, algebra 2, algebra 1, geometry
Related Torrance, CA Tutors
Torrance, CA Accounting Tutors
Torrance, CA ACT Tutors
Torrance, CA Algebra Tutors
Torrance, CA Algebra 2 Tutors
Torrance, CA Calculus Tutors
Torrance, CA Geometry Tutors
Torrance, CA Math Tutors
Torrance, CA Prealgebra Tutors
Torrance, CA Precalculus Tutors
Torrance, CA SAT Tutors
Torrance, CA SAT Math Tutors
Torrance, CA Science Tutors
Torrance, CA Statistics Tutors
Torrance, CA Trigonometry Tutors | {"url":"http://www.purplemath.com/Torrance_CA_Math_tutors.php","timestamp":"2014-04-18T14:03:41Z","content_type":null,"content_length":"23807","record_id":"<urn:uuid:f3d60442-9441-4c33-866f-a4472f21cc45>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00442-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convert inch of water column to ounce/square inch - Conversion of Measurement Units
›› Convert inch of water column to ounce/square inch
›› More information from the unit converter
How many inch of water column in 1 ounce/square inch? The answer is 1.72999405266.
We assume you are converting between inch of water column and ounce/square inch.
You can view more details on each measurement unit:
inch of water column or ounce/square inch
The SI derived unit for pressure is the pascal.
1 pascal is equal to 0.00401463078662 inch of water column, or 0.00232060380812 ounce/square inch.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between inches water column and ounces/square inch.
Type in your own numbers in the form to convert the units!
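The conversion above is just a ratio of the two quoted pascal equivalences — a Python sketch:

```python
INCH_WC_PER_PA = 0.00401463078662   # inches of water column per pascal (quoted above)
OZ_IN2_PER_PA = 0.00232060380812    # ounces/square inch per pascal (quoted above)

def oz_in2_to_inch_wc(p):
    """Convert a pressure from ounces/square inch to inches of water column."""
    return p / OZ_IN2_PER_PA * INCH_WC_PER_PA

ratio = oz_in2_to_inch_wc(1)        # about 1.73 inches of water column per oz/in^2
```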
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
This page was loaded in 0.0030 seconds. | {"url":"http://www.convertunits.com/from/inch+of+water+column/to/ounce/square+inch","timestamp":"2014-04-17T04:09:48Z","content_type":null,"content_length":"20133","record_id":"<urn:uuid:63590c89-b24c-4285-9b07-b303074d716e>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
I'm asking again just so I understand this question. U=(0,1,2,3,4,5....) A=(1,2,3,4...) B=(4,8,12,16...) and C=(2,4,6,8...) Determine the following A' intersection C
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5157e0ade4b07077e0c07ffb","timestamp":"2014-04-18T10:50:36Z","content_type":null,"content_length":"126101","record_id":"<urn:uuid:e0301b45-b9ad-4d7c-9043-b287f639c52e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
integral abs values
April 24th 2008, 07:18 PM #1
Apr 2008
integral abs values
For all real b, the integral from 0 to b of the abs(2X) dx is
A) -b *abs(b)
B) b^2
C) -b^2
D) b*abs(b)
E)none of the above
Page three I think
Remember : if x<0, |x|=-x and if x>0, |x|=x.
If b>0 :
$\int_0^b |2x| dx=\int_0^b 2x dx$ since x is in [0,b], positive values.
$=2 \cdot \frac{b^2}{2}=\boxed{b^2}$
Now, if b<0 :
$\int_0^b |2x|dx=\int_0^b -2x dx$ since x is in [b,0], negative values.
$=\int_b^0 2x dx$
$=2 \cdot (\frac{0^2}{2}-\frac{b^2}{2})=\boxed{-b^2}$
Both cases combine into b·|b| (equal to b^2 when b>0 and to -b^2 when b<0), so the answer is D.
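A quick numerical check (Python sketch, midpoint rule) agrees with the two cases worked out above:

```python
def integral_abs_2x(b, steps=20000):
    # Midpoint-rule approximation of the oriented integral of |2x| from 0 to b.
    # The signed step h makes the sum respect the orientation when b < 0.
    h = b / steps
    return sum(abs(2 * (k + 0.5) * h) * h for k in range(steps))
```

Since |2x| is linear on the interval (which never crosses 0), the midpoint rule is essentially exact here, and the result matches b·|b| for either sign of b.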
April 24th 2008, 07:25 PM #2
April 25th 2008, 02:05 AM #3 | {"url":"http://mathhelpforum.com/calculus/35898-integral-abs-values.html","timestamp":"2014-04-18T01:59:49Z","content_type":null,"content_length":"37489","record_id":"<urn:uuid:baa5e2b9-89a5-4a80-a409-2339a2be7ece>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATLAB: An Introduction with Applications 4th Edition
MATLAB: An Introduction with Applications 4th Edition
December 2010, ©2011
MATLAB: An Introduction with Applications is used by more college students than any other MATLAB text or reference. This concise book is known for its just-in-time learning approach, giving students
the information when they need it. The new edition presents the latest MATLAB functionality gradually and in detail. Equally effective as a freshmen-level text, self-study tool, or course reference,
the book is generously illustrated through computer screen shots and step-by-step tutorials, with abundant and motivating applications to problems in mathematics, science, and engineering.
Chapter 1 Starting With MATLAB.
1.1 Starting MATLAB, MATLAB Windows.
1.2 Working In The Command Window.
1.3 Arithmetic Operations With Scalars.
1.4 Display Formats.
1.5 Elementary Math Built-In Functions.
1.6 Defining Scalar Variables.
1.7 Useful Commands For Managing Variables.
1.8 Script Files.
1.9 Examples of MATLAB Applications.
1.10 Problems.
Chapter 2 Creating Arrays.
2.1 Creating A One-Dimensional Array (Vector).
2.2 Creating A Two-Dimensional Array (Matrix).
2.3 Notes About Variables In MATLAB.
2.4 The Transpose Operator.
2.5 Array Addressing.
2.6 Using A Colon : In Addressing Arrays.
2.7 Adding Elements To Existing Variables.
2.8 Deleting Elements.
2.9 Built-In Functions For Handling Arrays.
2.10 Strings And Strings As Variables.
2.11 Problems.
Chapter 3 Mathematical Operations With Arrays.
3.1 Addition And Subtraction.
3.2 Array Multiplication.
3.3 Array Division.
3.4 Element-By-Element Operations.
3.5 Using Arrays In MATLAB Built-In Math Functions.
3.6 Built-In Functions For Analyzing Arrays.
3.7 Generation Of Random Numbers.
3.8 Examples Of Matlab Applications.
3.9 Problems.
Chapter 4 Using Script Files And Managing Data.
4.1 The MATLAB Workspace And The Workspace Window.
4.2 Input To A Script File.
4.3 Output Commands.
4.4 The save and load Commands.
4.5 Importing And Exporting Data.
4.6 Examples Of MATLAB Applications.
4.7 Problems.
Chapter 5 Two-Dimensional Plots.
5.1 The plot Command.
5.2 The fplot Command.
5.3 Plotting Multiple Graphs In The Same Plot.
5.4 Formatting A Plot.
5.5 Plots With Logarithmic Axes.
5.6 Plots With Error Bars.
5.7 Plots With Special Graphics.
5.8 Histograms.
5.9 Polar Plots.
5.10 Putting Multiple Plots On The Same Page.
5.11 Multiple Figure Windows.
5.12 Examples Of MATLAB Applications.
5.13 Problems.
Chapter 6 Programming In MATLAB.
6.1 Relational And Logical Operators.
6.2 Conditional Statements.
6.3 The Switch-Case Statement.
6.4 Loops.
6.5 Nested Loops And Nested Conditional Statements.
6.6 The Break And Continue Commands.
6.7 Examples Of Matlab Applications.
6.8 Problems.
Chapter 7 User-Defined Functions And Function Files.
7.1 Creating A Function File.
7.2 Structure Of A Function File.
7.3 Local And Global Variables.
7.4 Saving A Function File.
7.5 Using A User-Defined Function.
7.6 Examples Of Simple User-Defined Functions.
7.7 Comparison Between Script Files And Function Files.
7.8 Anonymous And Inline Functions.
7.9 Function Functions.
7.10 Subfunctions.
7.11 Nested Functions.
7.12 Examples Of MATLAB Applications.
7.13 Problems.
Chapter 8 Polynomials, Curve Fitting, And Interpolation.
8.1 Polynomials.
8.2 Curve Fitting.
8.3 Interpolation.
8.4 The Basic Fitting Interface.
8.5 Examples Of MATLAB Applications.
8.6 Problems.
Chapter 9 Applications In Numerical Analysis.
9.1 Solving An Equation With One Variable.
9.2 Finding A Minimum Or A Maximum Of A Function.
9.3 Numerical Integration.
9.4 Ordinary Differential Equations.
9.5 Examples Of MATLAB Applications.
9.6 Problems.
Chapter 10 Three-Dimensional Plots.
10.1 Line Plots.
10.2 Mesh And Surface Plots.
10.3 Plots With Special Graphics.
10.4 The View Command.
10.5 Examples Of MATLAB Applications.
10.6 Problems.
Chapter 11 Symbolic Math.
11.1 Symbolic Objects And Symbolic Expressions.
11.2 Changing The Form Of An Existing Symbolic Expression.
11.3 Solving Algebraic Equations.
11.4 Differentiation.
11.5 Integration.
11.6 Solving An Ordinary Differential Equation.
11.7 Plotting Symbolic Expressions.
11.8 Numerical Calculations With Symbolic Expressions.
11.9 Examples of MATLAB Applications.
11.10 Problems.
Appendix: Summary of Characters, Commands, and Functions.
Answers To Selected Problems.
• More than 200 completely new problems have been added to the 4th Edition.
• A majority of the problems are new or revised.
• Homework problems have been updated to cover a wider range of applications.
• Examples have been updated to ensure coverage is consistent with the latest version of MATLAB.
• The programming chapter (formerly Chapter 7) has been moved to 6 to help engage students with key programming concepts earlier.
• Focused on applying MATLAB as a tool to solve engineering problems.
• Annotated programs simplify topic explanations.
• Helps students learn programming and how to put MATLAB commands to use right away.
• Well-organized text completely explains a topic, then shows how to apply it to solve problems.
• Students encounter complete explanations of new subjects to support learning.
• Subject matter includes arrays, mathematical operations with arrays, script files, 2-D and 3-D plotting, programming (flow control),user defined functions, anonymous functions, function
functions, function handles, subfunctions and nested functions, polynomials, curve fitting, interpolation, and applications in numerical analysis.
“If you do not know anything about MATLAB, this is the book you should have at the first step…It reinforces the concepts with quality exercise questions.”
“By the end of the semester the students were well on their way to being competent programmers, and I think they will find calculus and physics much easier because of their experience with this
“After studying from Gilat's text for the past month or so I feel very comfortable using Matlab.”
Purchase Options
MATLAB: An Introduction with Applications, 4th Edition
ISBN : 978-0-470-91313-0
432 pages
December 2010, ©2011
MATLAB: An Introduction with Applications, 4th Edition
ISBN : 978-0-470-76785-6
432 pages
December 2010, ©2011
This title is also available on : | {"url":"http://www.wiley.com/WileyCDA/WileyTitle/productCd-EHEP001807.html","timestamp":"2014-04-17T04:18:02Z","content_type":null,"content_length":"54157","record_id":"<urn:uuid:26db2222-690c-4a46-9197-bb4ff5e3b3b2>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00410-ip-10-147-4-33.ec2.internal.warc.gz"} |
2 Interesting animations…
August 19, 2009
By the R user...
So I haven't had success YET in finding a way to post here the animations, but I thought it would be interesting to show you at least a couple of examples using this software, and I chose 2 pretty
interesting ones by Yihui Xie and Xiaoyue Cheng.
The first one is "The Gradient Descent Algorithm", it follows the gradient to the optimum. The arrows will take you to the optimum step by step. By the end of the animation, you get something like
the image above.
The code to generate such animation is:
library(animation)  # ani.options() and grad.desc() come from the 'animation' package
# gradient descent works
oopt = ani.options(ani.height = 500, ani.width = 500, outdir = getwd(), interval = 0.3,
nmax = 50, title = "Demonstration of the Gradient Descent Algorithm",
description = "The arrows will take you to the optimum step by step.")
grad.desc()  # draws the animation itself
For the second example I chose an animation called "The k-Nearest Neighbour Algorithm",where, for each row of the test set, the nearest (in Euclidean distance) training set vectors are found, and the
classification is decided by majority vote, with ties broken at random.
By the end of the animation, you will get something like this:
The code to generate such animation is:
library(animation)  # knn.ani() is the animation package's kNN demonstration
oopt = ani.options(ani.height = 500, ani.width = 600, outdir = getwd(), nmax = 10,
interval = 2, title = "Demonstration for kNN Classification",
description = "For each row of the test set, the k nearest (in Euclidean
distance) training set vectors are found, and the classification is
decided by majority vote, with ties broken at random.")
par(mar = c(3, 3, 1, 0.5), mgp = c(1.5, 0.5, 0))
knn.ani()  # draws the animation itself
I'll keep trying to find the way to upload the whole animations and not just the final result these days, wish me luck!
, or | {"url":"http://www.r-bloggers.com/2-interesting-animations-2/","timestamp":"2014-04-21T04:44:23Z","content_type":null,"content_length":"39144","record_id":"<urn:uuid:5589225a-dda1-4219-8f4d-4c97c9d91fed>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00435-ip-10-147-4-33.ec2.internal.warc.gz"} |
A. Interatomic potential
B. Monte Carlo simulations
C. Thermodynamic integration method
D. Thermodynamic model
A. Atomic density and thermal expansion of Cu–Zr melts
B. Isothermal bulk modulus of Cu–Zr melts
C. Structural properties of Cu–Zr melts and amorphous alloys
D. Thermodynamic optimization of the liquid phase using the TI method | {"url":"http://scitation.aip.org/content/aip/journal/jcp/135/8/10.1063/1.3624530","timestamp":"2014-04-18T06:53:33Z","content_type":null,"content_length":"91008","record_id":"<urn:uuid:6e037ca2-ba20-464f-8d23-b4be803dd51e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00222-ip-10-147-4-33.ec2.internal.warc.gz"} |
Counting and understanging commuting functions.
up vote 7 down vote favorite
Fix a positive integer $n$, and consider the functions from a set of size $n$ to itself. Let $cp(n)$ denote the number of ordered pairs $\langle f,g \rangle$ of these functions which commute, i.e.,
for which $f\circ g= g \circ f$. If we restrict $f$ and $g$ to be permutations, then the number of such pairs is well known: A053529 in the OEIS. But I cannot find any references to this more general
case. Any pointers to existing work that I might be missing would be greatly appreciated. I've recently added the first 10 values of $cp(n)$ to the OEIS: A181162.
Obviously, $n^n\le cp(n)\le n^{2n}$. Also, $(cp(n)-n^n)/2$ is always an integer, since it counts the number of unordered pairs of distinct commuting functions. Nothing else seems to be as easy to
prove as it should be. With some work, we can now show that $cp(n)$ is always divisible by $n$. I'm hoping that this is a new fact.
The investigation led me to the following strange fact about the symmetric group $S_n$. I'd like to know if this has been noticed before, or if it is also a new result. Fix a permutation $\sigma\in
S_n$, and suppose that the size of the centralizer of $\sigma$ does not divide $(n-1)!$. (Note that it must divide $n!$, and the assumption is equivalent to saying that the size of $\sigma$'s
conjugacy class is not a multiple of $n$). Represent $\sigma$ as a disjoint union of cycles. Then one of the following must hold:
1. There is a prime divisor $p$ of $n$ such that each of the cycles of $\sigma$ is either a fixed point or a cycle of size $p$.
2. $n$ is a multiple of 4 and each cycle of $\sigma$ has size 1, 2, or 4.
Case 1 includes the case of the identity function or $n/p$ cycles each of size $p$. There is just one $p$ for each $\sigma$, but each prime divisor of $n$ will occur is some case-1 permutation. Note
that there can be at most three distinct cycle sizes in $\sigma$. If $n$ is prime, it is not hard to prove that the only possibilities for $\sigma$ are the identity and an $n$-cycle, but the general
case seems to take a bit of work to establish.
co.combinatorics gr.group-theory universal-algebra
$1^1=cp(1)=1^{2\cdot 1}$, so it should say $n^n\leq cp(n)\leq n^{2n}$. PS, for the community: Would it have been appropriate for me to edit the post myself to fix that? (just reached 2000 rep, don't know the relevant etiquette) – Ricky Demer Oct 13 '10 at 23:31
Some of the general algebra literature includes work on semigroups of functions on a set. My sometimes faulty memory suggests that work of Dietmar Schweigert or Klaus Denecke on hyperidentities
included some basic results on iterated selfmaps of a finite set. They may not have addressed your issue, but their bibliographies might have. If other sources don't help you might look at the
literature on finite semigroups (possibly monoids). Gerhard "Ask Me About System Design" Paseman, 2010.10.13 – Gerhard Paseman Oct 13 '10 at 23:33
Also, it should not be f comp g = f comp f . Also, in general algebra the notion of polarity has been studied, a restriction of which is your case. I hope Arturo Magidin or Joel Hamkins chimes in
with a better suggestion than what I have offered. If you put in a universal-algebra tag, that might attract the desired kind of attention. Gerhard "Ask Me About System Design" Paseman, 2010.10.13
– Gerhard Paseman Oct 13 '10 at 23:40
Thanks for pointing out the typos, I've fixed them. I'm a topologist, and came to this question because commuting continuous maps are hard to classify. I started out by looking at semigroup
theory, and nothing I could find seemed to help. But, since this about as far as you can get from my area of expertise, I might very well be missing something simple. – Jeff Norden Oct 14 '10 at
1 Jeff, I have never seen that fact about elements of S_n in classes of size not divisible by n; it's amusing. It was not clear from your post whether or not you actually have a proof. Do you? –
Marty Isaacs May 29 '12 at 19:13
Browse other questions tagged co.combinatorics gr.group-theory universal-algebra or ask your own question. | {"url":"http://mathoverflow.net/questions/42084/counting-and-understanging-commuting-functions?answertab=oldest","timestamp":"2014-04-21T00:50:11Z","content_type":null,"content_length":"55246","record_id":"<urn:uuid:b66d33b4-baf1-43c1-9a66-879abac28fd4>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00451-ip-10-147-4-33.ec2.internal.warc.gz"} |
This Quantum World/Implications and applications/How fuzzy positions get fuzzier
From Wikibooks, open books for an open world
How fuzzy positions get fuzzier[edit]
We will calculate the rate at which the fuzziness of a position probability distribution increases, in consequence of the fuzziness of the corresponding momentum, when there is no counterbalancing
attraction (like that between the nucleus and the electron in atomic hydrogen).
Because it is easy to handle, we choose a Gaussian function
$\psi(0,x)=N\,e^{-x^2/2\sigma^2},$
which has a bell-shaped graph. It defines a position probability distribution
$|\psi(0,x)|^2=N^2 e^{-x^2/\sigma^2}.$
If we normalize this distribution so that $\int dx\,|\psi(0,x)|^2=1,$ then $N^2=1/\sigma\sqrt{\pi},$ and
We also have that
• $\Delta x(0)=\sigma/\sqrt{2},$
• the Fourier transform of $\psi(0,x)$ is $\overline{\psi}(0,k)=\sqrt{\sigma/\sqrt{\pi}} e^{-\sigma^2 k^2/2},$
• this defines the momentum probability distribution $|\overline{\psi}(0,k)|^2=\sigma e^{-\sigma^2 k^2}/\sqrt{\pi},$
• and $\Delta k(0)=1/\sigma\sqrt{2}.$
The fuzziness of the position and of the momentum of a particle associated with $\psi(0,x)$ is therefore the minimum allowed by the "uncertainty" relation: $\Delta x(0)\,\Delta k(0)=1/2.$
Now recall that
$\overline{\psi}(t,k)=\overline{\psi}(0,k) e^{-i\omega t},$
where $\omega=\hbar k^2/2m.$ This has the Fourier transform
$\psi(t,x)=\sqrt{\sigma\over\sqrt{\pi}}{1\over\sqrt{\sigma^2+i\,(\hbar/m)\,t}}\, e^{-x^2/2[\sigma^2+i\,(\hbar/m)\,t]},$
and this defines the position probability distribution
$|\psi(t,x)|^2={1\over\sqrt{\pi}\sqrt{\sigma^2+(\hbar^2/m^2\sigma^2)\,t^2}}\, e^{-x^2/[\sigma^2+(\hbar^2/m^2\sigma^2)\,t^2]}.$
Comparison with $|\psi(0,x)|^2$ reveals that $\sigma(t)=\sqrt{\sigma^2+(\hbar^2/m^2\sigma^2)\,t^2}.$ Therefore,
$\Delta x(t)={\sigma(t)\over\sqrt{2}}= {\sqrt{{\sigma^2\over2}+{\hbar^2t^2\over 2m^2\sigma^2}}}= {\sqrt{[\Delta x(0)]^2+{\hbar^2t^2\over 4m^2[\Delta x(0)]^2}}}.$
The graphs below illustrate how rapidly the fuzziness of a particle with the mass of an electron grows, compared to an object with the mass of a $C_{60}$ molecule or a peanut. Here we see one reason,
though by no means the only one, why for all intents and purposes "once sharp, always sharp" is true of the positions of macroscopic objects.
Above: an electron with $\Delta x(0)=1$ nanometer. In a second, $\Delta x(t)$ grows to nearly 60 km.
Below: an electron with $\Delta x(0)=1$ centimeter. $\Delta x(t)$ grows only 16% in a second.
Next, a $C_{60}$ molecule with $\Delta x(0)=1$ nanometer. In a second, $\Delta x(t)$ grows to 4.4 centimeters.
Finally, a peanut (2.8 g) with $\Delta x(0)=1$ nanometer. $\Delta x(t)$ takes the present age of the universe to grow to 7.5 micrometers. | {"url":"http://en.wikibooks.org/wiki/This_Quantum_World/Implications_and_applications/How_fuzzy_positions_get_fuzzier","timestamp":"2014-04-20T13:35:47Z","content_type":null,"content_length":"33219","record_id":"<urn:uuid:feec1311-b092-4316-b71c-f54de0d5ea45>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00626-ip-10-147-4-33.ec2.internal.warc.gz"} |
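As a rough check on the numbers quoted in the captions above, the last formula for $\Delta x(t)$ can be evaluated directly. This is a sketch, not part of the Wikibooks article; the constants are approximate, and the age of the universe is taken here as about 13.8 billion years (a slightly smaller assumed age reproduces the quoted 7.5 µm for the peanut):

```python
import math

HBAR = 1.0546e-34            # reduced Planck constant, J*s
AGE_UNIVERSE = 4.35e17       # seconds, roughly 13.8 billion years

def delta_x(t, m, dx0):
    """Delta x(t) = sqrt(dx0**2 + hbar**2 * t**2 / (4 * m**2 * dx0**2))."""
    return math.sqrt(dx0**2 + (HBAR * t)**2 / (4.0 * m**2 * dx0**2))

m_electron = 9.109e-31       # kg
m_c60 = 720 * 1.6605e-27     # C60 is 720 atomic mass units
m_peanut = 2.8e-3            # the 2.8 g quoted above

print(delta_x(1.0, m_electron, 1e-9))         # ~5.8e4 m: "nearly 60 km"
print(delta_x(1.0, m_electron, 1e-2) / 1e-2)  # ~1.16: grows about 16%
print(delta_x(1.0, m_c60, 1e-9))              # ~0.044 m: "4.4 centimeters"
print(delta_x(AGE_UNIVERSE, m_peanut, 1e-9))  # ~8e-6 m: micrometers after the age of the universe
```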
Area of Rose Leaf - Polar Coordinates
Find the area inside one leaf of the rose: r = 4 sin(5θ) Please help!
The sine function is equal to zero at points of the form $\pi \cdot k$, where $k \in \mathbb{Z}$, so setting the argument equal to the above we get $5 \theta = \pi \cdot k \iff \theta = \frac{\pi \cdot k}{5}$. Taking $k=0$ and $k=1$, our limits of integration are $0$ and $\frac{\pi}{5}$, so $A=\frac{1}{2}\int_{0}^{\pi/5}(4\sin(5\theta))^2\, d\theta = 8\int_{0}^{\pi/5}\sin^{2}(5\theta)\, d\theta.$ I think
you can get it from here. Good luck | {"url":"http://mathhelpforum.com/calculus/33027-area-rose-leaf-polar-coordinates.html","timestamp":"2014-04-19T21:22:02Z","content_type":null,"content_length":"36146","record_id":"<urn:uuid:b593f197-702a-42d3-a971-6538db301b53>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00622-ip-10-147-4-33.ec2.internal.warc.gz"} |
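Finishing the rose-leaf computation above (a sketch, not part of the original thread): $\int_0^{\pi/5}\sin^2(5\theta)\,d\theta=\pi/10$, so the area of one leaf is $A = 8\cdot\pi/10 = 4\pi/5 \approx 2.513$. A quick numerical confirmation with the trapezoidal rule:

```python
import math

# Area of one leaf of r = 4 sin(5θ): A = (1/2) ∫ r² dθ over one leaf.
n = 100_000
a, b = 0.0, math.pi / 5
h = (b - a) / n
f = lambda t: 8.0 * math.sin(5 * t) ** 2   # (1/2)(4 sin 5θ)² = 8 sin²(5θ)
# composite trapezoidal rule
area = h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))
print(area, 4 * math.pi / 5)               # both ≈ 2.5133
```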
Technology Integration Matrix Observation Tool
Beginning a new observation
The Technology Integration Matrix Observation Tool (TIM-O) is a tool for guiding principals, teachers, and others through the process of evaluating the level of technology integration within a
particular classroom. When completed, the tool produces a profile for the observed lesson in terms of the Technology Integration Matrix.
The TIM-O provides a branching series of questions that helps to provide consistent results regardless of observer familiarity with technology integration. Skip-logic in the series of questions
greatly increases the efficiency of the observation. Because classrooms and other instructional settings are varied and complex, the tool allows the evaluator to adjust the identified levels based on
careful consideration of the observation as a whole.
This observation tool is based on the Technology Integration Matrix (TIM) developed by FCIT. If you are not familiar with the TIM, you should begin with the question-based option. We recommend that
you observe the lesson for five to ten minutes before answering any questions. The question-based option is designed to arrive at a profile in a minimum number of steps. The total number of questions
asked will vary depending on how you answer earlier questions.
If you are very familiar with the Technology Integration Matrix and are comfortable with the observation protocol, you may choose to evaluate the levels of technology integration directly using the
Matrix-based option. The Matrix-based tool is faster only if the observer knows the TIM very well. Without training, use of the Matrix-based tool can be more susceptible to subjective interpretation
than the question-based tool.
Profile generated by observation
In both the question-based and matrix-based methods the observer is given the opportunity to append additional notes to the resulting profile if desired. | {"url":"http://fcit.usf.edu/matrix/tim-o","timestamp":"2014-04-20T18:28:55Z","content_type":null,"content_length":"5639","record_id":"<urn:uuid:1a96179a-e0af-4286-9b4c-4a410d915557>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00096-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - View Single Post - metrics on the 2 sphere
The metric ds = |dz|/(1 + |z|^2) has constant positive Gauss curvature equal to 4 and extends to the complex plane plus the point at infinity. How does this metric relate to the usual metric of
constant Gauss curvature computed from the unit sphere in Euclidean 3 space? | {"url":"http://www.physicsforums.com/showpost.php?p=3417061&postcount=1","timestamp":"2014-04-20T14:20:11Z","content_type":null,"content_length":"8605","record_id":"<urn:uuid:138002f9-de87-4ee6-bd56-64a107b1b300>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00191-ip-10-147-4-33.ec2.internal.warc.gz"} |
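One way to see the stated curvature (a sketch, not part of the original post): for a conformal metric $ds = \lambda\,|dz|$ the Gauss curvature is $K = -\lambda^{-2}\,\Delta\log\lambda$. With $\lambda = 1/(1+|z|^2)$ this gives $K = 4$ identically, the curvature of a Euclidean sphere of radius $1/2$; stereographic projection of the unit sphere gives instead $2\,|dz|/(1+|z|^2)$, with curvature $1$. A symbolic verification:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
lam = 1 / (1 + x**2 + y**2)              # ds = lam * |dz|, with z = x + i*y
phi = sp.log(lam)
laplacian = sp.diff(phi, x, 2) + sp.diff(phi, y, 2)
K = sp.simplify(-laplacian / lam**2)     # Gauss curvature of a conformal metric
print(K)                                 # simplifies to 4
```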
Lowell, MA Algebra 1 Tutor
Find a Lowell, MA Algebra 1 Tutor
...It is critical to adjust to this new type of math problem, as both the rigor of justifying each step as well as the frequent occurrence of multiple equally valid approaches mimics the type of
creative technical problems modern S.T.E.M. workers encounter. I recently (03/2013) passed the Massachus...
12 Subjects: including algebra 1, chemistry, calculus, physics
...This quick reference book is a good way for students to independently refer back to concepts taught instead of being provided the answers. The concepts are to encourage independence, to be
responsible for own work, and assist in imprinting the information on the brain. The student takes this home and brings back during tutoring.
16 Subjects: including algebra 1, reading, writing, dyslexia
...Do you want feel comfortable asking questions and get answers that you understand? I can help you tackle these issues by providing excellent one-to-one tutoring designed specifically to assist
you, the individual student. I will review your proficiency, needs, objectives, study habits, test pre...
34 Subjects: including algebra 1, reading, English, geometry
...During our sessions and the attentiveness of the student, I also believe in engaging in a certain amount of conversation with the student that can make our sessions feel more like getting help
from a friend rather than just learning math. Regardless of a student's current situation, there is a b...
13 Subjects: including algebra 1, calculus, geometry, GRE
...For 40+ years I used Fortran almost exclusively to develop simulation models for missile systems. At the time of my retirement, I was responsible for about 30,000 lines of Fortran code. My
exposure to Fortran has therefore been very significant over many years at all levels from generating the code to verification of its operation.
12 Subjects: including algebra 1, physics, geometry, German
Related Lowell, MA Tutors
Lowell, MA Accounting Tutors
Lowell, MA ACT Tutors
Lowell, MA Algebra Tutors
Lowell, MA Algebra 2 Tutors
Lowell, MA Calculus Tutors
Lowell, MA Geometry Tutors
Lowell, MA Math Tutors
Lowell, MA Prealgebra Tutors
Lowell, MA Precalculus Tutors
Lowell, MA SAT Tutors
Lowell, MA SAT Math Tutors
Lowell, MA Science Tutors
Lowell, MA Statistics Tutors
Lowell, MA Trigonometry Tutors | {"url":"http://www.purplemath.com/Lowell_MA_algebra_1_tutors.php","timestamp":"2014-04-16T05:00:41Z","content_type":null,"content_length":"24052","record_id":"<urn:uuid:927eeb13-9b75-4945-8e5a-7d7066885c75>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00505-ip-10-147-4-33.ec2.internal.warc.gz"} |
Woodway, WA Math Tutor
Find a Woodway, WA Math Tutor
I am an experienced Math and Physics tutor looking for work in the Bothell area. For the past two years, I have been employed at Western Washington University's Tutoring Center and for two years
before that I worked at the Math Center at Black Hills High School in Olympia. I am certified level 1 b...
13 Subjects: including statistics, linear algebra, algebra 1, algebra 2
...For every semester that I attended WSU, I was on the President’s Honor Roll. I was recognized as the student of the month by my AP US History instructor, immediately following the AP season.
I believe that everyone is capable of gaining a firm grasp of mathematical concepts.
62 Subjects: including precalculus, ACT Math, ACT English, discrete math
I love teaching math. Its my job to make it simple and easy to understand for you. Currently, I work at a Community College. I've been teaching arithmetic and algebra one on one for 3 years
there. Privately, I've been tutoring arithmetic through integral calculus for 2 years.
6 Subjects: including precalculus, trigonometry, algebra 1, algebra 2
...I am currently entering my 5th year of my undergraduate pursuing an Honors Degree in Molecular, Cellular, and Developmental Biology with a minor in History. For a full range of topics that I
have been educated in, feel free to email me for more information. Academics aside, I am a very gregarious and kind-hearted person.
42 Subjects: including algebra 1, algebra 2, ACT Math, calculus
...Some of my assignments were developing a training program, and designing a solution to a problem on a sports team. My education in sport and experience with soccer specifically provide me with
the knowledge and tools to be successful in this job. I played on my high school tennis team all four years.
17 Subjects: including prealgebra, probability, anatomy, trigonometry
Related Woodway, WA Tutors
Woodway, WA Accounting Tutors
Woodway, WA ACT Tutors
Woodway, WA Algebra Tutors
Woodway, WA Algebra 2 Tutors
Woodway, WA Calculus Tutors
Woodway, WA Geometry Tutors
Woodway, WA Math Tutors
Woodway, WA Prealgebra Tutors
Woodway, WA Precalculus Tutors
Woodway, WA SAT Tutors
Woodway, WA SAT Math Tutors
Woodway, WA Science Tutors
Woodway, WA Statistics Tutors
Woodway, WA Trigonometry Tutors | {"url":"http://www.purplemath.com/Woodway_WA_Math_tutors.php","timestamp":"2014-04-16T10:33:29Z","content_type":null,"content_length":"23865","record_id":"<urn:uuid:608afa38-2648-4d06-9aa4-1e0f843d546c>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00134-ip-10-147-4-33.ec2.internal.warc.gz"} |
probability problem [Archive] - Statistics Help @ Talk Stats Forum
02-03-2010, 01:55 PM
can anyone help me with this problem?
a boy finds that he can climb to a height of at least 1.85m once in five attempts, and to a height of at least 1.7m nine times out of ten attempts. The heights he can reach form a normal
distribution. Find the mean and the standard deviation. | {"url":"http://www.talkstats.com/archive/index.php/t-10838.html","timestamp":"2014-04-20T11:55:40Z","content_type":null,"content_length":"7480","record_id":"<urn:uuid:12ae93b6-a8a8-42c7-beec-063ff427aa6f>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00573-ip-10-147-4-33.ec2.internal.warc.gz"} |
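A sketch of the standard solution to the problem above (not part of the original thread): interpreting the statements as P(H ≥ 1.85) = 0.2 and P(H ≥ 1.7) = 0.9 for a normal H, standardize both heights and solve the two resulting linear equations for μ and σ:

```python
from scipy.stats import norm

# P(H >= 1.85) = 0.2  =>  (1.85 - mu) / sigma = z_{0.8}
# P(H >= 1.70) = 0.9  =>  (1.70 - mu) / sigma = z_{0.1}
z80 = norm.ppf(0.8)      # ≈  0.8416
z10 = norm.ppf(0.1)      # ≈ -1.2816
sigma = (1.85 - 1.70) / (z80 - z10)
mu = 1.85 - z80 * sigma
print(mu, sigma)         # ≈ 1.7905 m and 0.0707 m
```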
What are binary, octal, and hexadecimal notation?
ARCHIVED: What are binary, octal, and hexadecimal notation?
Binary notation
All data in modern computers is stored as series of bits. A bit is a binary digit and can have one of two values; the two values are generally represented as the numbers 0 and 1. The most basic form
of representing computer data, then, is to represent a piece of data as a string of 1s and 0s, one for each bit. What you end up with is a binary or base-2 number; this is binary notation. For
example, the number 42 would be represented in binary as: 101010
Interpreting binary notation
In normal decimal (base-10) notation, each digit, moving from right to left, represents an increasing order of magnitude (or power of ten). With decimal notation, each succeeding digit's contribution
is ten times greater than the previous digit. Increasing the first digit by one increases the number represented by one, increasing the second digit by one increases the number by ten, the third
digit increases the number by 100, and so on. The number 111 is one less than 112, ten less than 121, and one hundred less than the number 211.
The concept is the same with binary notation, except that each digit is a power of two greater than the preceding digit, rather than a power of ten. Instead of 1s, 10s, 100s, and 1000s digits, binary
numbers have 1s, 2s, 4s, and 8s. Thus, the number two in binary would be represented as a 0 in the ones place and a 1 in the twos place, i.e., 10. Three would be 11, a 1 in the ones place and a 1 in
the twos place. No numeral greater than 1 is ever used in binary notation.
Octal and hexadecimal notation
Because binary notation can be cumbersome, two more compact notations are often used, octal and hexadecimal. Octal notation represents data as base-8 numbers. Each digit in an octal number represents
three bits. Similarly, hexadecimal notation uses base-16 numbers, representing four bits with each digit. Octal numbers use only the digits 0-7, while hexadecimal numbers use all ten base-10 digits
(0-9) and the letters a-f (representing the numbers 10-15). The number 42 is written in octal as: 52. In hexadecimal, the number 42 is written as: 2a.
Knowing whether data is being represented as octal or hexadecimal is sometimes difficult (especially if a hexadecimal number doesn't use one of the digits a-f), so one convention that is often used
to distinguish these is to put "0x" in front of hexadecimal numbers. So you might see, for example: 0x2a. This is a less ambiguous way of representing the number 42 in hexadecimal. You can see an
example of this usage in the Character set comparison chart.
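These conversions are easy to check in any language with built-in base conversion; here is a quick sketch in Python (not part of the original article):

```python
n = 42
print(format(n, 'b'))    # binary: 101010
print(format(n, 'o'))    # octal: 52
print(format(n, 'x'))    # hexadecimal: 2a
print(hex(n))            # with the 0x convention: 0x2a
# and back again:
print(int('101010', 2), int('52', 8), int('0x2a', 16))  # 42 42 42
```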
Note: The term "binary" when used in phrases such as "binary file" or "binary attachment" has a related but slightly different meaning than the one discussed here. For more information, see ARCHIVED:
What is a binary file?
This is document agxz in domain all.
Last modified on January 07, 2013.
| {"url":"http://kb.iu.edu/data/agxz.html","timestamp":"2014-04-21T02:00:10Z","content_type":null,"content_length":"16681","record_id":"<urn:uuid:cb38f6fb-4e5f-4ae4-8017-2451103287f7>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
java matrix library
Hi, can anyone give me a link for downloading a matrix calculation library (transpose, multiply, ...) for Java?
This was the first result Google gave me: JAMA: Java Matrix Package. kind regards, Jos
Thanks to all of you, but can anyone tell me how to install it, or in which folder I should place it?
Add the library to your classpath: Setting the class path
I tried but I couldn't set the path correctly. Can anyone tell me where to place these libraries? My javac path is C:\Program Files\Java\jdk1.6.0_24\bin
You don't set the PATH, you set the CLASSPATH - you can place the library wherever you want, as long as you tell javac and java where to find them, which the above link explains if you are compiling
via the command line. If using an IDE, look at the documentation for your IDE | {"url":"http://www.java-forums.org/advanced-java/52008-java-matrix-liberary-print.html","timestamp":"2014-04-17T12:39:33Z","content_type":null,"content_length":"6930","record_id":"<urn:uuid:89460902-232f-46a2-a162-68cbc609aa86>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
finding sample variance
February 1st 2010, 09:31 PM #1
Having trouble identifying what is what, and alpha is confusing me. Thank you.
A random sample of 100 observations was drawn from a normal population. The sample variance was calculated to be 220. Test with alpha = .05 to determine whether we can infer that the population
variance differs from 300.
$H_o: \sigma^2=300$ vs. $H_a: \sigma^2\ne 300$
The test statistic is ${(n-1)s^2\over \sigma_o^2}={(99)(220)\over 300}=72.6.$
Next, split your alpha in half and put .025 into each tail of a chi-square with 99 dfs.
You can use the table at http://www.danielsoper.com/statcalc/calc12.aspx
if you can't find one with 99 dfs.
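Finishing the computation numerically (a sketch, not part of the original thread; scipy here plays the role of the linked chi-square calculator). The statistic 72.6 falls below the lower critical value, so $H_o: \sigma^2=300$ is rejected at $\alpha = .05$:

```python
from scipy.stats import chi2

n, s2, sigma0_sq, alpha = 100, 220.0, 300.0, 0.05
stat = (n - 1) * s2 / sigma0_sq              # (99)(220)/300 = 72.6
lo = chi2.ppf(alpha / 2, n - 1)              # lower critical value, ~73.36
hi = chi2.ppf(1 - alpha / 2, n - 1)          # upper critical value, ~128.42
reject = bool(stat < lo or stat > hi)
p_value = 2 * min(chi2.cdf(stat, n - 1), chi2.sf(stat, n - 1))
print(stat, reject, p_value)                 # 72.6, True, p just under 0.05
```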
| {"url":"http://mathhelpforum.com/statistics/126715-finding-sample-variance.html","timestamp":"2014-04-17T07:43:30Z","content_type":null,"content_length":"33406","record_id":"<urn:uuid:e486d550-b724-40d4-a957-54bba85b692c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00649-ip-10-147-4-33.ec2.internal.warc.gz"} |
Plans and Updates
Posted on August 13, 2008 by Gil Kalai
Jerusalem and Budapest
Monday, last week was the last day of lectures for the spring term here at the Hebrew U. One outcome of the long professors’ strike was a very fruitful year for research seminars. We ran them during
the strike and we run when we taught. I heard about quite a lot of nice developments recently. Zeev Dvir gave a talk on the finite field Kakeya problem. The proof was extremely simple to start with
(see Tao’s post ) and Zeev’s presentation was even simpler. It is a very interesting question how, given Dvir’s proof, we should now view the connection between the finite field version of the
problem and the original problem. Does Dvir’s proof support the possibility that the original problem is not as hard as feared? Does it support the view that the finite case is not related to the
continuous case? (Both Tao and Laba discuss these questions.)
I will tell you in very rough terms (which represent my partial understanding of the matter) one recent wonderful development I heard about. Kolmogorov and Sinai proved that the Entropy is an
invariant for Bernoulli shifts under isomorphism. Ornstein's celebrated isomorphism theorem asserts that entropy is the only invariant. Ornstein and Weiss studied other group actions, and extended
these theorems to amenable groups. There were strong indications that free-group actions behave very differently. (There were examples where the entropy function went the wrong way for “factors”.)
However, Lewis Bowen has just shown that for free-group action entropy is an invariant! (OK, I oversimplified matters here for the sake of telling the story.) And he further proved a similar result
for all “sofic groups”, a strange notion of groups which is a mixture of amenable and residually finite groups that, as far as we know, may include all groups. Benji Weiss told us about it in the
“basic notions seminar”, and a few weeks later another lecture was given by the man himself - Lewis Bowen – who came from Hawaii for a short visit.
And last week I participated in a meeting “Building Bridges” in Budapest honoring Laci Lovász for his birthday. A lot of interesting talks on extremal combinatorics, graph theory, optimization,
probabilistic combinatorics, and theoretical computer science. I will come back to this meeting later but let me mention one talk which I found very surprising: Peter Winkler talked about his work
with Rick Kenyon on branched polymers. Kenyon and Winkler found startlingly simple combinatorial proofs of startling results of Brydges and Imbrie, and proved further results. I hope to add a link to
the actual presentation. Hungary is the discrete mathematics superpower and on top of that as Einat Wigderson recently made clear “The Hungarians are also much funnier”. It was nice to meet many old
friends (and annoy them, at times, with silly jokes). Here is Micky Simonovits and here you can see Endre Szemeredi, Anna Szemeredi, Szemeredi’s jacket and Szemeredi’s tie. I followed the name of the
conference and talked about mathematics of social choice.
We had a few slow weeks on the blog and as the saying goes: “If you cannot do, plan, and if you cannot plan, plan plans.” Together with a little summary of previous posts, I will describe some of the
things I plan to write next on this blog.
I plan five posts in the series about Extremal combinatorics. Part I was an introduction to the subject and dealt with extremal set theory. Part II was about simple extremal problems in plane
geometry and additive number theory, and about difficult theorems which became quite easy in time. Extremal Combinatorics III will present several basic results in extremal set theory; Part IV will
be about POSETS, and part V will present the Frankl-Wilson theorem, or, at least, special cases.
I gave four posts (I, II, III, IV) based on my lecture in Marburg, dealing with high dimensional Cayley formula. Richard Stanley gave a detailed remark on why the fantasy about weighted correct
version of MacMahon’s conjecture for solid partitions is not what standard proofs of the plane partition case extend to. I plan a little series of posts on “f-vectors and homology,” which was the
title of a paper with Bjorner in 1988 as well as of my talk in Stockholm. However, before that I want to describe the “g-conjecture for spheres”, a central problem in algebraic combinatorics.
We had several posts on convex polytopes. I next plan to discuss the diameter problem for polytopal graphs (the Hirsch conjecture) and related questions on the simplex algorithm. (In fact, we
already started.) The one proof I presented most often in lectures is my proof of the Blind-Mani theorem that asserts that simple polytopes are determined by their graphs. I will try to blog this
proof, tell you some open problems around it, and write about a startling theorem of Eric Friedman who found a polynomial-time algorithm. I also updated the post on five open problems regarding
convex polytopes and added two additional problems.
We talked about influence but not about a major technique which emerged in their study: “Fourier Analysis of Boolean functions.” So we will discuss Boolean functions and their spectrum, and revisit
influences and look at noise-sensitivity. Muli Safra will give a post on the Goldreich Levin theorem and related stuff.
So far our open discussion “Is mathematics a science” attracted a single (nice) comment, and the poem translation contest is still waiting for quality translations. Perhaps we can try an open
discussion of a single theorem/problem and see how it goes. Meanwhile, have a look at the very successful discussion on Secret Blogging Seminar about the shy and secretive mathematicians, and how
they triumph again and again.
Let’s talk some more on economics and rationality. And about sociel welfare functions and voting methods. We discussed some controversies related to rationality in this post, and the Notices
book-review about economics and common sense supplied a source for two posts (1,2). I was just told that along with Landsburg’s book itself, my book review might be translated to Arabic. Sababa!
I will tell you about my ideas regarding detrimental noise for quantum computations, but only after trying to describe something about the beautiful, deep, and surprising subject of quantum
computation and quantum information. I will not do it before having at least one post about classical computation complexity. Meanwhile, let me link you to the classical post by Aaronson “Reasons to
believe” (on why we believe that NP is not P). And while at it, look at “Reason to believe II” (on why many of us believe that quantum computers are feasible.) Read also Bernard Chazelle’s thoughts
about alorithms and science.
The most successful link we had so far was “The Sarong Theorem Archive” – an electronic archive of images of people proving theorems while wearing sarongs. More than 100 people clicked on it.
Pictures of prominent bloggers proving theorems while wearing sarongs are below.
And more stories in the category “Taxi-and-other-stories” and even “Mathematics-to-the-rescue” are planned. Stay tuned!
(Reminder: Arabic words which are part of Hebrew slang: ashkara – for real; sababa - cool, wonderful; walla – true; ahla - great; yalla - hurry up, c’mon, let’s go.)
Prominent bloggers prove theorems with sarongs
One Response to Plans and Updates
1. natpekan says:
“Zvi Dvir” should be Zeev Dvir.
(They say that any publicity is good publicity as long as your name is spelled correctly. What do they say about good publicity when your name is not spelled correctly?)
Leave a Reply Cancel reply
This entry was posted in Conferences, Updates and tagged Conferences, Laci Lovasz, Updates. Bookmark the permalink. | {"url":"http://gilkalai.wordpress.com/2008/08/13/plans-and-updates/?like=1&source=post_flair&_wpnonce=966197e62c","timestamp":"2014-04-20T03:12:27Z","content_type":null,"content_length":"113688","record_id":"<urn:uuid:ebd59d34-1fa1-4912-9521-c9ffd444017c>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00474-ip-10-147-4-33.ec2.internal.warc.gz"} |
A Comparative Study of Statistical Methods to Assess Dilutional Similarity - The four parameter logistic model is a better control method than the dilution effect statistic - BioPharm International
The statistics used to compare sample and reference curves are not uniform across the industry. A statistic called dilution effect was introduced in the industry to assess dilution similarity.^5,6
The dilution effect is a measure of the percent bias per 2-fold dilution in a test sample's value relative to that of the reference standard. It is the apparent change in a sample's dilution-adjusted
concentration when it is diluted 2-fold.^2,5 The dilution effect is calculated by Equation (2):
The estimated slopes of the test sample and reference standard respectively are obtained by fitting Equation (3):
Z = 1 for the reference sample and Z = 0 for the test sample; xmid[R] is the EC50 of the reference sample and xmid[S] is the EC50 of the test sample. Perfect parallelism corresponds to 0%
dilution effect. An absolute dilution effect of less than 20% has been used in the industry to conclude dilutional similarity (parallelism) between the test sample and the reference standard.
Dilutional similarity means that the reference and the test samples have common a, d, and b parameters. Thus, failure to share common a, d, and b parameters implies a failure of dilutional
similarity. The DE statistic checks this only at one point (xmid) while the F-statistic checks the concentration range.
A standard F statistic for testing for parallelism (dilutional similarity) is obtained in the following way:
•Under the null hypothesis of parallel assays, use Equation (4):
•Fit Equation (4) to the data to obtain the sum of squared errors (SSE), denoted SSE(Parallel).
•Under the alternative hypothesis that the curves are not parallel, i.e., asymptotes and slopes are not the same, use Equation (5):
where the subscripts R and S denote parameters for the reference and sample logistics models, respectively. Z = 0 or 1 as shown earlier. Obtain SSE(Nonparallel) by fitting Equation (5) to the data.
Compute the F statistic for parallelism with Equation (6): | {"url":"http://www.biopharminternational.com/biopharm/article/articleDetail.jsp?id=187659&sk=&date=&%0A%09%09%09&pageID=2","timestamp":"2014-04-18T05:51:10Z","content_type":null,"content_length":"143448","record_id":"<urn:uuid:702fe2b8-9af9-4fd1-acf8-10bd5e679154>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00426-ip-10-147-4-33.ec2.internal.warc.gz"} |
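The constrained-versus-full comparison described in the bullets above can be sketched on simulated data as follows. This is an illustration, not the article's own code: the 4PL parameterization, simulated concentrations, starting values, and noise level are all my assumptions. The "parallel" model shares a, d, and b and fits separate EC50s (five parameters, playing the role of Equation (4)); the "nonparallel" model fits all eight (Equation (5)); the extra-sum-of-squares F statistic then has (3, n − 8) degrees of freedom:

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import f as f_dist

def logistic4(x, a, d, b, c):
    """A common 4PL form: asymptotes a and d, slope b, EC50 c."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Simulated dilution series: truly parallel curves (shared a, d, b), EC50s differ.
rng = np.random.default_rng(0)
x = np.tile(2.0 ** np.arange(8), 2)        # eight 2-fold dilutions per curve
z = np.repeat([1.0, 0.0], 8)               # Z = 1 reference, Z = 0 test sample
y = logistic4(x, 0.1, 2.0, 1.3, np.where(z == 1.0, 8.0, 12.0))
y = y + rng.normal(0.0, 0.02, y.size)

def parallel(X, a, d, b, cR, cS):          # common a, d, b; separate EC50s
    x, z = X
    return logistic4(x, a, d, b, np.where(z == 1.0, cR, cS))

def nonparallel(X, aR, dR, bR, cR, aS, dS, bS, cS):   # everything separate
    x, z = X
    return np.where(z == 1.0, logistic4(x, aR, dR, bR, cR),
                    logistic4(x, aS, dS, bS, cS))

pp, _ = curve_fit(parallel, (x, z), y, p0=[0.1, 2.0, 1.0, 8.0, 8.0], maxfev=20000)
sse_p = float(np.sum((y - parallel((x, z), *pp)) ** 2))     # SSE(Parallel)

# Start the full model at the parallel solution so its SSE can only decrease.
p0 = [pp[0], pp[1], pp[2], pp[3], pp[0], pp[1], pp[2], pp[4]]
pn, _ = curve_fit(nonparallel, (x, z), y, p0=p0, maxfev=20000)
sse_n = float(np.sum((y - nonparallel((x, z), *pn)) ** 2))  # SSE(Nonparallel)

df1, df2 = 3, y.size - 8                   # 3 constraints tested; n - 8 residual df
F = ((sse_p - sse_n) / df1) / (sse_n / df2)
p_value = float(f_dist.sf(F, df1, df2))
print(F, p_value)
```

With truly parallel simulated data the F statistic should be small; giving the test sample a different slope (say 2.0 instead of the shared 1.3) makes it large and the p-value small.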
Kenmore ACT Tutor
...I enjoy tutoring a variety of subjects, including math, Chinese and ESL English. I always try to make it easy and fun to learn. I've helped many students improve their grades and study skills
with my experience.
13 Subjects: including ACT Math, geometry, Chinese, algebra 1
...I can help with homework, study skills, or enrichment activities to go beyond the classroom. I particularly enjoy helping students discover that they actually CAN be good at the subject that is
challenging them, provided they work hard and ask lots of questions. I am also one of those rare indi...
32 Subjects: including ACT Math, English, reading, geometry
Hello! My name is Kelly and I am an experienced tutor located here in North Seattle. I'm a graduate of the University of Washington Seattle campus with a Bachelors of Science Degree in Biology and
27 Subjects: including ACT Math, chemistry, reading, writing
...I have attended and finished a music school in Russia, with the equivalent of 6 years of study in the US. When I attended Windward Community College for 2 years from 1996-1998, I played for a college
choir for one year. Later, I taught piano in Hawaii to several students from age 10 to adult.
20 Subjects: including ACT Math, reading, calculus, geometry
...For the past 3 years, I have been a First Years Program Leader on campus, essentially guiding freshmen through the various challenges and concerns they have upon entering college. I have taught
at a "Read 'n' Lead" program at my local library where I read to and helped elementary school students...
42 Subjects: including ACT Math, reading, English, calculus | {"url":"http://www.purplemath.com/kenmore_act_tutors.php","timestamp":"2014-04-21T02:29:06Z","content_type":null,"content_length":"23383","record_id":"<urn:uuid:08b6154f-7b79-4733-97c4-16dade135f98>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00066-ip-10-147-4-33.ec2.internal.warc.gz"} |
FFT based algorithm for special matrices
Contest problems with connections to deeper mathematics.
This question is with regard to Elkies' answer to the above post.
The Vandermonde determinant can be computed using FFT techniques.
Can the Moore determinant (including modulo some integer) be computed using FFT techniques?
Not exactly my field, but I think that the same techniques used for Vandermonde's determinant work for Moore's and on finite rings (with the added complication of using FFTs over such rings).
A good reference is Bini and Pan, Polynomial and matrix computations.
| {"url":"http://mathoverflow.net/questions/84429/fft-based-algorithm-for-special-matrices","timestamp":"2014-04-20T13:49:05Z","content_type":null,"content_length":"48765","record_id":"<urn:uuid:dcc9d79f-f8c1-451b-a231-4b4f6c757056>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00186-ip-10-147-4-33.ec2.internal.warc.gz"} |
Please help me with my homework?
I have a horrible professor and he gave us this assignment and I absolutely can't do it it's more difficult than the others.
1) Write the definition of the function getHoursRate that prompts the user to input the hours worked and rate per hour to initialize the variables hours and rate of the function main.
2) Write the definition of the value-returning function payCheck that calculates and returns the amount to be paid to an employee based on the hours worked and the rate per hour. The hours worked and
rate per hour are stored in the variables hours and rate, respectively, of the function main. The formula for calculating the amount to be paid is as follows: For the first 40 hours the rate is the
given rate; for the hours over 40, the rate is 1.5 times the given rate.
3) Write the definition of the function printCheck that prints (displays) the hours worked, rate per hour, and the salary.
4) Write the definition of the function main that tests each of these functions.
These are the instructions can you guys please help me write the code? I think I am supposed to use multiple functions and maybe void functions?
Thanks a lot guys
What did you do so far, and how much do you know?
Topic archived. No new replies allowed. | {"url":"http://www.cplusplus.com/forum/general/82271/","timestamp":"2014-04-19T14:41:06Z","content_type":null,"content_length":"10858","record_id":"<urn:uuid:b6b0daf9-6716-4de6-8ec7-38c170a5df39>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
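The pay rule in step 2 of the assignment above is easy to pin down. The thread asks for C++, so this is only a sketch of the logic in Python, with a hypothetical function name, not a drop-in answer to the homework:

```python
def pay_check(hours, rate):
    """Regular rate for the first 40 hours; time-and-a-half beyond that."""
    if hours <= 40:
        return hours * rate
    return 40 * rate + (hours - 40) * rate * 1.5

print(pay_check(40, 10.0))   # 400.0
print(pay_check(45, 10.0))   # 400 + 5 * 15 = 475.0
```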
SSRIC/TRD: PERS - Chapter 10
SSRIC Teaching Resources Depository
PERS Module
Larry Herringer, Psychology Department
California State University, Chico
Chapter 10: Comparing Behavior Across Situations
This exercise illustrates correlation, aggregation, and 2-Way Mixed Model (Between/Within) ANOVA. It assumes prior familiarity with the basics of Factorial ANOVA. It uses the PERS dataset, consisting
of 90 cases and 968 variables. The variables represent measures of traits and relevant behaviors for the dimensions of extraversion (outgoingness) and conscientiousness, reported each week for three
weeks by a group of undergraduate psychology students.
If you could measure the same behaviors for a group of people in two different situations, how consistent do you think it would be? Do you think that your results would be different if you repeated
the measurements again the following week? How much do you think human behavior is a reflection of personality characteristics, situational constraints, or variations over time?
This exercise illustrates the comparison of the same behaviors across two different situations, using both a correlational and ANOVA approach. The overarching question addressed by the PERS data is
"How consistent is human behavior, and is the consistency predictable from human personality?" Over the course of the prior exercises, this question has been examined by intercorrelating different
behaviors which presumably are products of the same personality trait, and by using personality trait measures to predict various relevant behaviors. The inherent unreliability of behaviors measured
on a single occasion was remedied by aggregating across time (see Chapters 2, 3, 4). The narrow specificity of individual behaviors was broadened by aggregation across behaviors (see Chapters 5, 6),
creating a behavioral measure comparable in generality to a personality trait scale. This current exercise takes a different approach to the consistency question, while incorporating the processes of
temporal and content aggregation employed successfully in the previous exercises. This exercise examines directly the question "Is behavior consistent across different situational contexts?" This
question has been historically at the very heart of a psychological debate known as the "person/situation controversy," (see Epstein & O’Brien, 1985, for a brief history) which seeks to understand
how much of our behavior is a function of internal factors (e.g., personality), and how much is a function of external factors (e.g., the particular situational context at the time). To examine this
requires measuring the same behaviors in at least two different situational contexts. Further, given the unreliability of single-occasion measures of behavior, the measures would need to be taken
over several occasions.
The PERS dataset includes cross-situational measures as described above: the same several behaviors measured in each of two different, specific situations, repeated on the same day each week for
three weeks. For outgoingness, several behaviors related to (a) an in-person conversation and (b) a telephone conversation, were measured each week. For conscientiousness, several behaviors related
to (a) the participant’s most important class and (b) the participant’s least important class, were measured each week.
Several methodological and definitional questions may be raised about these measures. For example, it surely is not the same conversations which are being reported on each week, so how do these
represent the same situations measured repeatedly over several occasions? When viewed in this manner, of course they are not. But then, every situation in our lives from moment to moment would be
treated as different, and the issue of cross-situational consistency would be meaningless. Insofar as there is some continuity to our behavior when talking in person or on the telephone, we can speak
of these as similar "situations." We would certainly expect variation over occasions on each of these, and that is why they are measured repeatedly over time. The purist can avoid the issue by only
using the data for a single week’s measurements. This is not advised, however (see Chapters 2, 3, 4). Another obvious question about the measures concerns the comparison of the "most important"
versus "least important" classes. This is not the same as comparing "Psychology 1A" with "English Literature 5;" in fact, it means that most participants are measured in different classes from each
other. Aside from the logistical difficulty (impossibility?) of finding two courses in common for all the participants to compare, the problem is one of psychological comparability. It seemed a
clearer and more definitive situational contrast to use each student’s own highest priority and lowest priority courses, rather than specific courses which might not be perceived much differently by
some or many students.
There are two analytic strategies to use in comparing the behavior across situations: correlational, or ANOVA. In either case, we will use the 3-week average measures (aggregates), and we will
analyze both the individual behaviors and a behavioral aggregate (see Chapter 6). Examine the measures in the codebook in the section "Cross-situational Measures (3-Week Average)." We will use the
outgoingness measures in this exercise (OSP1-OSP6; OST1-OST6). None of these variables need to be recoded, though OSP6 and OST6 may be too different from the others, and so we will not use them.
Create two new aggregate measures across the five behaviors:
osp5tot = sum(osp1 to osp5)
ost5tot = sum(ost1 to ost5)
The correlational approach to comparing behavior cross-situationally is simple: correlate osp1 with ost1, osp2 with ost2, etc., including osp5tot with ost5tot. Look at your results and explain them,
given what you know about aggregation and the kinds of correlations obtained in other exercises between behaviors and with personality traits. You should find them to be lower than many of the other
correlations we examined in previous exercises. Why do you think that is? What do you think would be the results if you used the measures for only Week#1 instead of the 3-week averages?
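The effect of aggregation on these correlations is easy to demonstrate with simulated data. The following is a minimal sketch in Python (numpy); the numbers are synthetic stand-ins, not the PERS measures:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 90  # participants, as in the PERS dataset

# Each person has a stable "trait" level; each observed behavior is
# trait plus situation- and item-specific noise.  Five behaviors per
# situation, loosely analogous to osp1-osp5 and ost1-ost5.
trait = rng.normal(size=n)
in_person = trait[:, None] + rng.normal(scale=2.0, size=(n, 5))
telephone = trait[:, None] + rng.normal(scale=2.0, size=(n, 5))

# Cross-situational correlation for a single behavior...
r_single = np.corrcoef(in_person[:, 0], telephone[:, 0])[0, 1]

# ...versus the 5-behavior aggregates (the osp5tot/ost5tot analogues).
r_aggregate = np.corrcoef(in_person.sum(axis=1), telephone.sum(axis=1))[0, 1]

print(f"single behavior r = {r_single:.2f}")     # modest
print(f"5-item aggregate r = {r_aggregate:.2f}") # noticeably higher
```

Summing across behaviors averages out the item-specific noise, which is why the aggregate correlation reliably exceeds any single-behavior correlation.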
The ANOVA approach is more complex, but it yields some interesting information about the interaction between personality and situation influences, so it is favored by some researchers. Recall from
Chapter 9 that ANOVA compares conditions or groups, so we must divide our participants into groups according to their personality characteristics. This was done in Chapter 6; if you don’t still have
the "extgrp" variable on your datafile, go back to that exercise and recreate it. Our situations already represent two "conditions" under which measurements were taken.
We will use a "2-Way ANOVA." There are two "factors" in our analysis of variance: outgoingness level (low/average/high) and situation (in-person/telephone conversation), and so we have a 3x2 (or 2x3,
if you prefer the reverse order) ANOVA. Since participants are classified in only 1 of the 3 outgoingness groups, this is a "between-subjects" factor. Since participants were measured in both
situations, this is a "within-subjects" factor. This particular analysis, then, is called a "mixed-model," or "between/within" ANOVA. We need to perform separate ANOVAs on each variable; we will use
the 5-item aggregate as an example here.
Since there are repeated measures (situations; our "within-subjects" factor), we will use:
Analyze>General Linear Model>Repeated Measures
Within-Subject Factor Name: converse
Number of Levels: 2
(click on Add, then on Define; you will get a new, larger dialog box)
Within-Subjects Variable (converse): osp5tot, ost5tot
Between-Subjects Factor(s): extgrp
Click on OK to run the ANOVA
The output is daunting, because the design is complex and SPSS provides many multivariate statistics even when they are not needed. There are several tables of output; we will look at the third
table, called "Tests of Within-Subjects Effects." This table gives 4 different computations for each of 3 different effects. We will only examine the lines labeled "Sphericity Assumed." The F-ratio
for Converse is 12.945, and is highly statistically significant (.001 in the "Sig." column means less than a .001 probability of occurring by chance). The interaction effect ("Converse*Extgrp") is not
statistically significant. Now move down to the last table, "Tests of Between-Subjects Effects." The line for "Extgrp" shows that it is statistically significant (F=8.034; probability less than .001
of occurring by chance). So the analysis shows that there is significant variability in the aggregated behaviors as a function of situational context (in-person versus telephone conversation), and as
a function of personality trait level (low versus average versus high outgoingness). There does not appear to be any significant interaction between the personality factors and the situational
factors in influencing behavior. Since significant effects were found, you would want to re-run the analysis and include Options for descriptive statistics (giving you means for the different
conditions) and Post-hocs (giving you specific statistical comparisons between conditions). Also, since this only analyzed the aggregated measures (osp5tot & ost5tot), you will need to repeat the
analysis for each of the separate behaviors (e.g., osp1 & ost1) to compare to the correlations from the first analysis.
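For readers without SPSS, the sums-of-squares partition behind this between/within analysis can be computed directly. The sketch below uses synthetic data for a 3 (trait group) x 2 (situation) design; the group sizes and effect sizes are illustrative assumptions, not the PERS results:

```python
import numpy as np

rng = np.random.default_rng(1)
a, b, n = 3, 2, 30  # 3 outgoingness groups, 2 situations, 30 subjects/group

# scores[g, i, j]: subject i in group g measured in situation j, with a
# built-in group effect, a situation effect, and random noise.
group_effect = np.array([-1.0, 0.0, 1.0])
situation_effect = np.array([0.5, -0.5])
scores = (group_effect[:, None, None] + situation_effect[None, None, :]
          + rng.normal(scale=1.0, size=(a, n, b)))

gm = scores.mean()
ss_total = ((scores - gm) ** 2).sum()

# Between-subjects partition: group effect vs. subjects-within-groups.
subj_means = scores.mean(axis=2)                       # (a, n)
ss_between_subj = b * ((subj_means - gm) ** 2).sum()
grp_means = scores.mean(axis=(1, 2))                   # (a,)
ss_group = n * b * ((grp_means - gm) ** 2).sum()
ss_subj_within = ss_between_subj - ss_group            # error term for group

# Within-subjects partition: situation, interaction, and residual error.
ss_within_subj = ss_total - ss_between_subj
sit_means = scores.mean(axis=(0, 1))                   # (b,)
ss_sit = a * n * ((sit_means - gm) ** 2).sum()
cell = scores.mean(axis=1)                             # (a, b) cell means
ss_inter = n * ((cell - grp_means[:, None] - sit_means[None, :] + gm) ** 2).sum()
ss_err = ss_within_subj - ss_sit - ss_inter            # error term for sit/inter

F_group = (ss_group / (a - 1)) / (ss_subj_within / (a * (n - 1)))
F_sit = (ss_sit / (b - 1)) / (ss_err / (a * (n - 1) * (b - 1)))
F_inter = (ss_inter / ((a - 1) * (b - 1))) / (ss_err / (a * (n - 1) * (b - 1)))
print(F_group, F_sit, F_inter)
```

Note that the group and situation effects are tested against different error terms (subjects-within-groups and the situation-by-subjects residual, respectively), which is exactly what distinguishes the mixed model from a fully between-subjects ANOVA.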
Compare the two analyses (correlational and ANOVA). Which do you prefer and why? You can try the same analyses with the conscientiousness measures on your own.
Epstein, S. & O’Brien, E. J. (1985). The person-situation debate in historical and current perspective. Psychological Bulletin, 98, 513-537.
There are lots of books that cover this subject but it is not easy to pull the information together into a friendly format. Below is an example calculation of the electrical characteristics, number
of turns and air gap for an output transformer suitable for use with a 300B. I should point out that the choice of core type and material is up to you. I have very limited data on cores so please
don't email asking for data. However I will be pleased to help with the math if you have the data, and to receive helpful comments or corrections if you find any error or know a better technique.
After the example, the theoretical basis of the equations is given.
SUMMARY OF FORMULAE (using CGS units):
Reactance of an inductor (Ohms):
X = 2πfL or L = X/(2πf)
Equivalent permeability of an air-gapped magnetic circuit, μ[e]:
μ[e] = μ / (1 + μ(l[g]/MPL))
μ is the material permeability relative to μ[0] for a given magnetic path length.
DC Flux (Gauss):
B = 1.257 × μ[e] × N.I/MPL or N = B.MPL/(1.257 × μ[e] × I)
Inductance (Henries):
L = 1.257 × μ[e] × N^2 × A × 10^-8/MPL
Peak AC Flux (Gauss):
B[AC] = V[r.m.s] × 10^8/(4.44 × N × f × A)
Transformation Ratios:
Vp/Vs = Np/Ns = (Zp/Zs)^(1/2) = (Lp/Ls)^(1/2)
CGS Units:
I: Amperes; f: Hz; A: sq cm; MPL: cm; l[g]: cm
Flux density is the key consideration for the design of an audio transformer. We do not want the core to saturate at-all. In fact, we want to stay within the sensibly linear region of the B-H curve.
The transformer equation shows that the AC flux density increases with decreasing frequency so we need to consider the lowest frequency of interest. If the transformer is to handle DC current, we
need to consider not only the sum of the DC flux and the peak AC flux but also, the effect of the DC current on the primary inductance of the transformer.
Let I[DC] = 0.08A
Now we have a dilemma: we have to choose core material and dimensions. This is where either experience or a lot of iteration comes in. For the purposes of this example let’s use a core having the
following parameters:
MPL = 26cm
μ = 4000 for a silicon steel C core with MPL = 26cm
A = 11.6cm^2 (Note, this figure takes into account the stacking efficiency of the core.)
Now we come to the next dilemma: we know that a gap is likely to be required because the DC current is comparable to the peak AC current in this application. In practice, it is a good idea to use a
gap large enough that variations in DC current will not affect the primary inductance greatly. Experience suggests that a total gap length of 22mils (0.056cm) might be suitable.
l[g] = 0.056cm
1/ Determine a target value for the primary inductance, Lp. We know that the plate resistance of a 300B is 700Ω. The primary inductance controls the LF performance, so let us make 10Hz the lowest
frequency of interest.
A good ‘rule of thumb’ (for a SE triode) is to make the reactance at the lowest frequency of interest equal to 5 times the plate resistance. For an inductor, L = X/(2πf), so:
Lp = 5 × 700 / (2π × 10) ≈ 56H
2/ Calculate the equivalent permeability:
Using μ[e] = μ / (1 + μ(l[g]/MPL));
μ[e] = 4000 / (1 + 4000 × (0.056/26)) = 416
3/ Calculate number of turns permissible with the given DC current:
Using N = B.MPL/(1.257 × μ[e] × I);
For the given type of core, a good limit for the total flux density is 16,000 Gauss. At this level, the transformer will remain linear. We know that Idc = 0.08A. For a 300B, we can also see that the
peak AC current will approach but not exceed the DC current and thus we can set a limit for the DC flux density as one-half of the total flux density, 8000 Gauss.
N = 8000 × 26 / (1.257 × 416 × 0.08)
N = 4972, say 5000 turns.
Now we have another dilemma: how many turns are practical? For this we need to evaluate the core window against the chosen wire size, insulation and secondary turns. In this case, I know (again from
experience) that 5000 turns will fit comfortably.
4/ And so we can calculate the primary inductance:
Using L = 1.257 × μ[e] × N^2 × A × 10^-8/MPL;
L = 1.257 × 416 × 5000^2 × 11.6 × 10^-8 / 26
L = 58H. This is just above the desired target of 56H.
(Yes, I did ‘cheat’ by doing all the necessary iteration before setting out this example. This is where the work lies……)
5/ Calculate the AC flux at 10Hz:
Using B[AC] = V[r.m.s] × 10^8/(4.44 × N × f × A);
Also Vp/Vs = Np/Ns = (Zp/Zs)^(1/2) = (Lp/Ls)^(1/2)
We want a primary impedance of 3800Ω and let’s say that the secondary is to be 8Ω, so the turns ratio will be:
Np/Ns = (Zp/Zs)^(1/2) = (3800/8)^(1/2) ≈ 22
We know that the turns ratio is also the voltage transformation ratio.
For a power of 10W into 8Ω: Vrms = (10 × 8)^(1/2) = 8.9Vrms, so the primary voltage is 8.9 × 22 ≈ 196Vrms.
B[AC] = 196 × 10^8/(4.44 × 5000 × 10 × 11.6) = 7600 Gauss
6/ Check combined DC and peak AC flux:
The AC and DC flux densities sum to 15,600 Gauss. This is the maximum value at 10Hz at 10W.
Silicon steel C cores saturate at more than 16,000 Gauss, so this is satisfactory.
* You should always confirm this by plotting the load-line and then make an allowance for load reactance which will cause the load-line to become elliptical.
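As a check on the arithmetic, the whole worked example can be reproduced in a few lines. This is a sketch in Python; all constants are the ones used in the steps above:

```python
import math

# Constants from the worked example (CGS units).
MPL = 26.0    # magnetic path length, cm
mu = 4000.0   # core material permeability
A = 11.6      # core cross-sectional area, sq cm
lg = 0.056    # total air gap length, cm
I_dc = 0.08   # standing DC current, A
N = 5000      # chosen primary turns
f = 10.0      # lowest frequency of interest, Hz

# Equivalent permeability of the gapped core.
mu_e = mu / (1 + mu * lg / MPL)

# Target primary inductance: a reactance of 5 x 700 ohms at 10 Hz.
L_target = 5 * 700 / (2 * math.pi * f)

# Primary inductance actually obtained with 5000 turns.
L = 1.257 * mu_e * N**2 * A * 1e-8 / MPL

# Peak AC flux for 10 W into 8 ohms through a sqrt(3800/8) turns ratio.
V_primary = math.sqrt(10 * 8) * math.sqrt(3800 / 8)  # just under 196 Vrms
B_ac = V_primary * 1e8 / (4.44 * N * f * A)

# DC flux, and hence the combined peak flux the core must support.
B_dc = 1.257 * mu_e * N * I_dc / MPL
print(round(mu_e), round(L_target, 1), round(L, 1), round(B_dc), round(B_ac))
```

The numbers land within rounding of the values quoted above (416; a 56H target; 58H; roughly 8000 and 7600 Gauss, summing just under the 16,000 Gauss limit). B_dc comes out slightly above 8000 Gauss only because the turn count was rounded up from 4972 to 5000.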
Analogy between the magnetic circuit and the electric circuit:
│Electric Circuit            │Magnetic Circuit                        │
│Quantity        │Unit      │Quantity                 │Unit          │
│E.m.f           │Volt      │m.m.f                    │Ampere        │
│Current         │Ampere    │Magnetic field strength H│Oersteds      │
│Current density │Ampere/m^2│Magnetic flux Φ          │Maxwells      │
│Resistance      │Ohm       │Magnetic flux density B  │Gauss         │
│                │          │Reluctance S             │Ampere/Maxwell│
│Current = e.m.f./resistance │Flux = m.m.f./reluctance                │
S.I. definition of electrical and magnetic units and CGS conversions:
│Unit │Symbol│Definition │To convert to CGS │Multiply by │
│Henry (H) │L │The inductance of a closed circuit in which an e.m.f. of 1V is produced when the electric current varies at a rate of 1A/s │ │ │
│Weber (Wb) │Φ │The magnetic flux which, when linking a circuit of one turn, produces in it an e.m.f. of 1V when it is reduced to zero at a uniform rate in│Maxwells │1 x 10^8 │
│ │ │1s │ │ │
│ │ │ │& lines │ │
│Ampere/metre│H │Ampere-Turns per metre │Oersteds │0.01257 │
│ │ │ │ │ │
│ │ │(Ampere-Turns per cm) │ │(1.257) │
│Tesla (T) │B │The magnetic flux density equal to 1 Wb/m^2 │Gauss & Lines/cm^2│10^4 │
│Henry/metre │μ[0] │Permeability of free space 4π×10^-7 │Gauss/Oersted │795774.72 (= │
│ │ │ │ │unity) │
NOTE! CGS units are used.
H is the magnetic field strength due to a current flowing in a coil.
H = magneto motive force (mmf) per per unit length of the magnetic circuit.
The length of the magnetic circuit is denoted by MPL
The mmf is the force caused by a current I flowing through N turns. In a coil it is the total current linked with the magnetic circuit.
The unit for H is the Oersted, which is equal to 1.257 ampere-turns per cm.
Thus H = 1.257 × N.I/MPL (Oersteds)
Consider a point C on a magnetic field line of radius r about a conductor A situated in a vacuum:
B is the flux density at point C
DEFINITION: The flux density at a point C on radius r is equal to the permeability of free space μ[0] multiplied by the field strength at point C.
Substituting for H from above we have:
B = μ[0] × 1.257 × N.I/MPL (Gauss)
NOTE, in CGS units, μ[0] = 1 Gauss/Oersted
Self inductance is defined as the number of flux linkages which exist when a current is flowing. A coil possesses an inductance of 1 Henry if a current of 1 ampere through the coil produces a
flux-linkage of 1 Weber-turn ≡ 10^8 lines-turn.
Φ is the total magnetic flux produced by a current flowing in a coil.
The unit for Φ is the Maxwell or line.
Φ = B.A where;
B is the flux density. (Gauss)
A is the core cross sectional area. (cm^2 )
Flux linkage is the linkage between the number of lines of flux with a single loop of wire.
Thus the total flux-linkage = Φ × N
Since flux is proportional to current and total flux linkage is proportional to flux, then it follows that flux-linkage is proportional to current thus;
Flux-linkage ∝ I.
Introducing a constant, k we have;
Flux-linkage = k.I
This constant, k, is the self inductance of the circuit (electrical and magnetic) and is given the symbol L. Note that by definition, 1H results when 1A produces 10^8 lines-turn, thus we must divide
the result by 10^8. Making the inductance k (L) the subject we have;
L = Flux-linkage/I × 10^-8
From above, flux-linkage = Φ × N, more usually written N.Φ, and we have the result;
L = N.Φ/I × 10^-8
From above, we have Φ = B.A, B = μ[0].H and H = 1.257 × N.I/MPL, so
Φ = 1.257 × μ[0] × N.I.A/MPL. Substituting into the expression for L we get;
L = 1.257 × μ[0] × N^2 × A × 10^-8/MPL (H)
Observe that L ∝ N^2 and so we can further state:
Lp/Ls = (Np/Ns)^2
A further essential result can be derived at this point, the relationship between turns ratio and primary to secondary impedance.
Neglecting losses, the primary power will be transferred to the secondary. Ohm’s law gives,
P = V^2 / R. In this case, R will be impedance, so:
Pp = Vp^2 / Zp = Ps = Vs^2 / Zs
Now, the voltage in the primary and secondary windings is directly proportional to the number of turns and so we can replace V with N which yields the important result:
Np^2 / Zp = Ns^2 / Zs. Cross-multiplying we get:
(Np/Ns)^2 = Zp/Zs which, from above, is also equal to Lp/Ls, thus:
Vp/Vs = Np/Ns = (Zp/Zs)^(1/2) = (Lp/Ls)^(1/2)
Since we are considering iron (or other magnetic material) cored transformers, we need to modify the permeability from that of free space to that of the core. The permeability of magnetic materials
is usually specified as relative permeability to that of free space. The permeability is also a function of magnetic path length. Permeability data for core materials is sometimes presented as a
graph of permeability vs MPL.
Thus μ = μ[0].μ[r]
Hereafter, μ will represent the product μ[0].μ[r] unless otherwise stated.
Consider an inductor having an air gap where subscript g indicates the gap and subscript m indicates the core:
Total reluctance = l[g]/(μ[g].a[g]) + MPL/(μ[m].a[m])
Now we can introduce an equivalent permeability for the complete circuit, μ[e].
The total reluctance will be equal to l[t]/(μ[e].a)
We can now equate these two formulae for total reluctance. Noting that the area of the gap is equal to the area of the core and so multiplying through by a;
l[t]/μ[e] = l[g]/μ[g] + MPL/μ[m]
Note, μ[m] is the product μ[0].μ[r], which we will refer to as μ.
Cross multiply to make μ[e] the subject and we have;
μ[e] = l[t] / (l[g]/μ[g] + MPL/μ)
l[t] = l[g] + MPL ⇒
μ[e] = (l[g] + MPL) / (l[g]/μ[g] + MPL/μ)
For any likely design, l[g] <<< MPL, so l[g] + MPL ≈ MPL, and for an air gap μ[g] = 1. Multiplying the numerator and denominator by μ/MPL then gives;
μ[e] = μ / (1 + μ(l[g]/MPL))
This is the classic equation for equivalent permeability of a core – air gap magnetic circuit. It is used to modify the equations for DC flux density and thus also, inductance.
To calculate the flux density due to AC current, we need to use Faraday’s law of electromagnetic induction:
DEFINITION: The instantaneous value of e.m.f., in volts, induced in a coil is equal to the negative rate of change of flux-linkages, in Webers per second.
Consider a single sinusoidal magnetic flux wave of peak amplitude Φ and frequency f. The flux will change from +Φ to -Φ in 1/(2f) seconds, thus;
Average rate of change of flux = 2Φ ÷ 1/(2f) = 4fΦ Webers/second.
By definition, 1 weber/second = 1 volt and, noting that 1Wb ≡ 10^8 lines, we have;
Average e.m.f. induced per turn = 4fΦ volts, and the r.m.s. value is 1.11 times the average value, thus;
V[r.m.s] per turn = 4.44fΦ/10^8 volts and for N turns;
V[r.m.s] = 4.44NfΦ/10^8 volts. Or;
N = V[r.m.s] × 10^8/(4.44 × f × Φ) This is the classic transformer equation (which is used to calculate the number of turns required for the primary of a power transformer for a desired maximum flux).
Re-writing the transformer equation to make Φ the subject, and knowing that Φ equals flux density times area, we have the peak AC flux density;
B[AC] = V[r.m.s] × 10^8/(4.44 × N × f × A) (Gauss)
On the Relationship Between Two Systolic Array Design Methodologies
December 1992 (vol. 41 no. 12)
pp. 1589-1593
The parameter method and data dependency method have been proposed as systematic design methodologies for systolic arrays. The authors describe the relationship between the two methodologies and show
that the parameter method applies to a subclass of the algorithms that can be processed by the dependency method. The optimization procedure of the parameter method can be applied, in a restricted
sense, within the framework of the dependency method. This procedure is used to derive an optimal array for the deconvolution algorithm.
[1] G. J. Li and B. W. Wah, "The design of optimal systolic arrays,"IEEE Trans. Comput., vol. C-34, pp. 66-77, Jan. 1985.
[2] D. I. Moldovan and J. A. B. Fortes, "Partitioning and mapping algorithms into fixed size systolic arrays,"IEEE Trans. Comput., vol. C-35, pp. 1-12, Jan. 1986.
[3] J. A. B. Fortes, K. S. Fu, and B. W. Wah, "Systematic approaches to the design of algorithmically specified systolic arrays," inProc. IEEE Int. Conf. Acoust., Speech, Signal Processing, Mar.
1985, pp. 26- 29.
[4] S. K. Rao, "Regular iterative algorithms and their implementations on processor arrays," Ph.D. dissertation, Stanford Univ., Stanford, CA, Oct. 1985.
[5] M. C. Chen, "Synthesizing VLSI architectures: Dynamic programming solver," inProc. 1986 Int. Conf. Parallel Processing, Aug. 1986, pp. 776-784.
[6] S. Y. Kung, VLSI Array Processors. Englewood Cliffs, NJ: Prentice Hall, 1988.
[7] J. V. McCanny and J. G. McWhirter, "The derivation and utilization of bit level systolic array architectures," inProc. 1986 Int. Workshop Systolic Arrays, Oxford, England, July 1986, pp. 47-59.
[8] J. A. B. Fortes and D. I. Moldovan, "Data broadcasting in linearly scheduled array processors," inProc. 11th Annu. Int. Symp. Comput. Architecture, June 1984.
[9] M. T. O'Keefe and J. A. B. Fortes, "A comparative study of two systematic design methodologies for systolic arrays," Masters Thesis, School of Electrical Engineering, Purdue Univ., May 1986.
[10] M. T. O'Keefe and J. A. B. Fortes, "A comparative study of two systematic design methodologies for systolic arrays," inProc. 1986 Int. Conf. Parallel Processing, Aug. 1986, pp. 672-675.
[11] C. Guerra and R. Melhem, "Synthesizing non-uniform systolic designs," inProc. 1986 Int. Conf. Parallel Processing, Aug. 1986, pp. 765-772.
[12] J. Delosme and I. Ipsen, "Efficient systolic arrays for the solution of Toeplitz systems: An illustration of a methodology for the construction of systolic architectures in VLSI," inProc. 1986
Int. Workshop Systolic Arrays, Oxford, England, July 1986, pp. 37-46.
[13] P. R. Cappello and K. Steiglitz, "Unifying VLSI array designs with geometric transformations," inProc. 1983 Int. Conf. Parallel Processing, Aug. 1983, pp. 448-457.
[14] Proc. Int. Conf. Application Specific Array Processors, M. Valero, Y. Kung, T. Lang, and J. A. B. Fortes, Eds., Sept. 1991, IEEE Catalog Number 91-71633.
Index Terms:
systolic array design methodologies; parameter method; data dependency method; optimization; optimal array; deconvolution algorithm; logic CAD; optimisation; parallel algorithms; systolic arrays.
M.T. O'Keefe, J.A.B. Fortes, B.W. Wah, "On the Relationship Between Two Systolic Array Design Methodologies," IEEE Transactions on Computers, vol. 41, no. 12, pp. 1589-1593, Dec. 1992, doi:10.1109/
Argo, IL ACT Tutor
Find an Argo, IL ACT Tutor
...Currently, I'm a school counselor at a local public high school finishing my eighth year after teaching Spanish for four years. On the college counseling side, I have extensive knowledge of
many two and four-year colleges having assisted over 500 secondary students in discerning the next step in...
20 Subjects: including ACT Math, Spanish, English, writing
...I have taught Algebra 2 in a high school and at Indiana University. I think that I can persuade any student that math is not only interesting and fun, but beautiful as well. I have taught
Trigonometry in a high school and at the University of Illinois at Chicago.
24 Subjects: including ACT Math, calculus, physics, geometry
...I will be happy to share my knowledge and the workbook with you. I hold a Bachelor of Science degree in Electrical Engineering from the University of Illinois at Chicago with emphasis in
higher mathematics. I am a full-time engineer at a Telecom company.
18 Subjects: including ACT Math, geometry, ASVAB, algebra 1
...Later, I would become Exam Prep Coordinator and Managing Director of the Learning Center. However, my next venture was being involved in the martial arts where I learned goal-setting skills,
the importance of building student's confidence, and how to motivate students. Although I took a step ba...
26 Subjects: including ACT Math, chemistry, English, reading
...I have been using Microsoft Access since 2001 in a corporate setting. I use Access to gather client phone numbers, create inventory and guest lists, and overall to track various pieces of
information. I have been using Microsoft Outlook at my primary email platform for over 5 years in my career.
36 Subjects: including ACT Math, English, reading, geometry
Diffie-Hellman Key Exchange
This 10 line C program was written by an anonymous poster to sci.crypt. Possibly the poster is located in the US, though there is no way of telling. Anyway nobody's code includes a beautifully
compacted big num implementation, so all you need is a C compiler.
If you are trying this out with the example nobody gives in his message, you should note that you must change the initialisation of S=129 on the first line to be S=3 (size of key in bytes + 1) -
nobody says this in the text; it just caught me out the first time.
I posted a follow up to this message to sci.crypt in which I mail dropped nobody a message using the 1024 bit Diffie-Hellman key exchange he/she initiated. This message includes the Diffie-Hellman in
perl I wrote (it's very similar to RSA in it's use of modular exponentiation) inspired by nobody's code.
My perl-diffie-hellman plus examples of use (inspired by nobody's C code)
Here is nobody's posting to sci.crypt (it was cross posted to talk.politics.crypto and alt.anonymous.messages also):
Newsgroups: sci.crypt,talk.politics.crypto,alt.anonymous.messages
From: nobody@replay.com (Name withheld by request)
Organization: Replay and Company UnLimited.
X-Warning: This message was forwarded by an Anonymous Remailer.
X-Comment: Replay does not necessarily approve of the contents of this posting.
X-Comment: Please report inappropriate use to <postmaster>
Subject: Export-a-crypto-system: Diffie-Hellman
X-Posting-Host: xs1.xs4all.nl
Date: Sun, 2 Apr 1995 02:25:08 +0000
Sender: usenet@demon.co.uk
Lines: 88
Inspired by Adam Back's RSA code...
The export-a-crypto-system sig II - Diffie-Hellman in 10 lines of C:
#include <stdio.h>                          /* Usage: dh base exponent modulus */
typedef unsigned char u;u m[1024],g[1024],e[1024],b[1024];int n,v,d,z,S=129;a(
u *x,u *y,int o){d=0;for(v=S;v--;){d+=x[v]+y[v]*o;x[v]=d;d=d>>8;}}s(u *x){for(
v=0;v<S-1&&x[v]==m[v];)v++;if(x[v]>=m[v])a(x,m,-1);}r(u *x){d=0;for(v=0;v<
S;){d|=x[v];x[v++]=d/2;d=(d&1)<<8;}}M(u *x,u *y){u X[1024],Y[1024];bcopy(x,X,S
);bcopy(y,Y,S);bzero(x,S);for(z=S*8;z--;){if(X[S-1]&1){a(x,Y,1);s(x);}r(X);a(Y
,Y,1);s(Y);}}h(char *x,u *y){bzero(y,S);for(n=0;x[n]>0;n++){for(z=4;z--;)a(y,y
,1);x[n]|=32;y[S-1]|=x[n]-48-(x[n]>96)*39;}}p(u *x){for(n=0;!x[n];)n++;for(;n<S
;)printf("%02X",x[n++]);printf("\n");}main(int c,char **v){h(v[1],g);h(v[2],e);
h(v[3],m);bzero(b,S);b[S-1]=1;for(n=S*8;n--;){if(e[S-1]&1)M(b,g);M(g,g);r(e);}p(b);}
To compile it, type: gcc dh.c -o dh
The program is somewhat slow, but it works, and it's almost small enough to
attach to your posts as a signature. It's set up for 1024-bit numbers, but
you can allow bigger numbers by setting the value of the S variable. To
allow for carry digits, the value set for S must be one greater than the
maximum length in bytes of the modulus that you wish to use. The time to
calculate a modular exponent is roughly proportional to the cube of the
number of bits, so if a 1024-bit number takes 5 minutes, a 2048-bit number
will take 40 minutes, and a 4096-bit number would take about 5 hours.
All numbers are entered and printed in hexadecimal. Because of the minimal
size, the program doesn't check for human error; it may do strange things
if you give it bad data.
For example purposes, the numbers used below are small; any practical use would require numbers of 512-1024 bits in length.
To generate a key, Joe selects a public generator (3 in this example), a
public prime modulus (10001 hexadecimal), and a secret exponent (9A2E hex).
Joe then calculates the following:
joe% dh 3 9A2E 10001
C366
This is his public key. Joe sends these three numbers (3,10001,C366)
to Alice.
To encrypt a message to Joe, Alice picks a secret random number (4C20 in
this example) and using Joe's generator and modulus, calculates:
alice% dh 3 4C20 10001
6246
She sends this result to Joe. Alice then takes Joe's public key and her
secret random number and calculates:
alice% dh C366 4C20 10001
DED4
She uses this result as a session key to encrypt her message to Joe.
To decrypt the message, Joe uses the number Alice sent him, and his
secret key to calculate:
joe% dh 6246 9A2E 10001
DED4
Joe now has the session key and can decrypt Alice's message.
An eavesdropper sees Joe send Alice three numbers (3,10001,C366), and
sees Alice send Joe '6246'. But the eavesdropper can not use these
numbers to calculate the secret session key (DED4), and thus can not
decrypt the message.
You can use this program to do RSA also, but generating a key is more complicated.
I wonder if the USA ITAR law covers integer math programs?
Here is my key:
Comments, html bugs to me (Adam Back) at <adam@cypherspace.org>
Identify the Derivative Function
On the left is a graph of a function `f`, and one of the three graphs on the right is the derivative of `f`. Make a guess and check your answer by clicking the red question mark buttons. Give
yourself a new, randomized problem by clicking the "Reset Graphs" button.
Azimuthal distinguishability of entangled photons generated in spontaneous parametric down-conversion
Optics Express, Vol. 15, Issue 22, pp. 14636-14643 (2007)
We experimentally demonstrate that paired photons generated in different sections of a down-conversion cone, when some of the interacting waves show Poynting vector walk-off, carry different spatial
correlations, and therefore a different degree of spatial entanglement. This is shown to be in agreement with theoretical results. We also discuss how this azimuthal distinguishing information of the
down-conversion cone is relevant for the implementation of quantum sources aimed at the generation of entanglement in other degrees of freedom, such as polarization.
© 2007 Optical Society of America
1. Introduction
Paired photons entangled in the spatial degree of freedom are represented by an infinite dimensional Hilbert space. This offers the possibility to implement quantum algorithms that either inherently
use dimensions higher than two or exhibit enhanced efficiency in increasingly higher dimensions (see [1] and references therein). These include the demonstration of the violation of bipartite, three-dimensional Bell inequalities [2], the implementation of the quantum coin tossing protocol with qutrits [3], and the generation of quantum states in ultra-high dimensional spaces [4]. Actually, the amount of spatial bandwidth, and the degree of spatial entanglement, can be tailored [5, 6], making it possible to control the effective dimensionality where spatial entanglement resides.
The most widely used source for generating paired photons with entangled spatial properties is spontaneous parametric down-conversion (SPDC) [7, 8]. In this process, photons are known to be emitted in cones whose shape depends on the phase-matching conditions inside the nonlinear crystal. All relevant experiments reported to date make use of a small section of the full down-conversion cone, but the spatial properties of different sections of the cone have remained unexplored experimentally up to now. They could be explored, for example, by relocating the single-photon counting modules. Then, one question naturally arises: Are the entangled spatial properties of the photons modified depending on the location in the down-conversion cone where they are detected?
The answer to this question is of great relevance for the implementation of many quantum information schemes. When considering entanglement in the spatial degree of freedom, one should determine
whether pairs of photons with different azimuthal angle of emission might show different spatial quantum correlations, since all quantum information applications are based on the availability and use
of specific quantum states.
Additionally, the spatial properties of entangled two-photon states have to be taken into account even when entanglement takes place in other degrees of freedom, such as polarization. In general, it is required to suppress any spatial "which-path" information that otherwise degrades the degree of entanglement. This is especially true for configurations that make use of a large spatial bandwidth [9] and in certain SPDC configurations where horizontally and vertically polarized photons are generated in different sections of the down-conversion cone [10, 11]. Finally, the generation of heralded single photons with well defined spatial properties, i.e., a Gaussian shape for optimum coupling into monomode optical fibers, depends on the angle of emission [12].
2. Experimental set-up and results
In Fig. 1(a) we present a scheme of our experimental set-up. The output beam of a CW diode laser emitting at λ = 405 nm is appropriately spatially filtered to obtain a beam with a Gaussian profile, while a half-wave plate (HWP) is used to control the polarization. The pump beam is focused to a w[0] = 136 μm beam waist on the input face of an L = 5 mm thick lithium iodate crystal, cut at 42° for type-I degenerate collinear phase matching. The generated photons, signal and idler, are ordinary polarized, in contrast to the extraordinarily polarized pump beam. The crystal is tilted to generate paired photons which propagate inside the nonlinear crystal at a non-collinear angle of φ = 4°. Due to the crystal birefringence, the pump beam exhibits Poynting vector walk-off with angle ρ[0] = 4.9°, while the generated photons do not exhibit spatial walk-off.
Fig. 1(b) represents the transverse section of the down-conversion cone. The directions of propagation of the signal and the idler photons over this ring are determined by the azimuthal angle α, which is the angle between the plane of propagation of the down-converted photons and the plane containing the pump propagation direction and the crystal optic axis. To determine experimentally the position of the crystal optic axis, and thus the origin of α, we measure the relative transverse position of the pump beam at the input and output faces of the nonlinear crystal using a CCD camera.
Right after the crystal, each of the generated photons traverses a 2-f system with focal length f = 50 cm. Low-pass filters are used to remove the remaining pump beam radiation. After the filters, the photons are coupled into multimode fibers. In order to increase our spatial resolution, we use small pinholes of 300 μm diameter. We keep the idler pinhole fixed and measure the coincidence rate while scanning the signal photon's transverse spatial shape with a motorized XY translation stage. Finally, as we are interested in the different spatial correlations at different azimuthal positions of the down-conversion ring, instead of rotating the whole detection system, the nonlinear crystal and the polarization of the pump beam are rotated around the propagation direction. Due to slight misalignments of the rotation axis of the crystal, after every rotation it is necessary to adjust the tilt of the crystal to achieve generation of photons at the same non-collinear angle in all the measurements.
Images for different azimuthal sections of the cone were taken. We present a sample of them in the upper row of Fig. 2, which summarizes our main experimental results. Each column shows the coincidence rate for α = 0°, 90°, 180° and 270°. The movie shows the experimental and theoretical spatial shape of the signal photon corresponding to other values of the angle α. Each point of these images corresponds to the recording of a 10 s measurement. The typical number of coincidences at the maximum is around 10 photons per second. The resolution of the experimental images is 50 × 50 pixels. The different spatial shapes measured for the mode function of the signal photons clearly show that the down-conversion cone does not possess azimuthal symmetry. This agrees with the theoretical predictions presented in the lower row of Fig. 2. Note that no fitting parameter has been used whatsoever. Slight discrepancies between experimental data and theoretical predictions might be due to the small, but not negligible, bandwidth of the pump beam and to the fact that the resolution of our system is limited by the size of the detection pinholes.
An interesting feature that can be observed in these images is that the mode function in Fig. 2(b), corresponding to the case α = 90°, presents a nearly Gaussian shape. We will show below that this effect happens whenever φ ≃ ρ[0], which corresponds to our experimental conditions. On the other hand, the mode functions shown in the other panels of Fig. 2 are highly elliptical.
3. Azimuthal distinguishability of paired photons generated in different sections of the down-conversion cone
To gain further insight, we turn to the theoretical description of this problem. The signal photon propagates along the direction z[1] (see Fig. 1) with longitudinal wavevector k[s] = [(ω[s]n[s]/c)^2 - |p|^2]^(1/2) and transverse wavevector p = (p[x], p[y]). Similarly, the idler photon propagates along the z[2] direction with longitudinal wavevector k[i] and transverse wavevector q. Here we consider the signal and idler photons as purely monochromatic, due to the use of a narrow pinhole on the idler side, which selects a very small bandwidth of frequencies of the down-converted ring. Although photons detected in different parts of the down-conversion cone might present slightly different polarizations [13], this is a small effect, and therefore we neglect it.
The quantum two-photon state at the output face of the nonlinear crystal, within first-order perturbation theory, can be written as |Ψ〉 = ∫ d p d q Φ(p, q) a[s]^†(p) a[i]^†(q)|0,0〉, where the mode function Φ(p, q) is given in [14, 12]. The mode function shows that the spatial shape of the signal photon exhibits ellipticity. The amount of ellipticity depends on the non-collinear configuration [15], and on the azimuthal angle of emission (α) due to the presence of spatial walk-off. The latter is the cause of the azimuthal symmetry breaking of the down-conversion cone. Both effects turn out to be important when the length of the crystal L is larger than the non-collinear length w[0]/sin φ and the walk-off length w[0]/tan ρ[0]. Our experimental configuration is fully in this regime. We should notice that in a collinear SPDC configuration, Poynting vector walk-off also introduces ellipticity of the mode function [16].
The theory also predicts the orientation of the spatial mode function of the signal photon, as shown in Fig. 2. This orientation is given by the slope, in the transverse-momentum plane, of the loci of perfect phase-matching transverse momentum, which depends on the non-collinear angle φ, the walk-off angle ρ[0], and the azimuthal angle α. If α = 90° and φ ≃ ρ[0], the spatial mode function of the signal photons shows a nearly Gaussian shape, due to the compensation of the non-collinear and walk-off effects. All these results are in agreement with the experimental data in Fig. 2.
This azimuthal variation of the spatial correlations can be made clearer if we express the mode function of the signal photon, Φ[s](p) = Φ(p, q = 0), in terms of orbital angular momentum (OAM) modes. The mode function can be described by a superposition of spiral harmonics [17], Φ[s](ρ, θ) = (2π)^(-1/2) Σ[l] a[l](ρ) exp(ilθ), where a[l](ρ) = (2π)^(-1/2) ∫ dθ Φ[s](ρ, θ) exp(-ilθ), and ρ and θ are cylindrical coordinates in the transverse wave-number space. The weight of the l-th harmonic is given by C[l] = ∫ dρ ρ |a[l](ρ)|^2. The Gaussian pump beam corresponds to a mode with l = 0, while the idler photon is projected into q = 0, which corresponds to projection into a large-area Gaussian mode (l = 0).
Figures 3(a) and 3(b) show the weight of the mode l = 0, and the weight of all other OAM modes, as a function of the angle α for two different pump beam widths. We observe that the OAM correlations of the two-photon state change along the down-conversion cone due to the azimuthal symmetry breaking induced by the spatial walk-off. This implies that the correlations between OAM modes do not follow the relationship l[s] + l[i] = l[p]. From Fig. 3 it is clearly observed that for larger pump beams the azimuthal changes are smoothed out, since in this case the non-collinear and walk-off lengths are much larger than the crystal length.
Figures 3(c) and 3(d) plot the OAM decomposition for w[0] = 100 μm, and Figs. 3(e) and 3(f) for w[0] = 600 μm, for α = 0°, 90°. Notice that the weight of the l = 0 mode is maximum for α = 90°, which is therefore the optimum angle for the generation of heralded single photons with a Gaussian-like shape. This effect can be clearly observed in Fig. 2(b). On the contrary, for α = 270°, the combined non-collinear and walk-off effects make the weight of the l = 0 mode reach its minimum value. This is of relevance in any quantum information protocol where the generated photons, no matter the degree of freedom in which the quantum information is encoded, are to be coupled into single-mode fibers.
Importantly, the degree of spatial entanglement of the two-photon state also shows azimuthal variations, depending on the direction of emission of the down-converted photons. Figure 4 shows the Schmidt number K = 1/Tr ρ[s]^2, where ρ[s] = Tr[i] |Ψ〉〈Ψ| is the density matrix that describes the quantum state of the signal photon, after tracing out the spatial variables corresponding to the idler photon. The Schmidt number [6] is a measure of the degree of entanglement of the spatial two-photon state, K = 1 corresponding to a product state, while larger values of K correspond to increasingly larger degrees of entanglement. The degree of entanglement is maximum for α = 0° and minimum for α = 90°, as shown in Fig. 4(a). The degree of entanglement is known to decrease with increasing filtering [18], i.e., with larger widths of the mode onto which the idler photon is projected, as shown in Fig. 4(b), and to increase for larger values of the pump beam width (w[0]).
4. Effects on the generation of polarization entanglement
The azimuthal distinguishing information introduced by walking SPDC affects the quantum properties of polarization-entangled states when photons generated in different sections of the down-conversion cone are used. This is the case when using two type-I SPDC crystals whose optic axes are rotated by 90°. This configuration, originally demonstrated for the generation of polarization-entangled photons [10], has been used as well for the generation of hyperentangled quantum states [4]. The quantum state of the two photons writes
|Ψ〉 = 2^(-1/2) ∫ d p d q [Φ[1](p, q) |H〉[s]|H〉[i] + Φ[2](p, q) |V〉[s]|V〉[i]],
where Φ[1](p, q) = Φ(α = 0, p, q) exp(ip[y]tanρ[s]L + iq[y]tanρ[i]L) describes the spatial shape of the photons generated in the first nonlinear crystal, ρ[s,i] are the spatial walk-off angles of the down-converted photons traversing the second nonlinear crystal, and Φ[2](p, q) = Φ(α = 90°, p, q) corresponds to the photons generated in the second nonlinear crystal. The quantum state in polarization space is obtained by tracing out the spatial variables, i.e., ρ[p] = Tr[s] |Ψ〉〈Ψ|, which gives
ρ[p] = (1/2) [ |HH〉〈HH| + |VV〉〈VV| + ξ |HH〉〈VV| + ξ^* |VV〉〈HH| ],
where ξ = ∫ d p d q Φ[1](p,q) Φ[2]^*(p,q).
The degree of mixture of polarization and spatial variables is determined by the purity (P) of the quantum state above, which writes P = 1/2 (1 + |ξ|^2). The concurrence of the polarization-entangled state, which writes C = |ξ|, quantifies the quality of the polarization-entangled state.
Figure 5 shows the concurrence of the quantum state for two different crystal lengths. If spatial walk-off effects are negligible, |ξ| = 1 and the spatial and polarization variables can be separated. Therefore, both the purity and the concurrence are equal to 1. This is the case shown in Fig. 5 for a crystal length of L = 0.5 mm. Notwithstanding, this is not generally the case, as demonstrated above.
Interestingly, the degree of spatial entanglement of the horizontally polarized photons is unchanged when traversing the second crystal, despite the fact that the down-converted photons show walk-off. Notwithstanding, the spatial correlations are modified due to the presence of walk-off. It is this effect which enhances the spatial distinguishing information and thus degrades the quality of polarization entanglement.
5. Conclusions
We have shown, theoretically and experimentally, that the presence of Poynting vector walk-off in SPDC configurations introduces azimuthal distinguishing information between paired photons emitted in different directions of propagation. The quantum correlations of the spatial two-photon state and, consequently, the degree of entanglement show azimuthal variations that are enhanced when using highly focused pump beams and broadband spatial filters. This breaking of the azimuthal symmetry of the down-conversion cone has important consequences when designing and implementing sources of paired photons with entangled properties.
References and links
1. G. Molina-Terriza, J. P. Torres, and L. Torner, “Twisted photons,” Nature Phys. 3, 305 – 310 (2007). [CrossRef]
2. A. Vaziri, J. Pan, T. Jennewein, G. Weihs, and A. Zeilinger, “Concentration of Higher Dimensional Entanglement: Qutrits of Photon Orbital Angular Momentum,” Phys. Rev. Lett. 91, 227902 (2003).
[CrossRef] [PubMed]
3. G. Molina-Terriza, A. Vaziri, R. Ursin, and A. Zeilinger, “ Experimental Quantum Coin Tossing,” Phys. Rev. Lett. 94, 040501 (2005). [CrossRef] [PubMed]
4. J. T. Barreiro, N. K. Langford, N. A. Peters, and P. G. Kwiat , “Generation of Hyperentangled Photon Pairs,” Phys. Rev. Lett. 95, 260501 (2005). [CrossRef]
5. J. P. Torres, A. Alexandrescu, and L. Torner, “Quantum spiral bandwidth of entangled two-photon states,” Phys. Rev. A 68, 050301(R) (2003). [CrossRef]
6. C. K. Law and J. H. Eberly, “Analysis and Interpretation of High Transverse Entanglement in Optical Parametric Down Conversion,” Phys. Rev. Lett. 92, 127903 (2004). [CrossRef] [PubMed]
7. H. H. Arnaut and G. A. Barbosa, “Orbital and Intrinsic Angular Momentum of Single Photons and Entangled Pairs of Photons Generated by Parametric Down-Conversion,” Phys. Rev. Lett. 85, 286 (2000).
[CrossRef] [PubMed]
8. A. Mair, A. Vaziri, G. Weihs, and A. Zeilinger, “Entanglement of the orbital angular momentum states of photons,” Nature 412, 313–316 (2001). [CrossRef] [PubMed]
9. P. S. K. Lee, M. P. van Exter, and J. P. Woerdman, “How focused pumping affects type-II spontaneous parametric down-conversion,” Phys. Rev. A 72, 033803 (2005). [CrossRef]
10. P. G. Kwiat, E. Waks, A. G. White, I. Appelbaum, and P. H. Eberhard, “Ultrabright source of polarization-entangled photons,” Phys. Rev. A 60, R773 (1999). [CrossRef]
11. J. Altepeter, E. Jeffrey, and P. Kwiat, “Phase-compensated ultra-bright source of entangled photons,” Opt. Express 13, 8951–8959 (2005). [CrossRef] [PubMed]
12. J. P. Torres, G. Molina-Terriza, and L. Torner, “The spatial shape of entangled photon states generated in non-collinear, walking parametric downconversion,” J. Opt. B: Quantum Semiclass. Opt. 7,
235–239 (2005). [CrossRef]
13. A. Migdall, “Polarization directions of noncollinear phase-matched optical parametric downconversion output,” J. Opt. Soc. Am. B 14, 1093–1098 (1997). [CrossRef]
14. M. H. Rubin, “Transverse correlation in optical spontaneous parametric down-conversion,” Phys. Rev. A 54, 5349 (1996). [CrossRef] [PubMed]
15. G. Molina-Terriza, S. Minardi, Y. Deyanova, C. I. Osorio, M. Hendrych, and J. P. Torres, “Control of the shape of the spatial mode function of photons generated in noncollinear spontaneous
parametric down-conversion,” Phys. Rev. A 72, 065802 (2005). [CrossRef]
16. M. V. Fedorov, M. A. Efremov, P. A. Volkov, E. V. Moreva, S. S. Straupe, and S. P. Kulik, “Anisotropically and High Entanglement of Biphoton States Generated in Spontaneous Parametric
Down-Conversion,” Phys. Rev. Lett. 99, 063901 (2007). [CrossRef] [PubMed]
17. G. Molina-Terriza, J. P. Torres, and L. Torner, “Management of the Angular Momentum of Light: Preparation of Photons in Multidimensional Vector States of Angular Momentum,” Phys. Rev. Lett. 88,
013601 (2002). [CrossRef] [PubMed]
18. M. P. van Exter, A. Aiello, S. S. R. Oemrawsingh, G. Nienhuis, and J. P. Woerdman, “Effect of spatial filtering on the Schmidt decomposition of entangled photons,” Phys. Rev. A 74, 012309 (2006).
OCIS Codes
(190.4410) Nonlinear optics : Nonlinear optics, parametric processes
(270.0270) Quantum optics : Quantum optics
(270.1670) Quantum optics : Coherent optical effects
ToC Category:
Nonlinear Optics
Original Manuscript: September 17, 2007
Revised Manuscript: October 12, 2007
Manuscript Accepted: October 12, 2007
Published: October 22, 2007
Clara I. Osorio, Gabriel Molina-Terriza, Blanca G. Font, and Juan P. Torres, "Azimuthal distinguishability of entangled photons generated in spontaneous parametric down-conversion," Opt. Express 15,
14636-14643 (2007)
Multimedia Files
Media 1: MPEG (928 KB); viewable with QuickTime
Fundamental theorem of calculus (idea)
The Fundamental Theorem is "fundamental" because it links the two distinct branches of calculus, derivatives and integrals. It is split into two parts.
• Part I says that you can find the definite integral or (sometimes) area under a curve by evaluating the antiderivative of the function at both edges, and finding the difference.
Before calculus, areas under curves could be evaluated only through approximation (what we would call Riemann Sums today). Using the FTC on area problems usually results in a huge improvement in
speed (and accuracy, because you get an exact answer).
• Part II says that, to take the derivative with respect to x of an integral where the end limit of integration is a function of x, you "take it-stick it-D it". (See below.)
Here's what I mean by "take it-stick it-D it". Look at this example, first:
      /\ u(x)
  d   |
  --  |  f(t) dt
  dx  |
      \/ a
Taking-sticking-D-ing is a phrase that was "patented" by my high school math teacher, Mr. Noeth. In this example, t is the variable of integration. To T-S-D, you take that end limit ("take it"),
replace the variable of integration with it ("stick it"), and take the derivative WRT x of it ("D it"). So you have:
          du
f(u(x)) * --
          dx
Note that you can use any variables, not just x and t.
Drafting - Geometry
Circumference A quadrilateral having two parallel sides
Acute Angle A line segment that joins two points on a circle
Rectangle An eight-sided polygon
Quadrant The longest side of a triangle
Scalene A parallelogram with unequal adjacent sides
Right Triangle A triangle with all sides and angles congruent
Square Angle less than 90 degrees
Right Angle Straight line passing through center of circle
Isosceles A quadrilateral with four equal sides but not four equal angles
Concentric A prism with a rectangular base
Chord Angle of exactly 90 degrees
Pentagon A prism with a triangular base
Diameter A triangle with two sides and angles congruent
Trapezoid A triangle with one angle measuring 90 degrees
Hypotenuse A quadrilateral with four equal sides and four equal angles
Prism A six-sided polygon
Equilateral A solid figure whose bases or ends have the same size and shape and are parallel to one another, and each of whose sides is a parallelogram
Hexagon Two circles that share the same center point
Distance Across Flats A quadrilateral with four right angles
Octagon One fourth of the circumference of a circle
Rhombus The distance around the outside of a circle
Distance Across Corners The highest point of something, the top, or the summit
Rhomboid Distance of opposite sides
Right Rectangular A triangle with no sides or angles congruent
Radius A five-sided polygon
Right Triangular Distance from one corner to another
Vertex The straight line distance from the side of a circle to the center.
Cryptography with Tamperable and Leaky Memory
Yael Tauman Kalai, Bhavana Kanukurthi, and Amit Sahai
Microsoft Research; Boston University; and University of California (UCLA)
Abstract. A large and growing body of research has sought to secure cryptographic systems against physical attacks. Motivated by a large variety of real-world physical attacks on memory, an important
line of work was initiated by Akavia, Goldwasser, and Vaikuntanathan [AGV09] where security is sought under the assumptions that: (1) all memory is leaky, and (2) leakage can be an arbitrarily chosen
(efficient) function of the memory.
However, physical attacks on memory are not limited to leakage through side-channels, but can also include active tampering through a variety of physical means, including heat and EM
radiation. Nevertheless, protection against the analogous model for tampering — where (1) all memory is tamperable, and (2) where the tampering can be an arbitrarily chosen (efficient) function
applied to the memory — has remained an elusive target, despite significant effort on tampering-related questions.
In this work, we tackle this question by considering a model where we assume that both of these pairs of statements are true — that all memory is both leaky and (arbitrarily) tamperable. Furthermore,
we assume that this leakage and tampering can happen repeatedly and continually (extending the model of [DHLW, BKKV10] in the context of leakage). We construct a signature scheme and an encryption
scheme that are provably secure against such attacks, assuming that memory can be updated in a randomized fashion between episodes of tampering and leakage. In both schemes we rely on the linear
assumption over bilinear groups.
We also separately consider a model where only continual and repeated tampering (but only bounded leakage) is allowed, and we are able to obtain positive results assuming only that “self-destruct” is
possible, without the need for memory updates.
Our results also improve previous results in the continual leakage regime without tampering [DHLW, BKKV10]. Whereas previous schemes secure against continual leakage (of arbitrary bounded functions
of the secret key) could tolerate only a 1/2-ε leakage rate between key updates under the linear assumption over bilinear groups, our schemes can tolerate a 1-ε leakage rate between key updates under
the same assumption. | {"url":"http://www.iacr.org/conferences/crypto2011/abstracts/367.htm","timestamp":"2014-04-21T02:02:53Z","content_type":null,"content_length":"3003","record_id":"<urn:uuid:2ff4f852-9c80-4593-bc53-28eb32aebefb>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00459-ip-10-147-4-33.ec2.internal.warc.gz"} |
Positive solutions for boundary value problem of nonlinear fractional differential equation.
(English) Zbl 1079.34048
Summary: We investigate the existence and multiplicity of positive solutions to the boundary value problem
$D_{0+}^{\alpha}u(t) + f\bigl(t, u(t)\bigr) = 0, \quad 0 < t < 1, \qquad u(0) = u(1) = 0,$
where $1 < \alpha \le 2$ is a real number, $D_{0+}^{\alpha}$ is the standard Riemann-Liouville differentiation, and $f : [0,1] \times [0,\infty) \to [0,\infty)$ is
continuous. By means of some fixed-point theorems in a cone, existence and multiplicity results for positive solutions are obtained. The proofs are based upon the reduction of the problem considered to
the equivalent Fredholm integral equation of the second kind.
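For reference (a standard definition, not stated in the review itself), the Riemann-Liouville fractional derivative used above is, for $n-1 < \alpha \le n$:

```latex
D_{0+}^{\alpha}u(t)
  = \frac{1}{\Gamma(n-\alpha)}\,\frac{d^{n}}{dt^{n}}
    \int_{0}^{t} \frac{u(s)}{(t-s)^{\alpha-n+1}}\,ds,
  \qquad n = \lceil \alpha \rceil,
```

so for $1 < \alpha \le 2$ one takes $n = 2$.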
34K05 General theory of functional-differential equations
34B18 Positive solutions of nonlinear boundary value problems for ODE
34B15 Nonlinear boundary value problems for ODE
26A33 Fractional derivatives and integrals (real functions) | {"url":"http://zbmath.org/?q=an:1079.34048","timestamp":"2014-04-17T00:55:46Z","content_type":null,"content_length":"22584","record_id":"<urn:uuid:644c59c6-6a39-4c0d-867b-63c28f686b66>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00400-ip-10-147-4-33.ec2.internal.warc.gz"} |
Instructor Class Description
Time Schedule:
Susan L Cassels
EPI 554
Seattle Campus
Introduction to Epidemic Modeling for Infectious Diseases
Covers the basic tools for building and analyzing mathematical models of infectious disease epidemics. Model types include deterministic and stochastic models, compartmental and individual-based
models. Laboratory provides hands-on model building experience in Excel, Stella, and R. Offered: A.
Class description
Course Description & Objectives: This course is designed to provide students with the basic tools for building and analyzing mathematical models of disease epidemics. Dynamical systems, such as those
that represent infectious disease transmission dynamics, are fundamentally different than traditional statistical models, and this course will provide insight into the fun, complex, and sometimes
unexpected world of modeling these systems. This course seeks to prepare public health graduate students to build and analyze the mathematical models of disease that they will encounter in the
scientific literature and use in their work as public health professionals. We will use Excel and Stella, a user-friendly modeling package, to run simple disease models. The hands-on lab
experience will demystify mathematical modeling and lead to a clear understanding of the various types, characteristics, and qualities of models.
Student learning goals
Describe the philosophy of model-building and the relationships between modeling and other forms of scientific inquiry
Identify research questions that can be addressed with epidemic modeling methods
Discuss the role of epidemic modeling in public health policy and resource allocation
Interpret and critique mathematical models published in the scientific literature
Demonstrate proficient use of Excel, and Stella for building simple disease models
General method of instruction
The course combines a number of approaches to achieve these aims. Course time will be a mixture of lecture, seminar-style discussion of readings, in which we dissect a series of modeling papers,
and time in the computer lab where you will recreate, manipulate, and build your own mathematical models.
Recommended preparation
This course will involve basic programming, but prior experience with programming is not required. Expertise in calculus is not required either! We will review the basic concepts in math that we use
throughout the course, including logarithms, differentiation and integration.
Class assignments and grading
Paper critiques: Throughout the course we will be discussing a wide range of published modeling papers. The two paper critiques will require you to build upon what we are learning in class by
choosing a paper from the modeling literature and writing a critique. Critiques need to entail a discussion of the scientific question at hand, a detailed description of the model used, and an
assessment of the model. For each paper, you may want to consider questions such as: What are the authors modeling? Is the model reasonable? What questions are being answered with the model, and what
questions were not answered? What are good features of the model? How could the model be improved?
Each critique should fit on one page (single spaced if needed).
Class Participation: In addition to actively participating in each class, students will lead one journal-club style discussion of a modeling paper.
Lab Assignments: Students will submit completed lab assignments. We will go over lab assignments as a group during the computer lab sessions. However, students are responsible for completing the
assignments and uploading the final models. Successful lab assignments are models that run correctly. Excellent labs are those with tidy programming as well. (Programming is considered an art by many.)
Final Project: The final project is a chance for you to gain practice building your own mathematical model as it relates to your own research interests. Students are encouraged to begin shaping their
ideas early and to work with me on this process.
Successful projects are those that are well-grounded in the existing literature on the topic of interest, employ modeling to expand on that literature in a way not easily accomplished through other
means, and are mindful of both the uses and limitations of modeling. They are also clearly written and engaging. Very successful papers will help you make great strides in your dissertation research,
and/or form the first steps towards a publishable paper.
Not all projects will be able to take the same form; some may entail a fully developed mathematical model that you both program and explore, while others may recreate and modify a published model in
order to answer a different question. We will discuss this in more detail as the quarter progresses, and determine together what the most fruitful approach is for each of your interests.
The project write-up should be roughly 15-20 pages, including text, figures, tables and perhaps snippets of code. The text should comprise roughly 10 pages of the total.
Your grade is broken down as follows:
Paper Critiques: 20% (10% x 2)
Class Participation: 15%
Lab Assignments: 25%
Final paper/project: 40%
The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without
notice. In most cases, the official course syllabus will be distributed on the first day of class. Course Website
Last Update by Susan L Cassels
Date: 09/25/2012 | {"url":"https://www.washington.edu/students/icd/S/epidem/554scassels.html","timestamp":"2014-04-18T13:28:06Z","content_type":null,"content_length":"8598","record_id":"<urn:uuid:c4670855-a247-4977-8b1f-a1775f20c322>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00331-ip-10-147-4-33.ec2.internal.warc.gz"} |
Miami Shores, FL Math Tutor
Find a Miami Shores, FL Math Tutor
...I know about directors and directors' style, actors, classics from the 1930s and beyond. Having taught Precalculus for many years, I have had great results with my students, one on one. Even
the ones having difficulties in the toughest schools and at the highest levels eventually did very well.
48 Subjects: including precalculus, chemistry, elementary (k-6th), physics
I studied Biology with a minor in mathematics at Purdue University. Currently I am an adjunct instructor at a college. Most recently, I taught Algebra and Geometry at the high school level.
18 Subjects: including geometry, study skills, biochemistry, cooking
...Here is the cancellation or no-show policy that you need to take into consideration if you decide to hire my services: If the cancellation is the same day of tutoring there will be a 30 min.
charge. If a student reschedules for a later day or time within the same week, there will be a 20 minute...
16 Subjects: including SAT math, algebra 1, algebra 2, chemistry
Hello everyone, my name is Becky. I have worked as a private tutor for 5 years in a variety of subjects. I am very patient and believe in teaching by example.
30 Subjects: including algebra 1, English, ESL/ESOL, ACT Math
...For my minor, I was required to take all levels of Spanish, Spanish literature, Spanish history, and Spanish dialects. I have been tutoring students since high school in various subjects and
continued in college. I have traveled around the world and met many people so I am able to adapt to all types of situations and people.
9 Subjects: including prealgebra, algebra 1, algebra 2, Spanish
Related Miami Shores, FL Tutors
Miami Shores, FL Accounting Tutors
Miami Shores, FL ACT Tutors
Miami Shores, FL Algebra Tutors
Miami Shores, FL Algebra 2 Tutors
Miami Shores, FL Calculus Tutors
Miami Shores, FL Geometry Tutors
Miami Shores, FL Math Tutors
Miami Shores, FL Prealgebra Tutors
Miami Shores, FL Precalculus Tutors
Miami Shores, FL SAT Tutors
Miami Shores, FL SAT Math Tutors
Miami Shores, FL Science Tutors
Miami Shores, FL Statistics Tutors
Miami Shores, FL Trigonometry Tutors
Nearby Cities With Math Tutor
Biscayne Park, FL Math Tutors
Doral, FL Math Tutors
El Portal, FL Math Tutors
Hialeah Math Tutors
Hialeah Gardens, FL Math Tutors
Hialeah Lakes, FL Math Tutors
Mia Shores, FL Math Tutors
Miami Beach Math Tutors
Miami Gardens, FL Math Tutors
Miami Lakes, FL Math Tutors
N Miami Beach, FL Math Tutors
North Bay Village, FL Math Tutors
North Miami Beach Math Tutors
North Miami, FL Math Tutors
Opa Locka Math Tutors | {"url":"http://www.purplemath.com/Miami_Shores_FL_Math_tutors.php","timestamp":"2014-04-20T03:55:49Z","content_type":null,"content_length":"23935","record_id":"<urn:uuid:618de558-68de-44e0-b705-ad350d6358d4>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00248-ip-10-147-4-33.ec2.internal.warc.gz"} |
Old Wives' Tales In Amateur Radio
Chapter I: "An Antenna Tuner Does Absolutely Nothing Except Make The Transmitter Happy."
by Cecil Moore, www.W5DXP.com, Rev. 1.5, Jan 9, 2014
Wikipedia's Take on Old Wives' Tales
Judging from the number of Old Wives' Tales that abound in amateur radio, there must be more old wives in amateur radio than can be found in The Call Book. The author hears this one at least twice a
week on some ham radio newsgroup. This article is an attempt to debunk that myth using the simplest of examples that hopefully everyone can understand. This will be the first in a series of "Old
Wives' Tales" articles that will be compiled into chapters on the author's web page.
Let's not quibble over whether a transmitter is capable of human-like feelings or not. What is meant by "making the transmitter happy" is the concept that an antenna tuner presents a resistive load
of 50 ohms to a transmitter designed to drive a 50 ohm load, usually of the solid-state variety. The author concedes the idea that an antenna tuner "makes the transmitter happy". The question
remains: Is "making the transmitter happy" all that an antenna tuner does or does the antenna tuner also have an effect at the antenna, i.e. does that 50 ohm Z0-match that "makes the transmitter
happy" also have a system-wide effect that makes the entire system, including the antenna, happy? (If a transmitter can be happy, why can't an antenna be happy?)
We are going to look at some simple examples. The source will be a voltage source, (V[S]), with an associated source impedance of the complex form (R[S] ± jX[S]). Any transmission line will be one
wavelength long and lossless (1WL T-Line). The load will represent an antenna feedpoint impedance of the complex form (R[L] ± jX[L]). We will represent such systems using one-line diagrams of the form:
(V[S])--(R[S] ± jX[S])----------(R[L] ± jX[L])
The Maximum Power Transfer Theorem
The maximum power transfer theorem was first used with DC circuits. Given a source and a load, the theorem says that: Maximum power transfer will occur if the source resistance is equal to the load
resistance. This is probably the origin of the myth that, for maximum power transfer to occur, an antenna must present a purely resistive, e.g. 50 ohm impedance, i.e. must be resonant.
When AC circuit theory was developed, it was apparent that the resulting reactive impedances would require the DC maximum power transfer theorem to be updated. That's when the conjugate matching
theorem came into existence. Given an AC circuit: Maximum power transfer will occur if the source impedance is equal to the conjugate of the load impedance. Note that the conjugate of 100+j100 ohms
is 100-j100 ohms and the conjugate of 50-j200 ohms is 50+j200 ohms. Both the above theorems apply to lumped-circuits.
When networks that are an appreciable percentage of a wavelength were introduced, it again became apparent that the maximum power transfer theorem needed to be updated since a transmission line with
reflections is capable of transforming the complex load impedance to an infinite number of other complex impedances and also to some purely resistive impedances. Let's take a look at how the maximum
power transfer theorem can be updated to handle distributed networks. We can do that by looking at one characteristic of the maximum power transfer theorem for an AC circuit represented by the
one-line diagram introduced above with point 'x' added.
(V[S])--(R[S] ± jX[S])-----x-----(R[L] ± jX[L])
The voltage source, (V[S]), just by itself is defined as having a zero impedance. So if we measure the impedance looking back from point 'x' toward the source, we will measure the source impedance,
(R[S] ± jX[S]). If we measure the load impedance looking toward the load from point 'x', we will measure the load impedance, (R[L] ± jX[L]). So another way of stating the maximum power transfer
theorem for an AC circuit is: From a point between the source and the load, if the impedance looking back toward the source is equal to the conjugate of the impedance looking toward the load, then
maximum transfer of power will occur. That is also the definition of a "conjugate match".
When the maximum power transfer theorem is applied to a lumped-circuit, it is assumed that the only losses in the circuit are losses in the source resistance and the load resistance. If we adopt that
same assumption for distributed networks, we can now take the liberty to state the maximum power transfer theorem for a typical amateur radio antenna system (assuming lossless transmission lines.)
(V[S])--(R[S] ± jX[S])-----Transmission-Line-----(R[L] ± jX[L])
A maximum transfer of power will occur in an antenna system when, at any point on the lossless transmission line, the impedance looking back toward the source is equal to the conjugate of the
impedance looking toward the load.
Once again, the above statement can be considered as a necessary and sufficient condition to define a conjugate match.
Numbered Step-By-Step Examples
For the remainder of this article, we will assume that V[S]=100v, R[S]=50 ohms, and X[S]=0 ohms, i.e. a standard voltage source and R[L]=50 ohms. Also remember that all transmission lines are one wavelength long and lossless.
(1)Source(100v)--(50 ohms)---------1WL T-Line------------(50 ohms)Load
So here we have a matched system with 50 watts delivered to the load which is the maximum transfer of power. What happens to the power delivered to the load if we mismatch the system by adding -j500
ohms of capacitive reactance to the load?
(2)Source(100v)--(50 ohms)---------1WL T-Line------------(50-j500 ohms)Load
Only 1.92 watts are delivered to the load for example (2). The current through the load resistor is 0.196 A and the voltage across the load resistor is 9.8 volts. What can we do to change those
conditions at the load? How about adding a loading coil with a reactance of +j500 ohms?
(3)Source(100v)--(50 ohms)---------1WL T-Line------------(+j500 ohms)--(50-j500 ohms)Load
So the loading coil reactance of +j500 ohms neutralizes the load reactance of -j500 ohms and, once again, as in (1) above the maximum power of 50 watts is delivered to the load.
Question: Did the addition of a loading coil have an effect at the load (antenna)?
What if, instead of at the load, we install the loading coil at the source?
(4)Source(100v)--(50 ohms)--(+j500 ohms)---------1WL T-Line------------(50-j500 ohms)Load
Question: In a lossless system, does the loading coil installed at the source cause the same effects at the load (antenna) as it did when it was located at the load (antenna), i.e. do the same
conditions exist at the load in example (4) as in example (3)?
What if we put the "loading coil" in a box at the source and call it an "antenna tuner"?
Question: In a lossless system, does an antenna tuner have considerable effect at the antenna or does it "do absolutely nothing except make the transmitter happy"?
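The delivered-power figures quoted in examples (1) through (4) can be checked with a few lines of Python. The only modeling step is the one the article already uses: a lossless one-wavelength line repeats the load impedance at its input, so the line drops out and each example reduces to a simple series AC circuit (the reduction and the script below are a sketch, not part of the original article):

```python
# Power delivered to the load resistance R_L in the series circuit
# V_S -- R_S -- (optional series coil +jX_coil) -- Z_load.
# A lossless 1WL line repeats Z_load at its input, so it drops out.

def load_power(v_s, r_s, z_load, x_coil=0.0):
    i = v_s / (r_s + 1j * x_coil + z_load)  # RMS loop current, amperes
    return abs(i) ** 2 * z_load.real        # watts dissipated in R_L

print(load_power(100, 50, 50 + 0j))            # example (1): 50.0 W
print(load_power(100, 50, 50 - 500j))          # example (2): ~1.92 W
print(load_power(100, 50, 50 - 500j, 500.0))   # examples (3)/(4): 50.0 W
```

With the +j500 ohm coil anywhere in the series loop, whether drawn at the load end or the source end, the reactances cancel and the full 50 watts again reaches the load, which is exactly the point of examples (3) and (4).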
Note that, no matter where the loading coil is located above, in a lossless system, it has the same effect of establishing a system-wide conjugate match thus ensuring maximum power transfer. If we
calculate the impedance looking back down the transmission line from the load, the result will be the conjugate of the feedpoint impedance in both examples (3) and (4) above. The effect on the
antenna (load) is the same whether the loading coil is located at the antenna or at the source (shack).
Now we are ready for the next logical step from lossless systems to systems with losses where we know that lossless transmission lines and system-wide conjugate matches are impossible.
Given: In a lossless system, an antenna tuner has the same effect whether it is located at the load or source.
Question: In a system with losses, does an antenna tuner located at the source have SOME effect at the load - not the same effect as in a lossless system, but ANY effect at all at the load? Does
adding real-world losses to a lossless system cause the antenna tuner in the shack to have ZERO effect at the antenna feedpoint?
Hopefully by now, everyone recognizes those questions as rhetorical.
It seems to this author that since a tuner has the SAME effect no matter where it is located in a lossless system, and since a tuner must therefore necessarily have at least SOME effect no matter
where it is located in a low-loss system, that an old wives' tale has bit the dust. In a low-loss antenna system, an antenna tuner in the shack indeed does have an effect on the power radiated by the
antenna at the antenna.
One can see for oneself the effect that an antenna tuner has at the antenna. After adjusting the antenna tuner for a match, disconnect the transmitter and install a dummy load on the tuner input.
(That step can be eliminated if the receiver in the transceiver has a 50 ohm input impedance.) At the antenna feedpoint, disconnect the feedline and connect it to an antenna analyzer. In a low-loss
system, the impedance will be somewhat close to the conjugate of the antenna feedpoint impedance. Let's call it a "near-conjugate match" if the impedances are within 10% of an ideal conjugate match.
Have someone twist the knobs on the tuner and observe the impedance change at the antenna. Then who can truthfully say that an antenna tuner has "absolutely no effect at the antenna"?
A grid dip meter can be used to verify that when the tuner is adjusted to "make the transmitter happy", i.e. adjusted for a 50 ohm Z0-match at the tuner input, the result for a low-loss system is
that the entire antenna system is resonant, according to The IEEE Dictionary definition of "resonant". When the tuner has been properly adjusted, disconnect the transmitter and attach a 50 ohm dummy
load to the tuner input. Then take the grid dip meter and couple it to the antenna, e.g. with a loop of wire in one of the elements. The grid dip meter will dip at a resonant frequency that is
somewhat close to the frequency to which the transmitter was tuned when the antenna tuner was adjusted. It doesn't matter where the grid dip meter is coupled to the antenna system - it will indicate
resonance close to that single frequency from one end of the antenna system to the other.
For the sake of simplicity in the calculations, transmission line losses, which make the calculations much more complex, have not been taken into account. Failure to include losses in the
calculations does not negate the concepts presented in this article. Note that the conjugate matching theorem applies only to lossless networks and can only come close (assume within 10%) for
low-loss networks. Assume that if the impedance is not within 10% of a conjugate match, that the system doesn't qualify as a "low-loss" system.
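One way to see how close a low-loss line comes to the lossless ideal is the standard input-impedance formula for a lossless line, Zin = Z0 (ZL + j Z0 tan βl) / (Z0 + j ZL tan βl). The sketch below applies it to the 600 ohm example discussed next; the velocity factor of 0.92 is an assumption (typical for open-wire line, and it makes 63.75 feet come out near one electrical wavelength at 14.2 MHz), so treat the numbers as illustrative rather than as the calculator's lossy result:

```python
import math

# Input impedance of a LOSSLESS line (the real calculator includes loss):
# Zin = Z0 * (ZL + j*Z0*tan(bl)) / (Z0 + j*ZL*tan(bl))
# VF = 0.92 is an ASSUMED velocity factor for open-wire line.

def zin_lossless(z0, z_load, length_ft, freq_mhz, vf=0.92):
    wavelength_ft = (983.571 / freq_mhz) * vf  # free-space wavelength in ft is 983.571/f_MHz
    bl = 2 * math.pi * length_ft / wavelength_ft
    t = math.tan(bl)
    return z0 * (z_load + 1j * z0 * t) / (z0 + 1j * z_load * t)

zin = zin_lossless(600, 102 - 480j, 63.75, 14.2)
print(zin)  # close to simply repeating the 102 - j480 ohm load
```

In the lossless limit a line that is almost exactly one wavelength long nearly repeats the load impedance; the small shift of the real calculator's answer away from 102 - j480 ohms is the effect of line loss.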
Here is an example of a near-conjugately matched system using the transmission line loss calculator located at:
Assume an antenna with a feedpoint impedance of 102-j480 ohms used on 14.2 MHz. Feed the antenna with 63.75 feet of "Generic 600 ohm Open" feedline from the menu of transmission lines. The impedance
looking into the transmission line at the shack end will be 105.9-j471.6 ohms according to the calculator. If we want to resonate the system at the antenna feedpoint, we install a +j480 ohm loading
coil. If we want to resonate the system at the shack end, we install a +j471.6 ohm loading coil. That's a ~2% difference in impedances. In fact, the same loading coil will work pretty well at both
ends. It's certainly not a perfect result but this author doesn't require or expect perfection from the real world. | {"url":"http://www.w5dxp.com/OWT1.htm","timestamp":"2014-04-21T08:23:44Z","content_type":null,"content_length":"12842","record_id":"<urn:uuid:4c9c5419-25e9-4a8b-9938-018859b551c7>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00582-ip-10-147-4-33.ec2.internal.warc.gz"} |
Rational Functions help?
December 11th 2008, 11:13 PM #1
Dec 2008
Rational Functions help?
a) Graph:
f(x) = 2x+3
, labelling all zeroes, intercepts, asymptotes + give its domain and range
b) Now do the same for its inverse
c) Graph g(x)=
x^3 - 8x
, labelling all zeroes, intercepts, asymptotes.
Does it have an inverse? Why or why not?
Thanks/Rep for whoever helps!
Follow Math Help Forum on Facebook and Google+ | {"url":"http://mathhelpforum.com/pre-calculus/64635-rational-functions-help.html","timestamp":"2014-04-18T13:18:28Z","content_type":null,"content_length":"28824","record_id":"<urn:uuid:7a9949f3-1a41-4f90-9742-75cd301b8f74>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
ACT Math Tutors
Crowley, TX 76036
Highly Recommended Mathematics Tutor
...I would like to put you, the student, at ease when teaching, so that you can more easily understand the subject without added stress. I have a special technique that I stumbled upon when in High
School, for learning abstract mathematical concepts. All difficulties...
Offering 10+ subjects including ACT Math | {"url":"http://www.wyzant.com/Benbrook_TX_ACT_Math_tutors.aspx","timestamp":"2014-04-16T19:42:13Z","content_type":null,"content_length":"59697","record_id":"<urn:uuid:fb79e78c-6ae9-441c-b13e-1db79ca8e755>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00610-ip-10-147-4-33.ec2.internal.warc.gz"} |
An O(|V| |E|) Algorithm for Finding Maximum Matchings in General Graphs
- In Proc. of the Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, 1997
"... ..."
- in 11th ACM Symposium on Parallel Architectures and Algorithms, 1999
"... In this paper we study the problem of scheduling real-time requests in distributed data servers. We assume the time to be divided into time steps of equal length called rounds. During every
round a set of requests arrives at the system, and every resource is able to fulfill one request per round. Ev ..."
Cited by 2 (0 self)
In this paper we study the problem of scheduling real-time requests in distributed data servers. We assume the time to be divided into time steps of equal length called rounds. During every round a
set of requests arrives at the system, and every resource is able to fulfill one request per round. Every request specifies two (distinct) resources and requires to get access to one of them.
Furthermore, every request has a deadline of d, i.e. a request that arrives in round t has to be fulfilled during round t + d − 1 at the latest. The number of requests which arrive during some round and
the two alternative resources of every request are selected by an adversary. The goal is to maximize the number of requests that are fulfilled before their deadlines expire. We examine the scheduling
problem in an online setting, i.e. new requests continuously arrive at the system, and we have to determine online an assignment of the requests to the resources in such a way that every resource has
to fulfil...
, 1994
"... Constructing a maximum matching is an important problem in graph theory with applications to problems such as job assignment and task scheduling. Many efficient sequential and parallel
algorithms exist for solving the problem. However, no distributed algorithms are known. In this paper, we present a ..."
Cited by 1 (0 self)
Constructing a maximum matching is an important problem in graph theory with applications to problems such as job assignment and task scheduling. Many efficient sequential and parallel algorithms
exist for solving the problem. However, no distributed algorithms are known. In this paper, we present a distributed, self-stabilizing algorithm for finding a maximum matching in trees. Since our
algorithm is self-stabilizing, it does not require any initialization and is tolerant to transient faults. The algorithm can also dynamically adapt to arbitrary changes in the topology of the tree.
Keywords: Distributed algorithms, Matching, Self-stabilization, Trees.
1 Introduction
Let G = (V, E) be an undirected graph. Two edges in E are said to be adjacent if they share a common end-point. A
matching in G is a set of edges M ⊆ E such that no two edges in M are adjacent. A matching M is called a maximum cardinality matching, or maximum matching in short, if M has largest cardinality among
all pos... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2162693","timestamp":"2014-04-19T02:24:45Z","content_type":null,"content_length":"19694","record_id":"<urn:uuid:be71dd12-4fc5-42a0-9fda-e6366601d1ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
Solve the differential equation
September 13th 2008, 03:15 AM
Solve the differential equation
A calf that weighs w_o pounds at birth gains weight at the rate shown below where w is weight in pounds and t is time in years. (w_o is w sub o)
dw/dt = 1200 - w
September 13th 2008, 05:08 AM
mr fantastic
$\Rightarrow \frac{dt}{dw} = \frac{1}{1200 - w}$, subject to the boundary condition that at t = 0, $w = w_0$.
Integrate with respect to w, use the boundary condition to get the arbitrary constant and, I suppose, make w the subject. | {"url":"http://mathhelpforum.com/differential-equations/48863-solve-differential-equation-print.html","timestamp":"2014-04-18T13:32:48Z","content_type":null,"content_length":"4689","record_id":"<urn:uuid:92f5518b-e3d0-446b-ba87-de9fcf99403f>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00514-ip-10-147-4-33.ec2.internal.warc.gz"} |
Procs per minute
In World of Warcraft
PPM vs. PPH
In World of Warcraft, PPH is the percent chance of a proc occurring each hit. PPH benefits greatly from faster hits.
PPM scales to PPH according to weapon speed. Here is the formula:
PPH = (Weapon speed) / (60 / PPM)
which is the same as:
PPH = Weapon speed * PPM / 60
This gives the percentage chance that the proc will be triggered each hit. Note that PPM is an average, not a guarantee.
PPM & Haste
The speed used for calculating the PPH (proc-per-hit) chance is the base weapon speed, not the character's haste-affected hit speed. This means the probability of a proc on any given hit is not lowered
or increased by haste. What haste does is increase the total number of melee hits per unit of time and thus the total number of procs.
Instant attacks
The PPM rate of a given effect is based on auto-attacks ("white damage" attacks) made with that weapon. However, an effect on the main-hand weapon can proc more often than its PPM rating, as instant
attacks (Sinister Strike, Overpower, etc.) are all assumed to be made with the main hand, and therefore can trigger the proc.
In addition, because instant attacks use the calculated PPH rate based on the weapon speed, yet are not themselves restricted by the weapon speed, a slower weapon will result in more total procs.
For example, a Warrior has Crusader (PPM of 1) on a 2.0-speed and a 2.5-speed weapon. These calculations assume ample Rage and no misses.
• Weapon 1 (speed 2.0): 30 white hits per minute, so PPH is 1/30.
• Weapon 2 (speed 2.5): 24 white hits per minute, so PPH is 1/24.
Each minute, the Warrior also does 10 Bloodthirsts, 6 Whirlwinds, and 4 Overpowers, for a total of 20 instant attacks.
• Weapon 1 (PPH 1/30): 50 hits (20 instant + 30 white), so actual PPM is 50/30 = 1.67.
• Weapon 2 (PPH 1/24): 44 hits (20 instant + 24 white), so actual PPM is 44/24 = 1.83.
Therefore, the slower the weapon you wield in the main hand, the more often any PPM-rated ability on it will proc, assuming you use any special attacks that can trigger the proc.
All these calculations are made with no Flurry included.
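As a rough sketch, the PPH formula and the Crusader example above can be reproduced in a few lines (the function names here are illustrative, not anything from the game):

```python
# PPH = weapon_speed * PPM / 60, per the formula above.
def pph(weapon_speed, ppm):
    return weapon_speed * ppm / 60

# Effective procs per minute once instant attacks are counted:
# white (auto-attack) hits per minute plus instant attacks, each
# rolling the same per-hit chance.
def effective_ppm(weapon_speed, ppm, instant_attacks_per_min):
    white_hits = 60 / weapon_speed
    total_hits = white_hits + instant_attacks_per_min
    return total_hits * pph(weapon_speed, ppm)

# The Crusader example: PPM 1, 20 instant attacks per minute.
print(round(effective_ppm(2.0, 1, 20), 2))  # 1.67
print(round(effective_ppm(2.5, 1, 20), 2))  # 1.83
```

The slower 2.5-speed weapon comes out ahead, matching the calculation above.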
PPM Rates:
• The subject of this article was removed from World of Warcraft after Mists of Pandaria.
The in-game information in this article is kept purely for historical purposes.
[Formula: Enchant Weapon - Lifestealing]: 6
See also
Munro and Venkatesh Raman. Succinct representation of balanced parentheses, static trees and planar graphs, 2000
Cited by 46 (18 self)
We consider the problem of labeling the nodes of an n-node graph G with short labels in such a way that the distance between any two nodes u, v of G can be approximated efficiently (in constant time) by
merely inspecting the labels of u and v, without using any other information. We develop such constant approximate distance labeling schemes for the classes of trees, bounded treewidth graphs, planar
graphs, k-chordal graphs, and graphs with a dominating pair (including for instance interval, permutation, and AT-free graphs). We also show lower bounds, and prove that most of our schemes are
optimal in length of labels generated and in the quality of the approximation, leaving some open problems.
- In: Proc. SODA, 2007
Cited by 42 (11 self)
We define and design succinct indexes for several abstract data types (ADTs). The concept is to design auxiliary data structures that ideally occupy asymptotically less space than the
information-theoretic lower bound on the space required to encode the given data, and support an extended set of operations using the basic operators defined in the ADT. The main advantage of
succinct indexes as opposed to succinct (integrated data/index) encodings is that we make assumptions only on the ADT through which the main data is accessed, rather than the way in which the data is
encoded. This allows more freedom in the encoding of the main data. In this paper, we present succinct indexes for various data types, namely strings, binary relations and multi-labeled trees. Given
the support for the interface of the ADTs of these data types, we can support various useful operations efficiently by constructing succinct indexes for them. When the operators in the ADTs are
supported in constant time, our results are comparable to previous results, while allowing more flexibility in the encoding of the given data.
Using our techniques, we design a succinct encoding that represents a string S of length n over an alphabet of size σ using nH_k(S) + lg σ · o(n) + O(n lg σ / lg lg lg σ) bits to support access/rank/select operations in o((lg lg σ)^(1+ɛ)) time, for any fixed constant ɛ > 0. We also design a succinct text index using nH_0(S) + O(n lg σ / lg lg σ) bits that ...
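The access/rank/select operations these abstracts keep referring to are easy to state concretely. Here is a deliberately naive, linear-time illustration (the class and method names are mine; real succinct structures answer these queries in constant time using o(n) extra bits):

```python
class BitVector:
    """Naive bit vector supporting access/rank/select (illustration only)."""

    def __init__(self, bits):
        self.bits = list(bits)

    def access(self, i):
        """Bit at position i (0-based)."""
        return self.bits[i]

    def rank1(self, i):
        """Number of 1-bits in positions [0, i)."""
        return sum(self.bits[:i])

    def select1(self, k):
        """Position of the k-th 1-bit (1-based), or -1 if there is none."""
        count = 0
        for pos, b in enumerate(self.bits):
            count += b
            if b and count == k:
                return pos
        return -1

bv = BitVector([1, 0, 1, 1, 0, 1])
print(bv.rank1(4))    # 3 ones before position 4
print(bv.select1(3))  # the 3rd one sits at position 3
```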
, 2003
Cited by 12 (1 self)
In this article we define a canonical decomposition of rooted outerplanar maps into a spanning tree and a list of edges. This decomposition, constructible in linear time, implies the existence of a
bijection between rooted outerplanar maps with n nodes and bicolored rooted ordered trees with n nodes where all the nodes of the last branch are colored white. As a consequence, for rooted
outerplanar maps of n nodes, we derive: an enumeration formula, and an asymptotic of 2^(3n − Θ(log n)); an optimal data structure of asymptotically 3n bits, built in O(n) time, supporting adjacency and
degree queries in worst-case constant time and neighbors query of a d-degree node in worst-case O(d) time...
Cited by 2 (1 self)
© Alexander Golynski 2007. I hereby declare that I am the sole author of this thesis. This is a true copy of the thesis, including any required final revisions, as accepted by my examiners. I
understand that my thesis may be made electronically available to the public. (Alexander Golynski) The main goal of this thesis is to investigate the complexity of a variety of problems related to
text indexing and text searching. We present new data structures that can be used as building blocks for full-text indices which occupy minute space (FM-indexes) and wavelet trees. These data
structures also can be used to represent labeled trees and posting lists. Labeled trees are applied in XML documents, and posting lists in search engines. The main emphasis of this thesis is on lower
bounds for time-space tradeoffs for the following problems: the rank/select problem, the problem of representing a string of balanced parentheses, the text retrieval problem, the problem of computing
a permutation and its inverse, and the problem of representing a binary relation. These results are divided into two groups: lower bounds in the cell probe model and lower bounds in the indexing model.
Cited by 1 (0 self)
Abstract. In this paper we present different solutions for the problem of indexing a dictionary of strings in compressed space. Given a pattern P, the index has to report all the strings in the
dictionary having edit distance at most one with P. Our first solution is able to solve queries in (almost optimal) O(|P | + occ) time where occ is the number of strings in the dictionary having edit
distance at most one with P. The space complexity of this solution is bounded in terms of the k-th order entropy of the indexed dictionary. Our second solution further improves this space complexity
at the cost of increasing the query time.
Merrillville Calculus Tutor
Find a Merrillville Calculus Tutor
...I have 6 years of experience teaching Geometry. I have 11 years of experience teaching Pre-Algebra. I have 4 years of experience teaching Pre-Calculus.
14 Subjects: including calculus, geometry, algebra 1, trigonometry
...I have taught a wide variety of courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus, advanced placement calculus, integrated chemistry/physics, and
physics. I also have experience teaching physics at the college level and have taught an SAT math preparation course. For the past three years I have served as the math department chair.
12 Subjects: including calculus, physics, geometry, algebra 1
I've taught Algebra 1, Algebra 2, Geometry, and Pre-Calculus at the high school level for 6 years. In addition, I've completed a BS in Electrical Engineering and I am quite knowledgeable of
advanced mathematical concepts (Linear Algebra, Calculus, Differential Equations). I create an individualized...
12 Subjects: including calculus, geometry, algebra 1, trigonometry
...For three semesters I taught C++ and Matlab to freshmen and sophomore mechanical engineering students. The course began with simple programming commands, progressed to logic and more
complicated problem solving, and culminated with object-oriented programming. During my master's degree I was a TA for the intro to computer science course.
17 Subjects: including calculus, physics, geometry, GRE
...I am a Senior at Purdue North Central and I plan to graduate in May with a Bachelor of Science in Biology and a minor in Chemistry. My future goals include going to graduate school to acquire a
Master's in Cancer Chemical Biology and research in cancer as well. I am very involved at Purdue University North Central and am a very well-rounded individual.
25 Subjects: including calculus, chemistry, physics, geometry
Angle Sum in a Triangle
Image Source: http://www.greenpest.com.au
Building Frames are “triangulated” to give them strength. Without a triangle structure the wood frame will not be rigid or strong.
In Australia, the Sydney Harbour Bridge is made of an interwoven triangle structure.
Image Source: http://www.orangehelicopters.com.au
Every part of the bridge is extensively triangulated to create rigidity and strength.
Image Source: http://www.dinkumaussies.com
Builders of Structural Frames and Steel Bridges need to have an intimate knowledge of angles and triangles.
In this lesson we look at how the three angles in any triangle always create a total of 180 degrees.
Sum of the Angles in a Triangle
The three angles in any triangle always create a total of 180 degrees.
Two mathematical proofs for angles adding up to 180 degrees in any triangle are shown in the following video.
The first proof involves hands-on cutting up of a paper triangle, and the second one is a far more technical geometric proof.
The following shows the 180 degrees total rule for several different triangles.
Image Copyright 2012 by Passy’s World
Videos About Triangle Angles
This first video shows several different examples of how to find missing Angles in Triangles.
This next video shows a more complex problem involving Ratios, and how to use an Algebra Equation to solve for the unknown angles.
Here is another video that involves using Algebra to find unknown angles.
Finding an Unknown Angle in a Triangle
Because we know that the three angles add up to 180 degrees, we can easily work out the value for a missing angle.
When a question gives us the values for two interior angles of a triangle, the third missing angle is always 180 minus the two angles that we know.
Missing Angle = 180 minus the other two given angles
The following examples show how to calculate the missing angle for several different triangles, using algebra equations.
Image Copyright 2012 by Passy’s World
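The rule above also translates directly into code. A minimal sketch (the function name is my own):

```python
def missing_angle(a, b):
    """Third angle of a triangle whose other two angles are a and b degrees."""
    c = 180 - a - b
    if c <= 0:
        raise ValueError("angles do not form a valid triangle")
    return c

print(missing_angle(60, 70))  # 50
print(missing_angle(90, 45))  # 45
```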
Math Warehouse Triangle Interactive
The middle of the following web page has a drag-around interactive which shows the angles always adding to 180, no matter what shape we make the triangle.
(Much quicker than cutting out lots of random triangles from paper and measuring them).
Note that it is best to set the units button to “Integer” whole numbers when using this demonstration tool.
Click the following link to use this interactive
Finding Angles Worksheet
Click the following link to get a PDF worksheet on finding Triangle Angles.
Finding Angles Game
“Tank Attack” has some quite challenging questions on Supplementary Angles, as well as Triangles.
Click the following link to play this game.
Solving Angles Game
This game about “Itzi” spider involves solving several levels of Angles questions, where later levels include triangles.
Click the following link to play this challenging game.
Related Items
Classifying Triangles
Interactives at Mathwarehouse
Jobs that use Geometry
If you enjoyed this post, why not get a free subscription to our website.
You can then receive notifications of new pages directly to your email address.
Go to the subscribe area on the right hand sidebar, fill in your email address and then click the “Subscribe” button.
To find out exactly how free subscription works, click the following link:
If you would like to submit an idea for an article, or be a guest writer on our blog, then please email us at the hotmail address shown in the right hand side bar of this page.
Feel free to link to any of our Lessons, share them on social networking sites, or use them on Learning Management Systems in Schools.
Doing Well in Calculus
Written by: D. A. Kouba
• Develop an effective and time-efficient homework/study strategy for, not only your calculus class, but other classes as well. This will help you become a more confident, successful, and
well-rounded student. It will lead to a healthier balance between work time and leisure time.
• Spend at least two to four hours on each homework assignment. This affords you extra time to work on challenging homework problems and helps you organize your thoughts, questions, and ideas. The
more time you spend on homework, the more likely you are to articulate clear, concise questions to your classmates and teachers. The more time you spend on homework, the less time you will spend
on frantic, last-minute preparation for exams.
• Definitions, formulas, and theorems that are introduced in class or needed to complete homework assignments should be memorized immediately. Postponing this until it's needed for the exam will
impede your work speed on homework assignments and interfere with clearer and deeper understanding of calculus.
• Spend time working on calculus every day. Doing some calculus every day makes you more familiar with concepts, definitions, and theorems. This familiarity will make calculus get easier and
easier one day at a time.
• Find at least one or two other students from your calculus class with whom you can regularly do homework and prepare for exams. Your classmates are perhaps the least used and arguably your best
resource. An efficient and effective study group will streamline homework and study time, reduce the need for attendance at office hours, and greatly improve your written and spoken
communication. The best time to use your classmates as study/homework partners is after you have made an honest effort on your own to solve the problems using your own wits, knowledge, and
experience. When you encounter an unsolvable problem, don't give up too soon on it. Being stumped is an opportunity for mathematical growth and insight, even if you never solve the problem on
your own. If you seek help prematurely, you will never know if you could have solved a tough problem without outside assistance.
• Begin preparing/outlining for exams at least five class days before the exam. Outlining the topics, definitions, theorems, equations, etc. that you need to know for the exam will help you focus
on those areas where you are least prepared. Preparing early for the exam will build your self-confidence and reduce anxiety on the day of the exam. It's also an insurance policy against time
lost to illness, unexpected family visits, and last-minute assignments in other classes. Generally speaking, pulling all-nighters and doing last-minute cramming for exams is a recipe for eventual
academic disaster.
• Prepare for exams by working on new problems. Good sources for these problems are unassigned problems from your textbook, review exercises and practice exams at the end of each chapter, old hour
exams, or old final exams. Studying exclusively from those problems which you have already been assigned and worked on may not be effective exam preparation. Problems for each topic are generally
in the same section of the book, so knowing how to do a problem because you know what section of the book it is in could give you a false sense of security. Working on new randomly mixed problems
more closely simulates an exam situation, and requires that you both categorize the problem and then solve it.
• Use all resources of assistance and information which are available to you. These include classnotes, homework solutions, office hours with your professor or teaching assistants, and problem
sessions with your classmates. Do not rely exclusively on just one or two of these resources. Using all of them will help you develop a broader, more natural base of knowledge and understanding.
• Expect your exams to be challenging. If they are challenging, you will be prepared. If they are not challenging, you can expect to have an easy time getting a very high score!
Knowledge is a means to personal empowerment. Attaining knowledge can be a limitless source of pleasure and satisfaction.
Please e-mail your comments, questions, or suggestions to Duane Kouba at kouba@math.ucdavis.edu.
Code Calculations
Industrial Calculations: Lighting Loads
This is the first in a multi-part series that will cover industrial calculations for service and feeder conductors in an industrial facility. You must size each service and feeder so it can carry a
load current that is not less than the sum of the loads of all branch-circuits it supplies in the electrical system. To do this successfully, you must work methodically. Your first step is to lay out the lighting loads.
Art. 220 is the basis for this task. Unfortunately, Table 220-3(a) doesn't list industrial occupancies. Thus, we must calculate industrial lighting loads without the use of Table 220-3(a).
Fortunately, Sec. 220-3(b) lists specific types of "other loads."
What if you have an unlisted VA rating? Let's take a look at Sec. 215-2(a) and 230-42(a)(1). What these basically say is that you must size the feeder-circuit conductor to have an allowable ampacity
equal to or greater than the noncontinuous load plus 125% of the continuous load.
Example: Suppose you have 160 lighting ballasts operating at 120V and rated at 1.5A each. You want to know the load in VA if the ballasts run 12 hr a day.
Step 1: Compute load per Sec. 220-4(b).
160 x 1.5A = 240A
Step 2: Compute continuous load per Sec. 215-2(a) and 230-42(a)(1).
240A x 125% = 300A
Step 3: Compute VA.
300A x 120V = 36,000VA.
Solution: The lighting load for the unlisted occupancy is 36,000VA.
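The three steps above are mechanical enough to capture in a short helper (a sketch only; the function name and defaults are mine, and real sizing must of course follow the Code itself):

```python
def lighting_va(count, amps_each, volts, continuous=True):
    """Lighting load in VA: connected amps, 125% if continuous, times volts."""
    total_amps = count * amps_each      # Step 1: connected load in amps
    if continuous:
        total_amps *= 1.25              # Step 2: 125% for continuous operation
    return total_amps * volts           # Step 3: convert to volt-amperes

# The ballast example: 160 ballasts at 1.5A on 120V, run continuously.
print(lighting_va(160, 1.5, 120))  # 36000.0
```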
Show window lighting load. When you think of lighting in an industrial setting, you probably think of 277V fluorescent or metal-halide lamps. Other types of loads exist in that environment, too. Most
manufacturing plants, for example, have show windows. In the lobby area, you may see displays of products or company history. In the plant itself, you'll find show windows that contain ISO-9000
procedures, production figures, job postings, Pareto charts of production problems, overtime schedules, and other information production management people want visible from the factory floor. Let's
take a look at how to calculate these types of loads.
Again, we can refer to Sec. 215-2(a) and 230-42(a)(1). We have two methods available. The first method is to multiply the linear feet of the show window by 200VA per foot. You compute this load at
100% for noncontinuous operation and 125% for continuous operation.
Example: What is the lighting load in VA for a 40-ft show window used at noncontinuous or continuous operation?
Step 1: Compute noncontinuous load per Sec. 220-12(a), 215-2(a), and 230-42(a)(1).
40 ft x 200VA x 100% = 8000VA.
Step 2: Compute continuous load per Sec. 220-12(a), 215-2(a), and 230-42(a)(1).
40 ft x 200VA x 125% = 10,000VA
Solution: The noncontinuous load is 8000VA, and the continuous load is 10,000 VA.
If you know the individual loads, you can use the second method, but you can use the results of that method only if the calculated load is higher than the results of the first method. With this
method, you size the show window load by multiplying the VA rating of each fixture by 125%. If you don't know the VA, assume 180VA for each outlet and multiply that by 125%.
Track lighting load. You may also find some track lighting in industrial settings. To calculate this load, assume 150VA for every 2 ft of lighting track. Round any leftover up to 2 ft, regardless of
how little it is. You'll also need to multiply by 125% for continuous operation.
Example: What is the load in VA for 180 ft of lighting track used at noncontinuous or continuous operation?
Step 1: Compute noncontinuous load per Sec. 220-12(b), 215-2(a), and 230-42(a)(1).
180 ft/2 ft x 150VA x 100% = 13,500VA
Step 2: Compute continuous load per Sec. 220-12(b), 215-2(a), and 230-42(a)(1).
180 ft/2 ft x 150VA x 125% = 16,875VA
Solution: The noncontinuous load is 13,500VA, and the continuous load is 16,875VA.
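The 2-ft rounding rule is the only non-obvious step here, so a sketch of it may help (again with an invented function name; the 150VA-per-2-ft figure and the 125% continuous factor come from the text above):

```python
import math

def track_lighting_va(track_feet, continuous=True):
    """150VA per 2-ft section of track, leftovers rounded up to a full section."""
    sections = math.ceil(track_feet / 2)   # round any remainder up to 2 ft
    va = sections * 150
    return va * 1.25 if continuous else va

print(track_lighting_va(180, continuous=False))  # 13500
print(track_lighting_va(180))                    # 16875.0
print(track_lighting_va(181, continuous=False))  # 91 sections -> 13650
```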
Low-voltage lighting load. You may also run into some low-voltage lighting systems - those operating at 30V or less. To compute the load for these systems, multiply the full-load amps of the
isolation transformer by 100% for noncontinuous operation and 125% for continuous operation.
Example: What is the load in amperes for a low-voltage isolation transformer with an FLA of 150A used at noncontinuous or continuous operation?
Step 1: Compute noncontinuous load per Art. 411, Sec. 215-2(a), and Sec. 230-42(a)(1).
150A x 100% = 150A
Step 2: Compute continuous load per Art. 411, Sec. 215-2(a), and Sec. 230-42(a)(1).
150A x 125% = 187.5A
Solution: The noncontinuous load is 150A, and the continuous load is 187.5A.
Outside lighting load. To calculate outside lighting loads, multiply the VA rating of each lighting unit by 100% for noncontinuous operation and 125% for continuous operation.
Example: What is the lighting load for 110 noncontinuous operated lighting fixtures with each ballast having a rating of 175VA and 130 continuous operated lighting fixtures with a 175VA ballast in
each unit?
Step 1: Compute noncontinuous load per Sec. 220-4(b), 215-2(a), and 230-42(a)(1).
175VA x 110 x 100% = 19,250VA
Step 2: Compute continuous load per Sec. 220-4(b), 215-2(a), and 230-42(a)(1).
175VA x 130 x 125% = 28,438VA
Solution: The noncontinuous load is 19,250VA, and the continuous load is 28,438VA.
Outside sign lighting load. Most industrial facilities have lighted outside signs. You must size these loads at a minimum of 1200VA. Multiply this VA rating by 125% for signs operating for 3 hr or
more and 100% for those operating less than 3 hr.
Example: What is the lighting load in VA for a 1800VA sign that operates 10 hr continuously?
Step 1: Compute load per Sec. 600-5(b)(3), 215-2(a), and 230-42(a)(1).
1800VA x 125% = 2250VA
Solution: The total outside sign load is 2250VA.
Next month's article will focus on calculating receptacle loads and special loads.
Are post-hoc tests being developed for R?
Thomas Lumley thomas@biostat.washington.edu
Tue, 14 Jul 1998 17:33:58 -0700 (PDT)
Here is code for three p-value based post hoc tests: Bonferroni, Holm, and
Hochberg. Holm is better than Bonferroni under all circumstances. Hochberg
is more powerful than Holm, but under some slightly pathological
situations it can exceed the nominal Type I error rate.
I haven't looked at this code for a long time, but I think it works
correctly. Refeerences for the methods are in "Adjusted p-values for
simultaneous inference" by S. Paul Wright in Biometrics some time in
the early 90s (I'm in the wrong country at the moment so I don't have the
full reference).
p.adjust.bonferroni <- function(p, n = length(p)) pmin(1, n * p)
p.adjust.holm <- function(p, n = length(p)) {
    index <- order(p)
    qi <- pmin(1, (n - 1:n + 1) * p[index])   # (n - i + 1) * p_(i)
    if (n > 1) for (i in 2:n) qi[i] <- max(qi[i], qi[i - 1])   # enforce monotonicity
    qi[order(index)]   # restore input order
}
p.adjust.hochberg <- function(p, n = length(p)) {
    index <- order(p)
    qi <- pmin(1, (n - 1:n + 1) * p[index])
    if (n > 1) for (i in (n - 1):1) qi[i] <- min(qi[i], qi[i + 1])
    qi[order(index)]
}
r-devel mailing list -- Read http://www.ci.tuwien.ac.at/~hornik/R/R-FAQ.html
Send "info", "help", or "[un]subscribe"
(in the "body", not the subject !) To: r-devel-request@stat.math.ethz.ch
A stream of water
A stream of water exits from the nozzle of a hose at a speed of 16 m/s. The vertical wall of a burning building is a horizontal distance D = 4.0 m away from the nozzle. I understand and got the correct
answers for the first two questions, but I need help on the third question.
1.) If the nozzle is pointing at an angle of 40 degrees above horizontal, how long does it take for the water to travel from the nozzle to the wall? 0.33 s (it's correct, no need to check)
2.) At what height above the nozzle does the water hit the wall? 2.8 m (this is also correct)
OK, I'm stuck on the third question....
3.) If the angle of the nozzle is changed to maximize the height of the water on the wall, at what height above the nozzle does the water hit the wall?
The thing I don't understand is finding the angle of the nozzle when it's changed to maximize the height of the water. Can someone help me?
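For what it's worth, here is one way to sketch the maximization (assuming the usual g = 9.8 m/s² and no air resistance; the height at the wall is h(θ) = D tan θ − gD²/(2v² cos² θ), and setting dh/dθ = 0 gives tan θ = v²/(gD)):

```python
import math

v, D, g = 16.0, 4.0, 9.8

theta = math.atan(v**2 / (g * D))  # angle that maximizes the height at the wall
h_max = D * math.tan(theta) - g * D**2 / (2 * v**2 * math.cos(theta)**2)

print(round(math.degrees(theta), 1))  # about 81.3 degrees
print(round(h_max, 1))                # about 12.8 m above the nozzle
```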
[Numpy-discussion] Ticket #1113, change title?
David Warde-Farley dwf@cs.toronto....
Sat May 23 14:37:19 CDT 2009
On 23-May-09, at 2:33 PM, Pauli Virtanen wrote:
>> Yes, I was wondering about that too, though notably the tests pass on
>> x86, and in fact the result on ppc was nowhere near 0 when I
>> checked it.
> What do you mean by "nowhere near"? What does the following output for
> you:
>>>> np.arctanh(np.array([1e-5 + 1e-5j], np.complex64))
> array([ 9.99999975e-06 +9.99999975e-06j], dtype=complex64)
I must've been hallucinating: this is on ppc64. I remembered it being
closer to 1e-1,
>>> dtype = np.complex64
>>> z = np.array([1e-5*(1+1j)], dtype=dtype)
>>> p = 9.999999999333333333e-6 + 1.000000000066666666e-5j
>>> d = np.absolute(1-np.arctanh(z)/p)
>>> d
array([ 2.75662915e-09], dtype=float32)
>>> np.finfo('complex64').eps
1.1920929e-07
>>> _ * 5
5.9604645e-07
So I guess it is pretty small, and small enough to pass the revised
test condition you gave above.
More information about the Numpy-discussion mailing list
What does "$H^*(X)$ is Hodge-Tate" mean?
Let $X$ be a (let us say smooth to obscure any confusions I have between $H(X)$ and $H_c(X)$) algebraic variety defined over some subfield of $\mathbb{C}$. I have occasionally overheard the
expression "$H^*(X)$ is Hodge-Tate" used to mean something which, as far as I could tell from context, resembled one of the following:
(1) $H^*(X)$ is generated by $(p,p)$ classes, i.e. those in some intersection $W_{2p} H^i(X,\mathbb{Q}) \cap F^p H^i(X,\mathbb{C})$, where $W$ and $F$ are the weight and Hodge filtrations from the
mixed Hodge structure. In particular were $X$ smooth and proper, $H^*(X) = \bigoplus H^{p,p}(X)$.
(2) Spread $X$ out as appropriate and reduce mod a good prime, then it is `polynomial count', i.e. the number of points over $\mathbb{F}_{p^n}$ is a polynomial in $p^n$.
(3) Spread $X$ out as appropriate and reduce mod a good prime, then all the eigenvalues of Frobenius are powers of $p$.
(4) The class of $X$ in the Grothendieck group of varieties is in $\mathbb{Z}[\mathbb{A}^1]$
But when I searched for "Hodge-Tate" on google, I arrived at some description of "Hodge-Tate numbers" etc which seemed to have something to do with p-adic Hodge theory and apply to any variety.
Anyway my question is as in the title,
What does it mean for $H^*(X)$ to be Hodge-Tate?
Also I guess (4) => (3) => (2) and I vaguely recall from some appendix of N. Katz that => (1) can be tacked on the end (?) I would also like to know
Which of the reverse implications is false, and what are some counterexamples?
The paper "Eigenvalues of Frobenius and Hodge numbers" by Kisin and Lehrer discusses the relations between 1), 2) and 3), using p-adic Hodge theory. – Simon Pepin Lehalleur Feb 4 '13 at 6:28
Also, there is a potential confusion with the notion of "Hodge-Tate representation" in p-adic Hodge theory. According to Faltings' theorem, the cohomology of any smooth proper variety over a
p-adic field is Hodge-Tate (see definition 2.3.4 and theorem 2.2.3 in the Brinon-Conrad lecture notes, math.stanford.edu/~conrad/papers/notes.pdf) so this does not quite match the notions 1-4)
(which are closer to "the motive of X is a mixed Tate motive", I guess) – Simon Pepin Lehalleur Feb 4 '13 at 6:36
The appendix of Katz is in this paper: arxiv.org/pdf/math/0612668.pdf It does indeed seem to say (2)=>(1). – Jim Bryan Feb 4 '13 at 7:31
The Tate conjecture implies that every Galois-invariant map between the etale cohomology of two varieties comes from a $\mathbb Q_l$-linear combination of correspondences. Thus every
Galois-invariant map between the etale cohomology of two motives comes from a $\mathbb Q_l$-linear combination of morphisms in the category of motives. In particular, by linear algebra, an
isomorphism of Galois representations comes from an isomorphism of motives. (3) gives an isomorphism of Galois representations between $[X]$ and a sum of powers of the Tate motive (one may also
have to assume semisimplicity) – Will Sawin Feb 4 '13 at 23:39
Thus we have an isomorphism in the category of motives, so an equality in the Grothendieck group of motives. I think the problem is that an isomorphism in the category of motives may come from a
correspondence in the category of algebraic varieties, like an isogeny between two elliptic curves, that is not an isomorphism, so does not induce an equality in the Grothendieck group. – Will
Sawin Feb 4 '13 at 23:40
1 Answer
2 does imply 1 (for smooth projective varieties) via $p$-adic Hodge theory and perhaps a simpler argument.
1 does not imply 2. Indeed, blow up $\mathbb P^2$ at the Galois orbit of some point that is not $\mathbb Q$-rational but is rational over some quadratic field extension, say $(1 : \sqrt{-1} : 0)$. Mod a prime $p$ where that point is not $\mathbb F_p$-rational, there are $p^2+p+1$ $\mathbb F_p$-rational points. Mod a prime $p$ where that point is rational, there are $p^2+3p+1$ points. Obviously, this cannot be explained by any polynomial.
2 does imply 3 for smooth projective varieties. Using the polynomial for the number of points, one can compute the Weil zeta function as a product of terms of the form $\left( \frac{1}{1 - p^n t} \right)$. Using the Lefschetz trace formula, this is a product of factors corresponding to the eigenvalues of Frobenius in the etale cohomology. By the Riemann hypothesis, none of these terms cancel, so all eigenvalues are of the form $p^n$.
Not sure about 3 and 4.
Do you know if (1) and "$X$ is defined over $\mathbf Q$" implies (2)? – Dan Petersen Feb 4 '13 at 8:01
I don't think that (2) implies (1) in general. Balazs mentions in mathoverflow.net/questions/92657 an example due to Nick Katz (that I haven't been able to locate) of a variety with
polynomial count but whose cohomology is not all of $(p,p)$ type. I guess the result is true if you make some purity assumption (like $X$ smooth proper) to avoid cancellation in the
Hodge-Deligne polynomial. – Dan Petersen Feb 4 '13 at 9:14
Oh - the fact that it's true when $X$ satisfies purity is exactly what is proven in the appendix by Katz linked by Jim Bryan above. – Dan Petersen Feb 4 '13 at 9:29
The variety I described is defined over $\mathbb Q$, although I said it in a strange way. I meant to blow up at the closed point on $\mathbb P^2_\mathbb Q$, thus a Galois orbit of $\mathbb P^2_{\bar{\mathbb Q}}$-points, which is why I added $2p$ and not $1p$. This is defined over $\mathbb Q$. – Will Sawin Feb 4 '13 at 17:35
search results
Results 1 - 4 of 4
1. CJM 2013 (vol 66 pp. 625)
Classifying the Minimal Varieties of Polynomial Growth
Let $\mathcal{V}$ be a variety of associative algebras generated by an algebra with $1$ over a field of characteristic zero. This paper is devoted to the classification of the varieties $\mathcal{V}$ which are minimal of polynomial growth (i.e., their sequence of codimensions grows like $n^k$ but any proper subvariety grows like $n^t$ with $t\lt k$). These varieties are the building blocks of general varieties of polynomial growth. It turns out that for $k\le 4$ there are only a finite number of varieties of polynomial growth $n^k$, but for each $k \gt 4$, the number of minimal varieties is at least $|F|$, the cardinality of the base field, and we give a recipe of how to construct them.
Keywords: T-ideal, polynomial identity, codimension, polynomial growth
Categories:16R10, 16P90
2. CJM 2012 (vol 64 pp. 721)
Analysis of the Brylinski-Kostant Model for Spherical Minimal Representations
We revisit with another view point the construction by R. Brylinski and B. Kostant of minimal representations of simple Lie groups. We start from a pair $(V,Q)$, where $V$ is a complex vector space
and $Q$ a homogeneous polynomial of degree 4 on $V$. The manifold $\Xi $ is an orbit of a covering of ${\rm Conf}(V,Q)$, the conformal group of the pair $(V,Q)$, in a finite dimensional
representation space. By a generalized Kantor-Koecher-Tits construction we obtain a complex simple Lie algebra $\mathfrak g$, and furthermore a real form ${\mathfrak g}_{\mathbb R}$. The connected
and simply connected Lie group $G_{\mathbb R}$ with ${\rm Lie}(G_{\mathbb R})={\mathfrak g}_{\mathbb R}$ acts unitarily on a Hilbert space of holomorphic functions defined on the manifold $\Xi $.
Keywords:minimal representation, Kantor-Koecher-Tits construction, Jordan algebra, Bernstein identity, Meijer $G$-function
Categories:17C36, 22E46, 32M15, 33C80
3. CJM 2010 (vol 62 pp. 1419)
BMO-Estimates for Maximal Operators via Approximations of the Identity with Non-Doubling Measures
Let $\mu$ be a nonnegative Radon measure on $\mathbb{R}^d$ that satisfies the growth condition that there exist constants $C_0>0$ and $n\in(0,d]$ such that for all $x\in\mathbb{R}^d$ and $r>0$, ${\mu(B(x,\,r))\le C_0r^n}$, where $B(x,r)$ is the open ball centered at $x$ and having radius $r$. In this paper, the authors prove that if $f$ belongs to the $\textrm{BMO}$-type space $\textrm{RBMO}(\mu)$ of Tolsa, then the homogeneous maximal function $\dot{\mathcal{M}}_S(f)$ (when $\mathbb{R}^d$ is not an initial cube) and the inhomogeneous maximal function $\mathcal{M}_S(f)$ (when $\mathbb{R}^d$ is an initial cube) associated with a given approximation of the identity $S$ of Tolsa are either infinite everywhere or finite almost everywhere, and in the latter case, $\dot{\mathcal{M}}_S$ and $\mathcal{M}_S$ are bounded from $\textrm{RBMO}(\mu)$ to the $\textrm{BLO}$-type space $\textrm{RBLO}(\mu)$. The authors also prove that the inhomogeneous maximal operator $\mathcal{M}_S$ is bounded from the local $\textrm{BMO}$-type space $\textrm{rbmo}(\mu)$ to the local $\textrm{BLO}$-type space $\textrm{rblo}(\mu)$.
Keywords:Non-doubling measure, maximal operator, approximation of the identity, RBMO(mu), RBLO(mu), rbmo(mu), rblo(mu)
Categories:42B25, 42B30, 47A30, 43A99
4. CJM 2009 (vol 61 pp. 382)
Unit Elements in the Double Dual of a Subalgebra of the Fourier Algebra $A(G)$
Let $\mathcal{A}$ be a Banach algebra with a bounded right approximate identity and let $\mathcal B$ be a closed ideal of $\mathcal A$. We study the relationship between the right identities of the
double duals ${\mathcal B}^{**}$ and ${\mathcal A}^{**}$ under the Arens product. We show that every right identity of ${\mathcal B}^{**}$ can be extended to a right identity of ${\mathcal A}^{**}$
in some sense. As a consequence, we answer a question of Lau and Ülger, showing that for the Fourier algebra $A(G)$ of a locally compact group $G$, an element $\phi \in A(G)^{**}$ is in $A(G)$ if and only if $A(G) \phi \subseteq A(G)$ and $E \phi = \phi$ for all right identities $E$ of $A(G)^{**}$. We also prove some results about the topological centers of ${\mathcal B}^{**}$ and ${\mathcal A}^{**}$.
Keywords: Locally compact groups, amenable groups, Fourier algebra, identity, Arens product, topological center
Penrose Tiles
Penrose was not the first to discover aperiodic tilings, but his is probably the most well-known. In its simplest form, it consists of 36- and 72-degree rhombi, with "matching rules" forcing the
rhombi to line up against each other only in certain patterns. It can also be formed by tiles in the shape of "kites" and "darts" or even by deformed chickens (see the "perplexing poultry" entry
below). Part of the interest in this tiling stems from the fact that it has a five-fold symmetry impossible in periodic crystals, and has been used to explain the structure of certain "quasicrystal" materials.
• Ancient Islamic Penrose Tiles. Peter Lu uncovers evidence that the architects of a 500-year-old Iranian shrine used Penrose tiling to lay out the decorative patterns on its archways. From Ivars
Peterson's MathTrek.
• Aperiodic tiling and Penrose tiles, Steve Edwards.
• The Art and Science of Tiling. Penrose tiles at Carleton College.
• Cellular automaton run on Penrose tiles, D. Griffeath. See also Eric Weeks' page on cellular automata over quasicrystals.
• Clusters and decagons, new rules for using overlapping shapes to construct Penrose tilings. Ivars Peterson, Science News, Oct. 1996.
• Five-fold symmetry in crystalline quasicrystal lattices, Donald L. D. Caspar and Eric Fontano.
• Gallery of interactive on-line geometry. The Geometry Center's collection includes programs for generating Penrose tilings, making periodic drawings a la Escher in the Euclidean and hyperbolic
planes, playing pinball in negatively curved spaces, viewing 3d objects, exploring the space of angle geometries, and visualizing Riemann surfaces.
• Goldene Schnittmuster. Article in German on Penrose tiling and related topics.
• Irrational tiling by logical quantifiers. LICS proceedings cover art by Alvy Ray Smith, based on the Penrose tiling.
• Kadon Enterprises, makers of games and puzzles including polyominoes and Penrose tiles.
• Mathematical imagery by Jos Leys. Knots, Escher tilings, spirals, fractals, circle inversions, hyperbolic tilings, Penrose tilings, and more.
• Non-periodic tiling of the plane. Including Penrose tiles, Pinwheel tiling, and more. Paul Bourke.
• Ozbird Escher-like tessellations by John Osborn, including several based on Penrose tilings.
• Patterns within rhombic Penrose tilings. Stephen Collins' program "Bob" generates these tilings and explores the patterns formed by geodesic walks in them.
• Penrose mandala and five-way Borromean rings.
• Penrose quilt on a snow bank, M.&S. Newbold. See also Lisbeth Clemens' Penrose quilt.
• Penrose tiles and how their visualization leads to strange looks from priests and small children. Drew Olbrich.
• Penrose tiles and worse. This article from Dave Rusin's known math pages discusses the difficulty of correctly placing tiles in a Penrose tiling, as well as describing other related tilings.
• Penrose Tiles entry from E. Weisstein's treasure trove.
• Pentagonal coffee table with rhombic bronze casting related to the Penrose tiling, by Greg Frederickson.
• Quasitiler image, E. Durand.
• Santa Fe Ribbon, painting by Connie Simon featuring a rhombic Penrose tiling.
• Tessellations, a company which makes Puzzellations puzzles, posters, prints, and kaleidoscopes inspired in part by Escher, Penrose, and Mandelbrot.
• Three-color the Penrose tiling? Mark Bickford asks if this tiling is always three-colorable. Ivars Peterson reports on a new proof by Tom Sibley and Stan Wagon that the rhomb version of the
tiling is 3-colorable; A proof of 3-colorability for kites and darts was recently published by Robert Babilon [Discrete Mathematics 235(1-3):137-143, May 2001]. This is closely related to my page
on line arrangement coloring, since every Penrose tiling is dual to a "multigrid", which is just an arrangement of lines in parallel families. But my page only deals with finite arrangements,
while Penrose tilings are infinite.
• Tilings. Lecture notes from the Clay Math Institute, by Richard Stanley and Federico Ardila, discussing polyomino tilings, coloring arguments for proving the nonexistence of tilings, counting how
many tilings a region has, the arctic circle theorem for domino tilings of diamonds, tiling the unit square with unit-fraction rectangles, symmetry groups, penrose tilings, and more. In only 21
pages, including the annotated bibliography. A nice but necessarily concise introduction to the subject. (Via Andrei Lopatenko.)
• Toilet paper plagiarism. A big tissue company tries to rip off Sir Roger P.
• The trouble with five. Craig Kaplan explains why five-fold symmetry doesn't work in regular plane tilings, but does work for the Penrose tiling.
• Voronoi diagram of a Penrose tiling (rhomb version), Cliff Reiter.
From the Geometry Junkyard, computational and recreational geometry pointers.
Send email if you know of an appropriate page not listed here.
David Eppstein, Theory Group, ICS, UC Irvine.
Semi-automatically filtered from a common source file.
SAT Math Tutors
North Smithfield, RI 02896
Special Ed. Teacher Specializing in Core Subjects and Study Skills
...am a retired special education resource teacher who taught elementary school students with learning challenges for over 30 years. Most of my teaching took place individually or in small groups
where I provided individualized instruction according to each child's...
Offering 10+ subjects including SAT math
An Analysis of U.S. and World Oil Production Patterns Using Hubbert-Style Curves
A quantitative analytical method, using a spreadsheet, has been developed which allows the determination of values of the three parameters that characterize the Hubbert-style Gaussian error curve
that best fits the conventional oil production data both for the U.S. and the world. The three parameters are: the total area under the Gaussian which represents the estimated ultimate (oil) recovery
( EUR ), the date of the maximum of the curve, and the half-width of the curve. The "best fit" is determined by adjusting the values of the three parameters to minimize the root-mean-square deviation
( RMSD ) between the data and the Gaussian. The sensitivity of the fit to changes in values of the parameters is indicated by an exploration of the rate at which the RMSD increases as values of the
three parameters are varied from the values that give the best fit. The results of the analysis are: 1) the size of the U.S. estimated ultimate recovery ( EUR ) of oil is suggested to be 2.22 x 10^11 barrels ( 0.222 trillion bbl ), of which approximately three-fourths appears to have been produced through 1995; 2) if the world EUR is 2.0 x 10^12 bbl ( 2.0 trillion bbl ), a little less than half of this oil has been produced through 1995, and the maximum of world oil production is indicated to be in 2004; 3) each increase of one billion barrels in the size of the world EUR beyond the value of 2.0 x 10^12 bbl can be expected to result in a delay of approximately 5.5 days in the date of maximum production; 4) alternate production scenarios are presented for EURs of 3.0 and 4.0 trillion bbl.
Key words: Petroleum; Energy; Gaussian; Logistic Curve; Peak Production
One of the best known products of the work of M. King Hubbert is the "Hubbert curve" ( Hubbert, 1974 ) which empirically approximates the full cycle of the growth, peaking, and subsequent decline to
zero of the "production" [ ( quantity / year ) vs. year ] of a finite non-renewable resource. The main central portion of a representative Hubbert-style curve is shown by the solid line of Figure 1.
This analysis is the result of asking the questions, 1) What is the maximum amount of information one can gain from analytical comparisons of a Hubbert-style curve with data on historical oil
production for the U.S. and for the world? 2) How sensitive are the results of this analysis to changes in important parameters? 3) How do the results of the analysis compare with the results of
geological studies of the probable EUR?

OTHER CURVES
In his original work Hubbert fitted production data to the derivative of the Logistic curve, which is similar in shape to the Gaussian. ( Hubbert, 1982 ) The analysis described here was done with
both curves to allow comparison of the results. The differences in the results were less than the root-mean-square deviations of the fits, so that the results did not indicate a clear preference for
either curve. Both curves are widely understood, but the Gaussian curve was used because the analysis is simpler in execution and interpretation.
No attempt was made to explore other similar curves to see if one could find a curve that gave a significantly improved fit to the data, nor was there any attempt to find improved fits by using
superposition of several Gaussian curves representing the production patterns of several different regions or provinces.

BACKGROUND FOR THE METHOD
An analytical method will be outlined which allows one to determine the values of the three parameters of the Hubbert curve that gives the "best fit" to the historical data on oil production in the
U.S. and world.
This method assumes that the complete curve of production vs. time of a non-renewable resource such as oil ( the Hubbert curve ) can be represented by a Gaussian error curve ( Gaussian ) which is
characterized by three parameters:
1) The area under the Gaussian is the size Qoo of the EUR, ( U.S. or world ), expressed in barrels, ( bbl ) ;
2) The time tM is the date ( year ) of the peak of the Gaussian;
3) The parameter S , ( years ) is a measure of the width of the Gaussian.
Logic suggests that it is best to express quantities of oil in the SI units of cubic meters or joules of energy. However in the lingua franca of the world oil business, the "barrel" ( bbl ) is the
standard unit of quantity. The following conversion factors may be used to convert barrels to cubic meters or barrels of oil to joules of energy.
One barrel is approximately 0.159 cubic meters ( 159 liters ).
One barrel of oil has an energy content of approximately 5.9 x 10^9 joules.
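These conversions can be wrapped in small helper functions. This is a sketch; the 0.159 m^3 per barrel value is the standard 42-US-gallon barrel, and the energy factor is the one given in the text.

```python
M3_PER_BBL = 0.158987    # cubic meters per barrel (standard 42-US-gallon barrel)
JOULES_PER_BBL = 5.9e9   # joules per barrel, from the text

def bbl_to_m3(bbl):
    """Convert barrels of oil to cubic meters."""
    return bbl * M3_PER_BBL

def bbl_to_joules(bbl):
    """Convert barrels of oil to joules of energy content."""
    return bbl * JOULES_PER_BBL

# Example: the U.S. EUR of 2.22e11 bbl expressed in SI units
print(bbl_to_m3(2.22e11))      # ~3.5e10 m^3
print(bbl_to_joules(2.22e11))  # ~1.3e21 J
```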
Production data have only approximately followed the Gaussian pattern in the past. However, as Hubbert pointed out, over the long run, production of oil started initially at zero, will rise to one or
more maxima and then, at some time in the future, will return to zero. Because of these properties, the Gaussian can always be used as an approximate representation of the curve of production vs.
time of oil or of any non-renewable resource. The actual production curves will be modified by economic, geological, political, technological, and other factors, which may result in a deterioration
of the quality of the fit between the data and the Gaussian, but the role of these important factors is limited to changing the quality of this fit.
As applied to oil, the method: 1) uses the following data:
A. the history of the production ( bbl / y ) of oil vs. time;
B. the geologically estimated EUR, Qoo ;
2) uses all of the available annual oil production data, or a subset of the data;
The longer the time span covered by the data, the greater may be the precision of the fit between the Gaussian and the data.
3) is quantitative and analytical, using the accepted mathematical criterion that the Gaussian that is the "best fit" to the data is the one for which the root- mean-square deviation ( RMSD ) between
the historical data and the Gaussian has a minimum value;
4) is mathematically reproducible;
5) is easily updated as more production data become available;
6) produces numerical values ( primary values ) of the three parameters of the particular Gaussian that is the best fit to the data; this Gaussian is the "primary Gaussian."
7) allows one to explore the "goodness" of the fit of the Gaussian to the data by determining how rapidly the RMSD of the best fit deteriorates ( increases ) with changes in any of the three
parameters from their primary values;
8) can be used to determine "best fit" values for two of the parameters of the Gaussian, along with the associated RMSD, when the third parameter is given an arbitrary numerical value other than its
"best fit" ( primary ) value. The Gaussians calculated from these values of the three parameters are "secondary Gaussians."
9) can be used to determine a "best fit value" for one of the parameters of the Gaussian, along with the value of the associated RMSD, when the second and third parameters are given arbitrary values
other than their primary values.
10) ( except for the numerical value of Qoo ) is completely decoupled from theory, judgement, or speculation about the future consequences of complex geological, technological, economic, or political
factors that can affect annual production.
This decoupling ( 10 ) need not be of concern because all of these factors were present and operating in the real-world data that Hubbert used when he recognized that the production curve had an
approximately Gaussian shape. These factors can affect the quality of the fit between the Gaussian and the historical data. DEFINITIONS
Let us define quantities:
t = date ( year )
tM = the date of the maximum of the Gaussian Hubbert curve
P = the production of oil in barrels per year, ( bbl / y )
Q = the estimated amount of oil remaining in the ground ( bbl )
Qoo = the integrated total production ( bbl ) as the time t approaches infinity. This is called the "Estimated Ultimate Recovery," or EUR.
W = the full-width at half-height of the Gaussian
W = ( 8 ln 2 )^0.5 S = 2.355 . . . S
where S is a convenient width parameter
As it is applied to the oil analysis, the Gaussian curve of annual production vs. time is given by:
P = - dQ / dt = [ Qoo / ( S ( 2 π )^1/2 ) ] exp [ - ( tM - t )^2 / ( 2 S^2 ) ]
This equation for P contains the three parameters: Qoo, tM, and S.

METHOD OF ANALYSIS
One seeks the values of the three parameters that characterize the particular Gaussian which is the best fit to a set of historical oil production data. First, one assumes reasonable approximate
values of the three parameters and uses the spreadsheet to calculate the year-by-year values of the Gaussian that is prescribed by these assumed values. The root mean square deviation ( RMSD )
between the assumed Gaussian and the historical data points is then calculated and is displayed.
The values of the three parameters of the Gaussian are then varied systematically until the RMSD is found empirically to have a minimum value. The Gaussian characterized by this minimum RMSD is the
"primary Gaussian" which gives the best fit to the data. The values of the parameters that yield the primary Gaussian are then the "primary values" of the parameters.
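The spreadsheet procedure just described can be sketched in a few lines of code. The sketch below is in Python (the original analysis used a spreadsheet) and fits the three parameters by a crude coordinate grid search; since the EIA production table is not reproduced here, it runs on synthetic data generated from the paper's primary U.S. values.

```python
import math

def gaussian_production(t, Q_inf, t_max, S):
    """P(t) = [Q_inf / (S sqrt(2 pi))] exp(-(t_max - t)^2 / (2 S^2)), in bbl/y."""
    return (Q_inf / (S * math.sqrt(2 * math.pi))) * math.exp(-(t_max - t) ** 2 / (2 * S ** 2))

def rmsd(params, years, data):
    """Root-mean-square deviation between a candidate Gaussian and the data."""
    Q_inf, t_max, S = params
    dev2 = [(gaussian_production(t, Q_inf, t_max, S) - p) ** 2 for t, p in zip(years, data)]
    return math.sqrt(sum(dev2) / len(dev2))

# Synthetic "production history" built from the paper's primary U.S. values;
# a real analysis would substitute the EIA annual-production series here.
true_params = (222.2e9, 1975.6, 27.56)
years = list(range(1900, 1996))
data = [gaussian_production(t, *true_params) for t in years]

# Vary the three parameters systematically until the RMSD stops improving,
# mimicking the spreadsheet search for the "primary" (best-fit) Gaussian.
best = (200.0e9, 1970.0, 25.0)
for _ in range(120):
    q, tm, s = best
    candidates = [(q + dq, tm + dt, s + ds)
                  for dq in (-1e9, 0.0, 1e9)
                  for dt in (-0.2, 0.0, 0.2)
                  for ds in (-0.1, 0.0, 0.1)]
    best = min(candidates, key=lambda c: rmsd(c, years, data))

print(best)  # converges near (222.2e9, 1975.6, 27.56)
```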
Figure 1 shows the plot of the U.S. oil production data ( USEIA, 1995 ) along with the primary Gaussian. Table I tabulates the results of the analysis.
The analysis suggests that approximately three-fourths ( 77 % ) of the EUR ( Qoo = 222.2 x 10^9 bbl ) in the 50 states had been produced by the end of 1995.
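Under the Gaussian model, the fraction of the EUR produced by any date is simply the normal cumulative distribution evaluated at that date. A quick check of the 77 % figure, using the primary values tM = 1975.6 and S = 27.56 y and assuming "end of 1995" means t = 1996.0:

```python
import math

def cumulative_fraction(t, t_max, S):
    """Fraction of Q_inf produced by time t under a Gaussian production curve:
    the normal CDF, 0.5 * (1 + erf((t - t_max) / (S * sqrt(2))))."""
    return 0.5 * (1.0 + math.erf((t - t_max) / (S * math.sqrt(2.0))))

# Primary U.S. values: t_max = 1975.6, S = 27.56 y
frac = cumulative_fraction(1996.0, 1975.6, 27.56)
print(round(frac, 2))  # ~0.77, matching "approximately three-fourths"
```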
This EUR gives a best fit that is significantly larger than the value found by Hubbert ( 1982 ), who based his analysis on U.S. production data for the lower 48 states through 1980. Hubbert's EUR was 161.8 x 10^9 bbl. ( p. 90 )
Hubbert gave little detailed discussion of the magnitudes of the analytical uncertainties in the quantities he derived from his curve fitting. The method used here allows the quantitative exploration
of these uncertainties.
The overall quality of the fit between the data and the primary Gaussian is indicated by the fact ( Table 1 ) that the RMSD between the primary Gaussian and the data is 3.2 % of the height of the
Gaussian maximum.
To explore the sensitivity of the fit of the data to the primary Gaussian, one can give two of the three parameters their primary values of Table 1, and then one can systematically change the value of the third parameter to explore how the RMSD changes with that parameter. For example, when the parameters S and tM have their primary values, how sensitive is the RMSD to changes in the parameter Qoo ( the EUR )? The answer to this question is shown in the upper curve of Fig. 2, where one sees that increasing Qoo by 8.1 %, from its primary value of 222.2 x 10^9 bbl to 240 x 10^9 bbl, causes the RMSD to increase approximately quadratically from 0.103 x 10^9 to 0.175 x 10^9 bbl / y, an increase of approximately 70 %.
A second investigation is to give one parameter a value other than its primary value and then vary the values of the other two parameters until one finds a secondary minimum RMSD. To illustrate: if one changes the value of Qoo from its primary value to 240 x 10^9 bbl, what is the RMSD if S and tM are then varied from their primary values until a new minimum RMSD is found? When this is done, the resulting secondary minimum is characterized by tM = 1977.5 and S = 29.7 y. The RMSD is 0.1183 x 10^9 bbl / y, which is a point on the lower curve of Fig. 2 for Qoo = 240 x 10^9 bbl.
Figure 3 shows the data, the primary Gaussian, and the secondary Gaussian that is the best fit to the data for an assumed value of the EUR of 250 x 10^9 bbl.
The results of a detailed exploration of the date of the peak of oil production in the U.S. as a function of the assumed size of Qoo is shown in Fig. 4. The slope of a chord between the ends of the plotted curve suggests that the date of peak production in the U.S. is delayed about 39 days for every billion barrels of new oil that is added to the estimated size of the EUR of the U.S. The historical data show that the peak production of oil in the U.S. was in 1970, with a smaller peak in 1984.

SENSITIVITY OF THE RMSD TO CHANGES OF S AND tM
If the three parameters are set at their primary values ( minimum RMSD ), and if then the assumed date of the peak ( tM ) is moved from 1975.6 to 1980, the RMSD is found to increase by about 80 % .
If the three parameters are set at their primary values, and if then the assumed value of S is increased from 27.56 years to 30.00 years, the RMSD is found to increase by about 52 %.

THE GAUSSIAN
The data for world oil production ( USEIA, 1995 ) and the primary Gaussian that best fits the data are shown in Fig. 5.
The value of the EUR that gives the minimum RMSD for world oil is 1.115 x 10^12 bbl, which is much smaller than the value ( 2.0 x 10^12 bbl ) that Hubbert used in 1972. This discrepancy points out a
limitation of this analysis. In contrast to the case of U.S. oil, the world data do not yet show a long and persistent downturn in production. As a consequence, a wider range of values of the EUR can
give plausible fits to the data. This is illustrated in Fig. 6. For assumed values of the EUR that are less than the primary value, the RMSD rises very rapidly, but for values of the EUR that are
greater than the primary value, the RMSD rises only slowly. If a production maximum has not been passed, this analysis tends strongly to reject assumed values of the EUR that are less than the
primary value, but the analysis does not discriminate strongly among values of the EUR that are larger than the primary value.
If one traces out the date tM of the peak of the secondary Gaussians corresponding to a series of increasing values assumed for the EUR, one gets the curve shown in Fig. 7. Reading from Fig. 7, it can be seen that for assumed values of the EUR of 2.0 x 10^12 bbl, 3.0 x 10^12 bbl, and 4.0 x 10^12 bbl, peak production is indicated for the years 2004, 2019, and 2030, respectively.
The average slope of the curve of Fig. 7 shows that for every new billion barrels of oil added to the estimate of the world's EUR, the date of the world peak production is delayed approximately 5.5 days! Doubling the world EUR from 2.0 x 10^12 bbl delays the date of the maximum by about 26 years!
Figure 8 shows the data and the best-fit secondary Gaussians for these three assumed values of the EUR. Three different values of the EUR are listed in the upper left, and the years of the corresponding peak production are given. It should be noted that the highest curve in Fig. 8 assumes not only that the EUR is 4.0 x 10^12 bbl, but that the world production capability and world demand can rise to 39 x 10^9 bbl / yr by the year 2030.

PER CAPITA OIL PRODUCTION
In Fig. 9 one sees two curves of world daily per capita production of oil which are normalized to have the same value in the year 1920. The upper curve assumes that the world population has not
changed since 1920, while the lower curve takes account of the growth of world population since 1920, and so it shows the actual per capita oil production. At the end of 1995, world per capita oil
production was less than two liters per person per day ! The world population is increasing ( 1996 ) by about 1.5 % per year ( ~90 million per year ), so world oil production will have to climb by
1.5 % per year just in order to keep the per capita world oil production constant.
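The "less than two liters" figure is easy to reproduce. The 1995 production and population values below are round numbers assumed for illustration; they are not given in the text.

```python
LITERS_PER_BBL = 158.987  # one barrel in liters

world_production_bbl_per_year = 22.8e9  # ~1995 world oil production (assumed)
world_population = 5.7e9                # ~1995 world population (assumed)

liters_per_person_per_day = (world_production_bbl_per_year * LITERS_PER_BBL
                             / (world_population * 365.25))
print(round(liters_per_person_per_day, 2))  # ~1.7 L/person/day: under two liters
```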
For the U.S., the maximum per capita oil production was approximately 7 liters per day in 1970, which declined to approximately 4 liters per day by 1995.

RESERVES-TO-PRODUCTION RATIOS
The ratio of current reserves ( bbl ) to current annual production ( bbl / y ) is the number of years the current reserves would last if the current annual production continued unchanged. This number
is widely quoted as an approximate indication of the future of oil production. If the ratio has the value of 40 years, it means that the reserves would last 40 years "at the present rate of
production," ( Bartlett, 1978 ) which suggests to some that the world production might remain constant for 40 years and then abruptly drop to zero. Rates of production are not constant over long
periods, so this widely quoted ratio is a meaningless indicator of the future course of oil production.
Theoretical curves that suggest the future path of the reserves-to-production ratio can be calculated for each of the best-fit Gaussians. Figure 10 shows the predicted reserves-to-production ratio as
a function of time for the primary Gaussian of Fig. 1 for U.S. oil production. Figure 11 shows three curves of the predicted reserves-to-production ratios for world oil production that correspond to
the three Gaussians of Fig. 8.
One notes that for a fixed value of the EUR, the reserves-to-production ratio decreases monotonically but at a rate that is less rapid than one year each year. New enlarged estimates of the value of
the EUR could slow or temporarily reverse the decline in the actual ratio.
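These theoretical curves can be sketched directly from the Gaussian: remaining reserves are Qoo times (1 minus the normal CDF), and current production is P(t). In the world case the text gives EUR = 2.0 x 10^12 bbl and a 2004 peak; the width S ≈ 30 y below is an assumed value chosen for illustration, and with it the year-2000 ratio comes out near 42 years.

```python
import math

def reserves_to_production(t, Q_inf, t_max, S):
    """R/P ratio (years) implied by a Gaussian production curve: remaining
    reserves Q_inf * (1 - CDF(t)) divided by current production P(t)."""
    z = (t - t_max) / S
    remaining = Q_inf * 0.5 * (1.0 - math.erf(z / math.sqrt(2.0)))
    production = (Q_inf / (S * math.sqrt(2.0 * math.pi))) * math.exp(-0.5 * z * z)
    return remaining / production

# World case: EUR = 2.0e12 bbl, peak in 2004 (from the text); S = 30 y assumed.
ratio_2000 = reserves_to_production(2000.0, 2.0e12, 2004.0, 30.0)
print(round(ratio_2000, 1))  # ~42 years
```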
In Fig. 11 one can see that for EUR = 2.0 x 10^12 bbl, the world reserves-to-production ratio in the year 2000 can be estimated to be approximately 42 years.

SUSTAINABILITY
The term "sustainability" is frequently invoked to describe a society that can continue many generations into the future. (Bartlett, 1997-98) Figures 1 and 8 suggest that current rates of consumption
of oil cannot continue for many generations into the future, so that present U.S. and world rates of consumption of oil are not sustainable.

COMPARISONS WITH OTHER ANALYSES
Because of the critical importance of oil to modern society, many studies have yielded estimates of the size of the remaining oil resources, and of the probable future paths of U.S. and world oil
production. Only a few are cited here.
Campbell & Laherrere ( 1998 ) report that "Global production of conventional oil will begin to decline sooner than most people think, probably within 10 years."( p.78 ) "Using several different
techniques to estimate the current reserves of conventional oil and the amount still left to be discovered, we conclude that the [ peak will be reached and the ] decline will begin before 2010." (
p.79 ) From Figure 7 one sees that if the world EUR is 2.4 x 10^12 bbl, the year of the maximum of the Hubbert Gaussian is indicated to be in 2010.
Edwards ( 1997 ) has given an extensive summary of the works of others and has added his own detailed analysis of the long-term situations in the U.S. and the world in regard to all fossil fuels. For
U.S. oil, he cites ( his Table 4 ) three estimates of the EUR whose average is 277 x 10^9 bbl. This is considerably higher than the 222.2 x 10^9 bbl that is yielded by this analysis. The secondary
curve for 277 x 10^9 bbl would be some distance to the right of the secondary curve shown in Fig. 3.
Edwards' Table 1 lists 14 estimates of the EUR for world oil, ranging from a low of 1.65 x 10^12 bbl to a high of 3.2 x 10^12 bbl, along with 11 predictions of the date of the peak of world oil
production. The mean values of the EUR and their corresponding peak dates, along with the RMSD of the EUR and the peak dates, derived from the spread of the tabulated values, are: EUR = ( 2.4 +/- 0.4
) x 10^12 bbl, and Peak Date = ( 2010 +/- 11 yrs ). Perhaps it is only fortuitous, but these two numbers are the coordinates of a point on the line of Fig. 7, and hence they are in agreement with the
analysis given here.
Ivanhoe ( 1995 ) shows Hubbert curves for discoveries and production for both the U.S. and the world, and then shows graphically the probable production scenarios for the future. In his Fig.3 he
shows curves from Hubbert which he has reworked for world oil production based on EURs of 1.5 x 10^12 and 2.0 x 10^12 bbl. The two peaks are shown in 1988 and 1996 respectively. Ivanhoe's peaks thus
show that the delay in the date of the peak is approximately 5.8 days per billion barrels of oil added to the world EUR.
Ivanhoe has written ( 1997 ) that the critical date when global oil demand will exceed world production will fall sometime between 2000 and 2010.
MacKenzie ( 1996 ) has done a computer analysis of world oil which he has combined with a review of published estimates of oil reserves. He concludes that: At the low end, for EUR oil equal to 1.8 x
10^12 bbl, peaking could occur as early as 2007; at the high end ( 2.3 x 10^12 bbl ), peaking could occur around 2014. ( An implausibly high 2.6 x 10^12 bbl for EUR would postpone peaking only another
five years -- to 2019. ) MacKenzie's computer-generated estimates can be compared with the estimates read from Fig. 7, where for EURs of 1.8 x 10^12, 2.3 x 10^12, and 2.6 x 10^12 bbl, the predicted
peak dates are 2000, 2009, and 2013.
From his analysis, MacKenzie has produced his estimate of the date of peak world production vs EUR which is given in his Fig. 13. His predicted peak dates are 6 to 8 years later than those given here
in Fig. 7, but his curve has the same slope as Fig. 7, namely 5.5 days delay per billion barrels added to the estimate of the EUR.
Masters, Attanasi & Root have estimated the world EUR of petroleum to be 2.3 x 10^12 bbl. They note that this value: . . . is limited by our concepts of world petroleum geology and our understanding
of specific basins; nonetheless, continued expansion of exploration activity, around the world, has resulted in only minimal adjustments to our quantitative understanding of ultimate resources. . .
They also indicate that: Unconventional resources are present in large quantities, in particular in the Western Hemisphere, and are of a dimension to substantially contribute to world reserves should
economic conditions permit.
Adelman & Lynch ( 1997 ) point out that even as oil is produced and used, estimates of reserves of oil generally tend to rise with time, so that a "fixed view of resource limits creates undue pessimism."
Their optimism is based on the past history of increases in the value of the world EUR. Because of the increases in reserves that they see, they indicate their belief that it is misleading to think
of the EUR as a fixed quantity, so that analyses such as the one presented here are seriously misleading.
We can note that according to Fig. 7, an increase in the world EUR of 66 billion bbl would delay the date of the maximum of the Gaussian by approximately one year.
CONCLUSION
The work reported here is an analytical study of the data on U.S. and world production of oil. The study has no geological content beyond that of the values of the EUR. The results are internally
precise and self-consistent, and hence are reproducible, and they are consistent with the results of a number of other studies. Studies based on assumed fixed values of the EURs are often criticized
by noting that values of the EURs tend to increase with time. Increasing estimates of the EUR of the U.S. can be accommodated in this analysis by referring to Figs. 3 and 4. The consequences of
increasing estimates of the world EUR can be evaluated by examination of Figs. 7 and 8.
Prices and the consequences of the law of supply and demand will be significant short-term determinants of the course of oil production in the future. The effects of these are not modeled in this analysis.
Only time will tell the degree to which the results of this analysis may or may not be reasonable.
This work originated from discussions with Professors John D. Edwards
( Geology ) and Robert A. Ristinen ( Physics ). I thank them for their insight and help. Richard C. Duncan has very kindly shared with me the results of a number of his works. In addition, I wish to
express my great appreciation to Steve Anderson, L.F. ( Buzz ) Ivanhoe, James J. MacKenzie, Charles D. Masters, David Root, and Walter Youngquist for their many valuable suggestions.
Adelman, M. A. and Lynch, M. C., 1997, Fixed view of resource limit creates undue pessimism: Oil & Gas Journal, v. 95, no. 14, p. 56-60
Bartlett, A. A., 1978, Forgotten fundamentals of the energy crisis: Am. Jour. of Phys., v. 46, p. 876-888
Bartlett, A. A., 1997-98, Reflections on sustainability, population growth, and the environment - revisited: Renewable Resources Jour., v. 15, no. 4, Winter 1997- 1998, p. 6-23
Campbell, C. J. and Laherrere, J. H., 1998, The end of cheap oil: Scientific American, v. 278, no.3, p. 78-83
Edwards, J. D., 1997, Crude oil and alternate energy production forecasts for the twenty-first century; the end of the hydrocarbon era: Am. Assoc. Petroleum Geologists, Bull., v. 81, no. 8, p.
Much of this work was available earlier in a report from the Energy and Minerals Applied Research Center, ( EMARC ) ( 1996 ), Department of Geological Sciences, University of Colorado at Boulder,
80309-0250
Hubbert, M. K., 1974, U.S. energy resources, a review as of 1972; A background paper prepared at the request of Henry M. Jackson, Chairman, Committee on Interior and Insular Affairs,
United States Senate, Pursuant to Senate Resolution 45, A National Fuels and Energy Policy Study: Serial No. 93-40 ( 92-75 ), Part 1. U.S. Government Printing Office, Washington, 1974, p. 186
Hubbert, M. K., 1982, Oil and gas supply modeling: NBS special publication 631, U.S. Department of Commerce / National Bureau of Standards ( now the National Institute of Standards and Technology,
NIST ) May 1982. p. 90
Ivanhoe, L.F., 1995, Future world oil supplies: there is a finite limit,
World Oil, v. 216, no. 10, p. 77-88.
Ivanhoe, L.F., 1997, King Hubbert - updated; Hubbert Center Newsletter, No. 97 / 1
Colorado School of Mines, Golden, CO
MacKenzie, J.J., 1996, Oil as a finite resource: When is global production likely to peak? World Resources Institute, 1709 New York Ave. NW, Washington, D.C., 20006
Masters, C. D., Attanasi, E.D., Root, D.H., 1994, World petroleum assessment and analysis: Proceedings of the 14th World Petroleum Congress, John Wiley & Sons, New York City
(USEIA, 1995) U.S. Energy Information Administration, Annual Energy Review, 1995: DOE / EIA ( 95 )
The U.S. production data are from the column "Crude Oil" of Table 5.1, p. 141
The world production data are from the column "World" of Table 11.5, p. 297
TABLE 1
Analytically determined primary values of the three parameters that describe the Gaussian that is the best fit to the data on the production of oil in the U.S.
Ultimate Resource, Qoo, bbl: 222.2 x 10^9
Year ( date ) of maximum, tM: 1975.6
S ( width parameter ), years: 27.56
Quantities that characterize the primary Gaussian:
RMSD between the data and the primary curve, bbl / y: 0.10293 x 10^9
Production through 1995, bbl: 0.171 x 10^12
Percent of EUR produced by the end of 1995: 76.8 %
Full-width at half-maximum of primary Gaussian, y: 64.9
Gaussian maximum ( peak ) production, bbl / y: 3.217 x 10^9
RMSD / maximum production: 3.20 %
FIGURE CAPTIONS
Fig. 1. The data for the production of oil in the U.S. are shown, along with the primary Gaussian. This is the Gaussian that has the smallest RMSD and hence is the best fit to the data. Each major square has the units of 1 x 10^9 bbl / y multiplied by 20 y, equal to 20 x 10^9 barrels of oil. The values of the three parameters that characterize this Gaussian are given in Table 1.
Fig. 2. For U.S. oil, the upper curve shows the values of the RMSD when S and tM have their primary values of 27.555 years and 1975.6 respectively, and the assumed value of the EUR ( Qoo ) of the
U.S. is changed systematically about its primary value of 0.2222 x 10^12 bbl ( 0.22 trillion bbl ). The lower curve shows the values of the RMSD when, for each non-primary value of the EUR, one
systematically varies S and tM until one locates a secondary minimum value of the RMSD. The quantities S and tM have the same value ( their primary values ) for all points on the upper curve; on the
lower curve their values change from point to point. The two curves share the same minimum at the primary value of the EUR ( 0.2222 x 10^12 bbl ).
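The two-stage search that this caption describes — fix a trial EUR, then vary S and tM until the RMSD reaches its secondary minimum — can be illustrated on synthetic data. This is only a sketch of the procedure (a crude grid search stands in for whatever optimizer the author used, and the noise level and variable names are my own):

```python
import numpy as np

rng = np.random.default_rng(0)

def gaussian(t, Q, tM, S):
    """Production rate, bbl/y, of a Gaussian with total area Q."""
    return Q / (S * np.sqrt(2 * np.pi)) * np.exp(-(t - tM) ** 2 / (2 * S ** 2))

# Synthetic "U.S. production data": the primary Gaussian of Table 1 plus noise
t = np.arange(1900, 1996)
data = gaussian(t, 0.2222e12, 1975.6, 27.56) + rng.normal(0, 0.05e9, t.size)

def secondary_rmsd(Q):
    """For a fixed (possibly non-primary) EUR, vary tM and S over a grid
    and return the secondary minimum of the RMSD."""
    best = np.inf
    for tM in np.arange(1965, 1990, 0.2):
        for S in np.arange(20, 40, 0.2):
            rmsd = np.sqrt(np.mean((data - gaussian(t, Q, tM, S)) ** 2))
            best = min(best, rmsd)
    return best

# Forcing a 12.5 % larger EUR degrades even the best achievable fit,
# mirroring the deterioration described for the right curve of Fig. 3
print(secondary_rmsd(0.250e12) > secondary_rmsd(0.2222e12))  # True
```

Because the area under the Gaussian is pinned to the assumed EUR, no choice of S and tM can fully compensate for a wrong Q, which is what makes the lower curve of Fig. 2 rise away from its minimum.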
Fig. 3. The data for U.S. oil production and two Gaussians are shown: the left one is the primary Gaussian in which all three parameters are adjusted to give the minimum RMSD between the data and the
curve. The value of the EUR for this best fit is 0.2222 x 10^12 bbl. The right curve is a secondary Gaussian for the case where the EUR is arbitrarily given the non-primary value of 0.250 x 10^12 bbl,
and then the two parameters S and tM were adjusted to find the values that gave the secondary minimum RMSD. The right curve represents a value of the EUR that is 12.5 % higher than the primary value
of the EUR. For the right curve, the RMSD of the fit of the Gaussian to the data is approximately 15 % higher than it is for the fit of the primary Gaussian on the left.
Fig. 4. As one increases the assumed EUR for the U.S., the date of the peak of the secondary Gaussians moves to later times; the delay is approximately 39 days for each billion barrels of oil that
are added to the estimate of the EUR of the U.S.
Fig. 5. The data for world oil production are compared with the best-fit primary Gaussian. Because the world data do not yet show a prolonged downturn, this analysis is very insensitive to the size
of the parameter Qoo , ( the EUR ) which from this fit is lower than many geological estimates. Curves for more widely accepted geological estimates of the EUR are shown in Fig. 8. The large
fluctuations in the data are due to political and economic factors.
Fig. 6. The RMSDs of the secondary Gaussians for world oil are shown as a function of the assumed values of the world EUR. The primary Gaussian that is shown in Fig. 5 is characterized by values of
the EUR and the RMSD at the minimum of this curve. For assumed values of the EUR that are less than the minimum, the RMSD deteriorates ( rises ) rapidly, while for values larger than the minimum, the
RMSD is seen to rise more slowly. The reason for this asymmetry is the fact that the world oil production data have not yet shown any prolonged downturn. This should be compared with the lower curve
of Fig. 2 which is the same plot for U.S. oil where there has been a long downturn in production and where the RMSD rise around the minimum is more symmetrical.
Fig. 7. As one follows the secondary Gaussians for world oil for increasing assumed values of the EUR, the locations of the peaks of the best-fit secondary Gaussians move to later times at a rate of
approximately 5.5 days for each billion barrels of oil added to the EUR. For assumed values of the EUR of 2.0 x 10^12 bbl, 3.0 x 10^12 bbl, and 4.0 x 10^12 bbl, this analysis suggests that the peaks
would occur respectively in the years 2004, 2019, and 2030 and the respective peak productions would be 26.5 x 10^9, 33 x 10^9, and 39.5 x 10^9 bbl per year.
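The quoted 5.5-day rate can be checked directly against the first two tabulated peaks (the peak dates are rounded to whole years, so different pairs give slightly different slopes):

```python
# Peaks from the caption: EUR 2.0e12 bbl -> 2004, EUR 3.0e12 bbl -> 2019
delay_years = 2019 - 2004        # 15 years of delay ...
added_gbbl = 1000                # ... for 1000 billion barrels added
days_per_gbbl = delay_years * 365 / added_gbbl
print(days_per_gbbl)  # 5.475, i.e. approximately 5.5 days per billion barrels
```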
Fig. 8. The world oil production data are shown, along with three best-fit secondary Gaussians corresponding to values of the EUR of 2.0 x 10^12 bbl, 3.0 x 10^12 bbl, and 4.0 x 10^12 bbl, with
respective dates of peak production of 2004, 2019, and 2030.
Fig. 9. The lower curve shows the recent history of the per capita world production of oil which had its largest value of approximately 2.2 liters per (person-day) in the 1970s and which has fallen
to approximately 1.7 liters per (person-day) by 1995. The upper curve shows what the recent history would have been if the world population had not changed since 1920. In the period from 1920 to 1995
the world population has had an average growth rate of approximately 1.5 % per year which has resulted in the population tripling in these 75 years.
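The tripling figure in this caption is ordinary compound growth; a one-line check of the stated numbers:

```python
growth_factor = 1.015 ** 75   # 1.5 % per year over the 75 years 1920-1995
print(round(growth_factor, 2))  # about 3.05 -- the population roughly triples
```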
Figure 10 This curve shows the reserves-to-production ratio for U.S. oil, and it is derived from the primary Gaussian for U.S. oil. A point on the curve shows, for that date, how many years U.S. oil
would last if production were held constant at the value it had on that date. For example, in the year 1980, the remaining reserves of U.S. oil would last approximately 30 years if production
remained unchanged from its 1980 value: production would then drop abruptly to zero.
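The ratio in this caption can be reproduced from the Table 1 parameters: remaining reserves are the area under the Gaussian from a given date onward, and the ratio divides that area by the production rate on the same date. A sketch (variable names are mine; `math.erfc` supplies the Gaussian tail integral):

```python
import math

# Primary Gaussian for U.S. oil production (Table 1)
Q_inf = 222.2e9   # EUR, bbl
t_M   = 1975.6    # year of peak production
S     = 27.56     # width parameter (standard deviation), years

def production(t):
    """Annual production rate, bbl/y, on the Gaussian curve."""
    peak = Q_inf / (S * math.sqrt(2 * math.pi))
    return peak * math.exp(-(t - t_M) ** 2 / (2 * S ** 2))

def remaining_reserves(t):
    """Area under the Gaussian from t to infinity, bbl."""
    return (Q_inf / 2) * math.erfc((t - t_M) / (S * math.sqrt(2)))

def rp_ratio(t):
    """Reserves-to-production ratio, years."""
    return remaining_reserves(t) / production(t)

print(rp_ratio(1980))  # roughly 30 years, as read from Fig. 10
```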
Figure 11 These curves show how the reserves-to-production ratios will vary with time for each of the three Gaussians of Fig. 8 which correspond to world EURs of 2.0, 3.0, and 4.0 x 10^12 bbl. For
example, if the world EUR is 3000 billion barrels, the reserves-to-production ratio in the year 2000 would be expected to be approximately 74 years.
Professor Emeritus: Department of Physics
University of Colorado
Boulder, Colorado: 80309-0390
AAB (303) 492-7016: Department of Physics (303) 492-6952: FAX (303) 492-3352
E-Mail: Albert.Bartlett@Colorado.EDU
updated 2000 March 13
Re: Advanced expression simplification
"F. Liekweg" <liekweg@ipd.info.uni-karlsruhe.de>
17 Jul 2005 13:58:28 -0400
From comp.compilers
Newsgroups: comp.compilers
Organization: University of Karlsruhe, Germany
References: 05-07-040
Keywords: optimize
Posted-Date: 17 Jul 2005 13:58:28 EDT
Truly, Igor Chudov wrote on 07/11/05 13:03:
> (x^2-1)/(x-1) simplifies to x+1. GOOD
> (x^100-1)/(x-1) "simplifies" to x^99+x^98+...+x+1. NOT GOOD.
But watch out, neither case preserves the division-by-zero
condition for x==1. The code around the expression might
actually depend on this behaviour, much like the elimination
of memory stores/loads might omit null-pointer checks that
the surrounding code might depend on.
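The point is easy to demonstrate concretely. A Python sketch (Python raises on float division by zero; in C the same integer expression would be undefined behaviour, so the exact failure mode is language-dependent):

```python
def original(x):
    return (x * x - 1) / (x - 1)   # divides by zero at x == 1

def simplified(x):
    return x + 1                   # defined everywhere

print(simplified(1.0))             # 2.0
try:
    original(1.0)
except ZeroDivisionError:
    print("original raised")       # behaviour the simplification silently removed
```

The two forms agree at every other point, which is exactly why a simplifier that only checks algebraic equivalence misses this.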
Also, if you are transforming floating-point expressions,
a seemingly more complicated expression might be numerically
stable and more robust wrt. rounding errors than the simplified
version. In particular, inside a fixed-point iteration, a longer
expression that is better conditioned might reduce the number
of iterations needed for convergence, so that simplifying it
would be counter-productive.
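A standard instance of this conditioning point (my example, not from the original post): these two expressions are algebraically identical, but the longer rewritten form survives rounding where the "simpler" one loses every digit to cancellation:

```python
import math

def naive(x):
    # Subtracting two nearly equal square roots cancels all significant digits
    return math.sqrt(x + 1) - math.sqrt(x)

def stable(x):
    # Algebraically identical (multiply by the conjugate), no cancellation
    return 1 / (math.sqrt(x + 1) + math.sqrt(x))

x = 1e16
print(naive(x))   # 0.0  -- every digit lost
print(stable(x))  # 5e-09 -- correct to full precision
```

A simplifier that rewrote `stable` into `naive` would be doing exactly the counter-productive transformation the post warns about.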
Florian Liekweg | Dot is a very forgiving language; it should
Universität Karlsruhe | be considered some form of religion.
================================= graphviz-interest@research.att.com ==
Mplus Discussion >> Path analysis with 3-way (multilevel?) data
Adam Hafdahl posted on Wednesday, October 23, 2002 - 3:46 pm
I'd appreciate any thoughts about how to analyze some data that may or may not be well-suited for multilevel analysis. (Similar queries posted to the SEMNET and MULTILEVEL lists have generated little
response, except a few suggestions that Mplus may offer something appropriate.)
A colleague in Agricultural Economics has mail-survey data from about 76 respondents (staff members at various agencies) who each rated 17 policies (all aimed at resolving the same natural resources
issue) on 6 qualities (e.g., fairness, efficacy, farmer resistance, preference) using a nine-point Likert-type response scale. She's interested mainly in relationships among qualities and has in mind
a particular path model with preference as the ultimate outcome and various correlations and paths among the other qualities. The respondents were a fairly representative convenience sample from the
types of agencies to which she'd like to generalize; although the policies cover most of the conceivable ones germane to this specific issue, her broader research agenda concerns policies regarding
other natural resources issues as well.
As someone who's relatively naive about multilevel modeling, I suspect there's a multilevel structure to these data -- at least in a repeated-measures sense -- but can't describe it or see how to
respect it when analyzing the relationships among qualities. What seems clear is that treating each respondent-policy pair as an independent 6-variate observation is inappropriate.
Some colleagues and I have considered a few strategies, but I'll defer describing them for now. I'd appreciate any advice, including references of published examples with a similar data structure.
Thanks in advance.
Adam Hafdahl
University of Missouri-Columbia
bmuthen posted on Wednesday, October 23, 2002 - 5:47 pm
I think you are right that this can be seen as a multilevel situation. You have 76 independent observations and in the multilevel perspective that would constitute level 2, i.e. the "cluster unit".
Level 2 covariates such as agency type can be included. Level 1 is the 17 policies, i.e. the "members" of the clusters. And you have multivariate outcomes - 6 qualities. In multilevel modeling, you
would treat the members of the clusters (say students in a school, or repeated measures in a person) as statistically equivalent, that is obeying the same model; of course, the means of the outcomes
can shift as a function of level 1 covariates. In the student/school analogy, the 17 policies should be thought of as a random sample of students in that school. Typically, multilevel modeling
considers level 1 relationships - such as the path analysis among the 6 outcomes that you refer to - as possibly varying across clusters (across your 76 respondents), either in terms of intercepts or
in terms of slopes. And, you want to study how much such variation there is and what predictors of this variation you might have.
If you think this captures what you want to do, Mplus is ready to do it.
Adam Hafdahl posted on Thursday, October 24, 2002 - 2:59 pm
Thanks for your explanation of how these data might be considered multilevel in nature. I'd like to clarify one point and pose three follow-up questions. By way of clarification, each respondent
rated the same 17 policies, so unlike the typical student/school situation the clusters (respondents) and members (policies) are fully crossed.
My questions:
1. Is this crossed design likely to influence how the analysis should be handled?
2. Is it important that these policies are nearly the entire population of conceivable policies relevant to this particular natural resources issue? By analogy, this might be like having access to
nearly all the children of interest in each school, or measuring nearly all time points of interest. Could these 17 policies be treated as fixed realizations of the Policy variable? (Again, we might
hope the model for policies about this particular issue would generalize to other issues, but we have no empirical evidence.)
3. Are the numbers of respondents and/or policies dangerously close to being too small for an appropriate analysis? Where might I look for guidelines about this?
bmuthen posted on Friday, October 25, 2002 - 9:19 am
Ah, I see. Perhaps I jumped to conclusions. A straightforward way of viewing this is as a data matrix with 76 rows and 17x6 columns. I think you said the 6 variables were going into a path analysis.
Perhaps the 17 policies, particularly if they cover nearly all policies, could be viewed as "fixed" rather than "random" effects, and therefore be treated as covariates in the path analysis.
Adam Hafdahl posted on Sunday, January 05, 2003 - 9:36 am
Upon returning to this problem after several weeks away from it, I'm still unable to think clearly about reasonable modeling strategies. The central issue that has confused me from the beginning is
how to handle the two sources of (co)variation: Is it covariation across respondents, policies, or both sources that we wish our path model to reflect? Frankly, I'm unable to distinguish among the
substantive questions each choice would address. For instance, what types of research questions would warrant modeling covariation among respondents versus among policies? Does it even make sense to
model covariation among both?
Although I've made a bit of progress by thinking about these data in terms of multivariate generalizability theory, three-mode factor analysis, and multilevel SEM, I'm still at a loss. It's quite
possible this problem is simpler than I'm making it.
My currently favored ad-hoc strategy is as follows: Assuming the same covariance matrix among the six qualities holds for all 1,292 respondent-policy pairs, first subtract the respondent and policy
means from the data and estimate this covariance matrix from the residuals, then use standard SEM software to conduct path analyses on this single covariance matrix. Am I correct that this strategy
models covariation among respondents and policies simultaneously? If this is a reasonable strategy, I have four remaining questions:
1. What is the appropriate sample size for standard errors and other results that need this (e.g., fit indices), and how can I trick the software into handling this appropriately (or adjust the
results manually)?
2. Is there a more straighforward way to implement this strategy, such as with dummy variables for respondents and policies?
3. Can the assumption of homogeneity of covariance matrices be relaxed to allow, say, the covariance matrix -- or perhaps coefficients in the path model -- to vary across respondents or across
policies?
4. With only one replication per respondent-policy pair, can I do anything to investigate whether there's a substantial Respondent x Policy interaction remaining after the marginal means are removed?
I would be grateful for any further thoughts about conceptualizing this problem and about the above strategy or other approaches (e.g., it's unclear to me exactly how to implement your suggestion to
treat Policy as a covariate or what the substantive interpretation of this would be). References to similar data sets or related methodologies would also be helpful.
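For what it's worth, the mechanics of the strategy sketched in this post — subtract respondent and policy marginal means, then form one pooled 6 x 6 covariance matrix from the residuals — amount to double-centering each quality. A numpy sketch on simulated ratings (dimensions from the thread; whether the 1,292 residual rows may be treated as independent is exactly the issue raised in the reply below):

```python
import numpy as np

rng = np.random.default_rng(1)
n_resp, n_pol, n_qual = 76, 17, 6

# Simulated ratings: respondents x policies x qualities
X = rng.normal(5, 2, size=(n_resp, n_pol, n_qual))

# Double-center each quality: remove respondent and policy marginal means
resp_means = X.mean(axis=1, keepdims=True)    # per respondent, per quality
pol_means = X.mean(axis=0, keepdims=True)     # per policy, per quality
grand = X.mean(axis=(0, 1), keepdims=True)
residuals = X - resp_means - pol_means + grand

# Pool the 76 * 17 residual vectors and form the 6 x 6 covariance matrix
flat = residuals.reshape(-1, n_qual)
cov = np.cov(flat, rowvar=False)
print(cov.shape)  # (6, 6)
```

After double-centering, the residual means over respondents and over policies are identically zero for every quality, which is what makes the pooled matrix free of the two marginal mean effects.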
bmuthen posted on Monday, January 13, 2003 - 10:35 am
If I understand your suggestion correctly, you are proposing the use of a 6 x 6 sample covariance matrix created from 76 x 17 = 1,292 observations. This would be acting as if you have independent
observations among all 1292 observations, whereas there is true independence only among the 76 respondents. I would instead lean towards looking at a 6 x 6 sample covariance matrix for 76 respondents
for one policy among the 17 at a time, or analyzing all policies jointly by putting policy as dummy covariates. Other Mplus Discussion readers are encouraged to jump into the discussion.
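The alternative suggested here — analyzing all policies jointly with policy entered as fixed dummy covariates — starts from a long-format layout with one row per respondent-policy pair. A numpy sketch of just that data-management step (simulated ratings again; 17 policy levels become 16 indicators plus an intercept):

```python
import numpy as np

rng = np.random.default_rng(2)
n_resp, n_pol, n_qual = 76, 17, 6
ratings = rng.normal(5, 2, size=(n_resp, n_pol, n_qual))

# Long format: one row per respondent-policy pair (76 * 17 = 1292 rows)
resp_id = np.repeat(np.arange(n_resp), n_pol)
pol_id = np.tile(np.arange(n_pol), n_resp)
outcomes = ratings.reshape(-1, n_qual)

# Dummy-code policy: 17 levels -> 16 indicators (policy 0 as the reference)
policy_dummies = (pol_id[:, None] == np.arange(1, n_pol)).astype(float)

# Design matrix for the fixed-effects-of-policy part: intercept + 16 dummies
design = np.column_stack([np.ones(len(pol_id)), policy_dummies])
print(outcomes.shape, design.shape)  # (1292, 6) (1292, 17)
```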
The self defined as a property of existing stuff.
Here is an important link that explains things in a similar way: http://www.philosophyforum.com/forum/sh … .php?t=225
let's say that a part of the brain is the self, D, and that this brain part has the neurons C.
C --> E = C-a, D(C) = D(E); you lose one neuron, but you do not lose your self.
E --> R = E+b, D(E) = D(R); you gain one neuron, but you don't lose your self.
Let's say Q = C-a+b-c+d....+z
*D(X) is the only function that can simulate self, no matter Richard's paradox. Note: I only give the neuron counters C, E and D to illustrate that they are only equal in amount possibly, but they do not belong to each other; hence you are not a unit of your distant past as a direct function of your neuron identity, unless the neurons all have the same identity, which would be hard to understand. But possibly they can, and then of course D(X) would not simulate the self, because ns = xs (they would have the same identity). An n:dimensional unity. My thesis now is that a p-dimensional world has a p+q dimensional unity; for instance, 2 dots (or more possibly along the 1D surface) have a 1D line as a unity, and 2 lines (or more along the same 2D surface) have a 2D surface as a unity. 2 surfaces have a 3D space as a unity, and 2 rooms have a 4D time space as a unity. But if there were more than 2 dots, then possibly a 2D or even nD surface would be the unity of the dots; it is an nD dot. Perhaps the universe started with an n:dimensional dot, and an n:dimensional dot has a unity in the n:th dimension. But perhaps an n:dimensional dot requires that in an n-q dimensional world there are exactly (or likely, since momentum & vector & energy are preserved) >2^q (n-q):dimensional objects, given that there could only be 2 (n-1):dimensional objects, that is, 2^(n-X) X-dimensional objects. And every dimension has at least a beginning and an end, the n-dimensional particle being the only exception. But then what is a dot? Know that a dot in a four-dimensional world, a particle, is a 4D dot, a 4D object, just like an n:dimensional world had this n:dimensional dot, this unity. A 4D world, as our own, does have p^(n-3) particles, but only p^(n-4) 4D dots, 4D-particles. A 4D dot seen along a one-dimensional line is not one dot; it is 2 1D dots, and not all dots can be seen, since they are not all along the line. But it is also true that in a 3D room you can only see the 3D expression of a 4D-particle; hence what we see is a 3D particle, and 4D particles are fewer than the 3D particles, and we cannot see their entirety from where we are. There are 2 3D expression particles along every 4D particle lifetime line, one in the beginning and one in the end. This all given, "the number of the Planck particles in the universe" is 2^(D-3) = e^(ln(2)(D-3)); you can get D by taking the logarithm of both sides, then dividing by ln(2) and then adding 3. But I just thought this up; it doesn't have to be right. Comments on this would be appreciated*
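Taking the post's formula purely at face value, the inversion it describes — logarithm of both sides, divide by ln(2), add 3 — does recover D. A quick numeric check (this verifies only the algebra, not the physical claim):

```python
import math

D = 7.5                           # arbitrary trial value of the dimension
N = 2 ** (D - 3)                  # the post's claimed particle count
D_back = math.log(N) / math.log(2) + 3
print(round(D_back, 6))           # 7.5
```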
And any neuron(s) ns of C (C(ns)) does not belong to Q, or is not Q(xs), where xs stands for neuron(s), i.e. one or more neurons in a neural network:
C(ns) ≠ Q(xs)
But D(C) = D(Q).
So D ≠ f(ns or xs) unless f(ns) = f(xs)
*Sorry, I'm a bit rusty; f(ns) etc. are possibly matrices/part matrices, i.e. M(ns)*
f(ns) has in common with f(xs) only the force that keeps the self together, the force that binds it; any other function f(ns) would not be f(xs).
If the self was temporary and f(ns) varies from self to not self, then:
D(ns,t1) --> R(ns,t2), D ≠ R. R(ns,t1) ≠ D(ns,t1)
What this would mean can be described as R(t) = w(t)D(t)^v(t) + j(t), not to dismiss anything.
Then the self is still not bound to mass. Since the self does not leave the body any day, and evolution would not keep it (since what's the point?), the self would just be a temporal change: new change,
new self. But self in this case would be as temporal as change, and not bound to a certain matter; it could emerge anywhere at any time, so the self is really not bound to temporal change. So given
that f is a certain function p that remains and is self:
D(f(ns)) = D(f(xs)), D = p(ns) = p(xs) = f(ns) that belongs to f(xs).
That was what I wanted to say. Because when you analyse these equations you will see that the self will always remain, since the self is force, and not change in force, which implies that any force will
do (note: if it isn't moving at all, and it still is the self, then anything can be the self).
And even if the self was a change, that change would still need to be dy/dx that would in all cases equal the fluctuation in the force. The fluctuation would always have a value, v. This value would
be the source of existence. The impact P of the fluctuations would be the size of the self. The self is rather dependent on a v-value then a certain value v. v defines the impact P. Twice the impact
=/= half the self, P only defines size, it does not define whether or not the self exist for any value > 0. The self is merely an impact P on an area A.
Everything has the property of self.
Any comments on whether this is true or not are deeply appreciated. I might wait a week or two before answering any questions or making any comment. Until then, feel free to point out flaws etc.
**posted similar stuff before on another existing forum**
Last edited by LQ (2007-01-15 21:18:04)
I see clearly now, the universe have the black dots, Thus I am on my way of inventing this remedy...
Re: The self defined as a property of existing stuff.
Haven't read the entire thing yet, but if you want to see issues with meta mathematics, which is what I believe this is, look up Richard's Paradox.
"In the real world, this would be a problem. But in mathematics, we can just define a place where this problem doesn't exist. So we'll go ahead and do that now..."
Re: The self defined as a property of existing stuff.
Thank you, Ricky, for telling me. If you still consider me wrong after the edit, then just say so in an additional reply.
*Further edits have been made*
You liked the proof, simron? Thank you!
And Ricky, are you Richard as in "Richard's paradox" on Wikipedia? Myself, I've never heard of it, but it did make sense. What is the paradox really referring to? What is metaphysics and what's it got
to do with what?
Last edited by LQ (2006-12-12 01:05:48)
I see clearly now, the universe have the black dots, Thus I am on my way of inventing this remedy...
Re: The self defined as a property of existing stuff.
Like I said, I only briefly looked over it and it gave the appearance of meta mathematics (not metaphysics, very different things). On further investigation, it is not.
let's say that a part of the brain is the self, D, and that this brainpart has C neurons.
C --> C-a=E, D(C) = D(C-a), you loose one neuron, but you do not loose your self.
E --> E+b, D(C) = D(E), you gain one neuron, but you don't loose yourself.
That is already assuming a lot, first off that the brain is entirely one's self. Many people believe a soul exists external to the body. I personally do not, but you have to realize that you are assuming
something which many don't accept.
Then you are taking away single neurons, trying to say that a single neuron can make the difference between one's self and not. First off, it is the interactions of neurons that make up the brain, and thus the complexity of the interactions may be what defines one's self. It is certainly not just a function of the number of neurons.
Now let's assume you are right, though: what makes a person is solely the number of neurons. This can be simplified into an earlier question: when is a ship itself? If you take all the planks off a ship and replace them all with new ones, is it still the same ship? The vast majority say no, it is different. But by replacing one plank of the ship, it is still the same. So somewhere, there is a number of planks which defines the ship. This is a paradox, because it is certainly obvious that there are not two groups of planks: those with the "ship" quality and those without.
By stating the above you assume that such a number does exist, something which I don't agree with.
I would go on, but I don't quite follow your notation. Could you please explain in more detail?
Re: The self defined as a property of existing stuff.
Definitely so, Ricky! In my proposal, the self is not a direct function or matrix of the n particles/masses, but rather a function/matrix q(neurons,mass,charge,distance,position,time) that remains from t1 to t2. This is deep stuff; I have to think hard. In short D is not (=/
That D(C) = D(E or F) is my way of describing that D[u1,u2,u3...] = D[w1,w2,w3...], or that the neuron matrix is independent of the self if you exchange few enough neurons. But clearly the self cannot be in the brain cell you take away, so D must surely be proportional to the neurons existing; but that does not define the self, more than where it is and how big it is, unless you have other factors. I hope I'm not being too serious for your taste. Hope I'm being social enough.
Last edited by LQ (2006-12-12 02:06:05)
Re: The self defined as a property of existing stuff.
You're using things you aren't defining.
let's say that a part of the brain is the self, D, and that this brainpart has C neurons.
C --> C-a=E, D(C) = D(C-a), you lose one neuron, but you do not lose your self.
E --> E+b, D(C) = D(E), you gain one neuron, but you don't lose yourself.
How is it that D is a function? What is a and b? What is E?
Re: The self defined as a property of existing stuff.
Ricky wrote:
You're using things you aren't defining.
let's say that a part of the brain is the self, D, and that this brainpart has C neurons.
C --> C-a=E, D(C) = D(C-a), you lose one neuron, but you do not lose your self.
E --> E+b, D(C) = D(E), you gain one neuron, but you don't lose yourself.
How is it that D is a function? What is a and b? What is E?
a and b are single neurons. D is a function/matrix that loses its matrix value if C were to cease to exist (vaporise, disappearing without leaving a trace). It is probable that it is a multiple of the self; this is already explained, so I won't venture further here. Hence D is a matrix that requires C. Sorry for taking time, I pressed ctrl-z. Anywho, where was I?
The matrix value is, at any given time, the current matrix D(xs).
Where exactly is the shortcut in this, can you tell me?
Oh no, I missed Star Trek; the world is coming to an end!
I guess we are all just an n-dimensional solution.
Last edited by LQ (2006-12-12 02:40:53)
Re: The self defined as a property of existing stuff.
I'm assuming subtraction of neurons just means subtraction of the number of neurons?
D is a function/matrix that loses its matrix value
How exactly does something lose its matrix value, what is a matrix value, and what does D become when this happens?
D is a function/matrix that loses its matrix value if C were to cease to exist (vaporise, disappearing without leaving a trace)
But I thought C was the number of neurons. How is it that a number can cease to exist? Do you mean that C neurons cease to exist?
C --> C-a=E, D(C) = D(C-a), you lose one neuron, but you do not lose your self.
E --> E+b, D(C) = D(E), you gain one neuron, but you don't lose yourself.
Why the "C -->" and the "E -->"? What are you trying to say here? Typically, arrows imply an if-then statement. That doesn't seem to make sense here.
Re: The self defined as a property of existing stuff.
Hrhrrm, OK attention please!
Ricky wrote:
I'm assuming subtraction of neurons just means subtraction of the number of neurons?
D is a function/matrix that loses its matrix value
How exactly does something lose its matrix value, what is a matrix value, and what does D become when this happens?
D is a function/matrix that loses its matrix value if C were to cease to exist (vaporise, disappearing without leaving a trace)
But I thought C was the number of neurons. How is it that a number can cease to exist? Do you mean that C neurons cease to exist?
C --> C-a=E, D(C) = D(C-a), you lose one neuron, but you do not lose your self.
E --> E+b, D(C) = D(E), you gain one neuron, but you don't lose yourself.
Why the "C -->" and the "E -->"? What are you trying to say here? Typically, arrows imply an if-then statement. That doesn't seem to make sense here.
1. The arrow means "eventually reaches", as in the lim(x-->100) stuff.
2. C is the neurons; they cannot cease to exist, but given that they did, the matrix D would become the empty matrix [], and it does so because the function in every cell of the matrix loses its
I'd be happy to answer any more of your questions, just ask on. I hope there is reason to believe me. In some points. Go on Ricky, post ahead!
We might even save the world. Or we might blow up the world and survive as being stupid matter. Both choices are OK!
Re: The self defined as a property of existing stuff.
So you're saying C eventually approaches C-a? That doesn't make sense.
Re: The self defined as a property of existing stuff.
Well, it's true, isn't it? First C and then C-a does mean C-->C-a, even though the leap is a and even if a has an identity. Somehow, I imagine it works. But anyway, if you have a better way, just tell me. I'd be happy to change some. I'm thinking of this nD world. It's a dot with under(n)-dimensional dots that smear together seen from that dimension. As a (big) nD dot. So that's how all particles can have the same identity: simply say they (the particles) were you all the time, connected by gravity for instance, and if so, that q(neurons,mass,charge,distance,position,time) is actually a matrix q(particles,mass,charge,distance,position,time) that connects all things, just a teeny-weeny bit; and that truly, the conclusion that the force does the self is self-sustaining, since all masses affect each other with a force.
"f(ns) has only in common with f(xs) the force that keeps the self together, the force that binds it; any other function f(ns) would not be f(xs)." And since the force keeps all things connected, the mass does not need an identity, since they are all part of the same self, even though the self cannot notice or comprehend in particular that we are interconnected; even though we are, by force and hence physically, and that's the only kind of interconnection there is. So given that it is the force that binds us that makes us ourself, and anything itself, since provenly force is the function "self" and not, for instance, position, since it is not in common for all parts of the self; and momentum would not change the fact, since it would need to be an in-common factor, and if the self would have that, then everything would have that. But that would be impossible, since they would probably need to share vector 2
I guess I make little sense to you. Have you read it all yet? I guess you think I'm just a "crackpot" kind of guy. But you know, I've thought about this for a long time. Probably some of it (if not most of it) makes sense, if you know how I use the math "tools".
You've been like a good friend, debating this here. Thank you so far.
(Note: that's a friendship kiss, well, a smack on the cheek really)
And I hope you slept well; I guess you went to bed soon after you posted that. I know I did, GMT+1.
Anyway, 184 views. Awesome! That's about as many views as I got on the other forum in a month and a half!
On my best thread!
I must have posted something something -DOH-
Last edited by LQ (2006-12-13 00:27:00)
Re: The self defined as a property of existing stuff.
Well, it's true, isn't it? First C and then C-a does mean C-->C-a, even though the leap is a and even if a has an identity.
Don't think so. C and C-a must always be a finite distance away from each other, no matter what the value of C. What do you mean "a has an identity"?
Honestly, I haven't been able to make it even a quarter of the way through the first paragraph of your first post, because by the time I get there absolutely nothing is making sense.
I guess you think I'm just a "crackpot" kind of guy.
No, well, not yet anyway. I just think you're having a real hard time communicating.
Re: The self defined as a property of existing stuff.
Ricky wrote:
Well, it's true, isn't it? First C and then C-a does mean C-->C-a, even though the leap is a and even if a has an identity.
Don't think so. C and C-a must always be a finite distance away from each other, no matter what the value of C. What do you mean "a has an identity"?
Honestly, I haven't been able to make it even a quarter of the way through the first paragraph of your first post, because by the time I get there absolutely nothing is making sense.
I guess you think I'm just a "crackpot" kind of guy.
No, well, not yet anyway. I just think you're having a real hard time communicating.
So you are saying that you cannot remove a from C. Then you cannot remove, for instance, a branch from a tree or a penny from a wallet. That's what I meant. Think of C as a unit, made from smaller units. If you remove a from the unit, then most of the unit remains. Makes sense?
The finite distance thingy, what's that all about? If a goes to outer space, does C become C minus a then?
Re: The self defined as a property of existing stuff.
How is it that C can approach C - a? The difference between C and C-a will always be a.
Re: The self defined as a property of existing stuff.
Would it be simpler for you if I wrote C --> E, E = C-a?
Re: The self defined as a property of existing stuff.
Would it be simpler for you if I wrote C --> E, E = C-a?
Well, I wouldn't say simpler, I would say makes sense.
Let's say Q = C+a-b+c-d....-z
Are you attempting to say that you add and then take away neutrons? Also, are you saying that this process is finite?
Re: The self defined as a property of existing stuff.
Nonono, don't start with neutron stuff. Neuron.
The process from C to Q is finite. I define Q as the self when all neurons in C have been exchanged.
Clearly the self remains through this process, so D as a function of C remains even though C-->Q.
Don't know what the neutrons actually do anyway. But just because they all look the same to us doesn't mean they are. Except, of course, possibly for an nD solution.
I hope this might help a bit:
Thank you for all your views and posts, everyone!
Star Member
Re: The self defined as a property of existing stuff.
I don't even believe what I just read!!?! This guy has some amazing notions. I have no idea what anybody said. I think therefore I am?
igloo myrtilles fourmis
Super Member
Re: The self defined as a property of existing stuff.
Okay another paradox involving infinity or infinitesimal.
One grain of wheat is weightless. (No weight)
Two are also weightless.
1 million grains are weightless.
But how can a pile of wheat be so heavy as to break a camel's back?
The only way to solve this paradox is to admit the premise of a weightless grain is wrong.
Or, as some mathematicians would like, a pile of wheat consists of infinitely many grains.
But the new paradox would be: how can they obtain one grain from infinitely many grains? By reducing it? By taking a portion of it?
Re: The self defined as a property of existing stuff.
John E. Franklin wrote:
I don't even believe what I just read!!?! This guy has some amazing notions. I have no idea what anybody said. I think therefore I am?
Well, I think of this as one of my most important works.
"I think, therefore I am the self" is the conclusion that there are requirements of sophisticated thinking in order to be. But surely you don't need to think a certain thing; any thought would do, actually. And then we have the argument "doesn't anything fulfill that requirement?". And since we don't need to remember anything to be ourselves, since we don't die if we forget something, and we didn't know a lot of things before we learned them, I guess that's so. To prove this would be a thing beneficial for all parties. The question is only how to do it... if this wasn't proof enough.
And thank you for your encouraging reply.
Last edited by LQ (2006-12-22 22:37:18)
Re: The self defined as a property of existing stuff.
Added link to top post. Feel free to comment. I loved your views, they were great!
I see clearly now, the universe have the black dots, Thus I am on my way of inventing this remedy... | {"url":"http://www.mathisfunforum.com/viewtopic.php?pid=53500","timestamp":"2014-04-17T12:53:29Z","content_type":null,"content_length":"50630","record_id":"<urn:uuid:ab1fbd81-35a9-4582-9f9a-6ee1bafea80b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00058-ip-10-147-4-33.ec2.internal.warc.gz"} |
Name for a topological space where every closed set contains a closed point
A coauthor and I have stumbled upon a useful topological property -- namely, we are interested in the property that every nonempty closed set contains a closed point. However, neither of us is a topologist, so we don't know whether this exists in the literature yet. Does it? If so, what is it called, and where can I find information on it? If not, I'm happy to just call such a space "pearled" (intuition: the closed sets are the oysters), but I thought I'd ask here before publishing an existing definition under a new name.
By the way, every T$_1$-space is pearled, as is every finite T$_0$-space and every spectral space, but the property of being pearled is independent of the T$_0$-property. However, I would be
interested even in a name for a pearled T$_0$-space. Is this the same as a T$_0$-space whose lattice of closed sets is atomic?
gn.general-topology terminology reference-request
1 Another important example: The topological space underlying a quasi-compact scheme is pearled (and $T_0$). But I'm sure that you already know that :). – Martin Brandenburg Nov 6 '11 at 15:07
closed point? Are you talking about a Scattered space? – Michael Blackmon Nov 6 '11 at 17:00
2 a related notion, which I've only heard for schemes though, is that a space is said to be Jacobson if every closed subset is the closure of the subset of its closed points. – Laurent Berger Nov 6
'11 at 18:44
1 Any quasicompact $T_0$ space is pearled, since you can just keep intersecting closed sets until you reach a minimal closed set which must be a closed point. – Eric Wofsey Nov 6 '11 at 20:36
1 In answer to your last question, yes, $T_0$+pearled is equivalent to $T_0$+atomic lattice of closed sets. $T_0$ implies that the atomic closed sets are exactly the closed singletons. – David
Milovich Nov 8 '11 at 16:59
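The finite case discussed in the comments is small enough to check by brute force. The sketch below (Python; all names are mine, and the enumeration is restricted to a 3-point set) lists every topology on {0, 1, 2} and confirms that each T$_0$ one is pearled, i.e. that every nonempty closed set contains a point whose singleton is closed:

```python
from itertools import chain, combinations

points = frozenset({0, 1, 2})

def all_topologies():
    """Yield every family of subsets of `points` that contains {} and the
    whole space and is closed under union and intersection."""
    subs = [frozenset(c) for r in range(len(points) + 1)
            for c in combinations(sorted(points), r)]
    for cand in chain.from_iterable(
            combinations(subs, r) for r in range(len(subs) + 1)):
        fam = set(cand)
        if (frozenset() in fam and points in fam
                and all(a | b in fam and a & b in fam
                        for a in fam for b in fam)):
            yield fam

def closed_sets(top):
    return {points - u for u in top}

def is_pearled(top):
    """Every nonempty closed set contains a closed point."""
    cl = closed_sets(top)
    return all(any(frozenset({x}) in cl for x in c) for c in cl if c)

def is_t0(top):
    """Any two distinct points are separated by some open set."""
    return all(any((x in u) != (y in u) for u in top)
               for x in points for y in points if x < y)

tops = list(all_topologies())
t0_pearled = [is_pearled(t) for t in tops if is_t0(t)]
print(len(tops), len(t0_pearled), all(t0_pearled))  # 29 19 True
```

The counts 29 and 19 match the known numbers of topologies and of T$_0$ topologies on 3 labeled points, and the indiscrete topology shows the T$_0$ hypothesis matters: its only nonempty closed set is the whole space, which contains no closed point.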
Browse other questions tagged gn.general-topology terminology reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/80208/name-for-a-topological-space-where-every-closed-set-contains-a-closed-point","timestamp":"2014-04-21T08:12:55Z","content_type":null,"content_length":"53368","record_id":"<urn:uuid:b9b7cb27-c9b2-4538-b764-7fd45577d10b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00416-ip-10-147-4-33.ec2.internal.warc.gz"} |
Permutation and Combination
February 8th 2009, 04:30 AM #1
Jan 2009
Find how many 5-digit numbers can be formed from digits 1,2,3,4,5, if
a) the numbers form must be odd,
b) the numbers formed must be divisible by 4,
c) the odd digits must occupy even positions (i.e. 2nd and 4th) and the even digits must occupy odd positions (i.e. 1st, 3rd, and 5th).
For the first one, in order to be odd the last digit must be 1,3 or 5.
Place the 1 at the end and arrange the other 4 digits in 4!=24 ways.
Place the 3 at the end and arrange the other 4 digits in 4!=24 ways.
Same for the 5
24+24+24=72 different numbers.
A number is divisible by 4 exactly when the number formed by its last two digits is divisible by 4.
So of all the even numbers, how many end with two digits that form a number divisible by 4?
c) the odd digits must occupy even positions (i.e. 2nd and 4th) and the even digits must occupy odd positions (i.e. 1st, 3rd, and 5th).
This seems like a strange question: you will always have at least one odd digit occupying an odd position, since there are three odd digits but only two even positions.
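Since there are only 5! = 120 permutations, the answers are easy to verify by exhaustive enumeration. A quick Python sketch (variable names are mine; part (c) is counted both as literally stated and with the positions swapped, which is presumably the intended reading):

```python
from itertools import permutations

nums = [''.join(p) for p in permutations('12345')]   # all 120 five-digit numbers

odd  = sum(int(n) % 2 == 1 for n in nums)   # (a): last digit is 1, 3 or 5
div4 = sum(int(n) % 4 == 0 for n in nums)   # (b): last two digits divisible by 4

# (c) as literally stated: odd digits in the 2nd/4th places, even in 1st/3rd/5th.
# Impossible -- there are three odd digits but only two even positions.
literal = sum(all(int(d) % 2 == 1 for d in n[1::2]) and
              all(int(d) % 2 == 0 for d in n[0::2]) for n in nums)

# (c) with the roles swapped (presumably the intended reading):
swapped = sum(all(int(d) % 2 == 1 for d in n[0::2]) and
              all(int(d) % 2 == 0 for d in n[1::2]) for n in nums)

print(odd, div4, literal, swapped)  # 72 24 0 12
```

The divisible-by-4 count confirms the hint: the only valid two-digit endings from distinct digits of 1-5 are 12, 24, 32 and 52, each completed in 3! ways, giving 4 × 6 = 24.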
February 8th 2009, 06:12 AM #2
February 15th 2009, 12:23 PM #3 | {"url":"http://mathhelpforum.com/discrete-math/72433-permutation-combination.html","timestamp":"2014-04-18T17:55:11Z","content_type":null,"content_length":"37188","record_id":"<urn:uuid:7cfdf019-1e5d-4cd3-9d0d-200193d47a51>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00349-ip-10-147-4-33.ec2.internal.warc.gz"} |
"Bell" or "Jabotinsky"-matrix - What's the canonical name (if any)?
I'm just reading J. Cigler's script for his talks "Konkrete Analysis", where I find the term "Jabotinsky matrix" for the matrix which I've (informally) been taught to call the "Bell matrix" (see at least Wikipedia, where it is subsumed under "Carleman matrix", but it can be found in various papers). Then I find the same in D. Knuth's 1992 article on "convolution polynomials", where he develops/refers to that same idea and attributes it to Eri Jabotinsky (1947), giving as an example the formal power series for the "half-iterate" of $ \exp(x)-1$.
(The Carleman matrix is a transpose and similarity-transform using the factorials.)
Q: is one of the names "canonized"? And if, which?
Q2: And, if not: which should one prefer in writing?
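For reference, the object in question is easy to compute for truncated power series. The sketch below (Python; function names and the truncation scheme are mine) builds the matrix in the Carleman convention, M[j][k] = [x^k] f(x)^j, the Bell matrix being its transpose, and checks the defining property that composition of series becomes matrix multiplication, using f(x) = exp(x) - 1 from Knuth's example and g(x) = x/(1-x):

```python
from math import factorial

N = 6  # truncate all series at degree N

def mul(a, b):
    """Product of two truncated power series (coefficient lists)."""
    c = [0.0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def carleman(f):
    """Rows M[j] = coefficients of f(x)**j; the Bell matrix is the transpose."""
    M, p = [], [1.0] + [0.0] * N
    for _ in range(N + 1):
        M.append(p)
        p = mul(p, f)
    return M

def compose(f, g):
    """f(g(x)) as a truncated series; assumes g[0] == 0."""
    out, p = [0.0] * (N + 1), [1.0] + [0.0] * N
    for fi in f:
        out = [o + fi * pi for o, pi in zip(out, p)]
        p = mul(p, g)
    return out

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(N + 1))
             for j in range(N + 1)] for i in range(N + 1)]

f = [0.0] + [1 / factorial(k) for k in range(1, N + 1)]  # exp(x) - 1
g = [0.0] + [1.0] * N                                    # x / (1 - x)

lhs = carleman(compose(f, g))
rhs = matmul(carleman(f), carleman(g))
ok = all(abs(a - b) < 1e-9 for ra, rb in zip(lhs, rhs) for a, b in zip(ra, rb))
print(ok)  # True: M[f o g] = M[f] . M[g], composition becomes matrix product
```

Because f and g have zero constant term, the truncated identity holds exactly up to degree N; only the convention (and hence which side transposes) differs between the Carleman and Bell forms.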
reference-request matrices names
Browse other questions tagged reference-request matrices names or ask your own question. | {"url":"http://mathoverflow.net/questions/109702/bell-or-jabotinsky-matrix-whats-the-canonical-name-if-any","timestamp":"2014-04-16T22:05:07Z","content_type":null,"content_length":"45740","record_id":"<urn:uuid:4121fe8b-5e96-4b2b-928d-1fbeb1990e3b>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00663-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating the floor area of a room
Use this calculator to work out the floor area of a room
If your room is rectangular, then simply enter the width and depth of the room and the unit of measure, and the resulting floor area of the room will be calculated in several different units of measure, both metric and imperial. This is useful if you are measuring the room in one unit but, for example, the floor tiles you want are measured in a different unit.
If, on the other hand, your floor area is not a perfect rectangle, try breaking the room down into rectangular sections and entering the dimensions for each section into the calculator below, adding each result together to give you the total floor area.
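The procedure described above -- summing the areas of the rectangular sections, then converting units -- amounts to the following sketch (Python; the section sizes and the metre-to-foot conversion shown are illustrative):

```python
SQFT_PER_SQM = 1 / 0.3048 ** 2   # ~10.7639 square feet per square metre

def floor_area(sections):
    """Total floor area of a room split into (width, depth) rectangles,
    all measured in the same unit."""
    return sum(width * depth for width, depth in sections)

# A hypothetical L-shaped room measured in metres, split into two rectangles:
area_m2 = floor_area([(4.0, 3.0), (2.0, 1.5)])
print(f"{area_m2} m^2 = {area_m2 * SQFT_PER_SQM:.2f} ft^2")  # 15.0 m^2 = 161.46 ft^2
```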
Use of the calculators within this website is free. Whilst every effort has been made to ensure the accuracy of the calculators published within this website, you choose to use them and rely on any results at your own risk. We will not under any circumstances accept responsibility or liability for any losses that may arise from a decision that you may make as a result of using these calculators.
Similarly, we will not be requesting a share of any profits you may make as a result of using the calculators. | {"url":"http://www.online-calculators.co.uk/diy/roomarea.php","timestamp":"2014-04-16T10:15:32Z","content_type":null,"content_length":"18173","record_id":"<urn:uuid:53ed964c-a9ef-4efa-8d25-d58022e05cdb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |