content stringlengths 86 994k | meta stringlengths 288 619 |
|---|---|
Math Help
November 3rd 2009, 03:10 PM #1
Junior Member
Sep 2009
Define a function f by f(x)=x+2ln(x) and let g(x)=f(x)-A-B(x-1)-C(x-1)^2. Find the values of A, B, and C so that L=lim x→1 g(x)/(x-1)^3 exists and is finite.
L'Hospital's Rule states that a limit $L=\lim_{x\to a}\frac{f(x)}{g(x)}=\lim_{x\to a}\frac{f'(x)}{g'(x)}$ if the limit is indeterminate, that is, $\frac{f(a)}{g(a)}=\frac00$
Here, if we substitute x=1 into the function $\frac{x-\ln x-A-B(x-1)-C(x-1)^2}{(x-1)^3}$, we get $\frac{1-A}{0}$. Notice that if the numerator is NOT zero, then the limit will run off to $\pm\
infty$. Therefore, it is necessary that A=1 so the limit is indeterminate, otherwise the limit will not converge at all. Apply L'Hospital's Rule...
Now $L=\lim_{x\to1}\frac{1+2/x-B-2C(x-1)}{3(x-1)^2}$, so repeat the process to find B. Keep going and you can get A,B,C, and also the final value of the limit L.
November 4th 2009, 06:47 PM #2
Senior Member
Apr 2009
Atlanta, GA | {"url":"http://mathhelpforum.com/calculus/112243-solving.html","timestamp":"2014-04-18T12:37:56Z","content_type":null,"content_length":"32131","record_id":"<urn:uuid:69e5c504-88c0-40f6-9b61-b46fc13b87c2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00003-ip-10-147-4-33.ec2.internal.warc.gz"} |
A partial order set with a non-unique maximal element
August 29th 2009, 07:09 AM
A partial order set with a non-unique maximal element
Find an example of a partial order set with a maximal element that is not unique.
My question here is, since a maximal element is define as:
m is a max of a set X if $m \leq x$, then x = m.
Not unique means that there are at least two, but how can I find two such elements? If we have two different max, say m and n, then either $m \leq n$ or $n \leq m$ since $m eq n$, but then by
definition of a maximal element, m is n...
Thank you!
August 29th 2009, 07:25 AM
let $x\le y$ if $x^2\le y^2$. And to this on the interval $[-a,a]$ for $a>0$.
August 29th 2009, 07:28 AM
Remember this is a partial order not a total ordering.
Consider $\{1,2,3,4,5,6,7,8,9\}$ ordered by divides, $<br /> x \prec y \Leftrightarrow x|y$.
Does that answer the question?
August 29th 2009, 07:57 AM
Or just make up something like {a, b, c, d} with a< b< d, a< c< d with b and c "non-comparable". ("Non-comparable meaning none of b< c, c< b, or b= c are true. d is maximal (in fact, the maximum)
because every other member is less than it. That would not be possible in a linearly ordered set but is in a partially ordered set.)
If you want something a little more concrete use sets. Supose A= {a}, B= {a, b}, C= {a, c}, D= {a, b, c} with " $\le$" defined by "inclusion": [tex]X\le Y[tex] if and only if X is a subset of Y.
That is partially ordered because neither B nor C is a subset of the other. $B\le C$ is false because B is not a subset of C. $C\le B$ is not true because C is not a subset of B. Clearly B= C is
not true. So this is a partially ordered set in which D is the maximum (and so "maximal").
Yet another example, where D is maximal but not "the" maximum, is {A, B, C, D} where A= {a}, B= {a,b}, C= {c}, D= {c, d}. Now both B and D are maximal. | {"url":"http://mathhelpforum.com/differential-geometry/99673-partial-order-set-non-unique-maximal-element-print.html","timestamp":"2014-04-17T18:36:34Z","content_type":null,"content_length":"8448","record_id":"<urn:uuid:8696bb07-a44f-4dc8-a42c-ed2e2f75fba7>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Regular Singular Points
July 7th 2007, 12:48 PM #1
Jul 2007
Regular Singular Points
The problem is taken from "Elementary Differential Equations and Boundary Value Problems," 8th Edition. By Boyce. Section 5.4 (Page 271), problem 1. The problem states to find all singular points
of the given equation and determine whether each one is regular or irregular. How would I approach this problem?
The problem is x*y'' + (1 - x)*y' + x*y = 0
The problem is taken from "Elementary Differential Equations and Boundary Value Problems," 8th Edition. By Boyce. Section 5.4 (Page 271), problem 1. The problem states to find all singular points
of the given equation and determine whether each one is regular or irregular. How would I approach this problem?
What was the problem?
A differential equation of the form: $P(x)y'' + Q(x)y' + R(x)y = 0$ is said to have a "regular singular point," $x_0$, if:
$\lim_{x \to x_0} (x - x_0) \frac {Q(x)}{P(x)} < \infty$
$\lim_{x \to x_0} (x - x_0)^2 \frac {R(x)}{P(x)} < \infty$
That is, if both the above limits are finite for the point $x_0$, then $x_0$ is said to be a regular singular point
Now try the problem and post the solution if you wish, so we can check it
Note: We say $x_0$ is a singular point if $\frac {Q(x)}{P(x)}$ and/or $\frac {R(x)}{P(x)}$ are discontinuous functions at $x_0$. We classify the singular point as "regular" or "irregular" by the
method above
July 7th 2007, 12:55 PM #2 | {"url":"http://mathhelpforum.com/calculus/16600-regular-singular-points.html","timestamp":"2014-04-17T02:03:52Z","content_type":null,"content_length":"35711","record_id":"<urn:uuid:d07c0580-9113-4ea7-a549-fe09f1431ba6>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
DMOZ - Science: Math: Logic and Foundations: Nonstandard Logics and Extensions: Intuitionistic Logic
[ ] [Search] [the entire directory ]
• A Bibliography of Constructive Mathematics - Compiled by Erik Palmgren.
• Confessions of a Formalist, Platonist Intuitionist - Autobiographical article by Fred Richman, describing his encounter with intuitionism.
• Constructive Mathematics - Maintained by Fred Richards.
• Intuitionistic Logic - A short entry in the Stanford Encyclopaedia of Philosophy by Joan R. Moschovakis.
• Intuitionistic Logic - A very brief overview of the subject by Alex Sakharov from MathWorld.
• Intuitionistic logic - Wikipedia (free encyclopedia) article.
• Intuitionistic Topology and Foundations of Constructive Mathematics - Math page of Frank Waaldijk, containing articles and PhD thesis on foundations of constructive mathematics and intuitionistic
topology. Also links to other mathematicians in this field.
• PlanetMath: Intuitionistic Logic - An introduction to the subject, a mathematical philosophy introduced by the Dutch mathematician, L E J Brouwer.
• Porgi - Porgi is a Proof-Or-Refutation Generator for Intuitionistic propositional logic, implementated by Allen Stoughton. Given a sequent, Porgi either finds a minimally sized, normal natural
deduction of the sequent, or it finds a "small", tree-based Kripke countermodel of the sequent. It is written in Standard ML.
• Possibility Semantics for Intuitionistic Logic - Paper by M J Cresswell. [PDF]
Volunteer to edit this category. | {"url":"http://www.dmoz.org/Science/Math/Logic_and_Foundations/Nonstandard_Logics_and_Extensions/Intuitionistic_Logic/","timestamp":"2014-04-19T12:47:58Z","content_type":null,"content_length":"22373","record_id":"<urn:uuid:3416c3c8-20c6-488a-83c0-ecda77e364d8>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
Do inverse images respect flabby sheaves?
up vote 2 down vote favorite
Let $i:Y\to X$ be a closed embedding of varieties, and let $S$ be a flabby \'etale (or Nisnevich) sheaf of abelian groups on $X$. Is $i^*S$ flabby also? I am mostly interested in the case when $S=i_
{x*}C$, where $C$ is a constant sheaf on a geometric (or Nisnevich) not necessarily closed (!!) point $x$ of $X$, $i_x:x\to X$ is the corresponding morphism. In this particular case the statement
seems easy to prove; yet I wonder whether it follows from some general statement, and what are the 'standard' references for this. Are any additional restrictions needed here?
Also, I wonder whether sheaves of the type $i^*i_{x*}$ were studied somewhere in the literature?
ag.algebraic-geometry sheaf-theory etale-cohomology reference-request
1 At least in topology the corresponding statement is true when $Y$ has a fundamental system of paracompact neighborhoods. See Godement, chapter II, \S 3.3. In particular, if $X$ is metrizable
(which is the case in most situations) the restriction of a flabby sheaf to any subspace of $X$ is flabby (ibid, corollary 2). – algori Oct 9 '11 at 21:21
The only thing I can say right now is that the result you want is not general nonsense, i.e. it is not a general property of topoi. In general, the functors that conserve flabby sheaves are $u_*$
($u$ any morphism of topoi) and $i^!$ ($i$ a closed embedding), cf SGA 4 V 4.9 and 4.11. – Alex Oct 10 '11 at 20:31
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged ag.algebraic-geometry sheaf-theory etale-cohomology reference-request or ask your own question. | {"url":"http://mathoverflow.net/questions/77622/do-inverse-images-respect-flabby-sheaves","timestamp":"2014-04-18T10:53:41Z","content_type":null,"content_length":"49853","record_id":"<urn:uuid:edea128d-d64b-4750-a75c-2e27b09664a6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Santa Fe Springs Algebra 1 Tutor
...Currently, I have been a math tutor for 9 years for grades 4 to 12 in the East San Gabriel Valley. My success in helping students improve their grades is evident in that 95% of my students
improved their grades from C, D, or F to A and B. 90% of my SAT students' scores are above 700 points. Thi...
9 Subjects: including algebra 1, calculus, geometry, Chinese
...Everyday,I would give him practice exercises worksheet in mathematics. What I have learned in my experience as a teacher and as a tutor. First, It is very important to guide my students the
fundamentals in mathematics.
3 Subjects: including algebra 1, statistics, algebra 2
...With two sons in college (USC and Cal), I need to make some extra money in the late afternoons or early evenings. I have tutored students from several Pasadena area high schools, both public
and private, and will be happy to provide recommendations. I graduated from Dartmouth College in 1984, and have interviewed high school seniors for admission to that school.
5 Subjects: including algebra 1, algebra 2, SAT math, linear algebra
...I have taught Human anatomy and physiology in physical therapy school. I studied Chinese in Hong Kong from Grade 1 to 12 in high school including Chinese literature. I can speak and write
Chinese (Cantonese and Mandarin) fluently throughout my life.
33 Subjects: including algebra 1, chemistry, geometry, statistics
...This makes such a difference for my students and is reflected in their success. Prealgebra is the key stepping stone between Elementary Math and Algebra 1. I do not recommend skipping
12 Subjects: including algebra 1, chemistry, algebra 2, trigonometry | {"url":"http://www.purplemath.com/santa_fe_springs_ca_algebra_1_tutors.php","timestamp":"2014-04-21T12:52:09Z","content_type":null,"content_length":"24215","record_id":"<urn:uuid:745d6491-9044-4775-87b8-f7f7aeb362d4>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00558-ip-10-147-4-33.ec2.internal.warc.gz"} |
Looking at a set of equations
, 2001
"... In engineering and applied mathematics, polynomial systems arise whose solution sets contain components of different dimensions and multiplicities. In this article we present algorithms, based
on homotopy continuation, that compute much of the geometric information contained in the primary decomposi ..."
Cited by 56 (26 self)
Add to MetaCart
In engineering and applied mathematics, polynomial systems arise whose solution sets contain components of different dimensions and multiplicities. In this article we present algorithms, based on
homotopy continuation, that compute much of the geometric information contained in the primary decomposition of the solution set. In particular, ignoring multiplicities, our algorithms lay out the
decomposition of the set of solutions into irreducible components, by finding, at each dimension, generic points on each component. As by-products, the computation also determines the degree of each
component and an upper bound on itsmultiplicity. The bound issharp (i.e., equal to one) for reduced components. The algorithms make essential use of generic projection and interpolation, and can, if
desired, describe each irreducible component precisely as the common zeroesof a finite number of polynomials.
- Journal of Complexity , 1999
"... Many applications modeled by polynomial systems have positive dimensional solution components (e.g., the path synthesis problems for four-bar mechanisms) that are challenging to compute
numerically by homotopy continuation methods. A procedure of A. Sommese and C. Wampler consists in slicing the com ..."
Cited by 50 (24 self)
Add to MetaCart
Many applications modeled by polynomial systems have positive dimensional solution components (e.g., the path synthesis problems for four-bar mechanisms) that are challenging to compute numerically
by homotopy continuation methods. A procedure of A. Sommese and C. Wampler consists in slicing the components with linear subspaces in general position to obtain generic points of the components as
the isolated solutions of an auxiliary system. Since this requires the solution of a number of larger overdetermined systems, the procedure is computationally expensive and also wasteful because many
solution paths diverge. In this article an embedding of the original polynomial system is presented, which leads to a sequence of homotopies, with solution paths leading to generic points of all
components as the isolated solutions of an auxiliary system. The new procedure significantly reduces the number of paths to solutions that need to be followed. This approach has been implemented and
applied to...
- PROCEEDINGS OF A NATO CONFERENCE, FEBRUARY 25 - MARCH 1, 2001, EILAT , 2001
"... ..."
, 2003
"... Homotopy continuation methods have proven to be reliable and efficient to approximate all isolated solutions of polynomial systems. In this paper we show how we can use this capability as a
blackbox device to solve systems which have positive dimensional components of solutions. We indicate how the ..."
Cited by 21 (14 self)
Add to MetaCart
Homotopy continuation methods have proven to be reliable and efficient to approximate all isolated solutions of polynomial systems. In this paper we show how we can use this capability as a blackbox
device to solve systems which have positive dimensional components of solutions. We indicate how the software package PHCpack can be used in conjunction with Maple and programs written in C. We
describe a numerically stable algorithm for decomposing positive dimensional solution sets of polynomial systems into irreducible components.
- Computer algebra in science and engineering, pages 77 – 89. World Scientific , 1994
"... We report on some experience with a new version of the well known Gröbner algorithm with factorization and constraint inequalities, implemented in our REDUCE package CALI, [12]. We discuss some
of its details and present run time comparisons with other existing implementations on well splitting exam ..."
Cited by 4 (1 self)
Add to MetaCart
We report on some experience with a new version of the well known Gröbner algorithm with factorization and constraint inequalities, implemented in our REDUCE package CALI, [12]. We discuss some of
its details and present run time comparisons with other existing implementations on well splitting examples.
- In Proc. New Computer Technologies in Control Systems , 1994
"... There are discussed implementational aspects of the special-purpose computer algebra system FELIX designed for computations in constructive algebra. In particular, data types developed for the
representation of and computation with commutative and non-commutative polynomials are described. Furthermo ..."
Cited by 2 (1 self)
Add to MetaCart
There are discussed implementational aspects of the special-purpose computer algebra system FELIX designed for computations in constructive algebra. In particular, data types developed for the
representation of and computation with commutative and non-commutative polynomials are described. Furthermore, comparisons of time and memory requirements of di erent polynomial representations are
reported. 1
- Michael J. Wester, John Wiley & Sons, Chichester, United Kingdom, ISBN 0-47198353 , 1996
"... In memoriam to Renate. We report on some experiences with the general purpose Computer Algebra Systems (CAS) Axiom, Macsyma, Maple, Mathematica, MuPAD, and Reduce solving systems of polynomial
equations and the way they present their solutions. This snapshot (taken in spring 1996) of the current pow ..."
Cited by 1 (0 self)
Add to MetaCart
In memoriam to Renate. We report on some experiences with the general purpose Computer Algebra Systems (CAS) Axiom, Macsyma, Maple, Mathematica, MuPAD, and Reduce solving systems of polynomial
equations and the way they present their solutions. This snapshot (taken in spring 1996) of the current power of the different systems in a special area concentrates both on CPU-times and the quality
of the output. 1
- In Computational algebraic geometry , 1993
"... We describe an algorithm which computes all subfields of an effectively given finite algebraic extension. Although the base field can be arbitrary, we focus our attention on the rationals. This
appears to be a fundamental tool for the simplification of algebraic numbers. Introduction Many algo ..."
Add to MetaCart
We describe an algorithm which computes all subfields of an effectively given finite algebraic extension. Although the base field can be arbitrary, we focus our attention on the rationals. This
appears to be a fundamental tool for the simplification of algebraic numbers. Introduction Many algorithms in computer algebra contain subroutines which require to use algebraic numbers. Computing
with them is especially important when polynomial systems of equation have to be solved. As an example let us consider the now called cyclic 7th--roots of unit, which are the solutions of the
following system [4, 1] : a + b + c + d + e + f + g = 0 ab + bc + cd + de + ef + fg + ga = 0 abc + bcd + cde + def + efg + fga + gab = 0 abcd + bcde + cdef + defg + efga + fgab + gabc = 0 abcde +
bcdef + cdefg + defga + efgab + fgabc + gabcd = 0 abcdef + bcdefg + cdefga + defgab + efgabc + fgabcd + gabcde = 0 abcdefg = 1: Some of the solutions of this system are of the form (a; b; c; 1=c; 1=
b; 1=a; 1... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=1058212","timestamp":"2014-04-19T18:25:14Z","content_type":null,"content_length":"30557","record_id":"<urn:uuid:82006e51-c09d-4bdc-94fc-50b8e9ce6a20>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00490-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Placement Test
Overview of the Mathematics Placement Exam
The mathematics placement exam is administered to determine students’ first mathematics course. All incoming students are required to take the mathematics placement test before arriving on campus.
This procedure ensures that you will enroll in the most appropriate mathematics course for your level of preparation.
The table below describes course eligibility determined by the placement test results.
Minimum Course Note
Placement Level
MA005 Students with placement score of 0 must take MA005 before taking any other mathematics course. In particular, they should take and pass MA005 before the end of
Level 0 (Preparatory their freshman year. MA005 will NOT fulfill the Mathematics Requirement for graduation.
MA101 & MA102
(Mathematics: A If a student is in a social science or liberal arts major that does not demand precalculus, the student should take MA101 or MA102.
Level 1 Liberal Art)
MA103 Generally, if a student is in a technical major that demands precalculus or calculus at some point (or is a Navy ROTC student who must take Calculus I and II
(College Algebra) eventually), a placement score of 1 means the student should take MA103.
MA103 will NOT fulfill the Mathematics Requirement for graduation. This course is designed to prepare students for MA107: Precalculus.
MA107 Generally, if a student is in a technical major that demands precalculus or calculus at some point (or is a Navy ROTC student who must take Calculus I and II
(Precalculus) eventually), a placement score of 2 means that the student should take MA107.
Level 2 This course is designed to prepare students to take MA108 or MA121: Calculus I.
(Elementary If a student is in a social science or liberal arts major that does not demand precalculus, the student should take MA101, MA102, or MA232.
Generally, if a student is in a technical major that demands precalculus or calculus at some point, a placement score of 3 means that the student should take a
MA108 calculus course.
(Applied Calculus)
Level 3 or MA108 is designed for students not majoring in engineering, mathematics, or the physical sciences and is typically taken by Biology, Environmental Science,
MA121 Geology, Accounting, Management or Engineering Management majors.
(Calculus I)
MA121 is designed for students majoring in physical sciences, engineering, mathematics or any major that would require further calculus courses. MA121 is also
required for students with Navy ROTC scholarships.
When should I take the exam?
You should take the placement exam at the latest by June 1, so that your advisor and the Mathematics Department will know which courses you will be eligible for before you arrive on campus.
What is the exam like?
The on-line exam consists of 60 multiple-choice questions that test a range of topics from arithmetic to pre-calculus. The test is divided into three sections of 20 questions each. Please plan on
having 90 minutes available to complete the test. You may take this test at home if you have a computer, or go to a library or school, which allows you public access to the web.
Can I prepare for the exam?
Please feel free to study for the exam. The following links provide the topics in each section of the exam and some sample problems.
If you are unhappy with your exam score, you have the ability to go back in and take the test a second time. We will count your best score when planning your enrollment for the fall.
Guidelines for taking the placement exam
1. Take your time and do your best. This test will place you in the class that best matches your mathematics skills. Once you start the test you must complete it within 90 minutes.
2. You may use scratch paper and pencil, but NO calculator.
3. The use of textbooks or any other external help is not permitted. Doing so would be a violation of the Norwich University honor code, and sets you up for a very difficult time in your mathematics
class because you will be placed in a class that doesn’t reflect your true math skills.
Disclaimer: The Mathematics Placement test is used for placement purposes only. It CANNOT be used for assigning credit or for fulfilling the mathematics requirements at Norwich University.
Follow these steps to take your Math Placement Exam:
• Go to my.norwich.edu (Link opens in new window so you can have these instructions in front of you while you work.)
• Enter your network username with Norwich in all caps and a forward slash before the network username provided to you in the letter from IT. For example NORWICH/aschwarz
• Enter your network password as provided in the letter from IT
• Click on the Banner Web tab near the top of the page
• Enter your network username, this time without the NORWICH
• Enter your network password
• Click on Student Services
• Click on Mathematics Placement Exam and you are finally there!
Your score will appear in Banner Web under Student Services, Mathematics Placement Exam. If you have difficulty taking the exam or finding your score please contact the Help Desk at
helpdesk@norwich.edu or 1.802.485.2456. | {"url":"http://scimath.norwich.edu/mathematics/math-placement-test/","timestamp":"2014-04-20T01:35:43Z","content_type":null,"content_length":"32567","record_id":"<urn:uuid:76346c04-2fae-4114-9503-5b9d3ee66c27>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Looking for a Python command
Up to Use
Looking for a Python command
Posted by
at October 26. 2012
Hi all,
to keep it short Im looking for a command in Python that is able to search along an edge
for intersectionpoints.
So if I have an edge or line with intersections like this:
Note: I do NOT know the coordinates of the intersections because they come form an .stp file.
So Im looking for a command that walks along the line and finds all points and writes them out so I can use it later on.
I hope there is a command or a small script
If there is something that is hard to understand just ask me, my englisch isnt the best
Re: Looking for a Python command
Posted by
Saint Michael
at October 26. 2012
Hi Nigirim
In terms of SALOME, If a curve has been intersected, it is represented as a Wire composed of Edges, where each Edge is bound by intersection points (Vertices) and end points of the curve.
Thus a task is to get all Vertices from the Wire and find out their coordinates. Here is code doing this.
vertices = geompy.SubShapeAllSorted( Wire, geompy.ShapeType["VERTEX"]) # get all vertices from the Wire
for v in vertices[1:-1]: # loop on all but first and last vertices
print geompy.PointCoordinates( v )
Re: Looking for a Python command
Posted by
at October 26. 2012
Previously Saint Michael wrote:
Hi Nigirim
In terms of SALOME, If a curve has been intersected, it is represented as a Wire composed of Edges, where each Edge is bound by intersection points (Vertices) and end points of the curve.
Thus a task is to get all Vertices from the Wire and find out their coordinates. Here is code doing this.
vertices = geompy.SubShapeAllSorted( Wire, geompy.ShapeType["VERTEX"]) # get all vertices from the Wire
for v in vertices[1:-1]: # loop on all but first and last vertices
print geompy.PointCoordinates( v )
Hi St.Micheal
Thanks for your fast answer to my question. | {"url":"http://www.salome-platform.org/forum/forum_10/173044973","timestamp":"2014-04-16T07:14:28Z","content_type":null,"content_length":"25898","record_id":"<urn:uuid:b4883bda-04f4-4dba-96bc-e53271ab6b89>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00024-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
A museum exhibit, ABCD, has infrared beams around it for security, If the length of the beams is made three times the original length on each side, which statement is correct about the maximum area
available for an exhibit to be displayed? Answer It becomes nine times the original area. It becomes three times the original area. It becomes eighty-one times the original area. It becomes
twenty-seven times the original area.
Best Response
You've already chosen the best response.
The second choice
Best Response
You've already chosen the best response.
thank u
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/4f5a4abce4b0636d8905de0f","timestamp":"2014-04-16T04:15:06Z","content_type":null,"content_length":"30323","record_id":"<urn:uuid:11c9e754-19f0-4a62-9a42-0096bab1d058>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00660-ip-10-147-4-33.ec2.internal.warc.gz"} |
MD: Matching Theory, Volume 29
- IN GENERAL GRAPHS, SYMPOSIUM ON THEORETICAL ASPECTS OF COMPUTER SCIENCE, STACS 99 , 1998
"... A new approximation algorithm for maximum weighted matching in general edge-weighted graphs is presented. It calculates a matching with an edge weight of at least 1/2 of the edge weight of a
maximum weighted matching. Its time complexity is O(|E|), with |E| being the number of edges in the graph. T ..."
Cited by 37 (0 self)
Add to MetaCart
A new approximation algorithm for maximum weighted matching in general edge-weighted graphs is presented. It calculates a matching with an edge weight of at least 1/2 of the edge weight of a maximum
weighted matching. Its time complexity is O(|E|), with |E| being the number of edges in the graph. This improves over the previously known 1/2-approximation algorithms for maximum weighted matching
which require O(|E| log(|V|)) steps, where |V| is the number of vertices.
, 1999
"... Multilevel strategies have proven to be very powerful approaches in order to partition graphs efficiently. Their efficiency is dominated by two parts; the coarsening and the local improvement
strategies. Several methods have been developed to solve these problems, but their efficiency has only been ..."
Cited by 28 (9 self)
Add to MetaCart
Multilevel strategies have proven to be very powerful approaches in order to partition graphs efficiently. Their efficiency is dominated by two parts; the coarsening and the local improvement
strategies. Several methods have been developed to solve these problems, but their efficiency has only been proven on an experimental basis. In this paper we present new and efficient methods for
both problems, while satisfying certain quality measurements. For the coarsening part we develop a new approximation algorithm for maximum weighted matching in general edge-weighted graphs. It
calculates a matching with an edge weight of at least 1 2 of the edge weight of a maximum weighted matching. Its time complexity is O(jEj), with jEj being the number of edges in the graph.
Furthermore, we use the Helpful-Set strategy for the local improvement of partitions. For partitioning graphs with a regular degree of 2k into 2 parts, it guarantees an upper bound of k\Gamma1 2 jV j
+ 1 on the cut size of th...
- Graph Drawing (Proc. GD '97), volume 1353 of Lecture Notes in Computer Science , 1997
"... . Let G(V;E) be a graph, and let \Gamma be the drawing of G on the plane. We consider the problem of assigning text labels to every edge of G such that the quality of the label assignment is
optimal. This problem has been first encountered in automated cartography. Even though much effort has been ..."
Cited by 11 (3 self)
Add to MetaCart
. Let G(V;E) be a graph, and let \Gamma be the drawing of G on the plane. We consider the problem of assigning text labels to every edge of G such that the quality of the label assignment is optimal.
This problem has been first encountered in automated cartography. Even though much effort has been devoted over the last 15 years in the area of automated drawing of maps, the Edge Label Placement
(ELP) problem remains essentially unsolved. In this paper we investigate the ELP problem. We present an algorithm for the ELP problem more suitable for hierarchical drawings of graphs, but it can be
adopted to many different drawing styles and still remain effective. Also, we present experimental results of our algorithm that indicate its effectiveness. 1 Introduction The area of graph drawing
has grown significantly in the recent years motivated mostly by applications in information visualization [4, 17]. When visualizing information, it is essential to display not only the structure of
the ob...
"... We show that the Shortest Path Problem cannot be solved in o(log n) time on an unbounded fan-in PRAM without bit operations using poly(n) processors even when the bit-lengths of the weights on
the edges are restricted to be of size O(log 3 n). This shows that the matrix-based repeated squaring al ..."
Cited by 6 (0 self)
Add to MetaCart
We show that the Shortest Path Problem cannot be solved in o(log n) time on an unbounded fan-in PRAM without bit operations using poly(n) processors even when the bit-lengths of the weights on the
edges are restricted to be of size O(log 3 n). This shows that the matrix-based repeated squaring algorithm for the Shortest Path Problem is optimal in the unbounded fan-in PRAM model without bit
operations. 1
- in 11th ACM Symposium on Parallel Architectures and Algorithms , 1999
"... In this paper we study the problem of scheduling real-time requests in distributed data servers. We assume the time to be divided into time steps of equal length called rounds. During every
round a set of requests arrives at the system, and every resource is able to fulfill one request per round. Ev ..."
Cited by 2 (0 self)
Add to MetaCart
In this paper we study the problem of scheduling real-time requests in distributed data servers. We assume the time to be divided into time steps of equal length called rounds. During every round a
set of requests arrives at the system, and every resource is able to fulfill one request per round. Every request specifies two (distinct) resources and requires to get access to one of them.
Furthermore, every request has a deadline of d, i.e. a request that arrives in round t has to be fulfilled during round t +d 1 at the latest. The number of requests which arrive during some round and
the two alternative resources of every request are selected by an adversary. The goal is to maximize the number of requests that are fulfilled before their deadlines expire. We examine the scheduling
problem in an online setting, i.e. new requests continuously arrive at the system, and we have to determine online an assignment of the requests to the resources in such a way that every resource has
to fulfil...
- in General Graphs, Symposium on Theoretical Aspects of Computer Science, STACS 99 , 1999
"... . A new approximation algorithm for maximum weighted matching in general edge-weighted graphs is presented. It calculates a matching with an edge weight of at least 1 2 of the edge weight of a
maximum weighted matching. Its time complexity is O(jEj), with jEj being the number of edges in the graph. ..."
Add to MetaCart
. A new approximation algorithm for maximum weighted matching in general edge-weighted graphs is presented. It calculates a matching with an edge weight of at least 1 2 of the edge weight of a
maximum weighted matching. Its time complexity is O(jEj), with jEj being the number of edges in the graph. This improves over the previously known 1 2 -approximation algorithms for maximum weighted
matching which require O(jEj \Delta log(jV j)) steps, where jV j is the number of vertices. 1 Introduction Graph Matching is a fundamental topic in graph theory. Let G = (V; E) be a graph with
vertices V and undirected edges E without multi-edges or self-loops. A matching of G is a subset M ae E, such that no two edges of M are adjacent. A vertex incident to an edge of M is called matched
and a vertex not incident to an edge of M is called free. An enormous amount of work has been done in matching theory in the past. Different types of matchings have been discussed, their existence
and properties h... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=2027305","timestamp":"2014-04-18T06:35:39Z","content_type":null,"content_length":"30133","record_id":"<urn:uuid:9edf44d5-0807-4b9d-ae35-9b703eecde11>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00446-ip-10-147-4-33.ec2.internal.warc.gz"} |
3.) The time spent (in days) waiting for a heart
View the step-by-step solution to:
3.) The time spent (in days) waiting for a heart transplant in two states for...
Home Tutors Statistics and Probability 3.) The time spent (in days) w...
This question has been answered by Expert on Feb 5, 2012. View Solution
OrryG posted a question
3.) The time spent (in days) waiting for a heart transplant in two states for patients with type A+ blood can be approximated by a normal distribution, as shown in the graph to the right. Complete
parts (a) and (b) below.
μ= 126
σ= 20.4
Graph line is 55 left tail and 200 right tail
(round to two decimal points please)
(a) What is the shortest time spent waiting for a heart that world still place a patient in the top 10% of waiting times?
In days?
Expert answered the question
Expert answered the question
Expert answered the question
Dear Student
Please find... View Full Answer
Attachment Preview:
• Statistics and Probability-7875566.doc Download Attachment
3.) The time spent (in days) waiting for a heart transplant in two states for patients with type A+ blood
can be approximated by...3.) The time spent (in days) waiting for a heart transplant in two states for patients with type A+ blood
can be approximated by a normal distribution, as shown in the graph to the right. Complete parts (a)
and (b) below.
= 126
= 20.4
Graph line is 55 left tail... View Full Attachment Show more
User posted a question
b. You would expect that approximately ______ SAT verbal scores would be greater than 575?
User posted a question
Thank you!
Expert answered the question
Expert answered the question
User posted a question
Hi would you please answer a follow up question asap?
b.) What is the longest time spent waiting for a heart that would still place the patient in the bottom 25%? (Round to the nearest whole number please)
Expert answered the question
Expert answered the question | {"url":"http://www.coursehero.com/tutors-problems/Statistics-and-Probability/7875566-3-The-time-spent-in-days-waiting-for-a-heart-transplant-in-two-sta/","timestamp":"2014-04-20T01:08:18Z","content_type":null,"content_length":"58875","record_id":"<urn:uuid:6c4b1c35-51c6-4e5a-be45-587338697854>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00045-ip-10-147-4-33.ec2.internal.warc.gz"} |
Trying to find equation for parabola
January 28th 2013, 08:30 AM
Trying to find equation for parabola
Vertex of 0,3
Focus of 0,0
Formula for parabola:
y = a(x - h)^2 + k
I assume that h = 0, k = 3 (the vertex)
how does the focus factor into this equation? thanks.
January 28th 2013, 10:38 AM
Re: Trying to find equation for parabola
If you write the eqation of a parabola as $y= a(x- x_0)^2+ y_0^2$, the same as your formula with [itex]h= x_0[/itex], [itex]k= y_0[/tex], where [itex](x_0, y_0)[/itex] is the vertex, then the
focal length is f= 1/4a.Parabola - Wikipedia, the free encyclopedia
Saying that the vertex is at (0, 3) and the focus is at (0, 0) means that the focal length is f=1/4a= 3 so that 4a= 1/3, a= 1/12. Of course, because the focus is below the vertex, the parabola
opens downward, so a is negative. | {"url":"http://mathhelpforum.com/pre-calculus/212164-trying-find-equation-parabola-print.html","timestamp":"2014-04-18T06:09:46Z","content_type":null,"content_length":"5004","record_id":"<urn:uuid:56130035-a614-447d-be55-5abbced2bdd2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00255-ip-10-147-4-33.ec2.internal.warc.gz"} |
Coordinate Proof... Perpendicular Lines
November 15th 2010, 01:47 PM #1
Junior Member
Nov 2010
Coordinate Proof... Perpendicular Lines
Prove: If the product of the slopes of two lines is -1, then the lines are perpendicular.
I have already proven the converse of this statement. I have no idea where to go / what to do for this proof...
If θ1 and θ2 are the angles made by the two straight lines with x-axis, then the angle between the two straight line is given by
θ= θ2 - θ1.
Τhen tanθ = tan(θ2 - θ1) = (tanθ2 - tanθ1)/( 1 +tanθ1*tanθ2) = (m2 - m1)/(1 + m1*m2)
If m1*m2 = -1, tanθ becomes infiifnty which is possible when θ = π/2.
November 16th 2010, 01:07 AM #2
Super Member
Jun 2009 | {"url":"http://mathhelpforum.com/geometry/163359-coordinate-proof-perpendicular-lines.html","timestamp":"2014-04-17T06:09:45Z","content_type":null,"content_length":"32007","record_id":"<urn:uuid:5fb0d113-b991-4a81-bc6f-6859dbaaa890>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
Relationship Between the Money Supply and Nominal Gdp by Joshuak
I.Introduction to hypothesis
In estimating the relationship between the money supply and nominal GDP we look into the past to find the many different ways that the great economists of the past studied this relationship. The
first thing to understand is that money supply should be considered the same thing as money demand, this happens in our equilibrium society that I am using for this paper. Therefore anytime equations
may differ depending on money supply and money demand we will just assume that they mean the same thing in that M = M , but only in a equilibrium economy. Lets attempt to assume equilibrium for the
ease of the equations at hand. Next we must understand a basic money supply equation, in which the money supply is equal to the price level multiplied by income and that number is then divided by the
velocity (or the number of times per year a dollar turns over). The equation would look like this:
M = PY / V
The relationship between money supply (M ) and nominal GDP (PY), should be looked at as either PY is a function of M (or M ) or that Ms is a function of PY. Combining these we can find the M by
making M a function of P and Y. Of course I will try to use the other economists and other literature sources to prove this hypotheses. For instance the classical economists believed that money was a
function of price and the output level. Price and output give you nominal GDP, therefore they thought that money supply was a function of (simply enough) the nominal GDP. Keynes changed that thought
to state that the money supply was a function of price and the incomes. My argument is the derived oppisite of the classical in that the nominal GDP is a function of the money demand. Therefore the
dependent variable running would be GDP as the equation might start as: PY= f(M ).
II.Theoretical rationale
Theoretically this study is important to develop and understanding of why a county would decide to supply more or less money to its citizens in... | {"url":"http://www.studymode.com/essays/Relationship-Between-The-Money-Supply-And-137784.html","timestamp":"2014-04-18T13:17:01Z","content_type":null,"content_length":"33325","record_id":"<urn:uuid:d9732fb9-c646-4c2e-a801-aac01abc01d6>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00329-ip-10-147-4-33.ec2.internal.warc.gz"} |
calculating real power and power factor
Thanks for the replies! I get the concept of reactive power being a function of a shifting of the voltage / current phases. (Thus a power factor of 1 means that the voltage and current rise in
unison, correct? Inductance or capacitance in the circuit cause the two waves to become out of phase?) I don't get the statement "'Real Power' is V.I (the dot product of the two phasors) or VI cos
(phase)". I do get "which is instantaneous power. The value of this will vary and always be greater than zero (and corresponds to I^2R)".
Since I believe that I can calculate apparent power as Vrms * Irms (where rms is a/√2 and a=peak value), and real power can be derived from an instantaneous measurement of voltage and amperage, then
power factor can be calculated as real / apparent?
Ultimately I want to be able to put CTs on incoming mains power and several branch circuits as well as measure the voltage on the circuits and be able to make some statement about how much energy is
being consumed in total (mains) and by each branch circuit as well as how close I am to a PF of one.
The dot product is just 'vector speak' and takes you further into the business if you're interested. Have you not heard of using Phasors to describe AC?
This is great stuff as a thought experiment but all this stuff is readily available to buy. Furthermore, it is 'Electrically Safe'.
btw, how were you proposing to find the instantaneous V and I? | {"url":"http://www.physicsforums.com/showthread.php?t=568306","timestamp":"2014-04-19T02:12:43Z","content_type":null,"content_length":"37797","record_id":"<urn:uuid:ae44d540-9c01-4a5a-8bfd-adcd3eacf816>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Efficient Gaussian blur with linear sampling
Print This entry was posted by Daniel Rákos on September 7, 2010 at 8:48 pm, and is filed under Graphics, Programming, Samples. Follow any responses to this post through RSS 2.0. You can leave
article a response or trackback from your own site.
• Sweet. This is similar to my old bloom tutorial, which I wrote when I was young and stupid:
Some aspects of my tutorial are an embarrassment to me now, so I hope folks will encounter your article before mine. Keep up the good work Daniel!
• Does this method require odd radius? In case it is even what do we do?
• It is not necessary, however with an even radius you’ll have two additional texels at the left and right end that cannot be linearly sampled, or you can include the middle texel into linear
sampling on both sides to get less number of texture fetches. However, in general an odd radius is preferred to get optimum number of texture fetches.
On the other hand, in case you need a reduced size blurred image it doesn’t really matter if you use an odd or even sized filter radius.
• Very nice article!! I have a little question. Shouldn’t the summary of the weights be equal to 1.0?
• Yes, it should, and actually if you sum the weights counting the side texel weights double it is roughly equal to 1.0. The only reason it is a little bit less is because we discarded the two
least significant coefficients as I mentioned in the article.
I also mentioned that due to the discard of the two least significant coefficients we can decrease the sum of the coefficients accordingly but in the sample application I didn’t do so but just
for simplicity. I thought it will be less confusing this way.
• oops, you are right that slipped my mind. I forgot to add the side weights. Thanks
• No problem, I suspected that this might be the reason of your question
• Great article!
In my shaders I’m getting a performance increase by moving the texture coordinates calculation to the vertex shader. This saves ‘millions’ of vec2 constructors, additions and divisions in the
fragment shader.
The vertical filter shader would look like this:
//vertex shader
attribute vec4 aVertices;attribute vec2 aTexCoords;
varying vec4 vTexCoordsPos;
varying vec3 vTexCoordsNeg;
uniform float invStepWidth1; // 1.3846153846 / texHeight
uniform float invStepWidth2; // 3.2307692308 / texHeight
void main()
{ vTexCoordPos = vec4(aTexCoords.x, aTexCoords.y, aTexCoords.y + invStepWidth1, aTexCoords.y + invStepWidth2);
vTexCoordNeg = vec3(aTexCoords.x, aTexCoords.y – invStepWidth1, aTexCoords.y – invStepWidth2);
gl_Position = aVertices;
//fragment shader
varying vec4 vTexCoordsPos;
varying vec3 vTexCoordsNeg;
uniform sampler2D image;
const float weight[3] = float[]( 0.2255859375, 0.314208984375, 0.06982421875 );
void main()
vec4 fragmentColor = texture2D(image, vTexCoordsPos.xy)*weight[0];
fragmentColor += texture2D(image, vTexCoordsPos.xz)*weight[1];
fragmentColor += texture2D(image, vTexCoordsPos.xw)*weight[2];
fragmentColor += texture2D(image, vTexCoordsNeg.xy)*weight[1];
fragmentColor += texture2D(image, vTexCoordsNeg.xz)*weight[2];
gl_FragColor = fragmentColor;
• Actually you’re right as these things should be better calculated in the vertex shader.
However, I’m not 100% sure that you save that amount of time in rendering due to texture fetches usually take much more time than for the GPU to calculate a few additions and subtractions and due
to latency hiding mechanisms in the latest GPUs will simply eliminate the benefits of moving the calculations to the vertex shader.
If you are not convinced, then please give me some benchmark results as maybe I’m wrong.
• Actually I do the same as heliosdev suggested. I had made some benches in my engine and this proved to boost performance a bit
• You made me interested. Unfortunately my only card wouldn’t be a good basis for a generic benchmark but maybe I will create then a demo which can do both methods and let’s see how it performs on
various GPU generations as I’m interested as well what will be the outcome of such a test.
• Just did some tests (one horizontal and one vertical step) on an old Geforce 6800 and the fps increased by 10-15% when doing the texture calculations in the vertex shader.
• Okay, but that’s just one card from one vendor.
I’ll create a benchmark program for the purpose and collect data here at the blog to see this is really always the case.
• I have tried this trick in an ATI 4870 some time ago. It works faster, dont know why but it does
• > The only reason it is a little bit less is because we discarded the two least significant coefficients as I mentioned in the article.
Actually, the weights should always sum to 1.0, otherwise the filter is essentially dimming down the incoming image (a tiny bit). For example, if you used “1 2 1″ and trimmed the “1″s off your
filter, but still weighted by 4 (sum of 1+2+1), you would effectively cut the image intensity by half. Extreme case, I know, but not summing to 1.0 in your code is the same sort of problem, at a
different scale.
There is a discrepancy between text and code. The weights in your first code snippet, when multiplied by 1024, give the values “16.5 55 123.75 198 231 198 123.75 55 16.5″. They’re supposed to
give “10 45 120 210 252 210 120 45 10″ as shown in the diagram. So, why is there a mismatch?
Otherwise, I like the article, and the comment about precomputing values in the vertex shader is a great tip (thanks, heliosdev!). Since at least some of these values are used *before* the first
texture sample is retrieved, it makes sense to me that computing these once in the VS would be a win. I agree that anything computed after the first texture tap tends to cost nothing due to
latency hiding, but computations before any taps cannot be hidden (if I understand this area correctly…).
• Yes, you’re right that the numbers do not sum exactly to 1.0 in the demo. I know about the issue. It is due to the discarded coefficients and due to rounding errors. However, I don’t think that
there such a big difference if you re-multiply them but I will double check them sometime.
About the VS solution, you are right that the calculations between the first texture fetch cannot be hidden, however for the first texture fetch we don’t need any calculation as we simply fetch
the texel corresponding to the actual fragment. The only thing done in the demo is the division by the texture size, however that is not needed either if you use a texture rectangle as for them
the texture coordinates are not normalized.
• > due to rounding errors
Divide by 1022 instead of 1024 and you’re golden – the weights must sum to 1.0. The code dims by a mere 0.2%, but there’s no reason to dim at all.
> for the first texture fetch we don’t need any calculation
Good point! However, I do think heliosdev’s point is a great one: computing it 4-6 times in the vertex shader for the quad is going to be faster than computing these a million times or more in
the pixel shader. Sometimes latency hiding will luck out and hide these computations, but it seems better to not risk it and just compute these in the VS. That said, maybe there’s some hidden
cost (interpolating extra values on the VS output?) that makes this A Bad Idea. I’ll be interested to how your benchmarking goes, if you do it.
Any comment on the large discrepancy I found between theory and code? The theory says use “10 45 120 210 252″, the code actually uses “16.5 55 123.75 198 231″. Bug, or was a different filter
kernel actually used, or…? I’m not trying to nit-pick, but I’d love it if you fixed your code to match the theory (or vice versa), so that we all could benefit.
If the Pascal’s triangle numbers are the right ones to use, then for the first code snippet it should be:
uniform float weight[5] = float[]( 252./1022., 210./1022., 120/1022., 45./1022., 10./1022. );
For the second:
uniform float offset[3] = float[]( 0.0, 1. + 120./330., 2. + 10./55. );
uniform float weight[3] = float[]( 252./1022., 330./1022., 55./1022 );
I’m just showing where the computations come from in the above, using the Pascal triangle numbers.
• > I’m not trying to nit-pick, but I’d love it if you fixed your code to match the theory (or vice versa), so that we all could benefit.
No problem, just I haven’t had the time so far as I had to travel this weekend and I haven’t got home yet.
> maybe there’s some hidden cost (interpolating extra values on the VS output?) that makes this A Bad Idea
Yes, that’s what I’m concerned about, but maybe I’m completely wrong so let the results talk instead.
• No rush, I just wasn’t sure it was on the radar, since you didn’t mention it in your reply. I was thinking a bit more on the 252 vs. 231 mismatch. One idea: the Pascal Triangle is an
approximation, after all, of the Gaussian. Also, I think we want to integrate the area under the curve of the Gaussian for each pixel’s extent. This would tend to make the central area’s value
(middle of the curve) smaller that the central point’s value. I might start playing with this and asking some experts about it…
• > the Pascal Triangle is an approximation, after all, of the Gaussian
Yes, I know and it is most probably not the best idea to use it for production quality but I intended this article for novices as it is much more like a tutorial than a quality scientific
I know now why the coefficients are different. It is because I’ve used the 12th row of the Pascal triangle that is not even shown on the figure. 924 is the middle coefficient and the sum is 4096.
Sorry for that, just I’ve made the demo earlier than the article. I will correct it ASAP.
Btw, are you the Eric Haines from Autodesk?
• I’ve updated the post and the downloads with the correct coefficients according to the 12th row of the Pascal triangle and with 4070 used as the sum. Thanks for the observations!
• Nice, thanks for the quick fixes, looks good. Yeah, I was figuring there was something going on at 4096, since the old factors had halfs and quarters for fractions. I thought you might have been
going much further down the triangle and adding up factors and averaging. I’m still not sure about “area under the Gaussian curve” vs. Pascal’s triangle values, nor (for that matter) how standard
deviation for the Gaussian curve relates. Anyway, your article now matches text to code, so good deal.
Click on my name to get more info about me ;->
• Well, then I have to say that it was an honor to get feedback and criticism from you
• Really great article. I have been looking for a clear explanation of how to do linear sampled blur before without any luck. So I will definitely add this to my screensaver/music visualizer.
Can you always blur down to a downscaled version for extra performance even if you need a normal blur? You mention doing this for a bloom effect.
It might be more of a memory optimization to not be forced to allocated backbuffers of the same size as the screen.
However using the ping-pong technique with scaled down back buffers would make calculating the total blur size difficult.
• > Can you always blur down to a downscaled version for extra performance even if you need a normal blur?
Depends on the use case. If you would like to create a simple blurred image of your visualization effect, you should not use downscaling. However, if the blurred image is just an input to more
sophisticated effects, maybe it worths. Can you give more details?
• Since no one seems to have mentioned it so far: in signal processing the filter you use is known as a “binomial filter” (surprise!). Another way to compute them is to convolve repeatedly with a
moving average filter [1 1]. For example convolving once with [1 2 1] is equivalent to convolving twice with [1 1]. This can sometimes be used to speed up binomial filters in CPU implementations,
but I’m not sure it’s beneficial on the GPU.
• omg guys you just fried my mind with this…
but now i understand how does photoshop do gaussian blur
but the comments… hard to follow
• You are right, that in fact the sample implementation is really a simplified binomial filter, but actually I mentioned that we use the binomial coefficients to give a rough estimation of the area
under the Gaussian curve. Actually that is not the only approximation we use, just see the eliminated coefficients. Usually in real-time graphics things go like this…
• Hello, Daniel.
Thanks for sharing your findings.
I think there is a mistake in formula for calculating the offsets. For example, let’s take numbers from first listing:
offA = 3.0, wA = 0.0540540541
offB = 4.0, wB = 0.0162162162
The formula on the website is:
off = (offA*wB + offB*wA)/(wA+wB) = 0.264864865 / 0.0702702703 = 3.7692307695
But in the second listing you are using value 3.2307692308 for third sample.
Which, as far as I understand, is the correct number, but corresponds to a different formula:
off = (offA*wA + offB*wB)/(wA+wB) = 0.2270270271 / 0.0702702703 = 3.23076923
I did the math myself and confirmed second result.
• Actually you are right, sorry, it is a typing error in the formula. I’ll correct it today. Thanks!
• “A Gaussian function with a distribution of 2σ is equivalent with the product of two Gaussian functions with a distribution of σ.”
“Using the second property of the Gaussian function, we can separate our 33×33-tap filter into three 9×9-tap filter (9+8=17, 17+16=33).”
Daniel, could you please explain more detailed what is filter separation?Multiplication of two one-dimensional filters looks clear – they are applied consistently, and result is two dimensional
filter of same size, but how can one filter be represented by less size filters?
I am newbie in image filtering, so I’ll be satisfacted enough by link to any manual instead of answer
Thank you:)
• All of these properties depend on the nature of the filter. While a simple box filter or a Gaussian filter is separable, most of the convolution filters are not. The same is true for the
composition of two fewer-tap filters.
Unfortunately I don’t have any book recommendations, at least right now. If I’ll find something useful then I’ll post it.
• Ok, it must be separable, but what is separation? If you please explain what this numbers (“(9+8=17, 17+16=33)”) mean, hope then I finally get it
• Okay, in order to ease the explanation let us consider the 1D case:
In the first round you have a 9 tap filter: the center texel plus 4 more in both directions.
In the second round you take another 9 tap but there every tap already contains data from a 9 tap footprint so you get actually a 17 tap filter as besides the 9 taps you take, the texel contains
already 4-4 taps from the left and right side so it is actually 4+9+4=17 taps.
The same for the next case. Here you actually have a cumulative 8-8 texels from both sides beside your 9 tap filter so that is 8+17+8=33.
• “The same for the next case. Here you actually have a cumulative 8-8 texels from both sides beside your 9 tap filter so that is 8+17+8=33.”
I understand now what “+8+8″ mean, but, sorry, still see no way how it can be “17″ there if we use 9 tap filter in third pass.
• I added this to my engine and it does seem to work fine so thanks for that. But I do wonder what the best way to do large blurs is. When is it better to do say a 21 tap blur then a few 9 tap
blurs. One such test I did I needed quite a few passes to get the blur radius I wanted. So I changed it do to a few 21 tap blurs then 9 tap for the smaller ones and the frame rate increased by
• Yes, in fact usually using larger kernels performs better. This is because raster operations also take some times as well as the implicit synchronization incurred by requiring the results of the
previous pass to do the current one. This is why the theoretical performance gain by using multiple steps is not always there in real world scenarios.
I agree that a 21 tap filter is better than using several smaller tap filters, however, in case you would need a >100 tap filter then most probably you could really take advantage of the
composition property of the Gaussian function. Also, optimal filter kernel varies based on the target hardware, so it is always best to do some benchmark to select a particular kernel size.
• Hi,
I’ve ported this app to Linux, also compiled it on Ubuntu 64bit.
If interested I can send you in e-mail or sth. (~2.4MB)
• I, I know this is an old post, but I just found it and was doing something similar in my engine.
However, the formula I was using for weight offsets is:
offA + (wB / (wA+wB))
You can derive this from your formula (assuming that offset B is offset A + 1):
(offA*wA + offB*wB)/(wA+wB)
= (offA*wA + (offA+1)*wB)/(wA+wB)
= (offA*wA + offA*wB + wB)/(wA+wB)
= (offA*(wA + wB) + wB)/(wA+wB)
= (offA*(wA + wB)) / (wA+wB) + wB/(wA+wB)
= offA + wB/(wA+wB)
• Your math for the chained convolution is wrong. Executing an n tap filter m times results in a filter of length m*(n-1)+1. Simple example:
[1 2 1] convolved with itself is [1 4 6 4 1].
[1 4 6 4 1] convolved with [1 2 1] again is [1 6 15 20 15 6 1].
Therefore, your example actually results in a 25-tap filter in which case you are actually wasting cycles.
• Oops, maybe you are right, but I have to double check it. It was a pretty long time ago when I wrote this article.
• Hello everybody. Very interesting article but I´ve got one followup question. You mentioned that convolving twice with sigma is the same as convolving one with 2*sigma. My current work relies on
rather accurate sigma calculation so for example i´ve got sigma of sqrt(2) so i divide it by 2 giving sigma=sqrt(2)/2 convolve twice and the result is miles away from beeing the same in both
cases. For coefficients i use actual equation not binomial, i see that if I take first row from binomial and convolve twice i d achieve 2. row but that is not the case of sigma beeing double , is
it ?
• in fact this is my code for weights that is probably the most important of the whole deal. I use 1d separation too but that of course have no effect on my current problem … int kernel_size=(int)
float *kernel;
kernel=new float[kernel_size];
for(int i=0;i<kernel_size;i++)
float x=i-kernel_size/2;
• Hm, I tried this method on iphone with OpenGL shader but instead I got slower result than non-sampling method (around 60% slower with kernel radius = 50).
• Well, that’s unlikely, however, tiled rendering GPUs have other performance characteristics so it may be possible. Are you sure that your implementation is correct?
• Ok my bad, it seems the image is too large so I use too much GPU instruction and it stops halfway. Can I divide a n-tap blur into two (n/2)-tap? In wikipedia it says applying 6-tap then 8-tap
results in 10-tap blur, so the relation is a^2+b^2=c^2. Which is correct?
• Thanks for the writeup, I came across it after reading David Wolff’s “OpenGL 4.0 Shading Language Cookbook.”
In response to Evan, I was able to adapt the above into a shader program that works well on iOS devices. On an iPhone 4, it can filter a live 640×480 video frame in under 2 ms, and I was able to
filter a 2000×1494 image and save it to disk in about 500 ms. The code for this can be found within my open source GPUImage framework under the GPUImageFastBlurFilter class:
I used the following vertex shader:
attribute vec4 position;
attribute vec2 inputTextureCoordinate;
uniform mediump float texelWidthOffset;
uniform mediump float texelHeightOffset;
varying mediump vec2 centerTextureCoordinate;
varying mediump vec2 oneStepLeftTextureCoordinate;
varying mediump vec2 twoStepsLeftTextureCoordinate;
varying mediump vec2 oneStepRightTextureCoordinate;
varying mediump vec2 twoStepsRightTextureCoordinate;
// const float offset[3] = float[]( 0.0, 1.3846153846, 3.2307692308 );
void main()
gl_Position = position;
vec2 firstOffset = vec2(1.3846153846 * texelWidthOffset, 1.3846153846 * texelHeightOffset);
vec2 secondOffset = vec2(3.2307692308 * texelWidthOffset, 3.2307692308 * texelHeightOffset);
centerTextureCoordinate = inputTextureCoordinate;
oneStepLeftTextureCoordinate = inputTextureCoordinate – firstOffset;
twoStepsLeftTextureCoordinate = inputTextureCoordinate – secondOffset;
oneStepRightTextureCoordinate = inputTextureCoordinate + firstOffset;
twoStepsRightTextureCoordinate = inputTextureCoordinate + secondOffset;
and the following fragment shader:
precision highp float;
uniform sampler2D inputImageTexture;
varying mediump vec2 centerTextureCoordinate;
varying mediump vec2 oneStepLeftTextureCoordinate;
varying mediump vec2 twoStepsLeftTextureCoordinate;
varying mediump vec2 oneStepRightTextureCoordinate;
varying mediump vec2 twoStepsRightTextureCoordinate;
// const float weight[3] = float[]( 0.2270270270, 0.3162162162, 0.0702702703 );
void main()
lowp vec3 fragmentColor = texture2D(inputImageTexture, centerTextureCoordinate).rgb * 0.2270270270;
fragmentColor += texture2D(inputImageTexture, oneStepLeftTextureCoordinate).rgb * 0.3162162162;
fragmentColor += texture2D(inputImageTexture, oneStepRightTextureCoordinate).rgb * 0.3162162162;
fragmentColor += texture2D(inputImageTexture, twoStepsLeftTextureCoordinate).rgb * 0.0702702703;
fragmentColor += texture2D(inputImageTexture, twoStepsRightTextureCoordinate).rgb * 0.0702702703;
gl_FragColor = vec4(fragmentColor, 1.0);
The naming needs a little work, because I use this in two passes, one each for the vertical and horizontal halves, and I just set the corresponding width or height input offset to 0 for the
appropriate stage.
You do need to move the texture offset calculations to the vertex shader, because dependent texture reads are very expensive on these GPUs. Also, they don’t handle for loops well, so I had to
inline the calculations.
• Thanks for sharing this, Brad, however, you made some incorrect assumptions:
There is no dependent texture reads in my shader either. Everything is uniform. Dependent texture reads mean that the texture coordinates of the fetch are affected by a previous texture fetch.
While moving the texture coordinate calculation to the vertex shader might help in some situations, on desktop GPUs the additional varying count would hurt the performance more than performing
the math in the fragment shader.
Also, the for loop should not hurt performance either as a proper GLSL compiler implementation should be able to unroll for loops with compile time constant conditions. At least, I can assure you
that this happens in case of desktop drivers.
• I used the term “dependent texture read” as defined by Apple and Imagination Technologies (the makers of the GPUs in iOS devices). For example, Apple’s OpenGL ES Programming Guide:
has the following language:
“Dynamic texture lookups, also known as dependent texture reads, occur when a fragment shader computes texture coordinates rather than using the unmodified texture coordinates passed into the
shader. Although the shader language supports this, dependent texture reads can delay loading of texel data, reducing performance. When a shader has no dependent texture reads, the graphics
hardware may prefetch texel data before the shader executes, hiding some of the latency of accessing memory.
It may not seem obvious, but any calculation on the texture coordinates counts as a dependent texture read. For example, packing multiple sets of texture coordinates into a single varying
parameter and using a swizzle command to extract the coordinates still causes a dependent texture read.”
Perhaps there is a difference in terminology here, but I’ve seen that anything that performs a calculation involving a texture coordinate causes a huge slowdown in the texture fetch in a fragment
shader on these mobile devices. This is even true for calculations involving constant offsets, like yours here. Moving the calculations from the fragment to the vertex shader, then providing the
results as varyings, took my version of your blur from 50 ms / frame to 2 ms / frame. I think this is one of those areas where the tile-based deferred renderers diverge from standard desktop
My comments about the for loops are also based on my profiling. The current shader compilers employed for mobile GPUs are not great at these kinds of optimizations yet, so we still need to
hand-tune things that should be taken care of for us, like unrolling loops. It’s unfortunate, but hopefully they’ll catch up to desktop compilers soon.
Mobile GPUs are interesting animals, and I’ve had to perform quite a few tweaks like this to get desktop shaders working well on them.
• It seems that they use the term a bit differently then. So that means that mobile GPUs fetch texture data before the shader execution?
Actually this is new thing to me, no such things happen on desktop GPUs.
No trackbacks yet. | {"url":"http://rastergrid.com/blog/2010/09/efficient-gaussian-blur-with-linear-sampling/comment-page-1/","timestamp":"2014-04-17T15:32:00Z","content_type":null,"content_length":"147699","record_id":"<urn:uuid:999eeb51-3b3c-448a-ac65-2e8b9270fe02>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00208-ip-10-147-4-33.ec2.internal.warc.gz"} |
External Influences on U.S. Undergraduate Mathematics Curricula: 1950-2000
Many of the most important curricular changes in undergraduate mathematics in the second half of the 20^th century appear to have been influenced by matters outside mathematics. In many cases these
were circumstances in other academic disciplines, but a variety of other external circumstances have been involved as well. We survey a selection of important changes and the external forces that
contributed to these changes. There is much more to the history of undergraduate mathematics during 1950-2000 we do not address, but the axis onto which we are projecting is an important one, and not
unrelated to current events.
In the era from 1950 to 2000 undergraduate teaching of mathematics in the United States was often responsive to external circumstances. This is an interesting phenomenon for a discipline that
sometimes claims it has often benefitted the outside world most by following its own internal logic and aesthetics. As will be seen below, the proposals and changes in curricula that demonstrate this
recent openness to the external environment are among the most important curricular changes that have occurred during the 1950-2000 era.
This paper is nowhere near a complete history of undergraduate mathematics teaching in the era under consideration – rather, it is a projection onto one important axis. For one thing, we do not
consider internal influences impinging on the individual curricular issues considered. (Not, of course, because they were insignificant. We make no claims about their significance relative to the
external influences.) Furthermore, we concentrate on impersonal forces, saying only a little about the talented and hard-working leaders who guided our community through these eventful years.
Finally, by focusing entirely on curriculum, we ignore many other factors that have shaped our community, our activities and our students: faculty demographics and working conditions; the role of the
Advanced Placement program for high school students (as described by David Bressoud’s numerous columns on calculus and the Advanced Placement program in 2010 at [6]); the role of organizations such
as the Mathematical Association of America (MAA); the role of journals serving the community; activities related to employment and development of young faculty (the employment register, Project
NExT); and so on.
A final disclaimer concerns bias and memory. Having lived through much of this era, we started with a tendency to think that we already understood what happened and why and whether it was a good
thing. However, by putting our preferences aside and subjecting our opinions to evidence, we have often surprised ourselves. We hope our work will, likewise, lead some readers to a deeper
understanding of our profession. Of course, in historical research, one can never declare that all the evidence is in and, in this spirit, we invite correspondence from readers.
Our work is only a part of what could be a wider investigation in the history of mathematics. It should be supplemented by a similar investigation of teaching in the first half of the 20^th century.
Would such investigation suggest that high sensitivity to the environment surrounding mathematics instruction in colleges is a timeless feature of American mathematics, or were the years from 1950 to
2000 unusual? Another aspect of the wider problem would compare the impulses expressed in college teaching to those expressed in research. It has been argued by Jeremy Gray [18], that mathematical
research underwent a transformation in the decades around 1900 toward modernism, a point of view marked by coolness toward applications and toward the world outside mathematics generally. Gray’s
investigation did not reach into the second half of the 20^th century. When the multiple strands of research in that second half-century are examined, what balance between modernist and countermodern
impulses will we find? And will it appear that there is some linkage between teaching and research in these matters?
Figure 1. Schematic overview of the present article, "External Influences on U.S. Undergraduate Mathematics Curricula: 1950-2000." External influences are shown at left and new components and
characteristics of the undergraduate mathematics curriculum at right.
Figure 1 is a schematic view of this paper. Our judgment is that all but one of the curricular thrusts in the right hand side of Figure 1 were of considerable importance in what went on in classrooms
by the end of the 20^th century. The exception is Universal Mathematics, a proposal that was unsuccessful, despite having been devised or supported by some of the most accomplished leaders of our
community at the time. We include it because of its eminent pedigree and because it reveals dynamics of the community at that time and how these dynamics differed from more recent ones because of
external influences.
The many short arrows leading from Student choice (enrollment) in Figure 1 call for comment. It seems axiomatic that anytime one tries to improve a curriculum one hopes it will draw in more students.
But it is surprisingly rare to find explicit comments about this motivation in print – thus the arrows are short and a bit vague about their destinations.
┃ Florian Cajori (1859-1930) was a mathematics historian and educator who surveyed undergraduate mathematics curricula in the U.S. for his 1890 book, The teaching and history of mathematics in the ┃
┃ United States. The Cajori Two Project, led by the author of this article, has carried his work into the 20th century. Cajori also served as president of the MAA during 1917. (Photo source: ┃
┃ Convergence Portrait Gallery) ┃
There are some curricular changes that do not appear on the right of Figure 1 because they do not seem to have been propelled by external influences to any significant degree. For example, we do not
discuss the modernization of the curriculum through the introduction of Modern Algebra, Linear Algebra, and more abstract and rigorous analysis. There are schools where this started before 1950, as
we can see from the Cajori Two Project, a survey of undergraduate mathematics courses in the U.S. in the 20^th century [40], but most of the change took place in the 1950s and 1960s. This
far-reaching change seems to us to have been motivated largely from mathematical research and by the desire to better prepare students for graduate studies in mathematics. Research in modern and
linear algebra and modern forms of analysis had been flourishing in the late 19^th and early 20^th centuries and the gap between undergraduate and graduate studies in these areas had widened.
Impersonal forces propelling proposals for change are sometimes broad, encouraging the general process of change, and sometimes particular, encouraging just one or a few proposals for change.
Foundation money, especially from the National Science Foundation (NSF), was undoubtedly a broad force for change [47], symbolized by the friendly cloud in Figure 1. The NSF was founded in 1950 with
a very broadly stated mission: "To promote the progress of science; to advance the national health, prosperity, and welfare; and to secure the national defense" [46]. Its budget for Education and
Human Resources, of which support for curricular innovation is a part, rose from $0 in fiscal year 1951 to $727.6 million in 2000.
This growth in federal funding for science, mathematics and engineering had a large effect on mathematics education. As an example, starting in 1976, it became possible for a non-profit mathematics
curriculum development organization, the Undergraduate Mathematics Applications Project (UMAP) and, later in 1980, its successor the Consortium for Mathematics and its Applications Project (COMAP),
under the direction of Solomon (Sol) Garfunkel, to sustain itself in large part through its work on NSF grants. COMAP’s activities would eventually involve a wide variety of undergraduate mathematics
courses, especially the popular lower division courses. COMAP had an especially large impact on re-introducing applications to the curriculum.
For another glimpse of how the growth of external funding has altered our professional world, we turn to a brief description of an early proposal for change so that we may examine how starkly it
differed from a modern curricular thrust such as calculus reform. | {"url":"http://www.maa.org/publications/periodicals/convergence/external-influences-on-us-undergraduate-mathematics-curricula-1950-2000?device=mobile","timestamp":"2014-04-20T01:38:15Z","content_type":null,"content_length":"33494","record_id":"<urn:uuid:11b1d1d1-7d5d-42ac-8d34-0986e783fb9a>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nifty Assignments - SIGCSE 2009
Encryption and the Enigma Machine
Nifty Assignments - SIGCSE 2009
│ Summary │These three independent but related assignments center on encryption, building to a software simulation of a complete Enigma machine. │
│ Topics │The context for these assignments is encryption. Students are introduced to the role of encryption in military and computing history, the historical significance of the Enigma machine,│
│ │and it internal workings. The programming components involve class modification, class design and implementation, string manipulation, and simple GUI design. │
│ │The first two assignments could easily be given in a CS1 course, the first fairly early as it only involves making modifications to existing classes. The second assignment requires │
│ Audience │students to design and implement classes from scratch, and so might come later in CS1. The third assignment is significantly more complex, with non-trivial design decisions involving │
│ │interacting classes. It could be assigned toward the end of an advanced CS1 course, or more likely in CS2. │
│ │Students prefer assignments with a real-world context to those they perceive as "artificial" or "made-up." In addition, each of these assignments can involve a hands-on activity, │
│ Strengths │building a paper model of the encryption scheme being discussed. Not only can this be a fun diversion in class, but the physical model can help students to understand the system they │
│ │are assigned to implement in software. │
│ Weaknesses │The second and third assignments require students to design and implement programs from scratch. Design guidance may need to be provided, depending on timing and the ability of │
│ │students. Also, instructors should be aware that there are Enigma simulator programs available on the Web. Most are highly sophisticated, and are easily spotted if copied by students. │
│ │All three of these assignments require students to be familiar with class structure and String manipulation methods. The second and third require increasing degrees of class design │
│Dependencies│maturity, as well as GUI design. If desired, the instructor could provide a GUI class to students to simplify that component. The three assignments are independent, but are related in │
│ │terms of content area. If desired, the instructor could assign two or all three assignments in the same class. │
Cryptography has played an important role in the history of computing, from motivating the development of the first electronic computer to enabling secure Web-based communication and commerce.
Substitution ciphers, such as the Caesar cipher, are simple to understand yet form the basis of many modern encryption tools, such as the Enigma machine used in World War II.
Below are three independent but related assignments involving substitution ciphers. The first assignment could be given early in a CS1 course, after String methods have been introduced. Students are
given a class for encoding/decoding text using the Caesar cipher, which they must then modify to make it more robust and powerful (by allowing different substitution keys and subsequently rotating
the key after each encoding). The second and third assignments extend the idea of a rotating substitution cipher, requiring students to design and implement classes for modeling an Enigma machine.
The first of these involves a simplified model of an Enigma machine, using multiple interconnected, rotating substitution keys. The last is a more complex but historically accurate model of an
Enigma. To help students visualize the workings of the machine, they first build a working model out of paper using the Do-It-Yourself Enigma Machine.
These assignments are "nifty" in that they combine class design, String manipulation, and GUI implementation with a broader historical context. In addition, building a working Enigma model with
scissors and tape is a fun hands-on activity for the classroom.
1. A Rotating Substitution Cipher
This assignment briefly describes substitution ciphers and their significance in commercial and military history. Students are provided with two working classes, a class that implements a simple
Caesar cipher and an accompanying driver class for encoding/decoding text files. They are then asked to generalize this code to handle capitals and non-letters and to allow different substitution
keys. The idea of strengthening codes by adding rotation to the substitution key is then introduced, and students must add this new feature to their code.
2. A Simplified Enigma Model
This assignment briefly describes the historical significance of the Enigma machine, and presents a simplified model of its behavior in the form of a rotating, 3-ring cipher disk. It also directs
students to a paper template from which they can build their own cipher disk. This physical model can be helpful in understanding the behavior of the rotating cipher disks and the encoding process.
Once students are familiar with the behavior of the model, they are asked to design and implement a software simulation of the 3-ring cipher disk.
3. A Complete Enigma Model
This assignment briefly describes the historical significance of the Enigma machine and its internal workings, with a link to more detailed information at Wikipedia. It also directs students to the
Do-It-Yourself Enigma machine, a 3-dimensional paper model of the Enigma machine that they can build themselves. This physical model can be very helpful in understanding the complex behavior of the
Enigma, and makes an engaging hands-on activity in class. Once students are familiar with the behavior of the model, they are asked to design and implement a software simulation of the Enigma
Be sure to check out my Nifty Panel Talk (ppt/pdf).
Extra info about this assignment: | {"url":"http://nifty.stanford.edu/2009/reed-enigma/","timestamp":"2014-04-19T19:33:29Z","content_type":null,"content_length":"7589","record_id":"<urn:uuid:96ea0c6e-9cb0-400b-92e8-0958f7792267>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
Energy from the Sun
ACS Climate Science Toolkit | Energy Balance
Although much hotter on the inside, we can closely approximate the surface of the sun, from which its emission occurs, as a black body at a temperature of about 5800 K. The Stefan-Boltzmann equation
then gives the energy flux emitted at the sun’s surface.
S[S] = (5.67 × 10^–8 W·m^–2·K^–4)(5800 K)^4 = 63 × 10^6 W·m^–2
The surface area of a sphere with a radius r is 4πr^2. If r[S] is the radius of the Sun, the total energy it emits is S[S]4πr[s]^2. As the radiation is emitted from this spherical surface, it is
spread over larger and larger spherical surfaces, so the energy per square meter decreases, as illustrated schematically in the diagram below.
The figure at the right compares the experimental solar emission curve observed outside the Earth’s atmosphere to the emission curve for a 5800 K black body located at the sun’s distance from the
Earth. The structure in the experimental curve is a result of absorption of some wavelengths by atoms and ions in the cooler layers outside the sun’s emitting surface.
When the energy emitted by the sun reaches the orbit of a planet, the large spherical surface over which the energy is spread has a radius, d[P], equal to the distance from the sun to the planet. The
energy flux at any place on this surface, S[P], is less than what it was at the Sun’s surface. But the total energy spread over this large surface is the same as the total energy that left the sun,
so we can equate them:
S[S]4πr[s]^2 = S[P]4πd[P]^2
S[P] = S[S](r[s]/d[P])^2
Values for the average planetary distances, d[P], and the corresponding S[P], calculated using r[s] = 700,000 km, are given in the table below.
Credit: American Chemical Society
When radiation from the sun reaches a planet, it does not strike all areas of the planet at the same angle. It strikes directly near the equator, but more obliquely near the poles. To find the amount
of energy entering the planetary atmosphere (if any) averaged over the entire planet, consider the diagram. The total amount of radiation incident on the planet (and atmosphere) is equal to the
amount the planet intercepts to cast the imaginary shadow shown in the diagram. That is, S[P]πr[P]^2. If the average energy flux over the area of the planet is S[ave], the total energy for the planet
is S[ave]4πr[P]^2. These two total energies must be equal, so: S[ave] = S[P]/4. These average fluxes are also included in the table. To find out how this incoming energy is connected to the
temperature of the planets see Predicted Planetary Temperatures.
Credit: Frank P. Incropera
Credit: American Chemical Society | {"url":"http://www.acs.org/content/acs/en/climatescience/energybalance/energyfromsun.html","timestamp":"2014-04-20T03:41:08Z","content_type":null,"content_length":"37480","record_id":"<urn:uuid:883bf5a7-aade-4ad0-907c-4caafb46ea4e>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00016-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Solve by reduction of order: yy''=3y'^2 The answer the book has is y=(c1*x+c2)^(-1/2) I can't figure out how to put it in standard form to get the above answer.
• 6 months ago
• 6 months ago
Best Response
You've already chosen the best response.
do you have a solution ?
Best Response
You've already chosen the best response.
No, I'm not sure how to go about doing this problem :/ I know that for a homogeneous 2nd order ODE, the solution is y=c1y1+c2y2 but since the above equation cannot be put in standard form I can't
see how to solve it. I should add that the problem wants me to use reduction of order, where I first pick a y1 solution by inspection, then get y2 using: \[y_2=y_1u=y_1\int\limits_{}^{}U\;dx\]
where \[U=\frac{ 1 }{ y^2 }e^{-\int\limits_{}^{}p\;dx}\] I can do it for a regular problem but I'm lost on this one...
Best Response
You've already chosen the best response.
so first we need to find \(y_1\), a solution to the equation, i'm not sure how to do this
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/524b433ce4b06f86e820eadf","timestamp":"2014-04-19T15:11:12Z","content_type":null,"content_length":"32961","record_id":"<urn:uuid:589bc5bf-d274-4984-ae5e-335a5dcb1ba3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00559-ip-10-147-4-33.ec2.internal.warc.gz"} |
Milwaukee, WI Math Tutor
Find a Milwaukee, WI Math Tutor
...Got an 11 on physical science section of MCAT. Scored a 96 when I took it 6 years ago after high school. I tutor all the biology courses at UW-Milwaukee.
46 Subjects: including precalculus, SAT math, GRE, GED
...I grew up in the UK and came to the US in 1999 to start graduate school. I now live with my wife and two young daughters in Milwaukee, WI. I look forward to hearing from you and helping you to
succeed in your studies!I have undergraduate and graduate degrees in the chemical sciences.
5 Subjects: including algebra 1, chemistry, prealgebra, Spanish
...In addition, I have a working knowledge of Italian pronunciation, spelling and basic sentence structure. Although my major was Spanish, I also have experience in education, having taking
courses in the field of education and doing field work at Cass Street School. Currently I am an intern at th...
39 Subjects: including algebra 1, prealgebra, reading, Spanish
My name is Matt and I am a graduate of Marquette University with a Bachelor of Arts Degree in Broadcast and Electronic Communication with a minor in Mathematics. I work at the Milwaukee Journal
Sentinel, and MLB.com as a game scorer for the Brewers. I am very passionate about sports, math and personal development.
9 Subjects: including discrete math, video production, algebra 1, algebra 2
...I am a new WyzAnt tutor, and do not have an office to work out of. The best way to reach me would be through phone and email, however I would also be willing to travel to your school if need
be. I have a 3 hour cancellation policy, and make an effort to be flexible with students.
2 Subjects: including algebra 1, physical science
Related Milwaukee, WI Tutors
Milwaukee, WI Accounting Tutors
Milwaukee, WI ACT Tutors
Milwaukee, WI Algebra Tutors
Milwaukee, WI Algebra 2 Tutors
Milwaukee, WI Calculus Tutors
Milwaukee, WI Geometry Tutors
Milwaukee, WI Math Tutors
Milwaukee, WI Prealgebra Tutors
Milwaukee, WI Precalculus Tutors
Milwaukee, WI SAT Tutors
Milwaukee, WI SAT Math Tutors
Milwaukee, WI Science Tutors
Milwaukee, WI Statistics Tutors
Milwaukee, WI Trigonometry Tutors
Nearby Cities With Math Tutor
Brookfield, WI Math Tutors
Brown Deer, WI Math Tutors
Cudahy Math Tutors
Glendale, WI Math Tutors
Greenfield, WI Math Tutors
Menomonee Falls Math Tutors
New Berlin, WI Math Tutors
Racine, WI Math Tutors
River Hills, WI Math Tutors
Saint Francis, WI Math Tutors
Shorewood, WI Math Tutors
Wauwatosa, WI Math Tutors
West Allis, WI Math Tutors
West Milwaukee, WI Math Tutors
Whitefish Bay, WI Math Tutors | {"url":"http://www.purplemath.com/milwaukee_wi_math_tutors.php","timestamp":"2014-04-19T17:26:48Z","content_type":null,"content_length":"23756","record_id":"<urn:uuid:a87ea6e0-b34f-4827-a541-efbc862270aa>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00632-ip-10-147-4-33.ec2.internal.warc.gz"} |
A generalization of covering designs and lottery wheels
up vote 2 down vote favorite
This question is inspired by a recent problem . A $(v,k,t)$ covering design is a pair $(V,B)$ where $V$ is a set of $v$ points and $B$ is a family of $k$ point subsets (called blocks) such that every
t points from $V$ is in at least one common block. The La Jolla covering repository lists (among other things) the best known covering designs (judged by fewest blocks) for many $v \lt 100, k\le 25$
$t \le 8$. In this question I am only interested in $t=2.$
To avoid some trivialities later I'd like to relax the definition and allow blocks to have less than $k$ points. For example $B=\lbrace 12345,12678,13458,34567 \rbrace$ is an optimal $(8,5,2)$
covering design under the usual definition but under the relaxed definition would not be optimal because the first block can be reduced to $2345$ and still leave all pairs covered.
The question raised in the recent problem was to find a minimal family of 5 point blocks from a 50 point set such that each of the $\binom{50}{5}$ 5-point sets has at least one pair in at least one
common block. I manage to get it to $42$ blocks and Douglas Zare to 37 blocks. The technique was to partition the points into $4$ groups, choose a good $ (v,5,2)$ covering design for each and pool
these blocks. My question is if it is obvious that this is the way to get the best solutions.
update The answer is that at least in this case one can get from 37 blocks to 36 by taking four optimal covering designs on 11,11,11, and 17 points, each with a deficient 3 point block, and then
replacing these blocks abc def ghi jkl by abcjk defjl ghikl. I'm revising the question to account for this.
To introduce some ungainly definitions: Let a $(v,k,[j,2])$ lottery wheel (an $LW(v,k,[j,2]$) be a pair $(V,B)$ with $V$ a set of $v$ points and $B$ a family of blocks, each with at most $k$ points,
so that out of every $j$ points there is at least one pair which is in at least one common block. Call such a design optimal if there is no $LW(v,k,[j,2])$ with fewer blocks and there is no way to
delete a point from a single block and have the reduced design still be an $LW(v,k,[j,2])$. Also, let a partitioned $(v,k,[j,2])$ lottery wheel (a $PLW(v,k,[j,2])$) be an $LW(v,k,[j,2])$ whose points
can be partitioned into $j-1$ (or less, but at least 2) classes such that for each class
• Every pair of points from that class is covered by a block and
• The restriction of the blocks to that class forms an optimal $(v_i,k,2)$ covering design up to possibly merging deficient blocks into a single block (as in the example above)
Alternate description of a $PLW(v,k,[j,2]:$ Take $j-1$ (or less, but at least 2) disjoint $(v_i,k,2)$ covering designs with a total of $v$ points between them and pool their blocks. One is allowed
certain additional steps provided that there are deficient blocks which will permit it: One can replace two deficient blocks by their union if this union has no more than $k$ points. One can also
remove a block of $k'$ points, cover its pairs with a tiny $(k',?,2)$ covering design and then allocate the blocks of that tiny design among various deficient blocks.
Are there any non-trivial $v,k,j$ so that there is an optimal $(v,k,[j,2])$ lottery wheel which is not partitioned? Is there any non-trivial $v,k,j$ so that no optimal $(v,k,[j,2])$ lottery wheel
is partitioned?
These questions can also be asked for ($v,k,t)$ with $t>2,$ but I will leave it as $t=2$ for now.
Terminology note. The name lottery wheel (design) is known (and used for many things) so I'm utilizing it here. The idea (as I'll distort it) is that some lottery chooses a set $J$ of $j$ numbers out
of $v$. A player can make one or more $k$ number bets. A bet wins if it has at least $t$ members in common with $J$. A lottery wheel is an assortment of bets which attains some goal. In this case the
goal is to be sure of at least one winning bet with no consideration of the odds of winning multiple bets nor of getting more than $t$ correct. I hasten to add that my question is purely in designs
with no interest in lotteries. Note too that I allow a bet to have less than $k$ numbers.
co.combinatorics design-theory
I recall a paper that had a link to it from LJCR which was coauthored by Greg Kuperberg and reviewed some existing and some new techniques for generating near-optimal covering designs. I recall
that there was some random element in picking the next block that helped improve on existing desogns for certain parameters. Perhaps Greg knows of even more recent developments? Gerhard "Ask Me
About System Design" Paseman, 2011.06.09 – Gerhard Paseman Jun 9 '11 at 9:49
How is a "semi-covering design" different from the standard definition of a lottery wheel or lotto design? Is "semi-covering design" used elsewhere? If not, then I think it would be better to use
one of the standard terms. – Douglas Zare Jun 9 '11 at 10:43
@Douglas You are right and I changed it. I don't really find a standard definition and most stuff is ads. But maybe I am not looking in the right place. – Aaron Meyerowitz Jun 9 '11 at 14:28
In the comments on my answer to the previous question, chous gave a link to a nonpartitioned $(50,5,2)$-wheel with $36$ blocks. I showed that the best partitioned wheel using the covering designs
in the La Jolla covering repository would have $37$ blocks. – Douglas Zare Jun 9 '11 at 19:43
Aha! Well, see above (or your old answer). It turns out that the answer to my question is no but there still may be something going on. – Aaron Meyerowitz Jun 10 '11 at 5:43
add comment
Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook.
Browse other questions tagged co.combinatorics design-theory or ask your own question. | {"url":"https://mathoverflow.net/questions/67334/a-generalization-of-covering-designs-and-lottery-wheels","timestamp":"2014-04-21T04:33:01Z","content_type":null,"content_length":"56103","record_id":"<urn:uuid:7574d52f-050d-40a5-8220-0d804a670492>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00431-ip-10-147-4-33.ec2.internal.warc.gz"} |
The smelting and forging of metals marks the beginning of civilization - the art of working metals was for thousands of years the major "high tech" industry of our ancestors.
Trial and error over this period of time lead to an astonishing degree of perfection, as can be seen all around us and in many museums. In the state museum of
Schleswig-Holstein in Schleswig, you may admire the damascene blades of our Viking ancestors.
Two kinds of iron or steel were welded together and forged into a sword in an extremely complicated way; the process took several weeks of an expert smith's time.
All this toil was necessary if you wanted a sword with better properties than those of the ingredients. The damascene technology, shrouded in mystery, was needed
because the vikings didn't know a thing about defects in crystals - exactly like the Romans, Greek, Japanese (india) Indians, and everbody else in those times.
You might enjoy finding and browsing through several modules to this topic which are provided "on the side" in this Hyperscript.
Exactly why metals could be plastically deformed, and why the plastic deformation properties could be changed to a very large degree by forging (and magic?) without changing the chemical
composition, was a mystery for thousands of years.
No explanation was offered before 1934, when Taylor, Orowan and Polyani discovered (or invented?) independently the dislocation.
A few years before (1929), U. Dehlinger (who, around 1969 tried to teach me basic mechanics) almost got there, he postulated so-called "Verhakungen" as lattice
defects which were supposed to mediate plastic deformation - and they were almost, but not quite, the real thing.
It is a shame up to the present day that the discovery of the basic scientific principles governing metallurgy, still the most important technology of mankind, did not merit a Nobel prize - but
after the war everything that happened in science before or during the war was eclipsed by the atomic bomb and the euphoria of a radiantly beautiful nuclear future. The link pays tribute to some of
the men who were instrumental in solving one of the oldest scientific puzzles of mankind.
Dislocations can be perceived easily in some (mostly two-dimensional) structural pictures on an atomic scale. They are usually introduced and thought of as extra lattice planes inserted in the
crystal that do not extend through all of the crystal, but end in the dislocation line.
This is shown in the schematic three-dimensional view of an edge dislocations in a cubic primitive lattice. This beautiful picture (from Read?) shows the inserted
half-plane very clearly; it serves as the quintessential illustration of what an edge dislocation looks like.
Look at the picture and try to grasp the concept. But don't forget
1. There is no such crystal in nature: All real lattices are more complicated - either not cubic primitive or with more than one atom in the base.
2. The exact structure of the dislocation will be more complicated. Edge dislocations are just an extreme form of the possible dislocation structures, and in most
real crystals would be split into "partial" dislocations and look much more complicated.
We therefore must introduce a more general and necessarily more abstract definition of what constitutes a dislocation. Before we do that, however, we will continue to look at some properties of
(edge) dislocations in the simplified atomistic view, so we can appreciate some elementary properties.
First, we look at a simplified but principally correct rendering of the connection between dislocation movement and plastic deformation - the elementary process of
metal working which contains all the ingredients for a complete solution of all the riddles and magic of the smith´s art.
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓ ┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓
┠───────────────────────────────────┨ ┠─────────────────────────────┨ ┠────────────────────────────────────────┨
┃ Generation of an edge dislocation ┃ ┃ Movement of the dislocation ┃ ┃ Shift of the upper half of the crystal ┃
┃ by a shear stress ┃ ┃ through the crystal ┃ ┃ after the dislocation emerged ┃
┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛ ┗━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┛
This sequence can be seen animated in the link
This calls for a little exercise
What the picture illustrates is a simple, but far-reaching truth:
┃ ┃
┃ Plastic deformation proceeds - atomic step by atomic step - by the ┃
┃ ┃
┃ generation and movement of dislocations ┃
┃ ┃
The whole art of forging consists simply of manipulating the density of dislocations, and, more important, their ability of moving through the lattice.
After a dislocation has passed through a crystal and left it, the lattice is complely restored, and no traces of the dislocation is left in the lattice. Parts of the crystal are now shifted in the
plane of the movement of the dislocation (left picture). This has an interesting consquence: Without dislocations, there can be no elastic stresses whatsoever in a single crystal! (discarding the
small and very localized stress fields around point defects).
We already know enough by now, to deduce some elementary properties of dislocations which must be generally valid.
1. A dislocation is one-dimensional defect because the lattice is only disturbed along the dislocation line (apart from small elastic deformations which we do not count as defects farther away from
the core). The dislocation line thus can be described at any point by a line vector t(x,y,z).
2. In the dislocation core the bonds between atoms are not in an equilibrium configuration, i.e. at their minimum enthalpy value; they are heavily distorted. The dislocation thus must possess
energy (per unit of length) and entropy.
3. Dislocations move under the influence of external forces which cause internal stress in a crystal. The area swept by the movement defines a plane, the glide plane, which always (by definition)
contains the dislocation line vector.
4. The movement of a dislocation moves the whole crystal on one side of the glide plane relative to the other side.
5. (Edge) dislocations could (in principle) be generated by the agglomeration of point defects: self-interstitial on the extra half-plane, or vacancies on the missing half-plane.
Now we add a new property. The fundamental quantity defining an arbitrary dislocation is its Burgers vector b. Its atomistic definition follows from a Burgers circuit around the dislocation in the
real crystal, which is illustrated below
Left picture: Make a closed circuit that encloses the dislocation from lattice point to lattice point (later from atom to atom). You obtain a closed chain of the base vectors which define the
Right picture: Make exactly the same chain of base vectors in a perfect reference lattice. It will not close.
The special vector needed for closing the circuit in the reference crystal is by definition the Burgers vector b.
It follows that the Burgers vector of a (perfect) dislocation is of necessity a lattice vector. (We will see later that there are exceptions, hence the qualifier "perfect").
But beware! As always with conventions, you may pick the sign of the Burgers vector at will.
In the version given here (which is the usual definition), the closed circuit is around the dislocation, the Burgers vector then appears in the reference crystal.
You could, of course, use a closed circuit in the reference crystal and define the Burgers vector around the dislocation. You also have to define if you go clock-wise or counter clock-wise around
your circle. You will always get the same vector, but the sign will be different! And the sign is very important for calculations! So whatever you do, stay consistent!. In the picture above we
went clock-wise in both cases.
Now we go on and learn a new thing: There is a second basic type of dislocation, called screw dislocation. Its atomistic representation is somewhat more difficult to draw - but a Burgers circuit is
still possible:
You notice that here we chose to go clock-wise - for no particularly good reason
If you imagine a walk along the non-closed Burges circuit, which you keep continuing round and round, it becomes obvious how a screw dislocation got its name.
It also should be clear by now how Burgers circuits are done.
But now we will turn to a more formal description of dislocations that will include all possible cases, not just the extreme cases of pure edge or screw dislocations.
© H. Föll (Defects - Script) | {"url":"http://www.tf.uni-kiel.de/matwis/amat/def_en/kap_5/backbone/r5_1_1.html","timestamp":"2014-04-17T21:22:43Z","content_type":null,"content_length":"25272","record_id":"<urn:uuid:d09f3c9e-ed85-4eb9-bd15-4989041971ee>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00481-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
Find the measures of an exterior angle and an interior angles given the number of sides of each regular polygon. Round to the nearest tenth, if necessary. Number of sides: 16
• one year ago
• one year ago
Best Response
You've already chosen the best response.
So I started out by doing (n-2)(180) And I substituted n for 16 because n is the number of sides. so.. (16-2)(180) 14(180) =2520
Best Response
You've already chosen the best response.
And then I tried to find out what the measure of the interior angles was.
Best Response
You've already chosen the best response.
Int angle=(n-2)* 180/n
Best Response
You've already chosen the best response.
ext angle=360/n
Best Response
You've already chosen the best response.
Now just divide by the number of sides
Best Response
You've already chosen the best response.
Yes that!! @ksaimouli I thought about that!!
Best Response
You've already chosen the best response.
R u sure? @zaynahf for some reason I thought it was wrong to do that.
Best Response
You've already chosen the best response.
then u forgot to divide /n or number of sides in this case 16 i think
Best Response
You've already chosen the best response.
Lemme seee
Best Response
You've already chosen the best response.
2520/16=157.5 So are you saying that 157.5 is the measure of an interior angle?
Best Response
You've already chosen the best response.
*SOrry, not the # of sides
Best Response
You've already chosen the best response.
16-2= 14 2520/14
Best Response
You've already chosen the best response.
And then.. 360/16 =22.5 for exterior angle?
Best Response
You've already chosen the best response.
Oh! 14 for the triangles?
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
That makes each side become 180 for interior!?! whaaa...makes no sense..
Best Response
You've already chosen the best response.
I know, lol i just messed up
Best Response
You've already chosen the best response.
oh..so then how do I do this?
Best Response
You've already chosen the best response.
Sorry, its been a long time -_- Here, i found this: Exterior angles always add up to 360 no matter how many sides. So to find an interior angle, it's easiest to find the exterior angle and
subtract it from 180 since interior + exterior = 180 So 1 16-gon: 360 / 16 = 22.5; 180 - 22.5 = 157.5 degrees
Best Response
You've already chosen the best response.
You were right :)
Best Response
You've already chosen the best response.
Best Response
You've already chosen the best response.
Ok good :P Sorry again.. i forgot that stuff
Best Response
You've already chosen the best response.
Lol its ok!
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/50eb6792e4b07cd2b648aad5","timestamp":"2014-04-16T07:49:35Z","content_type":null,"content_length":"80845","record_id":"<urn:uuid:2c03e8ed-64d4-4574-9596-fa230389bf80>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
Computer Arithmetic
The ALU is that part of the computer that actually performs arithmetic and logical operations on data. All of the other elements of the computer system—control unit, registers, memory, I/O—are there
mainly to bring data into the ALU for it to process and then to take the results back out. We have, in a sense, reached the core or essence of a computer when we consider the ALU. An ALU and, indeed,
all electronic components in the computer arc based on the use of simple digital logic devices that can store binary digits and perform simple Boolean logic operations.
Figure 3.1 indicates, in general terms, how the ALU is interconnected with the rest of the processor. Data are presented to the ALU in registers, and the results of an operation are stored in
registers. These registers are temporary storage locations within the processor that are connected by signal paths to the ALU. The ALU may also set flags as the result of an operation. For example,
an overflow flag is set to 1 if the result of a computation exceeds the length of the register into which it is to be stored.
Figure 3.1 ALU Input and Output | {"url":"http://cnx.org/content/m29343/1.1/","timestamp":"2014-04-17T22:14:54Z","content_type":null,"content_length":"112120","record_id":"<urn:uuid:1832e771-5f5d-4848-af7b-d01a8922415a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00241-ip-10-147-4-33.ec2.internal.warc.gz"} |
Interesting one!
November 12th 2008, 02:57 AM #1
Interesting one!
there are $100$ ties numbered from $1-100$. all ties numbered with a square are taken out. the remaining ties are renumbered from $1$ onwards. again all ties numbered with square numbers are
taken out. the remaining tie are renumbered again $1$ onwards. find a general formula for $n^2$ number of ties for how many times square numbered ties were taken out till only a tie numbered $1$
100-10-9-9-8-8......-2-2-1=1 so ,18 times.
Well done, repcvt but you need to explain
Here is a clue
1,2,3,4....100---->(10 squares are there)
1,2,3.......90---> (9 squares are there )
1,2,3.......81---->(9 squares are there)
1,2,3.......72---->(8 squares are there)
1,2,3.......64----->(8 squares are there)
1,2,3.......56----->(7 squares are there)
1,2,3.......49----->(7 squares are there)
1,2--->(1 square left)
watch the three steps in red
(n^2) - (n) - (n-1)= (n-1)^2
this will do
feel free to ask if trouble persists
November 12th 2008, 10:39 PM #2
Junior Member
Oct 2008
November 13th 2008, 05:46 AM #3 | {"url":"http://mathhelpforum.com/algebra/59137-interesting-one.html","timestamp":"2014-04-18T06:41:46Z","content_type":null,"content_length":"36360","record_id":"<urn:uuid:aee7de8e-5bf1-4202-b64d-664ca259dc8f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00112-ip-10-147-4-33.ec2.internal.warc.gz"} |
News from the world of maths: Einstein is proved right (16/01/2006)
Monday, January 16, 2006
Einstein is proved right (16/01/2006)
Unlike human celebrities, equations have to prove their merit to be famous. Now Einstein's e=mc^2 has done just that. An experiment conducted by US researchers from the National Institute of
Standards and Technology and the Massachusetts Institute of Technology, showed that e differs from mc^2 by at most 0.0000004, proving beyond doubt that the equation is indeed correct.
The equation, which just celebrated its 100th birthday, is part of Einstein's special theory of relativity and relates energy (e) to mass (m) and the speed of light (c). The recent experiment is the
most precise and direct test of the equation ever conducted. Its results were published in the journal Nature. You can find out more about the experiment in this news release. To see special
relativity in action, read Plus article What's so special about special relativity?. Another Plus article, Spinning in space, describes how the general theory of relativity is put to the test.
posted by Plus @ 1:20 PM
0 Comments: | {"url":"http://plus.maths.org/content/news-world-maths-einstein-proved-right-16012006","timestamp":"2014-04-20T00:41:38Z","content_type":null,"content_length":"20004","record_id":"<urn:uuid:129d48eb-9d6a-4ac6-a10c-52cbd0f84a7b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00362-ip-10-147-4-33.ec2.internal.warc.gz"} |
Metaprogramming Issue
10-07-2007 #1
Registered User
Join Date
Oct 2007
Metaprogramming Issue
I am currently having an issue with C++ metaprogramming system. To be more specific, the issue is about partial template specialisation.
Suppose I define a template as below:
template<typename T> struct W {...};
I can specialise W in many different ways. Some examples follow:
template<> struct W<int> {...};
template<typename T> struct W<T*> {...};
template<typename T> struct W<const T* const &> {...};
I assume it is possible to specialise W (even partially) for any type, may it be defined by myself or not.
Nevertheless, I am not being able to specialse it for inner classes of template structures. For instance, suppose I define:
template<typename T> struct X {
struct Y {...};
How do I get to specialise W for Y? I tried writting
template<typename T> struct W<typename X<T>::Y> {...};
without much success.
Can someone help me with this?
Thanks in advance.
You can't do that, for quite a number of reasons.
First, I'm getting a headache just trying to think my head around the possibilities. I'm pretty sure that qualifies as a reason
Second, to the compiler, Y is merely a nested type specifier. It might be a typedef. And if it's a typedef, you've got two problems. First, it may not actually be dependent (that is, Y is the
same no matter what T is) and the specialization is moot. Second, it may be ambiguous, because if the T of the original template could be int, and any number of X instantiations and
specializations may have typedef'd Y to int.
Third, the effort for the compiler is not justified. It would actually have to instantiate X with every possible type to see if the nested Y fits the one it has.
And fourth, the whole thing is quite impossible, because of reason #3. You can't know every possible type, because there's an infinite set of them.
You may say, well, at least it could work for actual inner types of X, say:
W< X<int>::Y > w;
Clearly, T is int, there, right?
Not right. I'll show you a pathological case where even this is ambiguous, but first let me rewrite your code to give the various template parameters distinctive names.
template<typename WT> struct W {...};
template<typename XT> struct X {
struct Y {...};
template<typename WXT> struct W<typename X<WXT>::Y> {...};
Now we can talk more easily.
So, you have the above code:
W< X<int>::Y > w;
Here, WT is X<int>::Y, so XT is int, so the W specialization should be used with WXT deduced to int, right? But remember, X can be specialized too. Consider this specialization:
template <>
struct X<short>
typedef X<int>::Y Y;
Ah, now X<short>::Y and X<int>::Y are the same. What should WXT be now, short or int?
For that matter, X<user_defined_type_that_just_happens_to_be_here>: :Y is a typedef for X<int>::Y too.
By the way, this is the same reason why template argument deduction of function templates doesn't work in similar cases:
template <typename T>
void foo(X<T>::Y y); // T must always be explicitly specified
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
First of all, thanks for answering.
I see the types can collide because there is no bijection between the XT and Y types.
But what if the compiler forcefully instances a new Y type for every X. I am willing to tolerate the overhead.
I just felt that was incorrect because the compiler did not even return a warning when I tried to compile the code above.
The reason I was using this construct is type traits. In the STL, the iterators are in similar situation to the Y struct, so how come the STL can specialise the iterator_traits class for every
possible iterator?
Can I implement traits differently?
Thanks in advance.
Iterator traits aren't specialized for "any type called iterator that is a member of something". std::iterator_traits looks like this:
template <typename It>
struct iterator_traits
typedef typename It::category category;
typedef typename It::value_type value_type;
typedef typename It::difference_type difference_type;
typedef typename It::pointer pointer;
// ...
template <typename T>
struct iterator_traits<T *>
typedef random_access_tag category;
typedef T value_type;
typedef ptrdiff_t difference_type;
typedef T *pointer;
typedef T &reference;
// ...
As you can see, the only specialization is for pointers. All other types either have the nested typedefs for iterator_traits to copy, or they fully specialize iterator_traits themselves. (But
note that applications may not partially specialize iterator_traits.)
But what if the compiler forcefully instances a new Y type for every X.
What do you mean?
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
But what if the compiler forcefully instances a new Y type for every X.
What do you mean?
I mean I wish the compiler could differ X<int>::Y from X<short>::Y, even if Y does not take the template parameter into consideration.
I have been searching around and figured out a partial solution to the matter. What if I could detect at compile time if the template type T is a struct (I can through the T::* work-around) and
then use the T::stuff for getting the trait? If T is not a struct, I get traits<T>::stuff to get the trait. The problem with this is verboseness, so I will have to figure out nice macro names.
Thank you for the help.
I can through the T::* work-around
No, you still can't distinguish between a genuine struct and a typedef for a struct.
What specific application do you have in mind?
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Oh, no, I did not mean that.
I mean there is a way do distinguish, in compile-time, a struct (or a typedef of a struct) against an int or any pre-built type.
The application I have in mind is a template library. I want to define type traits in order to resolve some information compile time, so that it will be possible to eliminate overheads.
Boost's type_traits have an is_primitive metafunction.
You definitely should look at type_traits, enable_if and MPL, all from Boost, if you're doing any metaprogramming.
All the buzzt!
"There is not now, nor has there ever been, nor will there ever be, any programming language in which it is the least bit difficult to write bad code."
- Flon's Law
Thank you for the help, I will give a look.
10-07-2007 #2
10-07-2007 #3
Registered User
Join Date
Oct 2007
10-08-2007 #4
10-08-2007 #5
Registered User
Join Date
Oct 2007
10-08-2007 #6
10-08-2007 #7
Registered User
Join Date
Oct 2007
10-08-2007 #8
10-08-2007 #9
Registered User
Join Date
Oct 2007 | {"url":"http://cboard.cprogramming.com/cplusplus-programming/94336-metaprogramming-issue.html","timestamp":"2014-04-19T05:12:30Z","content_type":null,"content_length":"76744","record_id":"<urn:uuid:662ed3df-d073-4c5a-a51a-64bcc7299302>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00539-ip-10-147-4-33.ec2.internal.warc.gz"} |
3D stress analysis calculation tool is developed to calculate principal stresses, maximum shear stresses, and Von Mises stress at a specific point for spatial stresses . Calculator can be used to
calculate out-plane shear stress for plane stress situation. Mohr's Circle for 3D stress analysis is also drawn according to input parameters.
The formulas used for the calculations are given in the List of Equations section.
Parameter Symbol Value Unit
Normal stress σ[x]
Normal stress σ[y]
Normal stress σ[z]
Shear stress τ[xy]
Shear stress τ[yz]
Shear stress τ[xz]
Note: Use dot "." as decimal separator.
Parameter Symbol Value Unit
Principal stress-1 σ[1] ---
Principal stress-2 σ[2] ---
Principal stress-3 σ[3] ---
Max shear stress -1 τ[max1] --- MPa
Max shear stress -2 τ[max2] ---
Max shear stress -3 τ[max3] ---
Von Mises stress σ[v] ---
Mohr’s Circle: A graphical method to represent the plane stress (also strain) relations. It’s a very effective way to visualize a specific point’s stress states, stress transformations for an angle,
principal and maximum shear stresses.
Normal Stress: Stress acts perpendicular to the surface (cross section).
Plane Stress: A loading situation on a cubic element where two faces the element is free of any stress. Such a situation occurs on free surface of a structural element or machine component, at any
point of the surface of that element which is not subjected to an external force. Another example for plane stress is structures which are built from sheet metals where stresses across the thickness
are negligible.
Plane stress example - Free surface of structural element
Principal Stress: Maximum and minimum normal stress possible for a specific point on a structural element. Shear stress is 0 at the orientation where principal stresses occur.
Shear stress: A form of a stress acts parallel to the surface (cross section) which has a cutting nature.
Stress: Average force per unit area which results strain of material.
List of Equations:
Parameter Symbol Formula
Characteristic polynomial equation - σ^3-Aσ^2+Bσ-C=0
Polynomial coefficient A =σ[x]+σ[y]+σ[z]
Polynomial coefficient B =σ[x]σ[y]+σ[y]σ[z]+σ[x]σ[z]-(τ[xy])^2-(τ[yz])^2-(τ[xz])^2
Polynomial coefficient C =σ[x]σ[y]σ[z]+2τ[xy]τ[yz]τ[xz]-σ[x](τ[yz])^2-σ[y](τ[xz])^2-σ[z](τ[xy])^2
Principal stress-1 σ[1] max(σ'[1],σ'[2],σ'[3])
Principal stress-2 σ[2] A-σ'[1]-σ'[2]
Principal stress-3 σ[3] min(σ'[1],σ'[2],σ'[3])
Max shear stress -1 τ[max1] (σ[2]-σ[3])/2
Max shear stress -2 τ[max2] (σ[1]-σ[3])/2
Max shear stress -3 τ[max3] (σ[1]-σ[2])/2
• Beer.F.P. , Johnston.E.R. (1992). Mechanics of Materials , 2nd edition. McGraw-Hill
• Budynas.R , Nisbett.K . (2008) . Shigley's Mechanical Engineering Design . 8th edition. McGraw-Hill | {"url":"http://www.amesweb.info/StressStrainTransformations/3DStressAnalysis/3DStressAnalysis.aspx","timestamp":"2014-04-19T04:19:49Z","content_type":null,"content_length":"46576","record_id":"<urn:uuid:1a5cf219-af02-40e4-b5ce-0d4a25032054>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00244-ip-10-147-4-33.ec2.internal.warc.gz"} |
Problem with totaling numbers that was spinned by a dice
03-04-2005 #1
Registered User
Join Date
Mar 2005
Problem with totaling numbers that was spinned by a dice
hi there,
I am writing a function that is suppose to calculate the total number of moves, a user gets by a spin of a dice. The game board has 25 spaces in all. Suppose on the first spin a user gets a 5, a
message should come up saying "You have just moved 5 spaces". If on the next spin you get a 4,a message should come up saying that "You have moved 9 spaces". Unfortunately the code is not doing
this. If the user got a 5 a message comes up saying " You have just moved 5 spaces". If a 6 comes up, the message says " You have just moved 6 spaces". Its not totaling it up.
/* Purpose of this function is to keep track of the total numbers of spaces
player2 moves in the game*/
int player1_move_spaces(int array1[],int die) //die represents the number that the user got when the dice was rolled
int num;
num = num + die;
printf("You have just moved %d spaces",num);
return num;
If you want to do it that way, you'll need to use a static variable.
int player1_move_spaces(int array1[],int die) //die represents the number that the user got when the dice was rolled
static int num = 0;
If I did your homework for you, then you might pass your class without learning how to write a program like this. Then you might graduate and get your degree without learning how to write a
program like this. You might become a professional programmer without knowing how to write a program like this. Someday you might work on a project with me without knowing how to write a program
like this. Then I would have to do you serious bodily harm. - Jack Klein
thanks for ur help pianorain but why did it make a difference?
The static keyword can mean different things depending on how it's used. I'm not up for discussing all of them. Using the static keyword when defining a variable in a function tells the function
not to delete that variable when it goes out of scope. Example:
int foo()
int a = 0;
return a;
} //this function will return 1 every time
//because a is deleted when it goes out of scope.
int bar()
static int a = 0;
return a;
} //this function will return an incremented value every time
//because a is not deleted when it goes out of scope.
If I did your homework for you, then you might pass your class without learning how to write a program like this. Then you might graduate and get your degree without learning how to write a
program like this. You might become a professional programmer without knowing how to write a program like this. Someday you might work on a project with me without knowing how to write a program
like this. Then I would have to do you serious bodily harm. - Jack Klein
Automatic variables local to a function are stored in the stack. When the function is called the stack grows to accomodate those variables. When the function returns the stack is shrunk back down
to its original position. Every time the function is called the process is repeated. Since those variables are getting recreated every time the function is called it doesn't remember what the
value of the variables was from the last call.
But static variables local to a function are stored in a different section of memory that remains intact throughout the life of the program. So even when the function returns and is called again,
it will remember the value from the last time the function was called.
You should also make sure you initialize variables. When the stack grows for the automatic variables it doesn't change the contents of that memory. So if you just have like:
void somefunc(void)
int num;
printf("%d\n", num);
It could print any integer. It doesn't necessarily get set to 0 unless your explicitly initialize it to that.
If you understand what you're doing, you're not learning anything.
thanks for all of ur inputs
You don't need a static variable. Just keep track of the total number of moved squares.
for( x = 0; x < num; x++ )
foo += array1[x];
printf("You've moved a total of %d spaces.\n", foo );
Hope is the first step on the road to disappointment.
03-04-2005 #2
Join Date
Feb 2002
03-04-2005 #3
Registered User
Join Date
Mar 2005
03-04-2005 #4
Join Date
Feb 2002
03-04-2005 #5
Gawking at stupidity
Join Date
Jul 2004
Oregon, USA
03-05-2005 #6
Registered User
Join Date
Mar 2005
03-05-2005 #7 | {"url":"http://cboard.cprogramming.com/c-programming/62623-problem-totaling-numbers-spinned-dice.html","timestamp":"2014-04-17T19:24:46Z","content_type":null,"content_length":"65551","record_id":"<urn:uuid:3a085292-d4c6-4573-87da-e28f97ef052c>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00586-ip-10-147-4-33.ec2.internal.warc.gz"} |
"Savings Lost": The True Cost of Zero Interest Rate Policy
Savings Lost: Update
on the True Cost of ZIRP
By Chris Turner
March 6, 2013
Stanley Druckenmiller spent some on CNBC discussing the malinvestment and misallocation of resources due to the interventionist policies of the Fed. See an excerpt here.
For some additional perspective on the topic, let's revisit the financial impact from Zero Interest Rate Policy (ZIRP) as it applies to responsible savers. We just need to make a couple adjustments
to my 2011 post "Savings Lost" The True Cost of Zero Interest Rate Policy. To assist in calculations and charting, the following data sets provided ample information:
Whether or not the the omnipotent FOMC truly understands all, they clearly understood the impact of arbitrarily lowering the Fed Funds rate. Consult the chart below to see the historical relationship
between total savings and the amount of interest income earned on the savings.
Note that prior to 2001, as savings increased (blue line), interest income received increased (red line). After 2001, however, the interest earned stopped increasing. The green line shows the impact
of the Fed Funds rate on average savings interest rates on interest bearing accounts.
Scaling into the shaded area representing 1986 to present, the following chart depicts the actual Fed Funds rate determined by FOMC.
As savings increased when Fed Funds rate remained around 5%, interest income continued to rise. However, post 2001, the interest income received stopped growing at the same rate. With the exception
of 2005 to 2008 when rates went back to "normal" in the 5% range - the interest income earned has remained stable at just under 1 trillion.
Let's apply some thought experiments and make a couple calculations: What would happen if the FOMC were removed and the Fed Funds rate "floated?" If we look at the historical rates from the 1920's
for the 10 year note, we the mean (average) rate would sit around 5.82%. With a floating Fed Funds rate, banks would be competing for money and providing responsible savers with some interest income.
By calculating the estimated interest income from historical ratios (orange shaded area), we can see that as of 2012, approximate interest income would be close to 3 trillion on savings of 6.7
trillion. Whereas the actual interest income reported by NIPA remained at 1.1 Trillion, the difference in interest received and lost interest equals roughly 2 Trillion. Remember, this is interest
income to SAVERS forever lost since 2001. By summing the entire shaded orange area, SAVERS have missed out on 9.9 Trillion in earned interest. The final chart above clearly illustrates who is NOT a
beneficiary of the low interest rate environment.
Remember, if you have a question or comment, send it to . | {"url":"http://advisorperspectives.com/dshort/guest/Chris-Turner-130306-Savings-Lost.php","timestamp":"2014-04-16T16:27:13Z","content_type":null,"content_length":"17985","record_id":"<urn:uuid:223009b1-163e-4a2a-8110-62e28c80534a>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00311-ip-10-147-4-33.ec2.internal.warc.gz"} |
What are the results to date and the future of this work?
We (Murray & Armitage, MNRAS, 1997, submitted) have investigated the importance of tidal resonance in generating warps or tilts in accretion discs in binary systems. We found dynamical forces to be
too weak to generate a tilt in a fluid disc, but strong enough to disturb a disc of non-interacting particles.
We also (Armitage & Murray) completed simulations to explain spiral structure observed in the accretion disc of IP Pegasi. This system is a close binary composed of a white dwarf with a low mass
stellar companion. This work is now being extended to systems in which the accretion disc is partially disrupted by magnetic fields.
Murray, Ferrario and Wickramasinghe have completed several simulations of optically thin, bremsstrahlung cooled discs. The simulations show the discs to be thermally unstable. Further simulations are
planned with optically thin cooling laws valid for larger temperature ranges.
What computational techniques are used?
The SPH code being used is described in detail in Murray, 1996, Monthly Notices of the Royal Astronomical Society, 279, pp 402-414. A typical calculation must run for several viscous time scales
whilst accurately following motion on the much shorter time scale imposed by the gravitational potential of the binary star system. Such calculations typically take approximately one or two weeks on
a DEC Alpha AXP 3000/500S work station. | {"url":"http://anusf.anu.edu.au/annual_reports/annual_report97/I-Murray.html","timestamp":"2014-04-24T03:17:52Z","content_type":null,"content_length":"7733","record_id":"<urn:uuid:d17f1447-26cc-4224-9e46-40526036332c>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00177-ip-10-147-4-33.ec2.internal.warc.gz"} |
Deriving the Michaelis-Menten Equation
This page is originally authored by Gale Rhodes (© Jan 2000) and is still under continuous update.
The page has been modified with permission by Claude Aflalo (© Jan 2000).
Memorize this derivation as soon as your encounter it in your text, and you will be able to read the remainder of the chapter with far greater understanding. For other suggestions on how to make your
study of biochemistry easier, see Learning Strategies.
A simple model of enzyme action:
In this model, the substrate S reversibly associates with the enzyme E in a first step, and some of the resulting complex ES is allowed to break down and yield the product P and the free enzyme back.
We would like to know how to recognize an enzyme that behaves according to this model. One way is to look at the enzyme's kinetic behavior -- at how substrate concentration affects its rate. So we
want to know what rate law such an enzyme would obey. If a newly discovered enzyme obeys that rate law, then we can assume that it acts according to this model. Let's derive a rate law from this
For this model, let v[0] be the initial velocity of the reaction. The latter stands for the appearance of the product P in solution (+ d[P]/dt) whose phenomenological rate equation (first-order) is
given by
v[0] = k[cat][ES] (2),
containing an experimentally measurable (dependent) variable - v[0], a kinetic parameter - k[cat], and another variable unknown to us - [ES].
│Before proceeding, one should state (and remember) some implicite assumptions: │
│ │
│ ● As long as initial velocity is considered, the concentration of product can be neglected (compared to that of the substrate, thus [P] << [S]), and │
│ ● The concentration of substrate is in large excess over that of the enzyme ([E] << [S]). │
│These assumptions, which hold in most kinetic experiments performed in test tubes at low enzyme concentration, are convenient when considering the mass conservation equations for the reactants│
│[S][0] = [S][free] + [ES] + [P] which now approximates to [S][0] = [S], │
│while that for the enzyme is │
│[E][total] = [E][free] + [ES] (the possible formation of a complex EP is not considered here). │
We want to express v[0] in terms of measurable (experimentally defined, independent) variables, like [S] and [E][total] , so we can see how to test the mechanism by experiments in kinetics. So we
must replace the unknown [ES] in (2) with measurables.
During the initial phase of the reaction, as long as the reaction velocity remains constant, the reaction is in a steady state, with ES being formed and consumed at the same rate. During this phase,
the rate of formation of [ES] (one second order kinetic step) equals its rate of consumption (two first order kinetic steps). According to model (1),
Rate of formation of [ES] = k[1][E][S].
Rate of consumption of [ES] = k[-1][ES] + k[cat] [ES].
So in the steady state,
k[-1][ES] + k[cat][ES] = k[1][E][S] (3)
Remember that we are trying to solve for [ES] in terms of measurables, so that we can replace it in (2). First, collect the kinetic constants, and the concentrations (variables) in (3):
(k[-1] + k[cat]) [ES] = k[1] [E][S],
and (4)
(k[-1] + k[cat])/k[1] = [E][S]/[ES]
To simplify (4), first group the kinetic constants by defining them as K[m]:
K[m] = (k[-1] + k[cat])/k[1] (5)
and then express [E] in terms of [ES] and [E][total], to limit the number of unknowns:
[E] = [E][total] - [ES] (6)
Substitute (5) and (6) into (4):
K[m] = ([E][total] - [ES]) [S]/[ES] (7)
Solve (7) for [ES]:
First multiply both sides by [ES] (no Black Magic involvement here...):
[ES] K[m] = [E][total][S] - [ES][S]
Then collect terms containing [ES] on the left:
[ES] K[m] + [ES][S] = [E][total][S]
Factor [ES] from the left-hand terms:
[ES](K[m] + [S]) = [E][total][S]
and finally, divide both sides by (K[m] + [S]):
[ES] = [E][total] [S]/(K[m] + [S]) (8)
Substitute (8) into (2):
v[0] = k[cat][E][total] [S]/(K[m] + [S]) (9)
The maximum velocity V[max] occurs when the enzyme is saturated -- that is, when all enzyme molecules are tied up with S, or [ES] = [E][total]. Thus,
V[max] = k[cat] [E][total] (10)
Substitute V[max] into (9) for k[cat] [E][total]:
v[0] = V[max] [S]/(K[m] + [S]) (11)
This equation expresses the initial rate of reaction in terms of a measurable quantity, the initial substrate concentration. The two kinetic parameters, V[max ]and K[m] , will be different for every
enzyme-substrate pair.
Equation (11), the Michaelis-Menten equation, describes the kinetic behavior of an enzyme that acts according to the simple model (1). Equation (11) is of the form
y = ax/(b + x) (does this look familiar?)
This is the equation of a rectangular hyperbola, just like the saturation equation for the binding of oxygen to myoglobin.
Mathematically, the function v[0] presents two asymptotes:
* one parallel to the [S] axis at v[0] = V[max] , represents the velocity at infinite [S] (saturation),
* the second parallel to the v[0] axis at [S] = - K[m], has no physical meaning (no negative concentrations).
Further analysis reveals the physical meaning of K[m]: the concentration of substrate at which the velocity is half V[max]. Indeed, substituting K[m] for [S] in (11) yields
v[0] = 1/2 V[max].
Thus, a low value for K[m] may indicate a high affinity of the enzyme for its substrate.
Another physically meaningful limit of this function is found at vanishingly small values of [S] (--> 0),
where v[0] --> V[max] /K[m] [S].
In this case, the velocity becomes proportional to the (low, relative to K[m]) substrate concentration, displaying pseudo-first order kinetics in [S].
The parameter V[max] /K[m] (or rather its constant part k[cat] /K[m]), often referred to as the catalytic ability of the enzyme, is a direct measure of the efficiency of the enzyme in transforming
the substrate S.
k[cat] /K[m] recombines the two traditionally-separated aspects of enzyme catalysis:
* the effectiveness of transformation of bound product (catalysis per se, k[cat] )
* the effectiveness of productive substrate binding (affinity, 1/K[m] = k[1] /(k[-1] + k[cat]))
Equation (11) means that, for an enzyme acting according to the simple model (1), a plot of v[0] versus [S] will be a rectangular hyperbola. When enzymes exhibit this kinetic behavior, unless we find
other evidence to the contrary, we assume that they act according to model (1), and call them Michaelis-Menten enzymes.
Quiz at first class on enzyme kinetics: Derive equation (11) from model (1).
Ten minutes.
Last update: Dec 1999- Claude Aflalo You are visitor No. to this page. Suggestions are welcome... | {"url":"http://www.bgu.ac.il/~aflaloc/BioHTML/Goodies/DeriveMMEqn.html","timestamp":"2014-04-20T16:03:15Z","content_type":null,"content_length":"26655","record_id":"<urn:uuid:ec3d6b58-018a-4dd9-a5ae-75d3d326acc1>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00312-ip-10-147-4-33.ec2.internal.warc.gz"} |
recurrence relation bit strings
January 28th 2009, 12:39 AM #1
Dec 2008
recurrence relation bit strings
well I need help with this problem:
Find a recurrence relation for the number of bit strings of length n that contain three consecutive 0's
ok the answer is:
sn be the number of n-bit strings that contain three consecutive zeros. Such strings can start with
1 (there are sn−1 of these), or with 01 (there are sn−2 of these), or with 001 (there are sn−3 of these), or
with 000 (there are 2n−3 of these). Therefore sn = sn−1 + sn−2 + sn−3 + 2n−3, for n 3.
Ok but can someone explain to me why did they say: such strigns can start with 1,01,001 or 000?
from where did they get these?
thank you
These are all the four possibilities for creating this sequence.
January 28th 2009, 12:49 AM #2
Junior Member
Jan 2009 | {"url":"http://mathhelpforum.com/discrete-math/70332-recurrence-relation-bit-strings.html","timestamp":"2014-04-18T22:48:52Z","content_type":null,"content_length":"38256","record_id":"<urn:uuid:776a0e9c-9dc3-4e72-bfe2-201397b88fa2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
There can be no defence against the twin paradox -
There can be no defence against the twin paradox
Register Search Today's Posts Mark Forums Read
Physics Forum Physics Forum. Discuss and ask physics questions, kinematics and other physics problems.
There can be no defence against the twin paradox
There can be no defence against the twin paradox - Physics Forum
There can be no defence against the twin paradox - Physics Forum. Discuss and ask physics questions, kinematics and other physics problems.
Page 1 of 8 1 2 3 > Last »
LinkBack Thread Tools Display Modes | {"url":"http://www.molecularstation.com/forum/physics-forum/32920-there-can-no-defence-against-twin-paradox.html","timestamp":"2014-04-19T07:25:17Z","content_type":null,"content_length":"90514","record_id":"<urn:uuid:bd55899a-847e-4928-a524-63b4b3d3cd25>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00537-ip-10-147-4-33.ec2.internal.warc.gz"} |
West Chester University
Spring 2012 Colloquium/Seminar Schedule
Each Thursday there will be a mathematics seminar (usually in UNA 120 from 3:15-4:15), while colloquium talks will normally be on a Wednesday (usually in UNA 158 from 3:15-4:15).
These seminars/colloquium talks may be by visiting speakers, WCU faculty, or WCU students, and are open to all interested students and faculty.
Send an e-mail to jmclaughl@wcupa.edu, if you would like to be on the e-mail list to receive advance notice of upcoming talks.
Previous Semesters: Fall 2011, Spring 2011, Fall 2010, Spring 2010, Fall 2009, Spring 2009, Fall 2008, Spring 2008, Fall 2007, Spring 2007, Fall 2006, Summer 2006, Spring 2006,
Monday, February 13th, 2012
3:15 to 4:15PM UNA 162
JGengxin Li (Yale University)
The Improved SNP Calling Algorithms for Illumina BeadArray Data
Abstract: Genotype calling from high throughput platforms such as Illumina and Affymetrix is a critical step in data processing, so that accurate information on genetic variants can be obtained for
phenotype-genotype association studies. A number of algorithms have been developed to infer genotypes from data generated through the Illumina BeadStation platform, including GenCall, GenoSNP,
Illuminus, and CRLMM. Most of these algorithms are built on population-based statistical models to genotype every SNP in turn, such as GenCall with the GenTrain clustering algorithm, and require a
large reference population to perform well. These approaches may not work well for rare variants where only a small proportion of the individuals carry the variant. A fundamentally different
approach, implemented in GenoSNP, adopts a SNP-based model to infer genotypes of all the SNPs in one individual, making it an appealing alternative to call rare variants. However, compared to the
population-based strategies, more SNPs in GenoSNP may fail the Hardy-Weinberg Equilibrium test. To take advantage of both strategies, we propose the two-stage SNP calling procedures, to improve call
accuracy for both common and rare variants. The effectiveness of our approach is demonstrated through applications to genotype calling on a set of HapMap samples used for quality control purpose in a
large case-control study of cocaine dependence. The increase in power with our proposed method is greater for rare variants than for common variants depending on the model.
Gengxin Li is currently a Postdoctoral Associate in the Division of Biostatistics, Department of Epidemiology and Public Health at Yale University. She received a dual-major Ph.D. degree in
Statistics and Quantitative Biology at Michigan State University. Her current research interests are high-dimensional data analysis, Bayesian method, Dirichlet process, longitudinal data analysis,
Statistical genomics, Statistical genetics, Bioinformatics and Clinical Trials.
Tuesday, February 14th, 2012
3:15 to 4:15PM UNA 162
Meredith Hegg (Temple University)
Exact Relations for Fiber-Reinforced Elastic Composites
Predicting the effective elastic properties of a composite material based on the elastic properties of its constituent materials is extremely difficult, even when the microstructure of the composite
is known. However, there are special cases where certain properties in constituents always carry over to the composite, regardless of microstructure. We call such instances exact relations. The
general theory of exact relations allows us to find all of these relations in a wide variety of contexts including elasticity, conductivity, and piezoelectricity. We combine this theory with certain
results from representation theory to find all exact relations in the context of elasticity of fiber-reinforced polycrystalline composites and thereby generate new information about this widely-used
class of materials.
Meredith Hegg is currently a PhD student in the Department of Mathematics at Temple University. Her main area of research is currently Mechanics of Deformable Solids, and she expects to obtain her
PhD in May 2012. Her thesis adviser is Dr. Yury Grabovsky.
Wednesday, February 15th, 2012
3:15 to 4:15PM Anderson 111
Spring 2012 Mathematics Colloquium
John H. Conway (Princeton University)
“The First Field”
We all know one field that contains 0,1,2,..., but, logically, there is an earlier field that is defined as follows. We first fill in the addition table, subject to the condition that before we fill
in the entry for a+b, we must have already filled in all entries a'+b and a+b' with a'<a and b'<b. Then, the entry at a+b is to be the least possible number that is consistent with the result's
being a part of the addition table of a field.
We then tackle the multiplication table of a field with the given addition. Again, the entries are to be the least possible one's subject to this requirement; this construction produces a very
strange field in which 8 is a fifth root of unity. Amazingly, this field actually has practical applications.
John H. Conway is a prolific mathematician active in the theory of finite groups, knot theory, number theory, combinatorial game theory and coding theory. He has also contributed to many branches of
recreational mathematics, notably the invention of the cellular automaton called the Game of Life.
Conway is currently Professor of Mathematics and John Von Neumann Professor in Applied and Computational Mathematics at Princeton University. He studied at Cambridge, where he started research under
Harold Davenport. He received the Berwick Prize (1971), was elected a Fellow of the Royal Society (1981), was the first recipient of the Pólya Prize (LMS) (1987), won the Nemmers Prize in
Mathematics (1998), and received the Leroy P. Steele Prize for Mathematical Exposition (2000) of the American Mathematical Society.
For further information e-mail mfisher@wcupa.edu or sgupta@wcupa.edu.
Thursday, February 16th, 2012
3:15 to 4:15PM UNA 162
Andrew Crossett (Carnegie Mellon University)
Refining Genetically-Inferred Relationships Using Treelet Smoothing
Abstract: Heritability, or fraction of the total trait variance attributable to additive genetic effects, is an important concept in quantitative genetics. Originally, heritability was only
measurable by examining groups of very closely related individuals, such as twin studies. More recently, methods have been proposed to analyze population samples containing only distantly related
individuals using a random effects model. To do so they estimate the relatedness of all pairs of individuals in the population sample using a dense set of common genetic variants, such as SNPs, and
evaluate their relationships to subject trait values. We build on their approach, focusing on improved estimates of pairwise familial relationships. We propose a new method for denoising genetically
inferred relationship matrices, and refer to this general regularization approach of positive semi-definite matrices as Treelet Covariance Smoothing. On both simulated and real data, we show that
better estimates of the relatedness amongst individuals lead to better estimates of the heritability.
Friday, February 17th, 2012
3:15 to 4:15PM UNA 162
Jeffrey Beyerl (Clemson University)
On the Factorization of Eigenforms
Modular forms fall within the realm of complex analysis and number theory, with notable applications in theoretical physics. Hecke operators act on spaces of modular forms, and spectral theory
implies the existence of eigenforms. My recent research, which will be presented at this talk, has investigated the factorizations of these eigenforms. This type of investigation is relatively new,
having started in 1999 when Eknath Ghate and William Duke independently discovered that the product of two eigenforms is again an eigenform only when it is trivially so.
Jeffrey Beyerl is a graduate student in the Department of Mathematics at Clemson University. His main area of research is presently in the field of modular forms, and he expects to obtain his PhD in
May 2012. His thesis advisers are Kevin James and Hui Xue.
Monday, February 20th, 2012
3:15 to 4:15PM UNA 120
Tieming Ji (Iowa State University)
Borrowing Information across Genes and Experiments for Improved Error Variance Estimation in Microarray Data Analysis
Abstract: Statistical inference for microarray experiments usually involves the estimation of error variance for each gene. Because the sample size available for each gene is often low, the usual
unbiased estimator of the error variance can be unreliable. Shrinkage methods, including empirical Bayes approaches that borrow information across genes to produce more stable estimates, have been
developed in recent years. Because the same microarray platform is often used for at least several experiments to study similar biological systems, there is an opportunity to improve variance
estimation further by borrowing information not only across genes but also across experiments. We propose a lognormal model for error variances that involves random gene effects and random experiment
effects. Based on the model, we develop an empirical Bayes estimator of the error variance for each combination of gene and experiment and call this estimator BAGE because information is Borrowed
Across Genes and Experiments. A permutation strategy is used to make inference about the differential expression status of each gene. Simulation studies with data generated from different probability
models and real microarray data show that our method outperforms existing approaches.
Tuesday, February 21st, 2012
3:15 to 4:15PM UNA 162
Whitney George (University of Georgia)
Twist Knots and the Uniform Thickness Property
In 2007, Etnyre and Honda defined a new knot invariant called the Uniform Thickness Property in order to better understand Legendrian knots. The classification of Legendrian knots in R^3 with the
standard contact structure has been a slow process in comparison to the topological classification in R^3. In this talk, we will discuss what makes Legendrian knots more delicate than topological
knots, and how the Uniform Thickness Property can help in their classification. My current research investigates the Uniform Thickness Property with respect to positive twist knots which we will
discuss towards in the second half of this talk.
Whitney George is a graduate student in the Department of Mathematics at the University of Georgia. Her main area of research is presently in contact topology, and is focused towards knots and
surfaces in R^3 with the standard contact structure, and she expects to obtain her PhD in May 2012. Her thesis adviser is Gordana Matic.
Friday, February 24th, 2012
3:15 to 4:15PM UNA 162
Andrew Parrish (Illinois at Urbana-Champaign)
Pointwise Convergence of Averages of L1 Functions on Sparse Sets.
Joint work with P. LaVictoire (University of Wisconsin, Madison) and J. Rosenblatt (UIUC).
Abstract:The behavior of time averages when taken along subsets of the integers is a central
question in subsequence ergodic theory. The existence of transference principles enables us to talk
about the convergence of averaging operators in a universal sense; we say that a sequence {an} is
universally pointwise good for L1, for example, if the sequence of averages
1/NΣ_{n=0}^{N-1}f ◦ T^{-an}(x)
converges a.e. for any f ∈ L1 for every aperiodic measure preserving system (X; B; T; ). Only a
few methods of constructing a sparse sequence that is universally pointwise L1-good are known.
We will discuss how one can construct families of sets in Zd which are analogues of these sequences,
as well as some challenges and advantages presented by these higher-dimensional averages.
Andrew Parrish is a visiting Assistant Professor in the Department of Mathematics at the University of Illinois at Urbana-Champaign. His current research interests are in ergodic theory, particularly
subsequence ergodic theory, with applications to additive combinatorics and harmonic analysis. He obtained his PhD in May 2009 at the University of Memphis. His thesis adviser was Mate Wierdl.
Wednesday, February 29th, 2012
3:15 to 4:15PM UNA 155
Pi Mu Epsilon Presents
ALISSA CRANS (Loyola Marymount University)
A Fine Prime!
In celebration of your mathematical achievements on this special day we will investigate fun facts
related to Leap Days! We'll discuss mathematicians associated to this day and various calendar
systems. In addition, we will explore the numerous interesting properties of the number 29. Of
course it's prime, but in fact, it's a twin prime, Sophie Germain prime, Lucas prime, Pell prime, and
Eisenstein prime. It's also a Markov number, Perrin number, tetranacci number and Stormer
number! We'll see all of this, and more, as we congratulate the newest members of Pi Mu Epsilon for
their wonderful accomplishments.
Alissa S. Crans earned her B.S. in mathematics from the University of Redlands in 1999 and her Ph.D. in mathematics from the University of California at Riverside in 2004, under the guidance of John
Baez. She is currently an Associate Professor of mathematics at Loyola Marymount University and has held positions at Pomona College, The Ohio State University, and the University of Chicago.
Alissa's research is in the field of higher-dimensional algebra and her current work, funded by an NSA Young Investigator Grant, involves categorifying algebraic structures called quandles with the
goal of defining new knot and knotted surface invariants. She is also interested in the connections between mathematics and music, and enjoys playing the clarinet with the Santa Monica College wind
Alissa is extremely active in helping students increase their appreciation and enthusiasm for mathematics through coorganizing the Pacific Coast Undergraduate Mathematics Conference together with
Naiomi Cameron and Kendra Killpatrick, and her mentoring of young women in the Summer Mathematics Program (SMP) at Carleton College, the EDGE program, the Summer Program for Women in Mathematics at
George Washington University, the Southern California Women in Mathematics Symposium, and the Career Mentoring Workshop. In addition, Alissa was an invited speaker at the MAA Spring Sectional Meeting
of the So Cal/Nevada Section and the keynote speaker at the University of Oklahoma Math Day and the UCSD Undergraduate Math Day. She is a recipient of the 2011 Merten M. Hasse Prize for expository
writing and the Henry L. Alder Award for distinguished teaching.
For further information e-mail rsullivan@wcupa.edu
Tuesday, March 20th, 2012
3:15 to 4:15PM UNA 162
Spring 2012 Mathematics Colloquium
STEFAAN DELCROIX (California State University, Fresno)
"A Generalization of Bertrand's Postulate"
Bertrand's Postulate states that for any n > 1, there is at least one prime between n and 2n. We will give an elementary proof of the following generalization: Let k be a fixed number. Then for all n
≥ max{4000, 162k^2}, there are at least k primes between n and 2n.
Thursday, March 22nd, 2012
3:15 to 4:15PM UNA 162
Spring 2012 Mathematics Colloquium
STEFAAN DELCROIX (California State University, Fresno)
Locally Finite Simple Groups
Abstract: A group $G$ is locally finite if every finite subset of $G$ generates a finite subgroup. In this talk, we study infinite, locally finite, simple groups (=LFS-groups). We will introduce some
standard definitions and properties, divide the LFS-groups into three categories and provide examples of each category. Next, we study a specific category (LFS-groups of $p$-type) into more detail.
This allows us to show some local characterization of each category. Time permitting, we discuss a general construction of LFS-groups of $p$-type.
Born and raised in Belgium, Stefaan finished his masters in mathematics at the University of Ghent (in Belgium). He spent the next three years working on his Ph.D. at Michigan State University under
the guidance of Prof. Ulrich Meierfrankenfeld. The subject of his thesis was locally finite simple groups of p-type and alternating type. In June 2000, Stefaan finished his Ph.D. at the University of
Ghent. For two years, he
worked as a Visiting Assistant Professor at the University of Wyoming in Laramie. Since 2002, Stefaan has worked at California State University, Fresno.
For further information e-mail mfisher@wcupa.edu or sgupta@wcupa.edu.
Wednesday, March 21st, 2012
3:20 to 4:15PM UNA 155
Spring 2012 Mathematics Colloquium
Shiv Gupta (West Chester University)
“On Euler's Proof of Fermat's Last Theorem For Exponent Three”
We shall discuss some aspects of Euler's proof of Fermat's Last Theorem for exponent three.
This talk will be suitable for students who have taken (or currently taking) a course on Theory of
Numbers (Mat 414/514).
For further information e-mail mfisher@wcupa.edu or sgupta@wcupa.edu
Thursday, March 29th, 2012
3:15 to 4:15PM UNA 162
Spring 2012 Mathematics Seminar
Jimmy Mc Laughlin (West Chester University)
"Hybrid Proofs of the q-Binomial Theorem and other q-series Identities. I"
The proof of a q-series identity, whether a series-to-series identity such as the second iterate of Heine’s transformation, a basic hypergeometric summation formula such as the q-Binomial Theorem or
one of the Rogers-Ramanujan identities, generally falls into one of two broad camps.
In the one camp, there are a variety of analytic methods.
In the other camp there are a variety of combinatorial or bijective proofs, the simplest of course being conjugation of the Ferrer’s diagram for a partition.
In this series of talks we use a “hybrid” method to prove a number of basic hypergeometric identities. The proofs are “hybrid” in the sense that we use partition arguments to prove a restricted
version of the theorem, and then use analytic methods (in the form of the Identity Theorem) to prove the full version.
Thursday, April 5th, 2012
3:15 to 4:15PM UNA 162
Spring 2012 Mathematics Seminar
Jimmy Mc Laughlin (West Chester University)
"Hybrid Proofs of the q-Binomial Theorem and other q-series Identities. II"
Wednesday, April 11th, 2012
3:20 to 4:15PM UNA 155
Spring 2012 Mathematics Colloquium
Sergei Sergeev (University of Birmingham, UK.)
"Tropical convex geometry and two-sided systems of tropical inequalities"
Abstract: Tropical mathematics emerged in 1960's as a linear encoding of some problems in discrete
optimization and scheduling. In a nutshell, it studies "spaces" over the max-plus algebra, which is the set of
real numbers where taking maximum plays the role of addition, and addition plays the role of multiplication.
In the tropical mathematics, negative infinity plays the role of zero, hence any real number is "positive"
in the tropical sense. Hence, there are connections with nonnegative linear algebra (in particular, Perron-
Frobenius theory), and convex geometry. To this end, tropical spaces can be viewed as an analogue of convex cones, and many results of convex analysis have their tropical analogues, which will be
Tropical linear two-sided systems Ax = Bx, where matrix-vector multiplication is defined using the
tropical arithmetics, are the algebraic encoding of tropical convex cones. Geometrically, such systems represent the tropical convex cones as intersection of tropical halfspaces. Methods for finding
a solution to such two-sided systems stem from combinatorial game theory, more specifically, from the theory of deterministic mean-payoff games. We will also touch upon some problems like the
tropical linear programming that can be viewed as parametric extension of two-sided systems, and give rise to parametric extensions of mean-payoff games.
All are welcome to join for tea in Students Lounge after the talk.
Thursday, April 12th, 2012
3:15 to 4:15PM UNA 162
Spring 2012 Mathematics Seminar
Jimmy Mc Laughlin (West Chester University)
"Hybrid Proofs of the q-Binomial Theorem and other q-series Identities. III"
Wednesday, April 18th, 2012
3:20 to 4:15PM UNA 155
Spring 2012 Mathematics Colloquium
Hal Switkay (West Chester University)
"The Sensible Communication of Abstract Information"
We consider the engagement of the senses in the process of communicating and learning the abstractions of mathematics. Examples are provided from the history of mathematics continuing through current
developments, including Markov processes, analytic geometry, statistics, decision theory, 24-dimensional geometry, and the musical representation of groups.
This talk should be easily accessible to undergraduates.
Hal M. Switkay earned his Ph.D. in mathematics at Lehigh University in the study of set theory. After graduation, his interests shifted towards symmetry, lattices, groups, and higher-dimensional
geometry. He has taught mathematics, from remedial to advanced, has done public speaking, is a musician and composer, and has earned certification as a teacher of Tai Chi Easy and as a practitioner
of reiki and Thai massage. He is currently enrolled in West Chester Universitys graduate certificate program in applied statistics. His business card lists the following interests: mathematics;
music; philosophy; health and wellness; and syncretic
All are welcome to join for tea in Students Lounge after the talk.
Monday, April 23rd, 2012
3:15 to 4:15PM Anderson 103
Spring 2012 Mathematics Colloquium
Keith Devlin (Stanford University)
“Leonardo Fibonacci and Steve Jobs”
The first personal computing revolution took place not in Silicon Valley in the 1980s but in Pisa in the 13th Century. The medieval counterpart to Steve Jobs was a young Italian called Leonardo,
better known today by the nickname Fibonacci. Thanks to a recently discovered manuscript in a library in Florence, the story of how this little known genius came to launch the modern commercial world
can now be told.
Based on Devlin’s latest book The Man of Numbers: Fibonacci’s Arithmetical Revolution (Walker & Co, July 2011) and his co-published companion e-book Leonardo and Steve: The Young Genius Who Beat
Apple to Market by 800 Years.
Keith Devlin is a mathematician at Stanford University in California. He is a co-founder and Executive Director of the university's H-STAR institute, a co-founder of the Stanford Media X research
network, and a Senior Researcher at CSLI. He has written 31 books and over 80 published research articles. His books have been awarded the Pythagoras Prize and the Peano Prize, and his writing has
earned him the Carl Sagan Award, and the Joint Policy Board for Mathematics Communications Award. In 2003, he was recognized by the California State Assembly for his "innovative work and longtime
service in the field of mathematics and its relation to logic and linguistics." He is "the Math Guy" on National Public Radio.
He is a World Economic Forum Fellow and a Fellow of the American Association for the Advancement of Science. His current research is focused on the use of different media to teach and communicate
mathematics to diverse audiences. He also works on the design of information/reasoning systems for intelligence analysis. Other research interests include: theory of information, models of
reasoning, applications of mathematical techniques in the study of communication, and mathematical cognition. He writes a monthly column for the Mathematical Association of America, "Devlin's
For further information e-mail mfisher@wcupa.edu or sgupta@wcupa.edu
Thursday, April 26th, 2012
3:15 to 4:15PM UNA 162
Spring 2012 Mathematics Seminar
Jimmy Mc Laughlin (West Chester University)
"Some Partition Bijections in Igor Pak's "PARTITION BIJECTIONS, A SURVEY""
Friday, April 27th, 2012
2:00 to 3:00PM UNA 158
Spring 2012 Mathematics Colloquium
ELWYN BERLEKAMP University of California, Berkeley
“Combinatorial Games: Hackenbush and Go”
This talk will review the rudiments of combinatorial game theory [1] as exemplified by a game called Hackenbush. Positions are seen to have values, which are sums of numbers and infinitesimals, such
that the winner depends on how the total value compares with zero.
We then discuss how refinements of this theory have been applied to the classical Asian board game called Go. The most important tool is the "cooling operator" [2], which maps combinatorial games
into other combinatorial games. In the first application, many late stage Go endgame positions [3] are shown to be combinatorial games which, when cooled by 1, often reduce to familiar numbers and
infinitesimals. Combinatorial game theory then enables its practitioner to win the endgame by one point. In the second application, Nakamura[4] has shown that liberties can also be viewed as
combinatorial games which become familiar numbers and infinitesimals when cooled by 2. In a large class of interesting positions, this approach identifies the move(s), if any, which win the
capturing race.
Although not prerequisite to this talk, more details can be found in these references:
[1] Berlekamp, Conway, and Guy: Winning Ways, Chap 1
[2] Berlekamp, Conway, and Guy: Winning Ways, Chap 6
[3] Berlekamp and Wolfe: Mathematical Go
[4] Nakamura, in Games of No Chance, vol 3
Elwyn Berlekamp was an undergraduate at MIT; while there, he was a Putnam Fellow (1961). Professor Berlekamp completed his bachelor's and master's degrees in electrical engineering in 1962.
Continuing his studies at MIT, he finished his Ph.D. in electrical engineering in 1964; his advisors were Claude Shannon, Robert G. Gallager, Peter Elias and John Wozencraft. Berlekamp taught at
the University of California, Berkeley from 1964 until 1966, when he became a researcher at Bell Labs. In 1971, Berlekamp returned to Berkeley where, as of 2010, he is a Professor of the Graduate
He is a member of the National Academy of Engineering (1977) and the National Academy of Sciences (1999). He was elected a Fellow of the American Academy of Arts and Sciences in 1996. He received
in 1991 the IEEE Richard W. Hamming Medal, and in 1998 the Golden Jubilee Award for Technological Innovation from the IEEE Information Theory Society.
Berlekamp is one of the inventors of the Welch-Berlekamp and Berlekamp-Massey algorithms, which are used to implement Reed-Solomon error correction. In the mid-1980s, he was president of
Cyclotomics, Inc., a corporation that developed error-correcting code technology. With John Horton Conway and Richard K. Guy, he co-authored Winning Ways for your Mathematical Plays, leading to his
recognition as one of the founders of combinatorial game theory. He has studied various games, including Fox and Geese and other fox games, dots and boxes, and, especially, Go. With David Wolfe,
Berlekamp co-authored the book Mathematical Go, which describes methods for analyzing certain classes of Go endgames.
For further information e-mail mfisher@wcupa.edu or sgupta@wcupa.edu
Note: Talks will be added to the schedule throughout the semester. Check back for updates. | {"url":"http://wcupa.edu/_ACADEMICS/SCH_CAS.MAT/ColloquiumSpring2012.asp","timestamp":"2014-04-16T07:55:01Z","content_type":null,"content_length":"91421","record_id":"<urn:uuid:bf7c254b-c781-4320-b826-393a04c7d10e>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00625-ip-10-147-4-33.ec2.internal.warc.gz"} |
Examples 6.3.4(a):
We can easily check (by factoring the numerator) that the limit as
from the right and from the left is equal to
. Hence, the limit as
exists, and therefore the function has a removable discontinuity at
x = 3
. If we define
k(3) = 6
instead of
k(3) = 1
then the function, in fact, will be continuous on the real line. | {"url":"http://www.mathcs.org/analysis/reals/cont/answers/discwp1.html","timestamp":"2014-04-20T22:26:21Z","content_type":null,"content_length":"5312","record_id":"<urn:uuid:ea9f5d3d-62df-45d3-a9fe-008e97232d88>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00361-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to infer gene networks from expression profiles
• We are sorry, but NCBI web applications do not support your browser and may not function properly.
More information
How to infer gene networks from expression profiles
Inferring, or ‘reverse-engineering', gene networks can be defined as the process of identifying gene interactions from experimental data through computational analysis. Gene expression data from
microarrays are typically used for this purpose. Here we compared different reverse-engineering algorithms for which ready-to-use software was available and that had been tested on experimental data
sets. We show that reverse-engineering algorithms are indeed able to correctly infer regulatory interactions among genes, at least when one performs perturbation experiments complying with the
algorithm requirements. These algorithms are superior to classic clustering algorithms for the purpose of finding regulatory interactions among genes, and, although further improvements are needed,
have reached a discreet performance for being practically useful.
Keywords: gene network, reverse-engineering, gene expression, transcriptional regulation, gene regulation
Gene expression microarrays yield quantitative and semi-quantitative data on the cell status in a specific condition and time. Molecular biology is rapidly evolving into a quantitative science, and
as such, it is increasingly relying on engineering and physics to make sense of high-throughput data. The aim is to infer, or ‘reverse-engineer', from gene expression data, the regulatory
interactions among genes using computational algorithms. There are two broad classes of reverse-engineering algorithms (Faith and Gardner, 2005): those based on the ‘physical interaction' approach
that aim at identifying interactions among transcription factors and their target genes (gene-to-sequence interaction) and those based on the ‘influence interaction' approach that try to relate the
expression of a gene to the expression of the other genes in the cell (gene-to-gene interaction), rather than relating it to sequence motifs found in its promoter (gene-to-sequence). We will refer to
the ensemble of these ‘influence interactions' as gene networks.
The interaction between two genes in a gene network does not necessarily imply a physical interaction, but can also refer to an indirect regulation via proteins, metabolites and ncRNA that have not
been measured directly. Influence interactions include physical interactions, if the two interacting partners are a transcription factor, and its target, or two proteins in the same complex.
Generally, however, the meaning of influence interactions is not well defined and depends on the mathematical formalism used to model the network. Nonetheless, influence networks do have practical
utility for (1) identifying functional modules, that is, identify the subset of genes that regulate each other with multiple (indirect) interactions, but have few regulations to other genes outside
the subset; (2) predicting the behaviour of the system following perturbations, that is, gene network models can be used to predict the response of a network to an external perturbation and to
identify the genes directly ‘hit' by the perturbation (di Bernardo et al, 2005), a situation often encountered in the drug discovery process, where one needs to identify the genes that are directly
interacting with a compound of interest; (3) identifying real physical interactions by integrating the gene network with additional information from sequence data and other experimental data (i.e.
chromatin immunoprecipitation, yeast two-hybrid assay, etc.).
In addition to reverse-engineering algorithms, network visualisation tools are available online to display the network surrounding a gene of interest by extracting information from the literature and
experimental data sets, such as Cytoscape (Shannon et al, 2003) (http://www.cytoscape.org/features.php) and Osprey (Breitkreutz et al, 2003) (http://biodata.mshri.on.ca/osprey/servlet/Index).
Here we will focus on gene network inference algorithms (the influence approach). A description of other methods based on the physical approach and more details on computational aspects can be found
in (Beer and Tavazoie, 2004; Tadesse et al, 2004; Faith and Gardner, 2005; Prakash and Tompa, 2005; Ambesi and di Bernardo, 2006; Foat et al, 2006). We will also briefly describe two ‘improper'
reverse-engineering tools (MNI and TSNI), whose main focus is not inferring interactions among genes from gene expression data, but rather identification of the targets of the perturbation (point (2)
Among the plethora of algorithms proposed in the literature to solve the network inference problem, we selected one algorithm for each class of mathematical formalism proposed in the literature, for
which ready-to-use software is available and that had been tested on experimental data sets.
Gene network inference algorithms
We will indicate the gene expression measurement of gene i with the variable x[i], the set of expression measurements for all the genes with D and the interaction between genes i and j with a[ij]. D
may consist of time-series gene expression data of N genes in M time points (i.e. gene expression changing dynamically with time), or measurements taken at steady-state in M different conditions
(i.e. gene expression levels in homeostasis). Some inference algorithms can work on both kind of data, whereas others have been specifically designed to analyse one or the other.
Depending on the inference algorithm used, the resulting gene network can be either an undirected graph, that is, the direction of the interaction is not specified (a[ij]=a[ji]), or a directed graph
specifying the direction of the interaction, that is, gene j regulates gene i (and not vice versa) (a[ij]≠a[ji]). A directed graph can also be labeled with a sign and strength for each interaction,
signed directed graph, where a[ij] has a positive, zero or negative value indicating activation, no interaction and repression, respectively.
An overview of softwares described here is given in Figure 1 and Table I.
Flowchart to choose the most suitable network inference algorithms according to the problem to be addressed. (^*): check for independence of time points (see text for details); (BN): Bayesian
networks; (DBN): Dynamic Bayesian Networks.
Features of the network inference algorithms reviewed in this tutorial
Coexpression networks and clustering algorithms
Clustering, although not properly a network inference algorithm, is the current method of choice to visualise and analyse gene expression data. Clustering is based on the idea of grouping genes with
similar expression profiles in clusters (Eisen et al, 1998). Similarity is measured by a distance metric, as for example the correlation coefficient among a pair of genes. The number of clusters can
be set either automatically or by the user depending on the clustering algorithm used (Eisen et al, 1998; Amato et al, 2006). The rationale behind clustering is that coexpressed genes (i.e. genes in
the same cluster) have a good probability of being functionally related (Eisen et al, 1998). This does not imply, however, that there is a direct interaction among the coexpressed genes, as genes
separated by one or more intermediaries (indirect relationships) may be highly coexpressed. It is therefore important to understand what can be gained by advanced gene network inference algorithms,
whose aim is to infer direct interactions among genes, as compared with ‘simple' clustering, for the purpose of gene network inference.
The most common clustering approach is hierarchical clustering (Eisen et al, 1998), where relationships among genes are represented by a tree whose branch lengths reflect the degree of similarity
between genes, as assessed by a pairwise similarity function such as Pearson correlation coefficient:
For a set of n profiles, all the pairwise correlation coefficients r[ij] are computed; the highest value (representing the most similar pair of genes) is selected and a node in the tree is created
for this gene pair with a new expression profile given by the average of the two profiles. The process is repeated by replacing the two genes with a single node, and all pairwise correlations among
the n−1 profiles (i.e. n−2 profiles from single genes plus 1 of the gene pair) are computed. The process stops when only one element remains. Clusters are obtained by cutting the tree at a specified
branch level.
In order to compare clustering to the other network inference strategies, we assumed that genes in the same clusters regulate each other, that is, each gene represents a node in the network and is
connected to all the other genes in the same cluster. Clustering will thus recover an undirected graph.
Bayesian networks
A Bayesian network is a graphical model for probabilistic relationships among a set of random variables X[i], where i=1 … n. These relationships are encoded in the structure of a directed acyclic
graph G, whose vertices (or nodes) are the random variables X[i]. The relationships between the variables are described by a joint probability distribution P(X[1], … , X[n]) that is consistent with
the independence assertions embedded in the graph G and has the form:
where the p+1 genes, on which the probability is conditioned, are called the parents of gene i and represent its regulators, and the joint probability density is expressed as a product of conditional
probabilities by applying the chain rule of probabilities and independence. This rule is based on the Bayes theorem:
We observe that the JPD (joint probability distribution) can be decomposed as the product of conditional probabilities as in Equation 2 only if the Markov assumption holds, that is, each variable X[i
] is independent of its non-descendants, given its parent in the directed acyclic graph G. A schematic overview of the theory underlying Bayesian networks is given in Figure 2.
Bayesian networks: A is conditionally independent of D and E given B and C; information-theoretic networks: mutual information is 0 for statistically independent variables, and Data Processing
Inequality helps pruning the network; ordinary differential ...
In order to reverse-engineer a Bayesian network model of a gene network, we must find the directed acyclic graph G (i.e. the regulators of each transcript) that best describes the gene expression
data D, where D is assumed to be a steady-state data set. This is performed by choosing a scoring function that evaluates each graph G (i.e. a possible network topology) with respect to the gene
expression data D, and then searching for the graph G that maximises the score.
The score can be defined using the Bayes rule:P(G) can either contain some a priori knowledge on network structure, if available, or can be a constant non-informative prior, and P(D/G) is a function,
to be chosen by the algorithm that evaluates the probability that the data D has been generated by the the graph G. The most popular scores are the Bayesian Information Criteria (BIC) or Bayesian
Dirichlet equivalence (BDe). Both scores incorporate a penalty for complexity to guard against overfitting of data.
Trying out all possible combinations of interaction among genes, that is, all possible graphs G, and choosing the G with the maximum Bayesian score is an NP-hard problem. Therefore, a heuristic
search method is used, like the greedy-hill climbing approach, the Markov Chain Monte Carlo method or simulated annealing.
In Bayesian networks, the learning problem is usually underdetermined and several high-scoring networks are found. To address this problem, one can use model averaging or bootstrapping to select the
most probable regulatory interactions and to obtain confidence estimates for the interactions. For example, if a particular interaction between two transcripts repeatedly occurs in high-scoring
models, one gains confidence that this edge is a true dependency. Alternatively, one can augment an incomplete data set with prior information to help select the most likely model structure. Bayesian
networks cannot contain cycles(i.e. no feedback loops). This restriction is the principal limitation of the Bayesian network models. Dynamic Bayesian networks overcome this limitation. Dynamic
Bayesian networks are an extension of Bayesian networks able to infer interactions from a data set D consisting of time-series rather than steady-state data. We refer the reader to (Yu et al, 2004).
A word of caution: Bayesian networks model probabilistic dependencies among variables and not causality, that is, the parents of a node are not necessarily also the direct causes of its behaviour.
However, we can interpret the edge as a causal link if we assume that the Causal Markov Condition holds. This can be stated simply as: a variable X is independent of every other variable (except the
targets of X) conditional on all its direct causes. It is not known whether this assumption is a good approximation of what happens in real biological networks.
For more information and a detailed study of Bayesian networks for gene network inference, we refer the reader to Pe'er et al (2000).
Banjo is a gene network inference software that has been developed by the group of Hartemink (Yu et al, 2004). Banjo is based on Bayesian networks formalism and implements both Bayesian and Dynamic
Bayesian networks. Therefore it can infer gene networks from steady-state gene expression data or from time-series gene expression data.
In Banjo, heuristic approaches are used to search the ‘network space' to find the network graph G (Proposer/Searcher module in Banjo). For each network structure explored, the parameters of the
conditional probability density distribution are inferred and an overall network's score is computed using the BDe metric in Banjo's Evaluator module. The output network will be the one with the best
score (Banjo's Decider module).
Banjo outputs a signed directed graph indicating regulation among genes. Banjo can analyse both steady-state and time-series data. In the case of steady-state data, Banjo, as well as the other
Bayesian networks algorithms, is not able to infer networks involving cycles (e.g. feedback or feed forward loops).
Other Bayesian network inference algorithms for which software is available have been proposed (Friedman and Elidan, 2004; Murphy, 2001).
Information-theoretic approaches
Information-theoretic approaches use a generalisation of pairwise correlation coefficient in equation (1), called Mutual Information (MI), to compare expression profiles from a set of microarrays.
For each pair of genes, their MI[ij] is computed and the edge a[ij]=a[ji] is set to 0 or 1 depending on a significance threshold to which MI[ij] is compared. MI can be used to measure the degree of
independence between two genes.
Mutual information, MI[ij], between gene i and gene j is computed as:
where H, the entropy, is defined as:
The entropy H[i] has many interesting properties; specifically, it reaches a maximum for uniformly distributed variables, that is, the higher the entropy, the more randomly distributed are gene
expression levels across the experiments. From the definition, it follows that MI becomes zero if the two variables x[i] and x[j] are statistically independent (P(x[i]x[j])=P(x[i])P(x[j])), as their
joint entropy H[ij]=H[i]+H[j]. A higher MI indicates that the two genes are non-randomly associated to each other. It can be easily shown that MI is symmetric, M[ij]=M[ji], therefore the network is
described by an undirected graph G, thus differing from Bayesian networks (directed acyclic graph).
MI is more general than the Pearson correlation coefficient. This quantifies only linear dependencies between variables, and a vanishing Pearson correlation does not imply that two variables are
statistically independent. In practical application, however, MI and Pearson correlation may yield almost identical results (Steuer et al, 2002).
The definition of MI in equation (3) requires each data point, that is, each experiment, to be statistically independent from the others. Thus information-theoretic approaches, as described here, can
deal with steady-state gene expression data set, or with time-series data as long as the sampling time is long enough to assume that each point is independent of the previous points.
Edges in networks derived by information-theoretic approaches represent statistical dependences among gene-expression profiles. As in the case of Bayesian network, the edge does not represent a
direct causal interaction between two genes, but only a statistical dependency. A ‘leap of faith' must be performed in order to interpret the edge as a direct causal interaction.
It is possible to derive the information-theoretic approach as a method to approximate the JPD of gene expression profiles, as it is performed for Bayesian networks. We refer the interested readers
to Margolin et al (2006).
ARACNE (Basso et al, 2005; Margolin et al, 2006) belongs to the family of information-theoretic approaches to gene network inference first proposed by Butte and Kohane (2000) with their relevance
network algorithm.
ARACNE computes M[ij] for all pairs of genes i and j in the data set. M[ij] is estimated using the method of Gaussian kernel density (Steuer et al, 2002). Once M[ij] for all gene pairs has been
computed, ARACNE excludes all the pairs for which the null hypothesis of mutually independent genes cannot be ruled out (Ho: MI[ij]=0). A P-value for the null hypothesis, computed using Montecarlo
simulations, is associated to each value of the mutual information. The final step of this algorithm is a pruning step that tries to reduce the number of false-positive (i.e. inferred interactions
among two genes that are not direct causal interaction in the real biological pathway). They use the Data Processing Inequality (DPI) principle that asserts that if both (i,j) and (j,k) are directly
interacting, and (i,k) is indirectly interacting through j, then M[i, k]min(M[ij], M[jk]). This condition is necessary but not sufficient, that is, the inequality can be satisfied even if (i, k) are
directly interacting. Therefore the authors acknowledge that by applying this pruning step using DPI they may be discarding some direct interactions as well.
Ordinary differential equations
Reverse-engineering algorithms based on ordinary differential equations (ODEs) relate changes in gene transcript concentration to each other and to an external perturbation. By external perturbation,
we mean an experimental treatment that can alter the transcription rate of the genes in the cell. An example of perturbation is the treatment with a chemical compound (i.e. a drug), or a genetic
perturbation involving overexpression or downregulation of particular genes.
This is a deterministic approach not based on the estimation of conditional probabilities, unlike Bayesian networks and information-theoretic approaches. A set of ODEs, one for each gene, describes
gene regulation as a function of other genes:
where θ[i] is a set of parameters describing interactions among genes (the edges of the graph), i=1 … N, x[i](t) is the concentration of transcript i measured at time t, i, N is the number of genes
and u is an external perturbation to the system.
As ODEs are deterministic, the interactions among genes (θ[i]) represent causal interactions, and not statistical dependencies as the other methods.
To reverse-engineer a network using ODEs means to choose a functional form for f[i] and then to estimate the unknown parameters θ[i] for each i from the gene expression data D using some optimisation
The ODE-based approaches yield signed directed graphs and can be applied to both steady-state and time-series expression profiles. Another advantage of ODE approaches is that once the parameters θ[i,
] for all i are known, equation (5) can be used to predict the behaviour of the network in different conditions (i.e. gene knockout, treatment with an external agent, etc.).
NIR, MNI and TSNI
In recent studies (Gardner et al, 2003; di Bernardo et al, 2005; Bansal et al, 2006), ODE-based algorithms have been developed (Network identification by multiple regression (NIR) and microarray
network identifcation (MNI)) that use a series of steady-state RNA expression measurements, or time-series measurements (time-series network identification—TSNI) following transcriptional
perturbations, to reconstruct gene–gene interactions and to identify the mediators of the activity of a drug. Other algorithms based on ODEs have been proposed in the literature (D'haeseleer et al,
1999; Tegner et al, 2003; Bonneau et al, 2006; van Someren et al, 2006).
The network is described as a system of linear ODEs (de Jong, 2002) representing the rate of synthesis of a transcript as a function of the concentrations of every other transcript in a cell, and the
external perturbation:
where i=1 ,…, N; k=1 … M, N is the number of genes, M is the number of time points, [xdot][i](t[k]) is the concentration of transcript i measured at time t[k], [xdot][i](t[k]) is the rate of change
of concentration of gene i at time t[k], that is, the first derivative of the mRNA concentration of gene i measured at time t[k], a[ij] represents the influence of gene j on gene i, b[i] represents
the effect of the external pertrurbation on x[i] and u(t[k]) represents the external perturbation at time t[k] (a[ij] and b[i] are the θ in equation (5)).
In the case of steady-state data, [xdot][i](t[k])=0 and equation (6) for gene i becomes independent of time and can be simplified and rewritten in the form of a linear regression:
The NIR algorithm (Gardner et al, 2003) computes the edges a[ij] from steady-state gene expression data using equation (7). NIR needs, as input, the gene expression profiles following each
perturbation experiment (x[j]), knowledge of which genes have been directly perturbed in each perturbation experiment (b[i]u) and optionally, the standard deviation of replicate measurements. NIR is
based on a network sparsity assumption, that is, a maximum number of ingoing edges per gene (i.e. maximum number of regulators per gene), which can be chosen by the user. The output is in matrix
format, where each element is the edge a[ij]. The inference algorithm reduces to solving equation (7) for the unknown parameters a[ij], that is, a classic linear-regression problem.
The MNI algorithm (di Bernardo et al, 2005) is based on equation (7) as well, and uses steady-state data like NIR, but importantly, each microarray experiment can result from any kind of
perturbation, that is, we do not require knowledge of b[i]u. MNI is different from other inference methods as the inferred network is used not per se but to filter the gene expression profile
following a treatment with a compound to determine pathways and genes directly targeted by the compound. This is achieved in two steps. In the first step, the parameters a[ij] are obtained from gene
expression data D; in the second step, the gene expression profile following compound treatment is measured (x[i]^d with i=1 … N), and equation (7) is used to compute the values b[i]u for each i, as
a[ij] is known and u is simply a constant representing the treatment. b[i] different from 0 represents the genes that are directly hit by the compound. The output is a ranked list of genes; genes at
the top of the list are the most likely targets of the compound (i.e. the ones with the highest value of b[i]).
The network inferred by MNI could be used per se, and not only as a filter; however, if we do not have any knowledge about which genes have been perturbed directly in each perturbation experiment in
dataset D (right-hand side in equation (7)), then, differently from NIR, the solution to equation (7) is not unique, and we can only infer one out of many possible networks that can explain the data.
What remains unique are the predictions (b[i]), that is, all the possible networks predict the same b[i].
MNI performance is not tested here, not being a ‘proper' network inference algorithm, but we refer the interested readers to di Bernardo et al (2005), where the performance is tested in detail.
The TSNI (Time Series Network Identification) algorithm (Bansal et al, 2006) identifies the gene network (a[ij]) as well as the direct targets of the perturbations (b[i]). TSNI is based on equation
(6) and is applied when gene expression data are dynamic (time-series). To solve equation (6), we need the values of [xdot][i](t[k]) for each gene i and each time point k. This can be estimated
directly from the time-series of gene expression profiles. TSNI assumes that a single perturbation experiment is performed (e.g. treatment with a compound, gene overexpression, etc.) and M time
points following the perturbation are measured (rather than M different conditions at steady-state as for NIR and MNI). For small networks (tens of genes), it is able to correctly infer the network
structure (i.e. a[ij]). For large networks (hundreds of genes), its performance is best for predicting the direct targets of a perturbation (i.e. b[i]) (for example, finding the direct targets of a
transcription factor from gene expression time series following overexpression of the factor). TSNI is not tested here, but we refer the reader to Bansal et al (2006).
Reverse-engineering algorithm performance
We performed a comparison using ‘fake' gene expression data generated by a computer model of gene regulation (‘in silico' data). The need of simulated data arises from imperfect knowledge of real
networks in cells, from the lack of suitable gene expression data set and of control on the noise levels. In silico data enable one to check the performance of algorithms against a perfectly known
ground truth (simulated networks in the computer model).
To simulate gene expression data and gene regulation in the form of a network, we use linear ODEs relating the changes in gene transcript concentration to each other and to the external
perturbations. Linear ODEs can simulate gene networks as directed signed graphs with realistic dynamics and generate both steady-state and time-series gene expression profiles. Linear ODEs are
generic, as any non-linear process can be approximated to a linear process, as long as the system is not far from equilibrium, whereas non-linear processes are all different from each other. There
are many other choices possible (Brazhnik et al, 2003), but we valued the capability of linear ODEs of quickly generating many random networks with realistic behaviour and the availability of a
general mathematical theory.
We generated 20 random networks with 10, 100 and 1000 genes and with an average in-degree per gene of 2, 10 and 100, respectively. For each network we generated three kinds of data:
steady-state-simulated microarray data resulting from M global perturbations (i.e. all the genes in the network are perturbed simultaneously in each perturbation experiment); steady-state data
resulting from M local perturbations (i.e. a different single gene in the network is perturbed in each experiment) and dynamic time-series-simulated microarray data resulting from perturbing 10% of
the genes simultaneously and measuring M time points following the perturbation experiment. For all data sets, M was chosen equal to 10, 100 and 1000 experiments. Noise was then added to all data
sets by summing to each simulated gene expression level in the data set, white noise with zero mean and standard deviation equal to 0.1 multiplied by the absolute value of the simulated gene
expression level (Gardner et al, 2003).
All the algorithms were run on all the data sets using default parameters (Supplementary Table 1). Banjo was not run on the 1000 gene data set, as it was crashing owing to memory limitations, whereas
NIR needed an excessively long computation time.
Results from the simulations are described in Table II. PPV stands for positive predictive value (or accuracy) defined as TP/(TP+FP) and Sensitivity (Se) is TP/(TP+FN), where TP, true positive; FP,
false positive and FN, false negative. The label ‘Random' refers to the expected performance of an algorithm that selects a pair of genes randomly and ‘infers' an edge between them. For example, for
a fully connected network, the random algorithm would have a 100% accuracy for all the levels of sensitivity (as any pair of genes is connected in the real network). Some algorithms infer the network
just as an undirected graph, and others as a directed and/or signed graph. Thus, in order to facilitate comparison among algorithms, we computed PPV and Se by first transforming the real (signed
directed graph) and the inferred networks (when directed and/or signed) in an undirected graph (labeled ^u in the table). If the algorithm infers a directed graph and/or a signed directed graph, we
also compared PPV and Se in this case (labeled ^d and ^s, respectively, in the table). When computing PPV and Se we did not include self-feedback loops (diagonal elements of the adjacency matrix), as
all the simulated networks have self-feedback loops, and this could be an advantage for some algorithms as NIR that always recovers a network with self-feedbacks.
Results of the application of network inference algorithms on the simulated data set
We observe that for the ‘global' perturbation data set, all the algorithms, but Banjo, (Bayesian networks) fail, as their performance is comparable with the random algorithm (hence the importance of
reporting always the random performance). Banjo performance is poor when only 10 experiments are available, and reaches a very good accuracy for 100 experiments (independently of the number of
genes), albeit with a very low sensitivity (only few edges are found). The performance of all the algorithms, but Banjo, improves dramatically for the ‘local' data set. In this case, both ARACNE and
NIR perform very well, whereas Banjo performance is still random for 10 experiments, whereas it reaches a very good accuracy but poor sensitivity for the 100 experiments set. Clustering is better
than random, but is clearly not a good method to infer gene networks. Performance is again random for the time-series ‘dynamic' data set. In this case we run ARACNE as well, although the time points
cannot be assumed independent from each other. Banjo has been shown to work on dynamic data, but needs a very high number of experiments (time points) as compared with the number of genes (Yu et al,
In the ‘local' data set, most of the algorithms perform better than random: Banjo recovers only a few of the hundreds of real interactions (low sensitivity and high accuracy), ARACNE recovers about
half of the real connections in the network (good sensitivity and good accuracy), NIR instead recovers almost all of the real interactions (high sensitivity and high accuracy), clustering recovers a
fifth of the real connections but with low accuracy, and most of the connections recovered by clustering are found by ARACNE as well.
It is interesting to ask what is the average overlap between the inferred networks for the different algorithms. Supplementary Figure 1 shows an example of a 10-gene network recovered by each of the
four algorithms.
We intersected the networks inferred by all the algorithms and found that for the 10-gene network ‘local' dataset, on average about 10% of the edges overlap across all the four algorithms, whereas
about 0.01% overlap for the 100-gene network ‘local' dataset (Supplementary Table 2). This is due to the fact that Banjo for the 100-gene network dataset recovers very few connections compared with
the other algorithms, thus the intersection has very few edges in this case; excluding Banjo when computing the intersection rescues the intersection overlap to about 10% (Supplementary Table 2).
We then checked the PPV and Se by considering only the edges that were found by all the algorithms, if any, as reported in Supplementary Table 2. As expected, Se decreased, but the PPV improved only
slightly compared with that of each algorithm considered separately. In addition, for large networks (100 genes), the intersection among the networks exists only 35% of the time, whereas for small
networks (10 genes), it exists 95% of the time.
The performance of each algorithm can be further improved by modifying their parameters (refer to Supplementary Table 3). The ARACNE parameter DPI (threshold for data processing inequality) varies
from 0 to 1 and controls the pruning of the triplets in the network (1, no pruning and 0, each triplet is broken at the weakest edge). We found a DPI=0.15 to be a conservative threshold giving a good
compromise between Se and PPV (Supplementary Table 3). Another parameter is the threshold on the MI. Increasing these parameters allows one to improve the PPV at the cost of reducing the Se. The MI
level can also be chosen automatically by ARACNE, which does a fairly good job; so we suggest not to set the MI threshold manually.
Banjo gives the user a variety of choices for its parameters: the running time can be increased but it does not seem to affect the results much (Supplementary Table 3); so we suggest 60 s for 10-gene
networks and 600 s for 100-gene networks; the Proposer and Searcher modules, which scan and score the network topology to find the best directed acyclic graph, can be chosen from a set of four
different algorithms; on our data sets, the different choices did not affect the results considerably.
The NIR algorithm performance can be affected by varying the parameter k, which defines the average in-degree per node (i.e. each gene can be regulated at most by other k genes). The lower the k, the
higher the PPV at the cost of reducing the Se. The performance of NIR on the simulated data sets is biased as NIR is based on linear ODEs, which are also used to generate the ‘fake' simulated gene
expression data; however, as noise is added to the simulated data the reported performance should not be too far from the true one. NIR seems to perform better than the other algorithms, but it also
requires more information, that is, the genes that have been directly perturbed in each microarray experiment (for example, which gene has been knocked-out, etc.).
Application to experimental data
In order to test different softwares, we also collected the experimental data sets described in Table III and included in the Supplementary material. The microarray data to be given as input to the
algorithms need no specific pre-processing, just normalisation and selection of the genes that have responded significantly to the perturbation experiments, using standard techniques. We chose three
different organisms and data sets of different sizes: two large data sets (A and B), two medium data sets (C and D) and two small data sets (E and F). We tested each algorithm on the largest number
of data sets possible. In each case we used default parameters. Banjo could not run on data set A and B owing to the large size of the dataset. NIR can be applied only to dataset E, as it requires
steady-state experiments and knowledge of the perturbed gene in each experiment. Hierarchical clustering was applied to all data sets.
Experimental data sets used as examples
Table IV summarises the results but it should not be used for comparative purposes between the different algorithms, owing to the limited number of data and to the imperfect knowledge of the real
network. In silico analysis performed in the previous section is better suited for this task.
Results of the application of network inference algorithms on the experiment data sets
ARACNE performs well on datasets A and C, whereas the other algorithms are not significantly better than random. ARACNE is not better than random for data set B and better than random for dataset D,
whereas Banjo is considerably better than random for data set D, albeit with very low sensitivity, in line with the in silico results. For the same dataset, D, clustering performs better than random,
with a lower accuracy than Banjo, but a better sensitivity. The overall low performance on dataset B, as compared with the other data sets, is probably due either to higher noise levels in this
dataset or to imperfect knowledge of the real network (transcription network in yeast).
Data sets E and F are not very informative since the real networks are small and densely connected and therefore the random algorithm performs very well. In any case, only NIR performs significantly
better than random for dataset E, and only clustering does significantly better than random for dataset F.
Discussion and conclusions
In silico analysis gives reliable guidelines on algorithms' performance in line with the results obtained on real data sets: ARACNE performs well for steady-state data and can be applied also when
few experiments are available, as compared with the number of genes, but it is not suited for the analysis of short time-series data. This is to be expected owing to the requirement of statistically
independent experiments. Banjo is very accurate, but with a very low sensitivity, on steady-state data when more than 100 different perturbation experiments are available, independently of the number
of genes, whereas it fails for time-series data. Banjo (and Bayesian networks in general) is a probabilistic algorithm requiring the estimation of probability density distributions, a task that
requires large number of data points. NIR works very well for steady-state data, also when few experiments are available, but requires knowledge on the genes that have been perturbed directly in each
perturbation experiment. NIR is a deterministic algorithm, and if the noise on the data is small, it does not require large data sets, as it is based on linear regression. Clustering, although not a
reverse-engineering algorithm, can give some information on the network structure when a large number of experiments is available, as confirmed by both in silico and experimental analysis, albeit
with a much lower accuracy than the other reverse-engineering algorithms.
The different reverse-engineering methods considered here infer networks that overlap for about 10% of the edges for small networks, and even less for larger networks. Interestingly, if all
algorithms agree on an interaction between two genes (an edge in the network), this interaction is not more likely to be true than the ones inferred by a single algorithm. Therefore it is not a good
idea to ‘trust' an interaction more just because more than one reverse-engineering algorithm finds it. Indeed, the different mathematical models used by the reverse-engineering algorithms have
complementary abilities, for example ARACNE may correctly infer an interaction that NIR does not find and vice versa; hence in the intersection of the two algorithms, both edges will disappear
causing a drop in sensitivity without any gain in accuracy (PPV). Taking the union of the interactions found by all the algorithms is not a good option, as this will cause a large drop in accuracy.
This observation leads us to conclude that it should be possible to develop better approaches by subdividing the microarray dataset in smaller subsets and then by applying the most appropriate
algorithm to each microarray subset. How to choose the subsets and how to decide which is the best algorithm to use are still open questions.
A general consideration is that the nature of experiments performed in order to perturb the cells and measure gene expression profiles can make the task of inference easier (or harder). From our
results, ‘local' perturbation experiments, that is, single gene overexpression or knockdown, seem to be much more informative than ‘global' perturbation experiments, that is, overexpressing tens of
genes simultaneously or submitting the cells to a strong shock.
Time-series data allow one to investigate the dynamics of activation (inhibition) of genes in response to a specific perturbation. These data can be useful to infer the direct molecular mediators
(targets) of the perturbation in the cell (Bansal et al, 2006), but trying to infer the network among all the genes responding to the perturbation from time-series data does not yield acceptable
results. Reverse-engineering algorithms using time-series data need to be improved. One of the reasons for the poor performance of time-series reverse-engineering algorithms is the smaller amount of
information contained in time-series data when compared with steady-state data. Time-series are usually measured following the perturbation of one or few genes in the cell, whereas steady-state data
are obtained by performing multiple perturbations to the cell, thus eliciting a richer response. One way to improve performance in the time-series case is to perform more than one time-series
experiment by perturbing different genes each time, but this may be expensive; another solution could be to perform only one perturbation experiment but with a richer dynamics, for example the
perturbed gene should be overexpressed and then allowed to return to its endogenous level, while measuring gene expression changes of the other genes. Richer dynamics in the perturbation will yield
richer dynamics in the response and thus more informative data.
Gene network inference algorithms are becoming accurate enough to be practically useful, at least when steady-state gene expression data are available, but efforts must be directed in assessing
algorithm performances. In a few years, gene network inference will become as common as clustering for microarray data analysis. These algorithms will become more ‘integrative' by exploiting, in
addition to expression profiles, protein–protein interaction data, sequence data, protein modification data, metabolic data and more, in the inference process (Workman et al, 2006).
• Amato R, Ciaramella A, Deniskina N, Del Mondo C, di Bernardo D, Donalek C, Longo G, Mangano G, Miele G, Raiconi G, Staiano A, Tagliaferri R (2006) A multi-step approach to time series analysis
and gene expression clustering. Bioinformatics 22: 589–596. [PubMed]
• Ambesi A, di Bernardo D (2006) Computational biology and drug discovery: From single-target to network drugs. Curr Bioinform 1: 3–13.
• Bansal M, Della Gatta G, di Bernardo D (2006) Inference of gene regulatory networks and compound mode of action from time course gene expression profiles. Bioinformatics 22: 815–822. [PubMed]
• Basso K, Margolin AA, Stolovitzky G, Klein U, Dalla-Favera R, Califano A (2005) Reverse engineering of regulatory networks in human B cells. Nat Genet 37: 382–390. [PubMed]
• Beer MA, Tavazoie S (2004) Predicting gene expression from sequence. Cell 117: 185–198. [PubMed]
• Bonneau R, Reiss D, Shannon P, Facciotti M, Hood L, Baliga N, Thorsson V (2006) The inferelator: an algorithm for learning parsimonious regulatory networks from systems-biology data sets de novo.
Genome Biol 7: R36. [PMC free article] [PubMed]
• Brazhnik P, de la Fuente A, Mendes P (2003) Artificial gene networks for objective comparison of analysis algorithms. Bioinformatics 19 (Suppl 2): II122–II129. [PubMed]
• Breitkreutz B, Stark C, Tyers M (2003) Osprey: a network visualization system. Genome Biol 4: R22.2–R22.4. [PMC free article] [PubMed]
• Butte A, Kohane I (2000) Mutual information relevance networks: functional genomic clustering using pairwise entropy measurements. Pac Symp Biocomput 418–429. [PubMed]
• de Jong H (2002) Modeling and simulation of genetic regulatory systems: a literature review. J Comp Biol 9: 67–103. [PubMed]
• D'haeseleer P, Wen X, Fuhrman S, Somogyi R (1999) Linear modeling of mrna expression levels during cns development and injury. Pac Symp Biocomput 41–52. [PubMed]
• di Bernardo D, Thompson M, Gardner T, Chobot S, Eastwood E, Wojtovich A, Elliott S, Schaus S, Collins J (2005) Chemogenomic profiling on a genome-wide scale using reverse-engineered gene networks
. Nat Biotechnol 23: 377–383. [PubMed]
• Eisen M, Spellman P, Brown P, Botstein D (1998) Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci USA 95: 14863–14868. [PMC free article] [PubMed]
• Faith J, Gardner T (2005) Reverse-engineering transcription control networks. Phys Life Rev 2: 65–88. [PubMed]
• Foat B, Morozov A, Bussemaker HJ (2006) Statistical mechanical modeling of genome-wide transcription factor occupancy data by matrixreduce. Bioinformatics 22: e141–e149. [PubMed]
• Friedman N, Elidan G (2004) Bayesian network software libB 2.1. Available from http://www.cs.huji.ac.il/labs/compbio/LibB/
• Gardner T, di Bernardo D, Lorenz D, Collins J (2003) Inferring genetic networks and identifying compound mode of action via expression profiling. Science 301: 102–105. [PubMed]
• Hughes TR, Marton MJ, Jones AR, Roberts CJ, Stoughton R, Armour CD, Bennett HA, Coffey E, Dai H, He YD, Kidd MJ, King AM, Meyer MR, Slade D, Lum PY, Stepaniants SB, Shoemaker DD, Gachotte D,
Chakraburtty K, Simon J, Bard M, Friend SH (2000) Functional discovery via a compendium of expression profiles. Cell 102: 109–126. [PubMed]
• Lee TI, Rinaldi NJ, Robert F, Odom DT, Bar-Joseph Z, Gerber GK, Hannett NM, Harbison CT, Thompson CM, Simon I, Zeitlinger J, Jennings EG, Murray HL, Gordon DB, Ren B, Wyrick JJ, Tagne J-B,
Volkert TL, Fraenkel E, Gifford DK, Young RA (2002) Transcriptional regulatory networks in Saccharomyces cerevisae. Science 298: 799–804. [PubMed]
• Margolin A, Nemenman I, Basso K, Wiggins C, Stolovitzky G, Della Favera R, Califano A (2006) Aracne: an algorithm for the reconstruction of gene regulatory networks in a mammalian cellular
context. BMC Bioinformatics S1(arXiv: q–bio.MN/0410037) [PMC free article] [PubMed]
• Murphy K (2001) The bayes net toolbox for matlab. Comput Sci Stat 33.
• Pe'er D, Nachman I, Linial M, Friedman N (2000) Using bayesian networks to analyze expression data. J Comput Biol 7: 601–620. [PubMed]
• Prakash A, Tompa M (2005) Discovery of regulatory elements in vertebrates through comparative genomics. Nat Biotechnol 23: 1249–1256. [PubMed]
• Shannon P, Markiel A, Ozier O, Baliga N, Wang J, Ramage D, Amin D, Schwikowski B, Ideker T (2003) Cytoscape: a software environment for integrated models of biomolecular interaction networks.
Genome Res 13: 2498–2504. [PMC free article] [PubMed]
• Steuer R, Kurths J, Daub CO, Weise J, Selbig J (2002) The mutual information: detecting and evaluating dependencies between variables. Bioinformatics 18 (Suppl 2): 231–240, Evaluation Stud. [
• Tadesse M, Vannucci M, Lio P (2004) Identification of DNA regulatory motifs using bayesian variable selection. Bioinformatics 20: 2556–2561. [PubMed]
• Tegner J, Yeung MK, Hasty J, Collins JJ (2003) Reverse engineering gene networks: integrating genetic perturbations with dynamical modeling. Proc Natl Acad Sci USA 100: 5944–5949. [PMC free
article] [PubMed]
• van Someren E, Vaes B, Steegenga W, Sijbers A, Dechering K, Reinders M (2006) Least absolute regression network analysis of the murine osteoblast differentiation network. Bioinformatics 22:
477–484. [PubMed]
• Workman C, Mak H, McCuine S, Tagne J, Agarwal M, Ozier O, Begley T, Samson L, T I (2006) A systems approach to mapping DNA damage response pathways. Science 312: 1054–1059. [PMC free article] [
• Yu J, Smith VA, Wang PP, Hartemink AJ, Jarvis ED (2004) Advances to bayesian network inference for generating causal networks from observational biological data. Bioinformatics 20: 3594–3603. [
Articles from Molecular Systems Biology are provided here courtesy of The European Molecular Biology Organization and Nature Publishing Group
Your browsing activity is empty.
Activity recording is turned off.
See more... | {"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC1828749/?tool=pubmed","timestamp":"2014-04-19T10:24:18Z","content_type":null,"content_length":"118407","record_id":"<urn:uuid:fdbb3506-4361-4e12-93e5-69ba3f24c9eb>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00021-ip-10-147-4-33.ec2.internal.warc.gz"} |
Reference K2
[K2] N.H. Kuiper, Convex immersions of closed surfaces
in E^3, Comm. Math. Helv. 35 (1961) 85-92.
This paper deals specifically with immersions into three-space, and gives a good description of tightness. It gives the decomposition of a tight surface into the M+ and M- regions . It shows that
the real projective plane and the Klein bottle do not admit tight immersions into three-space, and that any other surface (except the one with Euler characteristic -1) can be tightly immersed.
This paper describes an immersion of the real projective plane with exactly one minimum, one maximum and one saddle point, and it points out how this could be used to produce an eversion of the
10/12/94 dpvc@geom.umn.edu -- The Geometry Center | {"url":"http://www.maa.org/external_archive/CVM/1998/01/tprppoh/article/Refs/Kuiper2.html","timestamp":"2014-04-18T17:11:29Z","content_type":null,"content_length":"2381","record_id":"<urn:uuid:025eb3ac-ab3f-44dd-bef2-3c590c390d9d>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00011-ip-10-147-4-33.ec2.internal.warc.gz"} |
joint effect of two endogenous variables
--- On Tue, 24/8/10, xueliansharon wrote:
> I want to estimate the following model:
> ivreg2 y z1 z2 (y1 y2= z3 z4 z5), cluster(mm)
> However, my instruments z3, z4 and z5 don't have enough
> independent variations (i.e.we don't have instruments that are
> correlated to y1 but not correlated to y2, or just correlated
> to y2 but not correlated to y1, z3, z4 and z5 all affect both
> y1 and y2) and thus the effects of y1 and y2 on the dependent
> variable y can't be isolated, so I want to compute and test the
> significance of the joint effect of y1 and y2, i.e. the
> effect on dependent variable when both y1 and y2 increase by 1
> unit. Does anybody know how to realize this idea?
The first thing that comes to mind is that you will need to make
sure that the unit of y1 and y2 are equal, if y1 is in seconds and
y2 is in liters, than what does a unit change mean? A common
approach is to standardize variables, i.e. subtract the mean and
divide by the standard deviation.
What you could do is constrain the effects to be equal. A quick
scan of -help ivreg2- gave me the impression that it doesn't
allow for the -constraint()- option. You might be able to use
an old trick: you can constrain the effects of two variables to
be equal by adding the sum of these two variables to your model.
You can see that as follows. Start with a regular regression:
y = b0 + b1 x1 + b2 x2 + e
we want to constrain b1 and b2 to be equal, so we can write:
y = b0 + b1 x1 + b1 x2 + e
= b0 + b1 (x1 + x2) + e
So application of this trick to your model would been that
you generate a new variable y_comb = y1 + y2, and use that
variable instead of y1 and y2.
However, I don't know much about -ivreg2-, so other people
who know more about it, will need to confirm that this trick
will have the desired properties before I would recommend
Hope this helps,
Maarten L. Buis
Institut fuer Soziologie
Universitaet Tuebingen
Wilhelmstrasse 36
72074 Tuebingen
* For searches and help try: | {"url":"http://statalist.1588530.n2.nabble.com/joint-effect-of-two-endogenous-variables-td5456810.html","timestamp":"2014-04-17T06:42:09Z","content_type":null,"content_length":"82615","record_id":"<urn:uuid:8147718f-d7e3-4358-84ce-05896923b61c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00628-ip-10-147-4-33.ec2.internal.warc.gz"} |
Almost Always Go for 2-Point Conversions?
In the Buccaneers-Redskins game this past Sunday, the Redskins were able to score a potentially game-tying touchdown at the end of regulation, only to fail to hit the extra point due to a mishandled
snap. Gregg Easterbrook suggested the Redskins should have gone for the two-point conversion, which is a plausible strategy in many circumstances. But Easterbrook went on to add this little tidbit:
"Rushing deuce attempts are about 65 percent successful in the NFL -- a better proposition than the 50/50 of advancing to overtime."
It's well established that 2-point conversion attempts are successful slightly less than 50% of the time, so could the 65% number for runs possibly be true? If so, what would that mean for NFL
There have been 718 2-point conversion attempts from 2000-2009, including playoff games. Overall, they've been successful 46.3% of the time. But this is slightly misleading because it includes
aborted kick attempts. If we weed those out, along with some other mysterious plays, such as Josh McCown's kneel-down while trailing by 5 points in the final few seconds of the Cardinals-Vikings 2003
game, we get a different answer. For all normal 2-point conversions, the success rate is 47.9%.
Now look at the success rate broken out by play type:
│Play Type │Success Rate │Attempts│
│Passes │43.4% │525 │
│Runs │61.7% │183 │
Running plays have been successful nearly 62% of the time. So Easterbrook's number is very close, the difference likely due to the span of years he looked at.
This is a classic football
game theory
problem. Despite being significantly more successful than passing, running plays are much less frequent. It's clear that offenses should be running more often and defenses should be more biased
against the pass. We can't tell exactly what run/pass ratio is optimum from these numbers, but the equilibrium will occur when the success rates for running and passing equalize.
Unfortunately, it's not that simple. Many of those runs are QB scrambles or bootleg options. If we remove all the plays in which the QB is credited for the run and just look at conventional runs we
get the following:
│Play Type │Success Rate │Attempts│
│Passes │43.4% │525 │
│QB Runs │74.5% │47 │
│RB Runs │57.4% │136 │
QB runs are successful a whopping 75% of the time, but the picture is still muddled. Often QBs will sprint for the end zone only when they sense they have a good chance, so there's likely a
considerable amount of bias in that number. And only when they happen to be tackled between the 2-yard line and the end zone would the attempt be recorded as an unsuccessful run. If they're tackled
behind the line of scrimmage it probably gets recorded as a sack, which is considered a pass play.
Still, even after removing all QB runs, conventional RB runs are successful 57% of the time. This suggests that if teams ran more often, the overall success rate would increase. Defenses would likely
respond, and eventually the success rates for both running and passing would equalize at a success rate
somewhat over 50%
If true, this would mean that
going for the 2-point conversion is a net positive expected-value play
. In 2009, the success rate for extra point kicks was 98.3%, and so far in 2010 it's 98.8%. So for 2-point conversions to be the higher expected-value play, it would only need to be successful about
49.5% of the time. A strategy mix that's heavy on running would almost certainly exceed that rate.
These results tend to confirm the previous finding that teams should generally be
running more often inside the 10-yard line
So should coaches go for 2 more often than not? Perhaps. The score and time remaining would ultimately dictate the strategy in each situation, but as long as the game is a point-maximization contest,
which is usually until the end of the 3rd quarter, I'd say it's good idea. And in the end-game, when an extra point ties, but a 2-point conversion takes the lead, it would almost certainly be a good
idea, all other things being equal.
Think how much more exciting every touchdown would be if there was almost always a 2-point conversion attempt. Unfortunately, I doubt coaches would do it. They're so risk-averse, it would take a
success rate considerably higher than 50% to convince them to adopt a more aggressive strategy.
50 Responses to “Almost Always Go for 2-Point Conversions?”
Steve says:
How often are there sacks on 2-point conversions? There's absolutely no reason to take one since there's no risk to fumbling or throwing an interception.
Ironically, re Easterbrook, it was the Bucs who employed this exact strategy in 2005 agains the Redskins, running Mike Alstott on a 2-point conversion for the win very late in regulatino.
Ian Simcox says:
Something to consider as part of an underdog strategy? Given that the expected payoff isn't massively more than just kicking the PAT (0.16 pts per TD), the only difference is in the
variance of the two options. Given your work saying that underdogs need to be on a higher variance strategy, it seems they should be going for it and it's the favourites that should be
just kicking the PAT.
DSMok1 says:
I agree with Ian, here. It's more variance than payoff at its core. Underdogs should go for 2 more often, big favorites probably not.
Kevin says:
And in a game involving teams with good offenses but bad defenses, the teams should go for 2 more often...
J-Doug says:
"Defenses would likely respond, and eventually the success rates for both running and passing would equalize at a success rate somewhat over 50%."
Except you don't really know this. The game is far too complex to just assume they'll split the difference. It's far more reasonable to assume that the current success rate for 2PT
conversions is the equilibrium outcome and that a change in offensive strategy--and a response in defensive strategy, would result in little if any departure from the current rate.
Bigmouth says:
Isn't there a sample size issue with so many fewer runs than passes?
Anonymous says:
Except you don't really know this. The game is far too complex to just assume they'll split the difference. It's far more reasonable to assume that the current success rate for 2PT
conversions is the equilibrium outcome and that a change in offensive strategy--and a response in defensive strategy, would result in little if any departure from the current rate."
It's not an equilibrium. It can be exploited. The fact is, it's not being exploited. Once it is exploited, we would EXPECT defenses to respond. When Team A starts going for two-point
conversions every time, and chooses to run every time, the defense SHOULD respond by expecting the run, therefore decreasing the success rate of the two point conversion by running. If
both offense and defense play optimally, there will necessarily be an equilibrium. Success rate for pass and run will be equal. This doesn't necessarily mean the number of pass attempts
vs. number of rush attempts will be equal, but the conversion rate should be.
If the success rate for TPC is not equal, teams are failing to exploit opportunities.
Brian Burke says:
J-Doug-I don't agree. The payoff values for running and passing are equal (a success = a success). So shouldn't success rates be equal at the minimax?
To clarify, I'm not assuming they'd split the difference. However, we do know the equilibrium must be somewhere between the 2 success rates, and very likely be higher than the 49.25%
required to be break-even for going for 2.
Steve-You're correct. Only 12 sacks in the sample.
Bigmouth-Sample size may be an issue with relatively few runs. The SE for n=136, p=.57 would be +/-.04%. So depending on how you look at it, things are 'significant.' Passing and running
are >2 SEs apart, but running is not quite 2 SEs from the overall rate of 48%.
Anonymous says:
Shouldn't the midpoint be in proportion to the types of play calls? Assume the QB runs are pass plays:
Pass: 572 plays, 46%
Run: 136 plays, 57%
Therefore, I'd assume something closer to 48% as the equilibrium point as opposed to 50-51%.
J-Doug says:
"It's not an equilibrium. It can be exploited. The fact is, it's not being exploited. Once it is exploited, we would EXPECT defenses to respond. "
"and very likely be higher than the 49.25% required to be break-even for going for 2."
Well, it is by definition an equilibrium--if there were no equilibrium there wouldn't be anything to exploit. It may not be a stable one, but I see the analysis in this blog making a very
large leap to assume that, after both offense and defense adjust, that the success rate would exceed break even.
"The payoff values for running and passing are equal (a success = a success). So shouldn't success rates be equal at the minimax?"
The payoff value isn't the issue here. The issue is that the success ratios that you're measuring are dependent in part on the choices and expectations of the defense.
Based on the data you're working with in this post, there's no reason at all that the real number should be in between the current success rate and the current success rate for 2PC RB
runs, because that number is dependent on the fact that the defense has a certain degree of expectation that that the offense will make that play
On the contrary, if the offense in any way signals an increase in the probability that they will hand off to the RB on a 2 pt attempt, one should expect the success rate to drop
significantly irrespective of the current values for success rates.
J-Doug says:
Put it another way:
The real success rate of an RB run on a 2PT conversion should be equal to P(A)*P(C)+P(B)*(1-P(C))
Where P(A) = Probability of reaching the end zone on a 2 yd run at the 2 yd line when the defense knows the offense will hand it off to the RB
Where P(B) = Probability of reaching the end zone on a 2 yd run at the 2 yd line when the defense guesses incorrectly
And P(C) = Probability of the defense being right about what the play is going to be.
The first problem here is that the value you observe is equal to that equation, but you're assuming to know P(A) and P(B) without knowing the probability of P(C). The second problem here
is that P(C) increases as the offense increases its frequency of choosing this play.
Without knowing P(C) and without acknowledging the correlation between P(C) and the act of the offense changing its strategy, it's perfectly plausible that the success rate would not fall
within the two numbers you specify.
Dave says:
Even if the expected number of points is greater when going for two, there are quite often good reasons not to do so, especially when the game was tied prior to a late game touchdown.
Example: Suppose the score is tied 20-20 before team A scores a touchdown with 3 minutes left in the game.
1) All teams have a 60% success rate on 2 point conversions and a 99% success rate on extra points.
2) Team B has a 40% chance of scoring a touchdown in the remaining 3 minutes, and if they do, that will be the final score of regulation.
3) If team B is down by 7 when they score a touchdown, they will attempt an extra point to send the game to overtime, and in overtime each team would have a 50% chance of winning.
Should team A go for 1 or 2? Let's look at the numbers.
(note: In either scenario, team A has at least a 60% of winning because there is only a 40% chance that team B responds with a touchdown. So we'll ignore that for the moment and consider
only situations where team B answers with a touchdown.)
If team A goes for 1, here are their winning scenarios:
a) 99%(team A makes) * 99%(team B makes) * 50% (chance in OT) = 49.005%
b) 99% * 1% (team B misses) = 0.99%
c) 1% * 1% (both miss) * 50% (OT) = 0.005%
Total = 50%
If team A goes for 2, here are their winning scenarios:
a) 60% (team A succeeds) * 60% (team B succeeds) * 50% (OT) = 18%
b) 60% * 40% (team B fails) = 24%
c) 40% * 1% (team B misses extra point) * 50% (OT) = 0.2%
Total = 42.2%
Adding back in the 60% chance of winning by keeping team B from scoring a touchdown, and we have:
Going for 1 = 60% + 40%*50% = 80% chance of winning.
Going for 2 = 60% + 40%*42.2% = 76.8% chance of winning.
Clearly in this situation, going for 2 is a bad move because if you fail, the other team can exploit that by adjusting their strategy and kicking the extra point.
My hunch is that this logic could be extended to say that in all situations where the score was tied prior to the touchdown, going for 2 is a bad move. If a team is down 7 prior to
scoring a touchdown, going for 2 is a great move.
makewayhomer says:
"On the contrary, if the offense in any way signals an increase in the probability that they will hand off to the RB on a 2 pt attempt, one should expect the success rate to drop
significantly irrespective of the current values for success rates"
yes yes yes
Anonymous says:
"On the contrary, if the offense in any way signals an increase in the probability that they will hand off to the RB on a 2 pt attempt, one should expect the success rate to drop
significantly irrespective of the current values for success rates"
But remember that the success rate of PASSING will increase... and at some point it's fair to expect that the success rate of passing will equal the success rate of running...
J-Doug says:
"But remember that the success rate of PASSING will increase... and at some point it's fair to expect that the success rate of passing will equal the success rate of running..."
Both of which, again, depend on the ability of the defense to adapt and the amount of information the offense creates by adopting any sort of strategy at all.
Anonymous says:
"My hunch is that this logic could be extended to say that in all situations where the score was tied prior to the touchdown, going for 2 is a bad move. If a team is down 7 prior to
scoring a touchdown, going for 2 is a great move."
Your analysis only holds true for late game situations - not necessarily for situations where the score was previously tied after a touchdown. If we assume Brian's analysis is correct and
that two-point conversions are +EV, they should be attempted up to the point where the benefit of minimizing variance by going for 1 outweighs the benefit of the added EV by going for
two. I imagine this would only present itself in the 4th quarter, but perhaps earlier.
Anonymous says:
I think you may be thinking too specifically. Sure, certain teams will struggle to adapt to other specific teams' specific strategies.
If all teams aggregated make a two-point conversion on average 50 percent of the time, it'd be difficult to find a matchup that's exactly 50/50. most off/def matchups will swing slightly
one way or another. it doesn't change the fact that game theory is at play and that if the success rates aren't equal then the offense is missing opportunities
J-Doug says:
"it doesn't change the fact that game theory is at play"
This is exactly my point, game theory IS at play. But the conclusion of this post is--in my opinion--ignoring a very important aspect of the strategic interaction between offense and
I'm not talking about the matchup level, I'm talking about the league-wide level. The only way you can assume that a change in offensive strategy will increase the success rate towards
the RB run success rate is if you assume that defense--and I mean the aggregate of all defenses at the league level--will not incorporate new information into their own strategy.
To be clear, I'm not saying Brian Burke is absolutely wrong. It's entirely possible that offenses aren't running 2PT conversions enough. What I'm saying is that based on the data Brian
has provided here, you cannot conclude that this will be the case, and you most certainly can't conclude anything about whether or not that value will exceed break even.
James says:
I think j doug could be right as this is not a perfect game theory situation as while the offence has two choices the defence has an infinite choice between extreme pass defence and
extreme run defence. If standard defence allows 57% success on runs and 47% success on passes but moving towards pass defence increases run success by more than it lowers pass success and
vice versa for a run defence then standard defence is the equilibrium. Of course this is purely theoretical and can never be proved because we cannot determine what the defence is doing
but it shows that passing and running do not have to be equal to be on equilibrium. I really need to draw a graph to show this idea clearly
Anonymous says:
by definition, an offense should choose the higher success rate every time. in this case, they would choose run. the defense should respond over time, decreasing the success rate of the
run, and (theoretically) increasing the success rate of the pass). the offense should always choose the higher success rate until they are equal.
Ivan says:
"It's far more reasonable to assume that the current success rate for 2PT conversions is the equilibrium outcome and that a change in offensive strategy--and a response in defensive
strategy, would result in little if any departure from the current rate. "
It can't be the equilibrium outcome. If you consider some representative offense taking as given the mixed strategy of the defense they can strictly improve by running at a higher
Anonymous says:
What is this McCown kneel down? http://sports.yahoo.com/nfl/recap?gid=20031228022&prov=ap says it was a pass to Emmitt Smith that was too short
J-Doug says:
"the offense should always choose the higher success rate until they are equal."
This is a fantastic strategy if the success rates aren't in any way related to the expectations of your opponent (the defense). Unfortunately, in football (and pretty much everything)
they are.
"It can't be the equilibrium outcome. If you consider some representative offense taking as given the mixed strategy of the defense they can strictly improve by running at a higher
Again, there's no way of knowing if running more often will improve the offense's chances unless we ignore the fact that the offense's chances are dependent on defensive expectations,
which is a ridiculous thing to ignore. Second, I don't actually think the current outcome is an equilibrium—I was making a point about what we can and cannot know about the problem at
hand. Third, the issue remains that if you don't know P(C) from my third comment or the difference between P(A) and P(B), there's absolutely no way of knowing what the real success rate
would be in the event of a change in strategy, and you absolutely cannot conclude the numbers that would bound the eventual outcome.
willkoky says:
Why wouldn't the proper thing to do be to add the QB runs to the pass plays? I presume 2 pointers are too far out that anybody is calling the QB sneak. Those QB runs are part of the
options provided by a pass play.
Anonymous says:
It's shocking that teams don't even attempt the 2pt conversion after a defensive penalty on the PAT, even though the expected point value of a 2pt conversion is well over 1 after you've
moved half the distance to the goal.
Anonymous says:
Adding QB runs to pass plays would give pass plays around a 46% success rate
Anonymous says:
Team A and Team B both attempt two-point conversions. Team A runs the ball. Team B passes the ball. Who is more likely to score?
Hopefully, your answer is Team A, which is the correct answer.
"Again, there's no way of knowing if running more often will improve the offense's chances unless we ignore the fact that the offense's chances are dependent on defensive expectations,
which is a ridiculous thing to ignore."
While we can't explicitly PROVE that running the ball increases a team's chances of scoring on a 2-point conversion on any given attempt, it seems asinine to say that attempting a play
with SR of 57% does not improve your chances of scoring over attempting a play with 46% SR.
James says:
Willkoky, QB runs could also be designed QB draw plays, which should be considered runs.
J-Doug says:
@Anonymous: Everything you just said is--again--dependent on defensive expectations remaining constant. This is not something you can assume in a strategic interaction or any sort of
game-theoretical analysis. If you want to go ahead assuming that then fine, but it's a completely false assumption that has major implications for the conclusion of this post.
Anonymous says:
In no way, shape, or form were defensive expectations held constant over time in analysis. That is the crux of game theory. The defense will adapt. Nothing I said is dependent on
defensive expectations remaining constant.
Given ONLY the information that Team A will run, and Team B will pass, we would expect 57% chance of success for team A, and 46% for team B (whatever the original numbers were). I don't
understand why you think this aggregated average is dependent on defensive expectation. For this exact instance, of course team A and team B's ACTUAL percent chance of success is
dependent on defensive expectations (as well as many, many other factors). The averages of 57% and 46% essentially "account" for defensive expectation.
It's like empirically knowing that 57% of women suffer some side effect of a drug, compared with only 46% of men. If person A suffered the side effect, would you guess they were female or
male (given equal base rates)? Of course there are other contextual factors, like medical history, age, etc. but, like particular and specific matchups in the nfl, can be taken on a case
by case basis.
Anonymous says:
Anonymous: About McCown's kneeldown. I assume Brian meant the final touchdown of the game. He said "kneel-down while trailing by 5 points", but I think he meant the final touchdown came
when trailing by 5 points. I assume the kneel-down came after they were up by 1?
Anonymous says:
Example: Suppose the score is tied 20-20 before team A scores a touchdown with 3 minutes left in the game.
3) If team B is down by 7 when they score a touchdown, they will attempt an extra point to send the game to overtime, and in overtime each team would have a 50% chance of winning.
Assumption 3 does not hold true, because, if obeying ideal strategy, a team should go for 2 if down by seven.
If they do that, the chances that team A wins if they kick an extra point become
Which is clearly a lower chance of winning than if team A had gone for 2.
Ehren says:
If the expectation of converting was always 50%, I'd go for it every time in order to not risk my players getting injured in overtime and to be more rested for the next game.
Anonymous says:
Re: the 2003 Vikings/Cardinals game:
Cards went for two twice that game.
1) Cards were down 17-6. Scored a TD to make it 17-12. Went for 2 with a pass to Smith, who was stopped short of goal line.
2) Cards were down 17-12. Scored a TD with 4 seconds left. Inexplicably kneeled for the conversion. Makes no sense. TD was last play of the game and opponents can't return a blocked PAT
for a point. So Cards were in no danger of losing the game.
Only two possible reasons
a) didn't want to rub it in. Cards had just eliminated the Vikes from the playoffs by scoring at the end.
b) Wanted to keep the score on the under. I have no idea what the over/under was, but it's possible (though highly unlikely, way too obvious) that the O/U was 35.5 and by kneeling, coach
kept the game at the Under.
Anonymous says:
TMQ wasn't talking about going for two in every situation. Only about going for two when the choice is a PAT to send the game to OT or going for two for the win. Probably every ready here
realizes that going for two when down by one late in the 4th is the better proposition. But TMQ is writing for "the masses" who may not understand that concept.
Anonymous says:
Interesting discussion. My view is that the data is skewed by the QB option to pass/run. By running out of the pocket, the QB can ascertain the success of running into the endzone fairly
easily; if chances of running seem low, QB can lob pass into endzone for a "jump ball". The catagorization of "run/pass" depends on the choice of the QB. Ergo, this skews the successful
run 2 point conversion on the high side.
Anonymous says:
What this boils down to is if a given team can maintain the norm (which is a 1pt. field goal per touchdown at the end of the game given that a 2pt. play wasn't necessary due to a comeback
scenario)would playing the odds of using a 2pt. conversion net a gain in score? If the odds are better than 50% it would be no different than kicking a field goal per touchdown with a net
of 1 extra pt per touchdown on each success and no loss of pt. on one failure for every 2 touchdowns. Why not go for it if you can maintain pace with the other teams scoring? If push
comes to shove and you are out a second TD opportunity (because you will need an even number of touchdowns to justify the odds of going for a 2pt. play (reality check) at the end of the
game) then kick it and go into OT and take your chances there.
Flanksteak says:
The numbers used in the analysis are based off the current decision making logic in the NFL -- namely, how many 2pt conversions occurred in the 1st and 2nd quarters? It might be that the
success rates shown should be higher in that the good teams don't "need" to go for it and are therefore left out of the sample. Furthermore, if most or all of the 2pt conversions are done
in close games, this implies a close matchup between the teams (admittedly not necessarily between offense/defense as it could be a shootout). If we want to compare a strategy of going
for two through the first three quarters versus kicking the extra point, we might be understating the EV.
Statalyzer says:
It's not just 2 point conversions - many offenses seem to think that 3rd and 3+ yards is a pass-only situation and they'll line up with 4 or 5 WRs and not even consider a run, even though
even poor running teams average more than 3 yards a rush.
david says:
Here's the problem with all of your theorizing...running the ball may just be easier to convert than passing. There may never be an equilibrium because one play is just generally easier
to complete. Passing against a defense with 12 vertical yards is tough, but running may be easier even when they are expecting because the blockers and back just need to push 2 yards or
run to the outside or qb sneak up the middle..there are still many options at the run which is very hard to defend in short yardage. Don't forget that the defense will virtually never go
into an 11 man run stop defense, let alone 9 maybe 8 because of the threat of a pass to a 3 tight end set, so running may just be more effective like shooting a 3 versus a 16 footer in
basketball even if the 16 footer is contested equally to the 3 it should be more successful.
Anonymous says:
But what if, as I imagine, many of the runs are checks at the line of scrimmage based upon certain looks? i.e. if there are 6 in the box or less it's an automatic run play, if not the
pass play remains? There may be several "unprofitable" decisions when running the ball is very negative. You can't always look at stats and say "teams should run more". Perhaps every time
they pass on a 2 point conversion they have a less than 50% chance of a successful run. Of course in some situations they are better off taking the delay of game and kicking the extra
point... But there usually is a reason they are going for two, and they don't gain anything from "losing by 1 rather than 2". But much like a chart on a variation of blackjack and "when
to surrender", there are certain situations when "only" being successful around 45% of the time (I estimate because QB could run or throw the ball laterally which counts as a run as well)
is perhaps much greater than running.
If the defense stacks the box and shows blitz, and calls a "pinch" defense I can't imagine it would be worth sticking with the current run play. Teams prepare for this, and if certain
defensive looks are given they go with their passing play. It is not as if you call a run play and stick with it regardless of the look, or a pass play and go regardless of the look. Runs
are successful because teams do them when the defense is lined up to defend pass, and passes aren't successful because it's the "surrender option" when you can't get the look in which
running the ball is effective.
I think you can also advance the stats by considering the average number of possessions (and scores) per game. The higher varience strategy of going for two may not always be best,
particularly if team is superior and already has a sufficient enough lead where the only thing they can do to let team back into it is a string of badluck. I don't think there is enough
time in an NFL game for the highest EPA decision to always be correct. the wider the varience, the less consistent the score and the greater the chance of a team coming back on you.
Additionally, the 2 point conversion probably won't be nearly as high if a team regularly goes for it.
Anonymous says:
With that being said, this type of analysis is still a great start to ground coaches decision making and to get them to start to consider going for two, doing their own research as it
applies to their own team where more information is known, and their opponents which more information is known, and make a more informed decision
Anonymous says:
Hey, is that 98.3% extra point success rate based on plays when the kicker actually kicks the ball? (The success rate on 1 point conversion attempts is lower than that if you include
fumbles by the holder or bad snaps...maybe closer to 97%)
My gut feel is that each team misses about 1 extra point per year, which would be about 97% success.
Two point conversions should be attempted much more often than they are.
Anonymous says:
I disagree that conversions should be attempted routinely by most teams. The issue is that this decision is made after a touchdown, and teams who score more touchdowns are more likely to
be winners.
It's not hard to admit that if the Patriots routinely went for 2pt conversions, there's a good chance they'd have one or maybe even two extra wins. That's because most of their wins were
blowouts while virtually all of their losses were close. Odds are that most of their failed attempts would happen in games they'd already won, while they'd pick up just enough conversions
in the close games to get an additional victory. They would have increased their victory odds in 3 games, and only put one victory in (very slight) jeopardy (Week 7 vs. Jets).
On the other hand consider the Falcons, who won more of their close games. With the aggressive XPA strategy they might have lost either the Panther's game in week 4 or the Bucs game in
Week 12, and there aren't any games they could have won.
The Packers do have a game they could have won, but a more likely scenario is that they'd have an additional loss. They only had 2 close games. They beat the Saints and lost to the
Vikings. In their win over the Saints, anything less than a 50% conversion rate and they would have lost. In their game against the Vikings, they'd have needed a 100% 2pt conversion rate
to get the win. Any less than that and they'd still have lost.
Based on my crude searching, there are not many teams that would clearly benefit from the aggressive strategy. I only see 6. Bills (+1), Browns(+1), Lions (+4, -1), Buccaneers (+3),
Panthers (+2, -1), Patriots (+3, -1).
Most teams wind up like the Packers, where they are putting more games at risk than they are gaining. Also, my search does not account for field goals, safeties, or 2pt conversion
attempts that actually happened, which might further skew the numbers. I estimated the number of touchdowns per game using TotalPoints/7. Fixing that would almost certainly reduce the
number of teams who would see benefit.
Basically, in most situations I'd need a substantially higher than 60% conversion rate to justify going for 2 routinely. The rewards just aren't worth the risks.
FWIW, in 2012 the league-wide extra point kick rate was 99.5% according to pro-football reference, with the vast majority of teams logging a 100% success rate.
alternaviews says:
a few things unaccounted for in these stats:
1. Momentum. After you score a TD, is it deflating to get stuffed on a 2pt attempt? What does the data say about effects on the next offensive & defensive possessions after a failed 2pt
try vs. after a PAT attempt? Intuitively, football is an emotional game of momentum, and it seems that it might be deflating to miss the 2pt try -- whereas the PAT is a near sure thing (&
if it fails, it's only a special teams issue, not offense).
2. Chance of injury. What are the odds of a major injury in 2pt conversion situations? Presumably very low, but worth quantifying.
3. Limited playbooks. Is it possible that conversion rates are only as high as they are because teams rely on specially-designed plays (gadget or otherwise) ? If so, then there would be
decreasing marginal returns on trying more 2pt conversions, as the opponents would get to see your playbook -- and prepare. Is there any data on the 2pt conversion success rates of teams
that try a large number of conversions -- versus teams that try a smaller number?
Tim says:
I'm quite late to the party, but regarding alternaviews:
The offense is about to kick the ball off to the defense, so momentum isn't as big of a deal with that unit. However, even if you agree momentum is a thing, wouldn't MAKING the 2 point
conversion really be deflating to the opposing team?
2. Major injury is probably no more (or less) than any other goal line play.
3. The playbook is basically the same as any goal line situation. Clearly, long passes are out, but that's the same as any goal line situation anyways. If you have a limited playbook
here, you need a better playbook.
Rich Ochoa says:
Don't forget, if you absolutely have to win a regular season game, and a tie eliminates you from the playoff hunt, then kicking a XP to tie a game and send it into OT is positively,
statistically, the wrong choice. Now you just introduced another possible way to not make the playoffs...tie after one OT period. In other words...one factor in the decision at game end
ought to be "Does a tie help me?"
Of course the dumbass media will question every attempt to go for two, but for some reason, rarely question an attempt to go for one... Too few journalists and former jocks have taken
advanced math classes so they don't even understand what they are arguing.
Alan Williams says:
This comment has been removed by the author.
Alan Williams says:
This is a very roundabout way of coming to the conclusion that the NFL average for 2pt conversions should be over 50% and everyone should be going for them early. The success rate now is
47.9%, less than half the percentage of a field goal. If teams begin to run more, defenses will adjust and likely keep the success at a similar rate. However, there are some teams in the
NFL(Broncos, Saints, Patriots, Packers, Eagles...) that could likely have a success rate much higher than the NFL average, and it would be worth it for them.
thanks for the information,, the story is quite interesting to read, indeed for all the information should be written with a very interesting blog so that readers interested and pleased
with your writing | {"url":"http://www.advancednflstats.com/2010/12/almost-always-go-for-2-point.html?showComment=1292597781453","timestamp":"2014-04-17T03:58:04Z","content_type":null,"content_length":"211033","record_id":"<urn:uuid:14b86e3f-9f6f-4a1f-9c2c-d0c9b27987ab>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00076-ip-10-147-4-33.ec2.internal.warc.gz"} |
Anyone here good at LIM problems?
I looked at it again today and actually wrote it out on paper and it's 2, just like we figured.
When you initially plug 2 in to the limit, you should end up with sqrt(0/0), so you know you have to do more work. When you have to do more work, the first thing you want to do is factor something
like this to see if you can cancel anything out.
The top can factor to: (x+2)(x-2)
The bottom can factor to: (x-1)(x-2)
The (x-2)'s cancel out, so you are left with sqrt((x+2)/(x-1))
Plug 2 back into the function and you get sqrt((2+2)/(2-1)) which is sqrt(4) which equals 2.
So your limit as you approach 2 from both sides is 2.
i'm the best mayne, i deed it
login first
to reply. | {"url":"http://hypebeast.com/forums/off-topic/147506","timestamp":"2014-04-16T07:20:08Z","content_type":null,"content_length":"66038","record_id":"<urn:uuid:cccb37d4-431f-44d1-91a5-fbdd7afd1a57>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00390-ip-10-147-4-33.ec2.internal.warc.gz"} |
Sun City, AZ Algebra 2 Tutor
Find a Sun City, AZ Algebra 2 Tutor
...The algebraic expressions confused him at first because he was intimidated by letters being mixed in with numbers. I told him don't worry, just think of one these letters as apples and the
other set as oranges. He was still a little slow, but started to come around, this I did back in 2003.
7 Subjects: including algebra 2, calculus, geometry, algebra 1
...Without a strong academic foundation, people have limited options. Furthermore, if children do not value education, or feel that being “smart” is either "not cool" or that it is out of their
reach, then the energy and effort that they put toward mastering their craft will be significantly dimini...
20 Subjects: including algebra 2, English, writing, calculus
...After my undergraduate degrees, I pursued a PhD in Physics at West Virginia University. While there, I continued tutoring introductory students with an emphasis on both engineering and bio/
medical applications. Additionally, I started my training to become a teacher.
20 Subjects: including algebra 2, chemistry, physics, calculus
...Additionally, I provide swim lessons given the pool is warm. My goal when tutoring is to explain difficult key concepts in ways students understand, going to great lengths to provide analogies
and comparisons for demonstration. I go through previous homework assignments, tests, and quizzes to target trouble areas and to improve upon them.
13 Subjects: including algebra 2, reading, Spanish, chemistry
...I took differential equations at Eastern Arizona College when I first started college. After I got my AA, I had to go to work to save for additional courses. Later, Arizona State University
would not accept any math credits that were five years old and so I took calculus 1-3 and differential equations again at Glendale Community College in Arizona and got straight As.
15 Subjects: including algebra 2, calculus, SAT math, GED
Related Sun City, AZ Tutors
Sun City, AZ Accounting Tutors
Sun City, AZ ACT Tutors
Sun City, AZ Algebra Tutors
Sun City, AZ Algebra 2 Tutors
Sun City, AZ Calculus Tutors
Sun City, AZ Geometry Tutors
Sun City, AZ Math Tutors
Sun City, AZ Prealgebra Tutors
Sun City, AZ Precalculus Tutors
Sun City, AZ SAT Tutors
Sun City, AZ SAT Math Tutors
Sun City, AZ Science Tutors
Sun City, AZ Statistics Tutors
Sun City, AZ Trigonometry Tutors | {"url":"http://www.purplemath.com/Sun_City_AZ_Algebra_2_tutors.php","timestamp":"2014-04-21T05:15:25Z","content_type":null,"content_length":"24149","record_id":"<urn:uuid:d88ec00a-1fe9-45fd-9756-df54f695640c>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00192-ip-10-147-4-33.ec2.internal.warc.gz"} |
University of Illinois
Trjitzinsky Memorial Lectures
Spring 2012
Robert Ghrist
University of Pennsylvania
Sheaves and the Global Topology of Data
This lecture series concerns Applied Mathematics -- the taming and tuning of mathematical structures to the service of problems in the sciences. The mathematics to be harnessed comes from algebraic
topology -- specifically, sheaf theory, the study of local-to-global data. The applications to be surveyed are in the engineering sciences, but are not fundamentally restricted to such. Beginning
with a gentle introduction to algebraic topology and its modern applications, the series will focus on sheaves and their recent utility in sensing, coding, optimization, and inference. No prior
exposure to sheaves required.
Robert Ghrist is the Andrea Mitchell Penn Integrating Knowledge Professor in the Departments of Mathematics and Electrical/Systems Engineering at the University of Pennsylvania. He was named one of
Scientific American magazine's "Top 50 Innovators" for research in 2007. Ghrist is a leading expert in using tools from topology and geometry to study abstract spaces and solve real-world problems.
Lecture 1: Tuesday, March 6, 2012, 314 Altgeld Hall, 4:00 p.m.
A reception will immediately following this lecture.
Lecture 2: Wednesday, March 7, 2012, 245 Altgeld Hall, 4:00 p.m.
Lecture 3: Thursday, March 8, 2012, 245 Altgeld Hall, 4:00 p.m.
The Trjitzinsky Memorial Lecture Series, held annually, honors the memory of Professor Waldemar J. Trjitzinsky who came to the United States from Russia, and taught and researched in the Department
of Mathematics at the University of Illinois from 1934 to 1969. The lecture series began in 1978 and was made possible from the gifts of Trjitzinsky's former Ph.D. students; Bing K. Wong, one of
Trjitzinsky's students was the person responsible for setting up the Trjitzinsky fund. Each series of three lectures is aimed at a general mathematical public and graduate students. | {"url":"http://www.math.illinois.edu/Colloquia/12SP/ghrist_mar06-12.html","timestamp":"2014-04-16T10:10:19Z","content_type":null,"content_length":"15279","record_id":"<urn:uuid:812d6a69-877d-4059-a784-fc7ebfcd4639>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
integer roots [Archive] - Free Math Help Forum
View Full Version : integer roots
find Integer value of a for which the equation |2x+1|+|x-2|=a have only integer roots.
If I am interpreting this correctly, I would start with partitioning the real line by the zeros of the function stuck into the absolute values: -1/2 and 2
x <= -1/2
You get: -(2x+1)-(x-2)=a
-1/2 < x <= 2
You get: (2x+1)-(x-2)=a
x > 2
You get: (2x+1)+(x-2)=a
In the first case: -3x+1=a gives the only possible integer values a=1-3k for which x=k will be a root.
Try the others.
Powered by vBulletin® Version 4.2.2 Copyright © 2014 vBulletin Solutions, Inc. All rights reserved. | {"url":"http://www.freemathhelp.com/forum/archive/index.php/t-71291.html","timestamp":"2014-04-19T19:34:08Z","content_type":null,"content_length":"3070","record_id":"<urn:uuid:979687a0-e486-4aea-ae5d-4c87e66d3f31>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00647-ip-10-147-4-33.ec2.internal.warc.gz"} |
Bag Equivalence via a Proof-Relevant Membership Relation
Bag Equivalence via a Proof-Relevant Membership Relation
Nils Anders Danielsson
Interactive Theorem Proving, Third International Conference, ITP 2012. [pdf, highlighted code, tarball with code, darcs repository (may include more recent developments)]
Two lists are bag equivalent if they are permutations of each other, i.e. if they contain the same elements, with the same multiplicity, but perhaps not in the same order. This paper describes how
one can define bag equivalence as the presence of bijections between sets of membership proofs. This definition has some desirable properties:
• Many bag equivalences can be proved using a flexible form of equational reasoning.
• The definition generalises easily to arbitrary unary containers, including types with infinite values, such as streams.
• By using a slight variation of the definition one gets set equivalence instead, i.e. equality up to order and multiplicity. Other variations give the subset and subbag preorders.
• The definition works well in mechanised proofs.
The paper states that the new definition of bag equivalence is equivalent to a more standard one. In the accompanying code it is proved that, if "bijection" is replaced by the equivalent concept of
"weak equivalence" in the general versions of these definitions (those that work for all unary containers), then they are isomorphic (assuming extensionality).
Nils Anders Danielsson
Last updated Mon May 28 13:10:30 UTC 2012. | {"url":"http://www.cse.chalmers.se/~nad/publications/danielsson-bag-equivalence.html","timestamp":"2014-04-19T06:51:59Z","content_type":null,"content_length":"3516","record_id":"<urn:uuid:4fb83cd5-b508-4c7e-8cb7-e15cf28b0df6>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00396-ip-10-147-4-33.ec2.internal.warc.gz"} |
May 3rd 2010, 09:20 AM
A particle of mass m moves under an attractive central force mk/r^a where r is the radial distance from the force centre O.
assuming that the radial and transverse components of accleration in polar coordinated (r, (-)) are (r''-r*(-)'^2) and (2r'*(-)' + r*(-)'') respectively, show that the differential equation for
the orbit is,
d^2u/d(-)^2 = k/h^2 * u^(a-2)
where u=1/r and h=r^2 * (-)
sorry about the notations, i wasn't sure how to present the equation on this.
I am having trouble with this and a similar question. I am not sure how to derive the differential equation for the radial and transverse components.
p.s i hope this is the right sub-forum to post in
May 3rd 2010, 12:25 PM
The 2 equations are
(1) $m \left(\ddot{r}-r \dot{\theta}^2\right) = -\frac{mk}{r^a}$
(2) $2 \dot{r} \dot{\theta} + r \ddot{\theta} = 0$
$\frac{du}{d\theta} = \frac{du}{dt} \cdot \frac{dt}{d\theta} = \frac{d \left(\frac{1}{r}\right)}{dt} \cdot \frac{1}{\dot{\theta}} = -\frac{\dot{r}}{r^2} \cdot \frac{1}{\dot{\theta}} = -\frac{\dot
{r}}{r^2 \dot{\theta}}$
$\frac{d^2u}{d\theta^2} = \frac{d}{d\theta} \left(\frac{du}{d\theta}\right) = \frac{d}{d\theta} \left(-\frac{\dot{r}}{r^2 \dot{\theta}}\right) = \frac{d}{dt} \left(-\frac{\dot{r}}{r^2 \dot{\
theta}}\right) \cdot \frac{dt}{d\theta} = -\frac{\ddot{r}r^2 \dot{\theta}-\dot{r} \left(2r\dot{r} \dot{\theta}+r^2 \ddot{\theta}\right)}{r^4 \dot{\theta}^2} \cdot \frac{1}{\dot{\theta}}$
From equation (2) we know that the parenthesis is equal to 0
From equation (1) we can substitute $\ddot{r} = r \dot{\theta}^2 - \frac{k}{r^a}$
$\frac{d^2u}{d\theta^2} = -\frac{\ddot{r}r^2 \dot{\theta}}{r^4 \dot{\theta}^3} = -\frac{\ddot{r}}{r^2 \dot{\theta}^2} = -\frac{r \dot{\theta}^2 - \frac{k}{r^a}}{r^2 \dot{\theta}^2} = -\frac{1}{r}
+\frac{k}{r^{a+2} \dot{\theta}^2} = -u + \frac{k}{r^4 \dot{\theta}^2} \cdot u^{a-2}$
This is not exactly what you need to find so I may have missed something ... | {"url":"http://mathhelpforum.com/calculus/142825-dynamics-print.html","timestamp":"2014-04-17T02:21:18Z","content_type":null,"content_length":"7276","record_id":"<urn:uuid:94919b45-e8a5-4f5c-95d6-0491c3f7b682>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00100-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions
Math Forum
Ask Dr. Math
Internet Newsletter
Teacher Exchange
Search All of the Math Forum:
Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.
Topic: dow define a numeric variable (no symbolic)?
Replies: 2 Last Post: May 9, 2013 8:32 AM
Messages: [ Previous | Next ]
Re: dow define a numeric variable (no symbolic)?
Posted: May 9, 2013 7:21 AM
On 5/9/2013 2:19 AM, ghasem wrote:
> obviously,I get this error:
> ??? Undefined function or variable 'bet'.
> because I don't define bet in gam,T,tau expressions.
> how define "bet" as a numeric unknown (I don't want use from "syms bet")?
You can't.
Matlab is not C nor C++ nor Pascal.
In Matlab, you can only define and assign a value at
same time, as in
You can't just write
var x;
Unless it is a sym, then you can define it as symbol
with no value.
But for numeric, it must be defined and given a value
at same time.
Date Subject Author
5/9/13 dow define a numeric variable (no symbolic)? ghasem
5/9/13 Re: dow define a numeric variable (no symbolic)? Nasser Abbasi
5/9/13 Re: dow define a numeric variable (no symbolic)? Alan Weiss | {"url":"http://mathforum.org/kb/message.jspa?messageID=9121414","timestamp":"2014-04-18T03:07:16Z","content_type":null,"content_length":"18926","record_id":"<urn:uuid:ad2d85f8-7be7-449c-906d-03caac440fb0>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00566-ip-10-147-4-33.ec2.internal.warc.gz"} |
Re: [vox-tech] linear algebra: equivalent matrices
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
Re: [vox-tech] linear algebra: equivalent matrices
Hi Pete; nice to see you on the list again.
Peter Jay Salzman wrote:
Posted to vox-tech since this is a CS topic. I'd like to verify some things
which I think are true.
Consider the set of all square matrices of rank n. The determinant of M,
det(M), forms an equivalence class on that set. The equivalence relation is
defined by:
A ~ B iff det(A) == det(B) (1)
Now, like vectors, matrices are always expressed in a basis, whether we
explicitly say so or not. So when we write the components M, we should
really write M_b where b represents the basis we chose to express M in. We
can express M_b in a different basis, say M_a, by a rotation operation:
M_a = S^{-1} M_b S
where S is an orthogonal "rotation matrix". However, no matter what basis
we express M in, det(M) remains constant. Therefore, we get an equivalence
class on the set of square matrices of rank n based on whether we can rotate
one matrix into another. The equivalence relation is defined by:
A ~ B iff A = S^{-1} M_b S (2)
Did you write this equation correctly? I would expect something like
M_a ~ M_b iff M_a = S^{-1} M_b S
for _some_ orthogonal matrix S, which determines the basis for M. There is
one rotation matrix S that will make M_b diagonal. That rotation matrix is
formed by the eigenvectors of M_b.
Big finale:
The equivalence classes defined by relation (1) are epimorphic to the
equivalence classes defined by relation (2). If we place a restriction on S
that it must have a determinant of +1 ("proper" rotations), then the two
sets of equivalence classes are isomorphic.
What this is really saying is that, when viewed as the sides of a
parallelopiped, a matrix will always have the same area no matter what basis
you choose to express it in.
How accurate is all this? In interested in the lingo as well as the ideas.
Looks fine to me. In my PhD work (almost done!) I deal with 2nd order tensors for continuum mechanics theory. I've actually had to specify and use rotation matrices explicitly in some of my numerical
PS- Whether a rotation is S^{-1} M_b S or S M_b S^{-1} depends on how your
favorite linear algebra author defines his/her rotation matrices.
vox-tech mailing list
vox-tech mailing list
• Follow-Ups:
• References: | {"url":"http://www.lugod.org/mailinglists/archives/vox-tech/2005-12/msg00020.html","timestamp":"2014-04-18T00:25:18Z","content_type":null,"content_length":"23848","record_id":"<urn:uuid:612325b6-1317-49a4-abf1-df04631b7989>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00588-ip-10-147-4-33.ec2.internal.warc.gz"} |
Zentralblatt MATH
Publications of (and about) Paul Erdös
Zbl.No: 725.05062
Autor: Chung, F.R.K.; Erdös, Paul
Title: On unavoidable hypergraphs. (In English)
Source: J. Graph Theory 11, No.2, 251-263 (1987).
Review: An r-uniform hypergraph H (or an r-graph, for short) is a collection E = E(H) of r-element subsets (called edges) of a set V = V(H) (called vertices). We say an r-graph H is (n,e)-unavoidable
if every r-graph with n vertices and e edges must contain H. In this paper we investigate the largest possible number of edges in an (n,e)-unavoidable 3-graph for fixed n ande. We also study the
structure of such unavoidable 3-graphs.
Classif.: * 05C65 Hypergraphs
Keywords: unavoidable hypergraphs; edges
© European Mathematical Society & FIZ Karlsruhe & Springer-Verlag
│Books │Problems │Set Theory │Combinatorics │Extremal Probl/Ramsey Th. │
│Graph Theory │Add.Number Theory│Mult.Number Theory│Analysis │Geometry │
│Probabability│Personalia │About Paul Erdös │Publication Year│Home Page │ | {"url":"http://www.emis.de/classics/Erdos/cit/72505062.htm","timestamp":"2014-04-21T05:07:02Z","content_type":null,"content_length":"3273","record_id":"<urn:uuid:9f00daca-92d0-461b-82f4-5500a404b612>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00553-ip-10-147-4-33.ec2.internal.warc.gz"} |
st: RE: baseline adjustment in mixed models
[Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index]
st: RE: baseline adjustment in mixed models
From "Visintainer PhD, Paul" <Paul.Visintainer@baystatehealth.org>
To "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject st: RE: baseline adjustment in mixed models
Date Sun, 15 Nov 2009 10:42:58 -0500
Clyde, thanks for the very clear explanation. You're getting to the root of my question. So, if I understand you correctly, the following model is unnecessary:
xtmixed y y0 group time groupXtime || id:
or the random slope equivalent, because the group variable accounts for differences at Y0. Two related questions:
1) You mentioned that coding baseline as Y0 makes life simpler. Suppose time is coded as baseline plus time1 through time4. Is there any utility to the model:
xtmixed y baseline group time groupXtime || id: , where the time variable does not include baseline Y0.
2) Controlling for baseline attempts to account for group differences at the start of the trial, and also for control for those observations with extreme values (i.e., regression to the mean). Am I correct in assuming that the random coefficient model is the model of choice for correcting regression to the mean? My logic (or illogic) here is that the more extreme the baseline values, the greater the effect of regression to the mean (i.e., an individual's slope is a composite of the group assignment plus the effect of regression to the mean depending on his initial value).
Thanks, this has been really helpful.
From: owner-statalist@hsphsun2.harvard.edu [owner-statalist@hsphsun2.harvard.edu] On Behalf Of Clyde Schechter [clyde.schechter@einstein.yu.edu]
Sent: Saturday, November 14, 2009 4:01 PM
To: statalist@hsphsun2.harvard.edu
Subject: st: baseline adjustment in mixed models
I hesitate to disagree with Martin Buis, but perhaps I have interpreted
your question differently.
I suppose you have an outcome y observed on each participant at each time,
a variable group (coded 0 for control, 1 for intervention), a participant
identifier variable, id, and a variable, time, which might be actual times
of observation, or just a sequence 0 through N, whatever). If your time
variable is not coded as baseline = 0, it will make life simpler if you
transform it so that is the case.w
In the example below I will assume that you plan to model y as a linear
function of time, because that is the simplest from a coding perspective.
If you need a more complicated representation with dummies for different
times, or a spline, etc., you can modify accordingly. Again, life is
simplest if the baseline measurement corresponds to time = 0 (or the
omitted time category if dummies are used) in your coding.
If you want to test whether the intervention has modified the response
trajectory over time, the key is to test for the group X time interaction.
gen groupXtime = group * time
xtmixed y group time groupXtime || id:
gives you a random intercept model. The coefficient of group represents
the mean difference of y between intervention and control groups when time
= 0. That is, this model does incorporate, and in a useful sense,
"adjusts for" the baseline difference between the groups. The coefficient
of groupXtime represents the difference in the slopes of the y-time lines
between groups.
Now, you might want to make this more sophisticated if you anticipate that
individuals, within each group, might have different individual y-time
slopes, in which case a random slopes model might be better:
xtmixed y group time groupXtime || id: time
Again, baseline differences in y are satisfactorily accounted for in this
model, and are even directly estimated by the coefficient of group.
Again, the key inference is based on the coefficient of the interaction
With either of these models you should get good, serviceable estimates of
the group difference in average y-time slope. Note that this approach is
different from adjusting for baseline response by excluding the time 0
observations from the model and including time 0 response as a covariate
in an ANCOVA like analysis.
Hope this helps.
Clyde Schechter
Associate Professor of Family & Social Medicine
Albert Einstein College of Medicine, Bronx, NY, USA
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
CONFIDENTIALITY NOTICE: This email communication and any attachments may contain confidential and privileged information for the use of the designated recipients named above. If you are not the intended recipient, you are hereby notified that you have received this communication in error and that any review, disclosure, dissemination, distribution or copying of it or its contents is prohibited. If you have received this communication in error, please reply to the sender immediately or by telephone at (413) 794-0000 and destroy all copies of this communication and any attachments. For further information regarding Baystate Health's privacy policy, please visit our Internet web site at http://www.baystatehealth.com.
* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/ | {"url":"http://www.stata.com/statalist/archive/2009-11/msg00800.html","timestamp":"2014-04-17T01:10:24Z","content_type":null,"content_length":"11013","record_id":"<urn:uuid:fbbcd9c2-71c9-46e5-9d31-b484e89372ba>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
Common Coupled Fixed-Point Theorems in Generalized Fuzzy Metric Spaces
Advances in Fuzzy Systems
Volume 2011 (2011), Article ID 986748, 6 pages
Research Article
Common Coupled Fixed-Point Theorems in Generalized Fuzzy Metric Spaces
^1Department of Mathematics, Acharya Nagarjuna University, Dr. M.R. Appa Row Campus, Nuzvid 521 201, India
^2Department of Mathematics, Faculty of Science and Arts, Kirikkale University, 71450 Yahsihan, Turkey
^3Department of Mathematics, CH. S.D. St. Theresa’s Junior College for Women, Eluru 534 001, India
Received 9 August 2011; Accepted 2 November 2011
Academic Editor: E. E. Kerre
Copyright © 2011 K. P. R. Rao et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any
medium, provided the original work is properly cited.
We prove two unique common coupled fixed-point theorems for self maps in symmetric G-fuzzy metric spaces.
1. Introduction and Preliminaries
Mustafa and Sims [1–3] and Naidu et al. [4] demonstrated that most of the claims concerning the fundamental topological structure of -metric introduced by Dhage [5–8] and hence all theorems are
incorrect. Alternatively, Mustafa and Sims [1, 2] introduced a -metric space and obtained some fixed-point theorems in it. Some interesting references in -metric spaces are [3, 9–15]. In this paper,
we prove two unique common coupled fixed-point theorems for Jungck type and for three mappings in symmetric -fuzzy metric spaces.
Before giving our main results, we recall some of the basic concepts and results in -metric spaces and -fuzzy metric spaces.
Definition 1 (see [2]). Let be a nonempty set and let be a function satisfying the following properties:(G[1]) if ,(G[2]) for all with ,(G[3]) for all with ,(G[4]), symmetry in all three variables,(G
[5]) for all .
Then, the function is called a generalized metric or a -metric on and the pair is called a -metric space.
Definition 2 (see [2]). The -metric space is called symmetric if for all .
Definition 3 (see [2]). Let be a -metric space and let be a sequence in . A point is said to be limit of if and only if . In this case, the sequence is said to be -convergent to .
Definition 4 (see [2]). Let be a -metric space and let be a sequence in . is called -Cauchy if and only if . is called -complete if every -Cauchy sequence in is -convergent in .
Proposition 5 (see [2]). In a -metric space , the following are equivalent.(i)The sequence is -Cauchy.(ii)For every there exists such that , for all .
Proposition 6 (see [2]). Let be a -metric space. Then, the function is jointly continuous in all three of its variables.
Proposition 7 (see [2]). Let be a -metric space. Then, for any , it follows that(i)if , then ,(ii),(iii),(iv),(v).
Proposition 8 (see [2]). Let be a -metric space. Then, for a sequence and a point , the following are equivalent:(i) is -convergent to ,(ii) as ,(iii) as ,(iv) as .
Recently, Sun and Yang [16] introduced the concept of -fuzzy metric spaces and proved two common fixed-point theorems for four mappings.
Definition 9 (see [16]). A 3-tuple is called a -fuzzy metric space if is an arbitrary nonempty set, is a continuous -norm, and is a fuzzy set on satisfying the following conditions for each :(i) for
all with ,(ii) for all with ,(iii) if and only if ,(iv), where is a permutation function,(v) for all ,(vi) is continuous.
Definition 10 (see [16]). A -fuzzy metric space is said to be symmetric if for all and for each .
Example 11. Let be a nonempty set and let be a -metric on . Denote for all . For each , is a -fuzzy metric on .
Let be a -fuzzy metric space. For , and , the set is called an open ball with center and radius .
A subset of is called an open set if for each ,there exist and such that .
A sequence in -fuzzy metric space is said to be -convergent to if as for each . It is called a -Cauchy sequence if as for each . is called -complete if every -Cauchy sequence in is -convergent in .
Lemma 12 (see [16]). Let be a -fuzzy metric space. Then, is nondecreasing with respect to for all .
Lemma 13 (see [16]). Let be a -fuzzy metric space. Then, is a continuous function on .
Now onwards, we assume the following condition: Using (P), one can prove the following lemma.
Lemma 14. Let be a -fuzzy metric space. If there exists such that for all and , then and .
Definition 15 (see [17]). Let be a nonempty set. An element is called a coupled fixed point of the mapping if and .
Definition 16 (see [18]). Let be a nonempty set. An element is called(i)a coupled coincidence point of and if and ,(ii)a common coupled fixed point of and if and .
Definition 17 (see [18]). Let be a nonempty set. The mappings and are called -compatible if and whenever and for some .
Now, we give our main results.
2. Main Results
Theorem 18. Let be a -fuzzy metric space with for all and and let be mappings satisfying , where ,
Then and have a unique common coupled fixed point of the form in .
Proof. Let and denote . Let , . From (2), we have Also, Thus, . Hence, For any positive integer and fixed positive integer , we have Letting and using (P), we get Hence, . Thus, is -Cauchy in .
Similarly, we can show that is -Cauchy in . Since is -complete, and converge to some and in , respectively. Hence, there exist and in such that : Letting , we get Hence, . Similarly, it can be shown
that . Since is -compatible, we have Letting , we get Similarly, we can show that Thus, From Lemma 14, we have and . Thus, and . Hence, is a common coupled fixed point of and .
Suppose is another common coupled fixed point of and : Similarly, Thus, From Lemma 14, and . Thus, is the unique common coupled fixed point of and . Now, we will show that : Thus, From Lemma 14, we
have . Thus, is a common fixed point of and , that is, . Suppose is another common fixed point of and : Hence, . Thus, and have a unique common coupled fixed point of the form .
Finally, we prove a common coupled fixed-point theorem for three mappings in symmetric -fuzzy metric spaces.
Theorem 19. Let be a symmetric -complete fuzzy metric space with for all and let be mappings satisfying , where . Then, there exists such that Or
Proof. Let . Define the sequences and in as follows: ,; , ; , , . Suppose for some . Then, , where . Suppose . Then, It is a contradiction. Hence, . From (25) and since is symmetric, From Lemma 14,
we have . Thus, . Similarly, if or , then also we can show that for some , in . Similarly, it can be shown that if or or then there exists such that Now, assume that and for all . Write and : Thus, .
Similarly, we have .
Thus, Similarly, we can show that Thus, Hence Thus, From , we have As in Theorem 18, we can show that and are -Cauchy sequences in . Since is -complete, there exist such that and Letting , From this,
we have . As in the first part of proof, we can show that . Similarly, it can be shown that . Thus, is a common coupled fixed point of , , and . Suppose is another common coupled fixed point of , ,
and . Consider Also, Thus, From Lemma 14, we have and . Thus, is the unique common coupled fixed point of , , and . Now, we will show that . Consider Hence, . Thus, , , and have a unique common
coupled fixed point of the form .
The authors are thankful to the referee for his valuable suggestions.
1. Z. Mustafa and B. Sims, “Some remarks concerninig D-metric spaces,” in Proceedings of the Internatinal Conferences on Fixed Point Theory and Applications, pp. 189–198, Valencia, Spain, July 2003.
2. Z. Mustafa and B. Sims, “A new approach to generalized metric spaces,” Journal of Nonlinear and Convex Analysis, vol. 7, no. 2, pp. 289–297, 2006.
3. Z. Mustafa and B. Sims, “Fixed point theorems for contractive mappings in complete G-metric spaces,” Fixed Point Theory and Applications, vol. 2009, Article ID 917175, 10 pages, 2009.
4. S. V. R. Naidu, K. P. R. Rao, and N. Srinivasa Rao, “On convergent sequences and fixed point theorems in D-metric spaces,” International Journal of Mathematics and Mathematical Sciences, no. 12,
pp. 1969–1988, 2005. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
5. B. C. Dhage, “Generalized metric spaces and mapping with fixed points,” Bulletin of the Calcutta Mathematical Society, vol. 84, pp. 329–336, 1992.
6. B. C. Dhage, “On generalized metric spaces and topological structure II,” Pure and Applied Mathematika Sciences, vol. 40, no. 1-2, pp. 37–41, 1994.
7. B. C. Dhage, “A common fixed point principle in D-metric spaces,” Bulletin of the Calcutta Mathematical Society, vol. 91, no. 6, pp. 475–480, 1999.
8. B. C. Dhage, “Generalized metric spaces and topological structure. I,” Annalele Stiintifice ale Universitatii Al.I.Cuza, vol. 46, no. 1, pp. 3–24, 2000.
9. M. Abbas and B. E. Rhoades, “Common fixed point results for noncommuting mappings without continuity in generalized metric spaces,” Applied Mathematics and Computation, vol. 215, no. 1, pp.
262–269, 2009. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
10. R. Chugh, T. Kadian, A. Rani, and B. E. Rhoades, “Property P in G-metric spaces,” Fixed Point Theory and Applications, vol. 2010, Article ID 401684, 12 pages, 2010.
11. Z. Mustafa, F. Awawdeh, and W. Shatanawi, “Fixed point theorem for expansive mappings in G-metric spaces,” International Journal of Contemporary Mathematical Sciences, vol. 5, no. 49–52, pp.
2463–2472, 2010.
12. Z. Mustafa and H. Obiedat, “A fixed point theorem of Reich in G-metric spaces,” Cubo A Mathematical Journal, vol. 12, no. 1, pp. 83–93, 2010.
13. Z. Mustafa, H. Obiedat, and F. Awawdeh, “Some fixed point theorem for mapping on complete G-metric spaces,” Fixed Point Theory and Applications, vol. 2008, Article ID 189870, 12 pages, 2008.
14. Z. Mustafa, W. Shatanawi, and M. Bataineh, “Existence of fixed point results in G-metric spaces,” International Journal of Mathematics and Mathematical Sciences, vol. 2009, Article ID 283028, 10
pages, 2009.
15. W. Shatanawi, “Fixed point theory for contractive mappings satisfying Φ-maps in G-metric spaces,” Fixed Point Theory and Applications, vol. 2010, Article ID 181650, 9 pages, 2010.
16. G. Sun and K. Yang, “Generalized fuzzy metric spaces with properties,” Research journal of Applied Sciences, Engineering and Technology, vol. 2, no. 7, pp. 673–678, 2010.
17. T. G. Bhaskar and V. Lakshmikantham, “Fixed point theorems in partially ordered metric spaces and applications,” Nonlinear Analysis, vol. 65, no. 7, pp. 1379–1393, 2006. View at Publisher · View
at Google Scholar · View at Zentralblatt MATH · View at MathSciNet
18. M. Abbas, M. Ali Khan, and S. Radenović, “Common coupled fixed point theorems in cone metric spaces for W-compatible mappings,” Applied Mathematics and Computation, vol. 217, no. 1, pp. 195–202,
2010. View at Publisher · View at Google Scholar | {"url":"http://www.hindawi.com/journals/afs/2011/986748/","timestamp":"2014-04-17T21:51:15Z","content_type":null,"content_length":"599254","record_id":"<urn:uuid:4838c2df-3d19-4d3c-9c4e-bc561889642e>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00198-ip-10-147-4-33.ec2.internal.warc.gz"} |
User TerronaBell
bio website
visits member for 4 years, 5 months
seen Apr 14 at 23:18
stats profile views 529
15 accepted How useful/pervasive are differential forms in surface theory?
Mar How useful/pervasive are differential forms in surface theory?
13 comment Thanks Richard. Do you have a pointer to a good proof of Gauss-Bonnet using forms? I typically have the students prove the discrete analog (via angle defect), but have so far not found a
nice, simple proof of the smooth version that uses forms.
9 awarded Nice Question
Mar How useful/pervasive are differential forms in surface theory?
9 comment Thank you Deane; thank you Robert. These answers are in line with what was my fuzzy view of the culture, and it's very nice to have these concrete reference points and examples. (I am
also very tempted to use this proof of the theorema egregium in my class!) Thanks again.
Mar How useful/pervasive are differential forms in surface theory?
8 revised added 1 characters in body
Mar How useful/pervasive are differential forms in surface theory?
8 revised added 26 characters in body
Mar How useful/pervasive are differential forms in surface theory?
8 revised deleted 1 characters in body
8 asked How useful/pervasive are differential forms in surface theory?
13 accepted How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
Feb How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
11 comment Great. (And thanks for some very interesting pointers.) So to summarize crudely: for general graphs it is hard but there are relaxations; for convex triangulated surfaces it can be
solved efficiently for approximation but is hard or impossible to do exactly. No statement on the difficulty of finding approximate solutions for nonconvex surfaces.
Feb How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
9 comment (E.g., in Figure 1 the extrinsic length of the arc will match the intrinsic length of the corresponding edge.)
Feb How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
9 comment Nice paper! But here the distance they consider is the geodesic rather than Euclidean distance induced by the embedding, no?
Feb How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
9 comment P.S. Yes, I mean a triangulated surface.
Feb How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
9 comment Thanks Joseph. Yes, these kinds of finite isometries do come to mind, though I don't yet have much intuition for whether they make it hard to find just one isometric embedding. From an
optimization point of view: if the surface is infinitesimally rigid then any objective function with zeros only at isometric embeddings must be nonconvex (the zeros are isolated). But to
answer the existence question, one may need only ski downhill to the bottom of a single valley...
Feb How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
8 comment Excellent. Thanks. So the only question remaining is the second one: do things get easier for simplicial graphs?
8 asked How hard is it to determine if a weighted graph can be isometrically embedded in R^3?
6 awarded Citizen Patrol
6 awarded Yearling
11 accepted The space of circular triangles?
Oct comment The space of circular triangles?
10 Entschuldigt. Mine Deutsch ist nicht sehr gut! | {"url":"http://mathoverflow.net/users/1557/terronabell?tab=activity","timestamp":"2014-04-18T03:39:04Z","content_type":null,"content_length":"46619","record_id":"<urn:uuid:ca4b1120-79aa-4811-89e8-8ce6790fb79c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00295-ip-10-147-4-33.ec2.internal.warc.gz"} |
unsigned double?
unsigned double?
i wrote a program that gets the product of 5 bytes and stores it
into a long double however when the number gets large it stores
a negative number which corrupts my data. Is there a way i can cast it unsigned? or do i have to convert it to possitive myself by performing 2's complement or something? Don't know how to do
that either using c++... :(
please help me :P
There's no such thing as an unsigned double. However I don't know how you're managing to overflow a double with the product of 5 bytes. Are you sure it's not producing a small number with a
negative exponent rather than a negative number?
i'm not sure. what i have is 5 unsigned chars which are read from a file. then (c1*c2*c3*c4*c5) is stored in the long double sum.
then if sum>max; max=sum; (both are same type)
ok what is weird is if i declare them both unsigned long ints then the max is greater then when i declare them long double??
and when i declare them unsigned long there is no negative sums however max is not what its suppose to be because its larger then unsigned long int. anyone know what kind of paradox is occuring
here if its not overflowing :P
You need to cast your variables to long doubles.
sum = (long double) c1 * (long double) c2 * (long double) c3 * (long double) c4 * (long double) c5;
or maybe just casting the first is good enough:
sum = (long double) c1 * c2 * c3 * c4 * c5;
I think you can use the <static cast> method, but I'm not sure of the syntax.
cool it worked thanks =) | {"url":"http://cboard.cprogramming.com/cplusplus-programming/5593-unsigned-double-printable-thread.html","timestamp":"2014-04-17T17:08:37Z","content_type":null,"content_length":"8251","record_id":"<urn:uuid:3466e88c-5ea5-43ed-beb5-79ccfe2af534>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00138-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hillside, NJ Geometry Tutor
Find a Hillside, NJ Geometry Tutor
...I am hoping to become a college professor one day. Teaching is my passion. I have worked with kids of all ages for the best six year, from one-on-one home tutoring to group tutoring in class
rooms and after-school programs.
26 Subjects: including geometry, chemistry, reading, statistics
...This includes SHSAT, PSAT, and SAT Math! I graduated from college with a bachelor's degree in Biological Sciences. Moreover, I relate to many younger students.
29 Subjects: including geometry, chemistry, English, algebra 1
...My goal is to have the student understand the theory before the end of the lesson but at the same time make the subject fun. I will provide periodic progress reports and assign light homework
so that the student can build on the theories taught. For my own goals, I will ask for feedback from the student or parents to better assess my teaching capabilities.
13 Subjects: including geometry, chemistry, reading, biology
...Enrichment instruction for parents who feel that average is just not enough. In addition to tutoring, I also give piano lessons and can help students understand music theory. Whether the
student is a beginner or advanced, six years old or seventy-six years old, I can help the student reach his or her musical goals.
30 Subjects: including geometry, English, piano, reading
...I taught Saturday enrichment and after school. I was also the technical leader for a Robotics team. My tutoring approach is based on activating your prior knowledge.
18 Subjects: including geometry, writing, algebra 2, calculus
Related Hillside, NJ Tutors
Hillside, NJ Accounting Tutors
Hillside, NJ ACT Tutors
Hillside, NJ Algebra Tutors
Hillside, NJ Algebra 2 Tutors
Hillside, NJ Calculus Tutors
Hillside, NJ Geometry Tutors
Hillside, NJ Math Tutors
Hillside, NJ Prealgebra Tutors
Hillside, NJ Precalculus Tutors
Hillside, NJ SAT Tutors
Hillside, NJ SAT Math Tutors
Hillside, NJ Science Tutors
Hillside, NJ Statistics Tutors
Hillside, NJ Trigonometry Tutors
Nearby Cities With geometry Tutor
Cranford geometry Tutors
Elizabeth, NJ geometry Tutors
Elizabethport, NJ geometry Tutors
Harrison, NJ geometry Tutors
Irvington, NJ geometry Tutors
Kenilworth, NJ geometry Tutors
Maplewood, NJ geometry Tutors
Roselle Park geometry Tutors
Roselle, NJ geometry Tutors
South Orange geometry Tutors
Springfield, NJ geometry Tutors
Townley, NJ geometry Tutors
Union Center, NJ geometry Tutors
Union, NJ geometry Tutors
Weequahic, NJ geometry Tutors | {"url":"http://www.purplemath.com/Hillside_NJ_geometry_tutors.php","timestamp":"2014-04-19T23:25:49Z","content_type":null,"content_length":"23865","record_id":"<urn:uuid:3559d2dc-afac-4410-a41a-9ba268ea72ef>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00256-ip-10-147-4-33.ec2.internal.warc.gz"} |
ot the
Navigation Panel:
Go backward to This is Not the Fallacy
Go up to All People in Canada are the Same Age
Go forward to This is Not the Fallacy
Switch to graphical version (better pictures & formulas)
Go to University of Toronto Mathematics Network Home Page
This step is not the source of the fallacy.
This step is simply stating what happens in an induction argument.
The principle of induction says that, if the following two things are true
1. S(1) is true, and
2. For all natural numbers k: if S(k) is true, so is S(k+1),
then S(n) is true for all n. (For more details, see the brief summary of induction).
This step in the proof is simply asserting that part 1 above has already been proven (this follows from step 2), and that therefore proving part 2 is enough to prove that S(n) is true for all n.
Why don't you go back to the list of steps in the proof and see if you can identify which one is wrong, now that you know it isn't this one? This page last updated: May 26, 1998
Original Web Site Creator / Mathematical Content Developer: Philip Spencer
Current Network Coordinator and Contact Person: Joel Chan - mathnet@math.toronto.edu
Navigation Panel: Previous | Up | Forward | Graphical Version | U of T Math Network Home | {"url":"http://www.math.toronto.edu/mathnet/plain/falseProofs/guess21.html","timestamp":"2014-04-19T14:55:04Z","content_type":null,"content_length":"2340","record_id":"<urn:uuid:4ddd1399-9e5e-4088-b957-9ab734c87129>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00059-ip-10-147-4-33.ec2.internal.warc.gz"} |
MATHEMATICA BOHEMICA, Vol. 131, No. 4, pp. 419-425 (2006)
Infinite-dimensional complex projective spaces and complete intersections
E. Ballico
E. Ballico, Dept. of Mathematics, University of Trento, 38050 Povo (TN), Italy, e-mail: ballico@science.unitn.it
Abstract: Let $V$ be an infinite-dimensional complex Banach space and $X \subset {\bf {P}}(V)$ a closed analytic subset with finite codimension. We give a condition on $X$ which implies that $X$ is a
complete intersection. We conjecture that the result should be true for more general topological vector spaces.
Keywords: infinite-dimensional complex projective space, infinite-dimensional complex manifold, complete intersection, complex Banach space, complex Banach manifold
Classification (MSC2000): 32K05
Full text of the article:
[Previous Article] [Next Article] [Contents of this Number] [Journals Homepage] © 2006–2010 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition | {"url":"http://www.emis.de/journals/MB/131.4/7.html","timestamp":"2014-04-19T00:07:16Z","content_type":null,"content_length":"2387","record_id":"<urn:uuid:ea7969b4-5abb-4230-8d66-62425b877920>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00357-ip-10-147-4-33.ec2.internal.warc.gz"} |
The Ins and Outs of digital filter design and implementation | EE Times
Design How-To
The “Ins and Outs” of digital filter design and implementation
Editor’s Note: This article first appeared in the Winter 2012 edition of Xilinx's quarterly Xcell Journal magazine, and is reproduced here with their kind permission (Click Here to see the magazine).
Filters are a key part of any signal-processing system, and as modern applications have grown more complex, so has filter design. FPGAs provide the ability to design and implement filters with
performance characteristics that would be very difficult to re-create with analog methods. What’s more, these digital filters are immune to certain issues that plague analog implementations, notably
component drift and tolerances (over temperature, aging and radiation, for high-reliability applications). These analog effects significantly degrade the filter performance, especially in areas such
as passband ripple.
Of course, digital models have their own quirks. The rounding schemes used within the mathematics of the filters can be a problem, as these rounding errors will accumulate, impacting performance by,
for example, raising the noise floor of the filter. The engineer can fall back on a number of approaches to minimize this impact, and schemes such as convergent rounding will provide better
performance than traditional rounding. In the end, rounding-error problems are far less severe than those of the analog component contribution.
One of the major benefits of using an FPGA as the building block for a filter is the ability to easily modify or update the filter coefficients late in the design cycle with minimal impact, should
the need for performance changes arise due to integration issues or requirements changes.
Filter types and topologies
Most engineers familiar with digital signal processing will be aware that there are four main types of filters. Low-pass filters only allow signals below a predetermined cutoff frequency to be
output. High-pass filters are the inverse of the low pass, and will only allow through frequencies above the cutoff frequency. Bandpass filters allow a predetermined bandwidth of frequencies,
preventing other frequencies from entering. Finally, band-reject filters are the inverse of the bandpass variety, and therefore reject a predetermined bandwidth while allowing all others to pass
Most digital filters are implemented by one of two methods: finite impulse response (FIR) and infinite impulse response (IIR). Let’s take a closer look at how to design and implement FIR filters,
which are also often called windowed-sinc filters.
So why are we focusing upon FIR filters? The main difference between the two filter styles is the presence or lack of feedback. The absence of feedback within the FIR filter means that for a given
input response, the output of the filter will eventually settle to zero. For an IIR filter subject to the same input which does contain feedback, the output will not settle back to zero.
The lack of feedback within the filter implementation makes the FIR filter inherently stable, as all of the filter’s poles are located at the origin. The IIR filter is less forgiving. Its stability
must be carefully considered as you are designing it, making the windowed-sinc filter easier for engineers new to DSP to understand and implement.
If you were to ask an engineer to draw a diagram of the perfect low-pass filter in the frequency domain, most would produce a sketch similar to that shown in Figure 1.
Figure 1. Ideal low-pass filter performance, with abrupt
transition from passband to stopband.
The frequency response shown in Figure 1 is often called the “brick-wall” filter. That’s because the transition from passband to stopband is very abrupt and much sharper than can realistically be
achieved. The frequency response also exhibits other “perfect” features, such as no passband ripple and perfect attenuation within the stopband.
If you were to extend this diagram such that it was symmetrical around 0 Hz extending out to both +/- FS Hz (where FS is the sampling frequency), and perform an inverse discrete Fourier transform
(IDFT) upon the response, you would obtain the filter’s impulse response, as shown in Figure 2.
Figure 2. IDFT or impulse response of the perfect low-pass filter.
This is the time-domain representation of the frequency response of the perfect filter shown in Figure 1, often called the filter kernel. It is from this response that FIR or windowed-sinc filters
get their name, as the impulse response is what is achieved if you plot the sinc function:
Combined with the step response of the filter, these three responses – frequency, impulse and step – provide all the information on the filter performance you need to know to demonstrate that the
filter under design will meet the requirements placed upon it.
Frequency response
The frequency response is the traditional image engineers consider when thinking about a filter. It demonstrates how the filter modifies the frequency-domain information.
The frequency response allows you to observe the cutoff frequency, stopband attenuation and passband ripple. The roll-off between passband and stopband – often called the transition band – is also
apparent in this response. Ripples in the passband will affect the signals being filtered. The stopband attenuation demonstrates how much of the unwanted frequencies remain within the filter output.
This can be critical in applications where specific frequency rejection is required, for example when filtering one frequency-division multiplexed channel from another in a communications system.
Impulse response
It is from the impulse response that the coefficients for your filter are abstracted. However, to achieve the best performance from your filter, the standard practice is to use a windowing function.
Windowing is the technique of applying an additional mathematical function to a truncated impulse response to reduce the undesired effects of truncation.
Figure 2 demonstrates the impulse response extending out infinitely with ripples which, though they diminish significantly in amplitude, never settle at zero. Therefore, you must truncate the impulse
response to N + 1 coefficients chosen symmetrically around the center main lobe, where N is the desired filter length (please remember that N must be an even number). This truncation affects the
filter’s performance in the frequency domain due to the abrupt cutoff of the new, truncated impulse response. If you were to take a discrete Fourier transform (DFT) of this truncated impulse
response, you would notice ripples in both the passband and stopband along with reduced roll-off performance. This is why it is common practice to apply a windowing function to improve the
Step response
The step response, which is obtained by integrating the impulse response, demonstrates the time-domain performance of the filter and how the filter itself modifies this performance. The three
parameters of importance when you are observing the step response are the rise time, overshoot and linearity.
The rise time is the number of samples it takes to rise between 10 percent and 90 percent of the amplitude levels, demonstrating the speed of the filter. To be of use within your final system, the
filter must be able to distinguish between events in the input signal; therefore, the step response must be shorter than the spacing of events in the signal.
Overshoot is the distortion that the filter adds to the signal as it is processing it. Reducing the overshoot in the step response allows you to determine if the signal distortion results from either
the system or the information that it is measuring. Overshoot reduces the uncertainty of the source of the distortion, degrading the final system performance, and possibly means the system does not
meet the desired performance requirements.
If the signal’s upper and lower halves are symmetrical, the phase response of the filter will have a linear phase, which is needed to ensure the rising edge and falling edge of the step response are
the same.
It is worth nothing at this point that it is very difficult to optimize a filter for good performance in both the time and frequency domains. Therefore, you must understand which of those domains
contains the information you are required to process. For FIR filters, the information required is within the frequency domain. Therefore, the frequency response tends to dominate.
Windowing the filter
Using a truncated impulse response will not provide the best-performing digital filter, as it will demonstrate none of the ideal characteristics. For this reason, designers use windowing functions to
improve the passband ripple, roll-off and stopband attenuation performance of the filter. There are many window functions that you can apply to the truncated sinc, for example Gaussian, Bartlett,
Hamming, Blackman and Kaiser – the list goes on. However, two of the most popular window functions are the Hamming and Blackman. Let’s explore them in more detail.
Both of these windows, when applied, reduce the passband ripple and increase the roll-off and attenuation of the filter. Figure 3 shows the impulse and frequency responses for a truncated sinc, and
with Blackman and Hamming windows applied.
Figure 3. Low-pass filter impulse responses (top chart)
and frequency responses
As you can see, both windows significantly improve the passband ripple. The roll-off of the filter is determined not only by the window but by the length of the filter – that is, the number of
coefficients, often called filter taps.
runs from 0 to N, providing a total of N + 1 points.
To apply these windows to the truncated impulse response, you must multiply the window coefficients with the truncated impulse response coefficients to produce the desired filter coefficients.
While the window type determines the roll-off frequency, a rule of thumb is that for a desired transition bandwidth the number of taps required is indicated by N = 4 / BW, where BW is the transition
Figure 4. High-pass filter impulse response (top) and frequency response using spectral
inversion, with nonwindowed, and Hamming and Blackman windows applied.
Implementing different filter topologies
No matter what the end type of filter – bandpass, band reject or high pass – all of these start out with the initial design of a low-pass filter. If you know how to create a low-pass filter and a
high-pass filter, you can then use combinations of the two to produce band-reject and bandpass filters.
Let’s first examine how to convert a low-pass filter into a high-pass one. The simplest method, called spectral inversion, changes the stopband into the passband and vice versa. You perform spectral
inversion by inverting each of the samples while at the same time adding one to the center sample. The second method of converting to a high-pass filter is spectral reversal, which mirrors the
frequency response; to perform this operation, simply invert every other coefficient.
Having designed the low- and high-pass filters, it’s easy to make combinations that you can use to generate bandpass and band-reject filters. To generate a band-reject filter, just place a high-pass
and a low-pass filter in parallel with each other and sum together the outputs. You can assemble a bandpass filter, meanwhile, by placing a low-pass and a high-pass filter in series with each other.
And now for the design
The information we’ve reviewed so far details what windowed-sinc filters are, the importance of windowing and how to generate filters of different topologies. However, before you can implement your
filter within an FPGA, you must generate a set of filter coefficients using one of several software tools, such as Octave, MATLAB, or even Excel. Many of these tools provide simplified interfaces and
options, allowing you to design the filter with minimal effort, the FDA tool in MATLAB being a prime example.
Having generated a set of coefficients for the desired filter, you are ready to implement your filter within the FPGA. No matter the number of taps you have decided on, the basic structure of each
stage of the FIR filter will be the same and will consist of a multiplier, storage and adder.
The method most engineers prefer, and the one that’s by far the simplest, is to use the Xilinx CORE Generator tool’s FIR Compiler, which provides multiple options for customizing and for generating
advanced filters. You can import the generated coefficients into the FIR Compiler as a COE file; this file will contain the coefficients for the filter with the radix also being declared.
Once you have loaded these coefficients, FIR Compiler will display the frequency response of the filter for the coefficients provided, along with basic performance characteristics such as stopband
attenuation and passband ripple.
Once you have completed your customization of the filter using the FIR Compiler tool, CORE Generator will produce all the necessary files to both implement the design and to simulate it during
behavioral simulation prior to implementation, provided the user has access to the correct simulation libraries.
Figure 5. Xilinx CORE Generator frequency responses:
From the top: truncated sinc, Blackman window, Hamming
window, and Hamming-windowed high-pass filter
If you prefer, you can also implement the filter in HDL you have generated yourself. This will often be the choice if your final implementation will be in an ASIC and you are using the FPGA
implementation as a prototyping system. In this case, the first step is to quantize the filter coefficients to use a fixed-number representation of the floating-point results. As the filter
coefficients will represent positive and negative numbers, it is common practice to represent these in a two’s complement format. Once these coefficients have been quantized, you can use them as
constants within the HDL design.
Digital filters find their way into many applications these days, and FPGAs offer a significant advantage to the system designer who needs to use them. Windowed-sinc filters are very simple to design
and implement using basic math tools along with either FPGA core-generation tools or directly in HDL.
About the author
Adam P. Taylor is a Principal Engineer at
EADS Astrium
. You can contact Adam via email at
If you found this article to be of interest, visit
Programmable Logic Designline
where you will find the latest and greatest design, technology, product, and news articles with regard to programmable logic devices of every flavor and size (FPGAs, CPLDs, CSSPs, PSoCs...).
Also, you can obtain a highlights update delivered directly to your inbox by signing up for my weekly newsletter – just
to request this newsletter using the Manage Newsletters tab (if you aren't already a member you'll be asked to register, but it's free and painless so don't let that stop you [grin]). | {"url":"http://www.eetimes.com/document.asp?doc_id=1279449","timestamp":"2014-04-17T04:34:53Z","content_type":null,"content_length":"148840","record_id":"<urn:uuid:aee1d296-d69c-484e-9a68-70f47f3cdc7d>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00113-ip-10-147-4-33.ec2.internal.warc.gz"} |
Square frame for a square picture?
08-31-2010, 02:23 PM
Ulrich Drolshagen
upper margin = (width of mat + height of mat - 2*(dimension of print) / 4
Sounds complicated, but is is simply the mean between the left/right margin and the value of the upper/lower margin if the picture would be placed vertically in the middle of the mat. Thus the
larger lower margin does not get so prominent.
width of mat ( w) = 40cm
height of mat (h) = 50cm
dimension of print (d) = 29cm
upper margin (u) = (40cm + 50cm - 2*29cm)/4 = 8cm
Or the long way:
left/right margin = (lr) (40cm - 29cm)/2 = 5.5cm
upper/lower margin if picture is placed in the middle (ul) = (50cm - 29cm)/2 = 10.5cm
u = lr + (ul - lr)/2 = 5.5cm + (10.5cm - 5.5cm)/2 = 8cm
lo = ul + (ul -lr)/2 = (10.5cm + 2.5cm) = 13cm
08-31-2010, 02:31 PM
I believe that's the same equation used in the link posted above. Good stuff!
08-31-2010, 03:30 PM
Ulrich Drolshagen
08-31-2010, 08:59 PM
I think maybe you are assuming a square print? The original effort accepts rectangular images. I note he also has improved it quite a bit since I downloaded a copy for safekeeping a while back.
Cotrell's implementation does some warning checks and offers vertical centering or top and sides equal as options too. Plus he now has input for the frame overlap. I had started to hack a copy of
the old one to do that, but as usual the project of the moment didn't allow time to play with the tools!
(There is a lot of Javascript in there!)
09-01-2010, 12:21 PM
I didn't understand the need for the frame overlap. Since it is uniform all around and typically very small, what's the use?
09-01-2010, 01:05 PM
Enh, maybe just for the sake of completeness. I also believe there are some frames where that number approaches a half inch or more. That would be noticeable in the case of a relatively small
frame and/or narrow mat widths.
09-01-2010, 05:32 PM
That feature is quite new on the site. I don't understand it either since it's depicted as if it were a frame in 3D (shadow sides and all), instead of just a "crop" of the mat size. I hope he
tweaks it, 'cause it's confusing.
09-01-2010, 06:33 PM
Heh, it's like all software -- never done, and always adding new features whether necessary or not. Those changes were new to me too, and I haven't really digested them, but I think the frame
"moulding" gets wider, as it might with larger overlaps. I agree a nice simple solid color would be more than sufficient -- but then, I usually use small profile gallery frames and may be biased.
On the old version he had a comment about allowing for overlap, the main problem being you could adjust the "mount" dimensions to make the allowance, but then you had to add the overlap to
position the image from the mat edge when putting things together. The new version appears to do all that for you.
09-02-2010, 01:33 AM
Nice pic, DWThomas. I think it looks good in its current frame, but that it would also look good in a square frame. | {"url":"http://www.apug.org/forums/presentation-marketing/print-81203-square-frame-square-picture-3.html","timestamp":"2014-04-24T02:22:42Z","content_type":null,"content_length":"15260","record_id":"<urn:uuid:c2bf264a-70ff-4da7-8f9a-46c62ac415e5>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00629-ip-10-147-4-33.ec2.internal.warc.gz"} |
Moving Averages in Forex
Moving Averages in Forex
Written by Forex Explore
Moving average is yet another indicator used in forex trading to forecast the next price movements. The word "average" already points to the main idea behind the moving average indicator – to show
the average value of the price changes. In other words, moving average "flattens" out the price slop over a certain set of values.
There are several types of moving averages and for each type there is a unique way of calculating the average. We will be talking about 2 major moving average types:
1. Simple Moving Average (shortly SMA)
2. Exponential Moving Average (shortly EMA)
Simple Moving Average – SMA
Simple (also referred to as Arithmetical) Moving Average is the simplest type and most popular one among forex traders.
Let's go back to your high school days and recall how average is calculated in the first place. It's simple – you add up all the numbers together and divide it by amount of numbers. Allow us to give
an example! The class average of final exam scores (for a very small class of 5 students!) is calculated below:
Maria's score = 70
Daniel's score = 85
Michelle's score = 97 (an A student!)
Marcus's score = 72
Christopher's score = 60
The sum of all scores is 70+85+97+72+60 = 384
The average score of the class is 384 divided by amount of scores: 384 / 5 = 76.8
The same thing is done for simple moving average – you sum up all the prices, lets say 60 closing prices for the last two months (60 days). Then you divide the result by 60 (amount of closing prices)
and Voila! There you have your moving average.
Why are we wasting your time on this when most forex broker platforms calculate this for you? Well, it won't hurt to actually understand how things work.
There is of course a delay when it comes to calculating moving average. The predictions you make with this indicator doesn't show you a definite glimpse of the future price movements. It is only a
forecast – a general prediction of the future price, so don't tell all of your friends that you are a true psychic yet!
Exponential Moving Average - EMA
Sometimes Simple Moving Average is just… how should we put it… too simple! There is a huge flaw in simple moving average indicator and it is called "spikes".
Let's consider an example – the agenda is to calculate SMA using a daily chart for a particular currency pair and the closing prices for the last 5 days are listed below:
The sum of all closing prices is 7.2703
The average (SMA) is 7.2703 divided by 5 equals to 1.4540
So what is the problem with simple moving average then??
What if the second closing price isn't 1.4537… What if the closing price is suddenly very low… what IF it is 1.4500?? The simple moving average result would be much lower and your forecast would
indicate that the price is moving down, when in fact the movement is just a bit "spiky" but overall price movement is going up.
So what is the solution to this average mess?! That's when Exponential Moving Average comes in handy.
Exponential Moving Average indicator "flattens" Moving Average. How is it calculated? You add moving average of the current closing price to moving average of the previous closing price. And don't
forget that the last prices get more weighted value.
So, for example, the "spiky" price movement that we have got because of 1.4500 closing price on a second day will not be so important.
Now the big question arrives – which one of the two moving average indicators is better? Lots of forex traders actually use both types to get a better view, so it is entirely up to you which type to
So the trading tip for today is "practice, practice and practice some more!" Get some charts and do your homework. Sooner or later you will find which moving average indicator works best for you. | {"url":"http://www.forexexplore.com/forex-indicators/moving-averages-in-forex","timestamp":"2014-04-21T02:27:36Z","content_type":null,"content_length":"73445","record_id":"<urn:uuid:71dc6872-7c7a-4dac-b037-fd54467150e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00617-ip-10-147-4-33.ec2.internal.warc.gz"} |
Lebanon, GA Algebra Tutor
Find a Lebanon, GA Algebra Tutor
...I have installed and worked extensively nearly every major application available on Windows, included the MS Office Suite, and worked extensively with remote desktop and as a network
administrator have managed Microsoft Windows servers including DNS and IIS. I have troubleshot nearly every softw...
126 Subjects: including algebra 1, algebra 2, chemistry, English
I was a National Merit Scholar and graduated magna cum laude from Georgia Tech in chemical engineering. I can tutor in precalculus, advanced high school mathematics, trigonometry, geometry,
algebra, prealgebra, chemistry, grammar, phonics, SAT math, reading, and writing. I have been tutoring profe...
20 Subjects: including algebra 2, algebra 1, chemistry, reading
...I have also taught/tutored English, history and science.I speak Spanish fluently. I studied Spanish all through HS and college. I lived in the Dominican Republic for three years and I lived in
Mexico for about 6 mos.
22 Subjects: including algebra 1, English, Spanish, reading
...I have successfully assisted hundreds of individual students, and many parents have referred me to their acquaintances with very positive recommendations. I enjoy being able to help students,
not only with mastering their challenging subjects, but also with instilling confidence in themselves. ...
31 Subjects: including algebra 2, chemistry, English, algebra 1
Hi students! I am a recent college graduate with a B.S. in biology. I am currently preparing to take the MCAT for medical school.
17 Subjects: including algebra 1, algebra 2, chemistry, physics | {"url":"http://www.purplemath.com/Lebanon_GA_Algebra_tutors.php","timestamp":"2014-04-19T07:01:27Z","content_type":null,"content_length":"23654","record_id":"<urn:uuid:c88e0319-d95c-4ec7-869e-226b62b4db5a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00585-ip-10-147-4-33.ec2.internal.warc.gz"} |
solving an exponential as a variable
July 4th 2012, 10:02 PM #1
May 2010
solving an exponential as a variable
I have a simple question; 10^x= 10000 . I know the answer is x=4, but what I want to know is how do I find the value of x algebraically. I looked on some websites, and they suggested finding the
log or what but I havn't done logs yet.
Re: solving an exponential as a variable
\displaystyle \begin{align*} 10^x &= 10\,000 \\ \log_{10}{\left(10^x\right)} &= \log_{10}{\left(10\,000\right)} \\ x &= \log_{10}{\left(10^{4}\right)} \\ x &= 4 \end{align*}
Technically the logarithms are redundant though, since it can be seen from inspection that \displaystyle \begin{align*} 10^4 = 10\,000 \end{align*}.
July 4th 2012, 10:28 PM #2 | {"url":"http://mathhelpforum.com/algebra/200647-solving-exponential-variable.html","timestamp":"2014-04-19T10:50:52Z","content_type":null,"content_length":"36545","record_id":"<urn:uuid:63b5c890-c0c5-4920-8d2b-264f1e041a4d>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00122-ip-10-147-4-33.ec2.internal.warc.gz"} |
Home » Integral geometry
Integral Geometry August 13, 2001 to December 14, 2001 Organizers L. Barchini, S. Gindikin, A. Goncharov and J. Wolf Description Integral geometry is the branch of geometrical analysis that studies
integral transforms of geometrical nature. The first example is the Radon transform, which transforms functions to their integrals over hyperplanes. Most of the transforms of the modern integral
geometry are variations of the Radon transform. The basic idea in integral geometry is the 19th century geometric idea of the incidence between manifolds of geometric objects. The theory of the Radon
transform contains remarkable explicit formulas, starting with the inversion formula. The central problem of integral geometry is to find geometrical structures that allow one to develop a similar
explicit analysis. Such structures are known on some homogeneous manifolds in the presence of sufficiently many group symmetries. Gelfand and Graev showed, for complex semisimple Lie groups and some
other homogeneous spaces, that there is an integral transform of Radon type (the horospherical transform) whose inversion problem is equivalent to the Plancherel Formula. This connection with
representation theory is one of principal stimuli for the modern development of integral geometry. But integral geometry also has essential connections with many other areas of mathematics. Some such
areas are symplectic geometry, multidimensional complex analysis, algebraic analysis, nonlinear differential equations, and aspects of Riemannian geometry. The Radon transform and its variations are
the mathematical base of computer tomography. The program will be concentrate on a few directions where it is possible to expect strong new ideas and results:
• integral geometry and theory of representations
• integral geometry and multidimensional complex analysis
• ideas of symplectic geometry and algebraic analysis in integral geometry
• geometrical structures in integral geometry and non linear differential equations.
Many of participants in the proposed program are mathematicians whose research efforts were, and are, important in developing these directions in integral geometry. Sept 3-14, 2001 Special
activity on p-adic integral geometry, with a lecture series organized by Alexander Goncharov. Nov 19-30, 2001 Special emphasis on the D-module approach to classical integral geometry, with a
series of lectures on D-modules and integral geometry, organized by Alexander Goncharov.
Logistics Program Logistics can be viewed by Members. If you are a program member then Login Here. | {"url":"https://www.msri.org/programs/13","timestamp":"2014-04-17T15:46:13Z","content_type":null,"content_length":"52752","record_id":"<urn:uuid:3286dcb2-46d5-4624-b8e5-4e8f48eae190>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00613-ip-10-147-4-33.ec2.internal.warc.gz"} |
ADA385042 -- An Unconditionally Stable Implicit Difference Scheme for the Hydrodynamical Equations
An Unconditionally Stable Implicit Difference Scheme for the Hydrodynamical Equations
Local PDF: ADA385042.pdf
AD Number: ADA385042
Subject Categories: EXPLOSIONS
Corporate Author: LOS ALAMOS SCIENTIFIC LAB ALBUQUERQUE NM
Title: An Unconditionally Stable Implicit Difference Scheme for the Hydrodynamical Equations
Personal Authors: Turner, James; Wendroff, Burton
Report Date: 15 APR 1964
Pages: 43 PAGES
Report Number: LA-3007
Contract Number: W-7405-ENG-36
Monitor Acronym: XJ
Monitor Series: AEC
Descriptors: *SHOCK WAVES, *EXPLOSIONS, *HYDRODYNAMIC CODES, NUMERICAL ANALYSIS, FINITE DIFFERENCE THEORY, DISCONTINUITIES, NONLINEAR ALGEBRAIC EQUATIONS, RAREFACTION.
Identifiers: NEWTON METHOD, COURANT CONDITION
Abstract: We solve two hydrodynamical problems. The first involves a shock wave, a contact discontinuity, and a rarefaction wave using an unconditionally stable finite difference scheme. The Courant
condition is satisfied everywhere except in one zone behind the shock, where it is violated by factors of 10 and 100. The nonlinear difference equations are solved by Newton's method. The total
number of Newton iterations to get to a certain time is apparently independent of the degree to which the normal stability condition is violated in the one zone. The second problem involves two
rarefaction waves moving in opposite directions. One wave moves in a region where the Courant condition is violated by a factor of approximately two. The other wave moves in a region where the
Courant condition is satisfied. Numerical results are compared with the analytical solution. An examination of several runs indicates one explicit time step is about five times as fast as one
implicit time step. Therefore, use of the implicit method is indicated when the Courant condition is violated by a factor of 5 or more.
Limitation Code: APPROVED FOR PUBLIC RELEASE
Source Code: 394961
Citation Creation Date: 03 JAN 2001 | {"url":"http://www.fas.org/sgp/othergov/doe/lanl/dtic/ADA385042.html","timestamp":"2014-04-20T08:17:33Z","content_type":null,"content_length":"4867","record_id":"<urn:uuid:3dd3b9dd-b751-408a-8cfc-fb11f3d0141d>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00120-ip-10-147-4-33.ec2.internal.warc.gz"} |
optimization problem - need confirmation
January 1st 2013, 01:06 PM #1
Junior Member
Sep 2012
New York
optimization problem - need confirmation
hey guys
the problem is the legs of a triangle are x and y. find the equation that will maximize the area of the triangle given that 2x+y=18
is the answer 40.5 cubic units
thanks alot
Re: optimization problem - need confirmation
I'm not sure what you are asking here. A general triangle does not have "legs". Is this a right triangle with legs x and y? In that case, the area is given by A= xy/2. If you are also required
that 2x+y= 18 so that y= 18- 2x so the area is given by A= x(18- 2x)/2= 9x- x^2. That's a parabola and you can find a maximum value by completing the square.
Re: optimization problem - need confirmation
i was trying to help someone but thats all the question says, i calculated it like an isosceles triangle but thanks alot for looking into the question
January 1st 2013, 01:17 PM #2
MHF Contributor
Apr 2005
January 1st 2013, 01:24 PM #3
Junior Member
Sep 2012
New York | {"url":"http://mathhelpforum.com/calculus/210586-optimization-problem-need-confirmation.html","timestamp":"2014-04-18T19:30:29Z","content_type":null,"content_length":"37286","record_id":"<urn:uuid:d9a9f336-ecf4-45d8-bf5e-324bc60b5af4>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00088-ip-10-147-4-33.ec2.internal.warc.gz"} |
Annandale, VA Geometry Tutor
Find an Annandale, VA Geometry Tutor
...This teaching has most often taken place in either self-contained schools or center programs inside larger schools. The most recent placement was Cedar Lane High School. Cedar is one of 2
self-contained schools Fairfax County has created to serve its students with severe emotional problems.
15 Subjects: including geometry, reading, algebra 1, GED
...With several years of experience teaching math and tutoring, I know how to help students build both their conceptual understanding of mathematics and their confidence in problem solving. Most
of my experience both tutoring and teaching in the classroom was with high school or middle school subje...
16 Subjects: including geometry, English, calculus, GRE
...I think I'll stick to "students"!I was once talking with a middle school class about the origins of algebra and showed them a picture and asked them if they knew who it was. One boy answered,
"Bin Laden!" While I found that answer amusing it was not the right answer. For the right answer let's...
10 Subjects: including geometry, algebra 1, algebra 2, GED
...I can help ensure you know the background information as well as the question information. If you are a college student I can help you fill in the blanks where the professor might not have
been absolutely clear. Plus assist with correctly writing answers how college professors want to see them in those little blue books.
11 Subjects: including geometry, writing, ESL/ESOL, TOEFL
...I have taken many math classes and received a perfect score on my SAT Math test. I have tutored students for years in standardized tests and provide a customized tutoring plan for each
individual student. I have a Master's degree in Chemistry and I am extremely proficient in mathematics.
11 Subjects: including geometry, chemistry, organic chemistry, algebra 2
Related Annandale, VA Tutors
Annandale, VA Accounting Tutors
Annandale, VA ACT Tutors
Annandale, VA Algebra Tutors
Annandale, VA Algebra 2 Tutors
Annandale, VA Calculus Tutors
Annandale, VA Geometry Tutors
Annandale, VA Math Tutors
Annandale, VA Prealgebra Tutors
Annandale, VA Precalculus Tutors
Annandale, VA SAT Tutors
Annandale, VA SAT Math Tutors
Annandale, VA Science Tutors
Annandale, VA Statistics Tutors
Annandale, VA Trigonometry Tutors
Nearby Cities With geometry Tutor
Alexandria, VA geometry Tutors
Arlington, VA geometry Tutors
Burke, VA geometry Tutors
Centreville, VA geometry Tutors
Fairfax, VA geometry Tutors
Falls Church geometry Tutors
Fort Washington, MD geometry Tutors
Herndon, VA geometry Tutors
Mc Lean, VA geometry Tutors
Oakton geometry Tutors
Reston geometry Tutors
Springfield, VA geometry Tutors
Takoma Park geometry Tutors
Vienna, VA geometry Tutors
Washington, DC geometry Tutors | {"url":"http://www.purplemath.com/annandale_va_geometry_tutors.php","timestamp":"2014-04-19T20:06:34Z","content_type":null,"content_length":"24157","record_id":"<urn:uuid:4029e02a-05fb-45d6-881e-f7d4ebf40cad>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00099-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nashua, NH Algebra 1 Tutor
Find a Nashua, NH Algebra 1 Tutor
...While teaching, I got an M.Ed. at UMassLowell and a professional math 9-12 teacher's license from Massachusetts. The first courses I taught were algebra, geometry, advanced math functions, and
descriptive statistics; but for the last five years I taught only precalculus and Fundamentals of Calculus at both the college and honors levels. I retired June 30, 2013.
9 Subjects: including algebra 1, calculus, geometry, algebra 2
...Algebra is like a puzzle where you first have to look at the big picture, break down the pieces into "edges and middle pieces" and then the pieces can be systematically put together . I have
extensive experience teaching Algebra II privately, and have excellent references and a good record for ...
15 Subjects: including algebra 1, geometry, algebra 2, precalculus
...I have covered several core subjects with a concentration in math. I currently hold a master's degree in math and have used it to tutor a wide array of math courses. In addition to these
subjects, for the last several years, I have been successfully tutoring for standardized tests, including the SAT and ACT.I have taken a and passed a number of Praxis exams.
36 Subjects: including algebra 1, chemistry, English, reading
...I have had many recent tutoring situations with students taking algebra, calculus, and statistics. Courses I have taught include algebra, trigonometry, precalculus, calculus, differential
equations, statistics, discrete mathematics, and advanced engineering mathematics.I have taught algebra clas...
12 Subjects: including algebra 1, calculus, geometry, statistics
...I received Summa Cum Laude undergraduate honors at Middlebury College majoring in Mathematics with a minor in Economics. I then did 1 year of graduate work at Dartmouth College, but decided to
switch to a Master's in Computer Science program at UNH. Most recently I have been a stay-at-home dad, and taught my eldest daughter to read using positive reinforcement and the DISTAR method.
12 Subjects: including algebra 1, physics, calculus, statistics | {"url":"http://www.purplemath.com/nashua_nh_algebra_1_tutors.php","timestamp":"2014-04-19T17:49:50Z","content_type":null,"content_length":"24190","record_id":"<urn:uuid:19375075-d740-493e-8a3c-dd4458822f9d>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00406-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math 50: Probability and Statistical Inference
Math 50: Probability and Statistical Inference - WINTER 2006
Alex Barnett. Bradley room 308, tel 6-3178, email: m50w06 at math.dartmouth.edu
Uncertainty governs both the data analysis done by scientists, and judgments made by us all in our everyday lives. In this course we apply the mathematical techniques
of (mainly continuous) probability to estimation and hypothesis testing, the formal methods by which we learn from noisy data, random samples, and other such uncertain
real-world measurements. We culminate with linear regression, and introduce the powerful framework of Bayesian inference.
Bayesian solution of an These days, computers enable accurate analysis and visualization of probability models, and this course has a small but key computer component (using Matlab, R, or
inverse problem (x is your favorite package). As well as improving understanding, you will learn valuable tools that have recently become the bread-and-butter of science (including social),
parameters, y data) economics, and medicine.
Jump to... Schedule, Resources, or Projects
Lectures / OH: Bradley 104, MWF 12:30-1:45pm (period 12). X-hr is 1-2pm Tues, and I imagine will be used about half the time (I will give a few days notice) for: quizzes, computer help, catch-up, or
review material. Do not schedule anything regular in this X-hr. Office hours are 2-3 M, 4-5 Tu, and 2-3 F
Homework: 8-9 weekly HW's due Wednesday at start of lecture. I strongly encourage you to attempt the relevant homework problems before the next lecture. Leaving it all for Tuesday night is bad time
management and risks you getting left behind in this fast-paced course. Please make your working/reasoning as clear as you can, write clearly, don't be scared of using lots of space on the page, and
staple your work. Late homework will not be accepted (unless by prior arrangement for a valid, and exceptional, reason). Your lowest HW score will be dropped.
Exams: Two non-cumulative closed-book midterms (these will avoid the usual midterm season), and one final exam (single sheet of notes allowed). No algebraic/graphing calculators.
Real-world Statistics: Each week you should dig up a statistical example from the media, web, or other real-world source, post a paragraph to our comments page and be ready to explain it in class (I
will pick on you, randomly of course!). If it relates to the week's content, all the better. As the course progresses we will be able to connect these to the material. Why do it?
1. statistics is all around us, affects policies, our lives, etc. The discipline was invented to deal with these questions.
2. communication skills
3. project ideas for you and all of us
4. choose what interests you
Project: I am keen to have you do a project in the last 1-2 weeks, worth at least 10%, in which you apply what you've learned to analyse real-world statistical data. Stay tuned.
Honor principle. Exams: no help given or received. Homework/Project: no copying, however collaboration on problem techniques is encouraged. Write-ups must be done individually.
Grades: Your overall grade will be computed according to HW 15%, Midterms 2*20%, Final+Project 40%, Real-world-statistics contributions 5%. Note that although HW has a low weighting, it is the main
chance you get to practise the material and get feedback. Grades in Math 50 are not curved; other students' good performance will not hurt your grade. (So please work together and help each other
SCHEDULE, READINGS, LECTURE LINKS, and HOMEWORKS
week date reading (LM4) homework (due following Wed) / daily or weekly topics / info
1 Jan 4 W website, Ch.1, HW1. Overview, learning from data (frequentist vs Bayesian) Bayesian coin applet
6 F 2.4-5 review conditional, independence.
7 Sa special preemptive catch-up day! (free)
9 M 2.6-7, 3.2 Combinatorics, hypergeometric
10 Tu 3.3-4 (lecture to replace Sat) Binomial, random variables, PDF, cumulative distn func.
2 11 W 3.5 HW2. Expectation values, including binomial and hypergeometric. [2-4pm Susan A. Schwarz's MATLAB intro tutorial]
13 F 3.6 variance
16 M (MLK holiday: no class)
17 Tu 3.7 (lecture to replace MLK class) joint densities, marginals
3 18 W 3.8-9 HW3. Combining variables (convolution 1, 2), worksheet solutions, mean and variance of such.
20 F 3.10-11 order statistics, conditional pdfs.
23 M 4.2 Poisson distribution (poisson.m code, process, bus paradox 1, 2)
24 Tu (free)
4 25 W 4.3 HW4. Normal (gaussian) distribution (standard normal cdf applet).
27 F Midterm 1: on material from HW 1-3 (solutions, practise qu's).
29 M 4.4-5 geometric and negative binomial pdfs
5 Feb 1 W 4.6, 5.1-2 HW5. Gamma pdf and function. Estimation: max likelihood (lik.m 1-param max likelihood demo).
3 F 5.3-4 (lik_gamma2.m 2-param gamma likelihood plot) confidence intervals
6 M 5.5 properties of estimators: bias (applet demo: ML variance estimator is biased).
7 Tu X-hr 5.6 efficiency, minimum-variance estimators (lecture to replace Carnival class).
6 8 W 5.7-8 HW6 (selected answers, code for qu C). Cramer-Rao lower bound, consistency (worksheet solutions).
10 F (Carnival holiday: no class)
13 M 5.8, 6.2 Bayesian estimation. Hypothesis testing.
14 Tu Problem-solving session (optional).
7 15 W 6.3 HW7 (solutions). Binomial hyp testing.
17 F 6.4 Type I, II errors.
20 M Midterm 2: on material from HW 4-6. (solutions, practise qu's with answers).
21 Tu 7.2 Normal distribution, t test (student.m t-distn demo). Choose projects, start work on them.
8 22 W 7.4 inference on mean of normal data, t tests
24 F Bayesian inference for (mu, sigma) of normal data, nonlinear transformation of pdfs.
27 M 7.5 Random sampling from any pdf, inference on variance of normal data. reading data into matlab. chi.m, chi-squared demo
28 Tu (free)
9 Mar 1 W 9.2, 4 Two-sample tests: means (t-test w/ equal-variance), and proportions (normal approximation).
3 F 11.4 Covariance and correlation coefficient (corr_regr.m demo).
6 M 11.2, 5 Linear regression, robust noise models, Bayesian model fitting with Markov-chain Monte Carlo (MCMC). (regr_mcmc_sampling.m demo, needs nlp_linear_gauss.m,
nlp_linear_exp.m, mcmc_run.m).
7 Tu X-hr (free)
10 8 W (last day of class) Class project presentations (continues w/ pizza, 6:30-8pm, Bradley 105).
10 F Review session (usual time 12:30-1:35). Post-Mid2 practise problems w/ solutions. Project reports due (midnight)
13 M Final Exam (solutions): March 13th at 8:00 am - 11:00 am, Bradley 104 (usual room). Note this exam will not be given early to accommodate travel plans.
Special needs: I encourage students with disabilities, including "invisible" disabilities like chronic diseases and learning disabilities, to discuss with us any appropriate accommodations that might
be helpful. Let me know asap, certainly in first 2 weeks. Also stop by the Academic Skills Center in 301 Collis to register for support services.
Private tutoring: Tutor Clearinghouse may have private one-on-one tutors available for Math 50. The tutors are recruited on the basis that they have done well in the subject, and are trained by the
Academic Skills Center. If a student receives financial aid, the College will pay for three hours of tutoring per week. If you would like to have a tutor, please go to 301 Collis and fill out an
application as early in the term as possible.
• BOOKS:
□ The only required one is An Introduction to Mathematical Statistics and its Applications (3rd edition, since 4th is not quite yet out as of November, although it won't hurt I imagine) by R.
J. Larsen and M. L. Marx (Prentice-Hall, 2000) available at Wheelock Books in town. Note: we have now switched to 4th edition. Wheelock Books is located across the street from Collis at 2
West Wheelock St. in the building on stilts. Bookrush hours are 8 am - 8 pm Monday through Friday, 12 noon - 5 pm Saturday, and 12 noon - 5 pm Sunday.
□ Probability background (content of Math 20) provided by Introduction to Probability by Charles M. Grinstead and J. Laurie Snell, freely available online.
□ A very complete reference, quite mathematical and dense, but you may love its rigor and clarity, is Probability and Statistics, 3rd Edition by M. H. DeGroot and M. J. Schervish
(Addison-Wesley, 2001). DeGroot was one of the pioneers of Bayesian analysis so this is covered very well.
• Useful links relating to Grinstead/Snell book here (scroll down).
• Previous incarnations of this course: W05, W03, W00, and W99.
• Virtual Laboratories in Probability and Statistics, including lots of lecture notes (need MathML for your browser), and, best of all, applets. From U. Alabama, Huntsville.
• Rice Virtual Lab in Statistics, again, excellent applets.
• Links from Dartmouth's X10 course, and some X10 instructional applets.
• 1-page summary table of pdfs and their properties.
• D'Agostini's introduction to Bayesian inference.
• William Jefferys Bayesian inference course
• The locally (ie, Math Department, chiefly Laurie Snell) produced Chance News, full of fascinating statistical examples from the current media, updated monthly, is now a Wiki, meaning you can edit
it yourself. Part of the larger local Chance site and resources. | {"url":"http://www.math.dartmouth.edu/archive/m50w06/public_html/","timestamp":"2014-04-17T10:07:32Z","content_type":null,"content_length":"17534","record_id":"<urn:uuid:8401f0e2-bd33-432d-80e6-81402175f630>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00190-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cartan forms and structure equations
omg.. nobody answering...
there aren't many people who understand these stuff eh?
i think it is more of a matter of complexity. there are entire books written to answer that question.
I'll give the why, you can explore the what on your own: Cartan forms and connections allow us to define differentiation on certain geometrical objects, namely fibre bundles. This topic has proven
valuable in constructing models for phenomena in GR and string theory.
Any book on gauge theory and physics will discuss this. for a heavy duty mathematical treatment, look for the book Cartan for Beginners: Differential Geometry Via Moving Frames and Exterior
Differential Systems by my friend Tom Ivey. | {"url":"http://www.physicsforums.com/showthread.php?p=1092637","timestamp":"2014-04-20T23:28:55Z","content_type":null,"content_length":"24060","record_id":"<urn:uuid:cd2ff859-1bff-4797-89f3-af33b17add73>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00386-ip-10-147-4-33.ec2.internal.warc.gz"} |
Breaking the Taboos: Stability of Finite Difference for the Solution of Flow problems
1. Lookup NU author(s)
2. Dr Casper Hewett
3. Dr Vedrana Kutija
4. Dr Kutsi Erduran
Author(s) Hewett CJM, Kutija V, Erduran KS
Editor(s) Cortezón JAR; de Jesús Soriano Pérez T
Publication type Conference Proceedings (inc. Abstract)
Conference Name Hydroinformatics 2000
Conference Location Cedar Rapids, Iowa, USA
Year of Conference 2000
Date 23-27 August 2000
Full text for this publication is not currently held within this repository. Alternative links are provided below where available.
In this age of fast digital computers many problems which cannot be solved by analytical methods can be approximated by numerical methods such as the finite difference method. Some success has been
achieved in using finite differences to solve time dependent problems such as those described by the transport-diffusion equation and the equations for free surface flow. Explicit schemes are
relatively easy to implement but have the disadvantage that they can exhibit numerical instability. Imposing time step limitations solves the problem, but the restrictions can be severe, and thus
computationally expensive. On the other hand, implicit schemes do not generally exhibit stability problems, but have the disadvantage that they typically require a system of equations to be solved at
each space-time grid point, which can again be computationally intensive. Stability criteria which restrict the time step are typical and linearised stability analysis using Fourier series has been
developed to evaluate such criteria. The method is described and some well-known schemes are reviewed. NewC is an implicit finite difference scheme for one-dimensional free surface problems. It is
capable of modelling subcritical, supercritical and transcritical flows without the modifications to the governing equations required by other schemes. It also has the advantage that it is easy to
implement for network problems. Linearised stability analysis for the NewC scheme is presented. Recent work has led to the development of explicit finite difference schemes which are also
unconditionally stable. This is achieved by introducing an intermediate parameter which effectively decouples the time and space dimensions. The application of such a scheme to the
transport-diffusion equation is presented.
Publisher Iowa Institute of Hidraulic Research | {"url":"http://eprint.ncl.ac.uk/pub_details2.aspx?pub_id=11401","timestamp":"2014-04-18T05:34:06Z","content_type":null,"content_length":"9324","record_id":"<urn:uuid:13d44043-2247-4bde-bd3a-bd404dae8c88>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00135-ip-10-147-4-33.ec2.internal.warc.gz"} |
User Will Jagy
bio website zakuski.math.utsa.edu/~jagy
location Berkeley, California
age 57
visits member for 4 years, 3 months
seen 3 hours ago
stats profile views 13,634
My main activity is in number theory of integral positive ternary quadratic forms. This began through years of working with Irving Kaplansky. Much of his unpublished writing on quadratic forms can be
found as pdfs at http://zakuski.math.utsa.edu/~kap/forms.html and about Lie and Jordan superalgebras at http://zakuski.math.utsa.edu/~kap/superalgebra.html One of my own email addresses can be found
easily using the search feature at http://www.ams.org/cml and just putting in my last name
8 awarded Notable Question
Mar Are there nontrivial real functions of 2 real variables with gradient having constant euclidian norm on each level line?
30 comment mathoverflow.net/questions/82227/…
Mar Are there nontrivial real functions of 2 real variables with gradient having constant euclidian norm on each level line?
30 comment en.wikipedia.org/wiki/Eikonal_equation
Mar One Diophantine equation
27 comment I do not understand most of what you say; I suppose English is not your first language. It is certainly possible to describe some solutions of Apollonian circle packing with formulas.
There will always be many other solutions that are not described by those formulas.
Mar One Diophantine equation
27 comment no formula describes all solutions. Also, you don't seem to have asked any question.
Mar One Diophantine equation
27 comment en.wikipedia.org/wiki/… The solutions are a forest of countably many rooted trees. See articles in the AMS Bulletin by Fuchs and Kantorovich.
Mar Database of non-isomorphic trees
24 comment @BrendanMcKay, good point. But see the (American edition) cover of Interesting Times, by Terry Pratchett: Not tested on animals, you'll be the first! amazon.com/
Mar Database of non-isomorphic trees
24 comment Start one. You'll be the first!
Mar Surface curves equidistant from a simple closed geodesic
23 comment Right. for Q2, cannot imagine all geodesics unless it is a cylinder over a plane curve.
Mar Surface curves equidistant from a simple closed geodesic
22 comment Alright, one approach with a reference, en.wikipedia.org/wiki/…
Mar Surface curves equidistant from a simple closed geodesic
22 comment similar to Morse functions, really, and some similar to the cut locus of a point. From an ellipsoid, note that the farthest point (Q4) need not be the same distance from all points of $\
gamma,$ although perhaps from two points, otherwise a small movement could take it a hair farther. In general, though, thinking of hydra heads, en.wikipedia.org/wiki/…
Mar Asymptotics of special square-free numbers
21 comment Thank you. So, the "most integers near $x$" would be Erdos-Kac. en.wikipedia.org/wiki/Erd%C5%91s-Kac_theorem
Mar Asymptotics of special square-free numbers
21 comment @IstvánKovács, see the comment of Greg Martin after yours, then mine...
Mar Asymptotics of special square-free numbers
21 comment @GregMartin, thanks. It seems to me, reading Lucia's comment, that Montgomery and Vaughan are saying exactly the same thing as Hardy and Wright. Maybe you could leave an answer with some
detail about the error term, which i guess is the approximate size of H+W's $\tau_k(x) - \pi_k(x).$
Mar Asymptotics of special square-free numbers
21 comment @IstvánKovács, not sure what to tell you; they say the result is the same for $\tau_k(x),$ where $k$ is the exponent sum and there is no longer a restriction to be squarefree. So for
that interpretation, a sum of 1 seems correct. Well, I will leave it here, someone will explain the difficulty. I don't have Montgomery and Vaughan.
20 answered Asymptotics of special square-free numbers
Mar Discriminant of a compositum of number fields, a bound?
20 comment math.stackexchange.com/questions/719377/…
Mar Is there a Hotel California of set-theoretic geology?
20 comment @Erin, no, I just thought your question was amusing. With any luck Noah will give more detail. If you can wait, there are a variety of other people who can offer substantial answers. On
one detail, a couple i know in England just came back from Tenerife. Maggie took pictures of the sardine and put it on facebook.
Mar Is there a Hotel California of set-theoretic geology?
20 comment I guessed. Every year they have a funeral for a sardine. en.wikipedia.org/wiki/Carnival_of_Santa_Cruz_de_Tenerife
Mar comment Is there a Hotel California of set-theoretic geology?
20 You've been deprived in some way, right? Could we take up a collection, give you a round trip to Tenerife? I hear good things. | {"url":"http://mathoverflow.net/users/3324/will-jagy?tab=activity","timestamp":"2014-04-21T09:54:59Z","content_type":null,"content_length":"48968","record_id":"<urn:uuid:eb7125dc-b815-41d9-9eee-35a311db7866>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00475-ip-10-147-4-33.ec2.internal.warc.gz"} |
Crocodile Clips & Yenka
Yenka Mathematics: making a net
You can use the 2D shapes in Yenka Mathematics to construct your own nets, which can then be folded into 3D shapes.
When you move one 2D shape - like this square - close to another one, the shapes’ edges will snap together. You’ll see a square block between them when this happens. Yenka automatically forms a
hinge where the shapes join.
Repeat this with several shapes, and you’ll get a net. One square is darker than the others: this is the base of the net. You can fold the net up by dragging the corners of any other square, and
check if it makes the shape you want.
Once you’ve folded the net completely, and made a 3D shape, you can double-click on that shape and click “Flatten” in the panel that appears, to flatten out the net. | {"url":"http://crocclips.tumblr.com/tagged/nets","timestamp":"2014-04-16T04:14:38Z","content_type":null,"content_length":"60412","record_id":"<urn:uuid:d851a7ed-ac15-4e13-97a0-7dbc9189e26e>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00560-ip-10-147-4-33.ec2.internal.warc.gz"} |
3 3 8 8 - Solution
March 2001
3 3 8 8 - Solution
The trick to this problem was finding a way to make the most of the two 3's and two 8's that you could use. The answer was:
To see how this works, start by writing the left hand side as:
When you multiply this out it becomes clear that it is correct.
Clever isn't it! | {"url":"http://plus.maths.org/content/puzzle-page-6","timestamp":"2014-04-19T12:07:01Z","content_type":null,"content_length":"19563","record_id":"<urn:uuid:bbbb28d1-dbf9-45ae-877b-b2e71167c701>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00056-ip-10-147-4-33.ec2.internal.warc.gz"} |
Introduction to Graph Theory pdf
18-12-2012, 04:56 PM
Post: #1
project girl Posts: 10,113
Moderator Hey...Ask More Info About Introduction to Graph Theory pdf Joined: Nov 2012
Introduction to Graph Theory pdf
Introduction to Graph Theory
1Introduction to Graph.pdf (Size: 81.82 KB / Downloads: 37)
These notes are primarily a digression to provide general background remarks. The subject is an
efficient procedure for the determination of voltages and currents of a given network. A network
comprised of B branches involves 2B unknowns, i.e., each of the branch voltages and currents.
However the branch volt-ampere relations of the network, presumed to be known, relate the current and
the voltage of each branch,. Hence a calculation of either B currents or B voltages (or some
combination of B voltages and currents), and then substitution in the B branch volt-ampere relations,
provides all the voltages and currents.
In general however neither the B branch voltages nor the B branch currents are independent, i.e., some
of the B voltage variables for example can be expressed as a combination of other voltages using KVL,
and some of the branch currents can be related using KCL. Hence there generally are fewer than B
independent unknowns. In the following notes we determine the minimum number of independent
variables for a network analysis, the relationship between the independent and dependent variables, and
efficient methods of obtaining independent equations to determine the variables. In doing so we make
use of the mathematics of Graph Theory.
Graph Theory
A circuit graph is a description of the just the topology of the circuit, with details of the circuit elements
suppressed. The graph contains branches and nodes. A branch is a curve drawn between two nodes to
indicate an electrical connection between the nodes.
A directed graph is one for which a polarity marking is assigned
to all branches (usually an arrow) to distinguish between
movement from node A to B and the converse movement from
B to A.
A connected graph is one in which there is a continuous path
through all the branches (any of which may be traversed more
than once) which touches all the nodes. A graph that is not
connected in effect has completely separate parts, and for our
purposes is more conveniently considered to be two (or more)
independent graphs.
Choosing Independent Current Variables:
Given a network graph with B branches and N nodes select a tree, any one will do for the present
purpose. Remove all the link branches so that, by definition, there are no loops formed by the remaining
tree branches. It follows from the absence of any closed paths that all the branch currents become zero.
Hence by 'controlling' just the link currents all the branch currents can be controlled. This control would
not exist in general using fewer than all the link branches because a loop would be left over; depending
on the nature of the circuit elements branches making up the loop current could circulate around the
loop. Using more than the link branches is not necessary. Hence it should be possible to express all the
branch currents in terms of just the link currents, i.e., there are B-N+1 independent current variables, and
link currents provide one such set of independent variables. | {"url":"http://seminarprojects.com/Thread-introduction-to-graph-theory-pdf?pid=134690","timestamp":"2014-04-17T09:40:10Z","content_type":null,"content_length":"42105","record_id":"<urn:uuid:87778111-13c5-4db1-912d-4367ebc53a85>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00179-ip-10-147-4-33.ec2.internal.warc.gz"} |
Challenge integral
August 3rd 2009, 06:04 PM #1
Challenge integral
This one was given to me as a challenge. I was happy I could solve it!
Show that
$\int_0^1\frac{\log(x)\log(1-x)}{x}\: dx = \zeta(3) = \sum_{n=1}^\infty\frac{1}{n^3}$
It's quite easy.
\begin{aligned}<br /> \int_{0}^{1}{x^{j-1}\ln (x)\,dx}&=-\int_{0}^{1}{\int_{x}^{1}{\frac{x^{j-1}}{t}\,dt}\,dx} \\ <br /> & =-\int_{0}^{1}{\int_{0}^{t}{\frac{x^{j-1}}{t}\,dx}\,dt} \\ <br /> & =-\
frac{1}{j}\int_{0}^{1}{t^{j-1}\,dt} \\ <br /> & =-\frac{1}{j^{2}}.<br /> \end{aligned}
Hence we get $\int_{0}^{1}{\frac{\ln (x)\ln (1-x)}{x}\,dx}=-\sum\limits_{j=1}^{\infty }{\left( \frac{1}{j}\int_{0}^{1}{x^{j-1}\ln (x)\,dx} \right)}=\sum\limits_{j=1}^{\infty }{\frac{1}{j^{3}}},$
as required. $\quad\blacksquare$
Last edited by Krizalid; August 4th 2009 at 06:25 AM.
First, use integration by parts with $u=\ln(x)\ln(1-x)$, $dv=\frac{dx}{x}$, so that $du=\frac{\ln(1-x)}{x}-\frac{\ln(x)}{1-x}$, $v=\ln{x}$, and
Now, let $u=-\ln(x)$: x=0 becomes $u=\infty$, x=1 becomes $u=0$, and $x=e^{-u}$, and $dx=-e^{-u}\,du$, so our right-hand integral becomes
$\int_0^{\infty}\frac{x^{s-1}}{e^x-1}\,dx=\Gamma(s)\zeta(s)$ (see here).
-Kevin C.
$\int_{0}^{1}{\frac{\ln (x)\ln (1-x)}{x}\,dx}=-\sum\limits_{j=1}^{\infty }{\left( \frac{1}{j}\int_{0}^{1}{x^{j-1}\ln (x)\,dx} \right)}$
I don't understand what you did here.
$\ln (1-x)=-\sum\limits_{j=1}^{\infty }{\frac{x^{j}}{j}},$ and then switch sum and integral.
you know the justification, and it's quite known and I won't do it.
August 3rd 2009, 08:01 PM #2
August 3rd 2009, 08:02 PM #3
Senior Member
Dec 2007
Anchorage, AK
August 3rd 2009, 08:31 PM #4
August 3rd 2009, 08:32 PM #5
August 4th 2009, 10:28 AM #6
August 4th 2009, 12:52 PM #7 | {"url":"http://mathhelpforum.com/calculus/96894-challenge-integral.html","timestamp":"2014-04-18T03:24:33Z","content_type":null,"content_length":"58333","record_id":"<urn:uuid:edbf02be-95cd-49a5-a498-8e82bdd9ddaa>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00268-ip-10-147-4-33.ec2.internal.warc.gz"} |
please help me with my economics h.w
November 3rd 2006, 07:10 PM
please help me with my economics h.w
hello, how is doing everybody
could somebody please give me a hand
please check this graph
the blue line represents the demand
the red line represents the marginal cost
and the orange line represents the marginal revenue
There's a theory in economics which says a monopolist will maximizes profits by choosing the level where marginal revenue equal marginal cost, marginal revenue is the change in total revenue as
result of producing one more unit of out, and marginal cost the change in total cost as result of producing one more unit of output.
the question is. If the goal of the monopolist is to maximize profit, how many units will it produce and at what price will charge each unit? ( according to the graph )
I really appreciate your help.
November 4th 2006, 03:23 AM
hello, how is doing everybody
could somebody please give me a hand
please check this graph
the blue line represents the demand
the red line represents the marginal cost
and the orange line represents the marginal revenue
There's a theory in economics which says a monopolist will maximizes profits by choosing the level where marginal revenue equal marginal cost, marginal revenue is the change in total revenue as
result of producing one more unit of out, and marginal cost the change in total cost as result of producing one more unit of output.
the question is. If the goal of the monopolist is to maximize profit, how many units will it produce and at what price will charge each unit? ( according to the graph )
I really appreciate your help.
If the monopolist prices his widget at $4 he will sell 3000 units, the cost
of producing the 3001-st unit will be an additional $4 which is equal to his
additional revenue (in fact slightly less than the additional revenue as it is
falling per unit as the sales rise), so it is not worth the effort of producing
that extra unit at that price.
At $4 per unit he could have sold 6000 units but the revenue per unit
at this level of sales would have been negative. | {"url":"http://mathhelpforum.com/math-topics/7172-please-help-me-my-economics-h-w-print.html","timestamp":"2014-04-20T22:09:31Z","content_type":null,"content_length":"6399","record_id":"<urn:uuid:c5514019-662d-4d16-a252-6afaca0c257a>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Supplying Conditions for Having up to 1000 Degress of Freedom in the Onset of Inflation, Instead of 2 to 3 Degrees of Freedom, Today, in Space-Time.
Authors: Andrew Beckwith
The following document attempts to answer the role additional degrees of freedom have as to initial inflationary cosmology. I.e. the idea is to cut down on the number of independent variables to get
as simple an emergent space time structure of entropy and its generation as possible. One parameter being initial degrees of freedom, the second the minimum allowed grid size in space time, and the
final parameter being emergent space time temperature. In order to initiate this inquiry, a comparison is made to two representations of a scale evolutionary Friedman equation, with one of the
equations based upon LQG, and another involving an initial Hubble expansion parameter with initial temperature T[Planck] ~ 10^19 GeV used as an input into T^4 times N(T). Initial assumptions as to
the number of degrees of freedom has for T[Planck] ~ 10^19 GeV a maximum value of N(T) ~ 10^3 . Making that upper end approximation for the value of permissible degrees of freedom is dependent upon a
minimum grid size length as of about l[Planck] ~ 10^33l centimeters. Should the minimum uncertainty grid size for space time be higher than l[Planck] ~ 10^33 centimeters, then top value degrees of
freedom of phase space as given by a value N(T) ~ 10^3 drops. In addition, the issue of bits, i.e. information is shown to not only have temperature dependence, but to be affected by minimum 'grid
size' as well. A bifurcation diagram argument involving Hemoltz free energy as a 'driver' to push through a transition from a prior universe to the present universe, with classical physics behavior
down to a grid size of l[Planck] ~ 10^33 centimeters ( i.e. start of quantum gravity effects) is employed to invoke use of classical physics down to l[Planck] ~ 10^33 centimeters. Subsequent chaotic
dynamics during the expansion phase driven by Helmholtz free energy leads to up to N(T) ~ 10^3 degrees of freedom at the start of the inflationary regime. The possibility this semi classical argument
for increase of the degrees of freedom up to N(T) ~ 10^3 is tied in with the possible emergence of space time E8 embedding is included as a speculative bonus. This is akin to Bogolyubov "spontaneous"
particle creation arguments outlined in the article.
Comments: 10 pages, 1 figure
Download: PDF
Submission history
[v1] 1 Sep 2010
[v2] 3 Sep 2010
[v3] 4 Sep 2010
Unique-IP document downloads: 64 times
Add your own feedback and questions here:
You are equally welcome to be positive or negative about any paper but please be polite. If you are being critical you must mention at least one specific error, otherwise your comment will be deleted
as unhelpful.
comments powered by | {"url":"http://vixra.org/abs/1009.0001","timestamp":"2014-04-17T18:29:50Z","content_type":null,"content_length":"9780","record_id":"<urn:uuid:d33b58b2-34cb-427d-a8b4-81ed658c3c3c>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00340-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
A card is selected from a standard deck of 52 playing cards. A standard deck of cards has 12 face cards and four Aces (Aces are not face cards). Find the probability of selecting • an odd prime
number under 10 given the card is a club. (1 is not prime.) • a Jack, given that the card is not a heart. • a King given the card is not a face card.
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/501d65b6e4b0be43870e2439","timestamp":"2014-04-17T01:11:24Z","content_type":null,"content_length":"25518","record_id":"<urn:uuid:82469fb4-91ea-4acc-8c42-f7b4cdf8ee52>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00223-ip-10-147-4-33.ec2.internal.warc.gz"} |
: Exponentiation
There are two ways to compute x^y in Python. You can write it as “x**y”. There is also a function that does the same thing:
pow(x, y)
For integer arithmetic, the function also has a three-argument form that computes x^y%z, but more efficiently than if you used that expression:
pow(x, y, z)
>>> 2**4
>>> pow(2,4)
>>> pow(2.5, 4.5)
>>> (2**9)%3
>>> pow(2,9,3) | {"url":"http://infohost.nmt.edu/tcc/help/pubs/python27/web/pow-function.html","timestamp":"2014-04-21T07:16:40Z","content_type":null,"content_length":"3854","record_id":"<urn:uuid:3338efaf-eb13-4ff1-b248-9abc055364ae>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00010-ip-10-147-4-33.ec2.internal.warc.gz"} |
e Math
1st to 8th Grade Math and Reading
• " AdaptedMind has my highest recommendation as a parent. "- Parent, Bay Area, CA
• " My daughter has so much fun doing math on AdaptedMind. "- Parent, United Kingdom
• " I've seen real improvement in his confidence and his math grades. "- Parent, Raleigh, NC
• " My daughter loves trying to earn the badges. "- Parent, New York, New York
• " [AdaptedMind] has such simple, easy to use progress reports. "- Parent, Bay Area, CA
• " Props for making this addictive! "- Parent, Iowa City, IA
• " It doesn't cease to amaze me that my son likes practicing math. "- Parent, College Station, TX
• " He's now getting an A in math! "- Parent, Dallas, TX
• 1st Grade Math
Counting, comparing numbers, addition, telling time, and more.
1st Grade Math
1st Grade Reading
• 2nd Grade Math
Addition, subtraction, coins, measurement, multiplication and more.
2nd Grade Math
2nd Grade Reading
• 3rd Grade Math
Subtraction, multiplication, division, geometry and more.
3rd Grade Math
3rd Grade Reading
• 4th Grade Math
Multiplication, division, factoring, fractions and more.
4th Grade Math
4th Grade Reading
• 5th Grade Math
Fractions, mixed numbers, decimals, algebra and more.
5th Grade Math
5th Grade Reading
• 6th Grade Math
Fractions, decimals, algebra, percents and more.
6th Grade Math
6th Grade Reading
• 7th Grade Math
Fractions, algebra, geometry.
7th Grade Math
• 8th Grade Math
Algebra, equations, systems of equations, lines.
8th Grade Math | {"url":"http://www.adaptedmind.com/Math-Worksheets.html?type=hstb","timestamp":"2014-04-21T10:40:45Z","content_type":null,"content_length":"22134","record_id":"<urn:uuid:ca681e6f-b5d2-4e05-8eb0-7f3cf9bba43e>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00373-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convert micromole/day to millimole/hour - Conversion of Measurement Units
›› Convert micromole/day to millimole/hour
›› More information from the unit converter
How many micromole/day in 1 millimole/hour? The answer is 24000.
We assume you are converting between micromole/day and millimole/hour.
You can view more details on each measurement unit:
micromole/day or millimole/hour
The SI derived unit for mole flow rate is the mole/second.
1 mole/second is equal to 86400000000 micromole/day, or 3600000 millimole/hour.
Note that rounding errors may occur, so always check the results.
Use this page to learn how to convert between micromoles/day and millimoles/hour.
Type in your own numbers in the form to convert the units!
›› Metric conversions and more
ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data.
Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres
squared, grams, moles, feet per second, and many more!
This page was loaded in 0.0027 seconds. | {"url":"http://www.convertunits.com/from/micromole/day/to/millimole/hour","timestamp":"2014-04-19T11:56:42Z","content_type":null,"content_length":"19975","record_id":"<urn:uuid:cbf0cd6a-79b0-47ee-8d5a-b6cd48088aca>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00183-ip-10-147-4-33.ec2.internal.warc.gz"} |
Finding Tangent Line and Normal of a parametric equation
January 13th 2012, 05:25 AM #1
Junior Member
Nov 2011
Finding Tangent Line and Normal of a parametric equation
For $\delta \mathbb{R}\rightarrow \mathbb{R}^2,\ t\ =\ \delta (t)\ =\ (cosh(2t-2),sin(2\pi t^2))$ find the tangent line and the normal line passing through $\delta(t_0),t_0\ =\ 1$
So i think the tangent line is $T\delta (t_0)\ =\ \delta' (t_0)s\ +\ \delta (t_0)$
However Im clueless as to how to find the normal line.
Last edited by CaptainBlack; January 14th 2012 at 02:31 AM. Reason: fix LaTeX
Re: Finding Tangent Line and Normal of a parametric equation
it's ok apparently, after pluggin in the to into the tangent equation to get Tdelta(to) = (x,y)s + (a,b) the normal line is simply Ndelta(to) = (-y,x)s + (a,b).
ps it was meant to say deltaR rightarrow R to the power of 2 not that weird symbol.
Re: Finding Tangent Line and Normal of a parametric equation
$\delta'(t_0)$ will be a two dimensional vector- the normal will be perpendicular to that so you just need a vector perpendicular to $\delta'(t_0)$. In two dimensions, that's easy. A normal to
the vector <a, b> is <b, -a>.
January 13th 2012, 06:12 AM #2
Junior Member
Nov 2011
January 14th 2012, 07:02 AM #3
MHF Contributor
Apr 2005 | {"url":"http://mathhelpforum.com/differential-geometry/195225-finding-tangent-line-normal-parametric-equation.html","timestamp":"2014-04-16T17:12:32Z","content_type":null,"content_length":"37210","record_id":"<urn:uuid:f852fd21-f487-4dba-83ee-fe7c3d674aaa>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00070-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is a Rectilinear Figure?
The basic elements of geometry which form its basis are points, lines & planes. As we know that a point is a dot made on a plane by some sharp, pointed object. It may be made on a paper as a hole
made with a sharp pin or with the tip of a pen or a pencil on the paper. We represent points by a capital letter. For example, point ‘P’ or just ‘P’. A full stop that we put at the end of each
sentence is an example of points.
A point, unlike other elements, has no length, no breadth, no thickness.
The next element is a line which can be drawn by joining any two points on a plane and extending in both directions. A line is straight and it extends infinitely in both the directions. Given a
single point, we can draw an infinite number of lines passing through it. Whereas, there is exactly one line passing through two given points.
Line Segment can be defined as a part of a line with two fixed ends.
Unlike a line, a line segment has a fixed length.
These line segments form the base of the different figures, that we draw in geometry. We also draw curves and some figures with the help of both, line segments and curves. Some of these figures are
rectilinear figures.
“Recti” means “straight” and “ linear” comes from “ line”. Thus, the figures drawn with the help of only line segments are called rectilinear figures. As we know that polygons are closed figures made
up of only line segments, all polygons such as square, triangle, pentagon, etc. are examples of rectilinear figures. Even 3 - dimensional solid shapes like cubes, cuboids, prisms, etc. are examples
of rectilinear figures. | {"url":"http://math.tutorcircle.com/geometry/what-is-a-rectilinear-figure.html","timestamp":"2014-04-20T00:37:35Z","content_type":null,"content_length":"19985","record_id":"<urn:uuid:bf05b5d5-bf71-45c5-926f-7b24ac870b32>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00530-ip-10-147-4-33.ec2.internal.warc.gz"} |
Intuitionistic type theory
From Wikipedia, the free encyclopedia
Intuitionistic type theory (also known as constructive type theory, or Martin-Löf type theory) is a type theory and an alternative foundation of mathematics based on the principles of mathematical
constructivism. Intuitionistic type theory was introduced by Per Martin-Löf, a Swedish mathematician and philosopher, in 1972. Martin-Löf has modified his proposal a few times; his 1971 impredicative
formulation was inconsistent as demonstrated by Girard's paradox. Later formulations were predicative. He proposed both intensional and extensional variants of the theory.
Intuitionistic type theory is based on a certain analogy or isomorphism between propositions and types: a proposition is identified with the type of its proofs. This identification is usually called
the Curry–Howard isomorphism, which was originally formulated for intuitionistic logic and simply typed lambda calculus. Type theory extends this identification to predicate logic by introducing
dependent types, that is types which contain values.
Type theory internalizes the interpretation of intuitionistic logic proposed by Brouwer, Heyting and Kolmogorov, the so-called BHK interpretation. The types in type theory play a similar role to sets
in set theory but functions definable in type theory are always computable.
Connectives of type theory
In the context of type theory a connective is a way of constructing types, possibly using already given types. The basic connectives of type theory are:
Main article:
Dependent type
Π-types, also called dependent product types, are analogous to the indexed products of sets. As such, they generalize the normal function space to model functions whose result type may vary on their
input. E.g. writing $\operatorname{Vec}({\mathbb R}, n)$ for the type of n-tuples of real numbers and $\mathbb N$ for the type of natural numbers,
$\Pi_{n \mathbin{:} {\mathbb N}} \operatorname{Vec}({\mathbb R}, n)$
stands for the type of a function that, given a natural number n, returns an n-tuple of real numbers. The usual function space arises as a special case when the range type does not actually depend on
the input, e.g., $\Pi_{n \mathbin{:} {\mathbb N}} {\mathbb R}$ is the type of functions from natural numbers to the real numbers, which is also written as ${\mathbb N} \to {\mathbb R}$.
Using the Curry–Howard isomorphism Π-types also serve to model implication and universal quantification: e.g., a term inhabiting $\Pi_{m, n \mathbin{:} {\mathbb N}} m + n = n + m$ is a function which
assigns to any pair of natural numbers a proof that addition is commutative for that pair and hence can be considered as a proof that addition is commutative for all natural numbers. (Here we have
used the equality type ($x = y$) as explained below.)
Σ-types, also called dependent sum types, are analogous to the indexed disjoint unions of sets. As such, they generalize the usual Cartesian product to model pairs where the type of the second
component depends on the first. For example, the type $\Sigma_{n \mathbin{:} {\mathbb N}} \operatorname{Vec}({\mathbb R}, n)$ stands for the type of pairs of a natural number $n$ and an $n$-tuple of
real numbers, i.e., this type can be used to model sequences of arbitrary but finite length (usually called lists). The conventional Cartesian product type arises as a special case when the type of
the second component doesn't actually depend on the first, e.g., $\Sigma_{n \mathbin{:} {\mathbb N}} {\mathbb R}$ is the type of pairs of a natural number and a real number, which is also written as
${\mathbb N} \times {\mathbb R}$.
Again, using the Curry–Howard isomorphism, Σ-types also serve to model conjunction and existential quantification.
Finite types
Of special importance are 0 or ⊥ (the empty type), 1 or ⊤ (the unit type) and 2 (the type of Booleans or classical truth values). Invoking the Curry–Howard isomorphism again, ⊥ stands for false and ⊤
for true.
Using finite types we can define negation as $eg A \equiv A \to \bot$.
Equality type
Given $a, b \mathbin{:} A$, the expression $a = b$ denotes the type of equality proofs for $a$ is equal to $b$. That is, if $a = b$ is inhabited, then $a$ is said to be equal to $b$. There is only
one (canonical) inhabitant of $a = a$ and this is the proof of reflexivity $\operatorname{refl} \mathbin{:} \Pi_{a \mathbin{:} A} a = a$.
Examination of the properties of the equality type, or rather, extending it to a notion of equivalence, lead to homotopy type theory.
Inductive types
A prime example of an inductive type is the type of natural numbers $\mathbb{N}$ which is generated by $0 \mathbin{:} {\mathbb N}$ and $\operatorname{succ} \mathbin{:} {\mathbb N} \to {\mathbb N}$.
An important application of the propositions as types principle is the identification of (dependent) primitive recursion and induction by one elimination constant: ${\operatorname{{\mathbb N}-elim}}
\mathbin{:} P(0) \to (\Pi_{n \mathbin{:} {\mathbb N}} P(n) \to P(\operatorname{succ}(n))) \to \Pi_{n \mathbin{:} {\mathbb N}} P(n)$ for any given type $P(n)$ indexed by $n \mathbin{:} {\mathbb N}$.
In general inductive types can be defined in terms of W-types, the type of well-founded trees.
An important class of inductive types are inductive families like the type of vectors $\operatorname{Vec}(A, n)$ mentioned above, which is inductively generated by the constructors $\operatorname
{vnil} \mathbin{:} \operatorname{Vec}(A, 0)$ and $\operatorname{vcons} \mathbin{:} A \to \Pi_{n \mathbin{:} {\mathbb N}} \operatorname{Vec}(A, n) \to \operatorname{Vec}(A, \operatorname{succ}(n))$.
Applying the Curry–Howard isomorphism once more, inductive families correspond to inductively defined relations.
An example of a universe is $\mathcal{U}_0$, the universe of all small types, which contains names for all the types introduced so far. To every name $a \mathbin{:} \mathcal{U}_0$ we associate a type
$\operatorname{El}(a)$, its extension or meaning. It is standard to assume a predicative hierarchy of universes: $\mathcal{U}_n$ for every natural number $n \mathbin{:} {\mathbb N}$, where the
universe $\mathcal{U}_{n+1}$ contains a code for the previous universe, i.e., we have $u_n \mathbin{:} \mathcal{U}_{n+1}$ with $\operatorname{El}(u_n) \equiv \mathcal{U}_n$. (A hierarchy with this
property is called "cumulative".)
Stronger universe principles have been investigated, i.e., super universes and the Mahlo universe. In 1992 Huet and Coquand introduced the calculus of constructions, a type theory with an
impredicative universe, thus combining type theory with Girard's System F. This extension is not universally accepted by Intuitionists since it allows impredicative, i.e., circular, constructions,
which are often identified with classical reasoning.
Formalisation of type theory
This formalization is based on the discussion in Nordström, Petersson, and Smith.
The formal theory works with types and objects.
A type is declared by:
An object exists and is in a type if:
Objects can be equal
and types can be equal
A type that depends on an object from another type is declared
and removed by substitution
• $B[x / a]$, replacing the variable $x$ with the object $a$ in $B$.
An object that depends on an object from another type can be done two ways. If the object is "abstracted", then it is written
and removed by substitution
• $b[x / a]$, replacing the variable $x$ with the object $a$ in $b$.
The object-depending-on-object can also be declared as a constant as part of a recursive type. An example of a recursive type is:
• $0 \mathbin{:} \mathbb{N}$
• $\operatorname{succ} \mathbin{:} \mathbb{N} \to \mathbb{N}$
Here, $\operatorname{succ}$ is a constant object-depending-on-object. It is not associated with an abstraction. Constants like $\operatorname{succ}$ can be removed by defining equality. Here the
relationship with addition is defined using equality and using pattern matching to handle the recursive aspect of $\operatorname{succ}$:
\begin{align} \operatorname{add} &\mathbin{:}\ (\mathbb{N} \times \mathbb{N}) \to \mathbb{N} \\ \operatorname{add}(0, b) &= b \\ \operatorname{add}(\operatorname{succ}(a), b) &= \operatorname
{succ}(\operatorname{add}(a, b))) \end{align}
$\operatorname{succ}$ is manipulated as an opaque constant - it has no internal structure for substitution.
So, objects and types and these relations are used to express formulae in the theory. The following styles of judgements are used to create new objects, types and relations from existing ones:
$\Gamma\vdash \sigma\ \mathsf{Type}$ σ is a well-formed type in the context Γ.
$\Gamma\vdash t \mathbin{:} \sigma$ t is a well-formed term of type σ in context Γ.
$\Gamma\vdash \sigma \equiv \tau$ σ and τ are equal types in context Γ.
$\Gamma\vdash t \equiv u \mathbin{:} \sigma$ t and u are judgmentally equal terms of type σ in context Γ.
$\vdash \Gamma\ \mathsf{Context}$ Γ is a well-formed context of typing assumptions.
By convention, there is a type that represents all other types. It is called $\mathcal{U}$ (or $\operatorname{Set}$). Since $\mathcal{U}$ is a type, the member of it are objects. There is a dependent
type $\operatorname{El}$ that maps each object to its corresponding type. In most texts $\operatorname{El}$ is never written. From the context of the statement, a reader can almost always tell
whether $A$ refers to a type, or whether it refers to the object in $\mathcal{U}$ that corresponds to the type.
This is the complete foundation of the theory. Everything else is derived.
To implement logic, each proposition is given its own type. The objects in those types represent the different possible ways to prove the proposition. Obviously, if there is no proof for the
proposition, then the type has no objects in it. Operators like "and" and "or" that work on propositions introduce new types and new objects. So $A \times B$ is a type that depends on the type $A$
and the type $B$. The objects in that dependent type are defined to exist for every pair of objects in $A$ and $B$. Obviously, if $A$ or $B$ has no proof and is an empty type, then the new type
representing $A \times B$ is also empty.
This can be done for other types (booleans, natural numbers, etc.) and their operators.
Categorical models of type theory
Using the language of category theory, R.A.G. Seely introduced the notion of a locally cartesian closed category (LCCC) as the basic model of type theory. This has been refined by Hofmann and Dybjer
to Categories with Families or Categories with Attributes based on earlier work by Cartmell.
A category with families is a category C of contexts (in which the objects are contexts, and the context morphisms are substitutions), together with a functor T : C^op → Fam(Set).
Fam(Set) is the category of families of Sets, in which objects are pairs (A,B) of an "index set" A and a function B: X → A, and morphisms are pairs of functions f : A → A' and g : X → X' , such that
B' [°] g = f [°] B - in other words, f maps B[a] to B'[g(a)].
The functor T assigns to a context G a set Ty(G) of types, and for each A : Ty(G), a set Tm(G,A) of terms. The axioms for a functor require that these play harmoniously with substitution.
Substitution is usually written in the form Af or af, where A is a type in Ty(G) and a is a term in Tm(G,A), and f is a substitution from D to G. Here Af : Ty(D) and af : Tm(D,Af).
The category C must contain a terminal object (the empty context), and a final object for a form of product called comprehension, or context extension, in which the right element is a type in the
context of the left element. If G is a context, and A : Ty(G), then there should be an object (G,A) final among contexts D with mappings p : D → G, q : Tm(D,Ap).
A logical framework, such as Martin-Löf's takes the form of closure conditions on the context dependent sets of types and terms: that there should be a type called Set, and for each set a type, that
the types should be closed under forms of dependent sum and product, and so forth.
A theory such as that of predicative set theory expresses closure conditions on the types of sets and their elements: that they should be closed under operations that reflect dependent sum and
product, and under various forms of inductive definition.
Extensional versus intensional
A fundamental distinction is extensional vs intensional type theory. In extensional type theory definitional (i.e., computational) equality is not distinguished from propositional equality, which
requires proof. As a consequence type checking becomes undecidable in extensional type theory. This is because relying on computational equality means that the equality depends on computations that
could be Turing complete in general and thus the equality itself is undecidable due to the halting problem. Some type theories enforce the restriction that all computations be decidable so that
definitional equality may be used.
In contrast in intensional type theory type checking is decidable, but the representation of standard mathematical concepts is somewhat complex, since extensional reasoning requires using setoids or
similar constructions. It is a subject of current discussion whether this tradeoff is unavoidable and whether the lack of extensional principles in intensional type theory is a feature or a bug.
Implementations of type theory
Type theory has been the base of a number of proof assistants, such as NuPRL, LEGO and Coq. Recently, dependent types also featured in the design of programming languages such as ATS, Cayenne,
Epigram and Agda.
See also
Further reading
External links
Academic areas | {"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=Intuitionistic_type_theory","timestamp":"2014-04-19T02:42:13Z","content_type":null,"content_length":"139954","record_id":"<urn:uuid:bc190e9e-7c78-4fcc-a63d-24c9e5bac07d>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00360-ip-10-147-4-33.ec2.internal.warc.gz"} |
number of zeroes in 100 factorial.
up vote 15 down vote favorite
I was on math.stackexchange the other day and i found a question that said How many zeroes are there in 100!. I quickly factored it out and said that there where 24 zeroes. However thats only the
trailing zeroes (as the person who asked the question quickly pointed out). As the days passed no one answered the question. My question is the following: is there a method to figure it out without
having to compute the whole answer? I initially thought that it could be solved using many divisibility properties but I didn't figure anything out.
soft-question nt.number-theory
4 For 100!, one can, as you say, simply compute it. But if we define $Z(n)$ to be the number of non-trivial zeros in the decimal expansion of $n!$, i.e., ignoring the ones coming from the factors of
5 in $n!$, then I don't see any easy way to compute, or even accurately estimate, $Z(n)$. Is it interesting to ask how, aside from the trivial trailing zeros, the digits of $n!$ are distributed?
It's not so hard, I think, to show that the leading digits are Benford distributed, i.e., log uniform. – Joe Silverman Jul 13 '12 at 2:57
Hello Joe Silverman. By that do you mean that there is no way of determining the zeroes in 100! without computing the whole number? – user4140 Jul 13 '12 at 3:02
3 Here is a link to a proof that the leading digits of $n!$ satisfy Benford's law : williams.edu/go/math/sjmiller/public_html/BrownClasses/197/… I think Benford also conjectured that the frequency
of a given digit in $n!$ tends to $1/10$ when $n \to \infty$, but this is probably out of reach... – François Brunault Jul 13 '12 at 7:55
2 This is sequence oeis.org/A027869 , which has no interesting citations. – Kevin O'Bryant Jul 20 '12 at 19:10
2 For the "tricks" that GMP uses in computing factorial, see page 105 of gmplib.org/gmp-man-5.0.5.pdf – Robert Israel Jul 20 '12 at 20:10
show 5 more comments
4 Answers
active oldest votes
Using well known approximations for the length and number of trailing zeroes of n!, and making the reasonable assumption that the inside zeros appear with frequency $\frac{1}{10}$, we
get the following approximation of the total number of zeros, t, in n!:
$t = \lfloor \frac{1}{10}(\frac{\log (2 \Pi n)}{2}+n\log (\frac{n}{e})- \frac{n}{4}+ \log(n)) + \frac{n}{4} - log(n)\rfloor $
Which simplifies to:
$t = \lfloor \frac{n (9 \ln (10)-4)+4 (n-9) \ln (n)+2 \ln(2 \Pi n)}{40 \ln(10)} \rfloor$
up vote 4 down This approximation seems to work well for n up to at least 10,000.
vote accepted
100!, with digit length 158, has less inside zeroes, 6, with 24 trailing, than the normal expectation for a total of 30, with t=36.
98! is "zero-perfect", i.e. inside zeroes appear with exactly frequency $1/10$, with actual total zero count 35 and $t = 35$
Other examples of zero-perfect factorials are: 1009!, 1097!, 1112!, 2993!, 6128!, ....
There appears to be a strong correlation of n having only 0-3 prime factors in {2, 3, 5} if n! is zero-perfect. Uneven n is often a prime number if n! is zero-perfect.
Note that certain numbers have predictableffects. 100! has 2 'more zeros than 99!, which in turn is likely to "kill off" many zeros present in 98!. Can you give a theorem which says
how the number of zeros in x and in 99 times x are related? Gerhard "Ask Me About System Design" Paseman, 2012.07.20 – Gerhard Paseman Jul 21 '12 at 3:47
@HalfdanFaber where did you get this, please answer, that some nice solution(i mean WOW! just want to know HOW?).+1, icannot vote more or i could have made +100.Waiting for reply. –
Shobhit Aug 27 '13 at 22:31
Thanks, @sShobhit. You give me too much credit, though. As you can see, I have just taken a known estimate for the total length, subtracted from this a known estimate for the number
of trailing zeros, multiplied this by 1/10, to obtain an estimate for the inside zeros, and then added the same estimate for the trailing zeros to obtain a total estimate. – Halfdan
Faber Sep 8 '13 at 4:36
As can be seen elsewhere, the number of trailing zeros has an exact closed form solution. My numerical error analysis for the above estimates showed periodic behavior for the
trailing zeros, and chaotic behavior for the inside zeros. With respect to the inside zeros, I suspect they are out of reach, similar in complexity to the distribution of prime
numbers. – Halfdan Faber Sep 8 '13 at 4:54
add comment
It is unlikely. There are ways to compute the nth digit of certain numbers in certain bases (for example, pi in base 16) without having to compute the entire number, but in most situations,
the number or formula for it either has very special properties (e.g. 101*10^n) in order to answer the question, or the work done to answer the question is tantamount to calculating the
number, writing it down, and counting the digits. Not only do I know of no way to answer the question otherwise, I will wager a small amount of money that no such nice way will posted here
up vote 3 for the next 2 years.
down vote
Gerhard "Willing To Formalize The Bet" Paseman, 2012.07.12
2 Well, that's not completely true. You would save a couple of minutes simply by eliminating 2^24*5^24 from the multiplication. – user4140 Jul 13 '12 at 3:08
The time saved is dependent and the speed of t and capacity of the multiplier. I agree that it has potential for simplifying the computation. One could also multiply prime powers
together. It still smells to me like computing most if not all of the factorial first. Gerhard "Will Lower Yhe Wager Though" Paseman, 2012.07.13 – Gerhard Paseman Jul 13 '12 at 17:25
I think you can say some things about the distribution of digits of some large products of small factors. Keep track of the distribution of subsequences of digits of a length larger than
the largest factor. This distribution for $n$ almost determines the distribution for $n*a_i$ for $a_i$ small. However, you lose some control as you have more terms, and for $n!$ it looks
like keeping track of the distribution of subsequences is as hard as multiplying the whole number out. – Douglas Zare Jul 13 '12 at 18:18
add comment
EDIT: this doesn't really work. I'm still a good human being.
Evenly enough, it seems possible to get the number of zeros in the binary expansion of $n!$
One can get a fairly accurate expression for $$\log_2 \; n! = \frac{\log \; n!}{\log 2}$$ from using extra terms in Stirling's formula. Taking the floor of that and adding 1 gives the
total number of digits in base two..
up vote 2 Legendre's formula $$ v_2(n!) = \left\lfloor \frac{n}{2} \right\rfloor + \left\lfloor \frac{n}{4} \right\rfloor + \cdots $$ has a companion,
down vote
$$ v_p(n!) = \frac{n - S_p(n)}{p-1} $$ where $S_p(n)$ is the sum of the digits when $n$ is written in base $p.$ As all the digits in a base two expansion are $1,$ we find that $S_2(n)$ is
simply the count of 1's in the base two expansion of $n.$
Alright, some people, who shall remain nameless, have attempted to cast aspersions on the reputation of your humble servant, pointing out that the number of ones in the binary expansion
of $n!$ is not the same as the number of ones in the binary expansion of $n$ itself. I try so hard. Don't change the light bulb, I'll just sit here in the dark.
1 That would be nice if it were the count of $1$'s in the base two expansion of $n!$. – Robert Israel Jul 13 '12 at 3:51
1 Well, crap. ${}{}{}{}{}$ – Will Jagy Jul 13 '12 at 3:57
Maybe we can do something with $v_p(n!!)$ – SJR Jul 13 '12 at 7:21
add comment
Given a prime $p$, it occurs $m$ times in the prime factorization of $n!$, where
m = $ \lfloor \frac{n}{p} \rfloor + \lfloor \frac{n}{p^2} \rfloor + \lfloor \frac{n}{p^3} \rfloor + \cdots $
up vote -2 down vote This is explained in many number theory books. Apply this for $p=2$ and $p=5$, and use the value you get for 5, since this occurs many fewer times.
Well, this is just the simple answer for the trailing zeroes. I know no way to get ALL the zeroes. – Tom Dickens Jul 13 '12 at 2:57
add comment
Not the answer you're looking for? Browse other questions tagged soft-question nt.number-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/102092/number-of-zeroes-in-100-factorial/102102","timestamp":"2014-04-21T02:11:12Z","content_type":null,"content_length":"84136","record_id":"<urn:uuid:5131c727-1589-48d2-adc7-f6bb64c4c1ea>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00027-ip-10-147-4-33.ec2.internal.warc.gz"} |
• Login
• Register
• Forget
Challenger of the Day
Ann Theressa
Time: 00:02:06
Placed User Comments (More)
Lekshmi Narasimman MN
5 Days ago
Thanks ton for this site . This site is my main reason for clearing cts written which happend on 5/4/2014 in chennai . Tommorrw i have my interview. Hope i will tel u all a good news :)
Thanks to almighty too :) !!
abhinay yadav
10 Days ago
thank you M4maths for such awesome collection of questions. last month i got placed in techMahindra. i prepared for written from this site, many question were exactly same as given here.
bcz of practice i finished my written test 15 minutes before and got it.
thanx allot for such noble work...
15 Days ago
coz of this site i cud clear IBM's apti nd finally got placed in tcs
thanx m4maths...u r a wonderful site :)
17 Days ago
thank u m4maths and all its user for posting gud and sensible answers.
Nilesh singh
20 Days ago
finally selected in TCS. thanks m4maths
22 Days ago
Thank you team m4maths.Successfully placed in TCS.
Deepika Maurya
22 Days ago
Thank you so much m4maths.. I cleared the written of IBM.. :) very good site.. thumps up !!
Rimi Das
1 Month ago
Thanks to m4maths I got selected in Tech Mahindra.I was preparing for TCS 1st round since last month.Got interview call letter from there also...Really m4maths is the best site for placement
Stephen raj
1 Month ago
prepare from r.s.aggarwal verbal and non verbal reasoning and previous year questions from m4maths,indiabix and chetanas forum.u can crack it.
Stephen raj
1 Month ago
Thanks to m4maths:)
cracked infosys:)
1 Month ago
i have been Selected in Tech Mahindra.
All the quanti & reasoning questions are common from the placement papers of m4maths.
So a big thanks to m4maths team & the people who shares the placement papers.
Amit Das
1 Month ago
I got selected for interview in TCS.Thank you very much m4maths.com.
1 Month ago
I got placed in TCS :)
Thanks a lot m4maths :)
Syed Ishtiaq
1 Month ago
An Awesome site for TCS.
Cleared the aptitude.
1 Month ago
I successfully cleared TCS aptitude test held on 8th march 2014.Thanks a lot m4maths.com
plz guide for the technical round.
mounika devi mamidibathula
1 Month ago
got placed in IBM..
this site is very useful, many questions repeated..
thanks alot to m4maths.com
Anisha Lakhmani
1 Month ago
I got placed at infosys.......thanx to m4maths.com.......a awesum site......
Anisha Lakhmani
1 Month ago
I got placed at infosys.......thanx to m4maths.com.......a awesum site......
Kusuma Saddala
1 Month ago
Thanks to m4maths, i have place at IBM on feb 8th of this month
2 Months ago
thanks to m4 maths because of this i clear csc written test
mahima srivastava
2 Months ago
Placed at IBM. Thanks to m4maths. This site is really very helpful. 95% questions were from this site.
Surya Narayana K
2 Months ago
I successfully cleared TCS aptitude test.Thanks a lot m4maths.com.
Surya Narayana K
2 Months ago
I successfully cleared TCS aptitude test.Thanks a lot m4maths.com.
prashant gaurav
2 Months ago
Got Placed In Infosys...
Thanks of m4maths....
3 Months ago
iam not placed in TCS...........bt still m4maths is a good site.
4 Months ago
Thanx to m4 maths, because of that i able to crack aptitude test and now i am a part of TCS. This site is best for the preparation of placement papers.Thanks a lotttttt............
4 Months ago
THANKS a lot m4maths. Me and my 2 other roomies cleared the tcs aptitude with the help of this site.Some of the questions in apti are exactly same which i answered without even reading the whole
question completely.. gr8 work m4maths.. keep it up.
5 Months ago
m4maths is one of the main reason I cleared TCS aptitude. In TCS few questions will be repeated from previous year aptis and few questions will be repeated from the latest campus drives that happened
in various other colleges. So to crack TCS apti its enough to learn some basic concepts from famous apti books and follow all the TCS questions posted in m4maths. This is not only for TCS but for all
other companies too. According to me m4maths is best site for clearing apti. Kuddos to the creator of m4maths :)
5 Months ago
THANKS A LOT TO M4MATHS.due to m4maths today i am the part of TCS now.got offer letter now.
5 Months ago
Hai friends, I got placed in L&T INFOTECH and i m visiting this website for the past 4 months.Solving placemetn puzzles from this website helped me a lot and 1000000000000s of thanks to this
website.this website also encouraged me to solve puzzles.follw the updates to clear maths aps ,its very easy yar, surely v can crack it if v follow this website.
5 Months ago
2 days before i cleared written test just because of m4maths.com.thanks a lot for this community.
6 Months ago
thanks for m4maths!!! bcz of which i cleared apti of infosys today.
6 Months ago
Today my written test of TCS was completed.I answered many of the questions without reading entire question.Because i am one of the member in the m4maths.
No words to praise m4maths.so i simply said thanks a lot.
7 Months ago
I am very grateful to m4maths. It is a great site i have accidentally logged on when i was searching for an answer for a tricky maths puzzle. It heped me greatly and i am very proud to say that I
have cracked the written test of tech-mahindra with the help of this site. Thankyou sooo much to the admins of this site and also to all members who solve any tricky puzzle very easily making people
like us to be successful. Thanks a lotttt
Abhishek Ranjan
7 Months ago
me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :)
do practice n u'll surely succeed :)
Sandhya Pallapu
1 year ago
Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site helped me a lot in achieving
this...thanks to M4MATHS!
vivek singh
2 years ago
I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through TCS......special thanks to
admin for creating such a superb community
Manish Raj
2 years ago
this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE
Asif Neyaz
2 years ago
Thanku M4maths..due to u only, imade to TCS :D test on sep 15.
Harini Reddy
2 years ago
Big thanks to m4maths.com.
I cracked TCS..The solutions given were very helpful!!!
2 years ago
HI everyone ,
me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been able to place in TCS... and thanks
to all the users who clearly solved the problems.. im very greatful to you :)
2 years ago
Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially...
2 years ago
Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that
the success rate of M4Math grow exponentially.
Again Thanx for all support given by M4Math during
my preparation for TCS.
and Best of LUCK for all students for their preparation.
2 years ago
thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :)
2 years ago
thousands of thnx to m4maths...
got selected in tcs for u only...
u were the only guide n i hv nvr done group study for TCS really feeling great...
thnx to all the users n team of m4maths...
3 cheers for m4maths
2 years ago
thousands of thnx to m4maths...
got selected in tcs for u only...
u were the only guide n i hv nvr done group study for TCS really feeling great...
thnx to all the users n team of m4maths...
3 cheers for m4maths
2 years ago
Thank U ...I'm placed in TCS.....
Continue this g8 work
2 years ago
thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus
raghu nandan
2 years ago
thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :)
V.V.Ravi Teja
3 years ago
thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........
Veer Bahadur Gupta
3 years ago
got placed in TCS ...
thanku m4maths...
Amulya Punjabi
3 years ago
Hi All,
Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the help of m4maths.......it's just an
osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students.
Anusha Alva
3 years ago
a big thnks to this site...got placed in TCS!!!!!!
Oindrila Majumder
3 years ago
thanks a lot m4math.. placed in TCS
Pushpesh Kashyap
3 years ago
superb site, i cracked tcs
Saurabh Bamnia
3 years ago
Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME.........
Gautam Kumar
3 years ago
it was really useful 4 me.................n finally i managed to get through TCS
Karthik Sr Sr
3 years ago
i like to thank m4maths, it was very useful and i got placed in tcs
Lekshmi Narasimman MN 5 Days ago Thanks ton for this site . This site is my main reason for clearing cts written which happend on 5/4/2014 in chennai . Tommorrw i have my interview. Hope i will tel u
all a good news :) Thanks to almighty too :) !!
abhinay yadav 10 Days ago thank you M4maths for such awesome collection of questions. last month i got placed in techMahindra. i prepared for written from this site, many question were exactly same
as given here. bcz of practice i finished my written test 15 minutes before and got it. thanx allot for such noble work...
manasi 15 Days ago coz of this site i cud clear IBM's apti nd finally got placed in tcs thanx m4maths...u r a wonderful site :)
arnold 17 Days ago thank u m4maths and all its user for posting gud and sensible answers.
Nilesh singh 20 Days ago finally selected in TCS. thanks m4maths
MUDIT 22 Days ago Thank you team m4maths.Successfully placed in TCS.
Deepika Maurya 22 Days ago Thank you so much m4maths.. I cleared the written of IBM.. :) very good site.. thumps up !!
Rimi Das 1 Month ago Thanks to m4maths I got selected in Tech Mahindra.I was preparing for TCS 1st round since last month.Got interview call letter from there also...Really m4maths is the best site
for placement preparation...
Stephen raj 1 Month ago prepare from r.s.aggarwal verbal and non verbal reasoning and previous year questions from m4maths,indiabix and chetanas forum.u can crack it.
Stephen raj 1 Month ago Thanks to m4maths:) cracked infosys:)
Ranadip 1 Month ago i have been Selected in Tech Mahindra. All the quanti & reasoning questions are common from the placement papers of m4maths. So a big thanks to m4maths team & the people who
shares the placement papers.
Amit Das 1 Month ago I got selected for interview in TCS.Thank you very much m4maths.com.
PRAVEEN K H 1 Month ago I got placed in TCS :) Thanks a lot m4maths :)
Syed Ishtiaq 1 Month ago An Awesome site for TCS. Cleared the aptitude.
sara 1 Month ago I successfully cleared TCS aptitude test held on 8th march 2014.Thanks a lot m4maths.com plz guide for the technical round.
mounika devi mamidibathula 1 Month ago got placed in IBM.. this site is very useful, many questions repeated.. thanks alot to m4maths.com
Anisha Lakhmani 1 Month ago I got placed at infosys.......thanx to m4maths.com.......a awesum site......
Kusuma Saddala 1 Month ago Thanks to m4maths, i have place at IBM on feb 8th of this month
sangeetha 2 Months ago thanks to m4 maths because of this i clear csc written test
mahima srivastava 2 Months ago Placed at IBM. Thanks to m4maths. This site is really very helpful. 95% questions were from this site.
Surya Narayana K 2 Months ago I successfully cleared TCS aptitude test.Thanks a lot m4maths.com.
prashant gaurav 2 Months ago Got Placed In Infosys... Thanks of m4maths....
vishal 3 Months ago iam not placed in TCS...........bt still m4maths is a good site.
sameer 4 Months ago Thanx to m4 maths, because of that i able to crack aptitude test and now i am a part of TCS. This site is best for the preparation of placement papers.Thanks a
Sonali 4 Months ago THANKS a lot m4maths. Me and my 2 other roomies cleared the tcs aptitude with the help of this site.Some of the questions in apti are exactly same which i answered without even
reading the whole question completely.. gr8 work m4maths.. keep it up.
Kumar 5 Months ago m4maths is one of the main reason I cleared TCS aptitude. In TCS few questions will be repeated from previous year aptis and few questions will be repeated from the latest campus
drives that happened in various other colleges. So to crack TCS apti its enough to learn some basic concepts from famous apti books and follow all the TCS questions posted in m4maths. This is not
only for TCS but for all other companies too. According to me m4maths is best site for clearing apti. Kuddos to the creator of m4maths :)
YASWANT KUMAR CHAUDHARY 5 Months ago THANKS A LOT TO M4MATHS.due to m4maths today i am the part of TCS now.got offer letter now.
ANGELIN ALFRED 5 Months ago Hai friends, I got placed in L&T INFOTECH and i m visiting this website for the past 4 months.Solving placemetn puzzles from this website helped me a lot and
1000000000000s of thanks to this website.this website also encouraged me to solve puzzles.follw the updates to clear maths aps ,its very easy yar, surely v can crack it if v follow this website.
MALLIKARJUN ULCHALA 5 Months ago 2 days before i cleared written test just because of m4maths.com.thanks a lot for this community.
Madhuri 6 Months ago thanks for m4maths!!! bcz of which i cleared apti of infosys today.
DEVARAJU 6 Months ago Today my written test of TCS was completed.I answered many of the questions without reading entire question.Because i am one of the member in the m4maths. No words to praise
m4maths.so i simply said thanks a lot.
PRATHYUSHA BSN 7 Months ago I am very grateful to m4maths. It is a great site i have accidentally logged on when i was searching for an answer for a tricky maths puzzle. It heped me greatly and i am
very proud to say that I have cracked the written test of tech-mahindra with the help of this site. Thankyou sooo much to the admins of this site and also to all members who solve any tricky puzzle
very easily making people like us to be successful. Thanks a lotttt
Abhishek Ranjan 7 Months ago me & my rooom-mate have practiced alot frm dis site TO QUALIFY TCS written test.both of us got placed in TCS :) IT'S VERY VERY VERY HELPFUL N IMPORTANT SITE. do practice
n u'll surely succeed :)
Sandhya Pallapu 1 year ago Hai friends! this site is very helpful....i prepared for TCS campus placements from this site...and today I m proud to say that I m part of TCS family now.....dis site
helped me a lot in achieving this...thanks to M4MATHS!
vivek singh 2 years ago I cracked my first campus TCS in November 2011...i convey my heartly thanks to all the members of m4maths community who directly or indirectly helped me to get through
TCS......special thanks to admin for creating such a superb community
Manish Raj 2 years ago this is important site for any one ,it changes my life...today i am part of tcs only because of M4ATHS.PUZZLE
Asif Neyaz 2 years ago Thanku M4maths..due to u only, imade to TCS :D test on sep 15.
Harini Reddy 2 years ago Big thanks to m4maths.com. I cracked TCS..The solutions given were very helpful!!!
portia 2 years ago HI everyone , me and my friends vish,sube,shaf placed in TCS... its becoz of m4maths only .. thanks a lot..this is the wonderful website.. unless your help we might not have been
able to place in TCS... and thanks to all the users who clearly solved the problems.. im very greatful to you :)
vasanthi 2 years ago Really thanks to m4maths I learned a lot... If you were not there I might not have been able to crack TCS.. love this site hope it's reputation grows exponentially...
vijay 2 years ago Hello friends .I was selected in TCS. Thanx to M4Maths to crack apti. and my hearthly wishes that the success rate of M4Math grow exponentially. Again Thanx for all support given by
M4Math during my preparation for TCS. and Best of LUCK for all students for their preparation.
maheswari 2 years ago thanks to M4MATHS..got selected in TCS..thanks for providing solutions to TCS puzzles :)
GIRISH 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of
m4maths... 3 cheers for m4maths
girish 2 years ago thousands of thnx to m4maths... got selected in tcs for u only... u were the only guide n i hv nvr done group study for TCS really feeling great... thnx to all the users n team of
m4maths... 3 cheers for m4maths
Aswath 2 years ago Thank U ...I'm placed in TCS..... Continue this g8 work
JYOTHI 2 years ago thank you m4maths.com for providing a web portal like this.Because of you only i got placed in TCS,driven on 26/8/2011 in oncampus
raghu nandan 2 years ago thanks a lot m4maths cracked TCS written n results are to be announced...is only coz of u... :)
V.V.Ravi Teja 3 years ago thank u m4maths because of you and my co people who solved some complex problems for me...why because due to this only i got placed in tcs and hcl also........
Veer Bahadur Gupta 3 years ago got placed in TCS ... thanku m4maths...
Amulya Punjabi 3 years ago Hi All, Today my result for TCS apti was declared nd i cleared it successfully...It was only due to m4maths...not only me my all frnds are able to crack it only wid the
help of m4maths.......it's just an osum site as well as a sure shot guide to TCS apti......Pls let me know wt can be asked in the interview by MBA students.
Anusha Alva 3 years ago a big thnks to this site...got placed in TCS!!!!!!
Oindrila Majumder 3 years ago thanks a lot m4math.. placed in TCS
Pushpesh Kashyap 3 years ago superb site, i cracked tcs
Saurabh Bamnia 3 years ago Great site..........got Placed in TCS...........thanx a lot............do not mug up the sol'n try to understand.....its AWESOME.........
Gautam Kumar 3 years ago it was really useful 4 me.................n finally i managed to get through TCS
Karthik Sr Sr 3 years ago i like to thank m4maths, it was very useful and i got placed in tcs
Latest User posts (More)
Maths Quotes (More)
""The study of mathematics, like the Nile, begins in minuteness but ends in magnificence"" Charles caleb cotton
"There will be a mathematics in each and every problem you solve." Jyothish K R
"Measure what is measurable, and make measurable what is not so." Galileo Galilei
"My maths book suicided because it had lots of problems in it ......" Ruchi.das
"The highest form of pure thought is in mathematics" Plato
"Mathematics is the supreme judge; from its decisions there is no appeal." Tobias Dantzig
"If a problem arise in your mind take help your best friend "mathematics." yash singh
Latest Placement Puzzle (More)
"If 73+46=42,
95+87=57, than
UnsolvedAsked In: SSC
"Abhishek purchased 140 shirts and 250 trousers @ Rs. 450/- and @ Rs. 550/respectively. What should be the overall average selling price of shirts and trousers so that 40% profit is earned? (Rounded
off to next integer).
a) Rs. 725/-
b) Rs. 710/-
c) Rs. 720/-
d) Rs. 700/-
e) None of these"
UnsolvedAsked In: Bank Exam
"PIGEON : PEACE
a)Olive Oil : Enmity
b) Eagle : Friendship
c) Whiteflag : Surrender
d)Roses : Garden
e)Ring : Engagement
UnsolvedAsked In: Tech Mahindra
""The study of mathematics, like the Nile, begins in minuteness but ends in magnificence"" Charles caleb cotton
"There will be a mathematics in each and every problem you solve." Jyothish K R
"Measure what is measurable, and make measurable what is not so." Galileo Galilei
"My maths book suicided because it had lots of problems in it ......" Ruchi.das
"The highest form of pure thought is in mathematics" Plato
"Mathematics is the supreme judge; from its decisions there is no appeal." Tobias Dantzig
"If a problem arise in your mind take help your best friend "mathematics." yash singh
"Abhishek purchased 140 shirts and 250 trousers @ Rs. 450/- and @ Rs. 550/respectively. What should be the overall average selling price of shirts and trousers so that 40% profit is earned? (Rounded
off to next integer). a) Rs. 725/- b) Rs. 710/- c) Rs. 720/- d) Rs. 700/- e) None of these" UnsolvedAsked In: Bank Exam
"PIGEON : PEACE a)Olive Oil : Enmity b) Eagle : Friendship c) Whiteflag : Surrender d)Roses : Garden e)Ring : Engagement " UnsolvedAsked In: Tech Mahindra
3i-infotech (285) Accenture (258) ADITI (46) Athenahealth (38)
CADENCE (30) Capgemini (227) CMC (29) Cognizant (42)
CSC (462) CTS (811) Dell (41) GENPACT (503)
Google (29) HCL (119) Hexaware (67) Huawei (39)
IBM (1160) Infosys (1612) L&T (58) Microsoft (41)
Miscellaneous C (149) Oracle (38) Patni (193) Sasken (25)
Self (26) Syntel (433) TCS (6579) Tech Mahindra (143)
Wipro (1073)
HR Interview (2)
Logical Reasoning (104)
Age Blood Coding Making General Letter Logical Mathematical Missing Number Seating
Problem Relations Decoding Cryptography and Direction Mental Arrangement Letter Series Sequences Reasoning Character Series Arrangement
(1) (20) (11) (6) Problem Sense (7) Ability (4) (1) (5) (5) (2) (14) (2)
Solving (21)
Numerical Ability (602)
Clocks Permutation Profit Sequence Simple & Time Time
Age Area and Arithmetic Averages and Co-ordinate Complex Data Data Geometry LCM and Number Percentage and Probability and Quadratic Ratio and and Compound and Distance Trigonometry
Problem Algebra (32) Volume (17) (24) (8) Calendars geometry Numbers (3) Interpretation Sufficiency (10) HCF (13) System (19) Combination (51) Loss Equations Proportion Series Interest Work and (5)
(46) (7) (2) (11) (2) (135) (42) (17) (5) (20) (26) (1) (47) Speed
Programming (10)
Database Definition Program (2) Technical
(3) (1) (3)
Verbal Ability (14)
Antonyms Miscellaneous Sentence Spotting Synonyms
(6) (4) Arrangement Errors (1) (2)
Here you can share maths puzzles, comments and their answers, which helps you to learn and understand each puzzle's answer in detail. If you have any specific puzzle which is not on the site, use
"Ask Puzzle" (2nd tab on the left side). KEEP AN EYE: By this feature you can bookmark your Favorite Puzzles and trace these puzzles easily in your next visit. Click here to go your Keep an EYE (0)
puzzles. If you face any interview then please submit your interview experience here and also you can read others sucess story and experience InterView Experience(19). | {"url":"http://www.m4maths.com/placement-puzzles.php?ISSOLVED=&page=35&LPP=10&SOURCE=tcs&MYPUZZLE=","timestamp":"2014-04-17T12:58:20Z","content_type":null,"content_length":"145553","record_id":"<urn:uuid:874be661-15bd-4a9a-bd64-36c300a9225d>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00351-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modern logic
Modern logic: an introduction
LibraryThing Review
User Review - AlexTheHunn - LibraryThing
A good no-frills handbook of logic and methods of reasoning and deduction. Read full review
Sets and Syllogisms 17
Truth Table Logic 59
Derivation of Deductive Systems 116
5 other sections not shown
Bibliographic information | {"url":"http://books.google.com/books?id=MGx7AAAAMAAJ&q=column&dq=related:ISBN9122019146&source=gbs_word_cloud_r&cad=6","timestamp":"2014-04-20T08:28:12Z","content_type":null,"content_length":"109562","record_id":"<urn:uuid:2bddc0a1-7cc3-4274-a075-daa553907a4b>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Period doubling bifurcations
Period doubling bifurcation for real quadratic maps
For c < 1/4, after the tangent bifurcation, the x[1] fixed point of the quadratic map (the left intersection of f[c] and the green line) stays attracting while its multiplier |λ[1] | < 1 . For c
< -3/4 it is λ[1] = 1 - (1 - 4c)^1/2 < -1 so the fixed point becomes repelling and an attracting period-2 orbit appears. This phenomenon is called the period doubling bifurcation. To watch the
birth of the period-2 orbit consider the two-fold iterate f^ o2(x) = f(f(x)) of the quadratic map. It is evident that two fixed points of f[c] are the fixed points of f[c]^o2. Moreover as since for
these points (f[c]^o2)' = (f[c]')^2, therefore for -3/4 < c < 1/2 the left fixed point of f[c]^o2 is attracting (to the left below). At c = -3/4 it loses stability and two new intersections of f[c]^
o2 with the green line simultaneously appear (see to the right). These x[3], x[4] points make a period-2 orbit. When this cycle appears f^ o2(x[3])' = f^ o2(x[4])' = 1 and the slope decrease as c is
decreased. A period doubling bifurcation is also known as a flip bifurcation, as since the period two orbit flips from side to side about its period one parent orbit. This is because f ' = -1 .
At c =-5/4 the cycle becomes unstable and a stable period-4 orbit appears. Period doubling bifurcation is called also the pitchfork bifurcation (see below).
Period doubling bifurcation on complex plane
Pictures below illustrate this process on complex plane "the birth" scheme
While c is changed from c = 0 to c = -3/4 (inside the main cardioid) attractor z[1] moves from 0 to the parabolic point p = -1/2 (with multiplier λ = -1 ). Two points of an unstable period 2 orbit
z[3,4] = -1/2 +- t i, t = (3/4 + c)^1/2 (t is real and positive).
Therefore they move towards the point p too from above and below. At c = -3/4 attractor meets the repelling orbit and they merge into one parabolic point. Further, for c < -3/4, since c leaves the
main cardioid, attractor turns into repeller and as c gets into the biggest (1/2) bulb the unstable period-2 orbit becomes attracting with two points
z[3,4] = -1/2 +- t, t = (-3/4 - c)^1/2. The right picture above is disconnected Cantor dust. We get it if we will go up after crossing the p point.
Animation (350+350 and 350+450 pixels movies)
Contents Previous: Tangent bifurcations Next: Period trippling bifurcations
updated 12 June 2003 | {"url":"http://ibiblio.org/e-notes/MSet/Orbit2.htm","timestamp":"2014-04-20T05:52:51Z","content_type":null,"content_length":"7056","record_id":"<urn:uuid:73d62098-d537-4611-9baf-2ab5d9703388>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00237-ip-10-147-4-33.ec2.internal.warc.gz"} |
A New Model of Universal Reality
The four-dimension space-time continuum is the perfect model to describe Newtonian physical reality. However, the advent of quantum theory with its shattering description of sub-atomic particle
behavior as a discontinuous process has limited the space-time model to a special case of universal reality.
The initial work of Planck, Heisenberg, Bohr, and others over the last one hundred years has ruptured the foundations of classic Newtonian physics.
However, it was not until Dr. J.S. Bell introduced his theorem of universal connectivity that I was finally encouraged to postulate a new physical reality model of the universe. Bell’s theorem
mathematically proves that if the predictions of quantum physics are correct, then our understanding of the physical reality of the Universe is limited. The predictions of quantum physics have been
proven correct.
The Model for a Five-Dimensional Universe
My hypothesis is that the universe is a five-dimensional mass/motion-matter/time continuum as contrasted to our general understanding of a four-dimensional space-time reality concept.
The fifth dimension is identified as that region beginning at the interface of the sub-atomic particle domain (at the interface barrier). The fifth dimension is characterized as that region
containing all the mass in any functional universal five-dimensional system. The fifth dimension is a displacement phenomenon from our four-dimensional space-time continuum. The fifth dimension is
displaced from our space-time continuum by the physical border that separates sub-particle phenomenon (mass) from cellular, organic particles (matter). These corresponding dimensional realities are
inter-relational and provide the connected pathways for the functioning and the understanding of the universe.
The five-dimensional mass/motion-matter/time continuum operates in accordance with the statistical predictions of quantum theory.
My five-dimensional model would then modify our present space-time continuum model to a massless, unified energy field. This unified energy field is expanding outwardly at a rate proportional to the
mass content that generated it. This expanding space-time energy field provides our observed perception of universal reality. In contrast, the fifth dimension mass/motion region is expanding
inwardly, in a reaction direction, at the same rate as the space-time energy field is expanding outwardly.
Matter does not exist in the fifth dimension. Matter (both organic and inorganic) is a four-dimensional manifestation of the mass particle content available from the fifth dimension. Matter is the
resultant implicit derivation of fifth-dimensional mass into our four-dimension energy field. The four-dimensional energy field provides the receptacle/container through which mass particles from the
fifth dimension are derived into matter. Matter defines our space-time perception (our sensors) of reality.
The model further implies that the mass particle content (fifth dimension) is the generator and the fuel provider that functionally operates our universal systems.
Motion is to mass as time is to our unified energy field (derived matter). Time is an indivisible space-time continuum concept that does not exist in the fifth dimension. Time is a special case of
precise motion in the space-time model. Time is special in that it records the progress/history of any organic/inorganic operational universal system. Time records that activity in a forward,
unidirectional progression.
Precise, constant mass particle motion is the fundamental, absolute requirement for the operation and regeneration of the universe. Fifth dimension motion is a constant operational phenomenon that
only indirectly relates to our concept of time. Motion is not a directional concept in my model. In the fifth dimension, a directional or forward concept of precise motion has no significance.
Comments on the Five-Dimension Model
* The concept of the duality of light as wave or particle is consistent with energy waves (space-time) and mass particles (fifth dimension) reality.
* Light is fundamentally dependent on the mass/motion of particles. Light cannot exist without precise fifth dimension mass particle motion. Conversely, time is directly dependent on light. Time
cannot exist without fifth dimension precise motion.
*Light is a constant, continuous mass/motion particle phenomenon. The model suggests that light is the complex interaction of mass and energy operating in a functional “local” universe. Light is
always directly related to the precise motion of fifth dimension mass particle operation.
* The model hypothesizes that the expansion coefficient of the fifth dimension region is proportional to its total mass content. As a corollary, the extent of the expansion of the space-time energy
field is proportional to the energy transfer from fifth dimension total mass content.
* The generally understood space-time concept of ether as an infinitely elastic, massless medium for the propagation of electromagnetic waves is hypothetically the same ether that propagates mass
particles in a relativistically infinite fifth dimension.
* The model hypothesizes that the DNA/RNA complex molecular structure is the conduit through which fifth-dimensional metaphysical energy (M-energy) is introduced into cognitive, organic matter.
Example of the Model
The five-dimension model implies that the universe can be considered as a cluster of “localized” mass/motion-energy field units that are operational in their own individual time sequences.
Consider the phenomenon of the black hole in space. The five-dimension model would theorize that the black hole condition was the result of a collapsing fifth dimension (highly compressed particle
mass) in conjunction with a collapsing space-time energy field. The effect of this condition in a full collapsed state (potential condition of solid mass) would be devastating. Fifth dimension motion
and its corresponding four-dimensional counterpart time would cease. The condition of particle mass motion ceasing would create a collapsed mass/motion-energy situation that would be an absolute
universal impossibility – and would be unsustainable. A condition of “expectant regeneration” would initiate an explosion proportional to the mass contained in the black hole.
At this moment of “expectant regeneration,” constant motion would begin, and directional time begins to record the progress of this new “localized” universe.
The fundamental requirement for the expectant regeneration of any black hole is the cessation of fifth dimension particle motion along with its relational time component. The black hole is primarily
only a potential vehicle of regeneration.
The five-dimensional model simply hypothesizes a regenerative process based on localized mass centers. The model does not preclude the possibility that some single event occurred where all universal
mass was located at a single node and the “big bang” resulted.
However, the five-dimensional model would require that the “big bang” was initiated by the fifth dimension, where no collapsing energy field or any space-time continuum entity could possibly have
pre-existed. The “big bang” concept is an unlikely theoretical possibility in relation to the five-dimensional model, but it is primarily a complex conundrum.
May 16, 2007 | {"url":"http://www.kennethlaplante.com/2010/11/five-dimensional-model-of-universe.html","timestamp":"2014-04-20T05:42:36Z","content_type":null,"content_length":"69202","record_id":"<urn:uuid:900ee855-4be9-4865-b90a-ca10addc7394>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00414-ip-10-147-4-33.ec2.internal.warc.gz"} |
assuming - compute the value of an expression under assumptions
Calling Sequence
expression assuming property
expression assuming x1::property1, x2::property2, ...
expression assuming additionally, x1::property1, x2::property2, ...
expression - valid expression or Maple input to be evaluated
property - name: property
x1, x2, ... - names; names in or referenced by expression
property1, property2, ... - names; property1 is the property assumed on x1, property2 is the property assumed on x2, ...
additionally - (optional); computes placing the received assumptions on x1, x2, ... without discarding previous assumptions on these variables
Basic Information
• This help page contains complete information about the assuming command. For basic information on the assuming command, see the assuming help page.
• The expression assuming property calling sequence evaluates the expression under the assumption property on all names in expression. This is similar to the assume=property option of the simplify
• The expression assuming x1::property1, x2::property2, ... calling sequence evaluates the expression under the assumption(s) property1, property2, ... on the name(s) x1, x2, ..., respectively. By
default, previously existing assumptions on x1, x2, and so forth, if any, are ignored when computing the result. To override this behavior use the optional argument 'additionally', which can be
placed anywhere to the right of assuming - see the Examples section of this help page.
• The output is the same as that received by successively doing the following (exceptions noted further below).
1. Calling assume (to enter assumptions on names).
2. Entering (and so evaluating under the assumptions) the expression depending on these names.
3. Removing the assumptions.
The use of the assuming routine simplifies the process to one step. Thus, it facilitates experiments concerning the value of the expression under different assumptions.
• To perform computations under assumptions, the assuming routine identifies the names that are assumed to have special properties, then replaces them with equivalent names that have these
properties. The evaluation uses these new names in the computation, then restores the original names before returning the output. This process ensures that no assumptions are made to the original
names in the expression. Therefore, the computations performed using assuming do not affect computations performed before or after calling assuming.
• assuming is a left-associative operator with precedence between `, ` and `:=`.
Notes: The assuming command does not place assumptions on integration or summation dummy variables in definite integrals and sums, nor in limit or product dummy variables, because all these
variables already have their domain restricted by the integration, summation or product range or by the method used to compute a limit. To obtain the simplification of the expression being summed,
integrated or subject to a product taking into account the restriction on the values of the dummy variable implicit in the integration/summation/product range, use the simplify command -- see the
Examples section of this help page.
The assuming command does not scan Maple programs regarding the potential presence of assumed variables. To compute taking into account assumptions on variables found in the bodies of programs, use
assume -- see the Examples section.
The simplify command can be called with assumptions.
The following is an example of the optional argument 'additionally'.
Originally x, renamed x~:
is assumed to be: RealRange(-infinity,Open(1))
As default behavior, the existing assumption is disregarded when computing using `assuming` with assumptions on .
Using 'additionally', is taken into account in addition to the already existing assumption , leading to:
With or without the optional argument 'additionally', any assumptions previously existing on the variables are preserved.
Originally x, renamed x~:
is assumed to be: RealRange(-infinity,Open(1))
Without assumptions, the ODE above is solved in terms of a piecewise function. Assuming is positive:
Assuming is positive, the ODE has the following solution.
This solution can be tested (see odetest) under assumptions.
Computing with quoted objects under assuming occurs normally. If quotation marks are placed around , the information that is ignored.
The assuming command does not place assumptions on integration or summation variables in definite integrals and sums. So, in
no assumptions are placed on the dummy variable .
To obtain the simplification of the integrand taking into account that use simplify.
You can call assuming with assuming as argument (nested computation under different assumptions). In doing so, note assuming is left-associative, so for instance in
first simplify(sqrt(r^2*s^2*t^2)) assuming real is computed then the result evaluated assuming .
The assuming command does not scan programs regarding the presence of assumed variables; for that purpose use assume. Consider
> f := proc(x) sqrt(a^2)+x end proc; # a Maple procedure (program)
The variable a is inside the body of f; the assumption that is not effectively used when computing .
For these purposes, use assume.
See Also
about, abs, assume, assuming, diff, dsolve, Int, irem, limit, ln, Maple initialization file, odetest, operators/precedence, proc, simplify, sin, sqrt, type, value
Was this information helpful?
Please add your Comment (Optional)
E-mail Address (Optional)
What is This question helps us to combat spam | {"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=assuming/details","timestamp":"2014-04-21T14:46:39Z","content_type":null,"content_length":"246022","record_id":"<urn:uuid:727b0515-7de7-4e29-81c6-4c3737eae1a9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00269-ip-10-147-4-33.ec2.internal.warc.gz"} |
Got Homework?
Connect with other students for help. It's a free community.
• across
MIT Grad Student
Online now
• laura*
Helped 1,000 students
Online now
• Hero
College Math Guru
Online now
Here's the question you clicked on:
help with intervals with word problem
• one year ago
• one year ago
Your question is ready. Sign up for free to start getting answers.
is replying to Can someone tell me what button the professor is hitting...
• Teamwork 19 Teammate
• Problem Solving 19 Hero
• Engagement 19 Mad Hatter
• You have blocked this person.
• ✔ You're a fan Checking fan status...
Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy.
This is the testimonial you wrote.
You haven't written a testimonial for Owlfred. | {"url":"http://openstudy.com/updates/5120fb5be4b06821731cdfdb","timestamp":"2014-04-19T17:18:38Z","content_type":null,"content_length":"192400","record_id":"<urn:uuid:a4852948-5777-48c4-8dd8-81022635a6bd>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00658-ip-10-147-4-33.ec2.internal.warc.gz"} |
Courses/CS 332F/Recursion engines
Running a recursive function on a recursion engine
factorial is the prototypical recursive function.
factorial n
| (n==0) = 1
| otherwise = n * (factorial (n-1))
Like many recursive functions factorial has the following form.
recursiveFunction x
| termCond x = termFn x
| otherwise = reduceFn (mapFn x) (recursiveFunction (wayAheadFn x))
These are the correspondances.
termCond: (n==0). The termination condition is to stop when n is 0.
termFn: 1. The termination function always returns 1.
mapFn : n. The map function takes the input and converts it to something to be used later by the reduce function. In this case the conversion is just the identity.
wayAheadFn : n-1. The way-ahead function converts the input into the form to be passed to the recursive call. In this case, subtract 1.
reduceFn : n * (factorial (n-1)). The reduce function applies a function to (a) what the mapFn left and (b) what the recursive call returned. In this case the function is multiplication.
We would like to make the recursiveFunction template above available so that one can define a recursive function by providing the components functions.
One way to do that is to nest the recursiveFunction template within a context into which we can plug in the various functions.
recursionEngine termCond termFn reduceFn mapFn wayAheadFn =
where recursiveFunction y
| termCond y = termFn y
| otherwise = reduceFn (mapFn y) (recursiveFunction (wayAheadFn y))
recursionEngine termCond termFn reduceFn mapFn wayAheadFn =
let recursiveFunction y
| termCond y = termFn y
| otherwise = reduceFn (mapFn y) (recursiveFunction (wayAheadFn y))
in recursiveFunction
In either of these formulations the recursiveFunction template performs the recursion using the functions passed to the recursionEngine.
Using this recursion engine, factorial would look like this.
factorial' =
(==0) -- The termination condition
(\_ -> 1) -- Return 1
(*) -- Performs the multiplication of (n * factorial (n-1)).
id -- This keeps the n for the multiplication above.
(+(-1)) -- This generates (n-1) for the next recursive call.
-- Can't write just (-1), which is taken as a value,
-- or ((-)1), which is taken as the function (\x -> 1 - x).
zip example
Here's an example of converting zip from traditional recursion to using the recursion engine.
This is the traditional definition of zip.
zip [] _ = []
zip _ [] = []
zip (x:xs) (y:ys) = (x, y) : (zip xs ys)
Since zip takes two arguments and the recursion engine expects to operate on one argument we have to package the two arguments together into a single bundle. A simple way to do that is to put the two
arguments in a tuple. Here's what zip would look like if we did it that way.
zipPair ([], _) = []
zipPair (_, []) = []
zipPair ((x:xs), (y:ys)) = (x, y) : (zipPair (xs, ys))
> :t zipPair
zipPair :: ([t1], [t2]) -> [(t1, t2)]
zipPair takes a pair of lists and produces a list of pairs. We could always define myZip to call zipPair.
myZip xs ys = zipPair (xs, ys)
Now we can extract the pieces of zipPair and pass them to the recursionEngine.
zip' xs ys =
(\(x, y) -> null x || null y) -- termCond. Terminate if either list is []
(\_ -> []) -- termFn. If we terminate return []
(:) -- reduceFn. Rebuild the list
(\(x:_, y:_) -> (x, y)) -- mapFn. Create a pair from the heads.
(\(_:xs, _:ys) -> (xs, ys)) -- wayAheadFn. The recursion operates on the tails.
(xs, ys)
zip' has the same type as zip.
> :t zip'
zip' :: [a1] -> [a2] -> [(a1, a2)]
GCD example
Here's an example of converting the greatest-common-divisor function from traditional recursion to the use of the recursion engine.
This is a traditional gcd algorithm. I'm calling it gcd_1 to distinguish it from the gcd that's exported by the Prelude.
gcd_1 x y
| x `mod` y == 0 = y
| otherwise = gcd_1 y (x `mod` y)
> :t gcd_1
gcd_1 :: (Integral a) => a -> a -> a
Since the recursion engine only works on functions that take a single parameter, let's convert gcd_1 into a function of 1 parameter. We could use a tuple, but for practice, let's create a data type
to hold the two pieces.
data GCD_Package = GCD_Package Integer Integer deriving (Show)
The creates a data type GCD_Package, which stores two Integer values. In this case we are using the type name as a constructor. Recall that other constructor names can be used.
Here is the same function as gcd_1 but using the GCD_Package.
gcd_2 (GCD_Package x y)
| x `mod` y == 0 = y
| otherwise = gcd_2 (GCD_Package y (x `mod` y))
> :t gcd_2
gcd_2 :: GCD_Package -> Integer
What are the components.
termCond: x `mod` y == 0
termFn: Return y
reduceFn: After the call to gcd_2 we simply return the result.
No additional processing is done.
mapFn: No mapfn because we don't have to produce anything for the reduceFn.
wayAheadFn: GCD_Package y (x `mod` y). This is the argument passed to
the recursive call of gcd_2.
Let's make well-defined functions out of these and plug them into the recursionEngine.
gcd_3 =
(\(GCD_Package x y) -> x `mod` y == 0) -- termCond
(\(GCD_Package x y) -> y) -- termFn
(\_ gcd -> gcd) -- reduceFn
id -- mapFn
(\(GCD_Package x y) -> GCD_Package y (x `mod` y)) -- wayAheadFn
> :t gcd_3
gcd_3 :: GCD_Package -> Integer
Except for the reduceFn, each of the other functions takes one argument. In this case that argument is (GCD_Package x y).
The reduceFn is different. It takes two arguments. The first is what was produced by the mapFn; the second is what was returned from the recursive call. The recursionEngine makes the recursive call
for you. You don't have to do it explicitly. You just have to write the wayAheadFn, which generates the argument to the recursive call. And you have to write the reduceFn which processes the result
of the recursive call along with the result of the mapFn.
Tail recursion
In the gcd algorithm the reduceFn does nothing except pass back the result of the recursive call. The mapFn also does nothing since the reduceFn never looks at what it produces. Algorithms with this
property are called tail recursive. The term implies that the last thing the algorithm does is to make a recursive call. When the recursive call completes, the result is just passed back.
Since tail recursive algorithms have no need for either a mapFn or a reduceFn we can write a tailRecursionEngine without them. The wayAheadFn does all the real work.
tailRecursionEngine termCond termFn wayAheadFn =
where tailRecursiveFn y
| termCond y = termFn y
| otherwise = (tailRecursiveFn (wayAheadFn y))
The tailRecursionEngine is like the recursionEngine except that it has no reduceFn and no mapFn.
Here's how the gcd algorithm would be expressed in terms of the tailRecursionEngine.
gcd_4 =
(\(GCD_Package x y) -> x `mod` y == 0) -- termCond
(\(GCD_Package x y) -> y) -- termFn
(\(GCD_Package x y) -> GCD_Package y (x `mod` y)) -- wayAheadFn
> :t gcd_4
gcd_4 :: GCD_Package -> Integer
> gcd_4 (GCD_Package 33 36)
Tracing tail recursion
It's often useful to trace the sequence of values generated during the tail recursion process. Instead of simply applying the wayAheadFn over and over again, one can also keep track of those values.
Let's define a function trace that applies another function f, our wayAheadFn, to a starting input over and over.
trace f x = x : trace f (f x)
For example,
> take 10 (trace (*2) 1)
In fact, Haskell already has such a trace function available. It's called iterate.
> take 10 (iterate (*2) 1)
Let's trace gcd, i.e., apply iterate to the wayAheadFn. To make the results easier to see, instead of GCD_Package, we'll use a simple pair.
> take 10 (iterate (\(x, y) -> (y, (x `mod` y))) (15,39))
[(15,39),(39,15),(15,9),(9,6),(6,3),(3,0),(0,*** Exception: divide by zero
We got a divide by zero exception when we attempted to take 3 `mod` 0.
To avoid that let's define myMod and try again.
myMod _ 0 = 0
myMod x y = mod x y
> take 10 (iterate (\(x, y) -> (y, (x `myMod` y))) (15,39))
A utility function that's useful in this conext this is takeUntil. takeUntil is like takeWhile except:
1. It takes values until its argument function holds rather than while its argument function holds.
2. It returns not only a list of values but also the value for which the argument function first holds.
It can be defined as follows.
takeUntil termCond (x:xs)
| termCond x = (x, [])
| otherwise = (z, x:zs) where (z, zs) = takeUntil termCond xs
Or, equivalently
takeUntil termCond (x:xs)
| termCond x = (x, [])
| otherwise = let (z, zs) = takeUntil termCond xs in (z, x:zs)
> :t takeUntil
takeUntil :: (a -> Bool) -> [a] -> (a, [a])
> takeUntil (> 10) [1 .. ]
takeUntil solves the problem we saw above of getting an exception when we attempt to apply the wayAheadFn when the termination condition holds.
Applying takeUntil with both the termCond and the wayAheadFn from gcd produces this result.
> takeUntil (\(x, y) -> x `mod` y == 0) (iterate (\(x, y) -> (y, (x `mod` y))) (15, 39))
The pair (6, 3) satisfies the termCond; the list [(15,39),(39,15),(15,9),(9,6)] contains the sequence of pairs generated by the wayAheadFn from the starting point (15,39) that precede (6, 3).
Tail recursion as iteration
As the preceding shows, tail recursion is really iteration disguised as recursion.
• The wayAheadFn is the body of the iterative loop.
• The argument (in this example (GCD_Package x y)) is a package that contains (all) the variables used in the loop.
• The termCond looks at the variables and determines when to stop.
• The termFn looks at the variables and constructs an answer. In this case it just picks one of the variables.
This suggests the following definition for a tailRecursionEngine with trace.
tailRecursionEngineTrace termCond termFn wayAheadFn x =
(termFn termValue, termValue, list)
where (termValue, list) = takeUntil termCond (iterate wayAheadFn x)
To illustrate, let's define yet another version of gcd. Again we'll use pairs instead of GCD_Package.
gcd_5 x y =
(\(x, y) -> x `mod` y == 0) -- termCond
(\(x, y) -> y) -- termFn
(\(x, y) -> (y, (x `mod` y))) -- wayAheadFn
(x, y)
> gcd_5 15 39
The result returned is a triple:
(<function value>, <value terminating the iteration>, <list of preceding values>)
This suggests another formulation for the tailRecursionEngine itself.
tailRecursionEngine termCond termFn wayAheadFn x =
termFn (fst (takeUntil termCond (iterate wayAheadFn x)))
tailRecursionEngine termCond termFn wayAheadFn x =
termFn . fst $ takeUntil termCond (iterate wayAheadFn x)
In other words, iterate the wayAheadFn (the body of the loop) until the termCond holds. Then apply the termFn to extract the result.
Applying gcd_4 with this definition of the tailRecursionEngine produces the same answer as before.
> gcd_4 15 39
Rather than rely on takeUntil, we can do this more directly by defining iterateUntil. iterateUntil applies the wayAheadFn to its argument until the termCond holds. It also checks to see if the
termCond holds on the argument as initially given. iterateUntil returns the value for which termCond first holds.
iterateUntil termCond wayAheadFn x =
head (dropWhile (not . termCond) (iterate wayAheadFn x))
Or equivalently
iterateUntil termCond wayAheadFn =
head . (dropWhile (not . termCond)) . (iterate wayAheadFn)
Except that the parameters are reversed
is really the old familiar
do <body>
until <termCond>
The tailRecursionEngine can be defined in terms of iterateUntil. The tailRecursionEngine simply applies the termFn to the result returned by iterateUntil.
tailRecursionEngine termCond termFn wayAheadFn x = termFn (iterateUntil termCond wayAheadFn x)
Or equivalently,
tailRecursionEngine termCond termFn wayAheadFn = termFn . iterateUntil termCond wayAheadFn
Or, expanding iterateUntil
tailRecursionEngine termCond termFn wayAheadFn =
termFn . head . (dropWhile (not . termCond)) . (iterate wayAheadFn)
Tail recursion, Haskell, and efficiency
Tail recursion is usually considered more efficient that standard recursion. With tail recursion there is no need to keep a stack, which must be unwound when the recursion ends.
But since Haskell is lazy, tail recursion is not always more efficient than standard recursion.
The recursionEngine as a combination of map and reduce
We just saw that we can give an iterative description of tail recursion. This section shows how to give an iterative description of standard recursion.
Think about how the recursionEngine works. It does four things.
1. It repeatedly calls the wayAheadFn until it finds a value for which the termCond holds.
2. In the process it leaves behind a trail of values produced by applying the mapFn to the values for which the termCond did not hold.
3. It applies the termFn to the value for which the termCond holds.
4. It uses foldr to apply the reducFn to the list of mapped values from step 2. It uses the result from step 3 as the starting point.
(The tailRecursionEngine also does 1 and 3, but it doesn't do 2 or 4.)
Given takeUntil and iterate, we can define recursionAsMapReduce as follows.
recursionAsMapReduce termCond termFn reduceFn mapFn wayAheadFn x =
let (y, ys) = takeUntil termCond (iterate wayAheadFn x)
in foldr reduceFn (termFn y) (map mapFn ys)
Or equivalently
recursionAsMapReduce termCond termFn reduceFn mapFn wayAheadFn x =
foldr reduceFn (termFn y) (map mapFn ys)
where (y, ys) = takeUntil termCond (iterate wayAheadFn x)
The let or where expression iterates the wayAheadFn until termCond holds. It also captures the value for which termCond holds along with a list of the previously generated values.
The foldr expression first maps the mapFn to the list captured as the second element of the tuple in the let or where portion. Then it repeatedly applies the reduceFn to those values, starting with
the value produced by applying the termFm to the value for which the termCond held.
Using recursionAsMapReduce we have defined recursion as a sequence of two iterative processes: iterate and foldr.
Alternative formulation
We can let the first iteration—that done by takeUntil—do more of the work. Let's define mapUntil so that it does not only what takeUntil does, but it also applies the mapFn to the values left behind.
mapUntil termCond mapFn (x:xs)
| (termCond x) = (x, [])
| otherwise = (z, (mapFn x):zs) where (z, zs) = (mapUntil termCond mapFn xs)
Or, equivalently:
mapUntil termCond mapFn (x:xs)
| (termCond x) = (x, [])
| otherwise = let (z, zs) = (mapUntil termCond mapFn xs) in (z, (mapFn x):zs)
Now recursionAsMapReduce can be defined as follows.
recursionAsMapReduce termCond termFn reduceFn mapFn wayAheadFn x =
let (y, ys) = mapUntil termCond mapFn (iterate wayAheadFn x)
in foldr reduceFn (termFn y) ys
Or, equivalently
recursionAsMapReduce termCond termFn reduceFn mapFn wayAheadFn x =
foldr reduceFn (termFn y) ys
where (y, ys) = mapUntil termCond mapFn (iterate wayAheadFn x)
The let or where portion still does the first iteration process. But now it also does the mapping. The foldr then does the second iteration process. It applies the termFn to the value found by
termCond and uses that as the starting point when it applies the reduceFn to what mapUntil set up for it.
factorial using recursionAsMapReduce
We can define factorial using recursionAsMapReduce.
factorial =
(==0) -- termCond
(\_ -> 1) -- termFn
(*) -- reduceFn
id -- mapFn
(\n -> n-1) -- wayAheadFn
The functions are the same as we used when defining factorial using the recursionEngine.
Let's look at how the execution works. Suppose we call
> factorial 5
Substituting (==0) as the termCond, id as the mapFn, and (\n -> n-1) as the wayAheadFn we generate
> mapUntil (==0) id (iterate (\n -> n-1) 5)
First, iterate repeatedly applies the function it gets as its first argument to its second argument, generating and infinite list. For example:
> take 9 (iterate (\n -> n-1) 5)
Since Haskell is lazy, the list elements aren't generated until they are needed. Therefore they are generated only until (==0), the termCond, holds.
Since the mapFn is id, the list of returned elements is the same as the list of elements generated until termCond holds.
This is what we get.
> mapUntil (==0) id (iterate (\n -> n-1) 5)
Next, substituting (\_ -> 1) for termFn, (*) for reduceFn, and [5,4,3,2,1] for ys we call
> foldr (*) ((\_ -> 1) 0) [5,4,3,2,1]
First of all, ((\_ -> 1) 0) returns 1 as the foldr starting element.
Then we use foldr to repeatedly apply the reduceFn to the mapped list and the starting element. So we are making this call.
> foldr (*) 1 [5,4,3,2,1]
Multiple recursive calls
The recursionEngine works well for functions that make a single recursive call. What about functions that make multiple recursive calls. quicksort is an example.
quicksort [] = []
quicksort (x:xs) = (quicksort smaller) ++ (x: (quicksort larger))
where smaller = [y | y <- xs, y <= x]
larger = [y | y <- xs, y > x]
The problem we face is that for each recursive call to quicksort, two additional recursive calls are made: one to sort the smaller elements and one to sort the larger elements.
One solution is for the recursiveFn of the recursionEngine to map itself over a list of elements instead of calling itself on just one.
Here are the original recursionEngine and an extendedRecursionEngine. The differences are indicated in italics.
recursionEngine termCond termFn reduceFn mapFn wayAheadFn x =
recursiveFn x
where recursiveFn y
| termCond y = termFn y
| otherwise = reduceFn (mapFn y) (recursiveFn (wayAheadFn y))
extendedRecursionEngine termCond termFn reduceFn mapFn wayAheadFn x =
recursiveFn x
where recursiveFn y
| termCond y = termFn y
| otherwise = reduceFn (mapFn y) (map recursiveFn (wayAheadFn y))
The only difference between the extendedRecursionEngine and the original recursionEngine is that the recursive call is map recursiveFn (wayAheadFn y) instead of recursiveFn (wayAheadFn y).
As far as using the extendedRecursionEngine is concerned
1. the wayAheadFn must return a list of elements instead of just a single element—although there is nothing to prevent the list from being a singleton. It may even be an empty list, which will end
the recursion without relying on termCond.
2. the reduceFn must now expect its second argument to be a list of results.
Given this extendedRecursionEngine we can define quicksort as follows.
quicksort =
null -- Stop on the empty list.
id -- If we get an empty list return it.
-- The reduceFn expects a list of two sorted
-- lists. It concatenates them with x in between.
(\x [smaller, larger]-> smaller ++ (x:larger))
head -- Save the head for the reduce step.
-- The wayAheadFn generates a list of two lists: the
-- elements smaller than or equal to the head and those larger.
(\(x : xs) -> [[y | y <- xs, y <= x], [y | y <- xs, y > x]])
Multiple recursive calls when processing trees
Suppose we define a Tree data type that allows multiple children.
data Tree a = Node a [Tree a] | EmptyTree deriving (Show, Eq)
How might we write a function to find the height of a tree? Here's the traditional approach.
height EmptyTree = 0
height (Node _ subtrees) = 1 + maximum (0:(map height subtrees))
Notice that although height is recursive—the recursion occurs when height is mapped onto the children—there is no clause to end the recursion. (The clause for EmptyTree is never called if the
argument Tree is not empty.) But the recursion terminates anyway. How does that happen?
If a Tree has no children, the recursion will be mapped over an empty list of children. Since the list is empty, that ends the recursion. That's also the reason maximum is applied to a list with a
pre-pended 0. If the recursive call is mapped to an empty list of children, the result will be an empty list.
How would we write height using the extendedRecursionEngine?
First let's define an accessor that retrieves the list of children.
children (Node _ subtrees) = subtrees
Now we can define height as follows.
height =
(\tree -> tree == EmptyTree) -- termCond and termFn are used only
(\_ -> 0) -- for the EmptyTree.
(\_ heights -> -- The reduceFn ignores what the mapFn
1 + maximum (0:heights)) -- function generates and simply adds 1 to
-- the maximum of the heights of the children.
id -- Ignore the value at the Node.
children -- Do recursion on all the children.
> height (Node 1 -- Manually formatted
[Node 11
[Node 111 [],
Node 112 []
Node 12 []
> height EmptyTree
Making the extended recursion engine iterative
We can't linearlize an algorithm with multiple recursive calls in the body. But we can separate the expansion phase—the use of the wayAheadFn from the reduce phase—the use of the reduceFn.
extendedRecursionEngine termCond termFn reduceFn mapFn wayAheadFn x =
reduceTree reduceFn termCond termFn mapFn (buildTree termCond wayAheadFn x)
First we use buildTree to build out the tree. Then we use reduceFn to reduce it to a value.
To build a tree, we need a Tree data type.
data Tree a = Node a [Tree a] deriving (Show, Eq)
Note that we don't need an EmptyTree constructor since we won't have any null Trees. A node without children is indicated by an empty list of children, not through the use of an EmptyTree.
Now we can define buildTree.
buildTree termCond wayAheadFn x =
buildIt x
where buildIt x
| termCond x = Node x []
| otherwise = Node x (map builtIt (wayAheadFn x))
Note that the Tree we build holds the original value that was used by the wayAheadFn as the value of the Node of the built Tree. We do not apply the mapFn to it yet.
reduceTree reduceFn termCond termFn mapFn tree =
reduce tree
reduce (Node value children)
| termCond value = termFn value
| otherwise = reduceFn (mapFn value) (map reduce children)
reduceTree applies the reduceFn to two arguments. The first is the mapFn applied to the original value at that point in the expansion. The second is the list of results from the recursive calls.
Here's how it would apply to quicksort.
quicksort =
null -- Terminate on an empty list
id -- If we have an empty list that's the answer.
(\ x [smaller, larger] -> -- The reduceFn. smaller and larger are
smaller ++ (head xs : larger)) -- both sorted. (head xs) is between them in value.
id -- The mapFn is the identity.
(\(x : xs) -> -- The wayAheadFn splits the list into two sublists:
[[y | y <- xs, y <= x], -- elements less than or equal to the head
[y | y <- xs, y > x]]) -- elements greater than the head.
> quicksort [6, 2, 4, 5, 7, 8, 6, 1]
This really hasn't accomplished much. The reduceTree function is nearly identical to the original extendedRecursionEngine. The only difference is that reduceTree assumes that the children are already | {"url":"http://cs.calstatela.edu/wiki/index.php/Courses/CS_332F/Recursion_engines","timestamp":"2014-04-19T04:37:47Z","content_type":null,"content_length":"49317","record_id":"<urn:uuid:5c2fe58b-41cc-46d0-b341-95679c194aeb>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00257-ip-10-147-4-33.ec2.internal.warc.gz"} |
University of Illinois
Faculty Research
Scott Ahlgren ahlgren@math.uiuc.edu http://www.math.uiuc.edu/~ahlgren/
Ahlgren completed his Ph.D. in 1996 in the field of Diophantine equations under the direction of Wolfgang Schmidt. Much of his recent work has focused on the theory of modular forms to problems in
number theory. For example, he has used this theory to answer several long-standing open problems on the arithmetic of the ordinary partition function. He has written several papers on the arithmetic
properties of the Fourier coefficients of modular forms of half-integral weight; here there are applications to combinatorics, to the theory of elliptic curves, and to the study of the critical
values of L-functions attached to modular forms. Ahlgren has written on a wide range of topics; he has authored or co-authored more than 35 research papers in various areas of number theory.
Bruce Berndt berndt@math.uiuc.edu http://www.math.uiuc.edu/~berndt/
Berndt received his Ph.D. in 1966 from the University of Wisconsin and spent a postdoctoral year at the University of Glasgow, in Scotland, before coming to the University of Illinois. He is an
analytic number theorist with strong interests in several related areas of classical analysis. His primary interests are in theta functions, $q$-series, partitions, continued fractions, Eisenstein
series, Dirichlet series, and character sums. Since early 1974, almost all of his research has been devoted to proving the claims left without proof by the famous Indian mathematician Ramanujan in
his three notebooks and in his ``lost notebook.'' The three notebooks contain approximately 3300 results. With the help of several other mathematicians, he completed his work on the notebooks in
1998. An account of Berndt's work can be found in his five books, "Ramanujan's Notebooks, Parts I-V," published by Springer-Verlag in the years 1985, 1989, 1991, 1994, and 1998.
While studying Ramanujan's work over several years, it was natural for Berndt to develop a strong interest in Ramanujan as a human being as well as in the southeast Indian Tamil culture from which
Ramanujan emerged. This interest led him and Robert A. Rankin to write "Ramanujan: Letters and Commentary" and "Ramanujan: Essays and Surveys", both jointly published by the American and London
Mathematical Societies.
Since the mid 1990s, Berndt has been attempting to find proofs for many of the claims left by Ramanujan in his lost notebook, which was written in the last year of Ramanujan's life and contains
approximately 650 assertions without proofs. Berndt and George Andrews, who found the lost notebook in 1976, are publishing volumes on the lost notebook analogous to those prepared by Berndt on the
three earlier notebooks. The first two volumes appeared in 2005 and 2008.
Twenty-five students have completed doctoral theses under Berndt's direction. Currently, he advises about a half-dozen doctoral students. Most are focusing on material in the lost notebook or on
research inspired by Ramanujan.
Florin P. Boca fboca@math.uiuc.edu http://www.math.uiuc.edu/~fboca/welcome.html/
Boca completed his undergraduate studies at the University of Bucharest (diploma 1986) and received his Ph.D. at UCLA (1993) under the supervision of Sorin Popa. He held positions at the Institute of
Mathematics of the Romanian Academy (researcher since 1988) and University of Toronto (postdoctoral fellow 1993-1995), and was a EPSRC advanced research fellow in the UK between 1995-1997 (University
of Wales Swansea) and 1998-2001 (Cardiff University).
Boca's interests lie in the areas of operator algebras, number theory, and ergodic theory. His research on some problems originating from operator algebras (the structure of non-commutative tori,
subalgebras of rotation algebras, Bost-Connes systems and Araki-Woods factors) has substantial connections with number theory. After 1999 he became interested in the statistical properties of Farey
fractions, fractional parts of polynomials, quadratic rationals, and in applications of Number Theory methods to the study of the angular distribution of "fat" lattice points which arise, for
instance, in the model of the periodic Lorentz gas. His current interests range over problems in operator algebras, number theory, and ergodic theory.
Harold G. Diamond (Emeritus) hdiamond@math.uiuc.edu http://www.math.uiuc.edu/~hdiamond/
Diamond received his Ph.D. in 1965 from Stanford University, under the supervision of P. J. Cohen, and has been at Illinois since 1967, with visiting appointments at several American and European
universities. He became an emeritus faculty member in 2002. His main area of work is multiplicative number theory, particularly elementary proofs of the prime number theorem, the theory of Beurling
generalized numbers, and sieve theory. He is coauthor of a Carus monograph on algebraic number theory (with H. Pollard), a textbook on analytic number theory (with P. T. Bateman), and a research
monograph on sieve theory (with H. Halberstam and W. F. Galway). Other interests include harmonic analysis and tauberian theorems, numerical computation, and mathematical problems.
He has served as an editor of the AMS Transactions and the Problem Section of the MAA Monthly. Eleven students completed Ph.D.'s under Diamond's direction.
Iwan Duursma duursma@math.uiuc.edu http://www.math.uiuc.edu/~duursma/
Duursma received master's degrees in aerospace engineering at Delft University and in mathematics at the University of Amsterdam. He received his Ph.D. in 1993 under the direction of Jack van Lint
and Ruud Pellikaan at Eindhoven University, all in the Netherlands. Part of his thesis work was the formulation and proof of the Feng-Rao algorithm for the decoding of a general geometric Goppa code.
Among his current interests in coding theory are properties of codes over rings, in particular self-dual codes over p-adic rings; weight distributions for asymptotically good codes; and relations
with zeta functions. In cryptography his interests include most of the algebraically formulated protocols and, in particular, those that use number theory (such as RSA) or elliptic curves (such as
elliptic curve digital signature schemes). His interests extend to other classes of curves and their Jacobians.
Kevin Ford ford@math.uiuc.edu http://www.math.uiuc.edu/~ford/
Ford has degrees from California State University, Chico (B.S., 1990) and the University of Illinois at Urbana-Champaign (Ph.D.,1994). He has held positions at the Institute for Advanced Study, the
University of Texas and the University of South Carolina before returning to UIUC in Fall, 2001.
His research interests cover a variety of topics in elementary, analytic, combinatorial and probabilistic number theory. They include Waring-type problems, Weyl sums, the distribution of values of
arithmetic functions, sieve theory, zeros of the Riemann zeta function, irregularities in the distribution of primes and almost-primes in arithmetic progressions (prime race type problems), covering
systems of congruences, the distribution of divisors of integers, and configurations of prime numbers. Ideas and techniques from probability theory play an important role in many recent
investigations. Some highlights of his research are large improvements in estimates for mean values of Weyl sums and Vinogradov's mean value theorem, determining accurately how many values Euler's
function assumes in an interval [1, x], settling two conjectures of Sierpinski (one jointly with Sergei Konyagin) concerning the values taken by the sum of divisors function and Euler's totient
function, showing that the sum-of-divisors function and Euler's function have infinite intersection (joint with Carl Pomerance and Florian Luca), establishing the best known quantitative zero-free
region for the Riemann zeta function, determining accurately the number of distinct products in an N by N multiplication table (an old Erdos problem), and settling conjectures of Erdos, Graham and
Selfridge on covering systems (joint with M. Filaseta, S. Konyagin, C. Pomerance and G. Yu).
A.J. Hildebrand hildebr@math.uiuc.edu http://www.math.uiuc.edu/~hildebr/
Hildebrand earned a Ph.D. in 1983 from the University of Freiburg, Germany, and a Doctorat d'Etat in 1984 from the University of Paris-Sud, Orsay, France, and spent a year at the Institute for
Advanced Study in Princeton before joining the Illinois faculty in 1986.
Trained as a number theorist, he is interested also in problems in analysis, probability theory, and combinatorics, and, in particular, in problems that lie at the interface of these areas with
number theory. Most of his research falls into the areas of analytic number theory, which investigates problems of number theory by methods of analysis, and probabilistic number theory, which studies
number theoretic problems of a statistical nature.
Hildebrand has taught special topics courses on asymptotic methods of analysis, exponential sums, combinatorial number theory, and probabilistic number theory, and has supervised six PhD students.
At the undergraduate level, Hildebrand has a long-standing interest in and involvement with the local mathematical contest scene. For over twenty years, he has served as local coordinator of the
William Lowell Putnam Competition and a coach of the Illinois Putnam team, and has organized training sessions, practice contests, and related activities.
Leon McCulloh (Emeritus) mcculloh@math.uiuc.edu http://math.uiuc.edu/FacultyPages/mcculloh.html
McCulloh received his PhD in 1959 from the Ohio State University under H.B.Mann, and he came to the University of Illinois in 1961. He held visiting appointments at Indiana University and the
University of Hawaii in 1963 and 1967, respectively, and a short term appointment at the University of Bordeaux in 1983. He has also visited King's College London, the University of Regensburg, and
Cambridge University on sabbatical leaves. He has supervised nine PhD students.
His research, starting with his thesis on integral bases in relative Kummer extensions of number fields, has been a logical progression from that beginning. It has centered on relative integral and
normal integral bases, and the related notions of realizable Steinitz and Galois module classes. These topics were the basis for the thesis topics of four of his students. He developed a
generalization of the notion of Stickelberger relations to class groups of integral group rings and found connections with realizable Galois module classes and with class number formulas for integral
group rings. (He owes a heavy debt to Steve Ullom for pointing out the connection between normal integral bases and Stickelberger relations in the work of Hilbert.) In 1987 he published a
characterization "in Stickelberger terms" of the realizable Galois module classes of (tame) abelian extensions, in particular showing that they form a subgroup of the classgroup. He has since
partially generalized these results to nonabelian extensions but a full generalization is not yet in sight. There are many related open problems and conjectures which he feels provide an important
and fruitful area for research.
Bruce Reznick reznick@math.uiuc.edu http://www.math.uiuc.edu/~reznick/
Reznick's degrees are from Caltech (BS, 1973) and Stanford (Ph.D., 1976). He has been on the faculty of the University of Illinois since 1979. He was a Sloan Foundation fellow from 1983--1986, and
received the Prokasy Award for Excellence in Undergraduate Teaching from his College in 1997.
His main research interests originate from questions about the structure of polynomials in several variables and their representations as sums of powers of polynomials and then spread out. The
subjects ultimately range from concrete realizations of Hilbert's 17th problem for positive definite forms to Hilbert identities; from spherical designs and quadrature formulas to the linear algebra
of polynomials and the Linfty norm of the Laplacian on spaces of forms of fixed degree; from polynomial solutions of Diophantine equations to concrete versions of theorems in abstract real algebraic
geometry; from questions about the lattice point structure of polytopes with lattice point vertices to structure theorems for recursively defined sequences. So far, these have encompassed questions
in number theory, algebra, analysis and combinatorics.
Armin Straub (Doob Postdoc) astraub@illinois.edu http://www.math.illinois.edu/~astraub/
Armin Straub received his Ph.D. in 2012 from Tulane University under the direction of Victor Moll. His research interests are in special functions, especially hypergeometric and modular ones, and
their many connections to number theory, combinatorics and computer algebra. Recently he has for instance studied short planar random walks from a number theoretical perspective.
Kenneth B. Stolarsky stolarsk@math.uiuc.edu http://www.math.uiuc.edu/People/stolarsky.html/
Stolarsky did his undergraduate studies at Caltech and received his Ph.D. at the University of Wisconsin (Madison) in 1967 under the direction of Marvin Knopp. His research interests include number
theory, geometry, classical analysis, and combinatorics, and he has supervised six Ph.D. theses in these areas. Stolarsky is particularly interested in Diophantine approximation and has taught many
graduate courses in this and the related areas of transcendence theory and the geometry of numbers. His undergraduate teaching has centered on introductory differential equations, and he has also
done much editorial work for the problems section of the American Mathematical Monthly. He retired in 2010.
Stephen Ullom ullom@math.uiuc.edu http://www.math.uiuc.edu/~ullom/
Ullom received his Ph.D. in 1968 from the University of Maryland; his thesis advisor was Sigekatu Kuroda. He spent his NSF Postdoctoral Fellowship (1968--69) at the University of Karlsruhe and King's
College, University of London, his mentors being H. W. Leopoldt and A. Frohlich respectively. In 1969--1970 he was a visiting member of the Institute for Advanced Study. He came to the University of
Illinois in 1970 as an Assistant Professor and was promoted to Professor in 1978. Ullom has spent sabbatical leaves at King's College London and Cambridge University (Bye Fellow of Robinson College)
with a shorter visit to the University of Arizona. He has supervised five Ph.D. students, the last two jointly with Nigel Boston.
In his thesis Ullom studied the Galois module structure of ideals in a local or global field which are invariant under the Galois group. He found the first example where the ring of integers is not
projective over the integral group ring, but some ideals are projective. He generalized Hilbert's theorem on the connection between Galois module structure and Stickelberger elements to proper
ideals. Frequently in collaboration with Irv Reiner, Ullom studied class groups of integral group rings, particularly on their arithmetic properties. He proved a conjecture of Kervaire and Murthy on
the class groups of cyclic p-groups by using Iwasawa theory applied to group rings. His work on Swan modules, a particularly simple form of projective module, showed most noncyclic groups G have
projective nonfree modules over the group ring ZG. Tate cited Ullom's 1977 survey article on class groups in his book on Stark's conjecture. Ullom's student, Steve Watt, extended Frohlich's results
on Galois groups of p-extensions with restricted ramification over the rationals to the case of imaginary quadratic base field. Ullom and Watt used this result to give a criterion for when an abelian
p--extension L of an imaginary quadratic field K has class number prime to p (assuming the genus number for L/K is prime to p).
Ullom wrote a paper with N. Boston on Galois deformations associated to elliptic curves with complex multiplication. An important consequence is that for the Fermat curve there are infinitely many
primes p such that the deformation ring is not simply a ring of formal power series in several variables.
Marcin Mazur and Ullom investigated the Galois module structure of units modulo torsion in certain real abelian fields of two power degree. There are many fascinating connections here between Galois
module structure and classical topics such as genus field, central class field, and sign of the fundamental unit in a quadratic field. At present Ullom is also investigating some problems on the
structure of Galois groups of extensions of number fields with restricted ramification.
Alexandru Zaharescu zaharesc@math.uiuc.edu http://www.math.uiuc.edu/~zaharesc/
Zaharescu received his Ph.D. in 1995 from Princeton University under the direction of Peter Sarnak. He has held positions at the Massachusetts Institute of Technology, McGill University, the
Institute for Advanced Study, and since 2000 he is a faculty member at UIUC. He is currently working on several projects concerned with the zeros of the Riemann zeta function and more general
L-functions, and on some problems on the distribution of various sequences of interest in number theory. | {"url":"http://www.math.uiuc.edu/ResearchAreas/numbertheory/facultyresearch.html","timestamp":"2014-04-16T13:07:23Z","content_type":null,"content_length":"33508","record_id":"<urn:uuid:a68df357-3b36-4c3b-aeee-8f7bbf3f1f82>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00568-ip-10-147-4-33.ec2.internal.warc.gz"} |
Magic Square Update-2009
┃ Luo Shu Format M. S. │ Topographical Magic Squares │ How Many? ┃
┃ Totally Irregular M. S. │ Postage Stamps and M. S. │ 1040 Order-4 M. S.? ┃
┃ That Amazing 1089 and the Lho-Shu │ Palindromic Magic Squares │ W. S. Andrews error ┃
┃ Magic Square Pair │ │ Shineman – 4 Related ┃
Luo Shu Format Magic Squares
Dr. Robert Dickter (DDS) announced in an email of Aug. 18, 2009 a series of magic squares with identical features to the order-3. He referred to them as Luo Shu format squares and shows a number of
examples on his new website at www.luo-shu.com
Here I show the Luo Shu and 3 examples followed by their common features. These magic squares are easy to construct using a modified de la Loubére method.
The central themes of Luo Shu format squares is:
• They contain a relationship between two consecutive numbers. In the case of the order-3, these are 1 and 2.
• They will always contain a pythagorean triplet in a gnomon arrangement in the center of the square.
• All are odd order, associated (center-symmetric) magic.
Some relationships between key numbers in all Luo Shu format magic squares.
│a│b│order of square│above x│center cell (y)│Lower right corner │Constant│Total of all #’s x^2 * y │
│ │ │(x = b^2 - a^2)│ y-1 │(y = a^2 + b^2)│z = b * x and y + a│ x * y │ │
│1│2│ 3 │ 4 │ 5 │ 6 │ 15 │ 45 │
│2│3│ 5 │ 12 │ 13 │ 15 │ 65 │ 325 │
│3│4│ 7 │ 24 │ 25 │ 28 │ 175 │ 1225 │
│4│5│ 9 │ 40 │ 41 │ 45 │ 369 │ 3321 │
│5│6│ 11 │ 60 │ 61 │ 66 │ 671 │ 7381 │
│6│7│ 13 │ 84 │ 85 │ 91 │ 1105 │ 14365 │
‘a’ and ‘b’ are two consecutive numbers whose sum equals the order of the square. 'a' always appears in the bottom row, 2^nd cell from the right. 'b' always appears in the top right hand corner.
'y' is always the center cell. 'x' is adjacent to the left of 'y', and the 3^rd member of the triad (y-1) is adjacent and above 'x'. The product of x and y equals the constant of the magic square,
while x^2 times y gives the total of all integers in the square. These 3 cells always form a pythagorean triad in a Luo Shu format square. They are indicated in red in the illustrations.
Beginning with order 9, the pythagorean triplet associated with the next higher order also appears in the square in the position shown with green numbers.
The number appearing in the lower right corner (purple) is the sum of the numbers 1 to x (the order of the square).
A. is the Luo Shu, the original and only order-3 magic square (not counting rotations and reflections).
B. is the algebraic representation of A. Compare x and y with the above table.
C. is the order-5 Luo Shu format magic square. Compare colored integers with the above table. C.
Here we see the order-7 and the order-9 Luo Shu format magic squares.
On his web site, Dr. Dickter shows the Luo Shu type squares for orders 11 to 27. He explores the calendar, mysticism, and archaic religions in relation to the Luo Shu.
Topographical Magic Squares
In August, 2008, I posted an article about Craig Knecht’s Topographical magic squares. [1]
Since then he and Walter Trump have expanded on this subject, and Craig has posted a comprehensive page. [2]
The basic premise is that the ‘height’ of each cell is based on the value of the integer in that cell. Then cells that are lower then the surrounding cells may contain ‘water’. Please refer to [1] or
[2] for a complete description of topographical magic squares.
A. Shows 1 of 36 simple magic squares that hold the maximum of 69 units of water.
B. 35 units retained is the maximum for order-5 pandiagonal magic squares. All 3600 order-5 pandiagonal magic squares contain water in only 5 different basic patterns. These range from 1800
different squares with 2 lake cells to 80 squares with the pattern shown in B. The units of water held range from 144 squares containing 8 units, to 352 squares containing 15 units. Squares exist
that hold units of water from 8 to 30, and the 24 squares that hold 35 units.
C. 59 units retained is maximum for an order-5 associated magic square. There are 48,544 associated m. s. 47,278 of these hold between 2 and 59 units of water. No pandiagonal associated order-5
magic squares hold water.
I suppose, to be consistent, we must say that figure 1(A) contains 2 ‘lakes’ of 2 cells each, and 1 ‘pond’ and (C) contains 1 lake and 1 pond. This follows from the fact that the basic definition
states that no water can flow diagonally between cells.
The concept of topographical numerical squares may be extended. Following are several examples.
A. Franklin’s "forgotten" order-8 magic square contains 3 ‘lakes’and 2 'ponds'. [3] A. B.
B. An order-9 simple magic square with 1 lake, 2 ‘ponds’ and 3 ‘islands’. 780 units are retained. [4]
C.The Luo-shu type order-7 magic square contains 10 ‘ponds’
D. This order-7 magic square contains 321 units of water. [4] C. D. E.
Walter Trump has found an order-7 with 1 lake containing 365 units, and another order-7 with a lake and a pond that contains a total of 378 units. Are these the maximums for order-7 magic squares?
E. This is a number (not magic) square. (Disregard the fainter numbers. They are just inserted to complete the square.) The lake and the 2 ponds together retain 488 units of water. This is the
maximum possible for an order-7 number square with numbers 1 to m^2. [4]
[1] See Still More Magic Squares
[2] http://www.knechtmagicsquare.paulscomputing.com/
[3] Paul C. Pasles, Benjamin Franklin's Numbers, 2008, 978-0-691-12956-3, p.207
[4] These 3 squares were constructed by Walter Trump. He has a web site at http://www.trump.de/magic-squares/
How Many?
In an email dated August 12, 2009 Francis Gaspalou announced that he had successfully enumerated the number of order-6 self-similar magic squares (see example). He reported that there were
67,704,146,804,736 different squares of type a1+f1 = 37 (when counting all of the squares). There are the same number when a1+a6=37 so the total number of self-similar basic magic squares of order-6
is exactly 16,926,036,701,184 (ie 2 times 67,704,146,804,736/8).
The same day, Walter Trump confirmed that total by reminding us that he had arrived at the same figure in October 2000. Walter had posted this figure (plus other magic Square enumerations) on the M.
Suzuki site. [1]
An Aside note: The Suzuki page refers to composite magic squares. What is actually meant is compact, a term defined by Gakuho Abe in 1950. [2] (Composite refers to magic squares composed of smaller
magic squares i.e 9 order-4s make 1 order-12).
Francis has a site where he shows results of various magic square enumerations and methods at http://www.gaspalou.fr/magic-squares/.
[1] Most of Matsumi Suzuki’s site is now at http://mathforum.org/te/exchange/hosted/suzuki/MagicSquare.total.html
[2] Gakuho Abe, Fifty Problems of Magic Squares, Self published 1950. Later republished in Discrete Math, 127, 1994, pp 3-13.
How many Bordered order 6
In an email March 29, 2009, Harry White sent a list of 140 borders to place around a non-normal order-4 magic square consisting of consecutive numbers. His web site [1] on this subject includes
scripts to construct bordered squares of various orders.
There is some confusion over the difference between 'bordered' and 'concentric' magic squares.
The simplest distinction is that the interior magic square must consist of consecutive numbers for the square (as a whole) to be called bordered. This means that the 2m+2 lowest and their
complements (the highest numbers) must appear in the border.
In the illustration, A is a bordered order-6 magic square. B. is a concentric magic square, but in this special case, the border consists of consecutive numbers. This is not required, and is in fact,
quite rare!
Totals for some distinct orders of bordered magic squares are:
Order-5 = 2,880
Order-6 = 567,705,600
Order-7 = 6.14 x 10^10
Harry`s page [1] contains totals for all orders from 5 to 14.
For concentric order-6 magic squares Harry found the total was 736,347,893,760. This number was confirmed by Francis Gaspalou in July 2009.
[1] http://users.eastlink.ca/~sharrywhite/BorderedMagicSquares.html
Totally Irregular Magic Squares
On August 24, 2009, Francis Gaspalou introduced a new type of magic square which he calls totally irregular.
A lively email discussion followed, between him, Aale de Winkel and Walter Trump.
A. (above) is the totally irregular simple magic square received from Francis Gaspalou. Aug. 24, 2009
More on this subject may be found on his web site. [1]
Magic squares may be constructed by using two subsidiary squares which are then added on a cell by cell basis to form the final square. Such squares may then be divided into 2 broad classifications.
A Regular magic square [2] is one where each number (in the classical version, each letter) appears once in each row, column, and diagonal. If such a square could exist for order-6, then each line
would total 21 in subsidiary square A and 90 in subsidiary square B. Prof. Candy reported that there are 38,102,400 regular magic squares of order-7. [3]
An Irregular magic square is one where this is not the case and a number may appear more then once (or not at all) in at least 1 row, column, or diagonal. For example, there are no irregular
pandiagonal magic squares of order less the 7, but both Trump and Gaspalou found that there are many more then the 640,120,320 different ones of that order reported by Prof. Candy. [4]
A Totally Irregular magic square (as the one shown above) is one where all the lines are irregular in both subsidiary squares. The total number of these (for any order) is still unknown.
[1] Francis Gaspalou’s web site is http://www.gaspalou.fr/magic-squares/.
[2] W. H. Bebson and O. Jacoby, New Recreations With Magic Squares, Dover Publ., 1976, 0-486-23236-0
[3] Professor A. L. Candy of University of Nebraska (1890s to 1945) did much original research on this subject. A self-published book and some of his notes are in the Strens Recreational Mathematics
Collection at the University of Calgary.
[4] In an email dated Oct.4, 2009, Francis showed totals for several small sub-sets of order-7 pandiagonal magic squares. These were sufficient to indicate that Candy's total for irregular
pandiagonal magic squares is far too low.
Postage Stamps and Magic Squares
After retiring from the Department of Mathematics and Statistics, McGill University, in 2005, Professor Emeritus George Styan [1] became interested again in magic squares.
In 2007 with Götz Trenkler, George prepared a bibliography [6] of over 300 references on magic squares. George soon combined this interest in magic squares with his lifetime fascination with stamp
collecting. In August 2009, he sent me copies of a recently accepted paper and the pdf-file for the overheads of a talk he gave on the study of magic squares as related to stamp collecting. [2][3][4]
From the abstract of Some Comments On Philatelic Latin Squares From Pakistan [2]
We explore the use of Latin squares in printing postage stamps, with special emphasis on stamps from Pakistan. We note that Pakistan may be the only country to have issued postage stamps in 2x2; 3x3;
4x4 and 5x5 Latin square formats: we call such sets of stamps philatelic Latin squares (PLS)….
Here are three illustrations from that paper.
The sheetlets displayed in Figures 1.1 and 1.3 are examples of 2x2 philatelic Latin squares (PLS). The stamps in the left panel (fig. 1.1) depict Mustafa Kemal Atatürk (1881--1938), founder of the
Republic of Turkey as well as its first President, and Quaid-e-Azam Muhammad Ali Jinnah (1876--1948), who is generally regarded as the founder of Pakistan.
The stamps on the right (fig. 1.3) feature Alexander Sergeyevich Pushkin (1799–1837), who is considered to be the greatest Russian poet.
The authors do not discuss magic squares per se, but consider various aspects of Latin Squares and their newly defined PLS. They also discuss Gerechte Latin squares and the somewhat related Sudoku
square in some depth.
Here is another illustration and some of the accompanying text.
Featured in the 5x5 PLS here are five antique automobiles from the late 19th and (very) early 20th century (from left to right in the first row):
(1) 1893 Duryea, (2) 1894 Haynes,
(3) 1898 Columbia, (4) 1899 Winton,
(5) 1901 White.
Following are three of the four stamps that George Styan found that actually depict a magic square. These are taken from An Illustrated Philatelic Introduction to Magic Squares. [3]
See also [5]
The Franklin USA stamp consists of a collage of 4 items from Franklin history. Here also is an enlarged view of the order-8 bent-diagonal square. This is only semi-magic because the two diagonals do
not sum to the constant.
George thinks there are at least 160 stamps associated with Benjamin Franklin, but only one that shows a magic square.
MELENCOLIA:I by
Albrecht Dürer, Aitutaki (Cook Islands)1986: Scott 391
MELENCOLIA:I by
Albrecht Dürer, Mongolia 1978: Scott 1039
Not shown is
MELENCOLIA:I by
Albrecht Dürer, Djibouti. [5]
George says there are over 200 stamps associated with Albrecht Dürer and wonders if, maybe, there are more stamps for Franklin than for Dürer?
[1] George P. H. Styan may be reached at styanatmath.mcgill.ca
[2] Ka Lok Chu, Simo Puntanen, and George P. H. Styan: Some comments on philatelic Latin squares from Pakistan. Invited paper to appear in the
Special Silver Jubilee issue of the Pakistan Journal of Statistics: preprint 43 pages.
[3] An Illustrated Philatelic Introduction to Magic Squares (in honour of Dr. Peter Loly’s retirement). Invited talk presented at The University of Manitoba, Winnipeg, on 9 November 2007. Overheads
(pdf file): 52 heavily illustrated pages.
[4] Peter D. Loly and George P. H. Styan: An introduction to philatelic Latin squares, with some comments on Sudoku. Report 2009-01 from the
Department of Mathematics and Statistics, McGill University, Montréal. In preparation.
[5] Oskar Maria Baksalary, Ka Lok Chu, George P. H. Styan and Götz Trenkler: A philatelic introduction to magic squares associated with
Albrecht Dürer (1471--1528), Benjamin Franklin (1706--1790) and Johannes Hermann Zukertort (1842--1888). Report 2009-04 from the Department of Mathematics and Statistics, McGill University, Montréal.
In preparation.
[6] George P. H. Styan and Götz Trenkler: An annotated bibliography on magic squares, with special emphasis on linear-algebraic results, as well as on very old and very new results. Report 2007-02
from the Department of Mathematics and Statistics, McGill University, Montréal. Includes 318 references.
1040 Order-4 Magic Squares ?
On September 6, 2009 I received an email from Lee Sallows with two documents attached [1][2].
(Actually, most of this work on this subject was done about 10 years ago.)
Folowing are the first two paragraphs of NEW ADVANCES WITH 4 X 4 MAGIC SQUARES by Lee Sallows
One of the best known results in the magic square canon is Bernard Frénicle de Bessy's enumeration of the 880 ‘normal’ 4´4 squares that can be formed using the arithmetic progression 1,2,..,16. A
natural question this suggests concerns non-normal squares: Is 880 the largest total attainable if any 16 distinct numbers are allowed?
The answer is no. A computer program that will generate every square constructable from any given set of integers has identified 1040 distinct squares using the almost arithmetic progression: 1, 2,
3, 4, 5, 6, 7, 8,10,11,12,13,14,15,16,17. Note the doubled step from 8 to 10. Extensive trials with alternative sets make it virtually certain that 1040 is the maximum attainable (or 8 ´ 1040 = 8320,
if rotations and reflections are included), although an analytic proof of this assumption is lacking. A listing of the 1040 squares can be had on request via email at lee.sal@inter.nl.net.
In this paper he goes on to examine the FERTILITY of alternative sets of 16 numbers, meaning the number of magic squares yielded by each. Tables listing the fertilities of both symmetric and
asymmetric sets are given.
Here are 3 example order-4 squares taken from his index ordered list of 1040 magic squares using the above number set (figures A, B, C).[3]
The two numbers above each square are the index number and the Dudeney Group number.
The quantity of squares in each of the 12 (traditional) Dudeney groups for this number set is
│Dudeney Group │1 │2 │3 │4 │5 │ 6 │7 │8 │9 │10│11│12│
│# of Squares │48│48│48│96│96│480 │52│52│52│52│8 │8 │
The second document I received was A SET USING 1, 2, 3, 4, 6, 7, 8, 9, 10 11, 12, 13, 15, 16, 17, 18.
The second document received was a NON-ARITHMETIC progression of 16 integers that yield 880 4x4 magic squares.[2]
This list of 880 squares (using 8 complementary integer pairs) includes 6 with distributions that are DIFFERENT to the usual 12 Dudeney types, such as the examples D and E above.
If you wish, you may review the 12 original Dudeney groups here.
A subsequent paper [4] received on 9/9/09 from Lee Sallows identifies a total of 22 additional Dudeney type diagrams and a representative magic square for each. Unlike the original Dudeney types,
these apply only to magic squares that consist of non-consecutive numbers. Of course, from Lee Sallows and most other mathematicians perspective, the normal magic squares are only a sub-set of all
number magic squares!
[1] New Advances With 4 X 4 Magic Squares by Lee Sallows New_Advances_(4).doc
[2] A set using 1, 2, 3, 4, 6, 7, 8, 9, 10 11, 12, 13, 15, 16, 17, 18 by Lee Sallows 880 New Squares.doc
[3] MS1040.txt received Sept. 3/09 lists all 1040 squares in index order.
[4] On non-normal Graphics Types by Lee Sallows non-normal graphics types.pdf
Click on the above links to download the files.
That Amazing 1089 and the Lho Shu
The Luo Shu multiplied by 1089 gives us a magic square with the magic sum of 16335.
The 3 Most Significant Digits of the numbers in this square gives us another magic square with a sum of 1632.
The 2 MSD give us another magic square with the sum 162. The 3 LST give a magic square with sum 1335, and the 2 Least Significant Digits give a magic sum of 135.
The 1 MSD gives us the original square, and the 1 LSD give us a 180 degree rotation of the original magic square.
Oct. 20/12 Thanks to Lorenzo Susican Jr. of The Phillppines for spotting the error in my magic square sum (I had 16385).
Palindromic Magic Squares
Eight additional squares derived from square A. How many more variations can you find? [1]
A. Original Luo Shu but with 3 digit redigits.
B. 1 digit of A. with arbitrary first and last digits
C. The 3 digits of A. with arbitrary 2nd and 4th digits inserted.
D. First and second digits of A.
E. First and second digits of B.
F. First 3 digits of C.
Sum of the digits in each cell of A.
Sum of the digits in each cell of B.
Sum of the digits in each cell of C.
[1] Adapted from Emanuel Emanouilidis: Journal of Recreational Mathematics:29:3:1988:177-178
W. S. Andrews error
On August 18, 2009, Peter Loly reported (by email) a minor error in W. S. Andrews Magic Squares and Cubes.
In figure 253 (page 143) the cell in the bottom row, sixth column is incorrect. It should be a+2d+2c+b.
This correction provides the correct integer for the corresponding cell in the order-9 numerical magic square shown
in figure 254. In Andrews first addition, figure 253 (actually fig. 270 there) is correct.
Magic Square Pair
Digits in second table are the reverse of the first one.
In each case, if the 2^nd digit of each number is removed, the square is still magic
[1] Dr. Crypton, Timid Virgins Make Dull Company, Penguin Books, 1985, p 149, 01400-80430
Shineman – 4 Related
I received these 4 related magic squares from Ed Shineman Jr. of New York on January 9, 2001
I have since lost contact with him after he passed his 90^th birthday. Ed was not much on the theory of magic squares, but had a creative imagination. I have taken this item from a 2 inch stack of
material received from him over the years, and include it as a tribute to him.
Together, the first 3 magic squares use the consecutive numbers from 1 to 97.
The two order-4 squares each use the numbers 10, 11, 21, 22, 32, 33, 43, 44, 54, 55, 65, 66, 76, 77, 87, and 88.
The order-9 composite square uses the remaining numbers.
The magic constant of the 9 order-3 squares form another order-3 square.
A. Is Associated semi-pandiagonal
B. Is Pandiagonal magic
C. Is a simple composite magic square
D. Is associated magic (as are all order-3 hypercubes)
Other Shineman creations on my site are on my Unusual Magic Squares and my Material from REC pages.
┃Ed Shineman passed away on Jan. 12, 2009 at the age of 93.┃ | {"url":"http://www.magic-squares.net/square-update.htm","timestamp":"2014-04-18T08:54:27Z","content_type":null,"content_length":"88069","record_id":"<urn:uuid:b57d95eb-be5e-4b1b-be88-263e14010e99>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00082-ip-10-147-4-33.ec2.internal.warc.gz"} |
Matroid representable over $\mathbb{R}$ but not over $\mathbb{Q}$?
up vote 4 down vote favorite
Does there exist a matroid that is representable over $\mathbb{R}$ but not over $\mathbb{Q}$?
In particular, can one give a positive answer using a nonrational polytope, i.e., a combinatorial polytope that cannot be realized as the convex hull of rational vertices? (Such things do exist; see,
e.g., p.94 of Grünbaum's Convex Polytopes.) The vertex sets of the faces of a convex polytope certainly form the flats of a matroid, but it's not clear to me why the same matroid could not be
realized by affine dependences of a set of points not in convex position.
co.combinatorics matroid-theory
add comment
1 Answer
active oldest votes
Jeremy, on the very same page 94 you will find a "point and line configuration" called Perles configuration which when viewed as set ov vectors in $\Bbb R^3$ is a matroid that is
up vote 8 down realizable over $\Bbb Q[\sqrt{5}]$ but not over $\Bbb Q$. In my book I even prove it (Ex 12.3) - sorry to make a plug, this is the only place with a proof I know.
vote accepted
(wipes egg off face) Quite right. Thanks, Igor. – Jeremy Martin Sep 17 '12 at 20:23
This might be a better question: Is there a matroid that is representable over $\mathbb{R}$, but not over the algebraic closure of $\mathbb{Q}$? – Jeremy Martin Sep 18 '12 at
2 Um, Jeremy, I think you should continue reading p.94, starting with "As a matter of fact..." To clarify for those without a book, the answer is NO. BTW, this is the reason
Grünbaum won the Steele Prize: math.washington.edu/newsletter/2005/grunbaum.html – Igor Pak Sep 20 '12 at 23:20
add comment
Not the answer you're looking for? Browse other questions tagged co.combinatorics matroid-theory or ask your own question. | {"url":"http://mathoverflow.net/questions/107414/matroid-representable-over-mathbbr-but-not-over-mathbbq","timestamp":"2014-04-18T18:48:19Z","content_type":null,"content_length":"54624","record_id":"<urn:uuid:9c9cda64-95d9-4c9a-a880-299169a73356>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00105-ip-10-147-4-33.ec2.internal.warc.gz"} |
School of I
Basser Seminar Series
Graph representations in a grid
Speaker: Tomas Vyskocil
Charles University, Prague
Time: Monday 20 February 2012, 2:00-3:00pm
Location: The University of Sydney, School of IT Building, Lecture Theatre (Room 123), Level 1
Add seminar to my diary
The talk will be about two types of structures which we were interested recently. The first one is an island representation of graphs. A island is set of vertices in the extended grid plane grid
where into each square you add both diagonal edges which induces connected sub-graph of grid. I will show a connection with the string graphs and show that if we bound size of islands then problem of
recognizing such graphs is NP-hard. The second topic I want to talk about are intersection graphs of $k$-bend paths. A $k$-bend path is a simple path in plane grid, where there are at most $k$ 90
degree turns.
We showed that recognition of such graphs is NP-hard and even more that recognition of graphs representable by k-bend paths within graphs representable by $(k+1)$-bend paths is NP-hard as well.
Speaker's biography
Tomas is a PhD student of Prof. Jan Kratochvil at Charles University, Prague. His PhD thesis is about graph drawing and representations. His research interests include combinatorics and complexity | {"url":"http://rp-www.cs.usyd.edu.au/research/news/vyskocil.shtml","timestamp":"2014-04-19T19:35:21Z","content_type":null,"content_length":"11825","record_id":"<urn:uuid:4336559d-07f3-4c5e-823c-d465e290e10d>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00489-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unit III B Conductors, Capacitors, and Dielectrics
III B. Conductors, Capacitors, Dielectrics
AP Exam Objectives
B. Conductors, Capacitors, Dielectrics
1. Electrostatics with Conductors
a. Students should understand the nature of electric fields in and around conductors so they
(1) Explain the mechanics responsible for the absence of electric field inside a conductor, and why all excess charge must reside on the surface of the conductor.
(2) Explain why a conductor must be an equipotential, and apply this principle in analyzing
what happens when conductors are joined by wires.
(3) Determine the direction of the force on a charged particle brought near an uncharged or
grounded conductor
b. Students should be able to describe and sketch a graph of the electric field and potential
inside and outside a charged conducting sphere so they can:
(1) Describe qualitatively the process of charging by induction.
(2) Determine the direction of the force on a charged particle brought near an uncharged or
grounded conductor
2. Capacitors and Dielectrics
a. Students should know the definition of capacitance so they can relate stored charge and
voltage for a capacitor.
b. Students should understand energy storage in capacitors so they can:
(1) Relate voltage, charge, and stored energy for a capacitor.
(2) Recognize situations in which energy stored in a capacitor is converted to other forms.
c. Students should understand the physics of the parallel-plate capacitor so they can:
(1) Describe the electric field inside the capacitor, and relate the strength of this field to the potential difference between the plates and the plate separation.
(4) Determine how changes in dimension will affect the value of capacitance.
Sharing of Charge
· All systems, mechanical and electrical, come to equilibrium when the energy of the system is at a minimum. When a conductor is in equilibrium, the electric field everywhere inside the conductor is
zero. This is fairly obvious for a neutral conductor where the electric field emanating from all of the positive charges end on the negative charges. But, what happens when an excess charge is placed
on a conductor. Since the charges within a conductor are free to move about, the excess charge moves to the outside surface of the conductor, reestablishing equilibrium and the internal field reduces
to zero
· Charges move until all parts of a conductor are at the same potential.
· A charged sphere shares charge equally with a neutral sphere of equal size. Charges flow until all parts of a conducting body, two touching spheres in this case, are at the same potential.
Which way will the charge flow when each of the following conducting spheres are connected?
Which way will the charge flow when each of the above conducting spheres are connected to the Earth?
· The potential on the larger sphere is lower than the potential on the smaller sphere because the charges are farther apart thus the repulsive force between them is reduced. If the two spheres are
touched together, charges will move to the sphere with the lower potential; that is from the smaller to the larger sphere. The result is a greater charge on the larger sphere when the two
different-sized spheres are at the same potential.
grounding- touching an object to Earth to eliminate excess charge.
· Earth is a very large sphere so a charged body that touches it will result in practically any amount of charge to flow from the body to the Earth, allowing the body to become neutral.
Electric Fields Near Conductors
· The electric field around a conducting body depends on the structure and shape of the body.
· The charges on a conductor are spread apart as far apart as possible in order to make the energy of the system as low as possible. The result is that all charges are on the surface of a solid
conductor. If the conductor is hollow, excess charges will move to the outside surface. In this way, a closed metal container shields the inside from electric fields. (Faraday cage)
· The electric field around the outside of a conductor depends on the shape of the body as well as its potential. The charges are closer together at the sharp points of a conductor, therefore, the
field lines are closer together; the field is stronger.
· Application: To reduce corona and sparking, conductors that are highly charged or operate at high potentials are made smooth in shape. Lightning rods are pointed so that the electric field will be
strong near the end of the rod thus attracting lightning to it rather than the building it is protecting.
Storing Electric Energy- The Capacitor P 59
1746- Musschenbroek- invented a device that could store electric charge.
Leyden jar (see our Leyden jar)
· Two isolated conductors having a potential difference of DV
between them with equal and opposite charges + Q and - Q ,
constitute what is called a capacitor because as long as they
are isolated they have the capacity for storing charge.
· The difference in potential is directly proportional to the charge of the objects. This should be rather obvious since doubling the charge also doubles the electric field in the region about the
objects and hence, the work-per-unit charge (that is, the potential difference, DV) required to move a test charge between two points in the field also doubles.
Q µ DV or Q = CDV Where C is a constant of proportionality
· For a given size and shape of a capacitor, the ratio of stored charge to potential difference, Q/V, is a constant called the capacitance (C).
· The capacitance is independent of the charge on it. It can be measured by placing charge +q on one plate and -q on the other, and measuring the potential difference, DV, that results. The AP
equation is...
C = Q/V unit: = 1C/1V = farad (F) (Eq ***)
· A capacitor in electronics lingo is- a device designed to have a specific capacitance.
· All capacitors used in circuitry are made up of two conductors of equal and opposite charges, separated by an insulator called a dielectric. Capacitors have become very important devices in
electronic circuits, electrical machinery, etc. since they are devices which store charge and electrical energy.
Applications: Tuners in radio receivers, “condensers” in ignition systems of
cars, camera flash, strobe light
Practice Problem 10 The Charge on the Plates of a Capacitor
A 3.0-mF capacitor is connected to a 12-V battery. What is the magnitude of the charge on each plate of the capacitor?
C = Q /V
Q = C V
= (3 x 10^-6 C)(12 V)
= 36 mC
The Parallel-Plate Capacitor
· Recall that a constant electric field can be made by placing two flat plates with equal and opposite charges parallel to each other. Represent this electric field on the diagram above.
· For the parallel plate capacitor shown in the figure above, the voltage V is given by Ed where E is the electric field strength between the plates and d is the plate separation.
· The electric field E depends on the amount of charge on the plates which in turn depends on the area of the plates (more area can hold more charge) with the proportionality
constant = 1/ e[o] . Thus E = Q/(e[o] A). The capacitance of a parallel-plate capacitor, therefore, is directly proportional to the area of one of the plates and inversely proportion to their
C = Q/V V = Ed C = Q/Ed
C = e[o] A/d (Eq) ***
A = the area of one of the plates.
e[o] = constant = permittivity of free space = 8.85 x 10^-12 C^2/N m^2 = 1/4 p k
Practice Problem 11 Calculating Capacitance for a Parallel Plate Capacitor
C = e[o] A/d
= (8.85 x 10^-12 C^2/N m^2)(.0002 m^2) / .001 m
= 1.77 x 10^-10 F^
Energy Stored in a Charged Capacitor
· The energy stored in a charged capacitor is equal to the work required to charge it. The work required to transfer a small amount of charge DQ from the negative to the positive plate at potential V
is DW = VDQ. Since V increases linearly as more charge is added, the total work must be the average V, which is (1/2) (0 + V[f]), times Q. So, the total work done and thus the electric potential
energy (U[c]) stored in the capacitor is
U[c] = 1/2 QV = 1/2 CV^2 (Eq) ***
Practice Problem 12 Energy Stored in a Charged Capacitor
U[c] = ˝ CV^2
= ˝ (1.77 x 10^-10 F)((120 V)^ 2 = .036 J
· Thus the usefulness of a capacitor lies in its ability to store electrical energy which can, upon discharging the capacitor, then be converted to other useful forms such as sound, light, heat, and
mechanical energy. | {"url":"http://www.morrisville.org/user/jkushmaul/jkushmaul/AP%20Physics%20B/PubSite/UnitIIIBConductorsCapacitorsDielectrics.htm","timestamp":"2014-04-21T13:10:49Z","content_type":null,"content_length":"169990","record_id":"<urn:uuid:783e95ea-9360-4068-94c4-c149ec1e6d0a>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00505-ip-10-147-4-33.ec2.internal.warc.gz"} |
Modeling dispersal gradients
P. D. Esker^1, A. H. Sparks^1, G. Antony^1, M. Bates^2, W. Dall' Acqua^1, E. E. Frank^1, L. Huebel^3, V. Segovia^1, and K. A. Garrett^1
^1Dept. of Plant Pathology, Kansas State University, Manhattan, KS
^2Dept. of Horticulture, Forestry, and Recreation Resources, Kansas State University, Manhattan, KS
^3Dept. of Mathematics, Kansas State University, Manhattan, KS
Current address of P. D. Esker: Dept. of Plant Pathology, University of Wisconsin, Madison, WI, USA
Esker, P.D., A.H. Sparks, G. Antony, M. Bates, W. Dall' Acqua, E.E. Frank, L. Huebel, V. Segovia, and K.A. Garrett, 2007. Ecology and Epidemiology in R: Modeling dispersal gradients. The Plant
Health Instructor. DOI:10.1094/PHI-A-2007-1226-03.
Student Learning Goals
After completion of this module:
• Students will understand how R can be used to model dispersal and disease gradients.
• Students will be able to:
1. use R to compare different dispersal gradient models,
2. use R to compare and analyze primary versus secondary gradients,
3. run simulations in R that illustrate how an epidemic changes in space and time.
We would appreciate feedback for improving this paper and information about how it has been used for study and teaching. Please send your feedback to kgarrett@ksu.edu. Please include the following
text in the e-mail subject line, "Feedback on R Modules", to make sure your comments are received.
The dispersal (or movement) of plant pathogens is an essential component for spread of plant diseases and may occur within a field or across continents. Dispersal may be defined as the movement of
propagative units of a pathogen from the original source, or focus (Campbell and Madden 1990). Mechanisms of dispersal differ widely among plant pathogens, including the following mechanisms
(Campbell and Madden 1990).
• Spore ejection
• Wind
• Insect and other arthropod vectors
• Nematode vectors
• Rain
• Water runoff
• Movement of infected plant material
• Human influences such as movement of propagules in farm machinery
Dispersal and disease gradients corresponding to these mechanisms are often estimated. Dispersal gradients represent the frequency distribution of the distances traveled by all individuals in a
population and the application of dispersal gradients has been useful for characterizing unidirectional dispersal (Nathan et al. 2003). A key concept for increasing our understanding of dispersal is
the difference between dispersal (inoculum) gradients and disease gradients. Inoculum gradients describe movement of the propagative unit, where host availability is not necessarily required.
Disease gradients take into account all events leading to the spread of disease, including release of inoculum, transport, and deposition, as well as the presence of susceptible hosts in a
disease-conducive environment.
For plant pathogens, the primary sources of inoculum are often one of three general types: point, line, or area. A point source typically has a diameter smaller than 1% of the gradient length
(Campbell and Madden 1990; Zadoks and Schein 1979), while line or area sources do not necessarily have a fixed size. For example, a line source may be a row of diseased plants, and pathogen/disease
measures are made at increasing distances away from this source.
Furthermore, there are two types of propagative dispersal gradients that need consideration: primary and secondary (Campbell and Madden 1990; Gregory 1968). Primary disease gradients indicate the
dispersal potential of a pathogen (inoculum) in a single disease cycle (single source). Secondary disease gradients occur when inoculum is moved from lesions (plants) that had been infected during
the primary dispersal event. In the following case studies, both primary and secondary gradients will be illustrated.
In addition to measurements of dispersal within fields, long distance dispersal (LDD) is important for many plant pathogens (e.g., Asian soybean rust, tobacco blue mold). Successful transmission of
plant diseases over long distances often depends on the following:
1. the reproductive rate of the pathogen;
2. the carrying capacity of the source locality;
3. the role of atmospheric turbulence, stability, and wind speed; and
4. the survival of spores during exposure to inhospitable temperature and humidity and to UVB radiation from the sun.
While many spores may be killed during atmospheric transport, a sufficient number often remain viable to cause new infections and epidemics (Campbell and Madden 1990).
In this document we introduce concepts about dispersal for different types of plant pathogens. Four case studies are introduced that emphasize the following concepts:
1. primary disease gradients;
2. secondary disease gradients;
3. effects of droplet size and number on dispersal; and
4. dispersal in large field studies.
Our analysis and illustration of the dispersal process use the statistical package R, introduced in an associated set of exercises.
Dispersal of Bacteria
Plant pathogenic bacteria may be dispersed via several mechanisms: rain, wind, contaminated/infected seed, insects, and contaminated farm equipments (Quinn et al. 1980). Splash dispersal is an
important mechanism for short distance dispersal, typically across distances less than one meter. Splash-dispersed spores or bacteria are produced in mucilage and it is this mucilage that makes them
stick to the plant surface, thereby reducing the role of wind dispersal. Several factors influence splash dispersal: inoculum concentration at the source, orientation and surface characteristics of
the source, and size and velocity of raindrops. Dispersal effectiveness (i.e., distance traveled) depends on the momentum of the falling raindrops and larger rain drops are found to be more
effective. Splash dispersal may also occur via overhead irrigation, such as sprinkler irrigation. Drips from the canopies saturated with rain, fog, mist, and dew have the same effect, and to some
extent, cause vertical dispersal of bacteria (Fitt et al. 1989). Xanthomonas campestris pv. malvacearum (causal agent of angular leaf spot of cotton) and Erwinia carotovora subsp. atroseptica (black
leg of potato) are two examples of splash-dispersed bacteria.
For LDD, wind is the primary mechanism, mainly for dry inoculum. Dry dispersed pathogens produce lighter and smaller dispersal structures that can easily become airborne (Gregory et al. 1959). For
example, wind plays a major role in the dispersal of Xanthomonas axonopodis pv. citri, causing citrus canker. Furthermore, more than fifty different plant pathogenic bacteria are found to be
dispersed through infested seeds (Nino-Liu et al. 2006). One example is Pseudomonas syringae pv. phaseolicola, causing halo blight of bean. Five infected seeds out of ten thousand can cause an
epidemic (Trigalet and Biduad 1978). Lastly, insects like honeybees are important dispersal agents of bacterial pathogens affecting fruits and flowers, such as Erwinia amylovora, causing fire blight
of pome fruits.
Dispersal of Fungi
Fungi have a wide array of dispersal mechanisms and dispersal gradients. For example, plant pathogenic fungi that associate only with plant roots (soil-borne) have relatively short dispersal
gradients compared to fungi that associate with plant foliage and flowers. Also, the spore type greatly influences dispersal distance. Spores of fungi that are produced on aerial parts of a plant,
such as flowers or leaves, can be dispersed easily and over a wide range of distances. Fungi which colonize the plant vascular system rely on vectors for dispersal and so do not disperse as readily.
The dispersal of these pathogens depends on the range of the vector. Soilborne fungi typically disperse very slowly (Agrios 2004).
For the majority of plant pathogenic fungi, dispersal is dependent on factors such as wind, water, birds, insects, other animal, and humans, with the primary dispersal progagules being spores
(Agrios 2004). Fragments of hyphae and sclerotia can also be disseminated, although that is not as common (Agrios 2004). Release mechanisms may be triggered by environmental factors, including
changes in irradiance, air temperature, and relative humidity (Aylor 1990).
Spores can be actively discharged into the air or released by strong winds or light breezes and can travel distances ranging from a few centimeters to a few kilometers (Agrios 2004). For example,
Rambert et al. (1998) found that the wind speed for spore dispersal differed between leaf rust and stripe rust. For leaf rust, spores were not released until the wind speed reached 2.8 m s^-1, while
for stripe rust, spores were released when wind speeds were 2.3 m s^-1 or greater. Lacey (1996) described three steps for fungal spore movement: take-off, dispersal, and deposition. Take-off
involved spore release from a diseased plant, followed by dispersal to a different location, where finally deposition enabled the landing and subsequent infection of a new plant.
Rain is also important in spore dispersal, as water drops can cause passive spore removal when contacting a diseased leaf (Geagea et al. 1999). Rainfall has been shown to enhance spore removal in
some species infected with rust (Geagea et al. 1999) and stripe rust occurrence was closely associated with the amount of rainfall recorded in field studies (Park 1990). Spores released as a result
of water drops do not travel as far as spores released in the wind. Madden (1997) found that transport distance was usually less than 15 cm for each splash event. In addition to its role in
dispersal, rain also provides an environment conducive to infection for many fungi.
Dispersal of Viruses
Viruses differ from most other pathogens in that they cannot penetrate an intact plant cuticle and cellulose found in the cell wall (Hull 2002). Viruses overcome this problem by utilizing methods of
transmission that bypass the need to penetrate the outer surface of a plant, such as seed transmission or vegetative propagation, or by penetrating through a wound in the plant, such as through
mechanical or insect transmission. Virus transmission by insects involves interactions among the virus, vector and the host plant. The two most important phyla in the transmission of viruses are
Arthropoda and Nemata. Viruses can be taken up in one of two ways. Circulative viruses are taken up internally within the vector organism and pass through the vector's interior. Non-circulative
viruses are taken up externally and these viruses do not pass through the vector's interior.
Mechanical transmission involves the introduction of the virus into a wound made on the surface of the plant. When using mechanical transmission of viruses in experiments, the goal is to make many
small wounds on a plant surface without causing the death of plant cells (Hull 2002). This allows successful entry of the virus particles into the plant which will facilitate viral infection. The
most common method for mechanical inoculation of plants is use of abrasives (Corbett and Sisler 1964), which are rubbed on the surface of the plant causing small wounds in the tissue. Other less
common methods include spraying virus inoculum or pricking the epidermis of the plant and injecting the virus (Corbett and Sisler 1964). Viruses can also be transmitted by fungi, infected seed and
infected pollen. A pollinating insect can spread infected pollen to a new host, introducing the virus to new plants. Self-pollination will result in more infected seed than cross pollination between
a healthy plant and an infected plant (Corbett and Sisler 1964). Certain species of fungus-like organisms, typically Plasmodiophoromycetes or Chytridiomycetes, are vectors for some viruses (Hull
2002). Polymyxa species are important as vectors for such viruses as beet necrotic yellow vein virus (BNYVV) and wheat soilborne mosaic virus (WSBMV).
Dispersal of Nematodes
Soilborne nematodes move in the films of water that cling to soil particles. Nematode populations are generally denser and more prevalent in the world's warmer regions, where longer growing seasons
extend feeding periods and increase reproductive rates. Light, sandy soils generally harbor larger populations of plant-parasitic nematodes than clay soils. Nematodes in sandy soil benefit from the
more efficient aeration, the presence of fewer competitors and prey, and greater ease of movement through the root zone. Also, plants growing in readily drained soils are more likely to suffer from
intermittent drought, and are thus more vulnerable to damage by parasitic nematodes. Desert valleys and tropical sandy soils are particularly challenged by nematode overpopulation (Dropkin 1980).
Dispersal of nematodes is primarily through: running water (rain, irrigation, run-off, etc), and the movement of human beings, farm equipment, soil debris and plant material.
Preventing nematodes from entering uninfested areas is one way to avoid problems. Many nematodes may only spread on the order of a few feet per year without human dispersal through mechanisms such
as cultivation. The following practices may help minimize nematode dispersal:
1. use of certified planting material;
2. soil-less growing media in greenhouses;
3. cleaning soil from equipment before moving between fields (washing equipment, including tires, with water is most effective);
4. keeping excess irrigation water in a holding pond so that any nematodes present can settle out, pumping water from near the surface of the pond, and planning irrigation to minimize excess water;
5. preventing or reducing animal movement from infested to uninfested fields;
6. composting manure to kill any nematodes that might be present, before applying it to fields (Kodira and Westerdahl 1995), and eliminating important weed hosts, such as crabgrass, ragweed, and
cocklebur (Yepsen 1984).
Why Model Dispersal?
Dispersal processes underlie the development of disease foci. Information about the form of dispersal gradients is an essential component of spatially explicit epidemiological models (Roche et al.
1995). Dispersal models can provide insights into the mechanism of inoculum dispersal and deposition, the source of inoculum, and the physical processes underlying dispersal. Developing an accurate
dispersal model, however, requires a more complete understanding of the pathogen, its life cycle, spore or other propagule characteristics, the agents of dispersal and the interaction between the
propagules and the environment. There are two common methods for analyzing dispersal gradients: empirical models and physical models (Campbell and Madden 1990). For the development of empirical
models, researchers start with data sets and estimate parameters to fit equations that describe the probability of dispersal as a function of distance. For the development of physical models,
researchers start with theories based on physical laws describing the aerodynamic and other properties of pathogen propagules, and attempt to model the dispersal events mathematically. While
physical models enable a “more complete understanding of the dispersal event (canopy escape, liftoff and ascent, transport, descent and landing, impact)” (Isard et al. 2005), the mathematical
complexity and difficulty in obtaining all required model components often limit their direct application for many plant pathosystems. For an interesting example of how models might be applied for
long distance dispersal, see Aylor (2003).
Some of the same models used to describe disease progress over time can be used to study disease dispersal gradients. The significant difference between the two applications is that for disease
progress curves disease intensity tends to increase with increasing time, while pathogen dispersal and disease intensity tend to decrease with increasing distance from the source of inoculum.
The dispersal model most appropriate for describing the dispersal of a particular plant pathogen depends on that pathogen’s dispersal mechanism. A researcher might try several different types of
models to describe data from a given experiment or observational study. The form of the model that fits best may provide insight into the probable dispersal mechanism.
The following table presents some common dispersal models, as adapted from Campbell and Madden (1990).
Table 1. Common empirical dispersal models used to study plant pathogen dispersal.
Model Differential Equation Form Integrated Form Linearized Form Units of b
B none
E none
F none
These equations involve the following notation:
• y=y(s) may represent the concentration of inoculum, disease severity, or probability of infection at a point s units away from the source of infection. In the disease gradient context, the value
1-y represents the proportion of healthy host tissue.
• b is the rate parameter determining how steep the disease gradient is (a greater value for |b| leads to a steeper disease gradient). When b has no units (Models B, E, F) it is more difficult to
compare two models, as they might have different distance scales.
• The parameter a in some ways acts like an initial condition. In the event that y represents a probability, a will be chosen so that the integral of the probability of dispersal over the range of
potential distances from the source is 1.
Note that in models A, C, and D (the models in which b has units) it is assumed that the rate of change in dispersal with distance from the source is the same across all distances.
In each equation, y is nonlinear in terms of s, so nonlinear regression can be used to estimate the parameters. Another option, which is simpler in some cases, is to linearize the equations (the
resulting forms are listed in the table) and then use linear regression to estimate the parameters.
Models A and B, the exponential law and the power law respectively, are the simplest models presented here, and are both commonly applied. In the more general version of the exponential law, the s
in the integrated form can be raised to some power. In many cases where one of these models provides a good fit to data, the other may fit almost as well.
One difference between the exponential and power law models is the aforementioned pitfall of b having no units in the power law model, a pitfall that is avoided in the exponential model. Another
difference is that at the source, the power law model gives a disease intensity of ∞, which is unrealistic, while the exponential model gives a finite density that can be controlled through the
parameters. However, at a very small distance from the source, the exponential model does not necessarily reflect the high inoculum density, while the power law model will generally give the
necessarily large values. As a generalization, when pathogens are dispersed by splashing water, the exponential model may be a better fit, and when wind-dispersed pathogens are very small (smaller
than 10 μm) the power law model may be more appropriate (Campbell and Madden 1990).
The remaining models C-F take into account the amount of healthy host tissue. Equation C is analogous to the monomolecular model used in studying disease progress over time. Models C and D assume
that the rate of change in dispersal is constant over different distances, while E and F do not. However, C and D have units for the parameter b (making it easier to compare models fit to different
data sets) while E and F do not. C and E are functions of the amount of healthy tissue remaining, while D and F are functions of both the amount of diseased tissue remaining and the amount of
healthy tissue remaining, based on linearized dispersal models where the response is ln(1/1-y)).
A specific form of equation F which has been applied in epidemiology is the Cauchy model:α is the median dispersal (Shaw 1997).
Next we use the R programming environment, introduced in a companion set of exercises (Garrett et al. 2007) to illustrate how changing parameters changes the shape of these models, as well as
illustrating the differences between models A through F. First consider the shapes for the exponential and power law models. The following R code produces a function that will plot the two curves
with parameters a1 and b1 corresponding to the power law model and parameters a2 and b2 corresponding to the exponential model. This new function incorporates an existing R function, curve; for more
information about this function, enter help(curve) on the R command line. For more information about writing functions in R, see Garrett et al. (2007).
plot.exp.power <-
from=0, to=max1,
xlab='Distance (m) from source',
ylab='Disease incidence',
title(main='Power law (black) and Exponential (red)')
from=0, to=max2,
To explore differences in these two curves, plots may be made applying the function with parameter estimates from a particular example, such as the parameters obtained from research on wheat stripe
rust in Hermiston, Oregon in 2002 (Sackett and Mundt 2005), Case Study #4.
plot.exp.power(a1=184.9, a2=18.59,
b1=2.07, b2=0.106,
max1=60, max2=60)
The resulting graph looks like:
Click on the image for larger version.
If parameters are modified as:
plot.exp.power(a1=113.9, a2=11.49,
b1=2.07, b2=0.106,
max1=60, max2=60)
the resulting graph has a less steep dispersal gradient:
Click on the image for larger version.
To explore models C and D, a similar plot function may be defined as for models A and B. As an example:
plot.C.D <-function(a1,a2,b1,b2,max1,max2){
from=0, to=max1,
xlab='Distance (m) from source',
ylab='Disease incidence',
title(main='Model C (black) and Model D (red)')
from=0, to=max2,
which, when applied using the following parameter values:
plot.C.D(a1=0.03, a2=5.59,
b1=0.08, b2=0.106,
max1=60, max2=60)
Click on the image for larger version.
Likewise, for models E and F:
plot.E.F <- function(a1,a2,b1,b2,max1,max2){
from=0, to=max1,
xlab='Distance (m) from source',
ylab='Disease incidence',
title(main='Model E (black) and Model F (red)')
from=0, to=max2,
The following plot may be observed with the specified parameters:
plot.E.F(a1=0.09, a2=2,
b1=0.6, b2=0.6,
max1=60, max2=60)
Click on the image for larger version.
Suggested Exercise
Using the R functions applied above, examine how different parameter estimates for all the different models modify the shape of the dispersal curves. Also, modify the maximum dispersal distances
plotted (max1 and max2 in the functions) and see what this shows about the gradients fit by the different models.
Next, primary disease gradients of bacteria
1Dept. of Plant Pathology, Kansas State University, Manhattan, KS 2Dept. of Horticulture, Forestry, and Recreation Resources, Kansas State University, Manhattan, KS 3Dept. of Mathematics, Kansas
State University, Manhattan, KS Current address of P. D. Esker: Dept. of Plant Pathology, University of Wisconsin, Madison, WI, USA
Esker, P.D., A.H. Sparks, G. Antony, M. Bates, W. Dall' Acqua, E.E. Frank, L. Huebel, V. Segovia, and K.A. Garrett, 2007. Ecology and Epidemiology in R: Modeling dispersal gradients. The Plant Health
Instructor. DOI:10.1094/PHI-A-2007-1226-03.
We would appreciate feedback for improving this paper and information about how it has been used for study and teaching. Please send your feedback to kgarrett@ksu.edu. Please include the following
text in the e-mail subject line, "Feedback on R Modules", to make sure your comments are received.
The dispersal (or movement) of plant pathogens is an essential component for spread of plant diseases and may occur within a field or across continents. Dispersal may be defined as the movement of
propagative units of a pathogen from the original source, or focus (Campbell and Madden 1990). Mechanisms of dispersal differ widely among plant pathogens, including the following mechanisms
(Campbell and Madden 1990).
Dispersal and disease gradients corresponding to these mechanisms are often estimated. Dispersal gradients represent the frequency distribution of the distances traveled by all individuals in a
population and the application of dispersal gradients has been useful for characterizing unidirectional dispersal (Nathan et al. 2003). A key concept for increasing our understanding of dispersal is
the difference between dispersal (inoculum) gradients and disease gradients. Inoculum gradients describe movement of the propagative unit, where host availability is not necessarily required. Disease
gradients take into account all events leading to the spread of disease, including release of inoculum, transport, and deposition, as well as the presence of susceptible hosts in a disease-conducive
For plant pathogens, the primary sources of inoculum are often one of three general types: point, line, or area. A point source typically has a diameter smaller than 1% of the gradient length
(Campbell and Madden 1990; Zadoks and Schein 1979), while line or area sources do not necessarily have a fixed size. For example, a line source may be a row of diseased plants, and pathogen/disease
measures are made at increasing distances away from this source.
Furthermore, there are two types of propagative dispersal gradients that need consideration: primary and secondary (Campbell and Madden 1990; Gregory 1968). Primary disease gradients indicate the
dispersal potential of a pathogen (inoculum) in a single disease cycle (single source). Secondary disease gradients occur when inoculum is moved from lesions (plants) that had been infected during
the primary dispersal event. In the following case studies, both primary and secondary gradients will be illustrated.
In addition to measurements of dispersal within fields, long distance dispersal (LDD) is important for many plant pathogens (e.g., Asian soybean rust, tobacco blue mold). Successful transmission of
plant diseases over long distances often depends on the following:
While many spores may be killed during atmospheric transport, a sufficient number often remain viable to cause new infections and epidemics (Campbell and Madden 1990).
In this document we introduce concepts about dispersal for different types of plant pathogens. Four case studies are introduced that emphasize the following concepts:
Our analysis and illustration of the dispersal process use the statistical package R, introduced in an associated set of exercises.
Plant pathogenic bacteria may be dispersed via several mechanisms: rain, wind, contaminated/infected seed, insects, and contaminated farm equipments (Quinn et al. 1980). Splash dispersal is an
important mechanism for short distance dispersal, typically across distances less than one meter. Splash-dispersed spores or bacteria are produced in mucilage and it is this mucilage that makes them
stick to the plant surface, thereby reducing the role of wind dispersal. Several factors influence splash dispersal: inoculum concentration at the source, orientation and surface characteristics of
the source, and size and velocity of raindrops. Dispersal effectiveness (i.e., distance traveled) depends on the momentum of the falling raindrops and larger rain drops are found to be more
effective. Splash dispersal may also occur via overhead irrigation, such as sprinkler irrigation. Drips from the canopies saturated with rain, fog, mist, and dew have the same effect, and to some
extent, cause vertical dispersal of bacteria (Fitt et al. 1989). Xanthomonas campestris pv. malvacearum (causal agent of angular leaf spot of cotton) and Erwinia carotovora subsp. atroseptica (black
leg of potato) are two examples of splash-dispersed bacteria.
For LDD, wind is the primary mechanism, mainly for dry inoculum. Dry dispersed pathogens produce lighter and smaller dispersal structures that can easily become airborne (Gregory et al. 1959). For
example, wind plays a major role in the dispersal of Xanthomonas axonopodis pv. citri, causing citrus canker. Furthermore, more than fifty different plant pathogenic bacteria are found to be
dispersed through infested seeds (Nino-Liu et al. 2006). One example is Pseudomonas syringae pv. phaseolicola, causing halo blight of bean. Five infected seeds out of ten thousand can cause an
epidemic (Trigalet and Biduad 1978). Lastly, insects like honeybees are important dispersal agents of bacterial pathogens affecting fruits and flowers, such as Erwinia amylovora, causing fire blight
of pome fruits.
Fungi have a wide array of dispersal mechanisms and dispersal gradients. For example, plant pathogenic fungi that associate only with plant roots (soil-borne) have relatively short dispersal
gradients compared to fungi that associate with plant foliage and flowers. Also, the spore type greatly influences dispersal distance. Spores of fungi that are produced on aerial parts of a plant,
such as flowers or leaves, can be dispersed easily and over a wide range of distances. Fungi which colonize the plant vascular system rely on vectors for dispersal and so do not disperse as readily.
The dispersal of these pathogens depends on the range of the vector. Soilborne fungi typically disperse very slowly (Agrios 2004).
For the majority of plant pathogenic fungi, dispersal is dependent on factors such as wind, water, birds, insects, other animal, and humans, with the primary dispersal progagules being spores (Agrios
2004). Fragments of hyphae and sclerotia can also be disseminated, although that is not as common (Agrios 2004). Release mechanisms may be triggered by environmental factors, including changes in
irradiance, air temperature, and relative humidity (Aylor 1990).
Spores can be actively discharged into the air or released by strong winds or light breezes and can travel distances ranging from a few centimeters to a few kilometers (Agrios 2004). For example,
Rambert et al. (1998) found that the wind speed for spore dispersal differed between leaf rust and stripe rust. For leaf rust, spores were not released until the wind speed reached 2.8 m s-1, while
for stripe rust, spores were released when wind speeds were 2.3 m s-1 or greater. Lacey (1996) described three steps for fungal spore movement: take-off, dispersal, and deposition. Take-off involved
spore release from a diseased plant, followed by dispersal to a different location, where finally deposition enabled the landing and subsequent infection of a new plant.
Rain is also important in spore dispersal, as water drops can cause passive spore removal when contacting a diseased leaf (Geagea et al. 1999). Rainfall has been shown to enhance spore removal in
some species infected with rust (Geagea et al. 1999) and stripe rust occurrence was closely associated with the amount of rainfall recorded in field studies (Park 1990). Spores released as a result
of water drops do not travel as far as spores released in the wind. Madden (1997) found that transport distance was usually less than 15 cm for each splash event. In addition to its role in
dispersal, rain also provides an environment conducive to infection for many fungi.
Viruses differ from most other pathogens in that they cannot penetrate an intact plant cuticle and cellulose found in the cell wall (Hull 2002). Viruses overcome this problem by utilizing methods of
transmission that bypass the need to penetrate the outer surface of a plant, such as seed transmission or vegetative propagation, or by penetrating through a wound in the plant, such as through
mechanical or insect transmission. Virus transmission by insects involves interactions among the virus, vector and the host plant. The two most important phyla in the transmission of viruses are
Arthropoda and Nemata. Viruses can be taken up in one of two ways. Circulative viruses are taken up internally within the vector organism and pass through the vector's interior. Non-circulative
viruses are taken up externally and these viruses do not pass through the vector's interior.
Mechanical transmission involves the introduction of the virus into a wound made on the surface of the plant. When using mechanical transmission of viruses in experiments, the goal is to make many
small wounds on a plant surface without causing the death of plant cells (Hull 2002). This allows successful entry of the virus particles into the plant which will facilitate viral infection. The
most common method for mechanical inoculation of plants is use of abrasives (Corbett and Sisler 1964), which are rubbed on the surface of the plant causing small wounds in the tissue. Other less
common methods include spraying virus inoculum or pricking the epidermis of the plant and injecting the virus (Corbett and Sisler 1964). Viruses can also be transmitted by fungi, infected seed and
infected pollen. A pollinating insect can spread infected pollen to a new host, introducing the virus to new plants. Self-pollination will result in more infected seed than cross pollination between
a healthy plant and an infected plant (Corbett and Sisler 1964). Certain species of fungus-like organisms, typically Plasmodiophoromycetes or Chytridiomycetes, are vectors for some viruses (Hull
2002). Polymyxa species are important as vectors for such viruses as beet necrotic yellow vein virus (BNYVV) and wheat soilborne mosaic virus (WSBMV).
Soilborne nematodes move in the films of water that cling to soil particles. Nematode populations are generally denser and more prevalent in the world's warmer regions, where longer growing seasons
extend feeding periods and increase reproductive rates. Light, sandy soils generally harbor larger populations of plant-parasitic nematodes than clay soils. Nematodes in sandy soil benefit from the
more efficient aeration, the presence of fewer competitors and prey, and greater ease of movement through the root zone. Also, plants growing in readily drained soils are more likely to suffer from
intermittent drought, and are thus more vulnerable to damage by parasitic nematodes. Desert valleys and tropical sandy soils are particularly challenged by nematode overpopulation (Dropkin 1980).
Dispersal of nematodes is primarily through: running water (rain, irrigation, run-off, etc), and the movement of human beings, farm equipment, soil debris and plant material.
Preventing nematodes from entering uninfested areas is one way to avoid problems. Many nematodes may only spread on the order of a few feet per year without human dispersal through mechanisms such as
cultivation. The following practices may help minimize nematode dispersal:
Dispersal processes underlie the development of disease foci. Information about the form of dispersal gradients is an essential component of spatially explicit epidemiological models (Roche et al.
1995). Dispersal models can provide insights into the mechanism of inoculum dispersal and deposition, the source of inoculum, and the physical processes underlying dispersal. Developing an accurate
dispersal model, however, requires a more complete understanding of the pathogen, its life cycle, spore or other propagule characteristics, the agents of dispersal and the interaction between the
propagules and the environment. There are two common methods for analyzing dispersal gradients: empirical models and physical models (Campbell and Madden 1990). For the development of empirical
models, researchers start with data sets and estimate parameters to fit equations that describe the probability of dispersal as a function of distance. For the development of physical models,
researchers start with theories based on physical laws describing the aerodynamic and other properties of pathogen propagules, and attempt to model the dispersal events mathematically. While physical
models enable a “more complete understanding of the dispersal event (canopy escape, liftoff and ascent, transport, descent and landing, impact)” (Isard et al. 2005), the mathematical complexity and
difficulty in obtaining all required model components often limit their direct application for many plant pathosystems. For an interesting example of how models might be applied for long distance
dispersal, see Aylor (2003).
Some of the same models used to describe disease progress over time can be used to study disease dispersal gradients. The significant difference between the two applications is that for disease
progress curves disease intensity tends to increase with increasing time, while pathogen dispersal and disease intensity tend to decrease with increasing distance from the source of inoculum.
The dispersal model most appropriate for describing the dispersal of a particular plant pathogen depends on that pathogen’s dispersal mechanism. A researcher might try several different types of
models to describe data from a given experiment or observational study. The form of the model that fits best may provide insight into the probable dispersal mechanism.
The following table presents some common dispersal models, as adapted from Campbell and Madden (1990).
Table 1. Common empirical dispersal models used to study plant pathogen dispersal.
Model Differential Equation Form Integrated Form Linearized Form Units of b
B none
E none
F none
y=y(s) may represent the concentration of inoculum, disease severity, or probability of infection at a point s units away from the source of infection. In the disease gradient context, the value 1-y
represents the proportion of healthy host tissue.
b is the rate parameter determining how steep the disease gradient is (a greater value for |b| leads to a steeper disease gradient). When b has no units (Models B, E, F) it is more difficult to
compare two models, as they might have different distance scales.
The parameter a in some ways acts like an initial condition. In the event that y represents a probability, a will be chosen so that the integral of the probability of dispersal over the range of
potential distances from the source is 1.
Note that in models A, C, and D (the models in which b has units) it is assumed that the rate of change in dispersal with distance from the source is the same across all distances.
In each equation, y is nonlinear in terms of s, so nonlinear regression can be used to estimate the parameters. Another option, which is simpler in some cases, is to linearize the equations (the
resulting forms are listed in the table) and then use linear regression to estimate the parameters.
Models A and B, the exponential law and the power law respectively, are the simplest models presented here, and are both commonly applied. In the more general version of the exponential law, the s in
the integrated form can be raised to some power. In many cases where one of these models provides a good fit to data, the other may fit almost as well.
One difference between the exponential and power law models is the aforementioned pitfall of b having no units in the power law model, a pitfall that is avoided in the exponential model. Another
difference is that at the source, the power law model gives a disease intensity of ∞, which is unrealistic, while the exponential model gives a finite density that can be controlled through the
parameters. However, at a very small distance from the source, the exponential model does not necessarily reflect the high inoculum density, while the power law model will generally give the
necessarily large values. As a generalization, when pathogens are dispersed by splashing water, the exponential model may be a better fit, and when wind-dispersed pathogens are very small (smaller
than 10 μm) the power law model may be more appropriate (Campbell and Madden 1990).
The remaining models C-F take into account the amount of healthy host tissue. Equation C is analogous to the monomolecular model used in studying disease progress over time. Models C and D assume
that the rate of change in dispersal is constant over different distances, while E and F do not. However, C and D have units for the parameter b (making it easier to compare models fit to different
data sets) while E and F do not. C and E are functions of the amount of healthy tissue remaining, while D and F are functions of both the amount of diseased tissue remaining and the amount of healthy
tissue remaining, based on linearized dispersal models where the response is ln(1/1-y)).
A specific form of equation F which has been applied in epidemiology is the Cauchy model:α is the median dispersal (Shaw 1997).
Next we use the R programming environment, introduced in a companion set of exercises (Garrett et al. 2007) to illustrate how changing parameters changes the shape of these models, as well as
illustrating the differences between models A through F. First consider the shapes for the exponential and power law models. The following R code produces a function that will plot the two curves
with parameters a1 and b1 corresponding to the power law model and parameters a2 and b2 corresponding to the exponential model. This new function incorporates an existing R function, curve; for more
information about this function, enter help(curve) on the R command line. For more information about writing functions in R, see Garrett et al. (2007).
plot.exp.power <- function(a1,a2,b1,b2,max1,max2){ curve(a1*x^(-b1), from=0, to=max1, add=FALSE, lty=1, xlab='Distance (m) from source', ylab='Disease incidence', col='black', xlim=c(0,60), ylim=c
(0,50)) title(main='Power law (black) and Exponential (red)') curve(a2*exp(-b2*x), from=0, to=max2, add=TRUE, lty=1, col='red')}
To explore differences in these two curves, plots may be made applying the function with parameter estimates from a particular example, such as the parameters obtained from research on wheat stripe
rust in Hermiston, Oregon in 2002 (Sackett and Mundt 2005), Case Study #4.
To explore models C and D, a similar plot function may be defined as for models A and B. As an example:
plot.C.D <-function(a1,a2,b1,b2,max1,max2){curve(1-a1*exp(b1*x), from=0, to=max1, add=FALSE, lty=1, xlab='Distance (m) from source', ylab='Disease incidence', col='black', xlim=c(0,60), ylim=c(0,1))
title(main='Model C (black) and Model D (red)')curve(1/(1+a2*exp(b2*x)), from=0, to=max2, add=TRUE, lty=1, col='red')}
plot.E.F <- function(a1,a2,b1,b2,max1,max2){ curve(1-a1*x^(b1), from=0, to=max1, add=FALSE, lty=1, xlab='Distance (m) from source', ylab='Disease incidence', col='black', xlim=c(0,60), ylim=c(0,1))
title(main='Model E (black) and Model F (red)') curve(1/(1+a2*x^b2), from=0, to=max2, add=TRUE, lty=1, col='red')}
The following plot may be observed with the specified parameters:
Using the R functions applied above, examine how different parameter estimates for all the different models modify the shape of the dispersal curves. Also, modify the maximum dispersal distances
plotted (max1 and max2 in the functions) and see what this shows about the gradients fit by the different models. | {"url":"http://www.apsnet.org/EDCENTER/ADVANCED/TOPICS/ECOLOGYANDEPIDEMIOLOGYINR/MODELINGDISPERSALGRADIENTS/Pages/default.aspx","timestamp":"2014-04-17T09:43:18Z","content_type":null,"content_length":"102154","record_id":"<urn:uuid:92579559-daa7-488d-99b8-1fdd4b6be498>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00047-ip-10-147-4-33.ec2.internal.warc.gz"} |
Monte Carlo Monopoly
May 1999
Dr. John Haigh, a mathematics lecturer from the University of Sussex, has found the ultimate strategy for winning at Monopoly: use the help of a computer!
Dr. Haigh wanted to discover which were the best squares to buy and build hotels and houses on. He wrote a program that simulated an enormously long game - more than ten million rolls of the dice -
in order to discover which squares players were most likely to land on. While greedy players are tempted to go for the high-rent squares such as Park Lane and Mayfair, he found that the orange
squares - Vine Street, Bow Street and Marlborough Street - are the handiest squares to own, because of their proximity to the Jail square.
"Because Go To Jail is the most frequently visited square, you need streets which your opponents are likely to land on when they are released. The orange and red squares are best and the green and
light blue the worst, because they are not favoured by their positioning to the jail", he told Michael Fleet of the Daily Telegraph.
Dr. Haigh has recently published a new book, Taking Chances, which is all about using probability in everyday life.
Exploring probability distributions with computer simulation
Dr Haigh’s calculations are an example of the application of a general method called the Monte Carlo Method. This method is sometimes used to calculate a quantity which has an exact value, but which
is unknown, by using many random "trials".
For example, suppose we want to know
Dr Haigh got a computer to make millions of dice rolls and used those rolls to work out where a Monopoly player would move on the board. What he was trying to find out was the relative probabilities
of landing on each possible square in the game, and the Monte Carlo method gave him approximations to those probabilities. He found that the probability of landing on Trafalgar Square was greater
than landing on any other street name.
Random dots falling inside (pink) and outside (blue) a circle.
Another famous example of the Monte Carlo method is to calculate | {"url":"http://plus.maths.org/content/os/issue8/news/monopoly/index","timestamp":"2014-04-16T13:50:11Z","content_type":null,"content_length":"28526","record_id":"<urn:uuid:a573c22a-a459-465b-9e75-07121277d11d>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00394-ip-10-147-4-33.ec2.internal.warc.gz"} |