Find a Greenway, VA SAT Math Tutor

...I am a former high school math teacher, with well over 10 years of full-time teaching and tutoring experience. I typically have more than one way to explain most math problems, and I have a busy schedule because of my success with students. Though my schedule gets pretty full, I work six days a week and there are openings here and there, so please contact me to discuss.
28 Subjects: including SAT math, chemistry, calculus, physics

...D. Continuity as a Property of Functions. II.
21 Subjects: including SAT math, calculus, statistics, geometry

...Although they usually brought a specific assignment to work on, I always tried to teach them skills they could apply to future assignments as well. My goal was to make better writers, not just better writing. I have also privately tutored students for the last four years.
22 Subjects: including SAT math, English, reading, writing

...I scored a 790/740 Math/Verbal on my SATs and went through my entire high school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studying that allowed me to efficiently learn all the required materials whil...
15 Subjects: including SAT math, calculus, physics, GRE

...I'm currently running two algebra classes, and several tutoring classes for 9th grade chemistry, Algebra II, physics, and pre-calculus. Thank you, Yu
I'm a Chessmaster. I've been tutoring chess for the last 10 years outside of WyzAnt.
24 Subjects: including SAT math, chemistry, reading, calculus
{"url":"http://www.purplemath.com/greenway_va_sat_math_tutors.php","timestamp":"2014-04-21T14:53:24Z","content_type":null,"content_length":"23977","record_id":"<urn:uuid:424dbf1d-8fa0-42cc-8ba7-f7c3db85682e>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
UW Probability Seminar Joint Probability and Rainwater seminar Time: Tuesday, April 22, 2014, 1:30-3:30 pm. Location: SMI 307 Speaker: Jason Miller (MIT) Title: Random Surfaces and Quantum Loewner Evolution Abstract: What is the canonical way to choose a random, discrete, two-dimensional manifold which is homeomorphic to the sphere? One procedure for doing so is to choose uniformly among the set of surfaces which can be generated by gluing together $n$ Euclidean squares along their boundary segments. This is an example of what is called a random planar map and is a model of what is known as pure discrete quantum gravity. The asymptotic behavior of these discrete, random surfaces has been the focus of a large body of literature in both probability and combinatorics. This has culminated with the recent works of Le Gall and Miermont which prove that the $n \to \infty$ distributional limit of these surfaces exists with respect to the Gromov-Hausdorff metric after appropriate rescaling. The limiting random metric space is called the Brownian map. Another canonical way to choose a random, two-dimensional manifold is what is known as Liouville quantum gravity (LQG). This is a theory of continuum quantum gravity introduced by Polyakov to model the time-space trajectory of a string. Its metric when parameterized by isothermal coordinates is formally described by $e^{\gamma h} (dx^2 + dy^2)$ where $h$ is an instance of the continuum Gaussian free field, the standard Gaussian with respect to the Dirichlet inner product. Although $h$ is not a function, Duplantier and Sheffield succeeded in constructing LQG rigorously as a random area measure. LQG for $\gamma=\sqrt{8/3}$ is conjecturally equivalent to the Brownian map and to the limits of other discrete theories of quantum gravity for other values of $\gamma$. In this talk, I will describe a new family of growth processes called quantum Loewner evolution (QLE) which we propose using to endow LQG with a distance function which is isometric to the Brownian map. I will also explain how QLE is related to DLA, the dielectric breakdown model, and SLE. Based on joint works with Scott Sheffield.
{"url":"http://www.math.washington.edu/~zchen/Seminar/pseminar.html","timestamp":"2014-04-19T04:19:05Z","content_type":null,"content_length":"3813","record_id":"<urn:uuid:f3782d1c-2ce5-4a54-89ec-282307f03be5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00654-ip-10-147-4-33.ec2.internal.warc.gz"}
The law of large numbers or the flaw of averages

"If the probability of a given outcome to an event is P and the event is repeated N times, then the larger N becomes, the more likely it is that the number of occurrences of the given outcome will be close, in proportion, to N*P."

For example: if the probability of throwing a double-6 with two dice is 1/36, then the more times we throw the dice, the closer, in proportion, will be the number of double-6s thrown to 1/36 of the total number of throws. This is, of course, what in everyday language is known as the law of averages.

Overlooking the vital words 'in proportion' in the above definition leads to much misunderstanding among gamblers. The 'gambler's fallacy' lies in the idea that "in the long run" chances will even out. Thus if a coin has been spun 100 times, and has landed 60 times head uppermost and 40 times tails, many gamblers will state that tails are now due for a run to get even. There are fancy names for this belief. The theory is called the maturity of chances, and the expected run of tails is known as a 'corrective', which will bring the total of tails eventually equal to the total of heads. The belief is that the 'law' of averages really is a law which states that in the longest of long runs the totals of both heads and tails will eventually become equal. In fact, the opposite is really the case. As the number of tosses gets larger, the probability is that the percentage of heads or tails thrown gets nearer to 50%, but that the difference between the actual number of heads or tails thrown and the number representing 50% gets larger.

Let us return to our example of 60 heads and 40 tails in 100 spins, and imagine that the next 100 spins result in 56 heads and 44 tails. The 'corrective' has set in, as the percentage of heads has now dropped from 60 per cent to 58 per cent. But there are now 32 more heads than tails, where there were only 20 before. The 'law of averages' follower who backed tails is 12 more tosses to the bad. If the third hundred tosses result in 50 heads and 50 tails, the 'corrective' is still proceeding, as there are now 166 heads in 300 tosses, down to 55.33 per cent, but the tails backer is still 32 tosses behind.

Put another way, we would not be too surprised if after 100 tosses there were 60 per cent heads. We would be astonished if after a million tosses there were still 60 per cent heads, as we would expect the deviation from 50 per cent to be much smaller. Similarly, after 100 tosses, we are not too surprised that the difference between heads and tails is 20. After a million tosses we would be very surprised to find that the difference was not very much larger than 20.

A chance event is uninfluenced by the events which have gone before. If a true die has not shown 6 for 30 throws, the probability of a 6 is still 1/6 on the 31st throw. One wonders if this simple idea offends some human instinct, because it is not difficult to find gambling experts who will agree with all the above remarks, and will express them themselves in books and articles, only to advocate elsewhere the principle of 'stepping in when a corrective is due'.

It is interesting that, despite significant statistical evidence against it, people will go to extreme lengths to act on their belief that a corrective is due. The number 53 in an Italian lottery had failed to appear for some time, and this led to a public obsession with betting ever larger amounts on the number. People staked so much on this 'corrective' that the failure of the number 53 to occur for two years was blamed for several deaths and bankruptcies. It seems that a large number of human minds are simply unable to cope with the often seemingly contradictory laws of probability. If only they had listened to their maths teacher. The full story is published here.

An understanding of the law of large numbers leads to a realisation that what appear to be fantastic improbabilities are not remarkable at all, but merely to be expected.
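A short simulation makes the point concrete. This is a minimal sketch in Python (the language choice and the checkpoint sizes are mine, not the article's): it prints the running percentage of heads together with the absolute head-to-tail gap, and typically shows the percentage settling toward 50 while the gap grows.

    import random

    random.seed(42)
    heads = 0
    checkpoints = {100, 1_000, 10_000, 100_000, 1_000_000}
    for n in range(1, 1_000_001):
        heads += random.random() < 0.5          # one fair coin toss
        if n in checkpoints:
            gap = abs(heads - (n - heads))      # |heads - tails|
            print(f"n={n:>9,}  heads={100 * heads / n:6.2f}%  gap={gap}")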
{"url":"http://www.probabilitytheory.info/content/item/6-the-law-of-large-numbers-/-the-law-of-averages","timestamp":"2014-04-17T04:53:03Z","content_type":null,"content_length":"30472","record_id":"<urn:uuid:a3b96553-8988-4679-aa49-a8781f6cd3c3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Find an Argo, IL Algebra Tutor

...I have three children of my own (14, 7, and 4). My strengths are in Math, Writing, and strategic test-taking. I look forward to working with you. Algebra 1 is one of my favorite subjects to teach. I am intimately familiar with multiple-step equations, graphing, systems of equations, and all of the rules that are associated with algebra.
28 Subjects: including algebra 1, algebra 2, English, writing

...I have completed undergraduate coursework in the following math subjects: differential and integral calculus, advanced calculus, linear algebra, differential equations, advanced differential equations with applications, and complex analysis. I have a PhD in experimental nuclear physics. I hav...
10 Subjects: including algebra 1, algebra 2, physics, geometry

...As a student myself, I realize that physics and mathematics can often appear as the most daunting of school subjects. The puzzles they pose demand a knack for creativity and the resolve for intense contemplation. Additionally, I propose that the seed for successful comprehension resides in you as much as it does in the most accomplished of scholars.
7 Subjects: including algebra 1, algebra 2, calculus, physics

...My favorite part of teaching is discovering how a student learns best, and then watching that student blossom. It warms my heart to witness a student feeling confident in a subject that they were once struggling with. I believe that every student is capable of successfully learning.
70 Subjects: including algebra 1, algebra 2, English, chemistry

...I worked with children ages 5-8 years old while working at B.A.S.I.C., Before and After School Instructional Care, at St. Walter School in Roselle, IL for one year. While working there I would help the children with their homework and play games with them.
19 Subjects: including algebra 1, algebra 2, calculus, reading
{"url":"http://www.purplemath.com/Argo_IL_Algebra_tutors.php","timestamp":"2014-04-16T19:04:31Z","content_type":null,"content_length":"23872","record_id":"<urn:uuid:2e2d4072-9b17-4ba3-9ca5-27f0b57c013c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
note gjb

NP-completeness is a property of a problem, not of an algorithm. It means that no polynomial-time algorithm is known for the problem: if you increase the length of the input, the running time of every known exact algorithm grows superpolynomially. (Of course there are input cases which are easy, but many of interest are not.) Essentially, brute force is the only known method to tackle the problem exactly.

The question is on the relation between the behavior of an algorithm that decides a language and the class to which this language belongs. For regular languages and context-free languages, polynomial-time algorithms are known. But does the fact that matching regular expressions with backreferences is proven NP-complete necessarily mean that the languages they describe are a superset of the regular and context-free languages?

It certainly means it is hard to decide whether or not a certain string is an element of the language described by a regular expression with backreferences. But what does it tell us about the expressive power?

The expression /^(.*)\1$/ defines the language { ww | w in sigma* }, which is neither regular nor context-free. On the other hand, regular expressions with backreferences can't describe { a^n b^n | n >= 0 }, which is definitely context-free.

So on the one hand, regular expressions with backreferences describe languages that are not context-free, but they can't describe all context-free languages either! This example illustrates that one has to be very careful when judging expressive power from algorithmic complexity. A high complexity is a sign that the expressive power must be high in some cases, but doesn't guarantee that everything can be done.

Incidentally, the code below shows two Perl regular expressions that describe non-regular languages. For { a^n b^n | n >= 0 }:

    /^ (a*) (??{sprintf("b{%d}", (length($1)))}) $/x

which is context-free as mentioned above, and for { a^n b^n c^n | n >= 0 }:

    /^ (a*) (??{sprintf("b{%d}", (length($1)))}) (??{sprintf("c{%d}", (length($1)))}) $/x

which is context-sensitive.

Just my 2 cents,
-gjb-
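For readers without Perl to hand, the { ww } half of the note's claim can also be checked with Python's re module, which supports the same backreference syntax. This is an added illustration, not part of the original post:

    import re

    ww = re.compile(r'^(.*)\1$')
    print(bool(ww.match('abcabc')))   # True: w = 'abc', so 'abcabc' is in { ww }
    print(bool(ww.match('abcabd')))   # False
    # Plain backreferences cannot count a's against b's, which is why the
    # { a^n b^n } example above needs Perl's dynamic (??{...}) construct.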
{"url":"http://www.perlmonks.org/index.pl?displaytype=xml;node_id=224866","timestamp":"2014-04-18T00:15:49Z","content_type":null,"content_length":"3017","record_id":"<urn:uuid:0bb42d48-9a17-4177-a155-b26751476d11>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00649-ip-10-147-4-33.ec2.internal.warc.gz"}
Isomorphic Graphs - hw help!

May 21st 2008, 05:49 PM #1

I know this one is very involved, but can someone tell me how to approach it at least? Do I start out drawing a possible graph with this description? I was taught to use an adjacency table. The problem is: "Construct all non-isomorphic simple connected graphs on 7 vertices with 18 edges. Prove they are not isomorphic and that there are no more."
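One standard way to approach this (my suggestion, not from the thread): $K_7$ has $\binom{7}{2} = 21$ edges, so an 18-edge graph is the complement of a 3-edge graph, and two graphs are isomorphic exactly when their complements are. The sketch below, assuming the networkx library is available, enumerates all 3-edge complements and groups the resulting 18-edge graphs into isomorphism classes; the degree sequences it prints already distinguish the classes, which is the kind of invariant an adjacency table would give you.

    import itertools
    import networkx as nx

    n = 7
    all_edges = list(itertools.combinations(range(n), 2))   # K7 has 21 edges

    reps = []   # one representative per isomorphism class
    for removed in itertools.combinations(all_edges, 3):    # 3-edge complement
        g = nx.complete_graph(n)
        g.remove_edges_from(removed)                        # leaves 18 edges
        if nx.is_connected(g) and not any(nx.is_isomorphic(g, h) for h in reps):
            reps.append(g)

    print(len(reps))                             # expect 5 isomorphism classes
    for g in reps:
        print(sorted(d for _, d in g.degree()))  # distinct degree sequences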
{"url":"http://mathhelpforum.com/discrete-math/39207-isomorphic-graphs-hw-help.html","timestamp":"2014-04-16T14:30:25Z","content_type":null,"content_length":"28727","record_id":"<urn:uuid:689e6c51-f17f-4c71-8d0d-5d8f200465f5>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00268-ip-10-147-4-33.ec2.internal.warc.gz"}
probability question

January 22nd 2009, 05:46 AM #1
Junior Member

I am totally confused on how to solve this problem, and I don't have any data. A fair die is rolled twice, with the two rolls being independent of each other. Let M be the maximum of the two rolls and D be the value of the first roll minus the value of the second roll. Are M and D independent?

January 22nd 2009, 06:17 AM #2

Denote the values of the two rolls by $x_1$ and $x_2$, where $x_1, x_2 \in \{1,2,3,4,5,6\}$, so that $M = \max(x_1, x_2)$ and $D = x_1 - x_2$.

An intuitive method for seeing the answer is to consider the extreme cases. Say that $M=1$. This implies that $\max(x_1,x_2)=1$, which means that $x_1 \leq 1$ and $x_2 \leq 1$. So it must be the case that $x_1 = x_2 = 1$, and then you know that $D = x_1 - x_2 = 1-1 = 0$. As a result, knowing that $M=1$ gives you information about $D$, specifically that $D$ must be zero.

Now say that $M=6$. That implies that $\max(x_1,x_2)=6$, so at least one of the rolls is a 6. Then you could have $D=5$ (in the case where $x_1=6, x_2=1$), or $D=0$ (in the case where $x_1=x_2=6$), or anything in between, and likewise the negative values down to $D=-5$ (in the case where $x_1=1, x_2=6$). Less information is known about the value of $D$ when $M$ takes a larger value: when $M=1$ you are sure that $D=0$, but when $M=6$, $D$ could be anything from $-5$ to $5$.

So the smaller $M$ is, the more certain you are about the values that $D$ can take. By definition, then, because the distribution of $D$ depends on the value of $M$, they cannot be independent.
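The reply's conclusion can be confirmed by exact enumeration over the 36 equally likely outcomes. A minimal Python sketch (my addition, not the poster's):

    from fractions import Fraction
    from itertools import product

    outcomes = list(product(range(1, 7), repeat=2))   # 36 equally likely pairs

    def prob(event):
        return Fraction(sum(1 for o in outcomes if event(o)), 36)

    # Independence would need P(M=m, D=d) = P(M=m) P(D=d) for every m, d.
    # The extreme case from the answer is already a counterexample:
    joint = prob(lambda o: max(o) == 1 and o[0] - o[1] == 0)        # P(M=1, D=0)
    split = prob(lambda o: max(o) == 1) * prob(lambda o: o[0] - o[1] == 0)
    print(joint, split)   # 1/36 versus 1/36 * 1/6 = 1/216, so M, D are dependent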
{"url":"http://mathhelpforum.com/advanced-statistics/69382-probability-question.html","timestamp":"2014-04-18T02:03:54Z","content_type":null,"content_length":"38104","record_id":"<urn:uuid:aecaf272-3717-4cbf-a8d8-ba2e7c332aec>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Byfield Math Tutor

...Through short stories, plays, and novels, we can appreciate important questions and lessons that cannot be captured in any other way. Your enjoyment of reading is greatly increased by how much literature you read and how much analysis you make of its contents. I'm one of the world's most popular online reviewers of English literature.
55 Subjects: including trigonometry, ACT Math, reading, algebra 1

...Basically if you've ever seen The Big Bang Theory, it's Sheldon's job plus teaching. I absolutely adore math, and love thinking of shortcuts and the most efficient, creative ways to solve problems. In middle school I won my regional MATHCOUNTS competition twice because of such pursuits, and it has instilled in me a love for standardized tests.
27 Subjects: including algebra 1, algebra 2, biology, calculus

...Precalculus is a gateway course for more advanced mathematics, so it's no surprise that many students, even those who have a track record of good grades in math, find themselves overwhelmed by both the depth and breadth of the material. I've tutored the subject since I was in high school and e...
47 Subjects: including discrete math, ACT Math, logic, linear algebra

...This makes it easier to focus on personal stories about experiences with the history we can only read about from historians. I tutored special needs students at North Shore Community College when I was enrolled. I adhere to a simple study method: if one knows HOW to study, then any subject can be learned by anyone.
11 Subjects: including prealgebra, reading, study skills, elementary math

...I can also tutor at a college level for Biology, Environmental Science/Studies, Wildlife and Population Biology, Writing, MS Office Suite, and SQL. I can also offer preparatory tutoring for the GED, SATs, ACTs and other college/career placement tests. My tutoring methods are fairly straight for...
55 Subjects: including algebra 2, English, prealgebra, biology
{"url":"http://www.purplemath.com/byfield_math_tutors.php","timestamp":"2014-04-16T07:34:54Z","content_type":null,"content_length":"23841","record_id":"<urn:uuid:c3be02dc-76cd-4341-9338-235314a1e505>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving Theorems Date: 07/20/2001 at 06:55:45 From: Condensate Subject: How to prove Euler's Theorem, i.e., a^f(n) = 1 (mod n) where f(n) is the phi function. Dear Dr. Math, I would like to ask two questions: (1) How to prove Euler's Theorem, i.e., let f(x) be the Euler phi-function. Then a^f(n) = 1 (mod n) for any integer a. (2) Is there a theorem stating that 'If (a, p) = 1, then a^(p-1) = 1 (mod p)? (I think it is a bit different from Fermat's Little Theorem.) If so, can you give me a proof of it? Thanks for your help! Date: 07/20/2001 at 14:30:57 From: Doctor Rob Subject: Re: How to prove Euler's Theorem, i.e., a^f(n) = 1 (mod n) where f(n) is the phi function. Thanks for writing to Ask Dr. Math, Condensate. Your statement of Euler's Theorem is incorrect. It doesn't work for any integer a. For example, try a = 2 and n = 4. You must add the condition that (a,n) = 1. (1) Look at the multiplicative group G of integers x such that 1 <= x < n and (x,n) = 1, with multiplication done modulo n. By definition, f(n) = #G = the number of elements in G. Let A = a (mod n), where 1 <= A < n, so A is in G. Now consider the subgroup H of G consisting of powers of A. By Lagrange's Theorem, #H | #G. Then #H is the order of A, ord(A), so ord(A) | f(n). Since A^ord(A) = 1 (mod n) by the definition of the order of an element of a group, that implies that A^f(n) = 1 (mod n), which in turn implies that a^f(n) = 1 (mod n). Lagrange's Theorem is proved by showing that if H is a subgroup of a finite group G, then any two left cosets of H are disjoint or coincident; all left cosets of H have the same size; and the left cosets of H exhaust G. Thus G can be split into a number m of sets all of whose sizes are #H. Thus #G = m*#H, and so #H | #G. (2) This is false. Try a = 3 and p = 4. If you add the condition that p is prime, then it is Fermat's Little Theorem, and so it is true. - Doctor Rob, The Math Forum
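A quick numerical spot-check of both parts of the answer (an illustrative sketch, not part of the Dr. Math exchange), using a brute-force totient computed straight from the definition:

    from math import gcd

    def phi(n):
        # Euler phi by definition: count of 1 <= x <= n with gcd(x, n) = 1
        return sum(1 for x in range(1, n + 1) if gcd(x, n) == 1)

    for n in range(2, 60):
        for a in range(1, n):
            if gcd(a, n) == 1:
                assert pow(a, phi(n), n) == 1   # Euler's theorem holds
    print(pow(2, phi(4), 4))   # 0, not 1: a = 2, n = 4 fails, gcd(2, 4) != 1
    print(pow(3, 4 - 1, 4))    # 3, not 1: p = 4 is not prime, so (2) fails too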
{"url":"http://mathforum.org/library/drmath/view/51614.html","timestamp":"2014-04-19T10:41:44Z","content_type":null,"content_length":"6823","record_id":"<urn:uuid:c823fb42-e9bf-46f1-8ff7-d1cd936e257f>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematics 265 > Notes > ch. 3, sec. 3.1, Differentiation Rules (James Stewart) | StudyBlue

3 DIFFERENTIATION RULES

We have seen how to interpret derivatives as slopes and rates of change. We have seen how to estimate derivatives of functions given by tables of values. We have learned how to graph derivatives of functions that are defined graphically. We have used the definition of a derivative to calculate the derivatives of functions defined by formulas. But it would be tedious if we always had to use the definition, so in this chapter we develop rules for finding derivatives without having to use the definition directly. These differentiation rules enable us to calculate with relative ease the derivatives of polynomials, rational functions, algebraic functions, exponential and logarithmic functions, and trigonometric and inverse trigonometric functions. We then use these rules to solve problems involving rates of change and the approximation of functions.

By measuring slopes at points on the sine curve, we get strong visual evidence that the derivative of the sine function is the cosine function.

3.1 DERIVATIVES OF POLYNOMIALS AND EXPONENTIAL FUNCTIONS

In this section we learn how to differentiate constant functions, power functions, polynomials, and exponential functions.

Let's start with the simplest of all functions, the constant function $f(x) = c$. The graph of this function is the horizontal line $y = c$, which has slope 0, so we must have $f'(x) = 0$. (Figure 1: the graph of $f(x) = c$ is the line $y = c$, so $f'(x) = 0$.) A formal proof, from the definition of a derivative, is also easy:
$$f'(x) = \lim_{h\to 0} \frac{f(x+h) - f(x)}{h} = \lim_{h\to 0} \frac{c - c}{h} = \lim_{h\to 0} 0 = 0$$

DERIVATIVE OF A CONSTANT FUNCTION: $\dfrac{d}{dx}(c) = 0$

POWER FUNCTIONS

We next look at the functions $f(x) = x^n$, where $n$ is a positive integer. If $n = 1$, the graph of $f(x) = x$ is the line $y = x$, which has slope 1. (Figure 2.) So

(1) $\dfrac{d}{dx}(x) = 1$

(You can also verify Equation 1 from the definition of a derivative.) We have already investigated the cases $n = 2$ and $n = 3$. In fact, in Section 2.8 (Exercises 17 and 18) we found that

(2) $\dfrac{d}{dx}(x^2) = 2x$  (3) $\dfrac{d}{dx}(x^3) = 3x^2$

For $n = 4$ we find the derivative of $f(x) = x^4$ as follows:
$$f'(x) = \lim_{h\to 0}\frac{(x+h)^4 - x^4}{h} = \lim_{h\to 0}\frac{x^4 + 4x^3h + 6x^2h^2 + 4xh^3 + h^4 - x^4}{h} = \lim_{h\to 0}(4x^3 + 6x^2h + 4xh^2 + h^3) = 4x^3$$
Thus $\dfrac{d}{dx}(x^4) = 4x^3$.

Comparing the equations in (1), (2), and (3), we see a pattern emerging. It seems to be a reasonable guess that, when $n$ is a positive integer, $\frac{d}{dx}(x^n) = nx^{n-1}$. This turns out to be true.

THE POWER RULE: If $n$ is a positive integer, then $\dfrac{d}{dx}(x^n) = nx^{n-1}$.

FIRST PROOF  The formula
$$x^n - a^n = (x - a)(x^{n-1} + x^{n-2}a + \cdots + xa^{n-2} + a^{n-1})$$
can be verified simply by multiplying out the right-hand side (or by summing the second factor as a geometric series). If $f(x) = x^n$, we can use Equation 2.7.5 for $f'(a)$ and the equation above to write
$$f'(a) = \lim_{x\to a}\frac{f(x) - f(a)}{x - a} = \lim_{x\to a}\frac{x^n - a^n}{x - a} = \lim_{x\to a}(x^{n-1} + x^{n-2}a + \cdots + xa^{n-2} + a^{n-1}) = a^{n-1} + a^{n-2}a + \cdots + aa^{n-2} + a^{n-1} = na^{n-1}$$

SECOND PROOF  In finding the derivative of $x^4$ we had to expand $(x+h)^4$. Here we need to expand $(x+h)^n$, and we use the Binomial Theorem to do so:
$$f'(x) = \lim_{h\to 0}\frac{(x+h)^n - x^n}{h} = \lim_{h\to 0}\frac{\left[x^n + nx^{n-1}h + \frac{n(n-1)}{2}x^{n-2}h^2 + \cdots + nxh^{n-1} + h^n\right] - x^n}{h} = \lim_{h\to 0}\left[nx^{n-1} + \frac{n(n-1)}{2}x^{n-2}h + \cdots + nxh^{n-2} + h^{n-1}\right] = nx^{n-1}$$
because every term except the first has $h$ as a factor and therefore approaches 0. (The Binomial Theorem is given on Reference Page 1.)

We illustrate the Power Rule using various notations in Example 1.

EXAMPLE 1
(a) If $f(x) = x^6$, then $f'(x) = 6x^5$.
(b) If $y = x^{1000}$, then $y' = 1000x^{999}$.
(c) If $y = t^4$, then $dy/dt = 4t^3$.
(d) $\dfrac{d}{dr}(r^3) = 3r^2$.

What about power functions with negative integer exponents? In Exercise 61 we ask you to verify from the definition of a derivative that
$$\frac{d}{dx}\left(\frac{1}{x}\right) = -\frac{1}{x^2}$$
We can rewrite this equation as
$$\frac{d}{dx}(x^{-1}) = (-1)x^{-2}$$
and so the Power Rule is true when $n = -1$. In fact, we will show in the next section [Exercise 58(c)] that it holds for all negative integers.

What if the exponent is a fraction? In Example 3 in Section 2.8 we found that
$$\frac{d}{dx}\sqrt{x} = \frac{1}{2\sqrt{x}}$$
which can be written as
$$\frac{d}{dx}(x^{1/2}) = \tfrac{1}{2}x^{-1/2}$$
This shows that the Power Rule is true even when $n = \tfrac{1}{2}$. In fact, we will show in Section 3.6 that it is true for all real numbers $n$.

THE POWER RULE (GENERAL VERSION): If $n$ is any real number, then $\dfrac{d}{dx}(x^n) = nx^{n-1}$.

EXAMPLE 2  Differentiate: (a) $f(x) = \dfrac{1}{x^2}$  (b) $y = \sqrt[3]{x^2}$
SOLUTION  In each case we rewrite the function as a power of $x$.
(a) Since $f(x) = x^{-2}$, we use the Power Rule with $n = -2$:
$$f'(x) = \frac{d}{dx}(x^{-2}) = -2x^{-2-1} = -2x^{-3} = -\frac{2}{x^3}$$
(b) $\dfrac{dy}{dx} = \dfrac{d}{dx}\left(\sqrt[3]{x^2}\right) = \dfrac{d}{dx}(x^{2/3}) = \tfrac{2}{3}x^{(2/3)-1} = \tfrac{2}{3}x^{-1/3}$

(Figure 3 shows the function $y$ in Example 2(b) and its derivative $y'$. Notice that $y$ is not differentiable at 0 ($y'$ is not defined there). Observe that $y'$ is positive when $y$ increases and is negative when $y$ decreases.)

The Power Rule enables us to find tangent lines without having to resort to the definition of a derivative. It also enables us to find normal lines. The normal line to a curve $C$ at a point $P$ is the line through $P$ that is perpendicular to the tangent line at $P$. (In the study of optics, one needs to consider the angle between a light ray and the normal line to a lens.)

EXAMPLE 3  Find equations of the tangent line and normal line to the curve $y = x\sqrt{x}$ at the point $(1, 1)$. Illustrate by graphing the curve and these lines.
SOLUTION  The derivative of $f(x) = x\sqrt{x} = x \cdot x^{1/2} = x^{3/2}$ is
$$f'(x) = \tfrac{3}{2}x^{(3/2)-1} = \tfrac{3}{2}x^{1/2} = \tfrac{3}{2}\sqrt{x}$$
So the slope of the tangent line at $(1, 1)$ is $f'(1) = \tfrac{3}{2}$. Therefore an equation of the tangent line is
$$y - 1 = \tfrac{3}{2}(x - 1) \quad\text{or}\quad y = \tfrac{3}{2}x - \tfrac{1}{2}$$
The normal line is perpendicular to the tangent line, so its slope is the negative reciprocal of $\tfrac{3}{2}$, that is, $-\tfrac{2}{3}$. Thus an equation of the normal line is
$$y - 1 = -\tfrac{2}{3}(x - 1) \quad\text{or}\quad y = -\tfrac{2}{3}x + \tfrac{5}{3}$$
We graph the curve and its tangent line and normal line in Figure 4.

NEW DERIVATIVES FROM OLD

When new functions are formed from old functions by addition, subtraction, or multiplication by a constant, their derivatives can be calculated in terms of derivatives of the old functions. In particular, the following formula says that the derivative of a constant times a function is the constant times the derivative of the function.

THE CONSTANT MULTIPLE RULE: If $c$ is a constant and $f$ is a differentiable function, then
$$\frac{d}{dx}[cf(x)] = c\,\frac{d}{dx}f(x)$$

PROOF  Let $g(x) = cf(x)$. Then
$$g'(x) = \lim_{h\to 0}\frac{g(x+h) - g(x)}{h} = \lim_{h\to 0}\frac{cf(x+h) - cf(x)}{h} = \lim_{h\to 0}c\left[\frac{f(x+h) - f(x)}{h}\right] = c\lim_{h\to 0}\frac{f(x+h) - f(x)}{h} = cf'(x)$$
(by Law 3 of limits).

(Geometric interpretation of the Constant Multiple Rule: multiplying by $c = 2$ stretches the graph vertically by a factor of 2. All the rises have been doubled but the runs stay the same, so the slopes are doubled, too.)

EXAMPLE 4
(a) $\dfrac{d}{dx}(3x^4) = 3\dfrac{d}{dx}(x^4) = 3(4x^3) = 12x^3$
(b) $\dfrac{d}{dx}(-x) = \dfrac{d}{dx}[(-1)x] = (-1)\dfrac{d}{dx}(x) = -1(1) = -1$

The next rule tells us that the derivative of a sum of functions is the sum of the derivatives.

THE SUM RULE: If $f$ and $g$ are both differentiable, then
$$\frac{d}{dx}[f(x) + g(x)] = \frac{d}{dx}f(x) + \frac{d}{dx}g(x)$$

PROOF  Let $F(x) = f(x) + g(x)$. Then
$$F'(x) = \lim_{h\to 0}\frac{F(x+h) - F(x)}{h} = \lim_{h\to 0}\frac{[f(x+h) + g(x+h)] - [f(x) + g(x)]}{h} = \lim_{h\to 0}\left[\frac{f(x+h) - f(x)}{h} + \frac{g(x+h) - g(x)}{h}\right] = \lim_{h\to 0}\frac{f(x+h) - f(x)}{h} + \lim_{h\to 0}\frac{g(x+h) - g(x)}{h} = f'(x) + g'(x)$$
(by Law 1 of limits).

(Using prime notation, we can write the Sum Rule as $(f + g)' = f' + g'$.)

The Sum Rule can be extended to the sum of any number of functions. For instance, using this theorem twice, we get
$$(f + g + h)' = [(f + g) + h]' = (f + g)' + h' = f' + g' + h'$$
By writing $f - g$ as $f + (-1)g$ and applying the Sum Rule and the Constant Multiple Rule, we get the following formula.

THE DIFFERENCE RULE: If $f$ and $g$ are both differentiable, then
$$\frac{d}{dx}[f(x) - g(x)] = \frac{d}{dx}f(x) - \frac{d}{dx}g(x)$$

The Constant Multiple Rule, the Sum Rule, and the Difference Rule can be combined with the Power Rule to differentiate any polynomial, as the following examples demonstrate.

EXAMPLE 5
$$\frac{d}{dx}(x^8 + 12x^5 - 4x^4 + 10x^3 - 6x + 5) = \frac{d}{dx}(x^8) + 12\frac{d}{dx}(x^5) - 4\frac{d}{dx}(x^4) + 10\frac{d}{dx}(x^3) - 6\frac{d}{dx}(x) + \frac{d}{dx}(5) = 8x^7 + 12(5x^4) - 4(4x^3) + 10(3x^2) - 6(1) + 0 = 8x^7 + 60x^4 - 16x^3 + 30x^2 - 6$$

EXAMPLE 6  Find the points on the curve $y = x^4 - 6x^2 + 4$ where the tangent line is horizontal.
SOLUTION  Horizontal tangents occur where the derivative is zero. We have
$$\frac{dy}{dx} = \frac{d}{dx}(x^4) - 6\frac{d}{dx}(x^2) + \frac{d}{dx}(4) = 4x^3 - 12x + 0 = 4x(x^2 - 3)$$
Thus $dy/dx = 0$ if $x = 0$ or $x^2 - 3 = 0$, that is, $x = \pm\sqrt{3}$. So the given curve has horizontal tangents when $x = 0$, $\sqrt{3}$, and $-\sqrt{3}$. The corresponding points are $(0, 4)$, $(\sqrt{3}, -5)$, and $(-\sqrt{3}, -5)$. (Figure 5: the curve $y = x^4 - 6x^2 + 4$ and its horizontal tangents.)

EXAMPLE 7  The equation of motion of a particle is $s = 2t^3 - 5t^2 + 3t + 4$, where $s$ is measured in centimeters and $t$ in seconds. Find the acceleration as a function of time. What is the acceleration after 2 seconds?
SOLUTION  The velocity and acceleration are
$$v(t) = \frac{ds}{dt} = 6t^2 - 10t + 3 \qquad a(t) = \frac{dv}{dt} = 12t - 10$$
The acceleration after 2 s is $a(2) = 14\ \text{cm/s}^2$.

EXPONENTIAL FUNCTIONS

Let's try to compute the derivative of the exponential function $f(x) = a^x$ using the definition of a derivative:
$$f'(x) = \lim_{h\to 0}\frac{f(x+h) - f(x)}{h} = \lim_{h\to 0}\frac{a^{x+h} - a^x}{h} = \lim_{h\to 0}\frac{a^x a^h - a^x}{h} = \lim_{h\to 0}\frac{a^x(a^h - 1)}{h}$$
The factor $a^x$ doesn't depend on $h$, so we can take it in front of the limit:
$$f'(x) = a^x \lim_{h\to 0}\frac{a^h - 1}{h}$$
Notice that the limit is the value of the derivative of $f$ at 0, that is,
$$\lim_{h\to 0}\frac{a^h - 1}{h} = f'(0)$$
Therefore we have shown that if the exponential function $f(x) = a^x$ is differentiable at 0, then it is differentiable everywhere and

(4) $f'(x) = f'(0)\,a^x$

This equation says that the rate of change of any exponential function is proportional to the function itself. (The slope is proportional to the height.)

Numerical evidence for the existence of $f'(0)$ is given in the table below for the cases $a = 2$ and $a = 3$. (Values are stated correct to four decimal places.)

h      | (2^h - 1)/h | (3^h - 1)/h
0.1    | 0.7177      | 1.1612
0.01   | 0.6956      | 1.1047
0.001  | 0.6934      | 1.0992
0.0001 | 0.6932      | 1.0987

It appears that the limits exist and
$$\text{for } a = 2:\ f'(0) = \lim_{h\to 0}\frac{2^h - 1}{h} \approx 0.69 \qquad \text{for } a = 3:\ f'(0) = \lim_{h\to 0}\frac{3^h - 1}{h} \approx 1.10$$
In fact, it can be proved that these limits exist and, correct to six decimal places, the values are
$$\frac{d}{dx}(2^x)\bigg|_{x=0} \approx 0.693147 \qquad \frac{d}{dx}(3^x)\bigg|_{x=0} \approx 1.098612$$
Thus, from Equation 4, we have

(5) $\dfrac{d}{dx}(2^x) \approx (0.69)\,2^x \qquad \dfrac{d}{dx}(3^x) \approx (1.10)\,3^x$

Of all possible choices for the base $a$ in Equation 4, the simplest differentiation formula occurs when $f'(0) = 1$. In view of the estimates of $f'(0)$ for $a = 2$ and $a = 3$, it seems reasonable that there is a number $a$ between 2 and 3 for which $f'(0) = 1$. It is traditional to denote this value by the letter $e$. (In fact, that is how we introduced $e$ in Section 1.5.) Thus we have the following definition.

DEFINITION OF THE NUMBER e: $e$ is the number such that $\lim\limits_{h\to 0}\dfrac{e^h - 1}{h} = 1$.

Geometrically, this means that of all the possible exponential functions $y = a^x$, the function $f(x) = e^x$ is the one whose tangent line at $(0, 1)$ has a slope $f'(0)$ that is exactly 1. (See Figures 6 and 7.) In Exercise 1 we will see that $e$ lies between 2.7 and 2.8. Later we will be able to show that, correct to five decimal places, $e \approx 2.71828$.

If we put $a = e$ and, therefore, $f'(0) = 1$ in Equation 4, it becomes the following important differentiation formula.

DERIVATIVE OF THE NATURAL EXPONENTIAL FUNCTION: $\dfrac{d}{dx}(e^x) = e^x$

Thus the exponential function $f(x) = e^x$ has the property that it is its own derivative. The geometrical significance of this fact is that the slope of a tangent line to the curve $y = e^x$ is equal to the $y$-coordinate of the point (see Figure 7).

EXAMPLE 8  If $f(x) = e^x - x$, find $f'$ and $f''$. Compare the graphs of $f$ and $f'$.
SOLUTION  Using the Difference Rule, we have
$$f'(x) = \frac{d}{dx}(e^x - x) = \frac{d}{dx}(e^x) - \frac{d}{dx}(x) = e^x - 1$$
In Section 2.8 we defined the second derivative as the derivative of $f'$, so
$$f''(x) = \frac{d}{dx}(e^x - 1) = \frac{d}{dx}(e^x) - \frac{d}{dx}(1) = e^x$$
The function $f$ and its derivative $f'$ are graphed in Figure 8. Notice that $f$ has a horizontal tangent when $x = 0$; this corresponds to the fact that $f'(0) = 0$. Notice also that, for $x > 0$, $f'(x)$ is positive and $f$ is increasing. When $x < 0$, $f'(x)$ is negative and $f$ is decreasing.

EXAMPLE 9  At what point on the curve $y = e^x$ is the tangent line parallel to the line $y = 2x$?
SOLUTION  Since $y = e^x$, we have $y' = e^x$. Let the $x$-coordinate of the point in question be $a$. Then the slope of the tangent line at that point is $e^a$. This tangent line will be parallel to the line $y = 2x$ if it has the same slope, that is, 2. Equating slopes, we get
$$e^a = 2 \qquad a = \ln 2$$
Therefore the required point is $(a, e^a) = (\ln 2, 2)$. (See Figure 9.)

3.1 EXERCISES

1. (a) How is the number $e$ defined?
(b) Use a calculator to estimate the values of the limits $\lim_{h\to 0}\frac{2.7^h - 1}{h}$ and $\lim_{h\to 0}\frac{2.8^h - 1}{h}$ correct to two decimal places. What can you conclude about the value of $e$?

2. (a) Sketch, by hand, the graph of the function $f(x) = e^x$, paying particular attention to how the graph crosses the $y$-axis. What fact allows you to do this?
(b) What types of functions are $f(x) = e^x$ and $g(x) = x^e$? Compare the differentiation formulas for $f$ and $g$.
(c) Which of the two functions in part (b) grows more rapidly when $x$ is large?

3-32 Differentiate the function.
3. $f(x) = 186.5$  4. $f(x) = \sqrt{30}$  5. $f(t) = 2 - \tfrac{2}{3}t$  6. $F(x) = \tfrac{3}{4}x^8$
7. $f(x) = x^3 - 4x + 6$  8. $f(t) = \tfrac{1}{2}t^6 - 3t^4 + t$  9. $f(t) = \tfrac{1}{4}(t^4 + 8)$  10. $h(x) = (x - 2)(2x + 3)$
11. $y = x^{-2/5}$  12. $y = 5e^x + 3$  13. $V(r) = \tfrac{4}{3}\pi r^3$  14. $R(t) = 5t^{-3/5}$
15. $A(s) = -\dfrac{12}{s^5}$  16. $B(y) = cy^{-6}$  17. $G(x) = \sqrt{x} - 2e^x$  18. $y = \sqrt[3]{x}$
19. $F(x) = (\tfrac{1}{2}x)^5$  20. $f(t) = \sqrt{t} - \dfrac{1}{\sqrt{t}}$  21. $y = ax^2 + bx + c$  22. $y = \sqrt{x}\,(x - 1)$
23. $y = \dfrac{x^2 + 4x + 3}{\sqrt{x}}$  24. $y = \dfrac{x^2 - 2\sqrt{x}}{x}$  25. $y = 4\pi^2$  26. $g(u) = \sqrt{2}\,u + \sqrt{3u}$
27. $H(x) = (x + x^{-1})^3$  28. $y = ae^v + \dfrac{b}{v} + \dfrac{c}{v^2}$  29. $u = \sqrt[5]{t} + 4\sqrt{t^5}$  30. $v = \left(\sqrt{x} + \dfrac{1}{\sqrt[3]{x}}\right)^2$
31. $z = \dfrac{A}{y^{10}} + Be^y$  32. $y = e^{x+1} + 1$

33-34 Find an equation of the tangent line to the curve at the given point.
33. $y = \sqrt[4]{x}$, $(1, 1)$  34. $y = x^4 + 2x^2 - x$, $(1, 2)$

35-36 Find equations of the tangent line and normal line to the curve at the given point.
35. $y = x^4 + 2e^x$, $(0, 2)$  36. $y = (1 + 2x)^2$, $(1, 9)$

37-38 Find an equation of the tangent line to the curve at the given point. Illustrate by graphing the curve and the tangent line on the same screen.
37. $y = 3x^2 - x^3$, $(1, 2)$  38. $y = x - \sqrt{x}$, $(1, 0)$

39-42 Find $f'(x)$. Compare the graphs of $f$ and $f'$ and use them to explain why your answer is reasonable.
39. $f(x) = e^x - 5x$  40. $f(x) = 3x^5 - 20x^3 + 50x$  41. $f(x) = 3x^{15} - 5x^3 + 3$  42. $f(x) = x + \dfrac{1}{x}$

43. (a) Use a graphing calculator or computer to graph the function $f(x) = x^4 - 3x^3 - 6x^2 + 7x + 30$ in the viewing rectangle $[-3, 5]$ by $[-10, 50]$.
(b) Using the graph in part (a) to estimate slopes, make a rough sketch, by hand, of the graph of $f'$. (See Example 1 in Section 2.8.)
(c) Calculate $f'(x)$ and use this expression, with a graphing device, to graph $f'$. Compare with your sketch in part (b).

44. (a) Use a graphing calculator or computer to graph the function $g(x) = e^x - 3x^2$ in the viewing rectangle $[-1, 4]$ by $[-8, 8]$.
(b) Using the graph in part (a) to estimate slopes, make a rough sketch, by hand, of the graph of $g'$.
(c) Calculate $g'(x)$ and use this expression, with a graphing device, to graph $g'$. Compare with your sketch in part (b).

45-46 Find the first and second derivatives of the function.
45. $f(x) = x^4 - 3x^3 + 16x$  46. $G(r) = \sqrt{r} + \sqrt[3]{r}$

47-48 Find the first and second derivatives of the function. Check to see that your answers are reasonable by comparing the graphs of $f$, $f'$, and $f''$.
47. $f(x) = 2x - 5x^{3/4}$  48. $f(x) = e^x - x^3$

49. The equation of motion of a particle is $s = t^3 - 3t$, where $s$ is in meters and $t$ is in seconds. Find (a) the velocity and acceleration as functions of $t$, (b) the acceleration after 2 s, and (c) the acceleration when the velocity is 0.

50. The equation of motion of a particle is $s = 2t^3 - 7t^2 + 4t + 1$, where $s$ is in meters and $t$ is in seconds. (a) Find the velocity and acceleration as functions of $t$. (b) Find the acceleration after 1 s. (c) Graph the position, velocity, and acceleration functions on the same screen.

51. Find the points on the curve $y = 2x^3 + 3x^2 - 12x + 1$ where the tangent is horizontal.

52. For what values of $x$ does the graph of $f(x) = x^3 + 3x^2 + x + 3$ have a horizontal tangent?

53. Show that the curve $y = 6x^3 + 5x - 3$ has no tangent line with slope 4.

54. Find an equation of the tangent line to the curve $y = x\sqrt{x}$ that is parallel to the line $y = 1 + 3x$.

55. Find equations of both lines that are tangent to the curve $y = 1 + x^3$ and are parallel to the line $12x - y = 1$.

56. At what point on the curve $y = 1 + 2e^x - 3x$ is the tangent line parallel to the line $3x - y = 5$? Illustrate by graphing the curve and both lines.

57. Find an equation of the normal line to the parabola $y = x^2 - 5x + 4$ that is parallel to the line $x - 3y = 5$.

58. Where does the normal line to the parabola $y = x - x^2$ at the point $(1, 0)$ intersect the parabola a second time? Illustrate with a sketch.

59. Draw a diagram to show that there are two tangent lines to the parabola $y = x^2$ that pass through the point $(0, -4)$. Find the coordinates of the points where these tangent lines intersect the parabola.

60. (a) Find equations of both lines through the point $(2, -3)$ that are tangent to the parabola $y = x^2 + x$. (b) Show that there is no line through the point $(2, 7)$ that is tangent to the parabola. Then draw a diagram to see why.

61. Use the definition of a derivative to show that if $f(x) = 1/x$, then $f'(x) = -1/x^2$. (This proves the Power Rule for the case $n = -1$.)

62. Find the $n$th derivative of each function by calculating the first few derivatives and observing the pattern that occurs. (a) $f(x) = x^n$  (b) $f(x) = 1/x$

63. Find a second-degree polynomial $P$ such that $P(2) = 5$, $P'(2) = 3$, and $P''(2) = 2$.

64. The equation $y'' + y' - 2y = x^2$ is called a differential equation because it involves an unknown function $y$ and its derivatives $y'$ and $y''$. Find constants $A$, $B$, and $C$ such that the function $y = Ax^2 + Bx + C$ satisfies this equation. (Differential equations will be studied in detail in Chapter 9.)

65. Find a cubic function $y = ax^3 + bx^2 + cx + d$ whose graph has horizontal tangents at the points $(-2, 6)$ and $(2, 0)$.

66. Find a parabola with equation $y = ax^2 + bx + c$ that has slope 4 at $x = 1$, slope $-8$ at $x = -1$, and passes through the point $(2, 15)$.

67. Let
$$f(x) = \begin{cases} 2 - x & \text{if } x \le 1 \\ x^2 - 2x + 2 & \text{if } x > 1 \end{cases}$$
Is $f$ differentiable at 1? Sketch the graphs of $f$ and $f'$.

68. At what numbers is the following function $g$ differentiable?
$$g(x) = \begin{cases} -1 - 2x & \text{if } x < -1 \\ x^2 & \text{if } -1 \le x \le 1 \\ x & \text{if } x > 1 \end{cases}$$
Give a formula for $g'$ and sketch the graphs of $g$ and $g'$.

69. (a) For what values of $x$ is the function $f(x) = |x^2 - 9|$ differentiable? Find a formula for $f'$. (b) Sketch the graphs of $f$ and $f'$.

70. Where is the function $h(x) = |x - 1| + |x + 2|$ differentiable? Give a formula for $h'$ and sketch the graphs of $h$ and $h'$.

71. Find the parabola with equation $y = ax^2 + bx$ whose tangent line at $(1, 1)$ has equation $y = 3x - 2$.

72. Suppose the curve $y = x^4 + ax^3 + bx^2 + cx + d$ has a tangent line when $x = 0$ with equation $y = 2x + 1$ and a tangent line when $x = 1$ with equation $y = 2 - 3x$. Find the values of $a$, $b$, $c$, and $d$.

73. For what values of $a$ and $b$ is the line $2x + y = b$ tangent to the parabola $y = ax^2$ when $x = 2$?

74. Find the value of $c$ such that the line $y = \tfrac{3}{2}x + 6$ is tangent to the curve $y = c\sqrt{x}$.

75. Let
$$f(x) = \begin{cases} x^2 & \text{if } x \le 2 \\ mx + b & \text{if } x > 2 \end{cases}$$
Find the values of $m$ and $b$ that make $f$ differentiable everywhere.

76. A tangent line is drawn to the hyperbola $xy = c$ at a point $P$. (a) Show that the midpoint of the line segment cut from this tangent line by the coordinate axes is $P$. (b) Show that the triangle formed by the tangent line and the coordinate axes always has the same area, no matter where $P$ is located on the hyperbola.

77. Evaluate $\lim\limits_{x\to 1}\dfrac{x^{1000} - 1}{x - 1}$.

78. Draw a diagram showing two perpendicular lines that intersect on the $y$-axis and are both tangent to the parabola $y = x^2$. Where do these lines intersect?

79. If $c > \tfrac{1}{2}$, how many lines through the point $(0, c)$ are normal lines to the parabola $y = x^2$? What if $c \le \tfrac{1}{2}$?

80. Sketch the parabolas $y = x^2$ and $y = x^2 - 2x + 2$. Do you think there is a line that is tangent to both curves? If so, find its equation. If not, why not?

APPLIED PROJECT: BUILDING A BETTER ROLLER COASTER

Suppose you are asked to design the first ascent and drop for a new roller coaster. By studying photographs of your favorite coasters, you decide to make the slope of the ascent 0.8 and the slope of the drop $-1.6$. You decide to connect these two straight stretches $y = L_1(x)$ and $y = L_2(x)$ with part of a parabola $y = f(x) = ax^2 + bx + c$, where $x$ and $f(x)$ are measured in feet. For the track to be smooth there can't be abrupt changes in direction, so you want the linear...

James Stewart, Calculus: Early Transcendentals, 6e
{"url":"http://www.studyblue.com/notes/note/n/ch3-sec-31-differentiation-rules-james-stewart-pdf-online/file/289483","timestamp":"2014-04-16T17:19:15Z","content_type":null,"content_length":"59015","record_id":"<urn:uuid:0c1833da-58ff-42b7-a3f7-0144e76a5bde>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Application of the empirical characteristic function to compare and estimate densities by pooling information

Ferré, L. and Whittaker, Joseph (2004) Application of the empirical characteristic function to compare and estimate densities by pooling information. Computational Statistics, 19 (2). pp. 169-193. ISSN 0943-4062

Full text not available from this repository.

Independent measurements are taken from distinct populations which may differ in mean, variance and in shape, for instance in the number of modes and the heaviness of the tails. Our goal is to characterize differences between these different populations. To avoid pre-judging the nature of the heterogeneity, for instance by assuming a parametric form, and to reduce the loss of information incurred by calculating summary statistics, the observations are transformed to the empirical characteristic function (ECF). An eigendecomposition is applied to the ECFs to represent the populations as points in a low-dimensional space, and the choice of optimal dimension is made by minimising a mean square error. Interpretation of these plots is naturally provided by the corresponding density estimate obtained by inverting the ECF projected on the reduced-dimension space. Some simulated examples indicate the promise of the technique, and an application to the growth of Mirabilis plants is presented.

Item Type: Article
Journal or Publication Title: Computational Statistics
Uncontrolled Keywords: complex principal component analysis; empirical characteristic function; exploratory data analysis; Fourier inversion; growth curve analysis; kernel density estimation; mean square error; mixture distribution
Subjects: Q Science > QA Mathematics
Departments: Faculty of Science and Technology > Mathematics and Statistics
ID Code: 9486
Deposited By: Mrs Yaling Zhang
Deposited On: 12 Jun 2008 16:17
Refereed?: Yes
Published?: Published
Last Modified: 26 Jul 2012 18:37
URI: http://eprints.lancs.ac.uk/id/eprint/9486
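For concreteness, here is a minimal sketch of the ECF itself, the object the abstract's eigendecomposition operates on. The frequency grid, sample sizes, mixture, and the crude L2 distance are illustrative assumptions of mine, not the paper's choices:

    import numpy as np

    def ecf(sample, t_grid):
        # phi_hat(t) = (1/n) * sum_j exp(i * t * x_j)
        return np.exp(1j * np.outer(t_grid, sample)).mean(axis=1)

    rng = np.random.default_rng(0)
    t = np.linspace(-3, 3, 61)
    pop_a = rng.normal(0.0, 1.0, size=200)                 # unimodal population
    pop_b = np.concatenate([rng.normal(-2, 0.5, 100),
                            rng.normal(2, 0.5, 100)])      # bimodal mixture

    phi_a, phi_b = ecf(pop_a, t), ecf(pop_b, t)
    # Even a crude L2 distance between the two ECFs separates the shapes:
    print(np.sum(np.abs(phi_a - phi_b) ** 2) * (t[1] - t[0]))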
{"url":"http://eprints.lancs.ac.uk/9486/","timestamp":"2014-04-20T14:42:26Z","content_type":null,"content_length":"16857","record_id":"<urn:uuid:e4c37666-a95c-4ee1-908f-cb6d1df8aa1a>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiple choice tries Luc Devroye, Gábor Lugosi, Gahyun Park and Wojciech Szpankowski Random Structures and Algorithms, 2007. In this paper we consider tries built from n strings such that each string can be chosen from a pool of k strings, each of them generated by a discrete i.i.d. source. Three cases are considered: k = 2, k is large but fixed, and k ∼ c log n. The goal in each case is to obtain tries as balanced as possible. Various parameters such as height and fill-up level are analyzed. It is shown that for two-choice tries a 50% reduction in height is achieved when compared to ordinary tries. In a greedy on-line construction, when the string that minimizes the depth of insertion for every pair is inserted, the height is only reduced by 25%. In order to further reduce the height by another 25%, we design a more refined on-line algorithm. The total computation time of the algorithm is O(n log n). Furthermore, when we choose the best among k ≥ 2 strings, then for large but fixed k the height is asymptotically equal to the typical depth in a trie. Finally, we show that further improvement can be achieved if the number of choices for each string is proportional to log n. In this case highly balanced trees can be constructed by a simple greedy algorithm for which the difference between the height and the fill-up level is bounded by a constant with high probability. This, in turn, has implications for distributed hash tables, leading to a randomized ID management algorithm in peer-to-peer networks such that, with high probability, the ratio between the maximum and the minimum load of a processor is O(1).
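A toy version of the greedy on-line rule described in the abstract makes it concrete: each item draws k random bit-strings and we keep whichever would sit shallower in the trie. This sketch is an illustration under simplifying assumptions of mine (fixed-length fair-bit strings, depth measured via longest common prefixes), not the paper's refined algorithm:

    import random

    def lcp(a, b):
        n = 0
        for x, y in zip(a, b):
            if x != y:
                break
            n += 1
        return n

    def depth(s, others):
        # In a trie, a string comes to rest one level below its longest
        # common prefix with the strings already stored.
        return 1 + max((lcp(s, t) for t in others), default=0)

    def build(n, k, strlen=64):
        stored = []
        for _ in range(n):
            cands = [tuple(random.getrandbits(1) for _ in range(strlen))
                     for _ in range(k)]
            stored.append(min(cands, key=lambda c: depth(c, stored)))
        return stored

    def height(stored):
        return max(depth(s, [t for t in stored if t is not s]) for s in stored)

    random.seed(1)
    print(height(build(256, k=1)), height(build(256, k=2)))
    # The k=2 height is typically well below the ordinary (k=1) trie height.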
{"url":"http://eprints.pascal-network.org/archive/00004624/","timestamp":"2014-04-16T13:27:10Z","content_type":null,"content_length":"8388","record_id":"<urn:uuid:dd755e02-deb7-4268-91f7-35eaf7069eb0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
how do i find the fact family for a fraction?
Answered: Converting fractions to percents. 5/9 = 0.5555555556. Multiply by 100 to get 55.5 (and all that repeating nonsense). Percents are usually restricted in this case to two digits, so... 56%
Answered: What are 2 equivalent fractions of 6/8
Answered: What fraction of time was spent during drills? The total amount of time was 90. Hi Jakall, 20*100/90 = 22.2% was spent on actual drilling and 70*100/90 = 77.8% was spent in between the actual drillings. Best regards,
Answered: 12 children shared 2 medium-size pizzas equally. What fraction of 1 whole? Quite possibly by now you have already understood this, but in case you still find it a bit tricky I will try to explain further. Imagine a pizza divided into 6 pieces. (It does not matter that it is medium.) If you had another one also divided into 6 pieces and you were looking at these two ...
Answered: Under what circumstances would the GCF be equal to the numerator of a fraction? GCF/n(GCF) would be the fraction, where n is not equal to GCF, but is an integer.
Answered: Fractional laser can heal burn scars? Medical lasers work by causing intense heat in very short bursts on very small areas. I have seen medical professionals ask for laser designs that will perforate the eyeball and relieve severe intraocular pressure, thereby preventing blindness, and I have seen lasers used in the removal of tattoos on ...
Other people asked questions on various topics and are still waiting for an answer. It would be great if you could take a second and answer them.
Express 0.7 inch as a fraction with a denominator of 32. Hi, 0.7 = 7/10 = 7*3.2 / 32 = 22.4/32. Best regards,
What is a fraction between one third and five sixths whose denominator is 12? Well, 1/3 == 4/12 and 5/6 == 10/12, so any fraction between the two would be an answer: 5/12, 6/12, 7/12, 8/12, or 9/12.
Family reunion. Oh For Crying Out Loud! How can we know the answer to this? Ask your family members!!!
I am trying to locate a family in Livingston NJ ... There's a Judy Bronson in Verona NJ who might be the wife's mom.
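For the arithmetic in these answers, Python's fractions module makes the conversions explicit. The snippet below is illustrative and not part of the original thread:

    from fractions import Fraction

    print(float(Fraction(5, 9)) * 100)   # 55.55555555555556 -> about 56%

    # Equivalent fractions: scale numerator and denominator by the same integer.
    print(Fraction(6, 8))                # reduces automatically to 3/4
    print(Fraction(6 * 2, 8 * 2))        # 12/16 also reduces to 3/4

    # Fractions with denominator 12 strictly between 1/3 and 5/6
    # (note: Fraction reduces, e.g. 6/12 prints as 1/2):
    low, high = Fraction(1, 3), Fraction(5, 6)
    print([Fraction(k, 12) for k in range(1, 12) if low < Fraction(k, 12) < high])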
{"url":"http://aolanswers.com/questions/how_do_i_find_the_fact_family_for_a_fraction_p627781491379534","timestamp":"2014-04-17T07:12:29Z","content_type":null,"content_length":"92879","record_id":"<urn:uuid:6d9c1aea-a59f-4389-b87c-f1420b912211>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
Nihil, you have no idea how way the hell above my head that is........
FamStars&Straps that’s the best article I've read on the subject. Not that it hasn't been said before, but not all so well in one article. I think, at first glance, the only thing he left out was the fact that CD copyrights never expire.
Great article that you found. Just hope that someone brings up these arguments to Congress and gets those stupid laws thrown out. I plan to e-mail this to every intelligent person I know (both of them :-) ) and urge them to write their reps.
Well damme! its a conspiracy of two. Handwritten letters have more clout, but emails still express the opinion of many = +/- votes.
im sitting here thinking about it. im going to print up a bunch-o-copies of this article, get like-minded folks to read it and try and get them to sign a petition. Something to the effect that they have read and do completely agree with the article and would like to see people in office that support these sentiments, then mail the petition (hand signed) and a copy of the article to my reps. If others did something similar in their own regions it would have a greater impact than some generic lame-ass email petition that would more than likely say somewhere in it "free Lamo" or something similar. yup its a done deal...im doing it tomorrow. Any input would be appreciated.
For those of you wondering;
a = b
a^2 = ab
a^2 - b^2 = ab - b^2
(a + b)(a - b) = b(a - b)
(a + b) = b
With a = b = 1 this gives 1 + 1 = 1, i.e. 2 = 1.
The division by zero occurs in the step from (a + b)(a - b) = b(a - b) to (a + b) = b: since a = b, the factor (a - b) being cancelled is zero.
Personally, i find the 'infinity symbol' or the sheer notion of expressing infinity mathematically stupid - think about it;
a = infinity /* lets set teh varable!!!11! */
a = a + 1 /* OMG i r teh genios??! a no longer equal infinity keke lah!!!11!!one!!1 */
etc etc..
As I wish to continue this discussion, I am attempting to move this division by zero argument into the cosmos thread where it properly belongs. http://www.antionline.com/showthread...hreadid=249452
Quote: a = b a^2 = ab a^2 - b^2 = ab - b^2 (a + b)(a - b) = b(a - b) (a + b) = b 1 + 1 = 1 therefore 1 = 2.
Reminds me of Algebraic Structures. Did a lot of this stuff there. Loved that class. hehehe call me weird.
Division by zero is undefined in ordinary arithmetic; in some extended systems it is treated as infinity, and sneaking it into algebra is exactly what makes "proofs" like 1 = 2 possible. So you guys are all right. It just depends on how far you are in your education.
Guidance... This is unrelated, but has anyone else noticed that Striek does not have a 'title' in any of his posts? [Antionline Newbie, Jr Member, Senior etc.] Is this a bug or am I the only one affected? All others display fine.
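The thread's point that "undefined vs. infinity" depends on the number system can be seen directly in code. A small illustrative Python snippet (not from the thread): plain integer arithmetic treats division by zero as an error, while IEEE-754 floating point extends the real line with ±inf and NaN.

    import numpy as np

    try:
        1 / 0                                   # ordinary arithmetic: undefined
    except ZeroDivisionError as e:
        print("python:", e)

    x = np.array([1.0, -1.0, 0.0])
    with np.errstate(divide="ignore", invalid="ignore"):
        print("ieee754:", x / 0.0)              # [ inf -inf  nan ]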
{"url":"http://www.antionline.com/printthread.php?t=247544&pp=10&page=2","timestamp":"2014-04-16T12:04:12Z","content_type":null,"content_length":"14139","record_id":"<urn:uuid:5cce6678-15e6-455f-8715-cf18f53ba744>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
Hello, everyone! How are you? Today we'll talk a little about recursion. Recursion has many definitions, but in computing it describes algorithms that are defined in terms of themselves, which means that in their implementations they call themselves. The concept might seem complicated for someone who has never heard of it, but it is something extremely useful, depending on the problem you have in hand. In this post, I'll use the classic examples of recursive algorithms, but I'd like to make it clear that it can be used for much more practical things, as I'll show you at the end of this post.
Even though recursion isn't language specific, it is a very important part of functional programming, so if you're entering that area, or thinking of it, it's something you should have at the tip of your fingers.
Well, let's move to the first example, the calculation of a factorial number. The easiest way to implement a recursive algorithm is to think of the mathematical function that represents it, so let's do that. A factorial number is represented by an exclamation point (!) preceded by the number to be calculated; therefore, if we want to calculate the factorial of 5, we can represent it like this: 5!
The calculation is made by multiplying every whole number, from the informed value all the way down to the number 1, so in the case of 5, it would look like this:
5! = 5 × 4 × 3 × 2 × 1 = 120
And the function to calculate the factorial of a number n would be like this:
n! = n × (n - 1)!, with 0! = 1
Okay then, to calculate the factorial of n, we must calculate the factorial of n - 1, meaning that to evaluate this function, it needs to execute itself internally. The fact that this seems repetitive indicates that we can apply recursion. So here's a very simple implementation of this calculation in Java:
public int factorial(int n){
    if(n == 0){
        return 1;
    }
    return n * factorial(n - 1);
}
You see that the method calls itself up to a certain condition. That condition also comes from the definition of a factorial number. The explanation given by any math teacher, or at least the ones that I met, is that the factorial of 0 is 1 by definition, so we have a case where the function does not invoke itself; we call that a stopping condition.
Another example is the calculation of a term in the Fibonacci sequence, which is a succession of natural numbers in which the first 2 terms are 0 and 1, and every subsequent term corresponds to the sum of its 2 predecessors. Observe the beginning of the sequence:
0, 1, 1, 2, 3, 5, 8, 13, ...
Seeing this, we can assume that the sequence follows this function:
F(0) = 0, F(1) = 1, F(n) = F(n - 1) + F(n - 2) for n > 1
As you probably noticed, once again the function is defined in terms of itself, so it is recursive. If we translate it to Java it would look like this:
public static int fibonacci(int n){
    if(n == 0){
        return 0;
    }else if(n == 1){
        return 1;
    }
    return fibonacci(n - 1) + fibonacci(n - 2);
}
Now, as promised, I present a recursive code that I run daily. At work I had to write a script for the deletion of directories, because a certain temporary directory was never deleted for some reason. In this case, I believe recursion was the best choice I could have made, and with a simple Scala script the problem was solved. Here's the function:
def deleteFiles(file: File): Boolean = {
  if (file.isDirectory) {
    for (f <- file.listFiles) {
      deleteFiles(f)   // empty each subdirectory before deleting it
    }
  }
  file.delete()
}
Now recursive functions may have a problem, like in the factorial function. In Java, if you try to calculate the factorial of a very large number using the algorithm presented here, you could run into a StackOverflowError. Why is that? How can I avoid it?
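The StackOverflowError point isn't Java-specific. Here is a minimal illustration in Python (not from the original post): every language that implements recursion with a call stack has some depth limit.

    import sys

    def factorial(n):
        return 1 if n == 0 else n * factorial(n - 1)

    print(factorial(5))               # 120
    print(sys.getrecursionlimit())    # CPython's default stack-depth limit (about 1000)

    try:
        factorial(10 ** 5)            # far deeper than the limit
    except RecursionError as e:
        print("RecursionError:", e)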
That subject will be approached in my next post, also about recursion. I hope you enjoyed it! See you next time!
5 Responses
1. The factorial function should be:
public int factorial(int n){
    if(n == 1){
        return 1;
    }
    return n * factorial(n - 1);
}
Returning 0 when n == 1 makes the factorial function return 0 for every n.
1. Hello, Brian. I understand your point: if I make the stopping condition (n == 1), I will prevent an extra step in my algorithm. It was done this way because, as I said, the factorial of 0 is 1 by definition, so I did it for educational purposes. But thank you for your comment.
2. If you want to save the extra step in your algorithm you should have written "if(n==1) return 1;". Otherwise you would have written "if(n==0) return 1;", which is the formal definition. But "if(n==1) return 0;" doesn't work — it makes your function return 0 for whatever n.
1. Oh, I see what you mean now. I'm sorry. It is fixed just like you said. My bad.
{"url":"http://rodrigosasaki.com/2012/10/27/recursion/","timestamp":"2014-04-18T23:15:28Z","content_type":null,"content_length":"33715","record_id":"<urn:uuid:25473636-d331-4142-92db-ef71158c98e2>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00122-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with calculating geometric sequence
June 28th 2013, 01:29 AM
Help with calculating geometric sequence
I hope it is the right place to post it...
I need help with the following sequence: 2^(2^i) (it's 2 to the power of 2 to the power of i).
I need to find i as a function of N, meaning: how far along the sequence do I need to go in order to get N?
Any help would be greatly appreciated.
June 28th 2013, 02:53 AM
Re: Help with calculating geometric sequence
Hey Stormey. Did you mean as a function of j? I don't know of a closed-form solution, but you should look into techniques like the Euler-Maclaurin series and their relationship to integrals.
June 28th 2013, 04:18 AM
Re: Help with calculating geometric sequence
Hi chiro. Yes, as a function of j, sorry. The truth is that it is actually a question from data structures in computer science, so I am almost sure the solution is supposed to be accomplished with discrete math (using some manipulation to compute the sum of a simpler sequence, for which the sum is known or easy to calculate, and then calculating the sum of the original sequence; I also tried to think about some change of variable [index in this case] that would help simplify this problem).
June 28th 2013, 04:41 PM
Re: Help with calculating geometric sequence
You might want to consider a bit-wise representation (i.e. binary) and how that relates to N.
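Read literally — "how far along 2^(2^j) do I need to go to reach N" — the question has a short closed form: if 2^(2^j) = N then j = log2(log2 N). A small Python check of this reading (my interpretation of the thread's question, not a posted answer):

    import math

    def steps_to_reach(N):
        # smallest j with 2**(2**j) >= N
        j = 0
        while 2 ** (2 ** j) < N:
            j += 1
        return j

    N = 65536                        # 2**(2**4)
    print(steps_to_reach(N))         # 4
    print(math.log2(math.log2(N)))   # 4.0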
{"url":"http://mathhelpforum.com/calculus/220202-help-calculating-geometric-sequence-print.html","timestamp":"2014-04-18T08:40:52Z","content_type":null,"content_length":"5330","record_id":"<urn:uuid:f6133a19-6f52-4628-8cb8-881820cc6aa0>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00071-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] The NumPy Mandelbrot code 16x slower than Fortran
Mark Wiebe mwwiebe@gmail....
Tue Jan 24 18:33:44 CST 2012

2012/1/21 Ondřej Čertík <ondrej.certik@gmail.com>
> <snip>
> Let me know if you figure out something. I think the "mask" thing is
> quite slow, but the problem is that it needs to be there, to catch
> overflows (and it is there in Fortran as well, see the
> "where" statement, which does the same thing). Maybe there is some
> other way to write the same thing in NumPy?

In the current master, you can replace

z[mask] *= z[mask]
z[mask] += c[mask]

with

np.multiply(z, z, out=z, where=mask)
np.add(z, c, out=z, where=mask)

The performance of this alternate syntax is still not great, but it is
significantly faster than what it replaces. For a particular choice of
mask, I get

In [40]: timeit z[mask] *= z[mask]
10 loops, best of 3: 29.1 ms per loop

In [41]: timeit np.multiply(z, z, out=z, where=mask)
100 loops, best of 3: 4.2 ms per loop

> Ondrej
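A self-contained version of the comparison, for readers who want to run it. The data here (z, c, mask) is invented for illustration; the where= keyword requires a NumPy build that supports it, which per the post was new in the then-current master.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1_000_000
    z = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    c = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    mask = np.abs(z) < 2.0            # points that have not yet escaped

    # Fancy-indexing form: copies the masked elements out and back in.
    z1 = z.copy()
    z1[mask] *= z1[mask]
    z1[mask] += c[mask]

    # ufunc where= form: writes in place only where mask is True.
    z2 = z.copy()
    np.multiply(z2, z2, out=z2, where=mask)
    np.add(z2, c, out=z2, where=mask)

    assert np.allclose(z1, z2)        # both forms compute the same result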
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2012-January/059979.html","timestamp":"2014-04-20T11:52:21Z","content_type":null,"content_length":"4586","record_id":"<urn:uuid:87fc93ba-aef6-450e-acf2-d76d858626c7>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00639-ip-10-147-4-33.ec2.internal.warc.gz"}
Santa Clara, CA Algebra Tutor Find a Santa Clara, CA Algebra Tutor ...The easiest differential equations to solve are usually linear; however, separable, homogeneous, and exact equations can also be fairly straightforward. The method of integrating factors can be useful in solving differential equations, as can series solutions. Differential equations are highly applicable, especially in modeling. 20 Subjects: including algebra 2, algebra 1, calculus, GRE ...I am punctual and regular. I give discipline great value and believe good conduct is a strong virtue. AutoCAD is a software application for computer-aided design (CAD) and drafting, in both 2D and 3D formats. I have been around AutoCAD for more than 8 years now. 29 Subjects: including algebra 1, algebra 2, chemistry, GED ...I hold a single subject teaching credential in math. I aim to know my students, set practical goals, engage the students during instruction, have students practice problem solving, and gauge student learning with assessments. Most students can achieve the goal they set if they truly put their minds to it. 5 Subjects: including algebra 1, algebra 2, geometry, prealgebra ...For instance, Bill, one of my Calculus tutees. Bill believes that he's an excellent writer, but somehow he is "dumb" at math. So I asked him: "How do you know you are dumb at math?" "Because I had a D on my last midterm," Bill replied. 20 Subjects: including algebra 1, English, reading, writing ...I will begin by leading you in a simple, logical way to discover for yourself that the underlying concepts make sense. Once you are totally convinced, the best way to become proficient is through practice, and I will provide you with an unlimited supply of practice problems to work on, with me o... 12 Subjects: including algebra 2, algebra 1, chemistry, calculus
{"url":"http://www.purplemath.com/santa_clara_ca_algebra_tutors.php","timestamp":"2014-04-17T07:44:43Z","content_type":null,"content_length":"24087","record_id":"<urn:uuid:aa31289d-5afa-43f2-ac0e-984e27afe667>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Flashcards - FFM12_Ch_11_TB_01-28-09.doc | StudyBlue CHAPTER 11 THE BASICS OF CAPITAL BUDGETING (Difficulty Levels: Easy, Easy/Medium, Medium, Medium/Hard, and Hard) We point out to our students that some of the questions can best be analyzed by sketching out a NPV profile graph and then thinking about the question in relation to the graph. Please see the preface for information on the AACSB letter indicators (F, M, etc.) on the subject lines. Multiple Choice: True/False (11-1) Capital budget F I Answer: b EASY . A firm should never accept a project if its acceptance would lead to an increase in the firm's cost of capital (its WACC). a. True b. False (11-2) PV of cash flows F I Answer: b EASY . Because "present value" refers to the value of cash flows that occur at different points in time, a series of present values of cash flows should not be summed to determine the value of a capital budgeting project. a. True b. False (11-2) NPV F I Answer: b EASY . Assuming that their NPVs based on the firm's cost of capital are equal, the NPV of a project whose cash flows accrue relatively rapidly will be more sensitive to changes in the discount rate than the NPV of a project whose cash flows come in later in its life. a. True b. False (11-2) NPV and IRR F I Answer: b EASY . A basic rule in capital budgeting is that If a project's NPV exceeds its IRR, then the project should be accepted. a. True b. False (11-2) Mutually exclusive projects F I Answer: a EASY . Conflicts between two mutually exclusive projects occasionally occur, where the NPV method ranks one project higher but the IRR method puts the other one first. In theory, such conflicts should be resolved in favor of the project with the higher NPV. a. True b. False (11-2) Mutually exclusive projects F I Answer: b EASY . Conflicts between two mutually exclusive projects occasionally occur, where the NPV method ranks one project higher but the IRR method puts the other one first. In theory, such conflicts should be resolved in favor of the project with the higher IRR. a. True b. False (11-3) IRR F I Answer: a EASY . The internal rate of return is that discount rate that equates the present value of the cash outflows (or costs) with the present value of the cash inflows. a. True b. False (11-3) IRR F I Answer: b EASY . Other things held constant, an increase in the cost of capital will result in a decrease in a project's IRR. a. True b. False (11-4) Multiple IRRs F I Answer: a EASY . Under certain conditions, a project may have more than one IRR. One such condition is when, in addition to the initial investment at time = 0, a negative cash flow (or cost) occurs at the end of the project's life. a. True b. False (11-4) Multiple IRRs F I Answer: b EASY . The phenomenon called "multiple internal rates of return" arises when two or more mutually exclusive projects that have different lives are being compared. a. True b. False (11-5) Reinvestment rate assumption F I Answer: a EASY . The NPV method is based on the assumption that projects' cash flows are reinvested at the project's risk-adjusted cost of capital. a. True b. False (11-5) Reinvestment rate assumption F I Answer: b EASY . The IRR method is based on the assumption that projects' cash flows are reinvested at the project's risk-adjusted cost of capital. a. True b. False (11-5) Reinvestment rate assumption F I Answer: a EASY . 
The NPV method's assumption that cash inflows are reinvested at the cost of capital is generally more reasonable than the IRR's assumption that cash flows are reinvested at the IRR. This is an important reason why the NPV method is generally preferred over the IRR method. a. True b. False (11-6) Modified IRR F I Answer: a EASY . For a project with one initial cash outflow followed by a series of positive cash inflows, the modified IRR (MIRR) method involves compounding the cash inflows out to the end of the project's life, summing those compounded cash flows to form a terminal value (TV), and then finding the discount rate that causes the PV of the TV to equal the project's cost. a. True b. False (11-6) Modified IRR F I Answer: b EASY . Both the regular and the modified IRR (MIRR) methods have wide appeal to professors, but most business executives prefer the NPV method to either of the IRR methods. a. True b. False (11-6) Modified IRR F I Answer: b EASY . When evaluating mutually exclusive projects, the modified IRR (MIRR) always leads to the same capital budgeting decisions as the NPV method, regardless of the relative lives or sizes of the projects being evaluated. a. True b. False (11-8) Payback period F I Answer: a EASY . One advantage of the payback method for evaluating potential investments is that it provides information about a project's liquidity and risk. a. True b. False (11-2) Mutually exclusive projects F I Answer: b MEDIUM . When considering two mutually exclusive projects, the firm should always select the project whose internal rate of return is the highest, provided the projects have the same initial cost. This statement is true regardless of whether the projects can be repeated or not. a. True b. False (11-5) NPV vs. IRR F I Answer: b MEDIUM . The primary reason that the NPV method is conceptually superior to the IRR method for evaluating mutually exclusive investments is that multiple IRRs may exist, and when that happens, we don't know which IRR is relevant. a. True b. False (11-5) NPV vs. IRR F I Answer: b MEDIUM . The NPV and IRR methods, when used to evaluate two independent and equally risky projects, will lead to different accept/reject decisions and thus capital budgets if the projects' IRRs are greater than their cost of capital. a. True b. False (11-5) NPV vs. IRR F I Answer: b MEDIUM . The NPV and IRR methods, when used to evaluate two equally risky but mutually exclusive projects, will lead to different accept/reject decisions and thus capital budgets if the cost of capital at which the projects' NPV profiles cross is greater than the projects' cost of capital. a. True b. False (11-5) NPV vs. IRR F I Answer: a MEDIUM . No conflict will exist between the NPV and IRR methods, when used to evaluate two equally risky but mutually exclusive projects, if the projects' cost of capital exceeds the rate at which the projects' NPV profiles cross. a. True b. False (11-7) NPV profiles F I Answer: a MEDIUM . Project S has a pattern of high cash flows in its early life, while Project L has a longer life, with large cash flows late in its life. Neither has negative cash flows after Year 0, and at the current cost of capital, the two projects have identical NPVs. Now suppose interest rates and money costs decline. Other things held constant, this change will cause L to become preferred to S. a. True b. False (11-8) Discounted payback F I Answer: b MEDIUM . The regular payback method is deficient in that it does not take account of cash flows beyond the payback period. 
The discounted payback method corrects this fault. a. True b. False (11-9) Ranking methods F I Answer: a MEDIUM . In theory, capital budgeting decisions should depend solely on forecasted cash flows and the opportunity cost of capital. The decision criterion should not be affected by managers' tastes, choice of accounting method, or the profitability of other independent projects. a. True b. False (11-9) Ranking methods F I Answer: b MEDIUM . If you were evaluating two mutually exclusive projects for a firm with a zero cost of capital, the payback method and NPV method would always lead to the same decision on which project to undertake. a. True b. False (11-10) Small business practices F I Answer: a MEDIUM . Small businesses make less use of DCF capital budgeting techniques than large businesses. This may reflect a lack of knowledge on the part of small firms' managers, but it may also reflect a rational conclusion that the costs of using DCF analysis outweigh the benefits of these methods for very small firms. a. True b. False (Comp.) NPV and IRR F I Answer: b MEDIUM . An increase in the firm's WACC will decrease projects' NPVs, which could change the accept/reject decision for any potential project. However, such a change would have no impact on projects' IRRs. Therefore, the accept/reject decision under the IRR method is independent of the cost of capital. a. True b. False (11-7) NPV profiles F I Answer: b HARD . The IRR of normal Project X is greater than the IRR of normal Project Y, and both IRRs are greater than zero. Also, the NPV of X is greater than the NPV of Y at the cost of capital. If the two projects are mutually exclusive, Project X should definitely be selected, and the investment made, provided we have confidence in the data. Put another way, it is impossible to draw NPV profiles that would suggest not accepting Project X. a. True b. False (11-7) NPV profiles F I Answer: b HARD . Normal Projects S and L have the same NPV when the discount rate is zero. However, Project S's cash flows come in faster than those of L. Therefore, we know that at any discount rate greater than zero, L will have the higher NPV. a. True b. False (11-7) NPV profiles F I Answer: b HARD . If the IRR of normal Project X is greater than the IRR of mutually exclusive (and also normal) Project Y, we can conclude that the firm should always select X rather than Y if X has NPV > 0. a. True b. False Multiple Choice: Conceptual (11-2) NPV C I Answer: c EASY . Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows. a. A project's NPV is found by compounding the cash inflows at the IRR to find the terminal value (TV), then discounting the TV at the WACC. b. The lower the WACC used to calculate it, the lower the calculated NPV will be. c. If a project's NPV is less than zero, then its IRR must be less than the WACC. d. If a project's NPV is greater than zero, then its IRR must be less than zero. e. The NPV of a relatively low-risk project should be found using a relatively high WACC. (11-3) IRR C I Answer: e EASY . Which of the following statements is CORRECT? a. One defect of the IRR method is that it does not take account of cash flows over a project's full life. b. One defect of the IRR method is that it does not take account of the time value of money. c. One defect of the IRR method is that it does not take account of the cost of capital. d. One defect of the IRR method is that it values a dollar received today the same as a dollar that will not be received until sometime in the future. e. One defect of the IRR method is that it assumes that the cash flows to be received from a project can be reinvested at the IRR itself, and that assumption is often not valid. (11-3) IRR C I Answer: e EASY . Which of the following statements is CORRECT? a. One defect of the IRR method versus the NPV is that the IRR does not take account of cash flows over a project's full life. b. One defect of the IRR method versus the NPV is that the IRR does not take account of the time value of money. c. One defect of the IRR method versus the NPV is that the IRR does not take account of the cost of capital. d. One defect of the IRR method versus the NPV is that the IRR values a dollar received today the same as a dollar that will not be received until sometime in the future. e. One defect of the IRR method versus the NPV is that the IRR does not take proper account of differences in the sizes of projects. (11-3) IRR C I Answer: d EASY . Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows. a. A project's regular IRR is found by compounding the cash inflows at the WACC to find the terminal value (TV), then discounting this TV at the WACC. b. A project's regular IRR is found by discounting the cash inflows at the WACC to find the present value (PV), then compounding this PV to find the IRR. c. If a project's IRR is greater than the WACC, then its NPV must be negative. d. To find a project's IRR, we must solve for the discount rate that causes the PV of the inflows to equal the PV of the project's costs. e. To find a project's IRR, we must find a discount rate that is equal to the WACC. (11-3) IRR C I Answer: d EASY . Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows. a. A project's regular IRR is found by compounding the initial cost at the WACC to find the terminal value (TV), then discounting the TV at the WACC. b. A project's regular IRR is found by compounding the cash inflows at the WACC to find the present value (PV), then discounting the TV to find the IRR. c. If a project's IRR is smaller than the WACC, then its NPV will be positive. d. A project's IRR is the discount rate that causes the PV of the inflows to equal the project's cost. e. If a project's IRR is positive, then its NPV must also be positive. (11-4) Normal vs. nonnormal CFs C I Answer: e EASY . Which of the following statements is CORRECT? a. If a project has "normal" cash flows, then its IRR must be positive. b. If a project has "normal" cash flows, then its MIRR must be positive. c. If a project has "normal" cash flows, then it will have exactly two real IRRs. d. The definition of "normal" cash flows is that the cash flow stream has one or more negative cash flows followed by a stream of positive cash flows and then one negative cash flow at the end of the project's life. e. If a project has "normal" cash flows, then it can have only one real IRR, whereas a project with "nonnormal" cash flows might have more than one real IRR. (11-4) Normal vs. nonnormal CFs C I Answer: a EASY . Which of the following statements is CORRECT? a. Projects with "normal" cash flows can have only one real IRR. b. Projects with "normal" cash flows can have two or more real IRRs. c. Projects with "normal" cash flows must have two changes in the sign of the cash flows, e.g., from negative to positive to negative. If there are more than two sign changes, then the cash flow stream is "nonnormal." d. The "multiple IRR problem" can arise if a project's cash flows are "normal." e. Projects with "nonnormal" cash flows are almost never encountered in the real world. (11-8) Payback C I Answer: d EASY . Which of the following statements is CORRECT? a. The regular payback method recognizes all cash flows over a project's life. b. The discounted payback method recognizes all cash flows over a project's life, and it also adjusts these cash flows to account for the time value of money. c. The regular payback method was, years ago, widely used, but virtually no companies even calculate the payback today. d. The regular payback is useful as an indicator of a project's liquidity because it gives managers an idea of how long it will take to recover the funds invested in a project. e. The regular payback does not consider cash flows beyond the payback year, but the discounted payback overcomes this defect. (11-8) Payback C I Answer: b EASY . Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows. a. The longer a project's payback period, the more desirable the project is normally considered to be by this criterion. b. One drawback of the payback criterion for evaluating projects is that this method does not properly account for the time value of money. c. If a project's payback is positive, then the project should be rejected because it must have a negative NPV. d. The regular payback ignores cash flows beyond the payback period, but the discounted payback method overcomes this problem. e. If a company uses the same payback requirement to evaluate all projects, say it requires a payback of 4 years or less, then the company will tend to reject projects with relatively short lives and accept long-lived projects, and this will cause its risk to increase over time. (11-8) Payback C I Answer: b EASY . Which of the following statements is CORRECT? a. The shorter a project's payback period, the less desirable the project is normally considered to be by this criterion. b. One drawback of the payback criterion is that this method does not take account of cash flows beyond the payback period. c. If a project's payback is positive, then the project should be accepted because it must have a positive NPV. d. The regular payback ignores cash flows beyond the payback period, but the discounted payback method overcomes this problem. e. One drawback of the discounted payback is that this method does not consider the time value of money, while the regular payback overcomes this drawback. (Comp.) Ranking methods C I Answer: b EASY . Assume a project has normal cash flows. All else equal, which of the following statements is CORRECT? a. A project's IRR increases as the WACC declines. b. A project's NPV increases as the WACC declines. c. A project's MIRR is unaffected by changes in the WACC. d. A project's regular payback increases as the WACC declines. e. A project's discounted payback increases as the WACC declines. (Comp.) Ranking methods C I Answer: d EASY . Which of the following statements is CORRECT? a. The internal rate of return method (IRR) is generally regarded by academics as being the best single method for evaluating capital budgeting projects. b.
The payback method is generally regarded by academics as being the best single method for evaluating capital budgeting projects. c. The discounted payback method is generally regarded by academics as being the best single method for evaluating capital budgeting projects. d. The net present value method (NPV) is generally regarded by academics as being the best single method for evaluating capital budgeting projects. e. The modified internal rate of return method (MIRR) is generally regarded by academics as being the best single method for evaluating capital budgeting projects. (11-4) Normal vs. nonnormal CFs C I Answer: d EASY/MEDIUM . Which of the following statements is CORRECT? a. An NPV profile graph shows how a project's payback varies as the cost of capital changes. b. The NPV profile graph for a normal project will generally have a positive (upward) slope as the life of the project increases. c. An NPV profile graph is designed to give decision makers an idea about how a project's risk varies with its life. d. An NPV profile graph is designed to give decision makers an idea about how a project's contribution to the firm's value varies with the cost of capital. e. We cannot draw a project's NPV profile unless we know the appropriate WACC for use in evaluating the project's NPV. (11-2) NPV C I Answer: e MEDIUM . Which of the following statements is CORRECT? a. The NPV method was once the favorite of academics and business executives, but today most authorities regard the MIRR as being the best indicator of a project's profitability. b. If the cost of capital declines, this lowers a project's NPV. c. The NPV method is regarded by most academics as being the best indicator of a project's profitability, hence most academics recommend that firms use only this one method. d. A project's NPV depends on the total amount of cash flows the project produces, but because the cash flows are discounted at the WACC, it does not matter if the cash flows occur early or late in the project's life. e. The NPV and IRR methods may give different recommendations regarding which of two mutually exclusive projects should be accepted, but they always give the same recommendation regarding the acceptability of a normal, independent project. (11-2) NPV C I Answer: b MEDIUM . Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows. a. A project's NPV is generally found by compounding the cash inflows at the WACC to find the terminal value (TV), then discounting the TV at the IRR to find its PV. b. The higher the WACC used to calculate the NPV, the lower the calculated NPV will be. c. If a project's NPV is greater than zero, then its IRR must be less than the WACC. d. If a project's NPV is greater than zero, then its IRR must be less than zero. e. The NPVs of relatively risky projects should be found using relatively low WACCs. (11-4) Multiple IRRs C I Answer: d MEDIUM . Which of the following statements is CORRECT? a. For a project to have more than one IRR, then both IRRs must be greater than the WACC. b. If two projects are mutually exclusive, then they are likely to have multiple IRRs. c. If a project is independent, then it cannot have multiple IRRs. d. Multiple IRRs can only occur if the signs of the cash flows change more than once. e. If a project has two IRRs, then the smaller one is the one that is most relevant, and it should be accepted and relied upon. (11-5) NPV and IRR C I Answer: a MEDIUM . Which of the following statements is CORRECT? a. The NPV method assumes that cash flows will be reinvested at the WACC, while the IRR method assumes reinvestment at the IRR. b. The NPV method assumes that cash flows will be reinvested at the risk-free rate, while the IRR method assumes reinvestment at the IRR. c. The NPV method assumes that cash flows will be reinvested at the WACC, while the IRR method assumes reinvestment at the risk-free rate. d. The NPV method does not consider all relevant cash flows, particularly cash flows beyond the payback period. e. The IRR method does not consider all relevant cash flows, particularly cash flows beyond the payback period. (11-5) NPV and IRR C I Answer: e MEDIUM . Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows. a. If Project A has a higher IRR than Project B, then Project A must have the lower NPV. b. If Project A has a higher IRR than Project B, then Project A must also have a higher NPV. c. The IRR calculation implicitly assumes that all cash flows are reinvested at the WACC. d. The IRR calculation implicitly assumes that cash flows are withdrawn from the business rather than being reinvested in the business. e. If a project has normal cash flows and its IRR exceeds its WACC, then the project's NPV must be positive. (11-5) NPV vs. IRR C I Answer: d MEDIUM . Assume that the economy is in a mild recession, and as a result interest rates and money costs generally are relatively low. The WACC for two mutually exclusive projects that are being considered is 8%. Project S has an IRR of 20% while Project L's IRR is 15%. The projects have the same NPV at the 8% current WACC. However, you believe that the economy is about to recover, and money costs and thus your WACC will also increase. You also think that the projects will not be funded until the WACC has increased, and their cash flows will not be affected by the change in economic conditions. Under these conditions, which of the following statements is CORRECT? a. You should reject both projects because they will both have negative NPVs under the new conditions. b. You should delay a decision until you have more information on the projects, even if this means that a competitor might come in and capture this market. c. You should recommend Project L, because at the new WACC it will have the higher NPV. d. You should recommend Project S, because at the new WACC it will have the higher NPV. e. You should recommend Project S because it has the higher IRR and will continue to have the higher IRR even at the new WACC. (11-5) NPV vs. IRR C I Answer: c MEDIUM . Assume that the economy is enjoying a strong boom, and as a result interest rates and money costs generally are relatively high. The WACC for two mutually exclusive projects that are being considered is 12%. Project S has an IRR of 20% while Project L's IRR is 15%. The projects have the same NPV at the 12% current WACC. However, you believe that the economy will soon fall into a mild recession, and money costs and thus your WACC will soon decline. You also think that the projects will not be funded until the WACC has decreased, and their cash flows will not be affected by the change in economic conditions. Under these conditions, which of the following statements is CORRECT? a. You should reject both projects because they will both have negative NPVs under the new conditions. b. You should delay a decision until you have more information on the projects, even if this means that a competitor might come in and capture this market. c. You should recommend Project L, because at the new WACC it will have the higher NPV. d. You should recommend Project S, because at the new WACC it will have the higher NPV. e. You should recommend Project L because it will have both a higher IRR and a higher NPV under the new conditions. (11-5) NPV vs. IRR C I Answer: e MEDIUM . Suppose a firm relies exclusively on the payback method when making capital budgeting decisions, and it sets a 4-year payback regardless of economic conditions. Other things held constant, which of the following statements is most likely to be true? a. It will accept too many short-term projects and reject too many long-term projects (as judged by the NPV). b. It will accept too many long-term projects and reject too many short-term projects (as judged by the NPV). c. The firm will accept too many projects in all economic states because a 4-year payback is too low. d. The firm will accept too few projects in all economic states because a 4-year payback is too high. e. If the 4-year payback results in accepting just the right set of projects under average economic conditions, then this payback will result in too few long-term projects when the economy is weak. (11-7) NPV profiles C I Answer: a MEDIUM . Projects A and B have identical expected lives and identical initial cash outflows (costs). However, most of one project's cash flows come in the early years, while most of the other project's cash flows occur in the later years. The two NPV profiles are given below: Which of the following statements is CORRECT? a. More of Project A's cash flows occur in the later years. b. More of Project B's cash flows occur in the later years. c. We must have information on the cost of capital in order to determine which project has the larger early cash flows. d. The NPV profile graph is inconsistent with the statement made in the problem. e. The crossover rate, i.e., the rate at which Projects A and B have the same NPV, is greater than either project's IRR. (11-7) NPV profiles C I Answer: b MEDIUM . Projects S and L both have an initial cost of $10,000, followed by a series of positive cash inflows. Project S's undiscounted net cash flows total $20,000, while L's total undiscounted flows are $30,000. At a WACC of 10%, the two projects have identical NPVs. Which project's NPV is more sensitive to changes in the WACC? a. Project S. b. Project L. c. Both projects are equally sensitive to changes in the WACC since their NPVs are equal at all costs of capital. d. Neither project is sensitive to changes in the discount rate, since both have NPV profiles that are horizontal. e. The solution cannot be determined because the problem gives us no information that can be used to determine the projects' relative IRRs. (11-7) NPV profiles C I Answer: a MEDIUM . Projects C and D are mutually exclusive and have normal cash flows. Project C has a higher NPV if the WACC is less than 12%, whereas Project D has a higher NPV if the WACC exceeds 12%. Which of the following statements is CORRECT? a. Project D probably has a higher IRR. b. Project D is probably larger in scale than Project C. c. Project C probably has a faster payback. d. Project C probably has a higher IRR. e. The crossover rate between the two projects is below 12%. (11-8) Payback C I Answer: d MEDIUM .
Four of the following statements are truly disadvantages of the regular payback method, but one is not a disadvantage of this method. Which one is NOT a disadvantage of the payback method? a. Lacks an objective, market-determined benchmark for making decisions. b. Ignores cash flows beyond the payback period. c. Does not directly account for the time value of money. d. Does not provide any indication regarding a project's liquidity or risk. e. Does not take account of differences in size among projects. (Comp.) NPV, IRR, and MIRR C I Answer: a MEDIUM . Which of the following statements is CORRECT? a. If a project with normal cash flows has an IRR greater than the WACC, the project must also have a positive NPV. b. If Project A's IRR exceeds Project B's, then A must have the higher NPV. c. A project's MIRR can never exceed its IRR. d. If a project with normal cash flows has an IRR less than the WACC, the project must have a positive NPV. e. If the NPV is negative, the IRR must also be negative. (Comp.) NPV, IRR, and MIRR C I Answer: c MEDIUM . Which of the following statements is CORRECT? a. The MIRR and NPV decision criteria can never conflict. b. The IRR method can never be subject to the multiple IRR problem, while the MIRR method can be. c. One reason some people prefer the MIRR to the regular IRR is that the MIRR is based on a generally more reasonable reinvestment rate assumption. d. The higher the WACC, the shorter the discounted payback period. e. The MIRR method assumes that cash flows are reinvested at the crossover rate. (Comp.) NPV, IRR, and MIRR C I Answer: c MEDIUM . Which of the following statements is CORRECT? a. The NPV, IRR, MIRR, and discounted payback (using a payback requirement of 3 years or less) methods always lead to the same accept/reject decisions for independent projects. b. For mutually exclusive projects with normal cash flows, the NPV and MIRR methods can never conflict, but their results could conflict with the discounted payback and the regular IRR methods. c. Multiple IRRs can exist, but not multiple MIRRs. This is one reason some people favor the MIRR over the regular IRR. d. If a firm uses the discounted payback method with a required payback of 4 years, then it will accept more projects than if it used a regular payback of 4 years. e. The percentage difference between the MIRR and the IRR is equal to the project's WACC. (Comp.) NPV, IRR, and MIRR C I Answer: e MEDIUM . Which of the following statements is CORRECT? a. For a project with normal cash flows, any change in the WACC will change both the NPV and the IRR. b. To find the MIRR, we first compound cash flows at the regular IRR to find the TV, and then we discount the TV at the WACC to find the PV. c. The NPV and IRR methods both assume that cash flows can be reinvested at the WACC. However, the MIRR method assumes reinvestment at the MIRR itself. d. If two projects have the same cost, and if their NPV profiles cross in the upper right quadrant, then the project with the higher IRR probably has more of its cash flows coming in the later years. e. If two projects have the same cost, and if their NPV profiles cross in the upper right quadrant, then the project with the lower IRR probably has more of its cash flows coming in the later years. (Comp.) Ranking methods: NPV C I Answer: b MEDIUM . Which of the following statements is CORRECT? a. One advantage of the NPV over the IRR is that NPV takes account of cash flows over a project's full life whereas IRR does not. b. One advantage of the NPV over the IRR is that NPV assumes that cash flows will be reinvested at the WACC, whereas IRR assumes that cash flows are reinvested at the IRR. The NPV assumption is generally more appropriate. c. One advantage of the NPV over the MIRR method is that NPV takes account of cash flows over a project's full life whereas MIRR does not. d. One advantage of the NPV over the MIRR method is that NPV discounts cash flows whereas the MIRR is based on undiscounted cash flows. e. Since cash flows under the IRR and MIRR are both discounted at the same rate (the WACC), these two methods always rank mutually exclusive projects in the same order. (Comp.) Miscellaneous concepts C I Answer: a MEDIUM . Which of the following statements is CORRECT? a. The IRR method appeals to some managers because it gives an estimate of the rate of return on projects rather than a dollar amount, which the NPV method provides. b. The discounted payback method eliminates all of the problems associated with the payback method. c. When evaluating independent projects, the NPV and IRR methods often yield conflicting results regarding a project's acceptability. d. To find the MIRR, we discount the TV at the IRR. e. A project's NPV profile must intersect the X-axis at the project's WACC. (11-7) NPV profiles C I Answer: a MEDIUM/HARD . Projects S and L are equally risky, mutually exclusive, and have normal cash flows. Project S has an IRR of 15%, while Project L's IRR is 12%. The two projects have the same NPV when the WACC is 7%. Which of the following statements is CORRECT? a. If the WACC is 10%, both projects will have positive NPVs. b. If the WACC is 6%, Project S will have the higher NPV. c. If the WACC is 13%, Project S will have the lower NPV. d. If the WACC is 10%, both projects will have a negative NPV. e. Project S's NPV is more sensitive to changes in WACC than Project L's. (11-7) NPV profiles C I Answer: e MEDIUM/HARD . Westchester Corp. is considering two equally risky, mutually exclusive projects, both of which have normal cash flows. Project A has an IRR of 11%, while Project B's IRR is 14%. When the WACC is 8%, the projects have the same NPV. Given this information, which of the following statements is CORRECT? a. If the WACC is 13%, Project A's NPV will be higher than Project B's. b. If the WACC is 9%, Project A's NPV will be higher than Project B's. c. If the WACC is 6%, Project B's NPV will be higher than Project A's. d. If the WACC is greater than 14%, Project A's IRR will exceed Project B's. e. If the WACC is 9%, Project B's NPV will be higher than Project A's. (11-7) NPV profiles C I Answer: b MEDIUM/HARD . You are considering two mutually exclusive, equally risky, projects. Both have IRRs that exceed the WACC. Which of the following statements is CORRECT? Assume that the projects have normal cash flows, with one outflow followed by a series of inflows. a. If the two projects' NPV profiles do not cross, then there will be a sharp conflict as to which one should be selected. b. If the cost of capital is greater than the crossover rate, then the IRR and the NPV criteria will not result in a conflict between the projects. One project will rank higher by both criteria. c. If the cost of capital is less than the crossover rate, then the IRR and the NPV criteria will not result in a conflict between the projects. One project will rank higher by both criteria. d. For a conflict to exist between NPV and IRR, the initial investment cost of one project must exceed the cost of the other. e. For a conflict to exist between NPV and IRR, one project must have an increasing stream of cash flows over time while the other has a decreasing stream. If both sets of cash flows are increasing or decreasing, then it would be impossible for a conflict to exist, even if one project is larger than the other. (11-7) NPV profiles C I Answer: b MEDIUM/HARD . Project X's IRR is 19% and Project Y's IRR is 17%. The projects have the same risk and the same lives, and each has constant cash flows during each year of their lives. If the WACC is 10%, Project Y has a higher NPV than X. Given this information, which of the following statements is CORRECT? a. The crossover rate must be less than 10%. b. The crossover rate must be greater than 10%. c. If the WACC is 8%, Project X will have the higher NPV. d. If the WACC is 18%, Project Y will have the higher NPV. e. Project X is larger in the sense that it has the higher initial cost. (11-4) Multiple IRRs C I Answer: c HARD . You are on the staff of Camden Inc. The CFO believes project acceptance should be based on the NPV, but Steve Camden, the president, insists that no project should be accepted unless its IRR exceeds the project's risk-adjusted WACC. Now you must make a recommendation on a project that has a cost of $15,000 and two cash flows: $110,000 at the end of Year 1 and -$100,000 at the end of Year 2. The president and the CFO both agree that the appropriate WACC for this project is 10%. At 10%, the NPV is $2,355.37, but you find two IRRs, one at 6.33% and one at 527%, and a MIRR of 11.32%. Which of the following statements best describes your optimal recommendation, i.e., the analysis and recommendation that is best for the company and least likely to get you in trouble with either the CFO or the president? a. You should recommend that the project be rejected because its NPV is negative and its IRR is less than the WACC. b. You should recommend that the project be rejected because, although its NPV is positive, it has an IRR that is less than the WACC. c. You should recommend that the project be accepted because (1) its NPV is positive and (2) although it has two IRRs, in this case it would be better to focus on the MIRR, which exceeds the WACC. You should explain this to the president and tell him that the firm's value will increase if the project is accepted. d. You should recommend that the project be rejected because (1) its NPV is positive and (2) it has two IRRs, one of which is less than the WACC, which indicates that the firm's value will decline if the project is accepted. e. You should recommend that the project be rejected because, although its NPV is positive, its MIRR is less than the WACC, and that indicates that the firm's value will decline if it is accepted. (11-6) MIRR C I Answer: e HARD . Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one cash outflow at t = 0 followed by a series of positive cash flows. a. A project's MIRR is always greater than its regular IRR. b. A project's MIRR is always less than its regular IRR. c. If a project's IRR is greater than its WACC, then its MIRR will be greater than the IRR. d. To find a project's MIRR, we compound cash inflows at the regular IRR and then find the discount rate that causes the PV of the terminal value to equal the initial cost. e.
To find a project?s MIRR, the textbook procedure compounds cash inflows at the WACC and then finds the discount rate that causes the PV of the terminal value to equal the initial cost. (11-7) NPV profiles C I Answer: e HARD . Projects S and L both have normal cash flows, and the projects have the same risk, hence both are evaluated with the same WACC, 10%. However, S has a higher IRR than L. Which of the following statements is CORRECT? a. Project S must have a higher NPV than Project L. b. If Project S has a positive NPV, Project L must also have a positive NPV. c. If the WACC falls, each project?s IRR will increase. d. If the WACC increases, each project?s IRR will decrease. e. If Projects S and L have the same NPV at the current WACC, 10%, then Project L, the one with the lower IRR, would have a higher NPV if the WACC used to evaluate the projects declined. (11-7) NPV profiles C I Answer: c HARD . Which of the following statements is CORRECT? Assume that all projects being considered have normal cash flows and are equally risky. a. If a project?s IRR is equal to its WACC, then, under all reasonable conditions, the project?s NPV must be negative. b. If a project?s IRR is equal to its WACC, then under all reasonable conditions, the project?s IRR must be negative. c. If a project?s IRR is equal to its WACC, then under all reasonable conditions the project?s NPV must be zero. d. There is no necessary relationship between a project?s IRR, its WACC, and its NPV. e. When evaluating mutually exclusive projects, those projects with relatively long lives will tend to have relatively high NPVs when the cost of capital is relatively high. (11-7) NPV profiles C I Answer: d HARD . A company is choosing between two projects. The larger project has an initial cost of $100,000, annual cash flows of $30,000 for 5 years, and an IRR of 15.24%. The smaller project has an initial cost of $50,000, annual cash flows of $16,000 for 5 years, and an IRR of 16.63%. The projects are equally risky. Which of the following statements is CORRECT? a. Since the smaller project has the higher IRR, the two projects? NPV profiles cannot cross, and the smaller project's NPV will be higher at all positive values of WACC. b. Since the smaller project has the higher IRR, the two projects? NPV profiles will cross, and the larger project will look better based on the NPV at all positive values of WACC. c. If the company uses the NPV method, it will tend to favor smaller, shorter-term projects over larger, longer-term projects, regardless of how high or low the WACC is. d. Since the smaller project has the higher IRR but the larger project has the higher NPV at a zero discount rate, the two projects? NPV profiles will cross, and the larger project will have the higher NPV if the WACC is less than the crossover rate. e. Since the smaller project has the higher IRR and the larger NPV at a zero discount rate, the two projects? NPV profiles will cross, and the smaller project will look better if the WACC is less than the crossover rate. (11-7) NPV profiles C I Answer: c HARD . McCall Manufacturing has a WACC of 10%. The firm is considering two normal, equally risky, mutually exclusive, but not repeatable projects. The two projects have the same investment costs, but Project A has an IRR of 15%, while Project B has an IRR of 20%. Assuming the projects' NPV profiles cross in the upper right quadrant, which of the following statements is CORRECT? a. Each project must have a negative NPV. b. 
(11-7) NPV profiles C I Answer: c HARD
Projects A and B are mutually exclusive and have normal cash flows. Project A has an IRR of 15% and B's IRR is 20%. The company's WACC is 12%, and at that rate Project A has the higher NPV. Which of the following statements is CORRECT?
a. The crossover rate for the two projects must be less than 12%.
b. Assuming the timing pattern of the two projects' cash flows is the same, Project B probably has a higher cost (and larger scale).
c. Assuming the two projects have the same scale, Project B probably has a faster payback than Project A.
d. The crossover rate for the two projects must be 12%.
e. Since B has the higher IRR, then it must also have the higher NPV if the crossover rate is less than the WACC of 12%.

(11-6) MIRR C I Answer: c VERY HARD
Which of the following statements is CORRECT? Assume that the project being considered has normal cash flows, with one outflow followed by a series of inflows.
a. A project's MIRR is always greater than its regular IRR.
b. A project's MIRR is always less than its regular IRR.
c. If a project's IRR is greater than its WACC, then the MIRR will be less than the IRR.
d. If a project's IRR is greater than its WACC, then the MIRR will be greater than the IRR.
e. To find a project's MIRR, we compound cash inflows at the IRR and then discount the terminal value back to t = 0 at the WACC.

Problems

(11-2) NPV C I Answer: a EASY
Anderson Systems is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that if a project's projected NPV is negative, it should be rejected.
WACC: 9.00%
Year        0        1     2     3
Cash flows  -$1,000  $500  $500  $500
a. $265.65  b. $278.93  c. $292.88  d. $307.52  e. $322.90

(11-2) NPV C I Answer: c EASY
Tuttle Enterprises is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that if a project's projected NPV is negative, it should be rejected.
WACC: 11.00%
Year        0        1     2     3     4
Cash flows  -$1,000  $350  $350  $350  $350
a. $77.49  b. $81.56  c. $85.86  d. $90.15  e. $94.66

(11-2) NPV C I Answer: e EASY
Harry's Inc. is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that if a project's projected NPV is negative, it should be rejected.
WACC: 10.25%
Year        0        1     2     3     4     5
Cash flows  -$1,000  $300  $300  $300  $300  $300
a. $105.89  b. $111.47  c. $117.33  d. $123.51  e. $130.01

(11-3) IRR C I Answer: b EASY
Simms Corp. is considering a project that has the following cash flow data. What is the project's IRR? Note that a project's projected IRR can be less than the WACC or negative; in both cases it will be rejected.
Year        0        1     2     3
Cash flows  -$1,000  $425  $425  $425
a. 12.55%  b. 13.21%  c. 13.87%  d. 14.56%  e. 15.29%

(11-3) IRR C I Answer: d EASY
Warr Company is considering a project that has the following cash flow data. What is the project's IRR? Note that a project's projected IRR can be less than the WACC or negative; in both cases it will be rejected.
Year        0        1     2     3     4
Cash flows  -$1,050  $400  $400  $400  $400
a. 14.05%  b. 15.61%  c. 17.34%  d. 19.27%  e. 21.20%

(11-3) IRR C I Answer: a EASY
Thorley Inc. is considering a project that has the following cash flow data. What is the project's IRR? Note that a project's projected IRR can be less than the WACC or negative; in both cases it will be rejected.
Year        0        1     2     3     4     5
Cash flows  -$1,250  $325  $325  $325  $325  $325
a. 9.43%  b. 9.91%  c. 10.40%  d. 10.92%  e. 11.47%
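The EASY problems above are all single-outflow annuity patterns, so a few lines of Python can check any of them. This is a sketch of my own (the helpers mirror the earlier ones), verifying the Anderson NPV and the Simms IRR against the keyed answers:

```python
def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=0.0, hi=1.0):
    """Bisection; adequate for normal cash flows, where NPV falls as the rate rises."""
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cfs) > 0 else (lo, mid)
    return (lo + hi) / 2

# Anderson Systems: WACC 9%, -1,000 then 500 x 3  ->  NPV = 265.65 (answer a)
print(round(npv(0.09, [-1000, 500, 500, 500]), 2))

# Simms Corp.: -1,000 then 425 x 3  ->  IRR = 13.21% (answer b)
print(f"{irr([-1000, 425, 425, 425]):.2%}")
```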
(11-8) Payback C I Answer: c EASY
Taggart Inc. is considering a project that has the following cash flow data. What is the project's payback?
Year        0        1     2     3
Cash flows  -$1,150  $500  $500  $500
a. 1.86 years  b. 2.07 years  c. 2.30 years  d. 2.53 years  e. 2.78 years

(11-8) Payback C I Answer: c EASY
Resnick Inc. is considering a project that has the following cash flow data. What is the project's payback?
Year        0      1     2     3
Cash flows  -$350  $200  $200  $200
a. 1.42 years  b. 1.58 years  c. 1.75 years  d. 1.93 years  e. 2.12 years

(11-8) Payback C I Answer: c EASY
Susmel Inc. is considering a project that has the following cash flow data. What is the project's payback?
Year        0      1     2     3
Cash flows  -$500  $150  $200  $300
a. 2.03 years  b. 2.25 years  c. 2.50 years  d. 2.75 years  e. 3.03 years

(11-8) Payback C I Answer: c EASY
Mansi Inc. is considering a project that has the following cash flow data. What is the project's payback?
Year        0      1     2     3
Cash flows  -$750  $300  $325  $350
a. 1.91 years  b. 2.12 years  c. 2.36 years  d. 2.59 years  e. 2.85 years

(11-2) NPV C I Answer: a EASY/MEDIUM
Cornell Enterprises is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that a project's projected NPV can be negative, in which case it will be rejected.
WACC: 10.00%
Year        0        1     2     3
Cash flows  -$1,050  $450  $460  $470
a. $92.37  b. $96.99  c. $101.84  d. $106.93  e. $112.28

(11-2) NPV C I Answer: c EASY/MEDIUM
Warnock Inc. is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that a project's projected NPV can be negative, in which case it will be rejected.
WACC: 10.00%
Year        0      1     2     3
Cash flows  -$950  $500  $400  $300
a. $54.62  b. $57.49  c. $60.52  d. $63.54  e. $66.72

(11-2) NPV C I Answer: e EASY/MEDIUM
Jazz World Inc. is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that a project's projected NPV can be negative, in which case it will be rejected.
WACC: 14.00%
Year        0        1     2     3     4
Cash flows  -$1,200  $400  $425  $450  $475
a. $41.25  b. $45.84  c. $50.93  d. $56.59  e. $62.88

(11-2) NPV C I Answer: b EASY/MEDIUM
Barry Company is considering a project that has the following cash flow and WACC data. What is the project's NPV? Note that a project's projected NPV can be negative, in which case it will be rejected.
WACC: 12.00%
Year        0        1     2     3     4     5
Cash flows  -$1,100  $400  $390  $380  $370  $360
a. $250.15  b. $277.94  c. $305.73  d. $336.31  e. $369.94

(11-3) IRR C I Answer: d EASY/MEDIUM
Datta Computer Systems is considering a project that has the following cash flow data. What is the project's IRR? Note that a project's projected IRR can be less than the WACC (and even negative), in which case it will be rejected.
Year        0        1     2     3
Cash flows  -$1,100  $450  $470  $490
a. 9.70%  b. 10.78%  c. 11.98%  d. 13.31%  e. 14.64%

(11-3) IRR C I Answer: a EASY/MEDIUM
Simkins Renovations Inc. is considering a project that has the following cash flow data. What is the project's IRR? Note that a project's projected IRR can be less than the WACC (and even negative), in which case it will be rejected.
Year        0      1     2     3     4
Cash flows  -$850  $300  $290  $280  $270
a. 13.13%  b. 14.44%  c. 15.89%  d. 17.48%  e. 19.22%
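The payback questions above all follow the same recipe: count whole years until the cumulative cash flow turns positive, then add the fractional year. A hedged sketch of my own:

```python
def payback(cfs):
    """Whole years until cumulative CF turns positive, plus
    |last negative cumulative CF| / CF received in the payback year."""
    cum = 0.0
    for t, cf in enumerate(cfs):
        prev, cum = cum, cum + cf
        if t > 0 and cum >= 0:
            return (t - 1) + (-prev) / cf
    return None  # investment never recovered

print(payback([-1150, 500, 500, 500]))  # Taggart -> 2.30 (answer c)
print(payback([-500, 150, 200, 300]))   # Susmel  -> 2.50 (answer c)
print(payback([-750, 300, 325, 350]))   # Mansi   -> ~2.36 (answer c)
```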
(11-3) IRR C I Answer: c EASY/MEDIUM
Maxwell Feed & Seed is considering a project that has the following cash flow data. What is the project's IRR? Note that a project's projected IRR can be less than the WACC (and even negative), in which case it will be rejected.
Year        0        1       2       3       4       5
Cash flows  -$9,500  $2,000  $2,025  $2,050  $2,075  $2,100
a. 2.08%  b. 2.31%  c. 2.57%  d. 2.82%  e. 3.10%

(11-2) NPV sensitivity to WACC C I Answer: d MEDIUM
Last month, Lloyd's Systems analyzed the project whose cash flows are shown below. However, before the decision to accept or reject the project, the Federal Reserve took actions that changed interest rates and therefore the firm's WACC. The Fed's action did not affect the forecasted cash flows. By how much did the change in the WACC affect the project's forecasted NPV? Note that a project's projected NPV can be negative, in which case it should be rejected.
Old WACC: 10.00%   New WACC: 11.25%
Year        0        1     2     3
Cash flows  -$1,000  $410  $410  $410
a. -$18.89  b. -$19.88  c. -$20.93  d. -$22.03  e. -$23.13

(11-2) NPV sensitivity to WACC C I Answer: a MEDIUM
Lasik Vision Inc. recently analyzed the project whose cash flows are shown below. However, before Lasik decided to accept or reject the project, the Federal Reserve took actions that changed interest rates and therefore the firm's WACC. The Fed's action did not affect the forecasted cash flows. By how much did the change in the WACC affect the project's forecasted NPV? Note that a project's projected NPV can be negative, in which case it should be rejected.
Old WACC: 8.00%   New WACC: 11.25%
Year        0        1     2     3
Cash flows  -$1,000  $410  $410  $410
a. -$59.03  b. -$56.08  c. -$53.27  d. -$50.61  e. -$48.08

(11-6) MIRR C I Answer: e MEDIUM
Ehrmann Data Systems is considering a project that has the following cash flow and WACC data. What is the project's MIRR? Note that a project's projected MIRR can be less than the WACC (and even negative), in which case it will be rejected.
WACC: 10.00%
Year        0        1     2     3
Cash flows  -$1,000  $450  $450  $450
a. 9.32%  b. 10.35%  c. 11.50%  d. 12.78%  e. 14.20%

(11-6) MIRR C I Answer: e MEDIUM
Ingram Electric Products is considering a project that has the following cash flow and WACC data. What is the project's MIRR? Note that a project's projected MIRR can be less than the WACC (and even negative), in which case it will be rejected.
WACC: 11.00%
Year        0      1     2     3
Cash flows  -$800  $350  $350  $350
a. 8.86%  b. 9.84%  c. 10.94%  d. 12.15%  e. 13.50%

(11-6) MIRR C I Answer: b MEDIUM
Malholtra Inc. is considering a project that has the following cash flow and WACC data. What is the project's MIRR? Note that a project's projected MIRR can be less than the WACC (and even negative), in which case it will be rejected.
WACC: 10.00%
Year        0      1     2     3     4
Cash flows  -$850  $300  $320  $340  $360
a. 14.08%  b. 15.65%  c. 17.21%  d. 18.94%  e. 20.83%

(11-6) MIRR C I Answer: c MEDIUM
Hindelang Inc. is considering a project that has the following cash flow and WACC data. What is the project's MIRR? Note that a project's projected MIRR can be less than the WACC (and even negative), in which case it will be rejected.
WACC: 12.25%
Year        0      1     2     3     4
Cash flows  -$850  $300  $320  $340  $360
a. 13.42%  b. 14.91%  c. 16.56%  d. 18.22%  e. 20.04%

(11-8) Payback C I Answer: e MEDIUM
Stern Associates is considering a project that has the following cash flow data. What is the project's payback?
Year        0        1     2     3     4     5
Cash flows  -$1,100  $300  $310  $320  $330  $340
a. 2.31 years  b. 2.56 years  c. 2.85 years  d. 3.16 years  e. 3.52 years
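The two "NPV sensitivity to WACC" problems above isolate a single effect: the cash flows are fixed and only the discount rate moves. A small sketch of my own confirming the Lloyd's Systems figures:

```python
def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

cfs = [-1000, 410, 410, 410]                  # Lloyd's Systems
old, new = npv(0.10, cfs), npv(0.1125, cfs)
print(f"old NPV = {old:.2f}")                 # 19.61
print(f"new NPV = {new:.2f}")                 # -2.42
print(f"change  = {new - old:.2f}")           # -22.03 (answer d)
```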
(11-8) Discounted payback C I Answer: b MEDIUM
Fernando Designs is considering a project that has the following cash flow and WACC data. What is the project's discounted payback?
WACC: 10.00%
Year        0      1     2     3
Cash flows  -$900  $500  $500  $500
a. 1.88 years  b. 2.09 years  c. 2.29 years  d. 2.52 years  e. 2.78 years

(11-8) Discounted payback C I Answer: d MEDIUM
Masulis Inc. is considering a project that has the following cash flow and WACC data. What is the project's discounted payback?
WACC: 10.00%
Year        0      1     2     3     4
Cash flows  -$950  $525  $485  $445  $405
a. 1.61 years  b. 1.79 years  c. 1.99 years  d. 2.22 years  e. 2.44 years

(Comp.) NPV vs. IRR C I Answer: a MEDIUM
Tesar Chemicals is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and not repeatable. The CEO believes the IRR is the best selection criterion, while the CFO advocates the NPV. If the decision is made by choosing the project with the higher IRR rather than the one with the higher NPV, how much, if any, value will be forgone, i.e., what's the chosen NPV versus the maximum possible NPV? Note that (1) "true value" is measured by NPV, and (2) under some conditions the choice of IRR vs. NPV will have no effect on the value gained or lost.
WACC: 7.50%
Year  0        1     2     3     4
CFS   -$1,100  $550  $600  $100  $100
CFL   -$2,700  $650  $725  $800  $1,400
a. $138.10  b. $149.21  c. $160.31  d. $171.42  e. $182.52

(11-5) NPV vs. IRR C I Answer: c MEDIUM/HARD
A firm is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and not repeatable. The CEO wants to use the IRR criterion, while the CFO favors the NPV method. You were hired to advise the firm on the best procedure. If the wrong decision criterion is used, how much potential value would the firm lose?
WACC: 6.00%
Year  0        1     2     3     4
CFS   -$1,025  $380  $380  $380  $380
CFL   -$2,150  $765  $765  $765  $765
a. $188.68  b. $198.61  c. $209.07  d. $219.52  e. $230.49

(11-5) NPV vs. IRR C I Answer: c MEDIUM/HARD
Sexton Inc. is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and not repeatable. If the decision is made by choosing the project with the higher IRR, how much value will be forgone? Note that under certain conditions choosing projects on the basis of the IRR will not cause any value to be lost because the one with the higher IRR will also have the higher NPV, so no value will be lost if the IRR method is used.
WACC: 10.25%
Year  0        1       2       3       4
CFS   -$2,050  $750    $760    $770    $780
CFL   -$4,300  $1,500  $1,518  $1,536  $1,554
a. $134.79  b. $141.89  c. $149.36  d. $164.29  e. $205.36

(11-5) NPV vs. IRR C I Answer: e MEDIUM/HARD
Moerdyk & Co. is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and not repeatable. If the decision is made by choosing the project with the higher IRR, how much value will be forgone? Note that under certain conditions choosing projects on the basis of the IRR will not cause any value to be lost because the one with the higher IRR will also have the higher NPV, i.e., no conflict will exist.
WACC: 10.00%
Year  0        1     2     3     4
CFS   -$1,025  $650  $450  $250  $50
CFL   -$1,025  $100  $300  $500  $700
a. $5.47  b. $6.02  c. $6.62  d. $7.29  e. $7.82
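Discounted payback repeats the payback recipe on WACC-discounted cash flows. A sketch (my own helper, not the textbook's) checking the two problems above:

```python
def discounted_payback(cfs, wacc):
    """Payback computed on the cumulative *present values* of the cash flows."""
    cum = 0.0
    for t, cf in enumerate(cfs):
        pv = cf / (1 + wacc) ** t
        prev, cum = cum, cum + pv
        if t > 0 and cum >= 0:
            return (t - 1) + (-prev) / pv
    return None

# Fernando Designs: WACC 10%, -900 then 500 x 3 -> 2.09 years (answer b)
print(round(discounted_payback([-900, 500, 500, 500], 0.10), 2))
# Masulis Inc.: WACC 10%, -950 then 525/485/445/405 -> 2.22 years (answer d)
print(round(discounted_payback([-950, 525, 485, 445, 405], 0.10), 2))
```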
(11-5) NPV vs. IRR C I Answer: b MEDIUM/HARD
Kosovski Company is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and are not repeatable. If the decision is made by choosing the project with the higher IRR, how much value will be forgone? Note that under some conditions choosing projects on the basis of the IRR will cause $0.00 value to be lost.
WACC: 7.75%
Year  0        1     2     3     4
CFS   -$1,050  $675  $650
CFL   -$1,050  $360  $360  $360  $360
a. $11.45  b. $12.72  c. $14.63  d. $16.82  e. $19.35

(Comp.) NPV vs. MIRR C I Answer: d MEDIUM/HARD
Nast Inc. is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and not repeatable. If the decision is made by choosing the project with the higher MIRR rather than the one with the higher NPV, how much value will be forgone? Note that under some conditions choosing projects on the basis of the MIRR will cause $0.00 value to be lost.
WACC: 8.75%
Year  0        1     2     3     4
CFS   -$1,100  $375  $375  $375  $375
CFL   -$2,200  $725  $725  $725  $725
a. $32.12  b. $35.33  c. $38.87  d. $40.15  e. $42.16

(Comp.) NPV vs. payback C I Answer: d MEDIUM/HARD
Yonan Inc. is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and not repeatable. If the decision is made by choosing the project with the shorter payback, some value may be forgone. How much value will be lost in this instance? Note that under some conditions choosing projects on the basis of the shorter payback will not cause value to be lost.
WACC: 10.25%
Year  0        1     2     3     4
CFS   -$950    $500  $800  $0    $0
CFL   -$2,100  $400  $800  $800  $1,000
a. $24.14  b. $26.82  c. $29.80  d. $33.11  e. $36.42

(Comp.) IRR vs. MIRR C I Answer: a HARD
Noe Drilling Inc. is considering Projects S and L, whose cash flows are shown below. These projects are mutually exclusive, equally risky, and not repeatable. The CEO believes the IRR is the best selection criterion, while the CFO advocates the MIRR. If the decision is made by choosing the project with the higher IRR rather than the one with the higher MIRR, how much, if any, value will be forgone, i.e., what's the NPV of the chosen project versus the maximum possible NPV? Note that (1) "true value" is measured by NPV, and (2) under some conditions the choice of IRR vs. MIRR will have no effect on the value lost.
WACC: 7.00%
Year  0        1     2     3     4
CFS   -$1,100  $550  $600  $100  $100
CFL   -$2,750  $725  $725  $800  $1,400
a. $185.90  b. $197.01  c. $208.11  d. $219.22  e. $230.32

CHAPTER 11 ANSWERS AND SOLUTIONS

(11-1) Capital budget F I Answer: b EASY
(11-2) PV of cash flows F I Answer: b EASY
(11-2) NPV F I Answer: b EASY
(11-2) NPV and IRR F I Answer: b EASY
(11-2) Mutually exclusive projects F I Answer: a EASY
(11-2) Mutually exclusive projects F I Answer: b EASY
(11-3) IRR F I Answer: a EASY
(11-3) IRR F I Answer: b EASY
(11-4) Multiple IRRs F I Answer: a EASY
(11-4) Multiple IRRs F I Answer: b EASY
(11-5) Reinvestment rate assumption F I Answer: a EASY
(11-5) Reinvestment rate assumption F I Answer: b EASY
(11-5) Reinvestment rate assumption F I Answer: a EASY
(11-6) Modified IRR F I Answer: a EASY
(11-6) Modified IRR F I Answer: b EASY
(11-6) Modified IRR F I Answer: b EASY
(11-8) Payback period F I Answer: a EASY

(11-2) Mutually exclusive projects F I Answer: b MEDIUM
Think about the following equally risky projects. The cost of capital is WACC = 10%.
Year  0         1        2       3       4       5       6
S     -1000.00  1400.00
L     -1000.00  378.34   378.34  378.34  378.34  378.34  378.34
IRR(S) = 40.0%, NPV(S) = $272.73; IRR(L) = 30.0%, NPV(L) = $647.77
S has the higher IRR, but L has a much higher NPV and is therefore preferable. If the project could be repeated, though, S would turn out to be better: it would have both a higher NPV and IRR.
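The S-versus-L example in the answer above is easy to reproduce. The sketch below (mine, not the test bank's) confirms that S wins on IRR while L wins on NPV at the 10% WACC:

```python
def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

S = [-1000.0, 1400.0]
L = [-1000.0] + [378.34] * 6
print(f"NPV(S) = {npv(0.10, S):.2f}")  # 272.73; IRR(S) = 1400/1000 - 1 = 40%
print(f"NPV(L) = {npv(0.10, L):.2f}")  # 647.77; 378.34/yr repays 1000 at a 30% IRR
```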
(11-5) NPV vs. IRR F I Answer: b MEDIUM
(11-5) NPV vs. IRR F I Answer: b MEDIUM
(11-5) NPV vs. IRR F I Answer: b MEDIUM
(11-5) NPV vs. IRR F I Answer: a MEDIUM
(11-7) NPV profiles F I Answer: a MEDIUM

(11-8) Discounted payback F I Answer: b MEDIUM
The discounted payback corrects the fault of not considering the timing of cash flows, but it does not correct for the nonconsideration of after-payback cash flows.

(11-9) Ranking methods F I Answer: a MEDIUM

(11-9) Ranking methods F I Answer: b MEDIUM
One project might have cash flows that extend well past the payback year, leading to different rankings.

(11-10) Small business practices F I Answer: a MEDIUM
(Comp.) NPV and IRR F I Answer: b MEDIUM

(11-7) NPV profiles F I Answer: b HARD
Project X may have a negative NPV if r > IRR. The NPV profile line crosses the horizontal axis, and the NPV at the cost of capital is in the lower right quadrant. [NPV profile graph omitted in this extract.]

(11-7) NPV profiles F I Answer: b HARD
We can see from the graph that S has the higher NPV if r > 0.

(11-7) NPV profiles F I Answer: b HARD
We do not know if the cost of capital is to the right or left of the crossover point. Therefore, NPV(X) may be either higher or lower than NPV(Y).

(11-2) NPV C I Answer: c EASY

(11-3) IRR C I Answer: e EASY
The IRR assumes reinvestment at the IRR, and that is generally not as valid as assuming reinvestment at the WACC, as with the NPV.

(11-3) IRR C I Answer: e EASY
The IRR would rank a project that cost $100 and had a 100% IRR ahead of a project that cost $1,000,000 and had an IRR of 90%. The larger project would increase the firm's value more, as the NPV would demonstrate.

(11-3) IRR C I Answer: d EASY
(11-3) IRR C I Answer: d EASY
(11-4) Normal vs. nonnormal CFs C I Answer: e EASY
(11-4) Normal vs. nonnormal CFs C I Answer: a EASY

(11-8) Payback C I Answer: d EASY
Statement d is true. The payback does indicate how long it should take to recover the investment; hence, it is a measure of liquidity.

(11-8) Payback C I Answer: b EASY
(11-8) Payback C I Answer: b EASY
(Comp.) Ranking methods C I Answer: b EASY
(Comp.) Ranking methods C I Answer: d EASY
(11-4) Normal vs. nonnormal CFs C I Answer: d EASY/MEDIUM

(11-2) NPV C I Answer: e MEDIUM
Statement e is correct. The others are all false. If you draw an NPV profile for one project, you will see that if the WACC is less than the IRR, the NPV will be positive.

(11-2) NPV C I Answer: b MEDIUM
(11-4) Multiple IRRs C I Answer: d MEDIUM
(11-5) NPV and IRR C I Answer: a MEDIUM
(11-5) NPV and IRR C I Answer: e MEDIUM
(11-5) NPV vs. IRR C I Answer: d MEDIUM
(11-5) NPV vs. IRR C I Answer: c MEDIUM

(11-5) NPV vs. IRR C I Answer: e MEDIUM
Statement e is correct. In a weak economy, the interest rates and the WACC are likely to be low, and these conditions favor long-term projects. But the constant 4-year payback would not recognize this situation.

(11-7) NPV profiles C I Answer: a MEDIUM
Statement a is true and the other statements are false. Distant cash flows are more severely penalized by high discount rates, so if the NPV profile line has a steep slope, this indicates that cash flows occur relatively late.

(11-7) NPV profiles C I Answer: b MEDIUM
Statement b is true, while the other statements are false. Since Project L's undiscounted CFs are larger, they must occur in the more distant future, and since distant cash flows are impacted more by changes in the discount rate, L's NPV profile must be steeper. One can also see this in an NPV profile graph like the one in Question 52. The higher Y-axis intercept indicates more undiscounted CFs, and for the profiles to cross, the one with the higher intercept must be steeper.
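Several of the HARD answers above lean on NPV profile graphs that were embedded Word objects and did not survive extraction. The sketch below is a stand-in for those figures, not the originals: it plots profiles for the S and L projects from the earlier reinvestment example, assuming matplotlib is available.

```python
import matplotlib.pyplot as plt

def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

S = [-1000.0, 1400.0]            # short, front-loaded project (IRR = 40%)
L = [-1000.0] + [378.34] * 6     # long, spread-out project (IRR = 30%)
rates = [i / 200 for i in range(0, 101)]      # 0% to 50%

plt.plot(rates, [npv(r, S) for r in rates], label="S (IRR 40%)")
plt.plot(rates, [npv(r, L) for r in rates], label="L (IRR 30%)")
plt.axhline(0, color="gray", linewidth=0.5)   # each profile hits zero at its IRR
plt.xlabel("discount rate")
plt.ylabel("NPV")
plt.legend()
plt.show()
```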
(11-7) NPV profiles C I Answer: a MEDIUM
The NPV profiles cross at 12%. To the left, or at lower discount rates, C has the higher NPV, so its slope is steeper, causing its profile to hit the X axis sooner. This means that C has the lower IRR, hence D has the higher IRR.

(11-8) Payback C I Answer: d MEDIUM
(Comp.) NPV, IRR, and MIRR C I Answer: a MEDIUM
(Comp.) NPV, IRR, and MIRR C I Answer: c MEDIUM
(Comp.) NPV, IRR, and MIRR C I Answer: c MEDIUM
(Comp.) NPV, IRR, and MIRR C I Answer: e MEDIUM

(Comp.) Ranking methods: NPV C I Answer: b MEDIUM
Statement b is correct, and the others are false. Cash flows from a project can be used to replace funds that would be raised in the market at the WACC, so the WACC is the opportunity cost for reinvested cash flows. Since the NPV assumes reinvestment at the WACC while the IRR assumes reinvestment at the IRR, NPV is generally the better method.

(Comp.) Miscellaneous concepts C I Answer: a MEDIUM

(11-7) NPV profiles C I Answer: a MEDIUM/HARD
The easiest way to think about this question is to begin by drawing an NPV profile as shown below, then using it to decide which statement is correct. [NPV profile graph omitted in this extract.] Statement a is true, because both projects have an IRR greater than the WACC and thus will have a positive NPV. Statement b is false, because at 6%, the WACC is less than the crossover rate and Project L has a higher NPV than S. Statement c is false, because at 13% the WACC is greater than the crossover rate and S would have a higher NPV than L. Statement d is false, because of reasons mentioned for statement a. Statement e is false, because Project L's NPV profile is steeper, which means Project L's NPV is more sensitive to changes in WACC.

(11-7) NPV profiles C I Answer: e MEDIUM/HARD
Statement e is true, while the others are false.

(11-7) NPV profiles C I Answer: b MEDIUM/HARD
Again, it is useful to draw NPV profiles that fit the description given in the question. Any numbers that meet the criteria will do. Statement a is false, because if the profiles do not cross, then one will dominate the other, with both a higher IRR and a higher NPV at every discount rate. Statement b is true. Statement c is false. Statement d is false because a conflict can result from differences in the timing of the cash flows. Statement e is false because scale differences can result in profile crossovers and thus conflicts.

(11-7) NPV profiles C I Answer: b MEDIUM/HARD
Again, it is useful to draw NPV profiles that fit the description given in the question. Any number that meets the criteria will do. [NPV profile graph omitted in this extract.] As we can see from the graph, statement b is true, but the other statements are false.

(11-4) Multiple IRRs C I Answer: c HARD
Statement c is true, while the other statements are false. It is not necessary to calculate the two IRRs and the MIRR as the data in the problem are correct, but we show the Excel calculations below.
WACC: 10%
Years  0         1         2
CF     -$15,000  $110,000  -$100,000
NPV = $2,355.37
IRR1 = 6.33%
IRR2 = 527.01%
MIRR = 11.32%

(11-6) MIRR C I Answer: e HARD
Answer e is essentially the definition of the MIRR, hence it is correct.
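The "steep slope means late cash flows" claim repeated in the profile answers above is easy to see numerically. A toy comparison of my own, two hypothetical projects with the same undiscounted total:

```python
def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

early = [-1000, 1200]              # inflow at t = 1
late  = [-1000, 0, 0, 0, 1200]     # same total, inflow at t = 4
for r in (0.00, 0.05, 0.10, 0.15):
    print(f"{r:.0%}:  early {npv(r, early):8.2f}   late {npv(r, late):8.2f}")
# early:  200.00, 142.86,   90.91,   43.48
# late:   200.00, -12.76, -180.37, -313.90  <- falls much faster: steeper profile
```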
(11-7) NPV profiles C I Answer: e HARD
Refer to the NPV profile below. [NPV profile graph omitted in this extract.] Statement a is false, because you do not know which project has the higher NPV unless you know the WACC. Statement b is false, because if the WACC is greater than IRR(L) but less than IRR(S) then Project S will have a positive NPV and Project L's NPV will be negative. Statements c and d are false, because IRR is independent of WACC. Statement e is true, because Project S has the higher IRR, so Project L's NPV profile is above Project S's when the WACC is less than the crossover rate.

(11-7) NPV profiles C I Answer: c HARD
Recall that the very definition of the IRR is the discount rate at which the NPV is zero. Therefore, statement c is true. All the other statements are false.

(11-7) NPV profiles C I Answer: d HARD
Statement d is true; the other statements are false.

(11-7) NPV profiles C I Answer: c HARD
Statement c is true, while the other statements are false. If we draw an NPV profile graph, we would see that A must have the steeper slope. If the crossover is 8% and the WACC is 10%, then B will have the higher NPV.

(11-7) NPV profiles C I Answer: c HARD
Consider the following NPV profile graph: [NPV profile graph omitted in this extract.] We can see that statements a, d, and e are all incorrect. Statement b is also incorrect, because if the projects have the same timing pattern, then A must have the higher cost. That leaves statement c as being correct, and that conclusion is confirmed by noting that since A has the steeper slope, its cash flows must come in slower; hence B has the faster cash flows and thus the faster payback.

(11-6) MIRR C I Answer: c VERY HARD
One could prove that (1) if the IRR is equal to the WACC, then the MIRR and the IRR will be equal, (2) if the IRR is greater than the WACC, the MIRR will be less than the IRR, and (3) the MIRR will be greater than the IRR if the IRR is less than the WACC. This situation exists because the MIRR assumes reinvestment at the WACC and therefore compounds at that rate, while the IRR assumes reinvestment at the IRR itself and therefore compounds at the IRR. Therefore, if the IRR exceeds the WACC, the TV found under the IRR method will be larger, and vice versa. The IRR and the MIRR are found as the rate that causes the PV of the TV to equal the cost. Therefore, if the IRR exceeds the WACC, causing the IRR's TV to be larger, then the IRR will exceed the MIRR, and vice versa. As a result, statement c is correct: if the IRR exceeds the WACC, the IRR will exceed the MIRR. The other statements are false. Note too that this answer could also be confirmed with a numerical example.

(11-2) NPV C I Answer: a EASY
WACC: 9.00%
Year        0        1     2     3
Cash flows  -$1,000  $500  $500  $500
NPV = $265.65

(11-2) NPV C I Answer: c EASY
WACC: 11.00%
Year        0        1     2     3     4
Cash flows  -$1,000  $350  $350  $350  $350
NPV = $85.86

(11-2) NPV C I Answer: e EASY
WACC: 10.25%
Year        0        1     2     3     4     5
Cash flows  -$1,000  $300  $300  $300  $300  $300
NPV = $130.01

(11-3) IRR C I Answer: b EASY
Year        0        1     2     3
Cash flows  -$1,000  $425  $425  $425
IRR = 13.21%

(11-3) IRR C I Answer: d EASY
Year        0        1     2     3     4
Cash flows  -$1,050  $400  $400  $400  $400
IRR = 19.27%

(11-3) IRR C I Answer: a EASY
Year        0        1     2     3     4     5
Cash flows  -$1,250  $325  $325  $325  $325  $325
IRR = 9.43%

(11-8) Payback C I Answer: c EASY
Year           0        1      2      3
Cash flows     -$1,150  $500   $500   $500
Cumulative CF  -$1,150  -$650  -$150  $350
Payback = 2.30 years
Payback = last year before the cumulative CF turns positive + |last negative cumulative CF| / CF in the payback year.
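Picking up the suggestion at the end of the VERY HARD MIRR answer above that the IRR/MIRR ordering can be confirmed numerically, here is a sketch using the Ingram Electric data from the problem set (my own helpers, same conventions as earlier):

```python
def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

def irr(cfs, lo=0.0, hi=1.0):
    for _ in range(100):
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if npv(mid, cfs) > 0 else (lo, mid)
    return (lo + hi) / 2

def mirr(cfs, wacc):
    n = len(cfs) - 1
    tv = sum(cf * (1 + wacc) ** (n - t) for t, cf in enumerate(cfs) if cf > 0)
    pv = -sum(cf / (1 + wacc) ** t for t, cf in enumerate(cfs) if cf < 0)
    return (tv / pv) ** (1 / n) - 1

cfs, wacc = [-800, 350, 350, 350], 0.11       # Ingram Electric Products
print(f"IRR  = {irr(cfs):.2%}")               # about 14.9%
print(f"MIRR = {mirr(cfs, wacc):.2%}")        # 13.50%
# WACC (11%) < MIRR (13.50%) < IRR (~14.9%): exactly the ordering proved above.
```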
(11-8) Payback C I Answer: c EASY
Year           0      1      2     3
Cash flows     -$350  $200   $200  $200
Cumulative CF  -$350  -$150  $50   $250
Payback = 1.75 years
Payback = last year before the cumulative CF turns positive + |last negative cumulative CF| / CF in the payback year.

(11-8) Payback C I Answer: c EASY
Year           0      1      2      3
Cash flows     -$500  $150   $200   $300
Cumulative CF  -$500  -$350  -$150  $150
Payback = 2.50 years
Payback = last year before the cumulative CF turns positive + |last negative cumulative CF| / CF in the payback year.

(11-8) Payback C I Answer: c EASY
Year           0      1      2      3
Cash flows     -$750  $300   $325   $350
Cumulative CF  -$750  -$450  -$125  $225
Payback = 2.36 years
Payback = last year before the cumulative CF turns positive + |last negative cumulative CF| / CF in the payback year.

(11-2) NPV C I Answer: a EASY/MEDIUM
WACC: 10.00%
Year        0        1     2     3
Cash flows  -$1,050  $450  $460  $470
NPV = $92.37

(11-2) NPV C I Answer: c EASY/MEDIUM
WACC: 10.00%
Year        0      1     2     3
Cash flows  -$950  $500  $400  $300
NPV = $60.52

(11-2) NPV C I Answer: e EASY/MEDIUM
WACC: 14.00%
Year        0        1     2     3     4
Cash flows  -$1,200  $400  $425  $450  $475
NPV = $62.88

(11-2) NPV C I Answer: b EASY/MEDIUM
WACC: 12.00%
Year        0        1     2     3     4     5
Cash flows  -$1,100  $400  $390  $380  $370  $360
NPV = $277.94

(11-3) IRR C I Answer: d EASY/MEDIUM
Year        0        1     2     3
Cash flows  -$1,100  $450  $470  $490
IRR = 13.31%

(11-3) IRR C I Answer: a EASY/MEDIUM
Year        0      1     2     3     4
Cash flows  -$850  $300  $290  $280  $270
IRR = 13.13%

(11-3) IRR C I Answer: c EASY/MEDIUM
Year        0        1       2       3       4       5
Cash flows  -$9,500  $2,000  $2,025  $2,050  $2,075  $2,100
IRR = 2.57%

(11-2) NPV sensitivity to WACC C I Answer: d MEDIUM
Old WACC: 10.00%   New WACC: 11.25%
Year        0        1     2     3
Cash flows  -$1,000  $410  $410  $410
Old NPV = $19.61, New NPV = -$2.42, Change = -$22.03

(11-2) NPV sensitivity to WACC C I Answer: a MEDIUM
Old WACC: 8.00%   New WACC: 11.25%
Year        0        1     2     3
Cash flows  -$1,000  $410  $410  $410
Old NPV = $56.61, New NPV = -$2.42, Change = -$59.03

(11-6) MIRR C I Answer: e MEDIUM
WACC: 10.00%
Year        0        1     2     3
Cash flows  -$1,000  $450  $450  $450
Compounded values, FVs: $544.50  $495.00  $450.00
TV = Sum of compounded inflows: $1,489.50
MIRR = 14.20%, found as the discount rate that equates the PV of the TV to the cost, discounted back 3 years @ WACC.
MIRR = 14.20%, alternative calculation using Excel's MIRR function.

(11-6) MIRR C I Answer: e MEDIUM
WACC: 11.00%
Year        0      1     2     3
Cash flows  -$800  $350  $350  $350
Compounded values, FVs: $431.24  $388.50  $350.00
TV = Sum of compounded inflows: $1,169.74
MIRR = 13.50%, found as the discount rate that equates the PV of the TV to the cost, discounted back 3 years @ WACC.
MIRR = 13.50%, alternative calculation using Excel's MIRR function.

(11-6) MIRR C I Answer: b MEDIUM
WACC: 10.00%
Year        0      1     2     3     4
Cash flows  -$850  $300  $320  $340  $360
Compounded values: $399.30  $387.20  $374.00  $360.00
TV = Sum of compounded inflows: $1,520.50
MIRR = 15.65%, found as the discount rate that equates the PV of the TV to the cost, discounted back 4 years @ WACC.
MIRR = 15.65%, alternative calculation using Excel's MIRR function.

(11-6) MIRR C I Answer: c MEDIUM
WACC: 12.25%
Year        0      1     2     3     4
Cash flows  -$850  $300  $320  $340  $360
Compounded values: $424.31  $403.20  $381.65  $360.00
TV = Sum of compounded inflows: $1,569.16
MIRR = 16.56%, found as the discount rate that equates the PV of the TV to the cost, discounted back 4 years @ WACC.
MIRR = 16.56%, alternative calculation using Excel's MIRR function.

(11-8) Payback C I Answer: e MEDIUM
Year           0        1      2      3      4     5
Cash flows     -$1,100  $300   $310   $320   $330  $340
Cumulative CF  -$1,100  -$800  -$490  -$170  $160  $500
Payback = 3.52 years
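The MIRR solutions above print each compounded inflow explicitly. The sketch below reproduces the Ehrmann table step by step so the "TV = sum of compounded inflows" mechanics are visible (again an illustration of mine, not the workbook):

```python
wacc, cfs = 0.10, [-1000, 450, 450, 450]      # Ehrmann Data Systems
n = len(cfs) - 1
fvs = [cf * (1 + wacc) ** (n - t) for t, cf in enumerate(cfs) if cf > 0]
tv = sum(fvs)
mirr = (tv / -cfs[0]) ** (1 / n) - 1

print([round(v, 2) for v in fvs])  # [544.5, 495.0, 450.0] -- "Compounded values, FVs"
print(round(tv, 2))                # 1489.5 -- "TV = Sum of compounded inflows"
print(f"{mirr:.2%}")               # 14.20% (answer e)
```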
(11-8) Discounted payback C I Answer: b MEDIUM
WACC: 10.00%
Year           0      1      2     3
Cash flows     -$900  $500   $500  $500
PV of CFs      -$900  $455   $413  $376
Cumulative CF  -$900  -$445  -$32  $343
Payback = 2.09 years

(11-8) Discounted payback C I Answer: d MEDIUM
WACC: 10.00%
Year           0      1      2     3     4
Cash flows     -$950  $525   $485  $445  $405
PV of CFs      -$950  $477   $401  $334  $277
Cumulative CF  -$950  -$473  -$72  $262  $539
Payback = 2.22 years

(Comp.) NPV vs. IRR C I Answer: a MEDIUM
First, recognize that NPV makes theoretically correct capital budgeting decisions, so the highest NPV tells us how much value could be added. We calculate the two projects' NPVs, IRRs, and MIRRs, but the MIRR information is not needed for this problem. We then see what NPV would result if the decision were based on the IRR (and the MIRR). The difference between the NPVs is the loss incurred if the IRR criterion is used. Of course, it's possible that IRR could choose the correct project.
WACC: 7.5000%
Year             0        1       2       3       4        TV         MIRR
CFS              -$1,100  $550    $600    $100    $100
Compounded CFs:           673.77  686.94  107.00  100.00   $1,567.71  9.5469%
CFL              -$2,700  $650    $725    $800    $1,400
Compounded CFs:           796.28  830.05  856.00  1400.00  $3,882.33  9.6663%
MIRR, L = 9.67%   IRR, L = 10.71181%   NPV, L = $224.3065
MIRR, S = 9.55%   IRR, S = 12.24157%   NPV, S = $86.2036
MIRR choice: L   IRR choice: S   NPV choice: L
NPV using MIRR: $224.31   NPV using IRR: $86.20   NPV using NPV: $224.31
Lost value using IRR versus MIRR: $138.10; loss below: 7.9850%
Lost value using MIRR versus NPV: $0.00; loss below: 10.1638%
Lost value using IRR versus NPV: $138.10; loss below: 10.1638%

(11-5) NPV vs. IRR C I Answer: c MEDIUM/HARD
WACC: 6.000%
Year  0        1     2     3     4
CFS   -$1,025  $380  $380  $380  $380
CFL   -$2,150  $765  $765  $765  $765
IRR, L = 15.781%   IRR, S = 17.861%
NPV, L = $500.81   NPV, S = $291.74
$209.07 = Value lost if the IRR criterion is used
Rate      S       L
0%        495.0   910.0
2%        421.9   762.9
4%        354.4   626.9
6%        291.7   500.8
8%        233.6   383.8
10%       179.5   274.9
12%       129.2   173.6
13.860%   85.4    85.4
14%       82.2    79.0
16%       38.3    -9.4
18%       -2.8    -92.1
20%       -41.3   -169.6
22%       -77.4   -242.4
24%       -111.4  -310.7
Note that the WACC is constrained to be less than the crossover point, so there is a conflict between NPV and IRR, hence following the IRR rule results in a loss of value. In the next problem the constraint is relaxed. Graphs such as this one could be created for the following problems, but we do not show them.

(11-5) NPV vs. IRR C I Answer: c MEDIUM/HARD
WACC: 10.25%   Crossover = 13.275%
Year  0        1       2       3       4
CFS   -$2,050  $750    $760    $770    $780
CFL   -$4,300  $1,500  $1,518  $1,536  $1,554
IRR, L = 15.58%   IRR, S = 18.06%
NPV, L = $507.40   NPV, S = $358.05
$149.36 = Value lost if the IRR criterion is used
Note that the WACC is not constrained to be less than the crossover point, so there may not be a conflict between NPV and IRR, hence following the IRR rule may not result in a loss of value. In that case, the correct answer is $0.00.

(11-5) NPV vs. IRR C I Answer: e MEDIUM/HARD
WACC: 10.000%   Crossover = 10.549%
Year  0        1     2     3     4
CFS   -$1,025  $650  $450  $250  $50
CFL   -$1,025  $100  $300  $500  $700
IRR, L = 15.66%   IRR, S = 19.86%
NPV, L = $167.61   NPV, S = $159.79
$7.82 = Value lost if the IRR criterion is used
Note that the WACC is not constrained to be less than the crossover point, so there may not be a conflict between NPV and IRR, hence following the IRR rule may not result in a loss of value. In that case, the correct answer is $0.00.
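The rate-by-rate table in the 6%-WACC solution above can be regenerated directly. The loop below (my own sketch) reproduces it and makes the 13.86% crossover visible:

```python
def npv(rate, cfs):
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cfs))

S = [-1025] + [380] * 4
L = [-2150] + [765] * 4
for pct in list(range(0, 26, 2)) + [13.86]:
    r = pct / 100
    print(f"{pct:>6}%   S: {npv(r, S):8.1f}   L: {npv(r, L):8.1f}")
# Matches the table: at 6%, S = 291.7 and L = 500.8; both equal 85.4 near 13.86%,
# and L falls below S once the rate passes that crossover.
```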
(11-5) NPV vs. IRR C I Answer: b MEDIUM/HARD
WACC: 7.75%   Crossover = 8.994%
Year  0        1     2     3     4
CFS   -$1,050  $675  $650
CFL   -$1,050  $360  $360  $360  $360
IRR, L = 13.95%   IRR, S = 17.13%
NPV, L = $149.03   NPV, S = $136.31
$12.72 = Value lost if the IRR criterion is used
Note that the WACC is not constrained to be less than the crossover point, so there may not be a conflict between NPV and IRR, hence following the IRR rule may not result in a loss of value. In that case, the correct answer is $0.00.

(Comp.) NPV vs. MIRR C I Answer: d MEDIUM/HARD
WACC: 8.750%   Crossover = 10.396%
Year             0        1       2       3       4       TV         MIRR
CFS              -$1,100  $375    $375    $375    $375
Compounded CFs:           482.30  443.50  407.81  375.00  $1,708.61  11.64%
CFL              -$2,200  $725    $725    $725    $725
Compounded CFs:           932.45  857.43  788.44  725.00  $3,303.31  10.70%
MIRR, L = 10.70%   MIRR, S = 11.64%
NPV, L = $161.74   NPV, S = $121.59
$40.15 = Value lost if the MIRR criterion is used
Note that the WACC is not constrained to be less than the crossover point, so there may not be a conflict between NPV and MIRR, hence following the MIRR rule may not result in a loss of value. In that case, the correct answer is $0.00.

(Comp.) NPV vs. payback C I Answer: d MEDIUM/HARD
WACC: 10.250%   Crossover = 11.093%
Year               0        1        2      3      4
CFS                -$950    $500     $800   $0     $0
CFL                -$2,100  $400     $800   $800   $1,000
Cumulative CF, S   -$950    -$450    $350   $350   $350
Cumulative CF, L   -$2,100  -$1,700  -$900  -$100  $900
Payback S = 1.56 years   Payback L = 3.10 years
NPV, L = $194.79   NPV, S = $161.68
Value lost = $33.11
Note that the WACC is not constrained to be less than the crossover point, so there may not be a conflict between NPV and payback, hence following the payback rule may not result in a loss of value, so the correct answer may be $0.00.

(Comp.) IRR vs. MIRR C I Answer: a HARD
First, recognize that NPV makes theoretically correct capital budgeting decisions, so the higher NPV tells us how much value could be added. We calculate the two projects' NPVs, IRRs, and MIRRs. We then see what NPV would result if the decision were based on the IRR and the MIRR. Under some conditions, MIRR will choose the project with the higher NPV while the IRR chooses the lower NPV project. Then, the difference between the NPVs is the loss incurred if the IRR criterion is used. Of course, it's possible that both the MIRR and the IRR could choose the wrong project; with this set of cash flows, that happens at WACC > 8.62133%.
WACC: 7.00%
Year             0        1       2       3       4        TV         IRR / MIRR
CFS              -$1,100  $550    $600    $100    $100                IRR = 12.2416%
Compounded CFs:           673.77  686.94  107.00  100.00   $1,567.71  MIRR = 9.2618%
CFL              -$2,750  $725    $725    $800    $1,400              IRR = 10.9810%
Compounded CFs:           888.16  830.05  856.00  1400.00  $3,974.21  MIRR = 9.6426%
MIRR, S - MIRR, L = -0.3808%
MIRR, S = 9.2618%   IRR, S = 12.2416%   NPV, S = $96.00
MIRR, L = 9.6426%   IRR, L = 10.9810%   NPV, L = $281.90
MIRR choice: L   IRR choice: S   NPV choice: L
NPV based on MIRR: $281.90   NPV based on IRR: $96.00   NPV using NPV: $281.90
Lost value using IRR versus MIRR: NPV(L) - NPV(S) = $185.90; loss below: 8.6213%
{"url":"http://www.studyblue.com/notes/note/n/ffm12_ch_11_tb_01-28-09doc/file/1354805","timestamp":"2014-04-21T16:03:12Z","content_type":null,"content_length":"109183","record_id":"<urn:uuid:48b10ef0-f1c8-4d10-a992-6f8bdaf35f02>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
SIGGRAPH 2001: Courses

Course 15: Visualizing Relativity
Sunday, Half Day, 1:30 - 5 pm, Room 502A

For those who seek a deeper intuitive understanding of the theories of relativity and an introduction to how modern computer graphics techniques can be adapted to visualize and simulate the physics of interacting light and matter under extreme conditions. The first half of the course focuses on how relativistic effects can be intuitively understood starting from Euclidean 3D geometry. The second half concentrates on recent advances in photorealistic simulation of scenes and relativistic phenomena using computer graphics to show features that could never be seen in real life at human time and space scales.

Substantial familiarity with conventional mathematical methods of 3D computer graphics and prior exposure to 3D rendering techniques. No prior knowledge of the theories of relativity is required. Attendees may find that some of the material covered in Course 5 provides useful background.

A geometric, intuitive approach to special relativity. Minkowski diagrams. How relativistic transformations are related to familiar geometric concepts used in 3D rotations. Properties of light under the extreme conditions of both special and general relativity: changes of color, intensity, and direction of light, and gravitational light bending. Relativistic rendering techniques.

Andrew Hanson, Indiana University
Daniel Weiskopf, Universität Stuttgart
{"url":"http://www.siggraph.org/s2001/conference/courses/crs15.html","timestamp":"2014-04-18T23:17:36Z","content_type":null,"content_length":"16564","record_id":"<urn:uuid:a7f42b02-1060-47d6-baa8-299f4969e964>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
IAS Mains Mathematics Syllabus

Exam Paper-I

Linear Algebra: Vector spaces over R and C, linear dependence and independence, subspaces, bases, dimension; Linear transformations, rank and nullity, matrix of a linear transformation. Algebra of Matrices; Row and column reduction, Echelon form, congruences and similarity; Rank of a matrix; Inverse of a matrix; Solution of system of linear equations; Eigenvalues and eigenvectors, characteristic polynomial, Cayley-Hamilton theorem, Symmetric, skew-symmetric, Hermitian, skew-Hermitian, orthogonal and unitary matrices and their eigenvalues.

Calculus: Real numbers, functions of a real variable, limits, continuity, differentiability, mean-value theorem, Taylor's theorem with remainders, indeterminate forms, maxima and minima, asymptotes; Curve tracing; Functions of two or three variables: limits, continuity, partial derivatives, maxima and minima, Lagrange's method of multipliers, Jacobian. Riemann's definition of definite integrals; Indefinite integrals; Infinite and improper integrals; Double and triple integrals (evaluation techniques only); Areas, surface and volumes.

Analytic Geometry: Cartesian and polar coordinates in three dimensions, second degree equations in three variables, reduction to canonical forms, straight lines, shortest distance between two skew lines; Plane, sphere, cone, cylinder, paraboloid, ellipsoid, hyperboloid of one and two sheets and their properties.

Ordinary Differential Equations: Formulation of differential equations; Equations of first order and first degree, integrating factor; Orthogonal trajectory; Equations of first order but not of first degree, Clairaut's equation, singular solution. Second and higher order linear equations with constant coefficients, complementary function, particular integral and general solution. Second order linear equations with variable coefficients, Euler-Cauchy equation; Determination of complete solution when one solution is known using method of variation of parameters. Laplace and inverse Laplace transforms and their properties; Laplace transforms of elementary functions. Application to initial value problems for 2nd order linear equations with constant coefficients.

Dynamics & Statics: Rectilinear motion, simple harmonic motion, motion in a plane, projectiles; constrained motion; Work and energy, conservation of energy; Kepler's laws, orbits under central forces. Equilibrium of a system of particles; Work and potential energy, friction; common catenary; Principle of virtual work; Stability of equilibrium, equilibrium of forces in three dimensions.

Vector Analysis: Scalar and vector fields, differentiation of vector field of a scalar variable; Gradient, divergence and curl in Cartesian and cylindrical coordinates; Higher order derivatives; Vector identities and vector equations. Application to geometry: Curves in space, curvature and torsion; Serret-Frenet's formulae. Gauss and Stokes' theorems, Green's identities.

Exam Paper-II

Algebra: Groups, subgroups, cyclic groups, cosets, Lagrange's Theorem, normal subgroups, quotient groups, homomorphism of groups, basic isomorphism theorems, permutation groups, Cayley's theorem. Rings, subrings and ideals, homomorphisms of rings; Integral domains, principal ideal domains, Euclidean domains and unique factorization domains; Fields, quotient fields.
Real Analysis: Real number system as an ordered field with least upper bound property; Sequences, limit of a sequence, Cauchy sequence, completeness of real line; Series and its convergence, absolute and conditional convergence of series of real and complex terms, rearrangement of series. Continuity and uniform continuity of functions, properties of continuous functions on compact sets. Riemann integral, improper integrals; Fundamental theorems of integral calculus. Uniform convergence, continuity, differentiability and integrability for sequences and series of functions; Partial derivatives of functions of several (two or three) variables, maxima and minima.

Complex Analysis: Analytic functions, Cauchy-Riemann equations, Cauchy's theorem, Cauchy's integral formula, power series representation of an analytic function, Taylor's series; Singularities; Laurent's series; Cauchy's residue theorem; Contour integration.

Linear Programming: Linear programming problems, basic solution, basic feasible solution and optimal solution; Graphical method and simplex method of solutions; Duality. Transportation and assignment problems.

Partial Differential Equations: Family of surfaces in three dimensions and formulation of partial differential equations; Solution of quasilinear partial differential equations of the first order, Cauchy's method of characteristics; Linear partial differential equations of the second order with constant coefficients, canonical form; Equation of a vibrating string, heat equation, Laplace equation and their solutions.

Numerical Analysis and Computer Programming: Numerical methods: Solution of algebraic and transcendental equations of one variable by bisection, Regula-Falsi and Newton-Raphson methods; solution of system of linear equations by Gaussian elimination and Gauss-Jordan (direct), Gauss-Seidel (iterative) methods. Newton's (forward and backward) interpolation, Lagrange's interpolation. Numerical integration: Trapezoidal rule, Simpson's rules, Gaussian quadrature formula. Numerical solution of ordinary differential equations: Euler and Runge-Kutta methods. Computer Programming: Binary system; Arithmetic and logical operations on numbers; Octal and hexadecimal systems; Conversion to and from decimal systems; Algebra of binary numbers. Elements of computer systems and concept of memory; Basic logic gates and truth tables, Boolean algebra, normal forms. Representation of unsigned integers, signed integers and reals, double precision reals and long integers. Algorithms and flow charts for solving numerical analysis problems.

Mechanics and Fluid Dynamics: Generalized coordinates; D'Alembert's principle and Lagrange's equations; Hamilton equations; Moment of inertia; Motion of rigid bodies in two dimensions. Equation of continuity; Euler's equation of motion for inviscid flow; Stream-lines, path of a particle; Potential flow; Two-dimensional and axisymmetric motion; Sources and sinks, vortex motion; Navier-Stokes equation for a viscous fluid.
{"url":"http://www.ims4maths.com/coaching/syllabus/ias-mathematics","timestamp":"2014-04-20T19:02:03Z","content_type":null,"content_length":"73456","record_id":"<urn:uuid:cde50e0f-9fcc-419d-8390-84ff7771c8ed>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00091-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with quadratic equation

anonimnystefy wrote:
That is my solution with the plus/minus sign inside the root taken as +. I am sure taking the minus sign would work as well.

No, the minus will not work in this case, as we are finding y^2 and y^2 can't be negative. But that's not my big problem... my problem is that I can't get to the answer using this equation [equation image not preserved in this extract]. The correct answer for x and y in my book and everywhere else is [equation image not preserved]. Another thing I want to say: when I rewrite the equation like this, i.e., moving the equation from one side to the other [equation image not preserved], I get the correct answer. Am I doing something wrong here?

Last edited by debjit625 (2012-07-30 19:27:52)
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=227965","timestamp":"2014-04-17T12:58:14Z","content_type":null,"content_length":"31264","record_id":"<urn:uuid:8f7e3044-76c0-4af1-9857-bf636c2a5207>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
[racket] Does Redex have support for multi-hole evaluation contexts? [racket] Does Redex have support for multi-hole evaluation contexts? From: Matthias Felleisen (matthias at ccs.neu.edu) Date: Wed Jul 11 09:48:27 EDT 2012 This looks like a fairly conventional form of parallel evaluation with some form of attempt at 'free deterministic update'. If I were to model this kind of thing, I am sure I'd use a multi-hole reduction semantics to represent non-coordinated parallelism. Too bad Redex can't model this, but it doesn't mean you can't formulate it as a plain old reduction semantics. Whether you "believe" it to be deterministic is pretty irrelevant. If you want it to be deterministic, you construct it that way, and with multi-hole "standard" reductions it is quite easy (near trivial) to show this -- assuming you get your effects right. Enough said, good luck -- Matthias On Jul 9, 2012, at 3:11 PM, Ryan Newton wrote: > Hi all, > Thanks for all the advice. > Just to be clear about the context. This is a parallel language that we developed on paper which we strongly believed was deterministic. We wrote the paper proof and also, on the side, wanted to make a redex model to check ourselves. > We felt it was necessary to include simultaneous steps (Matthias's [A] scenario) to model real parallel machines and force us to deal with potentially incompatible simultaneous updates to the store. > Lindsey had a slightly awkward time shoehorning things into redex, and the resulting model runs pretty slowly. BUT, these are just nits and possible areas for improvement. Redex was still enormously helpful and fulfilled its purpose. There wasn't any existential crisis here. > -Ryan > P.S. It would be simpler to just share the paper rather than describing the problems out of context. However, I don't want to send the draft out quite yet, because it needs a few more tweaks to have all the refactorings pushed through and achieve internal consistency. But I'm attached the reduction rules and grammar, FYI. > On Mon, Jul 9, 2012 at 12:34 PM, Matthias Felleisen <matthias at ccs.neu.edu> wrote: > On Jul 9, 2012, at 6:29 AM, Lindsey Kuper wrote: > > I had been assuming that "one-step" meant "small-step", but > > now I think that by "one-step" David means a relation that doesn't > > reduce redexes in parallel. So, in fact, ours *is* multi-step because > > it *does* reduce in parallel. > Lindsey and Ryan, I have had no time until now to catch up with this thread (argh). And I have my own POPL deadline, so here are some general remarks: > 1. The parallel reduction trick that David pointed out goes back to the early 1970s. Tait appears to be the source. See Barendregt. > 2. 'small step' is NOT 'one-step' and 'one-step' is not 'notion of reduction'. See Redex, chapter 1 for definitions. They are the same ones as Barendregt uses and the informed PL community has used when publishing semantics papers in this style. I dislike 'small step' A LOT but if you want a relationship |---> (standard reduction) is what most publishing PLists would call a 'small step' semantics. > 3. Also Redex, chapter 1 suggests that 'eval' is the actual semantics and |---> or -->> are two distinct ways of specifying it. Since eval is the key, this also eliminates any worries about reflexive rules -- as long as you think of eval as the mathematical semantics and the arrow relations as just one possible mechanism to define it. > 4. 
The reduction relations (-->> or standard) become important only if you wish to make some claim about the intension of the semantics, i.e., how it relates to actual machine steps. You don't have to. Plotkin 74, for example, shows that it is perfectly okay to show two different unrelated ways of defining eval (secd and a recursive interpreter) and to prove them equivalent -- brute force in his case. You do that because you might be able to use one definitional mechanism to show one thing (type soundness) and with another one you can show something else (cost of execution). I have done it many times. > 5. The confusion between reduction relations and parallel execution is about 40 years old (perhaps a bit less). Indeed, the confusion between reduction relations in LC and plain execution is that old -- see so-called "optimal reduction sequences", which a well-trained computer scientist (===/== mathematician masquerading as a PL person) will quickly recognize that it is nonsense. Fortunately we have had a proof for 10 years that this horse is dead. > ;; --- > May I propose a re-thinking of your approach that may benefit us Redex designers? > -- figure out what your real goal is; it cannot be to develop a semantics of some hypothetical PL (see above, especially 4) > -- develop a paper and pencil model of a semantics that helps you prove this goal/theorem > -- if it happens to be a reduction semantics (I am of course convinced that 90% of all goals can be reached via reduction semantics), > allow one of two things to happen: > [A] you need to model 'simultaneous' steps meaning you need multi-hole > [B] you don't need truly simultaneous steps but you allow some non-determinism in |---> > For [A], Redex is ill-suited. The define-judgment form isn't remotely as helpful for semantics as is the core redex language. > For [B], Redex is your friend. > See my paper with Cormac in the mid 90s on modeling futures with reduction systems, for one approach to parallelism. We did intend to tackle an FP language with set! at the time, but the analysis didn't work out so we abandoned this direction. > In short, decouple what you want to accomplish from the tool that might look like it could help. > -- Matthias > ____________________ > Racket Users list: > http://lists.racket-lang.org/users > <grammar_excerpt.pdf><semantics_excerpt.pdf>
{"url":"http://lists.racket-lang.org/users/archive/2012-July/053038.html","timestamp":"2014-04-16T14:17:54Z","content_type":null,"content_length":"12034","record_id":"<urn:uuid:a755ad37-d7a6-4c57-aa25-dc792c2037d0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00599-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Multiple endogenous regressors

From: William Buchanan <william@williambuchanan.net>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Multiple endogenous regressors
Date: Thu, 20 Oct 2011 17:11:31 -0700

Hi Elizabeth,

Also, if you're unfamiliar with IVE I would highly recommend taking a look at Angrist and Pischke (2009) Mostly Harmless Econometrics: An Empiricist's Guide. Princeton, NJ: Princeton University Press. The authors provide a fairly thorough discussion of the instrumental variable estimator and provide some discussion on the topic of multiple endogenous regressors. Additionally, the package -ivreg2- written by Baum, Schaffer, and Stillman has a help file with a wealth of information and references that would probably help.

- Billy

On Oct 20, 2011, at 4:41 PM, Yuval Arbel wrote:

> Elizabeth,
> I'm also new to the statalist, but fortunately, I taught econometrics courses. Here are my answers to your questions.
> On Thu, Oct 20, 2011 at 10:38 PM, Lim, Elizabeth <nxl091000@utdallas.edu> wrote:
>> Hello,
>> I'm new to the STATA list, and I don't have much econometrics background, so please forgive me if my questions sounded too basic. I've also read the discussion threads in the archives but did not find the information adequate for my purpose/understanding.
>> I am running the two-stage least squares (2SLS) test for 5 endogenous regressors. Here are my questions:
>> (1) Theoretically, the literature suggests that it is possible to generalize the 2SLS mechanism for a single endogenous regressor to multiple endogenous regressors. I've read articles in finance, accounting, economics, etc, that control for endogeneity. So far, the studies that I've come across only control for one endogenous variable. I suspect that it's complicated to run 2SLS for multiple endogenous regressors. From an implementation standpoint, what are the potential econometrics and statistical problems related to running multiple endogenous regressors with 2SLS?
> Yuval: I would strongly recommend not to deal with such a complex system of 5 endogenous variables. The problem is you should have enough exogenous variables outside the equations you would like to identify. I'm doubtful whether you can find so many exogenous variables which are really exogenous. In fact, this problem of complexity has motivated econometricians to develop the VAR model, where the independent variables are different lags of the dependent variables.
> For a beginner I recommend "Ramu Ramanathan: Introductory Econometrics with Applications".
>> (2) If I can't find sufficient instruments to run all 5 endogenous regressors at the same time, what potential problems might arise if I run each of the 5 endogenous regressors independently in 5 different 2SLS models?
> Yuval: This is a very serious problem, which is known as "unidentified equations" - in which case you get biased and inconsistent estimates. For further details I suggest looking at the unidentified supply and demand equations presented in Ramu Ramanathan. But anyway you have to be sure that your model is correctly specified.
>> (3) Assuming that I can find adequate instruments, I want to run the first stage F statistics to check the validity of my instruments for these 5 endogenous regressors. For a single endogenous regressor, the literature suggests that the first stage F statistics greater than 10 indicates a valid instrument. Can I use this same rule of thumb for multiple endogenous regressors?
> Yuval: First of all the problem would be to convince logically that the variables are really exogenous. Recall that at the end of the day the researcher is the one who is responsible for phrasing the econometric model and justifying it. Secondly, I would recommend instead the Wu-Hausman test available in Stata (the command in Stata is "hausman"). The idea of the test is to compare the OLS estimates to the 2SLS estimates. The Hausman test measures the magnitude of the bias generated by improperly using the OLS instead of the 2SLS method.
>> (4) Again assuming that I can find adequate instruments, I want to run the overidentification test akin to Basmann's F test and Hansen's J test. Can I still use these same overidentification tests for multiple endogenous variables?
> Yuval: See my answer below. I don't see any reason to run an
But anyway you have to > be sure that your model is correctly specified >> (3) Assuming that I can find adequate instruments, I want to run the first stage F statistics to check the validity of my instruments for these 5 endogenous regressors. For a single endogenous regressor, the literature suggests that the first stage F statistics greater than 10 indicates a valid instrument. Can I use this same rule of thumb for multiple endogenous regressors? > Yuval: First of all the problem would be to convince logically that > the variables are really exogenous. Recall that at the end of the day > the researcher is the one who is responsible for phrasing the > econometric model and justify it. Secondly, I would recommend instead > the Yu-Hausman test available in STATA (the command in STATA is > "hausman"). The idea of the test is to compare the OLS estimates to > the 2SLS estimates. The Hausman test measures the magnitude of the > bias generated by improperly using the OLS instead of the 2SLS method >> (4) Again assuming that I can find adequate instruments, I want to run the overidentification test akin to Basmann's F test and Hansen's J test. Can I still use these same overidentification tests for multiple endogenous variables? > Yuval: See my answer below. I don't see any reason to run an > overidentification test. >> References related to any of these four questions would be greatly appreciated. Thanks in advance for your advice and suggestions. > Yuval: for a begginer I would recommend the following textbooks in econometrics: > Ramu Ramanathan: Introductory Econometrics with Application - see the > chapter that deals with simultaneous equation models > Jan Kmenta: Elements of Econometrics - I suggest to read the chapter > that deals with the error-in-variable model. In this chapter you can > find an explanation to the Hausman test. > If you are somewhat familiar with matrix algebra, you can also look at > Greene textbook - find the hausman test and take a look at the > explanations there >> Regards, >> Elizabeth >> * >> * For searches and help try: >> * http://www.stata.com/help.cgi?search >> * http://www.stata.com/support/statalist/faq >> * http://www.ats.ucla.edu/stat/stata/ > -- > Dr. Yuval Arbel > School of Business > Carmel Academic Center > 4 Shaar Palmer Street, Haifa, Israel > e-mail: yuval.arbel@gmail.com > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
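For reference, the mechanics Billy alludes to look like this in -ivreg2- (all variable names below are hypothetical placeholders, and the command must first be installed from SSC):

ssc install ivreg2
* y = outcome; x1-x5 = endogenous regressors; w1 w2 = exogenous controls;
* z1-z7 = excluded instruments (at least 5 are needed by the order condition)
ivreg2 y w1 w2 (x1 x2 x3 x4 x5 = z1 z2 z3 z4 z5 z6 z7), first
* -first- reports the first-stage regressions and their F statistics.
* With 7 instruments for 5 endogenous regressors the model is
* overidentified, and ivreg2 reports a Sargan/Hansen J statistic.

With more than one endogenous regressor, the single-equation first-stage F rule of thumb no longer applies directly; ivreg2's output includes the Cragg-Donald Wald F statistic as the multivariate weak-identification diagnostic, with Stock-Yogo critical values documented in the help file. The endog() option also provides a GMM-distance endogeneity test as an alternative to -hausman-.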
{"url":"http://www.stata.com/statalist/archive/2011-10/msg00926.html","timestamp":"2014-04-18T00:50:28Z","content_type":null,"content_length":"13927","record_id":"<urn:uuid:be880184-e875-4da1-acbf-cccc6ab9bc84>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Is a positive link the closure of a positive braid?

Alexander's Theorem guarantees that every oriented link is the closure of some braid. In other words, the map
$$ \coprod_n \mathcal B_n \longrightarrow \{\text{ oriented links }\} $$
is surjective. One algorithm (I'm actually not sure that it's Alexander's original demonstration) involves choosing a basepoint in the complement of a diagram for the link and applying Reidemeister moves until the resulting diagram winds around the point in a consistent direction, at which point the diagram is manifestly the closure of a braid.

If we begin with a positive diagram for a link, then do we necessarily obtain a positive braid?

Tags: braid-groups, knot-theory

Comments:

I don't think any of the algorithms obviously preserve positivity. Vogel's algorithm (which I think is what you're referring to) does a lot of Reidemeister II's, which always destroys positivity. I think if there's any pair of incoherently oriented Seifert circles, Vogel's algorithm will give a non-positive result. That said, I don't know of an obvious counter-example. – Ben Webster Oct 15 '10 at 3:03

Answer:

Rudolph proved that positive links are strongly quasipositive. This paper might also be relevant, which allows one to create a braid with the same number of Seifert circles and writhe. However, Yamada's algorithm doesn't seem to preserve positivity either.

Thank you. Yamada's paper is indeed interesting. – Sammy Black Oct 15 '10 at 18:55
{"url":"http://mathoverflow.net/questions/42236/is-a-positive-link-the-closure-of-a-positive-braid","timestamp":"2014-04-21T07:31:31Z","content_type":null,"content_length":"52323","record_id":"<urn:uuid:7270cb72-07af-45bc-96e1-51cec7dd3f23>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00021-ip-10-147-4-33.ec2.internal.warc.gz"}
How many miles from California to Beijing, China?
{"url":"http://www.evi.com/q/how_many_miles_from_california_to_beijing,_china","timestamp":"2014-04-19T08:06:22Z","content_type":null,"content_length":"55017","record_id":"<urn:uuid:1fe80892-219f-4a34-8e78-1c830cfea951>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00168-ip-10-147-4-33.ec2.internal.warc.gz"}
Effects of Shape and Strain Distribution of Quantum Dots on Optical Transition in the Quantum Dot Infrared Photodetectors

We present a systematic theoretical study of the electronic properties of the quantum dots inserted in quantum dot infrared photodetectors (QDIPs). The strain distribution of three differently shaped quantum dots (QDs) with the same ratio of base to vertical aspect is calculated using the short-range valence-force-field (VFF) approach. The calculated results show that the hydrostatic strain ɛ[H] varies little with change of shape, while the biaxial strain ɛ[B] changes considerably for the different QD shapes. The recursion method is used to calculate the energy levels of the bound states in the QDs. Compared with the strain, the shape plays the key role in the difference of the electronic bound energy levels. The numerical results show that the difference of bound energy levels of the lenslike InAs QD matches well with the experimental results. Moreover, the pyramid-shaped QD shows the greatest difference from the measured experimental data.

Keywords: Quantum dots; PL spectrum; Strain; QDIP

Due to the three-dimensional confinement of electrons in the quantum-dot structure, quantum-dot infrared photodetectors (QDIPs) have attracted much attention in theoretical and experimental studies in recent years [1-3]. One important characteristic of QDIPs is their sensitivity to normal-incidence infrared radiation, which is an advantage for focal plane arrays. The longer lifetime of excited electrons, a consequence of the greatly suppressed electron-phonon interaction, gives QDIPs the further advantages of low dark current, large detectivity, and better response [4]. The introduction of strain provides a facile way to fabricate multicolor infrared (IR) detectors covering mid-wavelength to long-wavelength ranges via InAs or InGaAs quantum dots capped by GaAs, InGaAs, InP, or GaInP. Meanwhile, the geometric shape of the QDs often results in quite different response wavelengths for QDIPs [5]. Nowadays, more complicated nanostructures, such as QD molecules, are being investigated for potential use in photoelectric devices [6].

It is well known that the great sensitivity of a QD's bound energy levels to shape, size, and strain gives the detector designer considerable freedom to obtain the desired response wavelength for medical or molecular applications. The study of the shape, size, and strain of QD systems has therefore been an interesting subject for the development and precise control of QDIP structures. Much theoretical and experimental work has been done to explore the effect of the shape, size, or strain of QDs on the bound energy levels and the possible optical transitions. Bound energy levels of flat lenslike QDs estimated from quantum-well approximations show a sizable discrepancy from experimental results. In Wojs' work, the energy levels of a lenslike In[0.5]Ga[0.5]As/GaAs QD were studied as a function of the dot's size, and a parabolic confining potential with its corresponding energy spectrum was shown to be an excellent approximation [7]. Here, we calculate the strain energy of self-assembled QDs with the short-range VFF approach, which describes inter-atomic forces through bond stretching and bending. The role of strain (for three different shapes) in determining the bound levels is analyzed in detail.
Considering three differently shaped QDs with the same ratio of base to vertical aspect, 3:1, the bound energy levels are calculated by the recursion method [3]. The theoretical results show that the difference of bound energy levels of the lenslike InAs QD matches the experimental results, while the bound energy levels of the pyramid-shaped QD show the biggest difference from the measured experimental data. Though the bound-to-continuum transition of the truncated-pyramid QD is the most acceptable, because its behavior is much like that of the well-studied quantum-well infrared photodetectors (QWIPs), its bound ground states of electrons and holes are very far from the experimental results.

The paper is organized as follows. In the section "Sample Preparations and Experimental Results," the investigated experimental device and experimental results such as AFM/TEM images and the photoluminescence (PL) and photocurrent (PC) spectra are described. In the section "Theoretical Results and Discussions," the exact strain distributions of pyramid-, truncated-pyramid-, and lens-shaped InAs/GaAs QDs are calculated by the short-range VFF approach, and the energy levels of the bound states are calculated by the recursion method. The final section gives the summary.

Sample Preparations and Experimental Results

Figure 1 shows a schematic of the QDIP structure. The sample was grown on a semi-insulating GaAs (001) substrate by solid-source molecular beam epitaxy (MBE). Five layers of nominally 3.0 monolayer (ML) InAs (quantum dots) were inserted between highly Si-doped bottom and top GaAs 1000 nm contact layers with doping density 1 × 10^18 cm^-3. Each layer of InAs is capped by 21 ML of GaAs spacer material to form the InAs QDs, and the five GaAs/InAs layers together are called the S-QD region. In addition, a 50 nm GaAs layer is inserted between the S-QD region and the bottom (top) Si-doped GaAs contact layers, respectively.

Figure 1. Typical QDIP structure of GaAs/InAs material

Typical constant-mode ambient atomic force microscopy (AFM) data and cross-sectional TEM images of the counterpart samples are presented in Fig. 2a and b, respectively. The average height of the quantum dots is about 74 ± 16 Å, and the quantum dot density ranges from 613/μm^2 to 733/μm^2. The average quantum dot width, ranging from 228 to 278 Å, represents the full width at half maximum (FWHM) of the AFM scan profile. Figure 2b shows the cross-sectional transmission electron microscope images of the S-QD counterpart. It is noted that the quantum dot density in the lower layer is higher than that in the upper layer.

Figure 2. a Typical AFM-determined island size distribution. b Cross-sectional TEM images of the QDIP structure

The near-infrared photoluminescence (PL) as a function of energy at 77 K is shown in Fig. 3. A main peak corresponding to the quantum dot ground state transitions is centered at 1.058 eV, and a small broad shoulder due to smaller quantum dots or the InAs wetting layer appears at 1.216 eV. Figure 4 shows the intra-band photocurrent as a function of energy at 77 K in the absence of bias. It is well known that the intra-band photocurrent can give more direct information on the quantum dot electronic states. An obvious intra-band photocurrent peak appears at 170.675 meV.

Figure 3. The near-infrared photoluminescence (PL) spectra at 77 K

Figure 4.
The intra-band photocurrent (PC) at 77 K and 0 bias

Theoretical Results and Discussions

Strain and Confinement Profile

Here, we adopt the short-range VFF approach to describe interatomic forces in terms of bond stretching and bending [8,9]. The model has been widely applied to bulk materials and alloys [10-14], as well as low-dimensional systems [15,16]. It was further developed into an anharmonic VFF model by Bernard and Zunger for Si-Ge compounds, alloys, and superlattices [17]. In the VFF model, the deformation of a lattice structure is completely specified when the location of every atom in the strained state is given [18], and the elastic energy of a bond is minimal in its three-dimensional bulk lattice structure. For small deformations, the bond energy can be written as a Taylor expansion in the variations of the bond length and of the angles between the bond and its nearest-neighbor bonds. Within the short-range contributions, and following the general notation of Refs. [9-14] with the elastic energy at equilibrium set as the zero reference, the elastic energy of interatomic bond i takes a harmonic form in δr[i], the variation of the length of bond i, and δΩ[ij], the variation of the angle between the i'th and the j'th bonds, with j = 1, 2, ..., 6 denoting the six nearest-neighbor bonds in the zinc blende structure. The total elastic energy is the sum of all bond energies.

The K's for VFF bonds of zinc blende bulk materials are easily obtained from the elastic coefficients C[11], C[12], and C[44] listed in Ref. [17]. Table 1 lists the C and K values for InAs and GaAs. Note that the K values depend slightly on the temperature of the material due to the temperature dependence of the lattice constant. The dependence, however, is small. The values listed in Table 1 are obtained for the materials at 100 K.

Table 1. Values of C's [19] and K's (at 100 K) of zinc blende InAs and GaAs bulk materials

The local band edges for the conduction (CB), heavy-hole (HH), and light-hole (LH) bands can be approximated by the standard deformation-potential formulas

V[CB] = E[CB] + a[c] ɛ[H],
V[HH] = E[VB] + a[v] ɛ[H] + (b/2) ɛ[B],
V[LH] = E[VB] + a[v] ɛ[H] - (b/2) ɛ[B],

where the hydrostatic strain ɛ[H] and the biaxial strain ɛ[B] are defined as

ɛ[H] = ɛ[xx] + ɛ[yy] + ɛ[zz],
ɛ[B] = ɛ[xx] + ɛ[yy] - 2ɛ[zz].

Here V[HH] and V[LH] are the heavy-hole and light-hole band edges, a[c], a[v], and b are the deformation potentials (the sign assignment of the ±(b/2)ɛ[B] terms follows the usual convention in which the tabulated b is negative), and E[CB/VB] are the unstrained band edge energies. Notice that the shear-strain-induced HH-LH coupling and split-off contributions are ignored. In our calculation we adopt the parameters from Ref. [17].

The investigated InAs QDs take three different shapes: pyramid (with four {111}-type facets), truncated pyramid, and lens, all with the same 3:1 base-to-height ratio; the calculated strain distributions are shown in Fig. 5. The hydrostatic strain ɛ[H] lowers the height of the CB, but the change is almost the same for the different QD shapes. The biaxial strain ɛ[B], by contrast, is rather complex for the three different shapes. For the lens-shaped QD, the InAs lattice is compressed by GaAs in the growth plane and stretched in the plane vertical to the growth plane. The pyramid-shaped QD is more complex: the lattice at the top of the QD is stretched in the growth plane, and at the center ɛ[B] = 0, which means there is no splitting of the valence band at that position. The compression and stretching at the bottom of the pyramid QD are the same as for the lens-shaped QD. The difference in QD shape makes the strain distributions rather different, and the calculated strain distribution of the truncated-pyramid QD resembles that of the pyramid-shaped QD.
The existence of the GaAs cap makes the strain distribution at the top of the three differently shaped QDs present different characters: ɛ[H] < 0 and ɛ[B] < 0 for the lenslike and truncated-pyramid QDs, and ɛ[H] < 0 and ɛ[B] > 0 for the pyramid-shaped QD. Figure 6 shows the calculated confinement potential distribution induced by the strain for the three differently shaped QDs. The potentials have different characters. The difference in electron energy levels comes mainly from the difference in shape, because the value of ɛ[H] changes little between the differently shaped QDs; the role of the hydrostatic strain ɛ[H] is to lower the CB edge. The splitting of the hole potential is determined by the biaxial strain ɛ[B], which changes a lot when the shape varies.

Figure 5. The calculated strain distribution of the pyramid and lens-shaped QD in the x-z plane, where a and c show the hydrostatic strain from -0.075 (blue) to 0.07 (red) and b and d show the biaxial strain from -0.13 (blue) to 0.12 (red)

Figure 6. The calculated band offset (dotted + line) and energy levels (dotted line) of the three differently shaped QDs when strain is included

The Calculation and Analysis of the Energy Level

The recursion method is used to calculate the energy levels of the three differently shaped QDs [20]. For the experimental data, the broadenings (FWHM) of the PC spectrum and of the first peak of the PL spectrum are 34.01 and 29.34 meV, respectively, while the broadening of the second PL peak is 131.42 meV. The much greater FWHM of the second peak suggests that it comes from the wetting layer rather than from the bound levels of the QDs. So, for simplicity of comparison with the experimental results, we calculate only the electron energy levels and the bound ground energy level of the hole; the corresponding results are shown in Fig. 6 (dotted line). Next, we present the change in energy levels for the three differently shaped QDs.

From Fig. 6, several results follow. The energy difference between the ground states of hole and electron is 0.990 eV for the pyramid QD, 1.215 eV for the truncated-pyramid QD, and 1.053 eV for the lens-shaped QD. The possible bound-to-bound transition between electronic inter-subbands is 254.1 meV for the pyramid QD and 156.4 meV for the lens-shaped QD. For the truncated-pyramid QD there is only one bound state in the conduction band, so the possible transition is bound-to-continuum, with energy 199.3 meV.

From the experimental data, the PL spectrum gives the ground-state transition from electron to hole, with value 1.058 eV. Compared with this, the corresponding calculated values differ by d[PL] = 6.43% for the pyramid QD, 14.8% for the truncated-pyramid QD, and 0.47% for the lens-shaped QD. Likewise, the differences d[PC] from the measured PC peak (170.675 meV) are 48.88%, 16.77%, and 8.36% for the pyramid, truncated-pyramid, and lens-shaped QDs, respectively. The comparison shows that the energy differences of the lens-shaped QD are the most favored for the possible QD structure. Though the bound-to-continuum transition of the truncated-pyramid QD is the most acceptable, because its behavior is much like the well-studied QWIP structure, its energy difference between the electron and hole bound ground states is rather far from the experimental results. The pyramid cannot be the shape of our investigated QD, given its biggest difference of 48.88%.
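For concreteness, the quoted relative differences are reproduced by taking d = |E_calc - E_meas| / E_meas, a definition consistent with all six percentages above. For the lens-shaped QD, for example,

$$ d_{PL} = \frac{|1.053 - 1.058|}{1.058} \approx 0.47\%, \qquad d_{PC} = \frac{|156.4 - 170.675|}{170.675} \approx 8.36\%. $$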
If we define a parameter σ to estimate whether a given shape or size is the most favored for the QD of the investigated QDIP structure, the most suitable expression is the quadratic mean of the two discrepancies,

σ = [ (d[PL]^2 + d[PC]^2) / 2 ]^1/2,

with d[PL] and d[PC] being the differences between the calculated and measured PL and PC spectra, respectively. In our calculation, σ is 34.86%, 15.82%, and 5.92% for the pyramid, truncated-pyramid, and lens-shaped QDs, respectively. For the lens-shaped QD, σ has the least value, which means the QDIP structure is built from lens-shaped QDs. Lens-shaped InAs/GaAs QDs have been observed by many researchers, and in this way our calculation reaches good agreement with the measured data. The PL spectrum, the PC spectrum, and σ also provide a way to find the most suitable shape and size of QD, namely the ones that minimize σ. Different shapes of QDs give different response wavelengths, as described in our calculation. This means that one can obtain the ideal response wavelength of a QDIP structure by controlling the growth conditions so as to change the shape of the QDs.

In summary, we have studied the strain distribution of self-assembled QDs by the short-range VFF approach, which describes inter-atomic forces in terms of bond stretching and bending. The strain-driven self-assembly of QDs based on lattice mismatch has been clearly demonstrated. The recursion method was used to calculate the bound energy levels of QDs of three different shapes with the same 3:1 ratio of base to vertical aspect. For the three differently shaped QDs, the hydrostatic strain ɛ[H] changes little, which indicates that the difference in bound energies is controlled mainly by the shape; the biaxial strain ɛ[B] changes a lot with the shape. Moreover, strain and shape both play key roles in determining the ground state of the hole. The results show that the difference of bound-to-bound energy levels of the lenslike InAs QD matches well with the experimental data, while the pyramid-shaped QD has the biggest difference from the measured data. Though the bound-to-continuum transition of the truncated-pyramid QD is the most acceptable, because its behavior is much like the well-studied QWIP structure, the energy difference between its electron and hole bound ground states is rather far from the experimental results, and the biggest difference of 48.88% rules out the pyramid as the shape of our investigated QD. Our theoretical investigation provides a feasible method for finding the most suitable geometry and size of a QDIP structure by adjusting the shape/size of the QDs and comparing theoretical and experimental results. It is useful in designing ideal QDIP devices.

The project is partially supported by the National Natural Science Foundation of China (Grant No. 10474020), CNKBRSF 2006CB13921507, and the Knowledge Innovation Program of CAS.
{"url":"http://www.nanoscalereslett.com/content/3/12/534","timestamp":"2014-04-25T05:32:33Z","content_type":null,"content_length":"93424","record_id":"<urn:uuid:40316199-5e0c-4ec2-997f-c732e30042f6>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Diameter (of a circle)

From Greek: dia- "across, through" + metron "a measure"

The distance across a circle through its center point.

Try this: Drag the orange dot. The blue line will always remain a diameter of the circle.

The diameter of a circle is the length of the line through the center touching two points on its edge. In the figure above, drag the orange dots around and see that the diameter never changes.

Sometimes the word 'diameter' is used to refer to the line itself. In that sense you may see "draw a diameter of the circle". In the other sense, it is the length of the line, and so is referred to as in "the diameter of the circle is 3.4 centimeters".

The diameter is also a chord. A chord is a line that joins any two points on a circle. A diameter is a chord that runs through the center point of the circle. It is the longest possible chord of any circle.

The center of a circle is the midpoint of its diameter. That is, it divides it into two equal parts, each of which is a radius of the circle. The radius is half the diameter.

If you know the radius
Given the radius of a circle, the diameter can be calculated using the formula
D = 2R
where R is the radius of the circle.

If you know the circumference
If you know the circumference of a circle, the diameter can be found using the formula
D = C / π
where C is the circumference of the circle and π is Pi, approximately 3.142.

If you know the area
If you know the area of a circle, the diameter can be found using the formula
D = 2·√(A / π)
where A is the area of the circle and π is Pi, approximately 3.142.

Use the calculator on the right to calculate the properties of a circle. Enter any single value and the other three will be calculated. For example: enter the diameter and press 'Calculate'. The area, radius and circumference will be calculated. Similarly, if you enter the area, the radius needed to get that area will be calculated, along with the diameter and circumference.

Related items

Radius: The radius is the distance from the center to any point on the edge. As you can see from the figure above, the diameter is two radius lines back to back, so the diameter is always two times the radius. See radius of a circle.

Circumference: The circumference is the distance around the edge of the circle. See Circumference of a Circle for more.

Things to try
1. In the figure above, click 'reset' and drag any orange dot. Notice that the diameter is the same length at any point around the circle.
2. Click on "show radius". Drag the orange dot at the end of the radius line. Note how the radius is always half the diameter.
3. Uncheck the "fixed size" box. Repeat the above and note how the radius is always half the diameter no matter what the size of the circle.

Thales' Theorem: A diameter subtends a right angle at any point on the circle's circumference (see figure on right). No matter where the point is, the triangle formed is always a right triangle. See Thales Theorem for an interactive animation of this concept.

Other circle topics
Equations of a circle
Angles in a circle

(C) 2009 Copyright Math Open Reference. All rights reserved
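A minimal sketch of the calculator's logic, written here in R (this is an illustration, not the site's actual code; the function name and interface are invented for the example):

circle <- function(radius = NULL, diameter = NULL,
                   circumference = NULL, area = NULL) {
    # Convert whichever single value was supplied into a radius,
    # then derive the other three properties from it.
    if (!is.null(diameter))      radius <- diameter / 2
    if (!is.null(circumference)) radius <- circumference / (2 * pi)
    if (!is.null(area))          radius <- sqrt(area / pi)
    if (is.null(radius)) stop("supply one of radius, diameter, circumference, area")
    c(radius = radius,
      diameter = 2 * radius,
      circumference = 2 * pi * radius,
      area = pi * radius^2)
}

circle(diameter = 3.4)   # radius 1.7, circumference ~10.68, area ~9.08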
{"url":"http://www.mathopenref.com/diameter.html","timestamp":"2014-04-16T10:09:45Z","content_type":null,"content_length":"16581","record_id":"<urn:uuid:8fd1881e-1749-4a8d-be7a-0a1eff2a2322>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00297-ip-10-147-4-33.ec2.internal.warc.gz"}
What matrix groups can be embedded in $Sp_4$?

In a joint paper with Yifan Yang we constructed an "exotic" embedding of $SL_2(\mathbb R)$ in $Sp_4(\mathbb R)$ (in fact, of $PSL_2(\mathbb R)$ in $PSp_4(\mathbb R)$), namely,
$$ \iota\colon\begin{pmatrix} a & b \cr c & d \end{pmatrix} \mapsto\begin{pmatrix} a^2d+2abc & -3a^2c & abd+\frac12b^2c & \frac12b^2d \cr -a^2b & a^3 & -\frac12ab^2 & -\frac16b^3 \cr 4acd+2bc^2 & -6ac^2 & ad^2+2bcd & bd^2 \cr 6c^2d & -6c^3 & 3cd^2 & d^3 \end{pmatrix}. $$
An equivalent form of the embedding was independently discovered by Don Zagier, and we could not find it in the literature. Although the properties of the embedding (discussed in the preprint above) are nice by themselves, I am interested in an exhaustive list of possibilities to embed other matrix groups and their direct products in $Sp_4(\mathbb R)$ (or $PSp_4(\mathbb R)$). For example, can the direct product of two copies of $SL_2(\mathbb R)$ be embedded? As I am not a specialist in Lie groups, I would appreciate plainer sources. Thank you for any help in advance!

Tags: lie-groups, gr.group-theory, matrices, nt.number-theory

Comments:

Isn't it enough to check whether the group has a 4-dimensional representation V such that \Lambda^2(V) has positive dimension? – Qiaochu Yuan Jul 5 '10 at 11:42

Also, related, if you haven't seen it: mathoverflow.net/questions/9378/… – Qiaochu Yuan Jul 5 '10 at 11:45

@Wadim: are you asking whether SL_2(R) x SL_2(R) embeds into Sp_4(R)? Isn't this obvious? SL_2(R) acts on R^2 and preserves an alternating form. So SL_2(R) x SL_2(R) acts on R^4 and preserves an alternating form---just take the direct sum. Am I missing something? – Kevin Buzzard Jul 5 '10 at 11:49

PS there are several different "explicit" Sp_4's in the literature---different people take different explicit alternating forms. So I think it's good practice to say explicitly what your J is if you're going to write down explicit matrices as in the question. – Kevin Buzzard Jul 5 '10 at 11:54

Tilouine is a master of automorphic forms on Sp_4 (so, the number theory of Sp_4, not the group theory). I think he likes his J to be anti-diagonal because then I think it makes the standard 3 parabolic subgroups a bit nicer-looking? Of course you can easily move from one form to the other via some conjugation. I don't know if Tilouine would know too much about these exotic SL_2's though. – Kevin Buzzard Jul 5 '10 at 12:08

Answer:

You can embed the direct product of two copies of $SL_2(\mathbb{R})$. One embedding sends the entries to the center $2\times2$ block; the other sends the entries to the corners, with the $2\times2$ identity matrix in the center block. Here, the $J$ matrix is the standard anti-diagonal matrix.

For embeddings of other groups, you could look at the Bruhat decomposition of Sp(4) and write a decomposition of each cell. Some explicit information on the decomposition for GSp(4) is given in a book by Ralf Schmidt and Brooks Roberts, which is available on Ralf's website.
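To make the answer's two embeddings concrete (the explicit matrices below are my reading of the answer, not taken from it), combine the "corners" copy acting on the pair $(e_1, e_4)$ with the "center block" copy acting on $(e_2, e_3)$:
$$ (A,B) = \left(\begin{pmatrix} a & b \cr c & d \end{pmatrix}, \begin{pmatrix} p & q \cr r & s \end{pmatrix}\right) \ \mapsto\ M = \begin{pmatrix} a & 0 & 0 & b \cr 0 & p & q & 0 \cr 0 & r & s & 0 \cr c & 0 & 0 & d \end{pmatrix}, \qquad J = \begin{pmatrix} 0 & 0 & 0 & 1 \cr 0 & 0 & 1 & 0 \cr 0 & -1 & 0 & 0 \cr -1 & 0 & 0 & 0 \end{pmatrix}. $$
With the alternating form $\omega(e_1,e_4) = \omega(e_2,e_3) = 1$ encoded by this anti-diagonal $J$, the condition $M^{\mathsf T} J M = J$ reduces to $ad - bc = 1$ and $ps - qr = 1$, so the map lands in $Sp_4(\mathbb R)$, and the two $SL_2$ factors visibly commute.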
{"url":"http://mathoverflow.net/questions/30610/what-matrix-groups-can-be-embedded-in-sp-4/60887","timestamp":"2014-04-17T12:39:45Z","content_type":null,"content_length":"58811","record_id":"<urn:uuid:410b51c6-93a0-4868-8682-d65cb91470b4>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00244-ip-10-147-4-33.ec2.internal.warc.gz"}
Double Galaxies - Igor Karachentsev

4.3. Relative Velocities, Separations, and the Type of Orbital Motion

To determine the character of the orbital motions of double galaxies we examined the distribution of pairs according to mass estimates, i.e., according to the combination of observed quantities y^2X. Another possibility involves the investigation of the dependence of the mean velocity difference of the galaxies in a pair on their projected linear separation, <y| X>. The question of the form of the basic regression moments <y^k| X> for circular and radial motion was examined by Zonn (1968), Karachentsev (1970b, 1981d) and Noerdlinger (1975). This work will be reviewed very briefly.

1. Circular motion. Relations (4.1) through (4.3) allow one to determine the observed quantities y and X from the total pair mass M[12], the spatial separation r, and the orientation angles i and φ of the orbit. Averaging over M[12], r, i, and φ, where p(r) and p(M[12]) are the density distributions of pairs according to spatial separation and total mass of the components, and r[*] = [GM[12] X^2 / y^2]^1/3, one may calculate the regression of the k-th order (4.28). From this, after some straightforward algebra, we obtain the regressions of the first and second orders, where the density p(r) follows from the observed distribution p(X) by means of the Abel inversion. In the special case in which all pairs have orbits of the same radius, the regression takes a linear form.

2. Radial motion. For two massive particles the energy integral gives relation (4.31), where l is the oscillation amplitude for the components of the pair. Introducing the concept of an oscillation phase, z ≡ r/l, we may re-write (4.31) in terms of z. Then the observed quantities follow from transformations involving z and l. Incorporating the mutual independence of the random variables z, l, and M[12], and moving from (4.32) to observed quantities, we may determine the density distribution p(y, X); inserting this in (4.28) gives the regression <y| X>. The expression for the regression in this case is rather cumbersome.

We note that the regression method may be applied to the catalogue of double galaxies. The observational material is shown in figure 24. In order to apply the data on radial velocity differences uniformly to both giant and dwarf galaxies, we employ the radial velocity difference normalised to the total luminosity of the pair, L. The distribution of double galaxies in y[L] and X is shown by the points. The inset in the figure shows the distribution for wide pairs. The dashed line shows the location of the critical value f^* = 100 separating physical and false pairs. Along the upper margin of the figure are shown the double systems for which the normalised radial velocity difference exceeds 700 km/s (as before, these are optical pairs).

Calculation of the regression <y[L]| X> for circular and radial motions was carried out with Monte Carlo techniques. First, from the observed moments <X^k> we calculated moments of the spatial separation <r^k> for the case of circular orbits, or moments of the amplitude <l^k> for radial motions, using the obvious relation involving the gamma function Γ(k). Further, in calculating the moments, analytical expressions for the functions p(r) and p(l) were applied. Finally, a computer was used to generate the observed quantities y and X on the basis of the independence of the random variables (r, i, l, z), since we know the distribution functions.
The results of performing the mutual regression with 9,000 trials are presented in figure 25. The connected point-to-point lines show the mean values of the modulus of the radial velocity difference in steps of 0.1 in <X>. The dispersion gives an estimate of the statistical accuracy of the regression, which degrades for large values of X because of the small sample size in the tail of the distribution. In the region with values X > 0.1, the regression for circular motion is systematically higher than for radial motion. The observed data for 487 pairs with f < 100 are shown in figure 25 as diamonds, the height of which indicates the standard deviation in the mean. These data incorporate a correction for errors in measurement of radial velocities. From comparison of the calculated regressions with the observed, we conclude that circular motion of double galaxies is satisfied with high probability. This is in agreement with the results of the analysis described in the previous paragraph.

However, we do not view this regression analysis as sufficiently powerful to determine the type of orbital motion. We base this on the fact that the regressions for the extreme types of motion differ only slightly. On the other hand, it is possible we are dealing with unknown factors which systematically change the form of the regression. Having a control sample of model pairs selected with the aid of exactly the same isolation criterion as for the real pairs, we also calculated the regression <y[L]| X> for them. Note that this control sample satisfies the condition for random motion, in which the vectors V and r are not correlated. Yet the model pairs show a no less marked decrease in mean velocity difference on passing from tight to wide pairs. The regression for the M-pairs is shown in the same units in the upper right inset in figure 25.

Supplementary analysis was performed to clarify the origins of this effect. From the selection criteria (section 3.6), the luminosities of double galaxies exhibit a correlation with linear separation, which will enter into the regression on going from y to y[L] according to (4.34). Another source may be the increased probability of including in a sub-sample false pairs satisfying the basic criterion f < 100. The very closest among these appear to be members of groups with large virial velocity differences. Correcting the observed regression for the recognised selection effects brings it closer to the theoretical regression for circular orbits and suggests a situation of `super circularity' for the motion in pairs. These conclusions may be reconciled if one assumes that the actual absolute errors in radial velocity measurements are typically 1.4 times higher than the internal errors (section 2.3).
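The Monte Carlo step described above, generating the observed pair (y, X) from independent draws of the orbital parameters, can be sketched as follows for the circular-orbit case. This is an illustration only: it uses a single fixed orbit radius r and total mass M[12] (the values below are invented for the example), whereas the actual calculation drew r and M[12] from the fitted distributions p(r) and p(M[12]).

set.seed(1)
G <- 4.30e-6    # gravitational constant in kpc (km/s)^2 / Msun
n <- 9000       # number of trials, as in the text

r   <- 50       # orbit radius in kpc (illustrative value)
M12 <- 1e12     # total pair mass in Msun (illustrative value)
V   <- sqrt(G * M12 / r)   # relative circular orbital speed

i   <- acos(runif(n))      # inclination for an isotropic orbit orientation
phi <- runif(n, 0, 2*pi)   # orbital phase, uniform for circular motion

# Projection of the separation onto the sky and of the velocity
# onto the line of sight
X <- r * sqrt(cos(phi)^2 + sin(phi)^2 * cos(i)^2)
y <- V * abs(cos(phi) * sin(i))

# Empirical regression <y | X> in bins of X
bins <- cut(X, breaks = 10)
tapply(y, bins, mean)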
{"url":"http://ned.ipac.caltech.edu/level5/Sept02/Keel/Keel4_3.html","timestamp":"2014-04-17T15:44:39Z","content_type":null,"content_length":"12160","record_id":"<urn:uuid:83f3162c-510f-4070-9d2e-645b00b0b745>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
On the 3rd Day of Christmas… A Harry & David Giveaway! (winner announced) UPDATE: The winner of the Harry & David Olympia Gift Box is: #2,966 – Rachel: “I follow you through Twitter” Congratulations, Rachel! Be sure to reply to the email you’ve been sent, and we’ll get your Harry & David gift box shipped out to you! I have been a huge fan of Harry & David ever since discovering the wonder of a dip made with their Pepper and Onion Relish and cream cheese. To this day, it is the most addicting thing I have ever eaten, and people fawn all over it when I make it for a get-together. Any time I make my way to the outlets, I always hit up the Harry & David store for a re-fill of my favorite items. So, when the good folks over at Harry & David contacted me and offered to gift a Brown Eyed Baker reader with some of their goodies, I was beyond excited to share my love of their products with you. I pored over their product selection, and in the end chose this gift box because it includes a great sampling of some of my very favorite products. Whoever wins has to promise me to make the Pepper and Onion Relish Dip immediately! Continue reading below for details on the gift box and how to enter to win! One (1) winner will receive the Harry & David Olympia Gift Box, which includes the following: • Caramel Moose Munch Snack • Raspberry Galettes • White Chocolate Raspberry Deluxe Gourmet Brownies • German Chocolate Deluxe Gourmet Brownies • Fruit Bowl Jelly Beans • Peanut Butter Pretzels • Super Party Mix • Olive Oil Crackers • Classic Recipe Pepper and Onion Relish To enter to win, simply leave a comment on this post and answer the question: “What’s your favorite snack food?” You can receive up to FIVE additional entries to win by doing the following: 1. Subscribe to Brown Eyed Baker by either RSS or email. Come back and let me know you’ve subscribed in an additional comment. 2. Follow @thebrowneyedbaker on Instagram. Come back and let me know you’ve followed in an additional comment. 3. Follow @browneyedbaker on Twitter. Come back and let me know you’ve followed in an additional comment. 4. Become a fan of Brown Eyed Baker on Facebook. Come back and let me know you became a fan in an additional comment. 5. Follow Brown Eyed Baker on Pinterest. Come back and let me know you became a fan in an additional comment. Deadline: Thursday, December 6, 2012 at 11:59pm EST. Winner: The winner will be chosen at random using Random.org and announced at the top of this post. If the winner does not respond within 48 hours, another winner will be selected. Disclaimer: This giveaway is sponsored by Harry and David; all opinions are my own. Good Luck!! 3,412 Responses to “On the 3rd Day of Christmas… A Harry & David Giveaway! (winner announced)” 1. Wow! I did not know you had a fb page… i would have liked it sooner! I love coming to your site as a stress reliever (currently a med student). Love your recipes! My favorite snack food is any sort of potato chips or corn chips 2. I love popcorn, or pirates booty! 4. Following you on email now! Thanks! 6. follow instagram @saramama1 7. follow pinterest @saramama 8. Already a Facebook Fan as well! =D 9. I receive your fab emails 11. subscriber rss google reader 12. Popcorn or roasted chick peas. 14. recently i’ve been liking m&ms and lay’s potato chips~! 15. Cinnamon yogurt covered pretzels! 16. I follow you on Facebook! 17. I follow you on Pinterest! 18. I subscribe to your email updates! 19. Mmm, moose munch. I love things like that with the sweet and v salty. 24. 
My favorite snack is chocolate – plain, with nuts, combined with salty snacks. Chocolate in all forms! 27. I follow you on Pinterest 28. Favorite snack food would be popcorn. Very favorite popcorn would be Salted Caramel Popcorn from Cosmos Creations. 29. Following you on pinterest! 32. Following you on Facebook now too! 33. My favorite snack food would have to be Chocolate Chip Oatmeal Cookies. 35. I’m also a fan on facebook 36. My favorite snack food is cheese and crackers 37. Something crunchy or something with chocolate! 38. I am already a fan on facebook. 39. I follow you on pinterest. 44. popcorn. It is even better if it is chocolate coated! 46. Chocolate is definitely my favorite! 50. I’m a subscriber and my favorite snack is popcorn, especially Moose Munch! 51. I follow you on Pinterest. 53. Lightly salted potato chips! 54. I subscribe to your RSS feed on Facebook. I love carmel moose munch. 57. I’m your fan on Facebook. 58. My favorite snack is kettle corn, especially if it’s mixed with cheesy popcorn (sort of like Chicago mix?). 59. My favorite snack is caramel corn! 61. I follow BEB on Pinterest 62. Found you on Pinterest and liked you on Facebook! 63. My favorite snack food is popped chips 65. Pretzel crisps and hummus! Yummmmm! 66. my favorite snack is anything sweet! especially with chocolate~ 67. I have subscribed you through email 69. I am your fan on facebook 70. I am your fan on facebook!!! 72. I follow you on pinterest 73. Hmmm… this is a tough one. Pocky? 74. Best snack is any sort of Chips with Dip 77. Popcorn. Any and all types and forms of popcorn. So good! 78. Following you on Pinterest! 79. Following you on Facebook! (it’s how I was reminded of your giveaway!) 80. I love pistachio mixed nuts by planters! 84. I love sweet and salty, and of course Chocolate!! 85. my fav snack food is apple with peanut butter or Quaker caramel corn rice cakes. 86. I Subscribe to Brown Eyed Baker by email. 87. I Follow @thebrowneyedbaker on Instagram. 88. I Follow @browneyedbaker on Twitter. 89. Definitely cheese and crackers. 90. I like Brown Eyed Baker on Facebook. 91. I subscribe to Brown Eyed Baker on RSS feed. 92. My favorite snack is edamame in the shell tossed with a bit of pink sea salt….very yummy! 93. I Follow Brown Eyed Baker on Pinterest. 94. I follow Brown Eyed Baker on Pinterest 95. I subscribed to Brown Eyed Baker via email. 96. I am a fan and have been following Brown Eyed Baker on Facebook for some time now. 97. my favorite snack food is potato chips with ranch dip 99. I followed you on pinterest 100. My favorite snack would be anything with chocolate
{"url":"http://www.browneyedbaker.com/2012/12/05/on-the-3rd-day-of-christmas-a-harry-david-giveaway/comment-page-31/","timestamp":"2014-04-16T07:52:23Z","content_type":null,"content_length":"118639","record_id":"<urn:uuid:7a46fe29-9779-4282-a748-24711ddb0b5f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00009-ip-10-147-4-33.ec2.internal.warc.gz"}
Spanaway Prealgebra Tutor Find a Spanaway Prealgebra Tutor ...I am enrolled at Western Governor's University. My hope is to teach high school mathematics some day.Rote memorization of some basic identities is a great help, but not slavish adherence to all your teacher's methods. I teach rote memorization of the three binomial formulas, and of the multipli... 34 Subjects: including prealgebra, chemistry, Spanish, calculus ...My approach is straightforward, and I have never failed to see improvements in my clients once their learning style was accommodated. I can provide these results to every client, regardless of age. Thank you for your consideration.I studied American Sign Language for three years in high school. 25 Subjects: including prealgebra, reading, English, GRE ...Once they know how to approach a problem, then they can tackle the calculations and methods for going through the problems step by step. Beyond tutoring children how to approach and think through challenge topics, I aim to instill what I feel is the most important part of succeeding in any educa... 25 Subjects: including prealgebra, chemistry, algebra 1, physics ...I can also help students understand their style of learning and assist with study skills. My philosophy in tutoring is to build confidence and academic proficiency so I can release the student. I look forward to working with you and seeing you succeed. 46 Subjects: including prealgebra, reading, English, chemistry ...I serve as an advisor to the sustainability advocates, develop service projects for STEM (science technology engineering math) students, and as an assistant teacher to the calyx elementary school on Whidbey Island. I am currently a tutor to two sets of siblings, both in elementary reading, writi... 9 Subjects: including prealgebra, reading, writing, algebra 1 Nearby Cities With prealgebra Tutor Bonney Lake prealgebra Tutors Dupont, WA prealgebra Tutors Elk Plain, WA prealgebra Tutors Fife, WA prealgebra Tutors Fircrest, WA prealgebra Tutors Gig Harbor prealgebra Tutors Graham, WA prealgebra Tutors Lakewood, WA prealgebra Tutors Loveland, WA prealgebra Tutors Milton, WA prealgebra Tutors Pacific, WA prealgebra Tutors Puy, WA prealgebra Tutors Roy, WA prealgebra Tutors Sumner, WA prealgebra Tutors University Place prealgebra Tutors
{"url":"http://www.purplemath.com/spanaway_wa_prealgebra_tutors.php","timestamp":"2014-04-19T20:26:14Z","content_type":null,"content_length":"23911","record_id":"<urn:uuid:0ff17e55-cda1-4f09-add2-bc055a717c26>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: AR1

From: "Michael Blasnik" <michael.blasnik@verizon.net>
To: <statalist@hsphsun2.harvard.edu>
Subject: st: Re: AR1
Date: Mon, 03 Oct 2005 08:31:40 -0400

----- Original Message ----- From: "Dirk Nachbar" <Dirk.Nachbar@dwp.gsi.gov.uk>

Dear all

I am trying to simulate an AR(1) process but it proves to be hard in Stata. My idea was to declare a matrix and then do it in a loop, but it doesn't work. Maybe someone can help.

Dirk

------------------
clear
set matsize 800
set obs 700
gen time=_n
tsset time
gen y=invnorm(uniform())
mkmat y
local i = 1
while `i' < `obs' {
    matrix y[i+1,1]=1+0.5*y[i,1] +invnorm(uniform())
    local i = `i' + 1
}
svmat y
arima y, arima(1,0,0)
---------------------------------------------------
Dirk Nachbar
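The loop above fails for two reasons: the local macro `obs' is never defined (so the -while- condition is empty), and the loop index inside the matrix subscripts is written as i rather than `i'. A simpler fix avoids matrices altogether and exploits the fact that -replace- processes observations in order, so the lagged value is already updated when it is used (a minimal sketch):

clear
set obs 700
gen time = _n
tsset time
* seed the series; a burn-in could be added so y starts near its
* stationary mean of 1/(1-0.5) = 2
gen y = invnorm(uniform()) in 1
* "in 2/l" (lowercase L) means observations 2 through the last one
replace y = 1 + 0.5*y[_n-1] + invnorm(uniform()) in 2/l
arima y, arima(1,0,0)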
{"url":"http://www.stata.com/statalist/archive/2005-10/msg00030.html","timestamp":"2014-04-19T17:13:56Z","content_type":null,"content_length":"7351","record_id":"<urn:uuid:f572cb7e-1ac5-4b53-a213-1fe7ffe73629>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
The graph of the Cartesian equation xy = 1 is a hyperbola with two asymptotes The graph of the Cartesian equation xy = 1 is a hyperbola with two asymptotes. If the equation is written as y = 1/x we see that y approaches 0 as | x | becomes arbitrarily large, so part of the hyperbola approaches the x axis asymptotically. If the equation is written as x = 1/y we see that x approaches 0 as | y | becomes arbitrarily large, so part of the hyperbola approaches the y axis asymptotically.
{"url":"http://www.projectmathematics.com/document/Asymptote.html","timestamp":"2014-04-20T20:55:57Z","content_type":null,"content_length":"2917","record_id":"<urn:uuid:945f7d9d-d854-4d59-a853-27536f54bcfb>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Units for Inverse Quantity

Date: 9/10/96 at 16:6:23
From: Anonymous
Subject: Units for Inverse Quantity

We just started doing inverse-proportion graphs. There is a data table with time vs. distance. Distance is in centimeters, time is in seconds. For my situation, the two are inversely proportional. After creating the graph of their values, we are supposed to add a third column to the data table, "1/distance." It shows the reciprocals of the values in the distance column. When we plot the graph of time vs. 1/distance, it makes for a nice best-fit line graph. What units should be used when referring to the 1/distance values? My teacher has suggested that it might be cm^-1, but he isn't sure. Can you tell me?

Date: 9/10/96 at 18:41:45
From: Doctor Tom
Subject: Re: Units for Inverse Quantity

Your teacher is right. If you invert the number, you have to invert the units: cm^-1 for the distance, sec^-1 for time. You can read these as "per centimeter" or "per second".

To get (perhaps) a better feel for why this is true, let me use a hypothetical situation of my own. Suppose a bell rings every 1/4 of a second. So the time between rings is 1/4 second, right? We also often say that the bell rings at a rate of "4 per second" - that's what you get if you invert "1/4 second".

Or here's a more interesting example. Suppose you work out some physics problem, and the answer is a velocity divided by an acceleration. What kind of a "thing" is it? I'll just use D to represent "distance" units (you can use centimeters, feet, furlongs, or light-years; I don't care), and I'll use T for the time units (seconds, hours, fortnights, ... you pick!)

Velocities always have units D/T and accelerations have units of D/(T^2). So a velocity/acceleration has units of (D/T)/(D/(T^2)) = T, so you had better be calculating a time in your formula. You can always use the units to check your results this way. It doesn't prove that your answer is right, but it can certainly prove that it's wrong!

-Doctor Tom, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
{"url":"http://mathforum.org/library/drmath/view/54430.html","timestamp":"2014-04-21T10:49:36Z","content_type":null,"content_length":"6968","record_id":"<urn:uuid:87239864-d3f1-431f-99cb-0b0ed0b83df5>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
The 4 kg head of a sledgehammer is moving at 6 m/s when it strikes a spike, driving it into a log. The duration of the impact is 2 ms. Find the time average of the impact force.
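A worked solution via the impulse-momentum theorem, assuming the hammer head comes essentially to rest during the impact:

$$ \bar F = \frac{\Delta p}{\Delta t} = \frac{m\,\Delta v}{\Delta t} = \frac{(4\ \mathrm{kg})(6\ \mathrm{m/s})}{2\times 10^{-3}\ \mathrm{s}} = 1.2\times 10^{4}\ \mathrm{N} = 12\ \mathrm{kN}. $$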
{"url":"http://openstudy.com/updates/50bdf8fae4b09e7e3b85834d","timestamp":"2014-04-20T00:52:15Z","content_type":null,"content_length":"46849","record_id":"<urn:uuid:fd748ffd-a7d0-4ad1-ad2b-9ede199fc5c2>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
reshape (from base) Explained: Part II

May 5, 2012
By tylerrinker

Part II Explains More Complex Wide to Long With base reshape

In part I of this base reshape tutorial we went over the basics of reshaping data with reshape. We learned two rules that help us to be more efficient and effective in using this powerful base tool:

RULE 1: Stack repeated measures/Replicate and stack everything else
RULE 2: Naming your columns in a way R likes makes your life easier

In part II we will be looking at more complex wide to long reshapes (more than one series of repeated measures) by building on what we learned in part I. Let's start by generating some data with two series (nested repeated measures):

dat <- data.frame(id=paste0("ID.", 1:5),
    sex=sample(c("male", "female"), 5, replace=TRUE),
    matrix(rpois(30, 10), 5, 6))
colnames(dat)[-c(1:2)] <- paste0(rep(1:2, 3), rep(c("work", "home", "church"), 2))

Which looks like this:

    id    sex 1work 2home 1church 2work 1home 2church
1 ID.1 female     7     8       7    10     6      10
2 ID.2   male    10    13      10     7    13      15
3 ID.3   male    11    10       6    10    10       7
4 ID.4 female     6     8      12     9    15       7
5 ID.5   male     9    11      15    10    10      12

As you can see we have nested repeated measures at three different locations (work, home, church) at two different times (the 1 or 2 prefix). Now let's follow Rule 2 and get our names in a way R likes them. (You may ask why I didn't name them correctly to begin with? Fair question. Let me ask one though: have you ever got a data set 100% the way you wanted it to be?)

names(dat) <- gsub("([0-9]+)([a-z]+)", "\\2\\.\\1", names(dat))
# BASICALLY, THIS SAYS FIND THE NAMES THAT ARE NUMERIC-ALPHA;
# OTHERWISE LEAVE IT ALONE. THE [0-9]+ SAYS FIND THE NUMERIC
# STRING (PLUS SIGN SAYS FIND ALL THE PRECEDING CHARACTERS 1
# OR MORE TIMES). THE [a-z]+ SAYS FIND THE ALPHA STRING (PLUS
# AGAIN MEANS FIND THE ALPHAS 1 OR MORE TIMES). THE "." IS
# CHARACTERS I'M INSERTING AND THE 1 AND 2 CORRESPOND TO THE
# PARENTHESES IN THE ARGUMENT OF gsub. BASICALLY FLIP-FLOPPING
# THE POSITION OF 1 AND 2.

# OR MANUAL REPLACEMENT. YOU CAN SEE WHERE REGEX CAN COME IN
# HANDY AS THE DATA SET GROWS.
#names(dat)[-c(1:2)] <- c("work.1", "home.2", "church.1",
#    "work.2", "home.1", "church.2")

Which now looks like:

    id    sex work.1 home.2 church.1 work.2 home.1 church.2
1 ID.1 female      7      8        7     10      6       10
2 ID.2   male     10     13       10      7     13       15
3 ID.3   male     11     10        6     10     10        7
4 ID.4 female      6      8       12      9     15        7
5 ID.5   male      9     11       15     10     10       12

Alright, in part I we learned the following arguments:

○ data – dataframe you're supplying reshape
○ direction – either 'long' or 'wide' (in this case we are going to long so choose that)
○ varying – the repeated measures columns we want to stack (takes indexes or column names, but I'm lazy and will use indexes; if you want names use: c("colname1", "colname2"))
○ v.names – This is what we call the measurements (values) of each repeated measure. Name it anything you want.
○ timevar – This is what we'll call the times of each repeated measure (the categorical variable if you will). Name it anything you want.
○ times – Basically this is your: (# of starting rept. meas. cols.) ÷ (final # of stacked cols.) = (times vector length)

In the first example we want to have a time 1 and time 2 column by stacking all the locations for time 1 in a column and all the locations for time 2 in a column (these are the v.names columns). Since we have two times we'll need two column names (I called them TIME_1 and TIME_2 but this is up to you). We'll need to keep track of these locations in the timevar column.
If you notice, the major difference between simple repeated measures and more complex repeated measures is that we don't supply an index of columns to varying but a list of indexes. This is where rule 1 becomes important: what are you stacking? In this case we want to take everything in time 1 and stack it, and the same for time 2, using timevar to keep track of the locations. In the example code below I have:

1. The bare bones example (no time column)
2. An example with a time column (numeric values for cells)
3. An example with time column and locations for cell values (adj. w/ times arg.)

# BARE MINIMUM #
reshape(dat,                        #dataframe
    direction="long",               #wide to long
    varying=list(c(3:5), c(6:8)))   #repeated measures list of indexes

# STACKING OF TIME 1 AND 2, REPEAT EVERYTHING ELSE #
reshape(dat,                        #dataframe
    direction="long",               #wide to long
    varying=list(c(3:5), c(6:8)),   #repeated measures list of indexes
    #idvar='id',                    #1 or more of what's left
    timevar="PLACE",                #the repeated measures times
    v.names=c("TIME_1", "TIME_2"))  #the repeated measures values

# STACKING OF TIME 1 AND 2 WITH NAMED TIME CELLS #
dat2 <- reshape(dat,                #dataframe
    direction="long",               #wide to long
    varying=list(c(3:5), c(6:8)),   #repeated measures list of indexes
    #idvar='id',                    #1 or more of what's left
    timevar="PLACE",                #the repeated measures times
    v.names=c("TIME_1", "TIME_2"),  #the repeated measures values
    times=c("wrk", "hom", "chr"))
row.names(dat2) <- NULL

The final outcome is:

     id    sex PLACE TIME_1 TIME_2
1  ID.1 female   wrk      7     10
2  ID.2   male   wrk     10      7
3  ID.3   male   wrk     11     10
4  ID.4 female   wrk      6      9
5  ID.5   male   wrk      9     10
6  ID.1 female   hom      8      6
7  ID.2   male   hom     13     13
8  ID.3   male   hom     10     10
9  ID.4 female   hom      8     15
10 ID.5   male   hom     11     10
11 ID.1 female   chr      7     10
12 ID.2   male   chr     10     15
13 ID.3   male   chr      6      7
14 ID.4 female   chr     12      7
15 ID.5   male   chr     15     12

This may be what we want, but what if we wanted to have a work, home and church column by stacking all the times for work on each other, all the times for home and all the times for church (these are the v.names columns)? Well, we do this with the list of indexes we supply to varying. This again is rule number 1. We know we have three v.names columns (the locations) so we need three indexes to pass as a list to varying. We want to stack all the times for work, so we supply the index of 3 (work.1) and 6 (work.2), and do the same for home (c(4, 7)) and church (c(5, 8)). We now switch timevar to TIME because it's no longer keeping track of the locations, and the v.names will be given the three locations as names. We also could supply a times argument to reshape, but it doesn't make sense considering the default numeric index (1, 2) already makes sense.

# STACKING OF THE THREE PLACES #
dat3 <- reshape(dat,                           #dataframe
    direction="long",                          #wide to long
    varying=list(c(3, 6), c(4, 7), c(5, 8)),   #repeated measures list of indexes
    #idvar='id',                               #1 or more of what's left
    timevar="TIME",                            #the repeated measures times
    v.names=c("WORK", "HOME", "CHURCH"))       #the repeated measures values
row.names(dat3) <- NULL
Look below and you'll see all we do is tell varying what columns are repeated measures and it figures out what to stack from the names. Additionally, there's no need to supply the argument v.names because R is such a smarty he figured it out all by himself (what a big boy). You ask, well why didn't this work for stacking above with two times (the dat2 example)? Good question. It doesn't work because we need to have the form measurement_column_name.time_column. So our rename job at the beginning was work.time, home.time, church.time. In this example our three measurement columns will be work, home, and church, and the numeric index after each name indicates which time. If we wanted to have it easy for the dat2 example we would have had to name the repeated measures as time_1.1, time_1.2, time_1.3, time_2.1, time_2.2, time_2.3. The dot numeric index at the end stands for the three locations. If you're interested in seeing this please see the link to the script of this demonstration found at the bottom of this article, as it contains extra code not found in this post.

So you have three approaches:

1. Name it correctly (just indexes 1:n)
2. Provide a list of indexes (who cares about names)
3. Both name correctly and list of indexes (safety my friend)

# STACKING OF THE THREE PLACES REWARDED BY GOOD COLUMN NAMING #
dat3 <- reshape(dat,        #dataframe
    direction="long",       #wide to long
    varying=3:8,            #indexes
    #idvar='id',            #1 or more of what's left
    timevar="TIME")         #the repeated measures times
    #v.names=c("WORK", "HOME", "CHURCH")) #Rewarded: no need for v.names

row.names(dat3) <- NULL

Which gives us:

     id    sex TIME WORK HOME CHURCH
1  ID.1 female    1    7    8      7
2  ID.2   male    1   10   13     10
3  ID.3   male    1   11   10      6
4  ID.4 female    1    6    8     12
5  ID.5   male    1    9   11     15
6  ID.1 female    2   10    6     10
7  ID.2   male    2    7   13     15
8  ID.3   male    2   10   10      7
9  ID.4 female    2    9   15      7
10 ID.5   male    2   10   10     12

Hold the phone Fenster! So let me get this straight. If I've been a good R user and followed Rule #2 (name the way R liketh) then all I have to provide reshape is data, direction and varying (maybe idvar)? Yep, that's right. See, I told you that nameology was important; it makes your life easy. Don't believe me? Try it out:

reshape(dat, direction="long", varying=3:8)

See, reshape is actually pretty simple once you figure it out. But sometimes we need to stack all the repeated measures into one column (for certain analyses and visualizations) and keep track of both time and location. To do this we simply supply all repeated measures columns to varying (indexes 3:8) as a vector (not a list, as we only want one final column and lists are for when we want multiple repeated measures columns), provide v.names and timevar with appropriate names (I chose LOC_TIME for timevar as both the nested repeated measures of location and time will be in this column), and last give a vector of names to the times argument. Keep in mind that reshape will stack the columns you gave to varying in the order you supplied them. To figure out the number of times (as stated above) we take the original number of columns and divide by the total number of end columns (6 ÷ 1 = 6), which means we have to supply 6 names to the times argument (otherwise we have the numeric 1-6 default which can be pretty difficult to keep track of). This is where paste and R's recycling rule come in handy. Simply supply paste with the first vector of repeated measure series (location) and then the second, but use rep with the second, providing each = (# of first series of repeated measures).
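To see the recycling concretely, here is that times vector evaluated on its own (nothing new, just the expression from the call below run by itself, with its printed result as a comment):

paste(c("work", "home", "church"), rep(1:2, each=3))
# [1] "work 1"   "home 1"   "church 1" "work 2"   "home 2"   "church 2"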
The recycling rule will take care of the rest.

# DOUBLE STACK. STACK TIMES AND PLACES AND NOTE EACH TIME AND #
# PLACE. # of TIMES = # OF COLUMNS STACKED.                   #
dat4 <- reshape(dat,        #dataframe
    direction="long",       #wide to long
    varying=3:8,            #repeated measures list of indexes
    #idvar='id',            #1 or more of what's left
    timevar="LOC_TIME",     #the repeated measures times
    v.names=c("VALUE"),     #the repeated measures values
    times=paste(c("work", "home", "church"), rep(1:2, each=3)))

row.names(dat4) <- NULL

This gives us:

     id    sex LOC_TIME VALUE
1  ID.1 female   work 1     7
2  ID.2   male   work 1    10
3  ID.3   male   work 1    11
4  ID.4 female   work 1     6
5  ID.5   male   work 1     9
6  ID.1 female   home 1     8
7  ID.2   male   home 1    13
8  ID.3   male   home 1    10
...       (rows 9 to 28 omitted)
29 ID.4 female church 2     7
30 ID.5   male church 2    12

This is nice, but the information for the timevar (location and time) is all garbled together and may make analysis or visualization functions difficult. The best approach would be to split this data into two different columns. Many people are familiar with Wickham's colsplit from the reshape2 package. This is one approach. I also have a function called colsplit2 that operates from the base package that I keep in my .Rprofile (I actually call it colsplit as well, but for namespace purposes we'll call it colsplit2). This is similar to Wickham's but a little different. With Wickham's you provide just the one column and it splits it into two, and you then need to cbind it back to the original somehow. My function takes the dataframe and the column to be split, and outputs a new data frame with the two columns in the same place as the original singular column. This is a base alternative if you're attempting to avoid dependence. For this tutorial I'll use my function, but the downloadable script has both methods.

colsplit2 <- function(dataframe, splitcol, new.names=NULL, sep=""){
    if(is.numeric(dataframe[, splitcol])) stop("splitcol can not be numeric")
    # split the chosen column on sep; bind the pieces into a data frame
    X <- data.frame(do.call(rbind, strsplit(as.vector(
        dataframe[, splitcol]), split = sep)))
    # locate the column being split (accepts a name or an index)
    z <- if (!is.numeric(splitcol)) match(splitcol, names(dataframe)) else splitcol
    if (!is.null(new.names)) colnames(X) <- new.names
    # reassemble, placing the new columns where the old one sat
    if (z != 1 & ncol(dataframe) > z) {
        cbind(dataframe[, 1:(z-1), drop=FALSE], X,
            dataframe[, (z + 1):ncol(dataframe), drop=FALSE])
    } else {
        if (z != 1 & ncol(dataframe) == z) {
            cbind(dataframe[, 1:(z-1), drop=FALSE], X)
        } else {
            if (z == 1 & ncol(dataframe) > z) {
                cbind(X, dataframe[, (z + 1):ncol(dataframe), drop=FALSE])
            } else {
                X
            }
        }
    }
} #END OF colsplit2 FUNCTION

dat4 <- colsplit2(dat4, "LOC_TIME", c("place", "time"), " ")

We now have:

     id    sex  place time VALUE
1  ID.1 female   work    1     7
2  ID.2   male   work    1    10
3  ID.3   male   work    1    11
4  ID.4 female   work    1     6
5  ID.5   male   work    1     9
6  ID.1 female   home    1     8
7  ID.2   male   home    1    13
8  ID.3   male   home    1    10
...       (rows 9 to 28 omitted)
29 ID.4 female church    2     7
30 ID.5   male church    2    12

Let's do a bit of visualization with one of my favorite packages, Wickham's ggplot2. For social sciences (and particularly repeated measures) the faceting with facet_grid is pretty nice. One little change to the time column to make the labels on facet_grid nicer. I use a paste approach that alters the actual variable because it's easier to explain, but in real practice I don't like to alter variables; I prefer to add another column or approach it by other means. The website Cookbook for R provides a very nice alternative to altering your variable content using the labeller argument of facet_grid (look under the heading Modifying facet label text in the link).
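If you want to try that labeller route, a minimal sketch looks like this (hedged: it follows the labeller interface as documented there at the time, where the labeller is a function of the facetting variable and its values; the function name here is mine, not from the post):

time_labeller <- function(variable, value) paste("time", value)
# then facet with: facet_grid(place ~ time, labeller = time_labeller)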
# MAKE THE NAMES ON LABELS PRETTY FOR GGPLOT FACETING (ONE OF #
# MANY APPROACHES)                                            #
dat4$time <- paste("time", dat4$time)

# PLOT IT WITH GGPLOT2 #
ggplot(data=dat4, aes(sex, VALUE)) + geom_boxplot() +
    facet_grid(place~time)

ggplot(data=dat4, aes(place, VALUE)) + geom_boxplot() +
    facet_grid(time~sex)

In Part III of this series we'll look at the less used long to wide format.

For a .txt version of this demonstration click here.
{"url":"http://www.r-bloggers.com/reshape-from-base-explained-part-ii/","timestamp":"2014-04-17T10:03:38Z","content_type":null,"content_length":"58612","record_id":"<urn:uuid:ad4dd4af-7554-424e-8cf8-26ede37f6672>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00236-ip-10-147-4-33.ec2.internal.warc.gz"}
Geeky company names

I started a discussion on Twitter this evening about consulting company names. Here are some of the names.

• Turing Machine Computing: If we can't do it, it can't be done.
• Heisenberg Consulting: You can have speed or quality, but not both at the same time.
• Perelman Consulting: Please don't pay us. We don't want your money.
• Gödel Systems: Your job is done but we can't prove it.
• Gödel Consulting: because no one is supplying ALL your needs.
• Lebesgue Consulting: We've got your measure.
• Noether Consulting: We find the conserved values of your system.
• Fourier consulting: We transform your world periodically.
• Zorn's Consulting: Your choice is axiomatic.
• Spherical Computing: Without parallel.
• Markov Chain Consulting: It doesn't matter how we got here.
• Dirac Consulting: We get right to the point.
• Shannon Consulting: We'll find a way to deliver your message.
• Neyman & Pearson Consulting: No one is more powerful than us.
• Complex Conjugate Consulting: We make your product real.
• Hadamard Consulting: Real solutions by complex methods.
• Zeno Consulting: We'll get you arbitrarily close to where you want to be.
• Hilbert Consulting: You think you have a problem?
• Riemann Hypothesis Consulting: When your job is on the line and everything is critical

Here are footnotes explaining the puns above.

• Turing: In computer science, Turing machines define the limits of what is computable.
• Heisenberg: The Heisenberg Uncertainty Principle says that there is a limit to how well you can know a particle's momentum and position. The more accurately you know one, the less you know about the other.
• Perelman: Turned down the Fields Medal and the Clay Institute's million-dollar prize after solving the Poincaré conjecture.
• Gödel: His incompleteness theorem says that number theory contains true theorems that cannot be proved.
• Lebesgue: Founder of measure theory, a rigorous theory of length, area, volume, etc.
• Noether: Established a deep connection between symmetry and conservation laws.
• Fourier: Known for Fourier transforms and Fourier series, expressing functions as sums or integrals of periodic functions.
• Zorn: Known for Zorn's lemma, equivalent to the axiom of choice.
• Spherical: There are no parallel lines in spherical geometry.
• Markov Chain: The probability distribution for the next move in a Markov chain depends only on the current state and not on previous history.
• Complex Conjugate: A complex number times its conjugate is a real number. See xkcd.
• Dirac: The reference here is to the Dirac delta function. Informally, a point mass. Formally, a distribution.
• Shannon: Founder of communication theory.
• Neyman-Pearson: The Neyman-Pearson lemma concerns most powerful hypothesis tests.
• Hadamard: Said "The shortest path between two truths in the real domain passes through the complex domain." That is, techniques from complex analysis are often the easiest way to approach problems from real analysis.
• Zeno: Zeno's paradox says you cannot get anywhere because first you have to get halfway there, then halfway again, etc.
• Hilbert: Created a famous list of 23 research problems in math in 1900.
• Riemann: The Riemann hypothesis says that all the non-trivial zeros of the Riemann zeta function lie on the critical line Re(z) = 1/2.

That was awesome, I ROFLCOPTER with the Markov one.

Hey, pretty good! I think the Noether one is the weakest. How about "We maximize your ideals!" from her work in algebra. And poor Paul "known for the delta function" Dirac!
Kind of like Isaac "known for his namesake cookie" Newton.

Very funny!

Note Zeno typo: can't, not can.

It warms my heart to see that geekdom is alive and well.

> Heisenberg Consulting: You can have speed or quality, but not both at the same time.

How about: 'speed xor quality'.

I fixed the typo in Zeno's description and clarified the description of Dirac. He certainly did bigger things than give his name to the delta function.

Awesome! I laughed a lot with "Markov Chain"! You forgot Banach Consulting: "Contracting around a fixed point"

As a designer, doing logos for these will be quite the interesting task. Some additions to the list above:

- Feynman Consulting: We Drum Up The Answers.™ (Feynman was an avid drummer and he was constantly in search of answers to interesting problems)
- Einstein Consulting: Energize Your Business.™ (Einstein's mass-energy equivalence. Convert passive workforce (mass) to active output (energy).)
- Newton Consulting: Gravitate To A Better Future.™ (Newton's Law of Universal Gravitation)
- Bohr Consulting: Run Circles Around Your Competition.™ (Bohr described circular orbits for electrons in his model of the atom)

And one more that deviates from the "Scientist name" template…

- Duality Consulting: The Problem Is Also The Solution.™ (Wave-Particle duality)

Sequential Computing: Our algorithms are unparallelled.
Von Neumann architectural consultants ("Still the standard").
Bishop Berkeley acoustic dampening ("You won't even be sure if it made a sound").

In Sweden we have a state investment company called "Fouriertransform".

Erdos consulting: we've got your number.

This made my day.

Artin's TV Production Agency: The Artins assist all stages of your TV career - ascend to an Emmy, descend by aN oether (sic) method.

I'm loving these! Jackknife Consulting: Don't be the one left out. Gibbs Consulting: want to see samples of our work?

Awesome geek humor. I'll try a couple…

Euler Excursions: Making sure you see ALL the sights.
Hotelling Consultants: Making your product unique, just like everyone else's.
Hermite Online Dating: It only takes one kiss.
Hamming Consultants: Spanning the range of business, one change at a time.

Oops, I'm insufficiently geeky — that last one should be Gray Solutions. I got Hamming codes and Gray codes mixed up.

Euler tours visit all of the edges in a graph.
Hotelling's law describes how firms simultaneously attempt to make their products similar to others (to draw away customers) and different from others (to prevent having them drawn away by similar products).
Hermite (or 'osculatory') interpolation draws a polynomial through a set of points that 'kisses' each point as a point of tangency.
Gray codes are a binary coding of the integers such that successive integers differ by only one bit.

The next step is to design a logo. I wonder what it'd look like for Markov Chain Consulting.

Backus Computing: We make all your programming functional.

Evgeni: I gave it a shot

Evgeni: Oops, missed a word in that one. Here's the correct version.

Einstein Fitness Club -> Your mass is our energy.

I'd hate to put a downer on all this mirth but 'Keynes Consulting' "when people are mathematically illiterate we make the government the stabilising buyer of math[s]" …the best of luck in your freelance 'Endeavour.'

After co-solving the Poincare conjecture. That's actually Perelman's reason for not accepting the prize: he doesn't think his contribution to the solution was the largest.

[...] on Twitter for people to invent company names related to mathematicians.
In his post Geeky company names you can see a compilation of the ones that [...]

Posted in Business, Math
{"url":"http://www.johndcook.com/blog/2013/02/26/geeky-company-names/","timestamp":"2014-04-20T20:56:22Z","content_type":null,"content_length":"54852","record_id":"<urn:uuid:16140f3c-994c-436f-8b98-f688d8120a72>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Open Balls and Open Sets

Date: 10/04/98 at 10:26:50
From: Robert
Subject: Open Balls/Open Sets

I am having trouble understanding the concepts of open balls and open sets. I know the definitions for both:

Open Ball: An open ball B in a set E centered at a point v, with radius r > 0, is the set of all x in E such that |x - v| < r.

Open Set: We define a set U to be open if for each point x in U there exists an open ball B centered at x contained in U.

I was wondering if you could give me some examples for these definitions, in order for me to understand them better. Can you also explain how every open ball is an open set? Thank you so much for your help.

- Robert

Date: 10/08/98 at 19:47:16
From: Doctor Mike
Subject: Re: Open Balls/Open Sets

For an open ball, another word you could think of here would be "sphere." It is the same thing. Inside a spherical ball is every point whose distance from the center of the sphere is less than the radius of the sphere. The surface of the sphere has points whose distance from the center is equal to the radius of the sphere. The open ball is the inside part.

For examples, think of any enclosed volume: a square box, a rectangular box, an egg, a cylinder with top and bottom, a balloon of any shape. What these have in common is one or more surfaces that enclose some space. If you just consider what is on the "inside" (or interior) then that would be an example of an open set in space. The reason: By the exact definition of "open," around any point in such an interior you can fit a complete ball which is also completely in the interior. Granted, if the point you pick is just a fraction of a nanometer from the surface, the ball is going to be very small, but you can do it.

If you understand why every open ball is an open set, you have the concept. How can we prove that this ball qualifies to be "open"? We just use what the definition says. Let x be some point (any point) inside this spherical ball. Keep in mind that it is not actually on the outside surface of it, but actually inside it. If D is the distance from the point x to the center of the spherical ball, and if R is the radius of the spherical ball, then it must be true that D < R. [The distance from the center must be less than the radius. Right?] Now, how far is the point x from the surface of the spherical ball? Answer: (R-D). So, if you place a ball of radius (R-D)/2 at the point x, it will fit inside the ball of radius R with room to spare. Done. (We just finished doing all we had to do to qualify by the definition.) If this is not immediately clear to you, think about it a few more times (and re-read the definition of "open set" again) and I think pretty soon it will begin to make sense.

Here's one more thing to think about. The definition of open depended in a major way on balls of a certain radius. What else has a radius? Circles, of course. All that we did above for open sets and open spheres in space could be done for open sets and open "circular disks" in a plane or on some other kind of flat surface. In advanced math the concept of distance (and so the concept of radius) is generalized to define something called a Metric Space. An abstract form of "open ball" can be defined in such an environment, too.

I hope this helps.

- Doctor Mike, The Math Forum
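As a postscript, the key step in Doctor Mike's argument can be written as one chain of inequalities (a sketch in symbols, not part of the original exchange; here c is the center of the large ball and D = |x - c| < R). For any y with |y - x| < (R-D)/2, the triangle inequality gives

\[
|y - c| \;\le\; |y - x| + |x - c| \;<\; \frac{R-D}{2} + D \;=\; \frac{R+D}{2} \;<\; R,
\]

so the ball of radius (R-D)/2 centered at x lies entirely inside the ball of radius R, which is exactly what the definition of "open" requires.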
{"url":"http://mathforum.org/library/drmath/view/52430.html","timestamp":"2014-04-19T00:23:00Z","content_type":null,"content_length":"8368","record_id":"<urn:uuid:cd0bb646-f3da-4c86-b699-dc12aa26337e>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Computer Laboratory

Bayesian inference for latent variable models

Ulrich Paquet

July 2008, 137 pages

This technical report is based on a dissertation submitted in March 2007 by the author for the degree of Doctor of Philosophy to the University of Cambridge, Wolfson College.

Bayes' theorem is the cornerstone of statistical inference. It provides the tools for dealing with knowledge in an uncertain world, allowing us to explain observed phenomena through the refinement of belief in model parameters. At the heart of this elegant framework lie intractable integrals, whether in computing an average over some posterior distribution, or in determining the normalizing constant of a distribution. This thesis examines both deterministic and stochastic methods by which these integrals can be treated. Of particular interest shall be parametric models where the parameter space can be extended with additional latent variables to get distributions that are easier to handle algorithmically.

Deterministic methods approximate the posterior distribution with a simpler distribution over which the required integrals become tractable. We derive and examine a new generic α-divergence message passing scheme for a multivariate mixture of Gaussians, a particular modeling problem requiring latent variables. This algorithm minimizes local α-divergences over a chosen posterior factorization, and includes variational Bayes and expectation propagation as special cases.

Stochastic (or Monte Carlo) methods rely on a sample from the posterior to simplify the integration tasks, giving exact estimates in the limit of an infinite sample. Parallel tempering and thermodynamic integration are introduced as 'gold standard' methods to sample from multimodal posterior distributions and determine normalizing constants. A parallel tempered approach to sampling from a mixture of Gaussians posterior through Gibbs sampling is derived, and novel methods are introduced to improve the numerical stability of thermodynamic integration. A full comparison with parallel tempering and thermodynamic integration shows variational Bayes, expectation propagation, and message passing with the Hellinger distance α = 1/2 to be perfectly suitable for model selection, and for approximating the predictive distribution with high accuracy.

Variational and stochastic methods are combined in a novel way to design Markov chain Monte Carlo (MCMC) transition densities, giving a variational transition kernel, which lower bounds an exact transition kernel. We highlight the general need to mix variational methods with other MCMC moves, by proving that the variational kernel does not necessarily give a geometrically ergodic chain.

Full text

PDF (2.0 MB)

BibTeX record

@TechReport{UCAM-CL-TR-724,
  author      = {Paquet, Ulrich},
  title       = {{Bayesian inference for latent variable models}},
  year        = 2008,
  month       = jul,
  url         = {http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-724.pdf},
  institution = {University of Cambridge, Computer Laboratory},
  number      = {UCAM-CL-TR-724}
}
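As a toy illustration of the thermodynamic integration idea mentioned above (a sketch in R, not code from the report; the conjugate Gaussian example is chosen so the true normalizing constant is known and each tempered posterior can be sampled exactly):

# Model: y ~ N(theta, 1) with prior theta ~ N(0, 1); estimate the log evidence.
set.seed(1)
y <- 0.7
betas <- seq(0, 1, length.out = 21)
Elog <- sapply(betas, function(b) {
    # the tempered posterior prior(theta) * L(theta)^b is N(b*y/(1+b), 1/(1+b)) here
    th <- rnorm(1e5, b * y / (1 + b), sqrt(1 / (1 + b)))
    mean(dnorm(y, th, 1, log = TRUE))   # E_beta[log likelihood]
})
# log Z equals the integral over beta of E_beta[log L]; trapezoid rule:
logZ_ti <- sum(diff(betas) * (head(Elog, -1) + tail(Elog, -1)) / 2)
logZ_true <- dnorm(y, 0, sqrt(2), log = TRUE)   # exact marginal likelihood

In real problems the tempered posteriors cannot be sampled directly, which is where the parallel tempering machinery of the thesis comes in.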
{"url":"http://www.cl.cam.ac.uk/techreports/UCAM-CL-TR-724.html","timestamp":"2014-04-18T10:41:31Z","content_type":null,"content_length":"6703","record_id":"<urn:uuid:a43c7949-0f0f-4499-84d6-f49d43d568e8>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
15-317 Constructive Logic

Lecture 13: Arithmetic

In this lecture we introduce first-order intuitionistic arithmetic (also called Heyting arithmetic), which is based on the schemas of primitive recursion and induction from the previous lecture, but also includes equality on natural numbers and the previously introduced quantifiers. We go through several example proofs and their formalization.

• Reading: none
• Previous lecture: Induction
• Next lecture: Logic Programming
• Key concepts:
  □ Equality introduction and elimination rules
  □ Equational specifications

[ Home | Schedule | Assignments | Handouts | Software ]

Frank Pfenning
{"url":"http://www.cs.cmu.edu/~fp/courses/15317-f08/lectures/13-arith.html","timestamp":"2014-04-18T08:32:45Z","content_type":null,"content_length":"2788","record_id":"<urn:uuid:c4b9707f-15c9-4fbc-9fb4-bbe7c1ec9988>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
[FOM] Predicativism and natural numbers

Nik Weaver nweaver at math.wustl.edu
Mon Jan 16 04:03:59 EST 2006

Giovanni Lagnese wrote:

> But one can take as primitive also the concept of powerset. So there
> must be a philosophical reason for taking as primitive the concept of
> natural numbers but not the concept of powerset. And this reason can
> not be the predicativism.

No, you cannot take the concept of power set as primitive if you don't even know what sets are. This aspect of my last message seems not to have been understood.

The impatience with predicativism seen here seems to arise from an implicit acceptance of a platonic conception of set theory. It is taken for granted that there are these abstract entities, sets, which reside in some metaphysical world, and starting from this point one wants to know why predicativists accept some parts of that world and not others. I do not accept any part of this picture and I think that anyone who does has a lot of work to do in defending his own views before going after rival stances like predicativism. Quoting Hartley Slater ("Grammar and sets", to appear in the Australasian Journal of Philosophy): "the very notion of a set ... is based on a series of grammatical confusions." (Also: "The expectation ... has been that one must look elsewhere for _another object_ to be the pair of apples. But this supposed other object is a grammatical mirage.")

I could turn the point around and ask why, if you are willing to accept the notion of power set as primitive (for all x, there is a set P(x) such that y is in P(x) <=> y is a subset of x), you do not more generally accept full comprehension (for all x and any formula p, there is a set z such that y is in z <=> p(x,y) holds). Can you give a principled explanation of why in your supposed platonic universe of sets one holds and the other doesn't?

From my perspective, we cannot accept any set as primitive. In order to meaningfully use the language of set theory one must explicitly identify an interpretation, i.e., we must specify which objects are to play the role of sets and for which pairs of these objects the membership relation is supposed to obtain. On this view it is obvious that we cannot simply posit the existence of power sets. We have to show how they can be built. And there is a fundamental circularity in the concept of power set which obstructs us from doing this in general. See the bottom of page 10 of my paper "Mathematical conceptualism".

The argument that identifying a structure which is to play the role of the natural numbers is somehow circular puzzles me. I make a mark on a piece of paper, then I make another mark next to it, and so on. To say that this doesn't work because I can't actually physically make infinitely many marks seems to me to take physics as being more fundamental than mathematics, a view which has been explicitly advocated in recent posts in another thread, but which I cannot accept. Mathematics, i.e., the realm of logical possibility, is more primitive than the realm of physical possibility. So I accept (a structure which plays the role of) the natural numbers because I can see clearly how its construction could be carried out, which is enough. I do not see how a structure playing the role of the power set of the natural numbers could be constructed in any comparable way.
In my paper "Analysis in J_2" I outline how the vast bulk of mainstream mathematics can be carried out in an explicitly constructed predicative universe (and essentially the same point has been made by many other authors; see the references in the J_2 paper). The J_2 structure in fact provides a much better fit with normal mathematical practice than the fictional Cantorian universe with its vast cardinals which play absolutely no role in mainstream mathematics and its set-theoretic pathologies which only distract and irritate the core mathematician.

Predicativists need not consider themselves on the defensive. It is only a matter of time before it will be widely recognized that our approach both avoids the metaphysical nonsense of Cantorian set theory and leads to a conception of the mathematical universe which is in greater harmony with normal mathematics.

Nik Weaver
Math Dept.
Washington University
St. Louis, MO 63130 USA
nweaver at math.wustl.edu

More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2006-January/009581.html","timestamp":"2014-04-19T07:03:36Z","content_type":null,"content_length":"6834","record_id":"<urn:uuid:6fe4c491-90e4-45ff-b603-c8cc1fb66dcb>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Math problem

Since I have had great luck in being able to have my questions answered re my rifle problem I would like to throw this out. This is my second time through A MANY-COLORED GLASS, by Freeman Dyson, this week, so I know a thorough understanding of this concept is crucial to understanding this part of the book, and I don't seem to be able to grasp it. The problem is one of physics and has to do with symmetry breaking, going from a disordered state to an ordered state. He writes:

"Another name for the process of phase transition from disorder to order is symmetry breaking. From a mathematical point of view, a disordered phase has a higher degree of symmetry than an ordered phase. For example, the environment of a molecule of water dissolved in humid air is the same in all directions, while the environment of the same molecule after it is precipitated into a snowflake is a regular crystal with a crystalline axis oriented along particular directions. The molecule sees its environment change from the greater symmetry of a sphere to the lesser symmetry of the hexagonal prism. The change in the environment from disorder to order is associated with the sudden loss of symmetry. Sudden loss of symmetry is characteristic of many of the most important phase transitions in the history of the universe."

What I don't get is what does he mean when he says, "...from a mathematical point of view..."? Also puzzling to me is, it seems like a hexagonal prism is more ordered than a water molecule. Water molecules aren't in a distinct shape; they change shape constantly. The hexagonal prism, the result of freezing the water molecule, on the other hand, is rigid. So it seems to me that the hexagonal prism is more ordered than the water molecule. At first I wondered if there was a misprint, but writing subsequent to the above paragraph follows the same logic. I don't get it. I have been stumped before with things like this, that I don't understand, and attempts to help myself learn by following Google threads only serve to confuse me more. In order to understand these problems I generally need a basic explanation or an analogy to which I can relate. Any help, either in a direction of something to read or a basic explanation, would be most certainly appreciated.

Re: Math problem

Originally posted by Tom W: Since I have had great luck in being able to have my questions answered re my rifle problem I would like to throw this out. [...]

Makes sense to me. Symmetry's basic definition as I understand it is division with equal parts, so to speak. For example a pipe, if it were perfect, could be split in half lengthwise and would have bilateral symmetry. One division in mathematical terms. The author is stating that a "disordered phase" has a higher degree of symmetry, meaning it can be divided more times into equal parts, while an ordered phase may have symmetry but will have less opportunity to do so. I'm just a plumber though. Now quit reading books and grease the tractor.

EDIT: CHANGED TO PIPE FOR SLIM.

Last edited by BobsPlumbing; 02-09-2009, 12:10 AM.

Re: Math problem

Originally posted by JCsPlumbing: [...] For example a person, if they were perfect, that could be split in half from head to toe facing you would have bilateral symmetry. [...]

I suppose there would be bilateral symmetry, except their heart would be on one side and their appendix on the other, not to mention all the blood everywhere. And I bet a snowflake formed from a single molecule of water would be dang hard to see! And with TWO atoms of hydrogen and ONE atom of oxygen it looks rather like a silhouette of Mickey Mouse's head.

"Man will do many things to get himself loved, he will do all things to get himself envied." Mark Twain

Re: Math problem

Originally posted by SlimTim: I suppose there would be bilateral symmetry, except their heart would be on one side and their appendix on the other [...]

I was speaking theoretically for an example. Maybe I should have used a piece of pipe.

Last edited by BobsPlumbing; 02-09-2009, 12:11 AM.
When the water was in a liquid state it was in ordered phase, as it changed state into a snowflake it became a disordered phase. "Somewhere a Village is Missing Twelve Idiots!" - Casey Anthony I never lost a cent on the jobs I didn't get! Re: Math problem I don't have a great mind and I may sound real stupid, but here's what I think he means by "From a Mathematical View". The sphere has symetry because it perfectly round or the same even as a three dimentional shape, as opposed to the snowflake? I believe the author speaks of perfection in regards to math because perfection can be an objective term depending on the context. I believe Man is perfect in his imperfection, he is what nature or God intended. Both the sphere and the snowflake are perfect for what they are but only the sphere has perfection on a mathematical level? Did that sound bad? Re: Math problem Many Many Thanks for the answers. I think I got it but it is pretty late for an old guy like me. I will attempt to reread the section tomorrow with your explainations in mind and see if I can wrap my brain around it. Thanks again, Re: Math problem i don't know much about the math, never have and probably never will. but that's okay, i can accept that. i'm thinking planets. spheres. organized and true. rings of saturn, they have a shape but the individual parts that make up the rings are disorganized. clear as mud to me. Re: Math problem The author mentions the "environment of the droplet" compared to the crystal shape of the solid phase of the same droplet. The symmetry of the droplet itself is one thing, but when you add in the environment around it, you somewhat change the equation. Then again, at that point you are not necessarily looking at just symmetry but symmetry in the environment. A single object has whatever symmetry it has. However, if you then associate symmetry as including whatever is surrounding the object, you have to extend the lines of symmetry. When you do that, you induce variability (at least I would think so). As variability increases, constancy decreases and therefore you have a decrease in symmetry. Maybe. I'm all confuzzled now. Thanks... :P In the environment, if you assume the water droplet is in the air, symmetry is only viable as a measurement of the object itself. If you take the molecular symmetry of the water droplet and expand that to the environment, you break symmetry because of the molecular particles in the air itself. In other words, I think the premise is flawed. I put it all back together better than before. There\'s lots of leftover parts. Re: Math problem Originally posted by VASandy View Post The author mentions the "environment of the droplet" compared to the crystal shape of the solid phase of the same droplet. The symmetry of the droplet itself is one thing, but when you add in the environment around it, you somewhat change the equation. Then again, at that point you are not necessarily looking at just symmetry but symmetry in the environment. A single object has whatever symmetry it has. However, if you then associate symmetry as including whatever is surrounding the object, you have to extend the lines of symmetry. When you do that, you induce variability (at least I would think so). As variability increases, constancy decreases and therefore you have a decrease in symmetry. Maybe. I'm all confuzzled now. Thanks... :P In the environment, if you assume the water droplet is in the air, symmetry is only viable as a measurement of the object itself. 
If you take the molecular symmetry of the water droplet and expand that to the environment, you break symmetry because of the molecular particles in the air itself. In other words, I think the premise is flawed. Just when I thought I was beginning to understand... I didn't have a good grasp of the concept originally but when he added the part about the environment that really threw me. But, Oh Boy here I am heading out on a limb with a saw, I also think he is wrong, or at least I don't think his assumption is correct or his premise is flawed or something... But, that said, the passage works, makes sense to me at least, when interpreted from the explainations by the other posters.
{"url":"https://www.ridgidforum.com/forum/general-topics/open-discussion/23864-math-problem?t=23245","timestamp":"2014-04-24T23:32:37Z","content_type":null,"content_length":"126403","record_id":"<urn:uuid:af6a23db-b1e0-49a3-958e-12ef1ad742ee>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem 199: Iterative Circle Packing

Published on Saturday, 21st June 2008, 06:00 am; Solved by 1031

Three circles of equal radius are placed inside a larger circle such that each pair of circles is tangent to one another and the inner circles do not overlap. There are four uncovered "gaps" which are to be filled iteratively with more tangent circles.

At each iteration, a maximally sized circle is placed in each gap, which creates more gaps for the next iteration. After 3 iterations (pictured), there are 108 gaps and the fraction of the area which is not covered by circles is 0.06790342, rounded to eight decimal places.

What fraction of the area is not covered by circles after 10 iterations?

Give your answer rounded to eight decimal places using the format x.xxxxxxxx.
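Not part of the problem statement, but useful background for tangent-circle configurations like this one: Descartes' Circle Theorem relates the curvatures (k = 1/r, taken negative for a circle that encloses the others) of four mutually tangent circles,

\[
(k_1 + k_2 + k_3 + k_4)^2 = 2\,(k_1^2 + k_2^2 + k_3^2 + k_4^2),
\]

so, given three mutually tangent circles, the curvature of a fourth circle inscribed in a gap between them can be solved for directly.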
{"url":"http://projecteuler.net/problem=199","timestamp":"2014-04-20T20:58:00Z","content_type":null,"content_length":"5425","record_id":"<urn:uuid:7596bfc7-ebb8-4e63-bb7e-26988866949f>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Amelie

Total # Posts: 50

You observe someone pulling a block of mass 48 kg across a low-friction surface. While they pull a distance of 3.0 m in the direction of motion, the speed of the block changes from 4.9 m/s to 7.7 m/s. Calculate the magnitude of the force exerted by the person on the block. Thi...

Also, I know the rest energies would cancel out.

You observe someone pulling a block of mass 48 kg across a low-friction surface. While they pull a distance of 3.0 m in the direction of motion, the speed of the block changes from 4.9 m/s to 7.7 m/s. Calculate the magnitude of the force exerted by the person on the block. Thi...

In the approximation that the Earth is a sphere of uniform density, it can be shown that the gravitational force it exerts on a mass m inside the Earth at a distance r from the center is mg(r/R), where R is the radius of the Earth. (Note that at the surface and at the cent...

College Physics

In the approximation that the Earth is a sphere of uniform density, it can be shown that the gravitational force it exerts on a mass m inside the Earth at a distance r from the center is mg(r/R), where R is the radius of the Earth. (Note that at the surface and at the cente...

Oh, I forgot to add the negative sign with 3/7.

1. Get the inverse of each side in cos(θ) = -(3/7). Cos^-1(3/7) = theta. 2. From trig identities, we know sin2θ = 2 sinθ cosθ. Substitute that in for sin2θ and just solve, since you figured out θ in part 1 and you were given what cosθ equals, ...

A hanging wire made out of titanium with diameter 0.080 cm is initially 2.3 m long. When a 78 kg mass is hung from it, the wire stretches an amount 2.89 cm. A mole of titanium has a mass of 47.9 grams, and its density is 4.51 g/cm^3. Based on these experimental measurements, wh...

One mole of tungsten has a mass of 184 grams, and its density is 19.3 grams per cubic centimeter, so the center-to-center distance between atoms is 2.51×10^-10 m. You have a long thin bar of tungsten, 2.2 m long, with a square cross-section 0.16 cm on a side. You hang the...

Jim is able to sell a hand-carved statue for $670, which was a 35% profit over his cost. How much did the statue originally cost him?

The city council has decided to add a 0.3% tax on motel and hotel rooms. If a traveler spends the night in a motel room that costs $55 before taxes, how much will the city receive in taxes from him?

I am studying for an exam, would you also know a good website for math review? ...

If Leah is 6 years older than Sue, and John is 5 years older than Leah, and the total of their ages is 41, then how old is Sue? How do you know what "and", "is", and "older" mean in mathematical terms?

What nucleotide sequences do not code for proteins?

political science

How do local governments get their revenue in the U.S.? And what are the sources?

I am trying to simplify 24a^2/b^4 (the entire expression has a radical sign around it). I got 4a times the radical of 6a/b^4. However, my teacher said this isn't correct on my quiz. Why, and what is the right answer? Thanks!!!

Yes, it is supposed to be a three, not a forty-three.

I had to factor 5x^3-10x^2+43x-6. I got 5x(x^2+-2x)+3(x-2). My teacher said this was half right. What did I do wrong then? Thank you!!

I am unsure how to factor x^3-8. Could you please explain?

I'm sorry I mistyped my answer. I said it was 21x+4. What is wrong with THIS answer? Thanks.

On my quiz for (3x+2)^2, I wrote the answer as 21x=4. My teacher gave me a minus one, which meant it was half wrong. What is wrong about my answer?
Thanks

I'm trying to solve the problem: 6y - [3y - (y+7)]. I got an answer of 4y-7 multiple times. However, my book said the answer is 4y+7. I do not see how the 7 can be positive. I distributed the negative, so therefore, the seven should be negative. Or are parenthesis...

I need to rationalize the denominator of 2/(cube root of 5), (the three being the index). My book suggests you should multiply this by (cube root of 5^2) over (cube root of 5^2). I thought the numbers that you rationalize the original with should be the denominator unalte...

Dalton's Law states that the total pressure of a system is equal to the sum of the partial pressures of all the mixed gases. The equation looks like this: Ptotal = P1 + P2 + P3 + ... Assume that the temperature and volume are constant. When going to high altitudes, the pressure in a mo...

intro to probability

In a study of particulate pollution in air samples over a smokestack, X1 represents the amount of pollutant per sample when a cleaning device is not operating, and X2 represents the amount per sample when the cleaning device is operating. Assume that (X1, X2) has a joint probabi...

intro to probability

For the sheet-metal stamping machine in a certain factory, the time between failures, X1, has a mean time between failures of 56 hours and a variance of 16 hours. The repair time, X2, has a mean time to repair of 5 hours and a variance of 4 hours. a) If X1 and X2 are independen...

Intro to probability

My main problem is deciding which discrete distribution to use: BERNOULLI, BINOMIAL, DISCRETE UNIFORM, GEOMETRIC, NEGATIVE BINOMIAL, OR POISSON. Every time I choose one, it's the wrong one. Is there some way I can easily find out which one to use? Because what I do now is I ...
How many different eight note melodies can be constructed from these twelve notes if: (a) no note can be used more than once? (b) any note can be used as often as you please? ============================... Intro to Probability, check some of my work please 1.) An octave contains twelve different musical notes (in Western music). How many different eight note melodies can be constructed from these twelve notes if: (a) no note can be used more than once? (b) any note can be used as often as you please? ============================... Intro to Probability, please check my work. thank you drwls Intro to Probability, please check my work. A fair coin is flipped three times. You win $5 every time the outcome is heads. Let the random variable X represent the total number of dollars you win. (a) List the sample space. (b) Determine the probability function of X. ====================== Answer: a) 2^3= 8 possibiliti... Intro to Probability Widgets are produced at a certain factory by each of three machines A, B and C. These machines produce 1000, 600 and 400 widgets per day, respectively. The probability that a given widget is defective is 4% for one produced by Machine A, 3% if produced by Machine B, and 2% if ... Intro to Probability In a certain election, the incumbent Republican will run against the Democratic nominee. There are three Democratic candidates, D1, D2 and D3, whose chances of gaining the Democratic nomination are .50, .35 and .15, respectively. Here are the chances that the Republican will w... Intro to probability An octave contains twelve different musical notes (in Western music). How many different eight note melodies can be constructed from these twelve notes if: (a) no note can be used more than once? (b) any note can be used as often as you please? ================================... Intro to probability oh, now wasn't that simple, thank you reiny Intro to probability A certain pizza restaurant offers three different sizes of pizza and eight different toppings. How many distinct pizzas having two different toppings can be made? ==================================== == I think part of the answer to the question is (8)\ (2)/ where I choose 2 to... Math: determinants If det(A)=-7 A= | a b c | | d e f | | g h i | Find det(2A^-1), det((2A)^-1). Find | a g d | | b h e | | c i f | Math: Factoring I'm having a hard time factoring polynomials, especially 3rd degree. -x^3+2x^2+4x-8 -x^3-5x^2+20x+12 I don't know how to begin. I thought long division, but what would I divide by? Consider the following kinetic parameter for a given enzyme: Km=4.7x10-5 M Vmax= 22 nmol/min/mg [i]=5x10-1 mM [s]=2x10-4 M Ki=3x10-4 M Calculate the rate of product formation in the presence of a competitive inhibitor. I know that I have to solve for Vo. I also know that I nee... Consider the following kinetic parameter for a given enzyme: Km=4.7x10-5 M Vmax= 22 nmol/min/mg [i]=5x10-1 mM [s]=2x10-4 M Ki=3x10-4 M Calculate the rate of product formation in the presence of a competitive inhibitor. I know that I have to solve for Vo. I also know that I nee...
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Amelie","timestamp":"2014-04-20T02:47:45Z","content_type":null,"content_length":"19002","record_id":"<urn:uuid:edb2d733-a945-4782-a2c5-2b9ef809135e>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
INTEGERS: After Acceptance

Formatting your paper for INTEGERS: AMS-TeX

Important: Please include any and all line skips that are given below.

I. Please use the following declarations in your AMS-TeX file.

\input amstex

II. Please use the following in your topmatter.

\leftheadtext{\hskip 8pt \smalltt INTEGERS: \smallrm Electronic Journal of Combinatorial Number Theory \smalltt x (200x), \#Axx }
\rightheadtext{\smalltt INTEGERS: \smallrm Electronic Journal of Combinatorial Number Theory \smalltt x (200x), \#Axx \hskip 8pt}

III. For the title page (title, authors, and abstract) please use the following construct.

\centerline{\bf YOUR TITLE HERE IN CAPITAL LETTERS ONLY}
\vskip 20pt
\centerline{\smallit Author One\footnote{any footnote here}, One University, Somewhere, State 00000}
\centerline{\tt me@math.one.edu} (optional)
\vskip 20pt
\centerline{\smallit Author Two\footnote{any footnote here}, Two University, Someplace, Country AAAAA }
\centerline{\tt me@math.one.edu} (optional)
\vskip 30pt
\centerline{\smallit Received:, Accepted:, Published}
\vskip 30pt
\centerline{\bf Abstract}

Put your abstract here. Please limit it to half of a page of text.

\vskip 30pt

Your paper starts here with your first section. See IV below for more information on how to number sections.

IV. In the body of the paper, please label and number your (sub)sections as follows. Please note that we use the command "\subhead" for both sections and subsections.

"\subhead \nofrills{\bf N. Section Name}", where N=1,2,... corresponds to the section number.

"\subhead \nofrills{\bf N.x Subsection Name}", where N=1,2,3,... and x=1,2,3,... correspond to the subsection number.

After the end of each section and before the beginning of the next section please put the line:

\vskip 30pt

V. The references and acknowledgments sections of the paper should not be numbered sections. Hence, you should use "\subhead \nofrills{\bf References}" and/or "\subhead \nofrills{\bf Acknowledgments}." Feel free to label your references as you see fit.
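Assembled for convenience, the fragments in sections I-V fit together roughly like the skeleton below (a sketch only: the title, author, and section numbering are placeholders, the closing \bye is an assumption about how the file ends, and any style-file requirements of your own AMS-TeX setup still apply):

\input amstex

\leftheadtext{\hskip 8pt \smalltt INTEGERS: \smallrm Electronic Journal of Combinatorial Number Theory \smalltt x (200x), \#Axx }
\rightheadtext{\smalltt INTEGERS: \smallrm Electronic Journal of Combinatorial Number Theory \smalltt x (200x), \#Axx \hskip 8pt}

\centerline{\bf YOUR TITLE HERE IN CAPITAL LETTERS ONLY}
\vskip 20pt
\centerline{\smallit Author One, One University, Somewhere, State 00000}
\centerline{\tt me@math.one.edu}
\vskip 30pt
\centerline{\smallit Received:, Accepted:, Published}
\vskip 30pt
\centerline{\bf Abstract}
Abstract text here (at most half a page).
\vskip 30pt

\subhead \nofrills{\bf 1. Introduction}
First section starts here.
\vskip 30pt

\subhead \nofrills{\bf References}

\bye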
{"url":"http://www.emis.de/journals/INTEGERS/amstexformat.html","timestamp":"2014-04-21T07:46:50Z","content_type":null,"content_length":"3425","record_id":"<urn:uuid:6e92f66d-2a82-4e7f-b7fa-94bbd771d48b>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
A Detailed Description of the LIFT Model

1. General Concepts

Several principles guide Inforum's modeling efforts, which include LIFT (a 97-sector U.S. model), Iliad (a 360-sector U.S. model), and an international family of interindustry models. Behavioral equations in the models are estimated for detailed sectors, as functions of sector-specific variables. The models are dynamic, with changing I-O coefficients and with investment dependent upon the rate of growth of output. The models forecast a specific sequence of future years, not an equilibrium at some future point without specifying the path to the equilibrium. The parameters of the various equations must be sensible because the models are put to practical use. The causation in these models runs from the sectoral detail to the macroeconomic totals. The central input-output identities provide structural consistency to the models.

2. The LIFT Model - Its Implementation

LIFT is comprised of three main components: 1) the real side, 2) the price-income side, and 3) the accountant. The real side estimates final demands, output by producing sector, and labor requirements. The price side estimates both the components of gross product originating by industry (value-added) and unit prices by product. The accountant closes the model with respect to income, determines the economic aggregates, and estimates transactions which have not been calculated elsewhere in the model. The components are run iteratively until the model converges on a solution.

The Real Side

In the real side of LIFT, equations for final demands are evaluated and production and labor requirements are calculated for 97 producing sectors. Government purchases are exogenous; other components of final demand are determined by behavioral equations. Personal consumption equations have been estimated for nearly eighty categories of expenditures, using relative prices, real income, and demographic variables. Equipment investment equations have been estimated for 56 industries, and depend upon changes in industry outputs and changes in the relative prices of capital, labor, and energy. Construction is determined for 25 categories of structures. The Inforum International System contributes product-specific explanatory variables for foreign trade. Exports by product are a function of foreign demands for imports and relative prices, which incorporate exchange rate movements. Imports by product are a function of product-specific domestic demand and relative foreign to domestic prices.

Solving the I-O equation q = Aq + f yields output by sector. The solution for output is an iterative one. Because current output, imports, and inventory change depend upon one another, these three sets of equations are solved together. (Another iterative loop includes equipment and construction investment in the determination of output.) Labor productivity (output per hour) for the 97 sectors is estimated as a function of trends and changes in output. The equations recognize that the influence of output is not symmetric over the business cycle. Employment is determined by labor productivity, output, and the length of the work year.

The Price-Income Side

To determine unit prices for 97 products, we solve the dual equation, p = pA + v (unit prices, p, are the sum of unit material costs, pA, plus unit value-added costs, v). Value-added by industry is determined from equations for the components of Gross Product Originating (GPO) by some forty industries.
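To make the two identities concrete, here is a toy sketch in R (purely illustrative: a made-up 3-sector coefficient matrix, not LIFT's actual 97-sector data):

# Toy 3-sector input-output example (all numbers invented).
A <- matrix(c(0.10, 0.20, 0.05,
              0.15, 0.05, 0.10,
              0.05, 0.10, 0.15), nrow = 3, byrow = TRUE)  # I-O coefficients
f <- c(100, 50, 80)     # final demand by product
v <- c(0.5, 0.6, 0.4)   # unit value-added by product

# Output: q = Aq + f  implies  q = (I - A)^{-1} f
q <- solve(diag(3) - A, f)

# Unit prices (the dual): p = pA + v  implies  p = v (I - A)^{-1},
# i.e. solve the transposed system:
p <- solve(t(diag(3) - A), v)

LIFT itself solves for output iteratively together with imports and inventory change, as described above; the direct solve just shows what the identities pin down.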
The real side of the model is defined in terms of products. Final demands are demands for products. Statistics on prices measure the prices of products. (For these reasons, the A-matrix reflects a commodity technology.) However, statistics on the factors of production (from the National Accounts) -- labor income, capital income, and indirect taxes -- reflect the organization of firms. Therefore, to translate between the real side's product classification and the income side's industry classification, we have constructed a "Product-to-Industry" Bridge. This bridge is similar to, but somewhat different from, the Make Table (which identifies where, in terms of industries, products are made). Our bridge translates value-added between its product and industry classification. This translation is made in both directions. When the GPO equations need an indicator of real activity, the bridge is used to produce "constant-price, value-added weighted, output." Alternatively, when we have determined nominal GPO by industry, we use the bridge to translate it into our estimate of value-added by product, the v vector. Labor compensation is determined by hours (from the real side) and equations for average hourly compensation ("wage" rates). Corporate profits and proprietor income, by industry, are functions of material and labor costs, and of various measures of economic activity (growth in output, changes in unemployment, etc.). Net interest payments are a function of interest rates. Interest rates are influenced by the rate of economic growth, by the rate of inflation, and by monetary tightness. (On the real side, they influence investment activity.) Other equations determine the remaining components of capital income: capital consumption allowances, inventory valuation adjustments, subsidies, and business transfer payments. Indirect business taxes (sales taxes, property taxes, excise taxes) are the other component of GPO. The Accountant The third part of the model is known as the "accountant," for it does the work of the national income accountant. It compiles the aggregate national account tables by summing up the sectoral detail for final demands and income by industry. It determines aggregate prices as a weighted sum of product prices. It converts value-added information into personal income. It determines nominal GDP by applying the estimates of unit prices to the real (constant dollar) estimates of final demand. The accountant constructs personal income as the sum of labor income, proprietor income, and dividends (from the income side), interest income from business and from government, and transfer payments from government and business. Taxes are removed from personal income to yield disposable income. When deflated, it becomes real disposable income, the variable used to explain the real side's personal consumption expenditures. The savings function is also calculated by the accountant. It is a function of the unemployment rate, the percentage change in income, auto purchases as a share of PCE, interest payments as a share of income, personal contributions to social insurance as a share of income, and inflation. A key feature in the stability of the model is the role of the unemployment rate in several equations. As economic activity slackens, the savings rate falls. Thus, consumers spend a larger share of their income and help stimulate demand. 
On the price side, an increasing unemployment rate moderates increases in several of the components of income by industry (wage rates and profits in particular), thus moderating inflation and keeping up the level of real income.

In sum, the current Inforum model of the U.S. economy, LIFT, is a macroeconomic interindustry model in that it determines all the variables usually considered in macroeconomics -- income, savings, employment, unemployment, inflation, interest rates, etc. -- but the model differs from most other macro models, for industry detail is central in the model's structure and causation.

For a more detailed description of the LIFT model, please review the working paper entitled "The LIFT Model", by Douglas Meade (2001). A published paper on an older version of the model is "LIFT: Inforum's Model of the U.S. Economy", in Economic Systems Research, volume 3(1), 1991.
{"url":"http://inforumweb.umd.edu/services/models/liftdetail.html","timestamp":"2014-04-19T22:24:04Z","content_type":null,"content_length":"21421","record_id":"<urn:uuid:f60bc9a5-7c13-4669-9f85-0886bcf4a313>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
You are talking about how the road is banked (sloped) such that it is higher on the outside of the curve, right? It has to do with the support force (normal force) from the surface of the road. If the road is banked, then the normal force (which points perpendicular to the surface) will have a component that points toward the center of the circle (the circle that the curve is part of).
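A quick sketch of the force balance behind this (the standard frictionless-banking result; theta, r, v, N, and m are my labels for the bank angle, curve radius, speed, normal force, and mass, and none of them appear in the post above):

N cos(theta) = m g          (vertical equilibrium)
N sin(theta) = m v^2 / r    (horizontal: the centripetal component)

Dividing the two gives tan(theta) = v^2 / (g r). So at the design speed, the horizontal component of the normal force alone supplies the centripetal force, with no reliance on friction.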
{"url":"http://www.physicsforums.com/showthread.php?t=235094","timestamp":"2014-04-19T07:34:49Z","content_type":null,"content_length":"28127","record_id":"<urn:uuid:0d206238-df93-406d-996b-9bd534609daa>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Spectrum Sharing: Fundamental Limits, Scaling Laws, and Self-Enforcing Protocols

Raul Hernan Etkin
EECS Department
University of California, Berkeley

Technical Report No. UCB/EECS-2006-168
December 11, 2006

Spectrum sharing arises whenever multiple wireless systems operate in the same frequency band. Due to mutual interference, the performance of the systems is coupled and tradeoff occurs between the rates that can be simultaneously achieved.

The fundamental characterization of this tradeoff is given by the capacity region of the interference channel, whose determination is an open problem in information theory. For the case of the two-system Gaussian interference channel we provide a characterization of this capacity region to within a single bit/s/Hz. This characterization is obtained by deriving new outer bounds, and by using simple communication schemes that are special cases of the ones introduced by Han and Kobayashi. When many systems share the band, the interference cancellation techniques used in the two-system case may not be feasible, and a large fraction of the interference must be treated as noise. As the number of systems grows, interference aggregates and limits performance. We study the statistical properties of the interference aggregation phenomenon in a random network model and determine how to mitigate the strongest interference components.

Frequency selective fading can provide gains in a multi-user system due to multi-user diversity. We investigate whether similar gains can be achieved in multi-system spectrum sharing situations. For this we fix the rate of each system and study how the required bandwidth scales as the number of systems M grows large. While for Rayleigh fading the multi-user diversity gain provides bandwidth savings of the order of log(log M), the multi-system diversity gain can provide larger bandwidth savings, of order log M.

We lastly consider the problem of incentives in spectrum sharing. Systems are often independent and selfish. We investigate whether efficiency and fairness can be obtained with self-enforcing spectrum sharing rules that do not require cooperation among the systems. Any self-enforcing protocol must correspond to an equilibrium of a game. We first analyze the possible outcomes of a one shot game, and notice many inefficient solutions. However, since systems often coexist for long periods, a repeated game is more appropriate to model their interaction. In the repeated game, the possibility of building reputations and applying punishments enables a larger set of self-enforcing outcomes. When this set includes the optimal operating point, efficient, fair, and incentive compatible spectrum sharing becomes possible. We prove that our results are tight and quantify the best achievable performance in non-cooperative scenarios.

Advisors: David Tse and Abhay Parekh

BibTeX citation:

@phdthesis{Etkin:EECS-2006-168,
    Author = {Etkin, Raul Hernan},
    Title = {Spectrum Sharing: Fundamental Limits, Scaling Laws, and Self-Enforcing Protocols},
    School = {EECS Department, University of California, Berkeley},
    Year = {2006},
    Month = {Dec},
    URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-168.html},
    Number = {UCB/EECS-2006-168},
    Abstract = {Spectrum sharing arises whenever multiple wireless systems operate in the same frequency band. Due to mutual interference, the performance of the systems is coupled and tradeoff occurs between the rates that can be simultaneously achieved. The fundamental characterization of this tradeoff is given by the capacity region of the interference channel, whose determination is an open problem in information theory. For the case of the two-system Gaussian interference channel we provide a characterization of this capacity region to within a single bit/s/Hz. This characterization is obtained by deriving new outer bounds, and by using simple communication schemes that are special cases of the ones introduced by Han and Kobayashi. When many systems share the band, the interference cancellation techniques used in the two-system case may not be feasible, and a large fraction of the interference must be treated as noise. As the number of systems grows, interference aggregates and limits performance. We study the statistical properties of the interference aggregation phenomenon in a random network model and determine how to mitigate the strongest interference components. Frequency selective fading can provide gains in a multi-user system due to multi-user diversity. We investigate whether similar gains can be achieved in multi-system spectrum sharing situations. For this we fix the rate of each system and study how the required bandwidth scales as the number of systems M grows large. While for Rayleigh fading the multi-user diversity gain provides bandwidth savings of the order of log(log M), the multi-system diversity gain can provide larger bandwidth savings, of order log M. We lastly consider the problem of incentives in spectrum sharing. Systems are often independent and selfish. We investigate whether efficiency and fairness can be obtained with self-enforcing spectrum sharing rules that do not require cooperation among the systems. Any self-enforcing protocol must correspond to an equilibrium of a game. We first analyze the possible outcomes of a one shot game, and notice many inefficient solutions. However, since systems often coexist for long periods, a repeated game is more appropriate to model their interaction. In the repeated game, the possibility of building reputations and applying punishments enables a larger set of self-enforcing outcomes. When this set includes the optimal operating point, efficient, fair, and incentive compatible spectrum sharing becomes possible. We prove that our results are tight and quantify the best achievable performance in non-cooperative scenarios.}
}

EndNote citation:

%0 Thesis
%A Etkin, Raul Hernan
%T Spectrum Sharing: Fundamental Limits, Scaling Laws, and Self-Enforcing Protocols
%I EECS Department, University of California, Berkeley
%D 2006
%8 December 11
%@ UCB/EECS-2006-168
%U http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-168.html
%F Etkin:EECS-2006-168
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2006/EECS-2006-168.html","timestamp":"2014-04-21T07:03:59Z","content_type":null,"content_length":"10026","record_id":"<urn:uuid:531fcac2-149c-4b37-b3d4-c9aed410022c>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Advances in Algebra

A number of important algebraic results had been calculated by Babylonian mathematicians around 2000 B.C. The Egyptians also addressed the solution of algebraic problems, but did not advance as far, possibly due to their more cumbersome number system. The Greeks used algebraic methods in support of their interests in geometry and in the theory of numbers. The Arabs preserved Greek mathematical manuscripts and integrated Greek and Hindu ideas on algebra in books that would reach Italy during the Renaissance and stimulate a development of algebraic ideas and important new results. In its modern and elementary sense, algebra is the branch of mathematics concerned with finding the values of unknown quantities defined by mathematical relationships. The Babylonians were the first group to concern themselves, or at least to have left a record of their concern, with algebraic problems. The term "Babylonian...
{"url":"http://www.bookrags.com/research/advances-in-algebra-scit-01123/","timestamp":"2014-04-16T16:24:35Z","content_type":null,"content_length":"33824","record_id":"<urn:uuid:0756f23a-0a0c-4045-801f-f1cee5b6de87>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00219-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture Notes

One interesting example of structural induction would be to prove that inorder . reflect == reverse . inorder, where reflect is defined as follows:

reflect :: BST a -> BST a
reflect Nil = Nil
reflect (Node x l r) = Node x (reflect r) (reflect l)

This would be a slightly more elaborate structural induction over trees. There are countless examples of structural induction over lists. A notable class of things that are proven using induction deals with optimization issues. This class includes fusion laws, which can be applied to code to reduce the number of traversals of lists. One of the simplest fusion laws is (map f) . (map g) == map (f . g). This illustrates the concept quite nicely (we do one map instead of two). Such properties have relatively simple proofs by structural induction, which you are invited to try.

Still on optimization, using accumulating parameters to improve the efficiency of the code is fine, but writing code for such a thing is more complex than a straightforward implementation. For any function with accumulating arguments you should make sure (preferably by a proof) that you are implementing the same thing. An example is that one should prove (using structural induction) that reverse x == pour x [] to make sure that the optimization in pour did not break anything. You are invited to try it out.
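For reference, the notes never show pour itself; a definition along the following lines is presumably what is intended, namely the standard accumulating-parameter reverse (the name pour and this exact formulation are an assumption on my part):

pour :: [a] -> [a] -> [a]
pour []     acc = acc               -- the accumulator already holds the reversed list
pour (x:xs) acc = pour xs (x:acc)   -- move the head onto the accumulator

The proof that reverse x == pour x [] then goes through by proving the stronger lemma reverse x ++ acc == pour x acc by structural induction on x, and instantiating acc = [].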
{"url":"http://cs.ubishops.ca/home/csc206/node4.html","timestamp":"2014-04-17T06:45:04Z","content_type":null,"content_length":"8291","record_id":"<urn:uuid:5c248a90-9606-4280-8978-aaad44519855>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00474-ip-10-147-4-33.ec2.internal.warc.gz"}
Warning: The lesson plans listed below come from a variety of different sources. Their origins are not yet labelled and the links take you directly to foreign sites.

• Pre-algebra
  □ Three Skills for Algebra. The three skills explain why letters or symbols are favored in algebra in place of numbers.
  □ Expressions With Variables
• Functions and Graphs
• Integers
• Rational Numbers
• Equations in One Variable
• Equations in Two Variables
• Simultaneous Equations
• Problem Solving
• Exponents and Logarithms
• Polynomials
• Factoring
• Fractional Equations
• Square Roots
• Quadratic Equations
• Inequalities
• Complex Numbers
• Sequences and Series
• Matrices

Do you have lesson plans or ideas on these topics that you'd like to share? Send them in!

Ruth Carver, Margaret Sinclair
18 July 1995
{"url":"http://mathforum.org/sum95/ruth/ideas.html","timestamp":"2014-04-21T15:27:47Z","content_type":null,"content_length":"7394","record_id":"<urn:uuid:17ec88ef-2c33-44c8-ba8f-1099778a30d3>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00604-ip-10-147-4-33.ec2.internal.warc.gz"}
Edmonston, MD Algebra 2 Tutor

Find an Edmonston, MD Algebra 2 Tutor

...As a private tutor, I have accumulated over 750 hours assisting high school, undergraduate, and returning adult students. And as a research scientist, I am a published author and have conducted research in nonlinear dynamics and ocean acoustics. My teaching focuses on understanding concepts, connecting different concepts into a coherent whole and competency in problem solving.
9 Subjects: including algebra 2, calculus, physics, geometry

...Everyone learns differently, and the beauty of tutoring is that we can adapt our approach on-the-fly to address just what it is that one specific student is stumbling over. Apart from doing some TA work in college, I have never formally taught Calculus. (Formally, I've taught Latin and computer pr...
18 Subjects: including algebra 2, calculus, geometry, writing

Hello students and parents! I am a biological physics major at Georgetown University and so I have a lot of interdisciplinary science experience, most especially with mathematics (Geometry, Algebra, Precalculus, Trigonometry, Calculus I and II). Additionally, I have tutored people in French and Che...
11 Subjects: including algebra 2, chemistry, calculus, French

...I tutor students in math and French (fluent). I have experience with students of all ages and level of education. In general, I favor a direct interaction approach with students, however that method may vary with the subject being taught and the actual student level of knowledge. Looking forward to hearing from you.
28 Subjects: including algebra 2, English, reading, physics

...I have been playing chess for most of my life and have been able to teach people on the fly. I am an active participant on gameknot.com which I have been playing for over 5 years. I just tutored my first student in chess and it went quite smoothly!
31 Subjects: including algebra 2, reading, English, writing
{"url":"http://www.purplemath.com/Edmonston_MD_algebra_2_tutors.php","timestamp":"2014-04-20T21:14:25Z","content_type":null,"content_length":"24440","record_id":"<urn:uuid:cb43747d-3e50-404d-b9d6-02d9219d9ae9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Dobransky | Humanitarian Versus National Interests

American Diplomacy rarely publishes statistical analysis. But because this paper analyzes the debate over foreign aid's purposes and motivations, we thought it might be of interest to our readers. The paper examines the most recent data on the recipients of official development assistance, presenting fourteen traditional hypotheses regarding foreign aid, and it challenges each one with socio-economic statistics from the CIA's World Factbook online. The paper concludes that national interest motivations appear to be a major factor in foreign aid, contrary to the many publicly stated explanations. –Ed.

Foreign aid has always been a hotly debated topic in the post-WWII period. With the inception of the Marshall Plan, the United States carried out one of the greatest economic recovery acts in history. As the Cold War intensified, the idea of the Marshall Plan was expanded to include the rest of the developing world, many of whom were just getting out of the throes of European colonialism and were ripe targets for both superpowers. The Organization for Economic Cooperation and Development (OECD) was created in 1960 by twenty of the wealthiest countries to promote better economic relations between countries. The Development Assistance Committee (DAC) was established thereafter within the OECD to support economic development in the Third World. All donor aid that was at least 25% in grants was considered to be Official Development Assistance (ODA), as opposed to loans and other conditional assistance by institutions like the World Bank. The intentions were ideal, but the argument remained: was ODA primarily for economic and humanitarian reasons, or did national interests and power politics influence it in reality?

The scholarly debate continues to this day as to what the primary purpose of foreign aid is and should be. Should foreign assistance be for humanitarian purposes and the promotion of socio-economic development in the impoverished world? Or should it be for national interests and the promotion of security and power? Hans Morgenthau complained more than half a century ago that he, as a realist, could not fully answer this question, given the mixed picture of interests and stated intentions. With current OECD/DAC data along with the CIA's online World Factbook, we may now be able to get a better understanding of what the primary purpose(s) of foreign aid may be. This research paper analyzes and evaluates ODA and its recipients by looking at key socio-economic data to determine what may be driving ODA's direction and purpose.

A Brief Scholarly Review

Proponents of foreign aid as a humanitarian goal have been numerous and vocal throughout the past half century. Among scholars, David Lumsdaine was the leading proponent of the humanitarian strand in foreign assistance. In Moral Vision in International Politics (1993), Lumsdaine argues that modern foreign assistance must have had a strong moral fiber since it reached hundreds of billions of dollars and was supported by large numbers of constituents inside and outside of government for decades. He notes, like others (Wood 1986, and Ruttan 1996, et al.), that nothing in all of history compares with the massive wealth transfer between countries (peacefully and voluntarily) and the declared ideal humanitarianism. Since the end of the Cold War, many scholars (Meernik, Krueger, and Poe 1998, Lancaster 2007, and Heckelman and Knack 2008, et al.)
have recognized that foreign assistance has taken on a much greater emphasis for promoting economic development, as well as democracy. The stated goals and ideals of foreign aid are widely promoted, but they often may not reflect reality. The humanitarian research stresses the amount, types, and recipients of foreign aid. The OECD and others have collected much data showing the totals, distribution, and categories of foreign aid. Few studies, however, have attempted to measure the real-world results and effectiveness of foreign aid over the decades, as well as the statistical relationship between ODA and specific humanitarian/national interests; general totals and targets yes, but not the outcomes and socio-economic details. After trillions of ODA over the last several decades, it behooves us to assess the current recipients’ key socio-economic data and, then, determine whether or not humanitarian or national interests have prevailed. Baldwin (1966 and 1985), Hook (1995), and Lancaster (2007), et al. have argued that foreign aid can be driven by national interests, including security and power politics. David Baldwin’s Economic Statecraft (1985) and Steven Hook’s National Interest and Foreign Aid (1995) are the classic examples showing foreign aid being used primarily for power interests. They state that throughout the Cold War it could be demonstrated that a number of major powers directed their amount and type of foreign assistance to countries that corresponded with their national interests—i.e. security, political, economic, and ideological interests. They point out that foreign aid was often framed domestically as promoting the national interests and that the direction and character of foreign assistance supported these claims. They stress that regardless of the secondary purposes and results that the humanitarianism may have garnered, the underlying purpose of foreign aid was and still is hardcore national interests. They support their claims by looking at the total amount and specific recipients of foreign aid in the post-WWII period and they conclude that the national security and economic interests of the donors must have played a significant and overriding role in which they gave their foreign aid. Research Data and Methods In order to assess whether primarily humanitarian or national interests influence ODA, this paper will use the most recent data (2008) from OECD/DAC. The DAC reports include a list of all of the recipients of ODA, and each recipient’s total GNI (gross national income, which is similar to gross domestic product), GNI per capita, and total population. It should be noted that of the 151 recipients of ODA, not all recipients were sovereign countries and that some GNI and GNI per capita data were not in the DAC reports, so current GDP and GDP per capita were used. All this and remaining data and variables were collected from the CIA’s World Factbook online. The main research question is whether primarily humanitarian or national interests determine ODA. The dependent variable is Net ODA. The independent variables are socio-economic and military factors that may influence the total amount of ODA and each one in itself proffers a hypothesis. The fourteen independent variables are GNI (GDP) per capita, population size, GNI (GDP) total, literacy, education, life expectancy, infant mortality, and urbanization levels, HIV/AIDS rate, total arable land, total exports, total imports, external debt amount, and military expenditures as a percentage of GDP. 
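As a sketch of the statistical setup behind the hypothesis tests reported below (my notation; the paper itself reports only the resulting statistics), each bivariate test amounts to an OLS model

Net ODA_i = b0 + b1 * x_i + e_i,

where x_i is the independent variable in question. Significance is judged by the t-statistic t = b1_hat / SE(b1_hat): in a large sample, |t| > 1.96 corresponds to a 95% confidence interval for b1 that excludes zero and a p-value below 0.05, which are exactly the three criteria cited for each hypothesis.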
Each of these variables is defined and given by DAC and the CIA (see Appendix for specific descriptions). This research project carries out OLS regression analysis to determine the strength of the relationship between Net ODA and each of the independent variables. It then uses multiple regression to test a number of grouped variables and all the variables together, to determine if the overall analysis and results change according to the different combinations, and whether the variables are jointly significant (the F significance value).

Hypotheses and Findings

The fourteen hypotheses and independent variables that are tested with Net ODA produced some interesting results. The following data and brief interpretations are based upon OLS regression results using STATA 10.0. More complete tests/findings can be found in the Appendix. A brief summary is as follows.

Hypothesis #1: The lower the GNI per capita of an ODA recipient, the greater the total ODA. This is a significant variable, with a 95% Confidence Interval (the interval does not include zero), a T value of 2.93 (greater than 1.96), and a P value of 0.004 (less than 0.05). Its Adjusted R-squared does not appear to be significant in itself, with a 0.0482 figure explaining the DV variance in the model, which means that other relevant variables are needed to see the bigger picture. But, like the rest of the independent variables that we will see below which all have individual Adjusted R-squared numbers that appear insignificant, taken together in multiple regression at the end, the Adjusted R-squared becomes 0.1717 (R-squared is 0.2925), which is significant given this particular study, especially when it comes to national interest factors which stated publicly are not supposed to enter the equation at all. In other words, even small figures for the individual variables and as a whole could end up being significant if they are not specifically related to humanitarian reasons.

Hypothesis #2: The larger the impoverished recipient's population is, the greater the ODA. This is a significant variable, with a 95% Confidence Interval, a T value of 2.49, and a P value of 0.014. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0335 figure explaining the DV variance in the model, which means that other relevant variables are needed to get a better picture.

Hypothesis #3: The less the recipient's GNI, the higher the ODA. This is not a significant variable, with no 95% Confidence Interval, a T value of 1.38, and a P value of 0.168. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0061 figure explaining the DV variance in the model, which means that other relevant variables are necessary for a fuller view.

Hypothesis #4: The lower the literacy level of a recipient, the more ODA is given. This is a significant variable, with a 95% Confidence Interval, a T value of 3.28, and a P value of 0.001. Its Adjusted R-squared does not appear to be significant in itself (though higher than the previous ones), with a 0.0635 figure explaining the DV variance in the model, which means that other relevant variables are needed.

Hypothesis #5: The lower the education level of a recipient, the greater the ODA. This is a significant variable, with a 95% Confidence Interval, a T value of 2.36, and a P value of 0.020. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0360 figure explaining the DV variance in the model, which means that other relevant variables would help give us a better picture.
Hypothesis #6: The lower the life expectancy in a recipient, the more ODA. This is a significant variable, with a 95% Confidence Interval, a T value of 2.72, and a P value of 0.007. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0418 figure explaining the DV variance in the model, which means that other relevant variables are needed to complete the view. Hypothesis #7: The higher the infant mortality rate in a recipient, the greater the ODA. This is a significant variable—in fact, the most significant of all the variables—with a 95% Confidence Interval, a T value of 3.65, and a P value of 0.000. Its Adjusted R-squared does not appear to be significant in itself (but, it is higher than all others), with a 0.0774 figure explaining the DV variance in the model, which means that other relevant variables are needed to see the bigger picture but this one moves us closer. Hypothesis #8: The lower the urbanization level of a recipient, the higher the ODA. This is not a significant variable, with no 95% Confidence Interval, a T value of 1.14, and a P value of 0.254. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0021 figure explaining the DV variance in the model, which means that other relevant variables are needed to get a fuller picture. Hypothesis #9: The higher the HIV/AIDS rate in a recipient, the greater the ODA. This is not a significant variable—in fact, it is the least significant variable of them all—with no 95% Confidence Interval, a T value of 0.12, and a P value of 0.908. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0084 figure explaining the DV variance in the model, which means that other relevant variables are very much needed to see the bigger picture. Hypothesis #10: The less arable land a recipient has, the more ODA is given. This is not a significant variable, with no 95% Confidence Interval, a T value of 1.34, and a P value of 0.183. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0053 figure explaining the DV variance in the model, which means that other relevant variables are necessary. Hypothesis #11: The more the recipient exports, the higher the ODA. This is not a significant variable, with no 95% Confidence Interval, a T value of 1.28, and a P value of 0.201. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0043 figure explaining the DV variance in the model, which means that other relevant variables are needed to get a better view. Hypothesis #12: The more the recipient imports, the higher the ODA. This is not a significant variable, with no 95% Confidence Interval, a T value of 1.67 (though, close), and a P value of 0.097. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0118 figure explaining the DV variance in the model, which means that other relevant variables would be helpful in getting a wider picture (see the end for multiple regression using exports and imports variables to find a significant relationship in both these variables when together with Net ODA). Hypothesis #13: The larger the external debt of the recipient, the more ODA it receives. This is a significant variable, with a 95% Confidence Interval, a T value of 2.41, and a P value of 0.017. 
Its Adjusted R-squared does not appear to be significant in itself, with a 0.0328 figure explaining the DV variance in the model, which means that other relevant variables are needed, though this is more of a national interest variable (as opposed to a humanitarian variable) so this small percentage of 3.28% may be significant in capturing variance in some cases. Hypothesis #14: The higher the recipient’s percentage of military expenditures per GDP, the more ODA it receives. This is a significant variable, with a 95% Confidence Interval, a T value of 2.26, and a P value of 0.025. Its Adjusted R-squared does not appear to be significant in itself, with a 0.0324 figure explaining the DV variance in the model, which means that other relevant variables are necessary to get a better picture, though once again this is clearly a national interest variable so this small percentage of 3.24% may be significant in capturing the variance in a number of cases. Final Analysis: Exports and Imports Regression Model and All the Independent Variables in Multiple Regression. The Exports and Imports independent variables together are significant, with a 95% Confidence Interval for both of them, a T value of -2.54 (exports) and 2.76 (imports), and a P value of 0.012 (exports) and 0.007 (imports). The Adjusted R-squared does not appear to be significant in itself, with a 0.0466 figure explaining the DV variance in the model, which means that other relevant variables are needed to see the bigger picture. Yet, the two variables together produce significant results compared to when the variables were analyzed separately. All fourteen independent variables together in a multiple regression model produce surprising changes in the overall results of the simple regression models. Six IVs (GNI per capita, population, literacy, education, life expectancy, and external debt) go from significant to not significant (though the external debt variable was very close to being significant). The six IVs that were not significant in the simple regression models (GNI, urbanization, HIV/AIDS, arable land, exports, and imports) remained insignificant in the multiple regression model. The infant mortality and military expenditures variables were the only two variables that were significant in both the simple and multiple regression models. What is notable is that the percentage of military expenditures per GDP variable overtakes the infant mortality variable as the predominant IV in this multiple regression model. The military expenditures variable is significant, with a 95% Confidence Interval, a T value of 2.99, and a P value of 0.004, which is an improvement over the simple regression model, while the infant mortality variable is reduced but is still significant. The overall model’s Adjusted R-squared is 0.1717 (R-squared is 0.2925), which is a significant increase over the simple regression figures, meaning that the fourteen independent variables together explain approximately 17% (29% with R-squared) of the DV variance in the model, which is reasonably significant when considering the overall picture and the fact that just two of the fourteen variables were significant in themselves. Overall, these research findings provide an interesting set of results. 
On the one hand, in the simple regression models, they support the argument that humanitarian interests play a significant role in foreign aid, with 6 of the 8 significant independent variables being at least nominally for humanitarian reasons (GNI per capita, population, literacy, education, life expectancy, and infant mortality). On the other hand, national interests do play a significant role in the simple regression models of foreign aid, with the recipient's external debt and military expenditures being significant. Furthermore, after testing multiple variables for further analysis, it was found that exports and imports together play a significant role in influencing net ODA, and military expenditures is the most significant variable in the multiple regression model. This would support the national interest perspective that major donors/powers will be influenced security-wise by the natural resources/products a developing country exports, along with its willingness to import from the donor country, especially when it comes to what is called tied aid (i.e., a donor's foreign aid is conditioned on the recipient using the funds to purchase goods and services from the donor itself). With exports and imports together, it means that 10 of the 14 independent variables in the simple regression models are significant, with nearly half (4) being in the category of national interests.

In the end, the military expenditures variable was a surprise from an ODA perspective, in both the simple and multiple regression models. It is common for bilateral military assistance to be given, but for ODA to appear to be influenced heavily by military expenditures is a significant finding. The regression analysis on exports and imports with net ODA was another interesting finding that warrants further investigation, especially in terms of energy security. Finally, the external debt variable is an important finding. Donors may be sending ODA to recipients in order to have them pay back their debts to donors and donor financial institutions, and to prevent the recipients from defaulting on donor loans. This significant relationship between ODA and the recipient's external debt demands further research. If ODA is being determined significantly by a recipient's debt to foreign creditors, then it may be the donor's special financial interests and powers that are determining a large part of ODA, especially the amount and recipient. This would contradict the stated humanitarian grounds of foreign aid and suggest that donor financial interests and institutions are playing a very influential role in the disbursement of ODA, which would be another significant finding. Moreover, the dismal and most recent socio-economic statistics of many recipients after decades of foreign aid indicate that there may be other non-humanitarian factors involved. All in all, these findings suggest that national interests play a significant role in ODA and that there are ripe opportunities for further research in the foreign assistance area.

References

Baldwin, David A. (1966), Foreign Aid and American Foreign Policy. New York: Frederick A. Praeger.

———. (1985), Economic Statecraft. Princeton, NJ: Princeton University Press.

Central Intelligence Agency, CIA World Factbook. http://www.cia.gov/library/publications/the-world-factbook (accessed February 28, 2010).

Heckelman, Jac C. and Stephen Knack (2008), "Foreign Aid and Market-Liberalizing Reform." Economica 75: 524-548.

Hook, Steven W. (1995), National Interest and Foreign Aid. Boulder, CO: Lynne Rienner Publishers.
Lancaster, Carol (2007), Foreign Aid: Diplomacy, Development, Domestic Politics. Chicago: University of Chicago Press.

Lumsdaine, David Halloran (1993), Moral Vision in International Politics: The Foreign Aid Regime, 1949-1989. Princeton, NJ: Princeton University Press.

Meernik, James, Eric L. Krueger, and Steven C. Poe (1998), "Testing Models of U.S. Foreign Policy: Foreign Aid during and after the Cold War." The Journal of Politics 60 (1): 63-85.

Organization for Economic Cooperation and Development, OECD and Development Assistance Committee Reports and Data. http://www.oecd.org (accessed February 16, 2010).

Ruttan, Vernon W. (1996), United States Development Assistance Policy: The Domestic Politics of Foreign Economic Aid. Baltimore: The Johns Hopkins University Press.

Schraeder, Peter J., Steven W. Hook, and Bruce Taylor (1998), "Clarifying the Foreign Aid Puzzle: A Comparison of American, Japanese, French, and Swedish Aid Flows." World Politics 50: 294-323.

Wood, Robert E. (1986), From Marshall Plan to Debt Crisis: Foreign Aid and Development Choices in the World Economy. Berkeley, CA: University of California Press.

Appendix

Please note that not all data is represented in this version of the paper. If additional details are desired, please contact the author directly.

151 ODA Recipients (2008)

Afghanistan, Albania, Algeria, Angola, Anguilla, Antigua and Barbuda, Argentina, Armenia, Azerbaijan, Bangladesh, Barbados, Belarus, Belize, Benin, Bhutan, Bolivia, Bosnia and Herzegovina, Botswana, Brazil, Burkina Faso, Burundi, Cambodia, Cameroon, Cape Verde, Central African Rep., Chad, Chile, China, Colombia, Comoros, Congo Dem Rep, Congo Rep, Cook Islands, Costa Rica, Cote d'Ivoire, Croatia, Cuba, Djibouti, Dominica, Dominican Rep, Ecuador, Egypt, El Salvador, Equatorial Guinea, Eritrea, Ethiopia, Fiji, Gabon, Gambia, Georgia, Ghana, Grenada, Guatemala, Guinea, Guinea-Bissau, Guyana, Haiti, Honduras, India, Indonesia, Iran, Iraq, Jamaica, Jordan, Kazakhstan, Kenya, Kiribati, Korea Dem Rep, Kyrgyz Rep, Laos, Lebanon, Lesotho, Liberia, Libya, Macedonia, Madagascar, Malawi, Malaysia, Maldives, Mali, Marshall Islands, Mauritania, Mauritius, Mayotte, Mexico, Micronesia, Moldova, Mongolia, Montenegro, Montserrat, Morocco, Mozambique, Myanmar, Namibia, Nauru, Nepal, Nicaragua, Niger, Nigeria, Niue, Oman, Pakistan, Palau, Palestinian Adm Areas, Panama, Papua New Guinea, Paraguay, Peru, Philippines, Rwanda, Samoa, Sao Tome and Principe, Senegal, Serbia, Seychelles, Sierra Leone, Solomon Islands, Somalia, South Africa, Sri Lanka, St. Helena, St. Kitts-Nevis, St. Lucia, St. Vincent and Grenadines, Sudan, Suriname, Swaziland, Syria, Tajikistan, Tanzania, Thailand, Timor-Leste, Togo, Tokelau, Tonga, Trinidad and Tobago, Tunisia, Turkey, Turkmenistan, Tuvalu, Uganda, Ukraine, Uruguay, Uzbekistan, Vanuatu, Venezuela, Viet Nam, Wallis and Futuna, Yemen, Zambia, Zimbabwe

Definitions/Measurements of Variables

Using the OECD/DAC and the CIA World Factbook. All currency in U.S. Dollars.

Net ODA = In Millions. Total Official Development Assistance to a Recipient in 2008 (most recent available information).
GNI Per Capita = Gross National Income Per Individual in Exact Currency (GNI/Population).
Population = In Millions.
GNI = Total In Millions.
Literacy = Percent of Population 15 Years and Over Who Can Read and Write.
Education = Average Number of Years of Primary and Tertiary Education for Citizens.
Life Expectancy = Average Life Span In Years for Citizens.
Infant Mortality Rate = Total Deaths Per 1,000 Live Births.
Urbanization = Percentage of Population Living in an Urban Area.
HIV/AIDS = Adult Prevalence Rate, i.e. Percentage of Adult Population with HIV/AIDS.
Arable Land = Percentage of Arable Land Within a Recipient's Borders.
Exports = Total Exports in Millions.
Imports = Total Imports in Millions.
External Debt = Total Debt in Millions Owed to Foreign Creditors.
Military Expenditures = Percentage of Military Expenditures to GDP.

[Regression output tables not reproduced in this version: "Infant Mortality as the Most Significant Independent Variable" and "Multiple Regressions."]

Steve Dobransky is an Adjunct Professor at Cleveland State University, and he is completing his Ph.D. at Kent State University. He majors in International Relations. He has an M.A. from Ohio University and a B.A. from Cleveland State University. Contact: sdobrans @ kent.edu.
{"url":"http://www.unc.edu/depts/diplomat/item/2010/0406/comm/dobransky_humanitarian.html","timestamp":"2014-04-19T04:30:08Z","content_type":null,"content_length":"42583","record_id":"<urn:uuid:3ace3caa-3be4-49bd-a9fd-bf24a9a97722>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculated Chemistry: Nobel Prize 2013

By Penny Carmichael, on 8 November 2013

Article by Enrico Berardo

This year's Nobel Prize in Chemistry was awarded to Martin Karplus, Michael Levitt and Arieh Warshel, who played a crucial role in the development and application of methods for the simulation of complex chemical reactions. The Nobel laureates have been recognized for their efforts in developing computer programs that simulate the behaviour of chemical systems at various length scales, from simple molecules to proteins, enabling the study of phenomena such as catalysis, protein folding and drug design.

Originally, chemists created molecular models using plastic balls and sticks, but since the 1960s the modelling has been carried out more and more on computers, allowing the calculation of important molecular properties such as stability and reactivity. Throughout the years, a continuous increase in computational resources and more efficient algorithms enabled the application of calculations to larger and more realistic systems, such as proteins, drugs and materials. The work of Karplus, Levitt and Warshel focuses on the development of methods that made Newtonian classical physics work side by side with the inherently different quantum theory. This Nobel Prize not only rewards the lifetime achievements of the three laureates, but it also recognizes the relevance of computational chemistry as a support and explanation for many experimental results.

In the classical approach atoms and bonds are approximated as balls and springs, making the calculations easy to solve and allowing the study of very large systems (up to thousands of atoms). However, since electrons are not considered explicitly, this method cannot be used to simulate reactions that cannot be easily parameterized with experimental data. For that purpose, an unbiased quantum mechanical approach needs to be used instead. In this approach electrons are considered explicitly, leading to a very detailed description of chemical processes. Its weakness is that the calculations require enormous computing power, limiting the size of the systems that can be studied.

The ground-breaking work of the three laureates revolutionized the field of computational chemistry, where combining quantum with classical mechanics ("QM/MM") allowed the study of systems that were not even remotely conceivable before the 1970s. The use of those "hybrid" methods made the study of biomolecules such as enzymes possible, treating the reactive atoms of the molecule (the core) with quantum mechanics, while the less demanding classical mechanical approach is used to describe the remaining part of the system. The work behind this year's Nobel Prize has been the starting point for further theoretical developments of more realistic models and for applied studies. Nowadays, hybrid methods are not only employed in the study of molecules of biological interest or complex organic reactions, but also for the optimization of solar cells or the study of materials used for catalytic applications.

Computational chemistry at UCL is represented by a grouping of international strength characterized by successful academic and industrial collaborations. It accounts for up to twenty different research groups where, thanks to the UCL and UK national computational resources, a breadth of different topics and methodologies is investigated. UCL's computational chemistry has a strong tradition in the simulation of materials, where Prof. Catlow's, Prof. de Leeuw's and Prof.
Michaelides' extended groups focus mainly on the study of catalytic applications of metal and metal-oxide systems. Prof. Kaltsoyannis' group employs quantum chemical investigations on actinide and lanthanide systems, with the focus on nuclear waste materials. Members of Prof. Price's group investigate the thermodynamic stability of organic crystal polymorphs through the use of a classical mechanical approach. In the groups of Prof. Coveney and Prof. Gervasio, large scale computational methods are developed and used for the modelling of systems like complex fluids and molecules of biological interest.

Today, simulations have become so powerful that such a large variety of topics and methodologies can be employed to predict the outcomes of traditional experiments. However, no simulation will ever be able to predict if a future Nobel Prize winner is hiding in the UCL chemistry corridors. This is for the future to decide.

Further information on the work being carried out by UCL Chemistry's computational groups can be found on the department's website.

Sources for text and images:
1) Popular science background
2) Scientific background
3) Interesting annual review by Prof. Martin Karplus
{"url":"http://blogs.ucl.ac.uk/chemdeptblog/2013/11/08/calculated-chemistry-nobel-prize-2013/","timestamp":"2014-04-21T09:43:55Z","content_type":null,"content_length":"33068","record_id":"<urn:uuid:e5c5cce0-400e-46b5-ad1e-d4ccd9fc2f06>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding the Solution in a Quadratic Equation?

February 27th 2010, 05:20 PM

Finding the Solution in a Quadratic Equation?

I'm back again with a few problems I'm stuck on. I'm really not sure where to go with them, or how to solve them. Unfortunately I'm a complete dunce when it comes to math, and only manage to get by with help from tutors and such. But these were a few problems that I didn't have time to ask at the tutor's... I don't know if it makes any difference, but here are some of the instructions that were printed at the top of my paper...

Find the solution for each of the following equations.
1- Put the equation in $Ax^2 + Bx + C = 0$ form
2- Determine the form of the equation
3- Find the solutions
-Set each factor equal to 0 and solve
-Solve using the square root method ($\sqrt{Ax^2} = \pm \sqrt{C}$)
-Solve by factoring
-Use the Quadratic Equation

Most of the problems are in the $Ax^2 + Bx + C = 0$ form, and I know that if there is nothing that adds to B and multiplies to C (or maybe it's the other way around) I need to use the square root method, which is different for me as before we were just told to write "no solution". This time I guess we have to continue it. I've got most of the problems solved, but there's still a few I'm struggling with and I'm not sure where to go with them. They're probably easier than I think they are, so I'm probably over-thinking things, but...

1. $x^2 + 9x = 0$
For this, do I just bring the 0 over and then add the 0 back in at the end as well, and then solve, or is it something different? Though I guess bringing over the 0 doesn't make much sense, now that I think about it...

2. $3x^2 = 60$
I'm really not sure where to go with this one, as it has no Bx and such...

3. $(3x-4)^2 = 16$

4. $4x^2 - 16 = 0$

Also, for a couple problems I finished I'm just not sure if I got the correct answer. Would anyone be willing to just look them over for me and tell me whether I got them right or wrong?

Problem: $4x^2 = 2x + 7$
Answer: $\frac{1 + \sqrt{29}}{2}$ or $\frac{1 - \sqrt{29}}{2}$

Problem: $x^2 - 2x - 10 = 0$
Answer: $\frac{1 + \sqrt{11}}{1}$ or $\frac{1 - \sqrt{11}}{1}$

Any help or advice would be very much appreciated. Thank you! :)

February 27th 2010, 06:32 PM

$x^{2}+9x=0 \Rightarrow x^{2}+9x+0=0; \ a=1, b=9, c=0$

You can use the quadratic here, or complete the square and solve. Using the quadratic:

$x=\frac{-9\pm\sqrt{9^{2}-4(1)(0)}}{2(1)}=\frac{-9\pm 9}{2}$, so $x=0$ or $x=-9$.

$3x^2 = 60$

You are half right; though the "Bx" isn't visible, it is there. Your B, as with your C in question one, is 0, therefore the term drops off. I could just as easily write this as:

$3x^2+0x = 60$

Or we could use: "-Solve using the square root method"

$\sqrt{3x^2} = \pm \sqrt{60}$

$4x^2 = 2x + 7$

Using the quadratic:

$x=\frac{-(-2)\pm\sqrt{(-2)^{2}-4(4)(-7)}}{2(4)}$ - Set up our quadratic.

$x=\frac{2\pm\sqrt{4+112}}{8}$ - Perform operations.

$x=\frac{2\pm\sqrt{4(29)}}{8}$ - Factor a 4 out of the quantity under the square root sign.

$x=\frac{2\pm 2\sqrt{29}}{8}$ - Take the square root of 4, and bring it out of the square root sign.

$x=\frac{1\pm\sqrt{29}}{4}$ - Divide numerator and denominator by 2.

All of the problems you have can be solved using all of the methods you outlined in your problem - it is just easier to use one method over another. For example:

$3x^2+0x = 60$

Can be solved using the quadratic, or much quicker using the "square root method" as I mentioned. Your task is figuring out which is the easiest way to go about solving. To check that you understand how to do these problems, use multiple methods to solve the same problem.
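For completeness, the two remaining problems from the original post yield to the same square root method (a quick sketch, not part of the reply above):

$(3x-4)^2 = 16 \Rightarrow 3x-4 = \pm 4 \Rightarrow x = \frac{8}{3}$ or $x = 0$

$4x^2 - 16 = 0 \Rightarrow x^2 = 4 \Rightarrow x = \pm 2$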
{"url":"http://mathhelpforum.com/algebra/131092-finding-solution-quadratic-equation-print.html","timestamp":"2014-04-23T16:58:05Z","content_type":null,"content_length":"12039","record_id":"<urn:uuid:c470f268-7beb-4ef2-9d25-ed755e620ae2>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
Seq magic

The innocent-looking seq operator causes all manner of mayhem in GHC. This page summarises the issues. See also discussion in Trac #5129, #5262

The baseline position

Our initial story was that (seq e1 e2) meant precisely

case e1 of { _ -> e2 }

Indeed this was seq's inlining. This translation validates some important rules

* `seq` is strict in both its arguments
* (e1 `seq` e2) e3 ===> e1 `seq` (e2 e3)
* case (e1 `seq` e2) of alts ===> e1 `seq` (case e2 of alts)
* value `seq` e ===> e

But this approach has problems; see Note [Desugaring seq] in DsUtils.

Problem 1 (Trac #1031)

f x y = x `seq` (y `seq` (# x,y #))

The [CoreSyn let/app invariant] (see CoreSyn) means that, other things being equal, because the argument to the outer seq has an unlifted type, we'll use call-by-value thus:

f x y = case (y `seq` (# x,y #)) of v -> x `seq` v

But that is bad for two reasons:

• we now evaluate y before x, and
• we can't bind v to an unboxed pair

Seq is very, very special! Treating it as a two-argument function, strict in both arguments, doesn't work. We "fixed" this by treating seq as a language construct, desugared by the desugarer, rather than as a function that may (or may not) be inlined by the simplifier. So the above term is desugared to:

case x of _ -> case y of _ -> (# x,y #)

Problem 2 (Trac #2273)

let chp = case b of { True -> fst x; False -> 0 }
in chp `seq` ...chp...

Here the seq is designed to plug the space leak of retaining (snd x) for too long. If we rely on the ordinary inlining of seq, we'll get

let chp = case b of { True -> fst x; False -> 0 }
in case chp of _ { I# -> ...chp... }

But since chp is cheap, and the case is an alluring context, we'll inline chp into the case scrutinee. Now there is only one use of chp, so we'll inline a second copy. Alas, we've now ruined the purpose of the seq, by re-introducing the space leak:

case (case b of {True -> fst x; False -> 0}) of I# _ -> ...case b of {True -> fst x; False -> 0}...

We can try to avoid doing this by ensuring that the binder-swap in the case happens, so we get this at an early stage:

case chp of chp2 { I# -> ...chp2... }

But this is fragile. The real culprit is the source program. Perhaps we should have said explicitly

let !chp2 = chp in ...chp2...

But that's painful. So the desugarer does a little hack to make seq more robust: a saturated application of seq is turned directly into the case expression, thus:

x `seq` e2 ==> case x of x -> e2 -- Note shadowing!
e1 `seq` e2 ==> case e1 of _ -> e2

So we desugar our example to:

let chp = case b of { True -> fst x; False -> 0 }
in case chp of chp { I# -> ...chp... }

And now all is well. Be careful not to desugar

True `seq` e ==> case True of True { ... }

which stupidly tries to bind the datacon 'True'. This is easily avoided.

The whole thing is a hack though; if you define mySeq = seq, the hack won't work on mySeq.

Problem 3 (Trac #5262)

f x = x `seq` (\y.y)

With the above desugaring we get

f x = case x of x { _ -> \y.y }

and now eta expansion gives

f x y = case x of x { _ -> y }

Now suppose that we have

f (length xs) `seq` 3

Plainly (length xs) should be evaluated... but it isn't because f has arity 2. (Without -O this doesn't happen.)

Problem 4: seq in the IO monad

See the extensive discussion in Trac #5129.

Problem 5: the need for special rules

Roman found situations where he had

case (f n) of _ -> e

where he knew that f (which was strict in n) would terminate if n did. Notice that the result of (f n) is discarded.
So it makes sense to transform to

  case n of _ -> e

Rather than attempt some general analysis to support this, I've added enough support that you can do this using a rewrite rule:

  RULE "f/seq" forall n e.  seq (f n) e = seq n e

You write that rule. When GHC sees a case expression that discards its result, it mentally transforms it to a call to seq and looks for a RULE. (This is done in Simplify.rebuildCase.) As usual, the correctness of the rule is up to you.

To make this work, we need to be careful that seq is not desugared into a case expression on the LHS of a rule.

To increase the applicability of these user-defined rules, we also have the following built-in rule for seq:

  seq (x |> co) y = seq x y

This eliminates unnecessary casts and also allows other seq rules to match more often. Notably,

  seq (f x |> co) y  -->  seq (f x) y

and now a user-defined rule for seq may fire.

A better way

Here's our new plan.

 * Introduce a new primop

     seq# :: a -> State# s -> (# State# s, a #)

   (see be5441799b7d94646dcd4bfea15407883537eaaa)

 * Implement seq# by turning it into the obvious eval in the backend. In fact, since the return convention for (# State# s, a #) is exactly the same as for a, we can implement seq# a s by a (even when it appears as a case scrutinee).

 * Define evaluate thus:

     evaluate :: a -> IO a
     evaluate x = IO $ \s -> seq# x s

That fixes problem 4. We could go on and desugar seq thus:

  x  `seq` e2 ==> case seq# x  RW of (# _, x #) -> e2   -- Note shadowing!
  e1 `seq` e2 ==> case seq# e1 RW of (# _, _ #) -> e2

and if we consider seq# to be expensive, then we won't eta-expand around it, and that would fix problem 3. However, there is a concern that this might lead to performance regressions in examples like this:

  f :: Int -> Int -> IO Int
  f x y | x `seq` False = undefined
  f x 3 = do ... some IO monad code here ...

so f turns into

  f = \x . \y . case seq# x RW of (# _, x #) ->
                case y of 3 -> \s . some IO monad code

and we won't get to eta-expand the \s as we would normally do (this is pretty important for getting good performance from IO and ST monad code).

Arguably f should be rewritten with a bang pattern, and we should treat bang patterns as the eta-expandable seq and translate them directly into case, not seq#. But this would be a subtle difference between seq and bang patterns. Furthermore, we already have pseq, which is supposed to be a "strictly ordered seq", that is, it preserves evaluation order. So perhaps pseq should be the one that more accurately implements the programmer's intentions, leaving seq as it currently is. We are currently pondering what to do here.
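Here is a small, self-contained program that makes the Problem 3 behaviour observable (the function name and message are made up for the example):

  f :: Int -> a -> a
  f x = x `seq` \y -> y

  main :: IO ()
  main = f (error "boom") `seq` putStrLn "x was never forced"

Compiled without optimisation, f has arity 1, so forcing the application f (error "boom") evaluates x and the program aborts with "boom". With -O, eta expansion gives f arity 2, so f (error "boom") is a partial application already in weak head normal form; the outer seq succeeds without forcing x, and the program prints the message. That is exactly the lost seq described above.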
{"url":"https://ghc.haskell.org/trac/ghc/wiki/Commentary/Compiler/SeqMagic","timestamp":"2014-04-17T07:55:52Z","content_type":null,"content_length":"19590","record_id":"<urn:uuid:4c10700e-29c7-4e68-987b-7b5231442f51>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite Element problem - Understanding Convergence, Completeness and Compatibility?

August 20th 2010, 11:15 AM

I am reading the book Introduction to Finite Element Methods by Niels Ottosen and Hans Petersson. In chapter 7 it brings up general requirements for FEM, such as convergence, completeness and compatibility. It states definitions such as:

1. It's complete when the approximation can represent an arbitrary constant temperature gradient and a constant temperature.
2. It's compatible when the approximation of the temperature over element boundaries is continuous.
3. It's convergent when completeness and compatibility are fulfilled.

Okay so far, but my problem is how to actually apply these definitions to an example. What does Pascal's triangle have to do with this? Could someone help me understand what they say and mean with an example? Say for a 6-node element: a triangle with nodes at the corners and one at the middle of each side, with the approximation T = a1 + a2x + a3y + a4xy + a5x^2 + a6y^2?
{"url":"http://mathhelpforum.com/differential-equations/154080-finite-element-problem-understanding-convergence-completeness-compatibility-print.html","timestamp":"2014-04-17T13:28:29Z","content_type":null,"content_length":"4164","record_id":"<urn:uuid:e72f9a3e-4616-4cbd-aa34-461b2a628900>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00260-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Factoring and gcoeff()

Bill Allombert on Fri, 09 Dec 2005 21:31:27 +0100

On Fri, Dec 09, 2005 at 09:05:25PM +0100, Alessio Rocchi wrote:
> Hi everybody.
> Before beginning to explain my problem, I would really say thanks to Bill
> Allombert for all the (wonderful) help he gave me until now.
> Ok, let's go on: I can successfully factor numbers using the factor() function
> with libpari.
> I need now to insert all prime factors (and their exponents) into a long
> int array (long int*. I know that it would be really simpler to use Pari
> types, but I need to use just pure C long int pointers).
> Well, I use (pseudo-)code like this:
>
> typedef struct{
>     long int* factors;
>     long int* exps;
> } Factors;
>
> GEN F=factor(<number>);
> long length=lg(F[1]);
> //code for allocating Factors.factors[] and Factors.exps[]
> for(int i=0; i<length; i++){
>     Factors.factors[i]=itos(gcoeff(F, i, 1));
>     Factors.exps[i]=itos(gcoeff(F, i, 2));
> }

lg(F[1]) is the length of the _object_ F[1], which is one more than the
length of the _column vector_ F[1]. Also indices start at 1 in PARI, as
in GP, not at 0, so gcoeff(F, 0, 1) is random garbage, hence
itos(gcoeff(F, 0, 1)) will segfault.

You probably want to do:

    GEN F=factor(<number>);
    long length=lg(F[1])-1;
    //code for allocating Factors.factors[] and Factors.exps[]
    for(int i=0; i<length; i++){
        Factors.factors[i]=itos(gcoeff(F, i+1, 1));
        Factors.exps[i]=itos(gcoeff(F, i+1, 2));
    }

Please try to send examples that can be compiled, they are much easier to
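For reference, one way the corrected loop could look as a complete, compilable C program is sketched below. The stack parameters, the example input 360, and the struct handling are illustrative assumptions, and gel() requires a reasonably recent libpari:

    #include <stdio.h>
    #include <stdlib.h>
    #include <pari/pari.h>

    typedef struct {
        long *factors;
        long *exps;
        long n;
    } Factors;

    int main(void)
    {
        pari_init(8000000, 500000);          /* assumed stack size / prime limit */

        GEN F = factor(stoi(360));           /* 2-column matrix: primes, exponents */
        long n = lg(gel(F, 1)) - 1;          /* lg() is one more than the length   */

        Factors fs;
        fs.n = n;
        fs.factors = malloc(n * sizeof *fs.factors);
        fs.exps    = malloc(n * sizeof *fs.exps);

        for (long i = 1; i <= n; i++) {      /* PARI indices start at 1 */
            fs.factors[i - 1] = itos(gcoeff(F, i, 1));
            fs.exps[i - 1]    = itos(gcoeff(F, i, 2));
        }

        for (long i = 0; i < n; i++)
            printf("%ld^%ld\n", fs.factors[i], fs.exps[i]);

        free(fs.factors);
        free(fs.exps);
        pari_close();
        return 0;
    }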
{"url":"http://pari.math.u-bordeaux.fr/archives/pari-dev-0512/msg00013.html","timestamp":"2014-04-18T13:08:51Z","content_type":null,"content_length":"5637","record_id":"<urn:uuid:488f077a-2fe7-44b4-8cc8-e1dd9fa9fd1e>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Simplifying Square Roots

There are 3 main rules for simplifying a square root. They are:

1) Take out all perfect squares in the number
2) Don't leave a fraction in the square root...separate it
3) Don't leave a radical in the denominator

In this article, we will examine each of these three rules by looking at several examples of each. Let's first start by defining what a perfect square is. All perfect squares can be found by squaring the whole numbers: 0, 1, 2, 3, 4, 5, 6, 7, etc. When we square these numbers we get: 0, 1, 4, 9, 16, 25, 36, 49, etc. These numbers are what we will refer to as perfect squares.

So now let's examine how to use rule #1...taking out perfect squares...by looking at this example: √45. I use a story to help students remember what to look for. The radical symbol is a jail cell and the number 45 is in the jail cell. The only way any numbers can "escape" from the cell is if the original number can be broken down into pairs of factors. Only factors that are in pairs can escape, while factors that aren't pairs don't have a chance of escaping and must stay in jail forever. Again, looking at the example, the number 45 can be broken down into 3*3*5. There is a pair of 3's and then a 5 that doesn't have a partner. The story continues, and is a little sad, in that while the pair of numbers attempt to "escape"...one of the partners of the pair succeeds in escaping, while the other partner gives his life for the one that succeeded. Going back to the example above...one 3 escapes, while the other 3 gets "killed" in the process. If there are two pairs...one of each pair escapes and the other "dies". The two that escape get lost in the crowd so they can't get spotted, so they get multiplied together with whatever might be out there already. Further examples will show this. The 5 doesn't have a partner, so it is doomed to stay in jail forever. Therefore your answer this time is 3√5. It sounds a little violent, but the story kind of helps those students that forget which number goes on the outside of the radical sign and which ones stay inside.

Let's do one more example together before you try. Take the example: √72. The number 72 can be broken down into...2*2*2*3*3. This time there are two 2's and two 3's and one 2 that does not have a partner. One of the 2's and one of the 3's "die" and the other two come "out of the cell" and get multiplied together. The last 2 doesn't have a partner so it must remain in the jail cell forever, so your answer ends up being 6√2.

So now it is time for you to try one. Try simplifying √162, then check your answer below.

Did you get this? 162 breaks down into 2*9*9. Since there are two 9's, one dies and one escapes, and the 2 doesn't have a partner, so you end up with 9√2 for the answer. Or if you break the 9's down so you end up with 2*3*3*3*3, you have two pairs of 3's, so for each pair a 3 dies and two 3's come out and get multiplied together; the 2 still doesn't have a partner so it stays inside, still giving you the answer of 9√2. Either way you get the same answer.

Now let's talk about rule 2...don't leave a fraction in the square root. Let's look at the example: √¼. What we do is split the square root into two separate square roots, so we get √1/√4. Then we simplify both square roots by taking √1 = 1 and √4 = 2, giving us the fraction 1/2. We must make sure the fraction is reduced, and since it is in this case, the final answer is 1/2.

Let's try a little more difficult fraction. Suppose we start with √(3/12).
First, this time we can reduce the fraction before we break it down. Therefore we get √(1/4), which gives us the same answer as before: 1/2. But sometimes we can't reduce the fraction to start with. Take the problem √(50/49). Since we can't break it down before we make two square roots, we go ahead and make two square roots, leaving us with √50/√49. Then we take the square root of each, ending up with 5√2/7, because we have to break 50 down into 5*5*2 and use the method above to simplify.

Sometimes when we make two separate square roots out of a fraction, the denominator isn't a perfect square. According to rule #3, we cannot leave a square root in the denominator, so we must do what we call rationalizing the denominator. This means we must multiply the top and bottom of the fraction by the square root of the same number that is in the denominator, so that you have two numbers the same: they have a partner, and one can escape and one dies. Watch this example... √(1/3) can't be reduced, so it must be split up into 2 square roots, giving us 1/√3. Since we can't leave a square root in the denominator, we multiply 1 by √3 and √3 by √3, so that the result is √3/3, because when we multiply √3 by √3 we end up with two 3's, so one dies and one escapes, so it is no longer under the radical sign. This leaves us with √3/3 as the final answer, because it can't be reduced either. To reduce a fraction, either both numbers must be under a square root or both must have escaped and be outside the square root symbol.

Let's try that again. Take the example √(4/3). First split up the square root into 2 square roots: √4/√3. Now √4 = 2 and √3 cannot be broken down. So we must multiply the 2 by √3 and the √3 by √3, giving us a final answer of 2√3/3, which can't be reduced.

Now let's have you try one. Take √(16/27) and simplify it using the 3 rules discussed previously. Then check your answer below.

Did you get this? First, split up the square root into two square roots. So now you have √16/√27. Then we break 16 down into 4*4, so √16 = 4, and 27 breaks down into 3*3*3, which gives us 3√3. This leaves us with 4/(3√3). Then we multiply the 4 by √3 and 3√3 by √3, leaving us with 4√3/9, because with two 3's under the square root, one dies and one comes out and gets multiplied by the 3 already there, leaving us with the 9 and nothing under the square root. Therefore the final answer should have been 4√3/9.
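In symbols, the three rules used throughout this article (for positive numbers a, b and c) are:

$$\sqrt{a^2 b}=a\sqrt{b},\qquad \sqrt{\frac{a}{b}}=\frac{\sqrt{a}}{\sqrt{b}},\qquad \frac{c}{\sqrt{b}}=\frac{c\sqrt{b}}{b},$$

and the last worked example reads, in this notation,

$$\sqrt{\frac{16}{27}}=\frac{\sqrt{16}}{\sqrt{27}}=\frac{4}{3\sqrt{3}}=\frac{4}{3\sqrt{3}}\cdot\frac{\sqrt{3}}{\sqrt{3}}=\frac{4\sqrt{3}}{9}.$$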
{"url":"http://www.bayareamath.com/cms/algebra-ii/49-radicals/68-simplifying-square-roots?showall=1","timestamp":"2014-04-17T04:39:04Z","content_type":null,"content_length":"25684","record_id":"<urn:uuid:679447bf-c2e6-424a-8ba5-bc69c0eb1570>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Pure Mathematics

Pure Mathematics Research Group

With nearly thirty permanent members of staff, including fifteen professors, and with internationally known research groups ranging across a wide span of mathematics, Manchester is one of the leading centres for pure mathematics in the UK. As well as the core areas of algebra, analysis and dynamical systems, geometry and topology, and mathematical logic, our research spills into mathematical physics and theoretical computer science. Our group includes a Fellow of the Royal Society, a Fellow of the British Academy, and two Fellows of the American Mathematical Society.

For our research, see the area descriptions and individual staff members' home pages, as well as the MIMS EPrints repository. Many of our recent achievements are outlined in the group's RAE2008 submission.

A vibrant programme of seminars, a large and lively group of postgraduate students and purpose-designed areas for mathematical interaction all help create a stimulating environment for creating new mathematics. Other activities of the group include organising international conferences and writing textbooks and research monographs. We are part of the MAGIC consortium which, via the web, presents a range of lecture courses for our postgraduate students far greater than could be provided at any single institution.

Pure Mathematics has a long tradition of excellence at Manchester. The 1920s and 30s saw Manchester become one of the world's leading centres for number theory, with Louis Mordell and Kurt Mahler holding chairs here. In 1945 Max Newman arrived from code-breaking work at Bletchley Park and ensured the growth in eminence of the department, recruiting stars such as the logician Alan Turing, often considered to be the father of artificial intelligence, and the topologist Frank Adams. Manchester also has a long tradition in algebra, through the work of leading figures such as Bernhard Neumann, Hanna Neumann and Brian Hartley.

We welcome applications to study for the degree of PhD (apply online) in the areas of pure mathematics listed below, as well as a one year taught MSc programme. For a complete list of current research interests, please refer to the research areas pages and to the pages of individual staff members. We also welcome applications to hold a Research Fellowship in the group.

Research Areas
{"url":"http://www.mims.manchester.ac.uk/research/pure/","timestamp":"2014-04-21T07:04:06Z","content_type":null,"content_length":"8471","record_id":"<urn:uuid:6490771e-757a-4812-ac3a-8db0ed32abe5>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00646-ip-10-147-4-33.ec2.internal.warc.gz"}
Subgroups of free abelian groups are free: a topological proof?

There is a well-known topological proof of the fact that subgroups of free groups are free. Many people, myself included, think it is easier and more natural than the purely algebraic proofs which had been given earlier by (IIRC) Nielsen and Schreier. It goes as follows:

1) If S is any set, then the CW-complex X obtained as the wedge of #S circles is a graph whose fundamental group is isomorphic to F(S), the free group on S.
2) If H is a subgroup of F(S), then by covering space theory H is the fundamental group of a covering space Y of X.
3) The covering space of any graph is again a graph.
4) Any graph has the homotopy type of a wedge of circles, so the fundamental group of Y is again free.

My question is: to what extent is there an analogous proof of the result with "free" replaced everywhere by "free abelian"?

In the case of a finitely generated free abelian group -- say G ≅ Z^n -- there is at least an evident topological interpretation. Namely, we can take X to be the n-torus (product of n copies of S^1), and then observe that any covering space of a torus is homeomorphic to a torus of dimension d cross a Euclidean space of dimension n-d, hence homotopy equivalent to a torus of rank d <= n. Even in this case though I'd like some assurance that the proof of this topological fact does not use the algebraic fact we're trying to prove. (Is for instance some basic Lie theory relevant here?)

Then, what happens if the free group has arbitrary rank? Can we take X to be a direct limit over a family of finite-dimensional tori? Does the proof go through?

For the record, a topological proof that the fundamental group of a graph is free usually involves something like finding a maximal spanning tree and applying Seifert-van Kampen. This may look easier than an algebraic proof, but it hides the same details - algebraic proofs generally choose a set of coset representatives (spanning tree), and then reduce any word to a canonical product of elements (those corresponding to edges not in the spanning tree). All these details are there in the (gory) proof of the Seifert-van Kampen theorem itself. – Tyler Lawson Nov 8 '09 at 3:06

@Pete, Ben I retagged the question editing out some tags that don't seem to convey any information (per meta.stackoverflow.com/questions/18878/…) and adding the group-theory tag (the question should be of interest to people in group theory). Feel free to revert or discuss on Meta. – Ilya Nikokoshev Nov 8 '09 at 18:18

The "free group" proof rests on proving that the fundamental group of a graph is free. For the analogue we'd need to essentially prove that the fundamental group of a "torus" (something that looks like a quotient of a vector space by a discrete subgroup) is free abelian.

A sketch: Given a real vector space V, we can put the direct limit topology on it (so that subsets are closed if and only if their intersection with any finite dimensional subspace is closed). This is a contractible topological group. If A is a free abelian group, then A is a discrete subgroup of the associated real vector space (ℝ ⊗ A) and the quotient space has fundamental group A. Any covering space is a quotient of (ℝ ⊗ A) by a discrete subgroup B of A. So the question boils down to showing:

Any discrete subgroup of a vector space (with the direct limit topology) is free abelian.
Let's say that a partial basis is a set S of elements of B such that

 * S is linearly independent, and
 * S generates B ∩ Span(S).

Then partial bases form a partial order under containment, and Zorn's lemma implies that there is a maximal element S. I claim that S is a basis of B as a free abelian group. S is linearly independent by construction, so it generates a free abelian group, and hence it suffices to show that it generates all of B.

If b in B is not in S, then it is not in Span(S). Let S' be (S ∪ {b}). Then Span(S')/Span(S) is a 1-dimensional vector space and the image of B ∩ Span(S') must be discrete, because otherwise Span(S') would contain an element (rb + v) for v in Span(S) that we could use to generate a non-discrete subset of B. (If v is a combination of w[1]...w[n] in S, then it suffices to check that any subgroup of the finite-dimensional space Span(w[1]...w[n],b) requiring more than n generators is indiscrete.) Thus any lift of a generator of B ∩ Span(S') would extend to a larger generating set, contradicting maximality.

(My apologies for the comment last night, which this morning looks snarkier than I intended. I'm a fan of using this topological reasoning for free groups myself, because it compartmentalizes the proof into much more understandable pieces. In particular, I don't think I'd really understand a purely algebraic proof that an index n subgroup of a free group on m generators is free on nm - n + 1 generators.)

This is a great answer -- just what I was looking for. Thanks. – Pete L. Clark Nov 8 '09 at 20:41
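In summary, the chain of claims assembled in this answer is: a free abelian group $A$ sits as a discrete subgroup of the contractible topological group $\mathbb{R}\otimes A$ (with the direct limit topology), with

$$\pi_1\big((\mathbb{R}\otimes A)/A\big)\cong A;$$

every covering space of the quotient has the form $(\mathbb{R}\otimes A)/B$ for a subgroup $B$ of $A$, which is again discrete in $\mathbb{R}\otimes A$; and the Zorn's lemma argument shows that every discrete subgroup of such a vector space is free abelian. Hence every subgroup of a free abelian group is free abelian.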
– HJRW Nov 9 '09 at 1:21 add comment For a purely topological proof of the statement "Every subgroup of a free abelian group is free abelian.", why not just continue to use a wedge of circles, but use the first simplicial up vote -1 homology and the Hurewicz theorem? If you take "free abelian" to mean "a direct limit of finte rank free abelian", it seems to follow from the statement about free groups. down vote 3 Because the subgroups of homology don't correspond to subgroups of the free group in the way you want. If V is a subgroup of Z^n, the corresponding subgroup G of F_n has infinite-rank first homology whenever V has infinite index. – Tom Church Nov 17 '09 at 4:31 add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology gr.group-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/4578/subgroups-of-free-abelian-groups-are-free-a-topological-proof?answertab=votes","timestamp":"2014-04-21T10:04:17Z","content_type":null,"content_length":"71691","record_id":"<urn:uuid:6cd8b170-45d2-4cac-8b16-83979c23b609>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00118-ip-10-147-4-33.ec2.internal.warc.gz"}
Foundations of Computer Science

Principal lecturer: Prof Larry Paulson
Taken by: Part IA CST, Part IA NST, Part I PPS
Past exam questions
Information for supervisors (contact lecturer for access permission)
No. of lectures and practicals: 15 + 6
Suggested hours of supervisions: 5
This course is a prerequisite for Programming in Java and Prolog (Part IB).

The main aim of this course is to present the basic principles of programming. As the introductory course of the Computer Science Tripos, it caters for students from all backgrounds. To those who have had no programming experience, it will be comprehensible; to those experienced in languages such as C, it will attempt to correct any bad habits that they have learnt.

A further aim is to introduce the principles of data structures and algorithms. The course will emphasise the algorithmic side of programming, focusing on problem-solving rather than on hardware-level bits and bytes. Accordingly it will present basic algorithms for sorting, searching, etc., and discuss their efficiency using O-notation. Worked examples (such as polynomial arithmetic) will demonstrate how algorithmic ideas can be used to build efficient applications.

The course will use a functional language (ML). ML is particularly appropriate for inexperienced programmers, since a faulty program cannot crash. The course will present the elements of functional programming, such as curried and higher-order functions. But it will also discuss traditional (procedural) programming, such as assignments, arrays, pointers and mutable data structures.

• Introduction. Levels of abstraction. Floating-point numbers, and why von Neumann was wrong. Why ML? Integer arithmetic. Giving names to values. Declaring functions. Static binding, or declaration versus assignment.
• Recursive functions. Examples: Exponentiation and summing integers. Overloading. Decisions and booleans. Iteration versus recursion.
• O Notation. Examples of growth rates. Dominance. O, Omega and Theta. The costs of some sample functions. Solving recurrence equations.
• Lists. Basic list operations. Append. Naïve versus efficient functions for length and reverse. Strings.
• More on lists. The utilities take and drop. Pattern-matching: zip, unzip. A word on polymorphism. The "making change" example.
• Sorting. A random number generator. Insertion sort, mergesort, quicksort. Their efficiency.
• Datatypes and trees. Pattern-matching and case expressions. Exceptions. Binary tree traversal (conversion to lists): preorder, inorder, postorder.
• Dictionaries and functional arrays. Functional arrays. Dictionaries: association lists (slow) versus binary search trees. Problems with unbalanced trees.
• Queues and search strategies. Depth-first search and its limitations. Breadth-first search (BFS). Implementing BFS using lists. An efficient representation of queues. Importance of efficient data structures.
• Functions as values. Nameless functions. Currying.
• List functionals. The "apply to all" functional, map. Examples: matrix transpose and product. The "fold" functionals. Predicate functionals "filter" and "exists".
• Polynomial arithmetic. Addition, multiplication of polynomials using ideas from sorting, etc.
• Sequences, or lazy lists. Non-strict functions such as IF. Call-by-need versus call-by-name. Lazy lists. Their implementation in ML. Applications, for example Newton-Raphson square roots.
• Elements of procedural programming. Address versus contents. Assignment versus binding. Own variables. Arrays, mutable or not.
• Linked data structures. Linked lists. Surgical concatenation, reverse, etc.

At the end of the course, students should

• be able to write simple ML programs;
• understand the importance of abstraction in computing;
• be able to estimate the efficiency of simple algorithms, using the notions of average-case, worst-case and amortised costs;
• know the comparative advantages of insertion sort, quick sort and merge sort;
• understand binary search and binary search trees;
• know how to use currying and higher-order functions.

* Paulson, L.C. (1996). ML for the working programmer. Cambridge University Press (2nd ed.).
Okasaki, C. (1998). Purely functional data structures. Cambridge University Press.
Gentler alternative to the main text:
Hansen, M. & Rischel, H. (1999). Introduction to programming using SML. Addison-Wesley.
For reference only:
Gansner, E.R. & Reppy, J.H. (2004). The Standard ML Basis Library. Cambridge University Press. ISBN: 0521794781
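The following small Standard ML fragment (names illustrative, not taken from the course notes) shows the register of several syllabus topics at once: curried functions, pattern-matching over lists, and the higher-order "apply to all" functional:

  fun add x y = x + y;            (* curried: int -> int -> int *)
  val inc = add 1;                (* partial application *)
  fun map f [] = []
    | map f (x::xs) = f x :: map f xs;
  val demo = map inc [1, 2, 3];   (* evaluates to [2, 3, 4] *)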
{"url":"http://www.cl.cam.ac.uk/teaching/1213/FoundsCS/","timestamp":"2014-04-18T16:09:12Z","content_type":null,"content_length":"10821","record_id":"<urn:uuid:5e4fdfac-3f63-4264-a4bc-3e50b142d36d>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
Mode calculations in asymmetrically aberrated laser resonators using the Huygens–Fresnel kernel formulation

Optics Express, Vol. 19, Issue 20, pp. 19702-19707 (2011)

A theoretical framework is presented for calculating three-dimensional resonator modes of both stable and unstable laser resonators. The resonant modes of an optical resonator are computed using a kernel formulation of the resonator round-trip Huygens–Fresnel diffraction integral. To substantiate the validity of this method, both stable and unstable resonator mode results are presented. The predicted lowest loss and higher order modes of a semi-confocal stable resonator are in agreement with the analytic formulation. Higher order modes are determined for an asymmetrically aberrated confocal unstable resonator, whose lowest loss unaberrated mode is consistent with published results. The three-dimensional kernel method provides a means to evaluate multi-mode configurations with two-dimensional aberrations that cannot be decomposed into one-dimensional representations. © 2011 OSA

1. Introduction

Unstable optical resonators [1,2] have found wide application for high power lasers. Unstable resonators possess major advantages with regard to mode volume (for relatively short resonators), transverse mode control, single-mode operation (i.e., near diffraction limited output), energy extraction (i.e., uniformity of illumination seen by the gain), and far-field brightness. The success of unstable resonators is realized within the regime where the laser configuration exhibits moderate gain per pass and a Fresnel number larger than a few times unity [3]. Diffraction effects in unstable resonators create a population of transverse modes, which at a particular equivalent Fresnel number may possess mode loss degeneracy. It cannot be over-emphasized that the numerical determination of the properties of the transverse mode populations has evolved into sophisticated calculation techniques with mathematical and computational complexities [4–7].

One would like to be able to calculate the three-dimensional mode properties in a straightforward fashion. The typical method for determining the three-dimensional mode structure of a rectangular aperture is to take the product of the strip resonator modes describing each axis (as mentioned in [3,4]), with care being taken to mix all possible permutations.
Certain higher order aberrations common in laser resonators (e.g., astigmatism and coma) cannot be decomposed into one-dimensional aperture representations. The method presented in this paper provides a direct means for determining both unity and non-unity aspect ratio modes with either symmetric or asymmetric aberrations. Furthermore, any arbitrary aperture shape or aberration can be investigated. An additional strength of this method is that it not only predicts the three-dimensional aberrated lowest loss mode properties but also determines the aberrated higher order modes in a single calculation. High power laser systems having volume constraints and a large gain aperture necessitate operation at large equivalent Fresnel numbers, which enhances the possibility of multi-mode operation. The determination of single-mode operation is further complicated by aberrations common to less than perfect cooling schemes in solid-state lasers. The question of interest in these applications is whether a specific aberrated resonator design will operate with single transverse mode discrimination. The importance arises in the belief that good beam quality requires single-mode operation. For cases where multi-mode operation occurs, the methods presented here will allow for the determination of the higher order field distributions and their impact on beam quality.

2. Three-dimensional Huygens-Fresnel kernel formulation

The resonator diffraction integral requires that the optical amplitude distribution reproduce itself to within a constant after making a round trip in the resonator. The constant is the eigenvalue associated with the amplitude distribution, or eigenmode. The eigenvalue equation for an optical resonator round trip has the general form

$$\gamma\,u(x_2,y_2)=\frac{i}{\lambda B}\iint_{S}u(x_1,y_1)\exp\!\left\{-\frac{i\pi}{\lambda B}\left[A\left(x_1^2+y_1^2\right)-2\left(x_1x_2+y_1y_2\right)+D\left(x_2^2+y_2^2\right)\right]\right\}dx_1\,dy_1,\qquad(1)$$

where $\gamma$ is the eigenvalue associated with the complex amplitude (eigenvector) $u$, $\lambda$ is the wavelength of the circulating light, $S$ is the output aperture function cross-sectional area, $A$, $B$, $C$, $D$ are elements of the round-trip resonator ABCD characteristic matrix, and $(x_1,y_1)$, $(x_2,y_2)$ are the transverse spatial variables at the source (subscript 1) and observation (subscript 2) planes. The round-trip ABCD matrix for a laser resonator is

$$\begin{bmatrix}A&B\\C&D\end{bmatrix}=\begin{bmatrix}2g_2-1&2g_2L\\\tfrac{2}{L}\left(2g_1g_2-g_1-g_2\right)&4g_1g_2-2g_2-1\end{bmatrix},\qquad(2)$$

where $L$ is the length of the resonator, $g_i$ are the generalized resonator parameters defined as $g_i=1-L/R_i$, and $R_i$ are the radii of curvature of the resonator mirrors.

Equation (1) can be simplified by normalization of the spatial variables by the output aperture radii $a_x$, $a_y$ of the corresponding dimension, as $X=x/a_x$ and $Y=y/a_y$. The normalized form of the diffraction integral is

$$\gamma\,u(X_2,Y_2)=i\sqrt{N_xN_y}\int_{-1}^{1}\!\!\int_{-1}^{1}u(X_1,Y_1)\exp\!\left\{-i\pi\left[N_x\!\left(AX_1^2-2X_1X_2+DX_2^2\right)+N_y\!\left(AY_1^2-2Y_1Y_2+DY_2^2\right)\right]\right\}dX_1\,dY_1,\qquad(3)$$

where the limits of integration are from −1 to 1, and $N_x=a_x^2/\lambda B$ and $N_y=a_y^2/\lambda B$ are the output aperture cavity Fresnel numbers.

Next we apply the Gaussian quadrature scheme to approximate the definite integral (Eq. (3)) by a discrete kernel weighted by Legendre polynomials, with the abscissas at the Legendre zeros [9]. The general form is

$$\gamma\,u_{mn}=\sum_{j,k}w_jw_k\,K(X_m,Y_n;X_j,Y_k)\,u_{jk},\qquad(4)$$

where the $w_j$ are the quadrature weights, $K$ is the resonator kernel broken into components, and $u_{jk}$ are the eigenmodes sampled at the quadrature nodes. A further simplification is made by letting $v_{jk}=\sqrt{w_jw_k}\,u_{jk}$. Application of these substitutions to Eq. (4) provides the symmetric eigenvalue equation

$$\gamma\,v_{mn}=\sum_{j,k}\sqrt{w_mw_nw_jw_k}\,K(X_m,Y_n;X_j,Y_k)\,v_{jk}.\qquad(5)$$

The three-dimensional Huygens–Fresnel kernel then takes the separable form $K=K_xK_y\,r\,e^{i\varphi}$, where $K_x$ and $K_y$ carry the quadratic phase factors of Eq. (3), $\varphi(X,Y)$ is the phase aberration, and the reflectivity profile $r(X,Y)$ provides a means to study variable reflectivity mirrors [10]; otherwise, the values of $r$ are unity over the mirror diameter, representing a hard edge.
3. Aberrated optical resonator studies

To demonstrate the generality of the kernel formulation, we calculated the first nine three-dimensional mode intensity distributions of a semi-confocal stable resonator. The semi-confocal stable resonator configuration selected is described by the following parameters: $g_1=1$, $g_2=0.5$, $L=10$ m, and $\lambda=1$ μm. Results of the three-dimensional kernel calculation are shown in Fig. 1. The three-dimensional mode order is from left to right, top to bottom, or inferred from the number of nulls along each axis. The beam waist of the fundamental mode is 1.03 times the analytic value. The mode beam waist ratios along each axis are 1.03, 1.50, 1.79, which are in reasonable agreement with the analytic formulation [11] considering the mesh size used. The axial intensity plots are presented adjacent to the modes they represent. The amplitude of the local maxima is sensitive to the aperture diameter used to calculate the Fresnel number. The calculation used a diameter of 4 beam waists. Note that the lowest loss and higher order stable resonator modes are determined for a particular working two-dimensional aperture. The calculation does not require the implementation of an artificial aperture, obscuration, or segmented mirror (i.e., added artificial loss) to individually activate higher order modes, as would be required in a Fox–Li type calculation [8]. This numerical method more accurately describes the physics of large aperture stable resonators.

The case of most widespread practical importance in high power laser oscillators is the positive branch confocal unstable resonator [12]. To further the validity of this method, we will now present graphically the results of the kernel calculation of a confocal unstable resonator. For the confocal case, the resonator can be characterized by the round-trip geometrical magnification $M$ and the equivalent Fresnel number $N_{eq}$. The equivalent Fresnel number [2] is defined by

$$N_{eq}=\frac{M^2-1}{2M}\,N,$$

where $N$ is the collimated Fresnel number of the output aperture, and the fractional power loss per round trip for a particular eigenmode is given by $\Gamma=1-|\gamma|^2$.

For the unstable resonator calculation we chose a two-dimensional aperture based on the strip resonators reviewed in Rensch and Chester [13], having resonator parameters $g_1=0.852$, $g_2=1.21$, $M=1.42$, and $N_{eq}=0.52$. We present the transverse mode resulting from an aberrated cavity. Unstable resonator mode intensities are normalized to unity and length scales are normalized consistent with Ref. [13]. The four optical distortions studied are tilt, focus, astigmatism, and coma, shown in Fig. 2.

Figure 3 shows the lowest loss mode calculated using the Fox–Li iterative method via a fast Fourier transform for comparison with the Huygens–Fresnel kernel results detailed in Fig. 4.
The two methods show good agreement. The higher order modes are presented in Fig. 4(ii)–(v) for the unaberrated and aberrated cases. For the kernel calculations, the field beyond the output coupler mirror was determined by a round-trip Huygens–Fresnel integral propagation of the resultant eigenmode. The magnitudes of the aberrations were kept small in this study to ensure the validity of a thin sheet approximation. The starting and ending reference plane (and phase sheet) can be located anywhere within the cavity via a modification of Eq. (2). The lowest loss kernel and Fox–Li results are in agreement to within 2x10
Polonsky, “Numerical Interpolation, Differentiation, and Integration,” in Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, M. Abramowitz and I. A. Stegun, eds. (Dover, 1972) pp. 875–924. 10. V. Magni, G. Valentini, and S. De Silvestri, “Recent developments in laser resonator design,” Opt. Quantum Electron. 23(9), 1105–1134 (1991). [CrossRef] 11. H. Kogelnik and T. Li, “Laser beams and resonators,” Appl. Opt. 5(10), 1550–1567 (1966). [CrossRef] [PubMed] 12. W. F. Krupke and W. R. Sooy, “Properties of an unstable confocal resonator CO2 laser system,” IEEE J. Quantum Electron. 5(12), 575–586 (1969). [CrossRef] 13. D. B. Rensch and A. N. Chester, “Iterative diffraction calculations of transverse mode distributions in confocal unstable laser resonators,” Appl. Opt. 12(5), 997–1010 (1973). [CrossRef] [PubMed] OCIS Codes (140.3410) Lasers and laser optics : Laser resonators (140.4780) Lasers and laser optics : Optical resonators ToC Category: Lasers and Laser Optics Original Manuscript: June 27, 2011 Revised Manuscript: August 16, 2011 Manuscript Accepted: August 18, 2011 Published: September 23, 2011 F. X. Morrissey and H. P. Chou, "Mode calculations in asymmetrically aberrated laser resonators using the Huygens–Fresnel kernel formulation," Opt. Express 19, 19702-19707 (2011) Sort: Year | Journal | Reset 1. A. E. Siegman, “Unstable optical resonators for laser applications,” Proc. IEEE53(3), 277–287 (1965). [CrossRef] 2. A. E. Siegman and R. W. Arrathoon, “Modes in unstable optical resonators and lens waveguides,” IEEE J. Quantum Electron.3(4), 156–163 (1967). [CrossRef] 3. A. E. Siegman and H. Y. Miller, “Unstable optical resonator loss calculations using the prony method,” Appl. Opt.9(12), 2729–2736 (1970). [CrossRef] [PubMed] 4. R. L. Sanderson and W. Streifer, “Unstable laser resonator modes,” Appl. Opt.8(10), 2129–2136 (1969). [CrossRef] [PubMed] 5. P. Horwitz, “Asymptotic theory of unstable resonator modes,” J. Opt. Soc. Am.63(12), 1528–1543 (1973). [CrossRef] 6. L. W. Chen and L. B. Felsen, “Coupled-mode theory of unstable resonators,” IEEE J. Quantum Electron.9(11), 1102–1113 (1973). [CrossRef] 7. A. G. Fox and T. Li, “Resonant modes in a maser interferometer,” Bell Syst. Tech. J.40, 453–488 (1961). 8. P. J. Davis and I. Polonsky, “Numerical Interpolation, Differentiation, and Integration,” in Handbook of Mathematical Functions: with Formulas, Graphs, and Mathematical Tables, M. Abramowitz and I. A. Stegun, eds. (Dover, 1972) pp. 875–924. 9. V. Magni, G. Valentini, and S. De Silvestri, “Recent developments in laser resonator design,” Opt. Quantum Electron.23(9), 1105–1134 (1991). [CrossRef] 10. H. Kogelnik and T. Li, “Laser beams and resonators,” Appl. Opt.5(10), 1550–1567 (1966). [CrossRef] [PubMed] 11. W. F. Krupke and W. R. Sooy, “Properties of an unstable confocal resonator CO2 laser system,” IEEE J. Quantum Electron.5(12), 575–586 (1969). [CrossRef] OSA is able to provide readers links to articles that cite this paper by participating in CrossRef's Cited-By Linking service. CrossRef includes content from more than 3000 publishers and societies. In addition to listing OSA journal articles that cite this paper, citing articles from other participating publishers will also be listed. « Previous Article | Next Article »
{"url":"http://www.opticsinfobase.org/oe/fulltext.cfm?uri=oe-19-20-19702&id=222736","timestamp":"2014-04-16T19:58:41Z","content_type":null,"content_length":"180157","record_id":"<urn:uuid:d6623387-3c5f-4c8e-b3c2-a11888b67c16>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Topology Proof? October 7th 2007, 02:48 PM #1 Aug 2007 Topology Proof? How can you prove sets how can u prove the following sets are are open, a. the left half place {z: Re z > 0 }; b. the open disk D(z0,r) for any $z_0 \varepsilon C$ and r > 0. a. how can u prove the following set is a closed set: D(z0, r) MY WORKING SO FAR 1.. could you please give me a hint on how to start a and b as ive researched but still havent got much of an idea. once i get a little hint then ill try solving and show you my working.. if D(z0,r) is closed, this implies C\S (the compliment) is open. Therefore, for any z not belonging to the set, there is an e > 0 such that D(z,e) C C\S. This further implies z is not a limit point of S which means that it is a closed set? is this correct proof for 2a?? What are you expected to do to show a set is open? I ask because 1b is often used as a basic open set. Therefore, you must be using some other definition. hmm well my definition for an open set is : the set S is said to be open if intS = S (int = interior) is this what you had in mind?? how about 2a.. have i done that bit correctly? The statement that $z_0 \in {\mathop{\rm int}} (S)$ means that $\left( {\exists r > 0} \right)\left[ {\left\{ {z:\left| {z - z_0 } \right| < r} \right\} \subset S} \right]$. Then for #1a, choose $r = \frac{{{\mathop{\rm Re}olimits} (z_0 )}}{2}$. This is a Post Script. For 1b, by definition an open disk is its own interior. Last edited by Plato; October 7th 2007 at 04:40 PM. October 7th 2007, 03:21 PM #2 October 7th 2007, 03:40 PM #3 Aug 2007 October 7th 2007, 04:27 PM #4
{"url":"http://mathhelpforum.com/calculus/20133-topology-proof.html","timestamp":"2014-04-23T17:35:48Z","content_type":null,"content_length":"40905","record_id":"<urn:uuid:615c23aa-a081-4f77-bb68-aca09d540eba>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00585-ip-10-147-4-33.ec2.internal.warc.gz"}
Centreville, VA Prealgebra Tutor Find a Centreville, VA Prealgebra Tutor ...I also tutor outside of BITS other children in my neighborhood and have experience tutoring children with learning disabilities. I am a patient person as I realize all children are different and learn at various paces. I am a hard worker and I will put my best foot forward to not only help your... 12 Subjects: including prealgebra, Spanish, reading, grammar ...Other staff members had labeled this student “too stupid to learn fractions,” and didn't want to “waste their time” helping him. After several days where I worked with this student one-on-one, mostly using techniques to help him visualize what the fractions represented, he began to develop confi... 17 Subjects: including prealgebra, reading, English, biology ...I like to get feedback from my students often in order to improve their experience with tutoring continuously and to be able to cater to their specific needs. I have gotten tremendous satisfaction from seeing my students' grades improve and from hearing positive feedback from them (including con... 40 Subjects: including prealgebra, English, reading, chemistry ...I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help students understand better course materials and what is integral in extracting information from problems and solving them. I would like to see students try solving problems on their own first and treat me with respect so that it can be reciprocated. 17 Subjects: including prealgebra, chemistry, physics, calculus ...I have hundreds of hours of experience with more than 50 students. I have worked with four Fairfax County Public Schools science teachers and have picked up many of their teaching techniques. I am familiar with many of the student oriented educational sites available on the web, and have worked with students on the Jefferson Labs, NovaNET and Khanacademy computer assisted learning 13 Subjects: including prealgebra, chemistry, physics, calculus Related Centreville, VA Tutors Centreville, VA Accounting Tutors Centreville, VA ACT Tutors Centreville, VA Algebra Tutors Centreville, VA Algebra 2 Tutors Centreville, VA Calculus Tutors Centreville, VA Geometry Tutors Centreville, VA Math Tutors Centreville, VA Prealgebra Tutors Centreville, VA Precalculus Tutors Centreville, VA SAT Tutors Centreville, VA SAT Math Tutors Centreville, VA Science Tutors Centreville, VA Statistics Tutors Centreville, VA Trigonometry Tutors Nearby Cities With prealgebra Tutor Annandale, VA prealgebra Tutors Burke, VA prealgebra Tutors Chantilly prealgebra Tutors Fairfax Station prealgebra Tutors Fairfax, VA prealgebra Tutors Herndon, VA prealgebra Tutors Manassas Park, VA prealgebra Tutors Manassas, VA prealgebra Tutors Mc Lean, VA prealgebra Tutors Oakton prealgebra Tutors Reston prealgebra Tutors Sterling, VA prealgebra Tutors Sully Station, VA prealgebra Tutors Vienna, VA prealgebra Tutors Woodbridge, VA prealgebra Tutors
{"url":"http://www.purplemath.com/centreville_va_prealgebra_tutors.php","timestamp":"2014-04-19T23:46:57Z","content_type":null,"content_length":"24514","record_id":"<urn:uuid:a0f68724-7e35-4a14-831b-6b05647163e4>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Does the weak approximation theorem hold for general topological fields? up vote 4 down vote favorite The weak approximation theorem states that given a field $F$ and nontrivial inequivalent absolute values $|\cdot|_1,\ldots,|\cdot|_n,$ and letting $F_i$ denote $F$ with the topology from $|\cdot|_i$, then the diagonal in $F_1 \times \ldots \times F_n$ is dense. So suppose now we have the same setup, except now the topologies don't necessarily come from absolute values (but do make $F$ a topological field, and are all still distinct and nondiscrete and of course Hausdorff). Does the result still hold? Related to this older question in that one way to come up with a counterexample for both simultaneously would be to find a field with two (nondiscrete, Hausdorff) topologies with one strictly finer than the other. add comment 1 Answer active oldest votes Well, I feel silly -- this is answered early on in Wiesław's "Topological Fields" now that I look. The answer is no, distinct (non-discrete) topologies on a field need not be independent, they can be comparable, or even incomparable but still dependent. For a simple example, take two values on the rationals; the topology generated by both together will still make the rationals a topological field, and is not discrete, and is up vote 2 down vote strictly finer than either of the ones you started with. I think you can recover it if you add some conditions weaker than coming from an absolute value, but, well, now you know where to look for this sort of thing... add comment Not the answer you're looking for? Browse other questions tagged gn.general-topology linear-algebra topological-groups uniform-spaces fields or ask your own question.
{"url":"https://mathoverflow.net/questions/74418/does-the-weak-approximation-theorem-hold-for-general-topological-fields","timestamp":"2014-04-21T15:21:52Z","content_type":null,"content_length":"51706","record_id":"<urn:uuid:2c6e840b-a5a8-4dda-bb13-fdc5ac0d99c2>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Section Modulus Section Modulus per unit length of wall (per/ft or per/m) This is the most important design criteria, as together with the allowable stress, it defines the carrying capacity (moment resistence M) of the section. Allowable stress is a percentage of the yield point of the steel. Accepted engineering practice in the USA is to design to 65% of the yield point (F[y]). The calculation for the minimum required section modulus is: S[min] = M[max] / .65 F[y] The calculation for the section modulus of the sheet piling section is: S = I / c (per unit of wall) Aside from section modulus, the reason why many other properties are listed in specification sheets (moment of inertia, area, etc.) is to support other design calculations the engineer might require. These other properties should not be used as primary design criteria as section modulus is the critical design parameter.
{"url":"http://www.sheet-piling.com/glossary/section_modulus_sheet.html","timestamp":"2014-04-16T19:20:36Z","content_type":null,"content_length":"2475","record_id":"<urn:uuid:2365f2b3-1092-4e73-bf6b-001eb1c0c8f3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
Center for Science Education: BEGINNING OUR DISCUSSION OF MOTION

When we meet again on 21 March, we will begin our discussion of motion. By March 21, I want you to read through chapters 2 and 3 in the text, and work on the problems I will give you below. These chapters will introduce you to the basic concepts of motion. One of the most important aspects of this material that you will face, and certainly your future students will face, will be the need to know the proper meaning of terms and use them appropriately. Part of the difficulty in this is that the terms we will encounter (speed, velocity, and acceleration) are all terms we know from everyday English, and the temptation will be great to rely on our "everyday" understanding of these words rather than the precise scientific definitions we will study.

Please read the text carefully to realize that speed and velocity are not interchangeable terms; they are certainly related, but are not equivalent. Speed measures only the rate or magnitude of motion, whereas velocity is a description of both rate and direction of travel. So, a statement like "30 mi/hr" is a description of speed, whereas the phrase "the winds are from the north at 25 mi/hr" is a statement of velocity, since it combines both rate of motion and direction of motion. Speed and velocity are examples of classes of objects or concepts known as scalars and vectors. Scalars are objects which are described by magnitude alone (i.e., a number without direction describes them fully), whereas vectors are described by both magnitude and direction. See if you can think of other examples of scalars and vectors.

One of the key concepts we will study in Chapters 2 and 3 will be acceleration. The everyday meaning of this term is clear; it is almost always used to mean "speed up", as in "this car will go from 0 to 60 in 3 secs flat". This is actually a fair description of one type of acceleration, since it describes how much the speed changes and the time in which that change occurs, but it is important to note that there are many types of acceleration. In physics, acceleration refers to any change of motion, and this includes any change in speed (either speeding up or slowing down) or any change in direction. Consider driving a car: suppose you wanted to make the car travel in a circle, what would you do to cause this to occur? Suppose you wanted to keep the car travelling in the same circle for five complete laps; imagine what you would have to do to accomplish that. Now, suppose at the end of the fifth lap, you let the steering wheel go back to its original position. What would be the motion of the car then? We will find that acceleration is a key concept in understanding forces (the subject of chapters 4-6). We will find that forces tend to cause accelerations, and the greater the force acting on an object, the greater its acceleration can be.

Chapters two and three introduce us to some of the simple calculations we will do in this course. In reviewing the calculations done in the book, please note how important it is to include all units in your calculations. For instance, in the example given above (the car going from zero to sixty in three seconds flat...), we can calculate the acceleration of the object, as long as we tighten up the language and include appropriate units. In everyday spoken English, the phrase "...zero to sixty..." almost certainly refers to the speed measured in miles/hour. Given this, we can use the simple definition of acceleration given on pp. 15-16 of the text:

acceleration = change in speed / time interval

acceleration = (60 mi/hr - 0 mi/hr) / 3 sec = 20 mi/hr/sec

This calculation and the resulting answer are noteworthy for a few reasons. First, notice how units are treated; it is clear that the calculation calls for a division, and there should be no anxiety that 60/3 = 20. However, notice that the division also extends to the units; dividing the units in the numerator (mi/hr) by the units in the denominator (sec) gives you the final result of mi/hr/sec (you would say this as 'miles per hour per second'). What does this strange looking unit mean? It means that the car is accelerating, i.e., changing its motion (and in this particular case, its speed) at the rate of 20 mi/hr for every second in the time interval under consideration.

Please read the text as assigned and for the first day back, 21 March, please turn in complete answers to the following. (In doing calculations, you must show all work and how you arrive at an answer, not merely present the answer.)

1. Suppose a car travels on a perfectly circular track at a constant speed of 20 mi/hr. Is the car accelerating? Explain why or why not.
2. Do numbers 27, 28, 33, 34, 42 from pp. 26-27 of the text.
3. Read no. 52/p. 27 in the activities section, and try to do this with a friend. Be sure to write down your results. (If you can't get this done before class, that's fine also.)
4. Numbers 19, 21, 29 and 30 from p. 41 in the text.

Please do not hesitate to contact me if you have any questions. Enjoy the rest of your break, and I look forward to seeing you on the 21st.

David B. Slavsky
Loyola University Chicago
Cudahy Science Hall, Rm. 404
1032 W. Sheridan Rd., Chicago, IL 60660
Phone: 773-508-8352
IES 310 phone: 773-508-2149
dslavsk@luc.edu
David Slavsky Home
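(For those who like to check such unit arithmetic with a computer, here is a short Python version of the worked example above; the conversion factor 0.44704 m/s per mi/hr is standard, and everything else comes straight from the example.)

Python:

# "0 to 60 (mi/hr) in 3 seconds flat"
delta_speed = 60.0 - 0.0        # change in speed, in mi/hr
time_interval = 3.0             # time interval, in seconds

acceleration = delta_speed / time_interval
print(acceleration)             # 20.0 mi/hr per second, as above

# The same rate in SI units (1 mi/hr = 0.44704 m/s):
print(acceleration * 0.44704)   # about 8.94 m/s^2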
{"url":"http://www.luc.edu/faculty/dslavsk/courses/ntsc105/classnotes/march10.shtml","timestamp":"2014-04-19T14:30:48Z","content_type":null,"content_length":"9227","record_id":"<urn:uuid:485c89c4-6e55-4b12-af27-1e8b4efff24e>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Videos for change - Homework Help Videos - Brightstorm

How to estimate the instantaneous rate of change of the amount of a drug in a patient's bloodstream by computing average rates of change over shorter and shorter intervals of time, and how to represent this rate of change on a graph.
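The idea in the video title is easy to try numerically. Below is an illustrative Python sketch; the drug model A(t) = 100·e^(-0.2t) (mg after t hours) is made up for the illustration and is not the function used in the video.

Python:

import math

def A(t):                         # assumed drug amount in the bloodstream, mg
    return 100 * math.exp(-0.2 * t)

t = 2.0
for h in [1.0, 0.1, 0.01, 0.001]:
    avg_rate = (A(t + h) - A(t)) / h    # average rate over [t, t + h]
    print(h, avg_rate)

# The averages settle toward the instantaneous rate A'(2) = -0.2 * A(2):
print(-0.2 * A(t))                      # about -13.41 mg/hr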
{"url":"http://www.brightstorm.com/tag/change/","timestamp":"2014-04-17T01:17:28Z","content_type":null,"content_length":"67822","record_id":"<urn:uuid:292e048a-6c8d-4582-a9f0-bf755b0c0dd5>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00485-ip-10-147-4-33.ec2.internal.warc.gz"}
Critical points

October 12th 2009, 03:53 PM

$F(x) = \frac{p-4}{p^2+2}$

f(x) = p-4
f'(x) = -4
g(x) = $p^2 + 2$
g'(x) = 2p

$F'(x) = \frac{f'(x)g(x)-f(x)g'(x)}{g(x)^2}$

$F'(x)= \frac{-6p^2+8p-8}{(p^2+2)^2}$

I understand that a critical number is where F'(x) is either 0 or DNE. But as far as I can tell, neither the bottom nor the top can be equal to 0, which I think neglects both cases.

October 12th 2009, 06:01 PM

the derivative of $p-4$ is not $-4$

you also did the quotient rule incorrectly.

October 12th 2009, 06:07 PM

For one thing, you have $F(x)=\frac{p-4}{p^2+2}$. With respect to $x$, this is a constant function (and therefore has no critical points). I'm not sure if that's what you meant, but with the case $F(x)=\frac{x-4}{x^2+2}$, we have:

$F'(x)=\frac{x^2+2-2x(x-4)}{(x^2+2)^2}=\frac{-x^2+8x+2}{(x^2+2)^2}$

Using the quadratic formula on the numerator, we get that $F$ has two critical points (with horizontal tangent lines) at:

$x = 4 \pm 3\sqrt{2}$

$F'(x)$ is defined everywhere, so there are no vertical tangents. On a related note, it only makes sense to define a vertical tangent of $F$ at $x=a$ when:

1) $F'(a)$ is undefined.
2) $F(a)$ is defined.

Otherwise, you'll just get the equation for the asymptote. Take, for example, $F(x)=\sqrt{x}$ and investigate the behavior of the tangent line at $x=0$.
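A quick symbolic check of the corrected derivative and the critical numbers (using SymPy; this just re-does the algebra above):

Python:

import sympy as sp

x = sp.symbols('x')
F = (x - 4) / (x**2 + 2)

Fp = sp.simplify(sp.diff(F, x))   # the quotient rule, done symbolically
print(Fp)                         # (-x**2 + 8*x + 2)/(x**2 + 2)**2, up to form

print(sp.solve(sp.Eq(Fp, 0), x))  # [4 - 3*sqrt(2), 4 + 3*sqrt(2)]
# The denominator (x**2 + 2)**2 is never zero, so F' exists everywhere.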
{"url":"http://mathhelpforum.com/calculus/107636-critical-points-print.html","timestamp":"2014-04-19T20:38:20Z","content_type":null,"content_length":"12298","record_id":"<urn:uuid:04459197-c4af-462e-94e0-b812a3a04f55>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00228-ip-10-147-4-33.ec2.internal.warc.gz"}
Bellflower, CA Prealgebra Tutor

Find a Bellflower, CA Prealgebra Tutor

...I graduated with a BS in Microbiology, the scientific study of germs and other very tiny things, from California State University Long Beach. I also have a minor in Chemistry. Think of the classic mad scientist stereotype concocting new medicines or flavors, safely and sanely.
19 Subjects: including prealgebra, English, reading, biology

I am an Animal Veterinary Science major at Cal Poly Pomona. Available Friday-Monday unless otherwise specified. My strong areas are in natural and physical sciences, intermediate math, biology and chemistry.
10 Subjects: including prealgebra, chemistry, biology, algebra 1

I am currently enrolled at Cal Poly Pomona. This summer I would like to help out anyone who is struggling with math, particularly elementary math, pre-algebra, and Algebra 1; and maybe also some reading. I like helping kids out with topics that may be tricky for them in a way that is innovative, f...
4 Subjects: including prealgebra, algebra 1, precalculus, elementary math

...These days, when I'm not tutoring students, I can be found on The Disney Channel playing one! I get along with just about everybody, and I expect nothing but the best from and for my students. I know my stuff and you can, too!
26 Subjects: including prealgebra, English, reading, writing

...I have been a Science Department chair and Science Fair Coordinator. I love teaching and sharing my love of learning. I am a credentialed multiple subject teacher in the state of California. I did student teaching for a Korean Bilingual 2nd grade class. I passed CSET-LOTE 3 & LOTE 5 (Korean). I am a qualified teacher in the state of California.
19 Subjects: including prealgebra, reading, chemistry, ESL/ESOL

Related Bellflower, CA Tutors
Bellflower, CA Accounting Tutors Bellflower, CA ACT Tutors Bellflower, CA Algebra Tutors Bellflower, CA Algebra 2 Tutors Bellflower, CA Calculus Tutors Bellflower, CA Geometry Tutors Bellflower, CA Math Tutors Bellflower, CA Prealgebra Tutors Bellflower, CA Precalculus Tutors Bellflower, CA SAT Tutors Bellflower, CA SAT Math Tutors Bellflower, CA Science Tutors Bellflower, CA Statistics Tutors Bellflower, CA Trigonometry Tutors
{"url":"http://www.purplemath.com/bellflower_ca_prealgebra_tutors.php","timestamp":"2014-04-20T21:20:09Z","content_type":null,"content_length":"24112","record_id":"<urn:uuid:023d6ea3-25b8-4d0d-9608-30de6088a9c2>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00441-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with Transverse Wave Equation

1. The problem statement, all variables and given/known data

Two vibrating sources emit waves in the same elastic medium. The first source has a frequency of 25 Hz, while the 2nd source's frequency is 75 Hz. Waves from the first source have a wavelength of 6.0 m. They reflect from a barrier back into the original medium, with an angle of reflection of 25 degrees. Waves from the second source refract into a different medium with an angle of incidence of 35 degrees. The speed of the refracted wave is observed to be 96 m/s.

I need to find the speed of waves from the second source in the original medium, and whether the angle of refraction of the waves from the second source is greater than, less than, or equal to 35 degrees as they enter the different medium.

2. Relevant equations

The universal wave equation is V = (f)(L)
V = speed
f = frequency
L = wavelength

also f = 1/T
T = period (time)
f and T are reciprocals of each other

3. The attempt at a solution

Given: f = 75 Hz, refracted v = 96 m/s, angle of incidence = 35 degrees
Required: speed (v), wavelength (L)
Analysis: V = (f)(L) = (75)(?)

I just don't know how to find either the v or the L, as this is the only formula I've been given and I can't use it with two variables. I tried making a graph with the angles but didn't have enough information. I don't know if I can somehow use the angles to calculate the refracted wave's original speed or if there's a way to use the refracted wave's speed to calculate the original speed. This is a grade eleven physics problem and I am so frustrated with this and can't see a way to solve it. If someone could even just help me find the wavelength of the second source that would help a lot.
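One standard way through this (a sketch, not from the original thread): since both sources vibrate the same elastic medium, their waves share one speed there, and the first source pins that speed down via v = f·L. Snell's law for waves, sin(θ1)/sin(θ2) = v1/v2, then settles the refraction question.

Python:

import math

v_medium = 25 * 6.0                 # v = f * wavelength = 150 m/s (both sources)
wavelength_2 = v_medium / 75        # 2.0 m for the second source

theta_1 = math.radians(35)
sin_theta_2 = math.sin(theta_1) * 96 / v_medium   # slower medium: bends toward normal
theta_2 = math.degrees(math.asin(sin_theta_2))

print(v_medium, wavelength_2, round(theta_2, 1))  # 150.0  2.0  21.5 (< 35 degrees)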
{"url":"http://www.physicsforums.com/showthread.php?t=456731","timestamp":"2014-04-17T21:28:22Z","content_type":null,"content_length":"23775","record_id":"<urn:uuid:42114255-ce8c-4eb2-80f8-4936d7800b5b>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00316-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert Hours to Decimals. Convert Hours into a Decimal Number.
Light, easy training on calculating hours and minutes in Excel.

Convert conventional hours into a decimal number. Use a calculator or Excel. Here is how:

Question: How do I convert conventional hours into a decimal number?

Short Answer:
To convert hours using a calculator: divide the minutes by 60.
To convert hours using Excel: change the cell format to "Number" with 2 decimal places and multiply it by 24.

Download our Decimal conversion chart

Long answer:
If you need to pay a worker in a hurry, and would like to use a calculator, here is what you can do to convert the time to a decimal number:
Step 1: Divide the minute portion of the time worked by 60.
Step 2: Add the decimal number obtained to the hour portion.

Example: My worker worked for 4 hr 23 min at a rate of $13.50. How much do I pay him?
Step 1: Divide: 23 / 60 = 0.38
Step 2: Add: 4 + 0.38 = 4.38
To pay your worker, multiply 4.38 by $13.50.

If you like to use Excel:
Step 1: In cell A1, enter the time as hh:mm.
Step 2: In cell B1, type =(A1)*24.
Step 3: Change the format of cell B1 to "Number" with 2 decimal places.

Example:
Step 1: In cell A1, type 4:23.
Step 2: In cell B1, type =(A1)*24.
Step 3: Change the format of cell B1 to "Number" with 2 decimal places. You will now see 4.38.
To pay your worker, enter in cell C1: =(B1*13.50)

It's the learn Fizz that does the Bizz.

Template Library
Confused? Download our ready to use Excel Templates to convert hours. See our Templates

Time Card Calculator
Free Online Timecard Calculator. Easy and fast to calculate timesheets. Try it out!
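The same two steps, written as a tiny Python helper (the function name is ours):

Python:

def hours_to_decimal(hhmm):
    hours, minutes = hhmm.split(":")
    return int(hours) + int(minutes) / 60   # Step 1: minutes/60; Step 2: add

worked = hours_to_decimal("4:23")           # 4.3833...
print(round(worked, 2))                     # 4.38
print(round(worked * 13.50, 2))             # about 59.18 (pay at $13.50/hr)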
{"url":"http://www.calculatehours.com/excel-how-to/convert-hours-to-decimal.html","timestamp":"2014-04-20T11:01:30Z","content_type":null,"content_length":"18755","record_id":"<urn:uuid:f0055fc0-e2a6-420a-ad4b-631fc21aa2ed>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00300-ip-10-147-4-33.ec2.internal.warc.gz"}
Modified Box Plot Help + Simple Standard Deviation

January 20th 2011, 04:26 PM #1

Hey guys, so I have 3 box plot diagrams given to me, and I have to answer a few questions on them without knowing what the actual raw data is:

1) Which one of the 3 distributions is most likely to have a mean that is greater than the median? (my guess is C because the Interquartile Range is the biggest amongst the 3)
2) Which one has the smallest maximum? (my guess is A because B has outliers and C looks a lot bigger)
3) Which one has the smallest value of Q3? (my guess is A)

I just want someone to confirm if I'm right on these, thanks.

For Standard Deviation: Consider a sample of 3 observations, with a standard deviation of s = 2. If the smallest two observations are 10 and 12, what is the value of the third observation? Is 14 right? Because it says 10 and 12 are the lesser of the 3 observations.

January 21st 2011, 11:59 PM #2

I don't think so; think about what the histograms might look like. A distribution with positive skew will have a mean greater than the median. So which looks as though it might have the biggest positive skew?

January 22nd 2011, 12:07 AM #3

I think you should revise what a box and whisker plot actually shows; your answer to 3 shows this need. Also for Q2 you need to check what they are actually asking for, probably which has the greatest upper whisker which ignores the outliers, or are outliers to be included?
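On the standard-deviation part, the guess of 14 checks out under the usual sample (n-1) convention: setting s = 2 leads to the quadratic x^2 - 22x + 112 = 0 with roots 8 and 14, and 8 is ruled out because 10 and 12 must be the smallest two observations. A one-line check in Python:

Python:

import statistics
print(statistics.stdev([10, 12, 14]))   # sample standard deviation -> 2.0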
{"url":"http://mathhelpforum.com/statistics/168908-modified-box-plot-help-simple-standard-deviation.html","timestamp":"2014-04-23T21:08:47Z","content_type":null,"content_length":"39629","record_id":"<urn:uuid:d7f36bde-617d-4653-bc24-cea146d96ed5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Ewing Township, NJ Math Tutor

Find an Ewing Township, NJ Math Tutor

Hi! My name is Varun, and I am currently an Applied Mathematics Major at The College of New Jersey. I have been tutoring Mathematics and the Sciences for about 6 years now, covering subjects ranging from but not limited to Algebra I, Chemistry, Physics, Calculus and Linear Algebra.
26 Subjects: including algebra 1, Microsoft Word, Microsoft PowerPoint, study skills

...I then changed my career and became a teacher. I currently teach high school level math, chemistry and physics at a private school. I love to teach and help people understand challenging...
15 Subjects: including differential equations, linear algebra, algebra 1, algebra 2

...I have master's degree work in labor economics, financial analysis and game theory. I have the utmost confidence in my ability to relate this material in a comprehensible manner to the student. Furthermore, I am proficient in econometrics, having taught several students how to use SAS and STATA to perform regressions and analyses.
19 Subjects: including calculus, Microsoft Excel, precalculus, statistics

...I instruct students on how to be better readers, how to connect texts to each other and to the greater human canon, and how to compose thoughtful analyses of works of literature. Via both my educational background and my experience as a teacher of college-level Literature and Composition, I have...
15 Subjects: including SAT math, English, reading, writing

...I will not give up on my students and I will do everything in my power to see them succeed. I am currently free after 2 o'clock Monday through Friday, and I'm available on Saturday as well. I look forward to hearing from my future students.
20 Subjects: including algebra 1, algebra 2, calculus, probability

Related Ewing Township, NJ Tutors
Ewing Township, NJ Accounting Tutors Ewing Township, NJ ACT Tutors Ewing Township, NJ Algebra Tutors Ewing Township, NJ Algebra 2 Tutors Ewing Township, NJ Calculus Tutors Ewing Township, NJ Geometry Tutors Ewing Township, NJ Math Tutors Ewing Township, NJ Prealgebra Tutors Ewing Township, NJ Precalculus Tutors Ewing Township, NJ SAT Tutors Ewing Township, NJ SAT Math Tutors Ewing Township, NJ Science Tutors Ewing Township, NJ Statistics Tutors Ewing Township, NJ Trigonometry Tutors

Nearby Cities With Math Tutor
Bensalem Math Tutors Cherry Hill Math Tutors Cherry Hill Township, NJ Math Tutors Edison, NJ Math Tutors Lawrence, NJ Math Tutors Levittown, PA Math Tutors Middletown Twp, PA Math Tutors Morrisville, PA Math Tutors New Brunswick Math Tutors Piscataway Math Tutors Plainfield, NJ Math Tutors Trenton, NJ Math Tutors Washington Crossing Math Tutors West Trenton, NJ Math Tutors Yardley, PA Math Tutors
{"url":"http://www.purplemath.com/Ewing_Township_NJ_Math_tutors.php","timestamp":"2014-04-19T17:28:01Z","content_type":null,"content_length":"24165","record_id":"<urn:uuid:cbf74274-26a6-40ea-a026-413445ad0985>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
kinematics, branch of physics and a subdivision of classical mechanics concerned with the geometrically possible motion of a body or system of bodies without consideration of the forces involved (i.e., causes and effects of the motions). A brief treatment of kinematics follows. For full treatment, see mechanics. Kinematics aims to provide a description of the spatial position of bodies or systems of material particles, the rate at which the particles are moving (velocity), and the rate at which their velocity is changing (acceleration). When the causative forces are disregarded, motion descriptions are possible only for particles having constrained motion—i.e., moving on determinate paths. In unconstrained, or free, motion, the forces determine the shape of the path. For a particle moving on a straight path, a list of positions and corresponding times would constitute a suitable scheme for describing the motion of the particle. A continuous description would require a mathematical formula expressing position in terms of time. When a particle moves on a curved path, a description of its position becomes more complicated and requires two or three dimensions. In such cases continuous descriptions in the form of a single graph or mathematical formula are not feasible. The position of a particle moving on a circle, for example, can be described by a rotating radius of the circle, like the spoke of a wheel with one end fixed at the centre of the circle and the other end attached to the particle. The rotating radius is known as a position vector for the particle, and, if the angle between it and a fixed radius is known as a function of time, the magnitude of the velocity and acceleration of the particle can be calculated. Velocity and acceleration, however, have direction as well as magnitude; velocity is always tangent to the path, while acceleration has two components, one tangent to the path and the other perpendicular to the tangent.
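As a concrete, purely illustrative instance of the circular-motion description above: suppose the angle grows as theta(t) = omega*t + (1/2)*alpha*t^2, with assumed values for omega and alpha. The velocity magnitude and the two acceleration components then follow directly.

Python:

R = 2.0                       # circle radius (assumed)
omega, alpha = 1.5, 0.3       # assumed angular rate (rad/s) and its growth (rad/s^2)

t = 1.0
w = omega + alpha * t         # angular rate at time t, from the known angle function
speed = R * w                 # velocity magnitude, tangent to the path
a_tangent = R * alpha         # acceleration component along the tangent (speed change)
a_normal = R * w**2           # component perpendicular to the tangent (direction change)
print(speed, a_tangent, a_normal)   # 3.6  0.6  6.48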
{"url":"http://www.britannica.com/print/topic/318099","timestamp":"2014-04-20T07:16:55Z","content_type":null,"content_length":"9201","record_id":"<urn:uuid:f14e335c-7797-4040-b5e6-31a85f3226e3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
A rocket is launched upward..

June 12th 2008, 06:00 PM #1

A rocket is launched upward..

A rocket is launched upward from the top of a 30-foot tall platform. The height of the rocket in feet, $h(t)$, at time $t\geq0$, is given by the equation

$h(t)= -16t^2+256t+30$

a) At what time is the rocket at a maximum height?
b) What is the maximum height in feet that the rocket can reach?

Can you show me how to do these?

June 12th 2008, 06:45 PM #2

This problem is really asking about the coordinates of the vertex. The x-coordinate of the vertex represents time (t) and the y-coordinate represents height (h). To find the x-coordinate of the vertex of a quadratic of the form $y=ax^2 + bx + c$, use: $x=- \frac{b}{2a}$. Once you've found the x-coordinate, that represents the time at which the object reaches its highest point. To find the height AT THAT TIME, simply plug in the x-coordinate to your equation (for t) and you'll get out the height.
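The same vertex arithmetic in a few lines of Python, for checking:

Python:

a, b, c = -16, 256, 30                  # h(t) = -16 t^2 + 256 t + 30

t_max = -b / (2 * a)                    # a) time of maximum height
h_max = a * t_max**2 + b * t_max + c    # b) the maximum height

print(t_max, h_max)                     # 8.0 seconds, 1054.0 feet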
{"url":"http://mathhelpforum.com/algebra/41444-rocket-launced-upward.html","timestamp":"2014-04-19T13:31:46Z","content_type":null,"content_length":"32482","record_id":"<urn:uuid:454fb811-eee5-47df-b042-addedc28e674>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Getting Better at Guessing

Students will work in groups to examine a jar of coins to make both "guesstimations" and more precise estimations.

• Students will practice estimation skills.
• Students will learn how to make better estimations.
• Students will work in groups and discuss information about their guesses and estimates.

Major Subject Area Connections
• Kindergarten
• First grade
• Second grade

Class Time
Sessions: One
Session Length: 30-45 minutes
Total Length: 0-45 minutes

Terms and Concepts
• Estimation
• Logical Reasoning
• Penny

• (Plastic peanut butter) jar filled with 75 to 100 pennies
• Math journals
• Class chart divided into 2 columns labeled "Our Penny Guesstimates" and "Our Penny Estimates"
• Sticky notes, a different color for each group

• Fill a jar with pennies, counted.
• Make a chart with 2 columns labeled "Our Penny Guesstimates" and "Our Penny Estimates"

1. Display a clear jar filled with pennies in the classroom a few days before the lesson starts. Let the students make observations about the pennies in the jar but not take the pennies out of the jar.
2. Divide students into groups with 3 to 5 members.
3. Have each group come up with a group guesstimate for the number of pennies in the jar. Each group can have a chance to examine the jar before making their guesstimate.
4. Have each group write their guesstimate on a sticky note and place it on a classroom chart.
5. Remove about half of the pennies from the jar. Have the class decide how to determine about half.
6. Count out the half of the pennies that were removed.
7. Allow each group to make an estimate based on this new information. Have each group put their estimate on a different colored sticky note and add it to the classroom chart. Keep the group's guesstimate and estimate next to each other.
8. Have the students in their groups answer the following questions. The students can write their group answers in their math journal.
□ What is the order of the guesstimates from lowest to highest?
□ What is the order of the estimates from lowest to highest?
□ What is the range for the guesstimates?
□ What is the range for the estimates?
□ Find the differences between the guesstimates and estimates for each group.
9. Have students present their answers to the class.
10. Pick a group to count the total number of pennies in the jar and add the actual number to the chart.
11. Discuss as a class different strategies for making better estimates.

• Students can make simple word problems using the data on the classroom chart and the information they found in their groups.
• Students can do research to find out what they could purchase with the exact amount of pennies in the jar.

Assess the groups' learning based on the information in their math journals and on the group presentation they gave when they answered the above questions. Give grades to the groups.

Discipline: Math
Domain: K.CC Counting and Cardinality
Grade(s): Grade K
Cluster: Compare numbers
• K.CC.6. Identify whether the number of objects in one group is greater than, less than, or equal to the number of objects in another group, e.g., by using matching and counting strategies.
• K.CC.7. Compare two numbers between 1 and 10 presented as written numerals.

Discipline: Math
Domain: K.CC Counting and Cardinality
Grade(s): Grade K
Cluster: Know number names and the count sequence
• K.CC.1. Count to 100 by ones and by tens.
• K.CC.2.
Count forward beginning from a given number within the known sequence (instead of having to begin at 1). • K.CC.3. Write numbers from 0 to 20. Represent a number of objects with a written numeral 0-20 (with 0 representing a count of no objects). Discipline: Math Domain: K.CC Counting and Cardinality Grade(s): Grade K Cluster: Count to tell the number of objects • K.CC.4. Understand the relationship between numbers and quantities; connect counting to cardinality. □ When counting objects, say the number names in the standard order, pairing each object with one and only one number name and each number name with one and only one object. □ Understand that the last number name said tells the number of objects counted. The number of objects is the same regardless of their arrangement or the order in which they were counted. □ Understand that each successive number name refers to a quantity that is one larger. • K.CC.5. Count to answer "how many?" questions about as many as 20 things arranged in a line, a rectangular array, or a circle, or as many as 10 things in a scattered configuration; given a number from 1-20, count out that many objects. Discipline: Mathematics Domain: K-2 Number and Operations Cluster: Compute fluently and make reasonable estimates. Grade(s): Grades K–2 In K through grade 2 all students should • develop and use strategies for whole-number computations, with a focus on addition and subtraction; • develop fluency with basic number combinations for addition and subtraction; and • use a variety of methods and tools to compute, including objects, mental computation, estimation, paper and pencil, and calculators. Discipline: Mathematics Domain: K-2 Number and Operations Cluster: Understand meanings of operations and how they relate to one another. Grade(s): Grades K–2 In K through grade 2 all students should • understand various meanings of addition and subtraction of whole numbers and the relationship between the two operations; • understand the effects of adding and subtracting whole numbers; and • understand situations that entail multiplication and division, such as equal groupings of objects and sharing equally. Discipline: Mathematics Domain: All Reasoning and Proof Cluster: Instructional programs from kindergarten through grade 12 should enable all students to Grade(s): Grades K–2 • Recognize reasoning and proof as fundamental aspects of mathematics • Make and investigate mathematical conjectures • Develop and evaluate mathematical arguments and proofs • Select and use various types of reasoning and methods of proof Discipline: Mathematics Domain: K-2 Number and Operations Cluster: Understand numbers, ways of representing numbers, relationships among numbers, and number systems. Grade(s): Grades K–2 In K through grade 2 all students should • count with understanding and recognize "how many" in sets of objects; • use multiple models to develop initial understandings of place value and the base-ten number system; • develop understanding of the relative position and magnitude of whole numbers and of ordinal and cardinal numbers and their connections; • develop a sense of whole numbers and represent and use them in flexible ways, including relating, composing, and decomposing numbers; • connect number words and numerals to the quantities they represent, using various physical models and representations; and • understand and represent commonly used fractions, such as 1/4, 1/3, and 1/2. 
Discipline: Mathematics Domain: All Communication Cluster: Instructional programs from kindergarten through grade 12 should enable all students to Grade(s): Grades K–2 • organize and consolidate their mathematical thinking through communication • communicate their mathematical thinking coherently and clearly to peers, teachers, and others; • analyze and evaluate the mathematical thinking and strategies of others; and • use the language of mathematics to express mathematical ideas precisely.
{"url":"http://www.usmint.gov/kids/teachers/lessonPlans/viewLP.cfm?id=10","timestamp":"2014-04-20T13:32:47Z","content_type":null,"content_length":"38967","record_id":"<urn:uuid:e88b4951-c8be-460c-b413-736c29c9396e>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00167-ip-10-147-4-33.ec2.internal.warc.gz"}
Blog year 2010 in review

December 30, 2010
By Pat

The blog year started in August and consists of 30-something posts. Here is a summary.

Quant concepts

A performance step beyond "Economists' Hubris" points out that random portfolios are a more powerful method of performance measurement than the method that is suggested in the "Economists' Hubris" paper (though that method is probably pretty good).

The volatility puzzle solved? suggests that perhaps the reason that low volatility stocks have a higher expected return than high volatility stocks is because hardly anyone pays attention to volatility when selecting stocks.

The decision between active and passive investment is explored in Freeloading turnstile jumpers.

Market structure

Elevated stock correlations is a short discussion of fund manager opportunity and its changes through time.

Deflation, inflation and blown tires points to an interesting analogy regarding inflation.

Anomalies meet volatility discusses a paper by Bernd Scherer about low volatility portfolios. I wonder if the French/Fama anomalies found to be associated with low volatility portfolios are caused by the inattention to volatility (as in The volatility puzzle solved?).

Were stock returns really better in 2007 than 2008? suggests the surprising proposition that stock returns might have been as good in 2008 as in 2007. This same theme is continued in Bear hunting which tries to find periods that were bear markets.

Primitive stock markets is an invitation to find better ways of structuring markets.

The ARORA guessing game points to a fun game where you decide which of two series is market data. It also includes a hypothesis of why we can tell the difference.

Market ethics

Are momentum strategies antisocial? The title asks the question. I'm not sure of the answer.

The good side of inside trading points to an article that shows inside trading to be a more ambiguous subject than we would expect.

Psychic fund management unfortunately exists.

Making science happen is about the Blackawton bee study. If 8 year-olds can do science, why can't the fund management industry do more of it?

The book reviews were: Table 1 provides a quick view of my recommendations.

Table 1: Recommendations of reviewed books.

title                    author             recommended
Brain Rules              John Medina        yes
Drive                    Daniel Pink        yes
The Happiness Equation   Nick Powdthavee    no
Obliquity                John Kay           yes
The R Book               Michael Crawley    generally no
The Quants               Scott Patterson    maybe

R

R is a name you need to know declares Forbes magazine. In this instance I believe them to be correct. Apparently R appears in the hardcopy magazine published in late December.

Some quibbles about "The R Book" by Michael Crawley
If you have the book, I suggest that you have a glance at this post.

Ideas for World Statistics Day includes some ideas concerning R.

Bear hunting shows some R code and also points to R code you can download. One function in particular may be useful: a function that nicely plots data over years or decades.

A tale of two returns has R code for computing returns and for creating a particular plot.

The following posts include minor amounts of R code:

Posts related to statistics were:

Feeding a greedy algorithm explains the idea of greedy algorithms and their use.

Clever versus simple risk management discusses why risk modeling is hard.

Most popular

Most under-valued

Most read is not at all the same as most important. Here is my list of the posts that have been given less attention than they deserve:

Most disappointing

American TV does cointegration is about cointegration. It is also about the Fringe television series. I would have thought that Fringe geeks would relish this post which explains an otherwise unintuitive aspect of the show. Apparently not.

Thank you

I'd like to thank those who have read the blog over the past few months. Special thanks go to those who commented. (However, spammers are free to resist commenting.)

All the best in the coming year.
{"url":"http://www.r-bloggers.com/blog-year-2010-in-review/","timestamp":"2014-04-20T01:05:04Z","content_type":null,"content_length":"47872","record_id":"<urn:uuid:c55c1fab-5476-4890-8525-64245d7ac978>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: Problem when importing a file

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From: Nick Cox <njcoxstata@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: Problem when importing a file
Date: Thu, 22 Dec 2011 10:43:53 +0000

Another take on the question is that this should not be happening. For example, go back to R and check out their foreign package, which (if I recall correctly) many people use to export Stata .dta files.

On Thu, Dec 22, 2011 at 1:02 AM, Nick Cox <njcoxstata@gmail.com> wrote:
> You can fix the first line by giving 5 names not 4. It looks as if
> there is a blank second line. Delete it. Do both of these things in
> any decent text editor.
> You can -drop- any variable (which you are calling a column) once you
> have the data in Stata. That's the least of your problems.
> As for the rest, there are "some end of line issues". I think you need
> to say what they are.
> Nick
> On Wed, Dec 21, 2011 at 10:27 PM, Shubhabrata Mukherjee
> <joy_stat@yahoo.com> wrote:
>> I am trying to import a text file (created in R) into Stata and having problems with it. Can someone tell me a way to solve this:
>> The text file looks like the following:
>> V1 V2 A3 A4
>> 1 0.015 0.986 0.010 0.001
>> 2 0.085 0.975 0.000 0.002
>> 3 0.095 0.900 0.010 0.001
>> .........
>> ......
>> ...
>> I used the command insheet using chunk1.txt, delim(" ")
>> Two problems occur
>> 1) Somehow I need to delete the numbers in the first column from the second row onwards (1, 2, 3...) as they are not part of the dataset but were created by the text file as placeholders for each row.
>> 2) At the end there are two variables created ('v110' and 'v111') where v110 contains the values which should be under A4 and 'v111' gets created because of some end of line issues.
>> How can I fix these?

* For searches and help try:
* http://www.stata.com/help.cgi?search
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
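For comparison with the Stata-side fixes above, here is a hedged Python/pandas sketch of reading such an R-exported file (the file name comes from the thread; everything else is illustrative). pandas skips blank lines by default, and because the header row has four names while each data row has five fields, it uses the leading row-name column as the index automatically, so nothing has to be dropped by hand:

Python:

import pandas as pd

df = pd.read_csv("chunk1.txt", sep=r"\s+")   # whitespace-delimited
print(df.columns.tolist())                   # ['V1', 'V2', 'A3', 'A4']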
{"url":"http://www.stata.com/statalist/archive/2011-12/msg00777.html","timestamp":"2014-04-16T13:24:02Z","content_type":null,"content_length":"9357","record_id":"<urn:uuid:a2aaefa8-8236-462d-a327-c3cd0ca56237>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Size of Tyria [Archive] - Guild Wars 2 - GWOnline

Ranger Nietzsche
31-08-2006, 22:11

Even though my last thread on the topic (http://forums.gwonline.net/showthread.php?t=400768) didn't get much of a response, I decided to continue my estimations of the size of parts of our GW world.

This was my method. I found a long, uninterrupted distance in Old Ascalon to run. I picked it because of the ease of killing things in my way. One assumption I made is that vertical difference doesn't change the horizontal speed.* I ran for precisely 20 seconds. Next I found the distance ratio between how far I ran and a larger stretch on the zoomed-in World Map (1:7).** Then I found the ratio between that stretch and the overall dimensions of the world map (1:16.5 vert and 1:19.5 horiz).*** Some math later, I found the time to run the full length of the map to be 38.5 minutes vertically, and 45.5 minutes horizontally.

Now, this corresponds to an incredibly small map. The fastest marathons (unofficial records here) ever run are around 2 hours. So, given that our characters never get tired etc., the world map is about 8 miles across, and slightly less vertically.

End message: This place is tiny, but larger than Cantha by a ratio of about 1.35:1. This ratio is an additional confirmation of the accuracy of my determinations, as the aspect ratios of the two maps should be equal, and then the ratio of vertical to vertical should equal the ratio of horizontal to horizontal. In my case they differ by about .1, which is not too bad considering my methods.

EDIT: Added Pictures

*I made this assumption based on a few trials, and on the fact that there is no Z-axis in the Guild Wars universe: the map moves under the player, and the character model isn't actually moving at all.
**Zoomed in World Map
***World Map
****Error analysis, see other thread, the error analysis is the same.
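The ratio arithmetic in the post is easy to replay (numbers taken from the post; the final mileage depends on the assumed running pace, so it is only order-of-magnitude):

Python:

run_seconds = 20                          # timed straight-line run in Old Ascalon
stretch_ratio = 7                         # run distance : zoomed-map stretch
vert_ratio, horiz_ratio = 16.5, 19.5      # stretch : full world map

vert_minutes = run_seconds * stretch_ratio * vert_ratio / 60
horiz_minutes = run_seconds * stretch_ratio * horiz_ratio / 60
print(vert_minutes, horiz_minutes)        # 38.5 and 45.5, as in the post

# At roughly marathon pace (26.2 mi in about 120 min):
print(horiz_minutes * 26.2 / 120)         # ~9.9; the post's 8 miles assumes a slower pace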
{"url":"http://guildwars.incgamers.com/forums/archive/index.php/t-418999.html","timestamp":"2014-04-18T15:48:33Z","content_type":null,"content_length":"22985","record_id":"<urn:uuid:0b87ff2b-7bef-4fd6-b282-5b4440c228d0>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
The Electronical Rattle Bag

In last week's post, we saw how the motion of particles that move along straight lines creates the illusion of a spinning circle. This time we actually let the individual particles move in circular paths and observe various patterns that result when the relative phase of each particle is varied. Here, "phase" just means where along the circular path a certain particle is when compared to the others.

In the first animation, each of the particles arrives at the edge of the black circle at the same time to create the effect of a spinning and contracting/expanding circle.

In the second animation, the particles are phased just right to create the illusion of a circle that slides along the edge of the black circle. This is similar to the Tusi motion from the previous post except in this instance the circle doesn't spin.

In the third animation, the phases are adjusted to make it seem like the particles move along a straight line that spins around, but really each particle is still only moving along a circular path. This is a somewhat opposite effect from the Tusi motion where the particles were always moving along straight lines.

Inspired by the not-Tusi-couple.

Mathematica code:

Manipulate[
 Graphics[{
   {Black, Disk[{0, 0}, 1.05]},
   Table[
    Rotate[{White, Opacity[o], Circle[{.525, 0}, .525]},
     n*2 Pi/m, {0, 0}], {n, 1, m, 1}],
   Table[
    Rotate[{White,
      Disk[.525 {1 + Cos[-2 Pi (p*n/m + t)], Sin[-2 Pi (p*n/m + t)]},
       .02]},
     n*2 Pi/m, {0, 0}], {n, 1, m, 1}]},
  PlotRange -> 1.1, ImageSize -> 500],
 {{m, 8, "circles"}, 1, 20, 1},
 {{o, .5, "path opacity"}, 1, 0},
 {{p, 0, "phase"}, 0, 2, 1},
 {t, 0, 1}]

Reblogged from HAX PAX MAX.
{"url":"http://burningfp.tumblr.com/tagged/math","timestamp":"2014-04-16T13:02:15Z","content_type":null,"content_length":"78320","record_id":"<urn:uuid:74efeac4-e30f-45fd-9ec1-74f4df3da05c>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help

April 16th 2008, 08:13 PM #1

A, B, C are sets and f: A-->B and g: B-->C. Prove:

1) If f and g are one-to-one, then so is g o f.
2) If g o f is one-to-one, then g need not be one-to-one.

Any advice?

April 16th 2008, 08:21 PM #2

1) Of course you know what to prove: gof(x) = gof(y) $\rightarrow$ x = y.
Since g(f(x)) = g(f(y)) and g is one-one, we have f(x) = f(y). Since f is also 1-1, we have x = y.

2) is nice, I will give you a general hint, try it.

General Hint: Draw a few blobs (actually 3, they are the sets). Mark some points in them (they are the elements). Now map points from one blob to another, remembering the conditions in the data. Try your best to prove the question wrong. You will see why it must be right.

Geometric intuition is your best pal, learn to use him.
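For part 2, a concrete counterexample (ours, not from the thread) makes the hint explicit: take A = {1}, B = {1, 2}, C = {1}, with f(1) = 1 and g(1) = g(2) = 1. Then g o f is trivially one-to-one, since its domain has a single element, but g sends 1 and 2 to the same value. A two-line check in Python:

Python:

f = {1: 1}            # f: A -> B with A = {1}, B = {1, 2}
g = {1: 1, 2: 1}      # g: B -> C with C = {1}; g is NOT one-to-one

gof = {a: g[f[a]] for a in f}
print(gof)            # {1: 1} -- injective, because |A| = 1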
{"url":"http://mathhelpforum.com/discrete-math/34829-functions.html","timestamp":"2014-04-19T16:15:24Z","content_type":null,"content_length":"32809","record_id":"<urn:uuid:570bbf44-f1c9-42ac-a048-858871acfc58>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
Markov chain model

March 26th 2010, 10:32 AM #1

The manufacture of a certain type of electronic board consists of four steps: tinning, forming, insertion, and solder. After the forming step, 5% of the parts must be re-tinned; after the insertion step, 20% of the parts are bad and must be scrapped; and after the solder step, 30% of the parts must be returned to insertion, 10% must be scrapped, and the remaining ones are stored in the "ready to ship" area. We assume that when a part is returned to a processing step, it is treated like any other part entering the step.

(a) Model this process as a Markov chain and give its transition matrix.
(b) What is the fraction of parts that end up scrapped?
(c) How many boards should we start with if the goal is to have the expected number of boards that finish in the good category equal to at least 100?
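The thread stops at the problem statement, so here is a hedged sketch of one way to set it up (the six-state ordering and the assumption that every board starts at tinning are ours). With Q the transient block and R the absorbing block, the absorption probabilities B = (I - Q)^(-1) R answer (b) and (c):

Python:

import numpy as np

# States: 0 tin, 1 form, 2 insert, 3 solder, 4 scrap, 5 good
P = np.array([
    [0.00, 1.00, 0.00, 0.00, 0.00, 0.00],   # tinning -> forming
    [0.05, 0.00, 0.95, 0.00, 0.00, 0.00],   # 5% re-tinned after forming
    [0.00, 0.00, 0.00, 0.80, 0.20, 0.00],   # 20% scrapped after insertion
    [0.00, 0.00, 0.30, 0.00, 0.10, 0.60],   # solder: 30% back, 10% scrap
    [0.00, 0.00, 0.00, 0.00, 1.00, 0.00],   # scrap (absorbing)
    [0.00, 0.00, 0.00, 0.00, 0.00, 1.00],   # good  (absorbing)
])

Q, R = P[:4, :4], P[:4, 4:]
B = np.linalg.solve(np.eye(4) - Q, R)       # absorption probabilities
p_scrap, p_good = B[0]                      # starting from tinning
print(p_scrap, p_good)                      # 7/19 ~ 0.368 and 12/19 ~ 0.632
print(int(np.ceil(100 / p_good)))           # (c): 159 boards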
{"url":"http://mathhelpforum.com/advanced-statistics/135809-markov-chain-model.html","timestamp":"2014-04-18T11:57:05Z","content_type":null,"content_length":"29371","record_id":"<urn:uuid:7d5d7cfa-30d4-49ea-86d4-23713588b795>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
Generalized degree spectrum

A standard graph invariant is the degree sequence, but it is well known that the degree sequence is not a complete graph invariant, i.e. a graph cannot be reconstructed uniquely from its degree sequence. That means the degree sequence contains too little information about the graph. But what about generalizations of the degree sequence, containing more information but still relying on degrees only, i.e. counting?

Let $G$ be an undirected graph with $n$ nodes.

[B^0] Consider the set $D^0 = [n]^{[1]}$ of all functions $d^0:[1] \rightarrow [n]$ and assign to each $d^0 \in D^0$ the number of vertices with degree $d^0_i = d^0$ (precisely $d^0_i = d^0(1)$).

[C^0] This yields a function $D_G^1: D^0 \rightarrow [n]$, which bears the same information as the degree sequence. Let's call it degree spectrum. Note that it takes into account only the 1-neighbourhood of each node.

[A^1] Consider for each node $v_i$ the function $d^1_i: D^0 \rightarrow [n]$ assigning to each $d^0 \in D^0$ the number of its neighbours with degree $d^0_j = d^0$.

[B^1] Consider the set $D^1 = [n]^{D^0}$ of all functions $d^1: D^0 \rightarrow [n]$ and assign to each $d^1 \in D^1$ the number of vertices with $d^1_i = d^1$.

[C^1] This yields a function $D_G^2: D^1 \rightarrow [n]$, which is another graph invariant, taking into account the 2-neighbourhood of each node.

This process can be continued:

[A^k+1] Consider for each node $v_i$ the function $d^{k+1}_i: D^k \rightarrow [n]$ assigning to each $d^k \in D^k$ the number of its neighbours with $d^k_j = d^k$.

[B^k+1] Consider the set $D^{k+1} = [n]^{D^k}$ of all functions $d^{k+1}: D^k \rightarrow [n]$ and assign to each $d^{k+1} \in D^{k+1}$ the number of vertices with $d^{k+1}_i = d^{k+1}$.

[C^k+1] This yields a function $D_G^{k+2}: D^{k+1} \rightarrow [n]$, which is another graph invariant, taking into account the (k+2)-neighbourhood of each node.

Question: Has this kind of generalized degree spectrum already been investigated? Under which name? If it has not been investigated already I will feel free to continue this post, otherwise I will stop here.

3 Answers

This sort of thing occurs under the name of "stable coloring" in Section 2.2 of Martin Otto's book "Bounded Variable Logics and Counting: A Study in Finite Models". (I don't have the book handy at the moment, so I'm copying the reference from a paper that cites it; I hope it's correct.) I vaguely recall having seen other names for this construction as well, probably the names of the inventor (or re-inventor), but I don't remember the name; you can probably find a reference in Otto's book.

Thanks to Andreas' hint I found this slide show by Martin Fürer: Combinatorial Methods for the Graph Isomorphism Problem. On slide 4 he treats vertex classification, which is similar in spirit to what I tried to sketch here (but expressed in three lines only):

Start: Color the vertices by their degree.
Loop: Color the vertices by the multiset of colors of their neighbors.
Stop: When the color partition stabilizes.

So I am going to stop this thread here.

For the record: A paper from which you can learn a lot about vertex classification, the $k$-dim Weisfeiler-Lehman method and its history is: An Optimal Lower Bound on the Number of Variables for Graph Identification (1992) by Jin-yi Cai, Neil Immerman, Martin Fürer.
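The three-line loop is easy to prototype; here is a small Python sketch of the refinement (the example graph and all names are illustrative):

Python:

from collections import Counter

def color_refinement(adj):
    color = {v: len(adj[v]) for v in adj}        # Start: color by degree
    while True:
        # Loop: recolor by (own color, multiset of neighbor colors)
        sig = {v: (color[v],
                   tuple(sorted(Counter(color[u] for u in adj[v]).items())))
               for v in adj}
        palette = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: palette[sig[v]] for v in adj}
        if len(set(new.values())) == len(set(color.values())):
            return new                            # Stop: partition stabilized
        color = new

adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}      # a path on 4 vertices
print(color_refinement(adj))                      # {0: 0, 1: 1, 2: 1, 3: 0}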
{"url":"http://mathoverflow.net/questions/38796/generalized-degree-spectrum/39288","timestamp":"2014-04-20T03:29:15Z","content_type":null,"content_length":"58287","record_id":"<urn:uuid:c6ba7f70-d835-4fb1-b616-78b6c17cd9bc>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00339-ip-10-147-4-33.ec2.internal.warc.gz"}
Extracting randomness: A survey and new constructions Results 1 - 10 of 67 - 34 [DRS07] [DS05] [EHMS00] [FJ01] Yevgeniy Dodis, Leonid Reyzin, and Adam , 2004 "... We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying mater ..." Cited by 292 (35 self) Add to MetaCart We provide formal definitions and efficient secure techniques for • turning noisy information into keys usable for any cryptographic application, and, in particular, • reliably and securely authenticating biometric data. Our techniques apply not just to biometric information, but to any keying material that, unlike traditional cryptographic keys, is (1) not reproducible precisely and (2) not distributed uniformly. We propose two primitives: a fuzzy extractor reliably extracts nearly uniform randomness R from its input; the extraction is error-tolerant in the sense that R will be the same even if the input changes, as long as it remains reasonably close to the original. Thus, R can be used as a key in a cryptographic application. A secure sketch produces public information about its input w that does not reveal w, and yet allows exact recovery of w given another value that is close to w. Thus, it can be used to reliably reproduce error-prone biometric inputs without incurring the security risk inherent in storing them. We define the primitives to be both formally secure and versatile, generalizing much prior work. In addition, we provide nearly optimal constructions of both primitives for various measures of “closeness” of input data, such as Hamming distance, edit distance, and set difference. - Journal of the ACM , 2001 "... A “randomness extractor ” is an algorithm that given a sample from a distribution with sufficiently high min-entropy and a short random seed produces an output that is statistically indistinguishable from uniform. (Min-entropy is a measure of the amount of randomness in a distribution). We present a ..." Cited by 107 (30 self) Add to MetaCart A “randomness extractor ” is an algorithm that given a sample from a distribution with sufficiently high min-entropy and a short random seed produces an output that is statistically indistinguishable from uniform. (Min-entropy is a measure of the amount of randomness in a distribution). We present a simple, self-contained extractor construction that produces good extractors for all min-entropies. Our construction is algebraic and builds on a new polynomial-based approach introduced by Ta-Shma, Zuckerman, and Safra [TSZS01]. Using our improvements, we obtain, for example, an extractor with output length m = k/(log n) O(1/α) and seed length (1 + α) log n for an arbitrary 0 < α ≤ 1, where n is the input length, and k is the min-entropy of the input distribution. A “pseudorandom generator ” is an algorithm that given a short random seed produces a long output that is computationally indistinguishable from uniform. Our technique also gives a new way to construct pseudorandom generators from functions that require large circuits. Our pseudorandom generator construction is not based on the Nisan-Wigderson generator [NW94], and turns worst-case hardness directly into pseudorandomness. 
The parameters of our generator match those in [IW97, STV01] and in particular are strong enough to obtain a new proof that P = BP P if E requires exponential size circuits. - In Proceedings of the 33rd Annual ACM Symposium on Theory of Computing , 2001 "... Abstract Trevisan showed that many pseudorandom generator constructions give rise to constructionsof explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map using only O(log n) additional random bitsthat maps n bi ..." Cited by 89 (20 self) Add to MetaCart Abstract Trevisan showed that many pseudorandom generator constructions give rise to constructionsof explicit extractors. We show how to use such constructions to obtain explicit lossless condensers. A lossless condenser is a probabilistic map using only O(log n) additional random bitsthat maps n bits strings to poly(log K) bit strings, such that any source with support size Kis mapped almost injectively to the smaller domain. Our construction remains the best lossless condenser to date.By composing our condenser with previous extractors, we obtain new, improved extractors. For small enough min-entropies our extractors can output all of the randomness with only O(log n) bits. We also obtain a new disperser that works for every entropy loss, uses an O(log n)bit seed, and has only O(log n) entropy loss. This is the best disperser construction to date,and yields other applications. Finally, our lossless condenser can be viewed as an unbalanced - Journal of the ACM , 1999 "... We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain. ..." Cited by 87 (5 self) Add to MetaCart We introduce a new approach to constructing extractors. Extractors are algorithms that transform a "weakly random" distribution into an almost uniform distribution. Explicit constructions of extractors have a variety of important applications, and tend to be very difficult to obtain. - In Proceedings of the 31st Annual ACM Symposium on Theory of Computing , 1999 "... We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log² n) additional random bits, and can extract all the min-entropy using O(log³ n) additional rando ..." Cited by 78 (16 self) Add to MetaCart We give explicit constructions of extractors which work for a source of any min-entropy on strings of length n. These extractors can extract any constant fraction of the min-entropy using O(log&sup2; n) additional random bits, and can extract all the min-entropy using O(log&sup3; n) additional random bits. Both of these constructions use fewer truly random bits than any previous construction which works for all min-entropies and extracts a constant fraction of the min-entropy. We then improve our second construction and show that we can reduce the entropy loss to 2 log(1=") +O(1) bits, while still using O(log&sup3; n) truly random bits (where entropy loss is defined as [(source min-entropy) + (# truly random bits used) (# output bits)], and " is the statistical difference from uniform achieved). This entropy loss is optimal up to a constant additive term. our... 
- In Proceedings of the 22nd Annual IEEE Conference on Computational Complexity , 2007 "... We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous ..." Cited by 77 (7 self) Add to MetaCart We give an improved explicit construction of highly unbalanced bipartite expander graphs with expansion arbitrarily close to the degree (which is polylogarithmic in the number of vertices). Both the degree and the number of right-hand vertices are polynomially close to optimal, whereas the previous constructions of Ta-Shma, Umans, and Zuckerman (STOC ‘01) required at least one of these to be quasipolynomial in the optimal. Our expanders have a short and self-contained description and analysis, based on the ideas underlying the recent list-decodable errorcorrecting codes of Parvaresh and Vardy (FOCS ‘05). Our expanders can be interpreted as near-optimal “randomness condensers, ” that reduce the task of extracting randomness from sources of arbitrary min-entropy rate to extracting randomness from sources of min-entropy rate arbitrarily close to 1, which is a much easier task. Using this connection, we obtain a new construction of randomness extractors that is optimal up to constant factors, while being much simpler than the previous construction of Lu et al. (STOC ‘03) and improving upon it when the error parameter is small (e.g. 1/poly(n)). - In Proceedings of the 37th Annual ACM Symposium on Theory of Computing , 2005 "... We show how to extract random bits from two or more independent weak random sources in cases where only one source is of linear min-entropy and all other sources are of logarithmic min-entropy. Our main results are as follows: 1. A long line of research, starting by Nisan and Zuckerman [15], gives e ..." Cited by 62 (6 self) Add to MetaCart We show how to extract random bits from two or more independent weak random sources in cases where only one source is of linear min-entropy and all other sources are of logarithmic min-entropy. Our main results are as follows: 1. A long line of research, starting by Nisan and Zuckerman [15], gives explicit constructions of seeded-extractors, that is, extractors that use a short seed of truly random bits to extract randomness from a weak random source. For every such extractor E, with seed of length d, we construct an extractor E ′ , with seed of length d ′ = O(d), that achieves the same parameters as E but only requires the seed to be of min-entropy larger than (1/2 + δ) · d ′ (rather than fully random), where δ is an arbitrary small constant. 2. Fundamental results of Chor and Goldreich and Vazirani [6, 22] show how to extract Ω(n) random bits from two (independent) sources of length n and min-entropy larger than (1/2 + δ) · n, where δ is an arbitrary small constant. We show how to extract Ω(n) random bits (with optimal probability of error) when only one source is of min-entropy (1/2 + δ) · n and the other source is of logarithmic min-entropy. 1 3. A recent breakthrough of Barak, Impagliazzo and Wigderson [4] shows how to extract Ω(n) random bits from a constant number of (independent) sources of length n and min-entropy larger than δn, where δ is an arbitrary small constant. 
We show how to extract Ω(n) random bits (with optimal probability of error) when only one source is of min-entropy δn and all other (constant number of) sources are of logarithmic min-entropy. 4. A very recent result of Barak, Kindler, Shaltiel, Sudakov and Wigderson [5] shows how to extract a constant number of random bits from three (independent) sources of length n and min-entropy larger than δn, where δ is an arbitrary small constant. We show how to extract Ω(n) random bits, with sub-constant probability of error, from one source of min-entropy δn and two sources of logarithmic min-entropy. - In EUROCRYPT, 2005 Cited by 60 (13 self) Abstract: We show two efficient techniques enabling the use of biometric data to achieve mutual authentication or authenticated key exchange over a completely insecure (i.e., adversarially controlled) channel. In addition to achieving stronger security guarantees than the work of Boyen, we improve upon his solution in a number of other respects: we tolerate a broader class of errors and, in one case, improve upon the parameters of his solution and give a proof of security in the standard model. 1 Using Biometric Data for Secure Authentication Biometric data, as a potential source of high-entropy, secret information, have been suggested as a way to enable strong, cryptographically secure authentication of human users without requiring them to remember or store traditional cryptographic keys. Before such data can be used in existing cryptographic protocols, however, two issues must be addressed: first, biometric data are not uniformly distributed and hence do not offer provable security guarantees if used … - STOC'03, 2003 Cited by 51 (12 self) Abstract: This paper provides the first explicit construction of extractors which are simultaneously optimal up to constant factors in both seed length and output length. More precisely, for every n, k, our extractor uses a random seed of length O(log n) to transform any random source on n bits with (min-)entropy k, into a distribution on (1 − α)k bits that is ɛ-close to uniform. Here α and ɛ can be taken to be any positive constants. (In fact, ɛ can be almost polynomially small.) Our improvements are obtained via three new techniques, each of which may be of independent interest. The first is a general construction of mergers [22] from locally decodable error-correcting codes. The second introduces new condensers that have constant seed length (and retain a constant fraction of the min-entropy in the random source). The third is a way to augment the “win-win repeated condensing” paradigm of [17] with error reduction techniques like [15] so that our constant seed-length condensers can be used without error accumulation.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=102303","timestamp":"2014-04-18T06:54:07Z","content_type":null,"content_length":"41429","record_id":"<urn:uuid:3f25412b-3b00-4aed-8e18-3c39b218c6e2>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Colloquium Schedule - Spring 2009 Tuesday, May 5, Location TBA at 2:30 pm Chris Wingard Department of Mathematics & Statistics University of Nevada Reno Exact Filtering of Measurement Errors in Dynamical Systems Abstract: Measurements of values in dynamical systems are often incomplete or accompanied by error. Incomplete measurements occur when not all of the relevant quantities in the system are measured. We provide a technique which recovers the full state of a dynamical system given an incomplete measurement. One type of error we consider is constant shifting, caused by systematic bias in the measuring device. Two other types of measuring device errors are amplification and attenuation, both of which result in a constant multiplicative rescaling of measurement. Lastly, we consider periodic noise, which occurs when a measured signal is obscured by another signal. For each kind of error, we provide a filtering method which recovers the full state of the dynamical system. Friday, April 24, AB 102, 2:30 pm Benito Chen Department of Mathematics University of Texas at Arlington Some bacterial growth models with randomness Abstract: In mathematical modeling of population growth, and in particular of bacterial growth, parameters are either measured directly or determined by curve fitting. These parameters have large variability that depends on the experimental method and its inherent error, on differences in the actual population sample size used, as well as other factors that are difficult to account for. In this work the parameters that appear in the Monod kinetics growth model are considered random variables with specified distributions. A stochastic spectral representation of the parameters is used, together with the polynomial chaos method, to obtain a system of differential equations, which is integrated numerically to obtain the evolution of the mean and higher-order moments with respect to time. Friday, April 24, AB 110, at 1:00 pm N. Christopher Phillips Department of Mathematics University of Oregon The existence of outer automorphisms of the Calkin algebra is undecidable in ZFC Abstract: Let H be a separable infinite dimensional Hilbert space. The Calkin algebra is the quotient of the algebra of all bounded operators on H by the ideal of compact operators on H. In 1977, in connection with extension theory for C*-algebras, the question of the existence of outer automorphisms of the Calkin algebra was raised. It turns out that this question is undecidable in ZFC (Zermelo-Fraenkel set theory plus the axiom of choice). Assuming the Continuum Hypothesis, joint work with Nik Weaver shows that outer automorphisms exist. Ilijas Farah has recently shown that it is consistent with ZFC that all automorphisms of the Calkin algebra are inner. This talk will primarily concentrate on the result with Weaver. It is intended for a general audience. Thursday, April 23, AB 102, 2:30 pm William Mitchell National Institute of Standards and Technology hp-Adaptive Finite Elements for the Schroedinger Equation Abstract: Recently the hp-version of the finite element method for solving partial differential equations has received increasing attention. This is an adaptive finite element approach in which adaptivity occurs in both the size, h, of the elements (spatial or h adaptivity) and in the order, p, of the approximating piecewise polynomials (order or p adaptivity).
The objective is to determine a distribution of h and p that minimizes the error using the least amount of work in some measure. The main attraction of hp adaptivity is that, in theory, the discretization error can decrease exponentially with the number of degrees of freedom, n, even in the presence of singularities, and this rate of convergence has been observed in practice. We apply adaptive finite element methods to a Schroedinger equation that models the interaction of two trapped atoms. Ultra-cold atoms can be held in the cells of an optical trap. The barriers between the cells can be lowered to allow the atoms to interact, causing entanglement and providing for one possible realization of a quantum gate for quantum computers. We present some preliminary computations with this model. Thursday, April 16, AB 106, 2:30 pm Ron Fintushel Department of Mathematics Michigan State University Smooth structures on 4-manifolds Abstract: I will talk about some of the basic problems in 4-manifold theory and approaches for trying to solve them. In the past few years the subject has had some major advances. I'll describe the history that led to these new results and show some ways to recreate them. This talk will be suitable for a general mathematical audience. Wednesday, April 15, AB 106, 4:00 pm Glen Hansen Multiphysics Methods Group Idaho National Laboratory MOOSE: A Parallel Solution Framework for Complex Multiscale Multiphysics Applications Abstract: The Multiphysics Methods Group is developing a software framework called MOOSE (Multiphysics Object Oriented Simulation Environment). MOOSE is based on a physics-based preconditioned Jacobian-free Newton Krylov (JFNK) approach to support rapid application development for engineering analysis and design. The framework is designed for tightly coupled solution of finite element problems, and provides a finite element library, input and output capabilities, mesh adaptation, and a set of parallel nonlinear solution methods. The JFNK abstraction results in a clean architecture for implementing a variety of multiphysics and multiscale problems. This talk begins with an overview of the architecture of MOOSE and a presentation of the JFNK solution method. Two representative examples are considered in detail: BISON, a nuclear fuel performance application, and PRONGHORN, a pebble bed nuclear reactor simulation code. BISON is a quasi-steady and transient application that currently couples models for fuel thermomechanics, oxygen diffusion, and fission product swelling. Further, BISON incorporates a mesoscale phase-field simulation for the calculation of fuel thermal conductivity in a nonlinearly consistent manner. PRONGHORN couples a neutron diffusion solution to a porous media flow model to simulate the behavior of a pebble bed reactor. Friday, April 10, AB-209 at 1:00 pm George Molchan Russian Academy of Sciences, Moscow Unilateral small deviations of self-similar Gaussian processes Abstract: Let x(s), with x(0) = E[x(s)] = 0, be a real-valued Gaussian self-similar random process with Hurst parameter H and d-dimensional time. We consider the problem of the asymptotic behavior of the probability p(T) that x(s) does not exceed a fixed positive level in a star-shaped expanding domain T·G as T goes to infinity; here G is a fixed domain that includes 0. Typically log p(T) = −θ (log T)^D (1 + o(1)) for T ≫ 1, and the problem is reduced to the following questions: existence, estimation, and explicit values of (θ, D).
We give a complete solution of the problem for the fractional Brownian motion (FBM). In this case D = 1 and θ = 1 − H if G = [0,1], or θ = d if G = {s : |s| < 1}. We prove the Li & Shao hypothesis about the existence of θ for the fractional Brownian sheet in the case d = D = 2 and G = [0,1]×[0,1]. We discuss the hypothesis that in the case of integrated FBM one has D = 1 and θ = H(1 − H) if G = [0,1], and θ = 1 − H if G = [−1,1]. This hypothesis is important for the analysis of the 1-d inviscid Burgers' equation with random initial data. The proof is known for θ = 1/2 only. Methods of analysis of the problem are presented as well. Thursday, April 9, AB-106 at 2:30 pm Deanna Needell Department of Mathematics UC Davis Algorithms in Compressed Sensing Abstract: Compressed sensing is a new and fast growing field of applied mathematics that addresses the shortcomings of conventional signal compression. Given a signal with few nonzero coordinates relative to its dimension, compressed sensing seeks to reconstruct the signal from few nonadaptive linear measurements. As work in this area developed, two major approaches to the problem emerged, each with its own set of advantages and disadvantages. The first approach, L1-Minimization, provided strong results, but lacked the speed of the second, the greedy approach. The greedy approach, while providing a fast runtime, lacked stability and uniform guarantees. This gap between the approaches has led researchers to seek an algorithm that could provide the benefits of both. Recently, we bridged this gap and provided a breakthrough algorithm, called Regularized Orthogonal Matching Pursuit (ROMP). ROMP is the first algorithm to provide the stability and uniform guarantees similar to those of L1-Minimization, while providing speed as a greedy approach. After analyzing these results, we developed the algorithm Compressive Sampling Matching Pursuit (CoSaMP), which improved upon the guarantees of ROMP. CoSaMP is the first algorithm to have provably optimal guarantees in every important aspect. This talk will provide an introduction to the area of compressed sensing and a discussion of these two recent developments. Thursday, February 26, AB-102 at 2:30 pm Cornelia Van Cott Department of Mathematics University of San Francisco Covering links and the slicing of Bing doubles Abstract: A link is slice if its components bound disjoint smooth disks in B^4. Showing that links are (or are not) slice is a difficult problem with a long history of deep results. In this talk, we will overview the history and motivation behind the study of slice links. Then we will focus on a particular class of links: iterated Bing doubles. We will see that many of the classical tools for showing links are slice break down for Bing doubles, but new results involving branched covers of S^3 have yielded progress. Thursday, February 19, AB-102 at 2:30 pm John Louie Professor of Geophysics College of Science, UNR Predicting Earthquake Shaking in Complex 3D Geology Abstract: Predicting the strength of ground shaking caused by an earthquake scenario is a task that depends on complex but fortunately mostly linear phenomena at the earthquake source, along the path between the earthquake and the urban area, and within the urban area. While Nevada structural geologists work on identifying possible earthquake sources and their likelihood, my work has involved assessing the effects of the many geological basins that pock the Nevada landscape, and on evaluating near-surface seismic properties in the urban areas.
Two essential tools for modeling earthquake scenarios are a Community Seismic Velocity Model, assembling geological, geophysical, and geotechnical knowledge into 3D grids allowing seismic computations; and a viscoelastic wave-propagation code implemented on a computing cluster. Developing and adapting such tools for use in Nevada, I have examined wave-propagation phenomena occurring in likely earthquake scenarios. Nonlinear effects may have minimal influence, though current efforts are assessing whether soil weakening that accompanies strong earthquake shaking may be computed in a fully separated manner. In realistic models of the Nevada crust, the largest shaking amplitudes are carried by Rayleigh surface waves, and are thus fundamentally affected by basin-edge geometry and shallow geotechnical properties. Earthquake-rupture directivity has a very strong effect on shaking predictions, adding uncertainty to hazard assessments. The many basins that are present between Nevada urban areas and potential earthquake source zones diffract and spread earthquake-source effects. The expected strong correlation between shaking amplitude and total basin depth can appear, but basin depths are not predictive of the areas of strongest shaking within basins. To date, our models of shaking from the Feb. 21, 2008 earthquake in Wells can be validated against some simple aspects of the data recorded from that event. Attempts to validate our models of the April 25, 2008 Mogul event have so far been completely unsuccessful. Additional information and downloadable wave-propagation movies for computers and cell phones are available at www.seismo.unr.edu/ma. Wednesday, January 28, EJCH 108H at 4:00 pm Henrik Nordmark Institute for Logic, Language and Computation Universiteit van Amsterdam, Netherlands. A short excursion into Math, Philosophy and Logic Abstract: In empirical sciences such as physics, biology or psychology, truth can be established by testing theories against reality. We conduct experiments which either refute or confirm our hypotheses. However, in mathematics truth is traditionally established via logical deductions from axioms, which we simply assume to be true. Now, if these axiom systems are supposed to describe some sort of mathematical reality in the same way that scientific theories describe physical reality, how is it that we know anything about this intangible mathematical reality in the first place? On the other hand, if we do not wish to commit ourselves to a mysterious platonic universe, then what is mathematics actually about? And why does it seem to be so useful and pervasive in science, engineering & finance? This talk is essentially an overview of some of the problems that arise in philosophy of mathematics and some of the attempts that have been made to answer these questions by different philosophical camps. No prior knowledge of philosophy of mathematics is presumed as I shall build up everything more or less from scratch. Some prior familiarity with set theory and symbolic logic is useful but not indispensable. Henrik Nordmark is a graduate student at the Institute for Logic, Language and Computation at the Universiteit van Amsterdam in the Netherlands. As an undergraduate, Henrik studied Mathematics, Psychology and Philosophy at the University of Nevada, Reno.
{"url":"http://www.unr.edu/math/student-resources/seminars/spring-2009","timestamp":"2014-04-21T02:05:22Z","content_type":null,"content_length":"39662","record_id":"<urn:uuid:598151ec-adb0-4bde-9aaf-19a6dd09572d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
A Simple Telescope: Worksheet Fall 2011 ASTR 110L, Sec. 2 Name: ________________ 1. Using the objective lens and some tissue paper fastened to the other end of the tube, form an image of the three colored lights. Then sketch the following: Real-life object orientation. Projected image orientation. 2. Using a distant light source, measure the focal length, F, of the large objective lens. Include units! Estimate the error in your measurement. 3. Using a distant light source, measure the focal length, f, of the small eyepiece lens. Include units! Estimate the error in your measurement. 4. Calculate the magnification, M, of the simple telescope. Show your formula and your work. Round off your final answer to an appropriate number of significant digits. This is the theoretical magnification that we expect the simple telescope to deliver. 5. Measure the magnification of the simple telescope by observing the 1-meter target set up in the hallway. This is the experimental magnification the telescope actually delivers. 6. Do your two magnifications (theoretical and experimental) agree with each other, given the possible errors in your measurements? If not, can you think of other sources of uncertainty? Joshua E. Barnes (barnes at ifa.hawaii.edu) Updated: 26 September 2011
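For reference, the standard thin-lens relation behind items 4-6 is given below. The worksheet expects students to supply this formula themselves, so treat it as an aside rather than part of the handout:

% Angular magnification of a simple two-lens (Keplerian) telescope,
% with objective focal length F and eyepiece focal length f:
$M = \dfrac{F}{f}$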
{"url":"http://www.ifa.hawaii.edu/~barnes/ast110l/simplescope_ws.html","timestamp":"2014-04-18T15:42:50Z","content_type":null,"content_length":"3329","record_id":"<urn:uuid:24cc4130-27c3-4c83-a628-87016b551538>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
1st Grade Math: Subtraction Help
Subtraction is essentially the opposite of addition. Subtraction finds the difference between two numbers, the minuend minus the subtrahend. If the minuend is larger than the subtrahend, the difference will be positive; if the minuend is smaller than the subtrahend, the difference will be negative; and if they are equal, the difference will be zero. Subtraction is neither commutative nor associative: for example, 7 − 3 = 4 but 3 − 7 = −4, and (7 − 3) − 2 = 2 while 7 − (3 − 2) = 6. For that reason, it is often helpful to look at subtraction as addition of the minuend and the opposite of the subtrahend, that is, a − b = a + (−b). When written as a sum, all the properties of addition hold.
Many 1st grade math students find subtraction difficult. They feel overwhelmed with subtraction homework, tests and projects. And it is not always easy to find a subtraction tutor who is both good and affordable. Now finding subtraction help is easy. For your subtraction homework, subtraction tests, subtraction projects, and subtraction tutoring needs, TuLyn is a one-stop solution. You can master hundreds of math topics by using TuLyn. At TuLyn, we have over 2000 math video tutorial clips, including subtraction videos, subtraction practice word problems, subtraction questions and answers, and subtraction worksheets. Subtraction videos replace text-based tutorials in 1st grade math books and give you better, step-by-step explanations of subtraction. Watch each video repeatedly until you understand how to approach subtraction problems and how to solve them.
• Tons of video tutorials on subtraction make it easy for you to better understand the concept.
• Tons of word problems on subtraction give you all the practice you need.
• Tons of printable worksheets on subtraction let you practice what you have learned in your 1st grade math class by watching the video tutorials.
How to do better on subtraction: TuLyn makes subtraction easy for 1st grade math students.
Do you need help with Subtraction of Whole Numbers in your 1st Grade Math class? Do you need help with Subtraction of Fractions in your 1st Grade Math class? Do you need help with Subtraction of Integers in your 1st Grade Math class? Do you need help with Subtraction of Decimals in your 1st Grade Math class? Do you need help with Properties of Subtraction in your 1st Grade Math class?
1st Grade: Subtraction Videos
Adding And Subtracting Decimals Mixed Review Example Video Clip Length: 3 minutes 16 seconds Video Clip Views: This clip solves a given decimal addition and subtraction question: 1.867 - 0.43 + 72.491 - 25.4 subtraction video clips for 1st grade math students.
1st Grade: Subtraction Worksheets
Free subtraction printable worksheets for 1st grade math students.
1st Grade: Subtraction Word Problems
Carla has 5 crayons. Nina has 3 crayons. The teacher gives Carla 3 more crayons and Nina 5 more ... subtraction homework help word problems for 1st grade math students.
1st Grade: Subtraction Practice Questions
subtraction homework help questions for 1st grade math students.
How Others Use Our Site
It is for my 5th grader and 1st grader and for us (parents) to help with teaching/math homework. Area, multiplication, subtraction, division, addition, fraction, and graphs.
{"url":"http://www.tulyn.com/1st-grade-math/subtraction","timestamp":"2014-04-16T22:30:38Z","content_type":null,"content_length":"17395","record_id":"<urn:uuid:9a23ae55-33f2-4542-9d8d-e690a4febfe4>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
Possible Answer
Define and set up your BPSK modulator object. ... This example creates binary data, modulates the data, and then displays the data using a scatter plot. ...
bpsk_ask example for modulation in matlab u can use it easily... Search and download open source project / source codes from CodeForge.com
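The page only preserves search snippets, so here is a minimal sketch of what such an example typically does — written in Python/NumPy rather than MATLAB, with all names and parameters chosen for illustration (none come from the quoted answers):

import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)

# Generate random binary data and BPSK-modulate it:
# bit 0 -> -1, bit 1 -> +1 (phases 180 and 0 degrees).
bits = rng.integers(0, 2, size=1000)
symbols = 2 * bits - 1

# Pass the symbols through an AWGN channel at a chosen Eb/N0.
eb_n0_db = 6
noise_std = np.sqrt(1 / (2 * 10 ** (eb_n0_db / 10)))  # per-dimension std
received = symbols + noise_std * (rng.standard_normal(symbols.size)
                                  + 1j * rng.standard_normal(symbols.size))

# Hard-decision demodulation and bit error rate.
decisions = (received.real > 0).astype(int)
ber = np.mean(decisions != bits)
print(f"BER at {eb_n0_db} dB Eb/N0: {ber:.4f}")

# Scatter plot of the received constellation, as in the MATLAB example.
plt.scatter(received.real, received.imag, s=4)
plt.title("Received BPSK constellation")
plt.show()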
{"url":"http://www.askives.com/bpsk-matlab-example.html","timestamp":"2014-04-17T18:46:05Z","content_type":null,"content_length":"35562","record_id":"<urn:uuid:40522318-d1fd-4bfb-ba80-7ab7b58eb2f0>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00504-ip-10-147-4-33.ec2.internal.warc.gz"}
Determine the moment of inertia of the semi-ellipsoid - (391414) | Transtutors
Determine the moment of inertia of the semi-ellipsoid with respect to the x axis and express the result in terms of the mass m of the semi-ellipsoid. The material has a constant density ρ.
Posted On: Nov 06 2013 01:28 PM Tags: Engineering, Civil Engineering, Others, College
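The figure referenced by the problem isn't reproduced here, so the labeling below is an assumption: take the solid obtained by revolving $y^2 = b^2(1 - x^2/a^2)$, $0 \le x \le a$, about the x axis. With thin circular disks as mass elements, a standard setup (a sketch, not a graded solution) gives:

$dm = \rho \pi y^2\, dx, \qquad dI_x = \tfrac{1}{2} y^2\, dm = \tfrac{1}{2}\rho\pi y^4\, dx$
$m = \rho\pi b^2 \int_0^a \left(1 - \tfrac{x^2}{a^2}\right) dx = \tfrac{2}{3}\rho\pi a b^2$
$I_x = \tfrac{1}{2}\rho\pi b^4 \int_0^a \left(1 - \tfrac{x^2}{a^2}\right)^2 dx = \tfrac{4}{15}\rho\pi a b^4 = \tfrac{2}{5}\, m\, b^2$

As a sanity check, setting a = b recovers the familiar $\tfrac{2}{5} m r^2$ of a solid hemisphere about its symmetry axis.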
{"url":"http://www.transtutors.com/questions/determine-the-moment-of-inertia-of-the-semi-ellipsoid-with-391414.htm","timestamp":"2014-04-21T07:11:31Z","content_type":null,"content_length":"69872","record_id":"<urn:uuid:f28cd5de-dbca-4bac-b620-75660fc77a0b>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00644-ip-10-147-4-33.ec2.internal.warc.gz"}
The NRICH Project aims to enrich the mathematical experiences of all learners. To support this aim, members of the NRICH team work in a wide range of capacities, including providing professional development for teachers wishing to embed rich mathematical tasks into everyday classroom practice. More information on many of our other activities can be found here.
{"url":"http://nrich.maths.org/public/leg.php?group_id=40&code=206","timestamp":"2014-04-17T06:57:32Z","content_type":null,"content_length":"30853","record_id":"<urn:uuid:a564dd81-a02e-4dd6-8938-d2b583d1e949>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Calculus Tutors Andover, MA 01810 Enthusiastic Explainer: Physics & Math ...That included a new course I designed with a math teacher that combined physics with pre-calculus, counting as two courses, meeting in back-to-back periods every day. 2. I was a physics major at Cornell. 3. After earning my master's degree in education, I took... Offering 10+ subjects including calculus
{"url":"http://www.wyzant.com/Windham_NH_calculus_tutors.aspx","timestamp":"2014-04-18T03:30:02Z","content_type":null,"content_length":"59223","record_id":"<urn:uuid:51253e78-c7ff-4a91-be26-4b9d64033a2e>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00419-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Positioned Layout proposal
From: Shelby Moore <shelby@coolpage.com>
Date: Thu, 21 Oct 2010 15:35:10 -0400
Message-ID: <1c7e311891996a756f942f297081b84c.squirrel@sm.webmail.pair.com>
To: shelby@coolpage.com
Cc: "Boris Zbarsky" <bzbarsky@mit.edu>, "www-style list" <www-style@w3.org>

>> On 10/21/10 12:32 PM, Shelby Moore wrote:
>>> Good point. I am glad we can get some initial thinking about quantification.
>>> The key may be how multi-level hierarchy is handled. I was supposing it
>>> is necessary that the hierarchy places some restrictions on combinations,
>>> as is the case with the CSS box model.
>> It's reasonably common to have thousands to tens of thousands of
>> siblings in the CSS box model.
> But they probably have a repeating pattern, and I am supposing such
> repeating patterns will present an opportunity for optimization.
> It is very unlikely to have maximum entropy (randomization) among 10,000
> elements in a page. I doubt anything could be comprehended from the page,
> because by definition, there is no Shannon-entropy information in that
> case of maximum entropy.
> What CSS currently does is model some of the most popular repeating
> patterns, e.g. table cols and rows.
> I am supposing we can generalize the search for patterns and optimize
> them.

Actually it is quite simple and obvious. Those 1000s of elems on a page are mostly inline content. I showed the equivalence in the general model for display:inline:

And we see that the generalized CSS for display:inline has a very well defined and manageable constraint model. Really all we have to do is re-apply the existing known models from typography (i.e. current CSS) to the generalized model, and this will remove nearly all of the N from the generalized constraint solver. We only have to define how those models (of repeating generalized constraints) interact with random generalized constraints of a much more limited quantity. We know they will be limited, because otherwise randomness will ensure no mutual information (comprehension) for the reader. If there are any cases that introduce new repeating patterns of large N count, then we will recognize these as popular layout models that have to be optimized, e.g. multi-columns. This is exactly what I meant by saying that generalizing the model will allow us to accelerate new things and give us a better understanding of how to code our layout engines to enable such.

>>>> And these are constraints on variables with, typically, positive reals
>>>> or reals as domains, correct?
>>> I have not yet contemplated how large the per table entry data
>>> structure is
>> The question is not of data structure size, but of algorithmic
>> complexity. Most constraint satisfaction algorithms I have been able to
>> find seem to assume finite domains for the variables, and have
>> complexity at least D^N where N is the number of variables and D is the
>> domain size, at first read. Again, maybe I'm just totally
>> misunderstanding them....
> Ditto what I wrote above. That D^N complexity assumes the domain is the set of
> all random possibilities.
>> But in our case N is order of 1e2--1e5 and D is infinite in theory. In
>> practice, D is order of 2^{30} in Gecko, say.
> Typo? Don't you mean in Gecko D^N is on order of 2^30?
>>> but if it is less than 1024 bytes, then n = 1024 is less than 1 GB of
>>> virtual memory (hopefully physical memory)
>> Per page, yes?
> Yes, but virtual memory, and that is before any such optimization for
> repeating patterns. And 1024 was just pulled out of the air, it might be
> 1/10 or 10 times that (but I lean towards it will be 1/10).
>>> and let's just ballpark guesstimate on the order of less than Gflops.
>> Where is this estimate coming from? What are we measuring, even? flops
>> are operations/second; what does it mean to measure problem complexity
>> in flops?
> I was just making some wild guesstimate of complexity and typical use
> case in my head, and extrapolating to ops/sec.
> We would really need to dig more into this. Afaics, you are raising
> issues which are next-stage things that, as I wrote, I want to contemplate.
> So I will have to put that off for now, and come back to it later.

Received on Thursday, 21 October 2010 19:35:38 GMT
{"url":"http://lists.w3.org/Archives/Public/www-style/2010Oct/0504.html","timestamp":"2014-04-18T21:47:31Z","content_type":null,"content_length":"13100","record_id":"<urn:uuid:df5b446a-faa8-4263-a7a1-bdd955322c0b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Converting Rectangular to Polar
December 8th 2009, 09:55 AM
Converting Rectangular to Polar
I am having a brain fade today, and just really need to make sure I am doing this correctly. I am given ... convert the rectangular equation x^2 + y^2 - 2y = 0. I think the answer is -2sin(theta), however I am missing a step to prove it. And one other one on the reverse side... convert polar to rectangular: r = 2/(2sin(theta) - 3cos(theta)). I am lost on that one. Any help is greatly appreciated.
December 8th 2009, 11:51 AM
mr fantastic
Straightforward substitution: $r^2 - 2r \sin \theta = 0$. Therefore ....
December 8th 2009, 02:16 PM
I guess I assumed since it was easy that I was missing something. And on the second one I figured out that I just multiply by the denom to get the form I need ... thanks for your help.
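For completeness, here are the steps the thread leaves implicit (these are not in the original posts); note in particular that the sign comes out positive, not the $-2\sin\theta$ guessed in the question:

$x^2 + y^2 - 2y = 0 \;\Rightarrow\; r^2 - 2r\sin\theta = 0 \;\Rightarrow\; r(r - 2\sin\theta) = 0 \;\Rightarrow\; r = 2\sin\theta$
(the $r = 0$ branch is just the pole, which already lies on the curve)
$r = \frac{2}{2\sin\theta - 3\cos\theta} \;\Rightarrow\; 2r\sin\theta - 3r\cos\theta = 2 \;\Rightarrow\; 2y - 3x = 2$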
{"url":"http://mathhelpforum.com/calculus/119339-converting-rectangular-polar-print.html","timestamp":"2014-04-16T20:20:37Z","content_type":null,"content_length":"5324","record_id":"<urn:uuid:dae58bea-8393-4eab-8e5b-7083f84443f3>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Stats, Seems so simple
October 22nd 2008, 05:06 AM
Stats, Seems so simple
10 people get on the elevator from the ground floor. They are equally likely to get down at any of the 10 floors in the building, and choose their floors independently.
a) Determine the expected number of floors where no one gets down from the elevator.
b) How would your answer change if the 10 people get down in pairs?
Me and my friend are stuck on this question. He thinks the answer to part a) is 1 and the answer to part b) is 0.5. But for part b) I think the answer has to be more than 5, so I think he is wrong. I would appreciate any help as I don't even know which distribution to use.
October 22nd 2008, 05:59 AM
For any floor, the chance that x people get down there can be modelled with a binomial distribution Bin(10, 1/10) (the same as throwing a ten-sided die once for every person). So the chance of a floor having 0 people is
$P(X=0) = \binom{10}{0} (1/10)^0 (9/10)^{10} = (9/10)^{10} \approx 0.35$
By linearity of expectation, the expected number of floors with 0 people is 10 * 0.35 = 3.5. (The floor counts are not independent, so the number of empty floors is not exactly binomial, but linearity of expectation gives the mean regardless.)
For (b), the 10 people getting down in pairs means 5 independent floor choices, which changes the initial binomial to Bin(5, 1/10), for which $P(X=0) = (9/10)^5 \approx 0.59$. The expected number of floors with 0 people then becomes 10 * 0.59 = 5.9.
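A quick Monte Carlo sanity check of both expected values (not part of the original thread; a minimal sketch assuming NumPy):

import numpy as np

rng = np.random.default_rng(42)
trials = 100_000

def expected_empty_floors(n_choices, n_floors=10):
    # Each of n_choices stops is uniform over the floors;
    # count floors where nobody gets off, averaged over trials.
    stops = rng.integers(0, n_floors, size=(trials, n_choices))
    empty = np.array([n_floors - len(np.unique(row)) for row in stops])
    return empty.mean()

print(expected_empty_floors(10))  # ~3.49 = 10 * (9/10)**10
print(expected_empty_floors(5))   # ~5.90 = 10 * (9/10)**5  (pairs case)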
{"url":"http://mathhelpforum.com/advanced-statistics/55082-stats-seems-so-simple-print.html","timestamp":"2014-04-21T12:57:14Z","content_type":null,"content_length":"5113","record_id":"<urn:uuid:17118265-ee76-4d45-99a1-f6d668edcf88>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00409-ip-10-147-4-33.ec2.internal.warc.gz"}
How to calculate defect rate by operation for Rolled Throughput Yield
The way you calculated it gives the same results as 100% − yield. It's correct, and the corrected formula is explained below. The total is in fact 10%, which is also correct, and can be obtained by adding the percentages if the sample size is almost the same (or varies very little) across all operations; otherwise adding percentages won't work.
If the sample sizes are different, then the defect count is the sum of defects over all operations, and the sample size is the average over all operations, so the formula becomes:
Def % overall = Total defects * 100% / average sample size
This formula gives the overall defect % of production. For each individual operation, the formula below is correct:
Operation A: defects of A * 100% / sample size of A
Operation B: defects of B * 100% / sample size of B
Operation C: ...
For 100% inspection, the sample size is the total production of the process. I hope this will work.
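A small worked example of the formulas above, with made-up numbers (shown in Python for convenience; the overall line follows the poster's total-defects-over-average-sample formula):

# Hypothetical per-operation inspection data: (defects, sample size).
operations = {"A": (12, 400), "B": (30, 500), "C": (18, 300)}

# Per-operation defect rate: defects * 100% / that operation's sample size.
for name, (defects, sample) in operations.items():
    print(f"Operation {name}: {100 * defects / sample:.1f}%")

# Overall rate when sample sizes differ: total defects over the
# *average* sample size, as described in the answer above.
total_defects = sum(d for d, _ in operations.values())
avg_sample = sum(s for _, s in operations.values()) / len(operations)
print(f"Overall: {100 * total_defects / avg_sample:.1f}%")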
{"url":"http://www.isixsigma.com/topic/how-to-calculation-defect-rate-by-operation-of-roll-throughput-yield/","timestamp":"2014-04-21T09:38:53Z","content_type":null,"content_length":"93507","record_id":"<urn:uuid:c3bc09ec-18db-4f8c-9506-f4aab9a0be73>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00433-ip-10-147-4-33.ec2.internal.warc.gz"}
RE: st: RE: grouping by time window
From "Andre pierre" <hcpats@hotmail.fr>
To statalist@hsphsun2.harvard.edu
Subject RE: st: RE: grouping by time window
Date Mon, 13 Jun 2005 16:17:09 +0200
Thanks for your comments. There is indeed an argument I didn't mention to justify this way of treating the data (namely, that inventions produced at very close dates by the same firm are identical, or at least very similar, in my data). I thus have to choose an arbitrary time window (which could be modified) to identify these "portfolios". I should ideally find a measure that maximises the number of products in each portfolio in a given time window, but that seems rather difficult to implement...
From: "Nick Cox" <n.j.cox@durham.ac.uk>
Reply-To: statalist@hsphsun2.harvard.edu
To: <statalist@hsphsun2.harvard.edu>
Subject: st: RE: grouping by time window
Date: Mon, 13 Jun 2005 14:35:13 +0100
I am not sure this is determinate on the criteria given. In particular, if you start a first window at the first observation and then use the rule that no window can be more than a year long, then the windows so produced do not necessarily coincide with the set of disjoint windows that maximise the number of products within each, if indeed there is a unique set so defined.
A quite different comment -- and here you may well have a rationale not presented -- is that this seems a rather arbitrary way to reduce these data.
Andre pierre
> I would like to create groups of observations with at least 2 products
> invented within 365 days (not a civil year), by firm and industry code.
> obs; Firm; ic; date
> 1; 1; A; 31 dec 02
> 2; 1; A; 2 Apr 03
> 3; 1; A; 12 dec 03
> 4; 1; A; 5 jan 04
> 5; 1; A; 4 may 04
> 6; 1; B; 4 may 04
> 7; 2; A; 1 jan 01
> .
> .
> .
> In this example, observations 1 to 3 and 4 to 5 would correspond to a
> "group", as the products are invented by the same firm, in the same
> industry code within 365 days. The groups have to be "successive", and
> not "overlapping".
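For clarity, here is a sketch of the successive, non-overlapping window logic being described — written in Python rather than Stata, with illustrative variable names. It implements the "anchor the window at the first observation" rule which, as Nick Cox points out, is not the same as maximizing products per window:

from datetime import date, timedelta

# One record per invention: (firm, industry_code, date), pre-sorted
# by firm, then industry code, then date.
records = [
    (1, "A", date(2002, 12, 31)),
    (1, "A", date(2003, 4, 2)),
    (1, "A", date(2003, 12, 12)),
    (1, "A", date(2004, 1, 5)),
    (1, "A", date(2004, 5, 4)),
    (1, "B", date(2004, 5, 4)),
    (2, "A", date(2001, 1, 1)),
]

window = timedelta(days=365)
group_id, groups = 0, []
prev_key, window_start = None, None

for firm, ic, d in records:
    key = (firm, ic)
    # Start a new window when the firm/industry changes or the
    # current date falls outside the running 365-day window.
    if key != prev_key or d - window_start > window:
        group_id += 1
        window_start = d
    prev_key = key
    groups.append(group_id)

print(groups)  # [1, 1, 1, 2, 2, 3, 4] -- obs 1-3 and 4-5 form the groups

Groups containing only one record (here groups 3 and 4) would then be dropped or flagged, since the question requires at least two products per group.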
{"url":"http://www.stata.com/statalist/archive/2005-06/msg00346.html","timestamp":"2014-04-21T04:37:53Z","content_type":null,"content_length":"7484","record_id":"<urn:uuid:eb954d85-b26a-44bf-bb35-4ec3117a04df>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00422-ip-10-147-4-33.ec2.internal.warc.gz"}
Sebastien Bubeck
This first week at the Simons Institute was a lot of fun! I attended the first workshop in the Real Analysis program which was about Testing, Learning and Inapproximability. There were plenty of good talks and I learned a lot of … Continue reading
Last week I attended the Random-Approx conference at Berkeley. I missed quite a few talks as I was also settling in my new office for the semester at the Simons Institute so I will just report on the three invited talks: Luca Trevisan gave a … Continue reading
The videos for COLT 2013 were just published on videolectures.net. The videos for ICML 2013 are also available on TechTalks.tv.
This is the first (short) post dedicated to the Big Data program of the Simons Institute. We received from the program organizer Mike Jordan our first reading assignment which is a report published by the National Academy of Sciences on the "Frontiers in Massive … Continue reading
ICML 2013 just finished a few days ago. The presentation that inspired me the most was the invited talk by Santosh Vempala. He talked about the relations between sampling, optimization, integration, learning, and rounding. I strongly recommend Vempala's short survey … Continue reading
The 2013 edition of COLT at Princeton just finished a few days ago and I'm happy to report that everything went smoothly! We had a sort of stressful start (at least for the organizers..) since on the day before the … Continue reading
In this post we discuss the following notion: Let $(X, d_X)$ and $(Y, d_Y)$ be two metric spaces. One says that $X$ embeds into $Y$ with distortion $D$ if there exists a map $f : X \to Y$ and a constant $s > 0$ such that for any $x, x' \in X$, $s \, d_X(x, x') \leq d_Y(f(x), f(x')) \leq s D \, d_X(x, x')$. We write this as $X \hookrightarrow_D Y$. Note that if … Continue reading
It has been two weeks since my last post on the blog, and I have to admit that it felt really good to take a break from my 2-posts-per-week regime of the previous months. Now that ORF523 is over the … Continue reading
In this last lecture we consider the case where one can only access the function with a noisy $0^{th}$-order oracle (see this lecture for a definition). This assumption models the 'physical reality' of many practical problems (on the contrary to … Continue reading
In this lecture we are interested in the following problem: $\min_{x \in \mathbb{R}^n} \frac{1}{m} \sum_{i=1}^m f_i(x)$, where each $f_i$ is $\beta$-smooth and $\alpha$-strongly convex. We denote by $\kappa = \beta/\alpha$ the condition number of these functions (which is also the condition number of the average). Using a gradient descent … Continue reading
{"url":"http://blogs.princeton.edu/imabandit/author/sebastien-bubeck/page/2/","timestamp":"2014-04-17T00:57:38Z","content_type":null,"content_length":"43947","record_id":"<urn:uuid:6e7d9375-0bf7-4d2f-a1eb-3616b7dd7767>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00088-ip-10-147-4-33.ec2.internal.warc.gz"}