Making Math Real - Home

Friday, May 16, 2014; East Windsor, NJ. Parents and educators are invited to the ALL ABOUT MATH Conference, sponsored by LDA-NJ and co-sponsored by Decoding Dyslexia-NJ. David Berg will present a special 3-hour seminar, "Addressing The Common Core: Making Math Real: The Multisensory Structured Solution For All Students," from 12:30-3:30pm.
{"url":"http://www.makingmathreal.org/","timestamp":"2014-04-21T07:04:40Z","content_type":null,"content_length":"24506","record_id":"<urn:uuid:becd3f37-fb6d-4466-aa0d-8e3caee5ca70>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
Norwalk, CA Algebra 2 Tutor - Find a Norwalk, CA Algebra 2 Tutor

...I specialize in Mathematics, English, and the SAT. For the SAT I assign homework each week and use College Board practice tests so the students are comfortable with the exam. For general tutoring and homework help I will work with the text the student is using and provide additional practice problems and assignments where applicable. (23 subjects: including algebra 2, English, reading, geometry)

...Finishing a teaching credential and a BA in physics by next year also keeps me in practice and allows me to demonstrate applications and physical meaning for concepts in math that can often seem abstract and overwhelming to others. Having taken AP chemistry myself as a high school student, I unde... (10 subjects: including algebra 2, chemistry, physics, calculus)

...Algebra is an important fundamental to most forms of higher mathematics. I did very well on the tests and I have study books available just in case. I have been taking math classes since high school and constant exposure has made it my easiest subject to teach. (26 subjects: including algebra 2, chemistry, writing, English)

...My overall GPA is 3.75, and my major GPA is 3.80. Subjects that I am experienced in tutoring include Algebra, Geometry and Calculus. Subjects in which I am available and well-studied are Linear Algebra and Differential Equations. I have tutored mathematics at my community college for one year, and the subjects vary from pre-algebra to calculus. (11 subjects: including algebra 2, calculus, geometry, precalculus)

Hi, I am a former high school math teacher who taught all math subjects from basic math to calculus. I love teaching/tutoring, especially in algebra subjects. I now work in the engineering profession, where high-level math is constantly used. (7 subjects: including algebra 2, calculus, precalculus, geometry)
{"url":"http://www.purplemath.com/Norwalk_CA_Algebra_2_tutors.php","timestamp":"2014-04-18T04:23:48Z","content_type":null,"content_length":"24030","record_id":"<urn:uuid:db598689-bfb3-4d69-9c60-b82fa0a3646d>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00343-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 118

, 2004 "... Elliptic curves have been intensively studied in number theory and algebraic geometry for over 100 years and there is an enormous amount of literature on the subject. To quote the mathematician Serge Lang: It is possible to write endlessly on elliptic curves. (This is not a threat.) Elliptic curves ..." Cited by 369 (17 self) Add to MetaCart
Elliptic curves have been intensively studied in number theory and algebraic geometry for over 100 years and there is an enormous amount of literature on the subject. To quote the mathematician Serge Lang: It is possible to write endlessly on elliptic curves. (This is not a threat.) Elliptic curves also figured prominently in the recent proof of Fermat's Last Theorem by Andrew Wiles. Originally pursued for purely aesthetic reasons, elliptic curves have recently been utilized in devising algorithms for factoring integers, primality proving, and in public-key cryptography. In this article, we aim to give the reader an introduction to elliptic curve cryptosystems, and to demonstrate why these systems provide relatively small block sizes, high-speed software and hardware implementations, and offer the highest strength-per-key-bit of any known public-key scheme.

- Designs, Codes and Cryptography, 2004 "... We present a general technique for the efficient computation of pairings on supersingular Abelian varieties. As particular cases, we describe efficient pairing algorithms for elliptic and hyperelliptic curves in characteristic 2. The latter is faster than all previously known pairing algorithms, and ..." Cited by 130 (23 self) Add to MetaCart
We present a general technique for the efficient computation of pairings on supersingular Abelian varieties. As particular cases, we describe efficient pairing algorithms for elliptic and hyperelliptic curves in characteristic 2. The latter is faster than all previously known pairing algorithms, and as a bonus also gives rise to faster conventional Jacobian arithmetic.

- Proceedings of PKC 2001, volume 1992 of LNCS, 1992 "... Abstract. This paper introduces a novel class of computational problems, the gap problems, which can be considered as a dual to the class of the decision problems. We show the relationship among inverting problems, decision problems and gap problems. These problems find a nice and rich practical ins ..." Cited by 122 (11 self) Add to MetaCart
Abstract. This paper introduces a novel class of computational problems, the gap problems, which can be considered as a dual to the class of the decision problems. We show the relationship among inverting problems, decision problems and gap problems. These problems find a nice and rich practical instantiation with the Diffie-Hellman problems. Then, we see how the gap problems find natural applications in cryptography, namely for proving the security of very efficient schemes, but also for solving a more than 10-year old open security problem: Chaum's undeniable signature.

, 2001 "... Frey and Rück gave a method to map the discrete logarithm problem in the divisor class group of a curve over F_q into a finite field discrete logarithm problem in some extension. The discrete logarithm problem in the divisor class group can therefore be solved as long as k is small. In the elliptic ..." Cited by 88 (9 self) Add to MetaCart
Frey and Rück gave a method to map the discrete logarithm problem in the divisor class group of a curve over F_q into a finite field discrete logarithm problem in some extension.
The discrete logarithm problem in the divisor class group can therefore be solved as long as k is small. In the elliptic curve case it is known that for supersingular curves one has k ≤ 6. In this paper curves of higher genus are studied. Bounds on the possible values of k in the case of supersingular curves are given. Ways to ensure that a curve is not supersingular are also given. 1.

, 2000 "... Abstract. We present an index-calculus algorithm for the computation of discrete logarithms in the Jacobian of hyperelliptic curves defined over finite fields. The complexity predicts that it is faster than the Rho method for genus greater than 4. To demonstrate the efficiency of our approach, we de ..." Cited by 78 (6 self) Add to MetaCart
Abstract. We present an index-calculus algorithm for the computation of discrete logarithms in the Jacobian of hyperelliptic curves defined over finite fields. The complexity predicts that it is faster than the Rho method for genus greater than 4. To demonstrate the efficiency of our approach, we describe our breaking of a cryptosystem based on a curve of genus 6 recently proposed by Koblitz. 1

, 1998 "... Abstract. This contribution introduces a class of Galois field used to achieve fast finite field arithmetic which we call an Optimal Extension Field (OEF). This approach is well suited for implementation of public-key cryptosystems based on elliptic and hyperelliptic curves. Whereas previously reported ..." Cited by 65 (14 self) Add to MetaCart
Abstract. This contribution introduces a class of Galois field used to achieve fast finite field arithmetic which we call an Optimal Extension Field (OEF). This approach is well suited for implementation of public-key cryptosystems based on elliptic and hyperelliptic curves. Whereas previously reported optimizations focus on finite fields of the form GF(p) and GF(2^m), an OEF is the class of fields GF(p^m), for p a prime of special form and m a positive integer. Modern RISC workstation processors are optimized to perform integer arithmetic on integers of size up to the word size of the processor. Our construction employs well-known techniques for fast finite field arithmetic which fully exploit the fast integer arithmetic found on these processors. In this paper, we describe our methods to perform the arithmetic in an OEF and the methods to construct OEFs. We provide a list of OEFs tailored for processors with 8, 16, 32, and 64 bit word sizes. We report on our application of this approach to construction of elliptic curve cryptosystems and demonstrate a substantial performance improvement over all previously reported software implementations of Galois field arithmetic for elliptic curves.

"... We describe some algorithms for computing the cardinality of hyperelliptic curves and their Jacobians over finite fields. They include several methods for obtaining the result modulo small primes and prime powers, in particular an algorithm à la Schoof for genus 2 using Cantor's division pol ..." Cited by 59 (7 self) Add to MetaCart
We describe some algorithms for computing the cardinality of hyperelliptic curves and their Jacobians over finite fields. They include several methods for obtaining the result modulo small primes and prime powers, in particular an algorithm à la Schoof for genus 2 using Cantor's division polynomials. These are combined with a birthday paradox algorithm to calculate the cardinality. Our methods are practical and we give actual results computed using our current implementation.
The Jacobian groups we handle are larger than those previously reported in the literature.

Introduction. In recent years there has been a surge of interest in algorithmic aspects of curves. When presented with any curve, a natural task is to compute the number of points on it with coordinates in some finite field. When the finite field is large this is generally difficult to do. René Schoof gave a polynomial time algorithm for counting points on elliptic curves, i.e., those of genus 1, in his

, 2000 "... We develop a generic framework for the computation of logarithms in finite class groups. The model allows to formulate a probabilistic algorithm based on collecting relations in an abstract way independently of the specific type of group to which it is applied, and to prove a subexponential running ti ..." Cited by 54 (9 self) Add to MetaCart
We develop a generic framework for the computation of logarithms in finite class groups. The model allows to formulate a probabilistic algorithm based on collecting relations in an abstract way, independently of the specific type of group to which it is applied, and to prove a subexponential running time if a certain smoothness assumption is verified. The algorithm proceeds in two steps: First, it determines the abstract group structure as a product of cyclic groups; second, it computes an explicit isomorphism, which can be used to extract discrete logarithms.

- Mathematics of Computation, 2004 "... Abstract. In this article, we examine how the index calculus approach for computing discrete logarithms in small genus hyperelliptic curves can be improved by introducing a double large prime variation. Two algorithms are presented. The first algorithm is a rather natural adaptation of the double la ..." Cited by 51 (10 self) Add to MetaCart
Abstract. In this article, we examine how the index calculus approach for computing discrete logarithms in small genus hyperelliptic curves can be improved by introducing a double large prime variation. Two algorithms are presented. The first algorithm is a rather natural adaptation of the double large prime variation to the intended context. On heuristic and experimental grounds, it seems to perform quite well but lacks a complete and precise analysis. Our second algorithm is a considerably simplified variant, which can be analyzed easily. The resulting complexity improves on the fastest known algorithms. Computer experiments show that for hyperelliptic curves of genus three, our first algorithm surpasses Pollard's Rho method even for rather small field sizes. 1.

- Applicable Algebra in Engineering, Communication and Computing, 2003 "... The ideal class group of hyperelliptic curves can be used in cryptosystems based on the discrete logarithm problem. In this article we present explicit formulae to perform the group operations for genus 2 curves. The formulae are completely general but to achieve the lowest number of operations we t ..." Cited by 50 (3 self) Add to MetaCart
The ideal class group of hyperelliptic curves can be used in cryptosystems based on the discrete logarithm problem. In this article we present explicit formulae to perform the group operations for genus 2 curves. The formulae are completely general, but to achieve the lowest number of operations we treat odd and even characteristic separately. We present 3 different coordinate systems which are suitable for different environments, e.g. on a smart card we should avoid inversions, while in software a limited number is acceptable.
The presented formulae render genus two hyperelliptic curves very useful in practice. The first system is affine coordinates, where each group operation needs one inversion. Then we consider projective coordinates, which avoid inversions at the cost of more multiplications and a further coordinate. Finally, we introduce a new system of coordinates and state algorithms showing that doublings are comparably cheap and no inversions are needed. A comparison between the systems concludes the paper.
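As a toy illustration of the point-counting problem these papers address (my own sketch, not one of the algorithms above, which exist precisely because brute force does not scale):

#include <cstdio>

// Count points on y^2 = x^3 + ax + b over F_p by exhaustive search,
// including the point at infinity. The curve parameters are arbitrary.
int main() {
    const long p = 101, a = 2, b = 3;
    long count = 1; // the point at infinity
    for (long x = 0; x < p; ++x)
        for (long y = 0; y < p; ++y)
            if ((y * y) % p == (x * x * x + a * x + b) % p)
                ++count;
    std::printf("#E(F_%ld) = %ld\n", p, count);
    return 0;
}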
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=144897","timestamp":"2014-04-17T18:49:12Z","content_type":null,"content_length":"37545","record_id":"<urn:uuid:a3b9eaf6-35ad-48de-b23d-bdd815706373>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help

Posted by sunny on Monday, December 31, 2012 at 6:29pm.

A horizontal spring-mass system is set up to keep time so that it goes from one side to the other in 6 seconds. The mass used in this system is 0.69 kg. The distance from one side to the other is 2.4 m.

What spring constant should be used if it is to oscillate correctly? k = ___ N/m

An equation of the form x(t) = Xmax cos(ωt) can be used to describe the motion of this spring-mass system. From the data given, determine the values of Xmax and ω. Xmax = ___ m, ω = ___ rad/sec

An equation of the form v(t) = Vmax sin(ω2·t) can be used to describe the velocity of this spring-mass system. From the data given, determine the values of Vmax and ω2. Vmax = ___ m/s, ω2 = ___ rad/sec

An equation of the form a(t) = amax cos(ω3·t) can be used to describe the acceleration of this spring-mass system. From the data given, determine the values of amax and ω3. amax = ___ m/s^2, ω3 = ___ rad/sec

• physics - Damon, Monday, December 31, 2012 at 7:09pm

period T = 12 seconds
amplitude = 1.2 meter
ω = 2πf = 2π/T = sqrt(k/m)
2π/12 = sqrt(k/0.69)
0.5236 = sqrt(k/0.69)
0.2742 = k/0.69
k = 0.189 Newtons/meter
ω = 0.5236 radians/second
We already know Xmax = 1.2 meter, so
x = 1.2 cos(0.5236 t) meters
ω = ω2 = ω3 - the frequency of v and a is THE SAME as for the position!
v = -1.2 · 0.5236 · sin(0.5236 t), so Vmax = -0.6283 m/s
a = -1.2 · 0.5236^2 · cos(0.5236 t) = -ω^2 x
amax = -1.2 · 0.5236^2 = -0.329 m/s^2
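Damon's arithmetic can be checked with a few lines of code (a sketch; the variable names are mine):

#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const double T = 12.0;         // full period: 6 s per one-way swing
    const double m = 0.69;         // mass, kg
    const double Xmax = 2.4 / 2.0; // amplitude: half the side-to-side distance, m
    const double w = 2.0 * PI / T;            // angular frequency, rad/s
    const double k = m * w * w;               // from w = sqrt(k/m)
    std::printf("w    = %.4f rad/s\n", w);    // 0.5236
    std::printf("k    = %.3f N/m\n", k);      // 0.189
    std::printf("Vmax = %.4f m/s\n", Xmax * w);       // 0.6283
    std::printf("amax = %.3f m/s^2\n", Xmax * w * w); // 0.329
    return 0;
}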
{"url":"http://www.jiskha.com/display.cgi?id=1356996587","timestamp":"2014-04-16T20:04:46Z","content_type":null,"content_length":"9629","record_id":"<urn:uuid:ed9e9e51-3e00-4897-9305-c32681582cc6>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00257-ip-10-147-4-33.ec2.internal.warc.gz"}
Comparing The P/E, EPS And Earnings Yield (PG, WIN)

Tickers in this Article: PG

The price/earnings (P/E) ratio, also known as an "earnings multiple," is one of the most popular valuation measures used by investors and analysts. The basic definition of a P/E ratio is stock price divided by earnings per share (EPS). The fact that the P/E measure is a ratio makes it particularly apt for valuation purposes, but it's a little difficult to use intuitively when evaluating potential return, especially among different investment types. This is where earnings yield comes in.

Earnings Yield Defined

Earnings yield is defined as EPS divided by the stock price (E/P). In other words, it is the reciprocal of the P/E ratio. Thus, Earnings Yield = EPS / Price = 1 / (P/E Ratio), expressed as a percentage.

If Stock A is trading at $10 and its EPS for the past year (or trailing 12 months, abbreviated as "ttm") was 50 cents, it has a P/E of 20 (i.e. $10/50 cents) and an earnings yield of 5% (50 cents/$10). If Stock B is trading at $20 and its EPS (ttm) was $2, it has a P/E of 10 and an earnings yield of 10% ($2/$20).

Assuming that A and B are very similar companies operating in the same sector, with nearly identical capital structures, which one do you think represents the better value? The obvious answer is B. From a valuation perspective, it has a much lower P/E. From an earnings yield point of view, B has a yield of 10%, which means that every dollar invested in the stock would generate EPS of 10 cents. Stock A, on the other hand, only has a yield of 5%, which means that every dollar invested in it would generate EPS of 5 cents.

The earnings yield makes it easier to compare potential returns between, say, a stock and a high-yield bond. Let's say an investor with some risk appetite is trying to decide between Stock B and a junk bond with a 6% yield. Comparing B's P/E of 10 and the junk bond's 6% yield is akin to comparing apples and oranges. But using B's 10% earnings yield makes it easier for the investor to compare returns and decide whether the yield differential of 4 percentage points justifies the risk of investing in the stock rather than the bond. Note that even if Stock B only has a 4% dividend yield (more about this later), the investor is more concerned about total potential return than actual return.

EPS and P/E

EPS is the bottom-line measure of a company's profitability, and it's basically defined as net income divided by the number of outstanding shares. Basic EPS has the basic number of shares outstanding in the denominator, while fully diluted EPS (FDEPS) uses the number of fully diluted shares in the denominator. Likewise, P/E comes in two main forms:

• Trailing P/E refers to the price/earnings ratio based on EPS for the trailing four quarters or 12 months, as noted earlier.
• Forward P/E means the price/earnings ratio based on future estimated EPS, such as the current fiscal or calendar year, or the next year.

The P/E ratio for a specific stock, while useful enough on its own, is of even greater utility when compared against other parameters, such as:

• Sector P/E: Comparing the stock's P/E to those of other similar-sized companies in its sector, as well as to the sector's average P/E, will enable one to determine whether the stock is trading at a premium or discount valuation in relation to its peers.
• Relative P/E: Comparing the stock's P/E with its P/E range over a period of time provides an indication of investor perception.
A stock may be trading at a much lower P/E now than it did in the past because investors perceive that its most rapid growth is behind it.

• P/E to Earnings Growth (PEG Ratio): The PEG ratio compares the P/E to future or past earnings growth. A stock with a P/E of 10 and earnings growth of 10% has a PEG ratio of 1, while one with a P/E of 10 and earnings growth of 20% has a PEG ratio of 0.5. According to the PEG ratio, the second company is undervalued compared to the first one.

Using Earnings Yield to Compute Dividend Payout Ratio

One issue that often arises with a stock that pays a dividend is that of its payout ratio, which in its most basic form is the ratio of dividends paid as a percentage of EPS. The payout ratio is an important indicator of dividend sustainability. If a company consistently pays out more in dividends than it earns in net income, the dividend may be in jeopardy at some point. While a less-stringent definition of the payout ratio uses dividends paid as a percentage of cash flow per share, for the sake of simplicity, we define dividend payout ratio in this section as: Dividends per Share (DPS) / EPS.

The dividend yield is another measure commonly used to gauge a stock's potential return. A stock with a dividend yield of 4% and possible appreciation of 6% has a potential total return of 10%.

Dividend Yield = Dividends per Share (DPS) / Price

Since Dividend Payout Ratio = DPS / EPS, dividing both the numerator and denominator by price gives us:

Dividend Payout Ratio = (DPS/P) / (EPS/P) = Dividend Yield / Earnings Yield

Examples

Let's use Procter & Gamble (NYSE:PG) to illustrate this concept. P&G closed at $84.28 on Nov. 27, 2013. The stock had a P/E of 20.86, based on trailing 12-month EPS, and a dividend yield (ttm) of 2.75%.

P&G's dividend payout ratio was therefore = 2.75 / (1/20.86)* = 2.75 / 4.79 = 57.5%

*Remember that Earnings Yield = 1 / (P/E Ratio)

The payout ratio could also be calculated by merely dividing the DPS ($2.32) by the EPS ($4.04) for the past year. However, in reality this calculation requires one to know the actual values for per-share dividends and earnings, which are generally less widely known by investors than the dividend yield and P/E of a specific stock. Thus, if a stock with a dividend yield of 5% is trading at a P/E of 15 (which means its earnings yield is 6.67%), its payout ratio is approximately 75%.

How does Procter & Gamble's dividend sustainability compare with that of telecom services provider Windstream Holdings (Nasdaq:WIN), which had the highest indicated dividend yield of all S&P 500 constituents (as of Nov. 27, 2013) at over 12%? At its closing price of $8.09, WIN had a dividend yield of 12.36% and was trading at a P/E of 27.9 (for an earnings yield of 3.58%). With the dividend yield of 12.36%, far higher than the stock's earnings yield, the dividend payout ratio for WIN was 345%. In other words, WIN's dividend payout was almost 3.5 times its EPS over the past year. This is confirmed by its DPS (ttm) of $1 and EPS of 29 cents. An investor looking for a stock with a high degree of dividend sustainability would be better off choosing Procter & Gamble than Windstream.

P/E vs. Earnings Yield

The P/E's pre-eminence as a valuation measure is unlikely to be derailed anytime soon by the earnings yield, which is not as widely used.
While the major advantage of the earnings yield is that it enables an intuitive comparison of potential returns to be made, it has the following drawbacks:

• Greater Degree of Uncertainty: The return indicated by the earnings yield has a much greater degree of uncertainty than the return from a fixed-income instrument.
• More Volatility: Since net income and EPS can fluctuate significantly from one year to the next, the earnings yield will generally be more volatile than fixed-income yields.
• Indicative Return Only: The earnings yield only indicates the approximate return based on EPS; the actual return may diverge substantially from the earnings yield, especially for stocks that pay no dividends or small dividends.

As an example of the last point, assume a fictitious Widget Co. is trading at $10 and will earn $1 in EPS over the year ahead. If it pays out the entire amount as dividends, the company would have an indicated dividend yield of 10%. What if the company does not pay any dividends? In this case, one avenue of potential return to Widget Co. investors is from the increase in the company's book value thanks to retained earnings (i.e. it made profits but did not pay them out as dividends).

To keep things simple, assume Widget Co. is trading exactly at book value. If its book value per share increases from $10 to $11 (due to the $1 increase in retained earnings), the stock would trade at $11 for a 10% return to the investor. But what if there is a glut of widgets in the market and Widget Co. begins trading at a big discount to book value? In that case, rather than a 10% return, the investor may incur a loss from the Widget Co. holdings.

The Bottom Line

P/E may be the established standard for valuation purposes, but its reciprocal – the earnings yield – is especially useful for comparing potential returns across different instruments. The earnings yield also enables back-of-the-envelope calculations to be made for computing the dividend payout ratio of a stock using widely followed measures such as its dividend yield and P/E ratio.
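The payout-ratio shortcut described above is easy to script (a sketch; the tickers and figures are those quoted in the article):

#include <cstdio>

// payout = dividend yield / earnings yield, where earnings yield = 1/(P/E)
double payoutRatio(double dividendYieldPct, double peRatio) {
    const double earningsYieldPct = 100.0 / peRatio;
    return dividendYieldPct / earningsYieldPct;
}

int main() {
    std::printf("PG  payout = %.1f%%\n", 100.0 * payoutRatio(2.75, 20.86)); // ~57.4%
    std::printf("WIN payout = %.0f%%\n", 100.0 * payoutRatio(12.36, 27.9)); // ~345%
    return 0;
}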
{"url":"http://www.investopedia.com/articles/investing/120513/comparing-pe-eps-and-earnings-yield.asp","timestamp":"2014-04-17T16:05:08Z","content_type":null,"content_length":"88953","record_id":"<urn:uuid:12115834-edde-4e50-8ef7-fc2a15450489>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Magnitude of the electric field in a copper wire.

1. The problem statement, all variables and given/known data

In the circuit shown, two thick copper wires connect a 1.5 V battery to a Nichrome wire. Each copper wire has radius R = 7 mm and is L = 18 cm long. Copper has 8.4 × 10^28 mobile electrons per cubic meter and an electron mobility of 4.4 × 10^−3 (m/s)/(V/m). The Nichrome wire is l = 5 cm long and has radius r = 3 mm. Nichrome has 9 × 10^28 mobile electrons/m^3 and an electron mobility of 7 × 10^−5 (m/s)/(V/m). What is the magnitude of the electric field in the copper wire? Answer in units of N/C.

2. Relevant equations

[itex]\bar{v}[/itex] = uE

3. The attempt at a solution

I plug in my known values, but I have two unknowns, the electric fields. I'm not sure how to use these equations.
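One standard way to get a second equation for this kind of circuit (my own sketch, not necessarily the method the original assignment intends): the same conventional current flows through every wire, so n_Cu·e·A_Cu·u_Cu·E_Cu = n_Ni·e·A_Ni·u_Ni·E_Ni, and the loop rule for the battery gives emf = 2L·E_Cu + l·E_Ni. Numerically:

#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const double emf = 1.5;                  // battery, V
    const double nCu = 8.4e28, uCu = 4.4e-3; // copper density and mobility
    const double RCu = 7e-3,   LCu = 0.18;   // two copper wires, each 18 cm
    const double nNi = 9.0e28, uNi = 7e-5;   // Nichrome density and mobility
    const double rNi = 3e-3,   lNi = 0.05;
    const double ACu = PI * RCu * RCu, ANi = PI * rNi * rNi;
    // equal current (e cancels): nCu*ACu*uCu*ECu = nNi*ANi*uNi*ENi => ENi = ratio*ECu
    const double ratio = (nCu * ACu * uCu) / (nNi * ANi * uNi);
    // loop rule: emf = 2*LCu*ECu + lNi*ENi
    const double ECu = emf / (2.0 * LCu + lNi * ratio);
    std::printf("E_Cu = %.3e N/C, E_Ni = %.2f N/C\n", ECu, ratio * ECu);
    return 0;
}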
{"url":"http://www.physicsforums.com/showthread.php?s=c0831329574af83d474a1dae1db0b1e6&p=4127398","timestamp":"2014-04-17T15:32:05Z","content_type":null,"content_length":"29880","record_id":"<urn:uuid:c0a4bedb-f68d-4181-9176-39e046ce97ec>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - What is a Self-Consistent Electric Field?

I've been doing some reading into 1-dimensional plasma numerical simulations and they keep referring to solving for a "self-consistent" field. If the simulation is in one dimension with periodic boundary conditions, how would I go about solving this electric field?

dE/dx = n - ρ(x)

where n = const = 1 and ρ(x) is the charge density, and I want to solve for E numerically, where E is "self-consistent". Thanks for your input.
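For what it's worth, here is a minimal sketch of one common approach (my own illustration, not from this thread): on a periodic grid, dE/dx = n - ρ(x) is solvable only if the right-hand side integrates to zero over the period; then E follows by cumulative integration, with the free additive constant fixed by, e.g., requiring a zero-mean field.

#include <cmath>
#include <cstdio>
#include <vector>

int main() {
    const double PI = 3.141592653589793;
    const int N = 64;
    const double Lx = 1.0, dx = Lx / N, n0 = 1.0;
    std::vector<double> rho(N), E(N);
    // example density with unit mean, so (n0 - rho) integrates to zero
    for (int i = 0; i < N; ++i)
        rho[i] = 1.0 + 0.3 * std::cos(2.0 * PI * i / N);
    double acc = 0.0;
    for (int i = 0; i < N; ++i) { // cumulative integration of dE/dx
        acc += (n0 - rho[i]) * dx;
        E[i] = acc;
    }
    double mean = 0.0;
    for (double v : E) mean += v / N;
    for (double& v : E) v -= mean; // fix the constant: zero-mean field
    std::printf("E[0] = %.4f, E[N/2] = %.4f\n", E[0], E[N / 2]);
    return 0;
}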
{"url":"http://www.physicsforums.com/showpost.php?p=4184850&postcount=1","timestamp":"2014-04-19T19:52:19Z","content_type":null,"content_length":"8935","record_id":"<urn:uuid:06f71a6c-d121-42b5-a58e-483d417f5c3a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00341-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Structures, 7 This week, the test is over, and it’s back to work, on the real numbers. In keeping with the general theme of the course, real numbers are not defined as Dedekind cuts or Cauchy sequences, but as something much more familiar: infinite decimals. I started off in the first lecture explaining the Pythagorean proof that the square root of 2 is irrational. So the rational numbers are not yet big enough to contain the numbers we need to do mathematics with (e.g. to measure the diagonal of a unit square). But is the set of infinite decimals big enough? I explained how we can find an infinite decimal expansion of the square root of 2. It turned out that very few of the students had learned the method of finding square roots at school, and none at primary school. So I was able to tell them a little story. When I was at primary school, at a certain point the teacher took a dislike to me, and one day when I was off sick he taught the class square roots; when I came back he set a test on them. I had to invent an algorithm for finding square roots during the test. I felt very pleased with myself, and this should count as one of my earliest mathematical discoveries, even if not original. I think my algorithm was very simple and unsophisticated, something like the one I showed the class in the lecture: • 1^2 = 1 < 2, 2^2 = 4 > 2, so √2 = 1.… • 1.4^2 = 1.96 < 2, 1.5^2 = 2.25 > 2, so √2 = 1.4… • 1.41^2 = 1.9881 < 2, 1.42^2 = 2.0164 > 2, so √2 = 1.41… and so on; this process can in principle be continued to find any digit. (Actually I hadn’t brought along the relevant numbers, so I had to ask a student with a calculator in the front row to call out the last two numbers.) A little thought shows that this argument actually establishes that any positive real number has a real square root. Of course we have to ask how infinite decimals represent numbers (which involves some talk of limits). I had introduced this by asking the class to be ready to explain why 0.4999… = 0.5000…. We had skirted around this question before, and I was aware that some people felt (or were prepared to argue) that the first number is not the same as the second, even if it is only infinitesimally smaller. This turned out to be a good discussion; we went over Achilles and the tortoise (which is one approach to the problem), and then considered it more formally as a question about limits. At the end, even the doubters seemed to be convinced. My comment on this was that a mathematician’s response to the question would be, “I can’t answer the question until you tell me what you mean by 0.4999….” In the final lecture I talked about the Principle of the Supremum: a non-empty set of real numbers which is bounded above has a least upper bound, or supremum. This is quite abstract, but I hope that the students will get something from this; at least when they meet it again they might recognise it. I didn’t even attempt to give a general proof, but merely a “proof by example”. So how do you show that a set, such as the set of positive real numbers x satisfying x^2 < 2, has a supremum? You generate the supremum one decimal place at a time, exactly as for square roots. 1.4 is not an upper bound, but 1.5 is, so the supremum begins 1.4…; and so on. I pointed out that this is the fundamental property of the real numbers, showing that we have filled in all the gaps in the rational numbers, and that it is the foundation for calculus and analysis. I was challenged to show how this could be the case. 
So I took the infinite series for e (the base of natural logarithms), and used the Principle of the Supremum to show that this series converges (to the supremum of the set of partial sums; all you have to do is show that the partial sums are bounded above, for example by 3, and then the supremum turns out to be the sum of the series).
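The digit-at-a-time procedure from the lectures translates directly into a short program (a sketch):

#include <cstdio>

// Compute sqrt(n) one decimal place at a time: at each place, advance the
// next digit as far as possible while the square stays below n -- the same
// method used above to approximate sqrt(2) and to build suprema.
int main() {
    const double n = 2.0;
    double x = 1.0, step = 1.0; // we already know 1^2 < 2 < 2^2
    for (int place = 0; place < 6; ++place) {
        step /= 10.0;
        while ((x + step) * (x + step) < n)
            x += step;
    }
    std::printf("sqrt(%.0f) = %.6f...\n", n, x); // 1.414213...
    return 0;
}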
{"url":"http://cameroncounts.wordpress.com/2012/11/15/mathematical-structures-7/","timestamp":"2014-04-19T20:15:58Z","content_type":null,"content_length":"68127","record_id":"<urn:uuid:575e4121-39aa-45e5-bcaf-0b2b2f5e3671>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
Two Bill’s pizza..dough and Carmelina Sauce..great! From many years of seasoning pans in commercial kitchens and at home, the oil doesn't matter nearly as much as the oven temp and the coating technique. What temp is your oven? Norma,I've been using 500 g flour, 350 g water (in other words, 70% hydration), a 7g package of Oekker instant yeast and 1 tsp salt. I let it rise till doubled (about 1 hour). For a small pie, I use 275 g of finished dough. Put it in the pan and let it sit for 10 min. before spreading it across the pan.One thing I did discover: I had been combing all the ingredients in the bowl of my Kitchenaid and kneading 5 min, with the dough hook. But the last couple of times, I used the flat beater first, for about 1 min., just to combine the ingredients, and then switched to the dough hook to knead for 5 min. Don't ask me why, but the dough was much easier to spread across the pan.Gene Getting some tasty-looking cheese edges and crispy bottom crusts, Norma! Gene,Good to hear you found a way you like to mix your DS dough. At home and at market I use the flat beater only, but then I do a rest period in between because my doughs are 75% hydration and I need to get the dough less sticky and develop the gluten okay.Norma 500F is hot enough, so if you're ending up with a sticky coating rather than a solid, smooth, carbon layer I think you're using too much oil and it's not burning off completely. That's an interesting thought: I've been laying on a relatively thick coat of butter flavored Crisco, to create "lubrication" between the cheese and the side of the pan. I thought that if I used a thin coat or no coating at all, that the cheese would fuse to the bare metal.But maybe I'm wrong?Gene Norma,Using the flat beater for the whole process makes sense - but I'm too lazy to get the dough out from between the spaces in the beater... :^)Gene I would briefly run a torch across the top for a 'lil pretty char spots. No sauce? This is the recipe for Buddy's Chicken Fajita Pizza posted on the Cooking Channel from a link from Pizza Cuz http://www.cookingchanneltv.com/shows/pizza-cuz/recipes.html http:// www.cookingchanneltv.com/recipes/buddys-chicken-fajita-pizza.html and the Buddy's Cheese Pizza http://www.cookingchanneltv.com/recipes/buddys-cheese-pizza.html It says the recipes are courtesy of Buddy's Pizza, but I don't know what the recipe looks like in bakers percents for the dough. Flour Blend* (100%): 410.59 g | 14.48 oz | 0.91 lbs Water (82.8558%): 340.2 g | 12 oz | 0.75 lbs IDY (1.70485%): 7 g | 0.25 oz | 0.02 lbs | 2.32 tsp | 0.77 tbsp Salt (1.35935%): 5.58 g | 0.2 oz | 0.01 lbs | 1 tsp | 0.33 tbsp Total (185.92%): 763.37 g | 26.93 oz | 1.68 lbs | TF = N/A * The Flour Blend is a 50/50 blend of KAAP and KABF; the dough is for two 8" x 10" pan pizzas, with each dough ball weighing 13.46 ounces; the corresponding thickness factor = 13.46/( 8 x 10) = 0.16829; no bowl residue compensation. Norma,I saw those recipes recently and was surprised that no one mentioned them, especially the recipe for the dough. At the time, I did some calculations in my head and concluded that the nominal hydration value for the dough was very much on the high side--well over 80%. Also, the recipe said to add more flour (by the spoonful) to reduce any stickiness in the dough. 
Since the flour and water are given by volume measurements, I concluded that it would be difficult to calculate a precise hydration value for the dough. However, since you asked about baker's percents, I decided this morning, as a Memorial Day gift to you, to try to come up with a baker's percent version of the dough recipe you referenced. To do this, I made a few assumptions. First, I assumed that a typical home pizza maker using the dough recipe would use the Medium method of Flour Measurement as embodied in the Mass-Volume Measurement Calculator at http://foodsim.unclesalmon.com/, where the flour would be measured out by dipping the measuring cup into a bag or container of flour. Second, I assumed that the flour is a combination of bread flour and all-purpose flour (as the recipe instructions suggest), with a ratio of the two flours established to get us closer to the protein content of the flour that Buddy's uses. As an example, if you use a 50/50 blend (by weight) of King Arthur All-Purpose flour (KAAP) and King Arthur Bread flour (KABF), even though they are not bleached and bromated flours such as Buddy's uses, the protein content of the blend will be 12.2% (the same as the Occident flour). So, for the 3 cups of flour called for in the dough recipe, for convenience you would use 1 1/2 cups of the KAAP and 1 1/2 cups of the KABF by volume, measured out by the Medium Flour Measurement method. Third, I assumed that a cup of water is 8 ounces, as is typically recited in recipes for non-professionals, such as a home baker, even though a cup of water technically weighs more than 8 ounces (it is 8.345 ounces, although I often use 8.15 ounces for conversion purposes). Fourth, I assumed that the "fast acting yeast" is IDY even though the instructions for rehydrating the yeast are more applicable to ADY than to IDY (more on this below). Finally, I assumed that the salt is ordinary table salt.

Doing some math calculations and plugging everything into the expanded dough calculating tool at http://www.pizzamaking.com/expanded_calculator.html, this is what we get:

Flour Blend* (100%): 410.59 g | 14.48 oz | 0.91 lbs
Water (82.8558%): 340.2 g | 12 oz | 0.75 lbs
IDY (1.70485%): 7 g | 0.25 oz | 0.02 lbs | 2.32 tsp | 0.77 tbsp
Salt (1.35935%): 5.58 g | 0.2 oz | 0.01 lbs | 1 tsp | 0.33 tbsp
Total (185.92%): 763.37 g | 26.93 oz | 1.68 lbs | TF = N/A

* The Flour Blend is a 50/50 blend of KAAP and KABF; the dough is for two 8" x 10" pan pizzas, with each dough ball weighing 13.46 ounces; the corresponding thickness factor = 13.46/(8 x 10) = 0.16829; no bowl residue compensation.

As mentioned above, the yeast is rehydrated in warm water. I believe that this rehydration method, along with using a lot of yeast, is to speed up the fermentation process. You will also note that the calculated thickness factor is considerably larger than what Buddy's is using. For example, if you were to use 13.46 ounces of dough, 8 ounces of brick cheese, and about 4.5 ounces of pizza sauce (the cheese pizza recipe is silent as to the amount of pizza sauce to use), the total weight of the unbaked pizza would be about 26 ounces for an 8" x 10" pan pizza.
To get that pizza in the range of a baked 8" x 10" Buddy's cheese pizza, it would take a substantial weight loss during baking, more than would ever be achieved in Buddy's conveyor ovens or even in your ovens, at home or at market. Should you decide to try the formulation recited above, you could use 3 cups of Occident flour measured out volumetrically (or by weight if you prefer) and you could use less dough, much as you have been using to date to make an 8" x 10" pizza. I am reasonably confident you would end up with a credible Buddy's clone dough.
Peter

I wonder, since you assumed that the fast-acting yeast is IDY, how many home pizza makers have access to that yeast. At least in my local supermarkets, all that is offered is ADY, or fast-acting pizza yeast.

Norma, A couple of brands of "fast acting" yeasts that are commonly sold in supermarkets are the Fleischmann's RapidRise yeast (http://www.breadworld.com/products.aspx) and the Red Star QuickRise yeast (http://www.redstaryeast.com/products/red-star%C2%AE/red-star%C2%AE-quick-rise-yeast). They are both instant dry yeasts. Yeast products that are sold as bread machine yeasts--usually in small jars--are also instant dry yeasts, even though they may not be identified as such. I think that ADY can also be used in the formulation I posted, perhaps without having to make any other changes given the large amount used.
Peter

I didn't realize that Fleischmann's RapidRise yeast and Red Star QuickRise yeast were regular IDYs. I thought they were quicker acting than regular IDY. I learned something new today, thanks.

Norma, Yeast is a highly complicated subject, both technologically and in practice. There are many different strains of yeast, each with its own DNA so to speak and with its own favored applications, and its specific forms and uses in home settings can be different than its forms and uses in professional applications. I think you will get a better feel for what I am saying by reading Reply 21 at http://www.pizzamaking.com/forum/index.php/topic,16775.msg164500/topicseen.html#msg164500, including the article on yeast referenced in Reply 21, and Reply 53 at http://www.pizzamaking.com/forum/index.php/topic,5379.msg47676/topicseen.html#msg47676. My practice is to find a yeast I like and stick with it for all of my dough preparations even though there may be some variations from one brand or type of yeast to another. I suspect that the variations from one brand to another are slight, especially at the retail level, but for consistency purposes, or unless I am following a recipe that calls for a particular type or brand of yeast, I choose to use only a single type or brand. This is especially true when conducting multiple experiments with a particular dough formulation where I do not want to introduce new variables, such as yeast type or brand, from one experiment to another.
Peter
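The baker's percent arithmetic behind Peter's table is simple to reproduce (a sketch of the same calculation, not the actual expanded-calculator code):

#include <cstdio>

int main() {
    const double flour = 410.59; // grams, from the formulation above
    const double pct[] = {100.0, 82.8558, 1.70485, 1.35935};
    const char* name[] = {"Flour", "Water", "IDY", "Salt"};
    double total = 0.0;
    for (int i = 0; i < 4; ++i) {
        const double grams = flour * pct[i] / 100.0; // baker's percent of flour weight
        total += grams;
        std::printf("%-5s %7.2f g\n", name[i], grams);
    }
    const double ozPerBall = (total / 2.0) / 28.35; // two dough balls, grams to ounces
    std::printf("Total %7.2f g, TF = %.5f\n", total, ozPerBall / (8.0 * 10.0));
    return 0;
}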
{"url":"http://www.pizzamaking.com/forum/index.php/topic,21559.msg256232.html","timestamp":"2014-04-16T16:19:02Z","content_type":null,"content_length":"109719","record_id":"<urn:uuid:553d16e9-54f4-4a7e-93a4-5a1ca5ce1500>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
June 13, 2012, 13:24 - #2
Qingang Xiong (New Member; Ames, IA, USA; Join Date: May 2012; Posts: 20; Rep Power: 3)

Quote: Originally Posted by 4ndr34

Hello Foamers,
I need a clarification on how the chemical time step is computed in ODEChemistryModel.C:

scalarField& tc = ttc();
const label nReaction = reactions_.size();

if (this->chemistry_)
{
    forAll(rho, celli)
    {
        scalar rhoi = rho[celli];
        scalar Ti = this->thermo().T()[celli];
        scalar pi = this->thermo().p()[celli];

        scalarField c(nSpecie_);
        scalar cSum = 0.0;
        for (label i = 0; i < nSpecie_; i++)
        {
            scalar Yi = Y_[i][celli];
            c[i] = rhoi*Yi/specieThermo_[i].W();
            cSum += c[i];
        }

        forAll(reactions_, i)
        {
            const Reaction<ThermoType>& R = reactions_[i];
            omega(R, c, Ti, pi, pf, cf, lRef, pr, cr, rRef);

            forAll(R.rhs(), s)
            {
                scalar sr = R.rhs()[s].stoichCoeff;
                tc[celli] += sr*pf*cf;
            }
        }

        tc[celli] = nReaction*cSum/tc[celli];
    }
}

return ttc;

In the tc formula, the numerator is the sum of the concentrations, while the denominator is the sum of the forward reaction rates (pf*cf) multiplied by the stoichiometric coefficients of the products (sr). It then appears that the fastest reaction is dominant in the calculation of the chemical time, because the largest reaction rate dominates the sum in the denominator. But shouldn't the slowest reaction dominate the calculation of the chemical time in a multi-reaction mechanism? Did I misinterpret the code, or is there a reason why the fastest reaction is dominant?
Thanks all

Reply:

Dear 4ndr34,
As I understand it, since the denominator is the sum of the reaction-rate terms, the contribution of a faster rate makes the denominator larger and therefore makes tc smaller. Because a faster reaction needs a smaller time step, the time step is controlled by the fast reactions, while the completion of the multi-reaction system is dominated by the slowest one. I hope my understanding is correct.
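For intuition, here is a stripped-down, self-contained illustration of the reply's point (my own sketch, not OpenFOAM code): summing the rates in the denominator means one fast reaction dominates and pulls the characteristic time down.

#include <cstdio>

int main() {
    const int nReaction = 2;
    const double cSum = 1.0;                // total concentration (arbitrary units)
    const double rate[] = {1.0e-2, 1.0e+3}; // one slow and one fast reaction
    double denom = 0.0;
    for (int i = 0; i < nReaction; ++i)
        denom += rate[i];                   // mirrors tc[celli] += sr*pf*cf
    const double tc = nReaction * cSum / denom; // mirrors the final tc formula
    std::printf("tc = %.2e (set by the fast reaction)\n", tc);
    return 0;
}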
{"url":"http://www.cfd-online.com/Forums/openfoam-solving/103206-odechemistrymodel.html","timestamp":"2014-04-19T01:47:59Z","content_type":null,"content_length":"83213","record_id":"<urn:uuid:3ad5fe10-e1c3-41a7-a184-3de99a9aec09>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00052-ip-10-147-4-33.ec2.internal.warc.gz"}
Simpson's rule for volume

8 Mar 2007, 9:23 PM - #1

X_X" dunno what I'm doing wrong, but here's the question: The curve y = 2^x is rotated about the x-axis between x = 1 and x = 2. Use Simpson's rule with 3 function values to find the volume, to 3 significant figures. The answer is 27.2 units cubed, while I keep getting 17.2 units cubed. Well, it's getting late and I'm getting dizzy =/ maybe that's why lol. But yeah.. it's a bit hard to type up the answer without the integration symbols and stuff, but any help is appreciated.

8 Mar 2007, 9:36 PM - #2 (Senior Member, Join Date Aug 2004, The Track)

Re: Simpson's rule for volume

Use the function y = 2^x to find the function values. These are all nice points above the x-axis, so they are effectively radii of circles, once the curve is rotated around the x-axis. So, now you can find the area of these three circles since you know the radii!

Area circle 1 = πr^2 = π x (2^1)^2 = 4π
Area circle 2 = πr^2 = π x (2^1.5)^2 = 8π
Area circle 3 = πr^2 = π x (2^2)^2 = 16π

Rather than use Simpson's Rule to find the area, use it to find the volume.

Volume = h/3 x (A[1] + 4A[2] + A[3])
= 0.5/3 x (4π + 4 x 8π + 16π)
= 0.5/3 x 52π
= 27.227136333
= 27.2 cubic units

Last edited by PC; 8 Mar 2007 at 9:42 PM.

9 Mar 2007, 6:39 AM - #3

Re: Simpson's rule for volume

Ohhh nice, thanks. Could someone also do it using the proper formula too, π ∫ y^2 dx between the limits (I typed it as "[integrate]b/a" because I dunno how else to type it lol)? With that formula, would it only work with cylinders, spheres and cones, since that's where it's derived from? Also, back to the Simpson's rule thing, do you always find the "area of the circles" at the function values and then sub them into the formula to get the volume? Before, I was just subbing y^2 into the formula and then multiplying the whole thing by π =/
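PC's calculation, as a quick program (a sketch): each cross-section πy^2 (with y = 2^x) is treated as a disc area, and Simpson's rule with three function values is applied on [1, 2].

#include <cmath>
#include <cstdio>

int main() {
    const double PI = 3.141592653589793;
    const double a = 1.0, b = 2.0, h = (b - a) / 2.0;  // three points => h = 0.5
    const double A1 = PI * std::pow(std::pow(2.0, 1.0), 2); // 4*pi
    const double A2 = PI * std::pow(std::pow(2.0, 1.5), 2); // 8*pi
    const double A3 = PI * std::pow(std::pow(2.0, 2.0), 2); // 16*pi
    const double V = h / 3.0 * (A1 + 4.0 * A2 + A3);
    std::printf("V = %.3f cubic units\n", V); // 27.227
    return 0;
}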
{"url":"http://community.boredofstudies.org/12/mathematics/137704/simpsons-rule-volume.html","timestamp":"2014-04-19T22:05:34Z","content_type":null,"content_length":"66150","record_id":"<urn:uuid:733dd821-2ffb-4fa6-8619-de8c5cea1990>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00376-ip-10-147-4-33.ec2.internal.warc.gz"}
Higher Order Differential Equations: Variation of parameters

I'm not exactly sure how to solve the following non-homogeneous ODE by variation of parameters.

Solve the given non-homogeneous ODE by variation of parameters:

x^2 y'' + x y' - (1/4) y = 3/x + 3x

Can someone please point me in the right direction? Help will be much appreciated!!
{"url":"http://www.physicsforums.com/showthread.php?t=585657","timestamp":"2014-04-16T04:31:04Z","content_type":null,"content_length":"24074","record_id":"<urn:uuid:59968f26-1ae2-4a2e-9e1f-978c4ab9e004>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Jacobian Substitution Help

April 5th 2012, 11:23 AM - #1 (joined Mar 2012)

Evaluate the integral over the tilted square region with corners at (0,0), (π/2, π/2), (π, 0), (π/2, -π/2) by doing a change of variables u = x + y and v = x - y.

I'm having problems calculating the bounds. I think that I need to rewrite the bounds y = x, y = -x, y = -x + π, y = x - π in terms of u and v, but I'm having trouble with how to do it. The rest of the problem makes sense to me; just the bounds are giving me some confusion.

April 5th 2012, 12:03 PM - #2 (Senior Member, joined Jan 2008)

Re: Jacobian Substitution Help

If y = x then x - y = 0 and so v = 0.
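Continuing that pattern for the other three boundary lines (my completion of the reply's hint, worth verifying against your own working):

\begin{align*}
y = x      &\;\Rightarrow\; v = x - y = 0, \\
y = -x     &\;\Rightarrow\; u = x + y = 0, \\
y = -x+\pi &\;\Rightarrow\; u = x + y = \pi, \\
y = x-\pi  &\;\Rightarrow\; v = x - y = \pi,
\end{align*}

so the region becomes the square 0 ≤ u ≤ π, 0 ≤ v ≤ π. Since x = (u+v)/2 and y = (u-v)/2, the Jacobian is |∂(x,y)/∂(u,v)| = 1/2.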
{"url":"http://mathhelpforum.com/calculus/196853-jacobian-substitution-help.html","timestamp":"2014-04-17T16:13:13Z","content_type":null,"content_length":"30954","record_id":"<urn:uuid:eee0dee8-383f-4048-83ee-e660a8b15740>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Creating objects when the user needs them?

Captain Penguin:

I'm practicing C++ by writing matrix manipulation programs, each one being more sophisticated than the one before. (I'm also in linear algebra, so matrices are of specific interest to me.) Anyway, say I wanted the user to make as many matrices as they wanted and then bring them up when desired, but I don't want to waste a huge amount of memory by making 30 objects at the beginning of the program. For example, a prompt which asks the user if they want to make another matrix. If they say yes, a new object is THEN created with the user's parameters. Is this possible? Here's my code right now:

#include <iostream>
#include <cstdlib> // needed for system("PAUSE")
using namespace std;

class matrix
{
public:
    matrix(int, int); // constructor - takes size, aXb, and fills the array with 0's
    ~matrix();        // destructor - doesn't do much, but still needed
    void ShowMatrix();
    void GetValues();
private:
    // the following variables are private, which means they
    // can only be changed by functions/methods within the class itself!
    int x;
    int y;
    int a;
    int b;
    int matrix_ar[8][8]; // 8x8 array
};

matrix::matrix(int A, int B)
{
    a = A; // sets row size
    b = B; // sets column size
    for (x = 0; x < 8; x++)     // fills array with 0's
        for (y = 0; y < 8; y++)
            matrix_ar[x][y] = 0;
}

matrix::~matrix()
{
    // empty - nothing to clean up
}

void matrix::ShowMatrix() // displays the matrix
{
    for (x = 0; x < a; x++)
    {
        cout << "| ";
        for (y = 0; y < b; y++)
            cout << matrix_ar[x][y] << "\t";
        cout << " |\n";
    }
}

void matrix::GetValues()
{
    cout << "CON GRAD JOO LAY SHUNS! You've just made a NEW MATRIX!\n";
    cout << "The SIZE of your MATRIX is: " << a << "x" << b << endl;
    cout << "Now you must enter the matrix values! WHOO!\n";
    for (x = 0; x < a; x++)
        for (y = 0; y < b; y++)
        {
            cout << "(" << (x+1) << ", " << (y+1) << ")\n" << endl;
            cin >> matrix_ar[x][y];
        }
}

int main(int argc, char *argv[])
{
    // some variables...
    int a;
    int b;

    // program really starts here!
    cout << "Welcome!\n";
    cout << "Enter the size of your matrix:\n";
    cout << "Rows: ";
    cin >> a;
    cout << endl << "Columns: ";
    cin >> b;

    matrix matrix_A(a, b);

    return 0;
}

P.S. -> I brought this up in my previous thread on arrays, but the one reply wasn't clear so I thought I'd create a new topic.

ygfperson:

Research the 'new' keyword. It dynamically allocates memory for objects.

Object a;
Object* p;
p = &a;           // p points to a
p = new Object(); // p points to a block of memory containing an object
delete p;

This memory is allocated until the program exits, so you need to be careful to delete memory after you're done with it.

Yeah, you can - it's simple. When the user inputs what you want, do:

matrix* pcMatrix = new matrix(row, col);

Then when you're done with it, make sure you clean up and delete the object by doing:

delete pcMatrix;

Captain Penguin:

Yeah, that's what I thought, but doesn't that require making a different pointer for every matrix? Or is there something I don't understand?

Captain Penguin (quoting ygfperson's reply above):

Ahh, but what if the user wants objects q, r, s, t, v? Wouldn't that require making the pointers dynamically? Still don't understand :confused:

The Dog:

There's no simple way to explain this if you don't understand pointers properly. Anyway, if you want a new object, just new a pointer to an object. e.g.
If I wanted to create objects only when the user wants to, then I'd do this:

cout << "Do you want an object created on the stack?";
if (AnswerIsYes()) // Use your own method for testing user input
{
    Object* newObject = new Object;
    delete newObject; // delete object when you're done with it
}

Captain Penguin:

But I DO understand pointers. Here's exactly what I want to do: The user will have a menu. He will be able to create a matrix, delete a matrix, out of the blue. He will be able to create as many as he wants at one time (well, up to a certain limit). Each matrix will have its own variable name. The matrices will be able to be added, subtracted and multiplied with each other. Each matrix will be accessible with a few button presses.

Should I create a certain number of pointers, say 5 or 10 (the # would be the maximum # of matrices the user can use)? Then whenever the user creates a new matrix, do this?

matrix *p;
p = new matrix;

I think this would work, correct? But the only problem with it is that it would require lots of repetitive code, since I'd have to write a statement like the one above for every pointer. And would it waste memory if not all the pointers are used? Is there a more elegant way of doing it?

Yeah, I was about to say something like that, because I saw a disturbing error with the code.

Object *newObject;
newObject = new Object;
// do stuff
delete newObject;
newObject = NULL;

blackrat364:

I'm not sure what Alphabird thought was wrong with the code that he posted, but anyway, I have the solution to your problem. Create a linked list which has a pointer and the name of each of your dynamically allocated matrices. Dynamically allocate the nodes of the linked list. Problem solved... NEXT!

Captain Penguin:

I think the problem was that the object wasn't set to NULL? Anyway... linked lists? Nodes? Hmm... haven't learned about linked lists as of yet. Can you provide a link that explains? Also, what should I know beforehand? (I know all about pointers, classes, arrays, references... basically up to day 10 in "Teach Yourself C++ in 21 Days")

Ah, well, I'll just explain linked lists really fast. With an example, even.

class matrix;
class linkedlist; // you would want to add member functions to this class to go to the next node and add new nodes, etc., but I'm being lazy right now.
// You would also want a constructor to set those pointers to NULL.

class linkedlist
{
public:
    matrix *thematrix;
    char name[8];
    linkedlist *nextnode;
};

int main()
{
    linkedlist headnode;
    linkedlist *lastnode = &headnode;
    headnode.thematrix = new matrix;

    if (needtoaddanothermatrix)
    {
        // create a new node, make lastnode point to the node just created,
        // create a new matrix to go in the new node
        lastnode->nextnode = new linkedlist;
        lastnode = lastnode->nextnode;
        lastnode->thematrix = new matrix;
    }
}

That code is hardly sufficient for doing anything; for one thing, you've got a great big memory leak... you have to negotiate your entire linked list, starting at headnode, and use the delete keyword to delete the matrix in each node AND all of the nodes. That would be much easier with destructors for the linked list class. The basic theory behind a linked list is a structure which contains a pointer to the next item in the structure, so the list can grow or shrink at run time, dynamically allocated. Thought I should throw that in. If you still have questions, ask, or just do a board search or a search on Google for "linked lists."
{"url":"http://cboard.cprogramming.com/cplusplus-programming/24822-creating-objects-when-user-needs-them-printable-thread.html","timestamp":"2014-04-20T13:49:29Z","content_type":null,"content_length":"21897","record_id":"<urn:uuid:05093fcd-c740-4cf6-943b-ec9a2af9a346>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Feferman on inherent vagueness of CH
Jerry Seligman pytms at ccunix.ccu.edu.tw
Tue Dec 2 19:58:01 EST 1997

(1) Vagueness can come from quantifiers as well as from predicates. Consider the all-too-friendly inhabitants of Smalltown, Nowhere County:

  Everyone greets everyone else at least once a year.
  Everyone greets everyone else at least once a week.

The first may have a determinate truth value because everyone in the whole county sees everyone else at least once a year - at the Nowhere County Fair, perhaps. But we may suppose that the domain of quantification is vague, there being no census in Smalltown, and that the second statement lacks a determinate truth value. The vagueness of CH, if any, lies in the domain of quantification, namely sets of sets of natural numbers, sets of reals, the universe of sets, or whatever. (It depends on how you state CH.)

(2) This does not address Neil Tennant's point, since it only shows that an expression without vague parts can lack a determinate reference. What Neil is worried about, if I understand him correctly, is the claim that such expressions lack a definite *sense*. The conviction that they have sense is emphasized in FOM because some statements about possibly vague collections (such as the sets of reals) have a determinate truth value, and we can prove what it is. There is a strong intuition that if you can prove it, it must make sense.

(3) Nonetheless, I think we should resist the idea that the sense of a statement be linked to its future provability from new axioms (see Maddy's post). The consequence that some such statements have a sense without our knowing it, and others do not, puts sense too far from understanding for my taste. The sense of CH is given as much by its utility in proving other statements, and by its wider role in reasoning about the independence of axioms and other foundational matters, as by the possibility of its being proved from new axioms. An expression that can be usefully employed in this way must make sense, in some sense, even if it does not have a determinate reference (truth value).

(4) The last point is probably unfair to Penelope Maddy, who only suggested that we take the lack of future provability as a gloss on "inherent vagueness", not as a semantic thesis about the sense of mathematical statements. Following this suggestion would be to understand "inherent vagueness" as a matter of reference (albeit future-historical reference) rather than sense, and this could make Feferman's claim compatible with Tennant's conviction that CH has a sense. This may not be what Feferman wants.

(5) Sense does not always determine reference, even in mathematics. Isn't that a lesson of FOM?

Philosophy Institute, National Chung Cheng University, Min-Hsiung, Chia-Yi, Taiwan
Office: Philosophy 406
Phone: +886 (0)5 272 0411 x6267; +886 (0)5 242 8228 (direct)
Fax: +886 (0)5 272 1203
E-mail: pytms at ccunix.ccu.edu.tw
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-December/000388.html","timestamp":"2014-04-18T10:35:33Z","content_type":null,"content_length":"5419","record_id":"<urn:uuid:68ed4fe0-cc71-4f0b-b3c9-fef1313af24b>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about regression on CoolData blog

Targeting rare behavior

Guest post by Kelly Heinrich, Assistant Director of Prospect Management and Analytics, Stanford University

Last August, about two months into a data analyst position with a university's development division, I had the task of building a predictive model for the Office of Gift Planning (OGP). The OGP wanted a tool to help them focus on the constituents who are most likely to make a planned gift. I wanted to identify a few hundred of the best planned giving prospects, whom I could prioritize by the probability of donating. After a bit of preliminary research, I chose 1) 50 years of age and older and 2) inclusion in a recent wealth screening as the criteria for the study population. This generated a file of 133,000 records; 582 of them were planned gift donors. I've worked with files larger than this and did not expect a problem. However, that turned out to be a mistake, because the planned gift donors, who exhibited the target behavior, comprised 0.4% of the population, a proportion so small it can be considered rare. I'll explain more about that later; first I want to describe the project as it developed.

I decided to use logistic regression, with the dependent variable being either "made a planned gift" or "has not made a planned gift". I cleaned the data and identified some strong relationships between the variables. After trying several combinations for the regression model, I had one with a Nagelkerke of .24, which is relatively good. (Nagelkerke is like a pseudo R squared; it can be loosely interpreted as the variability of the dependent variable that is accounted for by the model's independent variables.) However, when I applied the algorithm to the study population, only 31 constituents without a planned gift and only 11 planned giving donors were identified as having a probability of giving of .5 or greater. I lowered the probability threshold of giving to .2 or greater, and 105 non-planned givers and 52 planned gift donors fell into this range. This was still disappointing.

Desperate to identify more new potential prospects, I explored more criteria to narrow the study population and built three successive models. For the purpose of the follow-up exploratory research and this article, I re-built all four models using the same independent variables to easily compare their outcomes. Here's a summary of the four models:

Models B, C, and D are all subsets of the original data set. Each model has advantages and disadvantages to it, and I was uncertain how to evaluate them against one another. For example, each additional filtering criterion resulted in losing part of the target population, meaning that I systematically eliminated constituents with characteristics that are in fact associated with making a planned gift. I scored everyone who was identified with a probability of .2 or greater in any of the models by the number of models in which they were identified. I'm not unhappy with that solution, but since then I've been learning about better methods for targeting rare behavior.

If the OGP were interested only in prioritizing the prospects already in their pool of potential planned giving donors, model D would serve that need. However, we wanted to identify the best potential planned giving prospects within the database. If we want to uncover untapped potential in an ever-growing database, we need to explore methods for targeting rare behavior.
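None of the original code or data accompanies this post, but the scoring step described above is easy to picture. Here is a hedged sketch in Python with scikit-learn; the file name, predictor set, and column names are all hypothetical, and note that scikit-learn's logistic regression is regularized by default, which a standard stats-package fit would not be:

    import pandas as pd
    from sklearn.linear_model import LogisticRegression

    # Hypothetical file and columns: 133,000 rows, 582 planned-gift donors.
    df = pd.read_csv("constituents.csv")
    predictors = ["age", "event_count", "years_of_giving"]   # made-up predictor set
    X, y = df[predictors], df["planned_gift_donor"]          # y is a 1/0 flag

    model = LogisticRegression(max_iter=1000).fit(X, y)
    df["p_gift"] = model.predict_proba(X)[:, 1]   # estimated probability of a planned gift

    # How many constituents without a planned gift clear each cutoff?
    prospects = df[df["planned_gift_donor"] == 0]
    for cutoff in (0.5, 0.2):
        print(cutoff, (prospects["p_gift"] >= cutoff).sum())

With a 0.4% base rate, very few records clear either cutoff, which is the disappointment described above.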
This seems especially important in our field where 1) donating, in general, is somewhat rare and 2) donating really generous gifts is rarer. Better methods of targeting rare behavior will also be useful for modeling special initiatives and unique kinds of gifts.

As I've been learning, logistic regression suffers from small sample bias when the target behavior is rare relative to the study population. This helps explain why applying the algorithm to the original population resulted in very few new prospects, even though the model had a decent Nagelkerke of .24. Some analysts suggest using alternative sampling methods when the target behavior comprises less than 5% of the study. (See endnote.) Knowing that the planned gift donors in my original project comprised only 0.4% of the population, I decided to experiment with two new approaches. In both of the exploratory models, I created the study population so that planned gift donors would comprise 5 percent.

First, I generated a study population by including all 582 of the planned gift donors and a random selection of 11,060 non-planned-gift constituents (model E). Then, I applied the algorithm from that population to the entire non-planned-gift population of 132,418.

In the second approach (model F), the planned gift population was randomly split into two equal-size groups of 291. I also randomly selected 5,530 non-planned-gift constituents. To build the regression model, I combined one of the planned gift donor groups (of 291) with 5,530 non-planned-gift constituents. I then tested the algorithm on the holdout sample (the other planned giving group of 291 with 5,530 non-planned-gift constituents). Finally, I applied the algorithm to the entire original population of 133,000. Here are the results:

Using the same independent variables as in models A through D, model E had a Nagelkerke of .39 and model F .38, which helps substantiate that the independent variables are useful predictors for planned giving. Models E and F were more effective at predicting the planned givers (129 and 123 respectively with a probability of giving greater than or equal to .5) compared to model A (11), i.e. more than ten times as many.

The sampling techniques have some advantages and disadvantages. The disadvantage is that by reducing the non-planned-gift population, it loses some of its variability and complexity. However, the advantage, in both models E and F, is that 1) the target population maintains its complexity, 2) new prospects are not limited by characteristic selection (the additional criteria that I used to reduce the population in models B, C, and D), which increases the likelihood of identifying constituents who were previously not on the OGP's radar, and 3) the effects of the sample bias seem to be reduced.

It's important to note that I displayed the measures (Nagelkerke and estimated probabilities) from the exploratory models and populations purely for comparison purposes. Because the study population is manipulated in the exploratory methods, the probability of giving should not be directly interpreted as an actual probability. However, the probabilities can be used to prioritize those with the highest values, and that will serve our need.

To explore another comparison between models A and F, I ranked all 133,000 records in each model. I then sorted all the records in model F in descending order.
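Here is an equally hedged sketch of the two sampling designs just described, reusing the df and predictors from the earlier snippet; again, all names and seeds are made up:

    from sklearn.model_selection import train_test_split

    donors = df[df["planned_gift_donor"] == 1]   # the 582 planned gift donors
    others = df[df["planned_gift_donor"] == 0]

    # Model E: all donors plus enough random non-donors that donors are ~5%.
    sample_e = pd.concat([donors, others.sample(n=11060, random_state=1)])
    model_e = LogisticRegression(max_iter=1000).fit(
        sample_e[predictors], sample_e["planned_gift_donor"])

    # Model F: half the donors train the model; the other half form the holdout.
    train_d, hold_d = train_test_split(donors, test_size=0.5, random_state=2)
    train_o, hold_o = train_test_split(others.sample(n=11060, random_state=3),
                                       test_size=0.5, random_state=3)
    train_f = pd.concat([train_d, train_o])
    model_f = LogisticRegression(max_iter=1000).fit(
        train_f[predictors], train_f["planned_gift_donor"])

    # Score the entire original population with both algorithms.
    df["p_e"] = model_e.predict_proba(df[predictors])[:, 1]
    df["p_f"] = model_f.predict_proba(df[predictors])[:, 1]

As the post cautions, because the training samples are manipulated, p_e and p_f should be used only to rank constituents, not read as actual probabilities.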
I took the top 1,000 records from model F and then ran a correlation between the rank of model A and the rank of model F; the correlation is .282, meaning there is a substantial difference between the ranked records.

Over the last several months, Peter Wylie, Higher Education Consultant and Contractor, and I have been exchanging ideas on this topic. I thank him for his insight, suggestions, and encouragement to share my findings with our colleagues.

It would be helpful to learn about the methods you've used to target rare behavior. We could feel more confident about using alternative methods if repeat efforts produced similar outcomes. Furthermore, I did not have a chance to evaluate the prospecting performance of these models, so if you have used a method for targeting rare behavior and have had an opportunity to assess its effectiveness, I am very interested in learning about that. I welcome ideas, feedback, examples from your research, and questions in regard to this work. Please feel free to contact me.

The ideas for these alternative approaches are adapted from the following articles:

• Chawla, Nitesh, Aleksandar Lazarevic, Lawrence Hall, and Kevin Bowyer. 2003. "SMOTEBoost: Improving Prediction of the Minority Class in Boosting." http://www3.nd.edu/~dial/papers/ECML03.pdf
• King, Gary, and Langche Zeng. 2001. "Logistic Regression in Rare Events Data." http://gking.harvard.edu/files/abs/0s-abs.shtml

Kelly Heinrich has been conducting quantitative research and analysis in higher education development for two and a half years. She has recently accepted a position as Assistant Director of Prospect Management and Analytics with Stanford University that will begin in June 2013.

Logistic vs. multiple regression: Our response to comments

Guest post by John Sammis and Peter B. Wylie

Thanks to all of you who read and commented on our recent paper comparing logistic regression with multiple regression. We were not sure how popular this topic would be, but Kevin told us that interest was high, and there were a number of comments and questions. There were several general themes in the comments; Kevin has done an excellent job responding, but we thought we should throw in our two cents.

Why not just use logistic?

The point of our paper was not to suggest that logistic regression should not be used — our point was that multiple regression can achieve prediction results quite similar to logistic regression. Based on our experience working with and training fundraising professionals getting introduced to analytics, logistic regression can be intimidating. Our goal is always to get these folks to use analytics to help with their fundraising initiatives. We find many of them catch on with multiple regression, and much less so with logistic regression.

Predicted values vs. probabilities

We understand that the predicted values generated by multiple regression are different from the probabilities generated by logistic regression. Regardless of the statistical modeling technique we use, we always bin the raw prediction or probability values into equal-sized score levels. We have found that score-level bins are easier to use than raw values. And using equal-sized score levels allows for easier evaluation of the scoring model.

"I cannot agree"

Some commenters, knowledgeable about statistics, said they would not use multiple regression when the inputs called for logistic. According to the rules, if the target variable is binary, then linear modelling doesn't make sense — and the rules must be obeyed. In our view, this rigid approach to method selection is inappropriate for predictive modelling. The use of multiple linear regression in place of logistic regression may not always make theoretical sense, but predictive modellers are concerned with whether or not a model produces an output that is useful in practical terms. The worth of a model is testable against new, real-world data; therefore a model has only one criterion for determining "appropriate" use: whether it really predicts what the modeler claims it will predict. The truth is revealed during evaluation.

A modest proposal

No one reading this should simply take our word that these two dissimilar methods yield similar results. Neither should anyone dismiss it out of hand without providing a critique based on real data. We would encourage anyone to try doing something on your own with data using both techniques and show us what you find. In particular, graduate students looking for a thesis or dissertation topic might consider producing something under this title: "Comparing Logistic Regression and Multiple Regression as Techniques for Predicting Major Giving."

Heck! Peter says that if anyone were interested in doing a study like this for a thesis or dissertation, he would be willing to offer advice on how to:

1. Do a thorough literature review
2. Formulate specific research questions
3. Come up with a study design
4. Prepare a proposal that would satisfy a thesis or dissertation committee.

That's quite an offer. How about it?

When less data is more, in predictive modelling

When I started doing predictive modelling, I was keenly interested in picking the best and coolest predictor variables. As my understanding deepened, I turned my attention to how to define the dependent variable in order to really get at what I was trying to predict. More recently, however, I've been thinking about refining or limiting the population of constituents to be scored, and how that can help the model.

What difference does it make who gets a propensity score? Up until maybe a year ago, I wasn't too concerned. Sure, probably no 22-year-old graduate had ever entered a planned giving agreement, but I didn't see any harm in applying a score to all our alumni, even our youngest. Lately, I'm not so sure.

Using the example of a planned gift propensity model, the problem is this: young alumni don't just get a score; they also influence how the model is trained. If all your current expectancies were at least 50 years old before they decided to make a bequest, and half your alumni are under 30 years old, then one of the major distinctions your model will make is based on age. ANY alum over 50 is going to score well, regardless of whether he or she has any affinity to the institution, simply because 100% of your target is in that age group.

The model is doing the right thing by giving higher scores to older alumni. If ages in the sample range from 21 to 100+, then age as a variable will undoubtedly contribute to a large chunk of the model's ability to "explain" the target. But this hardly tells us anything we didn't already know. We KNOW that alumni don't make bequest arrangements at age 22, so why include them in the model? It's not just the fact that their having a score is irrelevant. I'm concerned about allowing good predictor variables to interact with 'Age' in a way that compromises their effectiveness.
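This dilution effect is easy to reproduce with simulated data. The following sketch is entirely made up (sample size, base rates, and effect sizes are invented) but it mirrors the "partly made-up data" experiment described in the next paragraphs:

    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(0)
    n = 20000
    age = rng.integers(21, 95, n)
    attended = rng.random(n) < 0.25   # reunions equally popular at all ages

    # Expectancies arise mainly among the 50+, where attendance matters;
    # under-50s have a tiny base rate regardless of attendance.
    p = np.where(age >= 50, 0.01 + 0.04 * attended, 0.002)
    expectancy = (rng.random(n) < p).astype(int)

    df = pd.DataFrame({"age": age, "attended": attended.astype(int),
                       "expectancy": expectancy})

    def r_squared(sub):
        X, y = sub[["attended"]], sub["expectancy"]
        return LinearRegression().fit(X, y).score(X, y)

    print(r_squared(df))                 # everyone: attendance looks nearly useless
    print(r_squared(df[df.age >= 50]))   # 50+ only: a much stronger association
    print(r_squared(df[df.age < 50]))    # under-50s: essentially zero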
Variables are being moderated by 'Age', without the benefit of improving the model in a way that gets us what we want out of it. Note that we don't have to explicitly enter 'Age' as a variable in the model for young alumni to influence the outcome in undesirable ways.

Here's an example, using event attendance as a predictor. Let's say a lot of very young alumni and some very elderly constituents attend their class reunions. The older alumni who attend reunions are probably more likely than their non-attending classmates to enter into planned giving agreements — for my institution, that is definitely the case. On the other hand, the young alumni who attend reunions are probably no more or less likely than their non-attending peers to consider planned giving — no one that age is a serious prospect. What happens to 'event attendance' as a predictor in a model in which the dependent variable is 'Current planned giving expectancy'? Because a lot of young alumni who are not members of the target variable attended events, the attribute of being an event attendee will be associated with NOT being a planned giving expectancy. Or at the very least, it will considerably dilute the positive association between predictor and target found among older alumni.

I confirmed this recently using some partly made-up data. The data file started out as real alumni data and included age, a flag for who is a current expectancy, and a flag for 'event attendee'. I massaged it a bit by artificially bumping up the number of alumni under the age of 50 who were coded as having attended an event, to create a scenario in which an institution's events are equally popular with young and old alike. In a simple regression model with the entire alumni file included in the sample, 'event attendance' was weakly associated with being a planned giving expectancy. When I limited the sample to alumni 50 years of age and older, however, the R squared statistic doubled. (That is, event attendance was about twice as effective at explaining the target.) Conversely, when I limited the sample to under-50s, R squared was nearly zero.

True, I had to tamper with the data in order to get this result. But even had I not, there would still have been many under-50 event attendees, and their presence in the file would still have reduced the observed correlation between event attendance and planned giving propensity, to no useful end.

You probably already know that it's best not to lump deceased constituents in with living ones, or non-alumni along with alumni, or corporations and foundations along with persons. They are completely distinct entities. But depending on what you're trying to predict, your population can fruitfully be split along other, more subtle distinctions. Here are a few:

• For donor acquisition models, in which the target value is "newly-acquired donor", exclude all renewed donors. You strictly want to have only newly-acquired donors and never-donors in your model. Your good prospects for conversion are the never-donors who most resemble the newly-acquired donors. Renewed donors don't serve any purpose in such a model and will muddy the waters considerably.

• Conversely, remove never-donors from models that predict major giving and leadership-level annual giving. Those higher-level donors tend not to emerge out of thin air: they have giving histories.

• Looking at 'Age' again: making distinctions based on age applies to major-gift propensity models just as it does to planned giving propensity. Very young people do not make large gifts. Look at your data to find out at what age donors were when they first gave $1,000, say. This will help inform what your cutoff should be.

• When building models specifically for Phonathon, whether donor-acquisition or contact likelihood, remove constituents who are coded Do Not Call or who do not have a valid phone number in the database, or who are unlikely to be called (international alumni, perhaps).

• Exclude international alumni from event attendance or volunteering likelihood models, if you never offer involvement opportunities outside your own country or continent.

Those are just examples. As for general principles, I think both of the following conditions must be met in order for you to gain from excluding a group of constituents from your model. By a "group" I mean any collection of individuals who share a certain trait. Choose to exclude IF:

1. Nearly 100% of constituents with the trait fall outside the target behaviour (that is, the behaviour you are trying to predict); AND
2. Having a score for people with that trait is irrelevant (that is, their scores will not result in any action being taken with them, even if a score is very low or very high).

You would apply the "rules" like this: You're building a model to predict who is most likely to answer the phone, for use by Phonathon, and you're wondering what to do with a bunch of alumni who are coded Do Not Call. Well, it stands to reason that 1) people with this trait will have little or no phone contact history in the database (the target behaviour), and 2) people with this trait won't be called, even if they have a very high contact-likelihood score. The verdict is "exclude."

It's not often you'll hear me say that less (data) is more. Fewer cases in your data file will in fact tend to depress your model's R squared. But your ultimate goal is not to maximize R squared — it's to produce a model that does what you want. Fitting the data is a good thing, but only when you have the right data.

Logistic regression vs. multiple regression

by Peter Wylie, John Sammis and Kevin MacDonell

(Click to download printer-friendly PDF: Logistic vs MR-Wylie Sammis MacDonell)

The three of us talk about this issue a lot because we encounter a number of situations in our work where we need to choose between these two techniques. Many of our late night/early morning phone/internet discussions have been gobbled up by talking about which technique seems to be better under what circumstances. More than a few times, I've suggested we write something up about our experience with both techniques. In the end we've always decided to put off doing that because … well, because we've thought it might put a lot of people to sleep. Disagree as we might about lots of things, we're of one mind on the dictum: "Don't bore people." They have enough tedious stuff in their lives; we don't need to add to their burden.

On the other hand, as analytics has started to sink its teeth more and more into the world of advancement, it seems there is a group of folks out there who wrestle with the same issue. And the issue seems to be this: "If I have a binary dependent variable (e.g., major giver/non-major giver, volunteer/non-volunteer, reunion attender/non-reunion attender, etc.), which technique should I use? Logistic regression or multiple regression?"

We considered a number of ways to try to answer this question:

• We could simply assert an opinion based on our bank of experience with both techniques.
• We could show you the results of a number of data sets using both techniques and then offer our opinion.

• We could show you a way to compare both techniques using some of your own data.

We chose the third option because we think there is no better way to learn about a statistical technique than by using the technique on real data. Whenever we've done this sort of exploring ourselves, we've been humbled by how much we've learned.

Before we show you a way to compare the two techniques, we'll offer some thoughts on why this question ("Should I use logistic regression or multiple regression?") is so tough to find an answer to. If you're anxious to move on to our comparison process, you can skip this section. But we hope you don't.

Why This Is Not an Easy Question to Find an Answer To

We see at least two reasons why this is so:

• Multiple regression has lived in the neighborhood a long time; logistic regression is a new kid on the block.
• The articles and books we've read on comparisons of the two techniques are hard to understand.

Multiple regression is a longtime resident; logistic regression is a new kid on the block.

When World War II came along, there was a pressing need for rapid ways to assess the potential of young men (and some women) for the critical jobs that the military services were trying to fill. It was in this flurry of preparation that multiple regression began to see a great deal of practical application by behavioral scientists who had left their academic jobs and joined up for the duration. The theory behind multiple regression had been worked out much earlier in the century by geniuses like Ronald Fisher, Karl Pearson, and Harold Hotelling. But the method did not get much use until the war effort necessitated it. The computational effort involved was just too forbidding.

Logistic regression is a different story. From the reading we've done, logistic regression got its early practical use in the world of medicine, where biostatisticians were trying to predict binary outcomes like survived/did not survive, contracted disease/did not contract disease, had a coronary event/did not have a coronary event, and the like. It's only been within the last fifteen or twenty years that logistic regression has found its way into the parlance of statisticians in the behavioral sciences.

These two paragraphs are a long way around to saying that logistic regression is (in our opinion) nowhere near as well vetted as multiple regression is by people like us in advancement who are interested in predicting behavior, especially giving behavior.

The articles and books we've read on comparisons of the two techniques are hard to understand.

Since I (Peter) was pushing to do this piece, John and I decided it would be my responsibility to do some searching of the more recent literature on logistic regression as it relates to the substance of this project. To start off, I reread portions of texts I have accumulated over the years that focus on multiple regression as a general data analytic technique. Each text has a section on logistic regression. As I waded back into these sections, I asked myself: "Is what I'm reading here going to enlighten more than confuse the folks we have in mind for this piece?" Without exception, my answer was, "Nope, just the reverse." There was altogether too much focus on complicated equations and theory and nowhere near enough emphasis on the practical use of logistic regression. (This, in spite of the fact that each text had an introduction assuring us the book would go light on math and heavy on application.)

Then, using my trusty iPad, I set about seeing what I could find on the web. Not surprisingly, I found a ton of articles (and even some full-length books) that had found their way into the public domain. I downloaded a bunch of them to read whenever I could find enough time to dig into them. I'm sorry to report that each time I'd give one of these things a try, I would hear my father's voice (dad graduated third in his class in engineering school) as he paged through my own science and math texts when I was in college: "They oughta teach the clowns who wrote these things to write in plain English." (I always tried to use such comments as excuses for bad grades. Never worked.)

Levity aside, it is hard to find clearly written articles or books on the use of logistic versus multiple regression in the behavioral sciences. I think it's a bad situation that needs fixing, but that fixing won't occur anytime soon. On the other hand, I think dad was right not to let me off easy for giving up on badly written material. And you shouldn't let my pessimism dissuade you from trying out some of these same articles and books. (If enough of you are interested, perhaps Kevin and John and I can put together a list of suggested readings.)

A Way to Compare Logistic Regression with Multiple Regression

As promised, we'll take you through a set of steps you can use with some of your own data:

1. Pick a binary dependent variable and a set of predictors.
2. Compute a predicted probability value for every record in your sample using both multiple regression and logistic regression.
3. Draw three random subsamples of 20 records each from the total sample, so that each subsample includes the predicted multiple regression probability value and the predicted logistic regression probability value for every record.
4. Display each subsample of these records in a table and a graph.
5. Do an eyeball comparison of the probability values in both the tables and the graphs.

1. Pick a binary dependent variable and a set of predictors.

For this example, we used a private four-year institution with about 13,000 solicitable alums. Here are the variables we chose:

Dependent variable. Each alum who had given $31 or more lifetime was defined as 1; all others, who had given less than that amount, were defined as 0. There were 6,293 0's and 6,204 1's. Just about an even fifty/fifty split.

Predictor variables:

• CLASS YEAR
• SQUARE OF CLASS YEAR
• EMAIL ADDRESS LISTED (YES/NO, 1=YES, 0=NO)
• MARITAL STATUS (SINGLE=1, ALL OTHERS=0)
• HOME PHONE LISTED (YES/NO, 1=YES, 0=NO)
• UNIQUE ID NUMBER

Why did we use ID number as one of the predictors? Over the years we've found that many schools use all-numeric ID numbers. When these numbers are entered into a regression analysis, they often work as predictors. More importantly, they help to create very granular predicted scores that can easily be binned into equal-size groups.

2. Compute a predicted probability value for every record in your sample using both multiple regression and logistic regression.

This is where things start to get a bit technical, and where a little background reading on both multiple regression and logistic regression wouldn't hurt. Again, most of the material you'll find will be tough to decipher. Here we'll keep it as simple as we can.
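The authors worked in their own stats package and their code is not shown. Purely as an illustration of this step, here is a hedged Python sketch; the file and column names are invented to match the predictor list above, and scikit-learn's (regularized) logistic fit will not exactly match a textbook maximum-likelihood fit:

    import pandas as pd
    from sklearn.linear_model import LinearRegression, LogisticRegression

    df = pd.read_csv("alums.csv")                  # hypothetical: one row per alum
    df["class_year_sq"] = df["class_year"] ** 2
    preds = ["class_year", "class_year_sq", "has_email", "is_single",
             "has_home_phone", "id_number"]
    X, y = df[preds], df["gave31"]                 # gave31: 1 if lifetime giving >= $31

    df["p_mr"] = LinearRegression().fit(X, y).predict(X)   # may stray outside [0, 1]
    df["p_logit"] = LogisticRegression(max_iter=1000).fit(X, y).predict_proba(X)[:, 1]

    print(df[["p_mr", "p_logit"]].describe())      # compare the two ranges

    # The binning convention mentioned earlier: equal-sized score levels (100 here).
    df["score_logit"] = pd.qcut(df["p_logit"].rank(method="first"),
                                100, labels=False) + 1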
For both techniques the predicted value you want to generate is a probability, a number that varies between 0 and 1. In this example, that value will represent the probability that a record has given $31 or more lifetime to the college.

Now here's the rub: the logistic regression model will always generate a probability value that varies between 0 and 1. However, the multiple regression model will almost always generate a value that varies between something less than 0 (a negative number) and a number greater than 1. In fact, in this example the range of probability values for the logistic regression model extends from .037 to .948. The range of probability values for the multiple regression model extends from -.122 to 1.003. (By the way, this is why so many statisticians advise the use of logistic regression over multiple regression when the dependent variable is binary. In essence they are saying, "A probability value can't exceed 1 nor can it be less than 0. Since multiple regression often yields values less than 0 and greater than 1, use logistic regression." To be fair, we're exaggerating a bit, but not very much.)

3. Draw three random subsamples of 20 records each from the total sample, so that each subsample includes the predicted multiple regression probability value and the predicted logistic regression probability value for all 20 records.

The size and number of these subsamples is, of course, arbitrary. We decided that three subsamples were better than two and that four or more would be overkill. Twenty records, as you'll see a bit further on, is a number that allows you to see patterns in a table or graph without overcrowding the picture.

4. Display each subsample of these records in a table and a graph.

Tables 1-3 and Figures 1-3 below show how we took this step for our example. To make sure we're being clear, let's go through some of the details in Table 1 and Figure 1 (which we constructed for the first subsample of twenty randomly drawn records).

In Table 1 the probability values for multiple regression for each record are displayed in the left-hand column. The corresponding probability values for the same records for logistic regression are displayed in the right-hand column. For example, the multiple regression probability for the first record is .078827109. The record's logistic regression probability is .098107437. In plain English, that means the multiple regression model for this example is saying that this particular alum has about eight chances in a hundred of giving $31 or more lifetime. The logistic regression model is saying that the same alum has about ten chances in a hundred of giving $31 or more lifetime.

Table 1: Predicted Probability Values Generated from Using Multiple Regression and Logistic Regression for the First of Three Randomly Drawn Subsamples of 20 Records

Figure 1 shows the pairs of values you see in Table 1 displayed graphically in a scatterplot. You'll notice that the points in the scatterplot appear to fall along what roughly looks like a straight line. This means that the multiple regression model and the logistic regression model are assigning very similar probabilities to each of the 20 records in the subsample. If you study Table 1, you can see this trend, but the trend is much easier to discern in the scatterplot.
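Steps 3 and 4 (the subsamples, tables and scatterplots) might look something like this, continuing the sketch above; matplotlib is assumed:

    import matplotlib.pyplot as plt

    for i in range(3):                       # three random subsamples of 20 records
        sub = df.sample(n=20, random_state=i)
        print(sub[["p_mr", "p_logit"]])      # the table for this subsample
        plt.figure()
        plt.scatter(sub["p_mr"], sub["p_logit"])
        plt.xlabel("multiple regression probability")
        plt.ylabel("logistic regression probability")
    plt.show()

If the two techniques agree the way they do in the authors' example, each plot will hug a straight line.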
Table 2: Predicted Probability Values Generated from Using Multiple Regression and Logistic Regression for the Second of Three Randomly Drawn Subsamples of 20 Records

Table 3: Predicted Probability Values Generated from Using Multiple Regression and Logistic Regression for the Third of Three Randomly Drawn Subsamples of 20 Records

5. Do an eyeball comparison of the probability values in both the tables and the graphs.

We've already done such a comparison in Table 1 and Figure 1. If we do the same comparison for Tables 2 and 3 and for Figures 2 and 3, it's pretty clear that we'll come to the same conclusion: multiple regression and logistic regression (for this example) are giving us very similar answers.

So Where Does This All Take Us?

We'd like to cover several topics in this closing section:

• A frequent objection to using multiple regression versus logistic regression when the dependent variable is binary
• Trying our approach on your own
• The conclusion we think you'll eventually arrive at
• How we've just scratched the surface here

A frequent objection to using multiple regression versus logistic regression when the dependent variable is binary

Earlier we said that many statisticians seem to advise the use of logistic regression over multiple regression by invoking this logic: "A probability value can't exceed 1 nor can it be less than 0. Since multiple regression often yields values less than 0 and greater than 1, use logistic regression." We also said we were exaggerating the stance of these statisticians a bit (but not very much).

While we can understand this argument, our feeling is that, in the applied fields we toil in, that argument is not a very practical one. In fact, a seasoned statistics professor we know says (in effect): "What's the big deal? If multiple regression yields any predicted values less than 0, consider them 0. If multiple regression yields any values greater than 1, consider them 1. End of story." We agree.

Trying our approach on your own

In this piece we've shown the results of one comparison between multiple and logistic regression on one set of data. It's clear that the results we got for the two techniques were very similar. But does that mean we'd get such similar results with other examples? Not necessarily. So here's what we'd recommend. Try doing your own comparisons of the two techniques with:

• Different data sets. If you're a higher education institution, you might pick a couple of data sets, one for alums who've been out for more than 25 years and one for folks who've been out less than 10 years. If you're a non-profit, you can use a set of members from the west coast and one from the east coast.

• Different variables. Try different binary dependent variables like those we mentioned earlier: major giver/non-major giver, volunteer/non-volunteer, reunion attender/non-reunion attender, etc. And try different predictors. Try to mix categorical variables like marital status with quantitative variables like age. If you're comfortable with more sophisticated stats, try throwing in cross products and exponential terms.

• Different splits in the dependent variable. In our example piece the dependent variable was almost an exact 50/50 split. Since the underlying variable we used was quantitative (lifetime giving), we could have adjusted those splits in a number of ways: 60/40, 75/25, 80/20, 95/5, and on and on the list could go. Had we tried these different kinds of splits, would we have the same kinds of results for the two techniques? Since we actually did look at different splits like these, we can report that the results for both techniques were pretty much the same. But that's for this example. That could change with a different data set and different variables.

The conclusion we think you'll eventually arrive at

We're very serious about having you compare multiple regression and logistic regression on a variety of data sets with a variety of variables and with different splits in the dependent variable. If you do, you'll learn a ton. Guaranteed.

On the other hand, if we put ourselves in your shoes, it's easy to imagine your saying, "Come on guys. I'm not gonna do that. Just tell me what you think about which technique is better when the dependent variable is binary. Pick a winner." Given our experience, we can't pick a winner. In fact, if pushed, we're inclined to opt in favor of multiple regression for a couple of reasons. It not only seems to perform about as well as logistic regression, but more importantly (with the stats software we use) multiple regression is simply faster and easier to use than logistic regression.

But we still use logistic regression for models with binary dependent variables. And we continue to compare its efficacy against multiple regression when we can. And we rarely see a meaningful difference between the results. Why do we still use both modeling techniques? Because we think taking a hard and fast stance when you're doing applied science is not a good idea. Too easy to end up with egg on your face. Our best advice is to use whichever method is most familiar and readily available to you.

As always, we welcome your comments and reactions. Maybe even more so with this one.

Evaluate models with fresh data using Tableau heat maps

When I build predictive models, I normally don't build just one for each purpose. Presumably the model is going to be used, so I want it to be the best one possible. Yes, I test the model scores against a holdout data sample, but if I built only one model, I wouldn't have anything solid on which to base my evaluation of the results. I might reject a lone model if it truly failed against the validation set, but that has never happened to me — even a lackluster performance can be better than nothing: the model is flawed, but useful. That statement is true of models in general. So testing results with nothing to compare against is pointless.

I usually produce one multiple linear regression model and one binary logistic regression model using the stats software package Data Desk. Many permutations are possible, though: the sample to be scored can be limited in various ways, and the dependent variable can be formulated any number of ways. The choice of technique (for me, one type of regression or another) is usually determined by the nature of the DV (though not always). Given unlimited time, I would produce multiple models, but doing two at a time is manageable and keeps the task of comparison simple. The model that does the best classifying the members of the holdout sample wins the prize, and the loser is discarded.

But there's a problem. I've never had a model bomb when it comes to scoring the validation set, but I HAVE had models fail after deployment. Data that is held out for validation of the model is one thing — the real world outside the model can be a whole OTHER thing. Logically it should not be so: if the model doesn't "know" anything about the holdout data, then you'd think its performance on it would indicate how it will perform in the future.
Not so. At least, not always. I am not so quick, then, to discard the loser. I like to evaluate both models on fresh data as it comes in (new gifts, for example). The loser might be the better choice overall, or it might turn out that a combination of the two models performs better than one on its own. Maybe one model works better for a subset of the population (young alumni, say), which suggests that adding interaction terms or even using a multiple-model approach is something to consider in the future. If the models predict slightly different propensities (as a result of how the DVs were formulated), with both of them contributors to a desirable result, then it might be worthwhile keeping both score sets by multiplying them together.

I don't have an extended period of time for such testing — the model needs to be put into operation before it gets stale. Unfortunately, evaluation has always been a cumbersome process. I need to query the database for fresh results (conversions, upgrades, new planned giving expectancies — whatever) and then match it up by ID and score for each model (scores for untested models are not going to be in the database, obviously), and then produce some charts in Excel to visualize and compare results. It's not a ton of work, but it takes just long enough to prevent me from doing it more than once before it's time to commit. Even if I am evaluating the models after the fact, in order to learn for the next iteration of model-building, it's not an exercise I will want to carry out often.

There is a better way. Think reports. What does a report do? A report pulls real-time (or nightly-refreshed) data and assembles it in an interpretable way in a tabular or visual display. It performs this service on a regular or semi-regular basis, or on demand. (Yeah, okay, maybe I should have said an ideal report.) If part of your job consists in report preparation as well as predictive modeling, then you should be building model scores into your reports.

Here's a tutorial on how to use Tableau to easily create a report that compares the performance of two sets of model scores in a single visualization called a heat map. This visualization can be refreshed with live data as often as desired. If you want, you can add other fields (age, sex, degree, donor status, etc.) and easily filter the data to see how model performance differs depending on the composition of the population. Note that this is probably not a report you'll be sharing with your vice president. It does look cool, but it is mainly a diagnostic and exploration tool for your own use. The small initial investment of time is worth it if you build multiple models — it can be reused again and again.

This tutorial assumes you're already somewhat familiar with the basics of Tableau. If you don't have the software, and you don't want to download a free trial, stick around anyway — other software packages offer ways to create heat maps, and the basic idea is the same.

In this example, I am comparing percentile scores from two models I developed to predict which alumni are most likely to give at least $1,000 in the current fiscal year. One is a multiple linear regression model with a dependent variable defined as the sum of giving for the past five years (log-transformed). The other is a logistic regression model with a binary dependent variable defined as 'has giving of at least $1,000 in any one of the past three years'. The exact definitions of the DVs are reasonable but somewhat arbitrary. They are closely related, but different.
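To make the two definitions concrete, here is a hedged pandas sketch of how such DVs might be constructed from a gift-transaction table. Everything here is assumed (the file, column names, fiscal-year cutoffs, and the base of the log), since the post doesn't specify them:

    import numpy as np
    import pandas as pd

    gifts = pd.read_csv("gifts.csv")   # hypothetical: ID, fiscal_year, amount

    # DV for the multiple linear regression model: log of five-year giving.
    sum5 = gifts[gifts.fiscal_year >= 2007].groupby("ID")["amount"].sum()
    dv_mlr = np.log10(sum5 + 1)        # +1 keeps zero-dollar records defined

    # DV for the logistic model: $1,000+ in any one of the past three years.
    by_year = gifts[gifts.fiscal_year >= 2009].groupby(
        ["ID", "fiscal_year"])["amount"].sum()
    dv_logit = (by_year >= 1000).groupby("ID").any().astype(int)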
The techniques and the predictor variables are also different, so we should expect the models to yield different results. Tested against the validation set (which was the same for both models), the logistic model proved superior. But only a test on new gift data will be truly convincing. I want to take the entire population of alumni whom I have scored (a sample of about 27,000 individuals) and match them up with what they have given since the model was created. In this made-up example, let's suppose I created my models last August, and I want to see what those 27,000 alumni have given since the day I completed the work. In reality, I would have chosen a winning model months ago and this would be an after-the-fact analysis, but I am doing this in order to enrich the visualization for the purposes of this example. (Cheating, in other words.)

Tableau allows you to combine data from multiple sources. In this case, you will connect to an Excel file to get your model scores (since they're not in the database), and then connect to your database for giving results since September 1. If you do not connect directly to your database from Tableau, then you can paste your gifts data into a second sheet in your Excel workbook and extract the data via a single connection to that file — no problem.

The first worksheet will have three columns: one for unique ID, and one each for the scores from the two models. In this example, the scores were output from Data Desk as percentiles. If you want, you can add columns for key attributes such as age, sex and so on. The second worksheet (or the custom SQL that retrieves data directly from your data warehouse) will provide ID and sum of giving since September 1.

Normally in report creation, Tableau handles all the aggregation of the data — the input is raw transaction data, with each ID potentially appearing on multiple rows. In this example, however, we have aggregated the data already (summing giving by ID), and there is only one row of data for each ID. It doesn't matter, but it might have implications for some of the specific steps that follow. You should refer to your Tableau references for connecting to data sources. All I will add is that when you add the table (or worksheet) that contains the giving data, be sure to left-join on ID, because obviously not everyone you have scored has given since Sept. 1.

From here on in, I will use Tableau terminology that won't make any sense if you don't know the software (specifically, Tableau Desktop version 7.0). Let's build our first view:

1. If your data has been extracted correctly, 'ID' will be listed under Dimensions, and your two model score sets will be listed under Measures. In this example, I will from now on refer to them as MLR (for Multiple Linear Regression) and Logistic. Obviously I'm referring to my own data — just try to replicate what I'm talking about using your own data file.
2. For now, pause Auto Updates (or turn off automatic updates).
3. Right-click on Logistic and select "Create bins …". This will bin the percentile score into whatever size we desire. Change the default bin size to 5 and click OK. Note that a new variable is created in the Dimensions pane, because bins are categorical, not numerical.
4. Right-click on MLR and do the same thing.
5. Drag Logistic (bins) to the Columns shelf. Drag MLR (bins) to the Rows shelf.
6. Drag ID to the Text shelf. Click on the down-arrow of the ID pill you've just created, and select Measure > Count. This will create a count of all IDs that fall into each cell. The pill turns green to indicate it's now a measure instead of a dimension. (Because each ID appears in our data only once, it doesn't matter whether we use Count or Count Distinct.)
7. Change the Marks type from Automatic to Square (right above the Text shelf). Notice that the Text shelf suddenly turns into a Label shelf — each square of the heat map will be labeled with the number of IDs.
8. Drag ID from the Dimensions pane again, and this time drop it onto the Color shelf.
9. Click on the down-arrow of the ID pill you've just created, and select Measure > Count. This will base the color or shading of each cell on the number of IDs that fall into that cell.

The top left corner of your screen will look like this:

Now we're ready to allow the view to automatically update. The result won't look much like a heat map: probably just a bunch of little squares with numbers beside them. We need to enlarge the squares. Under the Size shelf is a slider: move this to the centre of the size range. Then drag one of the rows in the view to make it taller — hover over the axis for MLR (on the far left) until the pointer turns into an up-and-down arrow, then click and drag. When you let go, the squares will resize and the alleys of white space should start to close up. Keep messing with it until the squares touch on all sides. With a little formatting of labels for readability, the final product will look something like this:

A heat map can convey a lot of information at a glance. You can immediately see where a lot of individuals are concentrated: they're in the darkest squares. The numbers are hard to read, but up in the top left of the map, we see that the number of people who fall into the 0-4 bin in both the MLR and Logistic models is 572. In the lower right area of the map, we see that 563 people fell into the 95 to 99 bin in both models.

Notice that Tableau didn't bin evenly: every single bin has 5 score levels in it except for the bin labeled 100, which contains only individuals with a score of 100. In the map, we see that 147 people scored exactly 100 in both models. This can be corrected (using a calculated field instead of automatic binning), but I have decided to leave it the way it is. Due to the nature of this modeling exercise, I am mainly interested in the top few percentile scores anyway, and the 100 group is of particular interest. Having them mapped separately from the rest is not a problem.

The names of the bins don't reflect what they include. For example, "90" really means "90 to 94". You can rename them using aliases. Right-click on Logistic in the Dimensions pane, select Field Properties > Aliases…, and change the displayed values in the Values column. Do the same for MLR.

We haven't looked at the recent-gift data yet, but before we move on, what can we learn from this view? It appears the models agree on the individuals with extremely high or extremely low scores. In the middle range, there is still a lot of agreement but also many more cases of divergence, in which an individual scores high in one model but low in the other. This is clear, at-a-glance evidence that our models are similar but different. Depending on the application, choosing one model over the other could have a big effect on the result, for better or worse. In this particular application, where I am interested mainly in very high-scoring alumni only, it may not make that much difference at all … but let's not jump to that conclusion just yet.

If your data set included some key grouping information such as age or sex, it might be interesting to create a filter to examine whether the models differ on those factors. Here's an example with Age:

1. Drag Age from the Measures pane into the Filters shelf.
2. When Tableau asks you how you want to filter on Age, select "All Values" and click Next.
3. On the next box, select Range of Values, and click OK.
4. Hover over the green Age pill on the Filters shelf, click the down-arrow on the right end of the pill, and select Show Quick Filter.

Now you can set the upper and lower age bounds of the individuals you want to be counted in the heat map. As you slide the scale, it will display Age with numbers after the decimal, even though your values are all whole numbers. If this bothers you, right-click on Age in the Measures pane, select Field Properties > Number Format…, and click on Number (Custom). Adjust the number of decimal places to zero. Here's what the quick filter looks like:

The next two images show the heat map for different age ranges: the first for ages 20 to 50, the second for 51 to 80. The beauty of a heat map is that you can see the pattern from a distance.

Right off the bat, it's evident that it's harder for younger individuals to get a high score, but they fare better in the MLR model than they do in the Logistic model. Imagine a 45-degree line sloping from the top left corner to the bottom right corner — the presence of more dark-shaded squares under that line indicates individuals with higher MLR scores than Logistic scores. The logistic model, on the other hand, slightly favours older alumni. This alone might explain why the Logistic model outperformed the MLR model in terms of the validation set. The difference might be due to how age-related variables were introduced to each model as predictors; they may have been more influential in one than the other. It's hard to say without going back to the models themselves for a close look.

One can spend a lot of time playing and learning with these filters. Let's fast-forward and (finally) introduce recent-gift data — the giving that all scored individuals have engaged in since September 1, the day after the models were supposedly created. This data appears in the Measures pane as a variable I'll call 'Sum of Giving'. I'm specifically interested in who has given at least $1,000 (cumulatively), so I will need to create a calculated field to flag these people.

1. Right-click on Sum of Giving and select Create Calculated Field…
2. Give the field a name. I called it "Leadership donor".
3. The field Sum of Giving is already in the expression window. Now you just need to add some text around it to complete the expression — something like: IF [Sum of Giving] > 999 THEN 1 END
4. Click OK. This creates a field (variable) with the value 1 for any donor who has given at the Leadership level, and nothing otherwise. Note that you can enter any amount in place of 999. If you want to count donors vs. non-donors, enter "> 0".
5. The field appears in the Measures pane, because Tableau recognizes it as numeric. We're using it as a categorical variable, so let's convert it into a Dimension instead. Right-click on the field name and select "Convert to Dimension", or simply drag the field into the Dimensions pane — both actions accomplish the same thing.
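For readers without Tableau (and the post itself says other packages can do this), here is a hedged pandas sketch that condenses the walkthrough so far and anticipates the percentage view built next. The file and column names are hypothetical:

    import pandas as pd
    import matplotlib.pyplot as plt

    scores = pd.read_excel("scores.xlsx")          # ID, mlr, logistic (percentiles)
    gifts = pd.read_csv("gifts_since_sept1.csv")   # ID, amount
    df = scores.merge(gifts.groupby("ID")["amount"].sum().rename("sum_giving"),
                      on="ID", how="left").fillna({"sum_giving": 0})

    df["leadership"] = (df["sum_giving"] > 999).astype(int)   # the calculated field
    df["mlr_bin"] = (df["mlr"] // 5) * 5                      # bins of 5, as in Tableau
    df["log_bin"] = (df["logistic"] // 5) * 5

    counts = pd.crosstab(df["mlr_bin"], df["log_bin"])        # the first view
    pct = pd.crosstab(df["mlr_bin"], df["log_bin"],           # the percentage view
                      values=df["leadership"], aggfunc="mean")

    plt.imshow(counts, cmap="Reds")   # a rough stand-in for the Tableau heat map
    plt.colorbar()
    plt.show()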
Now we have a flag we can use to zero in on our higher-end donors. Let’s create a new view for that. At the lower left of your screen, right-click on the tab for the existing view and select “Duplicate Sheet”. This will allow us to continue exploring the heat map without changing our original version. We could, of course, do all our work in a single view and use filters to dynamically alter the view — that’s one of the strengths of Tableau — but for now let’s keep our views separate. 1. If you still have filters applied for Age or other variables, click on the quick filter menu and select “Clear Filter”. You can reapply it later if you want — we’re just getting it out of the way so we can see the full picture. 2. Drag ‘Leadership donor’ to the Filters shelf. 3. In the box that pops up, click “Select from list” on the General tab (it should already be selected), and then check the little box for ’1′. 4. Click OK. The result looks like this. (Click for full size.) Our big donors are clustered nicely down in the lower right corner, where both the MLR and the Logistic model scores are very high. Some of the lower-score bins contain zero Leadership donors, and Tableau has automatically hidden those rows and columns from view. Take a couple of minutes to study the map. Follow the three darkest squares (labeled 48, 74, and 23) as they form a 45-degree line up the centre of the map. If you compare the values in the squares that are directly opposite each other over this line, you’ll notice that there are slightly more Leadership donors on the upper side of the line. Those are donors who have higher Logistic scores than MLR scores. As well, notice that the scattered cloud of donors above the line is more extensive than that below the line. These observations should lead us to believe that the Logistic model performs slightly better than the MLR model. That conclusion is a bit hasty, though. There might be more Leadership donors on the high-Logistic/low-MLR side simply because more alumni ended up in those squares in the first place. We need to calculate the PERCENTAGE of the population of each square that went on to become a Leadership donor. That’s right, we’re going to create a third view, and calculate percentages to plug into each 1. Right-click on the tab for Sheet 2 and select Duplicate Sheet. (By the way, you can name these sheets whatever you want, just as in Excel.) 2. Remove the filter for Leadership donor. 3. Under Analysis in the top menu bar, select Create Calculated Field… 4. Name the new field ‘Leadership percentage’. 5. Enter this expression, which divides the number of Leadership donors by the total number of individuals. 6. Click OK. The new field appears in the Measures pane, which is fine. 7. Drag ‘Leadership percentage’ from Measures onto the Label shelf, replacing the count of ID. 8. Drag ‘Leadership percentage’ from Measures again, this time onto the Color shelf. 9. Right-click on any square in the map, and select Format…, which opens a formatting pane at the far left. 10. On the Pane tab, in the Default section, click on the down-arrow to the right of “Numbers”, and select Percentage. The result is below. (Click for full size.) You can select any precision for your percentages — I’ve rounded to whole numbers to avoid clutter. The darkest square is a single donor with a very high MLR score but a very low Logistic score, who just happened to give at the Leadership level. 
The darkest square is a single donor with a very high MLR score but a very low Logistic score, who just happened to give at the Leadership level. That square is of course labeled 100%, which causes the rest of the display to be toned down to a degree that makes it hard to see the patterns. This single donor might be a person to look at more carefully, but for now, let's exclude that person from the map. Hover your pointer over the square, and select Exclude from the tooltip box. (This creates a specific filter for this individual, which you can remove anytime.) All the squares are recoloured accordingly:

Now some of the darkest squares are also based on very sparse data. You can exclude any that you wish, but I'm fine with this display for now. For one thing, we can clearly see that having a Logistic score of 95 or higher is darn significant, regardless of what a donor's MLR score is. For example, there are four Leadership donors who scored only 65-69 in the MLR model but have Logistic scores of 95-99, which is what we want to see. (Those donors are in the square labeled 14%.)

Being able to demonstrate that one model is superior is pretty nifty. But I am especially intrigued at how easy it is to see how the models might work together to improve accuracy. Have a look at the square containing individuals who scored 100 in both models. There were 147 such individuals, and 48 of them gave $1,000 or greater — a whopping 32.6%. Here are a couple of facts to think about:

• Of all the individuals who scored 100 in the Logistic model, 26.7% went on to give at the Leadership level.
• Of all the individuals who scored 100 in the MLR model, 23.1% went on to give at the Leadership level.

Do you see what I'm getting at? When we combine both scores and zero in on people in the top percentile for both models, our yield of Leadership donors increases by nearly six percentage points over the best-performing model, to 32.6%. The same boost is evident for other high-scoring cells in the heat map: The Logistic model identifies some big donors that the MLR model misses, but the MLR model can enhance the accuracy of the Logistic model. This is potentially useful for prospect identification in Major Giving, when we really want to be as focused as possible.

So far I've shown you only donor numbers. What about revenue? Our data set includes gift amounts, so let's create a new view to visualize actual aggregate dollar totals.

1. Duplicate the last sheet you created, and remove any filters that had been applied.
2. Drag 'Sum of Giving' to the Label and Color shelves.
3. Format the values as currency.
4. For fun, change the color from green to red by clicking on Edit Colors in the context menu for the Sum of Giving card.

The result is pretty dramatic. This is for all donors, not just Leadership donors, but if you want to narrow it down to Leadership donors only, re-apply your filter. Just as with raw donor counts, the view above is a little misleading, simply because more prospects equals more donors, equals more dollars. So let's create a calculated field to give us AVERAGE dollars per donor for every cell in the heat map. The individuals with scores of 100 in both models gave nearly $5,000 on average — no other cell comes close. But guess what's even better:

• The individuals who scored 100 in the Logistic model gave an average of $2,927.
• The individuals who scored 100 in the MLR model gave an average of $2,971.

The models are strongest where they intersect!
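The dollar views reduce to the same pivot with a different aggregation, and the "intersection" claims are easy to check directly. Again a hedged sketch reusing the assumed column names from above.

# Average dollars per donor in each square (the AVERAGE view).
avg = df.pivot_table(index="Logistic score bin",
                     columns="MLR score bin",
                     values="Sum of Giving",
                     aggfunc="mean")

# Yield of Leadership donors: both models at 100 vs. each model alone.
both = df[(df["MLR score"] == 100) & (df["Logistic score"] == 100)]
print(both["Leadership donor"].mean())                                 # ~32.6% in this post's data
print(df.loc[df["Logistic score"] == 100, "Leadership donor"].mean())  # ~26.7%
print(df.loc[df["MLR score"] == 100, "Leadership donor"].mean())       # ~23.1%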
I've spent a lot of time and more than 4,000 words explaining how to do this in Tableau. This is very unusual for me — why a specific product such as Tableau, when one can create heat maps even in Excel? *

• It's just so easy to do it in Tableau, and the result looks attractive without requiring the user to fuss with formatting.
• The data can be refreshed whenever necessary. If you're connecting to an Excel file, simply paste new data into the file and refresh the data extract. It's that simple. (Remember to refresh the extract rather than replace the data source entirely, if you want to retain your aliases as you've defined them.)
• That goes for refreshing the giving data, AND for loading a whole different set of individuals and scores. You don't need to rebuild these views from scratch (although it's pretty easy to do so).
• Tableau allows you to dynamically filter the data any which way you want. It's a great way to explore the data. In my example, it would have been really interesting to filter on donors who UPGRADED to the $1,000+ level. Which model did a better job predicting upgrading? I don't know, but I'm going to find out.
• You can drill down to the underlying data. If you want to see a list of the people who scored 100 in both models, just hover the pointer over that square and click on the data icon, then the 'Underlying' tab. Imagine having wealth/capacity scores on one axis, and propensity scores on the other …
• I've shared my heat maps here as static images, but you can share your analyses as fully-functioning views, even with people who don't have the software on their computers. Save it as a Packaged Workbook, and they'll be able to open it in Tableau Reader (which they can download for free). They can use the filters you've set up to play with the data themselves.

This may be the longest CoolData post ever, but as usual I feel I am barely scratching the surface.

* P.S.: Heat maps are easily created in a combination of Data Desk and Excel. Without going into too much detail: In Data Desk use contingency tables (a.k.a. cross tabs) to create the basic matrix of numbers, with one score set as x and the other as y, and use derived variable expressions to limit the counts as desired. Copy and paste the table text into Excel, and use conditional formatting to create the desired shading. Unfortunately this requires some fussing and the result is static.

Stepwise, model-foolish?

My approach to building predictive models using multiple linear regression might seem plodding to some. I add predictor variables to the regression one by one, instead of using stepwise methods. Even though the number of predictor variables I use has greatly increased, and the time needed to build a model has lengthened, I am even less likely to use stepwise regression today than I was a few years ago.

Stepwise regression, available in most stats software, tosses all the predictor variables into the analysis at once and picks the best for you. It's a semi-automated process that can work forwards or backwards, adding or deleting variables until it's satisfied a statistical rule of thumb. The software should give you some control over the process, but mostly your computer is making all the big decisions.

I understand the allure. We're all looking for ways to save time, and generally anything that automates a repetitive process is a good thing. Given a hundred variables to choose from, I wouldn't be surprised if my software was able to get a better-fitting model than I could produce on my own. But in this case, it's not for me.
Building a decent model isn't just about getting a good fit in terms of high R square. That statistic tells you how well the model fits the data that the model was built on — not data the model hasn't yet seen, which is where the model does its work (or doesn't). The true worth of the model is revealed only over time, but you're more likely to succeed if you've applied your knowledge and judgement to variable selection. I tend to add variables one by one in order of their Pearson correlation with the target variable, but I am also aware of groups of variables that are highly correlated with each other and likely to cause issues. The process is not so repetitive that it can always be automated. Stepwise regression is more apt to select a lot of trivial variables with overlapping effects and ignore a significant predictor that I know will do the job better. Or so I suspect.

My avoidance of stepwise regression has always been due to a vague antipathy rather than anything based on sound technical concerns. This collection of thoughts I came across recently lent some justification to this undefined feeling: Problems with stepwise regression. Some of the authors' concerns are indeed technical, but the ones that resonated the most for me boiled down to this: Automated variable selection divorces the modeler from the process so that he or she is less likely to learn things about their data. It's just not as much fun when you're not making the selections yourself, and you're not getting a feel for the relationships in your data.

Stepwise regression may hold appeal for beginning modellers, especially those looking for push-button results. I can't deny that software for predictive analysis is getting better and better at automating some of the most tedious aspects of model-building, particularly in preparing and cleaning the data. But for any modeller, especially one working with unfamiliar data, nothing beats adding and removing variables one at a time, by hand.
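For readers who want to see the difference in practice, here is a bare-bones version of the hand-rolled loop described above: rank the candidates by Pearson correlation with the target, then add them one at a time, keeping each only if it earns its place. It is a sketch, not the author's actual code; all names are placeholders.

import pandas as pd
import statsmodels.api as sm

def forward_by_correlation(df, target, candidates, min_r2_gain=0.001):
    """Add predictors one at a time, in order of |Pearson r| with the
    target, keeping each only if R-squared improves by min_r2_gain.
    This mimics the manual loop, minus the modeller's judgement."""
    order = df[candidates].corrwith(df[target]).abs().sort_values(ascending=False)
    kept, best_r2 = [], 0.0
    for var in order.index:
        X = sm.add_constant(df[kept + [var]])
        r2 = sm.OLS(df[target], X).fit().rsquared
        if r2 - best_r2 >= min_r2_gain:   # keep it only if it earns its place
            kept.append(var)
            best_r2 = r2
    return kept, best_r2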
{"url":"http://cooldata.wordpress.com/category/regression/","timestamp":"2014-04-18T23:15:13Z","content_type":null,"content_length":"121890","record_id":"<urn:uuid:a1c6d5f6-bbed-4837-aef4-e4260d30ca08>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Geometry terminology: altitude from vertex, median from vertex, the bisector of angle 1. The problem statement, all variables and given/known data So these are line segments in triangles. I don't understand how they are different. 2. Relevant equations 3. The attempt at a solution
{"url":"http://www.physicsforums.com/showpost.php?p=3662168&postcount=1","timestamp":"2014-04-16T07:48:49Z","content_type":null,"content_length":"8671","record_id":"<urn:uuid:952807cb-cf05-4194-b744-2ed9e1582d79>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00159-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometric series

September 20th 2012, 09:14 AM
Geometric series
Hi everyone, I am stuck on an expansion. Hope someone could help me with this expansion. The expansion that I have is
1+(2^2)(r^2)+(3^2)(r^4)+(4^2)(r^6)+....... ,
1+(2^4)(r^2)+(3^4)(r^4)+(4^4)(r^6)+....... , ......
Can we find a geometric progression for this expansion? Thank you .....:)

September 20th 2012, 09:54 AM
Re: Geometric series
Quote:
Hi everyone, I am stuck on an expansion. Hope someone could help me with this expansion. The expansion that I have is 1+(2^2)(r^2)+(3^2)(r^4)+(4^2)(r^6)+....... , 1+(2^4)(r^2)+(3^4)(r^4)+(4^4)(r^6)+....... , ...... Can we find a geometric progression for this expansion? Thank you .....:)
Are these two different problems? If not, what is the meaning of the "," between the two series? And what do you mean by "the geometric expansion"? Those are NOT geometric series and you can't just "make" them geometric series.

September 20th 2012, 09:57 AM
Re: Geometric series
Ya, they are 2 different problems. Can we get a solution for that 1st problem?

September 20th 2012, 10:00 AM
Re: Geometric series
We have a solution for 1+ar+ar^2+ar^3+......., that is a/(1-r). Do we have a solution like this for the problem that I mentioned?

September 20th 2012, 10:50 AM
Re: Geometric series
Hello, gopi9! Here is the first one . . .

$S \:=\: 1+2^2r^2+3^2r^4+4^2r^6 \:+\:\hdots$

$\begin{array}{ccccccc}\text{We are given:} &S &=& 1 + 4r^2 + 9r^4 + 16r^6 + 25r^8 + \hdots \\ \text{Multiply by }r^2\!: & r^2S &=& \quad\; r^2 + 4r^4 + 9r^6 + 16 r^8 + \hdots \\ \end{array}$

$\text{Subtract: }\:S - r^2S \;=\;1 + 3r^2 + 5r^4 + 7r^6 + 9 r^8 + \hdots$

$\begin{array}{ccccccc}\text{So we have:} & (1-r^2)S &=& 1 + 3r^2 + 5r^4 + 7r^6 + 9r^8 + \hdots \\ \text{Multiply by }r^2\!: & r^2(1-r^2)S &=& \quad\;r^2 + 3r^4 + 5r^6 + 7r^8 + \hdots \end{array}$

$\text{Subtract: }\:(1-r^2)S - r^2(1-r^2)S \;=\;1 + 2r^2 + 2r^4 + 2r^6 + 2r^8 + \hdots$

$\text{And we have: }\:(1-r^2)^2S \;=\;1 + 2r^2\underbrace{(1 + r^2 + r^4 + r^6 + \hdots)}_{\text{geometric series}}$

$\text{The geometric series has the sum: }\frac{1}{1-r}$

$\text{Hence, we have: }\:(1-r^2)^2S \;=\;1 + 2r^2\left(\frac{1}{1-r}\right) \;=\;\frac{1+r^2}{1-r^2}$

$\text{Therefore: }\:S \;=\;\frac{1+r^2}{(1-r^2)^3}$

September 20th 2012, 10:55 AM
Re: Geometric series
That looks great.. Thank you so much Soroban..

September 20th 2012, 01:21 PM
Re: Geometric series
I tried to find the solution for the 2nd part in the same way. I tried many iterations but I could not find something that can be taken common (as 2r^2 in the above proof) to make it into a geometric series. The problem is that I have many series like this (2^2, 2^4, 2^6, .....). Can we generalize the solution? Thanks in advance

September 20th 2012, 01:30 PM
Re: Geometric series
For the 2nd one the solution that I got is (1-r^2)^4 S = 1+12(r^2)+23(r^4)+24(r^6)(1/(1-(r^2))) = (1+11r^2+11r^4+r^6)/(1-r^2), so S = (1+11r^2+11r^4+r^6)/((1-r^2)^5), but this is so complicated and for higher powers it will be much more complicated..

September 23rd 2012, 03:41 AM
Re: Geometric series
Quote:
$\text{And we have: }\:(1-r^2)^2S \;=\;1 + 2r^2\underbrace{(1 + r^2 + r^4 + r^6 + \hdots)}_{\text{geometric series}}$
$\text{The geometric series has the sum: }\frac{1}{1-r}$
$\text{Hence, we have: }\:(1-r^2)^2S \;=\;1 + 2r^2\left(\frac{1}{1-r}\right) \;=\;\frac{1+r^2}{1-r^2}$
That was a beautiful piece of manipulation. I was admiring it so closely that I found a small transcription error.
Small point, but, just in case anyone else is following closely (they should!): In the 2nd and 3rd lines of the quote, the appearances of $\frac{1}{1-r}$ were obviously intended to be $\frac{1}{1-r^2}$. You corrected it by the final equals expression on the 3rd line of the quote.
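For anyone who wants to double-check both closed forms, a few lines of Python confirm them numerically; note the fifth power of (1-r^2) in the denominator of the second sum.

# Numerical check of both closed forms at a sample ratio with |r| < 1.
r = 0.3
s2 = sum(n**2 * r**(2*n - 2) for n in range(1, 200))  # 1 + 4r^2 + 9r^4 + ...
s4 = sum(n**4 * r**(2*n - 2) for n in range(1, 200))  # 1 + 16r^2 + 81r^4 + ...

print(s2, (1 + r**2) / (1 - r**2)**3)                             # should agree
print(s4, (1 + 11*r**2 + 11*r**4 + r**6) / (1 - r**2)**5)         # should agree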
{"url":"http://mathhelpforum.com/advanced-algebra/203769-geometric-series-print.html","timestamp":"2014-04-21T00:39:56Z","content_type":null,"content_length":"13526","record_id":"<urn:uuid:d350e171-3036-4827-bb20-0420bbc2d4e1>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Cameron graph

There is a rank 4 strongly regular graph Γ with parameters v = 231, k = 30, λ = 9, μ = 3. The spectrum is 30^1 9^55 (–3)^175. It is the unique strongly regular graph with these parameters that is a gamma-space with lines of size 3. The vertices are the (22 choose 2) = 231 pairs from a fixed 22-set S provided with a fixed Steiner system S(3,6,22). Two pairs are adjacent when they are disjoint and their 4-point union is contained in a block. The automorphism group is M[22].2 of order 887040, acting rank 4 with point stabilizer 2^5:Sym(5).

Gamma space

This graph is the collinearity graph of a partial linear space with lines of size 3, namely the triples of pairs that partition a block of the Steiner system. This geometry is a gamma space: given a line L, each point outside L is collinear to 0, 1, or 3 points of L. It has Fano subplanes, 10 on each point and 2 on each line. The 15 lines and 10 planes on a fixed point form the edges and vertices of the Petersen graph. We see a GQ(2,2) subgeometry on each block.

Triple cover

This graph has a triple cover on 693 vertices with full group 3.M[22].2.

a) A symbol. Γ has 2765664 21-cocliques falling in 29 orbits of sizes 22, 352, 2310, 4620, 5280, 12320, 18480, 21120, 24640, 27720, 36960 (5x), 49280 (2x), 73920, 110880 (2x), 147840 (5x), 221760 (2x), 443520 (2x). Since 21 is the Hoffman bound on the size of a coclique, these cocliques are maximal, and each vertex outside one has 3 neighbours inside. For two of these 29 orbits the stabilizer of a coclique is transitive on its 21 vertices. These are precisely the two orbits of cocliques with a stabilizer that is maximal in Aut(Γ). These orbits have lengths 22 and 352. A coclique in the former is called a symbol, one in the latter a heptad (see c below). There are 22 symbols, forming a single orbit. The stabilizer of one is L[3](4).2 with vertex orbit sizes 21+210.

b) A quad. Γ has 77 quads, subgeometries GQ(2,2), forming a single orbit. The stabilizer of one is 2^4:S[6] with vertex orbit sizes 15+120+96.

c) A heptad. There are 352 of these (21-cocliques, see above), forming a single orbit. The stabilizer of one is A[7] with vertex orbit sizes 21+105+105 (the pairs in the heptad, meeting the heptad in 1 symbol, and those outside the heptad).

d) A vertex. There are 231 of these, forming a single orbit. The stabilizer of one is 2^5:S[5] with vertex orbit sizes 1+30+160+40. The relation defined by the orbit of size 40 is the triangular graph T(22). The local graphs are 2-clique extensions of the line graph of the Petersen graph.

e) A Fano plane. There are 330 of these, maximal 7-cliques on which the lines induce a Fano plane, forming a single orbit. The stabilizer of one is 2^3.L[3](2) x 2 with vertex orbit sizes 7+28+84+112. These are the only maximal cliques.

f) A decad / Sylvester subgraph / GO(2,1). Let our S(3,6,22) be obtained by deriving S(5,8,24) twice, so that there are two outside symbols a and b. There are 1288 splits of the set of 24 symbols into two dodecads. Of these splits, 616 have a and b on the same side, and 672 have them on different sides. Let a decad be a set of ten symbols that together with a and b form a dodecad. There are 616 decads, forming a single orbit. The stabilizer of one is A[6].2^2 (of order 1440) with vertex orbit sizes 30+36+45+120. On the orbit of size 36 Γ induces a Sylvester graph. On the orbit of size 45 Γ induces a GO(2,1) (generalized octagon of order (2,1), the flag graph of GQ(2,2), on 45 = 1+4+8+16+16 vertices).
g) An undecad / L2(11) subgraph. Let an undecad be a set of eleven symbols that together with a or b form a dodecad. There are 672 of these, forming a single orbit. The stabilizer of one is L[2](11):2 with vertex orbit sizes 55+66+110. The graph induced on the orbit of size 55 has valency 6 and point stabilizer S[4], see L[2](11) on 55.

A. E. Brouwer, Uniqueness and nonexistence of some graphs related to M[22], Graphs Combin. 2 (1986) 21-29.
J. I. Hall & S. V. Shpectorov, P-geometries of rank 3, Geom. Dedic. 82 (2000) 139-169.
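The quoted parameters and spectrum can be checked mechanically against the standard strongly-regular-graph identities; the short Python script below is such a sanity check, not part of the original page.

v, k, lam, mu = 231, 30, 9, 3

# Counting identity every strongly regular graph must satisfy.
assert k * (k - lam - 1) == (v - k - 1) * mu        # 30*20 == 200*3

# Restricted eigenvalues and their multiplicities.
d = ((lam - mu)**2 + 4 * (k - mu)) ** 0.5           # sqrt(144) = 12
r, s = (lam - mu + d) / 2, (lam - mu - d) / 2       # 9 and -3
f = ((v - 1) - (2*k + (v - 1)*(lam - mu)) / d) / 2  # multiplicity of r: 55
g = ((v - 1) + (2*k + (v - 1)*(lam - mu)) / d) / 2  # multiplicity of s: 175

print(r, s, f, g)  # matches the spectrum 30^1 9^55 (-3)^175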
{"url":"http://www.win.tue.nl/~aeb/graphs/Cameron.html","timestamp":"2014-04-20T14:10:14Z","content_type":null,"content_length":"5362","record_id":"<urn:uuid:52a2f27c-875b-4eb4-ac48-205909f49a86>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Key Questions: Public Support of Mathematics Can Mathematics Get Its Act Together? by Leon Seitelman The mathematics profession must immediately develop a continuing, coordinated program to demonstrate the importance of mathematics to the general public, funding agencies and policymakers. This awareness campaign should emphasize the profession's contribution to technology and the economy. Raising public consciousness about the importance of mathematics to modern life is a prerequisite for obtaining public support and funding for mathematics projects. The mathematics profession is in serious trouble. Budgets for mathematics, in both academia and government, are under vigorous attack. The balkanized culture of mathematics has stifled concerted action to garner public support. If the mathematics profession is to survive and prosper, its public presentation must change. All mathematicians must appreciate their shared professional responsibility to show that public support of mathematics provides real value. We need to develop materials to reach the public with this message. The Sky Is (or Was) Falling During the mid-January budget crisis, National Science Foundation Director Neal Lane called attention to the need for action: "...if you don't take it as one of your professional responsibilities to inform your fellow citizens about the importance of the science and technology enterprise, then...public support isn't going to be there. One thing that has been striking during this year of budget battles and, most recently, the shutdown, is the perceived stony silence of the science and technology community. I can assure you that this perceived lack of concern has not gone unnoticed in Washington." In short, we are an invisible constituency with no (apparent) redeeming social value. We must take the initiative in educating our fellow citizens about the importance of mathematics, and not just in the context of pending legislation. This article recommends specific action to make this happen. First Priority: Reach the Public Academic scholarship is formal, but effective communication with the rest of society requires the mathematics community to provide information to readers and listeners on terms and in language familiar to them. In our textbooks, we may leave the proof to the reader; but in public policy discussions, we must show every step. We Need to Change The profession needs a serious attitude adjustment, to gain public support. Non-technical people make policy decisions; we have to give decisionmakers the information they need, on their terms and in their language; if we don't, we won't get their support. We have to understand that we will succeed only if we present information that the average person can easily assimilate, and only enough to establish the importance of our discipline. We must point to important, understandable applications of mathematics, e.g., recent advances in coding theory that find expression in genetic research or compact disk technology, or that contribute to securing communications on the Internet. We need to show respect to the general public. We must be positive in our approach, demonstrating the vitality and importance of our discipline, but careful to avoid disparaging comparisons with other fields. This will require real work; a slipshod approach would be worse than doing nothing. Unusual effort will be needed, and uncharacteristic tact. In mathematics, as in every other endeavor, there is no free lunch. 
We Must Start a Dialog The profession needs to establish an ongoing dialog with every Administration and Congress, completely apart from the funding process. We have to show how mathematics contributes to society, and the security and productivity of the country. Otherwise, we'll be treated as just another special interest group, but without a powerful lobbyist. We need a unifying theme; a suggestion follows. Mathematics Is Important and Adds Real Value Mathematics is the language of technology. It is used to formulate, interpret, and solve problems in fields as diverse as engineering, economics, communication, seismology, and ecology. Mathematics provides us with powerful theoretical and computational techniques to advance our understanding of the modern world and societal problems, and to develop and manage the technology industries that are the backbone of our economy. Mathematics is a living discipline. Some traditional subjects in pure mathematics have been studied for hundreds of years; other topics, developing during the last few decades from the study of industrial issues, have formed a body of applied mathematics closely tied to the understanding of practical problems and basic phenomena. There is remarkable synergy between these seemingly disparate fields of study, and the abstract nature of mathematics supports important applications in an ever-growing number of areas. Keep the Message Simple We must build our message on that theme. That message must be unmistakable -- Mathematics Is Vital to the Health and Prosperity of the Nation. □ Mathematics Is Vital to the National Interest Strong mathematical capability on a national scale is essential for industrial and technological leadership. □ Mathematics Is an Enabler for Other Disciplines Virtually all other technology benefits directly from the extension of mathematical knowledge. □ Mathematical Competence Is a Workplace Necessity Mathematical requirements will increase dramatically for occupations in the information age. Lay out the Realities It is imperative to explain the benefits that result from mathematics, discuss how its practitioners work, and present the rationale for public support. For example: Mathematics enriches our knowledge and technology through continuing improvements and unexpected breakthroughs. Many important advances in technology apply techniques developed in one branch of mathematics to problems from another branch. The interdisciplinary impact of mathematics can be both substantial and unexpected. Appropriate examples are the application of chaos theory to economic modeling and markets, image reconstruction techniques to medicine and seismology, and group theory to nuclear Investments in mathematics provide a high rate of return. Although many technical fields rely to an extraordinary extent on analytical or computational techniques, they typically commit only a tiny portion of their resources to the support and advancement of these underlying disciplines. Further, the critical supporting role of mathematics is often completely unknown to the user community. These factors reinforce the inaccurate but popular notion that mathematics has no significant role in modern society. Budgets for mathematics projects are labor-intensive. Support for projects in mathematics generally includes relatively modest sums for computing equipment and software, compared to laboratory science fields such as physics and biology. 
For this reason, the true contribution of mathematics is far greater than its relatively modest funding. But limited capital requirements also mean that budget cuts in mathematics affect personnel more than in other disciplines. The growth and health of mathematics should be a national priority. Mathematics has a substantial impact on economic growth and development. Because mathematical knowledge is built steadily on a foundation of previous results, steady progress requires reliable, continuing funding for the mathematical infrastructure. A Unified Field Theory Is Sensible The mathematics profession needs to present a unified message; it is but a tiny fraction of the entire scientific endeavor. But each of the three major professional societies is currently planning to produce its own policy statement. If the voices of mathematics compete, they may simply drown each other out. In this case, academic preoccupation with precision and perfection works against us; professional societies should focus the effort to get the crucial message -- that mathematics is important -- to the general public. It is far better to publish a timely message that is 95% accurate, than to wait for completion of a perfect statement. In policy discussions, time is a critical A Check List for Action Much of our "publicity" about fields of mathematics and their applications can be taken from the excellent testimony given by our representatives before various Congressional committees. (SIAM Managing Director Jim Crowley's testimony last Summer is an excellent example.) We need to develop public service announcements for radio and television, and to encourage the influential print media to present this kind of information. When mathematics contributes to the understanding of particular social issues (e.g., statistical modeling to develop more accurate census estimates, formulation and interpretation of economic models (inflation, Social Security scenarios, budget models), epidemiology models, ATM security), we need to encourage the media to include this fact in their reporting. Citizens will not know about the contribution of mathematics, if no one tells them. The reality of the 1990's is that all of our institutions are being examined for relevance and value, with support levels more closely reflecting collective judgment about value obtained for investment. That this requires us to undertake a new role in education and publicity simply bears witness to the fact that mathematicians are, indeed, part of society. And that we have no more of an a priori claim on public resources than any other group. Ironically, this scenario means that the task of representing the value of mathematics to the general public falls disproportionately to the applications community, regarded by many within the profession as the proverbial black sheep of the family. The transcendent challenge to establishment of a profession-wide position will almost certainly be the forging of a consensus position rooted in an applications perspective. For this reason, the applied mathematics community must take the lead in bridging the cultural chasm with pure mathematics. A Modest Proposal Specifically, we should: □ Endorse the need for a common front for the profession. □ Issue a statement on science policy endorsed by AMS, MAA, and SIAM. (Invite INFORMS, AMATYC, NCTM, ... to sign on, but write it ourselves.) □ Collect brief descriptions of applications illustrating the impact of mathematics. 
□ Develop straightforward presentations of interesting and productive applications of mathematics, to which the average citizen can relate, for the profession's public outreach. □ Establish a permanent, joint committee of the three mathematics societies to ensure a continuing professional outreach, and fund it. The critical first task is development of a united front to strengthen the profession. We won't go anywhere until we all start to pull in the same direction. But when we do, the country will benefit, and so will mathematics...a true "win-win" situation! The author appreciates the many constructive suggestions offered by Ben Fusaro, Bob Borrelli, Courtney Coleman, and Brent Morris. This article has been reprinted from the June 1996 issue of SIAM News. Three years after the initial publication of this article, Seitelman reflects on a visitor's question: "What developments [relative to the need for a unified policy initiative among mathematics institutions and fields] have taken place in the interim [since June 1996], and are mathematics programs at universities still endangered?"
{"url":"http://mathforum.org/social/articles/seitelman.html","timestamp":"2014-04-21T10:03:19Z","content_type":null,"content_length":"16845","record_id":"<urn:uuid:15ebc52a-65a3-48dc-ba35-82cb35c4d2dd>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00148-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Programming algorithms...
Replies: 2, Last Post: Jul 8, 1995 9:45 PM

Programming algorithms...
Posted: Jul 7, 1995 11:00 AM

>So far, no one has responded to my concern that "if we don't teach
>the algorithm to kids now, who is gonna put the algorithm into the
>calculators in the next generation, not to mention improve
>the algorithm in the next generation of calculators."
>Any comment?

Heck, I guess I'll have a go at it...

The following is not meant to reflect how I feel about learning algorithms. Rather, it is meant to reflect a little on your reflections. Please don't read into this that I am against teaching algorithms. That's not going to be my point.

You argue that not teaching the algorithm now may mean that, in the future, there will be no one to put the algorithm into the calculator. First of all, what percentage of the population will ever really need to do such calculator/computer programming? Should we teach the many millions of students the algorithm so that perhaps 50 of them can do the programming? Second of all, I know my algorithms quite well, but I wouldn't know the first thing about getting down to a nitty-gritty computer language in order to teach a calculator to do it.

Just some thoughts....

Norm Krumpe

Date, Subject, Author
7/7/95, Programming algorithms..., Norm Krumpe
7/7/95, Re: Programming algorithms..., Rvav@aol.com
7/8/95, Re: Programming algorithms..., Chi-Tien Hsu
{"url":"http://mathforum.org/kb/thread.jspa?threadID=481250","timestamp":"2014-04-19T05:03:00Z","content_type":null,"content_length":"19334","record_id":"<urn:uuid:212c787b-a736-4755-aacf-d92a753bf84f>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Boulder Creek Statistics Tutors

I believe that the biggest hurdle to overcome with most struggling students is a fear of failure. Let me help your child to build the confidence they need to be successful. I'm an Australian high school mathematics and science teacher, with seven years' experience, who has recently moved to the bay area because my husband found employment here.
11 Subjects: including statistics, chemistry, calculus, physics

I tutored all lower division math classes at the Math Learning Center at Cabrillo Community College for 2 years. I assisted in the selection and training of tutors. I have taught algebra, trigonometry, precalculus, geometry, linear algebra, and business math at various community colleges and a state university for 4 years.
11 Subjects: including statistics, calculus, geometry, algebra 1

I graduated from UCLA with a math degree and Pepperdine with an MBA degree. I have taught business psychology in a European university. I tutor middle school and high school math students.
11 Subjects: including statistics, calculus, geometry, Chinese

...I plan to get my bachelor's in Cell and Molecular Biology. I've taken 2 years of chemistry, both inorganic and organic. My organic chemistry course was taught by an instructor who is well known for his skill in the subject; students attending other universities often choose to come to Cabrillo to take the course.
31 Subjects: including statistics, reading, chemistry, geometry

...Because I enjoy doing it and because I am good at it. What makes me good at tutoring? Knowing math, knowing my students, being good at drawing people out, and being good at adjusting how I teach so that it suits the unique individual I am working with.
22 Subjects: including statistics, reading, English, physics
{"url":"http://algebrahelp.com/Boulder_Creek_statistics_tutors.jsp","timestamp":"2014-04-19T14:31:14Z","content_type":null,"content_length":"25191","record_id":"<urn:uuid:faca71b9-b787-4e0c-8a4a-daa302a41956>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00201-ip-10-147-4-33.ec2.internal.warc.gz"}
Tyngsboro SAT Math Tutor

Find a Tyngsboro SAT Math Tutor

...I can teach the basics of computer programming. I have experience in C++ as a programmer for 2 years and in my undergraduate major. I also have experience in Java as I have been teaching it this past year to high school students.
19 Subjects: including SAT math, calculus, physics, algebra 2

...A rocket scientist, that is. I am a retired engineer who worked on the Apollo program to land a man on the moon. For several years, I helped design jet engines for commercial aircraft.
14 Subjects: including SAT math, physics, calculus, geometry

...I've been a substitute in local high schools for math and science and have also taught algebra, geometry, physics and German in these capacities. In addition, I have also been mentoring a student for the past three years, from the 7th grade through his current junior year in high school, as part of the S...
12 Subjects: including SAT math, physics, geometry, algebra 1

...In these studies, mathematics is an indispensable tool to deconvolute and understand the experimental results. Math is also used to model and evaluate possible mechanisms of the reaction pathway. Truly, math is the queen of the sciences as well as a necessity in everyday life.
12 Subjects: including SAT math, chemistry, prealgebra, study skills

...In addition, one of my Master's Degree courses was specifically on hacking the Linux Kernel, where we studied the kernel module by module, and some modules line by line, and re-wrote parts of the operating system to study the effects it had upon performance. In addition to my academic and teachi...
46 Subjects: including SAT math, calculus, geometry, statistics
{"url":"http://www.purplemath.com/tyngsboro_sat_math_tutors.php","timestamp":"2014-04-21T02:21:50Z","content_type":null,"content_length":"23686","record_id":"<urn:uuid:763802ff-7b84-44cc-bb31-fba9b636a8cc>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Noriyuki Abe, Hokkaido University
On a structure of modulo p compact induction of split p-adic groups

We consider a compact induction from a modulo p irreducible representation of a hyperspecial maximal compact subgroup of a split p-adic group. This representation is related to a parabolic induction. As an application, we give a classification of modulo p irreducible admissible representations in terms of supercuspidal representations.

Konstantin Ardakov, Queen Mary University of London
Localisation of affinoid enveloping algebras

I will explain how to relate representations of affinoid enveloping algebras to Berthelot's arithmetic D-modules on the flag variety, and sketch some applications to the p-adic representation theory of semisimple compact p-adic Lie groups.

Joël Bellaïche, Brandeis University
On Hecke algebras modulo $p$

We will present, motivate, and discuss special cases of a conjecture on the dimension and structure of the local components of the algebra of Hecke operators acting on the space of modular forms modulo a prime $p$ of a fixed level and all weights.

Francesc Castella, McGill University
On the p-adic Euler system of big Heegner points

Attached to a newform f of weight 2 and an imaginary quadratic field K, the Kummer images of Heegner points give rise to an anticyclotomic Euler system for the p-adic Galois representation associated with f. In this talk I will try to explain, assuming that the prime p splits in K, how the extension by Ben Howard of this construction to Hida families can be seen as producing a two-variable p-adic L-function in the spirit of Perrin-Riou. In the weight variable, we can thus show that Howard's construction interpolates the étale Abel-Jacobi images of Heegner cycles; in the anticyclotomic variable, this leads to new cases of the Bloch-Kato conjecture. The extension of some of these results to general CM fields in which p splits completely seems to be within reach.

Samit Dasgupta, University of California - Santa Cruz
$p$-arithmetic cohomology and completed cohomology of totally indefinite Shimura varieties

This is a report on work in progress with Matthew Greenberg. Let $F$ be a totally real field and let $p$ be a rational prime that for simplicity we assume is inert in $F$. Let $f$ be a modular form for a totally indefinite quaternion algebra $A$ over $F$ whose local representation at $p$ is Steinberg. We define a Darmon $\mathcal{L}$-invariant attached to $f$, which is a vector of $p$-adic numbers indexed by the embeddings of $F$ into $\mathbf{C}_p$. This $\mathcal{L}$-invariant is defined using the cohomology of a $p$-arithmetic subgroup of $A^*$ and is modeled after Darmon's definition of the $L$-invariant in the case $F=\mathbf{Q}, A=\mathrm{M}_2(\mathbf{Q})$. Next we consider certain $p$-adic Banach space representations of $\mathrm{GL}_2(F_p)$ denoted $B(k,\mathcal{L})$, where $k$ is the vector of weights of $f$ and $\mathcal{L}$ is a vector as before. These Banach space representations generalize the construction of Breuil in the case $F=\mathbf{Q}$, and build on work of Schraen. Our main result is that there exist $\mathrm{GL}_2(F_p)$-intertwining operators from $B(k,\mathcal{L})$ to the $f$-isotypic component of the completed cohomology of $A$ if and only if the vector $\mathcal{L}$ is the negative of the Darmon $\mathcal{L}$-invariant. This generalizes a result of Breuil in the case $F=\mathbf{Q}, A=\mathrm{M}_2(\mathbf{Q})$.
Gabriel Dospinescu, École Polytechnique
Extensions of de Rham representations and locally algebraic vectors

In this joint work with Vytautas Paskunas, we extend Colmez' results on locally algebraic vectors in the p-adic Langlands correspondence. The main result is the following: if p>3 and if Pi is a finite length admissible unitary Banach space representation of GL_2(Q_p) for which the locally algebraic vectors are dense in Pi, then the image of Pi by the Montreal functor is a potentially semi-stable p-adic Galois representation. We will explain how the theory of (phi,Gamma)-modules can be applied to prove this theorem.

Matthew Emerton, University of Chicago
Density of Galois representations having prescribed types

If X is a deformation space of p-adic Galois representations (either local or global), and S is a set of types, then one can consider the subset X(S) of points in the rigid analytic generic fibre of X whose associated Galois representations are potentially Barsotti-Tate at p of type lying in the set S. We give conditions on the set S which imply that the set X(S) is Zariski dense in X. As one application we conclude that the set of representations which are both potentially Barsotti-Tate and crystabelline at p is Zariski dense in X. This is joint work with Vytautas Paskunas.

Laurent Fargues, Université de Strasbourg
Beyond the curve

I will give results and conjectures I never had time to speak about concerning the curve I defined with J.-M. Fontaine.

Toby Gee, Imperial College
The Breuil-Mézard Conjecture for potentially Barsotti-Tate representations

I will discuss the proof of many cases of the Breuil-Mézard conjecture for two-dimensional potentially Barsotti-Tate representations (joint with Mark Kisin).

David Geraghty, Princeton University
Modularity lifting in non-regular weight

Modularity lifting theorems were introduced by Taylor and Wiles and formed a key part of the proof of Fermat's Last Theorem. Their method has been generalized successfully by a number of authors but always with the restriction that the Galois representations and automorphic representations in question have regular weight. I will describe a method to overcome this restriction in certain cases. I will focus mainly on the case of weight 1 elliptic modular forms. This is joint work with Frank Calegari.

Florian Herzig, University of Toronto
Ordinary representations of GLn(Qp) and fundamental algebraic representations

Motivated by a hypothetical p-adic Langlands correspondence for GLn(Qp) we associate to an n-dimensional ordinary (i.e. upper-triangular) representation rho of Gal(Qp-bar/Qp) over E a unitary Banach space representation Pi(rho)^ord of GLn(Qp) over E that is built out of principal series representations. (Here, E is a finite extension of Qp.) There is an analogous construction over Fp-bar. In the latter case we show under suitable hypotheses that Pi(rho)^ord occurs in the rho-part of the cohomology of a compact unitary group. This is joint work with Christophe Breuil.

Payman Kassaei, King's College London
Modularity lifting in weight (1,1,...,1)

We show how p-adic analytic continuation of overconvergent Hilbert modular forms can be used to prove modularity lifting results in parallel weight one. Combined with mod-p modularity results, these results can be used to prove certain cases of the strong Artin conjecture over totally real fields.
Ruochuan Liu, University of Michigan
Crystalline periods of eigenfamilies of p-adic representations

We will explain a (phi,Gamma)-module variant of Kisin's finite slope subspace technique and its applications to eigenfamilies of p-adic representations.

Jonathan Pottharst, Boston University
Triangulation of eigenvarieties

In the 1980s, Wiles showed the Galois representation over a Hida family to be reducible when restricted to a decomposition group at p. This result is the basis for the study of variation of Selmer groups of modular forms in the family. In joint work with K. S. Kedlaya and L. Xiao, we prove the analogous result over the eigencurve, using a strong finiteness result for Galois cohomology of rigid analytic families of (phi,Gamma)-modules over the Robba ring. Applications to Iwasawa theory are then possible by our previous work. (A similar result has been found recently by R. Liu, using a significant strengthening of Kisin's method of interpolation of crystalline periods.)

David Savitt, University of Arizona
The Buzzard-Diamond-Jarvis conjecture for unitary groups

We will discuss the proof of the weight part of Serre's conjecture for rank two unitary groups in the unramified case (that is, the Buzzard-Diamond-Jarvis conjecture for unitary groups). This is joint work with Toby Gee and Tong Liu. More precisely, we prove that any Serre weight which occurs is a predicted weight; this completes the analysis begun by Barnet-Lamb, Gee, and Geraghty, who proved that all predicted weights occur. Our methods are purely local, using Liu's theory of (phi,G-hat) modules to determine the possible reductions mod p of certain two-dimensional crystalline Galois representations.

Benjamin Schraen, Centre national de la recherche scientifique
On the presentation of supersingular representations

Let F be a quadratic extension of the field of p-adic numbers and k an algebraically closed field of characteristic p. We say that a smooth representation of GL2(F) on a k-vector space is of finite presentation if it is of finite presentation in the category of smooth representations of GL2(F). In this talk we prove that if F is different from Qp, irreducible supersingular representations of GL2(F) on k-vector spaces are not of finite presentation.

Claus Sorensen, Princeton University
Eigenvarieties and invariant norms

By a slight modification of the classical local Langlands correspondence, one can attach a locally algebraic representation of GL(n) to any n-dimensional potentially semistable Galois representation (with distinct Hodge-Tate weights). A conjecture of Breuil and Schneider asserts that the former admits an invariant norm. We will prove this when the latter comes from a classical point on an eigenvariety. More generally, for any definite unitary group G, we will explain how its eigenvariety (of some fixed tame level) mediates part of a global correspondence between Galois representations of the CM field, and Banach-Hecke modules B with a unitary G-action. For any regular weight W, we express the locally W-algebraic vectors of B in terms of the Breuil-Schneider representation on the Galois side.

Yichao Tian, Morningside Center of Mathematics
Analytic continuation of weight 1 overconvergent Hilbert modular forms in the tamely ramified case

The method of analytic continuation was initiated by Buzzard-Taylor to treat the icosahedral case of the Artin conjecture over Q. In this talk, I will explain how to extend this approach to the Hilbert case.
Let p be an odd prime number, and F be a totally real field in which p is unramified. We prove that a p-adic Galois representation over F, which is residually ordinarily modular and satisfies certain local conditions at p, actually comes from a Hilbert modular form of weight 1. For the moment, we only know how to treat the case where the Galois representation is tamely ramified at p. This is joint work with Payman Kassaei and Shu Sasaki. I hope Payman will have explained the general principle; I will then focus on the details of the analytic continuation process.

Gergely Zábrádi, Eötvös Loránd University
From (phi,Gamma)-modules to G-equivariant sheaves on G/P

Let G be the Q_p-points of a Q_p-split connected reductive group with Borel subgroup P=TN. For any simple root alpha of T in N, we associate functorially to a finitely generated etale (phi,Gamma)-module D over Fontaine's ring (equipped with an additional linear action of the group Ker(alpha)) a G-equivariant sheaf on the flag variety G/P. This functor is faithful. In the case of G=GL_2(Q_p), the global sections of this sheaf coincide with the representation D\boxtimes P^1 constructed by Colmez. This is joint work with Peter Schneider and Marie-France Vigneras.
{"url":"http://www.fields.utoronto.ca/programs/scientific/11-12/galoisrep/wksp_p-adic/abstracts.html","timestamp":"2014-04-17T15:51:52Z","content_type":null,"content_length":"27268","record_id":"<urn:uuid:a8479ab0-fb53-4092-8117-c8944bdb73d9>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00315-ip-10-147-4-33.ec2.internal.warc.gz"}
Clarkdale, GA Geometry Tutor

Find a Clarkdale, GA Geometry Tutor

...I have tutored several students in math and physics, including Algebra, and have seen very positive results. I earned a BS in math and physics from the University of Alabama in Huntsville and a MS in physics from Georgia Tech. I am currently working on a PhD in physics at Georgia Tech.
11 Subjects: including geometry, calculus, physics, precalculus

...My reduced hourly rate for online tutoring is $65. If you would like to try it, let me know and we will schedule a 'tour'. Thanks.) My name is Anthon and I'm excited to have the opportunity to be your or your child's tutor.
19 Subjects: including geometry, physics, calculus, GRE

...Science and math are my strengths, and I had some great teachers over the years. Because of this, I can help you enjoy some of the tougher subjects like I do. You can do it, and I am happy to help.
32 Subjects: including geometry, chemistry, physics, algebra 1

...My strength is in math, but throughout my years of teaching elementary grades I have become stronger in helping out in other subjects. I hold a master's degree for grades K-6. I also hold Connecticut State Certification and Georgia Certification for grades K-6. I also spent 5 years substitute teaching grades K-6.
6 Subjects: including geometry, elementary (k-6th), elementary math, spelling

...One of my skills is breaking down subject matter into smaller chunks that make it meaningful to the student. The best tutorial method for special needs students is the hands-on, kinesthetic approach to learning. I currently teach math to ADD/ADHD students.
18 Subjects: including geometry, reading, English, biology

Related Clarkdale, GA Tutors
Clarkdale, GA Accounting Tutors
Clarkdale, GA ACT Tutors
Clarkdale, GA Algebra Tutors
Clarkdale, GA Algebra 2 Tutors
Clarkdale, GA Calculus Tutors
Clarkdale, GA Geometry Tutors
Clarkdale, GA Math Tutors
Clarkdale, GA Prealgebra Tutors
Clarkdale, GA Precalculus Tutors
Clarkdale, GA SAT Tutors
Clarkdale, GA SAT Math Tutors
Clarkdale, GA Science Tutors
Clarkdale, GA Statistics Tutors
Clarkdale, GA Trigonometry Tutors

Nearby Cities With geometry Tutor
Aragon, GA geometry Tutors
Austell geometry Tutors
Braswell, GA geometry Tutors
Chattahoochee Hills, GA geometry Tutors
Dallas, GA geometry Tutors
Ellenwood geometry Tutors
Hapeville, GA geometry Tutors
Hiram, GA geometry Tutors
Lebanon, GA geometry Tutors
Palmetto, GA geometry Tutors
Powder Springs, GA geometry Tutors
Red Oak, GA geometry Tutors
Taylorsville, GA geometry Tutors
Temple, GA geometry Tutors
Winston, GA geometry Tutors
{"url":"http://www.purplemath.com/Clarkdale_GA_geometry_tutors.php","timestamp":"2014-04-19T23:43:20Z","content_type":null,"content_length":"23933","record_id":"<urn:uuid:53bb5d52-a77a-4d33-931b-f9a3ec3b3ffd>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: st: new package seqlogit available on ssc

From: "Anders Alexandersson" <andersalex@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: new package seqlogit available on ssc
Date: Wed, 29 Aug 2007 11:53:59 -0400

I just noticed Maarten Buis's new -seqlogit- command which looks very interesting. How does it improve on Rory Wolfe's -ocratio- command? An obvious improvement is the Stata version, 9.2 versus 5.0. Are there other improvements? Unfortunately, I cannot attend the user meeting on September 7.

Anders Alexandersson

On 8/16/07, Maarten buis <maartenbuis@yahoo.co.uk> wrote:
> Thanks to Kit Baum a new package called -seqlogit- is now available on
> ssc. This package fits a sequential logit model. Traditionally, the aim
> of a sequential logit model is to study a process that can be described
> as a series of choices between a small number of options. However,
> these choices eventually lead to an end result. The aim of this package
> is to not only study the effects of explanatory variables on these
> choices (study the process), but also study the effects on the end
> result.
>
> For example, in education parents make a series of choices about what
> type of schools to send their children to. Typically, the aim of the
> model is to explore the effects of explanatory variables on the
> probabilities of continuing from one level to the next, i.e. study the
> process through which education is attained. However, in the end this
> process leads to an outcome: highest achieved level of education. Both
> the process and the end result are of substantive interest and are
> related to one another, but they are not the same.
>
> This package implements a method by Buis (2007) to study simultaneously
> the effect of explanatory variables on the process (the probability of
> passing from one level to the next) and the final outcome (highest
> achieved level). The effect of the explanatory variable on the end
> result can be shown to be a weighted sum of the log odds of passing the
> transitions. The weights are the product of the following three
> elements:
>
> o the proportion of people at risk of passing the transition, so a
> transition receives more weight if more people are at risk of passing
> it.
>
> o the variance of the dummy indicating whether the transition was
> passed or not, so a transition receives more weight if close to 50%
> pass, and less weight if virtually everybody passes or fails that
> transition.
>
> o the expected difference in outcome between those that pass and those
> that fail the transition, so a transition receives more weight if
> people gain more from passing it.
>
> -seqlogit- will estimate a sequential logit and allows one to use
> -predict- to predict the weights, and its components. Furthermore, it
> contains the -seqlogitdecomp- command, which graphically displays the
> decomposition of the effect of the explanatory variable on the end
> result into the effects of the explanatory variable on the individual
> transitions and their weights.
>
> This package also contains three ancillary files that describe an
> example. The files are a do-file that implements the example, the
> dataset used by the do-file, and a pdf-file which shows the output and
> explains some of the tricks used in the example.
> To install the package and the ancillary files type
> -ssc install seqlogit, all-
>
> I will give a presentation on this package on Friday September 7 at the
> 2007 Nordic and Baltic Stata Users Group meeting.
>
> Note:
> What I call here a sequential logit model is also known under a number
> of different names: sequential response model (Maddala 1983),
> continuation ratio logit (Agresti 2002), model for nested dichotomies
> (Fox 1997), and the Mare model (Shavit and Blossfeld 1993) (after Mare
> 1981).
>
> References:
> Agresti, Alan. 2002. Categorical Data Analysis, 2nd edition. Hoboken,
> NJ: Wiley-Interscience.
> Buis, Maarten L. 2007. "Not all transitions are equal: The relationship
> between inequality of educational opportunities and inequality of
> educational outcomes." http://home.fsw.vu.nl/m.buis/wp/distmare.html
> Fox, John. 1997. Applied Regression Analysis, Linear Models, and
> Related Methods. Thousand Oaks: Sage.
> Maddala, G.S. 1983. Limited Dependent and Qualitative Variables in
> Econometrics. Cambridge: Cambridge University Press.
> Mare, Robert D. 1981. "Change and Stability in educational
> stratification." American Sociological Review, 46(1), pp. 72-87.
> Shavit, Yossi and Hans-Peter Blossfeld. 1993. Persistent Inequality:
> Changing Educational Attainment in Thirteen Countries. Boulder:
> Westview Press.
>
> -----------------------------------------
> Maarten L. Buis
> Department of Social Research Methodology
> Vrije Universiteit Amsterdam
> Boelelaan 1081
> 1081 HV Amsterdam
> The Netherlands
>
> visiting address:
> Buitenveldertselaan 3 (Metropolitan), room Z434
>
> +31 20 5986715
> http://home.fsw.vu.nl/m.buis/
> -----------------------------------------

* For searches and help try:
* http://www.stata.com/support/faqs/res/findit.html
* http://www.stata.com/support/statalist/faq
* http://www.ats.ucla.edu/stat/stata/
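As a rough illustration of the decomposition described in the announcement, the weight of one transition is the product of the share at risk, the variance of the pass indicator, and the expected gain from passing. The Python sketch below just spells out that arithmetic; it is not the -seqlogit- code, and the data-frame column names are invented for the example.

import pandas as pd

def transition_weight(df, at_risk, passed, outcome):
    """Weight of one transition: (proportion at risk) x (variance of the
    0/1 pass indicator) x (expected outcome gain from passing)."""
    risk = df[at_risk].mean()                    # share of people at risk
    p = df.loc[df[at_risk] == 1, passed].mean()  # pass rate among those at risk
    gain = (df.loc[df[passed] == 1, outcome].mean()
            - df.loc[(df[at_risk] == 1) & (df[passed] == 0), outcome].mean())
    return risk * p * (1 - p) * gain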
{"url":"http://www.stata.com/statalist/archive/2007-08/msg01121.html","timestamp":"2014-04-18T03:18:34Z","content_type":null,"content_length":"11256","record_id":"<urn:uuid:86f17465-caec-4413-8b9e-832c8813c76c>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
2011-2012  Marie Curie Fellow, University of Oxford (Prof. Feliciano Giustino)
2007-2008  Postdoctoral researcher, University Claude Bernard Lyon (Prof. Xavier Blase)

I develop and apply ab initio computational methods for modeling emerging materials with applications in energy transport and electronics. I am particularly interested in phonon-mediated superconductors, thermoelectrics and carbon nanomaterials.
{"url":"http://www2.binghamton.edu/physics/people/margine.html","timestamp":"2014-04-19T14:53:08Z","content_type":null,"content_length":"20542","record_id":"<urn:uuid:782b8474-61fc-4382-8c2d-d4f6243ec675>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

in a plane, if two lines are perpendicular to the same line, then they are what to each other.
{"url":"http://openstudy.com/updates/4de2af544c0e8b0b4c52a9d8","timestamp":"2014-04-21T07:59:30Z","content_type":null,"content_length":"39916","record_id":"<urn:uuid:4d6c7107-3e23-4d9b-bde4-9857b396097c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

How do \(\Delta V\) and \(\Delta U\) have the same magnitude for a given charge, when the change in U is the integral multiplied by q?

\[\Delta V=V_b-V_a=\frac{\Delta U}{q_0}=-\int_a^b\vec E \cdot d\vec l\]

Well, I'm being told that "changes in U and V have the same magnitude for a given charge; they only depend on E.dl (the dot product of each differential path vector with the electric field)", and I don't understand how U and V can have the same magnitude.

Are we ignoring the test charge (q)?

Maybe the test charge is one unit.
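[Editorial gloss on how the thread resolves, not part of the original exchange: from the definition quoted above, \[\Delta U = q_0\,\Delta V,\] so \(|\Delta U| = |\Delta V|\) exactly when \(|q_0| = 1\), i.e. for a unit test charge; for any other charge the two magnitudes differ by the factor \(|q_0|\). Numerically, a 1 C charge moved through a 5 V potential difference changes energy by 5 J.]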
{"url":"http://openstudy.com/updates/51205862e4b06821731cc3a9","timestamp":"2014-04-17T04:21:01Z","content_type":null,"content_length":"35121","record_id":"<urn:uuid:142a4f91-79f7-4ce6-ab6f-4e8ef53cc961>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00048-ip-10-147-4-33.ec2.internal.warc.gz"}
Posted by Meg on Wednesday, May 25, 2011 at 4:48pm.

PLEASE HELP ME, I HAVE AN EXAM TOMORROW AND THIS IS THE ONLY QUESTION I HAVE. THE AREA OF A REGULAR HEXAGON IS 35 IN SQUARED. FIND THE LENGTH OF A SIDE. ROUND YOUR ANSWER TO THE NEAREST TENTH. I KNOW THE ANSWER IS 3.7 BUT I DON'T KNOW HOW TO GET IT!!

• GEOMETRY - Ms. Sue, Wednesday, May 25, 2011 at 4:59pm
This site explains how to find the area of a regular hexagon.

• GEOMETRY - Meg, Wednesday, May 25, 2011 at 5:15pm
IM GOING TO CRY!!!!!!!!!!!!!!!!

• GEOMETRY - Meg, Wednesday, May 25, 2011 at 5:21pm
But it doesn't say what the apothem or the perimeter is, so HOW do I find the length of A side????? The website didn't help at all.

• GEOMETRY - Reiny, Wednesday, May 25, 2011 at 6:04pm
The area is made up of 6 equilateral triangles.
Look at one of these. Call each side x.
Draw a perpendicular from a vertex to the base, call it h. Makes no difference which one, since all sides are the same.
We can find the height h by using Pythagoras:
((1/2)x)^2 + h^2 = x^2
(1/4)x^2 + h^2 = x^2
h^2 = x^2 - (1/4)x^2 = (3/4)x^2
h = √3x/2
area of one triangle = (1/2) × base × height = (1/2)(x)(√3x/2) = (√3/4)x^2
area of whole hexagon = 6(√3/4)x^2 = (3√3/2)x^2
but this equals 35:
(3√3/2)x^2 = 35
x^2 = 70/(3√3) = 13.4715
x = 3.67035, which they rounded off to 3.7
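[Added verification, not from the thread: Reiny's formula can be checked in a couple of lines of Python.]

    import math

    area = 35.0
    # a regular hexagon with side x has area (3*sqrt(3)/2) * x**2
    x = math.sqrt(area / (3 * math.sqrt(3) / 2))
    print(round(x, 1))  # 3.7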
{"url":"http://www.jiskha.com/display.cgi?id=1306356532","timestamp":"2014-04-16T17:21:13Z","content_type":null,"content_length":"9889","record_id":"<urn:uuid:cb3dd9d0-09fc-4ee6-84cb-67a25ed8693a>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00200-ip-10-147-4-33.ec2.internal.warc.gz"}
A Step Backward ?

Roger N. Clark:
> I'll assemble the questions and answers into one post in
> case other people want to go through the exercise.

Here it is:

Let's start from the beginning and work through things one step at a time. First rule: don't jump ahead to conclusions.

The first question (this is not a trick question): Given two cameras A and B: Camera B is a perfectly scaled up version of camera A. Thus, the sensor in camera B is twice as large as the sensor in camera A. Both cameras have the same number of pixels, such that the pixels in camera B are twice the linear size (4 times the area) of camera A. Each camera has an f/2 lens and exposes a scene with constant lighting at f/2. The lens on camera B is twice the focal length of camera A so that both cameras record the same field of view. Let's say that camera A collects 10,000 photons in each pixel in a 1/100 second exposure at f/2. How many photons are collected in each pixel in camera B in the same exposure time on that same scene with the same lighting at f/2?

Answer: 4 * 10,000 = 40,000 photons per pixel. The number of photons collected scales as the area of the pixel. The area of a pixel in camera B is 4 times that of camera A, so the answer is 4 * 10,000.

Now we have established that the two cameras have identical fields of view, just that one camera is twice the size of the other. Their spatial resolution on the subject is identical. Now let's say we are taking a picture of a flat wall so there are no depth of field issues. The images have the same pixel count, the same field of view, and the same spatial resolution, taken with the same exposure time and f/stop. Are the pictures from the two cameras the same?

Answer: No. They are the same except that camera B has collected 4 times the photons. The images from both cameras are absolutely identical in every respect except that the image from camera B, the larger camera, has a higher signal-to-noise ratio (2x higher).

An analogy to collecting photons in a camera with pixels is collecting rain drops in buckets, assuming the drops are falling randomly, which is probably the way it is. Larger buckets collect more rain drops. If you measure the number of drops collected from a bunch of buckets, you will find that the amount in each bucket is slightly different. The noise in the counted raindrops collected in any one container is the square root of the number of rain drops. This is called Poisson statistics (e.g. check wikipedia). So if you double the count (photons in a pixel or rain drops in a bucket), the noise in the count will go up by the square root of 2. For example, put out 10 buckets that on average collect 10,000 rain drops each, and you would find that the measured amount of water varies by about 1% from bucket to bucket (the standard deviation: square root(10,000)/10,000 = 0.01 = 1%). (10,000 rain drops is about 0.5 liter of water, by the way.)

So with Poisson statistics, which is the best that can be done measuring a signal based on random arrival times (e.g. of photons), the signal / noise = signal / square root(signal) = square root(signal). So in our camera test, collecting 4x the photons increases the signal-to-noise ratio by square root(4) = 2. Fortunately, most digital cameras have such noise characteristics except at near zero signal. This means that improving noise performance can only come through increasing the photon count.
That can be done 3 ways: increasing quantum efficiency (currently dcams are around 30%), increasing fill factor (most are probably already above 80%), or increasing the pixel size (e.g. the larger bucket collects more rain drops).

Next question: Assuming there is no change in aberrations if you change f/stop, what could be done to the above test images to make camera B produce an image that is completely identical to that from camera A?

A: Assume the subject is static; no movement. There are two answers; extra credit for giving both.
B: Assume the subject is not static; then there is one answer. What is it?

Answer A: We have 4x the photons, so the two answers are: 1) stop down 2 stops to decrease the light level 4x; 2) shorten the exposure time by a factor of 4. (OPTIONAL: Increase the ISO. While changing ISO changes the perceived image, it does not change the number of photons collected.)

Answer B: The one and only answer is to stop the lens down two stops. This reduces the photon count and also happens to make the depth of field the same as the smaller sensor camera, finally making the results from the two cameras identical (total photons per pixel as well as depth of field). Raising the ISO by 2 stops would bring the digitized signal to the same relative level as the small camera, but that could also be done in post processing (again, the photon count and signal-to-noise ratio would be the same). The ISO change would also make the metering the same as the small camera, so the metered shutter speeds would be identical too. In real cameras, boosting the ISO is a good step, as it reduces A/D quantization and reduces the read noise contribution to the signal.

So, what was the result of the exercise? In making the images from two different sized cameras identical in terms of resolution, angular coverage, exposure time, and signal-to-noise ratio, we find the final property: the depth of field is also identical.

I have added this discussion to: The Depth-of-Field Myth and Digital Cameras
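[Added illustration, not from the original post: the square-root law above is easy to check with a quick Monte Carlo in Python.]

    import numpy as np

    rng = np.random.default_rng(0)
    n_pixels = 100_000

    for mean_photons in (10_000, 40_000):   # camera A vs. camera B pixels
        counts = rng.poisson(mean_photons, n_pixels)
        print(mean_photons, counts.mean() / counts.std())
    # SNR comes out near sqrt(10000)=100 and sqrt(40000)=200:
    # 4x the photons doubles the signal-to-noise ratio.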
{"url":"http://www.velocityreviews.com/forums/t428535-p17-a-step-backward.html","timestamp":"2014-04-16T14:00:49Z","content_type":null,"content_length":"53218","record_id":"<urn:uuid:385610e2-5ca7-4853-959e-6c72c5d16796>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
Star Polygons

Alice's netball squad warms up by spacing themselves equally around a circle, facing inwards, and doing the following exercises. In the first exercise, each girl passes the ball to the first player on her left, starting and ending with Alice. In the second exercise, they pass to the second girl to their left; in the third exercise to the third girl to their left, and so on. Each exercise starts and ends with Alice. The angle of an exercise is the angle between the lines along which a player receives and passes on the ball.

Eg. With 8 players, throwing to the person on the left, the angle is 135 degrees.
With 8 players, throwing to the 2nd person on the left, the angle is 90 degrees.
With 8 players, throwing to the 3rd person on the left, the angle is 45 degrees.
With 8 players, throwing to the 4th person on the left, the angle is 0 degrees. (The throw is just between two people.)

So, the path which the ball follows for exercise 5 is the same as the path for exercise 3, but with the direction of the ball movement reversed (first left, then right). The same applies to the paths for exercises 2 and 6 and for exercises 1 and 7.

Q: Find, with proof, the angle of the fourth exercise (throwing the ball to the fourth person on the left) with 9 players.

Q: At one training session the coach suggests an exercise with an angle of 17.5 degrees. Alice protests that too many players would be needed. What is the smallest number of players required, and which exercise uses an angle of 17.5 degrees?

I see this Q as using the formula (n-2)180 = sum of angles inside a regular polygon. (n-2)180 degrees is the sum of the interior angles, so each angle is (n-2)180/n... The angles formed by drawing all diagonals of the polygon to one specific vertex (Alice) are all equal, because they are all inscribed angles cutting equal arcs. There are n-2 such angles at each vertex, and the sum of these is the interior angle .... But my way could be completely wrong; any thoughts would be appreciated.
360/n = 35 360 = 35*n n = 360/35 = 10.286 ---not an integer. That means, the arc subtended by the angle in question is not one equal arc. The arc subtended by the 17.5 degrees, which is 35 degrees, is a combination of more than one equal spaces between players. How many equal spaces can there be in a 35-degree arc? >>>Seven of 5-degree spaces. >>>Five of 7-degree spaces. 360/7 = 51.428 players ---cannot be. 360/5 = 72 players ---could be. Hence, there are 72 players,and each is separated by a 5-degree arc. The desired angle, 17.5 degrees, needs 35 degrees, and that is the total arc for 7 of 5-degree spaces. Umm, there is no way to throw the ball to two players and end up with 7 equal spaces left. 72 - 7 = 65 65/2 = 32.5 ---there is no player between the 32nd and 33rd players. If we multiply that 32.5 by 2, we get 65. ....that is it! Then we divide the 5-degree space by 2. We get 2.5 degrees per space. Then, 360/2.5 = 144 equal spaces = 144 players. Alice throws the ball to the 65th, then the 65th throws the ball to the, (65*2), 130th player. 144 -130 = 14 spaces left. 14*(2.5 degrees) = 35 degrees The inscribed angle subtended by this 35-degree arc is 35/2 = 17.5 degrees ---the desired angle for the exercise. Therefore, for this exercise, >>>144 is the smallest number of players. >>>throwing to the 65th player to the left is the pattern. Last edited by ticbol; May 21st 2005 at 10:29 AM. Thanks for your detailed solution! May 20th 2005, 01:52 PM #2 MHF Contributor Apr 2005 May 21st 2005, 09:45 PM #3
{"url":"http://mathhelpforum.com/algebra/246-star-polygons.html","timestamp":"2014-04-19T02:32:05Z","content_type":null,"content_length":"38734","record_id":"<urn:uuid:78dffd97-b614-4a68-9e41-1087c29a1f2c>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Calendar problem

What is the greatest total number of Mondays there could be in July and August of the same year? How can I show that it's 9?

Re: Calendar problem

Well, give it a shot: what happens if July 1st is a Monday?
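[Added note, not from the thread: July and August together span 62 = 8*7 + 6 days, so 9 Mondays is also the theoretical ceiling. The hint can be brute-forced in Python:]

    import calendar

    def mondays(year):
        return sum(1
                   for month in (7, 8)  # July and August
                   for week in calendar.monthcalendar(year, month)
                   if week[calendar.MONDAY] != 0)

    print(max(mondays(y) for y in range(2000, 2030)))  # 9, e.g. when July 1 is a Monday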
{"url":"http://mathhelpforum.com/algebra/199556-calendar-problem.html","timestamp":"2014-04-18T12:31:52Z","content_type":null,"content_length":"28979","record_id":"<urn:uuid:149b065b-798c-422b-a3d7-234b73f3a581>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00405-ip-10-147-4-33.ec2.internal.warc.gz"}
Page:Grundgleichungen (Minkowski).djvu/22

(V') $\mathfrak{e}'=\epsilon\mathfrak{E}',\ \mathfrak{M}'=\mu\mathfrak{m}',\ \mathfrak{s}'=\sigma\mathfrak{E}'$,

where $\epsilon,\ \mu,\ \sigma$ are the dielectric constant, magnetic permeability, and conductivity for the system x', y', z', t', i.e. in the space-time point x, y, z, t of matter.

Now let us return, by means of the reciprocal Lorentz-transformation, to the original variables x, y, z, t, and the magnitudes $\mathfrak{w},\varrho,\mathfrak{s,e,m,E,M}$, and the equations, which we then obtain from the last mentioned, will be the fundamental equations sought by us for the moving bodies.

Now from § 4, and § 6, it is to be seen that the equations A), as well as the equations B), are covariant for a Lorentz-transformation, i.e. the equations, which we obtain backwards from A') B'), must be exactly of the same form as the equations A) and B), as we take them for bodies at rest. We have therefore as the first result: —

The differential equations expressing the fundamental equations of electrodynamics for moving bodies, when written in $\varrho$ and the vectors $\mathfrak{s,\ e,\ m,\ E,\ M}$, are exactly of the same form as the equations for bodies at rest. The velocity of matter does not enter in these equations. In the vectorial way of writing, we have

$\begin{array}{rcrl} (I) & \qquad & curl\ \mathfrak{m}-\frac{\partial\mathfrak{e}}{\partial t} & =\mathfrak{s},\\ \\(II) & & div\ \mathfrak{e} & =\varrho,\\ \\(III) & & curl\ \mathfrak{E}+\frac{\partial\mathfrak{M}}{\partial t} & =0,\\ \\(IV) & & div\ \mathfrak{M} & =0\end{array}$

The velocity of matter occurs only in the auxiliary equations which characterise the influence of matter on the basis of their characteristic constants $\epsilon,\ \mu,\ \sigma$. Let us now transform these auxiliary equations into the original co-ordinates x, y, z, and t.

According to formula 15) in § 4, the component of $\mathfrak{e}'$ in the direction of the vector $\mathfrak{w}$ is the same as that of $\mathfrak{e}+[\mathfrak{wm}]$, the component of $\mathfrak{m}'$ is the same as that of $\mathfrak{m}-[\mathfrak{we}]$, but for the perpendicular direction $\mathfrak{\bar{w}}$, the components of $\mathfrak{e}'$ and $\mathfrak{m}'$
{"url":"http://en.wikisource.org/wiki/Page:Grundgleichungen_(Minkowski).djvu/22","timestamp":"2014-04-17T16:24:24Z","content_type":null,"content_length":"26203","record_id":"<urn:uuid:b6ed16ab-8f31-444c-a50b-b5b0a2466223>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Tutorial Model I've been working through Dorothy Bishop's tutorial, linked from this website here: or can be found in online form on her blog here: I found I was getting NaN std errors (see the comments at the bottom of the blog link above for the full discussion - I've paraphrased and extended the comments there in the question below). Dorothy Bishop kindly replied with some suggestions, but thought that I should ask on this forum. I think the model she's using is underdefined. I'll try to explain why below. First, The model there are recurrent bidirectional connections on S,W,B and P (e,f,g,h). (8 unknowns: a,b,c,d,e,f,g,h). The covariance matrix would have numbers in the following places: W S B P W x1 x2 0 0 S x2 x3 0 0 B 0 0 x4 x5 P 0 0 x5 x6 (10 observed values? x1,x2,x3,0,0,0,0,x4,x5,x6) The problem The NaN Std Deviations are caused by the inverted Hessian matrix (which is the covariance matrix) having negative values on the diagonal. I think this is generally due to something being underdefined in the design? In layman terms, I think it is a problem that the strength of the correlation between W and S can be modified by EITHER changing a or changing b? In some ways the model isn't defined well enough? Under or over defined? In the tutorial it is shown that there are 10 observed values (in the covariance matrix) and 8 unknown parameters, suggesting 2 dof. I'm a bit worried that the 0s in the covariance matrix don't help much - if that makes sense?... Thinking about this a bit more: Given that this model can be split into two smaller models, surely these should be possible to estimate too? But when I count up the DoF for just the V,W,S (the model looks like: V-(a)->W, V-(b)->S, V-(1)->V, W-(e)->W, S-(f)->S) This gives us 4 unknowns (a,b,e,f), and only three values in the covariance matrix (a^2+e, b^2+f and ab)... ...doesn't this mean the model is "secretly" underdefined? Because the model can be split into two submodels which are both underdefined, the combined model is also going to be underdefined. What do people think? Thanks! (just learning about SEM). Mike Smith University of Edinburgh
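[Added numerical check of the "secretly underidentified" claim, not from the thread: for the V->W, V->S submodel with Var(V)=1, two different parameter sets imply the same covariance matrix, so the likelihood surface has a flat ridge, the Hessian is singular, and NaN standard errors follow.]

    import numpy as np

    def implied_cov(a, b, e, f):
        # V -> W (loading a), V -> S (loading b), Var(V)=1, residual variances e, f
        return np.array([[a*a + e, a*b],
                         [a*b,     b*b + f]])

    sigma1 = implied_cov(a=0.8, b=0.5, e=0.36, f=0.75)
    sigma2 = implied_cov(a=0.4, b=1.0, e=0.84, f=0.00)
    print(np.allclose(sigma1, sigma2))  # True: two solutions, identical fit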
{"url":"http://openmx.psyc.virginia.edu/print/1594","timestamp":"2014-04-19T14:35:45Z","content_type":null,"content_length":"10986","record_id":"<urn:uuid:865a229a-d5ac-42e9-af54-88e908396a68>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00218-ip-10-147-4-33.ec2.internal.warc.gz"}
Does 0.9999(repeating) = 1?

So your point is based on assuming the axioms for a system are wrong. You get: if the axioms are "wrong", i.e. lead to a contradiction, you can prove anything true in the system.

An axiomatic approach to the reals is mostly a shortcut to avoid all the hassle of generating them from the set theory axioms, something you can do, completeness included.

So I thought about this last night, before I fell asleep. I'm probably wrong and don't understand base numbers the way I think I do, but I thought I'd throw it out there anyway. If we assume a base-1 numeral system, wouldn't X = (1/y) 1 be mathematically sound? The equation is solely meant to define 1 as a number, so it doesn't work with other numbers. But it does work with one.
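[For reference alongside the thread (the standard textbook argument, added editorially, not from any poster): within the usual axioms for the real numbers,

\[ 0.\overline{9} \;=\; \sum_{k=1}^{\infty} \frac{9}{10^{k}} \;=\; 9\cdot\frac{1/10}{1-1/10} \;=\; 1 \]

by the geometric series formula, so the equality holds with no contradiction among the axioms.]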
{"url":"http://forums.na.leagueoflegends.com/board/showthread.php?s=&t=3035994&page=103","timestamp":"2014-04-24T20:42:45Z","content_type":null,"content_length":"50410","record_id":"<urn:uuid:813bee47-2a22-4d53-a17b-bf6c2c740a8f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
The Symmetries of Things

Table of Contents

I  Symmetries of Finite Objects and Plane Repeating Patterns

1. Symmetries
   Rosette Patterns · Frieze Patterns · Repeating Patterns on the Plane and Sphere · Where Are We?
2. Planar Patterns
   Mirror Lines · Describing Kaleidoscopes · More Mirrors and Miracles · Wanderings and Wonder-Rings · The Four Fundamental Features! · Where Are We?
3. The Magic Theorem
   Everything Has Its Cost! · Finding the Signature of a Pattern · Just Symmetry Types · How the Signature Determines the Symmetry Type · Interlude: About Kaleidoscopes · Where Are We?
4. The Spherical Patterns
   The 14 Varieties of Spherical Pattern · The Existence Problem: Proving the Proviso · Group Theory and All the Spherical Symmetry Types · All the Spherical Types · Where Are We?
5. Frieze Patterns
   Where Are We?
6. Why the Magic Theorems Work
   Folding Up Our Surface · Maps on the Sphere: Euler's Theorem · Why char = ch · The Magic Theorem for Frieze Patterns · The Magic Theorem for Plane Patterns · Where Are We?
7. Euler's Map Theorem
   Proof of Euler's Theorem · The Euler Characteristic of a Surface · The Euler Characteristics of Familiar Surfaces · Where Are We?
8. Classification of Surfaces
   Caps, Crosscaps, Handles, and Cross-Handles · We Don't Need Cross-Handles · Two crosscaps make one handle · That's All, Folks! · Where Are We?
9. Orbifolds

II  Color Symmetry, Group Theory, and Tilings

10. Presenting Presentations
    Generators Corresponding to Features · The Geometry of the Generators · Where Are We?
11. Twofold Colorations
    Describing Twofold Symmetries · Classifying Twofold Plane Colorings · Complete List of Twofold Color Types · Duality Groups · Where Are We?
12. Threefold Colorings of Plane Patterns
    A Look at Threefold Colorings · Complete List for Plane Patterns · Where Are We?
13. Other Primefold Colorings
    Plane Patterns · The Remaining Primefold Types for Plane Patterns · The "Gaussian" Cases · The "Eisensteinian" Cases · Spherical Patterns and Frieze Patterns · Where Are We?
14. Searching for Relations
    On Left and Right · Justifying the Presentations · The Sufficiency of the Relations · The General Case · Alias and Alibi · Where Are We? · Answers to Exercises
15. Types of Tilings
    Heesch Types · Isohedral Types · Where Are We?
16. Abstract Groups
    Cyclic Groups, Direct Products, and Abelian Groups · Split and Non-split Extensions · Dihedral, Quaternionic, and QuasiDihedral Groups · Extraspecial and Special Groups · Groups of the Simplest Orders · The Group Number Function gnu(n) · The gnu-Hunting Conjecture: Hunting moas · Appendix: The Number of Groups to Order 2009

III  Repeating Patterns in Other Spaces

17. Introducing Hyperbolic Groups
    No Projection Is Perfect! · Analyzing Hyperbolic Patterns · What Do Negative Characteristics Mean? · Types of Coloring, Tiling, and Group Presentations · Where Are We?
18. More on Hyperbolic Groups
    Which Signatures Are Really the Same? · Inequivalence and Equivalence Theorems · Existence and Construction · Enumerating Hyperbolic Groups · Thurston's Geometrization Program · Appendix: Proof of the Inequivalence Theorem · Interlude: Two Drums That Sound the Same
19. Archimedean Tilings
    The Permutation Symbol · Relative versus Absolute · Enumerating the Tessellations · Archimedes Was Right! · The Hyperbolic Archimedean Tessellations · Examples and Exercises
20. Generalized Schläfli Symbols
    Flags and Flagstones · More Precise Definitions · More General Definitions · Interlude: Polygons and Polytopes
21. Naming Archimedean and Catalan Polyhedra and Tilings
    Truncation and "Kis"ing · Marriage and Children · Coxeter's Semi-Snub Operation · Euclidean Plane Tessellations · Additional Data · Architectonic and Catoptric Tessellations
22. The 35 "Prime" Space Groups
    The Three Lattices · Displaying the Groups · Translation Lattices and Point Groups · Catalogue of Plenary Groups · The Quarter Groups · Catalogue of Quarter Groups · Why This List Is Complete · Appendix: Generators and Relations
23. Objects with Prime Symmetry
    The Three Lattices · Voronoi Tilings of the Lattices · Salt, Diamond, and Bubbles · Infinite Platonic Polyhedra · Their Archimedean Relatives · Pseudo-Platonic Polyhedra · The Three Atomic Nets and Their Septa · Naming Points · Checkerstix and the Quarter Groups · Hexastix from Checkerstix · Tristakes, Hexastakes, and Tetrastakes · Understanding the Irish Bubbles · The Triamond Net and Hemistix · Further Remarks about Space Groups
24. Flat Universes
    Compact Platycosms · The Klein Bottle as a Universe · The Other Platycosms · Infinite Platycosms · Where Are We?
25. The 184 Composite Space Groups
    The Alias Problem · Examples and Exercises
26. Higher Still
    Four-Dimensional Point Groups · Regular Polytopes · Four-Dimensional Archimedean Polytopes · Regular Star-Polytopes · Groups Generated by Reflections · The Gosset Series · The Symmetries of Still Higher Things · Where Are We?

Other Notations for the Plane and Spherical Groups

Editorial Reviews

The book contains many new results. ... [and] is printed on glossy pages with a large number of beautiful full-colour illustrations, which can be enjoyed even by non-mathematicians. -- EMS Newsletter, June 2009

One of the most basic concepts of art [is] symmetry. The Symmetries of Things is a guide to this most basic concept, showing that even the most basic of things can be beautiful - and addresses why the simplest of patterns mesmerizes humankind and the psychological and mathematical importance of symmetry in one's everyday life. The Symmetries of Things is an intriguing book from first page to last, highly recommended to the many collections that should welcome it. -- The Midwest Book Review, June 2008

Conway, Burgiel, and Goodman-Strauss have written a wonderful book which can be appreciated on many levels. ... [M]athematicians and math-enthusiasts at a wide variety of levels will be able to learn some new mathematics. Even better, the exposition is lively and engaging, and the authors find interesting ways of telling you the things you already know in addition to the things you don't. -- Darren Glass, MAA Reviews, July 2008

This rich study of symmetrical things . . . prepares the mind for abstract group theory. It gets somewhere, it justifies the time invested with striking results, and it develops . . . phenomena that demand abstraction to yield their fuller meaning. . . . the fullest available exposition with many new results. -- D. V. Feldman, CHOICE Magazine, January 2009

This book is a plaything, an inexhaustible exercise in brain expansion for the reader, a work of art and a bold statement of what the culture of math can be like, all rolled into one. Like any masterpiece, The Symmetries of Things functions on a number of levels simultaneously. . . . It is imperative to get this book into the hands of as many young mathematicians as possible. And then to get it into everyone else's hands. -- Jaron Lanier, American Scientist, January 2009

You accompany the authors as they learn about the structures they so beautifully illustrate on over 400 glossy and full-colour pages.
Tacitly, you are given an education in the ways of thought and skills of way-finding in mathematics. . . . The style of writing is relaxed and playful . . . we see the fusing of the best aspects of textbooks—conciseness, flow, reader-independence—with the best bit of popular writing—accessibility, fun, beauty. -- Phil Wilson, Plus Magazine, February 2009

This book gives a refreshing and comprehensive account of the subject of symmetry—a subject that has fascinated humankind for centuries. . . . Overall, the book is a treasure trove, full of delights both old and new. Much of it should be accessible for anyone with an undergraduate-level background in mathematics, and is likely to stimulate further interest. -- Marston Conder, Mathematical Reviews, March 2009

Inspired by the geometric intuition of Bill Thurston and empowered by his own analytical skills, John Conway, together with his coauthors, has developed a comprehensive mathematical theory of symmetry that allows the description and classification of symmetries in numerous geometric environments. This richly and compellingly illustrated book addresses the phenomenological, analytical, and mathematical aspects of symmetry on three levels that build on one another and will speak to interested lay people, artists, working mathematicians, and researchers. -- L'Enseignement Mathematique, December 2009
{"url":"http://www.crcpress.com/product/isbn/9781568812205","timestamp":"2014-04-16T10:39:19Z","content_type":null,"content_length":"101959","record_id":"<urn:uuid:b325bf25-1588-4f11-85b5-53b17f79b906>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Godel on the foundations of nonstandard analysis Jon Barwise barwise at phil.indiana.edu Fri Nov 7 08:21:16 EST 1997 Moshe Machover writes: >I'd be interested to hear FOM contributors' views on the foundational >status of nonstandard analysis. This is a very interesting question, about which I had time to write. Let me instead point to an intriguing passage from Godel 1961, in Vol III of the collected works, page 377. "Indeed, mathematics has evolved into ever higher abstractions, away from matter and to ever greater clarity in its foundations (e.g. by [giving] an exact foundation of the infinitesimal calculus [and] the complex numbers)--thus, away from skepticism." Comment: One interesting feature of Robinson's foundation of the infinitesimal calculus is the lack of cateogoricity, which some find troubling. It might be taken as a sign that there is not a unique concept of infinitesimal, but rather competing conceptions, each shown to be consistent by the competing models. What makes this somewhat unsatisfactory, as compared with the natural number, the reals, or the complexes, is in the latter case we have some assurance that we are all talking about the same things. How is that for brief? Jon More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/1997-November/000192.html","timestamp":"2014-04-16T07:21:34Z","content_type":null,"content_length":"3609","record_id":"<urn:uuid:b9bb0a8a-52c2-401d-b182-b570ba801f7a>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Joint PGF of two non-independent random variables

Working through a question, I've obtained the following:

$G_{u}(z) = \frac{1}{2}(1+z^2)$
$G_{v}(z) = \frac{\frac{5}{2}(1+z)}{7-2z}$

I'm now asked to obtain the joint PGF of U and V. These two random variables are not independent; how would I go about finding the joint PGF in this case?

You cannot obtain joint distributions from marginals without knowing at least one conditional distribution or independence. That's the study of copulas.
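[Added two-point illustration of the reply's point, not from the thread: different joints can share the same marginals, so the marginal PGFs alone cannot determine the joint PGF.]

    import numpy as np

    # Two joint distributions of (U, V) on {0,1}^2 with identical marginals
    independent = np.array([[0.25, 0.25],
                            [0.25, 0.25]])
    comonotone  = np.array([[0.50, 0.00],
                            [0.00, 0.50]])

    def joint_pgf(p, z, w):
        # G(z, w) = E[z^U * w^V]
        return sum(p[u, v] * z**u * w**v for u in (0, 1) for v in (0, 1))

    print(joint_pgf(independent, 2.0, 3.0))  # 3.0
    print(joint_pgf(comonotone, 2.0, 3.0))   # 3.5 -- same marginal PGFs, different joint PGF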
{"url":"http://mathhelpforum.com/advanced-statistics/98791-joint-pgf-two-non-independent-random-variables.html","timestamp":"2014-04-18T21:44:28Z","content_type":null,"content_length":"33858","record_id":"<urn:uuid:fa804586-2984-4cc2-870b-e23e145a8ec8>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Graph
Posted: Dec 18, 2012 5:12 PM

In article <5d700382-9617-4a00-b88f-9881a0073257@googlegroups.com>, Sonia <slopezbardo@gmail.com> wrote:

> A planar graph, simple, G with no vertex of degree one or two has a vertex
> of degree 3, 4 or 5.
> Do you know how to show this, or in what book I can find something?

If you have been reading about planar graphs, you have probably seen that if a connected simple planar graph has v vertices and e edges, then e <= 3v - 6. Now suppose your statement is false, i.e. every vertex has degree at least 6. What follows?

Ken Pledger.
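[Spelling out the intended contradiction in the hint, added editorially: if every vertex had degree at least 6, then

\[ 2e \;=\; \sum_{v} \deg(v) \;\ge\; 6v \quad\Longrightarrow\quad e \;\ge\; 3v \;>\; 3v - 6, \]

contradicting e <= 3v - 6. So some vertex has degree at most 5, and since degrees 1 and 2 are excluded by hypothesis, it has degree 3, 4 or 5.]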
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2421354&messageID=7938890","timestamp":"2014-04-17T18:38:43Z","content_type":null,"content_length":"19868","record_id":"<urn:uuid:58a32354-397b-404e-a390-5b87866731bf>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Estimating remainder and error

Estimating what Ak = (1/k^3) sums to: there is an example of (1/k^2) with the integral test and deciding how far you are off. The answer to the 1/k^2 sum is (pi^2/6). I have no idea how to set this up. It's in the binomial series section of the sequence and series chapter. Thank you!

Try posting the actual question please.

That's all the prof said. I'm sorry if it's not clear, but it's not very clear to me either. He solved the 1/k^2 case using 1000 to set up the inequality.

I very much doubt that is what your Professor said! Is it possible he was talking about estimating an infinite sum?

Well, first he showed us how to do the squared case, and also, in Mathematica, he showed us the answer to the 1/k^2 sum come out to what I have stated above. So it might be the infinite sum; how do I go about solving it? Thanks in advance!
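[A guess at the intended setup, added editorially: the standard integral-test remainder bound, analogous to the 1/k^2 example the professor showed, is

\[ R_n \;=\; \sum_{k=n+1}^{\infty}\frac{1}{k^{3}} \;\le\; \int_{n}^{\infty}\frac{dx}{x^{3}} \;=\; \frac{1}{2n^{2}} . \]

For 1/k^2 the same method gives R_n <= 1/n, so n = 1000 terms guarantee an error below 10^-3 for pi^2/6 -- presumably where the professor's "1000" came from. For 1/k^3, n terms leave an error of at most 1/(2n^2).]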
{"url":"http://mathhelpforum.com/calculus/143545-estimating-remainder-error.html","timestamp":"2014-04-18T09:58:17Z","content_type":null,"content_length":"41037","record_id":"<urn:uuid:7ca30082-022e-4337-b3a4-76760e5ba509>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Outrageously hard equivalence relations

Verify it is an equivalence relation. For the set X={m,n,p,q,r,s}, let R be the relation on P(X) (power set of X) given by A R B iff A and B have the same number of elements. Now, list all elements in {m}/R; in {m,n,p,q,r}/R. How many elements are in X/R? How many in P(X)/R?

I know I need to show it is symmetric, transitive, and reflexive. I know how to do this when we have like equations and things, but this one is confusing for me. Please help. Thank you.

Equivalence Classes and Quotient Sets

Hello zhupolongjoe

Quote: [the original question, as above]

As you have written the question, some of it doesn't make sense. I'll explain why, but it will take a little time, so stay with me.

$P(X)$, the power set of $X$, is the set of all the subsets of $X$. So:

$P(X) = \{\emptyset, \{m\}, \{n\}, \dots, \{m,n\}, \{m, p\}, \dots, \{m,n,p,q,r,s\}\}$

Look carefully at the words above in italics: $P(X)$ is a set of ... subsets ...; so the elements of $P(X)$ are themselves sets - and I have listed a few of them above.

Now $R$ is a relation on $P(X)$ (not a relation on $X$) - in other words $R$ is a relation between two subsets of $X$, which is: subset $A$ is related to subset $B$ if and only if $A$ and $B$ have the same number of elements. So, for example:

□ $\{m,n\}\,R\,\{n,p\}$ because they both have 2 elements
□ $\{n,q,r\} \,R\, \{m,q,s\}$ because they both have 3 elements

... and so on.

Next, you need to understand the $/$ notation. If a relation $R$ is defined on a set $A$, then $A/R$ (called the quotient set of $A$ by $R$) is the set of all the equivalence classes into which $A$ is 'divided' (or partitioned) by $R$ (hence the use of the word 'quotient'). For example, if $A = \mathbb{Z}$ and $m\,R\,n \iff 2|(m - n)$ (in other words, $(m - n)$ is divisible by $2$), then $R$ partitions $\mathbb{Z}$ into two equivalence classes: {even integers} and {odd integers}, and these two sets are therefore the elements of $A/R$.

So, in the question you're asking, whereas it makes sense to talk about $P(X)/R$, it makes no sense to talk about $\{m\}/R$ or $X/R$, because $R$ is not defined on individual elements like $m$, but on sets.

OK. So let's look at the question that does make sense: How many elements are there in $P(X)/R$? This question means: when $P(X)$ is partitioned into equivalence classes by $R$, how many of these equivalence classes are there?

Well, for example, all the subsets of $X$ that contain one element - that's $\{m\}, \{n\}, \{p\}, \{q\}, \{r\}, \{s\}$ - are equivalent. So they form one equivalence class. (And that, I suspect, is what the question means when it says 'list all the elements in ${m}/R$', although that is incorrect use of the notation.)

All the subsets of $X$ containing 2 elements form another equivalence class; all the subsets containing 0 elements form another, and so on. Since you can form subsets containing 0, 1, 2, 3, 4, 5 or 6 elements, there are therefore 7 equivalence classes, and therefore 7 elements in $P(X)/R$.
The proof that $R$ is an equivalence relation is trivial. Any relation that contains the words '$A$ ... the same as $B$' usually is. If $A$ has the same number of elements as $B$, then it's obvious that $A\,R\,A$ (reflexive) and $A\,R\,B \iff B\,R\,A$ (symmetric) and $(A\,R\,B \wedge B\,R\,C) \Rightarrow A\,R\,C$ (transitive).

Ok, thank you. I don't know why the book would ask a question that doesn't make sense... I just put the question just as it appeared in the text. Maybe they want you to realize that the question is nonsensical? But thank you the same! I just have one further question:

Quote: "All the subsets of $X$ containing 2 elements form another equivalence class; all the subsets containing 0 elements form another, and so on. Since you can form subsets containing 0, 1, 2, 3, 4, 5 or 6 elements, there are therefore 7 equivalence classes, and therefore 7 elements in $P(X)/R$."

Why wouldn't it be 1+6+.....+6+1, i.e. add all the elements... like there is one subset with 0 elements, 6 subsets with 1 element... 6 subsets with 5 elements and so on...?

Equivalence Classes

Hello zhupolongjoe

Because this is the number of elements in $P(X)$ - and in fact it's equal to $2^6 = 64$. (Perhaps you can work out why?) On the other hand $P(X)/R$ is the set of equivalence classes, and there are 7 of these, into which the 64 elements of $P(X)$ are distributed by $R$. In the other illustration I gave you (the $2|(n-m)$ relation on $\mathbb{Z}$) there are just two equivalence classes: {odd integers} and {even integers}, whereas, of course, there are infinitely many integers in each one.

Got it now, thanks so much!
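[Added mechanical check of the counts above, not from the thread:]

    from itertools import combinations

    X = "mnpqrs"
    subsets = [frozenset(c) for r in range(len(X) + 1)
                            for c in combinations(X, r)]
    print(len(subsets))  # 64 = 2**6 elements of P(X)

    classes = {}
    for A in subsets:
        classes.setdefault(len(A), []).append(A)  # A R B iff len(A) == len(B)
    print(len(classes))                                # 7 equivalence classes
    print([len(classes[k]) for k in sorted(classes)])  # [1, 6, 15, 20, 15, 6, 1]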
{"url":"http://mathhelpforum.com/discrete-math/79570-outrageously-hard-equivalence-relations.html","timestamp":"2014-04-18T20:11:58Z","content_type":null,"content_length":"60503","record_id":"<urn:uuid:13b83e1f-f147-44d4-a40b-fd3f54d065b2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00528-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] N dimensional dichotomy optimization

Sebastian Walter sebastian.walter@gmail...
Tue Nov 23 07:47:10 CST 2010

On Tue, Nov 23, 2010 at 11:43 AM, Gael Varoquaux
<gael.varoquaux@normalesup.org> wrote:
> On Tue, Nov 23, 2010 at 11:37:02AM +0100, Sebastian Walter wrote:
>> >> min_x f(x)
>> >> s.t. lo <= Ax + b <= up
>> >> 0 = g(x)
>> >> 0 <= h(x)
>
>> > No constraints.
>
>> didn't you say that you operate only in some convex hull?
>
> No. I have an initial guess that allows me to specify a convex hull in
> which the minimum should probably lie, but it's not a constraint: nothing
> bad happens if I leave that convex hull.
>
>> > Either in R^n, in the set of integers (unidimensional), or in the set of
>> > positive integers.
>
>> According to http://openopt.org/Problems
>> this is a mixed integer nonlinear program http://openopt.org/MINLP .
>
> It is indeed the name I know for it; however, I have additional hypotheses
> (namely that f is roughly convex) which make it much easier.
>
>> I don't have experience with the solver though, but it may take a long
>> time to run it since it uses branch-and-bound.
>
> Yes, this is too brutal: this is for non convex optimization.
> Dichotomy seems well-suited for finding an optimum on the set of
> integers.
>
>> In my field of work we typically relax the integers to real numbers,
>> perform the optimization and then round to the next integer.
>> This is often sufficiently close to a good solution.
>
> This is pretty much what I am doing, but you have to be careful: if the
> algorithm does jumps that are smaller than 1, it gets a zero difference
> between those jumps. If you are not careful, this might confuse the
> algorithm a lot and trick it into not converging.

ah, that clears things up a lot.
Well, I don't know what the best method is to solve your problem, so
take the following with a grain of salt:
Wouldn't it be better to change the model than to modify the optimization
algorithm? It sounds as if the resulting objective function is
piecewise constant. AFAIK most optimization algorithms for continuous
problems require at least Lipschitz continuous functions to work
''acceptably well''. Not sure if this is also true for Nelder-Mead.

> Thanks for your advice,
> Gaël
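[Added sketch of the "relax, optimize, round" recipe discussed above, with an illustrative smooth objective of my own; as the thread notes, on a truly piecewise-constant objective Nelder-Mead can stall:]

    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # smooth surrogate for an objective that is really evaluated on integers
        return (x[0] - 3.4) ** 2 + (x[1] - 7.8) ** 2

    res = minimize(f, x0=np.zeros(2), method="Nelder-Mead")
    x_int = np.round(res.x).astype(int)  # relax to reals, then round
    print(res.x, x_int)                  # ~[3.4, 7.8] -> [3, 8]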
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2010-November/054064.html","timestamp":"2014-04-21T03:02:41Z","content_type":null,"content_length":"5889","record_id":"<urn:uuid:1e257d63-ab00-4331-9022-920634101b2a>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00661-ip-10-147-4-33.ec2.internal.warc.gz"}
The problem of finding the shortest distance of the figure

I calculated it using this formula: DH + HE + EF + FB.
I found the length of DH + FB by Pythagoras' theorem; the total length of DH + FB is 16.4 m.
∴ DH + HE + EF + FB = 16.4 + 19 + 9 m = 44.4 m.
But the correct answer is C. Is the shortest distance DH + HE + EF + FB (m)? Can anybody help me to solve this question? Thank you.

Re: The problem of finding the shortest distance of the figure

The shortest distance is DE + EB.
Use Pythagoras' theorem to find DE + EB = sqrt(625+36) + sqrt(225+36) = sqrt(661) + sqrt(261).
The answer is 41.9.

Re: The problem of finding the shortest distance of the figure

Generally speaking, the shortest distance between any two points is a straight line. If there is something blocking that straight line, then the shortest distance is the closest you can get to this blockage to go around it. So I agree with Hemvanezi: the shortest distance is DE + EB. Alternatively, you could do DG + GB.
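[Quick numerical check of the accepted answer, added editorially:]

    import math
    print(round(math.sqrt(661) + math.sqrt(261), 1))  # 41.9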
{"url":"http://mathhelpforum.com/new-users/201005-problem-finding-shortest-distance-figure.html","timestamp":"2014-04-18T12:39:38Z","content_type":null,"content_length":"38538","record_id":"<urn:uuid:169beba0-1efb-4039-a813-b4a4d1e1f83e>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
University of Tasmania, Australia Research topics in algebra include radical theory and ring theory; semigroups, varieties, e-varieties and pseudovarieties; coding and cryptography, finite automata, formal languages, data structures, rewriting systems and algorithms. The discipline of Mathematics offers a range of courses and research supervision in statistics, and pure and applied mathematics. Mathematics provides the language that underpins technology and describes all aspects of the natural world. Mathematics provides fundamental skills in problem solving, modelling and analysis. Physics is the fundamental science, the foundation upon which engineering and technology are built. This science is concerned with the farthest reaches of space and the tiny world of atoms and molecules. Physics provides a basis for an understanding of biology, chemistry and geology.
{"url":"http://www.utas.edu.au/maths-physics/","timestamp":"2014-04-20T13:39:48Z","content_type":null,"content_length":"16867","record_id":"<urn:uuid:c153d5ed-e0c3-4cf0-a515-2e763c4f86aa>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00240-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Scatterplot matrix

From    "Nick Cox" <n.j.cox@durham.ac.uk>
To      <statalist@hsphsun2.harvard.edu>
Subject st: RE: Scatterplot matrix
Date    Fri, 7 May 2004 12:48:05 +0100

. graph matrix ...

Actually, there's a wrinkle here which I think is worth spelling out. Presuming that your response variable is y and the others are covariates,

. graph matrix x1 x2 x3 x4 y

ensures that the bottom line of graphs all have y as the vertical axis variable and the various x* as horizontal axis variables. You may agree with me in preferring this arrangement to that produced by

. graph matrix y x1 x2 x3 x4

So mention the response _last_ (clearly, the opposite of what you must do with e.g. -regress-). When you are unsure of command names,

. search graph matrix

(e.g.) will point towards them.

TELHAJ {Shqiponje|Shqiponja} wrote:

> After a regress command with -robust- and -cluster- option I used a
> Ramsey test (-ovtest, rhs) which suggested that there are higher-order
> trends omitted from the regression model. I was trying to use a
> scatterplot matrix to look at higher-order trends.
> As I remember, in Stata 7 the command for this was:
> Graph y x1 x2 x3 x4, matrix
{"url":"http://www.stata.com/statalist/archive/2004-05/msg00194.html","timestamp":"2014-04-18T08:28:32Z","content_type":null,"content_length":"6002","record_id":"<urn:uuid:e1c8a4ba-649b-469d-b720-854912ddebf5>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
Andrew Mattei Gleason

"Have you ever thought of this?" is how Andrew Gleason often preceded the formulation of some idea, or some question, to his mathematical colleagues and students. Usually we had not (thought of the idea) and even if we had, we would not have expressed it in as clarifying, or as enticing, a manner as he did. The "this" could range quite broadly: ideas related to transformation groups and his famous solution to Hilbert's Fifth Problem, to measure theory, projective geometry, Hilbert Spaces, or to combinatorics, to graph theory, to coding theory, or—and this was also one of Gleason's many great loves—to the teaching and perfection of mathematical skills at any level (how to treat measuring when teaching first-graders; reforming the teaching of Calculus; and savoring the latest Putnam Competition exam questions). Quite a span.

Andrew Gleason's own early education had a significant geographical span. He graduated from high school in Yonkers, New York, having also taken courses in Berkeley, California. His undergraduate years at Yale were spent largely taking graduate level courses. When Andy graduated in 1942, he joined the U.S. Navy as a member of a group of 8–10 mathematicians working to crack enemy codes. In 1946 Gleason came to Harvard, having been elected as a Junior Fellow of the Society of Fellows. To be such a Fellow in those days meant that one could achieve an academic career without having a PhD. Andy never did pursue a doctoral degree; he spent his time in the Society of Fellows developing a broad mathematical culture and thinking about Hilbert's Fifth Problem. At the end of his three-year fellowship, Gleason was appointed Assistant Professor of Mathematics in the Harvard Department of Mathematics. Soon thereafter, he took a two-year leave of absence from Harvard to return to the U.S. Navy to serve during the Korean War (mid-1950 to mid-1953). After this, Gleason returned to Harvard where he spent the rest of his academic career. He was appointed Professor of Mathematics in 1957.

Gleason married Jean Berko in 1959. Jean Berko Gleason, a prominent psycholinguist, has had a distinguished academic career as Professor in the Department of Psychology at Boston University. The Gleasons have three daughters: Katherine, Pamela, and Cynthia.

Gleason was named the Hollis Professor of Mathematicks and Natural Philosophy in 1969 (the oldest endowed chair in the sciences in the United States). He became a Senior Fellow of the Harvard Society of Fellows in 1977 and was Chair of the Society of Fellows from 1989 to 1996. Gleason retired from Harvard University in 1992.

His mathematical conversations, his seminar discussions, his writing, and his lectures had qualities most cherished in a mathematician: he was comprehensible, clear and to the point; his formulations had a scintillating precision, and they were always delivered with enthusiasm and wide-eyed wonder. One of his colleagues once summed this up, saying: "When he touched a thing, he made it [...]"
In fact, I never heard Andy raise his voice, either in conversation or in a meeting. That’s not to say he wasn’t convincing—it was his vision of a small faculty, training the best graduate students and amplified by the energy of outstanding undergraduates, that defines the mathematics department we have at Harvard today. Gleason’s best known work is his resolution of Hilbert’s Fifth Problem. David Hilbert, slightly over a century ago, formulated two dozen problems that have, since then, represented celebrated milestones measuring mathematical progress. Many of the advances in Hilbert’s problems initiate whole new fields, new viewpoints. Those few mathematicians who have resolved one of these problems have been referred to as members of the Honors Class. Hilbert fashioned his “Fifth Problem” as a way of offering a general commodious context for the then new theory of Sophus Lie regarding transformation groups. Nowadays, Lie’s theory is the mainstay of much mathematics and physics, and his kind of groups, “Lie groups,” constitute an important feature of our basic scientific landscape. One can think of a transformation group as a collection of symmetries of a geometric space. Some spaces admit infinitely many symmetries: think of the circle, which can be rotated at any angle. The grand problem facing Sophus Lie is how to deal with these infinite groups of symmetries. Can one use the methods of Calculus effectively to treat the issues that arise in connection with these infinite transformation groups? At the International Congress of Mathematicians held in Cambridge in 1950, Andy proposed a possible method to arrive at an (affirmative!) answer to this question, in the context proposed by Hilbert. Andy emphasized the central role played by the one-parameter subgroups in the picture. The following year he proved a key result about maximal connected compact subgroups, and the year after that, using results of Montgomery & Zippin, and Yamabe, Andy clinched things, and showed that the answer to Hilbert’s question answer is “yes.” An extremely important advance. The depth of Andy’s work is extraordinary, as is its breadth: from his computer explorations very early in the history of machine computation (a search problem in the n-cube) to solving a conjecture of our late colleague George Mackey (about measures on the closed subspaces of a Hilbert space) to the intricacies of finite projective geometry and coding theory, to the relationship between complex analytic geometry and Banach algebras. Gleason was also one of the rare breed of mathematicians who did not stay on just one side of the Pure mathematics/ Applied mathematics “divide.” In fact, his work and attitude gave testimony to the tenet that there is no essential divide. Indeed, the ideas and mathematical interests that Andy nurtured in his applied work for the government, which was a passion for him throughout his lifetime, connects well with his public work on finite geometries, and his love for combinatorics. Andy’s interest in the training of mathematicians and in exposition and teaching in general, led him to edit, with co-authors, a compendium of three decades of William Lowell Putnam mathematical competition problems, to write a bold text formulating the foundations of analysis starting with a grand and lucid tour of logic and set theory, and also to engage in the important project of K-12 mathematical education, and to reform efforts in the teaching of Calculus. 
The founding idea behind the various mathematical education initiatives with which Andy was involved—either the programs for early mathematical education that were referred to (by both detractors and promoters) as New Math, or the programs for teaching Calculus (providing syllabi and texts that came to be referred to as the Harvard Consortium)—was to present mathematics concretely and intuitively, and to energize and empower the students and teachers. The essential mission of the Calculus Consortium was and is Andy’s credo that the ideas should be based in equal parts of geometry for visualization of the concepts, computation to ground it in the real world, and algebraic manipulation for power. This relates to Andy’s general view: that a working mathematician should have at his or her disposal a toolkit of basic techniques for analyzing any problem. He felt that all good problems in math—at any level—should weave together algebra, geometry, and analysis, and students must learn to draw on any of these tools, having them all “at the ready.” Andy emphasized this in the Mathematics Department’s discussion regarding the structure of the department’s comprehensive qualifying exam for graduate students. He also loved to think about exam problems that exhibited this unifying call upon different techniques; for example, he would work out the problems of the (undergraduate) Putnam Competition exam, year after year, just for fun. Andy had many honors. He received the Newcomb Cleveland Prize from the American Association for the Advancement of Science for his work on Hilbert’s Fifth Problem. He received the Yueh-Gin Gung and Dr. Charles Y. Hu Award for Distinguished Service to Mathematics—the Mathematical Association of America’s most prestigious award. He was president of the American Mathematical Society (1981–1982), a member of the National Academy of Sciences, the American Academy of Arts and Sciences, and the American Philosophical Society. Respectfully submitted, Benedict Gross David Mumford (Brown University) Barry Mazur, Chair
{"url":"http://news.harvard.edu/gazette/story/2010/04/andrew-mattei-gleason/","timestamp":"2014-04-16T04:12:24Z","content_type":null,"content_length":"74766","record_id":"<urn:uuid:565a8c21-eea0-4dd6-b1f7-d5ec13117d5a>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Theoretical Analysis
Next: Conversions between Representations Up: Analysis Previous: Analysis
The theoretical analysis begins with the simple operations such as converting between representations and averaging streams, and moves on to more complex operations and sequences of them. Features of particular interest include:
Lookahead: the number of digits of the input streams required to generate a specified number of output digits.
Branching: the number of stream operations required at each step of the algorithm. Some operations (eg. average) require an operation on the digits, the result of which is prefixed to the stream resulting from a single recursive call using the `cons' operation. Others (eg. multiplication) may make several stream operations for each digit generated.
Digit size: in many of the operations using dyadic digits, there is the possibility of the size of the representation of the digits swelling. It may be interesting to compare similar work by Heckmann [15] [14], which analyses the way the size required for the representation of reals using linear fractional transformations behaves during certain computations.
These points are of interest because they affect the performance of the algorithms and the amount of memory they require. An algorithm with high lookahead requires many input digits to generate output. If the input is a complex expression itself, computing even a few extra digits of this input may affect performance. Branching will also affect the performance: an algorithm which branches will slow down over time as the number of stream operations required accumulates, whereas an algorithm with no branching will continue generating new digits at a fairly constant rate (with respect to the rate at which input digits are generated). Dyadic digit swell will affect performance because the time required for primitive digit operations will dramatically increase.
Martin Escardo
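To make the digit-swell point concrete, here is a minimal Python sketch (my own illustration, not code from these notes, which assume a lazy functional setting) of the average operation on streams of dyadic digits, taking value(d) = sum over i of d_i * 2^-i: each output digit needs one digit from each input (lookahead one, a single 'cons' per step, no branching), and repeated averaging visibly doubles the denominators of the digits.

from fractions import Fraction
from itertools import islice

def const_stream(q):
    # all digits equal to q; the represented value is q * sum(2^-i) = q
    q = Fraction(q)
    while True:
        yield q

def average(xs, ys):
    # one digit read from each input per output digit: no branching,
    # and the exact dyadic arithmetic shows the digit-size swell
    for a, b in zip(xs, ys):
        yield (a + b) / 2

s = const_stream(1)
for _ in range(5):
    s = average(s, const_stream(0))   # halves the represented value each time
print(list(islice(s, 3)))             # digits are now 1/32: denominators have swollen to 2^5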
{"url":"http://www.dcs.ed.ac.uk/home/mhe/plume/node106.html","timestamp":"2014-04-17T07:13:54Z","content_type":null,"content_length":"6544","record_id":"<urn:uuid:8a2772be-c1b1-41fa-93ef-684e1d4b9bf0>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00645-ip-10-147-4-33.ec2.internal.warc.gz"}
Livermore, CO Math Tutor Find a Livermore, CO Math Tutor ...Learning about how trigonometry pervades into other areas of mathematics as well as how to use trigonometry to solve problems is essential. It is not just about how to compute, but to understand how these computations come into play in other areas. Looking up at the sky (at night) gives us a rare opportunity to see and understand the world beyond on normal experience. 47 Subjects: including SAT math, discrete math, electrical engineering, MATLAB ...Let me help you look at math and science differently, and learn techniques that will make a variety of subjects easier to understand. I earned a BS in Electrical Engineering and Computer Science at the University of California, Berkeley, then worked at a major computer company for 10 years befor... 13 Subjects: including geometry, precalculus, trigonometry, differential equations ...Whether solving for x or close-reading a sonnet, I'm here to help you practice the art of explaining yourself, connecting the dots and finding your own best work. Because you can do it.I played violin throughout junior high and high school, and majored in vocal performance for two years while at... 31 Subjects: including algebra 1, probability, TOEFL, grammar I am a Ph.D student in Electrical and Computer Engineering at Colorado State University (CSU) currently. Before coming to CSU, I got my bachelor's degree at a famous university in China that specially cultivates middle school and high school teachers. My major was mathematics, and I got a teacher's certificate in China. 11 Subjects: including geometry, discrete math, algebra 1, algebra 2 ...I eventually dropped my music major in order to spend my academic time focusing on business and economics, while continuing my own studies of music composition and instrumentation on my own. At the time of writing this, I am proficient in the notation of all orchestral instruments, excluding som... 35 Subjects: including prealgebra, economics, piano, SAT math Related Livermore, CO Tutors Livermore, CO Accounting Tutors Livermore, CO ACT Tutors Livermore, CO Algebra Tutors Livermore, CO Algebra 2 Tutors Livermore, CO Calculus Tutors Livermore, CO Geometry Tutors Livermore, CO Math Tutors Livermore, CO Prealgebra Tutors Livermore, CO Precalculus Tutors Livermore, CO SAT Tutors Livermore, CO SAT Math Tutors Livermore, CO Science Tutors Livermore, CO Statistics Tutors Livermore, CO Trigonometry Tutors
{"url":"http://www.purplemath.com/Livermore_CO_Math_tutors.php","timestamp":"2014-04-20T16:27:08Z","content_type":null,"content_length":"24029","record_id":"<urn:uuid:b81bc6cb-231e-4d44-921b-e093b6277704>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: All Lesson Plans on PowerPoint. Topic: creative opening question for a lesson
Subject: RE: creative opening question for a lesson
Author: Craig
Date: Jun 29 2006
I recently heard a talk about the purpose for mathematics after arithmetic (elementary school). The speaker claimed that "you need it for the next math course" is usually true, but doesn't spark much student interest. "People use this in lots of careers" is probably not true, especially with some topics like factoring polynomials. His answer, while probably not one that students relate to any more than preparation for the next math course, is that in the study of mathematics students learn reasoning, logic, and problem solving in ways they don't see in other parts of the secondary curriculum, and it is precisely these skills that make mathematics so valuable in the workplace and in academics. Most people don't use the fact that a water molecule consists of two hydrogen atoms and one oxygen atom (but most probably know it). Factoring is a form of molecular decomposition--it breaks down a polynomial into smaller, somewhat more manageable, pieces. This process (breaking down into smaller, more manageable pieces) is a crucial component in problem solving, whether in math, science, engineering, social science, even in the fine arts. Therefore, it is a skill worthy of attention in the curriculum. That said, students usually don't care about the long term "big picture." The response about the crazy math teacher, or the jailer who only releases prisoners who can factor, at least elicits groans from the students and allows the teacher to plow ahead. Factoring polynomials is the same as "eating your vegetables." You do it either because YOU realize it's good for you, or you do it because someone else who realizes it's good for you required you to.
{"url":"http://mathforum.org/mathtools/discuss.html?context=cell&do=r&msg=25112","timestamp":"2014-04-20T06:21:28Z","content_type":null,"content_length":"17430","record_id":"<urn:uuid:32b3c2f3-70c1-469c-aaef-3112a5fed35e>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00570-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
December 14th 2008, 05:21 PM #1 Dec 2008
Could anybody please work out the answer below? I will be very glad to hear from you. A relation R on the set X is to be tested for reflexivity. Design an algorithm to test the square matrix A for reflexivity of the relation R. The input should be the matrix A and its size, n.
December 14th 2008, 07:08 PM #2 Grand Panjandrum Nov 2005
Show us what you have already done.
December 14th 2008, 07:13 PM #3 Dec 2008
Actually, I am wondering how to write pseudocode to test the matrix for reflexivity.
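Since the entry A[i][i] encodes whether x_i R x_i, the relation R is reflexive exactly when every diagonal entry of A is 1, which gives an O(n) check. A minimal sketch in Python (any language works the same way; read it as pseudocode):

def is_reflexive(A, n):
    # A is an n-by-n 0/1 matrix for R on X = {x_1, ..., x_n};
    # R is reflexive iff x_i R x_i, i.e. A[i][i] == 1, for every i
    for i in range(n):
        if A[i][i] != 1:
            return False
    return True

print(is_reflexive([[1, 0], [1, 1]], 2))  # True
print(is_reflexive([[0, 1], [0, 1]], 2))  # False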
{"url":"http://mathhelpforum.com/discrete-math/64999-hello.html","timestamp":"2014-04-18T16:36:58Z","content_type":null,"content_length":"30271","record_id":"<urn:uuid:19791128-24e6-4bd2-a3d7-8f1e94693d0e>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00041-ip-10-147-4-33.ec2.internal.warc.gz"}
system of equations June 17th 2008, 09:27 AM #1 May 2008 system of equations hello everyone! I have some questions about solving a system of equations by the Gaussian and Gauss-Jordan elimination methods. 1) how do i determine which method should i use for each system of equations? 2) using any of the two methods, do i have to do it by hand or is there a quicker way to do it (for example using a calculator)? If there is,how can i use my calculator for this? my calculator is a TI-84 plus. 3) how do i solve a system of equations with only the x variable. The variables for one of my systems of equations are x1, x2,x3,x4? Do i treat it as a system with x,y and z? thank you so much hello everyone! I have some questions about solving a system of equations .... 2) ... or is there a quicker way to do it (for example using a calculator)? If there is,how can i use my calculator for this? my calculator is a TI-84 plus. 3) how do i solve a system of equations with only the x variable. The variables for one of my systems of equations are x1, x2,x3,x4? Do i treat it as a system with x,y and z? thank you so much I'm going to show you how you can use your calculator: 1. You have a system of simultaneous equations: $\left|\begin{array}{l}x_1+2x_2 = 3 \\ 4x_1+5x_2 = 6\end{array}\right.$ 2. Go to MATRIX --> EDIT --> choose a matrix name (in my example I've taken 2:[ B ]) --> ENTER 3. You'll get an input screen: First type in the numbers of rows and afterwards the number of columns. The cursor is placed automatically on the correct place in the matrix. 4. Type in the coefficients of the variables + ENTER: The cursor will move one place ahead. After the last value + ENTER you quit this menu. 5. Go to MATRIX --> MATH --> B: rref( (+ ENTER of course) 6. Goto MATRIX --> NAMES --> [ B ] (+ ENTER of course) 7. Don't forget the finishing bracket and (+ ENTER of course) 8. You'll get the result screen: In the first row there is only a 1 (indicating one $x_1$) and the corresponding value. So this row reads $x_1 = -1$ In the second row there is no $x_1$ but one $x_2$ and the corresponding value. So this row reads $x_2 = 2$ June 17th 2008, 11:06 AM #2
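If no calculator is at hand, the same reduced row echelon form can be produced by a short Gauss-Jordan program. Here is a rough Python sketch of my own (exact fractions, not tied to any calculator), run on the example system above. Note that it works for any number of unknowns, so a system in x1, x2, x3, x4 is handled exactly like one in x, y, z:

from fractions import Fraction

def rref(M):
    # Gauss-Jordan elimination to reduced row echelon form
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue                        # no pivot in this column
        M[r], M[pivot] = M[pivot], M[r]     # move pivot row into place
        M[r] = [x / M[r][c] for x in M[r]]  # scale to a leading 1
        for i in range(rows):
            if i != r and M[i][c] != 0:     # clear the rest of the column
                M[i] = [a - M[i][c] * b for a, b in zip(M[i], M[r])]
        r += 1
    return M

# augmented matrix of x1 + 2*x2 = 3, 4*x1 + 5*x2 = 6
M = [[Fraction(v) for v in row] for row in ([1, 2, 3], [4, 5, 6])]
for row in rref(M):
    print([str(x) for x in row])   # ['1', '0', '-1'] then ['0', '1', '2']: x1 = -1, x2 = 2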
{"url":"http://mathhelpforum.com/advanced-algebra/41801-system-equations.html","timestamp":"2014-04-17T05:53:21Z","content_type":null,"content_length":"36372","record_id":"<urn:uuid:b087fdc3-bee8-4488-bed3-85d7fa587986>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00586-ip-10-147-4-33.ec2.internal.warc.gz"}
ExpX? How is the math done? 12-09-2004, 11:05 PM ExpX? How is the math done? The following is what my output has to be. Well, minus the numbers obviously. I have to enter an integer for x, than n and than get an answer. Now, I'm really confused. How exactly is the math done? If I enter 5 for x and 6 for n, how do I get 26.041666? [phps-Computer:~] php% javac ExpX.java [phps-Computer:~] php% java ExpX enter an integer for x: 5 enter an integer for n: 6 E(5) using n =6 terms is: 26.041666666666668 [phps-Computer:~] php% java ExpX enter an integer for x: 34 enter an integer for n: 3 E(34) using n =3 terms is: 578.0 [phps-Computer:~] php% java ExpX enter an integer for x: 12 enter an integer for n: 9 E(12) using n =9 terms is: 10664.228571428572 [phps-Computer:~] php% java ExpX enter an integer for x: 5 enter an integer for n: -4 service not available [phps-Computer:~] php% java ExpX enter an integer for x: 3 enter an integer for n: 6 E(3) using n =6 terms is: 2.025 12-12-2004, 10:00 AM Definition of e Exponential function Here is what Java's exp method does: static double exp(double a) Returns Euler's number e raised to the power of a double value. Without the code it's impossible to say what yours is doing. My guess is your program is calculating 'E' using varying degrees of accuracy (with variable n). Normally you would expect them to raise 'x' to the E power. Where E is calculated to the 'n'th accuracy. When I tested it though with your examples, the results didn't match up. So, in short, without the code it's just guessing.
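For what it's worth, all four posted outputs match a program that prints the single n-th term of the Taylor series for e^x, namely x^(n-1)/(n-1)!, rather than a partial sum (and that rejects non-positive n). This is only a guess reconstructed from the numbers, not from the missing ExpX source; a quick check in Python:

from math import factorial

def exp_term(x, n):
    # hypothesis: ExpX prints the n-th Taylor term of e^x, i.e. x^(n-1)/(n-1)!
    if n < 1:
        return "service not available"   # the posted run rejects n = -4
    return x ** (n - 1) / factorial(n - 1)

for x, n in [(5, 6), (34, 3), (12, 9), (3, 6)]:
    print(exp_term(x, n))
# 26.041666666666668, 578.0, 10664.228571428572, 2.025 -- matching the thread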
{"url":"http://forums.devx.com/printthread.php?t=140815&pp=15&page=1","timestamp":"2014-04-21T08:48:25Z","content_type":null,"content_length":"6355","record_id":"<urn:uuid:4b188914-2a07-4fbf-92c0-e1acbddcb86d>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00610-ip-10-147-4-33.ec2.internal.warc.gz"}
Word Problem Database Challenge Problems 1. Three paper bags contain a total of 24 apples. The first and second bags contain a total of 11 apples. The second and third bags contain a total of 18 apples. How many apples are in the first and third bags together? 2. A worm has fallen into a hole that is 26 inches deep. It climbs up 8 inches and slides back 3 inches every day. How many days will it take the worm to reach the top? 3. Rita, Peter, and Skeeter are penguins. They weigh 90 pounds altogether. Rita and Peter know they weigh the same amount. Peter and Skeeter know they weigh 68 pounds together. How much does Skeeter weigh? 4. A new store opened up at the mall. During the first hour, 94 people came in and 27 people left. During the second hour, 18 people came in and 9 people left. How many people were still in the store after 2 hours? 5. Ben had some baseball cards. His friend, Tim, had 20 cards. After Ben bought 3 more cards, he had twice as many cards as Tim. How many cards did Ben have at first?
{"url":"http://www.mathplayground.com/wpdatabase/Addition_Subtraction_Challenge_7.htm","timestamp":"2014-04-19T11:58:02Z","content_type":null,"content_length":"50647","record_id":"<urn:uuid:e340bfd2-0f32-46bc-ada9-8f7dddaab0fd>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
STP1480: Prediction of the Shape of the K_J Ductile-to-Brittle Transition Temperature Curve for Ferritic Pressure Vessel Steels Using the Material's Resistance to Crack Extension K_J versus Δa Curve
Wardle, G. Technical Director, Warhelle Consulting Ltd, Lowton, Cheshire
Geary, W. Section Head (Metallurgy & Materials), Health and Safety Laboratory, Buxton
Pages: 10 Published: Jan 2007
Fracture toughness (K_J(cleavage)) measurements made within the ductile-to-brittle transition region for ferritic pressure vessel steels are not always described by the shape of the Master Curve currently given in ASTM Standard E 1921. The objective of this paper is to show how the shape of the transition toughness curve may be related to the shape of the material's resistance to crack extension curve (K_JR) once the crack tip starts to blunt and ductile crack extension precedes cleavage failure. Using an empirical relationship between the mean ductile crack extension (Δa) prior to the onset of cleavage failure and temperature, Δa = λ exp(ϕT), the relationship between K_J and T may simply be given by K_J = K_A + α[λ exp(ϕT)]^β, where K_A, α, and β are simply the coefficients of an offset power law fitted to K_J versus Δa data, K_JR = K_A + α(Δa)^β. For specimens of different sizes, a reference temperature T_K may be defined for a given reference level of Δa or equivalent K_J. Unique curves may be defined for materials with differing crack extension resistance curves through plots of K_J versus (T - T_Δa(ref)) or (T - T_K(ref)). The generalized form of the transition curve may be given by K_J = K_A + α{Δa(ref) exp[ϕ(T - T_Δa(ref))]}^β, or K_J = K_A + (K(ref) - K_A){exp[ϕ(T - T_K(ref))]}^β. Experimentally, ϕ has been estimated as approximately 0.08 for nuclear pressure vessel materials such as A533B/A508. Using a specific situation wherein K_A = 30 MPa√m, K(ref) = 100 MPa√m, and β = 0.25, then, for a B = 25-mm specimen, within experimental scatter this provides an almost exact match to the Master Curve given by K_J = 30 + 70 exp[0.019(T - T_K100)]. The predictions of transition curve shape using the material's resistance to crack extension curve are believed to be complementary to the Master Curve method, and may describe the shape of the transition curve when there is ductile crack extension prior to cleavage failure. Further work is recommended to investigate the relationship between the predicted ductile-to-brittle transition curve and the Master Curve for probabilities of cleavage failure post-initiation.
fracture toughness, transition curve shape, resistance curve, Master Curve
Paper ID: STP45530S
Committee/Subcommittee: E08.09
DOI: 10.1520/STP45530S
{"url":"http://www.astm.org/DIGITAL_LIBRARY/STP/PAGES/STP45530S.htm","timestamp":"2014-04-16T16:26:50Z","content_type":null,"content_length":"21437","record_id":"<urn:uuid:28bdb962-21ea-4984-8b81-fe2d2cb902a9>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Polar form of a linear equation? April 19th 2011, 02:10 PM #1 Senior Member Dec 2010 Polar form of a linear equation? Write the equation in rectangular form r= 5 sec (theta- 60 degrees) How would I do this usually I am used to r being on the "other side" of the equation. Rearrange this as r*cos(theta - 60) = 5 Now expand the cos(theta - 60) and that will give you two terms that involve x = r*cos(theta) and r*sin(theta). April 19th 2011, 02:40 PM #2
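Carrying the hint through, with $x = r\cos\theta$ and $y = r\sin\theta$ (a sketch of the remaining algebra):

$r\cos(\theta - 60^\circ) = 5 \;\Rightarrow\; r(\cos\theta\cos 60^\circ + \sin\theta\sin 60^\circ) = 5 \;\Rightarrow\; \tfrac{1}{2}x + \tfrac{\sqrt{3}}{2}y = 5,$

i.e. $x + \sqrt{3}\,y = 10$: a line, which is why the title calls it a linear equation.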
{"url":"http://mathhelpforum.com/pre-calculus/178108-polar-form-linear-equation.html","timestamp":"2014-04-21T09:02:07Z","content_type":null,"content_length":"33769","record_id":"<urn:uuid:21a0078a-3e76-4152-8aa5-d45eaea54f1c>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Questions on smoothness of Riemann metrics up vote 8 down vote favorite I've heard assertions of the sort: 1. Let there be a Riemann metric (not very smooth, say of class $C^1$ or $C^2$ or maybe $C$?) in a neighbourhood of a point on a manifold. Then it is possible to choose coordinates so that the metric is $C^\infty$ or even analytic in them. 2. In case of 3-dimensional manifolds it is possible to choose such coordinates globally, so the manifold becomes a smooth one. In the case of higher dimensions $n\ge4$ it is not true. Are those assertions true? I've heard them some time ago and not sure I remember all the details. Is it a well-known thing? Are there some detailed references? 2 For #1, see: DeTurck, Dennis M.; Kazdan, Jerry L. Some regularity theorems in Riemannian geometry. Ann. Sci. École Norm. Sup. (4) 14 (1981), no. 3, 249–260. You need assumptions on the curvature tensor (and its covariant derivatives) if you want higher regularity. I don't know what you mean by #2. Could you explain why the three-dimensional sphere has global co-ordinates? – Deane Yang Apr 20 '11 at 14:07 2 In fact, for #1, in the DeTurck-Kazdan paper you find a counterexample in the first paragraph. Note that in this case "changing coordinates" actually corresponds to changing atlas (as noted by Anton below). I wonder if for #1 you intend them to be Einstein manifolds? In which case the result is true using elliptic regularity. – Willie Wong Apr 20 '11 at 16:21 On #2: if you believe that the best coordinates one can use is the harmonic ones, then in fact on any compact, closed manifold, you will not be able to extend the coordinates globally... – Willie Wong Apr 20 '11 at 16:46 1 @Igor: you posted a link to the Georgia Tech proxy, which most of us cannot go through :-p. The link Igor meant to post is MR: ams.org/mathscinet-getitem?mr=MR2204038 article: dx.doi.org/10.1090/ S0002-9947-06-04090-6 – Willie Wong Apr 20 '11 at 17:34 1 A most recent discussion of these issues is the following paper [Taylor, Trans. Amer. Math. Soc. 358 (2006), 2415-2423] avaialble at ams.org/journals/tran/2006-358-06/S0002-9947-06-04090-6/… – Igor Belegradek Apr 20 '11 at 18:21 show 1 more comment 3 Answers active oldest votes 1. NO. Given a Riemannian manifold, it might be possible to improve smoothness by changing atlas. It is proved by Shefel, that the atlas with harmonic functions as coordinates is the best. But, the obtained metric might be worse than $C^\infty$. up vote 11 down vote 2. There is no local-global issue here, harmonic atlas is defined locally and it is the best one globally. So you get problems starting with dimension 2. Anton, what's the reference for Shefel? – Deane Yang Apr 20 '11 at 15:21 @Deane: Shefel, S. Z. --- 1979 and 1982. both in Russian the second one is translated. – Anton Petrunin Apr 20 '11 at 15:40 Anton, thanks. I'm still not sure which papers you're citing. Is it: "Smoothness of a conformal mapping of Riemannian spaces. (Russian) Sibirsk. Mat. Zh. 23 (1982), no. 1, 153–159, 222."? And how do his results compare to DeTurck and Kazdan? Did he prove the same results either earlier or independently? Or does he prove more? – Deane Yang Apr 20 '11 at 15:54 @Deane: looking at the MR, the results seems to be roughly comparable. The main lemma mentioned in the MR is equivalent to theorem 2.1 of DeTurck-Kazdan. 
And it looks like from the 1 review of '79 you get an a priori estimate from the connection coefficients back up to the metric: Theorem 2 controls the regularity of the conformal map by the regularity of the conformal factor, compare that to Theorem 3.4 of DeTurck-Kazdan. So I would guess "roughly the same result, slightly earlier, not communicated well to the 'west' for the obvious reasons." – Willie Wong Apr 20 '11 at 16:41 1 I will make sure to cite Shefel from now on. – Deane Yang Apr 21 '11 at 2:02 show 2 more comments I confirm the Anton's answer (No, and the phenomenon is essentially local), but I suggest another explanation which works for C^1 2-dimensional metrics. We will look for a counterexample in the class of metrics such that they are C^2 everywhere except for some line, where they are C^1. Then, it is possible and relatively easy to cook an example such that the curvature of the metric is discontinuous at this special line; you can do it in the class of confomally flat metrics such that the conformal coefficient depends on up vote 7 one variable only and the line is where this variable is a constant. down vote Since in order to determine the curvature of a metric you only need the distance function corresponding to this metric, and distance function does not depend on how smooth is your atlas, you can not make this metric smooth by the change of the atlas. add comment If you combine the work of Jost-Karcher on almost linear co-ordinates with the work of DeTurck-Kazdan and Shefel on harmonic co-ordinates (I recommend a paper of Stefan Peters on a proof of the Gromov convergence theorem), you get the following: If there exist local co-ordinates in which a Riemannian metric $g$ is $C^1$ and has bounded sectional curvature, then there exist local (harmonic) co-ordinates in which the metric is $C^{1, up vote 7 \alpha}$ for every $\alpha > 0$. If, in addition to this, the covariant derivatives of the Ricci tensor up to order $k$ are locally bounded, then there exist local harmonic co-ordinates in down vote which the metric is $C^{k+1,\alpha}$ for any $\alpha > 0$. If, in particular, the covariant derivatives of Ricci of all orders are bounded, then there exist local harmonic co-ordinates in which the metric is $C^\infty$. add comment Not the answer you're looking for? Browse other questions tagged riemannian-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/62393/questions-on-smoothness-of-riemann-metrics/62432","timestamp":"2014-04-19T17:18:08Z","content_type":null,"content_length":"72241","record_id":"<urn:uuid:af6a6b5b-25b4-4382-bdab-d497cedf62bb>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00353-ip-10-147-4-33.ec2.internal.warc.gz"}
Stockertown Township, PA Statistics Tutor Find a Stockertown Township, PA Statistics Tutor ...I have also taught it at the high school level. Algebra 2 is an extension of Algebra I. I've spent many years using it in my engineering career and I have also taught this at the high school 11 Subjects: including statistics, physics, probability, ACT Math ...I use many examples to illustrate various math concepts and I will review any math concepts multiple times to assist with student comprehension. My background includes a Master's degrees in Mathematics and Statistics, as well as Masters degrees in Computer Science and Electrical Engineering. I ... 12 Subjects: including statistics, calculus, geometry, algebra 1 ...My qualifications for tutoring precalculus are based on long experience and a perfect 800 score on the SAT subject test, level 2, without a graphing calculator. It is such a shame that English is not rigorously taught at most schools today. I can remember strict teachers drilling proper English... 23 Subjects: including statistics, English, calculus, algebra 1 ...Identify and solve Linear Inequalities. 7. Identify and solve Algebra Word Problems. 8. Identify and solve Polynomial Functions. 9. 27 Subjects: including statistics, calculus, geometry, algebra 1 ...My GPA is a 3.81 so all the math classes I've taken have went very well and I'd love to offer help to anyone who needs it! I'm experienced in all middle school and high school math classes, as well as offering prep help for the SAT, ACT, or Praxis exams. Upon request I can forward you any references, professional evaluations, and background checks that you would like to see. 14 Subjects: including statistics, geometry, precalculus, algebra 2 Related Stockertown Township, PA Tutors Stockertown Township, PA Accounting Tutors Stockertown Township, PA ACT Tutors Stockertown Township, PA Algebra Tutors Stockertown Township, PA Algebra 2 Tutors Stockertown Township, PA Calculus Tutors Stockertown Township, PA Geometry Tutors Stockertown Township, PA Math Tutors Stockertown Township, PA Prealgebra Tutors Stockertown Township, PA Precalculus Tutors Stockertown Township, PA SAT Tutors Stockertown Township, PA SAT Math Tutors Stockertown Township, PA Science Tutors Stockertown Township, PA Statistics Tutors Stockertown Township, PA Trigonometry Tutors
{"url":"http://www.purplemath.com/Stockertown_Township_PA_statistics_tutors.php","timestamp":"2014-04-17T19:17:04Z","content_type":null,"content_length":"24558","record_id":"<urn:uuid:b697d369-df08-4b05-87a0-351a26892df5>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Spectrum to RGB Conversion
In 1931, the International Commission on Illumination (CIE) defined three standard primaries, called X, Y and Z. The corresponding functions x̄(λ), ȳ(λ) and z̄(λ) are called color-matching functions. The X, Y and Z represent the weights of the respective color-matching functions needed to approximate a particular spectrum. To match a color with power distribution P(λ), the amounts of the primaries are given by the following formulae [1]:
X = k ∫ P(λ) x̄(λ) dλ,  Y = k ∫ P(λ) ȳ(λ) dλ,  Z = k ∫ P(λ) z̄(λ) dλ,
where k for self-luminous bodies, such as a CRT, is equal to 680 lumens per watt.
To transform from XYZ to RGB (with D65 white point), the matrix transform is used [3]:
[ R ]   [  3.240479 -1.537150 -0.498535 ]   [ X ]
[ G ] = [ -0.969256  1.875992  0.041556 ] * [ Y ]
[ B ]   [  0.055648 -0.204043  1.057311 ]   [ Z ].
The range for valid R, G, B values is [0,1]. Note that this matrix has negative coefficients, so some XYZ colors may transform to R, G, B values that are negative or greater than one. This means that not all visible colors can be produced using the RGB system.
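A minimal sketch of the XYZ-to-RGB step in Python (the matrix is the D65 one quoted above; clamping out-of-gamut components to [0,1] is an illustrative policy of this sketch, not something the page prescribes):

M = [
    [ 3.240479, -1.537150, -0.498535],
    [-0.969256,  1.875992,  0.041556],
    [ 0.055648, -0.204043,  1.057311],
]

def xyz_to_rgb(x, y, z, clamp=True):
    rgb = [row[0] * x + row[1] * y + row[2] * z for row in M]
    if clamp:
        # components < 0 or > 1 mean the color lies outside the RGB gamut
        rgb = [min(1.0, max(0.0, c)) for c in rgb]
    return rgb

print(xyz_to_rgb(0.9505, 1.0, 1.089))  # D65 white point -> approximately [1, 1, 1]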
{"url":"http://www.cs.rit.edu/~ncs/color/t_spectr.html","timestamp":"2014-04-21T01:58:51Z","content_type":null,"content_length":"2931","record_id":"<urn:uuid:92a8fac5-3e48-4bfb-af27-f284c8737197>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Powers Solution
2^N begins with 603245 iff 603246*10^m > 2^N >= 603245*10^m for some positive integer m ==> m+log(603246) > N*log(2) >= m+log(603245), with logs base 10 throughout; so 2^N begins with 603245 iff frac(log(603246)) > frac(N*log(2)) >= frac(log(603245)). If we are using natural density, then N*log(2) is uniformly distributed mod 1 since log(2) is irrational, hence the probability is frac(log(603246)) - frac(log(603245)) = frac(log(603246)-log(603245)) = frac(log(603246/603245)). A neat observation is that since it is known that p_n*c, where p_n is the nth prime and c is irrational, is uniformly distributed mod 1, we get the same answer if we replace 2^N with 2^{p_n}.
-- Chris Long, 265 Old York Rd., Bridgewater, NJ 08807-2618
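A quick numerical check of the final expression (my own, in Python; base-10 logs as in the argument above):

from math import log10

p = log10(603246 / 603245)   # frac() is a no-op here: the ratio is barely above 1
print(p)                     # about 7.199e-07, i.e. roughly one N in 1.4 million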
{"url":"http://rec-puzzles.org/index.php/Powers%20Solution","timestamp":"2014-04-20T03:10:36Z","content_type":null,"content_length":"6873","record_id":"<urn:uuid:f5871f42-83d2-4242-88f4-f97689fdfd1f>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Using figure handles outside of function in which they're defined Replies: 1 Last Post: Feb 7, 2013 12:37 PM
Posted: Feb 7, 2013 11:46 AM
Hi all, Here's what I'm attempting... 1) I define a figure handle 'h=plot(x,y)' in one function. 2) I declare 'h' as global 3) I recall this 'h' in a different function, trying to use it with 'copyobj(h,handles.axes1)' where axes1 is a different axes than where 'h' was originally plotted. I get an error with copyobj, that 'h' is an invalid handle. As a check, I tried using the copyobj command, similarly, but within the first function where 'h' was originally defined. And that works just fine. Am I to understand that I cannot use figure handles this way, declaring them as globals, between multiple functions? Is there a way around this? Thanks all!
Date Subject Author
2/7/13 Using figure handles outside of function in which they're defined Chaitanya
2/7/13 Re: Using figure handles outside of function in which they're defined dpb
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2433481","timestamp":"2014-04-21T13:28:16Z","content_type":null,"content_length":"17723","record_id":"<urn:uuid:dffa46c6-c805-4927-bb34-50f026dc9611>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00235-ip-10-147-4-33.ec2.internal.warc.gz"}
Conditional geometric distributions up vote 3 down vote favorite If $p<1$ and $X$ is a random variable distributed according to the geometric distribution $P(X = k) = p (1-p)^{k-1}$ for all $k\in \mathbb{N}$, then it is easy to show that $E(X) = \frac 1p$, $\ mathop{Var}(X)=\frac{1-p}{p^2}$ and $E(X^2) = \frac{2-p}{p^2}$. Now consider a "conditional" geometric distribution, defined as follows (if there is standard terminology for this, let me know and I'll call it that): 1. Fix a set $J\subset \mathbb{N}$ and a number $\mu>0$ (this will eventually be large). 2. Let $P(X=k) = C \gamma^k$ if $k\in J$ and $P(X=k)=0$ otherwise, where $C>0$ and $\gamma<1$ are chosen so that probabilities sum to $1$ and $E(X) = \mu$. I'm trying to understand how $E(X^2)$ (or equivalently, $\mathop{Var}(X)$) depends on $J$ and $\mu$. In the case where $J=\mathbb{N}$ the standard results show that $p=\frac 1\mu$ and so $E(X^2) = \ mu^2(2-\frac 1\mu)$. I'm interested in the case where $\mu$ becomes very large and would like to obtain a similar estimate $E(X^2) \approx A\mu^2$, for some constant $A>1$, in a more general setting. The example I'm working with at the moment is $J= \{2^n \mid n\in \mathbb{N}\}$, but ideally I'd like some conditions on the set $J$ that would guarantee an estimate of the above form. Is there a standard name for these distributions, or a reference where I can read more about them? Are estimates of this form known? Edit: As Brendan McKay pointed out below, this boils down to understanding the behaviour of the function $g(\gamma) = \sum_{j\in J} \gamma^j$, and in fact the issue that motivated the question I posed can be stated more directly in terms of this function. The condition $E(X) = \mu$ is equivalent to the equation $\mu = \gamma g'(\gamma) / g(\gamma)$, which determines $\gamma$ implicitly as a function of $\mu$. We would like to understand how $g(\gamma) $ grows as $\mu\to\infty$, and hence $\gamma\to 1$. (In particular, this means we're really interested in the case where $J$ is infinite.) In the case $J=\mathbb{N}$, one has $g(\gamma) = \frac\gamma{1-\gamma} = 1 - \frac 1{1-\gamma}$, and so $\mu = \gamma (\frac{\gamma}{(1-\gamma)^2}) (\frac{1-\gamma}\gamma) = \frac{\gamma}{1-\gamma}$, so that in fact $g(\gamma(\mu)) = \mu$ and the two quantities go to infinity together. In the more general case a reasonably simple argument shows that $\lim_{\mu\to\infty} g(\gamma(\mu)) = \infty$ provided $J$ is infinite, but it's not at all clear to me how the rate at which $g$ grows (in terms of $\mu$) depends on $J$ for more general sets. That's the original motivation -- after some messing around we decided that we could figure out the growth rate if we knew something about $E(X^2)$ as suggested above, and since it was phrased in terms of what seemed to be a reasonably natural probability distribution, we decided to ask it in that form. But now you have the whole story... add comment 3 Answers active oldest votes I'm not sure what you really want but here is a couple of simple minded inequalities that can serve as a baseline. Below $g=\sum_{k\in J}\gamma^k$, $M=\sum_{k\in J}k\gamma^k$, so $\mu=\frac Mg$. We'll need the counting function $F(n)=\#\{k\in G: k\le n\}$ of the set $J$. I will assume that $F$ is extended as a continuous increasing function to the set $[1,+\infty)$ and that $g\ge 1$. 1) For every $N$, we have the trivial estimate $g\le F(N)+\frac MN$. Taking $N=2\mu$, we get $g\le F(2\mu)+\frac g2$, i.e., $$ g\le 2F(2\mu) $$ 2) Let $\nu$ satisfy $F(\nu)=3g$. 
Since $g\ge F(\nu)\gamma^\nu$, we conclude that $\gamma^\nu\le \frac 13$ so $1-\gamma>\frac 1\nu$. Now, for every $N$, we have $$ M\le Ng+(N+\frac 1{1-\ gamma})\frac 1{1-\gamma}\gamma^N\le Ng+(N+\nu)\nu e^{-N/\nu}\. $$ Since we clearly have $\nu\ge F(\nu)=3g$, we can choose $N=\nu\log\frac\nu g\ge \nu$. For this choice, the second term up vote 3 on the right is at most $2Ng$, so, dividing by $g$ we get $\mu\le 3N$, i.e., $$ \mu\le 3F^{-1}(3g)\log\frac{F^{-1}(3g)}{g} $$ down vote accepted Examples of what these inequalities yield: 1) Dense set ($F(n)\approx n$). Then $g\approx\mu$ 2) Power lacunarity ($F(n)\approx n^p$, $0<p<1$). Then $g$ is between $\mu^p(\log\mu)^{-p}$ and $\mu^p$ up to a constant factor. 3) Geometric lacunarity ($F(n)\approx\log n$). Then $g\approx \log\mu$. As you see, one can lose a logarithm sometimes but the advantage is that I do not make any regularity assumptions here. Of course, if $F$ is regular enough, you can, probably, do a bit I like these estimates -- this is the sort of thing I was looking for. The truth is that I didn't have a particularly clear idea of exactly what I wanted when I asked the question, which is why it never really came out as clearly as I'd have liked. It came up in some work a colleague and I are doing, where we started by maximising the entropy of a probability distribution on $\mathbb{N}$ with a fixed mean -- which led to the geometric distribution -- and then wanted to consider the case where the support of the distribution was forced to lie in $J$. (ctd...) – Vaughn Climenhaga Nov 28 '11 at 5:45 (ctd...) In order to get the sorts of estimates we wanted for the application we had in mind, we thought we'd need some more detailed information about the relationship between $g$ and $\mu$ in terms of the structure of $J$, and so I asked this question in a rather vague and open-ended attempt to see what might be true. In the end we found another way to deal with the issue we were faced with, that doesn't require dealing with conditional geometric distributions, but I still find this question interesting for its own sake. – Vaughn Climenhaga Nov 28 '11 at 5:48 add comment Define $g(\gamma) = \sum_{j\in J} \gamma^j$. The condition $E(X^2)\sim A\mu^2$ as $\mu\to\infty$ seems to be equivalent to $$ \frac{g(\gamma) g''(\gamma)}{(g'(\gamma))^2} \to A $$ as $\ gamma\to 1$ from below. Alternatively define $h(x)=\sum_{j\in J} ~e^{-jx}$ and then you want $$ \frac{h(x)h''(x)}{(h'(x))^2} \to A$$ as $x\to 0$ from above. up vote 3 down vote Of course these are translations of the problem rather than solutions, but I mention them as someone will probably see what to do next. If $J$ is finite then this is obviously a closed-form solution. Otherwise the answer depend heavily on what form $J$ is in. e.g. if $J$ is not a decidable set then very few digits of $A$ should be computable. For any set whose generating function has a nice closed form, there will be a nice formula for $A$. It seems like the only fully general question left is whether $A$ is always defined. – Will Sawin Nov 18 '11 at 7:41 @Brendan: I seem to recall seeing one or two equations like this as we derived the question that I posed from the question that originally motivated it... I'll edit the original question to include the motivation as well. Certainly we'd be very happy to understand the limits you point out, and that would suffice... 
– Vaughn Climenhaga Nov 18 '11 at 18:05 @Will: We're mostly interested in what happens when $J$ is infinite, and in particular in quantifying the behaviour under some conditions on (say) the growth rate of the gaps in $J$, or something like that. (If $J$ has bounded gap size then this should be more or less comparable to the case $J=\mathbb{N}$.) – Vaughn Climenhaga Nov 18 '11 at 18:07 As a consequence, in the case of a $J$ obtained repeating a finite set $F\subset [0,n)$ with periodicity $n$, i.e. $J=F+n\mathbb{N}$, we have $g(x)=(1-x^n)^{-1}P(x)$ with $P(x):=\sum_ {k \in F} x^k$; so for $x\to1$, $g'(x)=nx^{n-1}(1-x^n)^{-2}P(x)+O((1-x)^{-1})$ and $g''(x)=n^2x^{2n-2}(1-x^n)^{-3}P(x)+O((1-x)^{-2})$, whence by Brendan's formula $g''g/(g')^2\to 2$ as $x\ to 1$. So every periodic $J$ has $A=2$. – Pietro Majer Nov 19 '11 at 21:16 add comment Can you estimate $C$ and $\mu$ etc...using the first term say $j=\min J$? up vote 0 It seems to me for instance that $1/C=\gamma^j+\gamma^{j_2}+\cdots\leq \sum_{k=j}^\infty \gamma^k=\gamma^j/(1-\gamma)$. down vote The issue is that we're really interested in an estimate from the other direction, of the form $1/C \geq$ some function of $\gamma$; in other words, we want a lower bound on how $E(X^2) $ grows in terms of $E(X)$, which is going to depend much more on the behaviour of the tail of $J$ then on the initial terms. – Vaughn Climenhaga Nov 18 '11 at 18:02 Thanks. There's a typo in our edit. It's $g(\gamma)=\frac{1}{1-\gamma}-1$ – Pietro Poggi-Corradini Nov 19 '11 at 1:41 add comment Not the answer you're looking for? Browse other questions tagged pr.probability or ask your own question.
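For anyone who wants to experiment with specific sets $J$: since $\mu(\gamma) = \gamma g'(\gamma)/g(\gamma) = \sum_{j\in J} j\gamma^j / \sum_{j\in J}\gamma^j$ is increasing in $\gamma$ (its derivative is a variance divided by $\gamma$), the implicit $\gamma(\mu)$ can be found by bisection. A rough Python sketch of my own, truncating the infinite $J = \{2^n\}$ to finitely many terms, so it is only trustworthy while $\mu$ stays well below the largest retained element:

def make_mu_g(J):
    def g(gamma):
        return sum(gamma ** j for j in J)
    def mu(gamma):
        return sum(j * gamma ** j for j in J) / g(gamma)
    return mu, g

J = [2 ** n for n in range(1, 21)]   # truncation of {2^n : n in N}
mu, g = make_mu_g(J)

def gamma_for(target_mu, lo=1e-9, hi=1 - 1e-12, iters=200):
    # mu(gamma) is increasing in gamma, so plain bisection suffices
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mu(mid) < target_mu:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

for m in (10, 100, 1000):
    gm = gamma_for(m)
    print(f"mu = {m}: gamma = {gm:.6f}, g(gamma) = {g(gm):.4f}")

Plotting $g(\gamma(\mu))$ against $\mu$ for a few such truncations gives a quick empirical read on the growth rate the question asks about.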
{"url":"http://mathoverflow.net/questions/81215/conditional-geometric-distributions","timestamp":"2014-04-20T06:18:50Z","content_type":null,"content_length":"70572","record_id":"<urn:uuid:32d2b512-d849-4e13-a343-9ecc29049ad3>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5983381 - Partitioning and reordering methods for static test sequence compaction of sequential circuits 1. Field of the Invention This invention relates to methods for compacting test sets for sequential circuits. More specifically, to partitioning and reordering methods for fast static test sequence compaction of sequential This application relates to U.S. application Ser. No. 09/001,543 filed on Dec. 31, 1997 entitled, "State Relaxation Based Subsequence Removal Method for Fast Static Compaction in Sequential Circuits," which is assigned to the Assignee of the present invention and which is incorporated herein by reference. 2. Background and Description of Related Art Since the cost of testing sequential circuits is directly proportional to the number of test vectors in a test set, short test sequences are desirable. Reduction in test set size can be achieved using static or dynamic test set compaction algorithms. Dynamic techniques such as those described in Chakradhar, S., et al., "Bottleneck Removal Algorithm for Dynamic Compaction and Test Cycle Removal," Proc. European Design Automation Conf., pp. 98-104, September 1995), Chakradhar, S., et al., "Bottleneck Removal Algorithm for Dynamic Compaction in Sequential Circuits," IEEE Trans. on Computer-Aided Design, (Accepted for Publication) 1997 and Niermann, T. M., et al., "Method for Automatically Generating Test Vectors for Digital Integrated Circuits," U.S. Pat. No. 5,377,197, 1994, perform compaction concurrently with the test generation process. These techniques often require modification of the test generator. Static compaction techniques, on the other hand, are employed after the test generation process. Obviously, static techniques are independent of the test generation algorithm and do not require modifications to the test generator. In addition, static compaction techniques can further reduce the size of test sets obtained after dynamic compaction. Several static compaction approaches for sequential circuits have been proposed in the following papers: Niermann, T. M., et al. "Test Compaction for Sequential Circuits," IEEE Trans. Computer-Aided Design, vol. 11, no. 2, pp. 260-67, February 1992, So, B., "Time-efficient Automatic Test Pattern Generation System," Ph.D. Thesis, EE Dept. Univ. of Wisconsin-Madison, 1994, Pomeranz, I., et al., "On Static Compaction of Test Sequences for Synchronous Sequential Circuits," Proc. Design Automation Conf., pp. 215-20, June 1996 and Hsiao, M. S. et al., "Fast Algorithms for Static Compaction of Sequential Circuit Test Vectors," Proc. IEEE VLSI Test Symp., pp. 188-195, April 1995. Some of these approaches (Niermann, Time Compaction, So) cannot reduce test sets produced by random or simulation-based test generators. Static compaction techniques based on vector insertion, omission, or selection have also been investigated (Pomeranz). These techniques require multiple fault simulation passes. If a vector is omitted or swapped, the fault simulator is invoked to make sure that the fault coverage is not affected. Vector restoration techniques, as described in Guo, R., et al., "Procedures for Static Compaction of Test Sequences for Synchronous Sequential Circuits Based on Vector Restoration," Technical Report Aug. 3, 1997, Electrical and Computer Engineering Department, University of Iowa, August 1997, aim to restore sufficient vectors necessary to detect all faults, starting with the harder faults. 
Fast static test set compaction based on removing recurrence subsequences that start and end on the same states has also been reported recently (Hsiao). However, these test sets are not as compact as those achieved by algorithms that use multiple fault simulation passes. The present invention included methods of compacting a sequential circuit test vector set by partitioning faults into hard and easy faults, re-ordering vectors in a test set and a combination of partitioning and re-ordering. FIG. 1 shows two typical fault coverage curves. The curve with a small dip, curve 2, is associated with test sets that are composed of random vectors followed by deterministic vectors that are generated by automatic test pattern generators (ATPG's). In either case, fault coverage increases rapidly for the first few test vectors and then eventually levels off. In this level region, a large number of vectors are required to detect very few additional faults. This observation is formalized by using two parameters x (x.sub.1 for curve 1, x.sub.2 for curve 2) and y. The first x % of the vectors detect y % of the faults. For example, it is possible that the first 10% of the vectors in the test set detect 90% of the faults. It should also be observed that faults detected during the quick rise of the fault coverage curve are also usually detected by vectors generated during the level region of the curve. Empirical observations lead to two questions: 1. Because the majority of the test set ((100-x) % of the vectors) is used to detect a few hard faults ((100-y) % of the detected faults), can the computational time be reduced by compacting the test set only with respect to the hard faults? 2. If the test set is re-ordered by placing vectors comprising the last w % of the test set to be at the beginning of the test set, how much of the y % easily detectable faults will still be detected by the re-ordered w % vectors? The first question suggests that fault list partitioning be performed for static compaction. The test set is compacted by only considering the hard faults. This substantially reduces the cost of fault simulation because only a few faults have to considered during multiple fault simulation passes. Also, computationally expensive static compaction techniques that have been proposed in the past can now be re-examined. This is because the cost of fault simulation can be greatly reduced by using fault list partitioning. With respect to the second question, re-ordering of test vectors for sequential circuits must be done carefully. This is because detection of a fault in a sequential circuit requires a specific sequence of vectors. Re-ordering is effective if vectors that detect hard faults also detect other faults. The basic method of this invention is to partition the test vector set into two subsequences and then perform re-ordering compaction on one of the two subsequences. The contribution of this invention is two fold. First, the computational cost for static test set compaction is substantially reduced by careful fault list and test set partitioning. Second, re-ordering of vectors is shown to be very effective in reducing the test set size. Significant compactions have been obtained very quickly for large ISCAS89 sequential benchmark circuits and several synthesized circuits. FIG. 1 is a graph of typical fault coverage curves. FIG. 2 is a diagram showing test set compaction using partitioning. FIGS. 
3(a) and (b) are diagrams showing selection of the partition at the middle and beginning of the test set. FIG. 4 is a diagram showing a re-ordered test set. FIG. 5 is a diagram showing a shorter re-ordered test set. FIG. 6 shows a test set algorithm via partitioning and re-ordering. FIG. 7 is a table showing compaction results for HITEC test sets. FIG. 8 is a table showing compaction results for STRATEGATE test sets. FIG. 9 is a table showing compaction results for production circuits. The embodiments will be described below with reference to the figures and the following examples. The preferred embodiment, which is a combination of partitioning and re-ordering, will be described after the individual methods are described. Given a test set T, a subsequence of the test set is represented as T[v.sub.i, v.sub.i+1, . . . , v.sub.j ], where v.sub.i and v.sub.j are the i.sup.th and j.sup.th vectors in the test set T, respectively. Furthermore, the set of faults detected by a subsequence T[v.sub.i, . . . , v.sub.j ] is denoted as F.sub.det [v.sub.i, . . . , v.sub.j ]. Consider a test set T with n vectors v.sub.1, . . . , v.sub.n. Assume that this test set detects f faults. If a static compaction algorithm requires m fault simulations, then the worst case time required for multiple fault simulation passes is proportional to m simulations. Test Set and Fault List Partitioning The process of compaction using partitioning is illustrated in FIG. 2. It begins by splitting the original test set T into two subsequences T[v.sub.i, . . . , v.sub.i ] and T[v.sub.i+1, . . . , v.sub.n ]. Let r be the ratio of total faults f and the set of faults detected by the subsequence T[v.sub.i+1, . . . , v.sub.n ]: ##EQU1## If the test set is compacted with respect to only faults in F.sub.det [v.sub.i+1, . . . , v.sub.n ] (Step 1 of FIG. 2), then the computational cost can be reduced to m significant savings in computational time can be achieved. For example, if only 10% of the total faults are detected by the second subsequence, then the time required for multiple fault simulation passes can be reduced by an order of magnitude. After compaction in Step 1, it is possible that the compacted test set T.sub.compact may not detect all target faults. This is because only a subset of faults were considered during compaction. A possible solution is to combine T.sub.compact and the first subsequence T[v.sub.1, . . . , v.sub.i ] (Step 2 of FIG. 2). One can always append T.sub.compact to the first sequence T[v.sub.1, . . . , v.sub.i ]. This ensures that all target faults f are detected. A second option is to append T[v.sub.1, . . . , v.sub.i ] at the end of the sequence T.sub.compact. This is a better combination because many of the faults detected in T[v.sub.1, . . . , v.sub.i ] may have already been detected by T.sub.compact. This can result in dropping of some or all of the vectors in subsequence T[v.sub.1, . . . , v.sub.i ]. Note that after fault list partitioning, static compaction of the test set T can be performed by any compaction algorithm. In fact, more expensive compaction algorithms may now be cost effective due to the lower cost of fault simulation. It is better to split the test set at a position closer to the beginning of the test set. Suppose the test set is split in half. Then, compaction of test set T has to be performed by considering only faults detected by the second half of the test set. However, after the first phase of compaction, the first half of the test set still has to be appended, as shown in FIG. 
3(a), resulting in a possibly less compact test set. On the other hand, if the test set had been partitioned at an earlier position, as shown in FIG. 3(b), then the portion of the original test set that has to be appended is much smaller. However, the cost of fault simulation during compaction can be higher because more faults may have to be simulated. Static compaction is performed with respect to only a fraction of the faults. Therefore, the computational cost of compaction procedures would be less than a method that considers all faults. Also, one would expect less compact test sets because only a subset of faults are considered for compaction. However, experiments show that both computational cost and the quality of compaction are benefited by fault list partitioning. Re-Ordering of Vectors Another valid question that stems from the shape of the fault coverage curve illustrated in FIG. 1 is whether sequences that detect hard faults can also detect many other, easier faults. In other words, if the sequence of vectors that detect hard faults is copied to the beginning of the test set, can some vectors in the modified test set be omitted? Obviously, it is desirable to compact the modified test set so that it is smaller than the original test set. Again, consider a test set T[v.sub.1, . . . , v.sub.n ] that detects f faults. If a new test sequence is created by copying the subsequence T[v.sub.k, . . . , v.sub.n ], 1≦k≦n, to be at the beginning of the original test set T (see FIG. 4), then the modified test sequence is Tnew[v.sub.k, . . . , v.sub.n, v.sub.1, . . . , v.sub.n ]. All target faults are still detectable because the original test set is a subset of the modified test sequence. There are (n-k+1)+n vectors in the modified test set. Clearly, at least n-k+1 vectors can be omitted from the modified test set without affecting the fault coverage. This is because the first n-k+1 vectors of the modified test set can always be dropped and the compact test set is same as the original test set T. However, it is possible that more than n-k+1 vectors can be dropped by omitting vectors at the end of the modified test set. For example, consider two faults f.sub.y and f.sub.z that are detected by the original test set T[v.sub.1, . . . , v.sub.n ]. Also, assume that faults f.sub.y and f.sub.z are detected after vectors v.sub.m and v.sub.n, respectively. Here, v.sub.n is the last vector in the test set and m&lt;n. Suppose there exists a k, m&lt;k&lt;n, such that subsequence T[v.sub.k, . . . , v.sub.n ] detects only one fault f.sub.z. FIG. 5 illustrates this scenario. Re-ordering T[v.sub.k, . . . , v.sub.n ] vectors yields the modified test set T[f.sub.k, . . . , v.sub.n, v.sub.i, . . . , v.sub.n ]. Clearly, the subsequence T[v.sub.m +1, . . . , v.sub.n ] at the end of the modified test set is unnecessary, since this subsequence contributes to the detection of only fault f.sub.z and this fault is already detected by the first few vectors of the modified test set. Therefore the modified test set can be compacted to be T[v.sub.k, . . . , v.sub.n, v.sub.1, . . . , v.sub.m ]. The size of the compact test set is (n-k+1)+m. Since k&gt;m, the new test set size can be less than the original test size n. For example, if k=m+3, then the compacted test set is two vectors smaller than the original test set. Computing the exact k for the last detected fault f.sub.z may be computationally expensive. Instead, an arbitrary k can be selected and the subsequence T[v.sub.k, . . . 
, v.sub.n ] can be re-ordered to be at the beginning of the test set. Fault simulation of the modified test sequence will determine whether some vectors can be removed. This process can be repeated until every vector in the original test set has been re-ordered. The size of the subsequence being re-ordered plays a significant role in determining the amount of compaction that is possible. For example, if large subsequences are re-ordered, then less compaction is achieved. When the subsequence size is small, then more compaction is achievable but at a higher computational cost for fault simulation. For instance, if the re-ordered subsequence consists of 5% of vectors in the test set, then it would take up to 20 passes to finish re-ordering of all vectors in the original test set. Also, this will require a maximum of 20 fault simulation passes. On the other hand, if the re-ordered subsequence consists of only 1% of the vectors then it would take up to 100 passes. Note that if a large number of vectors are omitted during the first few passes then the total number of passes required can be significantly less than the maximum. Combined Test Set Compaction Method The first step in the combined test set compaction method is to perform test set and fault-list partitioning. This involves splitting of the test set into two subsequences T[v.sub.1, . . . , v.sub.i ] and T[v.sub.i+1, . . . v.sub.n ]. Only faults detected by the second subsequence are considered for compaction. The specific value of i (1≦i&lt;n) has a significant impact on the execution time and quality of the resulting compacted test set. This value can be determined in several ways: 1. Choose a value for i such that the subsequence T[v.sub.i+1, . . . v.sub.n ] has a pre-determined number of vectors, or 2. Choose a value of i such that the subsequence T[v.sub.1, . . . v.sub.i ] detects a pre-determined number of faults, or 3. Choose a value for i based on both faults and pre-determined number of vectors. For example, one can partition a test set by 1) including 90% of the test set as part of the second subsequence, or 2) by including sufficient vectors in the first subsequence so that 80% of the faults are detected and the remaining vectors are included in the second subsequence. In either case, faults detected by the first subsequence are not considered during compaction of the original test set. There are advantages and disadvantages of each partitioning technique. If the test set is split based on a pre-determined number of vectors, then few faults may be detected by the first subsequence. This may require consideration of a large number of faults during compaction, and the savings in execution times may not be as great. On the other hand, if partitioning is performed by including sufficient number of vectors in the first subsequence so that a pre-determined percentage of faults are detected, then the first subsequence can have too many vectors. This can result in a less compact test set. One technique may be better than the other depending on the test set and the target fault list. Finding the optimal value of i may be as difficult as the compaction problem itself. In the present invention, the value of i is chosen based on a pre-determined percentage of faults that have to be detected by the first subsequence. The value of i can also be chosen using more elaborate methods. After partitioning has been performed, re-ordering is performed. 
Re-ordering a subsequence of vectors in a test set results in a new test set that is a concatenation of the subsequence and the original test set. The size of the subsequence to be re-ordered has a significant impact on the execution time and quality of the resulting compacted test set. For example, if a large number of vectors are re-ordered during each iteration, then the execution time will be small but the resultant test sets may be less compact. Selection of the number of vectors to re-order is also dependent on the test set and the target fault list. In general, coarse-grain (more vectors) re-ordering may be better for large test set sizes. This is because fine-grain re-ordering can require a large number of fault simulation passes, and the computing resources required can be prohibitive. However, fine-grain re-ordering can lead to good compaction. Therefore, a hybrid approach is developed. The test set is first quickly reduced using coarse-grain re-ordering. Then, fine-grain re-ordering is performed to further reduce the test set. Coarse-grain re-ordering is performed by considering a subsequence size of 5% of the test set. Then, fine-grain re-ordering is performed by considering a subsequence size of 1% of the test set. This two-step re-ordering has proven to be effective for many circuits.

The pseudo-code for vector re-ordering with partitioning is shown in FIG. 6. The method involves first picking a partitioning point. Next, coarse- and fine-grain re-ordering is performed with respect to only (100-Y)% of the partitioned faults. When the re-ordering is complete, the first partition of the vectors is appended. Fault simulation is again applied to remove any non-contributing vectors from the first partition. If no partitioning is desired, the partitioning and concatenation steps in the algorithm are skipped, and Y is set equal to 0%.

The static test set compaction algorithm was implemented in C by repetitively calling a commercial fault simulator via system calls. HITEC and STRATEGATE test sets generated for both ISCAS89 sequential benchmark circuits and several synthesized circuits were used to evaluate the effectiveness of the algorithms. HITEC is a state-of-the-art deterministic test generator. STRATEGATE is a simulation-based test generator, based on genetic algorithms, that generates test sets with very high fault coverages. All experiments were performed on a Sun UltraSPARC with 256 MB RAM. Note that due to the repetitive system calls to the commercial fault simulator, there is extra overhead in reading in the circuit and fault list and in setting up the data structures necessary for compaction.

Experimental Results

The compaction results are shown in the tables in FIGS. 7 and 8 for HITEC and STRATEGATE vectors, respectively. Both tables show the total number of faults, the number of vectors in the original test set, and the number of faults detected by the test set. For each test set, compact test sets are generated using two methods. One method compacts the test set by only considering re-ordering of vectors. Results for this experiment are shown in column No-partition. The number of vectors in the compact test set is shown in column Vec, the percentage reduction in test vectors as compared to the original test set is shown in column % R, the number of faults detected by the compact test set is shown in column Det, and the CPU seconds required are shown in column Time. The second method uses both partitioning and re-ordering. Results for this experiment are shown in column Partition.
The partitioning technique splits the test set by considering sufficient vectors in the first subsequence so that 80% of the faults are detected. Therefore, the second subsequence accounts for only 20% of the faults, and compaction of the entire test set is done with respect to only these faults. After the first phase of compaction, the first subsequence is appended to the compact test set. Vector re-ordering is based on a two-step process. First, subsequences that include 5% of the test set (coarse-grain re-ordering) are considered, followed by a second step that considers re-ordering of subsequences that include 1% of the test set (fine-grain re-ordering).

For most circuits, significant reductions in test set sizes were achieved both for vector re-ordering without partitioning and for vector re-ordering with partitioning. Fault coverages for the compacted test sets are always greater than or equal to the original fault coverages. For example, for circuit s35932, the compacted HITEC test set detected more faults. Since STRATEGATE vectors already provide high fault coverages, additional faults were not detected after compaction. On average, 41.1% and 35.1% reductions were obtained for HITEC vectors with and without partitioning, respectively, with a maximum test set reduction of 72.2% for circuit s35932. Similarly, average reductions of 48.9% and 46.4% were achieved for STRATEGATE vectors with and without partitioning, with a maximum test set reduction of 88.4% for s5378.

The tables in FIGS. 7 and 8 show that execution times are significantly lower with partitioning. For smaller circuits, partitioning reduces the execution time by about 50%. For the larger circuits, the execution time is reduced by a factor of 4.32 (compaction of HITEC vectors for circuit am2910). Ideally, by considering 20% of the faults during compaction, a 5-fold increase in speed can be expected. This is seen for larger circuits. Ideal measures are difficult to obtain due to fixed overheads from logic-simulation costs. For instance, if logic simulation constitutes 40% of the total simulation cost (this cost includes logic and fault simulation costs), then the best speed-up we can achieve is 1/(0.4+0.6/5), or about 1.92. Thus, for smaller circuits a greater fraction of the simulation cost goes to logic simulation because there are fewer faults. On average, partitioning accelerated the compaction process by 2.90 times for HITEC vectors and by 3.14 times for STRATEGATE vectors.

The size of compact test sets derived using partitioning is comparable to the size of compact test sets derived without using partitioning. For example, differences between compacted STRATEGATE test set sizes for the partitioning and non-partitioning techniques are less significant. However, there are cases where marginally smaller compact test sets are achieved in the no-partitioning case. This seems to happen when the number of vectors necessary to detect 80% of the faults is large. However, compaction with partitioning appears to result in marginally better compact test sets when the number of vectors in the first subsequence of the split test set is small. Examination of HITEC and STRATEGATE test sets reveals that a smaller percentage of STRATEGATE vectors is necessary to detect 80% of the detected faults; conversely, a larger percentage of HITEC vectors is required to detect 80% of the faults. For HITEC test sets, compaction without partitioning resulted in marginally better test sets than those with partitioning.
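The speed-up bound quoted above is simply Amdahl's law applied to the fixed logic-simulation fraction; a small helper reproduces the arithmetic (illustrative only, not from the patent):

def max_speedup(logic_frac=0.4, fault_sim_speedup=5.0):
    """Best overall speed-up when logic-simulation cost is fixed and
    fault simulation is accelerated by the given factor."""
    return 1.0 / (logic_frac + (1.0 - logic_frac) / fault_sim_speedup)

# max_speedup()         -> ~1.92  (40% logic simulation, 5x on fault simulation)
# max_speedup(0.1, 5.0) -> ~3.57  (larger circuits: logic simulation matters less)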
One can always partition at a lower fault coverage (e.g., at 60% or 70%) to reduce the number of vectors in the first subsequence, but this can increase the execution times for compaction with partitioning. There are always exceptions. For instance, in the HITEC test set for s444, compaction with partitioning achieves a significantly more compact test set in much shorter execution time. This is because the first 80% of the detected faults are also detected by vectors in the second subsequence that detects the hard faults (the remaining 20% of faults).

The static compaction method was also applied to a few large production circuits. These circuits have several non-Boolean primitives, such as tristate buffers, bidirectional buffers, and buses. In addition, they have set/reset flip-flops and multiple clocks. Original test sets for these circuits were derived using a commercial test generator. Compaction results are shown in the table in FIG. 9. For these circuits, it was not possible to run experiments without partitioning because the run times were prohibitively long. Therefore, compaction with partitioning had to be used to cut down the execution times. Significant reductions in test set sizes have been achieved as compared to the original test sets. Furthermore, fault coverages obtained by the compacted test sets were often higher than those of the original test sets.

In terms of reductions in test set sizes among the various static test set compaction techniques, vector-omission-based compaction as described in Pomeranz generally outperforms other compaction approaches. However, such a technique may not be applicable to large circuits or large test sets. Compaction based on recurrence subsequence removal is very fast, but it produces less compact test sets. In addition, a slight drop in fault coverage may sometimes result after compaction due to the optimistic assumptions made in the subsequence removal algorithm. For circuits in which only a few recurrence subsequences exist, such as s1196, s1238, s1423, s5378, and am2910, recurrence subsequence removal becomes ineffective. Finally, both vector restoration as described in Guo and the present invention produce compact test sets at lower cost than the Pomeranz method. These static compaction approaches are not constrained by a lack of recurrence subsequences, and significant reductions in test set sizes were achieved for these circuits.

One significant feature of the present invention is that the test-set and fault-list partitioning strategy can be applied to any of the previously proposed vector-omission (Pomeranz) and vector-restoration (Guo) methods to further reduce execution times. This is an important feature that distinguishes this method from the previously proposed methods. Overall, this method is practical for large designs.

The present invention embodies a new static test set compaction framework using fault-list and test-set partitioning and vector re-ordering. Significant reductions in test set sizes have been obtained using the invention. Furthermore, the partitioning techniques greatly accelerate the compaction process. Although not attempted here, the partitioning and re-ordering techniques can easily be used to accelerate other known static compaction algorithms. These techniques can significantly reduce the CPU seconds required for compaction without compromising the quality of compaction. Compaction algorithms based on extensive fault simulation can particularly benefit from the partitioning technique.
Experiments show that the proposed compaction technique is viable for large circuits or large test sets. While the above is a description of the invention in its preferred embodiments, various modifications and equivalents may be employed. Therefore, the above description and illustration should not be taken as limiting the scope of the invention, which is defined by the claims.
{"url":"http://www.google.com/patents/US5983381?dq=%22Meaning-based+information+organization+and+retrieval%22","timestamp":"2014-04-19T13:18:32Z","content_type":null,"content_length":"88538","record_id":"<urn:uuid:4714b1e0-9991-460a-b1be-e58dfa42807f>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
"atanas " <atanaslove2000@abv.bg> wrote in message <hqvsda$m6d$1@fred.mathworks.com>... > Hello, > I have problem for my project: > Let we matrices C0,C1,C2,C3,D0,D1,D2, and D3 are with size 2x2. > We knowing matrices C0,C1,C2,and C3 that > [C0 C1 C2 C3]*[C0 C1 C2 C3]'=I > We construct the matrix > W=[C0 C1 C2 C3 zeros(2,2) zeros(2,2); > zeros(2,2) zeros(2,2) C0 C1 C2 C3; > D0 D1 D2 D3 zeros(2,2) zeros(2,2); > zeros(2,2) zeros(2,2) D0 D1 D2 D3]. > How to find matrix D0,D1,D2, and D3 so that satisfy condition W*W'=I? Atanas, this thread is stretching out too long, so in spite of what I said earlier, I have decided to give you my version of a matlab code that would solve your problem. I hope you will find it useful. I leave it up to you to discover the logic behind the algorithm. Remember, it is very important that any set of C0, C1, C2, C3 you create yourself instead of using part 1 should pass the test that is given there with just as much accuracy as in part 1, namely with errors only out in the 14th or 15th decimal place. Otherwise the procedure in part 2 will not be accurate. It depends on the C's being correctly chosen. If you recall, I mentioned earlier that there is one degree of freedom in the choice of the D's, for any given set of C's. In the code below in the line y = null([y1,y2])'; matlab's 'null' function must choose two normal mutually orthogonal four-element vectors which are orthogonal to y1 and y2, so it must choose them in a two-dimensional subspace. However, the two vectors could be at any rotated orientation in this subspace and this is where the one degree of freedom for the D's comes in. Roger Stafford % Part 1 - Random generation of C's x = orth(randn(4))'; y = orth(randn(4,2))'; C0 = y(:,1:2)*x(1:2,1:2); C1 = y(:,1:2)*x(1:2,3:4); C2 = y(:,3:4)*x(3:4,1:2); C3 = y(:,3:4)*x(3:4,3:4); % Check the C's Z2 = zeros(2); V = [C0,C1,C2,C3,Z2,Z2; format long clear V Z2 x y % Part 2 - Given the proper C's, create D's x1 = orth([C0,C1]')'; x2 = orth([C2,C3]')'; y1 = [C0,C1]/x1; y2 = [C2,C3]/x2; y = null([y1,y2])'; D0 = y(:,1:2)*x1(:,1:2); D1 = y(:,1:2)*x1(:,3:4); D2 = y(:,3:4)*x2(:,1:2); D3 = y(:,3:4)*x2(:,3:4); % Check the D's Z2 = zeros(2); W = [C0,C1,C2,C3,Z2,Z2; format long clear W Z2 x1 x2 y y1 y2
{"url":"http://www.mathworks.com/matlabcentral/newsreader/view_thread/280366","timestamp":"2014-04-21T03:21:24Z","content_type":null,"content_length":"100861","record_id":"<urn:uuid:deab3114-05f5-4e4a-9486-8f560b4ef2c4>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Degree Name
PhD (Doctor of Philosophy)

The principal objects of study in this thesis are the noncommutative Hardy algebras introduced by Muhly and Solel in 2004, also called simply "Hardy algebras," and their quotients by ultraweakly closed ideals. The Hardy algebras form a class of nonselfadjoint dual operator algebras that generalize the classical Hardy algebra, the noncommutative analytic Toeplitz algebras introduced by Popescu in 1991, and other classes of operator algebras studied in the literature. It is known that a quotient of a noncommutative analytic Toeplitz algebra by a weakly closed ideal can be represented completely isometrically as the compression of the algebra to the complement of the range of the ideal, but there is no known general extension of this result to Hardy algebras. An analogous problem on representing quotients of Hardy algebras as compressions of images of induced representations is considered in Chapter 2. Using Muhly and Solel's generalization of Beurling's theorem together with factorizations of weakly continuous linear functionals on infinite multiplicity operator spaces, it is shown that compressing onto the complement of the range of an ultraweakly closed ideal in the space of an infinite multiplicity induced representation yields a completely isometric isomorphism of the quotient. A generalization of Pick's interpolation theorem for elements of Hardy algebras evaluated on their spaces of representations was proved by Muhly and Solel. In Chapter 3, a general theory of reproducing kernel W*-correspondences and their multipliers is developed, generalizing much of the classical theory of reproducing kernel Hilbert space. As an application, it is shown using the generalization of Pick's theorem that the function space representation of a Hardy algebra is isometrically isomorphic (with its quotient norm) to the multiplier algebra of a reproducing kernel W*-correspondence constructed from a generalization of the Szegő kernel on the unit disk. In Chapter 4, properties of polynomial approximation and analyticity of these functions are studied, with special attention given to the noncommutative analytic Toeplitz algebras. In the final chapter, the canonical curvatures for a class of Hermitian holomorphic vector bundles associated with a C*-correspondence are computed. The Hermitian metrics are closely related to the generalized Szegő kernels, and when specialized to the disk, the bundle is the Cowen-Douglas bundle associated with the backward shift operator.

Copyright 2010 Jonas R Meyer
{"url":"http://ir.uiowa.edu/etd/712/","timestamp":"2014-04-20T16:20:52Z","content_type":null,"content_length":"23630","record_id":"<urn:uuid:b4f858e7-5761-4497-a853-c0a097608ee3>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Courses Taught
Mat 117 - Mathematical Concepts
Mat 140 - College Algebra
Mat 145 - Trigonometry
Mat 150 - Precalculus
Mat 220 - Business Calculus
Mat 250 - Calculus I
Mat 308 - Calculus II
Mat 411 - Differential Equations
Mat 442 - Intro Numerical Analysis
Mat 542 - Numerical Analysis

Math Videos
The Zero Matrix
Cheri Advertisement for "The Zero Matrix"
Cheri Advertisement with departmental group photo
Cinemark Trailer
The Zero Identity
The AWM Soap Bubble Outreach project
The Euclidean Math Club

Approximation Theory, Splines, Wavelet Analysis

Roach, D. W., "Frequency selective parameterized wavelets of length ten", Journal of Concrete and Applicable Mathematics, vol. 8-1, January 2010.
Roach, D. W., "The Parameterization of the length Eight Orthogonal Wavelets with No Parameter Constraints", Approximation Theory XII: San Antonio 2007, M. Neamtu and L. Schumaker (eds.), pp. 332-347, 2008.
Roach, D. W., and R. Robinson, "Knot removal using the bounding tension spline", Advances in Constructive Approximation, M. Neamtu and E. B. Saff (eds.), Nashboro Press, pp. 467-476, 2004.
Roach, D. W., D. Gibson, and K. Weber, "Why is the square root of 25 not equal to plus or minus five?", Mathematics Teacher, Vol. 97-1, pp. 12-13, Jan 2004.
Lai, M. J. and D. W. Roach, "Parameterizations of univariate orthogonal wavelets with short support", Approximation Theory X, pp. 369-384, Innovations in Applied Mathematics, Vanderbilt Univ. Press, 2002.
Lai, M. J. and D. W. Roach, "The nonexistence of bivariate symmetric wavelets with two vanishing moments and short support", Trends in Approximation Theory, pp. 213-223, Innovations in Applied Mathematics, Vanderbilt Univ. Press, 2001.
Christon, M. A., and D. W. Roach, "The numerical performance of Wavelets for PDE's: The multi-scale Finite Element", Computational Mechanics, v. 25(23), pp. 230-244, March 2000.
Hardin, D. P., and D. W. Roach, "Multiwavelet Prefilters I: Orthogonal prefilters preserving approximation order p<2", IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 45-8, pp. 1106-1112, August 1998.
Hereford, J., D. W. Roach, and R. Pigford, "Image compression using parameterized wavelets with feedback", Independent Component Analyses, Wavelets, and Neural Networks, A. Bell, M. Wickerhauser, H. Szu, Editors, Proceedings of SPIE, Vol. 5102, pp. 267-277, 2003.
Roach, D. W., "Orthogonal approximation order preserving prefilters for multiwavelets", Wavelets: Applications in Signal and Image Processing IX, Andrew F. Laine, Michael A. Unser, Akram Aldroubi, Editors, Proceedings of SPIE Vol. 4478 (2001), pp. 242-253.
Lai, M. J. and D. W. Roach, "Nonseparable symmetric wavelets with short support", Wavelet Applications in Signal and Image Processing VII, Proceedings of SPIE, pp. 132-146, July 1999.
Christon, M. A., D. W. Roach, and T. E. Voth, "The numerical performance of wavelets and reproducing kernels for PDE's", Proceedings of the International Conference on Computational Engineering Science, 6 pp., October 1998.
Christon, M. A., R. S. Baty, S. P. Burns, D. W. Roach, T. G. Trucano, T. E. Voth, J. R. Weatherby, and D. Womble, "An investigation of wavelet bases for grid-based multi-scale simulations - Final Report", Technical Report SAND98-2456, Sandia National Laboratories, Albuquerque, New Mexico, 142 pp., September 1998.
Hardin, D. P., and D. W. Roach, "Semi-orthogonal wavelets for elliptic variational problems", Proceedings of the Tangier 98 International Wavelet Conference on Multi-scale Methods, INRIA, 6 pp., April 1998.
Research Presentations
March 2010: Talk - 13th International Conference on Approximation Theory, San Antonio, TX.
October 2008: Talk - Applied Mathematics and Approximation Theory 2008, Memphis, TN.
November 2007: Talk - AMS sectional meeting, special session on wavelets, MTSU, Murfreesboro, TN.
November 2007: Seminar talk - The University of Georgia, Athens, GA.
March 2007: Talk - 12th International Conference on Approximation Theory, San Antonio, TX.
May 2005: Talk - The Application of Splines and Wavelets, Athens, GA.
Oct 2004: Talk - AMS South-Eastern Sectional Meeting, Nashville, TN.
Mar 2004: Talk - KYMAA state-wide meeting, Murray, KY.
May 2003: Talk - International Conference on Advances in Constructive Approximation, Nashville, TN.
April 2003: Talk - SPIE Aerosense: Aerospace/Defense Sensing, Simulation, and Controls, Kissimmee, FL.
March 2003: Talk - Kentucky MAA Annual Meeting, Bellarmine University, Louisville, KY.
April 2002: Talk - Kentucky MAA Annual Meeting, Georgetown College, Georgetown, KY.
March 2002: Talk - 10th S. E. Approximation Theory Conference, UGA, Athens, GA.
Oct 2001: Invited Talk - Western Kentucky Math Symposium, WKU, Bowling Green, KY.
Jul 2001: Invited Talk - SPIE's 46th Annual Meeting, San Diego, CA.
Mar 2001: Talk - 10th International Conference on Approximation Theory, UMSL, St. Louis, MO.
May 2000: Talk - Trends in Approximation Theory, an International Symposium, Nashville, TN.
Sept 1999: Poster - Image Processing Multiresolution Analysis and Statistics, Georgia Tech, Atlanta, GA.
July 1999: Invited Talk - SPIE's 45th Annual Meeting, Denver, CO.
April 1998: Talk - University of New Mexico Colloquium, Albuquerque, NM.
April 1998: Poster - International Wavelet Conference Tangier 98, Tangier, Morocco.
Jan 1998: Talk - Ninth International Conference on Approximation Theory, Nashville, TN.
May 1997: Talk - Sandia National Laboratories Colloquium, Albuquerque, NM.
April 1997: Talk - Ninth Southeastern Approximation Theory Conference, Athens, GA.
March 1997: Talk - Multiwavelet Conference, Sam Houston State University, Huntsville, TX.
Jan 1997: Talk - Special Session of the AMS/MAA Joint Meeting, San Diego, CA.
{"url":"http://campus.murraystate.edu/academic/faculty/david.roach/","timestamp":"2014-04-17T01:43:39Z","content_type":null,"content_length":"17717","record_id":"<urn:uuid:e64ecefe-375b-4e15-8e06-4a9d4527e006>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
Newest &#39;gm.general-mathematics computational-complexity&#39; Questions According to Stephen Cook on wikipedia, http://en.wikipedia.org/wiki/P_versus_NP_problem ...it would transform mathematics by allowing a computer to find a formal proof of any theorem which has a ... This is a question my son Bob asked me. For some sets it is relatively easy to test for membership but a lot more difficult to find members, and for others the reverse is true. Here is an elementary
{"url":"http://mathoverflow.net/questions/tagged/gm.general-mathematics+computational-complexity","timestamp":"2014-04-20T11:32:51Z","content_type":null,"content_length":"34473","record_id":"<urn:uuid:62734dd3-b7b6-4208-b0cb-a292bec7ed7f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Do decidable properties of finitely presented groups depend only on the profinitization?

This is a just-for-fun question inspired by this one. Let $P$ be a property of finitely presentable groups. Suppose that

1. The truth of $P(G)$ only depends on the isomorphism class of $G$.
2. Given a finite presentation of $G$, the truth of $P(G)$ is computable.

Let $\hat{G}$ denote the profinite completion of $G$. Is it possible to have groups $G$ and $H$, and such a property $P$, so that $\hat{G} = \hat{H}$ but $P(G) \neq P(H)$? For example, is there a computable property which separates Higman's group from the trivial group?

gr.group-theory computability-theory

I'm glad to see that you've picked up the thread, David. This question is related to understanding the equivalence relation \equiv at the conclusion of my question, to which you link: mathoverflow.net/questions/16532. A related issue: It seems that the relation of having the same profinite completion is likely not decidable. Do we have proof of this? – Joel David Hamkins Feb 27 '10 at 0:53

Do you know of interesting properties satisfying 1. and 2.? The ones I can think of involve calculating the abelianization of G (or other nilpotent quotients). These certainly won't work. On an unrelated-but-feels-a-bit-related note, Bridson (Paper I at people.maths.ox.ac.uk/~bridson/papers/profinite/…) constructs examples of injective homomorphisms $i:H \hookrightarrow G$ such that $i$ induces an isomorphism on profinite completions, but you can't decide if $G$ and $H$ are isomorphic. You can take H and G to both be residually finite, or alternatively you can take $H=\{1\}$. – Daniel Groves Feb 27 '10 at 0:55

Oh, I just read the linked question, and so I guess you don't know of interesting properties satisfying 1. and 2. ... – Daniel Groves Feb 27 '10 at 0:58

Not to say that the answers given in that question aren't interesting, just that I know about them. (I realised that my second comment was quite rude, for which I apologize.) – Daniel Groves Feb 27 '10 at 1:03

Joel, regarding whether or not having isomorphic profinite completions is decidable... In the second paper on the webpage that Daniel linked to above, Bridson exhibits pairs of a group and a subgroup such that no algorithm can determine whether or not they have isomorphic profinite completions. Unfortunately, it may be impossible to compute a presentation for the subgroup. I discuss this sort of thing in my answer to this question: mathoverflow.net/questions/15957/… – HJRW Feb 27 '10 at 2:19

3 Answers

OK, I think I have an example of two groups with the same profinitization and a computable property which distinguishes them. The point is that very fine detail about the commutator subgroups can't be seen in the profinitization.

Let $q$ be prime and let $K$ be the $q$-th cyclotomic field. Choose $q$ such that the class group of $K$ is not trivial. Let $I$ be a trivial ideal of $\mathcal{O}_K$ and $J$ a nontrivial ideal. Our groups $G$ and $H$ will be $(\mathbb{Z}/q) \ltimes I$ and $(\mathbb{Z}/q) \ltimes J$.

For any group $B$, let $B' = [B,B]$ and $B'' = [B', B']$. Note that $B/B'$ acts on $B'/B''$ by conjugation. Our computable criterion is the following: $B/B' \cong \mathbb{Z}/q \times \mathbb{Z}/q =: A$, the action of the group ring $\mathbb{Z}[A]$ on $B'/B''$ factors through a map $\mathbb{Z}[A] \to \mathcal{O}_K$ and, as such, $B'/B''$ is a free $\mathcal{O}_K$ module.
We leave it as an exercise that $G$ satisfies this condition and $H$ does not.

I believe this condition should be computable. [struck out: We can go from a finite presentation of $B$ to one of $B'$.] (UPDATE: I have revised this argument.) Abelianizations are computable, so we can check whether $B/B'$ has the right format. If it does, then $B'$ has finite index in $B$. I think we can use this to get a finite presentation of $B'$: Let $\Delta$ be a two-dimensional $CW$-complex with one vertex, an edge for each generator of $B$ and a two-cell for each relation. Let $\Delta'$ be the cover of $\Delta$ corresponding to $B'$. Since $B'$ has finite index in $B$, $\Delta'$ will have finitely many cells, and we get a finite presentation of $B'$. We can then compute the abelianization of $B'$ and, I think, the action of the abelianization of $B$ on that of $B'$ should be computable. Note that there are only $q^2$ maps from $\mathbb{Z}[A]$ to $\mathcal{O}_K$, so we can just check them each in turn. The class of a finitely generated module for a Dedekind domain should be computable by standard number theory methods, although I admit I couldn't describe them.

The fact that these two groups have the same profinitization is relatively well known. Let $\hat{I}$ and $\hat{J}$ denote the profinite completions of $I$ and $J$. The profinite completions of $G$ and $H$ are $\mathbb{Z}/q \ltimes \hat{I}$ and $\mathbb{Z}/q \ltimes \hat{J}$. We can identify $\hat{I}$ and $\hat{J}$ with submodules of $\mathbb{A}^0_K$, the integral adeles of $K$. Since $I$ and $J$ are locally principal, these are principal ideals in the ring $\mathbb{A}^0_K$. They are thus equivalent as $\mathbb{A}^0_K$ modules, and thus as $\mathcal{O}_K$ modules.

The commutator subgroup of a finitely presented group need not be a finitely presented group. For example, the commutator subgroup of the free group on 2 generators is not even finitely generated. – Bjorn Poonen Feb 27 '10 at 15:17

David, you're correct that a finite-index subgroup of a fp group is fp, so if the abelianization of B is finite and B is fp then B' is also fp. So why is your G fp? – HJRW Feb 27 '10 at

To add a little bit: the first construction of a pair of non-isomorphic, fp, residually finite groups with isomorphic profinite completions was given by Bridson and Grunewald in 2004. Their construction is still essentially the only known one. Your groups seem to be obviously residually finite, so if they're fp then this is very interesting. – HJRW Feb 28 '10 at

@Henry: A semidirect product of finitely presented groups is finitely presented. – Bjorn Poonen Feb 28 '10 at 1:51

@Henry: I think it was known <=1964 that there exist 2 residually finite fp groups with isomorphic profinite completions. If one starts with a smooth projective variety over a number field k, extends the base via two embeddings k --> C, and takes the fundamental group, then the two resulting groups are fp groups with isomorphic profinite completions (étale fundamental group of V_kbar). In 1964 Serre gave an example in which these groups were not isomorphic. And if I remember correctly, they were residually finite. In fact, I think they were exactly the groups David has constructed. – Bjorn Poonen Feb 28 '10 at 2:27
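A sketch of the exercise above, assuming (as seems intended) that the generator $t$ of $\mathbb{Z}/q$ acts on $I \subset \mathcal{O}_K = \mathbb{Z}[\zeta]$ by multiplication by a primitive $q$th root of unity $\zeta$: for $x \in I$ one has $[t,x] = \zeta x - x$, so
$$G' = (\zeta - 1)I, \qquad G/G' \cong \mathbb{Z}/q \times I/(\zeta-1)I \cong \mathbb{Z}/q \times \mathbb{Z}/q, \qquad G'' = 1,$$
where the middle isomorphism uses the fact that $(\zeta - 1)$ generates the prime above $q$, with residue field $\mathbb{F}_q$, and that $I$ is locally principal. Hence $G'/G'' = (\zeta-1)I \cong I$ as $\mathcal{O}_K$-modules, which is free exactly when $I$ is principal, so $G$ passes the criterion. The identical computation for $H$ ends with the non-principal ideal $J$, which is rank-one projective but not free, so $H$ fails it.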
In this paper, Owen Cotton-Barratt and I construct two finitely presentable groups with isomorphic profinite completions, but such that one is conjugacy separable (implying solvable conjugacy problem) and the other has unsolvable conjugacy problem.

(The construction is very much in the spirit of the paper of Bridson that Daniel Groves mentioned in the comments.)

Sorry, I only just noticed requirement 2. Since almost no properties are computable from a finite presentation, and yet the class of properties computable from a finite presentation is mysterious (eg does it include having a proper finite-index subgroup?), I don't see how you'll get any interesting answers with condition 2.

FURTHER EDIT: As Bjorn explained to me in the comments to David's answer, it's not nearly as hard as I had thought to build two fp groups with the same profinite completion. Indeed, there are virtually abelian examples. As one can solve the isomorphism problem for virtually abelian groups, it follows that there are examples of computable properties that are not determined by the profinite completion, as David's answer shows.

Is it decidable from a presentation whether or not a group is large, i.e. admits a homomorphism onto the nonabelian free group on two letters? This seems totally unlikely, and surely either Henry or Daniel would know, but I like the following theorem anyway, so I'll advertise. Lackenby showed (`Detecting large groups', GR/0702571) that largeness is a property of the profinite completion of discrete finitely presented groups.

Huh, interesting! It's unknown whether this is decidable. It's equivalent to asking whether a group has a proper finite-index subgroup, as G has a proper finite-index subgroup if and only if G*G*G is large. – HJRW Feb 27 '10 at 2:40

But how does this answer the question? For a negative answer, what is needed is a decidable property that doesn't respect the profinite completion. (And for a positive answer to the question, you need to grapple with the collection of all decidable properties.) – Joel David Hamkins Feb 27 '10 at 14:36

Joel, as far as I can tell, there are no genuinely interesting properties that are known to be computable. Indeed, I think 'largeness' is one of the few real candidates. So it's certainly relevant, even if it doesn't answer the question in the strictest sense. – HJRW Feb 27 '10 at 16:42

Actually, what Stover asks is decidable, whether a group has a homomorphism to a free group, by a result of Razborov. The usual definition of large is that there is a finite-index subgroup which maps onto a (non-cyclic) free group, which is actually the condition that Marc shows is determined by the profinite completion. If there is a non-residually finite hyperbolic group, then I suspect the answer to the question is no. ams.org/mathscinet-getitem?mr=755958 – Ian Agol Feb 27 '10 at 23:26

Right, good point, Ian. In my comment above, I presumed Matt had given the usual definition. Actually, you don't need the full power of Razborov, you just need Makanin. I mentioned this in an answer to this question: mathoverflow.net/questions/16532/… – HJRW Feb 28 '10 at 0:07
{"url":"http://mathoverflow.net/questions/16565/do-decidable-properties-of-finitely-presented-groups-depend-only-on-the-profinit/16575","timestamp":"2014-04-18T10:45:49Z","content_type":null,"content_length":"83555","record_id":"<urn:uuid:488450c3-7454-41b3-8767-260f2b517b75>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Thermodynamic Properties of high pressure gases

Air @ 300 K and 330 bar: Kin. Viscosity = 0.00000090477 (ft²/s)

Thank you for this. Could you tell me how you calculated it, or where you found this data? Just so I can do it for myself in future, as I reckon I will have a range of other pressures and temps to calculate kinematic viscosity for.
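One plausible route (a rough sketch, not necessarily how the figure above was obtained; values like that usually come from a real-gas property database): kinematic viscosity is dynamic viscosity divided by density, with the density corrected by a compressibility factor Z at high pressure. The mu and Z values below are placeholder assumptions that should be read from tables for 330 bar / 300 K; expect only the right order of magnitude from this.

M_AIR = 0.028964      # molar mass of air, kg/mol
R = 8.314462          # gas constant, J/(mol*K)

def kinematic_viscosity(P, T, mu, Z=1.0):
    """P in Pa, T in K, mu (dynamic viscosity) in Pa*s; returns m^2/s."""
    rho = P * M_AIR / (Z * R * T)     # real-gas density via compressibility factor
    return mu / rho

# Example at 330 bar and 300 K; mu and Z here are assumed, not measured:
nu = kinematic_viscosity(330e5, 300.0, mu=2.1e-5, Z=1.1)
print(nu, "m^2/s =", nu / 0.09290304, "ft^2/s")   # roughly 1e-7 ft^2/s scale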
{"url":"http://www.physicsforums.com/showthread.php?p=3418891","timestamp":"2014-04-16T19:12:06Z","content_type":null,"content_length":"37685","record_id":"<urn:uuid:8efd3498-209c-4b24-b1ea-703ac93656c1>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00456-ip-10-147-4-33.ec2.internal.warc.gz"}
Boolean Matrices and Combinational Circuit Design - IEEE Trans. CAD/IC, 1987
Cited by 16 (5 self)
A network of switches controlled by Boolean variables can be represented as a system of Boolean equations. The solution of this system gives a symbolic description of the conducting paths in the network. Gaussian elimination provides an efficient technique for solving sparse systems of Boolean equations. For the class of networks that arise when analyzing digital metal-oxide semiconductor (MOS) circuits, a simple pivot selection rule guarantees that most s-switch networks encountered in practice can be solved with O(s) operations. When represented by a directed acyclic graph, the set of Boolean formulas generated by the analysis has total size bounded by the number of operations required by the Gaussian elimination. This paper presents the mathematical basis for systems of Boolean equations, their solution by Gaussian elimination, and data structures and algorithms for representing and manipulating Boolean formulas.

, 2012
Is it right, that regardless of the existence of the already elaborated algebra of logic, the specific algebra of switching networks should be considered as a utopia? Paul Ehrenfest, 1910. A switch, whether mechanical or electrical, is a fundamental building element of digital systems. The theory of switching networks, or simply circuits, dates back to Shannon's thesis (1937), where he employed Boolean algebra for reasoning about the functionality of switching networks, and graph theory for describing and manipulating their structure. Following this classic approach, one can deduce functionality from a given structure via analysis, and create a structure implementing a specified functionality via synthesis. The use of two different mathematical languages creates a 'language barrier' – whenever a circuit description is changed in one language, it is necessary to translate the change into the other one to keep both descriptions in sync. For example, having tweaked a circuit structure one cannot be certain that the circuit functionality has not been broken, and vice versa.
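As a concrete illustration of the elimination idea in the first abstract (a sketch of my own, not code from either paper): a switch network yields a system x = A x OR b over the Boolean semiring, and Gaussian elimination applies once division is replaced by Arden's rule, x = a x OR c implies x = a* c, where a* = 1 for Boolean a, so the diagonal term simply drops out.

def solve_boolean(A, b):
    """Least solution of x = (A x) OR b, componentwise over the Booleans {0, 1}."""
    n = len(b)
    A = [row[:] for row in A]          # work on copies
    b = b[:]
    for i in range(n):                 # forward elimination
        A[i][i] = 0                    # Arden's rule: a_ii* = 1 over the Booleans
        for k in range(i + 1, n):
            if A[k][i]:                # substitute row i's expression for x_i
                b[k] |= b[i]
                for j in range(n):
                    A[k][j] |= A[i][j]
                A[k][i] = 0
    x = [0] * n                        # A is now strictly upper triangular
    for i in reversed(range(n)):       # back-substitution
        x[i] = b[i]
        for j in range(i + 1, n):
            x[i] |= A[i][j] & x[j]
    return x

# Example: a 3-switch cycle 0 -> 1 -> 2 -> 0 with a source feeding node 2:
# solve_boolean([[0,1,0],[0,0,1],[1,0,0]], [0,0,1]) == [1, 1, 1]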
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1853653","timestamp":"2014-04-25T06:07:14Z","content_type":null,"content_length":"15417","record_id":"<urn:uuid:697d4ffe-082c-4256-8d70-987de93665d5>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
People hate on CoD for hate's sake.

Logical_One posted...
BlueJester007 posted...
Let's take the derivative of e^x *gasp* It's the same thing??!! What happens if we take the second derivative? It's the same thing again??
Let's not overlook the fact that e^x (exponential) is one of the most important and USEFUL functions in mathematics and engineering... Your point was?

Why, yes. It is one of the most useful. But let's not overlook all the other functions that actually give you something else when their derivative is taken. x^2 gives you 2x. A little generic, but at least it is more than just x^2+1 or something of that nature. What about ln(x)? Its derivative is 1/x. Wow. That is drastically different. I was expecting another logarithm or something. But I got something completely different. What about 2^x? That's interesting. It gives you 2^x ln(2). It expands on the original.
(By the way, thanks for actually responding to my mathematical post. People here seem to ignore these posts for some unknown reason.)
Sig'd --vigorm0rtis

CollegeDevil
[This message was deleted at the request of the original poster]

people hate on COD because other good game developers try to copy its "success"
Delta_F14
--- Vault Dweller > All other Video game characters

BlueJester007 posted...
Logical_One posted...
BlueJester007 posted...
Math Stuffs
This mathematical metaphor is interesting; please keep it going.
Zero Stones! Zero Crates!

No i pretty much hate it because its a slight change everytime. it's for twitchy little ADD kids and takes 0 skill. Also making posts about how lame it is to hate CoD are just as frequent and just as lame.
Surv1valism
--- GT: IAstro ZombieI ( i's) PSN: IThe_WretchedI http://www.listal.com/list/game-collection-astrozombie29 (not finished)

BlueJester007 posted...
(By the way, thanks for actually responding to my mathematical post. People here seem to ignore these posts for some unknown reason.)
Because math is for nerds. Nerd. (Just kidding. I am a comedian after all.)
I'm the greatest ****er here! And you sniveling ****s would die without me! Ahahaha!

Surv1valism posted...
No i pretty much hate it because its a slight change everytime. it's for twitchy little ADD kids and takes 0 skill. Also making posts about how lame it is to hate CoD are just as frequent and just as lame.
pothocket
Exactly. The only "skill" you need is reaction time. The only "tactic" you need is camping because it increases your odds of reacting first.
well I am not like your dad. I worked as a chef at TGIF

I refuse to pool money towards a series of games whose company has done their fair share of driving the gaming industry towards an increasingly negative position.
AsucaHayashi
the quality of the game is pretty much irrelevant.
If console gaming is so cheap then why do I have to spend $600~ in order to play Super Mario Galaxy, Uncharted 3 and Halo 3?

CaPwnD posted...
From: m4sturch33f | #045
Yeah, label anything you can't make a legit response to as 'ignorant'. Excellent display of intelligence kid.
Respond to it? These people don't want a response. It's been said over and over. They act like elitists who put down other gamers because these ignorant people think that they are right and nothing me or anybody else says is going to make a difference. And you can gtfo with your little "kids" remark. That just makes my comment all that much more true. I'm not even attempting to demean you or what you are.
If you are a casual gamer, that just so happens to be what the scope of your relationship with video games is. However, I do have a problem with the market shifting towards casual appeal. I don't find video games challenging anymore because anything challenging is deemed "too hard or inaccessible" to people with less time than me. Video gaming is my hobby, and casual gamers ruin it by creating market trends that aren't to the benefit of the people who actually care at the end of the day. Call of Duty is a huge contributor to this effect. So is any WoW element beyond the scope of Vanilla WoW.

NP lol--I didn't go through a mechanical engineering undergrad to ignore math...
Logical_One
--- Gamefaqs....the land where pokemon is the holy grail and final fantasy is God himself--Raptorleon3
{"url":"http://www.gamefaqs.com/boards/927749-xbox-360/64614638?page=6","timestamp":"2014-04-18T22:14:15Z","content_type":null,"content_length":"33242","record_id":"<urn:uuid:91a4096a-bd3a-416e-8924-6bd77c5f8e9e>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00453-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture Notes

Most of this material was written as informal notes, not intended for publication. However, some notes are copyrighted and may be used for private use only. Errors are the responsibility of the authors.

pdf format

Multivariable Calculus
Curl, div, grad and all that stuff explained by J. Cooper in a geometric fashion.
Multivariable Calculus supplements. Include many applications to the physical sciences. By O. Knill.
Elementary vector calculus applied to Maxwell's equations and the electric potential. By D. Bump.

Real Analysis
Elementary notes on real analysis by T. Korner.
Notes in analysis on metric and Banach spaces with a twist of topology. By Y. Safarov.
Notes on Banach and Hilbert spaces and Fourier series by G. Olafsson.
A paper on unified analysis and generalized limits by Ch. Brown. Also available at www.limit.com.

Measure Theory and Integration
Everything you need to know to get started on measure theory! Notes by G. Olafsson.
Another very good set of notes on measure theory. These ones by B. Driver.
Area of spheres, volume of balls and the Gamma function. Notes of IAP 2001 made by D. Stroock.

Linear Algebra
Nice notes on elementary linear algebra by J. Ellenberg. Great for a first course!
Another set of notes in elementary linear algebra. By B. Lackey.
Yet some more notes on linear algebra. Notes by P. Martin at City University, London.
Two sets of notes by R. Gardner. One of them based on Fraleigh's "Linear Algebra".
A brief survey on Jordan canonical form by J. Beachy.
Some basic facts on bilinear forms by Ch. Weibel.

Abstract Algebra
A very elegant course in group theory by J. Milne.
A comprehensive introduction by J. Baker to finite group representations.
Some notes on permutation and alternating groups.
Very basic facts about rings. Written by M. Vaughan-Lee.
Notes on commutative algebra (modules and rings) by I. Fesenko.
Notes on some topics in module theory by E. L. Lady.
An introduction to Galois theory by J. Milne.
A set of notes on Galois theory by D. Wilkins.
A short note on the fundamental theorem of algebra by M. Baker.
Definition and some very basic facts about Lie algebras.
Nice introductory paper on representations of Lie groups by B. Hall.
Brief notes on homological algebra by I. Fesenko.
An excellent reference on the history of homological algebra by Ch. Weibel.

Complex Variables
Lecture notes on complex analysis by T. Tao. Very elementary. Great for a beginning course.
A more advanced course on complex variables. Notes written by Ch. Thiele.
Some papers by D. Bump on the Riemann zeta function.

Topology
Notes on a neat general topology course taught by B. Driver.
Notes on a course based on Munkres's "Topology: a first course". By B. Ikenaga.
Two sets of notes by D. Wilkins. General topology is discussed in the first and algebraic topology in the second.
A paper discussing one-point and Stone-Cech compactifications. Written by J. Blankespoor and J. Krueger.

Geometry
Geometry of curves and surfaces in R^3. Notes written by R. Gardner.
Brief and intuitive introduction to differential forms by D. Arapura.
Notes on a course in calculus on normed vector spaces.
Very concise introduction to differential geometry by S. Yakovenko.
Basics on differential geometry. A nice set of notes written by D. Allcock.
A comprehensive introduction to algebraic geometry by I. Dolgachev.
Another very good set of notes by J. Milne. These ones devoted to algebraic geometry.
A nice introduction to symplectic geometry by S. Montaldo.
Dynamics on one complex variable. Lecture notes by J. Milnor.
{"url":"http://www.math.miami.edu/~dsolis/notes.html","timestamp":"2014-04-18T10:34:38Z","content_type":null,"content_length":"17454","record_id":"<urn:uuid:3af2aa7a-27b0-4611-9153-d16966766b36>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Textbook-Integrated Guide to Educational Resources Journal Articles: 40 results Using Pooled Data and Data Visualization To Introduce Statistical Concepts in the General Chemistry Laboratory Robert J. Olsen This article describes how data pooling and visualization can be employed in the first-semester general chemistry laboratory to introduce core statistical concepts such as central tendency and dispersion of a data set. Olsen, Robert J. J. Chem. Educ. 2008, 85, 544. Chemometrics | The Importance and Efficacy of Using Statistics in the High School Chemistry Laboratory Paul S. Matsumoto This paper describes some statistical concepts and their application to various experiments used in high school chemistry. Matsumoto, Paul S. J. Chem. Educ. 2006, 83, 1649. Chemometrics | Mathematics / Symbolic Mathematics Cross-Proportions: A Conceptual Method for Developing Quantitative Problem-Solving Skills Elzbieta Cook and Robert L. Cook This paper focuses attention on the cross-proportion (C-P) method of mathematical problem solving, which was once widely used in chemical calculations. We propose that this method regain currency as an alternative to the dimensional analysis (DA) method, particularly in lower-level chemistry courses. In recent years, the DA method has emerged as the only problem solving mechanism offered to high-school and general chemistry students in contemporary textbooks, replacing more conceptual methods, C-P included. Cook, Elzbieta; Cook, Robert L. J. Chem. Educ. 2005, 82, 1187. Learning Theories | Stoichiometry | Chemometrics | Student-Centered Learning CHEMiCALC (4000161) and CHEMiCALC Personal Tutor (4001108), Version 4.0 (by O. Bertrand Ramsay) Scott White and George Bodner CHEMiCALC is a thoughtfully designed software package developed for use by high school and general chemistry students, who will benefit from the personal tutor mode that helps to guide them through unit conversion, empirical formula, molecular weight, reaction stoichiometry, and solution stoichiometry calculations. White, Scott; Bodner, George M. J. Chem. Educ. 1999, 76, 34. Chemometrics | Nomenclature / Units / Symbols | Precision and Accuracy (the authors reply, 2) Midden, W. Robert Rounding-off rules and significant figures. Midden, W. Robert J. Chem. Educ. 1998, 75, 971. Precision and Accuracy (the authors reply, 1) Guare, Charles J. Rounding-off rules and significant figures. Guare, Charles J. J. Chem. Educ. 1998, 75, 971. Precision and Accuracy (3) Rustad, Douglas Rounding-off rules and significant figures. Rustad, Douglas J. Chem. Educ. 1998, 75, 970. Precision and Accuracy (1) Sykes, Robert M. Standard procedures for determining and maintaining significant figures in calculations. Sykes, Robert M. J. Chem. Educ. 1998, 75, 970. Those Baffling Subscripts Arthur W. Friedel and David P. Maloney Study of the difficulties students have in interpreting subscripts correctly and distinguishing atoms from molecules when answering questions and solving problems. Friedel, Arthur W.; Maloney, David P. J. Chem. Educ. 1995, 72, 899. Nomenclature / Units / Symbols | Stoichiometry | REACT: Exploring Practical Thermodynamic and Equilibrium Calculations Ramette, Richard W. Description of REACT software to balance complicated equations; determine thermodynamic data for all reactants and products; calculate changes in free energy, enthalpy, and entropy for a reaction; and find equilibrium conditions for the a reaction. Ramette, Richard W. J. Chem. Educ. 1995, 72, 240. 
Stoichiometry | Equilibrium | Thermodynamics | Symbolic Algebra and Stoichiometry DeToma, Robert P. Applying symbolic algebra (instead of the factor-label method) to stoichiometry calculations. DeToma, Robert P. J. Chem. Educ. 1994, 71, 568. Chemometrics | Nomenclature / Units / Symbols AnswerSheets Cornelius, Richard Review of a spreadsheet-based program that has modules on significant figures, VSEPR structures, stoichiometry, and unit conversions. Cornelius, Richard J. Chem. Educ. 1993, 70, 460. VSEPR Theory | Stoichiometry | AnswerSheets Cornelius, Richard Review of a spreadsheet-like program that includes modules on significant figures, conversions, stoichiometry, and VSEPR structures. Cornelius, Richard J. Chem. Educ. 1993, 70, 387. VSEPR Theory | Stoichiometry | Concept learning versus problem solving: There is a difference Nakhleh, Mary B.; Mitchell, Richard C. Previous studies indicate that there is little connection between algorithmic problem solving skills and conceptual understanding. The authors provide some ways to evaluate students along a continuum of low-high algorithmic and conceptual problem solving skills. The study shows that current lecture method teaches students to solve algorithms rather than teaching chemistry concepts. Nakhleh, Mary B.; Mitchell, Richard C. J. Chem. Educ. 1993, 70, 190. Chemometrics | Learning Theories | Student-Centered Learning On the chemically impossible "other" roots in equilibrium calculations, II Ludwig, Oliver G. In a previous paper the author described, using mathematics accessible to students, how an equilibrium calculation leading to a quadratic equation may be shown to have but one "chemical" root. The present work extends this demonstration to some reactions leading to cubic equations. Ludwig, Oliver G. J. Chem. Educ. 1992, 69, 884. Chemometrics | A carbonate project introducing students to the chemistry lab Dudek, Emily A description of a first semester general chemistry laboratory that helps acquaint students with a large variety of chemistry laboratory procedures. Dudek, Emily J. Chem. Educ. 1991, 68, 948. Chemometrics | Gravimetric Analysis | Titration / Volumetric Analysis | Separation Science The use of "marathon" problems as effective vehicles for the presentation of general chemistry lectures Burness, James H. A marathon problem is a long, comprehensive, and difficult problem that ties together many of the topics in a chapter and that is solved together by the instructor and students. Sample problems are included and advantages and disadvantages of this approach are discussed. Burness, James H. J. Chem. Educ. 1991, 68, 919. Developmental instruction: Part II. Application of the Perry model to general chemistry Finster, David C. The Perry scheme offers a framework in which teachers can understand how students make meaning of their world, and specific examples on how instructors need to teach these students so that the students can advance as learners. Finster, David C. J. Chem. Educ. 1991, 68, 752. Learning Theories | Atomic Properties / Structure | Chemometrics | Descriptive Chemistry A stoichiometric journey Molle, Brian A story to help students overcome some of the difficulties they encounter in stoichiometry calculations. Molle, Brian J. Chem. Educ. 1989, 66, 561. Stoichiometry | Chemistry according to ROF (Fee, Richard) Radcliffe, George; Mackenzie, Norma N. 
Two reviews on a software package that consists of 68 programs on 17 disks plus an administrative disk geared toward acquainting students with fundamental chemistry content. For instance, acids and bases, significant figures, electron configuration, chemical structures, bonding, phases, and more. Radcliffe, George; Mackenzie, Norma N. J. Chem. Educ. 1988, 65, A239. Chemometrics | Atomic Properties / Structure | Equilibrium | Periodicity / Periodic Table | Periodicity / Periodic Table | Stoichiometry | Physical Properties | Acids / Bases | Covalent Bonding Let's separate theories from calculations Freilich, Mark B. This author writes in a 'provocative opinion' article challenging the readers to think about heavily emphasizing 'thought problems' in chemistry and allowing students to master those before throwing calculations into the mix. Freilich, Mark B. J. Chem. Educ. 1988, 65, 442. Reaction stoichiometry and suitable "coordinate systems" Tykodi, R. J. Methods for dealing with problems involving reactions stoichiometry: unitize and scale up, factor-label procedure, de Donder ratios, and titration relations. Tykodi, R. J. J. Chem. Educ. 1987, 64, 958. Stoichiometry | Titration / Volumetric Analysis | Hard ways and easy ways Schwartz, Lowell M. Two examples of common general chemistry calculations and different approaches ("hard" and "easy") to solving them. Schwartz, Lowell M. J. Chem. Educ. 1987, 64, 698. Stoichiometry | On writing equations Campbell, J.A. The author presents a very direct approach to writing complicated equations without using a matrix approach. Campbell, J.A. J. Chem. Educ. 1986, 63, 63. Stoichiometry | A LAP on moles: Teaching an important concept Ihde, John The objective of the Learning Activity Packet on moles include understanding the basic concept of the mole as a chemical unit, knowing the relationships between the mole and the atomic weights in the periodic table, and being able to solve basic conversion problems involving grams, moles, atoms, and molecules. [Debut] Ihde, John J. Chem. Educ. 1985, 62, 58. Stoichiometry | Nomenclature / Units / Symbols | Chemometrics | Atomic Properties / Structure | Molecular Properties / Structure | Periodicity / Periodic Table The Elements of Style in Chemistry, A Computer-assisted Instruction Supported Text (Beatty, James W.; Beatty, James J.) Crawford, Victor A. Intended to support students who have trouble solving important types of problems in chemistry. Crawford, Victor A. J. Chem. Educ. 1984, 61, A27. Enrichment / Review Materials | The factor-label method: Is it all that great? Navidi, Marjorie H.; Baker, A. David The development of reasoning skills in chemistry is better achieved by postponing the introduction of the factor-label method. Navidi, Marjorie H.; Baker, A. David J. Chem. Educ. 1984, 61, 522. Reflections upon mathematics in the introductory chemistry course Goodstein, Madeline P. It is the purpose of this paper to call attention to the lack of mathematical competence by chemistry students and to invite consideration of one conceptual scheme which may be used to unify the mathematical approach. Goodstein, Madeline P. J. Chem. Educ. 1983, 60, 665. Chemometrics | Titration calculations- a problem-solving approach Waddling, Robin E. L. This author shares a strategy for helping students who might be struggling with understanding how to calculate and understand titration data. Waddling, Robin E. L. J. Chem. Educ. 1983, 60, 230. 
Acids / Bases | Titration / Volumetric Analysis | A pocket calculator program for the solution of pH problems via the method of successive approximations Guida, Wayne C. 37. Bits and pieces, 14. A description of a pocket calculator program for the solution of pH problems via the method of successive approximations . Guida, Wayne C. J. Chem. Educ. 1983, 60, 101. pH | Acids / Bases | Basic mathematics for beginning chemistry (Goldish, Dorthoy M.) Ellison, Herbert R. Ellison, Herbert R. J. Chem. Educ. 1981, 58, A65. Chemometrics | Mathematics / Symbolic Mathematics | Enrichment / Review Materials Think Wheeler, S. J., James D. Students have an easy enough time crunching numbers, but it is alarming how little they understand the concepts behind the numbers. Students should not be making remarks such as, "If they keep changing how they write the problems, how am I supposed to know how to solve them?" Wheeler, S. J., James D. J. Chem. Educ. 1981, 58, 1004. Learning Theories | A "road map" problem for freshman chemistry students Burness, James H. Question suitable for a take-home type of exam. Burness, James H. J. Chem. Educ. 1980, 57, 647. Gases | Solutions / Solvents | Stoichiometry | Nomenclature / Units / Symbols | Adopting SI units in introductory chemistry Davies, William G.; Moore, John W. Conventions associated with SI units, conversion relationships commonly used in chemistry, and a roadmap method for solving stoichiometry problems. Davies, William G.; Moore, John W. J. Chem. Educ. 1980, 57, 303. Nomenclature / Units / Symbols | The chemical equation. Part I: Simple reactions Kolb, Doris A chemical equation is often misunderstood by students as an "equation" that is used in chemistry. However, a more accurate description is that it is a concise statement describing a chemical reaction expressed in chemical symbolism. Kolb, Doris J. Chem. Educ. 1978, 55, 184. Stoichiometry | Chemometrics | Nomenclature / Units / Symbols | A pre-general chemistry course for the underprepared student Krannich, Larry K.; Patick, David; Pevear, Jesse Outline and evaluation of a course in chemical problem solving. Krannich, Larry K.; Patick, David; Pevear, Jesse J. Chem. Educ. 1977, 54, 730. Enrichment / Review Materials | A logic diagram for teaching stoichiometry Tyndall, John R. Presents a diagram that the author found helpful in teaching the fundamentals of stoichiometry. Tyndall, John R. J. Chem. Educ. 1975, 52, 492. Stoichiometry | Chemical calculations (Benson, Sidney W.) Melgaard, Kennett G. Melgaard, Kennett G. J. Chem. Educ. 1972, 49, A98. Grading the copper sulfide experiment Novick, Seymour The author recommends a more liberal analysis in grading the copper sulfide experiment. Novick, Seymour J. Chem. Educ. 1970, 47, 785. Stoichiometry | A formula for indirect gravimetry Fiekers, B. A. Derivation of a formula for indirect gravimetry and application to a sample problem. Fiekers, B. A. J. Chem. Educ. 1956, 33, 575. Gravimetric Analysis | Chemometrics | Quantitative Analysis
{"url":"http://www.chemeddl.org/alfresco/service/org/chemeddl/ttoc/ttoc_results/?id=10034&mode=primary&type=jcearticle&num_results=&guest=true","timestamp":"2014-04-19T17:07:19Z","content_type":null,"content_length":"34103","record_id":"<urn:uuid:0dbcf4a1-87b9-47ae-8fa8-2ffb43d5d416>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
This picture inspires a wonderful volume project, and can easily have scientific notation and proportions integrated into the project as well.

(1) Have students calculate the volume of the Earth.
(2) Research the amount of water that's on the Earth (about 326 million trillion gallons according to science.howstuffworks.com)
(3) Have students calculate what size sphere would hold that volume of water
(4) Either with a computer drawing program or just on a piece of paper, have students use proportions to show the size of the Earth compared to the sphere that would hold the world's water. (A short calculation sketch for this project appears below.)

** The same thing can be done with air (atmosphere), though I couldn't find a specific number as to the exact volume of air. But considering the atmosphere extends (very roughly) out to about 300 km (there's more atmosphere, I'm sure, but the density of the molecules would be negligible), simply take the radius of the Earth (6,378.1 km) to figure out the volume of the Earth, then draw another sphere around the Earth that has a radius of 6,678.1 km (radius of Earth + 300) and calculate the volume of that sphere, and the difference would be the volume of the atmosphere ... albeit a very rough estimation. Students shouldn't be told this, of course!

Here's a site with more information about the Amount of Water in/on/above the Earth.

Teaching Slope

If there's one thing I've figured out about teaching the basics of slope, it's that there's not one single method that will reach every single student. (This is true of any topic). However, it is still possible to reach every student since different methods work for different students. Here are a few slope memory tricks that I've used when remediating students, if they just don't get it after being shown the traditional ways:

Mr. Slope Guy

This was actually the favorite method of my below-level high school students. On every assessment relating to linear equations, the first thing most students did was sketch this on the top page as a guide. This isn't my creation, but I can't remember where and when I came across this to give the proper credit.

Writing "slope"

Since we write from left to right, people inherently will write the word "slope" from left to right, and this gives students a visual. Without moving the paper around, write the word "slope" on the line and if you find yourself writing upwards, it's positive. Writing down is negative. Straight across is zero. And since there's not really a place to write the word "slope" on a vertical line (without moving the paper), that's undefined.

This only works for distinguishing positive from negative slopes, but simply tracing the line with a fingertip from left to right lets students physically feel the direction of the line as to whether it's going up or down. I prefer that students write the word "slope" as mentioned above since writing is inherently left to right and tracing is not, but some students prefer this method.

One test asked "what is the slope of a horizontal line," and a student told me that she couldn't decide whether to write zero or undefined until she remembered that I had told them horiZontal has "z" for zero. Whatever works...

Google Earth and Complex Area

Here is a project by realworldmath.org. Real World Math integrates Google Earth with various math topics, this one on Complex Area.
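As flagged in the volume project above, here's a rough Python sketch of the water-sphere calculation before continuing with the Complex Area lesson. The Earth radius and the 326-million-trillion-gallon figure come from the post; the gallon-to-cubic-metre conversion and the variable names are my own.

import math

EARTH_RADIUS_KM = 6378.1        # radius quoted in the post
WATER_GALLONS = 326e18          # "326 million trillion gallons"
M3_PER_GALLON = 0.003785411784  # one US gallon in cubic metres

# Step (1): volume of the Earth, treated as a sphere.
earth_volume_km3 = (4 / 3) * math.pi * EARTH_RADIUS_KM ** 3

# Step (3): radius of a sphere holding all of Earth's water.
water_volume_km3 = WATER_GALLONS * M3_PER_GALLON / 1e9  # m^3 to km^3
water_radius_km = (3 * water_volume_km3 / (4 * math.pi)) ** (1 / 3)

# Step (4): the proportion students would use for a to-scale drawing.
print(f"Earth volume: {earth_volume_km3:.3e} km^3")
print(f"Water-sphere radius: {water_radius_km:.0f} km")
print(f"Radius ratio, Earth to water sphere: {EARTH_RADIUS_KM / water_radius_km:.1f} to 1")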
Below is a very brief excerpt from their site, but you need to visit the site itself for the full project:

Complex Area Problems – Real World Math
• Measure distances
• Find the area of complex polygons
• Solve word problems involving rates

Lesson Description
This lesson consists of two parts. The first requires students to find the area of a complex shape using ... the area formulas for a parallelogram and triangle. The second part ... Students will need to be able to solve rate problems with a proportion for this section.

Data and Graphs – Online Activities

For the classroom

Circle Graphs (Thirteen Ed Online)
Here is a ready-made online worksheet for circle graphs. Work can be done on a piece of paper, but students need to use computers to access the online graph and article.

Graphing Review (Bitesize: Frequency diagrams)
Fun video/interactive review lesson. Frequency chart, circle graph, line graph and pictogram. Do the 10 question quiz to finish.

Graphing Review 2 (Bitesize: Handling data)
Line graphs, bar graphs, constructing circle graphs. 10 question quiz includes scatterplots and pictograms. Interactive lesson.

Linear Equations – Online Activities

For the classroom

Graphing Point-Slope
Given a point and slope, graph the line. Simple, clean graphing program good for student work on the board.

Identifying Slope-Intercept (Recognizing Slope and Y-Intercept)
Identify slope and y-intercept from an equation. All in slope-intercept form. Good for a mini checkpoint quiz.

Slope Investigations
Move the points around to see how the slope and graph of a line is affected. Mr. Kibbe's Slope Demo.

Slope and y-intercept Investigations
See how slope and y-intercept affect the graph of a line. Mr. Kibbe's Slope and y-intercept demo.

Writing slope-intercept Equations
Kind of like Line Gems (for students), in that students write the equations that will go through the most points, but I like the cleaner look and feel of this one: Mr. Kibbe's Slope Game.

For students

Writing slope-intercept equations (Line Gems)
Write the equation of the line that will go through the most gems.

Writing slope-intercept equations (Algebra vs. the Cockroach)
Write the equation of the line that will exterminate the roaches.

Probability and Statistics – Online Activities

For the classroom

Measures of Central Tendency (Bitesize: Mean, Median, Mode and Range)
A fun video/interactive review lesson. Finish with the 10 question quiz.

Transformations – Online Activities

For the classroom

Congruence, Translations, Reflections & Rotations (BBC – Bitesize: Introduction to Transformations)
Interactive lesson. Click on Activities for lesson. Do the 5 question "Test" when done.

The Transformation Game
I came across this Transformations board game. I haven't used this in any of my classes, but I thought I'd go ahead and post it here so I would remember it later, and in case anyone else wanted to try it out. If you use it, let me know how it went!

Linear Equations – Slope Project

I recently made a slope worksheet where I drew figures on a coordinate plane, and students had to state the slope of each of the sides of the figures. Then it occurred to me that this would make a really great slope project. Students could create their own line design on a coordinate plane, and label the slopes of the lines they used. It doesn't sound neat when stated like that, but here's an example of what a final product might look like. (I only wrote the slope for six segments, but you get the idea).
Stained glass and linear equations (or inequalities) are fairly common, but I think keeping it just as slope might be better. Students don't have to have lines running all across the coordinate plane since they only have to state the slope for smaller line segments. To ensure students don't just draw a few squares, students should be given a list of criteria. For example, direct students that their design must include 6 negative sloped lines, 6 positive, 4 zero slope and 4 undefined slope lines. Or 5 pairs of parallel and 5 pairs of perpendicular lines, or some similar variation. That way students have to use different sloped lines in their designs, and it also gives them a finite number of segments they have to write the slope for. This way, they're not penalized if they produce more complex designs.

Here's the Sample Project word document in case you want to use it as an example or if you want to modify it. And here's a Student Template.

Sumdog – Another Good Math Game Site

From ordering decimals to the distributive property, this site has wonderful games that students will probably end up playing on their own time at home. I found this site on a lazy Sunday afternoon and was surprised to see a lot of students logged in and playing! I played the games signed in as "guest," but teachers can upload student lists and even get progress reports of student activity (this part requires a subscription). But the games are free, so even if you don't plan on subscribing, I'd encourage you to introduce the students to the site.
{"url":"http://algebrafunsheets.com/blog/2011/11/","timestamp":"2014-04-20T08:36:24Z","content_type":null,"content_length":"52758","record_id":"<urn:uuid:024aba8f-af47-41a2-b6b9-d83f6be1d746>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Small Basic Curriculum: Lesson 3.3: The Math Object

Estimated time to complete this lesson: 1 hour

The Math Object

In this lesson, you will learn how to:
• Use different properties of the Math object.
• Use different operations of the Math object.

Do complex mathematical calculations boggle your mind at times? Don't worry! The Math object offers many mathematical functions that you can use in your programs. This object includes the following operations and properties:
• Cos
• Abs
• GetRandomNumber
• ArcSin
• Sin
• Floor
• SquareRoot
• GetDegrees
• Remainder
• Log
• Pi
• Min

Operations of the Math Object

Let's learn about some operations of the Math object by writing a simple program.

TextWindow.WriteLine("Enter the angle in degrees and get the sine of the angle:")
number = TextWindow.Read()
' Sin takes the angle in radians
TextWindow.WriteLine("The sine of the angle is " + Math.Sin(number))
' ArcSin recovers the angle (in radians) from its sine
TextWindow.WriteLine("The arcsine of the angle is " + Math.ArcSin(Math.Sin(number)))
' GetDegrees converts the angle from radians to degrees
TextWindow.WriteLine("The angle in degrees is " + Math.GetDegrees(Math.ArcSin(Math.Sin(number))))
TextWindow.WriteLine("Enter the angle in degrees and get the cosine of the angle:")
number = TextWindow.Read()
TextWindow.WriteLine("The cosine of the angle is " + Math.Cos(number))

In this example, you get the sine and the cosine of an angle that you specify by using the Sin and Cos operations of the Math object. You can also get the angle in radians from the sine value by using the ArcSin operation. Next, you can convert the angle from radians to degrees with the GetDegrees operation.

You can use the Sin operation to get the sine of the specified angle in radians.
You can use the ArcSin operation to get the angle in radians, given the sine value.
You can use the GetDegrees operation to convert the value of an angle from radians to degrees.
You can use the Cos operation to get the cosine of the specified angle in radians.

The Pi Property

The value of pi is an important aspect of some mathematical calculations. You can retrieve the value of pi in your calculations by using the Pi property of the Math object. Let's use this property to calculate the area of a circle.

TextWindow.Write("Enter the radius of the circle:")
Radius = TextWindow.Read()
Area = Math.Pi * Math.Power(Radius, 2)
TextWindow.WriteLine("Area of the Circle is " + Area)

In this example, you retrieve the value of pi by using the Pi property of the Math object. Then you use that value in the formula to get the area of the circle. The Pi property of the Math object returns the value of pi, which is approximately 3.14.

The Abs Operation

Abs is another useful operation that the Math object provides. Let's check it out. By using the Abs operation, you can get the absolute value of the given number. For example, if you subtract a number from a smaller number, the result will be a negative numeral.

TextWindow.WriteLine("Enter two numbers for subtraction: ")
Number1 = TextWindow.Read()
Number2 = TextWindow.Read()
Subtraction = Number1 - Number2
TextWindow.WriteLine("The answer is " + Math.Abs(Subtraction))

In this example, you subtract two numbers. Even if the first number is smaller than the second, the Abs operation returns a positive number. You can use the Abs operation of the Math object to get the absolute value of a number. For example, if the given number is -50, the Abs operation will return the value as 50.

The Floor Operation

While you create your Small Basic program, how can you get the integer value of a decimal number?
The Floor operation was created to give an integer value that is smaller than or equal to a decimal number that you specify. Let's see how you can use this operation in a program to calculate a student's average grade.

TextWindow.Write("Enter the name of the student: ")
Name = TextWindow.Read()
TextWindow.WriteLine("Enter the student's marks in six subjects:")
For i = 0 To 5
  Subject[i] = TextWindow.Read()
  Total = Total + Subject[i]
EndFor
Percentage = Total / 6
TextWindow.WriteLine("Total Marks:" + Total)
TextWindow.WriteLine("Percentage:" + Math.Floor(Percentage))

In this example, you enter the grades that a student earned in six subjects. Then, you use the Floor operation to get the student's average as an integer value.

The Log Operation

When you perform complex calculations, you often need the logarithmic value (base 10) of a particular number. The Math object in Small Basic offers the Log operation to get the log value of the specified number.

TextWindow.WriteLine("Enter number to get its log value: ")
Number = TextWindow.Read()
TextWindow.WriteLine("Log value of " + Number + " is: " + Math.Log(Number))

In this example, you use the Log operation to get the log value of a number that you enter, such as 22.3.

The GetRandomNumber Operation

Now, let's discuss the GetRandomNumber operation of the Math object. You can use this operation to get a random number between 1 and the maximum number that you specify. Let's use this operation in a program.

GraphicsWindow.BackgroundColor = "Black"
GraphicsWindow.Width = 600
GraphicsWindow.Height = 500
For i = 0 To 800
  GraphicsWindow.FontSize = Math.GetRandomNumber(30)
  x = Math.GetRandomNumber(GraphicsWindow.Width)
  y = Math.GetRandomNumber(GraphicsWindow.Height)
  GraphicsWindow.DrawText(x, y, "*")
EndFor

In this program, you draw the '*' shape on the graphics window in different sizes and at different locations. You first set the height, width, and background color of the graphics window. Then you set the font size by using the GetRandomNumber operation. The font size will be between 1 and 30 because you have specified 30 as the parameter for the GetRandomNumber operation. You also use this operation to randomly set the asterisks' x-coordinates and y-coordinates. This is the output you will see: [screenshot: asterisks of random sizes scattered across the graphics window]

The Min Operation

The Math object also provides the Min operation, which you can use to compare two numbers and identify the smaller number of the two. Let's apply this operation in a program.

TextWindow.WriteLine("Enter the first number:")
Number1 = TextWindow.Read()
TextWindow.WriteLine("Enter the second number:")
Number2 = TextWindow.Read()
min = Math.Min(Number1, Number2)
If (Number1 = Number2) Then
  TextWindow.WriteLine("These numbers are the same")
Else
  TextWindow.WriteLine("The smaller number is:" + min)
EndIf

In this example, you request two numbers from the user, use the Min operation to compare them, and display the smaller number in the text window. You also ensure that, if the user specifies the same number twice, the statement "These numbers are the same" appears.

The SquareRoot Operation

By using the SquareRoot operation of the Math object, you can get the square root of a number that you specify.

TextWindow.Write("Enter a number to get its square root: ")
Number = TextWindow.Read()
TextWindow.WriteLine("Square root of the entered number is " + Math.SquareRoot(Number))

In this example, you specify a number and use the SquareRoot operation to get its square root.

The Remainder Operation

You can get the remainder in a division problem by using the Remainder operation of the Math object.
start:
TextWindow.Write("Enter a number to check if it is even or odd: ")
number = TextWindow.Read()
If Math.Remainder(number, 2) = 0 Then
  TextWindow.WriteLine(number + " is an even number.")
Else
  TextWindow.WriteLine(number + " is an odd number.")
EndIf
Goto start

In this program, you want to verify whether a specified number is even or odd. You use the If condition to verify whether the number is even (that is, whether the remainder is 0 when you divide the number by 2). If the remainder is 1, the number is odd. To check the remainder, you use the Remainder operation of the Math object.

Let's Summarize...

Congratulations! Now you know how to:
• Use different properties of the Math object.
• Use different operations of the Math object.

Show What You Know

By using the GetRandomNumber operation, write a program to move and rotate a rectangle in a random manner.

Write a program to draw circles of different sizes in the graphics window. Set the size of the circle by using its area, and randomize the x-coordinates and y-coordinates of the circle.

To see the answers to these questions, go to the Answer Key page.

PowerPoint Downloads
{"url":"http://social.technet.microsoft.com/wiki/contents/articles/16376.small-basic-curriculum-lesson-3-3-the-math-object.aspx","timestamp":"2014-04-16T08:33:56Z","content_type":null,"content_length":"1048576","record_id":"<urn:uuid:be86aa03-1834-4b4a-bd07-cc851585338f>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
Fallacies, Flaws, and Flimflam it's been good to know ya

Posted by: Dave Richeson | December 18, 2008

At the end of 2008 the College Mathematics Journal will stop running its 20-year column "Fallacies, Flaws, and Flimflam." The column was devoted to "mistakes, fallacies, howlers, anomalies, and the like"—usually found on student work. In honor of the departure of this entertaining column I submit the following FFF, which was submitted as a solution to a homework problem in my Discrete Mathematics class.

Prove or disprove: the quotient of any two rational numbers is rational.

False. We will present a counterexample. The numbers 22 and 7 are rational. However $\frac{22}{7}=\pi$ and $\pi$ is irrational. Thus the conjecture is false.

I can beat that with a submission from my geometry class. I forget what the theorem was the student was proving, but the first line was: "We will begin by assuming the conclusion."
By: Robert Talbert on December 18, 2008 at 10:54 am

Excellent! At least (s)he was being clear about her/his assumptions!
By: Dave Richeson on December 18, 2008 at 11:14 am

22/7 and pi are extremely close, but they are not equal.
By: Stephen on February 22, 2012 at 7:12 pm

• Exactly. That's what makes this funny. :-)
By: Dave Richeson on February 22, 2012 at 8:33 pm

However, perhaps the student was right —> 1/0 is not a rational number. So there is a counterexample that shows the statement is false.
By: Steven on January 30, 2013 at 1:32 am

• Exactly—it is false, and that was the counterexample I was looking for.
By: Dave Richeson on January 30, 2013 at 10:36 am

Posted in Humor, Math | Tags: discrete mathematics, fallacies flaws and flimflam, irrational, mathematical errors, pi, proofs, rational
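For anyone who wants to see just how close the student's "counterexample" comes, here is a quick check of my own (not from the post or its comments):

from fractions import Fraction
import math

approx = Fraction(22, 7)
print(float(approx))             # 3.142857142857143
print(math.pi)                   # 3.141592653589793
print(float(approx) - math.pi)   # about 0.00126: close, but 22/7 != pi

Close indeed, which is exactly why the "proof" is funny rather than valid.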
{"url":"http://divisbyzero.com/2008/12/18/fallacies-flaws-and-flimflam-its-been-good-to-know-ya/","timestamp":"2014-04-18T05:31:53Z","content_type":null,"content_length":"65927","record_id":"<urn:uuid:c1d3ee45-4124-4d64-b544-b423b07b9fc5>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Generic Conical Orbits, Kepler's Laws, Satellite Orbits and Orbital Mechanics

The time for Mars to orbit the Sun is observed to be 1.88 Earth years. ... what you are speaking about and express it in numbers, you know something about ... – PowerPoint PPT presentation

Number of Views: 284
Avg rating: 3.0/5.0
Slides: 95
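The blurb quotes Mars's period as 1.88 Earth years. Kepler's third law, T^2 = a^3 with T in years and a in astronomical units, then gives the size of the orbit. A quick check of my own, not taken from the slides:

# Kepler's third law in Sun-centred units: T^2 = a^3 (T in years, a in AU)
T_mars = 1.88                # orbital period from the presentation blurb
a_mars = T_mars ** (2 / 3)   # semi-major axis in AU
print(f"Mars semi-major axis: {a_mars:.3f} AU")  # about 1.52 AU

This matches the accepted value of roughly 1.52 AU for Mars's orbit.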
{"url":"http://www.powershow.com/view/e1fe9-ODk5Y/Generic_Conical_Orbits_Keplers_Laws_Satellite_Orbits_and_Orbital_Mechanics_powerpoint_ppt_presentation","timestamp":"2014-04-17T19:28:15Z","content_type":null,"content_length":"148523","record_id":"<urn:uuid:2bb57675-456c-4d82-a7dd-d19c766e8335>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Overview - MATH GR 4 CURR MASTERY VISUAL LEARNING GUIDES Build Strong Elementary Math Skills & Meet Math Standards. Capture elementary students' interests in learning math concepts with interactive flip charts. Students can improve their test scores with these curriculum review cards, interactive cd-roms, and visual learning guides. Math Review CD-ROM Includes 750 interactive review cards, providing comprehensive coverage of the current national curriculum standards for the grade-level content. Ideal for use in review and assessment preparation by an individual student or for projection on a whiteboard for use by the entire class. Compatible with either Windows or Mac. Math Visual Learning Guides Each guide covers a different topic and includes four panels. Includes 10 laminated write-on/wipe-off guides (11" x 17"), Teacher's Guide, and Classroom Presentation/Assessment Prep CD-ROM. Math Flip Charts Sets Includes 10 double-sided laminated charts (12" x 18") covering grade-level-specific curriculum content plus write-on/wipe-off charts on the reverse side; built-in freestanding easel for easy display; and activity guide with blackline masters of the charts for students to fill in with dry erase markers, labeling exercises, key vocabulary terms, and corresponding quiz questions for each chart along with answers. Flip Chart Set Titles 1st Grade Curriculum Mastery Flip Charts Set • Numbers • Subtraction Facts • Plane Shapes • Money • Solid Shapes • Fractions • Telling Time • Calendar • Addition Facts • Measurement 2nd Grade Curriculum Mastery Flip Charts Set • Adding Two-Digit Numbers • All About Time • Subtracting Two-Digit Numbers • Length, Weight & Temperature • Hundred Counting Chart • Data & Graphs • Place Value • Ordinal Numbers • Understanding Fractions • Symmetry 3rd Grade Curriculum Mastery Flip Charts Set • Adding & Subtracting • Addition & Subtraction Number Sense • Multiplication Concepts • All About Fractions • Multiplication Table • All About Decimals • Division Concepts • Geometry & Measurement • All About Money • Problem-Solving Strategies 4th Grade Curriculum Mastery Flip Charts Set • Place Value • Lines & Angles • Multiplying (2-Digits) • Area & Perimeter • Dividing (2-Digits) • Fraction Concepts • Add & Subtract Decimals • Add & Subtract Fractions • Polygons • Units of Measurement
{"url":"http://www.pcieducation.com/store/item.aspx?DepartmentId=26&CategoryId=15&TypeId=23&ItemId=49323","timestamp":"2014-04-19T04:28:28Z","content_type":null,"content_length":"38380","record_id":"<urn:uuid:56d8cdd3-fe11-4208-86e3-22de7e929b12>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00623-ip-10-147-4-33.ec2.internal.warc.gz"}
G. H. Hardy
From Wikipedia, the free encyclopedia

Godfrey Harold "G. H." Hardy FRS^1 (7 February 1877 – 1 December 1947)^2 was an English mathematician, known for his achievements in number theory and mathematical analysis.^3^4 He is usually known by those outside the field of mathematics for his essay from 1940 on the aesthetics of mathematics, A Mathematician's Apology, which is often considered one of the best insights into the mind of a working mathematician written for the layman.

Starting in 1914, he was the mentor of the Indian mathematician Srinivasa Ramanujan, a relationship that has become celebrated.^5^6 Hardy almost immediately recognised Ramanujan's extraordinary albeit untutored brilliance, and Hardy and Ramanujan became close collaborators. In an interview by Paul Erdős, when Hardy was asked what his greatest contribution to mathematics was, Hardy unhesitatingly replied that it was the discovery of Ramanujan. He called their collaboration "the one romantic incident in my life."^5^7

Early life and career

G. H. Hardy was born on 7 February 1877, in Cranleigh, Surrey, England, into a teaching family.^8 His father was Bursar and Art Master at Cranleigh School; his mother had been a senior mistress at Lincoln Training College for teachers. Both parents were mathematically inclined. Hardy's own natural affinity for mathematics was perceptible at a young age. When just two years old, he wrote numbers up to millions, and when taken to church he amused himself by factorising the numbers of the hymns.^9

After schooling at Cranleigh, Hardy was awarded a scholarship to Winchester College for his mathematical work. In 1896 he entered Trinity College, Cambridge.^10 After only two years of preparation under his coach, Robert Alfred Herman, Hardy was fourth in the Mathematics Tripos examination. Years later, he sought to abolish the Tripos system, as he felt that it was becoming more an end in itself than a means to an end. While at university, Hardy joined the Cambridge Apostles, an elite, intellectual secret society.

As the most important influence Hardy cites the self-study of Cours d'analyse de l'École Polytechnique by the French mathematician Camille Jordan, through which he became acquainted with the more precise mathematics tradition in continental Europe. In 1900 he passed part II of the tripos and was awarded a fellowship. In 1903 he earned his M.A., which was the highest academic degree at English universities at that time. From 1906 onward he held the position of a lecturer where teaching six hours per week left him time for research. In 1919 he left Cambridge to take the Savilian Chair of Geometry at Oxford in the aftermath of the Bertrand Russell affair during World War I. Hardy spent the academic year 1928–1929 at Princeton in an academic exchange with Oswald Veblen, who spent the year at Oxford.^3 Hardy gave the Josiah Willard Gibbs lecture for 1928.^11^12 Hardy left Oxford and returned to Cambridge in 1931, where he was Sadleirian Professor until 1942.

The Indian Clerk (2007) is a novel by David Leavitt based on Hardy's life at Cambridge, including his discovery of and relationship with Srinivasa Ramanujan.

Hardy is credited with reforming British mathematics by bringing rigour into it, which was previously a characteristic of French, Swiss and German mathematics. British mathematicians had remained largely in the tradition of applied mathematics, in thrall to the reputation of Isaac Newton (see Cambridge Mathematical Tripos).
Hardy was more in tune with the cours d'analyse methods dominant in France, and aggressively promoted his conception of pure mathematics, in particular against the hydrodynamics which was an important part of Cambridge mathematics. From 1911 he collaborated with J. E. Littlewood, in extensive work in mathematical analysis and analytic number theory. This (along with much else) led to quantitative progress on the Waring problem, as part of the Hardy–Littlewood circle method, as it became known. In prime number theory, they proved results and some notable conditional results. This was a major factor in the development of number theory as a system of conjectures; examples are the first and second Hardy–Littlewood conjectures. Hardy's collaboration with Littlewood is among the most successful and famous collaborations in mathematical history. In a 1947 lecture, the Danish mathematician Harald Bohr reported a colleague as saying, "Nowadays, there are only three really great English mathematicians: Hardy, Littlewood, and Hardy–Littlewood."^13^:xxvii Hardy is also known for formulating the Hardy–Weinberg principle, a basic principle of population genetics, independently from Wilhelm Weinberg in 1908. He played cricket with the geneticist Reginald Punnett who introduced the problem to him, and Hardy thus became the somewhat unwitting founder of a branch of applied mathematics. His collected papers have been published in seven volumes by Oxford University Press. Pure mathematics Hardy preferred his work to be considered pure mathematics, perhaps because of his detestation of war and the military uses to which mathematics had been applied. He made several statements similar to that in his Apology: "I have never done anything 'useful'. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world."[1] However, aside from formulating the Hardy–Weinberg principle in population genetics, his famous work on integer partitions with his collaborator Ramanujan, known as the Hardy–Ramanujan asymptotic formula, has been widely applied in physics to find quantum partition functions of atomic nuclei (first used by Niels Bohr) and to derive thermodynamic functions of non-interacting Bose-Einstein systems. Though Hardy wanted his maths to be "pure" and devoid of any application, much of his work has found applications in other branches of science. Moreover, Hardy deliberately pointed out in his Apology that mathematicians generally do not "glory in the uselessness of their work," but rather – because science can be used for evil as well as good ends – "mathematicians may be justified in rejoicing that there is one science at any rate, and that their own, whose very remoteness from ordinary human activities should keep it gentle and clean." Hardy also rejected as a "delusion" the belief that the difference between pure and applied mathematics had anything to do with their utility. Hardy regards as "pure" the kinds of mathematics that are independent of the physical world, but also considers some "applied" mathematicians, such as the physicists Maxwell and Einstein, to be among the "real" mathematicians, whose work "has permanent aesthetic value" and "is eternal because the best of it may, like the best literature, continue to cause intense emotional satisfaction to thousands of people after thousands of years." 
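As an aside, the two results just mentioned can be stated compactly (these standard formulations are added here for reference; they are not spelled out in the article itself). The Hardy–Weinberg principle says that for allele frequencies $p$ and $q = 1 - p$, the genotype frequencies

$$p^2 + 2pq + q^2 = 1$$

remain constant from generation to generation in a large, randomly mating population. The Hardy–Ramanujan asymptotic formula for the number of integer partitions is

$$p(n) \sim \frac{1}{4n\sqrt{3}}\, e^{\pi\sqrt{2n/3}} \quad \text{as } n \to \infty.$$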
Although he admitted that what he called "real" mathematics may someday become useful, he asserted that, at the time in which the Apology was written, only the "dull and elementary parts" of either pure or applied mathematics could "work for good or ill."

Attitudes and personality

Socially he was associated with the Bloomsbury group and the Cambridge Apostles; G. E. Moore, Bertrand Russell and J. M. Keynes were friends. He was an avid cricket fan and befriended the young C. P. Snow who was one also. He was at times politically involved, if not an activist. He took part in the Union of Democratic Control during World War I, and For Intellectual Liberty in the late 1930s.

Hardy was an atheist. Apart from close friendships, he had a few platonic relationships with young men who shared his sensibilities.^14 He was a lifelong bachelor, and in his final years he was cared for by his sister.

Hardy was extremely shy as a child, and was socially awkward, cold and eccentric throughout his life. During his school years he was top of his class in most subjects, and won many prizes and awards but hated having to receive them in front of the entire school. He was uncomfortable being introduced to new people, and could not bear to look at his own reflection in a mirror. It is said that, when staying in hotels, he would cover all the mirrors with towels.^15

Hardy's aphorisms

• No mathematician should ever allow himself to forget that mathematics, more than any other art or science, is a young man's game. (A Mathematician's Apology)
• It is never worth a first class man's time to express a majority opinion. By definition, there are plenty of others to do that.^16
• A mathematician, like a painter or a poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.
• Nothing I have ever done is of the slightest practical use.
• Hardy once told Bertrand Russell "If I could prove by logic that you would die in five minutes, I should be sorry you were going to die, but my sorrow would be very much mitigated by pleasure in the proof". Russell agreed with Hardy wholeheartedly about the delights of proofs, as he himself comments in his Autobiography.

Further reading

• Kanigel, Robert (1991). The Man Who Knew Infinity: A Life of the Genius Ramanujan. New York: Washington Square Press. ISBN 0-671-75061-5.
• Snow, C. P. (1967). Variety of Men. London: Macmillan.
{"url":"http://www.territorioscuola.com/wikipedia/en.wikipedia.php?title=G._H._Hardy","timestamp":"2014-04-16T14:47:23Z","content_type":null,"content_length":"130548","record_id":"<urn:uuid:e81593d5-f661-49b1-a9b9-4ba1775ad6b1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
To See If A Number, Say 562437487, Is Divisible... | Chegg.com

To see if a number, say 562437487, is divisible by 3, you just add up the digits of its decimal representation, and see if the result is divisible by 3. (5 + 6 + 2 + 4 + 3 + 7 + 4 + 8 + 7 = 46, so it is not divisible by 3.) To see if the same number is divisible by 11, you can do this: subdivide the number into pairs of digits, from the right-hand end (87, 74, 43, 62, 5), add these numbers, and see if the sum is divisible by 11 (if it's too big, repeat). How about 37? To see if the number is divisible by 37, subdivide it into triples from the end (487, 437, 562), add these up, and see if the sum is divisible by 37.

This is true for any prime p other than 2 and 5. That is, for any prime p ≠ 2, 5, there is an integer r such that in order to see if p divides a decimal number n, we break n into r-tuples of decimal digits (starting from the right-hand end), add up these r-tuples, and check if the sum is divisible by p.

(a) What is the smallest such r for p = 13? For p = 17?
(b) Show that r is a divisor of p − 1.
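A small Python sketch of the grouping test the problem describes (the function and its name are mine, not part of the problem); it reproduces the digit sum 46 for p = 3, r = 1, and the pair and triple sums for 11 and 37:

def grouped_digit_sum(n, r):
    # Split n into r-digit groups from the right and add the groups,
    # repeating until the running sum itself has at most r digits.
    base = 10 ** r
    while n >= base:
        total = 0
        while n > 0:
            n, group = divmod(n, base)
            total += group
        n = total
    return n

n = 562437487
print(grouped_digit_sum(n, 1))            # digit-sum path 46 -> 10 -> 1
print(grouped_digit_sum(n, 2) % 11 == 0)  # pair test for 11: False
print(grouped_digit_sum(n, 3) % 37 == 0)  # triple test for 37: False

The test is valid exactly when 10^r ≡ 1 (mod p), since then each r-digit group contributes its own value modulo p; that observation is also the key to part (b).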
{"url":"http://www.chegg.com/homework-help/see-number-say-562437487-divisible-3-add-digits-decimal-repr-chapter-1-problem-38-solution-9780073523408-exc","timestamp":"2014-04-18T18:21:48Z","content_type":null,"content_length":"28248","record_id":"<urn:uuid:46387359-8822-4d04-8a93-e5272cb25595>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00246-ip-10-147-4-33.ec2.internal.warc.gz"}
SAS-L archives -- August 2005, week 2 (#325)
LISTSERV at the University of Georgia

Date: Thu, 11 Aug 2005 11:42:41 -0700
Reply-To: Andre Bushmakin <bushmakin@MSN.COM>
Sender: "SAS(r) Discussion" <SAS-L@LISTSERV.UGA.EDU>
From: Andre Bushmakin <bushmakin@MSN.COM>
Organization: http://groups.google.com
Subject: Re: Data step code for smoothing via cubic spline interpolation?
Comments: To: sas-l@uga.edu
Content-Type: text/plain; charset="iso-8859-1"

Because you need to calculate derivatives, I think, the best way for you is to look at Numerical Differentiation. I would recommend part 20.7-1 (c) of the "Mathematical handbook for scientists and engineers: definitions, theorems, and formulas for reference and review" by Granino A. Korn and Theresa M. Korn. The part, I mentioned, has several formulas for numerical differentiation after smoothing (formulas are based on differentiation of smoothed polynomial approximation). For example:

y'[k] = (1/(12*d)) * ( (y[k-2] - y[k+2]) - 8*(y[k-1] - y[k+1]) ),

where d = x[k+2] - x[k+1] = x[k+1] - x[k] = ...

I implemented some of this in SAS, but I did it using IML.

William Turner wrote:
> I'm trying to find data step code that would perform smoothing of an X,Y
> series of data via cubic spline interpolation (or a similar spline
> interpolation). I'm trying to replicate the smoothing done by
> the "INTERPOL=SM<0-99>" option to the SYMBOL statement in SAS graph. I
> know there are a couple of procedures that can do this, but I want to
> perform the smoothing in a data step, continuously re-evaluating the y=f(x)
> function as each new record (i.e., set of X,Y values) is input. I would
> take a crack at writing it myself, but I must admit that I don't completely
> understand the mechanics of this procedure. My goal is to smooth the data
> sufficiently in order to identify local min and max values (where the first
> derivative is zero) and points of inflection (where the second derivative
> is zero).
>
> The data series are a fixed period counter (X) ranging from 1 to n, where
> it's always true that (0 < x1 < x2 < ... < xn), and an oscillating
> indicator that ranges between 0 and 15 (Y). The Y values are very noisy
> (which is why I'm looking for a good smoothing function). I want to find a
> solution that doesn't lag significantly, as a moving average approach would
> do. At the lowest level of X represented in the data, there can only be
> one value of Y.
>
> Any and all help or suggestions will be greatly appreciated. Thank you.
>
> Regards,
>
> William
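Not the SAS data-step solution the original poster was after, but here is a plain-Python rendering of the smoothed five-point derivative formula quoted above, useful for checking the arithmetic (the function name and the test data are mine):

def five_point_derivative(y, d):
    # Smoothed first derivative on an equally spaced grid with spacing d,
    # using the five-point formula quoted above; valid at interior indices
    # k = 2 .. len(y) - 3.
    return [((y[k - 2] - y[k + 2]) - 8 * (y[k - 1] - y[k + 1])) / (12 * d)
            for k in range(2, len(y) - 2)]

# Quick check on y = x^2, where the true derivative is 2x:
d = 0.1
xs = [i * d for i in range(10)]
ys = [x * x for x in xs]
print(five_point_derivative(ys, d))  # approximately [0.4, 0.6, ..., 1.4]

Since the formula differentiates a local polynomial fit, it is exact for polynomials of low degree, which the x^2 check illustrates.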
{"url":"http://listserv.uga.edu/cgi-bin/wa?A2=ind0508b&L=sas-l&D=0&F=P&O=D&P=36759","timestamp":"2014-04-25T05:34:24Z","content_type":null,"content_length":"11808","record_id":"<urn:uuid:94be930f-749a-4f47-943f-01a383c25d7d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Sammamish Geometry Tutor

...I've also programmed in VBA, most recently for an Excel update function. I was also responsible for our network at our satellite office working for GE, as well as web-based instruction on parts of the GE system. I taught spiral math at the high school level in the Peace Corps.
39 Subjects: including geometry, reading, English, algebra 1

...These are skills and lessons that I apply to every lesson when I work with students. Feel free to contact me if you have any questions or want more information - I look forward to hearing from you! I have worked as an Algebra teacher in Chicago and I thoroughly enjoy teaching the subject. I have...
27 Subjects: including geometry, chemistry, reading, writing

...I teach by breaking problems down into simple steps and keeping careful track of all quantities as we work. Working as a technical writer in the software industry, I wrote, edited, illustrated, and published professional documentation. I have a deep understanding of English grammar and usage, and a keen eye for readability.
18 Subjects: including geometry, chemistry, biology, algebra 2

...Throughout high school I tutored pre-calculus students. Working with them and going over multiple problems until they understood the concepts they were struggling with. I have also taken a leadership program at the University of Berkeley and through it gained skills to successfully lead others through their challenges.
15 Subjects: including geometry, reading, Spanish, piano

...And then, I give the student sample problems to solve independently and coach them further as needed. My main goal is to make sure the student is self-sufficient, and capable of using the methods on quizzes or tests. With respect to my educational background and work experience, I'm a Physiology major, and I just graduated from the University of Washington.
26 Subjects: including geometry, chemistry, calculus, physics
{"url":"http://www.purplemath.com/Sammamish_Geometry_tutors.php","timestamp":"2014-04-19T19:50:06Z","content_type":null,"content_length":"24054","record_id":"<urn:uuid:08551e0e-629b-4f9c-9a4c-6f9b675eab20>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Maywood, CA Calculus Tutor
Find a Maywood, CA Calculus Tutor

I am currently a senior Math major at Caltech. I tutored throughout high school (algebra, calculus, statistics, chemistry, physics, Spanish, and Latin) and tutored advanced math classes during college. Above all other things, I love to learn how other people learn and to teach people new things in...
28 Subjects: including calculus, Spanish, French, chemistry

...I excel at helping people feel they understand the material better: Previous clients have noted that I am able to take seemingly complex problems and make them very understandable. What this means for a client is that I identify weak points in ways that they think and teach them to reframe hard...
44 Subjects: including calculus, reading, chemistry, Spanish

...I have taken a class on numerical methods at Caltech that was done half in mathematica, half in Matlab. I am currently working on a physics research project studying the structure of a new type of material, a quasicrystal, and the code I am writing for the project is also in mathematica. I have taken a course in numerical methods that was partially taught in python.
26 Subjects: including calculus, physics, geometry, GRE

...I am a published poet, playwright, and essayist. This is all a really complicated way of saying I have a lot of hobbies and interests that have nothing to do with standardized testing. I like to create a tutoring environment that reflects this.
60 Subjects: including calculus, chemistry, reading, Spanish

I am an experienced tutor in math and science subjects. I have an undergraduate and a graduate degree in electrical engineering and have tutored many students before. I am patient and will always work with students to overcome obstacles that they might have.
37 Subjects: including calculus, chemistry, writing, English
{"url":"http://www.purplemath.com/Maywood_CA_calculus_tutors.php","timestamp":"2014-04-16T04:21:16Z","content_type":null,"content_length":"24057","record_id":"<urn:uuid:1bf4c308-852f-467e-83ec-b0eb897e63b1>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
How Much House Can I Afford?
Compound Interest
How Much Will I Need to Retire?
Simple Savings

Calculators are intended for illustrative purposes only.

Compound Savings Calculator
This is a compound interest calculator that lets you start with an amount of money and see how much it grows.
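For reference, the standard formula behind a compound-interest calculator like this one is A = P(1 + r/n)^(nt); a minimal sketch with illustrative figures of my own (not the bank's):

def compound(principal, annual_rate, years, periods_per_year=12):
    # Future value with periodic compounding: A = P * (1 + r/n)^(n*t)
    return principal * (1 + annual_rate / periods_per_year) ** (periods_per_year * years)

print(round(compound(1000, 0.05, 10), 2))  # $1000 at 5%, compounded monthly for 10 years: 1647.01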
{"url":"http://www.bankpbt.com/calculators.cfm?whichcalc=compound","timestamp":"2014-04-18T05:32:03Z","content_type":null,"content_length":"20225","record_id":"<urn:uuid:3a0600d9-6b43-4ae9-8cf4-abd27fca4add>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Higher Dimensional Algebra VII: Groupoidification
John C. Baez, Alexander E. Hoffnung, and Christopher D. Walker

Groupoidification is a form of categorification in which vector spaces are replaced by groupoids and linear operators are replaced by spans of groupoids. We introduce this idea with a detailed exposition of `degroupoidification': a systematic process that turns groupoids and spans into vector spaces and linear operators. Then we present three applications of groupoidification. The first is to Feynman diagrams. The Hilbert space for the quantum harmonic oscillator arises naturally from degroupoidifying the groupoid of finite sets and bijections. This allows for a purely combinatorial interpretation of creation and annihilation operators, their commutation relations, field operators, their normal-ordered powers, and finally Feynman diagrams. The second application is to Hecke algebras. We explain how to groupoidify the Hecke algebra associated to a Dynkin diagram whenever the deformation parameter $q$ is a prime power. We illustrate this with the simplest nontrivial example, coming from the $A_2$ Dynkin diagram. In this example we show that the solution of the Yang--Baxter equation built into the $A_2$ Hecke algebra arises naturally from the axioms of projective geometry applied to the projective plane over the finite field $\mathbb{F}_q$. The third application is to Hall algebras. We explain how the standard construction of the Hall algebra from the category of $\mathbb{F}_q$ representations of a simply-laced quiver can be seen as an example of degroupoidification. This in turn provides a new way to categorify - or more precisely, groupoidify - the positive part of the quantum group associated to the quiver.

Keywords: categorification, groupoid, Hecke algebra, Hall algebra, quantum theory
2000 MSC: 17B37, 20C08, 20L05, 81R50, 81T18
Theory and Applications of Categories, Vol. 24, 2010, No. 18, pp. 489-553.
{"url":"http://www.kurims.kyoto-u.ac.jp/EMIS/journals/TAC/volumes/24/18/24-18abs.html","timestamp":"2014-04-18T08:24:51Z","content_type":null,"content_length":"3330","record_id":"<urn:uuid:26c3005d-1a93-4f46-863f-bfb7a41ca36e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
Petri Nets

Title: Petri Nets
Code: PES
Ac. Year: 2011/2012
Term: Summer
Language: Czech
Public info: http://www.fit.vutbr.cz/study/courses/PES/public/
Private info: http://www.fit.vutbr.cz/study/courses/PES/private/
Credits: 5
Completion: examination (written & verbal)

Type of instruction (hours per semester): Lectures 39, Seminar exercises 0, Laboratory exercises 0, Computer exercises 6, Other 7.
Points: Examination 51, Tests 29, Exercises 20, Laboratories 0, Other 0.

Guarantee: Češka Milan, prof. RNDr., CSc., DITS
Lecturer: Češka Milan, prof. RNDr., CSc., DITS
Instructor: Rogalewicz Adam, Mgr., Ph.D., DITS
Faculty: Faculty of Information Technology BUT
Department: Department of Intelligent Systems FIT BUT

Learning objectives:
To acquire the basic concepts and methods of the theory of Petri nets and its applications in system modelling, design, and verification. To be able to practically use Petri net-based computer-aided tools in typical applications.

Basic concepts of Petri nets and their use in modelling of discrete-event systems, classification of Petri nets, the theory of C/E Petri nets, methods of analysis of C/E Petri nets, the theory of P/T Petri nets, methods of analysis of P/T Petri nets, Petri net languages, computability and complexity of Petri net-related problems, the problem of an automatic synthesis of Petri nets, restrictions and extensions of P/T Petri nets, coloured Petri nets, hierarchical and object-oriented Petri nets, Petri net-based tools, applications of Petri nets.

Knowledge and skills required for the course:
Basic knowledge of discrete mathematics concepts including graph theory and formal languages concepts, basic concepts of algorithmic complexity, and principles of computer modelling.

Subject specific learning outcomes and competences:
The acquired knowledge and experience will allow the students to actively use Petri nets and the computer-aided tools based on them in modelling, design, verification, and implementation of various classes of systems. Based on the acquired theoretical knowledge, the student is able to transfer approaches of the Petri net theory to the domain of other formal models too.

Generic learning outcomes and competences:
Abilities to apply and develop advanced information technologies based on suitable formal models, to propose and use such models and theories for automating the design, implementation, and verification of computer-based systems.

Syllabus of lectures:
1. An introduction to Petri nets, their philosophy and applications, the notion of a net and of the derived basic terms.
2. Condition/Event (C/E) Petri nets, cases and steps, the state space of C/E systems, cyclic and live C/E systems, equivalence of C/E systems.
3. Contact-free C/E systems, complementation, case graphs and their application for analysing C/E systems.
4. Processes of C/E systems, occurrence nets, properties and composition of processes.
5. Complementation of C/E systems, the synchronic distance, special synchronic distances, C/E systems and the propositional calculus, facts.
6. Place/Transition (P/T) Petri nets, their definition, evolution rules, their state space, basic analytical problems (safety, boundedness, conservativeness, liveness).
7. Representing the possibly infinite state space of Petri nets by a reachability tree, computing and using reachability trees for analysing P/T Petri nets.
8. P and T invariants of P/T Petri nets, their definition, the ways of computing them and using them for analysing P/T Petri nets.
9. Subclasses and extensions of P/T Petri nets, state machines, marked graphs, free-choice Petri nets, Petri nets with inhibitors, timed and stochastic Petri nets.
10. The notion of a Petri net language, types of such languages, their closure properties, their relation to the Chomsky hierarchy. Computability and complexity of some selected Petri net-related problems.
11. Coloured Petri nets (CPNs), their basic modelling primitives, an inscription language, CPN Design as an example of a tool based on CPNs.
12. Analysis of CPNs, occurrence graphs, invariants, and their use in analysing systems.
13. Hierarchical and object-oriented Petri nets, basic concepts of a hierarchical design, substitution and invocation, adding object-oriented features on top of Petri nets, PNtalk as a language based on object-oriented Petri nets.

Syllabus - others, projects and individual work of students:
1. An application of C/E systems.
2. An application of P/T Petri nets.
3. An application of CPNs.
4. An application of object-oriented Petri nets.

Each project implies modelling of a non-trivial system (or its part) by means of a Petri net of the given class and its simulation, analysis, and verification. Suitable computer-aided tools (e.g., PESIM, INA, PEP, TimeNET, CPN Design, Maria, PNtalk, etc.) will be used in the projects.

Fundamental literature:
1. Reisig, W.: Petri Nets, An Introduction, Springer Verlag, 1985. ISBN 0-387-13723-8
2. Jensen, K.: Coloured Petri Nets, Basic Concepts, Analysis Methods and Practical Use, Springer Verlag, 1993. ISBN 3-540-60943-1
3. Girault, C., Valk, R.: Petri Nets for Systems Engineering: A Guide to Modeling, Verification, and Applications, Springer Verlag, 2002. ISBN 3-540-41217-4
4. Desel, J., Reisig, W., Rozenberg, G.: Lectures on Concurrency and Petri Nets, Advances in Petri Nets, Lecture Notes in Computer Science, vol. 3098, Springer Verlag, 2004. ISBN 3-540-22261-8

Study literature:
1. Reisig, W.: Petri Nets, An Introduction, Springer Verlag, 1985. ISBN 0-387-13723-8
2. Jensen, K.: Coloured Petri Nets, Basic Concepts, Analysis Methods and Practical Use, Springer Verlag, 1993. ISBN 3-540-60943-1

Controlled instruction:
A written mid-term exam, a regular evaluation of projects.

Progress assessment:
A mid-term exam evaluation and an evaluation of projects.
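To make the "evolution rules" of P/T nets from the syllabus concrete, here is a minimal Python sketch of my own (not course material): a marking maps places to token counts, and a transition fires by consuming its preset tokens and producing its postset tokens.

def enabled(marking, pre):
    # A transition is enabled if every input place holds enough tokens.
    return all(marking.get(p, 0) >= w for p, w in pre.items())

def fire(marking, pre, post):
    # Fire an enabled transition: consume 'pre' tokens, produce 'post' tokens.
    assert enabled(marking, pre), "transition not enabled"
    m = dict(marking)
    for p, w in pre.items():
        m[p] -= w
    for p, w in post.items():
        m[p] = m.get(p, 0) + w
    return m

# Toy net: transition t moves one token from place p1 to place p2.
m0 = {"p1": 2, "p2": 0}
m1 = fire(m0, {"p1": 1}, {"p2": 1})
print(m1)  # {'p1': 1, 'p2': 1}

Enumerating all markings reachable by repeated firing is exactly the state-space (reachability) analysis listed in lectures 6 and 7.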
{"url":"http://www.fit.vutbr.cz/study/courses/index.php?id=8139","timestamp":"2014-04-17T01:17:42Z","content_type":null,"content_length":"15451","record_id":"<urn:uuid:ad707c34-6530-4376-aa80-ef5b7da99c47>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
N Miami Beach, FL SAT Math Tutor
Find a N Miami Beach, FL SAT Math Tutor

...I graduated from Purdue University with a BS in Biology. I have worked in research for about 5 years and am very knowledgeable in the subject. I taught Geometry in high school.
18 Subjects: including SAT math, chemistry, calculus, geometry

...It is my mission to make sure you fully understand the material and to provide you with tools and techniques to use when I'm not around. I look forward to working with you! Jesse
I scored a 730 on the Math section of my SAT.
8 Subjects: including SAT math, chemistry, biology, GED

...I am also able to tutor for standardized exams including the ACT, SAT Reasoning Test, and the MCAT. For every session, I will prepare in advance of the session with any information I am provided, and will closely go over any problems the student is having in order to both provide the student wit...
32 Subjects: including SAT math, chemistry, calculus, physics

I have a degree in Elementary Education and experience teaching in a variety of settings, but my favorite way to teach is one-on-one. What I like about tutoring is that I can monitor student progress closely and modify lessons immediately as needed. What I LOVE about tutoring is getting to be ther...
25 Subjects: including SAT math, reading, ESL/ESOL, algebra 1

...As a senior, I was nominated for the Excellence in Tutoring Award. Additionally, I worked as a discussion leader for both general and organic chemistry where I led students through problem sets and answered any questions they may have had. Finally, I worked as a chemistry laboratory teaching assistant (TA) for two years and was recognized for my work by receiving a TA Excellence Award.
14 Subjects: including SAT math, chemistry, calculus, geometry
{"url":"http://www.purplemath.com/N_Miami_Beach_FL_SAT_Math_tutors.php","timestamp":"2014-04-16T04:11:59Z","content_type":null,"content_length":"24249","record_id":"<urn:uuid:0658829f-5bd4-4e32-a91b-3ab3572cab5e>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00119-ip-10-147-4-33.ec2.internal.warc.gz"}
Arthur Geoffrey Walker

Born: 17 July 1909 in Watford, Hertfordshire, England
Died: 31 March 2001 in Chichester, Sussex, England

Geoffrey Walker's father was Arthur John Walker (born in Watford, Hertfordshire in 1879) who was a coach builder. His mother was Eleanor Joanna Gosling (born in Watford, Hertfordshire in 1879; died 1946). Geoffrey Walker had an older brother Henry Brian Walker who was born about 1907 and died in Hendon, Middlesex in 1968.

Geoffrey Walker attended Watford Grammar School and from there, having won a mathematics scholarship in his final year, he entered Balliol College, Oxford. He received a First Class degree from Oxford in 1931, having specialised in differential geometry for which he won a special distinction. He remained at Merton College, supported by a Harmsworth scholarship during 1932-34 and he also held an Oxford University senior mathematics scholarship in 1933. In his final years at Oxford he was greatly influenced by Milne.

Leaving Oxford, he then moved to Edinburgh to undertake research. After submitting his Ph.D. thesis to Edinburgh, Walker was examined by Eddington. After the award of his doctorate he was appointed as a lecturer in mathematics at Imperial College in London. This was a post for the academic year 1935-36 and after completing this temporary appointment he received his first permanent post as a lecturer in mathematics at Liverpool University. He was appointed in 1936 and was to remain in Liverpool until 1947 when he was offered the chair of mathematics at the University of Sheffield.

In 1952, after five years in Sheffield, Walker was to return to Liverpool University, this time as Professor of Pure Mathematics. He held this post until he retired in 1974. However, the pressures of administration in this post restricted his time for research:-

He was a very able administrator and had the happy gift of "reading down the diagonal", as he termed it. This meant that when presented with a massive document he could extract the essential features in a very short time. His colleagues had great respect for his integrity, and as a result he found himself on numerous committees which diminished his time and energy for research.

Walker worked on geometry, in particular differential geometry, relativity, and cosmology. His papers include ones on relativistic mechanics, completely symmetric spaces, completely harmonic spaces and Riemannian manifolds. He wrote an article Note on locally symmetric vector fields in a Riemannian space, published in 1976, in memory of Evan Tom Davies. This is concerned with the restrictions imposed on a Riemannian n-space by the existence of a locally symmetric vector field and it continues work begun by Walker in a paper on possible orientation of galaxies published early in his career in 1940.

In 1962 Walker published Harmonic Spaces, a joint work with H S Ruse and T J Willmore. In 1975 he published Introduction to geometrical cosmology, a survey which arose out of a course that Walker gave at the University of Arizona. The lectures consider the red-shift, the number of galaxies, and the distance between galaxies. Walker writes in the introduction:-

This is an account of a course of 12 lectures given at the University of Arizona on the geometry of cosmology. It is entirely concerned with what might be called the classical theory, leading up to and discussing the standard model with the Robertson-Walker metric; it contains no new results though some of the methods may not have appeared in print.
The Robertson-Walker metric which Walker mentions in this quotation arose from joint work which he did with his colleague H P Robertson in the late 1930s. Their work put Friedmann's theories of an expanding universe on a sound mathematical foundation and still forms the basis for models of the universe in modern cosmology.

Walker was elected both a fellow of the Royal Society of Edinburgh and a fellow of the Royal Society of London. The Royal Society of Edinburgh honoured Walker by awarding him their Keith Medal in 1950. The election to the Royal Society of London took place in 1955 and he served on the Council of the Society in 1962-63. He was also a strong supporter of the London Mathematical Society: he was awarded the Junior Berwick Prize of that Society in 1947 and served as its 50th President in 1963-65.

In [1] some of Walker's interests outside mathematics are recorded:-

A popular head of department, he will be remembered as one of the most powerful of British differential geometers, but he was also outstanding as a table-tennis player, and some proficiency at the game was sometimes said to be a necessary qualification for employment as a lecturer in Liverpool. Unknown to most of his colleagues, he and his wife Phyllis were accomplished ballroom dancers, and he once surprised a friend by saying that he had won more prizes for dancing than he had for mathematics.

Article by: J J O'Connor and E F Robertson, October 2003
MacTutor History of Mathematics
{"url":"http://www-gap.dcs.st-and.ac.uk/~history/Printonly/Walker_Arthur.html","timestamp":"2014-04-20T13:20:39Z","content_type":null,"content_length":"5998","record_id":"<urn:uuid:f14877d3-f4e1-4f6b-a816-a7e5a157dbd5>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00382-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2004 [00028]

Re: pair sums applied to trignometry sums

• To: mathgroup at smc.vnet.net
• Subject: [mg52537] Re: pair sums applied to trignometry sums
• From: Roger Bagula <tftn at earthlink.net>
• Date: Wed, 1 Dec 2004 05:57:59 -0500 (EST)
• References: <200411290622.BAA27977@smc.vnet.net> <cohigl$1i5$1@smc.vnet.net>
• Reply-to: tftn at earthlink.net
• Sender: owner-wri-mathgroup at wolfram.com

Daniel Lichtblau wrote:

>It occurs to me that these functions might be simplified, as they are
>each themselves sums of pairs of functions with terms satisfying simple
>recurrences. For example, fs can be written as the sum of n-even + n-odd
>terms, and these are just the sums of terms 1/(2*k+1)*x^(4*k+1)/(4*k+1)!
>and (-1)*(2*k+1)/(2*k+2)*x^(4*k+3)/(4*k+3)! respectively.
>In more detail we get the function below.
>InputForm[fsin2[x_] = Together[-Sum[(2*k+1)/(2*k+2)*x^(4*k+3)/(4*k+3)!,
>{k,0,Infinity}] +
> Sum[1/(2*k+1)*x^(4*k+1)/(4*k+1)!, {k,0,Infinity}]]]
>Out[10]//InputForm= (-4 + 4*Cosh[x] + x*Sin[x] - x*Sinh[x])/(2*x)
>(Isn't it great to have a symbolic math engine at ones fingertips?)
>As a quick check:
>In[11]:= InputForm[Max[Abs[Table[fsin2[x]-fsin[x], {x,-Pi,Pi,.1}]]]]
>Out[11]//InputForm= 3.372302437298913*^-15
>(Isn't it great to have a numeric math engine at ones fingertips?)
>The advantage to using the closed form is twofold. One is that numeric
>computations are better, and the other is that they are significantly
>faster. To see the latter:
>In[5]:= Timing[Plot[fsin[x],{x,-Pi,Pi}]]
>Out[5]= {0.3 Second, -Graphics-}
>In[6]:= Timing[Plot[fsin2[x],{x,-Pi,Pi}]]
>Out[6]= {0.01 Second, -Graphics-}
>For the former, just notice what happens when we get outside the range
>-Pi<x<Pi, for example the interval {15*Pi,16*Pi}.
>Daniel Lichtblau
>Wolfram Research

Dear Daniel Lichtblau,

There is a reason for using the specific pair {1/(n+1),n/(1+n)}. It stems from group theory and functional inversion. The pair function {1/(1+x),x/(1+x)} is connected to the Farey tree functions by functional inversion:
x/(1-x) is the functional inverse of x/(1+x)
(1-x)/x is the functional inverse of 1/(1+x)
There is a larger group that contains these Farey tree transforms too, called the anharmonic group:
{x, 1/x, 1/(x-1), x-1, x/(1-x), (1-x)/x}
and it "implies" a functional inversion group of:
I have done a lot of work in this area in the last few years. I'm glad someone else has realized that these functions are important besides me. There should be a number of ways to do them, as is usual when dealing with fundamentals in mathematics. I chose the ones I did because they are a torus under the inverse substitution:
{1/(1+x),x/(1+x)} /. x -> 1/x
I thank you for your work and I'll see if they work on my version of Mathematica.
The anharmonic group connection makes me believe that there are further "subharmonic" functions yet to be discovered.

Respectfully,
Roger L. Bagula
tftn at earthlink.net, 11759 Waterhill Road, Lakeside, CA 92040-2905, tel: 619-5610814
alternative email: rlbtftn at netscape.net
URL: http://home.earthlink.net/~tftn
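A cross-check one might run outside Mathematica (an added editorial sketch, not part of the archived thread; the function names below are invented for the illustration): it compares partial sums of the pair-sum series quoted above against the closed form Daniel Lichtblau derives, over -pi < x < pi.

    # Numeric comparison of the quoted series against the quoted closed form.
    import math

    def fsin_series(x, terms=20):
        # sum of 1/(2k+1) * x^(4k+1)/(4k+1)!  minus  (2k+1)/(2k+2) * x^(4k+3)/(4k+3)!
        s = 0.0
        for k in range(terms):
            s += x**(4*k + 1) / ((2*k + 1) * math.factorial(4*k + 1))
            s -= (2*k + 1) * x**(4*k + 3) / ((2*k + 2) * math.factorial(4*k + 3))
        return s

    def fsin_closed(x):
        # the Together[...] result quoted in the post
        return (-4 + 4*math.cosh(x) + x*math.sin(x) - x*math.sinh(x)) / (2*x)

    worst = max(abs(fsin_series(x/10) - fsin_closed(x/10))
                for x in range(-31, 32) if x != 0)
    print(worst)  # should be tiny (~1e-15) if the quoted closed form is right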
{"url":"http://forums.wolfram.com/mathgroup/archive/2004/Dec/msg00028.html","timestamp":"2014-04-20T11:07:56Z","content_type":null,"content_length":"37321","record_id":"<urn:uuid:5637b90f-f1c2-4a5c-a074-0f47d173e8b9>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00026-ip-10-147-4-33.ec2.internal.warc.gz"}
Counting electrons in space

MSSL Planetary Science Nuggets

Space isn't really empty. In reality it's filled with particles that can be measured by instruments on spacecraft. But there aren't that many of them, so special techniques need to be used to work out the density and temperature of the particles surrounding the planets, and our spacecraft.

Spacecraft missions are constrained by the weight of the payload, and so it is impractical to have separate instruments to measure all the properties of the space plasma surrounding the spacecraft. Because of this weight limitation, space plasma physicists have developed techniques to calculate the basic plasma parameters simply from counting the number of electrons over a range (spectrum) of energies.

If we counted the number of particles in a box with a side of one metre in space, we would know the number density of the particles in units of particles per metre cubed: simply the number of particles divided by the volume of the box. We might also count the number of particles in this box that are moving at different speeds in different directions. To a space plasma physicist, this combination is called "velocity space".

Our instruments, like the Cassini Electron Spectrometer in the picture on the right, count how many electrons there are at different energies and allow us to measure how many particles there are in different regions of velocity space - so how many particles there are in certain regions of space and what speeds they are moving at in different directions. We call this "phase space density". Using a mathematical operation called integration we can calculate the density, pressure, temperature and velocity of these electrons. These measurements are important in understanding the space environment of the planets, how they are coupled to the Sun and how important the various moons are.

For more information see:
Lewis, G.R., N. André, C.S. Arridge, A.J. Coates, L.K. Gilbert, D.R. Linder (2008) Derivation of density and temperature from the Cassini-Huygens CAPS electron spectrometer. Planet. Space Sci., 56 (7), 901-912, doi:10.1016/j.pss.2007.12.017.
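To make the integration step concrete, here is a small illustrative sketch (not MSSL's actual analysis code; the distribution and its parameters are assumed test values) that recovers the number density as the zeroth velocity moment of a sampled, isotropic phase space density f(v):

    import numpy as np

    n_true = 1.0e6          # electrons per cubic metre (assumed test value)
    v_th   = 1.0e6          # thermal speed in m/s (assumed test value)

    # Isotropic Maxwellian phase space density f(v), units s^3 m^-6
    v = np.linspace(1.0, 8.0e6, 4000)          # speed grid, m/s
    f = n_true * (np.pi * v_th**2)**-1.5 * np.exp(-(v / v_th)**2)

    # Zeroth moment: n = integral of f over velocity space = int 4*pi*v^2 f(v) dv
    n_est = np.trapz(4.0 * np.pi * v**2 * f, v)
    print(n_est)   # recovers approximately 1.0e6, the density we put in

Higher moments (velocity, pressure, temperature) follow the same pattern with extra factors of v inside the integral.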
{"url":"http://www.ucl.ac.uk/mssl/planetary-science/nuggets/mssl-planetary-science-nuggets/grl-density/","timestamp":"2014-04-20T03:30:10Z","content_type":null,"content_length":"17942","record_id":"<urn:uuid:20d1aa23-c872-4ce5-8339-0edfb32fa1e6>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
What do the Coefficients in a Multiple Linear Regression Mean?

The regression coefficient for the i-th predictor is the expected difference in response per unit difference in the i-th predictor, all other things being equal. That is, if the i-th predictor is changed 1 unit while all of the other predictors are held constant, the response is expected to change b[i] units. As always, it is important that cross-sectional data not be interpreted as though they were longitudinal.

The regression coefficient and its statistical significance can change according to the other variables in the model. Among postmenopausal women, it has been noted that bone density is related to weight. In this cross-sectional data set, density is regressed on weight, body mass index, and percent ideal body weight^*. These are the regression coefficients for the 7 possible regression models predicting bone density from the weight measures.

            (1)       (2)       (3)       (4)       (5)       (6)       (7)
Intercept   0.77555   0.77264   0.77542   0.77065   0.74361   0.77411   0.75635
WEIGHT      0.00642   .         0.00723   0.00682   0.00499   .         .
BMI        -0.00610  -0.04410   .        -0.00579   .         0.01175   .
PCTIDEAL    0.00026   0.01241  -0.00155   .         .         .         0.00277

Not only do the magnitudes of the coefficients change from model to model, but for some variables the sign changes, too^**.

For each regression coefficient, there is a t statistic. The corresponding P value tells us whether the variable has statistically significant predictive capability in the presence of the other predictors. A common mistake is to assume that when many variables have nonsignificant P values they are all unnecessary and can be removed from the regression equation. This is not necessarily true. When one variable is removed from the equation, the others may become statistically significant. Continuing the bone density example, the P values for the predictors in each model are

            (1)       (2)       (3)       (4)       (5)       (6)       (7)
WEIGHT      0.1733    .         0.0011    <.0001    <.0001    .         .
BMI         0.8466    0.0031    .         0.1960    .         <.0001    .
PCTIDEAL    0.9779    0.0002    0.2619    .         .         .         <.0001

All three predictors are related, so it is not surprising that model (1) shows that all of them are nonsignificant in the presence of the others. Given WEIGHT and BMI, we don't need PCTIDEAL, and so on. Any one of them is superfluous. However, as models (5), (6), and (7) demonstrate, all of them are highly statistically significant when used alone.

The P value from the ANOVA table tells us whether there is predictive capability in the model as a whole. All four combinations in the following table are possible.

                                    Overall F
                             Significant     Not significant
Individual t  Significant     possible         possible
              NS              possible         possible

• Cases where the t statistic for every predictor and the F statistic for the overall model are statistically significant are those where every predictor has something to contribute.
• Cases where nothing reaches statistical significance are those where none of the predictors are of any value.
• This note has shown that it is possible to have the overall F ratio statistically significant and all of the t statistics nonsignificant.
• It is also possible to have the overall F ratio nonsignificant and some of the t statistics significant. There are two ways this can happen.
  □ First, there may be no predictive capability in the model. However, if there are many predictors, statistical theory guarantees that on average 5% of them will appear to have statistically significant predictive capability when tested individually.
  □ Second, the investigator may have chosen the predictors poorly. If one useful predictor is added to many that are unrelated to the outcome, its contribution may not be large enough for the overall model to appear to have statistically significant predictive capability. A contribution that might have reached statistical significance when viewed individually might not make it out of the noise when viewed as part of the whole.

^* In general, great care must be used when using a predictor such as body mass index or percent ideal body weight that is a ratio of other variables. This will be discussed in detail later.

^** This touches on another point, too important to be left buried here: It is not always easy to guess/know what the sign of a regression coefficient will be when a predictor is correlated with other variables in the model. Consider model (2), for example. Both predictors are statistically significant. On their own, bone density goes up and down as they go up and down [models (6) & (7)]. Yet, when they appear in a model together, bone density goes down as BMI increases with PCTIDEAL held constant! It is sometimes said that BMI is "correcting" for PCTIDEAL, which sounds good, but really isn't much help determining what will happen at the outset.

Copyright © 2000 Gerard E. Dallal
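The behaviour described in this note can be reproduced on synthetic data. The following is a hedged sketch (the bone-density data are not available here, so the numbers and variable names below are invented for the illustration), fitting each predictor alone and then both together:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 50
    x1 = rng.normal(size=n)
    x2 = x1 + 0.05 * rng.normal(size=n)       # nearly collinear with x1
    y  = 1.0 + 0.5 * x1 + rng.normal(scale=0.5, size=n)

    for cols in ([x1], [x2], [x1, x2]):
        X = sm.add_constant(np.column_stack(cols))
        fit = sm.OLS(y, X).fit()
        print(np.round(fit.params, 3),
              np.round(fit.pvalues, 3),
              round(fit.f_pvalue, 4))        # overall F test

    # Alone, x1 and x2 each get small P values; fitted together, the
    # individual t statistics can lose significance even though the
    # overall F test stays significant - mirroring the note's point.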
{"url":"http://www.jerrydallal.com/LHSP/regcoef.htm","timestamp":"2014-04-18T23:15:48Z","content_type":null,"content_length":"7233","record_id":"<urn:uuid:64069346-ca83-4603-b164-e4f7cdd30848>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00388-ip-10-147-4-33.ec2.internal.warc.gz"}
Self-Organizing Nets
James Matthews

After a detailed look at supervised networks (see Perceptrons, Back-propagation and Associative Networks) we should look at a good example of unsupervised networks. The Kohonen network is probably the best example, because it is quite simple yet introduces the concepts of self-organization and unsupervised training easily.

There are many researchers who require biological plausibility in proposed neural network models, especially since the aim of (most) networks is to emulate the brain. It is generally accepted that perceptrons, back-propagation and many other techniques are not biologically plausible. With the demand for biological plausibility rising, the concept of self-organizing networks became a point of interest among researchers. Self-organizing networks can be either supervised or unsupervised, and have four additional properties:

○ Each weight is representative of a certain input.
○ Input patterns are shown to all neurons simultaneously.
○ Competitive learning: the neuron with the largest response is chosen.
○ A method of reinforcing the competitive learning.

Unsupervised Learning

Unsupervised learning allows the network to find its own energy minima (see Associative Networks for an explanation of energy) and is therefore more efficient with pattern association. Obviously, the disadvantage is that it is then up to the program/user to interpret the output. There are quite a few types of self-organizing networks, like the Instar-Outstar network, the ART-series, and the Kohonen network. For purposes of simplicity, we will look at the Kohonen network.

Kohonen Networks

The term Kohonen network is a slightly misleading one, because the researcher Teuvo Kohonen in fact researched many kinds of network, but only a small number of these are called Kohonen networks. We will look at the idea of self-organizing maps, networks that attempt to map their weights to the input data. The Kohonen network is an n-dimensional network, where n is the number of inputs. For simplicity, we will look at a 2-dimensional network. A schematic architecture of an example network might look like this: [network diagram not preserved in this copy]

The above picture only shows a little bit of the network, but you can see how every neuron gets the same input, and there is one output per neuron. To help us visualize the problem of mapping the weights, imagine that all the weights of the network were initialized to a random value. The weights are then graphed on a standard Cartesian graph, and connected with adjacent neurons. These lines are merely schematic and do not represent connections within the net itself.

The network is trained by presenting it with random points. The neuron that has the largest response is reinforced by the learning algorithm. Furthermore, the surrounding neurons are also reinforced (this is explained in much greater depth later). This has the effect of "pulling" and "spreading" the network across the training data. Nothing beats a diagram and a Java applet at this point.

Figure 1: TL: Initial iteration, TR: 100 iterations, BL: 200 iterations, BR: 500 iterations. Click on the image to open the applet.

This method can also be applied to other similar situations. For example, here are two Kohonen networks applied to an F-14 Tomcat and a cactus!

Figure 2: Mapping a Kohonen Network to a bitmapped image.

Rules and Operation

Now that you can visualize what the network is doing, let us look at how it does it. The basic idea behind the Kohonen network is competitive learning.
The neurons are presented with the inputs, each neuron calculates its net (weighted sum), and the neuron with the closest output magnitude is chosen to receive additional training. Training, though, does not just affect the one neuron but also its neighbours.

So, how does one judge what the 'closest output magnitude' is? One way is to find the distance between the input and the weight vector of the neuron:

d = sqrt( sum over i of (x_i - w_i)^2 )

Notice that when applied to our 2-dimensional network, it reduces down to the standard Euclidean distance formula. So, if we want the output that most closely represents the input pattern, it is the neuron with the smallest distance. Let us call the neuron with the least distance x[d0]. Now, remember that we change both the neuron and the neurons in its neighbourhood N[x]. N[x] is not constant; it can range from the entire network down to just the 8 adjacent neurons. We will talk about the neighbourhood soon.

Kohonen learning is very simple, following a familiar equation:

w_i(new) = w_i(old) + k * (x_i - w_i(old))

Where k is the learning coefficient. So all neurons in the neighbourhood N[x] of neuron x[d0] have their weights adjusted.

So how do we adjust k and N[x] during training? This is an area of much research, but Kohonen has suggested splitting the training up into two phases. Phase 1 will reduce down the learning coefficient from 0.9 to 0.1 (or similar values), and the neighbourhood will reduce from half the diameter of the network down to the immediately surrounding cells (N[x] = 1). Following that, Phase 2 will reduce the learning coefficient from perhaps 0.1 to 0.0, but over double or more the number of iterations in Phase 1. The neighbourhood value is fixed at 1. You can see that the two phases allow firstly the network to quickly 'fill out the space', with the second phase fine-tuning the network to a more accurate representation of the space. Refer back to the diagram: the bottom left picture actually shows the network right after Phase 1 has finished, with the bottom right one after the second phase is complete.

Interpreting Output and Applications

Interpreting Kohonen networks is quite easy, since only one neuron will fire per input set after training. Therefore, it is a case of classifying the outputs. For example, if this neuron fires do this - or if this group of neurons fire do this, etc. Kohonen networks have been successfully applied to speech recognition, fitting since it was cognitive networks that inspired self-organizing networks in the first place. Kohonen networks can also be well applied to gaming - by expanding the dimensionality (number of inputs) you can create much more complicated mappings, far beyond the rudimentary example explained above.

Last Updated: 24/10/2004
Article content copyright © James Matthews, 2004.
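For readers who would rather see the two-phase procedure as code than as the applet, here is a minimal sketch in Python. It is an editorial illustration of the update rule and schedule described above, not the article's original Java implementation, and all names in it are invented:

    import numpy as np

    rng = np.random.default_rng(1)
    grid = 6                                   # 6x6 map of neurons
    w = rng.random((grid, grid, 2))            # one 2-D weight vector per neuron

    def train(points, iters, k_start, k_end, n_start, n_end):
        for t in range(iters):
            x = points[rng.integers(len(points))]
            frac = t / max(iters - 1, 1)
            k = k_start + frac * (k_end - k_start)                   # learning coefficient
            radius = int(round(n_start + frac * (n_end - n_start)))  # neighbourhood N[x]
            # winning neuron: smallest Euclidean distance to the input
            d = np.linalg.norm(w - x, axis=2)
            i, j = np.unravel_index(np.argmin(d), d.shape)
            # reinforce the winner and its neighbourhood: w += k * (x - w)
            lo_i, hi_i = max(i - radius, 0), min(i + radius + 1, grid)
            lo_j, hi_j = max(j - radius, 0), min(j + radius + 1, grid)
            w[lo_i:hi_i, lo_j:hi_j] += k * (x - w[lo_i:hi_i, lo_j:hi_j])

    data = rng.random((1000, 2))               # uniform square, as in Figure 1
    train(data, 500, 0.9, 0.1, grid // 2, 1)   # phase 1: coarse spreading
    train(data, 1000, 0.1, 0.0, 1, 1)          # phase 2: fine tuning

Plotting the weight grid before and after training reproduces the "pulling and spreading" effect shown in Figure 1.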
{"url":"http://www.generation5.org/content/1999/selforganize.asp","timestamp":"2014-04-20T16:01:37Z","content_type":null,"content_length":"15114","record_id":"<urn:uuid:ea710d5b-04a8-4f17-92f8-2bb8da2c8012>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Tools Discussion: Web Site Help, Algebra 1 Teaching

Discussion: Web Site Help
Topic: Algebra 1 Teaching

Subject: RE: Algebra 1 Teaching
Author: cs4114
Date: Apr 16 2007

One textbook to look at is Discovering Algebra, which is published by Key Curriculum Press. I truly believe that a functions-based algebra course with emphasis on multiple representations is very important. For example, solving a linear equation is a traditional Algebra I topic which should be supported by tables and graphs, not just mindless symbolic manipulation. Lots of problems in real world situations. There are many websites with some very good instructional strategies.
{"url":"http://mathforum.org/mathtools/discuss.html?context=dtype&do=r&msg=30850","timestamp":"2014-04-17T22:04:59Z","content_type":null,"content_length":"15669","record_id":"<urn:uuid:cce46223-febb-42a7-a1c6-a173324afe3c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
Parkville Math Tutor

Find a Parkville Math Tutor

...Not only with professional knowledge in chemistry, I also have lots of experience in teaching and in helping students to learn efficiently. The most important reason that lots of my students love my chemistry class is that I make chemistry easy to learn. I am very good at turning difficult or ...
22 Subjects: including algebra 1, ACT Math, Chinese, discrete math

...I have a passion for making learning fun for students, providing them with the tools to succeed, and setting them up for academic success in the future. I am an eighth grade special educator who encounters students with ADHD on a daily basis. I have a degree from Towson University in Secondary Special Education.
45 Subjects: including algebra 1, algebra 2, American history, elementary (k-6th)

I have a PhD in Human Genetics from Johns Hopkins. Currently, I'm teaching a course in Synthetic Biology at Hopkins and I love working with students. This past year, I've had the privilege of working with 2 very talented high school students in my course.
13 Subjects: including precalculus, algebra 1, SAT math, prealgebra

...I am proficient in Math Skills Training for students, and have much experience working with students who have "math anxiety." Experience (7+ years): I have tutored students in Statistics and Remedial Math at California State University, as well as students at the Community College of Baltimore Cou...
31 Subjects: including algebra 1, algebra 2, elementary (k-6th), vocabulary

...I am qualified to tutor: PreAlgebra: taught this skill to students without disabilities at my own business. I also provided learning supports for students in general education programs; Phonics: taught this skill as part of elementary education (special); and Reading: taught this skill as part o...
7 Subjects: including algebra 1, prealgebra, reading, elementary math
{"url":"http://www.purplemath.com/parkville_math_tutors.php","timestamp":"2014-04-19T07:05:57Z","content_type":null,"content_length":"23771","record_id":"<urn:uuid:9f29a00b-26bd-42f5-b080-c423fb1e1ced>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Comment recorded on the 25 June 'Starter of the Day' page by Inger.kisby@herts and essex.herts.sch.uk, : "We all love your starters. It is so good to have such a collection. We use them for all age groups and abilities. Have particularly enjoyed KIM's game, as we have not used that for Mathematics before. Keep up the good work and thank you very much Best wishes from Inger Kisby" Comment recorded on the 1 February 'Starter of the Day' page by M Chant, Chase Lane School Harwich: "My year five children look forward to their daily challenge and enjoy the problems as much as I do. A great resource - thanks a million." Comment recorded on the 6 May 'Starter of the Day' page by Natalie, London: "I am thankful for providing such wonderful starters. They are of immence help and the students enjoy them very much. These starters have saved my time and have made my lessons enjoyable." Comment recorded on the 3 October 'Starter of the Day' page by S Mirza, Park High School, Colne: "Very good starters, help pupils settle very well in maths classroom." Comment recorded on the 16 March 'Starter of the Day' page by Mrs A Milton, Ysgol Ardudwy: "I have used your starters for 3 years now and would not have a lesson without one! Fantastic way to engage the pupils at the start of a lesson." Comment recorded on the 5 April 'Starter of the Day' page by Mr Stoner, St George's College of Technology: "This resource has made a great deal of difference to the standard of starters for all of our lessons. Thank you for being so creative and imaginative." Comment recorded on the 17 November 'Starter of the Day' page by Amy Thay, Coventry: "Thank you so much for your wonderful site. I have so much material to use in class and inspire me to try something a little different more often. I am going to show my maths department your website and encourage them to use it too. How lovely that you have compiled such a great resource to help teachers and pupils. Thanks again" Comment recorded on the 10 September 'Starter of the Day' page by Carol, Sheffield PArk Academy: "3 NQTs in the department, I'm new subject leader in this new academy - Starters R Great!! Lovely resource for stimulating learning and getting eveyone off to a good start. Thank you!!" Comment recorded on the 19 October 'Starter of the Day' page by E Pollard, Huddersfield: "I used this with my bottom set in year 9. To engage them I used their name and favorite football team (or pop group) instead of the school name. For homework, I asked each student to find a definition for the key words they had been given (once they had fun trying to guess the answer) and they presented their findings to the rest of the class the following day. They felt really special because the key words came from their own personal information." Comment recorded on the 2 May 'Starter of the Day' page by Angela Lowry, : "I think these are great! So useful and handy, the children love them. Could we have some on angles too please?" Comment recorded on the 17 June 'Starter of the Day' page by Mr Hall, Light Hall School, Solihull: "Dear Transum, I love you website I use it every maths lesson I have with every year group! I don't know were I would turn to with out you!" Comment recorded on the 9 October 'Starter of the Day' page by Mr Jones, Wales: "I think that having a starter of the day helps improve maths in general. My pupils say they love them!!!" Comment recorded on the 1 February 'Starter of the Day' page by Terry Shaw, Beaulieu Convent School: "Really good site. 
Lots of good ideas for starters. Use it most of the time in KS3." Comment recorded on the 23 September 'Starter of the Day' page by Judy, Chatsmore CHS: "This triangle starter is excellent. I have used it with all of my ks3 and ks4 classes and they are all totally focused when counting the triangles." Comment recorded on the 24 May 'Starter of the Day' page by Ruth Seward, Hagley Park Sports College: "Find the starters wonderful; students enjoy them and often want to use the idea generated by the starter in other parts of the lesson. Keep up the good work" Comment recorded on the 19 November 'Starter of the Day' page by Lesley Sewell, Ysgol Aberconwy, Wales: "A Maths colleague introduced me to your web site and I love to use it. The questions are so varied I can use them with all of my classes, I even let year 13 have a go at some of them. I like being able to access the whole month so I can use favourites with classes I see at different times of the week. Thanks." Comment recorded on the 21 October 'Starter of the Day' page by Mr Trainor And His P7 Class(All Girls), Mercy Primary School, Belfast: "My Primary 7 class in Mercy Primary school, Belfast, look forward to your mental maths starters every morning. The variety of material is interesting and exciting and always engages the teacher and pupils. Keep them coming please." Comment recorded on the 3 October 'Starter of the Day' page by Mrs Johnstone, 7Je: "I think this is a brilliant website as all the students enjoy doing the puzzles and it is a brilliant way to start a lesson." Comment recorded on the 26 March 'Starter of the Day' page by Julie Reakes, The English College, Dubai: "It's great to have a starter that's timed and focuses the attention of everyone fully. I told them in advance I would do 10 then record their percentages." Comment recorded on the 12 July 'Starter of the Day' page by Miss J Key, Farlingaye High School, Suffolk: "Thanks very much for this one. We developed it into a whole lesson and I borrowed some hats from the drama department to add to the fun!" Comment recorded on the 8 May 'Starter of the Day' page by Mr Smith, West Sussex, UK: "I am an NQT and have only just discovered this website. I nearly wet my pants with joy. To the creator of this website and all of those teachers who have contributed to it, I would like to say a big THANK YOU!!! :)." Comment recorded on the 19 June 'Starter of the Day' page by Nikki Jordan, Braunton School, Devon: "Excellent. Thank you very much for a fabulous set of starters. I use the 'weekenders' if the daily ones are not quite what I want. Brilliant and much appreciated." Comment recorded on the 14 October 'Starter of the Day' page by Inger Kisby, Herts and Essex High School: "Just a quick note to say that we use a lot of your starters. It is lovely to have so many different ideas to start a lesson with. Thank you very much and keep up the good work." Comment recorded on the 1 May 'Starter of the Day' page by Phil Anthony, Head of Maths, Stourport High School: "What a brilliant website. We have just started to use the 'starter-of-the-day' in our yr9 lessons to try them out before we change from a high school to a secondary school in September 2007. This is one of the best resources on-line we have found. The kids and staff love it. Well done an thank you very much for making my maths lessons more interesting and fun." 
Comment recorded on the 14 September 'Starter of the Day' page by Trish Bailey, Kingstone School: "This is a great memory aid which could be used for formulae or key facts etc - in any subject area. The PICTURE is such an aid to remembering where each number or group of numbers is - my pupils love it!"
{"url":"http://www.transum.org/Software/sw/Starter_of_the_day/Similar.asp?ID_Topic=15","timestamp":"2014-04-20T10:50:03Z","content_type":null,"content_length":"38462","record_id":"<urn:uuid:0993e406-7808-48ad-a009-f3a64bb3aa4a>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00133-ip-10-147-4-33.ec2.internal.warc.gz"}
NEW SET of good PS(3)

Re: NEW SET of good PS(3) [#permalink] 25 Apr 2011, 03:55 — yogesh1984 (Senior Manager)

Bunuel wrote:
5. Mrs. Smith has been given film vouchers. Each voucher allows the holder to see a film without charge. She decides to distribute them among her four nephews so that each nephew gets at least two vouchers. How many vouchers has Mrs. Smith been given if there are 120 ways that she could distribute the vouchers?
(A) 13
(B) 14
(C) 15
(D) 16
(E) more than 16

Clearly there are more than 8 vouchers, as each of the four can get at least 2. So basically the 120 ways the vouchers can be distributed are the ways to distribute x-8 vouchers, so that each can get from zero to x-8 (the "at least 2", or 2*4 = 8, we already booked). Let x-8 be k.
Answer: C (15).
P.S. Direct formula:
The total number of ways of dividing n identical items among r persons, each one of whom can receive 0, 1, 2 or more items, is (n+r-1)C(r-1).
The total number of ways of dividing n identical items among r persons, each one of whom receives at least one item, is (n-1)C(r-1).
Hope it helps.

Bunuel, I have a question on the direct formula as well as on the question itself. Can we generalize the formula to the case where each person has to get more than 1 item (at least k items, where k >= 2)? Also, I could not really see why 120 should be the number of ways of distributing x-8 vouchers.
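An added cross-check (an editorial sketch in Python, not a post from the original thread): it brute-forces the voucher count and compares it with the shifted stars-and-bars formula, with the per-nephew minimum left as a parameter so the "at least k" generalization asked about above can be tried as well.

    from itertools import product
    from math import comb

    def ways(v, people=4, minimum=2):
        # count splits (a, b, c, remainder) of v vouchers with every share >= minimum
        return sum(1 for split in product(range(v + 1), repeat=people - 1)
                   if sum(split) <= v
                   and all(s >= minimum for s in split)
                   and v - sum(split) >= minimum)

    for v in (13, 14, 15, 16):
        # brute force vs. C((v - 8) + 3, 3): setting aside 2 per nephew first
        print(v, ways(v), comb(v - 8 + 3, 3))   # v = 15 gives 120 on both counts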
Senior Manager Re: NEW SET of good PS(3) [#permalink] 01 May 2011, 00:13 Joined: 12 Dec 2010 bumping up in hope to get response Posts: 282 Strategy, General My GMAT Journey 540->680->730! GMAT 1: 680 Q49 V34 ~ When the going gets tough, the Tough gets going! GMAT 2: 730 Q49 V41 GPA: 4 WE: Consulting (Other) Followers: 5 Re: NEW SET of good PS(3) [#permalink] 27 May 2011, 04:53 This post received Expert's post yogesh1984 wrote: VeritasPrepKarishma 1- Collinear point issue will arise in case of overlapping values of x, y ? (as in here we have all the overlapping range for x & y). Also since range here is small for both x, y (ie.=3) we can manually calculate the collinear points but in case of large range how do we go about it ? should it be = # overlapping points on X + # overlapping points Veritas Prep GMAT on Y + # diagonal points (which will essentially be min(# overlapping points on X , Y) -1 )-- Not so sure on this though ... 2- I see a similar Question in Joined: 16 Oct 2010 Posts: 4171 PS Q.229- The method explained here in the above example does not seems to fit too well there. basically in the question we have -4 <= X <=5, 6<= Y <=16. Can you please throw Location: Pune, India some light in the context of OG question.... Followers: 894 TIA ~ Yogesh Kudos [?]: 3787 [1] , Check out this thread: given: 148 It discusses what to do in case of a larger range. Veritas Prep | GMAT Instructor My Blog Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews Re: NEW SET of good PS(3) [#permalink] 27 May 2011, 10:15 VeritasPrepKarishma wrote: yogesh1984 wrote: yogesh1984 1- Collinear point issue will arise in case of overlapping values of x, y ? (as in here we have all the overlapping range for x & y). Also since range here is small for both x, y (ie.=3) we can manually calculate the collinear points but in case of large range how do we go about it ? should it be = # overlapping points on X + # overlapping points Senior Manager on Y + # diagonal points (which will essentially be min(# overlapping points on X , Y) -1 )-- Not so sure on this though ... Joined: 12 Dec 2010 2- I see a similar Question in Posts: 282 OG12 Concentration: PS Q.229- The method explained here in the above example does not seems to fit too well there. basically in the question we have -4 <= X <=5, 6<= Y <=16. Can you please throw Strategy, General some light in the context of OG question.... TIA ~ Yogesh GMAT 1: 680 Q49 V34 GMAT 2: 730 Q49 V41 Check out this thread: GPA: 4 ps-right-triangle-pqr-71597.html?hilit=how%20many%20triangles#p830694 WE: Consulting (Other) It discusses what to do in case of a larger range. Followers: 5 Yeah thanks for this (however i had found this through customized search) I hope i am not complicating too much here. My GMAT Journey 540->680->730! ~ When the going gets tough, the Tough gets going! Senior Manager Re: NEW SET of good PS(3) [#permalink] 07 Jun 2011, 06:18 Joined: 12 Dec 2010 Thanks a bunch Bunuel Posts: 282 However Just one note to all those who are trying this set - Please solve these sets once you have gained some confidence ! Concentration: _________________ Strategy, General Management My GMAT Journey 540->680->730! GMAT 1: 680 Q49 V34 GMAT 2: 730 Q49 V41 ~ When the going gets tough, the Tough gets going! GPA: 4 WE: Consulting (Other) Followers: 5 Re: NEW SET of good PS(3) [#permalink] 09 Jun 2011, 03:50 Bunuel wrote: 8. 
How many positive integers less than 10,000 are such that the product of their digits is 210? (A) 24 (B) 30 (C) 48 (D) 54 (E) 72 210=1*2*3*5*7=1*6*5*7. (Only 2*3 makes the single digit 6). So, four digit numbers with combinations of the digits {1,6,5,7} and {2,3,5,7} and three digit numbers with combinations of digits {6,5,7} will have the product of their digits equal to 210. {1,6,5,7} # of combinations 4!=24 {2,3,5,7} # of combinations 4!=24 {6,5,7} # of combinations 3!=6 Answer: D. Joined: 16 May 2011 10. How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤3 and 1≤y≤3? (A) 72 Posts: 205 (B) 76 (C) 78 Concentration: Finance, (D) 80 Real Estate (E) 84 GMAT Date: 12-27-2011 It would be better if you draw it while reading this explanation. With the restriction given (1≤x≤3 and 1≤y≤3) we get 9 points, from which we can form the triangle: (1,1), (1,2), (1,3), (2,1)... WE: Law (Law) From this 9 points any three (9C3) will form the triangle BUT THE SETS of three points which are collinear. Followers: 0 We'll have 8 sets of collinear points of three: 3 horizontal {(1,1),(2,1),(3,1)} {(1,2)(2,2)(3,2)}... 3 vertical 2 diagonal {(1,1)(2,2)(3,3)}{(1,3)(2,2)(3,1)} So the final answer would be; 9C3-8=84-8=76 Answer: B. Hope it's clear. i just want to thank you bunuel but i still have some question to make it clear: lets say that i was given 5 points for y and the same 5 for x: so it will be choosing 25c3- 5 vertical-5 horizontal and 2 diagonals and to make it even more difficult: lets say that there where 6 points for x and 3 for y: so it will be 18c3-6 horizontal and 3 vertical - 2 diagonals or that is a bit surprise in here? hope you can clarify Manager Re: NEW SET of good PS(3) [#permalink] 12 Jun 2011, 09:27 Joined: 16 May 2011 if anyone can help please to clarify the methos: Posts: 205 let's say that the Q was: How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? Concentration: Finance, how will it be solved: Real Estate will 3C36 minus 6 vertical and 6 horizontal minus 2 diagonals will be the answer or will the answer be different. GMAT Date: 12-27-2011 thank's in advance WE: Law (Law) Followers: 0 Re: NEW SET of good PS(3) [#permalink] 12 Jun 2011, 12:09 dimri10 wrote: if anyone can help please to clarify the methos: let's say that the Q was: How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? yogesh1984 how will it be solved: will 3C36 minus 6 vertical and 6 horizontal minus 2 diagonals will be the answer or will the answer be different. Senior Manager thank's in advance Joined: 12 Dec 2010 While I seriously doubt whether one could encounter such long range question Posts: 282 (esp. because calculating # diagonals is going to be little tricky here) unless you are shooting for 51 in quant. Strategy, General That said let me try my hands- Think about when it will be horizontal collinear- all the y values are same for a given set of X values. 
so we have 6 values where Y can be same (it has to be integer GMAT 1: 680 Q49 V34 coordinate)- so total # horizontal collinear points- 6 GMAT 2: 730 Q49 V41 You can have similar argument for vertical (constant X and vary Y) set of collinear points- 6 GPA: 4 For # diagonals (please refer tot the attachment, I sketched only one side of the diagonals ) - you should be able to count the numbers now. For one side it comes out that we WE: Consulting (Other) will have 16 such pairs (of 3 points) so by symmetry you need to multiply by 2. SO a total # diagonals will be 32. Followers: 5 I hope that helps. ~ Yogesh My GMAT Journey 540->680->730! ~ When the going gets tough, the Tough gets going! Re: NEW SET of good PS(3) [#permalink] 12 Jun 2011, 18:17 This post received Expert's post dimri10 wrote: if anyone can help please to clarify the methos: let's say that the Q was: How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? how will it be solved: VeritasPrepKarishma will 3C36 minus 6 vertical and 6 horizontal minus 2 diagonals will be the answer or will the answer be different. Veritas Prep GMAT thank's in advance I think your question is quite similar to yogesh1984's question above. I missed answering his question (thought of doing it later due to the diagram involved but it skipped my Joined: 16 Oct 2010 mind). Posts: 4171 Anyway, let me show you how I would solve such a question. Both the questions can be easily answered using this method. Location: Pune, India How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? Followers: 894 Check out this post for the solution: Kudos [?]: 3787 [2] , http://www.veritasprep.com/blog/2011/09 ... mment-2495 given: 148 *Edited the post to fix the problem. Ques2.jpg [ 16.67 KiB | Viewed 4871 times ] Veritas Prep | GMAT Instructor My Blog Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews Re: NEW SET of good PS(3) [#permalink] 12 Jun 2011, 23:37 Joined: 16 May 2011 you did great. wow. that's a briliant explenation. thank's Karishma . it deserves more than 1 kudos, but unfortunatly there's only 1. Posts: 205 Concentration: Finance, Real Estate GMAT Date: 12-27-2011 WE: Law (Law) Followers: 0 Re: NEW SET of good PS(3) [#permalink] 11 Jul 2011, 08:08 Bunuel wrote: 5. Mrs. Smith has been given film vouchers. Each voucher allows the holder to see a film without charge. She decides to distribute them among her four nephews so that each nephew gets at least two vouchers. How many vouchers has Mrs. Smith been given if there are 120 ways that she could distribute the vouchers? (A) 13 (B) 14 (C) 15 (D) 16 (E) more than 16 Answer: C. Clearly there are more than 8 vouchers as each of four can get at least 2. So, basically 120 ways vouchers can the distributed are the ways to distribute x-8 vouchers, so that each can get from zero to x-8 as at "least 2", or 2*4=8, we already booked. Let x-8 be k. In how many ways we can distribute k identical things among 4 persons? Well there is a formula for this but it's better to understand the concept. yossarian84 Let k=5. And imagine we want to distribute 5 vouchers among 4 persons and each can get from zero to 5, (no restrictions). 
Manager Consider: Joined: 12 Aug 2010 ttttt||| We have 5 tickets (t) and 3 separators between them, to indicate who will get the tickets: Posts: 66 Schools: UNC Means that first nephew will get all the tickets, Kenan-Flagler, IU Kelley, Emory GSB |t|ttt|t Means that first got 0, second 1, third 3, and fourth 1 WE 1: 5 yrs And so on. Followers: 3 How many permutations (arrangements) of these symbols are possible? Total of 8 symbols (5+3=8), out of which 5 t's and 3 |'s are identical, so \frac{8!}{5!3!}=56. Basically Kudos [?]: 9 [0], it's the number of ways we can pick 3 separators out of 5+3=8: 8C3. given: 50 So, # of ways to distribute 5 tickets among 4 people is (5+4-1)C(4-1)=8C3. For k it will be the same: # of ways to distribute k tickets among 4 persons (so that each can get from zero to k) would be (K+4-1)C(4-1)=(k+3)C3=\frac{(k+3)!}{k!3!}=120. (k+1)(k+2)(k+3)=3!*120=720. --> k=7. Plus the 8 tickets we booked earlier: x=k+8=7+8=15. Answer: C (15). P.S. Direct formula: The total number of ways of dividing n identical items among r persons, each one of whom, can receive 0,1,2 or more items is n+r-1C_{r-1}. The total number of ways of dividing n identical items among r persons, each one of whom receives at least one item is n-1C_{r-1}. Hope it helps. Awesome...hats off...this is totally new to me...widens my realm..and strengthens my reasoning...thanks a lot The night is at its darkest just before the dawn... never, ever give up! Re: NEW SET of good PS(3) [#permalink] 21 Aug 2011, 07:56 Economist wrote: Status: Bell the GMAT!!! yangsta, i liked your solution for 4. I didnt know we can use the definition of linear equation to solve such problems. Affiliations: Aidha I used the guessing method. Joined: 16 Aug 2011 we have two relationships...6--30 and 24---60. This means when R is increased 4 times, S increases 2 times, so if R is increased 2 times S will increase 1 time. Posts: 186 Now, 30*3 ~ 100, so 3 times increase in S will have atleast a 6 times increase in R, i.e. R should be something greater than 36..closest is 48 Location: Singapore Another method (let me call it intuition method) : Concentration: Finance, 6 on scale R corresponds to 30 on scale S and 24 on scale R corresponds to 60 on scale S. If we notice the relationship, we will see that for every 6 points on scale R, 10 General Management points move on scale S. So, 90 points on scale S corresponds to 42 points on Scale R and another 6 points of scale S for another 10 points on scale R. Hence 100 on scale S corresponds to 42+6 = 48 on scale R. GMAT 1: 680 Q46 V37 GMAT 2: 620 Q49 V27 I hope I am making sense GMAT 3: 700 Q49 V36 WE: Other (Other) If my post did a dance in your mind, send me the steps through kudos :) Followers: 5 My MBA journey at http://mbadilemma.wordpress.com/ Kudos [?]: 31 [0], given: 43 Re: NEW SET of good PS(3) [#permalink] 22 Jan 2012, 03:49 VeritasPrepKarishma wrote: dimri10 wrote: if anyone can help please to clarify the methos: let's say that the Q was: How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? how will it be solved: will 3C36 minus 6 vertical and 6 horizontal minus 2 diagonals will be the answer or will the answer be different. thank's in advance I think your question is quite similar to yogesh1984's question above. I missed answering his question (thought of doing it later due to the diagram involved but it skipped my Anyway, let me show you how I would solve such a question. 
Both the questions can be easily answered using this method. How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? Ok, so we have a total of 36 co-ordinates (as shown below by the red and black dots). We need to make triangles so we need to select a triplet of co-ordinates out of these 36 which can be done in 36C3 ways. Out of these, we need to get rid of those triplets where the points are collinear. How many such triplets are there? Look at the diagram: The Black dots are the outermost points. Red dots are the inside points. Now each of these red dots is the center point for 4 sets of collinear points (as shown by the red Intern arrows). Hence the 4*4 = 16 red dots will make 16*4 = 64 triplets of collinear points. Joined: 21 Jan 2012 These 64 triplets account for all collinear triplets except those lying on the edges. Each of the 4 edges will account for 4 triplets of collinear points shown by the black arrows. Hence, there will be another 4*4 = 16 triplets of collinear points. Posts: 1 Total triplets of collinear points = 64 + 16 = 80 Followers: 0 Therefore, total number of triangles you can make = 36C3 - 80 Kudos [?]: 0 [0], given: 0 Similarly you can work with 1<=x<=5 and -9<=y<=3. The number of red dots in this case = 11*3 = 33 So number of collinear triplets represented by red arrows will be = 33*4 = 132 Number of black arrows will be 3 + 11 + 3 + 11 = 28 Total triplets of collinear points = 132 + 28 = 160 Total triangles in this case = 65C3 - 160 It would like to point out tht the resoning given is wrong. the triplets need not necessarily be adjacent. tht's the flaw. my way: no: of collinear points=? horizontal and vertical lines both give the same no: and each line of 6 points gives 6C3 possibs. hence horz and vert. lines give a total of 2*6*6C3. next 2 diagonals give same no: of such possibs. consider any diagonal direction. it gives 3,4,5,6,5,4,3 collinear points along 6 parallel lines corresponding to any diagonalic direction and each of the points gives us their corresponding triples-3C3+4C3+5C3+6C3+5C3+4C3+3C3. along 2 such dirs. this adds up to 2*(2*(3C3+4C3+5C3)+6C3). total no: of line forming selections= 2*6*6C3+ 2*(2*(3C3+4C3+5C3)+6C3). Re: NEW SET of good PS(3) [#permalink] 22 Jan 2012, 04:08 Expert's post akhileshankala wrote: It would like to point out tht the resoning given is wrong. the triplets need not necessarily be adjacent. tht's the flaw. my way: VeritasPrepKarishma no: of collinear points=? horizontal and vertical lines both give the same no: and each line of 6 points gives 6C3 possibs. Veritas Prep GMAT hence horz and vert. lines give a total of 2*6*6C3. Instructor next 2 diagonals give same no: of such possibs. consider any diagonal direction. it gives 3,4,5,6,5,4,3 collinear points along 6 parallel lines corresponding to any diagonalic direction and each of the points gives us their Joined: 16 Oct 2010 corresponding triples-3C3+4C3+5C3+6C3+5C3+4C3+3C3. Posts: 4171 along 2 such dirs. this adds up to 2*(2*(3C3+4C3+5C3)+6C3). Location: Pune, India total no: of line forming selections= 2*6*6C3+ 2*(2*(3C3+4C3+5C3)+6C3). Followers: 894 Yes, I did miss out on the non-adjacent collinear points! And on the face of it, your calculation looks correct. I will put some more time on this variation tomorrow (since today is Sunday!) and get back if needed. 
Kudos [?]: 3787 [0], given: 148 _________________ Veritas Prep | GMAT Instructor My Blog Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews Re: NEW SET of good PS(3) [#permalink] 14 Mar 2012, 05:37 akhileshankala wrote: VeritasPrepKarishma wrote: dimri10 wrote: if anyone can help please to clarify the methos: let's say that the Q was: How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? how will it be solved: will 3C36 minus 6 vertical and 6 horizontal minus 2 diagonals will be the answer or will the answer be different. thank's in advance I think your question is quite similar to yogesh1984's question above. I missed answering his question (thought of doing it later due to the diagram involved but it skipped my Anyway, let me show you how I would solve such a question. Both the questions can be easily answered using this method. How many triangles with positive area can be drawn on the coordinate plane such that the vertices have integer coordinates (x,y) satisfying 1≤x≤6 and 1≤y≤6? Ok, so we have a total of 36 co-ordinates (as shown below by the red and black dots). We need to make triangles so we need to select a triplet of co-ordinates out of these 36 which can be done in 36C3 ways. Out of these, we need to get rid of those triplets where the points are collinear. How many such triplets are there? Look at the diagram: The Black dots are the outermost points. Red dots are the inside points. Now each of these red dots is the center point for 4 sets of collinear points (as shown by the red Senior Manager arrows). Hence the 4*4 = 16 red dots will make 16*4 = 64 triplets of collinear points. Joined: 12 Dec 2010 These 64 triplets account for all collinear triplets except those lying on the edges. Each of the 4 edges will account for 4 triplets of collinear points shown by the black arrows. Hence, there will be another 4*4 = 16 triplets of collinear points. Posts: 282 Total triplets of collinear points = 64 + 16 = 80 Strategy, General Therefore, total number of triangles you can make = 36C3 - 80 Similarly you can work with 1<=x<=5 and -9<=y<=3. GMAT 1: 680 Q49 V34 GMAT 2: 730 Q49 V41 The number of red dots in this case = 11*3 = 33 GPA: 4 So number of collinear triplets represented by red arrows will be = 33*4 = 132 WE: Consulting (Other) Number of black arrows will be 3 + 11 + 3 + 11 = 28 Followers: 5 Total triplets of collinear points = 132 + 28 = 160 Total triangles in this case = 65C3 - 160 It would like to point out tht the resoning given is wrong. the triplets need not necessarily be adjacent. tht's the flaw. my way: no: of collinear points=? horizontal and vertical lines both give the same no: and each line of 6 points gives 6C3 possibs. hence horz and vert. lines give a total of 2*6*6C3. next 2 diagonals give same no: of such possibs. consider any diagonal direction. it gives 3,4,5,6,5,4,3 collinear points along 6 parallel lines corresponding to any diagonalic direction and each of the points gives us their corresponding triples-3C3+4C3+5C3+6C3+5C3+4C3+3C3. along 2 such dirs. this adds up to 2*(2*(3C3+4C3+5C3)+6C3). total no: of line forming selections= 2*6*6C3+ 2*(2*(3C3+4C3+5C3)+6C3). Can you please elaborate on the bolded part in details... My GMAT Journey 540->680->730! ~ When the going gets tough, the Tough gets going! 
Re: NEW SET of good PS(3) [#permalink] 14 Mar 2012, 07:48 Current Student This is a 6x6 square. For each diagonal of this square, you have 8 parallel lines, you can draw within the square by joining the vertices that lies on the edges of the square. Joined: 09 Mar 2012 eg: Join (1,2) & (2,1); (1,3) & (3,1); (1,4) & (4,1); (1,5) & (5,1); to get 4 parallel lines along the diagonal (1,6)-(6,1) Similarly you can get 4 lines on the other side of the diagonal. Posts: 97 Of these, (line joining (1,2) to (2,1) is of no use to us since it contains only 2 points within the square) Location: India the line joining point (1,3) & (3,1) contains total of 3 integer co-ordinates, the line joining point (1,4) & (4,1) contains total of 4 integer co-ordinates, and so on..... GMAT 1: 740 Q50 V39 Any 3 points that you select from these lines will be collinear and not form a traingle. GPA: 3.4 Thus, you have 3,4,5,6,5,4,3 points collinear along the lines parallel to the diagonal. Rest as akhilesh has mentioned. WE: Business Development You may draw a figure by plotting these points. My 1st post on this forum, so Apologies for the weird explanation. Followers: 2 Kudos [?]: 17 [0], given: 12 Re: NEW SET of good PS(3) [#permalink] 19 Mar 2012, 02:35 This post received Expert's post Veritas Prep GMAT Instructor yogesh1984 wrote: Joined: 16 Oct 2010 Can you please elaborate on the bolded part in details... Posts: 4171 Check out this post. I have explained this question in detail in this post. It fixes the problem my above given solution had. Location: Pune, India http://www.veritasprep.com/blog/2011/09 ... o-succeed/ Followers: 894 Kudos [?]: 3787 [1] , given: 148 Karishma Veritas Prep | GMAT Instructor My Blog Save $100 on Veritas Prep GMAT Courses And Admissions Consulting Enroll now. Pay later. Take advantage of Veritas Prep's flexible payment plan options. Veritas Prep Reviews yogesh1984 Re: NEW SET of good PS(3) [#permalink] 19 Mar 2012, 08:56 Senior Manager VeritasPrepKarishma wrote: Joined: 12 Dec 2010 yogesh1984 wrote: Posts: 282 Can you please elaborate on the bolded part in details... Strategy, General Check out this post. I have explained this question in detail in this post. It fixes the problem my above given solution had. http://www.veritasprep.com/blog/2011/09 ... o-succeed/ GMAT 1: 680 Q49 V34 GMAT 2: 730 Q49 V41 Aah that one is awesome !! bole to crystal clear now GPA: 4 _________________ WE: Consulting (Other) My GMAT Journey 540->680->730! Followers: 5 ~ When the going gets tough, the Tough gets going! Intern Re: NEW SET of good PS(3) [#permalink] 06 Jul 2012, 01:12 Joined: 01 Aug 2011 Shouldn't the answer to Question 2 be B? Posts: 23 Followers: 0 Kudos [?]: 4 [0], given: 15 gmatclubot Re: NEW SET of good PS(3) [#permalink] 06 Jul 2012, 01:12
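A final added sketch bearing on the collinearity debate above (editorial, not part of the thread): brute force over an m x n lattice counts every collinear triple - non-adjacent ones and slopes other than 0, infinity and ±1 included (e.g. (1,1), (2,3), (3,5)) - which is exactly where hand counts tend to go astray.

    from itertools import combinations
    from math import comb

    def counts(m, n):
        pts = [(x, y) for x in range(1, m + 1) for y in range(1, n + 1)]
        # a triple is collinear when its signed area is zero
        col = sum(1 for (p, q, r) in combinations(pts, 3)
                  if (q[0]-p[0])*(r[1]-p[1]) == (q[1]-p[1])*(r[0]-p[0]))
        return comb(m * n, 3) - col, col   # (triangles, collinear triples)

    print(counts(3, 3))   # (76, 8): matches the 9C3 - 8 answer upthread
    print(counts(6, 6))   # triangles and collinear triples for the 6x6 case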
{"url":"http://gmatclub.com/forum/new-set-of-good-ps-85440-40.html","timestamp":"2014-04-17T13:13:57Z","content_type":null,"content_length":"239238","record_id":"<urn:uuid:292593b1-5169-4afb-844b-ff3dbdcd5816>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Tate module of CM elliptic curves

This is an exercise in Silverman's book "The Arithmetic of Elliptic Curves", Ex 3.24, page 109.

Let $E/K$ be a CM elliptic curve. Prove that for $\ell \neq \mathrm{char}(K)$, the action of $Gal(\bar{K}/K)$ on $T_{\ell}(E)$ is abelian.

elliptic-curves nt.number-theory

As Mariano has commented on a different question, straight up "How do I do this exercise in this book?" questions are not so appropriate for MO. But hint: you want to use the fact that the image of Galois in Aut(T_l(E)) lies in the commuting algebra of the image of End(E) \otimes Z_l. Show that this commuting algebra is itself commutative, and you're done. – Pete L. Clark Jan 29 '10 at 8:02

Thank you, Pete! I finally worked it out! It's interesting to find out that the commuting algebra is commutative! – natura Jan 30 '10 at 9:35

Only I'm wondering if there is a prettier proof that the commuting algebra is commutative; I used a down-to-earth linear algebra computation. – natura Jan 30 '10 at 9:36

@basic: It follows immediately from some basic facts about simple algebras (e.g. the 2 by 2 matrices). Fact 1: The centre of a simple algebra is a field. Fact 2: The centraliser (= commuting algebra) of a simple subalgebra of a simple algebra is again a simple algebra. Fact 3: The dimension of a central simple algebra over its centre is a square. Now use the fact that 2 is not a square! – user1594 Feb 4 '10 at 23:23

3 Answers

Here is the standard argument: you can decide whether it is prettier than the one you had in mind.

Let $V_{\ell}(E)$ be ${\mathbb Q}_{\ell}\otimes_{{\mathbb Z}_{\ell}}T_{\ell}(E)$; it is a two-dimensional ${\mathbb Q}_{\ell}$ vector space. When $E$ has CM by a quad. imag. field $F$, it is free of rank one over $F\otimes_{\mathbb Q}{\mathbb Q}_{\ell}$. Thus the image of $Gal(\bar{K}/K)$ acting on $V_{\ell}(E)$ (or equivalently, $T_{\ell}(E)$) lies in $GL_1(F\otimes_{\mathbb Q}{\mathbb Q}_{\ell})$, and so is abelian.

Note that this argument gets to the very heart of CM theory and its relation to class field theory (i.e. to the construction of abelian extensions): the elliptic curve $E$ (or, more precisely, its Tate modules) look 1-dim'l as modules over $F$, and so give abelian Galois reps. (Just as the $\ell$-adic Tate modules of the multiplicative group ${\mathbb G}_m$ give 1-dim'l reps. of $Gal(\bar{\mathbb Q}/{\mathbb Q})$.)

You might also want to compare with Lubin-Tate theory, which is very similar: one uses formal groups with an action of the ring of integers ${\mathcal O}$ in an extension of ${\mathbb Q}_p$, and again they are constructed so that the $p$-adic Tate module is free of rank one over ${\mathcal O}$, and hence gives abelian Galois reps.

Added in response to basic's questions below: To say that $E$ has CM over $K$ is to say that it has an action by an order in a quad. imag. field $F$. By a standard theorem (in Silverman, say) the ring $F_{\ell} := F\otimes_{\mathbb Q}{\mathbb Q}_{\ell}$ acts faithfully on $V_{\ell}(E)$. Counting dimensions over ${\mathbb Q}_{\ell}$, we find that $V_{\ell}(E)$ is free of rank 1 over $F_{\ell}$. The Galois action on $V_{\ell}(E)$ is $F_{\ell}$-linear (we have assumed that the action of $F$ is defined over $K$). Thus we have a group, $Gal(\bar{K}/K)$, acting on a free rank 1 module over a ring (namely, the $F_{\ell}$-module $V_{\ell}(E)$).
Such an action must be given by $1\times 1$ invertible matrices (just choose a basis for $V_{\ell}(E)$ as an $F_{\ell}$-module), i.e. is described by a homomorphism $Gal(\bar{K}/K) \to GL_1(F_{\ell})$. Since the group of invertible $1\times 1$ matrices over any commutative ring is itself commutative, we see that the $Gal(\bar{K}/K)$ action on $V_{\ell}(E)$ is through an abelian group, as claimed.

@Emerton: this is indeed a slightly prettier argument than the one I had in mind (which is also standard, I think). – Pete L. Clark Feb 4 '10 at 5:44

@Emerton: I have some confusing points. 1. The general definition of CM seems to be $End_K(E)$ strictly bigger than ${\mathbb Z}$; now when the field is a finite field or a local field, these fields cannot be embedded into ${\mathbb C}$, so we cannot define $End_{\mathbb C}(E)$, right? 2. Even when the field $K$ is a number field, which can be embedded in ${\mathbb C}$, your $F$ is $End_{\mathbb C}(E) \otimes {\mathbb Q}$, but what is the Galois action on it? Or, IF we can prove that $F = End_{\mathbb C}(E) \otimes {\mathbb Q} = End_K(E) \otimes {\mathbb Q}$, then we can define a Galois action on $End_K(E) \otimes {\mathbb Q}$, but still, I can't find a proof of that, and even if I assume it to – natura Feb 4 '10 at 6:52

be true, I still can't figure out a Galois-equivariant homomorphism between $V_{\ell}$ and $F \otimes {\mathbb Q}_{\ell}$. Thank you! – natura Feb 4 '10 at 6:54

If $E$ over $K$ has CM then $V_p(E)$ decomposes as a direct sum of two Galois-invariant lines. But what about $T_p(E)$? Does such a decomposition hold at the "integral" level as well? I suppose that if the two characters describing the action on $V_p(E)$ are congruent to each other mod $p$, then $T_p(E)$ should not necessarily decompose. – Tommaso Centeleghe Feb 9 '12 at 16:30

You can find the answer to your question (and learn a whole lot more about complex multiplication) in another book by Joe Silverman, "Advanced Topics in the Arithmetic of Elliptic Curves". See Chapter II, and in particular read the proof of Theorem 2.3.

@Emerton So, what if char($K$) $\ne 0$, and the endomorphism ring is a quaternion algebra? Then you end up getting a map of Galois into $\textrm{GL}_1$ over a noncommutative ring... Does your proof extend to the case of noncommutative endomorphism rings?

@oxeimon: It would be best to ask this as a separate question, including a link back to this one. – Cam McLeman Mar 2 '12 at 19:14

If $K$ is a finite field of characteristic $p$ and $E$ is supersingular with $End_K(E)$ isomorphic to an order in the definite quaternion algebra ramified at $p$, THEN $|K|=p^{2m}$, the Tate module $T_\ell(E)$, for $\ell\neq p$, is a free rank one module over $End_K(E)\otimes Z_\ell$ ($\simeq$ the ring of two-by-two matrices with entries in $Z_\ell$), and $Frob_K$ acts as multiplication by either $p^{m}$ or $-p^m$, an element in the center of $End_K(E)\otimes Z_\ell$. – Tommaso Centeleghe Mar 2 '12 at 22:25
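To spell out the commutant computation in Pete L. Clark's hint above (a sketch added for completeness, in the notation of Emerton's answer): since $V_{\ell}(E)$ is free of rank one over $F_{\ell} := F \otimes_{\mathbb Q} {\mathbb Q}_{\ell}$, every $F_{\ell}$-linear endomorphism is multiplication by a scalar, so

$$\mathrm{End}_{F_{\ell}}(V_{\ell}(E)) \cong \mathrm{End}_{F_{\ell}}(F_{\ell}) \cong F_{\ell},$$

and $F_{\ell}$ is a commutative ring: it is a quadratic field extension of ${\mathbb Q}_{\ell}$ or the product ${\mathbb Q}_{\ell} \times {\mathbb Q}_{\ell}$, according as $\ell$ is inert/ramified or split in $F$. As the Galois action commutes with the $F$-action (which is assumed defined over $K$), the image of $Gal(\bar{K}/K)$ lands in $F_{\ell}^{\times}$, which is abelian.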
{"url":"http://mathoverflow.net/questions/13349/tate-module-of-cm-elliptic-curves?sort=votes","timestamp":"2014-04-16T07:34:00Z","content_type":null,"content_length":"70059","record_id":"<urn:uuid:70c4d1e1-c28f-45a4-9afa-462ab23c48ef>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00481-ip-10-147-4-33.ec2.internal.warc.gz"}
Numeric.LevMar.Fitting

Stability: Experimental
Maintainer: Roel van Dijk <vandijk.roel@gmail.com>, Bas van Dijk <v.dijk.bas@gmail.com>

This module provides the Levenberg-Marquardt algorithm specialised for curve-fitting. For additional documentation see the documentation of the levmar C library which this library is based on: http://www.ics.forth.gr/~lourakis/levmar/

Model & Jacobian

type Model r a = [r] -> a -> r

A functional relation describing measurements, represented as a function from a list of parameters and an x-value to an expected measurement.

* Ensure that the length of the parameters list equals the length of the initial parameters list in levmar.

For example, the quadratic function f(x) = a*x^2 + b*x + c can be written as:

  quad :: Num r => Model r r
  quad [a, b, c] x = a*x^2 + b*x + c

type Jacobian r a = [r] -> a -> [r]

The jacobian of the Model function, expressed as a function from a list of parameters and an x-value to the partial derivatives of the model with respect to those parameters. See: http://en.wikipedia.org/wiki/Jacobian_matrix_and_determinant

* Ensure that the length of the parameters list equals the length of the initial parameters list in levmar.
* Ensure that the length of the output list of partial derivatives equals the length of the input parameters list.

For example, the jacobian of the above quad model can be written as:

  quadJacob :: Num r => Jacobian r r
  quadJacob [_, _, _] x = [ x^2  -- with respect to a
                          , x    -- with respect to b
                          , 1    -- with respect to c
                          ]

(Notice you don't have to differentiate for x.)

Levenberg-Marquardt algorithm

class LevMarable r

The Levenberg-Marquardt algorithm is overloaded to work on Double and Float.

Instances: LevMarable Double, LevMarable Float

levmar :: LevMarable r
       => Model r a             -- Model
       -> Maybe (Jacobian r a)  -- Optional jacobian
       -> [r]                   -- Initial parameters
       -> [(a, r)]              -- Samples
       -> Integer               -- Maximum iterations
       -> Options r             -- Minimization options
       -> Constraints r         -- Constraints
       -> Either LevMarError ([r], Info r, CovarMatrix r)

The Levenberg-Marquardt algorithm specialised for curve-fitting.

type LinearConstraints r = ([[r]], [r])

Linear constraints consisting of a k x m constraints matrix and a k x 1 right-hand constraints vector, where m is the number of parameters and k is the number of constraints.

Minimization options

data Options r

Minimization options, in order:

* optScaleInitMu: scale factor for the initial mu.
* The stopping threshold for ||J^T e||_inf.
* The stopping threshold for ||Dp||_2.
* The stopping threshold for ||e||_2.
* optDelta: the step used in the difference approximation to the Jacobian. If optDelta < 0, the Jacobian is approximated with central differences, which are more accurate (but slower!) compared to the forward differences employed by default.

Instances: Read r => Read (Options r), Show r => Show (Options r)

data Info r

Information regarding the minimization, in order:

* ||e||_2 at the initial parameters.
* ||e||_2 at the estimated parameters.
* ||J^T e||_inf at the estimated parameters.
* ||Dp||_2 at the estimated parameters.
* mu / max[J^T J]_ii at the estimated parameters.
* Number of iterations.
* Reason for terminating.
* Number of function evaluations.
* Number of jacobian evaluations.
* Number of linear systems solved, i.e. attempts at reducing the error.

Instances: Read r => Read (Info r), Show r => Show (Info r)

data StopReason

* SmallGradient: Stopped because of small gradient J^T e.
* SmallDp: Stopped because of small Dp.
* MaxIterations: Stopped because the maximum number of iterations was reached.
* SingularMatrix: Stopped because of a singular matrix. Restart from the current estimated parameters with increased optScaleInitMu.
* SmallestError: Stopped because no further error reduction is possible.
  Restart with increased optScaleInitMu.
* SmallNorm2E: Stopped because of small ||e||_2.
* InvalidValues: Stopped because the model function returned invalid values (i.e. NaN or Inf). This is a user error.

Instances: Enum StopReason, Read StopReason, Show StopReason

data LevMarError

* LevMarError: Generic error (not one of the others).
* LapackError: A call to a lapack subroutine failed in the underlying C levmar library.
* FailedBoxCheck: At least one lower bound exceeds the upper one.
* MemoryAllocationFailure: A call to malloc failed in the underlying C levmar library.
* ConstraintMatrixRowsGtCols: The matrix of constraints cannot have more rows than columns.
* ConstraintMatrixNotFullRowRank: The constraints matrix is not of full row rank.
* TooFewMeasurements: Cannot solve a problem with fewer measurements than unknowns. In case linear constraints are provided, this error is also returned when the number of measurements is smaller than the number of unknowns minus the number of equality constraints.

Instances: Show LevMarError, Typeable LevMarError, Exception LevMarError
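Putting the pieces together, a fitting run might look like the sketch below. Only the levmar signature above is taken from this documentation; defaultOpts and noConstraints are hypothetical placeholder names for a default Options value and an empty Constraints value, so consult the package itself for the real names. The sample data are made up for illustration.

  import Numeric.LevMar.Fitting

  -- The quadratic model and jacobian from the examples above.
  quad :: Num r => Model r r
  quad [a, b, c] x = a*x^2 + b*x + c
  quad _         _ = error "quad: expected exactly 3 parameters"

  quadJacob :: Num r => Jacobian r r
  quadJacob [_, _, _] x = [x^2, x, 1]
  quadJacob _         _ = error "quadJacob: expected exactly 3 parameters"

  -- Hypothetical noisy samples of f(x) = 2*x^2 - 3*x + 1.
  samples :: [(Double, Double)]
  samples = [(0, 1.1), (1, 0.1), (2, 2.9), (3, 10.2), (4, 20.8)]

  main :: IO ()
  main =
    -- 'defaultOpts' and 'noConstraints' are assumed names (see note above).
    case levmar quad (Just quadJacob) [1, 1, 1] samples 1000 defaultOpts noConstraints of
      Left err                     -> putStrLn ("fit failed: " ++ show err)
      Right (params, info, _covar) -> do
        putStrLn ("estimated [a, b, c] = " ++ show params)
        putStrLn ("info = " ++ show info)

On success, params holds the estimated [a, b, c] and info records why the iteration stopped (see StopReason above).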
{"url":"http://hackage.haskell.org/package/levmar-0.3/docs/Numeric-LevMar-Fitting.html","timestamp":"2014-04-21T06:13:27Z","content_type":null,"content_length":"25453","record_id":"<urn:uuid:9e7053d2-0844-4fff-b5fb-2886e266410a>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00225-ip-10-147-4-33.ec2.internal.warc.gz"}
Division worksheets with remainders. Basic division worksheets. Easy division worksheets with remainders.

Basic division facts should be mastered before attempting division with remainders. Each of these early division worksheets has 12 questions; even so, students will need to practice the concept beyond a single sheet to ensure understanding is in place. Remind students that division is creating fair shares, and sometimes there are leftovers.

Basic division will need to be taught 2 or 3 times throughout the year. Each time, 3 to 5 worksheets should be done over a period of 1-2 weeks. Division with remainders should not be taught until students have a great deal of experience with division of basic facts like 27 divided by 3 or 36 divided by 6. Before attempting any work with division, students should have many experiences using manipulatives (pennies, buttons, etc.) to replicate simple division questions.

If a student does not experience success with the first 5 questions, the worksheet should be discontinued; a review of basic division facts will be required before moving forward.
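A worked example of the kind these sheets practice (added for illustration): 29 divided by 6 asks how many fair shares of 6 fit into 29. Six fits in four times (6 x 4 = 24), with 29 - 24 = 5 left over, so 29 divided by 6 = 4 remainder 5. Having students check that quotient times divisor plus remainder returns the original number (4 x 6 + 5 = 29) gives a quick self-check on every answer.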
{"url":"http://math.about.com/od/divisionworksheets/tp/Remainders.htm","timestamp":"2014-04-18T00:25:11Z","content_type":null,"content_length":"48255","record_id":"<urn:uuid:c0af8a76-da98-48d4-bde7-ed5c734dfd69>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
An earthquake with a reading of 7.0 on the Richter scale releases ten times as much energy as an earthquake?

An earthquake with a reading of 7.0 on the Richter scale releases ten times as much energy as an earthquake that registers 6.0 on the scale. If x represents the amount of energy released by a 6.0 earthquake, write an expression for the amount of energy released by a 7.0 earthquake.

One comment

1. An earthquake with a reading of 7.0 on the Richter scale releases (10^1.5) times as much energy as an earthquake that registers 6.0 on the scale. The amount of energy released by a 7.0 earthquake is 31.6x.
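A note on where the commenter's 10^1.5 comes from (standard seismology, not stated in the post): Richter magnitude measures the base-10 logarithm of ground-motion amplitude, while radiated energy E grows roughly as E proportional to 10^(1.5M). One full magnitude step therefore multiplies amplitude by 10 but energy by 10^1.5, which is about 31.6. So if the question means energy in the physical sense, the expression is (10^1.5)x, roughly 31.6x; if it simply means its own stated premise of "ten times as much," the expression is 10x.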
{"url":"http://earthquakequestions.com/an-earthquake-with-a-reading-of-7-0-on-the-richter-scale-releases-ten-times-as-much-energy-as-an-earthquake.htm","timestamp":"2014-04-19T07:04:02Z","content_type":null,"content_length":"47490","record_id":"<urn:uuid:7e4415ab-3cca-4f4f-b362-ce8a203a2d68>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00056-ip-10-147-4-33.ec2.internal.warc.gz"}