Product of Terms of a Sequence

Date: 04/18/2003 at 14:13:53
From: Rosi Jimenez
Subject: Terms of a sequence

My 13-year-old son was assigned the following math problem with very little instruction given. Nowhere was it explained how to find the product of a sequence. The problem is:

Find the product of the first 99 terms of the sequence 1/2, 2/3, 3/4, 4/5, . . .

We have found a pattern in the sequence. When 1 is added to the numerator and denominator of each term, the next term is produced. However, we are totally stumped as to how to multiply terms in a sequence. Please help me explain this to my son. Thank you.

Date: 04/18/2003 at 14:23:47
From: Doctor Roy
Subject: Re: Terms of a sequence

Thanks for writing to Dr. Math.

To find the product of two numbers, you multiply them together. Let's do a few from this sequence to see if you can catch the pattern:

  Product of first 2 terms: (1/2)(2/3) = 1/3
  Product of first 3 terms: (1/2)(2/3)(3/4) = 1/4
  Product of first 4 terms: (1/2)(2/3)(3/4)(4/5) = 1/5
  Product of first 5 terms: (1/2)(2/3)(3/4)(4/5)(5/6) = 1/6

Do you notice a pattern? The product is really just multiplying all the terms together. This isn't a problem where you have to look for a formula, but a problem where you are supposed to think for a little bit and find an easy way to calculate the solution.

Does this help? Please feel free to write back with any questions you may have.

- Doctor Roy, The Math Forum
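The pattern above telescopes: each term's denominator cancels the next term's numerator, so the product of the first n terms is 1/(n+1), which gives 1/100 for 99 terms. A minimal sketch confirming this with exact rational arithmetic (the `terms` list is our own construction, not part of the original exchange):

```python
from fractions import Fraction
from math import prod

# Terms of the sequence: k/(k+1) for k = 1, 2, ..., 99.
terms = [Fraction(k, k + 1) for k in range(1, 100)]

# Multiply all 99 terms; the telescoping cancellation leaves 1/100.
product = prod(terms)
print(product)  # -> 1/100
```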
Source: http://mathforum.org/library/drmath/view/62700.html
Modulated anharmonic ADPs are intrinsic to aperiodic crystals: a case study on incommensurate Rb2ZnCl4

A combination of structure refinements, analysis of the superspace MEM density and interpretation of difference-Fourier maps has been used to characterize the incommensurate modulation of rubidium tetrachlorozincate, Rb2ZnCl4, at a temperature of T = 196 K, close to the lock-in transition at T(lock-in) = 192 K. The modulation is found to consist of a combination of displacement modulation functions, modulated atomic displacement parameters (ADPs) and modulated third-order anharmonic ADPs. Up to fifth-order Fourier coefficients could be refined against diffraction data containing up to fifth-order satellite reflections. The center-of-charge of the atomic basins of the MEM density and the displacive modulation functions of the structure model provide equivalent descriptions of the displacive modulation. Modulations of the ADPs and anharmonic ADPs are visible in the MEM density, but extracting quantitative information about these modulations appears to be difficult. In the structure refinements the modulation parameters of the ADPs form a dependent set, and ad hoc restrictions had to be introduced in the refinements. It is suggested that modulated harmonic ADPs and modulated third-order anharmonic ADPs form an intrinsic part, however small, of incommensurately modulated structures in general. Refinements of alternate models with and without parameters for modulated ADPs lead to significant differences between the parameters of the displacement modulation in these two types of models, thus showing the modulation of ADPs to be important for a correct description of the displacive modulation. The resulting functions do not provide evidence for an interpretation of the modulation by a soliton model.

Keywords: aperiodic crystals, incommensurate modulated structures, MEM density, ADPs
Source: http://pubmedcentralcanada.ca/pmcc/articles/PMC3098556/
April 21st 2009, 06:04 PM   #1

Find a primitive generator for $\mathbb{Q}(\sqrt2,\sqrt3,\sqrt5)$ over $\mathbb{Q}$. I found the polynomial for the root $\alpha=\sqrt2+\sqrt3+\sqrt5$, which is $x^8-40x^6+352x^4-960x^2+576$. But I'm having trouble showing the polynomial is irreducible. Some help please.

April 21st 2009, 08:42 PM   #2

Here is an idea. I am not sure how to show immediately that it is irreducible; however, here is another approach. We know that:

$[\mathbb{Q}(\sqrt2 , \sqrt3, \sqrt5):\mathbb{Q}]=[\mathbb{Q}(\sqrt2 , \sqrt3, \sqrt5):\mathbb{Q}(\sqrt2 , \sqrt3)][\mathbb{Q}(\sqrt2 , \sqrt3):\mathbb{Q}(\sqrt2)][\mathbb{Q}(\sqrt2):\mathbb{Q}]$

It is easy to show each of the intermediary steps is a degree 2 extension, with minimal polynomials $x^2-5$, $x^2-3$, $x^2-2$, all irreducible by Eisenstein with p = 5, 3, 2 respectively. So the total extension has degree

$[\mathbb{Q}(\sqrt2 , \sqrt3, \sqrt5):\mathbb{Q}]=2\cdot2\cdot2=8$

Since you have found a monic polynomial of degree 8 that has $\alpha$ as a root, it must be the minimal polynomial of $\alpha$, hence irreducible. Kind of roundabout, but hope it helps.
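As a sanity check on the thread, the claim that $\alpha=\sqrt2+\sqrt3+\sqrt5$ is actually a root of $x^8-40x^6+352x^4-960x^2+576$ can be verified with exact arithmetic. The sketch below (our own, not from the thread; `mul` and `add` are hypothetical helpers) represents an element of $\mathbb{Q}(\sqrt2,\sqrt3,\sqrt5)$ by its rational coefficients on the basis $\sqrt{2^a 3^b 5^c}$ with $a,b,c\in\{0,1\}$, and multiplies symbolically:

```python
from fractions import Fraction

# A field element is a dict mapping (a, b, c) in {0,1}^3 to the rational
# coefficient of sqrt(2^a * 3^b * 5^c).

def mul(x, y):
    """Multiply two elements of Q(sqrt2, sqrt3, sqrt5) exactly."""
    out = {}
    for k1, v1 in x.items():
        for k2, v2 in y.items():
            # Radical exponents add mod 2; each completed square of
            # 2, 3 or 5 comes out as a rational factor.
            key = tuple((k1[i] + k2[i]) % 2 for i in range(3))
            factor = (2 ** ((k1[0] + k2[0]) // 2)
                      * 3 ** ((k1[1] + k2[1]) // 2)
                      * 5 ** ((k1[2] + k2[2]) // 2))
            out[key] = out.get(key, Fraction(0)) + v1 * v2 * factor
    return out

def add(x, y, c=Fraction(1)):
    """Return x + c*y."""
    out = dict(x)
    for k, v in y.items():
        out[k] = out.get(k, Fraction(0)) + c * v
    return out

one = {(0, 0, 0): Fraction(1)}
alpha = {(1, 0, 0): Fraction(1), (0, 1, 0): Fraction(1), (0, 0, 1): Fraction(1)}

# Powers alpha^0 .. alpha^8.
powers = [one]
for _ in range(8):
    powers.append(mul(powers[-1], alpha))

# p(alpha) = alpha^8 - 40 alpha^6 + 352 alpha^4 - 960 alpha^2 + 576.
p = powers[8]
for coeff, deg in [(-40, 6), (352, 4), (-960, 2), (576, 0)]:
    p = add(p, powers[deg], Fraction(coeff))

assert all(v == 0 for v in p.values())  # p(alpha) is exactly zero
```

Since all eight basis coefficients vanish, $p(\alpha)=0$ exactly; combined with the degree-8 tower argument in the reply, $p$ is the minimal polynomial.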
Source: http://mathhelpforum.com/advanced-algebra/84901-irreducibility.html
Posts about Laplace on (Roughly) Daily

Posts Tagged ‘Laplace’

Computersherpa at DeviantART has taken the collected wisdom at TV Tropes and that site’s “Story Idea Generator” and organized them into an amazing Periodic Table of Storytelling… click here (and again) for a larger image [TotH to Brainpickings]

Along these same lines, readers might also be interested in the “Perpetual Notion Machine” (which includes, as a bonus, the story of Dmitri Mendeleev and the “real” Periodic Table…)

See also the Periodic Table of Typefaces (“‘There are now about as many different varieties of letters as there are different kinds of fools…’“) and the Periodic Table of Visualization Methods (“Now See Here…“).

As we constructively stack our writers’ blocks, we might wish a thoughtful Happy Birthday to Immanuel Kant; he was born on this date in 1724 in Königsberg, Prussia (which is now Kaliningrad, Russia). Kant is of course celebrated as a philosopher, the author of Critique of Pure Reason (1781), Critique of Practical Reason (1788), and Critique of Judgment (1790), and father of German Idealism (et al.). But less well remembered are the contributions he made to science, perhaps especially to astronomy, before turning fully to philosophy. For example, his General History of Nature and Theory of the Heavens (1755) contained three anticipations important to the field:

1) Kant made the nebula hypothesis ahead of Laplace.
2) He described the Milky Way as a lens-shaped collection of stars that represented only one of many “island universes,” later shown by Herschel.
3) He suggested that friction from tides slowed the rotation of the earth, which was confirmed a century later.

Similarly, Kant’s writings on mathematics were cited as an important influence by Einstein.

Gary Foshee, a collector and designer of puzzles from Issaquah near Seattle, walked to the lectern to present his talk. It consisted of the following three sentences: “I have two children. One is a boy born on a Tuesday. 
What is the probability I have two boys?”

The event was the Gathering for Gardner [see here], a convention held every two years in Atlanta, Georgia, uniting mathematicians, magicians and puzzle enthusiasts. The audience was silent as they pondered the question.

“The first thing you think is ‘What has Tuesday got to do with it?’” said Foshee, deadpan. “Well, it has everything to do with it.” And then he stepped down from the stage.

Read the full story of the conclave– held in honor of the remarkable Martin Gardner, who passed away last year, and in the spirit of his legendary “Mathematical Games” column in Scientific American– in New Scientist… and find the answer to Gary’s puzzle there– or after the smiling professor below.

“I have two children. One is a boy born on a Tuesday. What is the probability I have two boys?”… readers may hear a Bayesian echo of the Monty Hall Problem on which (R)D has mused before:

The first thing to remember about probability questions is that everyone finds them mind-bending, even mathematicians. The next step is to try to answer a similar but simpler question so that we can isolate what the question is really asking. So, consider this preliminary question: “I have two children. One of them is a boy. What is the probability I have two boys?”

This is a much easier question. The way Foshee meant it is: of all the families with one boy and exactly one other child, what proportion of those families have two boys? To answer the question you need to first look at all the equally likely combinations of two children it is possible to have: BG, GB, BB or GG. The question states that one child is a boy. So we can eliminate GG, leaving us with just three options: BG, GB and BB. One of these three scenarios is BB, so the probability of two boys is 1/3.

Now we can repeat this technique for the original question. Let’s list the equally likely possibilities of children, together with the days of the week they are born in. 
Let’s call a boy born on a Tuesday a BTu. Our possible situations are:

* When the first child is a BTu and the second is a girl born on any day of the week: there are seven different possibilities.
* When the first child is a girl born on any day of the week and the second is a BTu: again, there are seven different possibilities.
* When the first child is a BTu and the second is a boy born on any day of the week: again there are seven different possibilities.
* Finally, there is the situation in which the first child is a boy born on any day of the week and the second child is a BTu – and this is where it gets interesting. There are seven different possibilities here too, but one of them – when both boys are born on a Tuesday – has already been counted when we considered the first to be a BTu and the second a boy born on any day of the week. So, since we are counting equally likely possibilities, we can only find an extra six possibilities here.

Summing up the totals, there are 7 + 7 + 7 + 6 = 27 different equally likely combinations of children with specified gender and birth day, and 13 of these combinations are two boys. So the answer is 13/27, which is very different from 1/3.

It seems remarkable that the probability of having two boys changes from 1/3 to 13/27 when the birth day of one boy is stated – yet it does, and it’s quite a generous difference at that. In fact, if you repeat the question but specify a trait rarer than 1/7 (the chance of being born on a Tuesday), the probability gets even closer to 1/2. [See UPDATE, below]

As we remember, with Laplace, that “the theory of probabilities is at bottom nothing but common sense reduced to calculus,” we might ask ourselves what the odds are that on this date in 1964 the World’s Largest Cheese would be manufactured for display in the Wisconsin Pavilion at the 1964-65 World’s Fair. 
The 14 1/2′ x 6 1/2′ x 5 1/2′, 17-ton cheddar original– the product of 170,000 quarts of milk from 16,000 cows– was cut and eaten in 1965; but a replica was created and put on display near Neillsville, Wisconsin… next to Chatty Belle, the World’s Largest Talking Cow.

UPDATE: reader Jeff Jordan writes with a critique of the reasoning used above to solve Gary Foshee’s puzzle:

For some reason, mathematicians and non-mathematicians alike develop blind spots about probability problems when they think they already know the answer, and are trying to convince others of its correctness. While I agree with most of your analysis, it has one such blind spot. I’m going to move through a progression of variations on another famous conundrum, trying to isolate these blind spots and eventually get to the point you overlooked.

Bertrand’s Box Paradox: Three identical boxes each have two coins inside: one has two gold coins, one has two silver coins, and one has a silver coin and a gold coin. You open one and pull out a coin at random, without seeing the other. It is gold. What is the probability the other coin is the same?

A first approach is to say there were three possible boxes you could pick, but the information you have rules one out. That leaves two that are still possible. Since you were equally likely to pick either one before picking a coin, the probability that this box is GG is 1/2.

A second approach is that there were six coins that were equally likely, and three were gold. But two of them would have come out of the GG box. Since all three were equally likely, the probability that this box is GG is 2/3.

This appears to be a true paradox because the “same” theoretical approach – counting equally likely cases – gives different answers. The resolution of that paradox – and the first blind spot – is that this is an incorrect theoretical approach to solving the problem. 
You never want to merely count cases; you want to sum the probabilities that each case would produce the observed result. Counting only works when each case that remains possible has the same chance of producing the observed result. That is true when you count the coins, but not when you count the boxes. The probability of producing a gold coin from the GG box is 1, from the SS box is 0, and from the GS box is 1/2. The correct answer is 1/(1+0+1/2)=2/3. (A second blind spot is that you don’t “throw out” the impossible cases, you assign them a probability of zero. That may seem like a trivial distinction, but it helps to understand what probabilities other than 1 or 0 mean.)

This problem is mathematically equivalent to the original Monty Hall Problem: You pick Door #1 hoping for the prize, but before opening it the host opens Door #3 to show that it is empty. Given the chance, what is the probability you win by switching to Door #2? Let D1, D2, and D3 represent where the prize is. Assuming the host won’t open your door, and knows where the prize is so he always opens an empty door, then the probability D2 would produce the observed result is 1, that D3 would is 0, and that D1 is … well, let’s say it is 1/2. Just like before, the probability D2 now has the prize is 1/(1+0+1/2)=2/3.

Why did I waffle about the value of P(D1)? There was a physical difference with the boxes that produced the explicit result P(GS)=1/2. But here the difference is logical (based on the location of the prize) and implicit. Do we really know the host would choose randomly? In fact, if the host always opens Door #3 if he can, then P(D1)=1 and the answer is 1/(1+0+1)=1/2. Or if he always opens Door #2 if he can, P(D1)=0 and the answer is 1/(1+0+0)=1. But if we observe that the host opened Door #2 and assume those same biases, the results reverse. To answer the question, we must assume a value for P(D1). 
Assuming anything other than P(D1)=1/2 implies a bias on the part of the host, and a different answer if he opens Door #2. So all we can assume is P(D1)=1/2, and the answer is again 2/3. That is also the answer if we average the results over many games with the same host (and a consistent bias, whatever it is). The answer most “experts” give is really that average, and it is a blind spot that they are not using all the information they have in the individual game.

We can make the Box Paradox equivalent to this one by making the random selection implicit. Someone looks in the chosen box, and picks out a gold coin. The probability is 2/3 that there is another gold coin if that person picks randomly, 1/2 if that person always prefers a gold coin, and 1 if that person always prefers a silver one. Without knowing the preference, we can only assume this person is unbiased and answer 2/3. Over many experiments, it will also average out to 2/3 regardless of the bias. And this person doesn’t even have to show the coin. If we assume he is truthful (and we can only assume that), the answers are the same if he just says “One coin is gold.”

Finally, make a few minor changes to the Box Paradox. Change “silver” to “bronze.” Let the coins be minted in different years, so that the year embossed on them is never the same for any two. Add a fourth box so that one box has an older bronze coin with a younger gold coin, and one has a younger bronze coin with an older gold coin. Now we can call the boxes BB, BG, GB, and GG based on this ordering. When this someone says “One coin is bronze,” we can only assume he is unbiased in picking what kind of coin to name, and the best answer is 1/(1+1/2+1/2+0)=1/2. If there is a bias, it could be 1/(1+1+1+0)=1/3 or 1/(1+0+0+0)=1, but we can’t assume that. Gee, this sounds oddly familiar, except for the answer. :)

The answer to all of Gary Foshee’s questions is 1/2. His blind spot is that he doesn’t define events, he counts cases. 
An event is a set of outcomes, not an outcome itself. The sample space is the set of all possible outcomes. An event X must be defined by some property such that every outcome in X has that property, *and* every outcome with the property is in X. The event he should use as a condition is not “this family includes a boy (born on a Tuesday),” it is “The father of this family chooses to tell you one of up to two facts in the form ‘my family includes a [gender] (born on a [day]).’” Since most fathers of two will have two different facts of that form to choose from, Gary Foshee should have assigned a probability to each, not merely counted the families that fit the description. The answer is then (1+12P)/(1+26P), where P is the probability he would tell us “one is a boy born on a Tuesday” when only one of his two children fit that description. The only value we can assume for P is 1/2, making the answer (1+6)/(1+13)=1/2. Not P=1 and (1+12)/(1+26)=13/27.

And the blind spot that almost all experts share is that this means the answer to most expressions of the simpler Two Child Problem is also 1/2. It can be different, but only if the problem statement makes two or three points explicit:

1) Whatever process led to your knowledge of one child’s gender had access to both children’s genders (and days of birth).
2) That process was predisposed to mention boys over girls (and Tuesdays over any other day).
3) That process would never mention facts about both children.

When Gary Foshee tells you about one of his kids, #2 is not satisfied. He probably had a choice of two facts to tell you, and we can’t assume he was biased towards “boy born on Tuesday.” Just like Monty Hall’s being able to choose two doors changes the answer from 1/2 to 2/3, Gary Foshee’s being able to choose two facts changes the answer from 13/27 to 1/2. It is only 13/27 if he was forced to mention that fact, which is why that answer is wrong.

Other readers are invited to contribute their thoughts. 
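Both answers in this exchange can be checked by brute-force enumeration. The sketch below (our own, with exact fractions) enumerates all 196 equally likely (sex, weekday) pairs for two children. Conditioning on "the family contains a boy born on a Tuesday" reproduces Foshee's 13/27, while weighting each family by the chance the father would *say* "boy born on a Tuesday" when choosing randomly between his two child-facts (Jeff Jordan's P = 1/2 model) gives 1/2:

```python
from fractions import Fraction
from itertools import product

DAYS = range(7)
children = [(sex, day) for sex in "BG" for day in DAYS]  # 14 outcomes per child
TUESDAY_BOY = ("B", 2)  # index 2 stands in for Tuesday

num = den = Fraction(0)    # conditioning model (Foshee's count)
wnum = wden = Fraction(0)  # random-mention model (Jordan's critique)

for c1, c2 in product(children, repeat=2):  # 196 equally likely families
    both_boys = c1[0] == "B" and c2[0] == "B"
    hits = (c1 == TUESDAY_BOY) + (c2 == TUESDAY_BOY)
    if hits:
        den += 1
        num += both_boys
    # Father reports one of his two child-facts uniformly at random,
    # so P(he says "boy born on a Tuesday") = hits / 2.
    w = Fraction(hits, 2)
    wden += w
    wnum += w * both_boys

print(num / den)    # conditioning on the fact        -> 13/27
print(wnum / wden)  # father chose the fact randomly  -> 1/2
```

The two answers differ precisely because, in the second model, a father of two different children usually had another fact he could have mentioned instead.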
Mark Twain quotes Disraeli: “There are three kinds of lies: lies, damned lies, and statistics”; H.G. Wells avers that “Satan delights equally in statistics and in quoting scripture”; but the remarkable Hans Rosling begs to differ…

Rosling, a physician and medical researcher who co-founded Médecins Sans Frontières (Doctors Without Borders) Sweden and the Gapminder Foundation (with his son and daughter-in-law), and developed the Trendalyzer software that represents national and global statistics as animated interactive graphics (e.g., here), has become a superstar on the lecture circuit. He brings his unique insight and approach to the BBC with The Joy of Stats… It’s above at full length, so takes a while to watch in toto– but odds are that one will enjoy it!

[UPDATE: since this post was published, the full version has been rendered “private”; unless and until it’s reposted in full, the taste above will have to do. Readers in the UK (or readers with VPNs that terminate in the UK) can see the full show soon after it airs on BBC Four on Thursday the 13th on the BBC iPlayer. As a further consolation, here is statistician Andrew Gelman’s “Five Books” interview– his choice of the five best books on statistics– for The Browser.]

As we realize that sometimes we can, after all, count on it, we might recall that it was on this date in 1776 that Thomas Paine (originally anonymously) published his case for the independence of the American Colonies, “Common Sense”… and after all, as Pierre-Simon, marquis de Laplace pointed out (in 1820), “the theory of probabilities is at bottom nothing but common sense reduced to calculus.”

source: University of Indiana
Source: http://roughlydaily.com/tag/laplace/
Another harmonic mean approximation

Martin Weinberg posted on arXiv a revision of his paper, Computing the Bayesian Factor from a Markov chain Monte Carlo Simulation of the Posterior Distribution, that is submitted to Bayesian Analysis. I have already mentioned this paper in a previous post, but I remain unconvinced of the appeal of the paper's method, given that it…

Reaching escape velocity

Sample once from the Uniform(0,1) distribution. Call the resulting value . Multiply this result by some constant . Repeat the process, this time sampling from Uniform(0, ). What happens when the multiplier is 2? How big does the multiplier have to be to force divergence? Try it and see:

  iters = 200
  locations = rep(0, iters)

Those dice aren’t loaded, they’re just strange

I must confess to feeling an almost obsessive fascination with intransitive games, dice, and other artifacts. The most famous intransitive game is rock, scissors, paper. Rock beats scissors. Scissors beats paper. Paper beats rock. Everyone older than 7 seems to know this, but very few people are aware that dice can exhibit this same behavior…

A different way to view probability densities

The standard, textbook way to represent a density function looks like this: Perhaps you have seen this before? (Plot created in R, all source code from this post is included at the end). Not only will you find this plot in statistics books, you’ll also see it in medical texts, sociology, and even economics books.

R is eveRywheRe

R did definitely not start to be THE statistical computing tool. The “two Rs” in far down-under just needed some tool which was not too expensive and structured enough to support the elementary statistics classes filled with hundreds of students. 
Another constraint was the computing lab which was large enough, but “only” filled with Mac…

On particle learning

In connection with the Valencia 9 meeting that started yesterday, and with Hedie‘s talk there, we have posted on arXiv a set of comments on particle learning. The arXiv paper contains several discussions but they mostly focus on the inevitable degeneracy that accompanies particle systems. When Lopes et al. state that is not of interest…

How to Create an R Package in Windows

There are many reasons to create an R package, such as code protection, convenient usage, etc. However, while creating an R package in Unix is not hard, it IS in Windows, as R is designed in a Unix environment which includes a set of compilers, programming utilities, and text-formatting routines, while Windows lacks those. I used to build an R…

Vanilla Rao-Blackwellisation [re]revised

Although the revision is quite minor, it took us two months to complete from the time I received the news in the Atlanta airport lounge… The vanilla Rao-Blackwellisation paper with Randal Douc has thus been resubmitted to the Annals of Statistics. And rearXived. The only significant change is the inclusion of two tables detailing computing…

highlight 0.2-0

I've released version 0.2-0 of highlight to CRAN. This version brings some more additions to the sweave driver that uses highlight to produce nice looking vignettes with color coded R chunks. The driver gains new arguments boxes, bg and border to c…

Betting on Pi

I was reading over at math-blog.com about a concept called numeri ritardatari. This sounds a lot like “retarded numbers” in Italian, but apparently “retarded” here is used in the sense of “late” or “behind” and not in the short bus sense. I barely scanned the page, but I think I got the gist of it:
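The "Reaching escape velocity" excerpt above poses a concrete question. One plausible analysis: each step multiplies the current value by (multiplier × U) with U uniform on (0,1), and since E[log U] = -1, the log of the process drifts by log(multiplier) - 1 per step. So a multiplier of 2 is not enough; the process should diverge only when the multiplier exceeds e ≈ 2.718. A quick simulation sketch (our own, working in log space to avoid floating-point underflow; `log_trajectory` is a hypothetical helper, not from the post):

```python
import math
import random

def log_trajectory(multiplier, steps, seed=1):
    """Simulate x <- Uniform(0, multiplier * x) and return log(x) at the end."""
    rng = random.Random(seed)
    log_x = math.log(rng.random())  # initial draw from Uniform(0, 1)
    for _ in range(steps):
        # x_next = multiplier * x * U, with U ~ Uniform(0, 1),
        # so log x gains log(multiplier) + log(U) each step.
        log_x += math.log(multiplier) + math.log(rng.random())
    return log_x

# Drift per step is log(multiplier) - 1: negative for multiplier = 2,
# positive for multiplier = 3 (the threshold is multiplier = e).
print(log_trajectory(2.0, 5000))  # strongly negative: x collapses toward 0
print(log_trajectory(3.0, 5000))  # strongly positive: x blows up
```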
Source: http://www.r-bloggers.com/search/LaTeX/page/84/
MATHEMATICA BOHEMICA, Vol. 129, No. 4, pp. 399-410 (2004)

The continuous solutions of a generalized Dhombres functional equation

L. Reich, J. Smital, M. Stefankova

Ludwig Reich, Institut für Mathematik, Karl-Franzens-Universität Graz, A-8010 Graz, Austria, e-mail: ludwig.reich@uni-graz.at; Jaroslav Smital, Marta Stefankova, Mathematical Institute, Silesian University, CZ-746 01 Opava, Czech Republic, e-mail: jaroslav.smital@math.slu.cz, marta.stefankova@math.slu.cz

Abstract: We consider the functional equation $f(xf(x))=\varphi(f(x))$ where $\varphi\colon J\rightarrow J$ is a given increasing homeomorphism of an open interval $J\subset(0,\infty)$ and $f\colon(0,\infty)\rightarrow J$ is an unknown continuous function. In a series of papers by P. Kahlig and J. Smital it was proved that the range of any non-constant solution is an interval whose end-points are fixed under $\varphi$ and which contains in its interior no fixed point except for $1$. They also provide a characterization of the class of monotone solutions and prove a necessary and sufficient condition for any solution to be monotone. In the present paper we give a characterization of the class of continuous solutions of this equation: We describe a method of constructing solutions as pointwise limits of solutions which are piecewise monotone on every compact subinterval. And we show that any solution can be obtained in this way. In particular, we show that if there exists a solution which is not monotone then there is a continuous solution which is monotone on no subinterval of a compact interval $I\subset(0,\infty)$.

Keywords: iterative functional equation, equation of invariant curves, general continuous solution

Classification (MSC2000): 39B12, 39B22, 26A18

© 2004–2010 FIZ Karlsruhe / Zentralblatt MATH for the EMIS Electronic Edition
Source: http://www.kurims.kyoto-u.ac.jp/EMIS/journals/MB/129.4/6.html
California Math Standards - 7th Grade

MathScore aligns to the California Math Standards for 7th Grade. The standards appear below along with the MathScore topics that match. If you click on a topic name, you will see sample problems at varying degrees of difficulty that MathScore generated. When students use our program, the difficulty of the problems will automatically adapt based on individual performance, resulting in not only true differentiated instruction, but a challenging game-like experience. View the California Math Standards at other levels.

Number Sense

1.0 Students know the properties of, and compute with, rational numbers expressed in a variety of forms:
1.1 Read, write, and compare rational numbers in scientific notation (positive and negative powers of 10) with approximate numbers using scientific notation. (Scientific Notation)
1.2 Add, subtract, multiply, and divide rational numbers (integers, fractions, and terminating decimals) and take positive rational numbers to whole-number powers. (Integer Addition, Integer Subtraction, Positive Integer Subtraction, Integer Multiplication, Integer Division)
1.3 Convert fractions to decimals and percents and use these representations in estimations, computations, and applications. (Percentages, Percentage Pictures, Percent of Quantity)
1.4 Differentiate between rational and irrational numbers.
1.5 Know that every rational number is either a terminating or repeating decimal and be able to convert terminating decimals into reduced fractions. (Decimals To Fractions, Fractions to Decimals, Compare Mixed Values, Positive Number Line, Repeating Decimals)
1.6 Calculate the percentage of increases and decreases of a quantity. 
(Percentage Change)
1.7 Solve problems that involve discounts, markups, commissions, and profit and compute simple and compound interest. (Purchases At Stores, Commissions, Simple Interest, Compound Interest)

2.0 Students use exponents, powers, and roots and use exponents in working with fractions: (Exponents Of Fractional Bases, Negative Exponents Of Fractional Bases)
2.1 Understand negative whole-number exponents. Multiply and divide expressions involving exponents with a common base. (Negative Exponents Of Fractional Bases, Multiplying and Dividing Exponent Expressions, Exponent Rules For Fractions)
2.2 Add and subtract fractions by using factoring to find common denominators. (Fraction Addition, Fraction Subtraction)
2.3 Multiply, divide, and simplify rational numbers by using exponent rules. (Exponent Rules For Fractions)
2.4 Use the inverse relationship between raising to a power and extracting the root of a perfect square integer; for an integer that is not square, determine without a calculator the two integers between which its square root lies and explain why. (Estimating Square Roots)
2.5 Understand the meaning of the absolute value of a number; interpret the absolute value as the distance of the number from zero on a number line; and determine the absolute value of real numbers. (Absolute Value 1, Absolute Value 2)

Algebra and Functions

1.0 Students express quantitative relationships by using algebraic terminology, expressions, equations, inequalities, and graphs:
1.1 Use variables and appropriate operations to write an expression, an equation, an inequality, or a system of equations or inequalities that represents a verbal description (e.g., three less than a number, half as large as area A). (Phrases to Algebraic Expressions, Age Problems, Algebraic Sentences 2, Algebraic Sentences)
1.2 Use the correct order of operations to evaluate algebraic expressions such as 3(2x + 5)^2. 
(Order Of Operations, Variable Substitution 2)
1.3 Simplify numerical expressions by applying properties of rational numbers (e.g., identity, inverse, distributive, associative, commutative) and justify the process used. (Many topics align to this standard)
1.4 Use algebraic terminology (e.g., variable, equation, term, coefficient, inequality, expression, constant) correctly. (Algebraic Terms)
1.5 Represent quantitative relationships graphically and interpret the meaning of a specific part of a graph in the situation represented by the graph.

2.0 Students interpret and evaluate expressions involving integer powers and simple roots: (Simplifying Radical Expressions)
2.1 Interpret positive whole-number powers as repeated multiplication and negative whole-number powers as repeated division or multiplication by the multiplicative inverse. Simplify and evaluate expressions that include exponents. (Exponent Basics, Negative Exponents Of Fractional Bases)
2.2 Multiply and divide monomials; extend the process of taking powers and extracting roots to monomials when the latter results in a monomial with an integer exponent. (Simplifying Algebraic Expressions 2, Simplifying Algebraic Expressions, Roots Of Exponential Expressions)

3.0 Students graph and interpret linear and some nonlinear functions:
3.1 Graph functions of the form y = nx^2 and y = nx^3 and use in solving problems. (Nonlinear Functions)
3.2 Plot the values from the volumes of three-dimensional shapes for various values of the edge lengths (e.g., cubes with varying edge lengths or a triangle prism with a fixed height and an equilateral triangle base of varying lengths). (Area And Volume Proportions)
3.3 Graph linear functions, noting that the vertical change (change in y-value) per unit of horizontal change (change in x-value) is always the same and know that the ratio ("rise over run") is called the slope of a graph. 
(Determining Slope ) 3.4 Plot the values of quantities whose ratios are always the same (e.g., cost to the number of an item, feet to inches, circumference to diameter of a circle). Fit a line to the plot and understand that the slope of the line equals the quantities. 4.0 Students solve simple linear equations and inequalities over the rational numbers: 4.1 Solve two-step linear equations and inequalities in one variable over the rational numbers, interpret the solution or solutions in the context from which they arose, and verify the reasonableness of the results. (Single Variable Equations 2 , Single Variable Inequalities ) 4.2 Solve multistep problems involving rate, average speed, distance, and time or a direct variation. (Train Problems ) Measurement and Geometry 1.0 Students choose appropriate units of measure and use ratios to convert within and between measurement systems to solve problems: 1.1 Compare weights, capacities, geometric measures, times, and temperatures within and between measurement systems (e.g., miles per hour and feet per second, cubic inches to cubic centimeters). ( Time Conversion , Volume Conversion , Weight Conversion , Temperature Conversion ) 1.2 Construct and read drawings and models made to scale. 1.3 Use measures expressed as rates (e.g., speed, density) and measures expressed as products (e.g., person-days) to solve problems; check the units of the solutions; and use dimensional analysis to check the reasonableness of the answer. (Distance, Rate, and Time , Train Problems , Work Word Problems ) 2.0 Students compute the perimeter, area, and volume of common geometric objects and use the results to find measures of less common objects. 
They know how perimeter, area, and volume are affected by changes of scale: 2.1 Use formulas routinely for finding the perimeter and area of basic two-dimensional figures and the surface area and volume of basic three-dimensional figures, including rectangles, parallelograms, trapezoids, squares, triangles, circles, prisms, and cylinders. (Parallelogram Area , Rectangular Solids , Circle Area , Circle Circumference , Triangular Prisms , Cylinders , Trapezoids ) 2.2 Estimate and compute the area of more complex or irregular two-and three-dimensional figures by breaking the figures down into more basic geometric objects. (Irregular Shape Areas , Perimeter and Area of Composite Figures ) 2.3 Compute the length of the perimeter, the surface area of the faces, and the volume of a three-dimensional object built from rectangular solids. Understand that when the lengths of all dimensions are multiplied by a scale factor, the surface area is multiplied by the square of the scale factor and the volume is multiplied by the cube of the scale factor. (Area And Volume Proportions ) 2.4 Relate the changes in measurement with a change of scale to the units used (e.g., square inches, cubic feet) and to conversions between units (1 square foot = 144 square inches or [1 ft^2] = [144 in^2], 1 cubic inch is approximately 16.38 cubic centimeters or [1 in^3] = [16.38 cm^3]). (Area and Volume Conversions ) 3.0 Students know the Pythagorean theorem and deepen their understanding of plane and solid geometric shapes by constructing figures that meet given conditions and by identifying attributes of 3.1 Identify and construct basic elements of geometric figures (e.g., altitudes, mid-points, diagonals, angle bisectors, and perpendicular bisectors; central angles, radii, diameters, and chords of circles) by using a compass and straightedge. 
3.2 Understand and use coordinate graphs to plot simple figures, determine lengths and areas related to them, and determine their image under translations and reflections. (Translations and Reflections ) 3.3 Know and understand the Pythagorean theorem and its converse and use it to find the length of the missing side of a right triangle and the lengths of other line segments and, in some situations, empirically verify the Pythagorean theorem by direct measurement. (Pythagorean Theorem ) 3.4 Demonstrate an understanding of conditions that indicate two geometrical figures are congruent and what congruence means about the relationships between the sides and angles of the two figures. ( Congruent And Similar Triangles ) 3.5 Construct two-dimensional patterns for three-dimensional models, such as cylinders, prisms, and cones. 3.6 Identify elements of three-dimensional geometric objects (e.g., diagonals of rectangular solids) and describe how two or more objects are related in space (e.g., skew lines, the possible ways three planes might intersect). Statistics, Data Analysis, and Probability 1.0 Students collect, organize, and represent data sets that have one or more variables and identify relationships among variables within a data set by hand and through the use of an electronic spreadsheet software program: 1.1 Know various forms of display for data sets, including a stem-and-leaf plot or box-and-whisker plot; use the forms to display a single set of data or to compare two sets of data. (Stem And Leaf Plots ) 1.2 Represent two numerical variables on a scatterplot and informally describe how the data points are distributed and any apparent relationship that exists between the two variables (e.g., between time spent on homework and grade level). 1.3 Understand the meaning of, and be able to compute, the minimum, the lower quartile, the median, the upper quartile, and the maximum of a data set. 
(Stem And Leaf Plots ) Mathematical Reasoning 1.0 Students make decisions about how to approach problems: 1.1 Analyze problems by identifying relationships, distinguishing relevant from irrelevant information, identifying missing information, sequencing and prioritizing information, and observing patterns. (Function Tables , Function Tables 2 ) 1.2 Formulate and justify mathematical conjectures based on a general description of the mathematical question or problem posed. 1.3 Determine when and how to break a problem into simpler parts. 2.0 Students use strategies, skills, and concepts in finding solutions: 2.1 Use estimation to verify the reasonableness of calculated results. 2.2 Apply strategies and results from simpler problems to more complex problems. (Many topics align to this standard) 2.3 Estimate unknown quantities graphically and solve for them by using logical reasoning and arithmetic and algebraic techniques. 2.4 Make and test conjectures by using both inductive and deductive reasoning. 2.5 Use a variety of methods, such as words, numbers, symbols, charts, graphs, tables, diagrams, and models, to explain mathematical reasoning. 2.6 Express the solution clearly and logically by using the appropriate mathematical notation and terms and clear language; support solutions with evidence in both verbal and symbolic work. 2.7 Indicate the relative advantages of exact and approximate solutions to problems and give answers to a specified degree of accuracy. 2.8 Make precise calculations and check the validity of the results from the context of the problem. 3.0 Students determine a solution is complete and move beyond a particular problem by generalizing to other situations: 3.1 Evaluate the reasonableness of the solution in the context of the original situation. 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems. 
3.3 Develop generalizations of the results obtained and the strategies used and apply them to new problem situations.
Winston, GA Prealgebra Tutor

Find a Winston, GA Prealgebra Tutor

...I know that takes time and consistency, both of which I am more than willing to provide. I am a former math teacher, and a former teacher educator. I have a bachelor's degree in Applied Math from Brown University, and a Masters and PhD in Cognitive Psychology, also from Brown University.
8 Subjects: including prealgebra, statistics, algebra 1, algebra 2

...Finished my undergrad as the Assistant for Teaching Assistant Development for the entire campus. After undergrad I scored in the 99th percentile on the GMAT and successfully taught GMAT courses for the next 3 years. Helped individuals with the math sections of the GRE, SAT, and ACT.
28 Subjects: including prealgebra, calculus, finance, economics

Teacher of mathematics concepts to students (K through 12 and college levels) for over 20 years. Booker's love for mathematics goes back to his childhood, where he developed his interest in math while working beside his grandfather in a small grocery store. He is a Christian who loves God, life, people and serving others.
5 Subjects: including prealgebra, geometry, algebra 1, algebra 2

I am a Georgia certified educator with 12+ years of experience teaching math. I have taught a wide range of comprehensive math for grades 6 through 12 and have experience prepping students for the EOCT, CRCT, SAT, and ACT. Unlike many others who know the math content, I know how to employ effective instructional strategies to help students understand and achieve mastery.
13 Subjects: including prealgebra, statistics, geometry, SAT math

...They will trace the historical development of the modern atomic theory and explain the current quantum mechanical model of the atom. The periodic table will be defined and explained using the modern atomic theory. The properties of atoms will be explored.
14 Subjects: including prealgebra, chemistry, physics, algebra 2
Oil flows upward in the wick of a lantern because of the liquid property called
A. meniscusity.
B. density.
C. viscosity.
D. capillarity.

Speed is a scalar and velocity is a vector. A scalar has only magnitude, while a vector has both magnitude and direction.

Example: If you are traveling north at 65 miles an hour, your speed is 65 miles an hour; your velocity is 65 miles an hour north.

It gets a little more complicated:
Speed = distance (a scalar) / time
Velocity = displacement (a vector) / time

Example: If you run 5 miles left in an hour and then 5 miles right in an hour, your speed is 10 miles / 2 hours = 5 miles an hour. However, since you end up in the exact same location where you started, your displacement is zero, making your velocity zero as well. Think of it this way: since velocity is a vector, it requires a direction. If I ended up exactly where I started, I have no direction, thus the velocity must be zero.

One more example to clarify: If you ran 6 miles right and 4 miles left in 2 hours, your speed would be 5 miles an hour (10/2 = 5). Your velocity would be 1 mile an hour to the right, since the displacement is 2 miles to the right (6 to the right - 4 to the left = 2 to the right) and the time is 2 hours. Displacement/time = velocity: 2 to the right / 2 hours = 1 mile to the right per hour.

Added 12/1/2012 9:03:05 AM

Weegy: According to Ptolemy's model of the movement of celestial bodies: C. planets orbit in circular paths around the earth.
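The worked examples above reduce to a few lines of arithmetic. Here is a short Python sketch (the function names are mine, purely for illustration), treating rightward displacement as positive:

```python
def speed(total_distance, hours):
    """Speed is a scalar: total path length divided by time."""
    return total_distance / hours

def velocity(displacement, hours):
    """Velocity is a vector: net displacement divided by time.
    One dimension here, with right = positive and left = negative."""
    return displacement / hours

# Run 5 miles left, then 5 miles right, in 2 hours total:
print(speed(5 + 5, 2))      # 5.0 miles per hour
print(velocity(-5 + 5, 2))  # 0.0 -- you end up where you started

# Run 6 miles right and 4 miles left in 2 hours:
print(speed(6 + 4, 2))      # 5.0 miles per hour
print(velocity(6 - 4, 2))   # 1.0 mile per hour to the right
```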
Eat. Run. Code.

Rounding Ain't So Round

This is part 3/10 from my talk "10 Things You Didn't Know About C#", from Utah Code Camp 2014.

Remember learning how to round in elementary school? Most of us learned that the decimal rounds to the nearest whole number, and that half always rounds away from zero. Interestingly, this is not what C# does by default, and being unaware of it might cause unexpected results.

Bankers Rounding

C#'s default method for rounding is called half-to-even rounding, or sometimes "Banker's Rounding". True to the name, this rule states that half will always round to the nearest even number. This type of rounding is more precise over many calculations because it eliminates the bias that rounding away from zero creates. This bias-eliminating property is one of the reasons this type of rounding is recommended in the IEEE 754 standard for rounding in floating-point numbers.

Decimal Truncation

There is one other type of rounding you need to know about from the IEEE spec. It is known as round-toward-zero, but you might know it as truncation. This type of rounding simply removes any decimal, producing a whole number, and is always biased toward zero.

Getting Expected Results

What if you want to round the way you are used to? Fortunately, Microsoft has you partially covered. The System.MidpointRounding enum lists both AwayFromZero and ToEven as valid rounding types. Math.Round() will let you round however you want, but by default will apply half-to-even rounding. Convert.ToInt32() and its variants do not give you the luxury of choosing your rounding type, and will always apply half-to-even rounding.

Let's look at some comparisons. Round the same list of midpoint values with MidpointRounding.AwayFromZero, with MidpointRounding.ToEven, and by truncating toward zero: the same index can produce different answers, and the three approaches produce three different lists!

Moral of the story: be careful in how you choose to handle your decimals. Accuracy matters!

Enumerated Tricks

This is part 2/10 from my talk "10 Things You Didn't Know About C#", from Utah Code Camp 2014.
When I first began seriously programming, I learned Java. Java had some really cool features baked into its implementation of enums, like methods and enum values of arbitrary types. When I switched to C# as my primary language, most of that flexibility was gone, and I missed it. However, after much searching, I have found that there are ways to get some of that Java-like flexibility back.

Backing Types

In stark contrast to Java, C# has only 8 possible backing types that can be used in enums. However, this simplicity can let us do some really cool things. One trick that the language specifies for us is the [Flags] attribute. By specifying this attribute, we tell the compiler to treat each value as a component of a bit field. Most times, we want each bit in the field to represent a particular enum being used, and therefore manually specify each value to be a power of two. To me, this breaks the continuity of an enum. Instead, you can use simple left shifts (1 << 0, 1 << 1, and so on) to get the compiler to do the work for you, and maintain a nice, continuous numbering scheme. It also makes re-ordering bits in the field much easier.

As a consequence of C#'s bit-field abilities, it should also be noted that it is possible to assign any valid value of the enum's backing type to an enum variable. Whoa, that sentence was a mouthful. Here's what I mean: a bit-field variable routinely holds combinations of flags that are not themselves named members, and an explicit cast will store any value of the backing type at all. For bit fields, this is necessary to represent combinations of enum flags together. But it also applies when the enum is not a bit field, allowing arbitrary, un-enumerated values to be assigned via casting.

Enums Are Types, Too

Because enums are types in their own right, we can also write extension methods for them. This lets us almost get Java-like methods for our enums. You can get incredible functionality from these when applied in the right circumstances. In my example, I simply use one to determine the superhero's name.
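As an aside for readers outside the C# world, the two ideas above (power-of-two flag values and enums that carry their own behavior) have a rough analogue in Python's enum.Flag. This is a comparison sketch of my own, not the post's C# code:

```python
from enum import Flag, auto

class Permissions(Flag):
    NONE = 0
    READ = auto()     # 1
    WRITE = auto()    # 2
    EXECUTE = auto()  # 4 -- auto() hands out powers of two for us

    def describe(self):
        # Methods can live on the enum itself; no extension methods needed.
        return f"{self.name} = {self.value}"

rw = Permissions.READ | Permissions.WRITE
print(Permissions.READ in rw)       # True, like C#'s HasFlag()
print(Permissions.EXECUTE in rw)    # False
print(rw.value)                     # 3
print(Permissions.READ.describe())  # READ = 1
```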
Tying it all together

Let's see what using each of these together might look like: with shift-based flag values and extension methods combined, these enums almost look like regular classes! Mission accomplished.

It should also be noted that the HasFlag() method is new in C# 4.0. In earlier versions, you still have to use the & operator to determine whether a flag is set.

(Non) Keywords

This is part 1/10 from my talk "10 Things You Didn't Know About C#" from Utah Code Camp 2014.

C# is one of my favorite languages to program for, and .NET is one of my favorite platforms to target. Why? I find the 1-2 punch of C#/.NET to be an amazingly powerful and flexible system. But sometimes it can be a little too flexible. Edge cases and emergent behaviors are everywhere, and they can provide for some great fun when exploring C#/.NET in depth.

With that intro, let's jump into the first of 10 things you didn't know: you can use language keywords as variable names. How, you might ask? According to the C# specification, we can just use the @ sign in much the same way you would to create verbatim string literals. Just prefix the desired keyword with an "@" (a variable named @class, for instance), and the compiler will happily let you (mis)use keywords as variable names. Strictly speaking, the specification allows the @ prefix on any identifier (it calls these "verbatim identifiers"), but it is only necessary, and only advisable, when the name collides with a keyword.

I would normally advise against this practice in general, but there are two cases where this might be considered an acceptable choice:

1. In ASP MVC, it is often useful to set a 'class' attribute, which means needing to use 'class' as a variable name. (Kudos to whoever pointed this out during the session!)
2. Use it to alleviate C#'s inability to switch on types.

The second case is only listed to clear my conscience, because I have implemented code like that before. Though I'm still on the fence about actually deploying code that looks like this, it's still a neat trick.
Recession probability index rises to 16.9

The Bureau of Economic Analysis reported today that U.S. real GDP grew at an annual rate of 1.3% in the first quarter of 2007, moving our recession probability index up to 16.9%. This post provides some background on how that index is constructed and what the latest move up might signify.

What sort of GDP growth do we typically see during a recession? It is easy enough to answer this question just by selecting those postwar quarters that the National Bureau of Economic Research (NBER) has determined were characterized by economic recession and summarizing the probability distribution of those quarters. A plot of this density, estimated using nonparametric kernel methods, is provided in the following figure (figures here are similar to those in a paper I wrote with UC Riverside Professor Marcelle Chauvet, which appeared last year in Nonlinear Time Series Analysis of Business Cycles). The horizontal axis on this figure corresponds to a possible rate of GDP growth (quoted at an annual rate) for a given quarter, while the height of the curve on the vertical axis corresponds to the probability of observing GDP growth of that magnitude when the economy is in a recession. You can see from the graph that the quarters in which the NBER says that the U.S. was in a recession are often, though far from always, characterized by negative real GDP growth. Of the 45 quarters in which the NBER says the U.S. was in recession, 19 were actually characterized by at least some growth of real GDP.

One can also calculate, as in the blue curve below, the corresponding characterization of expansion quarters. Again, these usually show positive GDP growth, though 10 of the postwar quarters that are characterized by the NBER as part of an expansion exhibited negative real GDP growth.

The observed data on GDP growth can be thought of as a mixture of these two distributions. Historically, about 20% of the postwar U.S. quarters are characterized as recession and 80% as expansion. If one multiplies the recession density in the first figure by 0.2, one arrives at the red curve in the figure below. Multiplying the expansion density (second figure above) by 0.8, one arrives at the blue curve in the figure below. If the two products (red and blue curves) are added together, the result is the overall density for GDP growth coming from the combined contribution of expansion and recession observations. This mixture is represented by the yellow curve in the figure below.

It is clear that if in a particular quarter one observes a very low value of GDP growth such as -6%, that suggests very strongly that the economy was in recession that quarter, because for such a value of GDP growth the recession distribution (red curve) is the most important part of the mixture distribution (yellow curve). Likewise, a very high value such as +6% almost surely came from the contribution of expansions to the distribution.

Intuitively, one would think that the ratio of the height of the recession contribution (the red curve) to the height of the mixture distribution (the yellow curve) corresponds to the probability that a quarter with that value of GDP growth would have been characterized by the NBER as being in a recession. Actually, this is not just intuitively sensible; it turns out to be an exact application of Bayes' Law. The height of the red curve measures the joint probability of observing GDP growth of a certain magnitude and the occurrence of a recession, whereas the height of the yellow curve measures the unconditional probability of observing the indicated level of GDP growth. The ratio between the two is therefore the conditional probability of a recession given an observed value of GDP growth. This ratio is plotted as the red curve in the figure below.

Probability of recession if all we observe is one quarter's GDP growth, as a function of the observed rate of GDP growth.
Adapted from Chauvet and Hamilton (2006)

Such an inference strategy seems quite reasonable and robust, but unfortunately it is not particularly useful: for most of the values one would be interested in, the implication from Bayes' Law is that it's hard to say from just one quarter's value for GDP growth what is going on. However, there is a second feature of recessions that is extremely useful to exploit: if the economy was in an expansion last quarter, there is a 95% chance it will continue to be in expansion this quarter, whereas if it was in a recession last quarter, there is a 75% chance the recession will persist this quarter.

Thus suppose, for example, that we had observed -10% GDP growth last quarter, which would have convinced us that the economy was almost surely in a recession last quarter. Before we saw this quarter's GDP number, we would have thought in that case that there's a 0.75 probability of the recession continuing into the current quarter. In this situation, to use Bayes' Law to form an inference about the current quarter given both the current and previous quarters' GDP, we would weight the mixtures not by 0.2 and 0.8 (the unconditional probabilities of this quarter being in recession and expansion, respectively), but rather by magnitudes closer to 0.75 and 0.25 (the probabilities of being in recession this period conditional on being in recession the previous period). The ratio of the height of the resulting new red curve to the resulting new yellow curve could then be used to calculate the conditional probability of a recession in quarter t based on observations of the values of GDP for both quarters t and t - 1.

Starting from a position of complete ignorance at the start of the sample, we could apply this method sequentially to each observation to form a guess about whether the economy was in a recession at each date given not just that quarter's GDP growth, but all the data observed up to that point.
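The sequential calculation just described can be sketched compactly in code. The persistence probabilities (95% stay-in-expansion, 75% stay-in-recession) and the 20% starting prior come from the text; the normal within-regime densities and their parameters are illustrative stand-ins of mine, not the estimates from the paper:

```python
import math

def normal_pdf(x, mean, sd):
    """Density of a normal distribution; a stand-in for the paper's
    estimated within-regime densities."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

# Illustrative regime densities for annualized GDP growth (%):
def dens_expansion(g):
    return normal_pdf(g, 3.5, 2.5)

def dens_recession(g):
    return normal_pdf(g, -1.0, 2.5)

P_STAY_EXP = 0.95  # from the text: expansions persist with 95% probability
P_STAY_REC = 0.75  # from the text: recessions persist with 75% probability

def filter_probs(growth_series, p_rec=0.2):
    """Forward (filtered) recursion: for each quarter, return
    P(recession | GDP growth observed up to that quarter) via Bayes' Law."""
    out = []
    for g in growth_series:
        # Predicted regime probabilities, given last quarter's inference:
        prior_rec = p_rec * P_STAY_REC + (1.0 - p_rec) * (1.0 - P_STAY_EXP)
        prior_exp = 1.0 - prior_rec
        # Update on this quarter's observed growth (Bayes' Law):
        joint_rec = prior_rec * dens_recession(g)
        joint_exp = prior_exp * dens_expansion(g)
        p_rec = joint_rec / (joint_rec + joint_exp)
        out.append(p_rec)
    return out

print([round(p, 3) for p in filter_probs([3.1, 2.8, -0.5, -4.2, -3.0, 2.5])])
```

Two sharply negative quarters push the filtered probability well above one-half under these stand-in densities, and the persistence terms keep it elevated for a quarter or so afterward, which is qualitatively the behavior described in the post; the backward pass described below would sharpen these inferences further.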
One can also use the same principle, which again is nothing more than Bayes' Law, working backwards in time: if this quarter we see GDP growth of -6%, that means we're very likely in a recession this quarter, and given the persistence of recessions, that raises the likelihood that a recession actually began the period before. The farther back one looks in time, the better the inference one can arrive at. Seeing this quarter's GDP numbers helps me make a much better guess about whether the economy might have been in recession the previous quarter.

We then work through the data iteratively in both directions: start with a state of complete ignorance about the sample, work through each date to form an inference about the current quarter given all the data up to that date, and then use the final value to work backwards to form an inference about each quarter based on GDP for the entire sample.

All this has been described as if we took the properties of recessions and expansions as determined by the NBER as given. However, another thing one can do with this approach is to calculate the probability law for observed GDP growth itself, not conditioning at all on the NBER dates. Once we've done that calculation, we can infer the parameters, such as how long recessions usually last and how severe they are in terms of GDP growth, directly from GDP data alone, using the principle of maximum likelihood estimation. It is interesting that when we do this, we arrive at estimates of the parameters that are in fact very similar to the ones obtained using the NBER dates directly.

What's the point of this, if all we do is use GDP to deduce what the NBER is eventually going to tell us anyway? The issue is that the NBER typically does not make its announcements until long after the fact. For example, the most recent release from the NBER Business Cycle Dating Committee was announced to the public in July 2003.
Unfortunately, what the NBER announced in July 2003 was that the recession had actually ended in November 2001; they are telling us the situation a year and a half after it has happened. Waiting so long to make an announcement certainly has some benefits, allowing time for data to be revised and accumulating enough ex-post data to make the inference sufficiently accurate. However, my research with the algorithm sketched above suggests that it performs quite satisfactorily if we just wait for one quarter's worth of additional data. Thus, for example, with the advance 2007:Q1 GDP data just released, we form an inference about whether a recession might have started in 2006:Q4.

The graph below shows how well this one-quarter-delayed inference would have performed historically. Shaded areas denote the dates of NBER recessions, which were not used in any way in constructing the index. Note moreover that this series is entirely real-time in construction: the value for any date is always based solely on information as it was reported in the advance GDP estimates available one quarter after the indicated date.

Although the sluggish GDP growth rates of the past year have produced quite an obvious move up in the recession probability index, it is still far from the point at which we would conclude that a recession has likely started. At Econbrowser we will be following the procedure recommended in the research paper mentioned above: we will not declare that a recession has begun until the probability rises above 2/3. Once it begins, we will not declare it over until the probability falls back below 1/3. So yes, the ongoing sluggish GDP growth has come to a point where we would worry about it, but no, it is not yet at the point where we would say that a recession has likely begun.

There's more on the new GDP numbers from Street Light, The Big Picture (http://bigpicture.typepad.com/comments/2007/04/gdp_13.html), William Polley, and SCSU Scholars.
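The 2/3 and 1/3 thresholds amount to a simple hysteresis rule, which can be written down directly. This is a sketch of the stated rule, not the code actually used for the index:

```python
def recession_calls(probabilities, start=2/3, end=1/3):
    """Apply the announcement rule stated in the post: declare a recession
    only once the filtered probability rises above 2/3, and declare it
    over only once the probability falls back below 1/3."""
    in_recession = False
    calls = []
    for p in probabilities:
        if not in_recession and p > start:
            in_recession = True
        elif in_recession and p < end:
            in_recession = False
        calls.append(in_recession)
    return calls

# Readings like the current 16.9% trigger no call:
print(recession_calls([0.091, 0.169]))        # [False, False]
# Hysteresis: a dip to 0.5 does not end a recession already declared:
print(recession_calls([0.2, 0.7, 0.5, 0.2]))  # [False, True, True, False]
```

The gap between the two thresholds is what keeps the rule from flip-flopping when the probability hovers near one-half.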
Technorati Tags: GDP, recession probability, recession probability index

22 thoughts on "Recession probability index rises to 16.9"

1. Wcw (trackback): Recession probability index (P+C Chauvet and Hamilton), "Recession probability index, 1967-2007"

2. Charlie Stromeyer: Professor Hamilton, you might be interested to know that the Economic Cycle Research Institute (ECRI) correctly forecast each of the past three U.S. recessions months ahead of time and, just as importantly, they have never issued a false recession call. Right now they are not forecasting a recession, which agrees with what you say. You can check out their website.

3. Ironman: Good overview of the math outlined in your and Marcelle Chauvet's paper; it looks like it might make for a really good background information section in a new forthcoming textbook? That speculation aside, as a heads up: at some point, when I can get enough free time to work on it, I'd really love to get a copy of your spreadsheet with the historical data and calculations for generating the Recession Probability Index chart above. I think I may have finally figured out how to capture the historical series and to incorporate some "what if" elements that would make it a successful online tool.

4. Abhilash Kushwaha: Professor Hamilton, it might be interesting to see an overlay of the Recession Index with your classifications. In addition to shaded regions denoting the dates of NBER recessions, could you also add shaded regions as indicated by your classifications ("we will not declare that a recession has begun until the probability rises above 2/3. Once it begins, we will not declare it over until the probability falls back below 1/3.")?

5. JDH: Ironman, actually it's programmed in GAUSS rather than a spreadsheet.

6. JDH: Abhilash, the information you're asking for is in Table 4 on page 51 of our paper.

7. touche: The fourth chart shows that if real GDP declines by 3%, the probability of a recession is only about 50%? This seems absurd to me. Perhaps I am misinterpreting the chart. But if real growth is -3% in a quarter, the probability that the preceding quarter or the next quarter would have negative growth is only 50%?? It's got to be much higher than 50%!

8. jg: Very nice explanation of your model, Professor. Thank you. I look forward to poring over the March personal consumption details, to see how the 14 individual components came out compared to January and February. I assume the BEA will have the database updated this weekend. Simple time-series model for projecting GDP growth: Q4 '06, 2.6%; Q1 '07, 1.3% -> Q2 '07, 0%?

9. wcw: JDH, thanks as ever for the update. Should you feel like posting pseudocode, I wouldn't stop you. We may not all have GAUSS installed on our boxes, but most of us can download one or another free-software environment in which to play with data. CS, not to offend, but blah, blah, ECRI, blah. Pointing me at them doesn't help me do anything but, in the event that I happen to buy their spiel, give them money. I prefer to work through issues myself. Or do you have a paper of theirs you happen really to like? That could be useful. touche, I don't like to speak for others, but I think the point is that in any quarter in which GDP was -3% on an annualized basis, it may well have been low in the previous quarter, which ups the conditional probability for that, then-current, -3% quarter. Even the conditional estimates based on the advance GDP estimates gave some false alarms (I've been looking to the mid-'90s

10. Ezequiel Martin Camara: Just as an exercise in tea-leaf reading, but isn't the curve at the same level as in 1Q2001? Of the 8 times it has gone this high, only twice has it gone back down without hitting the ceiling…

11. Joseph: Do we need to know this for the test? Actually, I quite enjoyed the presentation.

12. JDH: Touche, in 1981:Q2, U.S. real GDP fell by 3.1%, yet the NBER characterizes this as part of an expansion. Admittedly, -3.1% growth in expansion is a rare event, but a recession is also a rare event, and Bayes' Law combines these two relevant facts in a way that is not always intuitively obvious. The nonparametric estimates of the densities smudge these probabilities further. The parametric inference (which is employed in construction of the actual index) is a little sharper. The figure was used primarily to explain how the general idea works.

13. touche: "In addition, we have also examined the performance of the model when a break in volatility in 1984 is taken into account. Kim and Nelson (1999b), McConnell and Perez-Quiros (2000), and Chauvet and Potter (2001) have found that the US economy became more stable since this date, particularly the quarterly GDP series. When this feature is incorporated in the model the results improve substantially with respect to the last two recessions, which took place after the structural break in volatility. We have nevertheless chosen not to correct for the decrease in volatility in the US economy in order to keep the analysis simple and robust." What would be your current prediction if you excluded data prior to 1984?

14. ChrisA: Thanks JDH, excellent explanation. Love those ex post fact graphs; sitting on the edge of my seat. I ditto the request for code, but would like to see it as a CRAN module. I'm sure you could 'talk' one of your grad students into learning R (open source, blah blah).

15. dk: I'm assuming the data used are the final quarterly growth rates, not the advance numbers. It would be an interesting mind game to know how sensitive the 16.9% probability will be to a revision up or down in the Q1 value. Or even more interesting: assuming the 1.3% rate holds for Q1, what Q2 rate would be needed to push the probability for Q2 above the 2/3 threshold for you to declare Q2 the start of a recession?

16. JDH: Touche, the quote you provide refers to the multivariate monthly model, not the quarterly GDP model used here. I don't think there's enough data using just quarterly GDP if you only start in 1985, and I haven't tried the exercise you ask about.

17. JDH: dk, the data used in the estimation and inference at each date in the sample are the full vintage of data reported as of the quarter after that date. For example, for the 16.9% number describing 2006:Q4, I have used all the GDP data now available, which includes the 2007:Q1 advance estimate, the 2006:Q4 data as reported on April 27, 2007, the 2006:Q3 data as reported on April 27, etc. The previous 9.1% reading for 2006:Q3 was based on all the data available as of Jan 31, 2007. This currently-available-vintage philosophy was used for every point in the graph above. Note that although the advance estimates are somewhat unreliable, the advance 2007:Q1 GDP numbers are only being used to refine the inference about 2006:Q4; the 16.9% probability refers to 2006:Q4. I ran your (and Ironman's) what-if question, and found that if none of the current numbers get revised (fat chance of that!), GDP growth below -1% for the second quarter of 2007 would likely lead us to call a recession start for 2007:Q1.

18. Charlie Stromeyer: wcw, ECRI was founded by Geoffrey H. Moore, who was called the "father of leading economic indicators". They have some books available; note that the third book here is also one of the references in the paper above by Professors Chauvet and Hamilton. ECRI also has some papers. How did you know it was the right time to cover your homebuilder shorts? That was great timing (and I am asking because I follow U.S. housing very closely to try to see if it will impact consumer

19.
Incognitus “I ran your (and Ironman’s) what if question, and found that if none of the current numbers get revised (fat chance of that!), GDP growth below -1% for the second quarter of 2007 would likely lead us to call a recession start for 2007:Q1.” Why are you losing time with a methodology that would make you call for a recesson in 2007:Q1 when you get the data for 2007:Q2, which is, like, in July 2007 or later? Don’t you see the irony? Anyway, given what’s happening to retail and auto sales in April, plus some growth that went from Q2 to Q1, the Recession should be starting about … now. 20. wcw CS, thanks for the tip. I’ll check ‘em out. I didn’t “know”; I decided valuations were no longer far enough above my estimates for a short-sale comfort zone. I got lucky locally. Still, worth nothing that I was a month early. The RUF was in the low 25s when I covered, but spent early April in the 24s. 21. JDH Incognitus, I just think you need the extra data to be confident in your call. It’s a very deliberate decision to wait until July to make the call for 2007:Q1. 22. Ironman JDH, thank you for following up! GAUSS, huh? That would certainly make things more difficult for what I had in mind. It’s a shame that Aptech doesn’t appear to have done much in the way of developing web-based I/O for GAUSS’ applications – it would be cool if it would be possible to just put the recession probability index online directly! There’s other development work I’ll need to do before I can really take on the challenge – if it pans out, I’ll be back in touch (probably months down the line. Barring shocks to the economy, I don’t believe the index would rise enough to become highly relevant to current events in that time.) Incognitus, bear in mind that Professor Hamilton’s and Chauvet’s method would provide a confirmation of recession 12-18 months ahead of the NBER, even if delayed one quarter beyond the quarter for which the probability is calculated. That’s warp speed in comparison!
prime number problem

October 24th 2012, 05:22 PM

Please help me. p is a prime and (p-1)! = p-1 (mod k). Then k = ...
a. p+1
b. $\frac{(p-1)p}{2}$
c. $\frac{(p+1)p}{2}$
d. p+1
e. p^2

October 24th 2012, 08:02 PM

Re: prime number problem
*IF* those are the only choices (I haven't checked that the only possible solution is correct), then just consider what happens when p=5. Choices a and d are the same. Are you sure that there's no mistake in copying the problem? (This would seem to be asking you to use Wilson's Theorem.)

October 25th 2012, 01:31 AM

Re: prime number problem
d. p-2. I don't know how to use Wilson's theorem.

October 25th 2012, 10:43 AM

Re: prime number problem
Wilson's Theorem: (p-1)! = -1 (mod p) if and only if p is a prime.

1) It's what's "expected" to be used to answer this. The use is that, if p is a prime, then -1 = (p-1)! = (p-1)(p-2)! = -(p-2)! (mod p), so that (p-2)! = 1 (mod p), so p divides [(p-2)! - 1]. Ask yourself which k could make this fraction an integer:

$\frac{(p-1)! - (p-1)}{k} = \frac{(p-1)\,[(p-2)! - 1]}{k}$

2) The other approach to this problem doesn't involve a real mathematical understanding, but rather a trick for how to answer math questions on a test. So this isn't the ideal way to do it, but it is a way, given that you trust that your teacher gave you a "correct" problem that didn't have an error. When you're given a multiple-choice question "for any prime p", the answer has to be the same for any prime p. In particular, the answer has to work for the prime p = 5. Only one of those choices, (a)-(e), holds when p = 5, and so that choice must be the correct answer.
Divide Cell Amounts Between Cells Equally - Excel

Hi, I have a chart of numbers, and I need the difference between 2 cells (shown here on the left and right) to be divided up equally between the cells in between them. The number of cells in the centre here is two, but that number will vary. Please see below, where I have manually worked one out to give you an example.
     A        B        C        D
1    13.78    14.62    15.46    16.30
2    15.24                      21.17
3    22.74                      29.09
4    26.84                      35.42
5    32.10                      40.71
6    37.02                      49.84
7    44.60                      57.58
8    49.58                      62.05

There are a lot of numbers to the right and left of these cells, so I can't use those cells in the formula unless it's way over, about 40 columns to the right. Could this be made simpler? I know I'd have to do a formula for every column, but would something like this work? I'd need help translating my suggestion into a formula: how would I write "=D1-A1, divided by 3, plus A1" correctly? This is based on 2 blank columns in the centre, as above; it is basically how I worked it out on the calculator. Would I then write a new formula for the remaining column?

Ask Your Own Question

Hi there, I have an amount in cell C2 that will be broken up into a number of payment installments (selected from a drop-down list) in cell C5. In cells E8, E10, E12, E14, E16 and E18 I have the installment amounts. So installment amount #1 = E8, #2 = E10, #3 = E12, #4 = E14, #5 = E16, and #6 = E18.

What I need to do is: if the amount in C2 is $100 and the number of installments selected in cell C5 is 4, then I need cells E8, E10, E12 and E14 to divide C2 by C5, while cells E16 and E18 remain blank. If I selected the number of installments in C5 to be 6 (which is the maximum amount), then cells E8, E10, E12, E14, E16 and E18 would all get auto-populated with the amount of C2 divided by C5. Hope that makes sense.

Ask Your Own Question

I have constructed a spreadsheet which details the comparison between expenditures generated in maintaining a boat between two partners (Adam and Bill), who share these equally.
This spreadsheet has the following columns:
A: Date
B: Name
C: Purchase
D: Amount
F: Adam's purchase amount divided by 2
G: Bill's contributions
H: Balance between F and G

"Adam" does all the purchasing, and this amount divided by 2 is shown in column F. The other partner, "Bill", has his contributions detailed in column G, and then the totals in F and G are compared to show the balance in column H.

As subsequent purchases and contributions arise and are entered as described, these appear in column C3, C4... etc., with the resultant balance being constantly adjusted in column H but always shown in cell H2. Similarly with the totals shown in columns F and G: these are also adjusted automatically with the data entered in column C but are always shown in cells F2 and G2.

I would like the amounts now shown in cells H2, F2 and G2 to appear on the same row as the latest entry in column C, but I don't know how to arrange this so it happens each time I enter a purchase in column C. Can anyone help with this, please?

Ask Your Own Question

I need to find a formula which will give me the Net Amount and VAT Amount totals. (I am in the UK; VAT is Value Added Tax and it's 17.5%.) I am using bookkeeping ledger books at the moment, but the company will be going computerized in a few weeks and nobody has a clue about spreadsheets really (they can't afford a Sage package!), so although I have a little knowledge, it is not enough! So, I have a set of completed figures in front of me which have been worked out with a calculator - now I want to get the same answers using Excel and formulating cells. I would already have the TOTAL AMOUNT and the NO VAT AMOUNT (No VAT amounts are the element deducted from a sale or work done which is zero-rated for VAT) to work from.

To find the VAT AMOUNT: using a calculator, take the TOTAL AMOUNT, deduct the NO VAT AMOUNT, multiply that by 7, then divide that by 47. This gives you the VAT amount.
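Outside Excel, the calculator steps just described can be sketched in Python (an editorial illustration, not part of the original post; the figures are the poster's own):

```python
def vat_breakdown(total_amount, no_vat_amount):
    """Split a VAT-inclusive total at the 17.5% UK rate into VAT and net.

    The 7/47 fraction is the poster's shortcut: for a 17.5% rate,
    VAT = gross * 17.5 / 117.5 = gross * 7 / 47.
    """
    vatable = total_amount - no_vat_amount
    vat = round(vatable * 7 / 47, 2)
    net = round(total_amount - vat, 2)   # net = total minus VAT
    return vat, net

# The worked figures from the post:
vat, net = vat_breakdown(5745.22, 101.40)
print(vat, net)  # 840.57 and 4904.65
```

The corresponding worksheet formula would be along the lines of =(A2-B2)*7/47, assuming the total amount sits in A2 and the no-VAT amount in B2 (the cell choices here are illustrative, not from the post).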
Using the sums from the figures below, I would enter on the calculator the total sum 5,745.22, minus the no-VAT sum 101.40, which equals 5,643.82; multiply that by 7, which equals 39,506.74; divide that by 47, which equals 840.56893 (rounded up to 840.57). Finding the net amount is easy: just deduct the VAT amount from the total amount.

TOTAL AMOUNT: 5,745.22
NO VAT AMOUNT: 101.40
VAT AMOUNT: 840.57
NET AMOUNT: 4,904.65

I've tried my best with cell referencing, but obviously I'm doing it wrong, because I don't get the right VAT amount of 840.57 - I get something like 5,730.12! There's a nice slice of chocolate cake for those who can advise.

Ask Your Own Question

Not sure how best to explain my problem, which may be why I'm not having any luck finding my answer through Google or the forum search. Can I have multiple cells be equally divided to total a static amount in another cell? I have 4 cells with a value of 25 each that, when totalled, equal 100 (which is the value in another static cell). If I change the value in one of the cells to 10, I want the other 3 cells to automatically update to 30, 30 and 30. Can this be done, and how? I appreciate any and all feedback on this. I'm stumped on this one.

Ask Your Own Question

Hi guys, thanks for your help with this! I am fine with Excel and basic formulas but have no idea how to do this! I have a spreadsheet that keeps track of hours worked each month. I have a target to work each week and would like the spreadsheet to count down the target. So I have a cell that holds the target, a cell with an autosum that counts the hours worked, and a cell that works out the difference. What I would like is a formula that knows that the first x cells have been filled in and then divides the total by the number of remaining cells. Please help!!!
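A sketch of the arithmetic being asked for, assuming a single target and one cell per week (an illustration added here; the layout and numbers are hypothetical, not from the post):

```python
def hours_per_remaining_week(target, hours_logged):
    """Spread the outstanding hours evenly over the weeks not yet filled in.

    hours_logged holds one entry per week; None marks a week not yet
    worked (a blank cell in the spreadsheet).
    """
    worked = sum(h for h in hours_logged if h is not None)
    remaining_weeks = sum(1 for h in hours_logged if h is None)
    if remaining_weeks == 0:
        return 0.0
    return (target - worked) / remaining_weeks

# Target of 40 hours, two of four weeks logged so far:
print(hours_per_remaining_week(40, [12, 10, None, None]))  # 9.0 per remaining week
```

In worksheet terms the same idea could be written with something like =(Target - SUM(range)) / COUNTBLANK(range), where COUNTBLANK counts the weeks not yet entered.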
The bit I am struggling with is how to get Excel to work out that a certain number of the 12 cells are in use, and then to know that it needs to divide the total by the remaining ones. Many thanks for your help!

Ask Your Own Question

Hi all, this is my second question, in continuation of my first (Series of Questions - I). I got the total cost per customer ID. Now these customer IDs correspond to different cities. I need to divide the total cost of each customer ID equally among its cities and then add up the amounts for common cities. I'll give you an example.

I have 3 customer IDs:
CUST001 with total spent amount $300.
CUST002 with total spent amount $100.
CUST003 with total spent amount $500.

CUST001 can spend money in as many cities as he wants; say he spent money in NYC, Seattle and L.A. CUST002 spent his money in NYC. CUST003 spent his money in L.A. and S.F.

My final output should be the combination of these two steps. First, the money is divided equally among each customer's cities:
NYC: $100 (from CUST001)
Seattle: $100 (from CUST001)
L.A.: $100 (from CUST001)
NYC: $100 (from CUST002)
L.A.: $250 (from CUST003)
S.F.: $250 (from CUST003)

Then the amounts for common cities are added:
NYC: $200
L.A.: $350
Seattle: $100
S.F.: $250

In the final output I need not show the customer IDs, only the cities and the total amount spent in them. I have already done this in an Excel sheet using a macro.
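The divide-then-aggregate logic described above can be sketched outside Excel (illustrative Python added here for clarity; it is not a replacement for the poster's VBA, which follows):

```python
from collections import defaultdict

def cost_per_city(customer_costs, customer_cities):
    """Divide each customer's total equally among their cities, then
    sum the shares that land in the same city."""
    totals = defaultdict(float)
    for cust, amount in customer_costs.items():
        cities = customer_cities[cust]
        share = amount / len(cities)      # equal split per city
        for city in cities:
            totals[city] += share        # aggregate common cities
    return dict(totals)

costs = {"CUST001": 300, "CUST002": 100, "CUST003": 500}
cities = {"CUST001": ["NYC", "Seattle", "L.A."],
          "CUST002": ["NYC"],
          "CUST003": ["L.A.", "S.F."]}
print(cost_per_city(costs, cities))
# {'NYC': 200.0, 'Seattle': 100.0, 'L.A.': 350.0, 'S.F.': 250.0}
```

This reproduces the expected totals from the example (NYC $200, L.A. $350, Seattle $100, S.F. $250).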
For division of money:

Private Sub CommandButton5_Click()
    Dim rng As Range
    Dim k As Integer
    Dim kntdups As Integer
    Dim mySheet As Worksheet
    Dim myRange As Range

    Set mySheet = ActiveWorkbook.Worksheets("Customer ID Report")
    Set myRange = mySheet.Range("A1").CurrentRegion
    Set rng = mySheet.Range("A:A")
    k = 1
    kntdups = 0
    Do While rng.Cells(k).Value <> ""
        If rng.Cells(k) = rng.Cells(k + 1) Then
            kntdups = kntdups + 1
            k = k + 1
        Else
            If kntdups >= 1 Then
                With rng
                    For i = k - kntdups To k
                        .Cells(i).Offset(0, 11).Value = (.Cells(i).Offset(0, 10).Value) / (kntdups + 1)
                    Next i
                End With
            Else
                With rng
                    .Cells(k).Offset(0, 11).Value = .Cells(k).Offset(0, 10).Value
                End With
            End If
            kntdups = 0
            k = k + 1
        End If
    Loop
    ActiveWorkbook.Worksheets("Start").Cells(18, 1).Value = "Completed"
End Sub

For addition of money:

Private Sub CommandButton7_Click()
    Dim rng As Range
    Dim k As Integer
    Dim oldk As Integer
    Dim kntdups As Integer
    Dim mySheet As Worksheet
    Dim assetSheet As Worksheet
    Dim myRange As Range
    Dim destrange As Range
    Dim Lr As Long
    Dim rFound As Range

    Set mySheet = ActiveWorkbook.Worksheets("Customer ID Report")
    Set myRange = mySheet.Range("A1").CurrentRegion
    Set assetSheet = ActiveWorkbook.Worksheets("Cities Report")
    Set rng = mySheet.Range("E:E")
    k = 1
    kntdups = 0
    Do While rng.Cells(k).Value <> ""
        If rng.Cells(k) = rng.Cells(k + 1) Then
            kntdups = kntdups + 1
            k = k + 1
        Else
            If kntdups >= 1 Then
                With rng
                    .Cells(k + 1).EntireRow.Insert
                    .Cells(k + 1) = .Cells(k)
                    .Cells(k + 1).Offset(0, 7).FormulaR1C1 = "=SUM(R[-" & kntdups + 1 & "]C:R[-1]C)"
                    .Cells(k + 1).Offset(0, 8).Value = .Cells(k + 1).Offset(0, 7).Value
                End With
                kntdups = 0
                oldk = k + 1
                k = k + 2
            Else
                With rng
                    .Cells(k).Offset(0, 8).Value = .Cells(k).Offset(0, 7).Value
                End With
                kntdups = 0
                oldk = k
                k = k + 1
            End If
            On Error Resume Next
            With assetSheet
                Set rFound = .Columns(1).Find(What:=rng.Cells(oldk).Value, After:=.Cells(1, 1), _
                    LookIn:=xlValues, Lookat:=xlPart, SearchOrder:=xlByRows, _
                    SearchDirection:=xlNext, MatchCase:=False, SearchFormat:=False)
                On Error GoTo 0
                If Not rFound Is Nothing Then
                    rFound.Cells(1, 43).Value = rng.Cells(oldk).Offset(0, 8).Value
                End If
            End With
        End If
    Loop
    ActiveWorkbook.Worksheets("Start").Cells(24, 1).Value = "Completed"
End Sub

If these macros can be modified to work in Access, that is also fine with me, or you can help me find a faster way to do this (these macros are slow). I hope I have made myself clear here. If there are still some questions from your side, please do revert. Thanks a lot in advance for all your help!

Ask Your Own Question

Good evening all, I seem to be in a bit of a bind. I messed up a report that I was running. The end result left my "payment amounts" and "credit amounts" on different rows. I am trying to figure out some way (via VB or a formula) to make the amount in the "Credit Amount" cell appear in the same row, next to the payment amount (that way all of my data is in one single row, not two). My example is attached. The dollar amounts that I wish to move are in bolded red text. The cells that I want those amounts to go into are highlighted in yellow. For example, my very first patient name is "Joe Schmoe" (row 2). His payment amount was $20.00 and his credit amount was $15.00. I want the $15.00 to appear in the "Credit Amount" column right next to the $20.00 payment amount. Can anyone make sense of my question and give me a hand?

P.S. I have Excel 2007, but saved my attachment as .xls for compatibility reasons. If you know of a simple solution that is specific to Excel 2007, please let me know! Thank you so much in advance!

Ask Your Own Question

Ok, I've been using Excel for many years but I'm not considered an expert by any stretch ...
This problem has me stumped, and maybe there is no simple solution... but I'm hoping there is.

I have dollar amounts that I need to distribute amongst varying numbers of columns, without the total distributed being over or under the original amount by any number of cents. When I simply divide the dollar amount by the number of columns, the total of those columns can sometimes be more or less than the original dollar amount by a few cents.

Dollar Amount | # of Columns | A      | B      | C      | All Columns Total
$25.05        | 2            | $12.53 | $12.53 |        | $25.06
$11.47        | 3            | $3.82  | $3.82  | $3.82  | $11.46

$25.05 divided into 2 columns gives $12.53 in Column A and $12.53 in Column B. The total of Columns A and B is $25.06 - over by a penny. $11.47 divided into 3 columns gives $3.82 in Columns A through C. The total of Columns A through C is $11.46 - under by a penny.
say I later enter amounts in cells: H95 through H102 (say, the amount entered in each of these cells is: (1) Is there a way via a macro or VB to add the amounts in (H95:H102) to the already existing amount in (D104)? In the example I explained above ... the total would be: (5+1+1+1+1+1+1+1+1)=13 Thank you! Ask Your Own Question I am trying to use the SUMIF function and it is not working the way I think it should. I have several columns with amounts on a single row. Here is my scenario: If B2=R, then sum cells K2 through X2. The amount in K2 is a summed amount for numbers in cells C2 through J2, while the amounts in L2 through X2 are not summed amounts. This is what my formula looks It is only giving me the amount in cell K2 and not the rest, even though my sum_range in the function formula box says ={70,60,0,0,0,0,0,0,0,0,0,100,0,0} What am I doing wrong??? Ask Your Own Question Ok, I've been using Excel for many years but I'm not considered an expert by any stretch ... This problem has me stumped and maybe there is no simple solution...but I'm hoping there is I have dollar amounts that I need to distribute amongst varying numbers of columns and not have have the total distributed be over or under the original amount by any number of cents. When I simply divide the dollar amount by the number of columns, the total of those columns can sometimes be more or less than the original dollar amount by a few cents. Dollar # of A B C All Columns Amount Columns Total $25.05 2 $12.53 $12.53 $25.06 $11.47 3 $ 3.82 $ 3.82 $ 3.82 $11.46 $25.05 divided into 2 columns gives $12.53 in Column A and $12.53 in Column B. Total of Columns A and B is $25.06. Over by a penny. $11.47 divided into 3 columns gives $3.82 in Columns A through C. Total of Columns A through C is $41.46. Under by a penny. 
I know I could simply always add or take away the pennies from one column, but I would prefer the process to be random or formulated in such a way that the Column to which the extra pennies are added to or taken away from differs in order to be "fair to each column". Man, I hope I explained that properly. Thanks for ANY help with this. Ask Your Own Question Hello All, Suppose If I enter any numeric value in any cell, how can I distribute the values equally in the other cells. For example, in cell A1, if I enter 150, then the values should be equally distributed among three cells B1,C1 and D1 as 50 each. If I enter 90, then it should be 30 each. Your help is much appreciated. Gopala Krishna Rao Ask Your Own Question Hello all, I've tried searching the forums and some have touched on this but not quite. I am trying to show the amount of hours that will be needed over the next few weeks. As you can see in the sheet "Breakdown", I have a spreadsheet with four rows of total hours (B18:B21). Above that, going across (C11:Q11) are the amount of man hours available. These are linked to the cells above that for the number of people and hours available each week. These will be adjusted to fit the number of hours required (i.e. what if we added a sixth person). So down below (C18:Q21), I want to see the hours divided up evenly between the four rows and when one row is completed, the rest of the hours will be divided up evenly among what's left. As the amount of work hours are adjusted, the numbers will adjust automatically. This would mean once a row's hours have been completed, the remainder cells in that row would be left blank. Please let me know if you need more info. Thanks! 
Ask Your Own Question Hi all I'm wanting to do the followiing I have ten cells A1 to A10, I want to atuosum the cells to A11 (easy enough) Problem I have is In A12 I want to divide A11 by the amount of cells that actually have a value in Ie A1 might be the only cell with a value so A12 would divide A11 by 1, I then might add a value to A8 also so I would want A12 to divide A11 by 2 and so on Hope this makes sense Thanks in Advance Ask Your Own Question In my worksheet I have numbers in cells C7 - L35. Is there a way to equally increase all the values (which are different) without re-keying each one by hand? I don't know if I should be looking for a formula some sort of Help, please!! Ask Your Own Question I have a problem that I've had for a while, so far I've always worked around it because I have not been able to solve it in a satisfactory way. Say I want to do a search for the amount of people of a certain age in a column, but I want to be able to vary the amount of cells I look in. So first I might want to look for people aged 15 in A3:A35 and then in A3:A55 to see if there is a difference. Now the optimal way to do this, in my opinion, would be to have a reference that looks like A3:A(B1) and then have the number of the last cell I want to look in in B1, in this case either 35 or 55. Is this at all possible or do you have to change the formula each time you want to look something up? Or is there another way to do the same thing? Ask Your Own Question Summing the difference of multiple cells based on the value of a cell. Does that even make sense. So this is what I'm trying to do. I have dollar amounts in b5 through b256. One dollar amount in each cell. I want to designate a cell in say C260 where I can enter in a random dollar amount and have that dollar amount subtracted from each of the dollar amounts in column b and then total the differences. 
I know I can accomplish this by doing something like this =b5-c260+b6-c260+...etc but I'd have to do that 250 some times and that's just silly. Is there a different way to do it. The goal is to be able to identify a pay rate in c260 which compares itself to the pay rates in column B and then shows how much money we're losing. Ideally I would like to be able to write a formula that only subtracts c260 from numbers that are higher than then number i place in 260 and then add them up. Hmmmmmm? Does that make sense? Ask Your Own Question This is my first post here after "lurking" for some time. I have searched the archive and done Google searches but can't figure out an answer to my question. The snippet below seems to work fine, but it's ugly and I would simply like to know how to streamline it. What is the correct syntax to tell VBA to take some action if a cell value equals any of several values? There must be a more efficient way than retyping "Cells(1,b) = " each time. Thanks in advance for the help! For b = 50 To 10 Step -1 If Cells(1, b) = "GST TAX AMOUNT" Or Cells(1, b) = "QST TAX AMOUNT" Or Cells(1, b) = "HST TAX AMOUNT" Or _ Cells(1, b) = "RPT FULLY EXTRACTED" Or Cells(1, b) = "RPE EXP" Or Cells(1, b) = "E DAT" Or _ Cells(1, b) = "TE" Or Cells(1, b) = "DATE" Or Cells(1, b) = "" Then End If Next b Ask Your Own Question I hope I can explain it well. I can figure out why is not working. Any hints on what I am doing wrong will be appreciate it. I have an excel file like this with Sheet 1 with 66 rows Customer ID Amount purchased Amount paid Amount 1302 $2,049 $2,466 -$417 2245 $1,494 $598 $896 1416 $3,165 $3,165 $0 1512 $2,277 $228 $2,049 1224 $3,268 $1,634 $1,634 I created another Sheet named "Amount", should list two columns Costumer ID and Amount Owed but only those that owed more than $1,000 I don't know why my For each loop is not working, is copying the last amount owed over $1,000 only but not the rest of the amounts or the Customers Id. 
Thank you very much for any help on what I am doing wrong. Here is my code Sub Amounts() Dim cell As Range Dim Amount As Worksheet Dim c As Range Dim CustomerId As Range Range("d3").Value = "Amount" Range("d4").Formula = "=b4-c4" Range("d4").Copy Destination:=Range("d5:d65") ActiveSheet.Name = "Amount" Range("a3").Value = "Customer ID" Range("B3").Value = "Amount Owed" For Each c In Worksheets("Sheet1").Range("d4:D66").Cells If (c.Value) > 1000 Then Range("b4").Value = c 'Range("A4").Value = CustomerId End If Next c End Sub Ask Your Own Question ******** ******************** ************************************************************************> Microsoft Excel - Book2 ___Running: xl2002 XP : OS = Windows XP (F)ile (E)dit (V)iew (I)nsert (O)ptions (T)ools (D)ata (W)indow (H)elp (A)bout A1 = A B C D 1 $ 1,081 $1,087 $1,079 Sheet1 [HtmlMaker 2.42] To see the formula in the cells just click on the cells hyperlink or click the Name box PLEASE DO NOT QUOTE THIS TABLE IMAGE ON SAME PAGE! OTHEWISE, ERROR OF JavaScript OCCUR. Is there a formula that i can insert to will give show me the smallest amount in cell D1 for example from these three amounts I want to show the amount in C1 in cell D1 because C1 is the smallest amount out of the three. Thanks in advance Ask Your Own Question Hello folks, I am trying to minimize the amount I will be rounding off of numbers in an array. I have numbers listed in column K, the same numbers rounded off in column L, and the difference between column K and L, stated absolutely, in column M (see attachment). The numbers in column K were derived from calculations elsewhere in the sheet. By changing the number in F3, the numbers in K through M will vary. My goal is to minimize the amount I have rounded off from the values in the K column. That is, I would like to create a low value in the M cells. Specifically, I would like each cell in M to be less than 0.1. Is there a way to solve for F3, such that each cell in column M is less than 0.1? 
F3 is the only cell I am free to change. Lastly, F3 must be 3 or greater. Any help would be most appreciated. I look forward to learning more from you all.

Ask Your Own Question

Hi there, I have a budgeting spreadsheet I'm developing, and in it is a version of the tax tables. On the left, the rows increase in multiples of 2; this is the Gross Pay column. On the right is the PAYE (the tax to be charged if your pay is the amount on the left), and this goes up by some formulae set by the tax department. I'm looking for a formula which can take your exact gross pay (for example $3) and work out what the exact tax would be.

    Gross Pay | PAYE
    2         | 0.28
    4         | 0.58
    6         | 0.86

In the example, the correct tax amount would be: 0.58 minus 0.28 (because $3 is between 2 and 4), divided by 200 (the difference in cents between 2 and 4 dollars), then multiplied by 100 (the difference in cents between 2 and 3), then adding the original 0.28, giving an answer of 0.43.

I need this to work for all amounts, even cents. For example $2.80: the correct tax amount would be 0.58 minus 0.28, divided by 200 (the difference in cents between 2 and 4 dollars), then multiplied by 80 (the difference in cents between 2 and 2.8), then adding the original 0.28, giving an answer of 0.40.

VB isn't an option as I am working in Numbers for Mac (but if I can find a formula on here I can get the gist of it, and if it doesn't work in Numbers at least I have an idea of how to work it out).

Kindest regards, ps any questions please feel free to ask!
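For what it's worth, the calculation being described is ordinary linear interpolation between bracket rows. Here is a sketch of that logic (my own, not from the thread, and in Python rather than a Numbers formula; the three-row table is the one given above):

```python
def paye(gross, table):
    # table: (gross_pay, paye) rows sorted by gross_pay, here in $2 steps
    for (lo_pay, lo_tax), (hi_pay, hi_tax) in zip(table, table[1:]):
        if lo_pay <= gross <= hi_pay:
            # fraction of the way from lo_pay to hi_pay, applied to the tax step
            return lo_tax + (hi_tax - lo_tax) * (gross - lo_pay) / (hi_pay - lo_pay)
    raise ValueError("gross pay outside the table")

table = [(2, 0.28), (4, 0.58), (6, 0.86)]
print(round(paye(3.0, table), 2))   # 0.43
print(round(paye(2.8, table), 2))   # 0.4
```

In a spreadsheet the same thing is usually done by looking up the bracket row just below the gross pay and then applying the identical "low tax plus fraction of the tax step" arithmetic.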
I have 39 cells (A1-A39). I also have a list of names in (B1-Bx). What I need to do is get Excel to count the names (that part is easy enough) and, depending on the number of names, fill all 39 cells of column A with those names equally. For example, if there are 13 names in the list, each will get 3 cells with its name in; but if there are 10 names, I need it to show 2 names in one cell, as otherwise 1 name would have 6 cells and the rest 5 and a half each. Anyone got any ideas?
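One simple way to sketch the allocation is a round-robin over the cells. This is a hypothetical approach of my own, not from the thread: it puts one name per cell and gives the first few names an extra cell when the count doesn't divide 39, instead of the asker's pair-two-names-in-one-cell scheme:

```python
from collections import Counter

def fill_cells(names, n_cells=39):
    # cell i gets names[i % len(names)]: with 13 names each name
    # appears in exactly 3 cells; with 10 names, the first 9 names
    # get 4 cells each and the last name gets 3.
    return [names[i % len(names)] for i in range(n_cells)]

cells = fill_cells([f"name{i}" for i in range(13)])
print(Counter(cells))  # every one of the 13 names appears 3 times
```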
Each of the 25 balls in a certain box is either red, blue or

jananijayakumar (Manager, Joined: 01 Jul 2010, Posts: 53):

Each of the 25 balls in a certain box is either red, blue or white and has a number from 1 to 10 painted on it. If one ball is to be selected at random from the box, what is the probability that the ball selected will either be white or have an even number painted on it?

(1) The probability that the ball will both be white and have an even number painted on it is 0
(2) The probability that the ball will be white minus the probability that the ball will have an even number painted on it is 0.2

Spoiler: OA

Bunuel (Math Expert, Joined: 02 Sep 2009):

Probability ball is white: P(W); probability ball is even: P(E); probability ball is white and even: P(W&E).

Probability the ball picked is white or even: P(WorE) = P(W) + P(E) - P(W&E).

(1) The probability that the ball will both be white and have an even number painted on it is 0 --> P(W&E) = 0 (no white ball with an even number) --> P(WorE) = P(W) + P(E) - 0. Not sufficient.

(2) The probability that the ball will be white minus the probability that the ball will have an even number painted on it is 0.2 --> P(W) - P(E) = 0.2; multiple values are possible for P(W) and P(E) (0.6 and 0.4 OR 0.4 and 0.2). Cannot determine P(WorE). Not sufficient.

(1)+(2) P(W&E) = 0 and P(W) - P(E) = 0.2 --> P(WorE) = 2P(E) + 0.2 --> multiple answers are possible; for instance: if P(E) = 0.4 (10 even balls) then P(WorE) = 1, but if P(E) = 0.2 (5 even balls) then P(WorE) = 0.6. Not sufficient.

Answer: E.

Hope it's clear.
_________________
NEW TO MATH FORUM? PLEASE READ THIS: ALL YOU NEED FOR QUANT!!! PLEASE READ AND FOLLOW: 11 Rules for Posting!!! RESOURCES: [GMAT MATH BOOK]; 1. Triangles; 2. Polygons; 3. Coordinate Geometry; 4. Factorials; 5. Circles; 6. Number Theory; 7. Remainders; 8. Overlapping Sets; 9. PDF of Math Book; 10. Remainders; 11. GMAT Prep Software Analysis; 12. SEVEN SAMURAI OF 2012 (BEST DISCUSSIONS); 13. Tricky questions from previous years. COLLECTION OF QUESTIONS: PS: 1. Tough and Tricky questions; 2. Hard questions; 3. Hard questions part 2; 4. Standard deviation; 5. Tough Problem Solving Questions With Solutions; 6. Probability and Combinations Questions With Solutions; 7. Tough and tricky exponents and roots questions; 8. 12 Easy Pieces (or not?); 9. Bakers' Dozen; 10. Algebra set; 11. Mixed Questions; 12. Fresh Meat. DS: 1. DS tough questions; 2. DS tough questions part 2; 3. DS tough questions part 3; 4. DS Standard deviation; 5. Inequalities; 6. 700+ GMAT Data Sufficiency Questions With Explanations; 7. Tough and tricky exponents and roots questions; 8. The Discreet Charm of the DS; 9. Devil's Dozen!!!; 10. Number Properties set; 11. New DS set. What are GMAT Club Tests? 25 extra-hard Quant Tests

jananijayakumar: But how can this be solved in less than 2 mins???

Bunuel: You can solve this problem in another way. Transform the probabilities into actual numbers and draw a table.

[attachment 1.JPG: count table]

So we are asked to calculate (W + E - W&E)/25 (we subtract W&E so as not to count twice the even balls which are white).

(1) The probability that the ball will both be white and have an even number painted on it is 0 --> W&E = 0. Not sufficient.

[attachment 4.JPG: count table]

(2) The probability that the ball will be white minus the probability that the ball will have an even number painted on it is 0.2 --> W - E = 0.2*25 = 5. Not sufficient.

[attachment 2.JPG: count table]

(1)+(2): still not sufficient.

[attachment 3.JPG: count table]

Answer: E.

(Manager, Joined: 27 May 2010): great explanation bunuel.

jananijayakumar: Wonderful Bunuel.

BlitzHN (Manager, Joined: 07 Aug 2010), quoting Bunuel's step "(1)+(2) P(W&E) = 0 and P(W) - P(E) = 0.2 --> P(WorE) = 2P(E) + 0.2": how did you get this?

Bunuel: From (1): P(WorE) = P(W) + P(E) - 0 --> P(WorE) = P(W) + P(E). From (2): P(W) - P(E) = 0.2 --> P(W) = P(E) + 0.2. Substituting into the first equation: P(WorE) = (P(E) + 0.2) + P(E) = 2P(E) + 0.2.

BlitzHN: very cool.. thanks!
morya003 (Manager, Joined: 25 Dec 2011, GMAT Date: 05-31-2012):

Hi. But can we not say that 12 out of the 25 balls were even? If we can, then we already get the answer with only B! Thanks & Regards

morya003:

Don't know why I have already put 1000 dollars behind GMAT - lolz. I calculated 12 even numbers as follows: the first 10 balls can be painted with 5 even numbers, viz. 2, 4, 6, 8, 10; likewise for the next 10: 2, 4, 6, 8, 10; and the next 5: 2, 4. Then from Statement B, P(W) - P(E) = 0.2, so P(W) = 0.2 + P(E), with P(E) = 12/25 and P(W) = 0.2 + 12/25.

petrifiedbutstanding (Senior Manager, Joined: 19 Oct 2010, Location: India), quoting Bunuel's solution above:

I found this method simple although it might look complicated. Took me exactly 2 minutes.

(Joined: 06 Feb 2013, Posts: 60):

The interesting part about this explanation is the particularly helpful expression "multiple values are possible for P(W) and P(E) (0.6 and 0.4 OR 0.4 and 0.2). Cannot determine P(WorE)." I did not think of that at all (I just thought, well, this is a minus of probabilities and we need a plus, so not sufficient - I am not even sure I understood any relevance of the 2nd option) when I finally thought of the problem in the same way. Now, maybe it is quite late here and my brain refuses to come up with something.
_________________
There are times when I do not mind kudos...I do enjoy giving some for help

(Status: Tougher times ..., Joined: 04 Nov 2012, Location: India):

Each of the 25 balls in a certain box is red, blue or white and has a number from 1 to 10 painted on it. If one ball is selected at random from the box, what is the probability that the ball selected will either be white or have an even number painted on it?

a. The probability that the ball selected will be white and have an even number painted on it is 0
b. The probability that the ball will be white minus the probability that the ball will have an even number painted on it is 0.2
_________________
Kudos is a boost to participate actively and contribute more to the forum

Stiv (Senior Manager, Joined: 16 Feb 2012, Concentration: Finance), quoting Bunuel's solution above:

You say in the second statement that multiple values are possible (0.6 and 0.4 or 0.4 and 0.2). Are these the only values possible? If they are, please explain why. Why can they not be 0.3 and 0.1 or 0.5 and 0.3?
_________________
Kudos if you like the post! Failing to plan is planning to fail.

Asifpirlo (Senior Manager, Joined: 10 Jul 2013):

White balls = w, Red = R, Blue = b, total balls = 25. Sum total of even-numbered balls = E.
We have to evaluate w/25 + E/25.
From st(1), we only know there is no white ball which has an even number on it. We still don't know how many even-numbered balls there are among the red and blue balls. So all are in mystery and doubt.
From st(2), w/25 - E/25 = 0.2; again, all unknown.
From both statements, we can assume several different things (I mean a double case). So both are insufficient. Answer is (E).
Asif vai.....
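To see concretely why the two statements together are still insufficient, here is a small numeric check (my own sketch, not from the thread; the two ball distributions are hypothetical but consistent with both statements):

```python
def p_w_or_e(white, even, white_and_even, total=25):
    # P(W or E) = P(W) + P(E) - P(W&E)
    return (white + even - white_and_even) / total

# Distribution A: 15 white balls (all odd-numbered), 10 even balls
# among the red/blue balls.  P(W)=0.6, P(E)=0.4, P(W&E)=0,
# and P(W) - P(E) = 0.2, so both statements hold.
a = p_w_or_e(15, 10, 0)

# Distribution B: 10 white balls (all odd-numbered), 5 even balls
# among the red/blue balls.  P(W)=0.4, P(E)=0.2, P(W&E)=0,
# and again both statements hold.
b = p_w_or_e(10, 5, 0)

print(a, b)  # 1.0 0.6 -- two different answers, so (1)+(2) is insufficient
```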
How many finite simple groups of order $p+1$?

MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required.

I'm looking at finite simple groups of order $p+1$ where $p$ is a prime number. But they don't seem to fall into any classification - have these all been determined? Is the number of them even finite?

[tags: finite-groups, gr.group-theory, nt.number-theory]

Answer (11 votes):

The philosophical point here is that if all you know about a group $G$ is its order $\lvert G \rvert$, then by far the most relevant information is the prime factorization of that order. (Back when sporadic groups were still being discovered, there are anecdotes about phoning John Thompson with the order of your hypothetical new group, and after some calculations he would tell you whether it 'checked out' or not - and of course he would just be using the knowledge of which primes divided the order and to which exponents.) So questions about the prime factorization of $\lvert G \rvert - 1$ are going to be dominated by the (generally unsolved) number-theoretical problems that relate the prime factorization of $n$ and $n+1$, e.g. the existence of infinitely many Sophie Germain primes, Mersenne primes, or primes $p$ for which $\frac{p^2+1}{2}$ is also prime, etc.

Answer (5 votes):

As suggested by the other answers and comments, this is unknown (and a hard arithmetic question). Here's another example that might help indicate why: the order of the finite simple group $PSL_2(F_q)$ is (for $q$ not a power of $2$) $q(q^2-1)/2$. You'd therefore like to know when $q(q^2-1)/2-1$ is a prime, for $q$ a prime power. The question is (at least superficially, I hope we can agree) similar to that of when the Mersenne number $2^n-1$ is prime. For $q=2^n$ a power of $2$ the question boils down to asking when $2^n(2^{2n}-1)-1$ is prime. There are similar formulas for the other simple groups of Lie type, and I'll bet money no one in the world knows whether infinitely many of the relevant numbers are prime.
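As a quick empirical check on this answer (my own sketch, not part of the original page), one can test small prime powers $q$ and see how often $|PSL_2(F_q)| - 1$ comes out prime:

```python
from math import gcd, isqrt

def is_prime(n):
    # trial division; fine for the small numbers below
    if n < 2:
        return False
    return all(n % d for d in range(2, isqrt(n) + 1))

def psl2_order(q):
    # |PSL_2(F_q)| = q(q^2 - 1) / gcd(2, q - 1); the group is simple for q >= 4
    return q * (q * q - 1) // gcd(2, q - 1)

hits = [(q, psl2_order(q) - 1) for q in (4, 5, 7, 8, 9, 11, 13)
        if is_prime(psl2_order(q) - 1)]
print(hits)
# [(4, 59), (5, 59), (7, 167), (8, 503), (9, 359), (11, 659), (13, 1091)]
# (q = 4 and q = 5 both give the alternating group A_5, of order 60)
```

Every small $q$ listed happens to produce a prime here; of course, as the answer says, nothing is known about whether this continues indefinitely.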
Answer (5 votes):

Standard heuristics (together with orders from the list of finite simple groups) suggest that by far the most common orders of the form $p+1$ for $p$ prime will come from $A_1(q) = PSL_2(\mathbb{F}_q)$, of order $\frac{q^3-q}{2}$, as $q$ ranges over odd primes (or prime powers, if you want an additional small contribution). In particular, for large $N$, one should expect roughly $\frac{\sqrt[3]{N}}{(\log N)^\alpha}$ satisfactory numbers $p+1$ less than $N$, for some fixed positive number $\alpha$, and this sequence of numbers certainly grows without bound.

As others have remarked, the question of proving that the set of suitable primes satisfies the rough asymptotics I gave above, or even proving that the set is infinite, seems to be beyond current technology. For example, we still don't know if there are infinitely many primes of the form $n^2+1$ for $n$ an integer.

Answer (0 votes):

As Alon remarks, it is extremely hard to find such groups even between groups of one (most known) series $A_n$.
GATE 2014 - Syllabus for Mathematics (MA) If you are a GATE aspirant having attempted GATE exam in the past and scored: 1. 95+ percentile, please send across a scan or photocopy of your GATE scorecard and we will offer a Rs. 2000 discount on all our courses: classroom, GATEDrive and postal (except test series) 2. 90+ percentile in any of the past GATE, we will be happy to provide you a Rs. 1500 discount on all our courses: classroom, online, GATEDrive and postal (except test series) 3. 85+ percentile in any of the past GATE, we will be happy to provide you a Rs. 1000 discount on all our courses: classroom, online, GATEDrive and postal (except test series) To know more about our courses, click here. To enroll with us, click here. For more details, call us on 09930406349 or email: gate@careeravenues.co.in
Getting to know you nodes

"Getting to know you" nodes (GTKYNs) don't. They don't. Maybe my 1 year of working for Oracle has shaped the way I think, but I see this as a typical relational database / set theory problem:

First, notation (until our browsers and Everything support MathML, we'll have to resort to ASCII): if X and Y are sets, then X * Y is their cross product, and P(X) is the set of all subsets of X.

Let U be the set of all Everything users.
Let T be the set of all possible topics for GTKYNs. (A rather large set, but this is set theory...)
Let A be the set of all possible answers to any of the topics above. (Again, quite a large set.)
Define the special element 0 (zero) of A to denote a non-response.
Let the function a be defined as follows: a : U * T --> A, a(u, t) = the answer that user u gave on topic t.

Ideally, someone who is interested in the answers that the users give will look at this function and find them. But the Everything user interface does not easily allow queries on U * T directly. We'll have to settle on one of the following functionals:

• h : U --> P(T * A), h(u) = {(t, r) in T * A : a(u, t) = r AND r != 0} (For every user, associate the set of all (topic, answer) pairs that the user has answered.) This corresponds to everyone putting all information in their home node.
• g : T --> P(U * A), g(t) = {(u, r) in U * A : a(u, t) = r AND r != 0} (For every topic, associate the set of all (user, answer) pairs corresponding to all users who answered and their respective answers.) This corresponds to long GTKYNs.

Now this becomes an issue of user interface design. If we settle on accessing h, then queries by user ("What is user u interested in?") become easier. If we settle on g, then queries by topic ("What do people think of topic t?") become easier. I personally find the latter more useful, because I am on Everything primarily to gather information, and not to know people.
I am interested in what people think of a particular subject, just like reading a consumer report or review; I don't want to know some particular person's opinions on everything ranging from sex to microwave brands. Strangely, putting all information in the home node is more "getting to know you"-like than GTKYNs themselves!
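The two functionals are easy to express concretely. Here is a sketch with hypothetical data (in Python, with the sparse function a stored as a dict; absent keys play the role of the 0 non-response):

```python
# a : U * T --> A, stored sparsely: missing keys stand for the 0 answer
a = {
    ("alice", "favourite editor"): "vim",
    ("alice", "microwave brand"): "Panasonic",
    ("bob", "favourite editor"): "emacs",
}

def h(u):
    # h(u) = {(t, r) : a(u, t) = r AND r != 0} -- the "home node" view
    return {(t, r) for (user, t), r in a.items() if user == u}

def g(t):
    # g(t) = {(u, r) : a(u, t) = r AND r != 0} -- the "long GTKYN" view
    return {(u, r) for (u, topic), r in a.items() if topic == t}

print(g("favourite editor"))  # {('alice', 'vim'), ('bob', 'emacs')} (set, so order may vary)
```

Either view is derivable from the other only by scanning everything, which is exactly the user-interface trade-off described above: the site can make one access path cheap, not both.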
Introduction To Elementary Particles By David Griffiths Pdf

Download Free eBook: Introduction to Elementary Particles by David Griffiths - Free chm, pdf ebooks rapidshare download, ebook torrents bittorrent download.

INTRODUCTION TO ELEMENTARY PARTICLES. David Griffiths. Reed College. JOHN WILEY & SONS, INC. New York. Singapore.

Historical Introduction to the Elementary Particles (pages 11-53). I'd recommend this book to anyone in the field and anyone lecturing in it. Reading any section will always yield insights, and you can't go wrong.

Introduction to Elementary Particles 2nd edition, by David J. Griffiths gives balance among quantitative rigor and intuitive comprehension, utilizing a lively, casual. In Introduction to Elementary Particles, Second, Revised Edition, author David Griffiths strikes a balance between quantitative rigor and intuitive understanding.

Documents search Engine for PPTS, DOC, PDF, XLS, PPTX etc. (To use study temple's internal search engine - use search box on right top corner) & Recent topics in.

Download Free eBook: Wiley [request_ebook] Introduction to Elementary Particles Solutions Manual by Griffiths, David - Free chm, pdf ebooks rapidshare download, ebook.

Download the Book Introduction to Elementary Particles, 2nd edition, Author David Griffiths, in PDF.

David Griffiths is Professor at the Reed College in Portland, OR. After obtaining his degree and PhD at Harvard, assignments included engagements at several renowned.

Introduction to Elementary Particles has 59 ratings and 8 reviews. Bojan said: One of the most interesting and most intellectually far-reaching areas of.

Alibris has Introduction to Elementary Particles and other books by David Griffiths, including new & used copies, rare, out-of-print signed editions, and more.

Download Introduction to Elementary Particles

Comments: 1

THIS IS VERY GOOD
{"url":"http://zikikaki.jimdo.com/2012/07/01/introduction-to-elementary-particles-by-david-griffiths-pdf/","timestamp":"2014-04-19T19:38:25Z","content_type":null,"content_length":"16636","record_id":"<urn:uuid:afdfa197-f608-4a07-a007-9b24076d2b7e>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00442-ip-10-147-4-33.ec2.internal.warc.gz"}
i need help NOW with my algebra please

i cant even explain what it is... heres one...

If you are solving for b:
b^2 - 48 = 0 (set equal to 0)
b^2 = 48 (drag 48 over)
b = 7 (take square root)
This is if you're solving for b. You didn't really tell what to solve for. But this is just what I'm guessing the question is. Hope that helps!

thats the thing it says nothing but that! :\ im soo frustrated. it just says b^2-48 thats all, no solve for b... & im sorry but i still dont understand.

There's nothing other than b so I'd guess that's what you have to solve for.

Usually when you have a question like this it would say b+2 = 4 or b(squared) = 9. There's no equal sign in your question, so EthanDavis just assumed that it was equal to zero. Then he just did the algebra steps, moved over 48 to the other side and changed sign, and then took the square root. However... the square root of 48 is not 7, the square root of 49 is seven. The square root of 48 is 6.93...

Wow. Silly mistake. That's why I'm better off asking the questions than answering them. ;)

my teacher says this is called factoring trinomials... & there isnt supposed to be a sign... ;[ im sosoo lost can you explain please? ok, so the teacher also said something like you use FOIL and the distributive property & factor. i dont know.. IM SO CONFUSED ;[[

Oh, you're supposed to factor it... well, that would have been helpful.. There's a step by step on using that method on this website: Intermediate Algebra Tutorial on Factoring Trinomials. But are you sure it's not b^2-49?

yeahh its b^2-48. ok heres another one... n^2-16 ? im just as confused & i have a test on this tomorrow! (Worried)

That would be (n+4)(n-4). Just think of this form: $ax^2+bx+c$. A, b and c are just constants.. x is the variable (n in your question). You want something in this form... ( ..... )( ..... ). The first terms in each bracket need to multiply to form the first term in what you started with... so if you have n(squared)-16, you know that n times n equals n(squared), so now you have this: (n ..... )(n ..... ). And then you need to look for two numbers that multiply to form "c", but add together to form b. c is -16, and b is 0 in your example. So 4 and -4 fit the circumstances... So then you have $(n+4)(n-4)$.

With these consider your standard form $ax^2+bx+c$. In this case where there is no "b" term you are going to end up with the form (x+some number)*(x-some number). So in the case $n^2-16$, consider the number 16: can you take the square root and get a whole number? In this case it would be four, so you can factor it thus: $(n+4)(n-4)$. You can test this result by foiling it back out and it should look the same.

ok im understanding this more... [: thanks. so like what about a big one like this: 27t^2+18t+9?

Have you learned the quadratic formula? If you can't do it simply, then you can use the quadratic formula. $\frac{-b \pm \sqrt{b^{2} - 4ac}}{2a}$ But you can't do that here because b^2-4ac gives you a negative value, which you can't take the square root of.. ..so there's no roots for this...

;[ ?? forget it... ill just try at what i think i can do, then guess at the rest. thank you all, [it HAS helped!]

First factor out a nine, then try to think of it as foiling backwards. Your final result should look similar to this: $9(t \pm \mbox{ some number })(t \pm \mbox{ some number })$ I put the "some number" in there because I want you to try it yourself.

k. so is it $9(3t^2+2t+1)$? because 27 divided by 9 is 3, and 18 divided by 9 is 2, and 9 divided by 9 is 1? so what do you do after that?
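The discriminant test mentioned in the thread (checking whether b^2 - 4ac is negative before applying the quadratic formula) can be sketched in a few lines of Python. This is just an illustration; the function name `real_roots` is invented here, not from the thread:

```python
import math

def real_roots(a, b, c):
    """Return the real roots of a*x^2 + b*x + c = 0 (empty list if none)."""
    disc = b * b - 4 * a * c  # the discriminant under the square root
    if disc < 0:
        return []             # negative discriminant: no real roots
    r1 = (-b + math.sqrt(disc)) / (2 * a)
    r2 = (-b - math.sqrt(disc)) / (2 * a)
    return [r1, r2]

# n^2 - 16 factors as (n + 4)(n - 4): the roots are 4 and -4
print(real_roots(1, 0, -16))  # [4.0, -4.0]

# 27t^2 + 18t + 9 = 9(3t^2 + 2t + 1): the discriminant is negative,
# so there are no real roots and it won't factor over the reals
print(real_roots(27, 18, 9))  # []
```

The roots of a factorable quadratic are exactly the values that make each bracket zero, which is why an empty result here matches the "no roots" conclusion in the thread.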
{"url":"http://mathhelpforum.com/algebra/34966-i-need-help-now-my-algebra-please-print.html","timestamp":"2014-04-19T20:25:56Z","content_type":null,"content_length":"15961","record_id":"<urn:uuid:17b4c827-ba8f-4283-996e-5f257bde072b>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00330-ip-10-147-4-33.ec2.internal.warc.gz"}
Conversion Between Binary, Octal, Decimal, and Hexadecimal

1. Conversion between decimal and binary

(1) Decimal to binary is handled in two parts: the integer part and the fractional part.

Integer part — divide by 2 and take the remainders: repeatedly divide the integer by 2, writing down the remainder at each step, and keep dividing the quotient by 2 until the quotient is 0. Then read the remainders back from the last one obtained to the first; the last remainder is the highest-order bit.

Example: convert decimal 168 to binary.

Result: decimal 168 converts to binary as (10101000)2.

Analysis:
Step 1: divide 168 by 2; the quotient is 84, the remainder is 0.
Step 2: divide 84 by 2; the quotient is 42, the remainder is 0.
Step 3: divide 42 by 2; the quotient is 21, the remainder is 0.
Step 4: divide 21 by 2; the quotient is 10, the remainder is 1.
Step 5: divide 10 by 2; the quotient is 5, the remainder is 0.
Step 6: divide 5 by 2; the quotient is 2, the remainder is 1.
Step 7: divide 2 by 2; the quotient is 1, the remainder is 0.
Step 8: divide 1 by 2; the quotient is 0, the remainder is 1.
Step 9: read the result. The last remainder obtained is the highest-order bit, so read the remainders from the last back to the first: 10101000.

(2) Fractional part — multiply by 2 and take the integer digits: multiply the fraction by 2 and take the integer part of the result as the next binary digit; keep multiplying the remaining fractional part by 2 and taking the integer part until the fractional part becomes 0. If it never becomes 0, round to the required number of binary places just as with decimals: binary has only 0 and 1, so a 0 in the next place is dropped and a 1 carries (0 rounds down, 1 rounds up). Read the integer digits in the order they were produced, from first to last.

Example 1: convert 0.125 to binary.

Result: 0.125 converts to binary as (0.001)2.

Analysis:
Step 1: multiply 0.125 by 2 to get 0.25; the integer part is 0, the fractional part is 0.25.
Step 2: multiply 0.25 by 2 to get 0.5; the integer part is 0, the fractional part is 0.5.
Step 3: multiply 0.5 by 2 to get 1.0; the integer part is 1, the fractional part is 0.0.
Step 4: read the digits from first to last: 0.001.

Example 2: convert 0.45 to binary (keep four places after the binary point).

Step 1: 0.45 × 2 = 0.9; integer part 0, fractional part 0.9.
Step 2: 0.9 × 2 = 1.8; integer part 1, fractional part 0.8.
Step 3: 0.8 × 2 = 1.6; integer part 1, fractional part 0.6.
Step 4: 0.6 × 2 = 1.2; integer part 1, fractional part 0.2.
Step 5: 0.2 × 2 = 0.4; integer part 0, fractional part 0.4.

By the fifth multiplication the fractional part is 0.4; multiplying again gives 0.8, then 0.8 × 2 gives 1.6, and so on — the fractional part will never reach 0. In this case we have to round, as with decimal fractions, except that binary has only 0 and 1: a 0 in the next place is dropped and a 1 carries. This rounding is also why computers introduce small errors in such conversions, but because many bits are retained the precision is high and the error can usually be ignored.

So, we can conclude that 0.45 converted to binary is approximately 0.0111.

The above describes converting decimal to binary. Note that:
1) decimal-to-binary conversion handles the integer and fractional parts separately;
2) the integer part is converted by repeated division by 2, taking remainders, while the fractional part is converted by repeated multiplication by 2, taking integer digits;
3) the reading direction differs in the two cases. From the method above, decimal 168.125 converts to binary as 10101000.001, and decimal 168.45 converts to binary as approximately 10101000.0111.

(3) Binary to decimal — for both the integer and fractional parts the method is the same: multiply each binary digit by its place value (weight) and sum the results; the sum is the decimal number.

Example: convert the binary number 101.101 to decimal.

Result: (101.101)2 = 1×2^2 + 0×2^1 + 1×2^0 + 1×2^-1 + 0×2^-2 + 1×2^-3 = 4 + 1 + 0.5 + 0.125 = (5.625)10.

When converting binary to decimal, note that:
1) you need to know the place value (weight) of each binary digit;
2) you need to be able to compute the value contributed by each digit (note that the weights after the binary point are 2^-1, 2^-2, and so on).

Conversion between decimal and octal, and between decimal and hexadecimal, follows the same principles as the binary case.
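The two procedures above map directly onto code. Here is a minimal Python sketch (the function names are just illustrative): the integer part uses divide-by-2/take-remainder, and the fractional part uses multiply-by-2/take-integer, truncating rather than rounding for simplicity:

```python
def int_to_binary(n):
    """Convert a non-negative decimal integer to a binary string
    using the divide-by-2, take-the-remainder method."""
    if n == 0:
        return "0"
    bits = []
    while n > 0:
        bits.append(str(n % 2))  # remainder is the next bit
        n //= 2                  # the quotient carries on to the next step
    return "".join(reversed(bits))  # last remainder is the highest bit

def frac_to_binary(f, places):
    """Convert a decimal fraction in [0, 1) to `places` binary digits
    using the multiply-by-2, take-the-integer-part method."""
    bits = []
    for _ in range(places):
        f *= 2
        bit = int(f)        # the integer part is the next digit
        bits.append(str(bit))
        f -= bit            # keep only the fractional part
    return "".join(bits)

print(int_to_binary(168))        # 10101000
print(frac_to_binary(0.125, 3))  # 001
print(frac_to_binary(0.45, 4))   # 0111
```

For 0.45 the fifth binary digit is 0, so truncating to four places happens to give the same result (0111) as the rounding rule described above.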
{"url":"http://www.codeweblog.com/binary-octal-decimal-hexadecimal-conversion-between/","timestamp":"2014-04-17T01:28:32Z","content_type":null,"content_length":"37530","record_id":"<urn:uuid:fcb61f7b-7ecf-48de-98ec-66d83878607b>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00417-ip-10-147-4-33.ec2.internal.warc.gz"}
Reverse Word Search algebra a form of elementary mathematics, used esp. in solving equations, in which letters stand for unknown or variable quantities. [1/2 definitions] algorithm a completely determined and finite procedure for solving a problem, esp. used in relation to mathematics and computer science. antilogarithm in mathematics, the number that corresponds to a particular logarithm. arithmetic the simplest form of mathematics consisting of the theory and computation of whole numbers, including addition, subtraction, multiplication, division, evolution, and involution. [1/3 definitions] asymptote in mathematics, a straight line that approaches but never meets a curve. autistic savant an autistic person who has an extraordinary gift or ability, as in mathematics, music, or art. axiom in mathematics, a statement, often one regarded as obvious, that is accepted without proof as a basis for proving other statements; postulate. [1/2 definitions] calculus in mathematics, a system of calculation, as of rates of change, lengths, or volumes, involving algebraic notations. [1/2 definitions] Cartesian of or pertaining to Descartes or his philosophy or mathematics. [1/2 definitions] character a mark, letter, or symbol used in an alphabet or in mathematics. [1/7 definitions] closed in mathematics, of or pertaining to a curve that encloses an area and has no end points. [1/6 definitions] commutative in mathematics, interchangeable in order, as the terms in the equation, a + b = b + a. [1/2 definitions] continuum in mathematics, a set with two end points and an infinite number of points between. [1/2 definitions] coordinate in mathematics, of or using coordinates as reference points. [2/11 definitions] corollary in mathematics, a proposition whose proof directly follows in one or a few steps from the proof of another proposition. 
[1/4 definitions] diagonal in mathematics, joining two nonadjacent corners of a polygon or two corners of a polyhedron that are not in the same plane. [1/5 definitions] differential in mathematics, relating to differentials. [2/8 definitions] differential calculus the branch of mathematics that has to do with differentials and their application. distributive in mathematics and logic, of or involving the property by which an operation may be applied to each item in an expression or proposition. [1/4 definitions] divide in mathematics, to determine (a number) by the process of division. [1/8 definitions] equation in mathematics, a statement using an equal sign to assert the equality of two quantities or expressions. [1/4 definitions]
{"url":"http://www.wordsmyth.net/?mode=rs&as_data=mathematics&as_data_cs=any_w","timestamp":"2014-04-16T19:19:41Z","content_type":null,"content_length":"43544","record_id":"<urn:uuid:0da4a8a8-9462-4296-9c6c-813ac18b0029>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Modified Dietz

A reader sent in a question asking about the Modified Dietz Method for reporting portfolio performance. Specifically, her investment statement noted that this was the method used to calculate returns for clients and she just wanted to know if this was normal or not. It's a good question. How many people have ever heard of the Modified Dietz Method before?

Performance Reporting for Investors

You would be surprised how much more (relatively) difficult it is to calculate portfolio performance when you are not dealing with lump sum investments made at the beginning of a reporting period with no distributions (or re-invested distributions). For example, it's really easy if you start the year with $100,000 and at the end of the year you have $110,000 and no distributions were made during the year. It's easy to figure out that you earned 10%. But what happens if your portfolio spat out $10,000 in a dividend on June 30th, and you still ended up with only $110,000 by the end of the year (and the $10,000 was re-invested)? Your $100,000 earned 10% for 6 months, but then your $110,000 earned 0%. Your end performance would therefore have been less than 10% overall.

There are two main categories of calculating and reporting performance: time-weighted returns and dollar-weighted returns (aka money-weighted returns, and both are also used interchangeably [depending on who you ask] with IRR, or internal rate of return).

Time Weighted Returns

Time-weighted returns aren't as precise when you have large cash flows in a portfolio. For example, if a portfolio had three years of 20% returns each year and then 0% return for the next three years, the time-weighted return is 10% over the 6 years (arithmetic return; we'll ignore geometric returns for the purpose of this post). But what if you had $1 invested the first three years and then added $100,000 at the beginning of year 4? At the end of year 6 you would have just under $100,002.
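The numbers in this example are easy to reproduce; here is a quick sketch (assuming, as described, 20% returns in years 1-3, 0% in years 4-6, and the $100,000 deposit landing at the start of year 4):

```python
returns = [0.20, 0.20, 0.20, 0.0, 0.0, 0.0]

# Simple (arithmetic) time-weighted average: 10% per year
avg = sum(returns) / len(returns)

# What actually happens to the money
balance = 1.0
for year, r in enumerate(returns, start=1):
    if year == 4:
        balance += 100_000   # large deposit at the start of year 4
    balance *= 1 + r

print(round(avg, 10))        # 0.1
print(f"{balance:,.2f}")     # 100,001.73 -> "just under $100,002"
```

The $1 grows to $1.728 over the first three years, and the $100,000 earns nothing, which is why the "10% average" badly misrepresents the outcome.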
That certainly doesn’t seem like an average 10% return does it? Dollar Weighted Returns A dollar weighted return would calculate the above as follows: $1 earned 10% for 6 years, and $100,000 earned 0% for three years. Since the bulk of the portfolio did nothing, this would be reflected in the dollar weighted return being really close to 0%. So which is the Modified Dietz? The Modified Dietz is actually somewhat of a dollar weighted return which becomes time-weighted because performance is calculated for sub-periods which are then linked together. Huh? It’s funny if you look it up because some authoritative sources call it time-weighted, and some call it dollar weighted. Perhaps the mathematicians out there (Michael James?) can help us out on this. Here is the wikipedia link for Modified Dietz and the formula. From the formula you can see it is dollar weighted, but by linking the returns between periods (which is a time weighted method on top of the dollar weighted calculation) it is somewhat of a hybrid. Can you just tell me if it’s good or not? Yes, it’s fine most of the time. The only time it will really distort your results is if you have multiple cash flows within one period and the markets are volatile (2008 & 2009 anyone?). This is because the sub-periods require proper portfolio valuations at the beginning and end of those periods and it is very onerous to do so. The Modified Dietz assumes an average rate of return for each sub-period. 
The Modified Dietz is (currently) an accepted methodology for portfolio performance reporting according to the GIPS standards (Global Investment Performance Standards), but it should be noted that GIPS (which is run by the CFA Institute) has recommended that performance reporting start calculating portfolio valuations when large cash flows occur (positive or negative) so that a more accurate rate of return can be calculated (as opposed to just using quarterly or monthly valuations and ignoring large cash flow timing as it stands now). These recommendations are to be adopted in January of 2010 (don't know if it will apply to the reporting of dealer firms to retail clients though).

In the end, Modified Dietz is fine most of the time, but it's not perfect.

1. Doctor Stock says:

Nice article – a simple post for the average reader.

2. Michael James says:

The Modified Dietz method of calculating portfolio return is a reasonably good approximation of the internal rate of return (IRR) in most cases. The IRR is a dollar-weighted measure that is simply the rate of return that causes the net present value of all cash flows to be zero. When the portfolio starting value and ending value are large compared to other cash flows in-between, just about all methods of calculating return give close to the same result. In Preet's example where the added $100,000 is very large compared to the portfolio starting value, the time-weighted return fails miserably, but the Modified Dietz gives a reasonable answer of just a hair over zero percent return. The Modified Dietz is a much better approximation to the IRR than time-weighted measures. Saving computing power isn't much of an excuse any more for using approximations. It's not that much harder to find the IRR than it is to use Modified Dietz, and who cares if a computer is doing the work?
The exception would be if you run an investment company and your software already does Modified Dietz, the last thing you want to have to do is change it. In the vast majority of cases, the IRR and Modified Dietz return are going to be so close that it doesn't really matter.

3. Melissa says:

Thank you Preet, this has been very informative. I especially like the "Can you just tell me ...." section. Also .... Michael James, thanks for the additional comments.

4. Patrick says:

Michael – the IRR is not well-defined if there is a mixture of positive and negative cash flows, and is generally not a good way to compare investments because doing so would give results at odds with expected utility.

5. Michael James says:

It's true that IRR is not always well-defined. However, all methods of calculating return have their warts. For example, suppose that we invest $10,000 initially, it grows to $12,000 by the end of the first year, and we withdraw $11,000. Then the remaining $1000 doubles to $2000 by the end of the 12th year. By any reasonable measure, this is a good, but not spectacular return. However, the Modified Dietz says that the total return over the 12 years is -3600%! The IRR is a more reasonable 14.5% per year. Most of the time, when calculating just a one-year return on a portfolio with only small cash flows relative to the portfolio size, just about any method will give reasonable answers. However, there will always be situations where IRR and IRR estimates like Modified Dietz will fail. Methods involving expected utility and presumed reinvestment rates of cash flows have their problems as well. While they tend to give more stable answers in the more extreme cases, they require that we build in assumptions about utility or expected returns on cash flows. In the end, the best measure will depend on the individual and what they plan to use the calculated return for.

6. Patrick says:

Thanks Michael. Some interesting points to consider.

7.
Alan says:

In Modified Dietz, how do you handle net cash withdrawals in the denominator? For example, if I start with 50,000 and over a 10 year period, 150,000 was deposited but 350,000 was withdrawn and the ending value of investments is 700,000.

Net cash outflow = -200,000
Weighted cash = -125,000

Thank you
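Michael James's figures in comment 5 check out. The sketch below (illustrative code, not from the comments) recomputes his example: the Modified Dietz return with the $11,000 withdrawal weighted at 11/12 of the 12-year period, and the IRR found by a simple bisection search on the net-present-value equation:

```python
def modified_dietz(start_value, end_value, flows):
    # flows: (weight, amount) pairs; weight is the fraction of the
    # period the money was invested, withdrawals are negative
    net = sum(a for _, a in flows)
    gain = end_value - start_value - net
    weighted = start_value + sum(w * a for w, a in flows)
    return gain / weighted

# $10,000 start; $11,000 withdrawn at the end of year 1 of a 12-year
# period (so weighted by the remaining 11/12); $2,000 at the end.
md = modified_dietz(10_000, 2_000, [(11 / 12, -11_000)])
print(round(md, 6))  # -36.0, i.e. -3600%

def npv(rate):
    # cash flows from the investor's point of view
    return -10_000 + 11_000 / (1 + rate) + 2_000 / (1 + rate) ** 12

# Bisection: npv is positive at 0% and negative at 100%,
# and decreases monotonically in between
lo, hi = 0.0, 1.0
for _ in range(100):
    mid = (lo + hi) / 2
    if npv(mid) > 0:
        lo = mid
    else:
        hi = mid
print(round(lo, 3))  # 0.145, i.e. about 14.5% per year
```

The huge negative Modified Dietz figure comes from the weighted-capital denominator going negative (10,000 - 11,000 × 11/12 ≈ -83), which is exactly the kind of failure mode the comment is pointing at.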
{"url":"http://wheredoesallmymoneygo.com/modified-dietz-return-calculations/","timestamp":"2014-04-21T02:41:20Z","content_type":null,"content_length":"42410","record_id":"<urn:uuid:7301660c-e3c9-49f3-9e24-590c5aa93754>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
Chemical Engineering Listing of Course Designations & Rubrics

Chemical Engineering - CHE

2160 Computer Technology for Chemical Engineering Systems (1) F,S Prereq.: MATH 1550. Credit will not be given for this course and CSC 2260, 2262, or IE 2060. Introduction to operating systems, programming techniques, and software packages used in the solution of chemical engineering problems.

2171 Chemical Engineering Fundamentals: Material and Energy Balances (3) F,S Prereq.: MATH 1550 and CHEM 1202. Emphasis on basic principles and concepts used to make chemical engineering calculations; techniques used in these calculations applied to typical industrial problems.

2176 Mathematical Modeling of Chemical Engineering Systems (3) F,S Prereq.: MATH 2065, CHE 2160, and 2171. Basic concepts and techniques in analysis of engineering processes; mathematical description of physical systems and application of modern computers to solution of resulting equations.

3100 Chemical Equilibrium and Kinetics of Environmental Processes (3) F Prereq.: CHE 3172 or ME 3333 or equivalent. Not open to chemical engineering majors. Credit will not be given for both this course and CHE 4190. Introductory chemical thermodynamic concepts extended to heterogeneous equilibrium, dilute solutions, surfaces and colloids of significance in environmental engineering processes; chemical reaction kinetics concepts applied to the environment; applications to waste treatment process design, property estimations for elucidating the fate and transport of chemicals in the environment.

3101 Transport Sciences: Momentum Transfer (3) F Prereq.: CHE 2171, MATH 2065, and credit or registration in CE 2450. Fundamentals of momentum transfer; applications to the fluid problems of

3102 Transport Sciences: Heat and Mass Transfer (4) S Prereq.: CHE 3101, ME 2833, or CE 2200, and MATH 2065. Fundamentals of heat and mass transfer; similarities of heat, mass, and momentum transfer and their interrelation; engineering applications.

3172 Chemical Engineering Thermodynamics (3) F Prereq.: CHE 2171 and credit or registration in CHEM 3491. Basic concepts and chemical engineering applications of thermodynamics; emphasis on flow processes and real gas thermodynamics.

3173 Heterogeneous Equilibrium (3) S Prereq.: CHE 3172. Theory of vapor-liquid, liquid-liquid, and solid-liquid equilibrium, including the effects of chemical reactions; application of thermodynamic theory to the correlation of equilibrium data and the prediction of equilibrium compositions.

3104 Engineering Measurements Laboratory (3) F,S Prereq.: CHE 2176 and credit or registration in CHE 3102. 2 hrs. lecture; 3 hrs. lab. Laboratory work to accompany CHE 3101 and 3102.

3249, 3250 Engineering Practice (1-3, 1-3) Su only Prereq.: consent of instructor. Pass-fail grading. A minimum of 6 weeks of full-time employment by an industry participating in the summer program. Same as ENGR 3049, 3050. Selected engineering problems in an industrial environment.

3271, 3272 Senior Projects (1-2, 1-2) Prereq.: consent of department. Pass-fail grading. Experimental and theoretical investigations including library research.

4151 Unit Operations Design (4) F Prereq.: CHE 3102 and 3173. 3 hrs. lecture; 3 hrs. lab. Unit operations analyzed as applications of chemical engineering fundamentals and transport sciences; use of these principles in design calculations.

4162 Unit Operations Laboratory (2) F,S Prereq.: CHE 3104 and credit or registration in CHE 4151. 6 hrs. lab. Obtaining and interpreting data needed to solve typical problems in design or operation of chemical engineering equipment.

4171 Process Economics and Optimization (3) F Prereq.: credit or registration in CHE 4151. Application of optimization principles to the economic design of chemical engineering unit operations.

4172 Process Design (4) S Prereq.: CHE 4151, 4171, and 4190. 3 hrs. lecture; 3 hrs. lab. Chemical plant design from initial concept through preliminary estimate; flow diagrams, equipment cost estimation, economic analysis, safety, and environmental issues; computer-aided process design.

4190 Chemical Reaction Engineering (3) F Prereq.: CHE 3102 and 3173; or equivalent. Basic principles of reactor design; selection of best design alternatives; achievement of optimum reactor

4198 Process Dynamics (3) S Prereq.: CHE 4151; or equivalent. Principles and practices of process dynamics and automatic control; mathematical modeling of process dynamics, feedback control, and feed forward control.

4204 Technology of Petroleum Refining (3) F Prereq.: Credit or registration in CHE 4151. Catalytic and thermal processes used in petroleum refining; application of scientific and engineering principles in processes such as catalytic cracking, reforming, coking, alkylation, isomerization, and hydroprocessing; emphasis on applied catalysis and its impact on engineering design.

4205 Technology of Petrochemical Industry (3) Prereq.: CHE 4151. Processes used in the manufacture of petroleum-based chemicals; application of scientific and engineering principles involved in the production of hydrogen, alcohols, olefins, aromatics, aldehydes, ketones, acids, rubber, and other polymers; emphasis on catalysis by transition-metal complexes.

4221, 4222 Senior Research (1,2) Prereq.: CHE 3102, 3104, and 3173, gpa of at least 2.8 (in CHE) and consent of instructor. CHE 4221 is prerequisite for 4222. Project chosen in consultation with instructor. Formal proposal and final presentation required. Not open to graduate students. Comprehensive research or development project of a theoretical or experimental nature, involving a team effort over two semesters (fall and spring period).

4253 Introduction to Industrial Pollution Control (3) Prereq.: CHE 3102 or equivalent introductory course in transport science. Quantitative application of chemical engineering principles to removal of objectionable components from effluents, with emphasis on industrial processing effluents; currently available techniques for controlling air and water pollution and solid wastes; concept of pollution control through basic process alterations developed by specific examples.

4260 Biochemical Engineering (3) Prereq.: credit or registration in CHE 4190 or equivalent. Application of chemical engineering fundamentals to microbiological and biochemical systems; problems peculiar to industrial operations involving microbial processes; growth conditions and requirements, metabolisms, product separations, enzyme catalysis, sterilization, and aseptic operations.

4263 Environmental Chemodynamics (3) Prereq.: CHE 3102 or equivalent introductory course in transport science. Environmental chemodynamics: interphase equilibrium, reactions, transport processes and related models for anthropogenic substances across natural interfaces (air-water-sediment-soil) and associated boundary regions.

4270 Processing of Advanced Materials (3) Prereq.: CHE 3102 or equivalent transport course. Treatment of coupled chemical reaction and mass, energy, and momentum transport in the manufacturing and processing of semiconductors and advanced ceramic materials; engineering models for chemical and physical vapor deposition methods and condensed phase processes.

4285 Principles of High Polymers (3) Prereq.: CHE 3172 and CHEM 3491. Solution and solid-state properties of high polymers; microstructure of polymer chains and effect on macromolecular physical properties of the final plastics.

4296 Development of Mathematical Models (3) Prereq.: CHE 2176 and 3102; or equivalent. Mathematical descriptions of systems encountered in chemical engineering developed from basic principles; lumped parameter systems, distributed parameter systems, formulation of ordinary and partial differential equations, continuous and discrete analogs, and matrix formulations; models developed for systems ranging from simple elements to plant-scale.

4410 Special Topics in Chemical Engineering Design (3) May be taken for a max. of 6 sem. hrs. when topics vary. One or more phases of current chemical engineering design.

4420 Special Topics in Chemical Engineering Science (3) May be taken for a max. of 6 sem. hrs. when topics vary. One or more phases of current chemical engineering science.

7110 Mathematical Methods in Chemical Engineering (3) F Review of physicochemical problem formulation; analytical and approximate techniques for the solution of linear and nonlinear differential equation models in chemical engineering systems.

7120 Chemical Engineering Thermodynamics (3) F Thermodynamic properties, first and second laws of thermodynamics, entropy, Maxwell relations, and relationship of thermodynamic properties to intermolecular forces; physical equilibrium with emphasis on partial free energy, fugacity, Raoult's law, K-values, equations of state, and activity coefficients; chemical equilibrium and free energies; fundamentals of statistical mechanics.

7130 Fundamentals of Transport Phenomena (3) S Foundations of heat, mass, and momentum transfer in continua; laminar flow; boundary layer theory; turbulence; buoyancy-induced flows; heat and mass transfer by diffusion, convection, and turbulence.

7140 Chemical Reactor Design Methods (3) S Basic principles of chemical kinetics, fluid flow, heat transfer, and mass transfer used in design of chemical reactors; chemical equilibria, chemical kinetics, design of isothermal reactors, effects of nonideal flow, nonisothermal reactors, and solid-gas catalytic reactions.

7302 Administration of Engineering and Technical Personnel (3) See IE 7642.

7314 Optimization (3) Techniques of optimization including analytical methods, linear and nonlinear programming, geometric and dynamic programming, and variational methods with application to systems of interest to chemical engineers.

7352 Distillation and Other Separation Processes (3) Mathematical models, phase equilibria, and calculation procedures related to design and behavior of distillation columns, absorbers, extractor-settlers, etc.; emphasis on computer techniques.

7512 Advanced Chemical Engineering Analysis (3) Prereq.: CHE 7110 or equivalent. May be taken for a max. of 6 hrs. of credit with consent of department. Topics in chemical engineering analysis, such as perturbation methods, matched asymptotic expansions, vector and tensor calculus, and numerical techniques.

7522 Advanced Chemical Engineering Thermodynamics (3) Prereq.: CHE 7120 or equivalent. May be taken for a max. of 6 hrs. of credit with consent of department. Thermodynamics of chemical engineering processes, such as nonequilibrium thermodynamic properties.

7532 Advanced Chemical Engineering Fluid Mechanics (3) Prereq.: CHE 7130 or equivalent. May be taken for a max. of 6 hrs. of credit with consent of department. Chemical engineering flow processes, such as turbulence, boundary layer theory, hydrodynamic stability, compressible flow, multiphase flow, chemically reacting flows, and non-Newtonian and viscoelastic fluids.

7534 Advanced Chemical Engineering Heat Transfer (3) Prereq.: CHE 7130 or equivalent. May be taken for a max. of 6 hrs. of credit with consent of department. Chemical process heat transfer; phase change and moving boundary problems; heat transfer mechanisms, natural and forced convection, radiation, and combined heat and mass transfer.

7536 Advanced Chemical Engineering Mass Transfer (3) Prereq.: CHE 7130 or equivalent. May be taken for a max. of 6 hrs. of credit with consent of department.
Transport of mass in chemical engineering processes, such as diffusional operations, models for mass transfer in multicomponent, multiphase, stationary, flowing, and reacting systems. 7542 Catalysis (3) Prereq.: CHE 7140 or equivalent. Heterogeneous catalysis; adsorption phenomena, physical methods, solid state spectroscopies, and reaction mechanisms as applicable to fundamental and industrially significant processes. 7544 Chemical Kinetics and Reaction Mechanisms (3) Prereq.: CHE 7140 or equivalent. Gas-phase reactions and modern approach to deduction of reaction mechanism; collision, transition state, RRK, and RRKM theories, bond energy correlations, kinetics of complex reaction systems, fast reactions, computer modeling, and sensitivity analysis. 7572 Advanced Automatic Process Control (3) Prereq.: CHE 4198 or equivalent. Recent developments in control theory applied to control schemes in industrial processes; techniques of state space analysis, nonlinear stability criteria, multivariable control, and system identification. 7574 Digital Control of Processes (3) Prereq.: CHE 4198 or equivalent. Theory and use of digital computers for process control; relationships between computer and process control schemes, control algorithms, valve dynamics, modeling techniques. 7582 Polymerization and Polycondensation Processes (4) Prereq.: CHEM 4160 or 4562 or CHE 4285 or equivalent. 3 hrs. lecture; 3 hrs. demonstration/lab. Also offered as CHEM 7261. Preparation and characterization of high polymers; typical commercial procedures for plastics production. 7592 Design Problems in Chemical Engineering (3) Prior to registration students should discuss a prospective design problem with faculty member under whom they plan to study and obtain departmental approval. Design problem cannot be directly related to student's research. 
Inte- gration of technology into design of systems or plants for accomplishing specific objectives; emphasis on producing a design package considering technical, economic, manning, and scheduling aspects of the project. 7594 Advanced Computer-Aided Process Design (3) Prereq.: CHE 4173 or equivalent. May be taken for a max. of 6 hrs. of credit with consent of department. Computer-aided process design and simulation of chemical process industries, such as sequential modular flow sheeting, simultaneous solution schemes, decomposition strategies, and various simulation languages. 7700 Advanced Topics in Chemical Engineering (3) May be taken for a max. of 9 hrs. of credit with consent of instructor. One or more phases of advanced chemical engineering practice. 8000 Thesis Research (1-12 per sem.) "S"/"U" grading. 9000 Dissertation Research (1-12 per sem.) "S"/"U" grading.
Force in relation to angular momentum

A student is sitting on a spinning stool with a 2 kg dumbbell in each hand. The angular velocity is 3 rad/s with arms stretched out at a radius of 80 cm, and he pulls his arms in to 20 cm. For this problem you're ignoring the student's weight.

From the other parts of the problem I have figured out:
- initial angular velocity: 3 rad/s
- final angular velocity: 48 rad/s
- initial kinetic energy: 11.52 (I don't know what the units are; I'm guessing J)
- final kinetic energy: 184.32

I need to find the force required to pull one of the dumbbells in at a constant speed. Is it F = (initial angular momentum of one weight)^2 / (4 * mass of one weight * radius^3), or do I use the change-in-kinetic-energy equation? Or do I say torque = F*d and torque = I*alpha, and then force = I*alpha/d? Or is there another way to solve this that I'm not seeing?
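For what it's worth, the numbers quoted above check out under conservation of angular momentum if each dumbbell is treated as a point mass and the student's own inertia is ignored; the kinetic energies then come out in joules. A quick script to verify (the masses, radii, and angular velocity are taken from the question; everything else is just arithmetic):

```python
# Sanity check of the figures in the question: two 2 kg dumbbells
# treated as point masses, student's own inertia ignored.
m = 2.0                  # kg, each dumbbell
w_i = 3.0                # rad/s, arms out
r_i, r_f = 0.80, 0.20    # m, radius before and after pulling in

I_i = 2 * m * r_i**2     # moment of inertia, arms out
I_f = 2 * m * r_f**2     # arms in
w_f = I_i * w_i / I_f    # angular momentum conserved: I_i*w_i = I_f*w_f

KE_i = 0.5 * I_i * w_i**2   # kinetic energies come out in joules
KE_f = 0.5 * I_f * w_f**2

print(w_f, KE_i, KE_f)   # approx. 48 rad/s, 11.52 J, 184.32 J
```

This confirms the 48 rad/s, 11.52 J, and 184.32 J figures, but it does not by itself answer the force question.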
Math Forum Discussions

Topic: Order embedding from a poset into a complete lattice
Replies: 2   Last Post: Dec 30, 2012 7:29 PM

Order embedding from a poset into a complete lattice
Posted: Dec 30, 2012 1:40 PM

I asked this question at http://math.stackexchange.com but received no answer.

Let A be an arbitrary poset. Does there necessarily exist an order embedding from A into some complete lattice B which preserves all suprema and infima defined in A?

Victor Porton - http://www.mathematics21.org

12/30/12  Order embedding from a poset into a complete lattice  (Victor Porton)
12/30/12  Re: Order embedding from a poset into a complete lattice  (Butch Malahide)
12/30/12  Re: Order embedding from a poset into a complete lattice  (Rotwang)
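For what it's worth, the answer usually given to this question is yes: the Dedekind-MacNeille completion of A is a complete lattice, and the canonical embedding into it preserves all suprema and infima that already exist in A. A brute-force sketch for finite posets (exponential in |A|, for illustration only):

```python
from itertools import combinations

def dm_completion(elems, leq):
    """Dedekind-MacNeille completion of a finite poset, by brute force.
    elems: iterable of elements; leq(x, y): the order relation.
    Returns the set of 'cuts', i.e. subsets A with A == lower(upper(A)).
    Ordered by inclusion, the cuts form a complete lattice, and the map
    x -> lower({x}) is an order embedding."""
    elems = list(elems)
    upper = lambda A: {u for u in elems if all(leq(a, u) for a in A)}
    lower = lambda A: {l for l in elems if all(leq(l, a) for a in A)}
    cuts = set()
    for r in range(len(elems) + 1):
        for A in combinations(elems, r):
            if lower(upper(A)) == set(A):
                cuts.add(frozenset(A))
    return cuts

# two incomparable elements complete to the 4-element Boolean lattice
print(len(dm_completion({"a", "b"}, lambda x, y: x == y)))  # 4
```

Ordered by inclusion, these cuts form the smallest complete lattice into which the poset order-embeds; MacNeille's original construction is the standard reference for the claim that existing suprema and infima are preserved.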
The evolution of complex information systems as movement against the pull of entropy, measured along information-space-time dimensions.

by Arie S. Issar

Abstract—A new basic dimension is hereby introduced in order to explain the emergence as well as the evolution of ordered complexity in the bio-world. It is suggested that it be defined as "the dimension of information", which relates to all that is measured by a computer (while space is all that is measured by the meter, and time is all that is measured by a clock). It has four degrees of freedom, namely addition and reduction (i.e. '+' & '-') and induction and deduction (i.e. 'if-then' and 'when-then'). The living cell can thus be described as a system on space-time-information dimensions, able to transform sequences of events from the coordinates of space-time to those of information. This is accomplished by the transformation of mechanical and electro-chemical stimuli into ordered complex structures of notions (which are equivalent to points on the dimension of space, and to instants on the dimension of time). The rise in complexity of the structures of notions on the dimension of information enables the promotion of concepts into ideas and into theories. In order to do this, energy has to be invested, because it is a movement opposing the slope of universal entropy. In other words, the universal pull towards infinite disorder=entropy is described as the geometry of the space-time-information continuum, which has eleven degrees of freedom (spatial x, y, z, each of which has two degrees of freedom; temporal, which has one degree of freedom, from past to future; and information, with four degrees).

The full paper is available below: The evolution of complex information systems as movement against the pull of entropy, measured along information-space-time dimensions.

Back to PCID Volumes 1.2 and 1.3
References for traceless and/or imaginary Octonionic matrices?

Hi all. I was wondering if anyone has seen any work related to either traceless matrices of Octonions (with trace defined as the sum of the diagonal) or matrices of pure imaginary Octonions (meaning the real part of the matrix is a matrix of zeros). Thanks in advance.

Tags: rt.representation-theory, ra.rings-and-algebras

Answer: You can put octonions inside a matrix, but this might not be a particularly fruitful thing to do since the octonion algebra is not associative. (So in particular you don't get octonionic Lie algebras or Lie groups.) The trace, for instance, may not have the properties you are familiar with. That being said, my only experience with octonionic matrices is the 27-dimensional exceptional Jordan algebra, described for example here. It consists of $3\times 3$ hermitian octonionic matrices and the Jordan product is given by the symmetrised product.
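The non-associativity mentioned in the answer above is easy to exhibit concretely. Below is a sketch of the Cayley-Dickson doubling construction under one common sign convention, (a,b)(c,d) = (ac - d̄b, da + bc̄); applied twice to the complex numbers it yields the octonions, and the associator of a suitable triple of basis elements comes out nonzero:

```python
# Cayley-Dickson doubling over the complex numbers: a quaternion is a
# pair of complex numbers, an octonion a pair of quaternions.
# Sign convention: (a,b)(c,d) = (a*c - conj(d)*b, d*a + b*conj(c)).

def conj(x):
    if isinstance(x, complex):
        return x.conjugate()
    a, b = x
    return (conj(a), neg(b))

def neg(x):
    if isinstance(x, complex):
        return -x
    return (neg(x[0]), neg(x[1]))

def add(x, y):
    if isinstance(x, complex):
        return x + y
    return (add(x[0], y[0]), add(x[1], y[1]))

def sub(x, y):
    return add(x, neg(y))

def mul(x, y):
    if isinstance(x, complex):
        return x * y
    a, b = x
    c, d = y
    return (sub(mul(a, c), mul(conj(d), b)),
            add(mul(d, a), mul(b, conj(c))))

def e(i):
    """Octonion basis element e_i, i = 0..7, as nested pairs."""
    v = [0.0] * 8
    v[i] = 1.0
    return ((complex(v[0], v[1]), complex(v[2], v[3])),
            (complex(v[4], v[5]), complex(v[6], v[7])))

# e1*e2 = e3 inside a quaternion subalgebra, which is associative...
assert mul(e(1), e(2)) == e(3)
# ...but (e1 e2) e4 != e1 (e2 e4): the octonions are not associative.
print(mul(mul(e(1), e(2)), e(4)) == mul(e(1), mul(e(2), e(4))))  # False
```

The triple (e1, e2, e3) is associative because it lives in a quaternion subalgebra; (e1, e2, e4) is not, which is exactly why octonionic matrix traces and products do not behave the way the associative theory would suggest.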
Seminar on Discrete Math and Optimization

McGill Seminar on Discrete Mathematics and Optimization
Jointly Organized by the School of Computer Science and the Department of Mathematics and Statistics

Program Winter 2003
Past Events: Winter 2002, Fall 2002
Related Links: McGill Algorithms Seminar

Organizers: D. Avis (CS), W. Brown (Math), D. Bryant (CS/Math), L. Devroye (CS), K. Fukuda (CS), B. Reed (CS), V. Rosta (Math), G. Toussaint (CS) and S. Whitesides (CS).
Coordinators (email): Komei Fukuda (CS) and Vera Rosta (Math)
Mailing List Maintainer (email): Steven Robbins

This page: http://www.cs.mcgill.ca/~fukuda/semi/discmath.html
Last updated: 2003-03-07
Re: st: bar graph axis color- frustrated

Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org, is already up and running.

From: "Airey, David C" <david.airey@Vanderbilt.Edu>
To: "statalist@hsphsun2.harvard.edu" <statalist@hsphsun2.harvard.edu>
Subject: Re: st: bar graph axis color- frustrated
Date: Thu, 10 Jun 2010 22:34:00 -0500

Oh, Kieren got it:

graph hbar weight price, yvaroptions(axis(lc(none)))

The help is in "help graph_bar##yvar_options"

So it's a real option. Still, I would like to know if and how to use graph editor commands directly if possible.

> sysuse auto
> graph hbar weight price, nodraw saving(Graph, replace)
> graph hbar weight price, nodraw play(axis_none) saving(Graph, replace)
>
> The second command does indeed draw the graph on the screen. My recording removes the axis successfully, but at the expense of drawing it to the screen. Why should the graph recording require the graph be drawn?
>
> The graph editor command that seems to be issued is:
> "varaxis.style.editstyle linestyle(color(none)) editcopy"
> if I open the .gph file with a text editor.
> -Dave

>> But there has to be an underlying graph command that the graph editor is issuing but is not documented under graph bar. I think I remember seeing others use graph editor commands directly (at the command line) before on this list. You just need to find out what graph editor command is being issued when you make the axis color none using the graph editor and then issue that command in your .do file.
>> -Dave

>>> Thank you Michael. That is exactly how I am doing them (using the record feature). The problem is that I am doing 20 graphs on each of 700 sites and outputting them to a series of reports. To run the recorded routine, I have to display each graph.
That is, I can not use the -nodraw- option!!
Re: NFA to DFA question

From comp.compilers

From: Clint Olsen <clint@0lsen.net>
Newsgroups: comp.compilers
Date: 17 Jan 2003 19:40:42 -0500
Organization: AT&T Broadband
References: 03-01-051
Keywords: lex, DFA
Posted-Date: 17 Jan 2003 19:40:42 EST

Unmesh Joshi wrote:
> I am reading the compilers book by Aho and Ullman, and I have one doubt
> about NFA to DFA conversion. "Every state of DFA corresponds to 'set of
> states' in NFA". Can anybody explain this to me? Does anybody have a
> source code sample for NFA-DFA? Maybe if I implement the DFA algorithm I
> will understand what that means.

Since an NFA can have multiple state transitions per character of input, and a DFA cannot, it follows that a state in the DFA corresponds to a set of NFA states (possibly multiple states). The best analogy is given at the end of that paragraph you quoted, where they discuss simulating an NFA and a DFA on the same input string.

The source code from 'flex' should give you an implementation of converting an NFA to a DFA.
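The correspondence can also be made concrete with the subset construction. A minimal sketch (epsilon transitions are omitted for brevity; flex's real implementation adds an epsilon-closure step and a great deal more):

```python
from collections import deque

def nfa_to_dfa(delta, start, alphabet):
    """Subset construction: each DFA state is a *set* of NFA states.
    delta maps (nfa_state, symbol) -> set of successor NFA states.
    Epsilon transitions are not handled here; supporting them means
    taking the epsilon-closure of every state set below."""
    start_set = frozenset([start])
    seen = {start_set}
    dfa = {}
    queue = deque([start_set])
    while queue:
        S = queue.popleft()
        for a in alphabet:
            # the DFA successor is the union of the NFA successors
            T = frozenset(t for s in S for t in delta.get((s, a), ()))
            dfa[(S, a)] = T
            if T not in seen:
                seen.add(T)
                queue.append(T)
    return dfa, start_set, seen

# NFA over {a, b} accepting strings that end in "ab" (accepting state 2)
delta = {(0, "a"): {0, 1}, (0, "b"): {0}, (1, "b"): {2}}
dfa, start, states = nfa_to_dfa(delta, 0, "ab")
print(sorted(sorted(s) for s in states))  # [[0], [0, 1], [0, 2]]
```

Here the three DFA states {0}, {0, 1}, and {0, 2} are exactly "sets of NFA states", which is the correspondence the book is describing.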
Ellicott City Calculus Tutor

Find an Ellicott City Calculus Tutor

...I have utilized QuickBooks as the treasurer for a local youth sports team, and I have used QuickBooks for both my personal and consulting business use. I am an advanced user in all of the basic functionality of the software, and I have a good working knowledge of double-entry accounting which is used in this software. I have over 25 years of experience in public speaking and
20 Subjects: including calculus, physics, algebra 1, algebra 2

...Over 20 years of teaching experience with excellent student reviews. Patient and interactive with students of all levels. Currently a senior manager in a very large high technology company.
15 Subjects: including calculus, physics, statistics, geometry

Hello, I am a Math Major at Towson University. I've loved math most of my life. I study Math in college, and have a 3.73 GPA.
9 Subjects: including calculus, geometry, algebra 1, algebra 2

I have degrees in Physics and Astronomy. I have worked as an astronomer at NASA and with Earth observation systems at NASA and in private industry, all of which required the use of mathematics including geometry, algebra and calculus. I work well with young people.
9 Subjects: including calculus, physics, statistics, algebra 2

...Although my degree is in physics, I am knowledgeable about other fields as well, especially math. I took Algebra 1 in middle school. Since then, I have built on my algebra knowledge with a wide array of advanced mathematics. Therefore, I am very comfortable with the basics of algebra.
27 Subjects: including calculus, physics, geometry, algebra 1
At each iteration step, Flint will solve the governing equations pertaining to all the relevant field variables one at a time by calling the matrix solver. The matrix solver uses a line-by-line method which is a combination of the Gauss-Seidel method and the TDMA (TriDiagonal Matrix Algorithm) method. This is known as the Line Gauss-Seidel (LGS) method.

The TDMA method can be used for the solution of one-dimensional flow problems, where the field equations yield a tridiagonal coefficient matrix, making it extremely easy to solve the problem directly by a forward and backward substitution method. Unfortunately, no such technique can be applied to the direct solution of two- and three-dimensional systems, as the coefficient matrix no longer retains its tridiagonal shape. Nevertheless, it is possible to apply the TDMA technique to one line of cells at a time, thus reducing the problem to N*M systems of one-dimensional TDMA problems, where N and M are the numbers of planes of cells in the second and third dimensions.

The Gauss-Seidel method comes into effect when the latest values of the field variable along the two adjacent lines of cells are used while calculating the TDMA coefficients of the line of cells sandwiched between them. The matrix solver carries out the TDMA technique on all the lines of cells along a predefined direction, known as the SWEEP DIRECTION. When the sweep direction is 1, all the cells along the I direction form a TDMA line, i.e. along each line J is constant. Similarly, when the sweep direction is 2, all the cells along the J direction form a TDMA line. It is recommended that the sweep direction be chosen to be orthogonal to the primary flow direction.

The sweeps of the solver for each of the field variables define how many times, during each iteration step, the matrix solver is called to recalculate the values of that variable. This is normally once per variable, except for pressure, which is solved five times. This is because the pressure equation is usually harder to converge.
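The TDMA step described above can be sketched as follows; this is a generic textbook implementation of the Thomas algorithm, not Flint's own code:

```python
def tdma(a, b, c, d):
    """Solve a tridiagonal system by forward elimination followed by
    back substitution (the Thomas algorithm).
    a: sub-diagonal (a[0] unused), b: main diagonal,
    c: super-diagonal (c[-1] unused), d: right-hand side."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                 # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):        # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

# the system [[2,1,0],[1,2,1],[0,1,2]] x = [4,8,8] has solution [1, 2, 3]
print(tdma([0, 1, 1], [2, 2, 2], [1, 1, 0], [4, 8, 8]))
```

Line Gauss-Seidel then repeats this one-dimensional solve for every line of cells along the sweep direction, using the freshest values from the two neighbouring lines when assembling each right-hand side d.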
Patent application title: ASYMMETRIC SERRATED EDGE LIGHT GUIDE FILM HAVING CIRCULAR BASE SEGMENTS

Abstract

The present invention provides a planar light guide film for a backlight unit having at least one point light source, the light guide film comprising a light input surface for receiving light from the point light source, a light redirecting surface for redirecting light received from the light input surface and a light output surface for outputting at least the light redirected from the light redirecting surface. The light input surface further comprises a composite lens structure having a first and second circular tip segment, each circular tip segment with a first contact angle, and a first and second circular base segment, each circular base segment with a top and bottom contact angle, the contact angles of the circular base segments being less than the contact angle of the circular tip segments. Further, the circular tip segments satisfy the following equations:

y = y1 + sqrt(r1^2 - (x - x1)^2) and y = y2 + sqrt(r2^2 - (x - x2)^2)

and the circular base segments satisfy the following equations:

y = y3 - sqrt(r3^2 - (x - x3)^2) and y = y4 - sqrt(r4^2 - (x - x4)^2)

and each of the composite lens structures is randomly disposed along the light input surface.
1. A planar light guide film for a backlight unit having at least one point light source, the light guide film comprising: a light input surface for receiving light from the point light source; a light redirecting surface for redirecting light received from the light input surface; a light output surface for outputting at least the light redirected from the light redirecting surface; wherein the light input surface further comprises a composite lens structure having a first and second circular tip segment, each circular tip segment with a first contact angle, and a first and second circular base segment, each circular base segment with a top and bottom contact angle, the contact angles of the circular base segments being less than the contact angle of the circular tip segments; and wherein the first and second circular tip segments satisfy the following equations: y = y1 + sqrt(r1^2 - (x - x1)^2) and y = y2 + sqrt(r2^2 - (x - x2)^2); and the first and second circular base segments satisfy the following equations: y = y3 - sqrt(r3^2 - (x - x3)^2) and y = y4 - sqrt(r4^2 - (x - x4)^2); and each of the composite lens structures is randomly disposed along the light input surface.

2. The planar light guide film of claim 1 wherein the composite lens structure has a pitch P greater than or equal to 5 micrometers and less than or equal to 1 millimeter.

3. The planar light guide film of claim 2 wherein the composite lens structure has a gap G less than or equal to 9 times the pitch P.

4. The planar light guide film of claim 1 wherein the composite lens structure has a total height H greater than 3 micrometers and less than or equal to 1 millimeter.

5. The planar light guide film of claim 1 wherein the first circular tip segment of the composite lens structure has a contact angle A1 greater than 1 degree and less than or equal to 85 degrees.

6. The planar light guide film of claim 1 wherein the second circular tip segment of the composite lens structure has a contact angle A2 greater than 1 degree and less than or equal to 85 degrees.

7. The planar light guide film of claim 1 wherein the contact angles A1 and A2 of the composite lens structure are not equal.

8. The planar light guide film of claims 5-7 wherein the composite lens structure further comprises contact angles A , A , A , and A wherein A.sub. , A.sub. and A.sub. , A.sub. , A.sub. , A.sub. 42.

9. A planar light guide film for a backlight unit having at least one point light source, the light guide film comprising: a light input surface for receiving light from the point light source; a light redirecting surface for redirecting light received from the light input surface; a light output surface for outputting at least the light redirected from the light redirecting surface; wherein the light input surface further comprises a composite lens structure having gaps there between, the lens structure having a first and second circular tip segment, each circular tip segment with a first contact angle, and a first and second circular base segment, each circular base segment with a top and bottom contact angle, the contact angles of the circular base segments being less than the contact angle of the circular tip segments; and wherein the first and second circular tip segments satisfy the following equations: y = y1 + sqrt(r1^2 - (x - x1)^2) and y = y2 + sqrt(r2^2 - (x - x2)^2); and the first and second circular base segments satisfy the following equations: y = y3 - sqrt(r3^2 - (x - x3)^2) and y = y4 - sqrt(r4^2 - (x - x4)^2); and each of the composite lens structures is randomly disposed along the light input surface.

10. The planar light guide film of claim 9 wherein the first circular tip segment of the composite lens structure has a contact angle A1 greater than 1 degree and less than or equal to 85 degrees.

11. The planar light guide film of claim 9 wherein the second circular tip segment of the composite lens structure has a contact angle A2 greater than 1 degree and less than or equal to 85 degrees.

12. The planar light guide film of claim 9 wherein the contact angles A1 and A2 of the composite lens structure are not equal.

13. The planar light guide film of claims 10-12 wherein the composite lens structure further comprises contact angles A , A , A , and A wherein A.sub. , A.sub. and A.sub. , A.sub. , A.sub. , A.sub. 42.

14. A planar light guide film for a backlight unit having at least one point light source, the light guide film comprising: a light input surface for receiving light from the point light source; a light redirecting surface for redirecting light received from the light input surface; a light output surface for outputting at least the light redirected from the light redirecting surface; wherein the light input surface further comprises a serrated lens structure that is provided only where the point light source is incident on the light input surface, the lens structure having a first and second circular tip segment, each circular tip segment with a first contact angle, and a first and second circular base segment, each circular base segment with a top and bottom contact angle, the contact angles of the circular base segments being less than the contact angle of the circular tip segments; and wherein the first and second circular tip segments satisfy the following equations: y = y1 + sqrt(r1^2 - (x - x1)^2) and y = y2 + sqrt(r2^2 - (x - x2)^2); and the first and second circular base segments satisfy the following equations: y = y3 - sqrt(r3^2 - (x - x3)^2) and y = y4 - sqrt(r4^2 - (x - x4)^2); and each of the composite lens structures is randomly disposed only where the point light source is incident on the light input surface.
FIELD OF THE INVENTION

[0001] The present invention relates to a light guide film of a light emitting diode (LED) backlight unit, and, more particularly, to a light guide film of an LED backlight unit which has a plurality of grooves carved into an incident plane of the light guide film to increase the incidence angle at which light can be transmitted through the light guide film.

BACKGROUND OF THE INVENTION

[0002] Typically, a liquid crystal display (LCD) for handheld and notebook devices employs at least one lateral light emitting diode (LED) as a light source of a backlight unit. Such a lateral LED is generally provided to the backlight unit as shown in FIG. 1 of Yang U.S. Pat. No. 7,350,598. Referring to FIG. 1, the backlight unit 10 comprises a planar light guide film 20 disposed on a substrate 12, and a plurality of lateral LEDs 30 (only one lateral LED is shown in FIG. 1) disposed in an array on a lateral side of the light guide film 20. Light L entering the light guide film 20 from the LED 30 is reflected upwardly by a minute reflection pattern 22 and a reflection sheet (not shown) positioned on the bottom of the light guide film 20, and exits from the light guide film 20, providing back light to an LCD panel 40 above the light guide film 20. Such a backlight unit suffers from a problem, shown in FIG. 2, when light is incident on the light guide film 20 from the LED 30. As shown in FIG. 2, light L emitted from each LED 30 is refracted toward the light guide film 20 by a predetermined angle θ, due to the difference in refractive index between the media, according to Snell's Law when the light L enters the light guide film 20. In other words, even though the light L is emitted at a beam angle of α1 from the LED 30, it is incident on the light guide film 20 at an incidence angle of α2 less than α1. In FIG. 3, such an incidence profile of light L is shown.
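The narrowing of the beam described in paragraph [0002] follows directly from Snell's law. A quick numerical illustration (the refractive index 1.49 is an assumed typical value for an acrylic light guide, not a figure taken from this application):

```python
import math

def refracted_half_angle(a1_deg, n_guide=1.49, n_air=1.0):
    """Snell's law at the air/guide interface:
    n_air * sin(a1) = n_guide * sin(a2)."""
    a1 = math.radians(a1_deg)
    return math.degrees(math.asin(n_air * math.sin(a1) / n_guide))

# a 60-degree LED beam half-angle narrows to about 35.5 degrees
# inside the guide, so alpha2 < alpha1 as in FIG. 2
print(round(refracted_half_angle(60.0), 1))
```

The higher the guide's refractive index, the more the beam is pulled toward the normal, which is what produces the long mixing length and the alternating light and dark spots described next.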
Therefore, there is a problem of increasing the length (l) of the combined region where beams of light L entering the light guide film 20 from the respective LEDs 30 are combined. In addition, light spots H, also called "hot spots", and dark spots D are alternately formed in the region corresponding to the length (l) on the incident plane of the light guide film 20. Each of the light spots H is formed at a location facing the LED 30, and each of the dark spots D is formed between the light spots H. Since the alternately formed light and dark spots are not desirable for the light guide film, they should be minimized and the length (l) should be shortened as much as possible. For this purpose, it is necessary to increase the angle of light entering the light guide film, that is, the incidence angle of light. To this end, it has been suggested to form protrusions on the input surface of the light guide film as shown in FIG. 4. Specifically, a plurality of fine prism-shaped structures 24 or arc-shaped structures (not shown) are formed on a light input surface 20A of a light guide film 20, and light L enters the light guide film at an incidence angle α3 substantially equal to an orientation angle α1 of light emitted from a focal point F of a light source. Thus, if the orientation angles α1 of light beams emitted from the focal point F of the light source are identical, the light L enters the light guide film at an incidence angle α3 wider than in the case of FIGS. 2 and 3. However, with this solution, there is some secondary light collimation, where the light rays are refracted by the wall of the adjacent prism or arc-shaped structure as shown in FIG. 4. Secondary light collimation from the walls of the adjacent prism structure turns the light ray back on-axis, providing less diffusion of the light from the light source, as shown in FIG. 4. Thus the continuous prism- or arc-shaped structures on the input surface have limited light-diffusing capability.
Therefore an improved input edge design is needed to provide more uniform surface illumination of the light guide film without sacrificing the efficiency of the backlight system.

SUMMARY OF THE INVENTION

[0008] The present invention is aimed at overcoming the problems (hot spots and secondary light collimation) associated with the above prior art and thereby yielding more uniform surface illumination of the light guide film without sacrificing the efficiency of the backlight system. The present invention provides a planar light guide film for a backlight unit having at least one point light source, the light guide film comprising: a light input surface for receiving light from the point light source; a light redirecting surface for redirecting light received from the light input surface; and a light output surface for outputting at least the light redirected from the light redirecting surface; wherein the light input surface further comprises a composite lens structure having first and second circular tip segments, each circular tip segment with a first contact angle, and first and second circular base segments, each circular base segment with a top and a bottom contact angle, the contact angles of the circular base segments being less than the contact angles of the circular tip segments; and wherein the first and second circular tip segments satisfy the following equations, respectively:

y = a1 + √(r1² − x²)

y = a2 + √(r2² − x²)

and the circular base segments satisfy the following equations:

y = b3 − √(r3² − (x + a3)²)

y = b4 − √(r4² − (x − a4)²)

and each of the composite lens structures is randomly disposed along the light input surface.
In addition, the invention further provides a planar light guide film for a backlight unit having at least one point light source, the light guide film comprising: a light input surface for receiving light from the point light source; a light redirecting surface for redirecting light received from the light input surface; and a light output surface for outputting at least the light redirected from the light redirecting surface; wherein the light input surface further comprises composite lens structures having gaps therebetween, each lens structure having first and second circular tip segments, each circular tip segment with a first contact angle, and first and second circular base segments, each circular base segment with a top and a bottom contact angle, the contact angles of the circular base segments being less than the contact angles of the circular tip segments; and wherein the first and second circular tip segments satisfy the following equations, respectively:

y = a1 + √(r1² − x²)

y = a2 + √(r2² − x²)

and the circular base segments satisfy the following equations:

y = b3 − √(r3² − (x + a3)²)

y = b4 − √(r4² − (x − a4)²)

and each of the composite lens structures is randomly disposed along the light input surface.
Further, the invention provides a planar light guide film for a backlight unit having at least one point light source, the light guide film comprising: a light input surface for receiving light from the point light source; a light redirecting surface for redirecting light received from the light input surface; and a light output surface for outputting at least the light redirected from the light redirecting surface; wherein the light input surface further comprises a serrated lens structure that is provided only where the point light source is incident on the light input surface, the composite lens structure having first and second circular tip segments, each circular tip segment with a first contact angle, and first and second circular base segments, each with a top and a bottom contact angle, the contact angles of the circular base segments being less than the contact angles of the circular tip segments; and wherein the first and second circular tip segments satisfy the following equations, respectively:

y = a1 + √(r1² − x²)

y = a2 + √(r2² − x²)

and the circular base segments satisfy the following equations:

y = b3 − √(r3² − (x + a3)²)

y = b4 − √(r4² − (x − a4)²)

and each of the composite lens structures is randomly disposed only where the point light source is incident on the light input surface.

BRIEF DESCRIPTION OF THE DRAWINGS

[0012] FIG. 1 shows a schematic diagram illustrating a conventional backlight module;
FIG. 2 shows a schematic diagram illustrating the distribution of bright/dark bands of a conventional light guide plate;
FIG. 3 shows a schematic diagram illustrating an embodiment of conventional light-diffusing structures;
FIG. 4 shows a schematic diagram illustrating another embodiment of conventional light-diffusing structures;
FIGS. 5a and 5b show schematic diagrams illustrating a light guide film according to an embodiment of the invention;
FIG.
6a-6c show schematic diagrams illustrating the various segments of the composite lens feature according to an embodiment of the invention;
FIGS. 7a and 7b show schematic diagrams illustrating the light-diffusing capability of the composite lens feature with a gap between each adjacent feature;
FIG. 8 shows another embodiment of this invention;
FIG. 9 shows another embodiment of this invention;
FIGS. 10a and 10b show the luminance intensity at various distances from the light input surface for a circular or arc-shaped input feature;
FIGS. 11a and 11b show the luminance intensity at various distances from the light input surface for a trapezoidal feature or a feature with slanted sides; and
FIGS. 12a and 12b show the luminance intensity at various distances from the light input surface according to an embodiment of the invention.

DETAILED DESCRIPTION OF THE INVENTION

[0024] A light guide film in accordance with the present invention comprises a light output surface, a light redirecting surface and at least one light input surface that joins the light output surface and the light redirecting surface. The light input surface comprises a plurality of concave features consisting of a composite lens array. Each of the composite lenses is separated by a gap that is a flat surface perpendicular to the light output surface. The composite lenses and gaps are disposed along the light input surface, and extend from the output surface to the light redirecting surface. Each of the composite lenses has an asymmetric cross-section consisting of a tip portion, comprising first and second circular tip segments each with its own contact angle, and a base portion, comprising two tilted circular base segments each with a top and a bottom contact angle.
The circular base segment contact angles are less than the circular tip segment contact angles, and the contact angles for the two circular tip segments, as well as the top and bottom contact angles for the two tilted circular base segments, are not equal to one another. According to the above embodiment, the geometrical profile of the composite lens allows for comparatively large light-deflecting distances; that is, the composite lens structure has better light-diffusing capability. Thus, the distance between the point light source and the active area of the display can be shortened, and the dark spots between the point light sources can be minimized, with the brightness uniformity still being acceptable. The circular tip segments distribute the light in front of the discrete light source, typically a light emitting diode (LED). The two tilted circular base segments distribute the light between the LEDs. Since the composite lens structure is composed of two circular tip segments and two circular base segments, it allows more degrees of freedom to fine-tune the luminance profile than would be attainable if the structure were composed of fewer segments. The asymmetry of the composite lens structure aids in correcting the inputted light from the LEDs. Further, it is also necessary that each two adjacent composite lens structures have a gap, or flat, therebetween so that a greater degree of deflection of the propagation path of the incident light can be achieved, thereby increasing the light-diffusing effect. This differs from the asymmetric structures described in Yamashita et al. U.S. Pat. No. 7,522,809, where the asymmetric features are all aligned in the same direction to overcome the light directivity resulting from the prism films in the backlight system being cut at a 15 degree angle rather than the prisms being perpendicular or parallel to the input face.
In this invention, in order to achieve a uniform distribution of light into the light guide, there is a random distribution of the asymmetric structures across the light input face. The random placement of the asymmetric structures also aids in reducing cosmetic defects created by a regular pattern interfering with the pattern of the liquid crystal display.

Referring to FIGS. 5a and 5b, a light guide film according to an embodiment of the invention is shown, wherein a planar light guide film 12 is used to receive and guide the light from at least one point light source (such as the LEDs 14 shown in FIG. 5a). The side surface of the light guide film 12 next to the LED 14 forms a light input surface 12a. The top surface of the light guide film 12, which makes an angle with the light input surface 12a, forms a light-emitting surface 12b, and the bottom surface opposite the light-emitting surface 12b forms a light-reflecting surface 12c. The light-reflecting surface 12c comprises a plurality of light-reflecting structures. The light emitted from the LED 14 enters the light guide film 12 via the light input surface 12a and propagates inside the light guide film 12. It is then guided toward the light-emitting surface 12b by the light-reflecting surface 12c and finally exits the light guide film 12 through the light-emitting surface 12b. Further, a plurality of concave composite lens structures 16 are serrated on the edge of the light input surface 12a, with their longitudinal directions parallel to each other and with a gap (G) between each adjacent composite lens structure 16.

Referring now to FIGS. 6a, 6b and 6c, each composite lens structure 16 on the light input surface 12a facing the LED 14 has a first circular tip segment 16a and a second circular tip segment 16d, and two tilted circular base segments 16b and 16c, respectively.
The circular tip segments 16a and 16d of the concave composite lens structure 16 are the segments furthest from the light input surface 12a. Although the composite lens features for the preferred embodiment of this invention are disposed in a concave direction on the light input surface, the composite lens may also be in a convex direction on the light input surface. The length T1 is the distance between the intersection of the extension of a tangent at the top of the first circular base segment 16b, and the intersection of the first circular tip segment 16a and the second circular tip segment 16d, where T1 is parallel to the light input surface 12a. The length T2 is the distance between the intersection of the extension of the tangent at the top of the second circular base segment 16c, and the intersection of the first circular tip segment 16a and the second circular tip segment 16d, where T2 is parallel to the light input surface 12a. The width T3 of the first circular tip segment 16a is equal to r1 times the sine of contact angle A1, where T3 is parallel to the light input surface 12a. The width T4 of the second circular tip segment 16d is equal to r2 times the sine of contact angle A2, where T4 is parallel to the light input surface 12a. The contact angle A1 is the contact angle of the first circular tip segment 16a, where the angle is formed by a tangent at the intersection of the first circular tip segment 16a and the top of the first circular base segment 16b, and the light input surface 12a. Contact angle A1 is preferably greater than 0.1 degrees and less than or equal to 85 degrees. The contact angle A2 is the contact angle of the second circular tip segment 16d, where the angle is formed by a tangent at the intersection of the second circular tip segment 16d and the top of the second circular base segment 16c, and the light input surface 12a. Contact angle A2 is preferably greater than 0.1 degrees and less than or equal to 85 degrees.
Contact angle A1 does not equal contact angle A2. Referring now to FIG. 6b, the gap G is the distance between each adjacent composite lens. Preferably, the gap G is less than or equal to 0.9 times the pitch P. The pitch P of the linear composite lens array 16 is the distance along the light input edge which includes the gap G and the width of the composite lens structure, i.e., the width B3 plus the width B4. Preferably the pitch P is greater than or equal to 5 micrometers and less than or equal to 1 millimeter (mm). The total height H of the feature is measured from the light input edge to the intersection of the first and second circular tip segments 16a and 16d. The total height H of the composite lens is greater than or equal to 3 micrometers and less than or equal to 1 millimeter. The light input surface 12a will have a surface finish of 10 nanometers to 2 micrometers. The surface finish of the concave composite lens structures 16 can be the same as or different from that of the gap G portion between the features. Advantageously, the circular tip of the composite lens structure comprises a first circular tip segment and a second circular tip segment. The shape of an XY section of the first circular tip segment 16a satisfies the following expression (1):

y = a1 + √(r1² − x²)   (1)

where the first circular tip segment 16a has a first radius r1. The first radius r1 is defined as the quotient of the distance T1 divided by the tangent of half the contact angle A1. The parameter a1 is defined as the total height H of the composite lens feature 16 minus the radius r1 of the first circular tip segment 16a. The coordinate x is a value in the direction of the light input surface and is preferably set within the range of −r1·sin(A1) ≤ x ≤ 0. The coordinate y is a value in the light propagation direction.
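Expression (1) and its parameter definitions can be sanity-checked numerically. In this sketch the tangent length, contact angle and total height are made-up illustrative numbers, not values from the patent; the check confirms that the apex of the tip segment sits at the total height H and that the segment bottom sits at H minus r1 times (1 minus cos A1):

```python
import math

def tip_radius(T1, A1_deg):
    # r1 = T1 / tan(A1 / 2), per the definition of the first radius above
    return T1 / math.tan(math.radians(A1_deg) / 2.0)

def tip_profile(x, T1, A1_deg, H):
    # Expression (1): y = a1 + sqrt(r1^2 - x^2), with a1 = H - r1
    r1 = tip_radius(T1, A1_deg)
    a1 = H - r1
    return a1 + math.sqrt(r1 * r1 - x * x)

# Hypothetical numbers (micrometers and degrees):
T1, A1, H = 10.0, 45.0, 30.0
r1 = tip_radius(T1, A1)
apex = tip_profile(0.0, T1, A1, H)                       # y at x = 0
edge = tip_profile(-r1 * math.sin(math.radians(A1)), T1, A1, H)
print(apex)   # equals H: the tip reaches the full feature height
print(edge)   # equals H - r1*(1 - cos A1): where the base segment takes over
```

The second check reproduces the height at which the base segment begins, matching the definition of the base-segment height given later in the text.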
The shape of an XY section of the second circular tip segment 16d satisfies the following expression (2):

y = a2 + √(r2² − x²)   (2)

where the second circular tip segment 16d has a second radius r2. The second radius r2 is defined as the quotient of the distance T2 divided by the tangent of half the contact angle A2. The parameter a2 is defined as the total height H of the composite lens feature 16 minus the radius r2 of the second circular tip segment 16d. The coordinate x is a value in the direction of the light input surface and is preferably set within the range of 0 ≤ x ≤ r2·sin(A2). The coordinate y is a value in the light propagation direction.

Referring now to FIG. 6c, the composite lens structure also comprises two tilted circular base segments, namely a first circular base segment 16b and a second circular base segment 16c. Each circular base segment comprises two contact angles, a top contact angle and a bottom contact angle. The first circular base segment 16b has a top contact angle A31 and a bottom contact angle A32. The second circular base segment 16c has a top contact angle A41 and a bottom contact angle A42. The top contact angle A31 is created by a tangent to the first circular base segment 16b at the point where the first circular tip segment 16a and the first circular base segment 16b intersect. The bottom contact angle A32 is created by a tangent to the first circular base segment 16b at the point where the first circular base segment 16b intersects the light input surface 12a. The top contact angle A41 is created by a tangent to the second circular base segment 16c at the point where the second circular tip segment 16d and the second circular base segment 16c intersect. The bottom contact angle A42 is created by a tangent to the second circular base segment 16c at the point where the second circular base segment 16c intersects the light input surface 12a.
The top contact angle A31 of the first circular base segment 16b and the top contact angle A41 of the second circular base segment 16c are not equal. The bottom contact angle A32 of the first circular base segment 16b and the bottom contact angle A42 of the second circular base segment 16c are not equal. The bottom contact angles of each of the circular base segments are less than their corresponding top contact angles. Each of the contact angles of the two circular base segments 16b and 16c is less than the contact angles A1 and A2 of the circular tip segments 16a and 16d, respectively. Preferably, each contact angle (A31, A32, A41, A42) of the circular base segments is greater than or equal to 0.1 degrees and less than or equal to 85 degrees. Advantageously, the shapes of an XY section of the circular base segments 16b and 16c as shown in FIG. 6c satisfy the following expressions (3) and (4), respectively:

y = b3 − √(r3² − (x + a3)²)   (3), for −B3 ≤ x ≤ −T3

y = b4 − √(r4² − (x − a4)²)   (4), for T4 ≤ x ≤ B4

where:

r3 = H3 / [cos(A31) − cos(A32)]
a3 = [(T3 + B3) + H3 × √((4r3² − (B3 − T3)² − H3²) / ((B3 − T3)² + H3²))] / 2
b3 = √(r3² − (B3 − a3)²)
H3 = H − r1 × [1 − cos(A1)]
H4 = H − r2 × [1 − cos(A2)]
B3 + B4 = P − G
B3 = T3 + H3 × (sin(A32) − sin(A31)) / (cos(A31) − cos(A32))
B4 = T4 + H4 × (sin(A42) − sin(A41)) / (cos(A41) − cos(A42))
r4 = H4 / [cos(A41) − cos(A42)]
a4 = [(T4 + B4) + H4 × √((4r4² − (B4 − T4)² − H4²) / ((B4 − T4)² + H4²))] / 2
b4 = √(r4² − (B4 − a4)²)

Thus, the first circular base segment 16b has a radius r3 and the second circular base segment 16c has a radius r4. Referencing FIGS.
6a, 6b and 6c, the radius r3 of the first circular base segment 16b is defined as the quotient of the height H3 of the first circular base segment 16b divided by the quantity the cosine of the contact angle A31 at the top of the first circular base segment 16b minus the cosine of the contact angle A32 at the bottom of the first circular base segment 16b. The height H3 of the first circular base segment 16b is equal to the total height H of the composite lens feature 16 minus the radius r1 of the first circular tip segment 16a times the quantity 1 minus the cosine of contact angle A1 of the first circular tip segment 16a. The parameter a3 is equal to one half of the quantity: the width T3 of the circular tip segment 16a plus the width B3 of the composite lens feature 16, plus the height H3 of the first circular base segment 16b times the square root of the quotient of the quantity 4 times the square of the radius r3 of the first circular base segment 16b, minus the square of the quantity of the width B3 minus the width T3, minus the square of the height H3 of the first circular base segment 16b, divided by the square of the quantity of the width B3 minus the width T3, plus the square of the height H3 of the first circular base segment 16b. The width B3 of the composite lens feature 16 is equal to the width T3 of the first circular tip segment 16a plus the height H3 of the first circular base segment 16b times the quantity the sine of contact angle A32 at the bottom of the first circular base segment 16b minus the sine of contact angle A31 at the top of the first circular base segment 16b, divided by the quantity the cosine of the contact angle A31 at the top of the first circular base segment 16b minus the cosine of the contact angle A32 at the bottom of the first circular base segment 16b.
The parameter b3 is equal to the square root of the quantity: the radius r3 of the first circular base segment 16b squared, minus the square of the quantity of the width B3 of the composite lens feature 16 minus the parameter a3. The coordinate x is a value in the direction of the light input surface, or more specifically in the direction of the total width B3 of the composite lens feature 16, and is preferably set within the range of −B3 ≤ x ≤ −T3. The coordinate y is a value in the light propagation direction. Referencing FIGS. 6a and 6c, and equation (4), the radius r4 of the second circular base segment 16c is defined as the quotient of the height H4 of the second circular base segment 16c divided by the quantity the cosine of the contact angle A41 at the top of the second circular base segment 16c minus the cosine of the contact angle A42 at the bottom of the second circular base segment 16c. The height H4 of the second circular base segment 16c is equal to the total height H of the composite lens feature 16 minus the radius r2 of the second circular tip segment 16d times the quantity 1 minus the cosine of contact angle A2 of the second circular tip segment 16d. The parameter a4 is equal to one half of the quantity: the width T4 of the second circular tip segment 16d plus the width B4 of the composite lens feature 16, plus the height H4 of the second circular base segment 16c times the square root of the quotient of the quantity 4 times the square of the radius r4 of the second circular base segment 16c, minus the square of the quantity of the width B4 minus the width T4, minus the square of the height H4 of the second circular base segment 16c, divided by the square of the quantity of the width B4 minus the width T4, plus the square of the height H4 of the second circular base segment 16c.
The width B4 of the composite lens feature 16 is equal to the width T4 of the second circular tip segment 16d plus the height H4 of the second circular base segment 16c times the quantity the sine of contact angle A42 at the bottom of the second circular base segment 16c minus the sine of contact angle A41 at the top of the second circular base segment 16c, divided by the quantity the cosine of the contact angle A41 at the top of the second circular base segment 16c minus the cosine of the contact angle A42 at the bottom of the second circular base segment 16c. The parameter b4 is equal to the square root of the quantity: the radius r4 of the second circular base segment 16c squared, minus the square of the quantity of the width B4 of the composite lens feature 16 minus the parameter a4. The coordinate x is a value in the direction of the light input surface, or more specifically in the direction of the total width B4 of the composite lens feature 16, and is preferably set within the range of T4 ≤ x ≤ B4. The coordinate y is a value in the light propagation direction.

FIG. 7a is a ray tracing for an array of a single composite lens feature 16 of this invention, illustrating what happens to the light rays when the individual composite lens features are disposed on the light input surface 12a in a contiguous manner such that there is no gap G between adjacent composite lenses. FIG. 7b is a similar ray tracing, but where the individual composite lens features are separated by a gap G between adjacent features. The gap G is preferably less than or equal to 0.9P, where P (as shown in FIG. 6b) is the pitch of the composite lens features on the input surface 12a. In FIG. 7a, where the composite lens features are adjacent to each other along the input surface, some of the light rays will experience a secondary light collimation as they are refracted when they reach the side of the adjacent feature. This secondary light collimation detracts from the diffusion capability of the composite lens feature 16.
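The base-segment relations above can be checked numerically for self-consistency: the arc of expression (3) should meet the light input surface (y = 0) at x = −B3 and reach the height H3 at x = −T3, where it joins the tip segment. All numbers below are hypothetical; only the formulas come from the text, and the angles are chosen so that cos(A31) > cos(A32), keeping r3 positive:

```python
import math

def base_segment_params(T3, A31_deg, A32_deg, H, r1, A1_deg):
    """First circular base segment parameters, per the relations above.
    A31 is the top contact angle, A32 the bottom contact angle."""
    A31 = math.radians(A31_deg)
    A32 = math.radians(A32_deg)
    H3 = H - r1 * (1.0 - math.cos(math.radians(A1_deg)))
    r3 = H3 / (math.cos(A31) - math.cos(A32))
    B3 = T3 + H3 * (math.sin(A32) - math.sin(A31)) / (math.cos(A31) - math.cos(A32))
    a3 = ((T3 + B3) + H3 * math.sqrt(
        (4.0 * r3**2 - (B3 - T3)**2 - H3**2) / ((B3 - T3)**2 + H3**2))) / 2.0
    b3 = math.sqrt(r3**2 - (B3 - a3)**2)
    return r3, a3, b3, H3, B3

def base_profile(x, r3, a3, b3):
    # Expression (3): y = b3 - sqrt(r3^2 - (x + a3)^2), for -B3 <= x <= -T3
    return b3 - math.sqrt(r3**2 - (x + a3)**2)

# Hypothetical inputs: tip width T3 = r1*sin(A1), base contact angles
# A31 = 30 deg (top) and A32 = 60 deg (bottom), H = 30, r1 = 10, A1 = 45 deg.
T3 = 10.0 * math.sin(math.radians(45.0))
r3, a3, b3, H3, B3 = base_segment_params(T3, 30.0, 60.0, 30.0, 10.0, 45.0)
print(base_profile(-B3, r3, a3, b3))   # ~0: arc meets the input surface
print(base_profile(-T3, r3, a3, b3))   # ~H3: arc meets the tip segment
```

The two printed values confirm that the a3 and b3 definitions place the circle through both endpoints of the base segment.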
In FIG. 7b, the composite lens features are separated by a gap G. The gap allows the light ray to continue in a diffuse manner and thus widens the angle at which the light propagates in the light guide film. There is minimal secondary light collimation when the gap between features is incorporated into the composite lens feature design. In this way, the wider angle of light helps to mitigate the hot spots along the input surface of the light guide film.

Referring now to FIG. 8, the light guide film 12 in FIG. 8 shows the composite lens features 16 not disposed along the entire input surface 12a. Instead, the composite lens features 16 are disposed along the light input surface 12a in the region where the LED 14 light is incident. The luminance uniformity of the system is minimally affected, as the unpatterned region on the light input surface receives minimal light rays. Referring now to FIG. 9, the light guide film 12 shows a section where there is a random distribution of the composite lens features 16. This distribution can include base contact angles that can be greater than or less than the top contact angles of the composite lens feature 16.

EXAMPLES

[0043] FIG. 10a shows a portion of the light input surface 32 of a light guide film 30 with an arc- or circular-type structure 36. The graph in FIG. 10b illustrates the light intensity for the light guide film 30 at distances of 3.5 mm, 4.5 mm and 5.5 mm from the light input surface 32. FIG. 10b shows that the localized light intensity decreases as the distance from the light input surface increases, but there are still some hot spots evident at 5.5 mm. The arc- or circular-type structure provides some improvement for hot spots but is more effective at collimating light in line with the LED than at widening the incidence angle. This is evident in the graph in FIG. 10b. In FIG.
10b, the LEDs are located at each of the vertical dotted lines, and the light distribution is still not leveled out at 5.5 mm into the light guide film. It is apparent from the graph in FIG. 10b that the arc- or circular-type solution has insufficient diffusion capability. FIG. 11a shows a portion of the light input surface 42 of a light guide film 40 with a composite lens structure that has flat slanted sides 46. This result would also be applicable to a trapezoidal-shaped light input structure. The graph in FIG. 11b illustrates the light intensity for the light guide film 40 at distances of 3.5 mm, 4.5 mm and 5.5 mm from the light input surface 42. FIG. 11b shows that the localized light intensity actually inverts in the area immediately in front of the LEDs, resulting in a dark spot immediately in front of the LEDs. This overall loss of light intensity immediately in front of the LED is due to the fact that the straight slanted walls diffuse the light more readily through the sides than through the tip. It is also noted that the shape of the light intensity profile across the light guide film does not change significantly as the distance from the input surface 42 increases. FIG. 12a shows a portion of the light input surface 52 of a light guide film 50 with the composite lens feature 56 of this invention. The composite lens feature utilizes a circular tip with two circular tip segments and a base with two tilted circular base segments. The radii of the two tilted circular base segments are not equal, and the radii of the two circular tip segments are not equal. The bottom contact angle of each of the two tilted circular base segments is less than the contact angle of the circular tip segments. The bottom contact angle of each of the two tilted circular base segments is less than the top contact angle of the circular base segments. The circular tip distributes the light in the area immediately in front of the LED.
The two tilted circular base segments distribute the light between the LEDs. The asymmetry of the composite lens structure aids in correcting the inputted light from the LEDs. The graph in FIG. 12b illustrates that the composite lens 56 of the present invention generates uniform light output across the light guide film at distances of 3.5 mm, 4.5 mm and 5.5 mm from the input surface 52. Hence, an improved light guide film is provided with asymmetric light-redirecting features that improve light output uniformity without sacrificing light input efficiency. Namely, the improved light guide film 12 having the composite lens structure 16 provides enhanced light diffusion in the plane parallel to the light extraction plane and light reflection plane (top and bottom surfaces), allowing greater light redistribution between discrete light sources (light traveling outside the critical angle of a planar un-serrated input edge), so that the light output uniformity is improved. Moreover, the light distribution in the plane perpendicular to the light extraction plane and light reflection plane (top and bottom surfaces) is minimized, so that the condition of total internal reflection is minimized for the inputted traveling light.

Patent applications by SKC Haas Display Films Co., Ltd.
boom size

i'm a novice and i apologize for all the basic questions i've been asking. i really appreciate all the help everyone has given me. my boom has a range of 182-244 cm. does this mean that it will fit sails that have a boom measurement of 182-244, or is there a recommended amount of extra space i should give myself for tuning? i got the boom in a rig package for an 8.5, but i'd like to get a 6.5 that has a 200 cm boom length. i figure this will be okay, but how close to the 182 do you think i can get? thanks again. eric b
Calibrating the Perfect Clock

This animation is based on a section of the book, SPECIAL RELATIVITY ILLUSTRATED, by John de Pillis. A perfect clock ticks consistently; that is, the time interval between any two successive ticks is exactly the same. The clock is started by a passing photon. Using the fact that the speed of light (a photon) in a vacuum is c = 186,000 mi./sec., we calibrate the clock so as to exactly define a one-second time interval.
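A quick way to see what the calibration amounts to numerically (the mirror arrangement below is an illustrative assumption, not something the page specifies):

```python
# Speed of light as used on the page, in miles per second.
c = 186_000.0

# One tick = the time for a photon to cover c * (1 second) of distance.
one_tick_distance = c * 1.0     # 186,000 miles per one-second tick

# Equivalently (an assumed light-clock layout): a photon bouncing between
# two mirrors spaced c/2 miles apart completes one round trip per second.
mirror_separation = c / 2.0     # 93,000 miles

print(one_tick_distance, mirror_separation)
```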
Maths Tutors Yorba Linda, CA 92886

Please do not wait till your kid goes to the unrecoverable grade area ... If you need help in trigonometry, I am the right person. I can help you with the unit circle, solving triangles, identities, and word problems. You will need these skills in real-life problems, engineering, and calculus. They are essential for higher math. I have been...

Offering 10+ subjects including algebra 1, algebra 2 and calculus
Total momentum is not always conserved. True or false? If true, explain; if false, explain.

Momentum is conserved when no external force acts on the system that you define. Maybe you defined a system that a force does act on; then the total momentum of that system is not conserved. For more information see Wikipedia.

Can you please be more precise, as a physicist?

Suppose this figure: |dw:1329061665329:dw| Momentum will not be conserved if you take just the body as the system, since friction acts on the system. Now take body + ground as the system; here momentum will be conserved, since all of the forces that act are internal.

So if I understand you: in a change of momentum, if the forces acting are internal to the system then momentum is conserved, and if the forces acting come from outside the system (e.g. friction from the ground) then momentum is not conserved.
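The answer above can be checked numerically. A minimal sketch (all numbers invented for illustration): a sliding block decelerated by friction loses momentum, but if the ground — modeled here as a very massive second body — is included in the system, the total stays the same.

```python
# Friction is an internal force once the ground is part of the system:
# whatever momentum the block loses, the ground gains.
m_block, v_block = 2.0, 5.0        # kg, m/s (illustrative values)
m_ground, v_ground = 5.0e24, 0.0   # the ground is enormous

p_total_before = m_block * v_block + m_ground * v_ground

# Friction brings the block to rest; by Newton's third law the ground
# receives the opposite impulse (its velocity change is tiny).
impulse = -m_block * v_block                      # impulse on the block
v_block_after = v_block + impulse / m_block       # 0.0
v_ground_after = v_ground - impulse / m_ground    # ~2e-24 m/s

p_total_after = m_block * v_block_after + m_ground * v_ground_after
print(p_total_before, p_total_after)  # both ≈ 10.0 kg·m/s
```

Taking only the block as the system, its momentum drops from 10 to 0; taking block + ground, the total is unchanged.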
{"url":"http://openstudy.com/updates/4f37adaee4b0fc0c1a0d5a29","timestamp":"2014-04-17T00:59:36Z","content_type":null,"content_length":"112046","record_id":"<urn:uuid:edaf0fac-5c0f-43ca-a921-417a2be55f3e>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] .T Transpose shortcut for arrays again Bill Baxter wbaxter at gmail.com Thu Jul 6 21:56:11 CDT 2006 On 7/7/06, Robert Kern <robert.kern at gmail.com> wrote: > Bill Baxter wrote: > > Robert Kern wrote: > > > > > > The slippery slope argument only applies to the .M, not the .T or .H. > No, it was the "Let's have a .T attribute. And if we're going to do that, > then > we should also do this. And this. And this." There's no slippery slope there. It's just "Let's have a .T attribute, and if we have that then we should have .H also." Period. The slope stops there. The .M and .A are a separate issue. > I don't think that just because arrays are often used for linear > > algebra that > > > > linear algebra assumptions should be built in to the core array > type. > > > > It's not just that "arrays can be used for linear algebra". It's that > > linear algebra is the single most popular kind of numerical computing in > > the world! It's the foundation for a countless many fields. What > > you're saying is like "grocery stores shouldn't devote so much shelf > > space to food, because food is just one of the products people buy", or > [etc.] > I'm sorry, but the argument-by-inappropriate-analogy is not convincing. > Just > because linear algebra is "the base" for a lot of numerical computing does > not > mean that everyone is using numpy arrays for linear algebra all the time. > Much > less does it mean that all of those conventions you've devised should be > shoved > into the core array type. I hold a higher standard for the design of the > core > array type than I do for the stuff around it. "It's convenient for what I > do," > just doesn't rise to that level. There has to be more of an argument for > it. My argument is not that "it's convenient for what I do", it's that "it's convenient for what 90% of users want to do". But unfortunately I can't think of a good way to back up that claim with any sort of numbers. 
But here's one I just found: http://www.netlib.org/master_counts2.html download statistics for various numerical libraries on netlib.org. The top 4 are all linear algebra related:

  /lapack    <http://www.netlib.org/lapack/>     37,373,505   19,908,865
  /scalapack <http://www.netlib.org/scalapack/>  14,418,172
  /linalg    <http://www.netlib.org/linalg/>     11,091,511

The next three are more like general computing issues: parallelization lib, performance monitoring, benchmarks:

  /pvm3      <http://www.netlib.org/pvm3/>       10,360,012    7,999,140
  /benchmark <http://www.netlib.org/benchmark/>   7,775,600

Then the next one is more linear algebra. And that seems to hold pretty far down the list. It looks like mostly stuff that's either linear algebra related or parallelization/benchmarking related.

And as another example, there's the success of higher level numerical environments like Matlab (and maybe R and S? and Mathematica, and Maple?) that have strong support for linear algebra right in the core, not requiring users to go into some syntax/library ghetto to use that functionality.

I am also curious, given the number of times I've heard this nebulous argument of "there are lots of kinds of numerical computing that don't involve linear algebra", that no one ever seems to name any of these "lots of kinds". Statistics, maybe? But you can find lots of linear algebra in
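For context, the `.T` attribute being argued for does exist on NumPy arrays today, while `.H` never made it onto plain arrays; the Hermitian transpose remains a one-liner. A quick illustration (standard NumPy usage, not code from the thread):

```python
import numpy as np

a = np.array([[1 + 2j, 3 - 1j],
              [0 + 0j, 2 + 2j]])

print(a.T)          # plain transpose: swaps axes only, no conjugation
print(a.conj().T)   # Hermitian transpose: the ".H" shortcut being debated
```

The whole thread is about whether that second expression deserves a one-character spelling on the core array type.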
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2006-July/009201.html","timestamp":"2014-04-17T04:16:45Z","content_type":null,"content_length":"7273","record_id":"<urn:uuid:89eb91f5-2a5c-443c-a39e-1338a5b24cdc>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Point of Inflection

I made some rookie mistakes with my Algebra 2 skills checklist in semester one this year. I have invented a word to describe each problem.

In my enthusiasm for separating skills, I gave determinants of 2×2 matrices and determinants of 3×3 matrices each their own spot on the list. They should have been combined. I also separated multiplication of polynomials from their division and even, astonishing in the euphoric clarity of hindsight, addition of matrices from their subtraction. This overzealous separation led to inflated grades (students got a 5/5 on matrix addition, matrix subtraction, both kinds of matrix multiplication, AND twice for finding determinants) and tests that felt kind of… stupid.

A more subtle mistake: some skills on the checklist did not have discernible differences between intro-level problems and master-level problems. For instance, an intro simplification test might look like "3x+2p-x+2x," and a master might look like "3x+2p-x+2x-p+2p+r," but for some students it felt silly to bother giving the intro level test first. If a student can calculate "3+2+8+9," do you really need to check to make sure they can calculate "3+2+8+9+1?" In contrast, my favorite skills had some fundamental difference between the intro level and master level tests. For example, the "dividing polynomials" intro test asked for a division that would have no remainder, and the master test involved remainders. This was nice because a student could pass the class with a basic understanding of the concept, but would need a more advanced understanding before getting the 100%.

Bad Planning

I did not have an effective way to deliver master-level tests to the students who wanted them. Early in the year I promised that students would be able to attempt a master-level test on any skill once per day, and early in the year this worked great. Late in the year, when two thirds of my students wanted six tests a day, it was harder. I need a better system.
Luckily, this won’t be a problem again until the end of this semester, so I’ll start thinking about it then. ;) So, now the task is to create the lists for next semester (my school doesn’t start until January 10!), hoping to avoid these problems and keep the amazing benefits I got from the system last semester. Stay tuned! Here’s your exponential data! A while ago k8nowak was looking for exponential data besides credit card loans and bacteria growth. Take a look at the data describing visits to my (very new) blog: Looks a lot like exponential decay to me. In retrospect, I’m almost sad to have posted and ruined the trends. But, for those of you with very popular blogs, I think you might have an interesting option here. Have students create a webpage, right, for any old reason – to review multiplication tables or explain the distributive property or whatever. They could even just post their drawings, a math joke, short stories, whatever. When they’re done, link to it from your blog! Do it early in the morning so you don’t get the weird bump at the front edge like I did. They’re not likely to get a lot of other traffic, so I think the data might be cleaner than mine. This is risky if you are banking on the graph being exponential, but if you’re open to a class discussion about modeling traffic, where exponential curves will probably, but not definitely, come up, I think this would be fascinating. You know, you could guess how many visits will come the next day… and then see how close you are! Would this be more interesting than loans and radioactivity? I’d be interested in hearing from other blogs who notice peaks in traffic when someone famous gives you a reference. Is the curve always exponential decay? I want to have some contest with you guys. It’s not just that I want to prove myself the world’s fastest math teacher (one estimate of my speed in http://larkolicio.us/blog/?p=42 was 18 mph!). 
It’s that I think a video competition amongst teachers would be interesting for my students to judge. So, want to race? My idea is that several teachers would submit a video of themselves doing something. Running is an easy idea. Videos would be at different resolutions and different zoom levels, with different settings, so kids would have to use multiple points of reference to determine who is actually moving the fastest. Then, we’d have our kids make their calculations. We’d need some sort of structure to submit work from our multiple classes, and some averaging scheme to be fair when one class says their teacher was going a suspicious 50 mph (honest mistake, of course!). Clearly it doesn’t need to be a footrace. We could have any sort of timed event. Is there an untimed event that we could compare? Building a pile of sand with the best height/base ratio? Making a function with the best (length)/(area of enclosing rectangle)? Best record at Settlers of Catan? Maybe you’re getting the idea that this is not crystal clear in my mind yet. But here’s my motivation: I want to show kids that I am part of a learning network, and that other people are working on getting better at things. I think it will motivate them. Also, I think they’d think that interacting with students from another class would be cool, and would show them a way to interact with others on the internet besides facebook. How can we do this? Leave your wave username in the comments or email it to me and we could collaborate with it. I’m “rileylark.” I have a bunch of invites if you don’t have an account. Involving students in assessment The norm is for the teacher to write a test, lead a few lessons, describe what will be on the test, and then administer the test, right? Students are not involved in making the assessment that will determine their grades. My form of summative assessment changed drastically this year, but it still does not address this issue. 
From what I hear from other teachers (you?), there’s a lot of room in all of our classrooms for more student involvement in assessment creation. The tempered radical recently wrote about the benefits of explaining to students exactly what the point of each lesson is. It’s the kind of thing that seems so obvious when you say it like that, but this is relatively new research pointing at this stuff. In the November 2009 issue of Educational Leadership, “The Quest for Quality” references research from 2006 and 2009 to make the radical claim that “students learn best when they monitor and take responsibility for their own learning.” It goes on to say “This means that teachers need to write learning targets in terms that students will understand.” Sam Shah wrote about an experience talking directly with students about what it means to think and act like a mathematician that was so powerful for him that he considers it a genesis for himself as teacher. And meanwhile, I think I’m totally rad for talking with students about what they want out of my class. It’s incredible that this stuff is new, right? Some teachers are on to this already. I read about teachers developing rubrics with their classes, and others having students write questions from which the teacher will select his favorite three, etc. These teachers are already reaping the benefits: • students feel (are) respected • students feel (have) ownership of the assessment, which gives them a new responsibility • students know a lot about the assessment before the lessons are all over, which seems, you know, better. These benefits are obvious and supported by research. So, if you use an assessment scheme based on written tests (like I do), what are the best ways to get some of these benefits? I want to experiment with having kids write and critique their own questions for sure, since this seems easy to implement and, at its worst, is a form of review. 
I already ask them to assess their progress towards their personal goals. What else can I do? I am hereby declaring a new goal for the first month of next semester: I will find a way to include each student in the act and process of his or her own assessment, at least a little. I’m aiming high – I don’t mean that I will include “the class” in creating “the assessment.” Whew. There’s something to think about on the 14-hour drive home for Christmas! Please leave comments if you have ideas. I just set this kind of big goal, and to be honest, guys, I don’t know how I’m going to meet it yet. I’m trying out CPM’s Algebra 2 book and so far it’s pretty much fantastic. I don’t want to give away many details of the lesson plan I used today, since they asked me not to reproduce it. But I have to tell you about the basic mechanic: polydokus. A complete polydoku has 4 main sections – one for each of two polynomial factors, one for the product of said factors, and another area for the work. You can figure it out from this already-solved Of course, much of this data is redundant. Try your luck at this unsolved puzzle: The payoff comes in when the puzzle looks like this: Have your kids solve this polydoku, and then ask them, “Hey, by the way, what’s The CPM lesson went on to have the kids discover remainders, and even connected it to the factor theorem for finding roots of an equation, but I’ll let you ask CPM about that. You know who’s a polydoku convert? [pointing emphatically at myself with my thumbs] This guy. Acknowledging Student Time and Autonomy Though I’ve always believed it on an intellectual level, I’ve recently begun to understand with new depth that kids are young people who deserve the full respect that all people deserve, and are not in some way inferior. I have been slowly identifying instances in which I subconsciously considered my time more valuable than theirs, by changing my office hours at the last minute or assigning homework I hadn’t analyzed thoroughly. 
The fact that we (adults) require students to come to school for hundreds of hours every year has become newly startling to me. As teachers, we sometimes claim a kind of ownership over a significant percentage of our students’ lives. I was surprised to realize that even those students that have done the least work towards the goals I set dedicated 100 hours to my class. Do you know how many episodes of The Wire you can watch in 100 hours? Like, a hundred. Don’t get me wrong. I work harder in each of my classes than any of my students ever has. I expect them to do work for me (for them, but whatever), and I don’t feel bad about that. But this is the first year that I’ve really acknowledged that explicitly to them. I spent about ten minutes of the first period of my classes this year telling each class why I choose to teach and what I get out of it. I told them what I expect from them and why I thought it was important for them to succeed in my goals – important enough that I would be requiring them to do it, even if they didn’t like it. Then, I spent the rest of the class giving kids time to talk and write about what their own goals for the class were. I was very clear that I wanted them to meet my goals, but, given that those were required, what else did they want? We spent some time talking about the importance of action steps, and they all came up with some ways to make sure they were working towards their own goals. I left room for their goals on my syllabus. Later in the semester we came back to these goals briefly, just to remind the students of the concept and to let them assess their progress. I never graded any of it, and I didn’t require them to keep track of anything, but acknowledging that they were autonomous with full-member status in the humanity club got us off to a warm and fuzzy start. I have no research to back this up, but, with this approach: • Students are given the opportunity to find something important in my class. 
Some of them might have goals like “get better at doodling” at first, but, this year at least, those students got bored with those goals. Later they picked goals more like “get better at taking notes,” “understand where formulas come from more,” or “get more involved in class discussions.” • When students have a personal goal, that is really theirs, with no supervision or grading or judgement, that goal is automatically interesting. • The students realize that I value their time. • I feel better after a really boring class because of this initial discussion. The students know what my goals are for them, and they know I’m not just jerking them around. • When students do choose to share a goal with me, perhaps to ask for help in achieving or measuring success, I feel much closer to them because of that moment of unguarded honesty. My dedication increases and my rewards multiply. Overall, I’m not sure that this is worth the amount of time I spend on it. Many kids just forget about their goals and go about business as usual, and I only mention them every three or five weeks. What do you think? How do you show your students respect? Bag of Tricks #1 – Index Cards In “Bag o’ Tricks” posts, I’ll give activities that require almost zero prep, but inject a shot of fun, practice, activity, assessment, remediation, or whatever in a small amount of class time. This post’s focus is index cards. My students like them – I think they are just nicer objects than sheets of paper. These are perhaps my favorite no-prep activities. Memory (20 minutes) 1. Each student gets two index cards. 2. On one index card, each student writes an expression of a given type (e.g. an anonymous differentiable function like “2x+sin(x)”). Every student must use a pencil. 3. On the other index card, each student writes a corresponding expression after a given operation (e.g. differentiation – “2 + cos(x)”). 
After this step each student has two cards that are connected by the given operation, but not by name or any other property. 4. In pairs, students swap cards and check each other’s work. 5. Each student gets another two index cards and repeats the process. Each student now has a total of four cards, two pairs of linked cards. 6. Students form groups of four, shuffle their combined sixteen cards together, and lay them out upside down. The cards are (hopefully) indistinguishable. 7. The students play memory (in teams of two, or not). A team flips over one card, and then another. If they match through the operation, they keep the pair, get a point, and go again. If the cards don’t match, the next team is up. This activity is great, after you figure out how to make sure students write problems of the appropriate difficulty. They need to be pretty easy. Memory is hard when its just pictures of barnyard animals, you know? I use it to have students practice derivatives over and over again. Every time they see, for example, “2x,” they have to think “what is the derivative of 2x, and what might have 2x as a derivative?” You need a problem that’s easy, but takes lots of practice. Distributing polynomials, finding logarithms, solving linear equations, etc. The first time I used this activity I put, like, physics word problems on one card and answers on another. Let’s just leave it at “don’t do that” and move on, please ;). Benefits of memory: 1. A bunch of practice 2. It’s reasonably fun 3. Kids write their own problems and solve them 4. Each student gets the advantage of knowing 2 of the 8 answers right away. This almost guarantees some success for every student – everyone can feel engaged, even if their skill level is lower than the others’. Write and Swap (5-7 minutes) 1. Each student gets an index card and creates an example problem. 2. Students swap cards at their table (I have tables of two) and confirm that the problems are in the proper form, etc. 
Any questions about problem creation are resolved. 3. The teacher moves quickly and energetically around the room, picking cards swiftly out of kids’ hands and giving them replacement cards from other kids. This works elegantly – the teacher can move in any pattern, so as soon as problems are written they can be swapped out, but students who need more time may take it as the teacher is passing out cards. After this step each student has a new card in front of them, and they don’t know exactly where it came from. 4. Each student solves the problem on his or her card. 5. Students swap cards at their table and confirm solutions. Any questions about problem solution are resolved. Benefits of write and swap: 1. Each student gets practice writing a problem, which may involve critical thinking about what is important to include. 2. Each student thinks about four different problems in a row, but a physical interaction between each problem keeps attentions focused. 3. Student responsibility is diffused. Limited responsibility can help students feel safe, which can be important (though students should be fully responsible for at least some work every day). 4. A peppy teacher can infuse the activity with energy on a slow day by zipping around the classroom in the big card swap. Carry around a funny container instead of just holding the cards in your hand if you want. Write and Swap is great for those times when you just want students to practice something kind of boring a few times. It’s not great for longer problems because the phases get unsynchronized. Most confusing part (5-7 minutes) I got this from Science Formative Assessments, by Page Keeley. 1. Each student gets one index card near the end of the period. 2. Each student writes, anonymously, the thing about the class that was most confusing, least fun, whatever. 3. The cards all go in a box and are redistributed, one card per kid. 
Page Keeley recommends having the kids literally throw the cards around, but I admit to not being brave enough to try this yet. It might make this activity really fun… or just add two minutes to its execution. 4. Kids read their new cards aloud to the class.

The first time I tried this, I wasn't that impressed with the results, but like any new technique I've gotten better at making it succinct and useful. This activity is mostly to get a quick sense of how your lesson went, if you didn't have any better way to do it built in. 1. If a theme emerges, you know, that's a great piece of information for the teacher. Write that down on your lesson plan! 2. You get to hear from every kid in a very low-pressure way. 3. I imagine that kids who are embarrassed by a lack of understanding are heartened when they (inevitably(!)) hear that someone else had the same problem.

Time-independent assessment at the end of the period

This semester I started giving my students small, focused tests that attempt to isolate a single skill. Among the many things about this method that are astoundingly great is the fact that reassessment is a snap. A student can earn credit for a skill regardless of whether he understood it immediately or took 2 months of work to master it. I can easily reassess his skill level in December, even if we studied the concept as a class in September. But now there are only two weeks left in the semester, and grades will be due. I want to be able to explore more material, and to test my students' skill level with it. The students want more tests, for goodness' sake. And if I give a test with only 3 days of class left… a student who earns a low grade at first does not get any time to improve! It is clear that there is no way to teach new skills at the end of a grading period and give students a lot of time to become comfortable with them.
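The reassessment mechanic — credit for a skill whenever mastery shows up, with only the best attempt counting — is easy to model. A sketch (my own, not the author's actual gradebook):

```python
# Skill-based gradebook: each skill keeps every attempt, but the grade
# is the best score, no matter when it was earned.
from collections import defaultdict

attempts = defaultdict(list)

def record(student_skill, score):
    attempts[student_skill].append(score)

record(("ana", "dividing polynomials"), 2)   # rough first try in September
record(("ana", "dividing polynomials"), 4)   # better in October
record(("ana", "dividing polynomials"), 5)   # mastered by December

def grade(student_skill):
    return max(attempts[student_skill], default=0)

print(grade(("ana", "dividing polynomials")))  # 5
```

The end-of-semester problem in the post falls straight out of this model: a skill introduced three days before grades are due simply has no time to accumulate better attempts.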
Is there a way to teach all new skills at least three weeks before the end of a grading period… and still make the last three weeks interesting and curricularly-advancing? What do you do when a student fails a test at the very end of a grading period? The Unfunny Valley Why is it that I always feel the best after classes that I’ve planned the least? Is it because less-planned classes can be more organic, following discussions more naturally, accepting tangents? Or is it because I end up talking more in classes less planned, and don’t have to wait around bored while students practice with problems I’ve prepared? Clearly, some theories are more complimentary than others. Of course, when I work for eight hours on 100 minutes of a lesson, I feel great about it. When I do a TON of work, my lessons are satisfying at reasonable rates. But there seems to be a sort of uncanny valley that starts at about one hour of prep and doesn’t end until about four hours. More of an uncanny crevasse. My work between one and two hours goes towards tiering the lesson, or differentiating an activity. I can invent or find an activity in this time, but it’s not enough time to make sure it’s great. I spend this time improving the lesson at the cost of its flexibility. Since the flexibility was the only thing making the lesson fun, and I haven’t replaced it with anything else that’s fun, I fall into the fun abyss. After 3 hours I’ve started adding something else that’s fun – an intrinsically engaging activity or demonstration – and start clawing my way out. Here’s the worst part: I currently spend between two and three hours preparing my classes. Previously in my career, I was hitting the easy sweet spot, at between 45 and 75 minutes. Now I’m aiming at a harder (higher?) goal – fun and satisfaction, with big doses of student practice, understanding, and interest. So, am I improving? The same valley exists neither in the graph of educational content vs. prep time nor in test scores vs. 
prep time. On some levels I’m more satisfied with my two- and three-hour lessons, even though they’re less fun. However, student interest is correlated to the amount of fun I’m having. And, dangit, so is how much I like teaching! Will my kids this year have higher skill levels, but like math less?
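One more note on the Memory game from earlier in these posts: the card pairs (an expression and its derivative) can even be prepped programmatically. A tiny power-rule sketch, using only polynomials of the form c·xⁿ (an illustration, not part of the original activity):

```python
# Generate (expression, derivative) card pairs for the Memory game,
# restricted to terms c*x^n so the power rule is all we need.
def derivative(terms):
    # terms: list of (coefficient, exponent) pairs for c*x^n
    return [(c * n, n - 1) for c, n in terms if n != 0]

def pretty(terms):
    return " + ".join(f"{c}x^{n}" if n else f"{c}" for c, n in terms)

card = [(3, 2), (5, 1), (7, 0)]        # 3x^2 + 5x + 7
pair = derivative(card)                # 6x^1 + 5
print(pretty(card), "->", pretty(pair))
```

Keeping the terms this simple matches the post's advice that Memory cards need to be easy — the game, not the calculus, should supply the difficulty.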
{"url":"http://larkolicio.us/blog/?m=200912","timestamp":"2014-04-18T20:45:05Z","content_type":null,"content_length":"51570","record_id":"<urn:uuid:6f2ceb5c-bcb1-4d02-b47e-54a78b00b271>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00513-ip-10-147-4-33.ec2.internal.warc.gz"}
Similarity-based unification: a multiadjoint approach Results 1 - 10 of 15 , 2008 "... Managing uncertainty and/or vagueness is starting to play an important role in Semantic Web representation languages. Our aim is to overview basic concepts on representing uncertain and vague knowledge in current Semantic Web ontology and rule languages (and their combination). ..." Cited by 16 (5 self) Add to MetaCart Managing uncertainty and/or vagueness is starting to play an important role in Semantic Web representation languages. Our aim is to overview basic concepts on representing uncertain and vague knowledge in current Semantic Web ontology and rule languages (and their combination). - In: Proceedings of the 11th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU-06 , 2006 "... We present Annotated Answer Set Programming, that extends the expressive power of disjunctive logic programming with annotation terms, taken from the generalized annotated logic programming framework. ..." Cited by 7 (0 self) Add to MetaCart We present Annotated Answer Set Programming, that extends the expressive power of disjunctive logic programming with annotation terms, taken from the generalized annotated logic programming "... In this paper we relate two logical similarity-based approaches to approximate reasoning. One approach extends the framework of (propositional) classical logic programming by introducing a similarity relation in the alphabet of the language that allows for an extended unification procedure. The seco ..." Cited by 2 (2 self) Add to MetaCart In this paper we relate two logical similarity-based approaches to approximate reasoning. One approach extends the framework of (propositional) classical logic programming by introducing a similarity relation in the alphabet of the language that allows for an extended unification procedure. 
The second approach is a many-valued modal logic approach where ✸p is understood as approximately p. Here, the similarity relations are introduced at the level of the Kripke models, where possible worlds can be similar to some extent. We show that the former approach can be expressed inside the latter.

(2002; Cited by 1, 1 self) We use the formal model for similarity-based fuzzy unification in multi-adjoint logic programs to provide new tools for flexible querying. Our approach is based on a general framework for logic programming, which gives a formal model of fuzzy logic programming extended by fuzzy similarities and axioms of first-order logic with equality.

Abstract. In this paper we relate two logical similarity-based approaches to approximate reasoning. One approach extends the framework of (propositional) classical logic programming by introducing a similarity relation in the alphabet of the language that allows for an extended unification procedure. The second approach is a many-valued modal logic approach where ✸p is understood as approximately p. Here, the similarity relations are introduced at the level of the Kripke models, where possible worlds can be similar to some extent. We show that the former approach can be expressed inside the latter.

We present a model of fuzzy logic programming with best answer semantics as an optimization task and discuss various utility function problems.

Incomplete information is a problem in many aspects of actual environments. In many scenarios the knowledge is not represented in a crisp way. It is common to find fuzzy concepts or problems with some level of uncertainty. It is difficult to find practical systems which handle fuzziness and uncertainty, and the few examples that we can find are in the minority. To extend a popular system (which many programmers are using) with this ability seems to be an interesting issue. Our first work (Fuzzy Prolog [1]) was a language that models B([0, 1])-valued Fuzzy Logic. In the Borel Algebra, B([0, 1]), a truth value is represented using unions of intervals of real numbers. It subsumed former approaches because it was more general than them in truth-value representation and propagation. Now, we enhance our former approach by using default knowledge to represent incomplete information in Logic Programming. We also provide the implementation of this new framework. This new release of Fuzzy Prolog handles incomplete information and has a complete semantics (the previous one was incomplete, as Prolog's is), which we discuss. The new Fuzzy Prolog is more expressive for representing the real world.

- IFSA-EUSFLAT, 2009: Formal concept analysis has become an important and appealing research topic. There exist a number of different fuzzy extensions of formal concept analysis and of its representation theorem, which gives conditions for a complete lattice in order to be isomorphic to a concept lattice. In this paper we concentrate on the study of operational properties of the mappings α and β required in the representation theorem.

In this lecture we give several examples and lessons learned from research, development and experiments in the area of theory and applications of information technology. We will try to describe a possible synergy of theory and application too. Namely, to describe where practical needs bring new problems for theory and where theory helps to formulate methods, which should be verified in practice. In the theoretical part we will mention research on correctness and completeness of fuzzy logic programming [1,2] and various measures for evaluating success. In applications we mention acquaintance with development and experiments of preferential querying and user-dependent top-k answers [3]. In all of these it also depends on whether our task is deductive (querying),
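The Fuzzy Prolog abstract above represents truth values as unions of intervals in the Borel algebra B([0, 1]). As a hedged illustration only (this is not the actual system's code, and a real implementation would handle unions of intervals and several t-norms), here is a minimal Python sketch of conjoining single-interval truth values by extending the min (Gödel) t-norm pointwise to the interval bounds:

```python
def interval_and(tv1, tv2):
    """Conjunction of two interval truth values under the min (Goedel) t-norm,
    applied pointwise to the lower and upper bounds."""
    (a1, b1), (a2, b2) = tv1, tv2
    return (min(a1, a2), min(b1, b2))

# "fairly tall" with truth in [0.6, 0.8] AND "quite fast" with truth in [0.5, 0.9]
print(interval_and((0.6, 0.8), (0.5, 0.9)))  # -> (0.5, 0.8)
```

The point of the interval representation is that uncertainty about a degree of truth is carried through the computation as a range rather than a single number.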
center of mass function

1. Center of mass / Center of gravity - MATLAB

2. center of mass of objects in a 3D box
Hello, I have created a 3D box with particles in it. These particles have odd shapes and I want to find the center of mass of each particle and then connect these centers. Could someone guide me how to do this? Many thanks in advance.

3. How to compute center of mass for an image - MATLAB

4. Calculating the Center of Mass of a 3D Binary Array/Matrix
Hi, I'm an undergraduate doing some physics research. I apologize if this is the wrong group, but this is the only thing that popped up on Google Groups. Here is my task:
1) Import 10 images into MATLAB in binary format and superimpose them on top of each other
2) Put the result back into binary format (because superimposing 10 images will yield overlaps, and those overlapping positions will now contain values that are greater than 1)
3) Find the 3D center of mass of the volumetric matrix
The object I have is a tumor in 3 dimensions. The tumor is bounded within a matrix that is 512 x 512 x 117. The tumor does not occupy the whole of the matrix, just a small portion. I thought the math behind finding the Center of Mass (CM) would be simple, but after getting the equations down and pondering over them I realized that my approach wouldn't work. I've finished with steps 1 and 2 (quite easy). I wanted to do the following (where sum() stands for a big-sigma summation over the stated bounds):
A = sum(m*x)/M
B = sum(m*y)/M
C = sum(m*z)/M
where m = mass (the value at the specified coordinate), x, y, z = the first, second and third dimensions, M = total mass over all three dimensions (easy to do using a triple sum in MATLAB), and (A, B, C) = the new coordinates (in decimal format, but multiplying them by a multiplicative factor should fix that — actually, a total guess).
Major problem: I was keeping the two coordinates that I'm not summing over at a constant value (i.e. zero). If I do this, then all I am summing over are the outer edges of the matrix, which don't contain the tumor (I know this for a fact). After researching on the internet, I have come to the conclusion that I am dealing with a centroid, something which I have never seen in my life. I'd appreciate any and all help, thank you.

5. Calculating the center of mass per line - MATLAB
Hi all, I need some help in finding out a way to calculate the center of mass of a waveform. I have seen a lot of posts about finding it for a triangle and a polygon, but I am not sure if I can apply that to a waveform. The waveform has 40 data points and you could assume it looks like the positive half of a standard sine wave. Any help would be appreciated.
Rahul Shingrani

8. faster way to calculate "center of mass" than using regionprops
hi guys, does anyone know if there is a faster way to calculate the center of mass of an image than using the Centroid property of the command regionprops? the image is a 240x320 rgb image.
stats = regionprops(image, 'Centroid');
lasts on my computer system at least 0.5 sec. i need to be faster than 0.1 sec to calculate the center of mass. please help me. thanks. k.
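The formula in question 4 is the standard centroid computation: each coordinate of the center of mass is the value-weighted average of the voxel indices, with the sum taken over every voxel of the volume (not along a single edge with the other coordinates held at zero). As an illustration only (in Python rather than MATLAB, with a toy volume instead of the poster's data):

```python
def center_of_mass(vol):
    """Center of mass of a nested-list 3-D volume:
    each coordinate is the value-weighted average of the voxel indices."""
    M = Ax = Ay = Az = 0.0
    for x, plane in enumerate(vol):
        for y, row in enumerate(plane):
            for z, m in enumerate(row):
                M  += m          # total mass
                Ax += m * x      # first-moment sums
                Ay += m * y
                Az += m * z
    return (Ax / M, Ay / M, Az / M)

# toy binary "tumor": a 2x2x2 block of ones inside a 5x5x5 grid of zeros
vol = [[[0.0] * 5 for _ in range(5)] for _ in range(5)]
for x in (1, 2):
    for y in (2, 3):
        for z in (0, 1):
            vol[x][y][z] = 1.0
print(center_of_mass(vol))  # -> (1.5, 2.5, 0.5)
```

In MATLAB the same first-moment sums can be formed with ndgrid and sum, or, for a labeled binary image, obtained directly from the Centroid property of regionprops (as in question 8).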
Hinging hyperplanes for regression, classification, and function approximation
Results 1 - 10 of 74

- Neural Computation, 1995 (Cited by 309, 31 self): We had previously shown that regularization principles lead to approximation schemes which are equivalent to networks with one layer of hidden units, called Regularization Networks. In particular, standard smoothness functionals lead to a subclass of regularization networks, the well known Radial Basis Functions approximation schemes. This paper shows that regularization networks encompass a much broader range of approximation schemes, including many of the popular general additive models and some of the neural networks. In particular, we introduce new classes of smoothness functionals that lead to different classes of basis functions. Additive splines as well as some tensor product splines can be obtained from appropriate classes of smoothness functionals. Furthermore, the same generalization that extends Radial Basis Functions (RBF) to Hyper Basis Functions (HBF) also leads from additive models to ridge approximation models, containing as special cases Breiman's hinge functions...

- Automatica, 1995 (Cited by 136, 15 self): A nonlinear black box structure for a dynamical system is a model structure that is prepared to describe virtually any nonlinear dynamics. There has been considerable recent interest in this area with structures based on neural networks, radial basis networks, wavelet networks, hinging hyperplanes, as well as wavelet transform based methods and models based on fuzzy sets and fuzzy rules. This paper describes all these approaches in a common framework, from a user's perspective. It focuses on what are the common features in the different approaches, the choices that have to be made and what considerations are relevant for a successful system identification application of these techniques. It is pointed out that the nonlinear structures can be seen as a concatenation of a mapping from observed data to a regression vector and a nonlinear mapping from the regressor space to the output space. These mappings are discussed separately. The latter mapping is usually formed as a basis function e...

- Computers and Chemical Engineering, 1997 (Cited by 96, 4 self): More than 15 years after Model Predictive Control (MPC) appeared in industry as an effective means to deal with multivariable constrained control problems, a theoretical basis for this technique has started to emerge. The issues of feasibility of the on-line optimization, stability and performance are largely understood for systems described by linear models. Much progress has been made on these issues for nonlinear systems, but for practical applications many questions remain, including the reliability and efficiency of the on-line computation scheme. To deal with model uncertainty "rigorously" an involved dynamic programming problem must be solved. The approximation techniques proposed for this purpose are largely at a conceptual stage. Among the broader research needs the following areas are identified: multivariable system identification, performance monitoring and diagnostics, nonlinear state estimation, and batch system control. Many practical problems like control objective prior...

- Plenary talk, 17th IFAC World Congress, Seoul, South Korea, 2008 (Cited by 77, 2 self): System identification is the art and science of building mathematical models of dynamic systems from observed input-output data. It can be seen as the interface between the real world of applications and the mathematical world of control theory and model abstractions. As such, it is a ubiquitous necessity for successful applications. System identification is a very large topic, with different techniques that depend on the character of the models to be estimated: linear, nonlinear, hybrid, nonparametric etc. At the same time, the area can be characterized by a small number of leading principles, e.g. to look for sustainable descriptions by proper decisions in the triangle of model complexity, information contents in the data, and effective validation. The area has many facets and there are many approaches and methods. A tutorial or a survey in a few pages is not quite possible. Instead, this presentation aims at giving an overview of the "science" side, i.e. basic principles and results, and at pointing to open problem areas in the practical, "art", side of how to approach and solve a real problem.

- 2001 (Cited by 49, 7 self): We propose a new technique for the identification of discrete-time hybrid systems in the Piece-Wise Affine (PWA) form. This problem can be formulated as the reconstruction of a possibly discontinuous PWA map with a multi-dimensional domain. In order to achieve our goal, we provide an algorithm that exploits the combined use of clustering, linear identification, and pattern recognition techniques. This allows to identify both the affine submodels and the polyhedral partition of the domain on which each submodel is valid, avoiding gridding procedures. Moreover, the clustering step (used for classifying the datapoints) is performed in a suitably defined feature space, which also allows to reconstruct different submodels that share the same coefficients but are defined on different regions. Measures of confidence on the samples are introduced and exploited in order to improve the performance of both the clustering and the final linear regression procedure.

- 1998 (Cited by 31, 1 self): In this paper we investigate the problem of providing error bounds for approximation of an unknown function from scattered, noisy data. This problem has particular relevance in the field of machine learning, where the unknown function represents the task that has to be learned and the scattered data represents the examples of this task. An obvious quantity of interest for us is the generalization error -- a measure of how much the result of the approximation scheme differs from the unknown function -- typically studied as a function of the number of data points. Since the data are randomly generated and noisy, the analysis of the generalization error necessarily involves statistical considerations in addition to the traditional...

- Ann. Stat., 2000 (Cited by 30, 4 self): We consider the problem of estimating an unknown function f from N noisy observations on a random grid. In this paper we address the following aggregation problem: given M functions f_1, ..., f_M, find an "aggregated" estimator which approximates f nearly as well as the best convex combination f* of f_1, ..., f_M. We propose algorithms which provide approximations of f* with expected L2 accuracy O(N^(-1/4) ln^(1/4) M). We show that this approximation rate cannot be significantly improved. We discuss two specific applications: nonparametric prediction for a dynamic system with output nonlinearity and reconstruction in the Jones-Barron class.

- 1995 (Cited by 29, 5 self): In this paper we discuss several aspects of the mathematical foundations of the non-linear black-box identification problem. As we shall see, the quality of the identification procedure is always the result of a certain trade-off between the expressive power of the model we try to identify (the larger the number of parameters used to describe the model, the more flexible the approximation), and the stochastic error (which is proportional to the number of parameters). A consequence of this trade-off is the simple fact that a good approximation technique can be the basis of a good identification algorithm. From this point of view we consider different approximation methods, and pay special attention to spatially adaptive approximants. We introduce wavelet and "neuron" approximations and show that they are spatially adaptive. Then we apply the acquired approximation experience to estimation problems. Finally, we consider some implications of these theoretic developments for the practically...

- Automatica, 2004 (Cited by 22, 4 self): This paper addresses the problem of identification of hybrid dynamical systems, by focusing attention on hinging hyperplanes (HHARX) and Wiener piecewise affine (W-PWARX) autoregressive exogenous models. In particular, we provide algorithms based on mixed-integer linear or quadratic programming which are guaranteed to converge to a global optimum. For the special case where switches occur only seldom in the estimation data, we also suggest a way of trading off between optimality and complexity by using a change detection approach.

- (Cited by 20): ...ed on visual and speech data. The ability of the network to automatically generate wavelet codes from natural images is demonstrated. These bear a close resemblance to 2-D Gabor functions, which have previously been used to describe physiological receptive fields, and as a means of producing compact image representations. Keywords: neural networks, unsupervised learning, self-organisation, feature extraction, information theory, redundancy reduction, sparse coding, imaging models, occlusion, image coding, speech coding. Declaration: This dissertation is the result of my own original work, except where reference is made to the work of others. No part of it has been submitted for any other university degree or diploma. Its length, including captions, footnotes, appendix and bibliography, is approximately 58000 words. Acknowledgements: I would like first and foremost to thank Richard Prager, my supervisor...
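Several of the entries above refer to Breiman's hinge functions, the building block of hinging-hyperplane models: a hinge is the pointwise max (or min) of two affine functions of the input, which gives a piecewise-linear surface with a "crease" where the two hyperplanes meet. A minimal illustrative sketch (the function name and toy data are mine, not from any of the cited papers):

```python
def hinge(x, a, b):
    """Breiman-style hinge function: pointwise max of two affine maps.
    a and b are (weights, bias) pairs; x is a list of coordinates."""
    val = lambda wb: sum(w * xi for w, xi in zip(wb[0], x)) + wb[1]
    return max(val(a), val(b))

# 1-D example: the hinge of y = x and y = -x is |x|
a = ([1.0], 0.0)
b = ([-1.0], 0.0)
print([hinge([x], a, b) for x in (-2.0, 0.0, 3.0)])  # -> [2.0, 0.0, 3.0]
```

Hinging-hyperplane regression models are sums of such hinges, fitted for example by the mixed-integer programming approach of the 2004 Automatica entry.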
Geometric Unity Physics has a problem. In fact quite a few problems. Why are there three generations of fundamental particles, each seemingly just a heavier copy of the generation before? What is dark matter? Why is the expansion of the universe accelerating? How can one reconcile Einstein’s Field Equations which control the curvature of space time and represent our theory of gravity with the Yang-Mills equations and the Dirac equation which represent our theory of particle interactions on a quantum level? Two years ago Eric Weinstein, working from outside the academic system, came to me with some bold and unorthodox ideas that he had come up with as an attempt to answer these problems. My initial reaction to this was the same as to any such proposal of this type: skepticism. Like many academics I regularly receive hundreds of emails, letters, books from a whole range of people claiming the discovery of a theory of everything or proofs of the Riemann Hypothesis or the Goldbach Conjecture. However I have always kept in the back of my mind the story of Ramanujan writing out of the blue to GH Hardy. Ramanujan’s approaches had been rejected by two academics in London before Hardy responded positively to his letters. So I try to give serious proposals a hearing regardless of whether they come from inside or outside the academy. My initial skepticism began to wane though as I heard Weinstein’s ideas unfold. By the end of our meeting I was intrigued enough to dedicate time during the last two years to work through the ideas. I don’t understand every detail, but the ideas are beautiful and I believe extremely natural. It is a highly mathematical story but with clear implications to questions of physics. His approach is in line with Einstein’s belief in the power of mathematical geometry. Einstein talked about his conviction that the universe was made of marble not wood. Weinstein’s proposal which he calls Geometric Unity realizes Einstein’s dream. 
Geometric elegance is of course no guarantee that the mathematical universe that Weinstein describes must match the reality of our universe. However, if the details survive scrutiny it will still be a beautiful mathematical landscape Weinstein has uncovered, as well as one with uncanny similarities to the world we inhabit. Weinstein has kept many of these ideas to himself for nearly three decades. It took some courage for him to discuss the ideas with me. I was sworn to secrecy. Weinstein has also been discussing the ideas in secret with mathematician Ed Frenkel and physicist David Kaplan. But as I spent time with the ideas I believed that they were too important to be kept private and needed to be discussed more. I debated with myself whether it was appropriate for me to host a lecture where Weinstein could begin to explain his ideas. Was this an appropriate use of my position as the Simonyi Professor for the Public Understanding of Science? Charles Simonyi prepared a manifesto when he endowed my chair to guide the holders of the chair in their mission. I would like to quote one part of the manifesto: “Scientific speculation, when so labelled, and when the concept of speculation and its place in the scientific method has been made clear to the audience, can be very exciting. It is a very effective communication tool, and it is by no means discouraged.” It was in the spirit of this part of my mission as the Simonyi professor that I decided to persuade Weinstein to give a special lecture in which he could start to propose the ideas he has been working on. The decision to try to publicise his ideas was not taken lightly as all attempts at a Unified Theory have failed so far. However, a little thought reveals that whenever a final theory is at last found, it will first begin its public life with an overwhelming statistical likelihood of failure. As Charles Simonyi suggested, let me make very clear how the scientific method can work.
If you have new ideas it is perfectly acceptable to try to articulate these ideas through a seminar or lecture before publishing a paper. The ideas might go through some revision and evolve through dialogue with other scientists. Ultimately these ideas must be written down and evaluated by the communities to which they are relevant. The lecture that Weinstein gave last week was the beginning of that process. A paper which Weinstein is currently working on will in due course appear. As Charles Simonyi expressed in his manifesto “scientific speculation can be very exciting”. It is an excitement that I think the public can share. No one other than the relevant scientific communities will be able to evaluate the merit of the work, but why shouldn’t the public see science in action? It can help communicate the challenging problems that physics still faces. It took a lot of courage for Weinstein to come forward and talk about his ideas. He comes as something of an outsider but with the sensibilities and knowledge of an insider, a difficult place from which to propose bold ideas. He has a PhD from Harvard, post-doctoral experience from MIT and the Hebrew University. Not a bad grounding. But rather than staying in academia he went a more independent route working in economics, government and finance. But I have always been a believer that it doesn’t matter who the person is and what is their background, it is the ideas that speak for themselves. I believe science has much to gain if the ideas turn out to be correct and little to lose if they turn out to be wrong. For a general description of the ideas being proposed check my Guardian blogpost 5 comments: I am impressed by how lucid and humble is Mr. Du Sautoy's argument. That is very much in contrast with the accusation of sensationalism proffered against him. Daniel L. 
Burnstein I think that if an article as measured as this one had been published in the Guardian, rather than http://www.guardian.co.uk/science/2013/may/23/eric-weinstein-answer-physics-problems , we might have had a lot more light and rather less heat over the last few days. I think the article here makes the status of both Dr Weinstein (as something of an 'outsider' to scientific mainstream community) and his work (speculative proposal) much clearer than the Guardian article. The Guardian article introduces Dr Weinstein as a mathematician and physicist but makes no mention that this is, for Dr Weinstein 'just a (time thirsty) hobby' [1], I think that this was an important omission. Without it we might be tempted (as I was) to think that we had here a moment akin to Andrew Wiles giving his talk 'Modular Forms, Elliptic Curves and Galois Representations' and presenting a (nearly complete) proof of Fermat's Last Theorem. Dr Weinstein might be right but the Guardian article did not give us the odds in the same way that either this blog post or Alok Jha's blog post http://www.guardian.co.uk/science/blog/2013/may/23/roll-over-einstein-meet-weinstein does. More than anything, I think this episode underlines the importance of trying to highlight whereabouts in the scientific process a proposal like this resides so that we can, as individuals and as a society judge how much time, effort, energy and resource we invest in it. The Simonyi Manifesto wisely highlights the desirability of such labelling when communicating speculative proposals. I think it behoves scientists, science communicators, journalists and teachers to take this very much to heart. Equally, I think that perhaps the mainstream science community and society as a whole might progress faster if speculation was given a little more time on stage. I'd like to see more of the workings of this part of the scientific process come under the spotlight. 
[1] Tweet from @EricRWeinstein https://twitter.com/EricRWeinstein/status/339742222163533824

Thanks for your explanation. I love the last sentence.

I believe there is a fundamental mistake here: physics never has problems with "why"s. There is no problem of why there are three generations, why fermion masses are as they are, why there is a given gauge group, why the universe exists... Physics can only answer "how"s: how things interact, how matter properties can be described, etc. I strongly believe that there can be no theory of everything, simply because "everything" does not exist; it is not something we measure. In fact, "everything" is all that we do not measure. Thus, we cannot make a theory of it. I am saying this after having already published works on "Unified models of gravity" :)

It seems to me that you and Weinstein are suffering from the disease of Platonism; you seem to be confusing the two sides of Hume's Fork. Physics deals with matters of fact, whereas maths deals with the realm of ideas. In the realm of ideas deductive reasoning is used: it is only necessary to prove a theorem once, such as the recent proof by another comparative outsider that the gap between two consecutive prime numbers is less than 7.5 billion. On the other hand, physics essentially establishes its laws by a process of induction; the laws will always have some empirical input, such as coupling constants, etc. It takes many measurements to establish the values of physical constants, and a law of physics is only ever approximately true, albeit a good approximation, but the approximation can only be as good as the measurement of the physical constants. You and Weinstein seem to think there must be a one-to-one correspondence between the world of mathematics and the natural world. Furthermore, seduced by the elegance of your mathematical theory, you convince yourself it must be correct. But the symmetries in particle physics are only ever approximate: the mass of the neutron is only approximately equal to the mass of the proton. Even worse, the mass of the top quark is certainly not equal to the mass of the bottom quark. Elegant as Weinstein's theory may be, it certainly doesn't represent reality as it is, only as he would like it to be. Finally, you seem to want to leave the messy world of prediction, which in most cases requires hard work. Have you ever slogged through the prediction of the Beta function
Correlation of data: Scatter plots
Sometimes two different characteristics of a population exhibit behavior that seems to indicate that the measured values of one characteristic for an individual can be used to predict the measured values of the other characteristic for that same individual. This may be due to one characteristic affecting the other, a third factor affecting both characteristics, or it may be a coincidence altogether. There are several statistical measures that can be utilized to illustrate and provide a better understanding of the relationship between two characteristics, or variables, of a population. We hope to present a glimpse of some of these measures, and show how Mathematica can be incorporated for the purpose of applying them. This statistical study of the strength of the relationship between two variables is known as correlation analysis. If it is suspected that one of two characteristics of a population influences the other, then this characteristic is referred to as an independent variable, while the other is referred to as a dependent variable. If it is unclear whether one characteristic might depend on the other, then such a designation may be made arbitrarily, without the assumption that a dependence between characteristics occurs, since we are trying to determine if a correlation exists without assuming that one exists beforehand. Now, an investigation of the correlation between two characteristics of a population requires two pieces of information per individual of a sample of the population--measurements of the same two characteristics for each individual. This set of pairs of information can be treated as a set of points in a plane, leading us to a simple method of visually representing this type of data.
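Among the statistical measures alluded to above, the most common numeric summary of the strength of a linear relationship is the sample (Pearson) correlation coefficient. The tutorial itself works in Mathematica; as a hedged aside, here is an illustrative plain-Python sketch of the computation:

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists:
    covariance of the centered data divided by the product of the spreads."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# perfectly linear data correlates with r close to 1
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # approximately 1.0
```

Values near +1 or -1 indicate a strong linear trend in the scatter plot; values near 0 indicate little linear relationship.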
A scatter plot is a plot of the points (x[i], y[i]) of a set of data collected by the measurements, x[i] and y[i], of two characteristics of the individuals of a sample of a population, the values x[i] representing the independent variable and the values y[i] representing the dependent variable. We can produce a scatter plot of a set of points contained in a list data using the command ListPlot. This is done as follows:

ListPlot[data]

This yields a plot of the points contained in data on a rectangular section of the Cartesian plane. There are a number of optional modifications that can be made to a ListPlot command. They are incorporated into the command in the following manner:

ListPlot[data, option[1]->value[1], option[2]->value[2], ...]

Here we understand that option[1] is set to the value value[1], option[2] is set to value[2], and so on. We can obtain a list of the options that correspond to the ListPlot command with an Options command

Options[ListPlot]

This will list the set of options along with their default values.

Example 1: Suppose that the height, in inches, and weight, in pounds, of a group of 10 males, aged 30 through 40 years, is listed in the following table:

│ height │ weight │
│  62.1  │  157   │
│  58.3  │  161   │
│  73.2  │  198   │
│  65.9  │  192   │
│  69.4  │  180   │
│  75.4  │  248   │
│  71.2  │  203   │
│  68.9  │  182   │
│  67.3  │  195   │
│  64.8  │  168   │

where the height of each individual is listed in the first column, and the corresponding weight of each individual is listed in the second column. Let us form a scatter plot of this data. To accomplish this, we shall merely input the data as a list of paired values,

In[1]:= values={{62.1,157},{58.3,161},{73.2,198},{65.9,192},{69.4,180},
                {75.4,248},{71.2,203},{68.9,182},{67.3,195},{64.8,168}};

and then we will plot the set of resulting points using the ListPlot command. Since it seems more likely that weight depends on height than it does that height depends on weight, we have designated height as the independent variable, and weight as the dependent variable.
To better center the graph of points, we modify the plot by specifying the x- and y-dimensions of the graphics output with the option PlotRange.

In[2]:= ListPlot[values,PlotRange->{{55,80},{150,250}}]
Out[2]= -Graphics-

Thus we obtain a visual sense of how the height and weight of these individuals correlate to each other.

Example 2: Suppose that the following set of paired values

│ 1.32 │2│
│ 2.37 │5│
│ 0.81 │4│
│ 0.63 │3│
│ 1.03 │8│
│ 3.21 │4│

consists of a list of data collected from six different households living in the same city. Each line of data represents a different household, with the first column containing the fraction of the national average of the annual income that the household earns in a year, and the second column containing the number of individuals living in that household. Let us form a scatter plot of this information. We can input our data into the list info with the command

In[3]:= info={{1.32,2},{2.37,5},{0.81,4},{0.63,3},{1.03,8},{3.21,4}}

To further proceed with this problem, we must designate which characteristic of the set of households it is that we wish to use as an independent variable. It may be that some will argue in favor of either characteristic, but we shall designate the second characteristic--that of the size of the household--as being our independent variable. The first characteristic, the information on annual income, will then be the dependent variable. This implies that we need the information in the second column as the x-coordinate of each point, and the information in the first column as the y-coordinate of each point. However, we have input the data into a list as points with the coordinates switched. Thus we will need to reverse the coordinates of each point in the list before we form the scatter plot.
Note that the following commands enable us to switch the coordinates of the points in our list:

In[4]:= info=Transpose[info]; info=Reverse[info]; info=Transpose[info];

The scatter plot is then obtained with the command

In[5]:= ListPlot[info,PlotRange->{{0,10},{0,4}}]
Out[5]= -Graphics-

By incorporating the function BarChart contained within the Graphics`Graphics` package, we can form a bar chart that compares two lists of numbers, each having the same number of elements. Suppose we have two sets of data, data1 and data2, with which we wish to form some such comparison. Once the Graphics`Graphics` package is loaded

<< Graphics`Graphics`

the command

BarChart[data1, data2]

will provide a bar chart for each data set, aligning the charts so that the graphs corresponding to the i^th pieces of data from each set will be side by side.

Example 3: Let us form a bar chart for the data from the previous problem. Let us assume that the data has already been input to the list info, and is paired with the number of individuals in each household first. We need to express the data from each set of coordinates of the list info as a separate list. We can express this list as a set of two such lists by transposing it.

In[6]:= info=Transpose[info];

We then form a bar chart from the two sublists of this data, the lists info[[1]] and info[[2]], by first loading the appropriate package

In[7]:= << Graphics`Graphics`

and then by utilizing the BarChart command as follows:

In[8]:= BarChart[info[[1]],info[[2]]]
Out[8]= -Graphics-

Scatter plots can also be made from ordered triples of data. This would require utilizing the ScatterPlot3D function from the Graphics`Graphics3D` package. Three-dimensional scatter plots are a little more difficult to properly visualize, but we can obtain a two-dimensional representation of a list data of ordered triples in three-dimensional space with a command of the form

ScatterPlot3D[data]

once the Graphics`Graphics3D` package has been loaded. Various options can be added to the input of this command in order to modify the output.
Example 4: Suppose that a collection of three measurements are made from each member of a group of individuals, and are given in the following table:

Let us form a three-dimensional scatter plot of this information. To do so we must input the data into a list:

In[9]:= points={{35.1,62.1,43.1},{35.6,58.3,44.2},

We must next load the appropriate package:

In[10]:= << Graphics`Graphics3D`

Finally, we obtain our scatter plot with the command

In[11]:= ScatterPlot3D[points]
Out[11]= -Graphics3D-

Unfortunately, with this command we do not obtain a good sense of the location of each point in space. The following command allows us to put small cuboids in place of each point, and thus enables us to gain a better perspective on how the points lie in relation to each other:

In[12]:= Show[Graphics3D[Table[Cuboid[points[[i]]],{i,Length[points]}]]]
Out[12]= -Graphics3D-

Note that a three-dimensional scatter plot does not graph well if there is a large variety in magnitude of values for different variables.

Last modified: Tue May 28 2002
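The correlation analysis promised at the top of this section can also be carried out numerically. As an illustration (a Python sketch rather than the Mathematica used above), here is the Pearson correlation coefficient of the Example 1 height/weight data, computed directly from its textbook definition:

```python
# Example 1 data: heights (inches) and weights (pounds) of 10 individuals.
heights = [62.1, 58.3, 73.2, 65.9, 69.4, 75.4, 71.2, 68.9, 67.3, 64.8]
weights = [157, 161, 198, 192, 180, 248, 203, 182, 195, 168]

def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance over product of spreads."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson_r(heights, weights)
print(f"r = {r:.3f}")  # roughly 0.85: taller individuals tend to weigh more
```

A value this close to 1 quantifies the positive trend that the scatter plot of Example 1 shows visually.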
Is it possible to construct (without choice, even?) a non-finitely-generated group with no proper non-finitely-generated subgroup?

Is there a non-finitely-generated group each of whose proper subgroups is finitely generated? If so, what form of choice (if any) is required to construct such a group?

Tags: gr.group-theory, set-theory

The Prufer p-group. en.wikipedia.org/wiki/Pr%C3%BCfer_group – George Lowther Jan 23 '11 at 1:21

1 Answer

(CW since this is just expanding on George Lowther's comment to the question, which could really have been an answer in the first place; if George L wants to convert his answer to a comment himself, I can delete this one.)

For any prime $p$, the Prüfer $p$-group is as desired. There are several constructions of this; a good one for present purposes is $$\mathbb{Z}[1/p]\ /\ \mathbb{Z},$$ i.e. rationals with denominator a power of $p$, modulo the integers.

To see that this works, note that it is the union of the linearly ordered chain of finitely generated (indeed, cyclic) subgroups $H_i := \{ [a/p^i] \mid 0 \leq a < p^i \}$, over $i \in \mathbb{N}$. Now any element of $H_{i+1}$ not in $H_i$ must be of the form $[a/p^{i+1}]$ with $a$ coprime to $p$, and hence generates the whole of $H_{i+1}$. So any subgroup is either equal to some $H_i$, or else contains them all and is the whole group. On the other hand, the entire group is clearly not finitely generated, since any finite set of elements is contained in some $H_i$.

Sweet... didn't know this one. I'm quite charmed. – Todd Trimble Jan 23 '11 at 18:33

@Peter: Thanks. That's just what I would have said. No need to delete this answer, as 6 people have already bothered upvoting it. – George Lowther Jan 24 '11 at 0:24

Also, every proper subgroup is cyclic, and not just finitely generated. – George Lowther Jan 24 '11 at 0:26

Excellent example and explanation---thanks! It's still not clear to me to what extent choice is necessary for this example, but I'll look through the argument more closely later. (I was expecting a less clear-cut construction, I suppose.) – Zach N Jan 24 '11 at 4:26

This group is isomorphic to the group of p-th power roots of unity in the complex numbers (restrict the isomorphism of Q/Z with the roots of unity to the subgroup of elements with p-power order). This is a more concrete model for the group. That every subgroup of this group is cyclic is related to the fact that any finite subgroup of the nonzero elements of a field is a cyclic group: any proper subgroup is missing some root of unity of order, say, $p^n$, and therefore has no root of unity of order $p^n$ or higher, which means it is a finite subgroup. – KConrad Jan 24 '11 at 9:04
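The key step in the answer above is easy to sanity-check computationally. The following Python sketch (an illustration, not part of the original thread) models $\mathbb{Z}[1/p]/\mathbb{Z}$ with exact fractions for $p = 3$, and verifies on small cases that every element of $H_{i+1}$ not in $H_i$ generates all of $H_{i+1}$:

```python
from fractions import Fraction
from math import gcd

p = 3

def H(i):
    """The cyclic subgroup H_i = { [a/p^i] : 0 <= a < p^i } of Z[1/p]/Z."""
    return {Fraction(a, p**i) for a in range(p**i)}

def generated(x):
    """Cyclic subgroup of Q/Z generated by x (x has finite order, so this stops)."""
    seen, cur = {Fraction(0)}, x % 1
    while cur not in seen:
        seen.add(cur)
        cur = (cur + x) % 1
    return seen

# Elements of H_{i+1} not in H_i are exactly [a/p^{i+1}] with gcd(a, p) = 1;
# each one should generate the whole of H_{i+1}.
for i in range(3):
    for a in range(1, p**(i + 1)):
        if gcd(a, p) == 1:
            assert generated(Fraction(a, p**(i + 1))) == H(i + 1)
print("checked: every element of H_{i+1} outside H_i generates H_{i+1}")
```

Of course this only exercises finitely many of the $H_i$; the general statement is the short divisibility argument given in the answer.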
AES-128 versus AES-256

Last November I wrote about an email response by John Callas, CEO of PGP, trying to dispel the perception that a government agency might have the computing power to break 128-bit keys. People seemed concerned that while breaking 128-bit keys is beyond the resources of most people or groups, government agencies still had a good shot. He thought this extremely paranoid, using the example of a computing cluster that enveloped the entire earth to a height of one metre and would still require 1,000 years on average to recover a 128-bit key.

I just put up on Scribd a 2008 whitepaper from Seagate that discusses their reasoning for choosing AES-128 over AES-256 for hard disk encryption. The whitepaper states that:

• NIST has concluded and recommended that all three key-lengths (128-bit, 192-bit and 256-bit) of AES provide adequate encryption until beyond calendar year 2031.
• NIST's recommendation above includes the threat model not only of predicting the key, but also of cracking the encryption algorithm. The difference between cracking the AES-128 algorithm and the AES-256 algorithm is considered minimal. Whatever breakthrough might crack 128-bit will probably also crack 256-bit.

Further, Seagate wanted to maximize the success of its solution by considering these additional business-side concerns:

• Must promote compliance with laws controlling export from the U.S. and import to other nations
• Must be cost-optimized
• Must be able to meet the needs of ALL target markets

AES-128 satisfies or exceeds all the above criteria. They also went on to discuss the computational task of recovering 128-bit keys, where assuming

• Every person on the planet owns 10 computers
• There are 7 billion people on the planet.
• Each of these computers can test 1 billion key combinations per second.
• On average, you can crack the key after testing 50 percent of the possibilities.
It follows that the earth's population can crack one encryption key in 77,000,000,000,000,000,000,000,000 years!

The original post illustrated this argument with a graphic.

Nonetheless AES-256 is being widely deployed since it conveniently lies at the intersection of good marketing and pragmatic security. In upgrading from AES-128 to AES-256 vendors can legitimately claim that their products use maximum strength cryptography, and key lengths can be doubled (thus squaring the effort for brute force attacks) for a modest 40% performance hit.

9 comments:

Doesn't Rainbow Tables = 0wned?

I personally think that for the most part, 128 bit encryption is more than sufficient for most sites. However, as technology advances, it is expected that at some point the industry standard will have to shift to 256 bit SSL encryption.

I have decided to plump for 256-bit encryption as these are all reasonably priced now - SSL247.co.uk and my customers have the added security now too.

Your calculations are incorrect! If you can test 10 combos in a second, then you can test 600 combos every minute (10 x 60s). So, if you can test 7.00E+19 combos per second, then you can test (7.00E+19 x 3.15E+7) = 2.21E+27 combos per year. Which results in 77 billion years to crack a 128-bit key on average.

Uneducated Security said... "Doesn't Rainbow Tables = 0wned?" April 14, 2010 11:16 AM

No. Rainbow tables have their limits. Salted hashes = rainbow tables owned, but also the word lists for rainbow tables get massive the longer you allow them to go. I've never seen a rainbow table that goes past 11 characters long, and that's if you're only using A-Z, a-z, and 0-9. Not even getting close to special characters and spaces. It's just as unrealistic for rainbow tables to be big enough to own AES-256 as it is for a brute-force attack to figure out a long enough password.
Which, if it were 64 characters long, would take an insanely long amount of time. No one has 64 character passwords though. It's burdensome and unnecessary.

Ouch, disproved by NSA. AES-128 has been cracked and it's only 2013, three years after this article. How about AES-256? Give them a couple more years and that will be cracked also.

The technical detail provided in this post is confusing for me to understand. I am looking for a simple detail so that I can learn the basic meaning and use of AES.
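The brute-force arithmetic in the post and in the correcting comment above is easy to reproduce. Here is a quick Python sketch using the post's stated assumptions:

```python
# Brute-force time for AES-128 under the post's assumptions:
# 7 billion people x 10 computers each x 1e9 keys tested per second.
keyspace = 2 ** 128
rate = 7e9 * 10 * 1e9                 # keys tested per second, worldwide
seconds = (keyspace / 2) / rate       # on average, half the keyspace is searched
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")           # about 7.7e10, i.e. ~77 billion years
```

This matches the 77-billion-year figure given in the comments; either way, the qualitative conclusion about the infeasibility of brute-forcing 128-bit keys is unchanged.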
Reese, Kevin, and The Monty Hall Problem

"There's a car behind one of those three doors... and goats behind the other two. Pick the door you think the car is behind. OK, now before I open the door you chose, I'll open one of the other doors... [door opens revealing a goat] and you can see there's no car behind that one. But I'll give you a chance to switch your door for the remaining closed door. What do you say? Do you want to switch or keep the one you originally chose?"

And that's one way of looking at the Monty Hall problem that's been inciting/confusing/intriguing/pissing people off for years. (You can try it yourself through this applet.)

In my recent blog on Seduction and Curiosity I made up a kind of variant on the Monty Hall problem, only this time the problem was for Kevin to pick the business card that had Reese's phone number on the back, from a pile of three business cards. Switching would have improved his odds... something his friend Manny confirmed by saying, "Your odds would have gone from 1 in 3 to 2 in 3."

I wasn't really thinking when I posted the problem... it was just the first thing that came to me when I thought about the puzzles that have made me curious, so I used it as an example. (And you can definitely count me among the huge group of Those Who Did Not Get It when I first saw it.) And I intentionally left a few things up for speculation that people have wondered and argued about in the Monty Hall problem including, "Does the motivation of the host matter?" "Do they ALWAYS offer the choice to switch, or does it depend on whether the host (Monty) knows that the contestant got lucky and picked the winning door..."

And then when I posted my Java code (in the comments to the original post), that repeated the Reese/Kevin thing 500 times, Daniel pointed out that there was the potential for another issue if my code was trying to model the real-life scenario of Reese and Kevin. The problem was that in my code Reese had a very specific algorithm--she always turned over the "A" card to show Kevin (revealing that it was blank) unless it happened to be the winning card, in which case she turned over the "C" card (Kevin always picked "B"). So that meant that if Kevin wasn't too drunk at this point, then he could eventually figure it out. Think about it... if Kevin is really paying attention, and he sees that most times Reese turns over the "A" card, then those times when she does turn over the "C" card, he thinks, "Oh... that must mean that the winning card is "A", because this time she didn't turn it over..."

It would be just like if in the Monty Hall problem, Monty always looked at the doors from, say, left to right, and he always revealed the left-most of his two doors, unless that's where the car was. When the contestant saw him skip the left-most door and open the other one instead, the contestant would have new information he could use.

So, in my problem, the code was just meant to show you that if the EXACT same scenario were repeated 500 times, with Kevin always picking "B", and the winning card is randomly distributed among the three cards, and that if Reese ALWAYS reveals a blank card and offers Kevin the chance to switch, then indeed Kevin's chances go up to 2 in 3 instead of his original 1 in 3. In other words, in this example, Kevin should always switch. (And this is exactly how Marilyn Vos Savant answered the question in the infamous article that started the controversy--she said the contestant should always switch.)

But as promised, here is my attempt at an explanation. But with huge disclaimers: 1) I don't know or care where the math gene is... I was born without it. (Proving that good-at-programming doesn't always imply good-at-math). So, my explanation might be completely ridiculous. 2) This is not a subject I have ever--or will ever try to teach or write about, since it's WAY outside my knowledge comfort zone.

But if you're still reading and still wondering how switching could possibly improve the odds when all three business cards started with a 1 in 3 chance of being the right card... here's my biggest hint:

It's not about three individual cards. It's about two stacks.

Here's another way of thinking about it. Imagine I have, say, 50 cards divided into two stacks, and only one of the 50 cards has the winning phone number on the back. Stack X has just one card while stack Y has 49 cards. The odds of the winning card being in the bigger stack are 49 in 50. A hell of a lot better than the 1 in 50 odds you get if you pick the stack-of-one, stack X.

The point of the Monty Hall/Reese-and-Kevin thing is that it was always about two things, not three. It was always about the host's big stack (two doors/cards, etc.) vs. the contestant's small stack (one door/card). The person running the show (Monty or Reese) starts out with a bigger stack! Reese has the bigger stack... the one with a 2 in 3 chance instead of a 1 in 3 chance. And by offering you the chance to switch, Reese is really giving you the chance to swap your SMALL stack for her BIG stack.

It was always about Reese's bigger stack! Her turning over a card made it look as though it came down to just the two cards--her remaining one and Kevin's original pick. But in reality, she was simply giving you the chance to switch for her bigger stack, and the fact that she revealed one of the cards in her stack doesn't change the fact that the winning card was always twice as likely to be in Reese's bigger stack.

But again, there's the important assumption in my explanation that says, "Reese always offers Kevin a chance to switch, no matter how many times they might repeat this." So Kevin doesn't have to wonder whether Reese is giving him a chance to switch only because she already knows he DID pick the winning card.

Finally, here's one more way to look at why switching ups the odds... the point is that if Kevin sticks to his original pick ("B") and doesn't switch, the only way he can win is if he got lucky the first time and "B" was the winning card--the 1 in 3 chance. But if he does switch, then he doubles his chances, because Reese always turns over a blank card. If the winning card is "A", Reese turns over "C" and gives Kevin a chance to pick "A". If the winning card is "C", Reese turns over "A" and gives Kevin a chance to pick "C". If Kevin always picks "B" and always switches, he has two chances to win instead of one. If Kevin always switches, then the only time he loses is when he happened to pick the winning card the first time (the 1 in 3 chance that "B" is the winning card). But if the winning card was either "A" or "C", then Kevin wins if he switches.

OK math heads and stats folks -- or anyone who has another way to think about this, please feel free to post your comments (and thanks so much to those who did in the original, with special mention to Brad Corbin, Matt Moran, and Woolstar). And if my explanation is just completely lame and makes no sense, feel free to say that too. I'm so out of my area of expertise here it's astonishing : ) And now, back to somewhat less geeky blogs...

Posted by Kathy on April 4, 2005 | Permalink

Listed below are links to weblogs that reference Reese, Kevin, and The Monty Hall Problem:

» Book Review: The Curious Incident of the Dog in the Night-Time from Mostly Muppet Dot Com
I was given Mark Haddon's The Curious Incident of the Dog in the Night-Time as an Easter gift from my mother-in-law. She had read the book and thought I might find it interesting. It came highly recommended and she let me know that she'd given it as... [Read More]
Tracked on Apr 8, 2005 10:26:49 AM

You might want to think about taking an elementary statistics class. You are way off. You are applying a past event to present circumstances and thinking that it somehow affects the current result. It is as if you think that because the coin came up heads 5 times in a row that it must come up heads again. The fact is that the card you picked the first time is irrelevant. It means nothing!!! No matter what card you pick, one will be tossed out and you will get to pick again. Therefore the prior pick means nothing. You are faced with a simple choice of two cards... pick one. 50-50. Yes, my odds of picking the right card the first time were 1 in 3. But once you remove a card, I am faced with two cards, pick one. Since I am picking again, the prior pick means nothing. Let's change it around so that it becomes clearer. I ask you to pick a number 1, 2, or 3. You pick 3. I tell you that it wasn't 2. Now pick a number 1 or 3. Explain to me how picking 1 can be a better pick than picking 3. Yes, it was a 1 in 3 chance when there were three numbers. But now there are only two numbers so either number becomes a 1 in 2 shot.

Posted by: Tom | Apr 4, 2005 12:29:10 PM

Think about this... I am sitting with Bert in the kitchen and show him three cards and ask him to pick one. After he picks one, I throw a wrong one away, put the two remaining cards in my pocket, and bring them to you in the living room. I ask you to pick a card. By your logic, one of the cards has a 2/3 chance of being right and the other has only a 1/3 chance of being right. If that is the case, then there is no way to ever make a rational choice about anything because we can never know all the past events that led to the current situation. Statistics goes out the window.

Posted by: Tom | Apr 4, 2005 12:42:29 PM

OK, after thinking it over I see that in my example, you aren't picking just one number when you switch, you are picking two numbers. Imagine this scenario. I show you 1 million cards and ask you to pick one. I then throw away every card except the card you chose and one other. You would have to be crazy not to switch since clearly your chance of having picked the right card the first time is minuscule. By switching, you are in effect getting 999,999 picks. OK, so how does this work if as in the second example I don't let you know what happened? I have Bert pick a card and then I throw away 999,998 wrong picks. You then come in. Picking the card that Bert didn't pick is the right way to go but to you it appears that each card has an even chance since you don't know anything about Bert's pick. Makes you wonder if there are situations in real life that duplicate this... that appear to give us even choices but are stacked one way or the other.

Posted by: Tom | Apr 4, 2005 1:07:25 PM

Ahem... Simple way to think of it. When Kevin picks the card, he has a 2/3 chance of being wrong. That means that when the blank card is revealed, it is 2/3 likely that it is "the other blank" card. So switching makes it 2/3 likely that you are correct after the switch.

Posted by: GBGames | Apr 4, 2005 2:24:44 PM

Hey Kathy, Thanks for posting the solution. You're right - my mind has been curious all week, and now I feel like I (sort of) understand. I ran the java you wrote, and it's pretty hard to deny the results... My mind doesn't want to believe, because it's a little weird, but I can see the logic. Thanks for having an interesting site...

Posted by: Mike | Apr 4, 2005 2:27:01 PM

GBGames: Well, if you want to be all simple and clear about it, sure, then I guess you could say it like that... ; ) (One more reason why you won't see *me* writing a stats book.) Tom: you got sucked in the way just about everyone on the planet is at first (including all the 10,000 PhDs who wrote in after the Ask Marilyn column and said she had it wrong). It took Bert, like, a week to convince me. Your "million cards" example is exactly the right way to help someone understand it. As for the other scenarios you mentioned, well, then you've changed the game so it's a completely different problem. Dan Steinberg has admonished that it's VERY important to be precise when describing the problem, because these puzzles are like games. And when you change it even a little, it could change the rules of the game. I left it a little fuzzy in the beginning because I was trying to get people to speculate not just on the solution, but on the very nature of it, so that THEY would come up with questions like, "does Reese's motivation matter?" "Would she offer this same option to everyone, regardless of whether they got lucky on their first pick?" But sure enough, he spotted something in my code that could indeed have *changed the rules* (changed the odds) if Kevin had been watching carefully. Whew! I'm just glad it's over. I'll think a little longer next time before I throw a can of worms out like that... but I hope some of you enjoyed it ; )

Posted by: Kathy Sierra | Apr 4, 2005 3:01:52 PM

One way I try to explain it (it doesn't always work) is this. Let's assume there are 1 million combinations of numbers in a Lotto (in lotto, you pick combinations of numbers and try to match the combination generated by a machine). Let's say I have picked my set of numbers. Bob looks in the newspaper and sees the winning Lotto numbers. He then comes over to my house and looks at my slip and sees what I guessed (I haven't read the newspaper). He then starts chanting out 999,998 combinations of numbers - every set of numbers EXCEPT my choice and one other combination. He then says: "There are 1 million combinations (doors) and I've opened 999,998 of them for you. Only two remain. The first is 1,3,10,15,18. The second is 2,4,11,12,25." Now the question is - do I have a 50% chance of winning Lotto now? If you answer yes - then it would seem very easy to win Lotto. All you need is someone who can read (both the newspaper and your slip) and chant a lot of numbers. If they did it enough (maybe a week or two) you are BOUND to win - since each time there is a 50% chance. Do you really think winning is that easy? If you answer no - then it is inconsistent to think that the probabilities in the Monty Hall problem are 50/50.

Posted by: Anon | Apr 4, 2005 5:09:30 PM

The Monty Hall problem is also clearly explained by Christopher, the autistic protagonist of 'The Curious Incident of the Dog in the Nighttime' by Mark Haddon. He explains why you should always change your choice of doors despite the fact that this seems counterintuitive, and he concludes: "And this shows that intuition can sometimes get things wrong. And intuition is what people use in life to make decisions. But logic can help you work out the right answer." What about that?

Posted by: Peter FJ | Apr 4, 2005 9:22:04 PM

Tom: when I read your replies, I initially agreed 100% with you. None of the explanations here were making sense to me so I googled "monty hall problem". Suppose there are three cards, and instead of being told "choose A, B, or C", you were told to choose between either JUST A, or {B or C}. Obviously if you chose a group of two cards instead of just one, you have a better chance. So you effectively choose {B or C} by choosing A first and then trading it. Anon: who is this "Bob" guy, and why don't you call the cops?

Posted by: Keith Handy | Apr 4, 2005 10:06:15 PM

P.S. I realize now that Tom *did* figure it out. Only I didn't understand it when I read his third comment. Now that I do understand it, Kathy's explanation seems crystal clear, but for some reason I just wasn't getting that either (initially). I am digging this blog! Keep it up.

Posted by: Keith Handy | Apr 4, 2005 10:12:29 PM

A or {B or C} Thank you Keith, epiphany ensued.

Posted by: Matt | Apr 5, 2005 1:30:42 PM

Interestingly enough, I checked out "The Curious Incident of the Dog in the Nighttime" from the library for my wife to read, but took a look at it myself. Couldn't put it down after that. I really like how the chapters are numbered: 1 2 3 5 7 11 13 ...

Posted by: woolstar | Apr 5, 2005 5:29:35 PM

I've heard a few different versions of this problem, but my favourite is the one that has three condemned men waiting in separate cells for their execution. None of the prisoners knows his cell number, when one of the guards wanders in and announces that the prisoner in cell 235 will be pardoned. If you are one of those prisoners then you have a 1/3 chance of being pardoned. Then the guards come in, open a cell and lead one of the other prisoners off and execute him. Your chance of being pardoned is now 1/2. Then the guy in the next cell breaks down the wall between your two cells and asks you if you'd like to swap cells. I guess he's read this blog and thinks by switching he'd improve his chance to 2/3, and you might think the same. Sadly you both can't be right. Your chance of being pardoned remains at 50/50.

Posted by: Mark | Apr 7, 2005 3:00:21 AM

Isn't this Bayes Theorem?

Posted by: ? | Apr 7, 2005 7:21:23 AM

To follow up on my 4 worder: Michael Wood, Portsmouth Business School, 23 February 2000. For example: James is being tried for a murder. The only evidence against him is the forensic evidence - blood and tissue samples taken from the scene of the crime match James's. The chance of such a match happening if James were in fact innocent and the match were just a coincidence is calculated (by the prosecution's expert witness) as 1 in 10,000 (0.0001). If you were on the jury would you find him guilty? What do you think is the probability of his guilt? (If you think the answer is 0.9999 you are wrong: this is known as the prosecutor's fallacy.) The defence's expert witness, however, points out that James was found through a systematic examination of everyone in the local community. There are 40,000 such people, and no evidence (except the forensic evidence) to associate any of them with the crime. Do you still think James is guilty?

This situation can be described by Bayes Theorem.

H1 (the first hypothesis) = James is guilty
H2 (the second hypothesis) = James is innocent (not guilty)

These are the two hypotheses between which we have to decide.

E (the evidence) = The forensic evidence of a match

We can assume that Prob(E given H1) = 1, since if he is guilty the forensic evidence is bound to show a match, and that Prob(E given H2) = 1/10,000 = 0.0001, since if he is innocent the probability of a match is 0.0001. These two probabilities are known as likelihoods. Notice that they are both "Evidence given hypothesis". Bayes theorem reverses these conditional probabilities and tells us the probability of the hypothesis given the evidence. To use Bayes theorem we also need to know P(H1) and P(H2). These are the prior probabilities of guilt and innocence. They are "prior" in the sense that these are the initial estimates before the evidence is considered. If we assume that the murder must have been committed by someone in the local community, then

P(guilty) = 1/40,000 = 0.000025
P(innocent) = 1 - 0.000025 = 0.999975

The first equation above now gives:

Prob(guilty given evidence) = (1 x 0.000025) / (1 x 0.000025 + 0.0001 x 0.999975) = 0.2 = 20%

which means that the probability of James's guilt is only 20%. This is probably a lot less than your initial estimate!

Posted by: ? | Apr 7, 2005 7:28:52 AM

see my post:

Posted by: smibbo | Jan 17, 2006 1:52:15 PM

I am going to try to teach this to a yr 10 class. How?

Posted by: sarah Chamings | Mar 2, 2006 5:29:31 AM

The comments to this entry are closed.
{"url":"http://headrush.typepad.com/creating_passionate_users/2005/04/reese_kevin_and.html","timestamp":"2014-04-16T10:17:39Z","content_type":null,"content_length":"51591","record_id":"<urn:uuid:24bc7a39-277b-463a-ba63-c30588d24032>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Topic: Re: Wisdom of teaching children Peano Postulates (was: when young students ask why do we need absolute value?)
Replies: 9   Last Post: Sep 20, 2001 1:58 PM

Re: Wisdom of teaching children Peano Postulates
Posted: Sep 17, 2001 1:43 PM

In article <3BA62B7D.D89AAA18@Jhuapl.edu>, James Hunter <James.Hunter@Jhuapl.edu> wrote:
>Herman Rubin wrote:
>> In article <9o3ugu$179$1@persian.noc.ucla.edu>,
>> Chan-Ho Suh <csuh@math.ucla.edu> wrote:
>> ..................
>> >> There can be some confusion when something is intuited;
>> >> formalizing it avoids the need for unlearning and relearning.
>> >What confusion have you noticed typically results about the ordinal
>> >structure of the integers??
>> Not much, except that most college students, including
>> those taking mathematics courses, cannot understand
>> arguments by induction.
> The reason for that is that "scientists" have never
> shown that "agrument by induction" is even approximately valid.

Mathematical induction is not an "argument by induction." It is a deductive argument.

Dave Seaman
dseaman@purdue.edu
Amnesty International calls for new trial for Mumia Abu-Jamal
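The distinction Seaman draws can be made precise: mathematical induction is a single deductive inference rule, not an inductive generalization from examples. Written out, the rule is:

```latex
\bigl(P(0) \;\land\; \forall n\,\bigl(P(n) \rightarrow P(n+1)\bigr)\bigr)
\;\Longrightarrow\; \forall n\, P(n)
```

Once the two premises are established, the conclusion follows with certainty, which is exactly what "deductive" means.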
{"url":"http://mathforum.org/kb/thread.jspa?threadID=74526&messageID=336288","timestamp":"2014-04-17T05:40:04Z","content_type":null,"content_length":"28585","record_id":"<urn:uuid:eb4fbb46-fa27-4c45-9367-aee8a8223514>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
What is a calculation engine based on table functions?

There is some confusion about the meaning of calculation engine because this term may have different meanings for different people in different contexts. Other than chained formulas in excel (see calculation engine with scalar parameters below), I have personally heard the term calculation engine mostly in the context of financial planning and fee or commission calculations. Actually, calculation engines can be used in any area where chained formulas with a number of input and output parameters are needed.

Overview: Primary calculation engine features

1. Ability to configure dependency trees with input and output parameters combined through (table) functions determining the dependencies. Dependent output parameters are updated automatically if an input parameter is changed.
2. A decent library of (table) functions as well as a structure for enabling the users to formulate their own formulas (user-defined functions).
3. Ability to configure multiple input and output (table) parameters related with nodes of a calculation network (function tree).
4. Managing and storing multiple calculation instances of a function tree. As an example, consider a function tree for computing the expected month-end price of a product which must be executed at the beginning of every month. This function tree then has a calculation instance for every month.

The features 1-2 define for us (by definition) a calculation engine in the narrow sense. All the features above (1-4) define a calculation engine in the broad sense.

Calculation engine based on table functions

A calculation engine based on table functions and parameters can be defined as follows:

• It must be able to fetch input data from a database, make some calculations on them, and write the results back to a database, with acceptable performance.
• It must be fully configurable by the user. Configuration can be done either by setting parameters, or by combining (i.e.
chaining) deterministic functions (i.e. same outputs for same inputs) with well-defined input and output parameters. Ideally, an average user with some analytical flair should be able to configure the calculation engine.
• It must have a well-documented library of table, matrix and vector functions covering primary calculation patterns frequently used in applications like financial planning, price and commission calculations.
• It must significantly reduce implementation time and complexity compared to classical database programming for the same tasks.
• It must work with any database with any data structure.

Aside from reader and writer functions that read/write data from/into a database, a calculation engine has no connection at all with a database during the calculations. All calculations are done with arrays in RAM that represent tables, matrices or vectors. That is, a calculation engine works totally independently of the data structure of a database. This makes it possible to integrate a real calculation engine with any database with any data structure. The only interface or link that needs to be adjusted to a database is the pair of reader and writer functions, and this can be done quite easily. In order to better understand this advantage, you can think of some stored procedures in a database, whose parameters and functionality depend on the data structure of its database, like table definitions and fields. You can't detach such stored procedures and install them on another database with a different data structure.

Calculation engine with scalar and table parameters

In its most general meaning, a calculation engine is used for computing chains of functions (or formulas) considering the dependencies among parameters. For example, consider the formulas below:

1) c = f1(a, b)
2) e = f2(c, d)

The parameter c depends on the other parameters a and b. That is, if the value of a or b is updated, c needs to be updated as well by executing the function f1. Similarly, e depends on c and d.
Because c depends on a and b, e depends indirectly on a and b as well. These relations can be visualized with a dependency tree as shown at the left. Excel is, among other things, a typical calculation engine; you can build dependency trees by combining parameters with formulas in excel sheets. Note that the parameters like a, b, c we are talking about so far are scalar parameters; they represent scalar values like 5 and 120. There are (or there will be, thanks to finaquant) also calculation engines with table functions that take tables as parameters. The main principle is just the same:

1) (T4, T5) = TableFunction1(T1, T2, T3)
2) (T7, T8) = TableFunction2(T2, T4)

The values of tables T7 and T8 depend on tables T2 and T4. Therefore, the tables T7 and T8 need to be updated by executing the function TableFunction2 if there is a change in one of the input parameters T2 or T4.

Network calculations with tables

Configuring network calculations is possible with a calculation engine based on table parameters and table functions. In the exemplary network topology below, there are four primary input and two ultimate output parameters. A table function with tables as input and output parameters sits at each node (or contract) of the network. The output parameters of a node can be input parameters for other nodes. Such calculation networks can be used in many areas including financial planning, data preparation for reporting, provider-dealer or cause-effect networks, commission calculations and so on.

Written by: Tunç Ali Kütükçüoglu
Copyright secured by Digiprove © 2012-2013 Tunc Ali Kütükcüoglu

One Response to What is a calculation engine based on table functions?

1. A very well written article defining "Calculation Engine" in the simplest idea. The reader must have some basic knowledge of how mathematical functions work, and some familiarity with terms like Stored Procedures, Tables, Database, and Data Structure is also required.
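A minimal sketch of the scalar dependency-tree recalculation described in the article above, using the parameter names a–e and the functions f1, f2 from the text. The engine design here is only an illustration of the idea, not finaquant's implementation:

```python
# Tiny calculation engine: output parameters are recomputed automatically
# whenever one of their (direct or indirect) input parameters changes.

class Engine:
    def __init__(self):
        self.values = {}   # parameter name -> current value
        self.rules = []    # (output name, function, input names), in dependency order

    def define(self, output, func, inputs):
        self.rules.append((output, func, inputs))

    def set(self, name, value):
        self.values[name] = value
        self._recalculate()

    def _recalculate(self):
        # Rules were registered in dependency order, so a single forward sweep
        # updates every output whose inputs are all available.
        for output, func, inputs in self.rules:
            if all(i in self.values for i in inputs):
                self.values[output] = func(*(self.values[i] for i in inputs))

engine = Engine()
engine.define("c", lambda a, b: a + b, ["a", "b"])   # c = f1(a, b)
engine.define("e", lambda c, d: c * d, ["c", "d"])   # e = f2(c, d)

engine.set("a", 2)
engine.set("b", 3)
engine.set("d", 10)
print(engine.values["c"], engine.values["e"])  # 5 50

engine.set("a", 4)   # updating an input propagates through the whole tree
print(engine.values["c"], engine.values["e"])  # 7 70
```

A real engine would track the dependency graph explicitly (and recompute only affected nodes), but the forward sweep is enough to show the automatic-update behaviour the article describes.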
{"url":"http://finaquant.com/help-pages-for-finaquant-products/what-is-a-calculation-engine-based-on-table-functions","timestamp":"2014-04-17T12:29:50Z","content_type":null,"content_length":"46079","record_id":"<urn:uuid:16f79b05-c504-48ba-a321-71313e371651>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00607-ip-10-147-4-33.ec2.internal.warc.gz"}
help with proof techniques

I don't know what proof techniques you discussed in class, but one can prove (a), for example. The thing is that every formula that one wants to prove has some form: A and B; A or B; A implies B; for every x, A(x); S is a subset of S', etc. There is a way to prove formulas of each of those forms. E.g., to prove "for every x, A(x)" you fix an arbitrary x and prove that A(x) is true for this x.

The claim you need to prove in (a) has the form "for every $f$, ${O}(f)$ is a subset of $\Gamma(f)$". If you know what it means to prove a formula of each form, you can gradually rewrite the claim and then see what you can do about it.

Start with: "Fix some arbitrary $f\in\mathcal{F}$. Need to prove: ${O}(f)\subseteq\Gamma(f)$."
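Continuing the rewriting one more step: the subset claim itself unfolds the same way, so the whole proof skeleton becomes a chain of "fix and show" steps:

```latex
\textbf{Claim: } \forall f\in\mathcal{F}:\ \mathcal{O}(f)\subseteq\Gamma(f).\\
\text{1. Fix an arbitrary } f\in\mathcal{F}.\\
\text{2. To prove } \mathcal{O}(f)\subseteq\Gamma(f),\ \text{fix an arbitrary } x\in\mathcal{O}(f).\\
\text{3. Show } x\in\Gamma(f).
```

Step 3 is where the actual definitions of ${O}$ and $\Gamma$ come in; everything before it is bookkeeping dictated by the shape of the formula.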
{"url":"http://mathhelpforum.com/discrete-math/125699-help-proof-techniques-print.html","timestamp":"2014-04-18T19:41:22Z","content_type":null,"content_length":"4863","record_id":"<urn:uuid:ba8fd9df-6710-469c-8dda-7533d4b43ba1>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
The Reference Frame

Henriette Elvang (MIT) spoke about the Black Saturns, a work with Pau Figueras and very recently also with Roberto Emparan. Because I teach during the duality seminars (today: angular dependence of the photoelectric effect in advanced QM), she was kind enough to repeat the talk for me, in a more interactive form, and it was even more interesting than I expected.

A black Saturn is a black hole surrounded by a black ring. The ring's angular momentum creates a force that repels it from the black hole in the middle. The solution will have negative modes signaling instabilities but it is a classical solution anyway. In their case, they construct solutions to five-dimensional pure gravity i.e. Ricci-flat geometries with an unusual structure of horizons.
Evidently, the limit where the length goes to infinity corresponds to an infinitely large black hole - i.e. flat space or Rindler space in different coordinates. A similar limit - a plane divided to black and white - gives you the Penrose pp-wave in the AdS5 x S5 case. Analogous constructions for five-dimensional gravity were found more recently. They use them to construct solutions of five-dimensional pure gravity. In five spacetime dimensions, the massive little group is SO(4) whose rank is two which is why you can have two independent angular momenta. Let's look at stationary solutions with one angular momentum only, e.g. J_{12}. The metric will have three Killing vectors - two angles generated by the J_{12} and J_{34} rotations and time translations (that would be true even if you had both angular momenta). That's why the metric won't depend on five coordinates but only two. It turns out that the Einstein equations for this Ansatz are non-linear but integrable differential equations in two variables. You can use a trick that I recently heard from Zack Guralnik - if I remember well - how to transform non-linear partial differential equations in two variables to linear differential equations for some generating functions of a higher number of variables. The procedure is somewhat analogous to replacing a system of non-linear differential equations for a classical system by the linear Schrödinger equation for its quantization because the generating function is analogous to a wavefunction: such a procedure can surprisingly simplify the calculations in some cases. These integrable systems lead to similar equations as the planar limit of the N=4 theory although this is most likely a mathematical coincidence only. The relevant five-dimensional version of the LLM-Weyl procedure involves three colors on a line. For a suitable configuration of colors, you may obtain a black Saturn solution. It is interesting to draw the phase diagram of these solutions. 
Create a two-dimensional graph and believe me that they can construct a two-parameter family of solutions. The x-axis is the total angular momentum "J" while the y-axis is the total area "A" i.e. the entropy. All points of this graph below a certain line can be identified as black Saturns with different parameters describing the ring and its distance from the black hole. Black rings themselves are special examples of black Saturns for which the size of the spherical black hole at the center vanishes. In the phase diagram, the family of the pure rings looks like a union of two semi-infinite lines connected with a cusp. The point at the cusp maximizes the entropy and minimizes the angular momentum - but there are two lines along which you can go if you want to lower the entropy or increase the angular momentum. The angle at the cusp is zero. There exists another, very similar pair of lines in the phase diagram (whose detailed position is however different) corresponding to black Saturns at equilibrium. Note that the horizon of a black Saturn is disconnected: it is made out of a three-sphere (hole) and a Cartesian product of a circle and a two-sphere (ring). These two components can therefore have different chemical potentials for energy (also known as the temperature - in the context of black objects, it is proportional to the surface gravity at the horizon) and different chemical potentials for the angular momentum (also known as the angular velocity). However, some black Saturns happen to have the same angular velocity for both components and the same surface gravity for both components. They are described by the one-dimensional pair of semi-infinite lines connected at a cusp. We discussed how to get black bi-Saturn, among other possible solutions. 
You can also imagine a "black bi-Saturn" as a black hole near the Southern pole connected with another black hole near the Northern pole by a negative-tension (repulsion-inducing) cosmic string (that creates an excess angle, unlike the usual deficit angles for positive-tension strings) - and both of them are surrounded by a black ring wrapped near the equator of the Earth. Such a configuration has the same symmetry as the black Saturn and it is conceivable that you can write an exact metric for it, too. Because the Ricci-flatness for the black Saturn Ansatz defines an integrable system, you may also believe that various other physical questions about this system - such as the perturbations, geodesics, classical worldsheets and worldvolumes of various branes in this geometry etc. - could be exactly solvable, too.

We took the speaker for a dinner. It was the first time when Henriette visited Henrietta's table in the Charles Hotel. ;-)
{"url":"http://motls.blogspot.com/2007/02/black-saturn.html","timestamp":"2014-04-17T18:23:30Z","content_type":null,"content_length":"194395","record_id":"<urn:uuid:95dbdee2-254f-4a71-b397-5a8c26632063>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Error: NDSolve::ndnum: Encountered non-numerical value for a derivative at t == 0. Help please? Code included inside (self.Mathematica)

submitted by SeventhMagus

Hey, I was having some trouble with this code for my vibrations class. I'm supposed to plot an exact solution and a numerical solution on the same plot. I followed the format in the book, and can't find out how to get this code to work. The error it gives is in the title.

    xanal[t_] = Exp[-zeta*wn*t]*
        ((x0 - f0 (wn^2 - w^2)/((wn^2 - w^2)^2 + (2*zeta*wn*w)^2))*Cos[wd*t] +
         (zeta*(wn/wd)*(x0 - f0*(wn^2 - w^2)/((wn^2 - w^2)^2 + (2*zeta*wn*w)^2)) -
          2*zeta*wn*w^2*f0/(wd*((wn^2 - w^2)^2 + (2*zeta*wn*w)^2)) + v0/wd)*Sin[wd*t]) +
        f0/((wn^2 - w^2)^2 + (2*zeta*wn*w)^2)*((wn^2 - w^2)*Cos[w*t] + 2*zeta*wn*w*Sin[w*t]);

    Plot[{Evaluate[x[t] /. numerical], xanal[t]}, {t, -1, 40},
      PlotRange -> {-1, 3}, PlotStyle -> {RGBColor[1, 0, 0], RGBColor[0, 1, 0]}]

Jacobusson:

Just some random details/considerations here. It is safer to write xanal[t_] := Exp... instead of xanal[t_] = Exp... Note that you do not have to write a piece of code on a single line and hit enter in a lot of different places and your code will still work. I wonder if the Evaluate function in the first argument of Plot is really necessary. But I guess it can't hurt either (and possibly makes things faster too) :). Ok no more details.
See ya and good luck!
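The `numerical` solution referenced in the Plot never actually appears in the post, and the usual cause of NDSolve::ndnum is that some parameter is still symbolic when NDSolve runs. A sketch of what the missing piece might look like — assuming the damped driven oscillator whose steady-state response matches the last term of `xanal`; all parameter values below are placeholders, not from the original post:

```
(* Every symbol must have a numeric value before NDSolve is called;
   a leftover symbolic parameter is the classic cause of NDSolve::ndnum. *)
zeta = 0.1; wn = 2; w = 3; f0 = 1; x0 = 0; v0 = 0;
wd = wn*Sqrt[1 - zeta^2];  (* damped natural frequency used by xanal *)

numerical = NDSolve[
   {x''[t] + 2*zeta*wn*x'[t] + wn^2*x[t] == f0*Cos[w*t],
    x[0] == x0, x'[0] == v0},
   x, {t, -1, 40}];
```

With all six parameters numeric, both `x[t] /. numerical` and `xanal[t]` evaluate to numbers and the Plot should work.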
{"url":"http://www.reddit.com/r/Mathematica/comments/11nsx0/error_ndsolvendnum_encountered_nonnumerical_value/?sort=old","timestamp":"2014-04-19T11:41:36Z","content_type":null,"content_length":"67518","record_id":"<urn:uuid:05d6af75-efa3-4d0d-9d99-f11761242927>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00222-ip-10-147-4-33.ec2.internal.warc.gz"}
A Lyapunov Function and Global Stability for a Class of Predator-Prey Models Discrete Dynamics in Nature and Society Volume 2012 (2012), Article ID 218785, 8 pages Research Article A Lyapunov Function and Global Stability for a Class of Predator-Prey Models ^1Faculty of Science, Shaanxi University of Science and Technology, Xi'an 710021, China ^2College of Electrical and Information Engineering, Shaanxi University of Science and Technology, Xi'an 710021, China Received 29 November 2012; Accepted 14 December 2012 Academic Editor: Junli Liu Copyright © 2012 Xiaoqin Wang and Huihai Ma. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited. We construct a new Lyapunov function for a class of predation models. Global stability of the positive equilibrium states of these systems can be established when the Lyapunov function is used. 1. Introduction The dynamics of predator-prey systems are often described by differential equations, which represent time continuously. A common framework for such a model is [1–3] where and are prey and predator densities, respectively, is the prey growth rate, is the functional response, for example, the prey consumption rate by an average single predator, and the per capita growth rate of predators (also known as the “predator numerical response"), which obviously increases with the prey consumption rate. The most widely accepted assumption for the numerical response with predator density restricting is as follows: where is a per capita predator death rate, the conversion efficiency of food into offspring, the density dependent rate [2]. And prototype of the prey growth rate is the logistic growth where is the carrying capacity of the prey. When where is the efficiency of predator capture of prey, model (1.1) is called Ivlev-type predation model, due originally to Ivlev [4]. 
And Ivlev-type functional response is classified to the prey-dependent; that is, is independent of predator [2]. Both ecologists and mathematicians are interested in the Ivlev-type predator-prey model and much progress has been seen in the study of the model [5–13]. Of them, Xiao [8] gave global analysis of the following model: But, in paper [8], the author gave complex process to prove the global asymptotical stability of the positive equilibrium. In this paper, we will establish a new Lyapunov function to prove the global stability of the positive equilibrium of model (1.5). Our paper is organized as follows. In the next section, we discuss the existence, uniqueness of the positive equilibrium, and establish a new Lyapunov function to model (1.5). In Section 3, we will give some examples to show the robustness of our Lyapunov function. 2. Main Results First of all, it is easy to verify that model (1.5) has two trivial equilibria (belonging to the boundary of , that is, at which one or more of populations has zero density or is extinct), namely, and . For the positive equilibrium, set which yields We have the following Lemma regarding the existence of the positive equilibrium. Lemma 2.1 (see [8]). Suppose . Model (1.5) has a unique positive equilibrium if either of the following inequalities holds:(i); (ii) and . Lemma 2.2. Let , then is a region of attraction for all solutions of model (1.5) initiating in the interior of the positive quadrant . Proof. Let be any solution of model (1.5) with positive initial conditions. Note that , by a standard comparison argument, we have Then, Similarly, since , we have On the other hand, for all , we have and . Hence, is a region of attraction. As a consequence, we will focus on the stability of the positive equilibrium only in the region . In the following, we devote to the global stability of the positive equilibrium for model (1.5) by constructing a new Lyapunov function which is motivated by the work of Hsu [3]. 
Theorem 2.3. If , the positive equilibrium of model (1.5) is globally asymptotically stable in the region . Proof. For model (1.5), we construct a Lyapunov function of the form Note that is non-negative, if and only if . Furthermore, the time derivative of along the solutions of (1.5) is Substituting the expressions of and defined in (1.5) into (2.7), we can obtain Define then So, Then we can get If holds, which is equivalent to Set , we obtain and And set , we can get and In view of , it follows that and in the region . Then is always true. It follows that , that is, . Consequently, the function satisfies the asymptotic stability theorem [14]. Hence, is globally asymptotically stable. This completes the proof. 3. Applications In this paper, we construct a new Lyapunov function for proving the global asymptotical stability of model (1.5). The new Lyapunov function is useful not only to model (1.5), but also to other In this section, we will give some examples to show the robustness of the Lyapunov function (2.6). The parameters of the following models are positive and have the same ecological meanings with those of in model (1.5). Example 3.1. Considering the following Ivlev predator-prey model incorporating prey refuges (see [9]): where is a refuge protecting of the prey. We can choose a Lyapunov functional as follows: The proof is similar to that of the Section 2. Example 3.2. Considering the following predator-prey model with Rosenzweig functional response (see [10]): where is the victim’s competition constant. We can choose a Lyapunov functional as follows: We omit the proof here. Example 3.3. Considering the following model (1.1) with Holling-type functional response (see [11]): where is known as a Holling type-II function, as a Holling type-III function and as a Holling type-IV function. We choose a Lyapunov function: For more details, we refer to [12]. Example 3.4. 
Considering the following diffusive Ivlev-type predator-prey model (see [13]): where the nonnegative constants and are the diffusion coefficients of and , respectively. , the usual Laplacian operator in two-dimensional space, is used to describe the Brownian random motion. Model (3.7) is to be analyzed under the following non-zero initial conditions and zero-flux boundary conditions: In the above, is the outward unit normal vector of the boundary . In order to give the proof of the global stability, we construct a Lyapunov function: where Then, differentiating with respect to time along the solutions of model (3.7), we can obtain Using Green's first identity in the plane, and considering the zero-flux boundary conditions (3.9), one can show that The remaining arguments are rather similar as Theorem 2.3. 1. D. Alonso, F. Bartumeus, and J. Catalan, “Mutual interference between predators can give rise to Turing spatial patterns,” Ecology, vol. 83, no. 1, pp. 28–34, 2002. View at Publisher · View at Google Scholar 2. P. Abrams and L. Ginzburg, “The nature of predation: prey dependent, ratio dependent or neither?” Trends in Ecology and Evolution, vol. 15, no. 8, pp. 337–341, 2000. View at Publisher · View at Google Scholar 3. S. B. Hsu, “On global stability of a predator-prey system,” Mathematical Biosciences, vol. 39, no. 1-2, pp. 1–10, 1978. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 4. V. S. Ivlev, Experimental Ecology of the Feeding of Fishes, Yale University Press, 1961. 5. R. E. Kooij and A. Zegeling, “A predator-prey model with Ivlev's functional response,” Journal of Mathematical Analysis and Applications, vol. 198, no. 2, pp. 473–489, 1996. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 6. J. Sugie, “Two-parameter bifurcation in a predator-prey system of Ivlev type,” Journal of Mathematical Analysis and Applications, vol. 217, no. 2, pp. 349–371, 1998. 
View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 7. J. Feng and S. Chen, “Global asymptotic behavior for the competing predators of the Ivlev types,” Mathematica Applicata, vol. 13, no. 4, pp. 85–88, 2000. View at Zentralblatt MATH · View at 8. H. B. Xiao, “Global analysis of Ivlev's type predator-prey dynamic systems,” Applied Mathematics and Mechanics, vol. 28, no. 4, pp. 461–470, 2007. View at Publisher · View at Google Scholar · View at Zentralblatt MATH · View at MathSciNet 9. J. B. Collings, “Bifurcation and stability analysis of a temperature-dependent mitepredator-prey interaction model incorporating a prey refuge,” Bulletin of Mathematical Biology, vol. 57, no. 1, pp. 63–76, 1995. 10. M. L. Rosenzweig, “Paradox of enrichment: destabilization of exploitation ecosystems in ecological time,” Science, vol. 171, no. 3969, pp. 385–387, 1971. View at Publisher · View at Google 11. C. S. Holling, “The functional response of predators to prey density and its role in mimicry and population regulation,” Memoirs of the Entomological Society of Canada, vol. 97, S45, pp. 5–60, 1965. View at Publisher · View at Google Scholar 12. S. Ruan and D. Xiao, “Global analysis in a predator-prey system with nonmonotonic functional response,” SIAM Journal on Applied Mathematics, vol. 61, no. 4, pp. 1445–1472, 2000/01. View at Publisher · View at Google Scholar · View at MathSciNet 13. W. Wang, L. Zhang, H. Wang, and Z. Li, “Pattern formation of a predator-prey system with Ivlev-type functional response,” Ecological Modelling, vol. 221, no. 2, pp. 131–140, 2010. View at Publisher · View at Google Scholar 14. D. R. Merkin, F. F. Afagh, and A. L. Smirnov, Introduction to the Theory of Stability, vol. 24 of Texts in Applied Mathematics, Springer, New York, NY, USA, 1997. View at MathSciNet
{"url":"http://www.hindawi.com/journals/ddns/2012/218785/","timestamp":"2014-04-19T22:44:07Z","content_type":null,"content_length":"288099","record_id":"<urn:uuid:5794dbc3-8552-4fb6-b6b4-57a6569b71c3>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00283-ip-10-147-4-33.ec2.internal.warc.gz"}
gnuplot demo script: fillcrvs.dem
autogenerated by webify.pl on Tue Mar 6 11:35:30 2007
gnuplot version gnuplot 4.3 patchlevel CVS-14Feb2007

# $Id: fillcrvs.dem,v 1.5 2004/09/28 06:06:10 sfeam Exp $
### Demo for 'with filledcurves'

set title
set key outside
set title "plot with filledcurve [options]"
plot [-10:10] [-5:3] \
     1.5+sin(x)/x with filledcurve x2, \
     sin(x)/x with filledcurve, \
     1+sin(x)/x with lines, \
     -1+sin(x)/x with filledcurve y1=-2, \
     -2.5+sin(x)/x with filledcurve xy=-5,-4., \
     -4.3+sin(x)/x with filledcurve x1, \
     (x>3.5 ? x/3-3 : 1/0) with filledcurve y2

set key on
set title "Intersection of two parabolas"
plot x*x with filledcurves, 50-x*x with filledcurves, x*x with line lt 1

set grid front
set title "Filled sinus and cosinus curves"
plot 2+sin(x)**2 with filledcurve x1, cos(x)**2 with filledcurve x1

set title "The red bat: abs(x) with filledcurve xy=2,5"
plot abs(x) with filledcurve xy=2,5

set title "Some sqrt stripes on filled graph background"
plot [0:10] [-8:6] \
     -8 with filledcurve x2 lt 15, \
     sqrt(x) with filledcurves y1=-0.5, \
     sqrt(10-x)-4.5 with filledcurves y1=-5.5

set title "Let's smile with parametric filled curves"
set size square
set key off
unset border
unset xtics
unset ytics
set grid
set arrow 1 from -0.1,0.26 to 0.18,-0.17 front size 0.1,40 lt 5 lw 4
set label 1 "gnuplot" at 0,1.2 center front
set label 2 "gnuplot" at 0.02,-0.6 center front
set parametric
set xrange [-1:1]
set yrange [-1:1.6]
plot [t=-pi:pi] \
     sin(t),cos(t) with filledcurve xy=0,0 lt 15, \
     sin(t)/8-0.5,cos(t)/8+0.4 with filledcurve lt 3, \
     sin(t)/8+0.5,cos(t)/8+0.4 with filledcurve lt 3, \
     t/5,abs(t/5)-0.8 with filledcurve xy=0.1,-0.5 lt 1, \
     t/3,1.52-abs(t/pi) with filledcurve xy=0,1.8 \
lt -1
{"url":"http://gnuplot.sourceforge.net/demo_4.2/fillcrvs.html","timestamp":"2014-04-21T07:12:21Z","content_type":null,"content_length":"3535","record_id":"<urn:uuid:3c0cd829-1da8-488d-b787-bcce5f8646da>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Palmetto, GA Calculus Tutor Find a Palmetto, GA Calculus Tutor ...I have had much success in developing an individualized teaching method for ADD/ADHD students and helping them achieve their goals. I have taught in the special RESA program in Georgia for students that have psychiatric/severe special needs including aspergers, autism, bi-polar for more than 8 years. The grade levels include grades 4 thru 12. 47 Subjects: including calculus, reading, English, biology ...I listen to every question they have, and after a short time, students begin to understand that tutoring time is a "safe place" where they can ask questions they may feel ashamed to ask in class. (What students do not seem to understand is that if THEY have a question, someone else in that class ... 11 Subjects: including calculus, physics, geometry, algebra 1 ...She then went on to graduate from the Georgia Tech with a degree in Applied Mathematics and a minor in Economics. She went through college at an accelerated pace of 3 years instead of 4, while maintaining her HOPE scholarship. She even studied abroad in Ireland during those three years! 22 Subjects: including calculus, reading, writing, physics I am a certified math teacher with 13 years experience. I have taught all levels of math, from pre-algebra to AP calculus BC, including IB HL and SL Math, and AP Physics - Mechanics. I also have extensive experience in test preparation, including PSAT, SAT, GRE and GHSGT. 10 Subjects: including calculus, GRE, algebra 1, algebra 2 ...I'm patient with students at all levels. I push students to not only get questions correct, but to also build their confidence in mathematics. I don't feel that there is a tutoring style that fits every student. 
10 Subjects: including calculus, geometry, algebra 1, algebra 2
rational number

A rational number is a number determined by the ratio of some integer p to some nonzero natural number q. The set of rational numbers is denoted Q, and represents the set of all possible integer-to-natural-number ratios p/q. In mathematical expressions, unknown or unspecified rational numbers are represented by lowercase, italicized letters from the late middle or end of the alphabet, especially r, s, and t, and occasionally u through z.

Rational numbers are primarily of interest to theoreticians. Theoretical mathematics has potentially far-reaching applications in communications and computer science, especially in data encryption and security.

If r and t are rational numbers such that r < t, then there exists a rational number s such that r < s < t. This is true no matter how small the difference between r and t, as long as the two are not equal. In this sense, the set Q is "dense."

Nevertheless, Q is a denumerable set. Denumerability refers to the fact that, even though a set might contain an infinite number of elements, and even though those elements might be "densely packed," the elements can be defined by a list that assigns them each a unique number in a sequence corresponding to the set of natural numbers N = {1, 2, 3, ...}.

For the set of natural numbers N and the set of integers Z, neither of which are "dense," denumeration lists are straightforward. For Q, it is less obvious how such a list might be constructed. An example appears below. The matrix includes all possible numbers of the form p/q, where p is an integer and q is a nonzero natural number. Every possible rational number is represented in the array. Following the pink line, think of 0 as the "first stop," 1/1 as the "second stop," -1/1 as the "third stop," 1/2 as the "fourth stop," and so on. This defines a sequential (although redundant) list of the rational numbers. There is a one-to-one correspondence between the elements of the array and the set of natural numbers N.
To demonstrate a true one-to-one correspondence between Q and N, a modification must be added to the algorithm shown in the illustration. Some of the elements in the matrix are repetitions of previous numerical values. For example, 2/4 = 3/6 = 4/8 = 5/10, and so on. These redundancies can be eliminated by imposing the constraint, "If a number represents a value previously encountered, skip over it." In this manner, it can be rigorously proven that the set Q has exactly the same number of elements as the set N. Some people find this hard to believe, but the logic is sound.

In contrast to the natural numbers, integers, and rational numbers, the sets of irrational numbers, real numbers, imaginary numbers, and complex numbers are non-denumerable. They have cardinality greater than that of the set N. This leads to the conclusion that some "infinities" are larger than others!

This was last updated in September 2005
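The diagonal walk with the skip rule can be sketched as a short program. The traversal order below (grouping entries by |p| + q) is one convenient choice and need not match the pink line in the original illustration:

```python
from fractions import Fraction
from itertools import count, islice

def rationals():
    """Enumerate every rational exactly once: walk the p/q array
    diagonal by diagonal, skipping any value already encountered."""
    seen = set()
    for n in count(1):
        for q in range(1, n + 1):
            for p in (n - q, -(n - q)):
                r = Fraction(p, q)
                if r not in seen:
                    seen.add(r)
                    yield r

first = list(islice(rationals(), 20))   # starts 0, 1, -1, 2, -2, 1/2, -1/2, ...
```

Every rational p/q shows up on the diagonal |p| + q, so each one is paired with a unique position in the list: exactly the one-to-one correspondence with N described above.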
cmbright – Computer Modern Bright fonts

A family of sans serif fonts for TeX and LaTeX, based on Donald Knuth's CM fonts. It comprises OT1, T1 and TS1 encoded text fonts of various shapes as well as all the fonts necessary for mathematical typesetting, including AMS symbols. This collection provides all the necessary files for using the fonts with LaTeX. A commercial-quality Adobe Type 1 version of these fonts is available from . Free versions are available, in the font bundle (the T1 and TS1 encoded part of the set), and in the package (the OT1 encoded part, and the maths fonts).

Sources: /fonts/cmbright
Documentation: Readme
Version: 8.1
License: The LaTeX Project Public License
Maintainer: Walter Schmidt
Contained in: TeX Live as cmbright; MiKTeX as cmbright
Topics: fonts themselves; fonts for use in mathematics; fonts distributed as METAFONT source; Sans-serif font

Download the contents of this package in one zip archive (441.2k).
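A minimal usage sketch; loading is via the usual \usepackage, and T1 encoding is shown since the collection provides T1-encoded text fonts:

```latex
\documentclass{article}
\usepackage[T1]{fontenc}
\usepackage{cmbright}  % sans serif text plus matching math and AMS symbols
\begin{document}
Sans serif body text with matching mathematics:
$\int_0^\infty e^{-x^2}\,dx = \tfrac{\sqrt{\pi}}{2}$.
\end{document}
```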
Differentiating Between DC and AC Motors By definition, an electric motor is a device that converts electrical energy into mechanical energy. An electrical signal is applied to the input of the motor, and the output of the motor produces a defined amount of torque related to the characteristics of the motor. If you think about the attraction and repulsion of the north and south poles of a bar magnet, you're on your way to understanding what has to happen inside the motor yoke. To achieve rotation, there has to be some interaction between magnetic flux produced by electromagnetism within the motor. DC motors and AC motors accomplish this task in different ways. DC machines can be classified as self-excited, separately excited, permanent magnet (PM), and brushless. Self-excited machines can be further classified as shunt, series, and compound. Compound machines can be further classified as cumulative and differential. Cumulative and differential machines can be further classified as long shunt and short shunt. As you can see, there are a variety of electrical configurations for a DC machine. For the purpose of this article, we will stick with the series- and shunt-wound DC motor. Please note that the interconnection of the field (stationary winding) and armature (rotating winding) determine the machine's operating characteristics. The shunt DC motor has the field winding in parallel with the armature (Fig. 1). In a parallel circuit, the magnitude of voltage drop across each parallel element is the same, while the magnitude of current through each parallel branch is a function of the impedance of the element. Please note: In a purely resistive circuit, the impedance will equal the resistance as there is no reactive component present. Shunt motors are also called constant speed motors, as they provide relatively stable speed and torque characteristics under load. The series DC motor has the field winding in series with the armature (Fig. 2). 
In a series circuit, the magnitude of current is the same through all series elements, while the magnitude of voltage drop across each series element is a function of the impedance of the element. Series motors can develop very high starting torque and provide excellent torque characteristics under load. The drawback is speed regulation. As such, never operate a series motor without mechanical load present. Common terms you will hear discussed with DC motors include commutator, brushes, counter electromotive force (EMF), torque, speed regulation, and speed-torque characteristic curves. When used in a motor application, the commutator is a mechanical device that properly directs current flow to the armature. By contrast, when a commutator is used in a generator application, it acts like a rectifier to convert the generated AC voltage of the machine into DC voltage. Brushes, which are usually made of carbon, are used to transition from a stationary element to a rotating element. EMF is “the difference in potential that exists between two dissimilar electrodes immersed in the same electrolyte or otherwise connected by ionic conductors.” The terms EMF and voltage are often used interchangeably. Remember Faraday's law of magnetic induction where a magnetic field can generate an electric current? As it turns out in the case of a DC motor, when the armature rotates through the magnetic field, an induced voltage opposite in polarity to the applied voltage is created — called counter EMF. Torque is a rotational force that — in simple terms — is the algebraic product of force multiplied by distance. Speed regulation is a measure of how the speed of a DC motor decreases as more mechanical load is applied. It is a function of the armature resistance. Speed-torque characteristic curves are graphs that show the relationship between speed, as a percent of rated speed, and load torque as a percent of full rating. 
These are very helpful because they illustrate how applied mechanical load affects the speed and torque of series, shunt, or compound DC machines. AC machines can be classified as induction, wound rotor, and synchronous. Induction motors can be further classified as 3-phase and single-phase. A 3-phase induction motor can be further classified as delta wound or wye wound. Single-phase motors can be further classified as split phase (Fig. 3), capacitor start, capacitor start/capacitor run, shaded pole, repulsion start, and universal. As you can see, there are several varieties of AC motors. For the purpose of this article, we will stick with an overview of the induction motor. The induction motor is commonly referred to as a “squirrel cage” induction motor. This is due to the fact that the rotor is constructed in a manner resembling a squirrel's cage. An induction motor has a rotor (rotating part) and a stator (stationary part) within the motor housing. When an AC signal is applied to the stator winding, a rotating magnetic field is produced. This rotating magnetic field induces a signal in the rotor, which also generates a rotating magnetic field. The interaction of these rotating magnetic fields causes rotation. This is an important principle to keep in mind because in the case of a DC motor, the magnetic field is not time varying due to the applied signal. Common terms you will hear discussed with AC motors include frequency, synchronous speed, and slip. An AC waveform is time varying or oscillatory. This means its amplitude starts at zero, rises to some maximum value, returns to zero, falls to some minimum value, and then returns to zero. The number of times this occurs per unit of time is referred to as frequency. In the United States, this frequency is 60 Hertz or 60 cycles per second. Referring to the speed of the rotating magnetic field, synchronous speed is a function of the applied frequency and the number of stator poles in the machine. 
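For a concrete number, the standard relationship (not spelled out in the article) is that synchronous speed in rpm equals 120 times the supply frequency divided by the number of poles:

```python
def synchronous_speed_rpm(frequency_hz, poles):
    """Ns = 120 * f / P, the speed of the rotating stator field."""
    return 120.0 * frequency_hz / poles

ns = synchronous_speed_rpm(60, 4)    # 4-pole motor on a 60 Hz supply: 1800 rpm
```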
Slip is a measure of the difference between the synchronous speed of the stator field rotation and the rotor field rotation. Please note that the rotor field rotation is always slower than the stator field rotation. Figure 4 includes power equations for AC motors. It also includes a graphical aid called the power triangle, which uses trigonometric identities to help with the analysis of power factor. Finally, some practical equations are included to calculate torque, horsepower, and efficiency.

Vidal is president of Joseph J. Vidal & Sons, Inc. in Throop, Pa.
Variability of predicted portfolio volatility

February 11, 2013 By Pat

A prediction of a portfolio's volatility is an estimate — how variable is that estimate?

The universe is 453 large cap US stocks. The variance matrices are estimated with the daily returns in 2012. Variance estimation was done with Ledoit-Wolf shrinkage (shrinking towards equal correlation).

Two sets of random portfolios were created. In both cases the portfolios are long-only. The first set has constraints:

• exactly 20 names in the portfolio and maximum weight of 8%

The second set has constraints:

• exactly 200 names and maximum weight of 2%

1000 portfolios were generated in each set. Portfolios were formed as of the last trading day of 2012. Figure 1 shows the distribution of predicted volatilities for the two sets of random portfolios.

Figure 1: Predicted volatility for two sets of 1000 random portfolios.

The typical predicted volatility of the 200-name portfolios is slightly smaller than that for the 20-name portfolios. The 20-name portfolios have a much wider range of predictions.

Prediction variability

The question we want to answer is: how variable is our estimate of the volatility for a specific portfolio? Figure 1 shows variability across portfolios; we want instead to look at one particular portfolio at a time.

We create alternative estimates of the variance matrix using the statistical bootstrap. The idea is to get estimates using data we might have seen rather than only looking at the estimate we get from the data we happened to have seen. In what follows, we ignore almost all of the random portfolios and look at just the first 3 in each set. Figure 2 shows the bootstrap distributions of the first three random portfolios with 20 names, and Figure 3 shows the same thing for the first 3 200-name portfolios. 100 bootstrap variance matrices were created.

Figure 2: Bootstrap distributions of three 20-name portfolios, actual estimate in blue.
Figure 3: Bootstrap distributions of three 200-name portfolios, actual estimate in blue.

Figures 4 and 5 show the portfolio volatilities from each bootstrapped variance.

Figure 4: Predicted volatility of three 20-name portfolios from 100 bootstrapped variances.

Figure 5: Predicted volatility of three 200-name portfolios from 100 bootstrapped variances.

In the 200-name portfolios each bootstrapped variance has about the same effect on each portfolio volatility estimate. But in the 20-name portfolios a given bootstrapped variance matrix might produce a lower than average volatility estimate for one portfolio but a higher than average estimate for another portfolio.

What's wrong?

The variability we are seeing is only from the noisiness of the data going into the variance estimate. Implicitly it assumes that the process in the future will be the same as during the estimation period. In finance that is a bad assumption. The bootstrap distribution only provides a lower bound on the true variability.

Why are the original portfolio volatility estimates substantially bigger than the median of the bootstrapped values (Figures 2 and 3)? Predictions have estimation error.

Danielsson and Macrae suggest that stating the variability of risk estimates should be standard. I agree.

"A temporary refuge where somebody else can stand"
from "Night Train" by Bruce Cockburn

Appendix R

Here are R commands that were used.

estimate variance matrix

There is a function in the BurStFin package for Ledoit-Wolf estimation:

lw12 <- var.shrink.eqcor(ret12)

Here ret12 is the matrix of the 2012 daily returns (days in rows with most recent last, and stocks in columns).
generate random portfolios

The random portfolios are generated with the Portfolio Probe software:

rp20w08 <- random.portfolio(1000, prices=tail(close12, 1),
    gross=1e6, long.only=TRUE, max.weight=.08,
    port.size=c(20,20))

rp200w02 <- random.portfolio(1000, prices=tail(close12, 1),
    gross=1e6, long.only=TRUE, max.weight=.02,
    port.size=c(200,200))

Each random portfolio object is basically just a list of the contents of portfolios.

predicted volatility of random portfolios

A small function was written to make it easy to get portfolio volatilities of random portfolios:

pp.rpvol <- function(rp, varmat, annualize=252)
{
    # placed in public domain 2013 by Burns Statistics
    # testing status: untested
    rpvar <- unlist(randport.eval(rp, keep="var.values",
        additional.args=list(variance=varmat)))
    sqrt(annualize * rpvar) * 100
}

randport.eval is a Portfolio Probe function used here to get the predicted portfolio variance for each random portfolio. This is used like:

vol20orig <- pp.rpvol(rp20w08, lw12)
vol200orig <- pp.rpvol(rp200w02, lw12)

one bootstrap variance matrix

Another little function was written to create a bootstrapped variance matrix:

pp.bootvar <- function(RETMAT, FUN=var.shrink.eqcor, ...)
{
    # placed in public domain 2013 by Burns Statistics
    # testing status: untested
    FUN <- match.fun(FUN)
    nobs <- nrow(RETMAT)
    bootobs <- sort(sample(1:nobs, nobs, replace=TRUE))
    FUN(RETMAT[bootobs,], ...)
}

This uses a few R tricks. It might have struck you that the argument names are in all capitals — which goes against the grain. This is to minimize the possibility of a collision between the arguments in this function and the arguments to the estimation function that are passed in via the three-dots argument.

Using match.fun is an easy way to get flexibility. If FUN is a function, then it just returns that. If FUN is a string, then it goes off and looks for a function by that name.

Usually the order of observations doesn't matter. However, the default for var.shrink.eqcor is to weight more recent observations more heavily.
Hence sorting the observations is a reasonable thing to do here.

bootstrapping portfolio volatilities

To see what is in Figures 4 and 5, we want to have results for a single bootstrapped variance matrix for each portfolio. We also want to minimize the number of times we bootstrap a variance matrix because estimating the variance matrix is the most expensive part. Hence we set up objects to hold the portfolio volatilities and then do the bootstrapping:

vol20boot <- vol200boot <- array(NA, c(100,3))
for(i in 1:100) {
    thislw <- pp.bootvar(ret12)
    vol20boot[i,] <- pp.rpvol(head(rp20w08, 3), thislw)
    vol200boot[i,] <- pp.rpvol(head(rp200w02, 3), thislw)
    cat("done with", i, date(), "\n")
}

You might think that a simplification would be to write:

rp20w08[1:3]

instead of:

head(rp20w08, 3)

You would be wrong. The subscripting strips attributes. But there is a head method for random portfolios that preserves the class attribute (and more). The pp.rpvol function wouldn't work if we just subscripted.

Figure 2 was created with:

boxplot(vol20boot, col='gold', xlab="Portfolio",
    ylab="Predicted volatility (%)")
segments(x0=1:3 - .5, y0=vol20orig[1:3], x1=1:3 + .5,
    col="steelblue", lwd=3, lty=2)

The segments command uses the fact that the boxplots are centered at integers.

pairs plots

Figure 4 was made with:

pairs(vol20boot, col='steelblue', label=paste("Port", 1:3))
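The resampling idea in pp.bootvar is not specific to R. A stdlib-only Python sketch of the same bootstrap, with a plain sample variance standing in for Ledoit-Wolf shrinkage and simulated returns standing in for ret12:

```python
import random

def boot_sample(rows):
    """Resample the rows with replacement: data we *might* have seen."""
    n = len(rows)
    return [rows[random.randrange(n)] for _ in range(n)]

def sample_variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

random.seed(0)
returns = [random.gauss(0, 0.01) for _ in range(252)]  # one year of fake daily returns
boot_vars = [sample_variance(boot_sample(returns)) for _ in range(100)]
point_estimate = sample_variance(returns)
```

The spread of boot_vars around point_estimate plays the same role as the boxplots in Figures 2 and 3: a lower bound on the variability of the variance estimate.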
Tip #2 for reading business trends: filter out the noise - Big Data, Plainly Spoken (aka Numbers Rule Your World)

Another month, another unemployment report, another set of peanut-gallery reports from the business press. The August numbers apparently delighted quite a few: some headlines are "U.S. Stocks Advance After Employment Report Exceeds Estimates" (Bloomberg BusinessWeek); "Fewer Jobs Lost in August; Private Hiring Beats Forecast" (CNBC); "Private Hiring Surprises with 67,000 New Jobs" (Reuters).

The key reported results are: employment drop (-54,000) lower than expected, government jobs decreased due to end of Census, private sector jobs increased (+67,000), upward revisions for both June (+46,000) and July (+77,000). In Tip #1, I already discussed the idiocy of including once-every-10-year Census jobs in any of these numbers. In this post, we shall hone our ability to see through the "noise".

First notice that the revisions from the last couple of months are in the same order of magnitude (tens of thousands) as the reported changes for the current month. This is a very strong sign that the reported changes for the current month are just noise. When the revision of the August numbers comes out in September, what would the -54,000 become? It could be comfortably above zero indicating an overall gain in employment, or it could be quite a bit more negative than reported today.

Now imagine that the revisions were of a different magnitude. Let's say instead of 46K and 77K, they were 5,000 and 8,000. Then, we could believe that the August numbers would be directionally correct even after future revisions, and we would have more confidence in those numbers.

What we have done here is to use historical fluctuations to get a mental picture of how accurate these estimates are, and then use that margin of error to judge how good the current estimates are. This is a very important skill to have when looking at numbers, especially when looking for trends.
In Chapter 1, I pointed out how important it is to know the variability around average values. Here, the reports only gave us average values. But by looking back at historical revisions, we can get a good sense of how variable the numbers are, and get the information denied us.

A more rigorous way to do this is to look up the technical note for the margin of error. The width of the confidence interval is given as 100,000 at 90% confidence. What this means is that when they report -54,000, what they actually mean is that any number between +46,000 and -154,000 is consistent with the data that was observed. So in fact, the statisticians have no idea whether employment grew or shrank in August. This is the overall employment number; for individual breakouts (like private sector jobs or mining jobs), the margin of error will be even greater, and they have even less of a handle on the trend.

Technically, this happens because the government does not have data on every business in the U.S. All of these estimates are made based on survey samples. For example, the so-called Establishment Survey is based on 140,000 businesses and government agencies (I think, minus the nonresponders). That's a very small proportion of millions of businesses in the U.S., and therefore some error in estimation is inevitable.

This may be the shocker: if you take the margin of error of 100,000, and notice that almost every number in the employment report is smaller than that, then essentially you can conclude that the entire report is pure noise, and you'd be right. (We say none of the changes are statistically significant.) The original design for the sample survey was not intended to read changes of this scale. Now that you know this, and as you look around, you will find it's very very noisy out there.
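The interval arithmetic in the passage above is easy to script; a small sketch, treating the reported 100,000 as the half-width of the 90% interval, as the post does:

```python
def interval(point_estimate, half_width):
    """Range of values consistent with the data at the stated confidence."""
    return (point_estimate - half_width, point_estimate + half_width)

low, high = interval(-54_000, 100_000)    # (-154000, 46000)

# The interval straddles zero, so the sign of the change is undetermined:
significant = not (low <= 0 <= high)
```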
You made one very common error in your discussion: the precision of a survey-based estimate has almost _nothing_ to do with what proportion of the population is being sampled (as long as you are not sampling almost the entire population). I am sure you know the soup-tasting analogy. So the wide margin of error of the estimates is not because 140,000 is a small proportion of all the businesses, but because the business-to-business variability of the change in the number of employees is large.

In discussing the confidence interval you state: "What this means is that when they report -54,000, what they actually mean is that any number between +46,000 and -154,000 is consistent with the data that was observed. So in fact, the statisticians have no idea whether employment grew or shrank in August." I think you are stretching it a bit to say that statisticians have "no idea" whether employment grew or shrank. They do have an idea; their best guess is that employment shrank by 54,000 jobs. Yes, their sample was noisy, and even if the true value were 0, they would get sample estimates like this one more than 10 percent of the time, but I think it is a statistically valid claim to state "our data suggest it is more likely than not that employment shrank."

Aniko: You read a lot more into that sentence than I intended - and I realize that what I wrote could be misleading, so thanks for bringing this up. If everyone in the population were surveyed, then we would have complete information and there could be no sampling error. If we can only collect partial information, then the larger the sample, the smaller the error. But after a certain point, increasing the sample doesn't reduce the error enough to matter, so we like to say proportions don't matter. Hope I clarified that.

Also I want to clarify your statement that the large error is not due to sample size but due to variability.
The sample size is designed to filter out a certain level of noise (conversely, to read a certain level of signal); if the survey had been designed to read changes in employment of 10,000, then the statistician would have called for a lot more than 140K businesses to be surveyed.

Aaron: A margin of error comes at a certain level of confidence, in this case 90%. Any statement like "our data suggest it is more likely than not that employment shrank" is valid ONLY if we accept a lower confidence level. 90% is already a lower threshold than typically used, so one must be careful when issuing such statements. I have a fundamentally strong objection to this line of thinking because it is equivalent to saying that when the sampling error is large, we should just ignore the variability and use the average value (point estimate) as the most likely value. It is precisely when the sampling error is large that we must pay attention to it. Otherwise, we might declare all the research on confidence levels and margins of error useless!

In Australia they include a trend line, but most commentators ignore it. I'm going to use the accuracy of unemployment figures as an example in a basic stats course this semester.
2.4 Type I

In type I critical phenomena, the same phase space picture as in Section 2.1 applies, but the critical solution is now stationary or time-periodic instead of self-similar or scale-periodic. It also has a finite mass and can be thought of as a metastable star. (Type I and II were so named after first- and second-order phase transitions in statistical mechanics, in which the order parameter is discontinuous and continuous, respectively.) Universality in this context implies that the black hole mass near the threshold is independent of the initial data, namely a certain fraction of the mass of the stationary critical solution. The dimensionful quantity that scales is not the black hole mass, but the lifetime of the intermediate state during which the solution approximates the critical solution.

Type I critical phenomena occur when a mass scale in the field equations becomes dynamically relevant. (This scale does not necessarily set the mass of the critical solution absolutely: there could be a family of critical solutions selected by the initial conditions.) Conversely, as the type II power law is scale-invariant, type II phenomena occur in situations where either the field equations do not contain a scale, or this scale is dynamically irrelevant. Many systems, such as the massive scalar field, show both type I and type II critical phenomena, in different regions of the space of initial data [34].
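This lifetime scaling is logarithmic rather than a power law; writing p for the initial-data parameter, p∗ for its critical value, and λ for the growth rate of the critical solution's one unstable mode, it takes the standard form

```latex
t_p \simeq -\frac{1}{\lambda}\,\ln\left|p - p_*\right| + \mathrm{const.}
```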
Question of the Week: How can computers calculate exponential math without overflow errors?

This week's question of the week comes from Kit-Ho, who poses:

Studying some RSA encrypt/decrypt methods, I found this article: An Example of the RSA Algorithm. It requires computing 855^2753 mod 3233 to decrypt the message.

~~For those of you that may not know what this "Overflow" that Kit mentioned, he's talking about a term Stack Overflow. Here's the official "Wiki" definition:~~

~~In software, a stack overflow occurs when too much memory is used on the call stack. The call stack contains a limited amount of memory, often determined at the start of the program. The size of the call stack depends on many factors, including the programming language, machine architecture, multi-threading, and amount of available memory. When a program attempts to use more space than is available on the call stack (that is, when it attempts to access memory beyond the call stack's bounds, which is essentially a buffer overflow), the stack is said to overflow, typically resulting in a program crash. This class of software bug is usually caused by one of two types of programming errors.~~

As pointed out by Dennis (thanks!) I completely got this wrong. Stack overflow isn't the issue, but rather integer overflow:

In computer programming, an integer overflow occurs when an arithmetic operation attempts to create a numeric value that is too large to be represented within the available storage space. For instance, adding 1 to the largest value that can be represented constitutes an integer overflow. The most common result in these cases is for the least significant representable bits of the result to be stored (the result is said to wrap). On some processors like GPUs and DSPs, the result saturates; that is, once the maximum value is reached, attempts to make it larger simply return the maximum result.
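The wrap behavior is easy to demonstrate; a short Python sketch using ctypes to emulate a fixed-width unsigned integer:

```python
import ctypes

# A 16-bit unsigned value holds 0..65535; adding 1 to the maximum wraps to 0.
max16 = 0xFFFF
wrapped = ctypes.c_uint16(max16 + 1).value

# The same wrap, modeled arithmetically by keeping only the low 16 bits:
masked = (max16 + 1) & 0xFFFF
```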
For example, a mechanical odometer, has a rollover (or reset) after a certain amount of miles: This is the same as computer integer overflow, where the size of the numbers needed are greater than the object type can hold. Kit-Ho’s example RSA link exceedes the C#’s max value of 18,446,744,073,709,551,615 of the long type. Dietrich Epp came up with a great answer as to how computers can calculate these large numerical calculations: Because the integer modulus operation is a ring homomorphism (Wikipedia), (X * Y) mod N = (X mod N) * (Y mod N) mod N You can verify this yourself with a little bit of simple algebra. Computers use this trick to calculate exponentials in modular rings without having to compute a large number of digits. -- compute X^I mod N function expmod(X, I, N) if I is zero return 1 elif I is odd return (expmod(X, I-1, N) * X) mod N else Y <- expmod(X, I/2, N) return (Y*Y) mod N end if end function You can use this to compute (855^2753) mod 3233 with only 16-bit registers, if you like. However, the values of X and N in RSA are much larger, too large to fit in a register. A modulus is typically 1024-4096 bits long! So you can have a computer do the multiplication the “long” way, the same way we do multiplication by hand. Only instead of using digits 0-9, the computer will use “words” 0-2^16-1 or something like that. (Using only 16 bits means we can multiply two 16 bit numbers and get the full 32 bit result without resorting to assembly language. In assembly language, it is usually very easy to get the full 64 bit result, or for a 64-bit computer, the full 128-bit result.) 
    -- Multiply two bigints by each other
    function mul(uint16 X[N], uint16 Y[N]):
        Z <- new array uint16[N*2]
        for I in 1..N
            -- C is the "carry"
            C <- 0
            -- Add Y[1..N] * X[I] to Z
            for J in 1..N
                T <- X[I] * Y[J] + C + Z[I + J - 1]
                Z[I + J - 1] <- T & 0xffff
                C <- T >> 16
            -- Keep adding the "carry"
            for J in (I+N)..(N*2)
                T <- C + Z[J]
                Z[J] <- T & 0xffff
                C <- T >> 16
        return Z

    -- footnote: I wrote this off the top of my head
    -- so, who knows what kind of errors it might have

This will multiply X by Y in an amount of time roughly equal to the number of words in X multiplied by the number of words in Y. This is called O(N^2) time. If you look at the algorithm above and pick it apart, it’s the same “long multiplication” that they teach in school. You don’t have times tables memorized out to 10 digits, but you can still multiply 1,926,348 x 8,192,004 if you sit down and work it out.

There are actually some faster algorithms around for multiplying (Wikipedia), such as Strassen’s fast Fourier method, and some simpler methods which do extra addition and subtraction but less multiplication and so end up faster overall. Numerical libraries like GMP are capable of selecting different algorithms based on how big the numbers are: the Fourier transform is only the fastest for the largest numbers, and smaller numbers use simpler algorithms.

5 Comments

For the beginning of this post, isn’t ‘stack overflow’ something else than the ‘overflow’ mentioned here? As far as I know, a stack overflow is when there have been too many method calls (an infinite call loop, for example), while a regular overflow in this context is when you have a number which is too big for a 32/64-bit integer and it overflows, either by ‘looping around’ or failing.

I don’t think this has anything to do with a stack overflow. It is simply that the numerical range of 64-bit IEEE 754 floating-point numbers isn’t large enough to contain 855^2753.
The possible range is approximately −10^308 to +10^308.

@sblair is correct: this has nothing to do with stack overflow. The concern here is integer overflow, since the maximum representable value of a 64-bit register is 2^64 − 1 = 18,446,744,073,709,551,615. In contrast, the result of 855^2753 would need 23,814 bits to be represented. The calculations are handled by software (sometimes native data types) that can deal with numbers of arbitrary length. These are commonly referred to as “BigDecimal” or “BigInteger” (though I wouldn’t be surprised if some large but bounded systems used these names as well).

Whoops. I see this was answered in greater depth in the second half of the OP.
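Coming back to Dietrich's word-at-a-time mul pseudocode: it can be checked in Python, where native big integers give a reference answer. This sketch (the helper names are mine) uses base-2^16 limbs, least significant word first:

```python
def to_limbs(x, n):
    """Split x into n base-2^16 "words", least significant first."""
    return [(x >> (16 * i)) & 0xFFFF for i in range(n)]

def from_limbs(limbs):
    """Reassemble a big integer from its 16-bit words."""
    return sum(w << (16 * i) for i, w in enumerate(limbs))

def mul(x, y):
    """Schoolbook multiplication of two equal-length limb arrays: N*N word
    products, each 16x16 -> 32 bits, with explicit carry propagation."""
    n = len(x)
    z = [0] * (2 * n)
    for i in range(n):
        carry = 0
        for j in range(n):
            t = x[i] * y[j] + carry + z[i + j]   # fits in 32 bits
            z[i + j] = t & 0xFFFF
            carry = t >> 16
        z[i + n] = carry     # z[i + n] is still zero here, so no further ripple
    return z

a, b = 1926348, 8192004      # the example numbers from the answer
prod = from_limbs(mul(to_limbs(a, 4), to_limbs(b, 4)))
print(prod == a * b)         # True: agrees with Python's native multiply
```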
Open-Loop Wide-Bandwidth Phase Modulation Techniques

Journal of Electrical and Computer Engineering, Volume 2011 (2011), Article ID 507381, 12 pages. Review Article.

^1 Electrical Engineering Department, University of California, Los Angeles, CA 90095-1594, USA
^2 Broadcom Corporation, Irvine, CA 92617-3038, USA

Received 31 May 2011; Accepted 16 August 2011. Academic Editor: Kenichi Okada.

Copyright © 2011 Nitin Nidhi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract. The ever-increasing growth in the bandwidth of wireless communication channels requires the transmitter to be wide-bandwidth and power-efficient. Polar and outphasing transmitter topologies are two promising candidates for such applications in the future. Both of these architectures require a wide-bandwidth phase modulator. Open-loop phase modulation presents a viable solution for achieving wide-bandwidth operation. An overview of prior art and recent approaches for phase modulation is presented in this paper. Phase quantization noise cancellation was recently introduced to lower the out-of-band noise in a digital phase modulator; a detailed analysis of the impact of timing and quantization of the cancellation signal is presented. Noise generated by the transmitter at receive-band frequencies poses another challenge for wide-bandwidth transmitter design. The addition of a noise transfer function notch in a digital phase modulator, to reduce the noise in the receive band during phase modulation, is also described in this paper.

1. Introduction

The rapid growth of new communication standards like LTE and WiMAX has led to high data rates, wide signal bandwidths, and high peak-to-average power ratios.
Additionally, transmission of 1-2GHz bandwidth signals in the unlicensed frequency band at 60GHz is also gaining momentum. Since total power consumption is largely determined by the efficiency of the power amplifier used, high-efficiency architectures, like polar [1–5] and out-phasing [6, 7], are preferable for future designs. Both of these architectures require a phase modulator as one of their key building blocks, and the bandwidth of these modulators increases with the bandwidth of the transmitted signal. The concept of a software-defined radio, which can support multiple programmable carrier frequencies and provides maximum flexibility in data rates, is being looked at as a desirable and viable enhancement for future radios. Such a transmitter may also require a wide-bandwidth phase modulator.

Besides being wideband, a digital implementation of the phase modulator should be favored, as it comes with several advantages: it
(1) enables implementation of calibration algorithms to correct for transmitter nonlinearity, PVT variations, and, in a polar architecture, AM/PM path delay mismatch,
(2) allows shaping of the quantization noise transfer function to reduce in-band noise,
(3) allows dynamic element matching to make linearity insensitive to component mismatches,
(4) permits reconfigurability to meet the requirements of more than one communication standard, and
(5) eases porting of the design from one process to the next and hence readily benefits from technology scaling.

On the other hand, the digital implementation poses new challenges in terms of quantization noise and spectral images. Furthermore, PVT variation, nonlinearity, and power of the front-end digital-to-phase block, although minimized by digital techniques, still require optimization.

Traditionally, phase modulators have been implemented using a phase-locked loop (PLL) for narrow-band modulation [2–5, 8–11].
A second input port inside a PLL has been employed to add wideband modulation capability to such modulators [7, 12–17]. But these techniques have not been successful for applications beyond GSM/EDGE [2–5, 13, 17] and WCDMA [7, 12]; LTE and WiMAX both require wider-bandwidth modulators. Recently, an open-loop phase switching technique, which dynamically selects a signal from a bank of signals at the carrier frequency but with different phase offsets, was proposed for digital wide-bandwidth phase modulation. Quadrature signals at the output of a frequency divider [18], output signals of a ring oscillator [19], and phase interpolation [20] were used to form the bank of reference signals. An overview of both open-loop and closed-loop techniques for phase modulation is presented in this paper, contrasting their performance limitations.

Finite resolution in the digital phase modulator results in phase quantization noise (PQN). A PQN cancellation technique was proposed in [20] to reduce the out-of-band emission from the phase modulator. The signal processing details of this technique, along with methods to further improve its effectiveness, are described in this paper.

The quantization noise added by the modulator must not violate the transmit spectrum mask and the receive band noise requirement of a frequency division duplexing (FDD) system. FDD is commonly employed in cellular systems like GSM/EDGE, WCDMA, and LTE. Recent advances towards a software-defined radio (SDR) promote coexistence of multiple radios on an integrated circuit. This trend will further increase the requirements on emission at the receive band frequencies. The noise component due to PLL phase noise can be reduced to meet the receive band noise requirements for WCDMA without requiring additional filtering. However, a high-Q band-pass filter, like a SAW filter, will be required to filter out the quantization noise component.
Such a filter is costly and must be avoided to achieve a fully integrated transmitter. A more elegant solution is to reduce the noise at the receive band frequency by design. This not only saves the cost of an additional costly board component but also provides flexibility in changing the position and order of the notch. A detailed description of this approach is presented in this paper.

The paper is organized as follows. Section 2 presents an overview of prior art on phase modulation techniques and describes the concept of digital phase modulation. Section 3 describes the concept and analysis of a phase quantization noise cancellation technique, and Section 4 describes a solution to meet the receive band noise specification. A conclusion is given in Section 5.

2. Wide-Bandwidth Digital Phase Modulation

In one of the simplest implementations of phase or frequency modulation, the modulation data is applied to the control voltage of a voltage-controlled oscillator (VCO). Although this technique is open-loop and wideband, it suffers from frequency drift, VCO transfer function nonlinearity and variation over PVT, high close-in phase noise, and loss of transmission during periods of frequency lock. In order to solve these problems, broadly two categories of phase modulators have emerged: closed-loop and open-loop phase modulation.

2.1. Closed-Loop Phase Modulation

If continuous feedback control is applied to a VCO, to arrive at a conventional PLL, close-in phase noise is improved and the carrier frequency is tightly controlled. In this case, modulation data has been successfully applied using a multimodulus feedback divider (MMD) in a fractional-N PLL [8, 9]. In this technique, however, the high-frequency content of the modulation data is filtered out by the loop filter of the PLL, making it unsuitable for wide-bandwidth phase modulation.
Bandwidth extension methods like phase noise cancellation [10, 22–26], multiphase fractional-N PLLs [27], type I fractional-N PLLs with a sharp loop filter [28], and digital pre-emphasis [11] have been applied, but even the best state-of-the-art designs have not exceeded 3MHz. Additionally, this technique can create low-frequency fractional spurs at the PLL output.

In order to obtain a further increase in bandwidth, so-called two-point modulation has often been used [12–17]. Since the injection of modulation data at any node of the PLL loop is either high-pass filtered or low-pass filtered, it is injected at two nodes simultaneously such that the sum of the two transfer functions becomes wideband. The most commonly used injection nodes are the MMD and the VCO control voltage, to achieve wide-bandwidth FM. A common problem encountered in this approach is the loss in SNR due to gain and phase mismatch between the two paths. Since K[VCO] must be known for gain matching, an on-chip K[VCO] estimation method becomes important. Nonetheless, this technique has been successfully used to generate transmit signals meeting GSM/EDGE and WCDMA specifications.

In [29], gain matching was obtained by applying a square-wave input to the two modulation input nodes. The calibration loop then minimizes the error voltage developed across the loop filter zero-setting resistor. The approach taken in [21] to measure digitally controlled oscillator (DCO) gain and PLL loop gain was to measure the phase error in response to a DCO control change (Figure 1). The PLL loop, reduced to type I operation by holding the digital integral path, compensates for the frequency error by adjusting the phase error at the phase detector input. The calibration loop then senses the phase error using a bang-bang PFD and modulates the fractional part of the feedback divider until the phase error is reduced to a very small value. Using the new frequency divider value so obtained, the DCO gain was calculated.
The development of an all-digital PLL (ADPLL) [13, 30–33] has opened up new possibilities for accomplishing phase modulation, as the signals within the PLL loop have become more predictable. Besides this, injection of digital data can be readily achieved in the digital domain at most of the internal nodes of the PLL. The design presented in [13] took advantage of this feature of a digital PLL and injected the frequency modulation data into the control word of the DCO and the carrier frequency control word of the PLL (Figure 2). The design presented in [12] introduced a VCO transfer function linearization technique for two-point modulation for WCDMA. It utilized a local negative feedback loop around the VCO to obtain a fairly constant K[VCO]. This local loop employed an analog technique to measure the VCO frequency, which forms the feedback signal. Although gain and phase calibration techniques have helped to increase the robustness and bandwidth of phase modulators, to the best of the authors’ knowledge, their application has not been demonstrated for even wider modulation standards such as WLAN, WiMAX, and LTE.

2.2. Open-Loop Phase Modulation

The bandwidth limitation and the gain and phase mismatch issues associated with two-point modulation can be avoided by using open-loop modulation techniques, where modulation is performed outside the frequency synthesizer loop. Essentially, this isolates the carrier frequency generation block from the data modulation block, yielding a modulator which does not involve a low-pass filter (the loop filter) in its path. Hence, these modulators can achieve very wide bandwidth. In a typical open-loop phase modulator, a phase generator block produces multiple phases at the carrier frequency. It is followed by a phase multiplexer whose output is controlled by the phase modulation data. For a given sequence of desired digital phase values, the modulator quantizes each phase sample to one of the M available phases.
The output of the modulator controls a digital-to-phase converter whose inputs are the M phase signals; note that the carrier frequency here is expressed in rad/s. The digital-to-phase converter can be a phase multiplexer with M phase inputs or a digital phase interpolator. The resultant synthesized signal can be written as a sum of pulse-shaped, phase-quantized carrier segments, where the pulse-shaping function is nominally a rectangular pulse of unit amplitude lasting one update period, and the quantization error is the error in quantizing each phase sample. The synthesized signal well approximates the desired phase-modulated carrier signal for a large number of phases M and a high switching frequency. The ability to dynamically switch between multiple phases can be easily extended to synthesize a frequency at a small offset from the VCO frequency by applying a ramp signal as input to the modulator, with the appropriate slope.

This approach to phase modulation suffers from three critical issues: phase quantization noise (PQN), nonlinearity of the digital-to-phase converter, and spectral images. Phase quantization noise arises due to the quantization process involved in the generation of output phases. Simple truncation results in white quantization noise and a flat power spectral density (PSD) stretching over a band around the carrier frequency, resulting in poor in-band signal-to-noise ratio (SNR) and error vector magnitude (EVM). Using a ΔΣ modulator to quantize the phase imparts a high-pass shape [32] to the quantization error, thereby suppressing the close-in PQN and improving EVM. However, the PQN at large offsets from the carrier frequency is amplified, resulting in elevated out-of-band noise that can violate the spectral mask requirements for the transmitter. The PQN can be reduced by increasing the phase switching speed and/or by increasing the resolution of the digital phase interpolator. The discrete-time nature of the phase modulator causes spectral images to appear at integer multiples of the switching frequency.
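As a rough numerical illustration of this truncation-versus-noise-shaping trade-off (my own sketch, not code from the paper), the snippet below quantizes a phase ramp to M = 16 available phases, once by plain rounding and once with a first-order error-feedback quantizer, and tracks the accumulated phase error:

```python
import math

M = 16                          # number of available phases
step = 2 * math.pi / M          # resolution of the digital-to-phase converter

def quantize_truncate(phases):
    """Round each sample to the nearest available phase: white error."""
    return [round(p / step) * step for p in phases]

def quantize_shaped(phases):
    """First-order error feedback: the previous sample's quantization error
    is added back to the input, pushing the error energy to high frequencies."""
    out, err = [], 0.0
    for p in phases:
        v = p + err
        q = round(v / step) * step
        err = v - q              # residue carried into the next sample
        out.append(q)
    return out

def worst_drift(phases, quantized):
    """Largest accumulated (low-frequency) phase error over any prefix."""
    acc = worst = 0.0
    for p, q in zip(phases, quantized):
        acc += p - q
        worst = max(worst, abs(acc))
    return worst

phases = [0.01 * n for n in range(1000)]   # a slow phase ramp
print(worst_drift(phases, quantize_shaped(phases)) <= step / 2 + 1e-9)  # True
print(worst_drift(phases, quantize_truncate(phases)))  # compare: no such bound
```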
The zeroth-order hold operation results in only modest filtering of these images (the sinc roll-off of the hold), and hence a high oversampling factor or interpolation is required to reduce them. At most, 5-10dB of further suppression results from the LC tanks typically employed in a tuned power amplifier. Due to the digital implementation of the modulator, this technique easily lends itself to quantization noise shaping, digital predistortion to compensate for errors due to PVT variation, dynamic element matching, and other similar digital techniques.

The design presented in [18] used four quadrature phases at the output of a frequency divider and digitally switched between them to realize phase modulation (Figure 3). Since only four-level quantization was used, the noise floor was quite high. With quantization noise shaping, the in-band noise was lowered at the expense of large out-of-band noise (−25dBr observed from the measured spectrum). While transmitting at 403.2MHz, the measured FSK errors at a 6Mb/s data rate were 4.1% and 11.6% for 2-FSK and GFSK modulations, respectively. In [7], a hybrid of many of the techniques mentioned earlier was designed for GSM and WCDMA. The outphasing angle was generated using an 8-bit phase interpolator, while phase modulation was generated by two-point modulation. However, it also used phase-to-digital converters in a negative feedback loop to correct for the nonlinearity of the phase interpolators, thereby reducing the available bandwidth.

Phase modulation techniques at 60GHz are also being researched. Reference [34] implemented a novel method to obtain phase modulation at 60GHz by digitally controlling the effective dielectric constant of a differential transmission line. This was achieved by digitally switching in and out a 4-bit bank of floating M6 and M7 strips placed underneath the transmission line, which leads to a digitally variable phase of S21. However, its dynamic performance under data transmission was not presented.

3. Quantization Noise Cancellation

PQN can be contrasted with the quantization noise added by baseband DACs in an I–Q architecture. In the latter, out-of-band quantization noise and spectral images of the baseband DACs can be removed by baseband low-pass filters, although the total power consumption also gets increased. Furthermore, the noise generated by the low-pass filters adds on top of the contributions from the mixer and amplifiers in the transmit chain. A comparable filter in a digital phase modulator would have to be RF band-pass, requiring high-Q passive components or a SAW filter. A PQN cancellation technique can, therefore, significantly improve the performance of the phase modulator.

Through a second VCO control port, a quantization noise cancellation signal was added to the modulator output in [20], as shown in Figure 4. The required cancellation signal is obtained by first subtracting the input signal from the quantized signal. Thus, the quantized phase data is applied through the digital-to-phase converter, while the PQN cancellation signal is applied through a cancellation path through the VCO. In the cancellation path, the frequency of the VCO can be controlled in a straightforward manner through an analog control signal, or through a digital word in the case of a digitally controlled oscillator. Its phase, on the other hand, is the outcome of an integration of the resultant frequency. Since the cancellation signal is a phase quantity, while the VCO input port controls its frequency, the required cancellation phase is differentiated to obtain an equivalent VCO frequency deviation. It must also be attenuated by the VCO control voltage-to-frequency gain, K[VCO]. In [20], a state-of-the-art implementation of PQN cancellation was presented on a 2.4GHz wide-bandwidth open-loop GFSK transmitter IC. The phase cancellation path was implemented by adding a second VCO port to control a 4-bit capacitor bank, with DEM logic incorporated in the selection process.
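The arithmetic of this cancellation path can be sketched numerically (my own illustration, with K[VCO] and the update rate borrowed from the example case in Section 3.5): the cancellation phase is first-differenced into a per-period frequency deviation, divided by K[VCO] into a control voltage, and the VCO's continuous integration then reproduces the requested phase at each sample boundary:

```python
import math

FS = 450e6      # phase update rate, Hz (illustrative)
KVCO = 100e6    # VCO gain, Hz/V (illustrative)

def cancellation_voltage(phase_err):
    """First-difference the desired cancellation phase (radians) into a
    per-period frequency deviation (Hz), then scale by 1/KVCO to volts."""
    volts, prev = [], 0.0
    for e in phase_err:
        dfreq = (e - prev) * FS / (2 * math.pi)  # rad per period -> Hz
        volts.append(dfreq / KVCO)
        prev = e
    return volts

# The VCO integrates frequency continuously; at every sample boundary the
# accumulated phase should land exactly on the requested cancellation phase.
err = [0.05, -0.12, 0.30, 0.07, -0.20]           # made-up quantization error, rad
acc, recon = 0.0, []
for v in cancellation_voltage(err):
    acc += v * KVCO * 2 * math.pi / FS           # integrate over one period
    recon.append(acc)
print(all(abs(a - b) < 1e-9 for a, b in zip(recon, err)))  # True
```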
Figure 5 shows the transmitted eye diagram at 20Mb/s GFSK modulation. Figure 6(a) shows a 9dB improvement in the measured output spectrum after enabling the phase noise cancellation technique. The corresponding GFSK transmitted modulation error is 3.2% rms at 20Mb/s. The wide-bandwidth capability of the transmitter is demonstrated in Figure 6(b), which shows overlaid spectra of the transmitter output at 20, 40, 80, and 120Mb/s rates.

It should be noted that the normal functioning of the PLL is not impacted by the cancellation path. The cancellation signal at the VCO input port goes through the same transfer function as seen by the VCO phase noise, resulting in its high-pass filtering. The crucial difference is that, as opposed to VCO phase noise, the high-frequency content of the cancellation signal is a desired signal which lowers the quantization noise added at the phase interpolator input. It may be perceived that, since the low-frequency content of the cancellation signal is filtered out, this will degrade the efficacy of the technique. But since the quantization noise is already low at these frequencies, the result is a negligible loss in effectiveness.

One of the subtle features of the cancellation path is the implication of the discrete-time differentiation applied to obtain the required VCO frequency deviation. The integration of frequency inherent in a VCO is, on the other hand, continuous. Consider the case of a rectangular pulse shape for the input phase and the cancellation signal, where both of them are applied at the same update frequency. Since the cancellation signal is updated once every time period, while the VCO frequency is continuously being integrated, perfect cancellation is obtained only at the end of each time period, which is not very effective. Figure 8(b) depicts the timing diagram of the phase quantization noise, the applied cancellation signal, and the uncancelled PQN. In [20], the cancellation signal was advanced by half a period to improve the effectiveness of the noise cancellation.
In the following subsections, results from detailed system simulations are used to illustrate this technique and bring up the associated trade-offs, along with mathematical expressions for the residual quantization noise. It will also be shown how the effectiveness of cancellation changes with a reduction in integration time and after quantization of the cancellation signal. Since the techniques are applicable to all forms of PQN shaping algorithms and to any resolution in the phase data path, 2nd-order noise shaping with a 5-bit quantizer in the forward path is used to illustrate these techniques. For all system simulations, the input data is 20Mb/s GFSK modulated.

3.1. Advancement of Cancellation Signal

The control voltage-to-frequency response of the VCO to a cancellation signal was found to be low-pass with a very high cutoff frequency. Hence, the cancellation frequency can be modeled as a rectangular pulse, and the cancellation phase becomes its integrated form. The spectrum of the uncancelled PQN can be expressed as the product of three factors: the NTF of the ΔΣ modulator quantizing the phase, a term modeling the pulse shaping of the uncancelled error, and a term modeling the effect of PQN cancellation. When PQN cancellation is off, the pulse-shaping term is a sinc function representing the zeroth-order hold, and the cancellation term is 1. However, when a cancellation signal is applied through the VCO control port, two changes take place: (1) the pulse shape of the uncancelled quantization noise becomes saw-tooth, and (2) the magnitude of the quantization noise becomes a first-order difference of the initial quantization noise. These changes can be easily observed in the timing plot of Figure 8(b). As a result, the pulse-shaping term for a saw-tooth pulse attains a DC value of −6dB and the cancellation term becomes a first-order difference. The saw-tooth pulse shape for the PQN and the additional first-order noise shaping result in lower quantization noise at low frequencies but higher noise at high frequencies (Figure 7).
Hence, noise cancellation is limited to frequencies below a 40MHz offset, while the noise increases by 2dB at higher offset frequencies due to the additional first-order noise shaping.

In order to improve the PQN cancellation mechanism, the cancellation signal can be advanced by half a period. As a result, the residual quantization noise rises to only half of the value attained in the earlier case and afterwards its sign gets flipped (Figure 8(a)). Hence, the resulting pulse-shaping function has a zero at DC and is reduced by 6dB. The combined effect of these two changes is a reduction in peak quantization noise by 10dB, along with a maximum improvement of 17dB at a lower frequency (Figure 7). The pulse-shaping functions and the expressions for the uncancelled PQN for the three cases are plotted in the appendix. It should be noted that half-period advancement results in the highest achievable noise cancellation, compared to other values of signal advancement.

3.2. Reduction in Integration Time

Further improvement in PQN cancellation can be obtained by using a return-to-zero (RZ) DAC or an equivalent DCO to control the deviations in VCO frequency. In this case, integration of the frequency input to the VCO is performed for a shorter duration of time, and hence perfect phase cancellation is obtained for a longer duration as opposed to one time instant. The required frequency deviation must also be increased in proportion to the reduction in integration time, such that the phase accumulated in one time period equals the required cancellation phase value. Figures 9(a) and 9(b) depict the time waveforms for two cases of reduced integration time. The resultant improvement in PQN cancellation is 6dB for an integration time of half the update period, as shown in Figure 10. If the integration time is reduced further while simultaneously increasing the cancellation frequency signal, the output PQN reduces by 6dB for each octave reduction in integration time. In the limit of an impulse in the frequency cancellation signal, perfect cancellation is obtained at all times.
Since a shorter integration time requires a larger frequency deviation from the VCO, there exists a trade-off between noise cancellation and VCO frequency deviation. The minimum integration time, and the resultant noise cancellation, are limited by the maximum frequency deviation that can be linearly obtained from the VCO. For instance, for a maximum achievable frequency deviation of 100MHz and a 4-bit phase interpolator, the maximum cancellation phase required is 0.4 radians, and hence the minimum integration time allowed becomes 625ps.

3.3. Combination of Cancellation Signal Advancement and Reduction in Integration Time

The improvement in noise cancellation can be further increased by combining both signal advancement and reduction in integration time. When the integration time is reduced, the optimal signal advancement changes so that the peak magnitude of the PQN splits equally between its positive and negative cycles. In general, the optimal clock advancement reduces by a factor of 2 when the integration time is reduced by a factor of 2. The PSD of the output signal when both clock advancement and reduced integration time are used is plotted in Figure 12. For comparison, PSDs for the other combinations of clock advance and integration time are also plotted. The peak PQN is reduced by an additional 10dB due to the simultaneous application of the two techniques.

3.4. Optimized Modulator for PQN Shaping

A second-order ΔΣ modulator results in excessive quantization noise at high frequency offsets. This increase can lead to spectral mask violation. By including a pair of complex poles in the NTF [35], the high-frequency noise can be reduced at the expense of higher noise at low frequencies. The improvement in noise reduction after the inclusion of a pair of complex poles in the NTF is depicted in Figure 11(c).
In addition, the low-frequency noise degradation observed in the 2nd-order modulator due to the advancement of the cancellation signal is also improved, resulting in lower noise for the optimized NTF.

3.5. Quantization of Cancellation Signal

Since, in a digital implementation of the cancellation path, the frequency data fed to the second VCO port must go through a quantization process, its impact also requires careful attention. If the frequency data is uniformly quantized within the input dynamic range of the VCO control port, its quantization noise has a constant PSD across the sampling bandwidth. Within the loop bandwidth of the PLL, this noise will be tracked by the negative feedback loop and its impact will be nullified. Outside the loop bandwidth, the VCO integrates this noise due to frequency-to-phase conversion, and hence an amplified version will appear at its output. Mathematically, the PSD of the quantization noise at the VCO output can be expressed in terms of the quantization step size in VCO control voltage, the sampling frequency, the frequency offset from the carrier, the VCO gain, and the noise transfer function. For an example case with a VCO gain of 100MHz/V, a sampling frequency of 450MHz, and a step size of 41mV (5-bit quantization), the calculated output noise is plotted in Figure 11(a) for uniform quantization and for 1st-order and 2nd-order noise shaping. If the integration time is reduced, the expression for the quantization noise changes accordingly (Figure 11(b)).

Clearly, for both cases, the noise at the VCO output will cause both EVM and ACPR degradation if uniform quantization is applied. In order to cancel the pole in the VCO transfer function, a first- or higher-order zero is required in the NTF of the cancellation-signal quantizer. First-order noise shaping results in a flat noise PSD close to DC, while second-order noise shaping has a zero at DC in its noise transfer function. Hence, the noise shaping employed must be of at least second order. The PSD of a GFSK-modulated signal with a quantized cancellation signal is shown in Figure 11(d).
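The PSD expression referenced in Section 3.5 did not survive in this copy of the text. A plausible reconstruction from the listed quantities (quantization step Δ, sampling frequency f_s, offset f from the carrier, VCO gain K_VCO, and the noise transfer function), following the standard treatment of uniform quantization noise integrated by a VCO, would be, up to a factor-of-two convention:

```latex
% Reconstructed, not verbatim from the paper: phase-noise PSD at the VCO
% output due to uniformly quantized control-voltage noise, shaped by the
% cancellation-path NTF and the VCO's frequency-to-phase integration.
L(f) \;\approx\; \frac{\Delta^{2}}{12\, f_{s}}
      \left(\frac{K_{\mathrm{VCO}}}{f}\right)^{2}
      \left|\mathrm{NTF}\!\left(e^{\,j 2\pi f / f_{s}}\right)\right|^{2}
```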
Clearly, the residual quantization noise matches the behavior expected from the preceding analysis for uniform quantization and for 1st-order and 2nd-order noise shaping in the cancellation path. The overall improvement obtained from the cancellation technique can now be compared with the case when it is off. From Figure 11(d), it can be observed that the peak quantization noise is −128dBc/Hz, which is 27dB lower than for the modulator with cancellation off (Figure 7).

4. Receive Band Noise

In a frequency division duplexing (FDD) system, both transmitter and receiver are operational simultaneously. For instance, in LTE-FDD frequency planning, the Tx-Rx separation varies from 30MHz in band XII (700MHz) to 400MHz in band X (Tx in 1710–1770MHz and Rx in 2110–2170MHz). Due to finite duplexer Tx-to-Rx isolation, the transmitter noise in the receive frequency band leaks into the receiver, which can desensitize the receiver (Figure 13). If the transmitter noise in the receive band is −160dBc/Hz and the duplexer Tx-to-Rx isolation in the receive band is 47dB, then the noise power at the LNA input can be computed assuming 24dBm power at the antenna, 1.5dB of duplexer Tx insertion loss, and 1.0dB of antenna switch insertion loss. As a result of this additional noise at the LNA input, the receiver noise figure can degrade by 0.5dB if it was 3dB to begin with (without including switch and duplexer loss).

In the phase path of a polar transmitter, this noise is composed of PLL phase noise and quantization noise added by the digital phase modulator. In practice, the phase noise of the PLL can be reduced to meet the requirement in the receive band. Consequently, the quantization noise of the modulator becomes the dominant source of noise, requiring additional filtering by off-chip components. However, the additional components can be avoided by positioning a quantization noise transfer function (NTF) notch at the receive band frequency.
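The receive-band noise budget above can be reproduced with a few lines of arithmetic (my own sketch of the stated numbers, with the thermal noise floor taken as −174dBm/Hz):

```python
import math

def db_to_lin(db):
    return 10 ** (db / 10)

# Values quoted in the text
p_ant_dbm    = 24.0     # power delivered at the antenna, dBm
l_dup_db     = 1.5      # duplexer Tx insertion loss, dB
l_sw_db      = 1.0      # antenna switch insertion loss, dB
tx_noise_dbc = -160.0   # Tx noise in the Rx band, dBc/Hz
iso_db       = 47.0     # duplexer Tx-to-Rx isolation, dB
nf_db        = 3.0      # receiver noise figure before Tx leakage, dB

p_pa_dbm = p_ant_dbm + l_dup_db + l_sw_db       # 26.5 dBm at the PA output
leak_dbm_hz = p_pa_dbm + tx_noise_dbc - iso_db  # -180.5 dBm/Hz at the LNA input
rx_floor_dbm_hz = -174.0 + nf_db                # receiver's own input noise density

total = 10 * math.log10(db_to_lin(rx_floor_dbm_hz) + db_to_lin(leak_dbm_hz))
print(round(leak_dbm_hz, 1))                    # -180.5
print(round(total - rx_floor_dbm_hz, 2))        # 0.46 -> the ~0.5 dB NF degradation
```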
The quantizer of the phase modulator can be modified to include a zero in the quantization noise transfer function. Due to the high-pass shaping of quantization noise, the 30–70MHz Rx band offset poses less of a design challenge. Improvement in noise at the receive band due to an NTF notch at 80MHz is shown in Figure 14. The resolution of the digital phase modulator was increased from 5 bits to 6 bits in this simulation, to lower its noise contribution. The modulator also employs the PQN cancellation technique with advancement in the cancellation path for the reduced integration time considered earlier. For comparison, the transfer function without the notch, but including the same PQN cancellation technique, is also shown. An improvement of 12dB is obtained due to the notch in the transfer function. When the cancellation path is quantized to 5 bits, the noise performance degrades by 2dB due to the additional noise of the cancellation path. The ACPR performance of the modulator is better than 62dB in the out-of-band region, which meets the requirements of both LTE and WiMAX. Although the ACPR specification can be met with a lower resolution, at least 4 bits are generally required to keep the frequency deviation in the cancellation path within a reasonable range.

5. Conclusion

Transmitters for upcoming wireless standards, the 60GHz band, and software-defined radio require a digital wide-bandwidth phase modulator for a reduction in power consumption and for achieving maximum flexibility in transmission. Several circuit and system techniques for designing such a modulator were reviewed in this paper. Signal processing details of phase quantization noise cancellation were presented, with emphasis on advancement of the cancellation signal, reduction in integration time, and the impact of quantization in the cancellation signal. In the final system, the residual PQN for a 2nd-order quantized digital phase modulator was lower by 27dB.
Inclusion of a noise transfer function notch was presented to meet the specification of receive band noise in a digital phase modulator.

Appendix

Pulse-shaping functions for a rectangular pulse, a saw-tooth pulse (saw-tooth1), and a modified saw-tooth pulse obtained after advancing the cancellation signal (saw-tooth2) are derived for a 2nd-order modulator. The resulting expressions, which model the effect of noise cancellation for the three cases, are depicted in Figure 15(b). Figure 15(c) depicts the calculated transfer function of the uncancelled PQN for the three cases.

Acknowledgments

This research was funded in part by National Science Foundation awards ECCS-0824279 and ECCS-0955330.
A hard question of finding angle

Could any smart person help solve this question? Thank you so much. Find x where AB = BC.

This is not a hard question, it is an impossible one. Not enough information here. Now I could be missing something, but I don't see how you can lock in an angle for x. If m<DAC = 0, then x = 90. But if m<DAC = 30, then x = 60. And if m<DAC = 45, then x = 45. This is true regardless of what is going on with triangle ABC. We need some other constraint on the problem.

It is a messy problem, but not impossible. To simplify notation, let $m(\angle DAC) = y$ and $m(\angle BAC) = m(\angle BCA) = z$. The sum of the measures of the angles of a quadrilateral is $360^\circ$. So $x + y + 2z + 205 = 360$. But $x + y = 90$, so $2z + 295 = 360$. Now you can finish.

But even if you can solve for z, you still have 2 variables and a single equation (or two equivalent equations anyway). Even if you know z, all you're left with is x + y = 90. Isn't it? How would you finish from there?
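For completeness, the relation in the second reply pins down z, though (as the last post observes) x and y then remain constrained only by x + y = 90:

```latex
2z + 295 = 360 \quad\Longrightarrow\quad 2z = 65 \quad\Longrightarrow\quad z = 32.5^\circ
```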
K as an Abbreviation for Thousands Date: 05/29/2002 at 21:48:38 From: Caryll Subject: Representation of thousands and millions In class last night my professor wrote $30M and intended it to mean thirty thousand dollars. I was taught that 30K meant thirty thousand. I looked at several of the sites referencing Roman numerals, but can't find anything about 'K'. Can you assist? Date: 05/29/2002 at 22:47:10 From: Doctor Peterson Subject: Re: Representation of thousands and millions Hi, Caryll. When we write "30 K" for 30,000, we are not using Roman numerals but pseudo-metric, where K stands for "kilo-". For the same reason, "30 M" means 30,000,000, using M for "mega-" (or just "million"). It would be wrong to use Roman numerals this way; M here can't stand for thousand. But I suspect it's a common mistake. See this site, listed in our FAQ, that tells all there is to know about units: How Many? an informal abbreviation for one thousand used in expressions where the unit is understood, such as "10K run" (10 kilometers) or "700K disk" (700 kilobytes or kibibytes). Note that "K" is also the symbol for the kelvin (see below). Also note that the symbol for the metric prefix kilo- (1000) is actually k-, not K-. M [1] informal abbreviation for million in expressions where the base unit is understood, as in "500M hard drive" (500 megabytes or mebibytes). In chemistry, M is the symbol for "molar" (see below). M [2] the Roman numeral 1000, sometimes used in symbols to indicate a thousand, as in Mcf, a traditional symbol for 1000 cubic feet. Given the widespread use of M to mean one million, this older use of M to mean 1000 is very confusing and should be scrapped. See also Is K a Roman numeral? - Doctor Peterson, The Math Forum Date: 05/30/2002 at 08:04:43 From: Caryll Subject: Thank you (Representation of thousands and millions) Many thanks for the clarification. Keep up the good work. 
Date: 10/08/2002 at 15:25:33 From: Chris Chang Subject: On M representing Thousands I believe it is incorrect for you to state that usage of M, such as $10M, is wrong to represent thousands. While $10K is more common, there has certainly been a convention for many years in various businesses for $10M to mean $10,000 and for $10MM to mean $10,000,000. This is a confusing issue because to my knowledge there is no definitive standard. Instead, convention has varied by field, and case to case. Indeed I am sure there are different people right now using M to mean thousands while others are using it to mean millions. I suggest checking the convention in a specific situation. Chris Chang Date: 10/08/2002 at 15:35:04 From: Doctor Peterson Subject: Re: On M representing Thousands Hi, Chris. Can you give me any references to show this usage, either a manual of some sort, or an example (preferably on the Web so I can see it easily)? I will still stand by my statement that it is "wrong," because as you admit it is very confusing; but as we know many things that are wrong are nevertheless standard, and it would be good to point that out in case people confuse the two concepts. Also, note that I referred to Russ Rowlett's page, where he says that M for thousand IS used, but should be eliminated, which is what I am saying too. Neither of us claims that it is only used mistakenly, just that it is not a good idea. 
I found some references from government and industry, often in energy- or forest-related fields where Rowlett's "Mcf" is also found:

Glossary - Southern Appalachian Forest Coalition
M - thousand
M$ - thousands of dollars
MAUM - thousand animal unit month
MBF - thousand board feet
MCF - thousand cubic feet
MM - million
MM$ - millions of dollars
MMBF - million board feet
MMCF - million cubic feet
MMRVD - million recreation visitor-day

Energy INFOcard
MMbd = million barrels per day; Mcf = thousand cubic feet;
tcf = trillion cubic feet; kWh = kilowatthour; MM = million

Fuel Alcohol Plant Cost Study Cases
CAPITAL COSTS $25 MM

The frequent coexistence of MM in money and in unit names suggests that in fields where such obsolescent units are used, the habit carries over into financial notation. I can also refer to these entries in Rowlett that I had missed; they confirm my impression of the industries affected:

MM
an abbreviation for one million, seen in a few traditional units such as those listed below. The abbreviation is meant to indicate one thousand thousand, M being the Roman numeral 1000. However, MM actually means 2000, not one million, in Roman numeration.

MMb, MMbo
symbols for one million barrels of oil; see megabarrel above.

MMBF or MMBM
symbols sometimes used in U.S. forestry for one million board feet. One MMBF represents a volume of 83 333 cubic feet or 2360 cubic meters. "BM" stands for "board measure."

MM Btu
traditional symbol for one million Btu (about 1.055 057 gigajoules (GJ)), a unit used widely in the energy industry. This unit is also called the dekatherm.

MMcf
a symbol for one million cubic feet (28 316.85 m3, or 28.316 85 megaliters).

MM scfd
symbol for one million standard cubic feet per day, the customary unit for measuring the production and flow of natural gas. "Standard" means that the measurement is adjusted to standard temperature (60 °F or 15.6 °C) and pressure (1 atmosphere).
As much as I dislike this notation, I certainly have to admit that it is still current in some fields, and has to be lived with for now. The professor in the original question has the final say. Thanks for pointing out this interesting backwater of notation! - Doctor Peterson, The Math Forum Date: 10/08/2002 at 16:38:52 From: Chris Chang Subject: On M representing Thousands I agree with you that it would be nice to get rid of MM representing millions and simply use M, or any sort of standardization so we know what people mean! I was specifically thinking of the financial industry, which your Rowlett citation mentions, where MM is still in regular use. Admittedly, now that I look around and think about it, I have not seen M used much recently - primarily because figures tend to be larger and so are not often quoted in thousands. A few references follow: Done Deals Definitions A site that uses M for thousands and MM for millions. Merrill Lynch glossary Merrill Lynch glossary. See Market Cap entry, uses $MM for millions of dollars. Median Market Cap uses $M for thousands of dollars, although definition of $M is not stated explicitly. Information or Babel? - Madnick section 2.1 paragraph 3: (as an aside, even if given a clue, such as 23M, there may be a problem because sometimes M means millions, sometimes it means thousands - in which case MM is used to mean millions) Exactly the various usage we are talking about. 7-11 Financial Report This financial report just uses MM for millions. Yale Career Information Center This Yale guide uses K for thousands, M to mean one million, and MM to represent millions. This seems even more confusing to me, but goes to my point that there is a lot of different usage around. Date: 01/17/2003 at 18:16:31 From: Richard Wilkes Subject: Recent article on the letter M I noted with agreement your statement that "K" refers to thousands, "M" to millions. 
However, it is not uncommon in the business world to see "M" for thousands, "MM" for millions, and "MMM" for billions. This probably is a fairly recent trend/shortcut in finance. The same holds true for dashes, such as one the length of a hyphen, used as a superscript above and right of the number (as one would in noting that a number has been squared or cubed, for example). I have seen (and have used myself when writing down numbers) one (superscript) dash for thousands, two for millions, three for billions, and so on. Richard Wilkes Financial/M&A Advisor Houston, TX Date: 10/01/2005 at 18:22:29 From: George Subject: K as an Abbreviation for Thousands You mention a preference for the use of "K" rather than "M" for thousands. I have been in and around the printing industry for over twenty five years. In our industry, it is true that K is the standard used to indicate thousands in some parts of the world, and M is the standard used in others, including the United States. Until a consensus is reached, I agree that it could be confusing; however, until that time, there is an easy way to avoid that confusion. Don't focus on changing the symbol for thousands, which would be both difficult and time consuming, considering the various cultural predispositions. Instead, always use MM for millions. That way neither K nor M would be mistaken for millions in my industry as well as many others. Date: 10/01/2005 at 21:15:12 From: Doctor Peterson Subject: Re: K as an Abbreviation for Thousands Hi, George. That's not a bad thought; though if you use M for thousands, that could still be mistaken for millions until the reader notices that you are using MM for that purpose. If you go further and use only K and MM, you could avoid ambiguity entirely (as long as your audience is not so stuck on one tradition that they don't recognize the other). 
On the other hand, your rule just takes two individually logical systems and combines them into a hybrid that makes no sense, and that would have to be replaced eventually anyway if we want to migrate to a single consistent system. I don't quite like it, but could probably live with it if I had to! I think there will always be some confusion, but your idea is a possibility for reducing it. - Doctor Peterson, The Math Forum
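George's suggestion, accepting K for thousands and requiring MM for millions, is easy to make concrete in code. The sketch below is illustrative only; the suffix table reflects the conventions discussed in this thread, not any official standard, and the bare "M" is deliberately rejected as the ambiguous case:

```python
# Multipliers reflecting the convention discussed above: K and MM are
# unambiguous; a bare M is rejected because different industries read it
# as either thousands or millions.
MULTIPLIERS = {"K": 1_000, "MM": 1_000_000, "MMM": 1_000_000_000}

def parse_amount(text):
    """Parse figures like '$30K' or '$25MM'; raise on the ambiguous 'M'."""
    s = text.strip().lstrip("$")
    suffix = ""
    while s and s[-1].isalpha():
        suffix = s[-1] + suffix
        s = s[:-1]
    if suffix == "M":
        raise ValueError("'M' is ambiguous: thousands in finance, millions elsewhere")
    if suffix and suffix not in MULTIPLIERS:
        raise ValueError(f"unknown suffix {suffix!r}")
    return float(s) * MULTIPLIERS.get(suffix, 1)

print(parse_amount("$30K"))    # 30000.0
print(parse_amount("$25MM"))   # 25000000.0
```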
When to use which?

August 11th 2007, 09:33 PM

If 5 cards are dealt from 52 cards, determine the probability of getting 3 red cards and 2 black face cards. The answer is $\frac{\left(_{26}C_3\right)\left(_6C_2\right)}{_{52}C_5}$, but I got $\left(_{52}C_5\right)\left(\frac{1}{2}\right)^5\left(\frac{6}{52}\right)^2$. How do you know when to use which? Thank you!

August 12th 2007, 03:26 AM

Hello, wby!

Sorry, but everything you wrote is incorrect. That $_{52}C_5$ is the number of possible 5-card hands. It belongs in the denominator. When they state "Five cards are dealt", we generally assume it is without replacement. So the probability of getting a certain color is not $\frac{1}{2}$ each time, and the probability of getting a black face card is not $\frac{6}{52}$ each time.

Their answer is correct. You want to draw 3 red cards from the available 26 red cards. There are $_{26}C_3$ ways. You want to draw 2 black face cards from the available 6 black face cards. There are $_6C_2$ ways. Hence, there are $\left(_{26}C_3\right)\left(_6C_2\right)$ ways to draw 3 red cards and 2 black face cards. And that is over $_{52}C_5$, the number of possible 5-card hands.
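Soroban's counting argument is easy to verify numerically. The snippet below computes both the correct count-based probability and the original poster's expression, whose value exceeds 1 and therefore cannot be a probability at all:

```python
from math import comb

# Correct count: choose 3 of the 26 red cards and 2 of the 6 black face
# cards, out of all C(52,5) equally likely 5-card hands.
p = comb(26, 3) * comb(6, 2) / comb(52, 5)
print(p)                    # ~0.0150

# The original attempt mixes the hand count with per-card probabilities;
# its value is far greater than 1, which exposes the error immediately.
wrong = comb(52, 5) * (1 / 2) ** 5 * (6 / 52) ** 2
print(wrong)                # ~1081
```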
Robert Hansen's Blog

From the

"Following the tea-party wave in the 2010 election, the 112th Congress looks set to be the least productive in recent history. By the end of November, the House had passed 146 bills over the previous two years, by far the smallest number for any Congress since 1948. The Senate passed fewer bills in 2012 than in any year since at least 1992."

Damn, only 146 bills. And the fat lady hasn't begun to sing quite yet. We might yet get a tax increase without any spending cuts. Imagine, the automatic spending cuts set to kick in are $110 billion...on a GDP of $15 trillion. I will save you the math...that is less than 1%.

Note added: The cuts of $110 billion are only 1% of GDP, which is relevant when thinking about the (possible!) impact on spending and hence GDP. The cuts are a bit more than 4% of the total Federal budget, and of course more than 4% of discretionary spending. Still, to refer to the cuts as "draconian", as is customary, seems a bit of an exaggeration...especially when the Federal government is clearly living beyond its means.

Comparisons between heterogeneous countries like the US versus Sweden or Finland have always bothered me. It seems intuitive that all kinds of public policy should optimally depend on the degree of heterogeneity in the underlying population. A more homogeneous population should be able to provide a better safety net, for example, because there will be less of an incentive problem in providing a minimum level of welfare. Here is an attempt to formalize that intuition. I suppose this has been done before. I don't have the math worked out entirely but it seems right...

Let individuals in the population be defined by a parameter a, which we will think of as the individual's ability to create wealth in the market economy. (I do not presume a negative connotation here. This is a very narrow definition of ability -- the ability to create wealth in the market economy. Great artists might not have much ability by this definition!)
More precisely, each individual has a wealth function W = W(E | a), where W is wealth, a is an individual-specific parameter, and E is effort. We will assume that dW/dE = a and let a be normally distributed. Every individual will have an increasing disutility of effort, but this is the same for everyone. Each individual will in the market economy choose her optimal level of effort and therefore her optimal level of wealth. The optimum occurs where the marginal disutility of effort equals the marginal effect of effort on wealth, which is given by a. With marginal disutility of effort increasing, it will be the case that individuals with higher a choose higher levels of wealth. This makes sense. If an individual has the capability to create more wealth, they will choose to do so. This is the essential heterogeneity I am dealing with.

Now bring in public policy in the sense of a minimum safety net level of wealth, S, or a subsidy that would be available to anyone who has less wealth than S. If there is no disutility associated with receiving the safety net subsidy, then a rational individual should compare the wealth, less the disutility of effort, they would achieve in the market economy to S and choose whichever is greater. Since individuals with higher a choose higher wealth, there will be some critical level of the ability parameter, let's denote it a*, below which it will be optimal to elect the subsidy and above which it will be optimal to engage in the market economy.

The final step is to think about how public policy sets S. There must be some value associated with equity, letting the less able achieve a minimum level of net wealth. The cost of achieving equity, however, is the loss of effort from individuals who choose the subsidy. As the base subsidy S increases, several marginal effects occur: One, the marginal value of increased equity falls, because individuals getting the subsidy are increasingly well off.
Two, the marginal wealth loss increases as more-able individuals drop out of the market economy. Wealth loss has to be a negative. And third, as S increases, we move along the normal curve and experience an increasing slope of that density. This last point is critical, as it addresses the issue of heterogeneity. I assume that S is further than one standard deviation less than the mean of a. Since the inflection point of the normal is at one SD away from the mean, that means we are in the range of increasing slope. So as S increases in that range, more individuals are caught up, and more wealth is lost even if marginal ability were not increasing.

The optimal S, S*, has to balance the marginal benefits of greater equity against the marginal cost of wealth loss. Remember that with S* there is also an implied a*. Individuals with ability below a* take the subsidy; individuals above a* participate in the market economy.

Here is a picture of two normals. The tighter one represents a more homogeneous society to begin with (Finland, Sweden). The less tight one we can think of as the US. Now remember we are far to the left on these densities, where the slope is increasing in a (which is on the horizontal axis). Here is where it starts getting a little tricky, and I will have to check the math by getting the derivative of the normal density as a function of the standard deviation. But suppose that S* for the tighter distribution, the more homogeneous population, implies an a* right about where the density curve in the picture above ends. Then go up to the density for the less homogeneous population, the wider normal curve. Note that the slope of that curve will be greater than for the other curve -- essentially we are closer to the inflection point, where the slope reaches a maximum. If the slope of the density for the more heterogeneous population is greater, that means that the marginal cost of increasing S is also greater, for we are picking up more of the population.
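That slope comparison can be checked numerically. With illustrative parameters (my choice, not the post's), the wider density is indeed steeper deep in the left tail, though the ordering reverses closer to the mean, so the argument hinges on a* sitting far out in the tail of the tighter distribution:

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-((x - mu) ** 2) / (2 * sigma**2)) / (sigma * math.sqrt(2 * math.pi))

def pdf_slope(x, mu=0.0, sigma=1.0):
    # d/dx of the normal density: -(x - mu)/sigma^2 * pdf(x)
    return -(x - mu) / sigma**2 * normal_pdf(x, mu, sigma)

tight, wide = 1.0, 2.0   # illustrative standard deviations ("Finland" vs "US")

# Deep in the left tail (x = -3, where the tight density has nearly died out)
# the wider density is steeper, so the marginal cost of raising S is larger
# for the more heterogeneous population.
print(pdf_slope(-3.0, sigma=tight), pdf_slope(-3.0, sigma=wide))

# Closer to the mean the comparison reverses, which is why the argument
# needs a* to lie far out in the tail of the tighter distribution.
print(pdf_slope(-1.5, sigma=tight), pdf_slope(-1.5, sigma=wide))
```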
All the other marginal effects are the same, for we are at the same level of a. A higher marginal cost of S means that the more heterogeneous population would optimally choose an S*, and an a*, less than that of the more homogeneous population. The optimal safety net depends on the heterogeneity of the population? When I teach about auctions, I like to ask students: What does an auction accomplish, or put differently, what social roles does an auction play? I point to two major roles: An auction determines an allocation -- who gets the good being sold, or who is chosen to produce -- and it also determines a price. Two very important things: allocation and price. In Medicare -- a confused and confusing policy area if there ever was one! -- auctions are used in both Medicare Part D (prescription drug coverage) and Medicare Advantage (private Medicare plans). I am concerned that some policy proposals for Medicare Advantage (MA) are asking too much from an auction, for they add a third role: determining the subsidy level for subscribers. This is a complicated issue, requiring auction theory that is at the frontier. But I think the intuition is pretty clear. Also, while I will focus on MA here, similar issues arise with Part D plans, albeit somewhat less so because of the way those rules are set. In a nutshell, here is the way MA plans work now. Private insurers submit bids to provide health coverage for those over 65, with bids submitted on a county basis. Folks who qualify for Medicare can either take the standard government-issue Medicare or opt into one of the private MA plans. The private plans are paid by the government a subsidy amount equal to the average per person cost of that county's standard Medicare plan. If the plan bids more than that, the enrollees in that plan pay the difference between the subsidy and the bid.
If a plan bids less than the subsidy, then enrollees don't pay anything but the plan has to rebate the difference to enrollees as either cash or extra benefits (I do need to verify the specifics of this, but for now I don't think it is crucial). Importantly, enrollees select which MA plan they want, so choice is a key part of the process. So this is fine. The auctions do two things, as above. They determine which of the private plans provide service (allocation) and they determine a price (the price paid by enrollees). Note that the subsidy is determined exogenously from the auction -- the average per person cost of standard Medicare. Granted, there might be some endogeneity here, as the cost of the local Medicare plan depends on who opts into MA plans...but that seems of second order importance. However, some policy proposals (see Alice Rivlin, for example) will add a third role to MA auctions, that of determining the subsidy. The typical idea is to set the subsidy at the second-lowest bid of the private insurers. The first order logic of this is great. Set the subsidy at that level, and you can be sure that at least two plans will be willing to offer coverage at that subsidy amount. Even more important, instead of having the MA subsidy set through a political process, it is set in a market mechanism. What could sound better than that? Here is my concern, arising from the effect that setting the subsidy in the auction will have on strategic bidding behavior. (Let's be clear that strategic bidding behavior should be expected, that is, insurers will not just put in bids that equal their expected cost, even if that is what the government asks for. Insurers will put in bids that maximize their expected profit.) The issue is that by putting in a higher bid, an insurer has a reasonable expectation that it will increase the subsidy (if its bid happens to be the second lowest).
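That incentive can be seen in a toy Monte Carlo (everything here is hypothetical: five insurers, costs drawn uniformly on [0, 1], a flat 0.25 bid markup). If the subsidy is set at the second-lowest bid, an across-the-board padding of bids passes through to the subsidy one-for-one:

```python
import random

def expected_subsidy(n_insurers, markup, trials=20000, seed=1):
    # Mean second-lowest bid when every insurer pads its cost by `markup`
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        bids = sorted(rng.random() + markup for _ in range(n_insurers))
        total += bids[1]              # second-lowest bid sets the subsidy
    return total / trials

truthful = expected_subsidy(5, markup=0.0)   # benchmark: bids equal costs
padded = expected_subsidy(5, markup=0.25)    # every bid padded by 0.25
print(truthful, padded)
```

With cost-equal bids the expected subsidy is about 1/3; with padded bids it is about 1/3 + 0.25. The "market-determined" subsidy simply tracks whatever bidders collectively add.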
This will increase the subsidy to enrollees and make it less likely that the insurer's bid will result in a net payment by the enrollees. Also, as the subsidy increases, more people will opt into the MA plan arena. Seems pretty clear to me that this will result in higher bids. Amplification of this problem arises because in MA plans, there is not a standard package of benefits. By adding benefits, and putting in a higher bid reflecting the higher cost of that expanded package, an insurer minimizes any competitive effect of being a high bidder in the auction while still having a reasonable expectation that the subsidy will be increased. As all bidders do this, the entire distribution of bids shifts higher. Studies that have been done on the cost savings from basing the subsidy on the second lowest bid are obviously wrong, as that second lowest bid is going to be higher. The idea is not that different from shifting from a second-price sealed bid auction to a first-price sealed bid auction (standard auction where something is being SOLD to bidders). It would seem that taking the highest bid as the price in an auction would clearly be better than taking the second highest. But as the rules change from second-highest to highest, we have to expect that bidders will lower their bids. I always ask students: What do you think is greater -- the second highest out of a distribution, or the first highest out of a lower distribution? Sorry for the small font on this -- the table is from here if you want to see the original. I added the last line, percent growth year on year. Three main points to note: First, the 18% increase in total federal outlays from 2008 to 2009, on top of a 9% increase 2007 to 2008, both of which were stimulus to a great extent. Second, since then, there is only one year when nominal spending went down, by 2% from 2009 to 2010. And third, overall between 2007 and 2012, nominal Federal outlays are up 39%.
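Taking those percentages at face value, the growth in the two years the post does not break out (2010-11 and 2011-12) is pinned down by the 39% total; a quick sketch of the arithmetic:

```python
# Stated year-on-year factors: +9% (2007-08), +18% (2008-09), -2% (2009-10);
# total stated growth 2007-2012 is +39%.
stated = [1.09, 1.18, 0.98]
total = 1.39

known = 1.0
for g in stated:
    known *= g                       # cumulative factor through 2010

remaining = total / known            # combined factor for 2010-11 and 2011-12
per_year = remaining ** 0.5          # implied average annual factor
print(round(known, 3), round(per_year, 3))
```

The stated figures through 2010 compound to about +26%, so the remaining two years imply roughly +5% per year: spending kept growing after the 2009-10 dip.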
In previous posts, I noted my fear that stimulus spending would become permanent. Here is a quick graph of the outlays from 2007-12. Speaker Boehner pulled his "Plan B" bill from the House Thursday evening, saying there were not enough votes to pass it. This bill would have preserved the current tax rates for those under $1 million in income, allowing tax rates to rise on those above. Pundits are crying the end of the world, which coincidentally coincides with the predictions of the ancient Mayans. The end of the world as we know it is not nigh. I actually give the advantage now to the forces in favor of smaller government, as in less spending and lower taxes. Look at it this way. Any deal has to win approval of a majority in the House. My working assumption is that the Republicans will hold together (but see the * endnote below). Thus, any deal needs approval of almost all Republicans. Think of the Republicans as being lined up on a continuum, from very conservative (lower taxes, lower spending) to more moderate (willing to accept higher taxes, higher spending). Boehner's failure defines the conservative side of the spectrum in regard to the minimum deal it is willing to accept and the risks it is willing to endure. With this clarity, the center of gravity in the Republican continuum has shifted to the conservative side. Suppose I was negotiating with two people, one very demanding and one less so. The more demanding person just committed himself to blowing his head off if he doesn't get his way. I would say that the center of gravity shifted to the very demanding side of the spectrum. The conservative Republicans are willing to risk going over the cliff, and I think they are correct in accepting that risk. My belief is that "the cliff" is not anything like truly going over a cliff. 
There will be some unfortunate consequences, especially in regard to the Alternative Minimum Tax and Medicare rates for doctors (the media are not focusing on the Medicare implications, but the so-called Doc Fix needs to be voted on this year or Medicare payments to docs will fall). Taxes will go up, which to be honest I don't see as a disaster either. If I am right in that the cliff is not a disaster, the Democrats lose much of their negotiating power in the new year. That power right now is fueled by media-supported claims of economic apocalypse. If Jan. 1 comes with no deal, and life goes on, how does the Democrat position look? There will be some cries of pain, yes, but blame will be equally spread -- and if there is any justice, most folks will blame the leader, i.e., President Obama, for failing to negotiate a deal. Thus, in the new year, the balance of power shifts to the conservative side, with a better prospect of getting more spending cuts and a better mix on the revenue side. *Endnote. It is possible that the Republicans will not hold together in the House. If Obama and Reid craft a deal in the Senate, could that attract all the Democrats in the House and just enough Republicans to pass? Possible, but unlikely...unless the deal is attractive enough on the "less spending and taxes" dimension.
Least Upper Bound Property Proof

September 7th 2008, 11:23 AM

Least Upper Bound Property Proof

Prove that a nonempty set S that is bounded above has a least upper bound. For this problem, we must use the following way to show the proof: Let $u_0 \in S$ and $N_0$ be an upper bound of S. Let $v_{0} = \frac {u_0 + N_0 }{2}$. If $v_0$ is an upper bound, let $N_1 = v_0$ and $u_1 = u_0$. If $v_0$ is not an upper bound, then let $N_1 = N_0$ and choose $u_1 \in S$ with $u_1 > v_0$. Repeat the process to obtain sequences $N_n$ and $u_n$; we have to show that they both converge to LUB(S). To be honest, I'm a bit lost on how to generate $N_n$ and $u_n$, so if $v_0$ is not an upper bound, that means $N_0$ is still the least upper bound since $v_0 \in S$, right? Suppose it is not true, then we let $v_1$ be something lower, or closer to the sequence, but what should the sequences $N_n$ and $u_n$ look like? Thank you very much!

September 7th 2008, 11:57 AM

There is a real problem in offering any help in this case. This statement is usually known as the Completeness Axiom. If you are asked to prove what is an axiom in one development then there must be an axiom in the other development that helps prove the statement. Can you tell us what axioms you have been given that deal with bounded sets? What are your axioms to this point?

September 7th 2008, 12:16 PM

So far we know that: A convergent sequence is bounded. A monotone sequence that is bounded converges. And for the set S in the problem, S is a nonempty set in the set of real numbers.

September 7th 2008, 01:38 PM

It is clear that you will use "A monotone sequence that is bounded converges." However, I still do not know what else you have proven about the structure of the real numbers. Here is a fact that may help recall what you have proved about the real numbers. If S contains a point of U (where U denotes the set of upper bounds of S), then that point is the LUB of S. So if there is no LUB of S both S & U can be shown to be open. Have you done anything with connectivity?
September 7th 2008, 05:38 PM

We haven't done anything with connected sets yet (at least not in this course). We went over field axioms and order axioms, well-ordered property, completeness property, and we talked about dense and equivalent classes. Haven't talked about open, closed, or connected.

September 8th 2008, 03:32 AM

Well, I think that is what I asked to begin with: "What is the statement of the completeness property?" Because this is equivalent to that.

September 8th 2008, 05:34 AM

The statement is: "An ordered field is said to be complete if it obeys the monotone sequence property". I was working on this problem, and I understand the idea of the proof now. So if the midpoint is an upper bound, then M would change; if it is not an upper bound, then x would change. In other words, $M_n$ is a sequence that is approaching sup(S) from the positive side while $x_n$ does the same from the negative side. Since both sequences are monotonic, M being decreasing while x increasing, and they are both bounded by sup(S), since if they move outside of their respective area they stop moving, sup(S) is the GLB of M and the LUB of x. It is just that I don't know how to write this correctly, since I don't know exactly what $M_n$ and $x_n$ equal. But here is what I think: So I have $M_n = \frac { M_{n-1} + x_0 }{2}$ On the other hand, I have: $x_n > \frac { x_{n-1} +M_0 }{2}$ Now, by construction, M has to be decreasing and x has to be increasing, and they are both bounded by sup(S), but how do I show that? Any hints would be appreciated, thank you!
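The interval-halving construction from the thread is easy to run numerically. A sketch (function and variable names are illustrative) for S = {x : x² < 2}, whose supremum is √2:

```python
def approx_sup(is_upper_bound, u0, N0, steps=60):
    # u stays <= sup(S) (it is never an upper bound), N stays an upper bound,
    # and the gap N - u halves on every step.
    u, N = u0, N0
    for _ in range(steps):
        v = (u + N) / 2
        if is_upper_bound(v):
            N = v        # midpoint is an upper bound: shrink from above
        else:
            u = v        # some element of S exceeds v: raise the floor
    return (u + N) / 2

# S = {x : x^2 < 2}; for v >= 0, v is an upper bound of S iff v^2 >= 2
sup_S = approx_sup(lambda v: v * v >= 2, u0=1.0, N0=2.0)
print(sup_S)   # converges to sqrt(2)
```

Both sequences are monotone and bounded ($N_n$ decreasing, $u_n$ increasing), so by the monotone sequence property each converges; and since $N_n - u_n \to 0$ they share one limit, which is sup(S).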
Algebras and modules in monoidal model categories
Results 11 - 20 of 124

- Trans. Amer. Math. Soc
"... of unbounded chain complexes, where the cofibrations are the injections. This folk theorem is apparently due to Joyal, and has been generalized recently ..."
Cited by 27 (0 self)

- Algebr. Geom. Topol, 2002
Abstract: We construct Quillen equivalences between the model categories of monoids (rings), modules and algebras over two Quillen equivalent model categories under certain conditions. This is a continuation of our earlier work where we established model categories of monoids, modules and algebras [SS00]. As an application we extend the Dold-Kan equivalence to show that the model categories of simplicial rings, modules and algebras are Quillen equivalent to the associated model categories of connected differential graded rings, modules and algebras. We also show that our classification results from [SS] concerning stable model categories translate to any one of the known symmetric monoidal model categories of spectra.
Cited by 23 (8 self)

- 2001
Abstract: In this paper we develop the theory of operads, algebras and modules in cofibrantly generated symmetric monoidal model categories. We give J-semi model structures, which are a slightly weaker version of model structures, for operads and algebras and model structures for modules. We prove homotopy invariance properties for the categories of algebras and modules. In a second part we develop the theory of S-modules and algebras of [EKMM] and [KM], which allows a general homotopy theory for commutative algebras and pseudo unital symmetric monoidal categories of modules over them. Finally we prove a base change and projection formula.
Cited by 22 (0 self)

- 2009
Abstract: For any finite simplicial complex K, Davis and Januszkiewicz have defined a family of homotopy equivalent CW-complexes whose integral cohomology rings are isomorphic to the Stanley-Reisner algebra of K. Subsequently, Buchstaber and Panov gave an alternative construction, which they showed to be homotopy equivalent to the original examples. It is therefore natural to investigate the extent to which the homotopy type of a space X is determined by such a cohomology ring. Having analysed this problem rationally in Part I, we here consider it prime by prime, and utilise Lannes' T functor and Bousfield-Kan type obstruction theory to study the p-completion of X. We find the situation to be more subtle than for rationalisation, and confirm the uniqueness of the completion whenever K is a join of skeleta of simplices. We apply our results to the global problem by appealing to Sullivan's arithmetic square, and deduce integral uniqueness whenever the Stanley-Reisner algebra is a complete intersection.
Cited by 19 (6 self)

- London Math. Soc. Lecture Note Ser., 2004
"... These notes are based on lectures given at the Workshop on Structured ring spectra and ..."

- Amer. J. Math. 123, 2001
Abstract: We produce a highly structured way of associating a simplicial category to a model category which improves on work of Dwyer and Kan and answers a question of Hovey. We show that model categories satisfying a certain axiom are Quillen equivalent to simplicial model categories. A simplicial model category provides higher order structure such as composable mapping spaces and homotopy colimits. We also show that certain homotopy invariant functors can be replaced by weakly equivalent simplicial, or "continuous," functors. This is used to show that if a simplicial model category structure exists on a model category then it is unique up to simplicial Quillen equivalence.
Cited by 15 (3 self)
Statistics Chapter 1 -- Quiz

Parking at a large university has become a very big problem. University administrators are interested in determining the average parking time of its students (e.g. the time it takes a student to find a parking spot). An administrator inconspicuously followed 250 students and carefully recorded their parking times. Identify the population of interest to the university administration.

The amount of television viewed by today's youth is of primary concern to Parents Against Watching Television (PAWT). Three hundred parents of elementary school-aged children were asked to estimate the number of hours per week that their child watched television. The mean and the standard deviation for their responses were 15 and 5, respectively. Identify the data collection method used by PAWT in this study.

A manufacturer of cellular phones has decided that an assembly line is operating satisfactorily if less than 3% of the phones produced per day are defective. To check the quality of a day's production, the company decides to randomly sample 30 phones from a day's production to test for defects. Define the population of interest to the manufacturer.

The legal profession conducted a study to determine the percentage of cardiologists who had been sued for malpractice in the last five years. The sample was randomly chosen from a national directory of doctors. What is the variable of interest in this study?

A published report recently stated "Based on a sample of 150 new cars, there is evidence to indicate that the average new car price of all foreign automobiles is significantly higher than the average new car price of all American cars." This statement is an example of a ___________.

A sample of high school teenagers reported that 85% of those sampled are interested in pursuing a college education. This statement is a result of a(n) __________.
A statistics student researched her statistics project in the library and found a reference book that contained the median family incomes for all 50 states. On her project, she would report her data as being collected using __________.

A personnel director at a large company studied the eating habits of the company's employees. The director noted whether an employee brought their own lunch to work, ate at the company cafeteria, or went out to eat lunch. The goal of the study was to improve the company cafeteria. This type of data collection would best be considered as a(n) __________.

Which of the following is not an element of descriptive statistical problems?

A study published in 1990 attempted to estimate the proportion of Florida residents who were willing to spend more tax dollars on protecting the Florida beaches from environmental disasters. Twenty-five hundred Florida residents were surveyed. Which of the following describes the variable of interest in the study?
Kong-Fatt Wong

Quantum interest in two dimensions, Physical Review D66: 064007 (2002). E. Teo and K.F. Wong

Manuscripts in preparation

Time-varying gain modulation on decision-making dynamics and performance: Analysis of a biological network model. R. Niyogi and K.F. Wong-Lin^†

Performance-monitoring, gating, and learning: A connectionist model of task switching. M.T. Todd, K.F. Wong-Lin and J.D. Cohen

Selected conference abstracts and presentations

Flexible and optimal perceptual decisions with time-varying gain modulation, Department of Systems Science, Beijing Normal University, Beijing, China (Apr 28, 2010) [Invited talk]

Neural circuit dynamics of perceptual decision making: External perturbation and internal modulation, Department of Electrical and Computer Engineering, National University of Singapore, Singapore (Mar 18, 2010) [Invited talk]

Time-varying gain modulation on neural circuit dynamics and performance in perceptual decisions, R.K. Niyogi and K.F. Wong-Lin, Abstract # 246, Computational and Systems Neuroscience (CoSyNe) 2010, Salt Lake City, UT, USA (Feb 25 - Feb 28, 2010)

Contributions of time-varying gain modulation in perceptual decision making, Dynamical Systems and Nonlinear Science Seminar, Princeton University, Princeton, NJ, USA (Dec 11, 2009) [Talk]

Optimal control of countermanding saccadic eye movements, K.F. Wong-Lin, P. Eckhoff, P. Holmes and J.D. Cohen, 4th Computational Cognitive Neuroscience Conference, Boston, MA, USA (Nov 18-19, 2009)

Sequential effects in biased perceptual choice tasks, S. Goldfarb, K.F. Wong-Lin, N. Leonard and P. Holmes, 4th Computational Cognitive Neuroscience Conference, Boston, MA, USA (Nov 18-19, 2009)

Are redundant neurons redundant in categorical decision making? A robustness study of a network model, K.F. Wong-Lin, Program # 194.12, Society for Neuroscience meeting, Chicago, IL, USA (Oct 17-21, 2009)

Optimality, robustness and neural dynamics of decision making under norepinephrine modulation: A spiking neuronal network model, 42nd Winter Conference on Brain Research, Copper Mountain, CO, USA (Jan 24-30, 2009) [Invited panel talk]

Perturbing the formation of a decision, Applied Dynamical Systems Seminar, Department of Mathematics, Drexel University, Philadelphia, PA, USA (Jan 23, 2009) [Invited talk]

Robustness and optimality of decision making under norepinephrine modulation: A biophysical model of the neural substrate, P. Eckhoff, K.F. Wong-Lin and P. Holmes, Program # 590.12, Society for Neuroscience meeting, Washington D.C., USA (Nov 15-19, 2008)

Optimality, robustness and neural dynamics of decision making under norepinephrine modulation: a spiking neuronal network model, Sloan-Swartz Center for Theoretical Neurobiology annual summer meeting, Princeton University, Princeton, NJ, USA (July 19-22, 2008) [Talk]

Analysis of a biologically realistic model for saccade-countermanding tasks, K.-F. Wong, P. Eckhoff, P. Holmes and J.D. Cohen, Program # 719.3, Society for Neuroscience meeting, San Diego, CA, USA (Nov 3-7, 2007)

Exploring norepinephrine modulation in a model for decision making, P. Eckhoff, K.-F. Wong and P. Holmes, Program # 645.9, Society for Neuroscience meeting, San Diego, CA, USA (Nov 3-7, 2007)

A model analysis of the mechanisms underlying sequential effects by separation of timescales, J. Gao, K.-F. Wong, P. Holmes and J.D. Cohen, Program # 740.3, Society for Neuroscience meeting, San Diego, CA, USA (Nov 3-7, 2007)

Competition, gating, and learning: A new computational model of task switching, M.T. Todd, K.-F. Wong and J.D. Cohen, Program # 634.10, Society for Neuroscience meeting, San Diego, CA, USA (Nov 3-7, 2007)

Time-varying perturbations during perceptual decision-making, Dynamical Systems and Nonlinear Science Seminar, Princeton University, Princeton, NJ, USA (Oct 19, 2007) [Talk]

Time integration in a perceptual decision task: adding and subtracting brief pulses of evidence in a recurrent cortical network model, K.-F. Wong, A.C. Huk, X.-J. Wang and M.N. Shadlen, Program # 621.5, Society for Neuroscience meeting, Washington D.C., USA (Nov 12-16, 2005)

What determines the integration time of a decision microcircuit in the cortex?, K.-F. Wong and X.-J. Wang, Program # 668.15, Society for Neuroscience meeting, San Diego, CA, USA (Oct 23-27, 2004)

Teaching

Mathematical Neuroscience (with P. Holmes, E. Fuchs), Princeton University (Spring 2010)

Mathematical Neuroscience (with P. Holmes), Princeton University (Fall 2008)

Introductory Physics Laboratory (with lecture), Brandeis University (2002-2003) [Awarded the David L. Falkoff Teaching Fellow Prize for outstanding teaching]

Physics Laboratory, Brandeis University (2001-2002)

Physics for Pre-medical and Biology, University of Minnesota, Minneapolis (Summer 2000-2001)

Introductory Physics for Science and Engineering, University of Minnesota, Minneapolis (2000-2001)
Cone geometry puzzling task

May 21st 2011, 09:33 AM  #1

Cone geometry puzzling task

I've been given a couple of assignments from my teacher; everything I answered correctly except this one task. s = 15 cm ------> This is how far I've gone. I've figured out how to extract "r", but I can't seem to extract it with 2 unknowns. The cone is composed of 2 right-angled triangles, so I used one of the triangles to extract "r" using the Pythagorean theorem, and I got this: $s^2 = r^2 + h^2$. What do I do next?

May 21st 2011, 05:23 PM  #2

Hi Riddleton,

The lateral area of a right circular cone is $\pi r l$, where r = radius of base and l = slant height.
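The actual problem statement didn't survive in this thread, but the responder's hint suggests the task supplies the cone's lateral area. A sketch assuming, purely for illustration, a lateral area of A = 135π cm² together with the given s = 15 cm:

```python
from math import pi, sqrt

def cone_from_lateral_area(A, s):
    # Lateral area A = pi * r * s gives r; then the right triangle
    # s^2 = r^2 + h^2 gives the height h.
    r = A / (pi * s)
    h = sqrt(s ** 2 - r ** 2)
    return r, h

r, h = cone_from_lateral_area(A=135 * pi, s=15)   # example area, assumed
print(r, h)   # r = 9, h = 12: a 9-12-15 right triangle
```

With any other given quantity (volume, total surface area), the same pattern applies: use the given formula to pin down r, then the Pythagorean relation to get h.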
Time is the longest distance between two places. Tennessee Williams, 1945

The lapse of proper time for moving clocks in a gravitational field is often computed by splitting the problem into separate components, one to account for the velocity effect in accord with special relativity, and another to account for the gravitational effect in accord with general relativity. However, the general theory subsumes the special theory, and it's often easier to treat such problems holistically from a purely general relativistic standpoint. (The persistent tendency to artificially bifurcate problems into "special" and "general" components is partly due to the historical accident that Einstein arrived at the final theory in two stages.) In the vicinity of an isolated non-rotating spherical body whose Schwarzschild radius is 2m the metric has the form

$$(d\tau)^2 = \left(1 - \frac{2m}{r}\right)(dt)^2 - \frac{(dr)^2}{1 - 2m/r} - r^2 (d\theta)^2 - r^2 \sin^2(\theta)\,(d\phi)^2$$

where φ = longitude and θ = latitude (e.g., θ = 0 at the North Pole and θ = π/2 at the equator). Let's say our radial position r and our latitude θ are constant for each path in question (treating r as the "radius" in the weak field approximation). Then the coefficients of (dt)^2 and (dφ)^2 are both constants, and the metric reduces to

$$(d\tau)^2 = \left(1 - \frac{2m}{r}\right)(dt)^2 - r^2 \sin^2(\theta)\,(d\phi)^2$$

If we're sitting on the Earth's surface at the North Pole, we have sin(θ) = 0, so it follows that

$$d\tau = \sqrt{1 - \frac{2m}{r}}\; dt$$

where r is the radius of the Earth. On the other hand, in an equatorial orbit with radius r = R we have θ = π/2, sin^2(θ) = 1, and so the coefficient of (dφ)^2 is simply R^2. Now, recall Kepler's law ω^2 R^3 = m, which also happens to hold exactly in GR (provided that R is interpreted as the radial Schwarzschild coordinate and ω is defined with respect to Schwarzschild coordinate time). Since ω = dφ/dt we have R^2 = m/(ω^2 R) = (dt/dφ)^2 (m/R).
Thus the path of a particle in a circular orbit satisfies

$$d\tau = \sqrt{1 - \frac{3m}{R}}\; dt$$

Now for each test particle, one sitting at the North Pole and one in a circular orbit of radius R, the path parameter τ is the local proper time, so the ratio of the orbital proper time to the North Pole's proper time is

$$\frac{d\tau_{orbit}}{d\tau_{earth}} = \sqrt{\frac{1 - 3m/R}{1 - 2m/r}}$$

To isolate the difference in the two proper times, we can expand the above function into a power series in m to give

$$\frac{d\tau_{orbit}}{d\tau_{earth}} \approx 1 + \frac{m}{r} - \frac{3m}{2R}$$

Multiplying through by dτ_earth and then subtracting this quantity from both sides, we get

$$d\tau_{orbit} - d\tau_{earth} \approx \frac{m}{r}\left(1 - \frac{3}{2}\frac{r}{R}\right) d\tau_{earth}$$

The mass of the earth in geometrical units is about m = 0.00443 meters, and the radius of the earth is about r = 6.38(10)^6 meters, so we can insert these values and integrate over a given lapse Δτ_earth of proper time measured on Earth to give the difference between this elapsed time and the corresponding elapsed time for the circular orbit

$$\Delta\tau_{orbit} - \Delta\tau_{earth} \approx 6.94(10)^{-10}\left(1 - \frac{3}{2}\frac{r}{R}\right)\Delta\tau_{earth}$$

Consequently, for an orbit at the radius R = (3/2)r (about 2000 miles up) there is no difference in the lapses of proper time. For orbits lower than 3r/2 the satellite will show slightly less lapse of proper time (i.e., the above discrepancy will be negative), whereas for higher orbits it will show slightly more elapsed time than the corresponding interval at the North Pole. One might think that a further adjustment would be necessary to correlate the elapsed time to a particle at rest on the equator, to account for the rotation of the Earth, but the Earth tends to bulge due to its rotation in such a way that it maintains roughly an equi-potential surface, meaning that the rate of proper time is approximately constant at sea level, regardless of latitude. Indeed this was the basis of Newton's prediction of the Earth's oblate shape, since (as he pointed out) if the surface of the seas was not equi-potential, the water would flow so as to achieve equilibrium. In a low Earth orbit of, say, 360 miles, we have r/R = 0.917, so the proper time runs about 22.5 microseconds per day slower than a clock on the Earth.
On the other hand, for a geo-synchronous orbital radius of 22,000 miles we have r/R = 0.18, so the orbit's lapse of proper time actually exceeds the corresponding lapse of proper time on Earth by about 43.7 microseconds per day. Of course, as R continues to increase the orbital velocity drops to zero and we are left with just coordinate time for the orbit, relative to which a clock on Earth is "running slow" by about 60 micro-seconds per day, due entirely to the gravitational potential of the earth. (Hence during a typical human life span the Earth's gravity stretches out our lives to cover an extra 1.57 seconds of coordinate time.) Of particular interest is the orbit of the Global Positioning System (GPS) satellites, which are located in circular orbits approximately 20,200 kilometers above the Earth's surface (not a geosynchronous orbit, as is sometimes thought). Taking the mean radius of the Earth to be about r = 6378 kilometers, it follows that the orbital radius of the GPS satellites is R = 20200 + 6378 = 26578 kilometers, giving a ratio R/r of about 4.2. Inserting this ratio into the above formula, we find that the elapsed proper time per Earth-day for a GPS satellite is about 38.1 micro-seconds more than the elapsed time on Earth. Incidentally, the value of dτ_orbit given by the circular-orbit relation dτ = √(1 − 3m/R) dt goes to zero when the orbital radius R equals 3m, consistent with the fact that 3m is the radius of the orbit of light. This suggests that even if something prevented a massive object from collapsing within its Schwarzschild radius 2m, it would still be a very remarkable object if it was just within 3m, because then it could (theoretically) support circular light orbits, although I don't believe such orbits would be stable (even neglecting interference from in-falling matter). If neutrinos are massless there could also be neutrinos in 3m (unstable) orbits near such an object, although the evidence today indicates that neutrinos have a small positive mass.
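The first-order difference formula can be evaluated directly for all of these cases; a short sketch (the 360-mile altitude is converted as roughly 579 km, and the constants are the article's own):

```python
M_EARTH = 0.00443      # Earth's mass in geometric units (meters), from the text
R_EARTH = 6.378e6      # Earth's radius in meters
DAY = 86400.0          # seconds per day

def dtau_per_day(R):
    """Daily proper-time difference (orbit minus pole clock), using the
    first-order formula (m/r)(1 - (3/2) r/R) per unit of Earth time."""
    return (M_EARTH / R_EARTH) * (1.0 - 1.5 * R_EARTH / R) * DAY

low = dtau_per_day(R_EARTH + 579e3)   # ~360-mile altitude
gps = dtau_per_day(26578e3)           # GPS orbital radius from the text
geo = dtau_per_day(R_EARTH / 0.18)    # r/R = 0.18, as in the text
far = dtau_per_day(1e30)              # R -> infinity limit
print(low * 1e6, gps * 1e6, geo * 1e6, far * 1e6)  # in microseconds per day
```

This reproduces the −22.5 μs/day low-orbit figure, about +43.8 μs/day for the 22,000-mile orbit, and the 60 μs/day far-field limit; for GPS the formula gives about +38.4 μs/day, close to the 38.1 quoted above.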
The results of this and the previous section can be used to clarify the so-called twins paradox. In some treatments of special relativity the difference between the elapsed proper times along different paths between two fixed events is attributed to a difference in the locally "felt" accelerations along those paths. In other words, the asymmetry in the proper times is "explained" by the asymmetry in local accelerations. However, this explanation fails in the context of general relativity and gravity, because there are generally multiple free-fall (i.e., locally unaccelerated) paths of different proper lengths connecting two fixed events. This occurs, for example, with any two intersecting orbits with different eccentricities, provided they are arranged so that the clocks coincide at two intersections. To illustrate, consider the intersections between a circular and a purely radial “orbit” in the gravitational field of a spherically symmetrical mass m. One clock follows a perfectly circular orbit of radius r, while the other follows a purely radial (up and down) trajectory, beginning at a height r, climbing to R, and falling back to r, as shown below. We can arrange for the two clocks to initially coincide, and for the first clock to complete n circular orbits in the same (coordinate) time it takes for the second clock to rise and fall. Thus the objects coincide at two fixed events, and they are each in free-fall continuously in between those two events. Nevertheless, we will see that the elapsed proper times for these two objects are not the same. Throughout this example, we will use dimensionless times and distances by dividing each quantity by the mass m in geometric units. 
For a circular orbit of radius r in Schwarzschild spacetime, Kepler's third law gives the proper time to complete n revolutions as [equation]. Applying the constant ratio of proper time to coordinate time for a circular orbit, we also have the coordinate time to complete n revolutions: [equation]. For the radially moving object, the usual parametric cycloid relation (see Section 6.4) gives the total proper time for the rise and fall [equation] where the parameter a satisfies the relation [equation]. The total elapsed coordinate time for the radial object is [equation]. In order for the objects to coincide at the two events, the coordinate times must be equal, i.e., we must have Dt[circ] = Dt[radial]. Therefore, replacing r with q(1+cos(a)) in the expression for the coordinate time in circular orbits, we find that for any given n and q (= R/2) the parameter a must satisfy [equation]. Once we've determined the value of a for a given q and n, we can then determine the ratio of the elapsed proper times for the two paths from the relation [equation]. With n = 1 and fairly small values of r the ratio of proper times behaves as shown below. [graph] Not surprisingly, the ratio goes to infinity as r drops to 3, because the proper time for a circular orbit of radius 3m is zero. (Recall that the "r" in our equations signifies r/m in normal geometrical units.) The a parameters and proper time ratios for some larger values of r with n = 1 are tabulated below. [table] To determine the asymptotic behavior we can substitute 1/u for the variable q in the equation expressing the relation between q and a, and then expand into a series in u to give [equation]. Now for any given n let a[n] be defined such that [equation]. For large values of r the values of a will be quite close to a[n] because the ratio of proper times for the two free-falling clocks is close to 1.
Thus we can put a = a[n] + da in equation (3) and expand into a series in da to give [equation]. To determine the asymptotic da as a function of R and n we can put a = a[n] + da in equation (4) and expand into a series in da to give [equation]. For sufficiently large R the value of B[n] is negligible, so we have [equation]. Inserting this into (6) and recalling that 2/R is essentially equal to [1+cos(a[n])]/r since a is nearly equal to a[n], we arrive at the result [equation]. So, for any given n, we can solve (5) for a[n] and substitute into the above equation to give k[n], and then the ratio of proper times for two free-falling clocks, one moving radially from r to R and back to r while the other completes n circular orbits at radius r, is given (for any value of r much greater than the mass m of the gravitating body) by equation (7). The values of a[n], k[n], and R/r for several values of n are listed below. [table] As an example, consider a clock in a circular orbit at 360 miles above the Earth's surface. In this case the radius of the orbit is about (6.957)10^6 meters. Since the mass of the Earth in geometrical units is 0.00443 meters, we have the normalized radius r = (1.57053)10^9, and the total time of one orbit is approximately 5775 seconds (i.e., about 1.604 hours). In order for a radial trajectory to begin and end at this altitude and have the same elapsed coordinate time as one circular orbit at this altitude, the radial trajectory must extend up to R = (1.55)10^7 meters, which is about 5698 miles above the Earth's surface. Taking the value of k[1] from the table, we have [equation] and so the difference in elapsed proper times is given by [equation]. This is the amount by which the elapsed time on the radial (up-down) path would exceed the elapsed time on the circular path.
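The quoted ~5775-second orbital period can be sanity-checked from Kepler's third law in geometrical units; the conversion through c below is my addition, since the text keeps everything in meters:

```python
import math

# Values quoted in the example above (geometrical units, meters).
m = 0.00443        # Earth's mass
r_orbit = 6.957e6  # orbital radius for a 360-mile-high circular orbit
c = 2.998e8        # speed of light (m/s), to convert meters of time to seconds

# Kepler's third law gives the coordinate period t = 2*pi*sqrt(r^3/m),
# in meters of time; dividing by c converts to seconds.
T = 2 * math.pi * math.sqrt(r_orbit**3 / m) / c
print(T)  # ≈ 5.78e3 seconds, matching the quoted ~5775 s (~1.6 hours)
```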
Explicit construction of irreducible unitary connections

On a Riemann surface, irreducible unitary connections on complex vector bundles of rank $n\geq 2$ are uniquely determined by their underlying holomorphic structure $\frac{1}{2}(\nabla+i*\nabla)$, where $*$ is the adjoint of the complex structure $J$ of the Riemann surface. This holomorphic structure is stable. Are there any special examples where this correspondence is made more explicit?

riemann-surfaces dg.differential-geometry reference-request

Comments:

Surely you mean unitary connections on complex vector bundles on a complex manifold, for otherwise there is no underlying holomorphic structure. Since gauge theory is not only studied on complex manifolds, then could I ask you to please edit your question and state clearly your assumptions? Thanks. – José Figueroa-O'Farrill Apr 19 '11 at 8:16

Many thanks for editing the question! – José Figueroa-O'Farrill Apr 19 '11 at 11:00

You have been right, I should have asked my question more carefully. Thanks for your comments. – Sebastian Apr 19 '11 at 11:26
The Assignment of Atomic Term Symbols

Assigning Term Symbols

The ground state of the hydrogen atom is one electron in the lowest energy atomic orbital: the 1s. Therefore the total orbital angular momentum of all (one) electrons is L=l=0, and the total electron spin is S=s=1/2. Applying the 'definition' of the term symbol (Note: in the symbol, L must be replaced with its alphabetic 'code': L=0 is S, L=1 is P, L=2 is D, L=3 is F, L=4 is G, L=5 is H...; J is the vector sum of L and S: J = L+S, L+S-1, L+S-2, ..., |L-S|) results in a ^2S[1/2] atomic term for the ground state of the H atom. Exciting the single hydrogenic electron to higher orbitals results in different atomic states or 'terms' of the atom. Note that an H atom with the electron in a 3d and a 10d orbital both result in ^2D[3/2] and ^2D[5/2] terms, but at different energies.

The lowest electron configuration of He is 1s^2. The ground state of the neutral helium atom is therefore ^1S[0]. In fact, any electron configuration (orbital population) that consists of any combination of closed shells or subshells will result in this (totally symmetric) ^1S[0] term. Therefore, in the designation of atomic terms, the contribution from closed subshell electrons may be neglected.

To determine the states (terms) of a given atom or ion:
1. Write down the electronic configuration (ignore closed subshell electrons)
2. Determine the number of distinct microstates that can represent that configuration. If you have e electrons in a single open subshell of 2l+1 orbitals, this value is #microstates = (2(2l+1))!/e!(2(2l+1)-e)!
3. Tabulate the number of microstates that have a given M[L] and M[S]
4. Decompose your table into terms by elimination
5. Test the total degeneracy of the resultant terms to account for all the microstates counted in parts 2 and 3
6. Determine the lowest term for the configuration by Hund's rules.
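Step 2's counting formula is easy to check numerically; this small sketch (the function name is mine) evaluates (2(2l+1))!/[e!(2(2l+1)-e)!] for the configurations worked in the examples that follow:

```python
from math import factorial

def n_microstates(l, e):
    """Distinct microstates for e electrons in a subshell with orbital
    quantum number l (step 2 above)."""
    slots = 2 * (2 * l + 1)   # number of spin-orbitals in the subshell
    return factorial(slots) // (factorial(e) * factorial(slots - e))

print(n_microstates(1, 2))   # p^2: 6!/(2!4!) = 15
print(n_microstates(1, 3))   # p^3: 6!/(3!3!) = 20, the N example
print(n_microstates(2, 2))   # d^2: 45, the Ti example
```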
Summary Examples

H (1s^1): the ground state term symbol is ^2S[1/2].
He (1s^2): the ground state term symbol is ^1S[0].

He (1s^1 2s^1), an excited state configuration:
Terms: ^1S[0], ^3S[1] { There is no ^3S[0] nor ^3S[-1] term }

A single p electron outside closed subshells:
Term symbols: ^2P[1/2], ^2P[3/2] [The spin-orbit splitting is regular, so ^2P[1/2] is the ground state.]

A worked p^2 example:
Calculate the number of possible electron arrangements in the given configuration: there are 6!/4!2! = 15 microstates expected.
Write down all these possibilities.
Tabulate the total numbers by M[L] and M[S].
Decompose this table into terms.
Check this with reality.

What are the atomic state term symbols resulting from the lowest energy configuration of N (1s^2 2s^2 2p^3)?
This configuration has 6!/3!3! = 20 microstates.
Draw all the possibilities. Tabulate the totals. Assign terms. Check this with reality.

What are the atomic state term symbols resulting from the lowest energy configuration of O (1s^2 2s^2 2p^4)?
Aha! We don't have to do this, because the terms arising from p^2 and p^4 are exactly the same. Note, however, that the ordering of the J levels is now inverted. Check this with reality.

If you think you have mastered the process of determining the states arising from a given electronic configuration, you should try Ti (1s^2 2s^2 2p^6 3s^2 3p^6 4s^2 3d^2). Here is the answer depicted graphically. Check this with reality.

The terms resulting from a single open subshell are tabulated below for your amusement.
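The decomposition procedure above (steps 2-5) can be automated. The sketch below enumerates microstates and peels off terms by repeatedly taking the largest remaining M[L] (and the largest M[S] at that M[L]); the function names are mine, and spins are tracked in units of 1/2 to stay in integer arithmetic.

```python
from itertools import combinations
from collections import Counter

def terms(l, n):
    """LS terms from n equivalent electrons in a subshell of orbital
    quantum number l. Returns a list of (L, 2S) pairs."""
    # one spin-orbital per (ml, ms) pair; ms stored as +/-1 (units of 1/2)
    spin_orbitals = [(ml, ms) for ml in range(-l, l + 1) for ms in (-1, 1)]
    micro = Counter()
    for combo in combinations(spin_orbitals, n):
        ML = sum(ml for ml, _ in combo)
        MS = sum(ms for _, ms in combo)          # 2 * total M_S
        micro[(ML, MS)] += 1
    found = []
    while micro:
        L = max(k[0] for k in micro)             # largest remaining M[L]
        S2 = max(k[1] for k in micro if k[0] == L)
        found.append((L, S2))
        # remove the (2L+1)(2S+1) microstates this term accounts for
        for mL in range(-L, L + 1):
            for mS in range(-S2, S2 + 1, 2):
                micro[(mL, mS)] -= 1
                if micro[(mL, mS)] == 0:
                    del micro[(mL, mS)]
    return found

def symbol(L, S2):
    return f"^{S2 + 1}{'SPDFGHIK'[L]}"

print([symbol(L, S2) for L, S2 in terms(1, 3)])  # N's p^3: ['^2D', '^2P', '^4S']
print([symbol(L, S2) for L, S2 in terms(2, 2)])  # Ti's d^2: ^1G, ^3F, ^1D, ^3P, ^1S
```

Running it for p^2 and p^4 returns the same term set, confirming the particle-hole equivalence noted in the O example.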
Highland, MD Math Tutor Find a Highland, MD Math Tutor ...I have also taught Modern World History at the High School level. For the past 8 years I have taught the Computer Applications / Web Design class at my school. I am proficient in HTML as well as many applications. 29 Subjects: including algebra 1, prealgebra, geometry, reading ...I enjoy teaching students at every skill level. I believe in teaching beyond the short cuts and introducing students to the satisfaction of finding solutions using problem-solving skills. I teach basic through advanced mathematics and sciences. 14 Subjects: including trigonometry, precalculus, prealgebra, algebra 1 ...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studying that allowed me to efficiently learn all the required materials whil... 15 Subjects: including prealgebra, probability, algebra 1, algebra 2 ...Throughout the past few years as a middle school math teacher, I have helped many students strengthen their basic skills to allow them to be successful in other areas of math. I have attended many math trainings and conferences which have provided me with a wealth of teaching strategies and tech... 4 Subjects: including algebra 1, prealgebra, SAT math, elementary math ...Confused about picking the right tutor? I have almost a decade of experience in helping students achieve their personal best on standardized tests. I scored 800s on the Math and Reading sections of the SAT, and I have worked as a research assistant at the Johns Hopkins Bloomberg School of Public Health. 
41 Subjects: including trigonometry, GRE, LSAT, SAT math
Various questions on limits, derivatives and series

February 12th 2009, 06:45 PM #1
Dec 2008
Plz help me with these questions... am really stuck in them ... and really need help urgently... I had to submit them in 2 hours... plz help.. Questions are in attached document... because i dunno how to type here in the forum... sorry..

February 12th 2009, 07:28 PM #2
1. Since the degree of the numerator exceeds the degree of the denominator, you should be able to tell that the limit is infinite. You can also see it by dividing the numerator and the denominator by $x^4$, and noting that $\lim_{x\to\infty}\frac1x=0$.
2. This should be easy. If you are having trouble, state where you need help.
3. This is a telescoping series: $\sum_{n=1}^\infty\ln\left(\frac n{n+1}\right) = \sum_{n=1}^\infty\left[\ln n - \ln(n+1)\right]$.

February 12th 2009, 07:44 PM #3
Dec 2008
Thanx, I am solving the questions right now... and I have done Q1 with your kind help, but in Q2 I really don't know where I should start... first I thought of continuity, but when I saw we have to show F'(x)=F1'(x)=f(x) I got confused... Plz kindly help me out with where I should start... and I am solving Q3 right now.. if I get any difficulty I'll write it here... Thanx... but need help in Q2

February 12th 2009, 07:57 PM #4
You can differentiate $F$ and $F_1$ by differentiating each case separately. You should get $f(x)$ as the derivative. That's it.
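For reference, the telescoping hint in post #2 can be pushed one line further (this closing step is mine, not part of the original thread):

```latex
\sum_{n=1}^{N}\bigl[\ln n-\ln(n+1)\bigr]
  = \ln 1 - \ln(N+1) = -\ln(N+1) \longrightarrow -\infty
  \quad\text{as } N\to\infty,
```

so the partial sums diverge to $-\infty$; equivalently, the partial products $\prod_{n=1}^{N}\frac{n}{n+1}=\frac{1}{N+1}$ tend to $0$.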
Rowlett SAT Math Tutor Find a Rowlett SAT Math Tutor ...In my tutoring sessions, I have each student review and practice grammar, punctuation, organization, style, and other concepts covered on the ACT English section. If a student is taking the ACT Writing section, I include ACT essay practice as well. I’ve tutored over 100 hours of ACT and SAT prep, including over 30 hours of ACT and SAT Math. 15 Subjects: including SAT math, reading, algebra 1, geometry ...I currently have a number of students that I am working with in their math course along with preparation for TAKS. I specialize in preparing students for the TEAS test. My background consistent of tutoring the necessary math, reading, and language arts skills needed to pass this exam. 27 Subjects: including SAT math, reading, geometry, statistics ...In other words, television, your favorite DJ, five texting/face booking friends, and trash on the table are not conducive to a positive study environment. Secondly, the mind needs to be clutter free. Anxiety about yesterday's argument with a friend can truly hinder anyone's ability to accomplish tasks. 41 Subjects: including SAT math, English, reading, writing ...Why can't my teacher explain like you?" or something along those lines, as well as seeing improved scores from the student resulting from our sessions. As most students struggling with this subject have the idea that math is a dry, hard and boring subject, they tend to develop a negative attitud... 12 Subjects: including SAT math, Spanish, calculus, geometry Hi! My name is Elizabeth and my passion in life is to teach and help others. I have a bachelor's and master's in mathematics and am currently working in the field as a quantitative analyst for a software company. 10 Subjects: including SAT math, calculus, probability, algebra 1
Bright Chevron & an AMAZING Giveaway!

Well, here it is... The Bright Chevron Classroom Kit - bright teal, green, pink, yellow & purple chevron backgrounds! Thanks so much for all the great comments & emails on my Bright Dots Classroom Kit & requests for a chevron-themed kit. I've just posted the Bright Chevron Classroom Kit in both Traditional Print and D'Nealian Print. Click the pics below to check it out!

example from Traditional Print version
example from D'Nealian Print version

I promise to post the Freebie Word Wall Words with the chevron background very soon!

Now, for this great giveaway from Learning Resources. Have you seen these handy lil things? HearALL Assessment Recorder is very similar to the Easi-Speak Microphone except it just sits on the table & it can record several students at once. {Click here to read my post on the Easi-Speak Digital Microphone.} Here's the product description from Learning Resources: "Hear way beyond! Authentically assess students' skills and participation in small groups. Designed with 4 omni-directional microphones to capture clear recordings with dramatically reduced background noise. Play back, download and share files (WAV or MP3) with specialists or parents, or upload to portfolios. Records up to 4 hours of audio from reading groups, learning centers, listening and speaking activities, speech therapy and more!"

I was super excited to be able to try this out during my small groups. It is compact and has 4 separate microphones so it really picks up the kiddos' voices. I couldn't believe how clear the playback was. I love how I can play it back so the students can hear themselves. After hearing themselves read on the Hear All, they really paid close attention on the next reading! One kiddo said, "I have to be a super reader now because this thing could make me a star!" What a hoot! Hollywood get ready, here she comes!! I feel like I'm not missing any of the students reading. I can focus on one student while it records the others.
{Also, very good at keeping those students focused because they know they are being recorded!} Not only is this useful during guided reading groups, but also is very useful at centers and during partner reading to monitor participation & verbal interactions. Oh my! Almost forgot to tell a BIGGIE....NO BATTERIES folks! Now that is a huge plus! Just connect this handy lil thing to your computer & it charges up! You're good to go for 4 hours! I am looking forward to having my reading groups record various stories that they will be able to listen to later in our listening center. How excited will they be to hear themselves reading the

SORRY...THIS CONTEST HAS ENDED.

Now for the part you've all been waiting for... How can you win one of these super products for your classroom??? Well, it's super easy... Just leave a lil comment saying that you do the following:

That's 6 chances to win! Be sure to include your email address on each entry. Contest ends Thursday, June 28th - winner will be announced on Friday! Good luck!!!

Sorry, contest has ended.

94 comments:

1. I follow your blog!! Learning Is Something to Treasure
2. I follow your TPT store!! Learning Is Something to Treasure
3. I follow Learning Resources on Pinterest! Learning Is Something to Treasure
4. I follow you on Pinterest!! Learning Is Something to Treasure
5. I follow your blog!
6. I follow your TpT store!
7. I follow Learning Resources on fb!
8. I follow Learning Resources on pinterest.
9. I follow you on pinterest.
10. Just found your blog! Sara :) Smiling In Second Grade
11. I follow your blog!
12. I follow you on TpT! Sara :) Smiling In Second Grade
13. I follow your TpT store!
14. I like LR on Facebook. Sara :) Smiling In Second Grade
15. I follow LR on Twitter:) ...no Facebook tho:(
16. I follow your blog!!!! Crisscross Applesauce in First Grade
17. I follow LR on Facebook! Crisscross Applesauce in First Grade
18. I follow you on Pinterest! Crisscross Applesauce in First Grade
19.
I follow your blog! 20. I follow your TpT store :) 21. I follow your blog! 22. I follow learning resources on facebook! 23. I like Learning Resources on FB 24. I am a blog follower! tokyoshoes (at) hotmail (dot) com 25. Also a TPT follower! tokyoshoes (at) hotmail (dot) com 26. I follow your blog! 27. I follow your blog, your TpT store and also I follow you on Pinterest. I follow Learning Resources on Twitter and Pinterest and I "like" Learning Resources on Facebook. If I need to send you 6 different comments let me know. 1. Hi Kyp! Each separate comment counts as an entry! So go ahead & enter them separate if you like! (Don't forget your email.) :) 28. I follow your blog. 29. I follow you tpt store. 30. I like Learning Resources on Twitter. 31. I follow Learning Resources on Pinterest. 32. I follow you on Pinterest. 33. I follow your blog. Sweet Seconds 34. I follow your TpT store! Sweet Seconds 35. I follow Learning Resources on Twitter. Sweet Seconds 36. I like Learning Resources on Facebook. Sweet Seconds 37. I follow Learning Resources on Pinterest. Sweet Seconds 38. I follow you on Pinterest. Sweet Seconds 39. I follow your blog. And I love the zebra stripes! 40. I follow your TPT store. 41. I follow your awesome blog! 42. I follow Learning Resources on Pinterest! 43. I follow Learning Resources on Pinterest. 44. I follow your adorable blog! 45. I follow your TPT store! 46. I follow you on Pinterest! 47. I like Learning Resources on FB! 48. I follow your blog :) 49. I like Learning Resources on Facebook! 50. I follow your blog! 51. I LIKE Learning Resources on Facebook!! 52. I follow your blog! 53. I follow your TpT store! 54. I follow you on Pintrest! 55. I follow Learning Resources on Pintrest! 56. I like Learning Resources on FB! 57. I follow your blog 58. I follow you on PInterest 59. I follow you on Teachers Pay Teachers 60. I liked Learning Resources on FB 61. I follow Learning Resources on Pinterest 62. I follow your blog! 63. I follow your TpT store! 
64. I follow Learning Resources on twitter! 65. I follow you on Pinterest! 66. I am a GFC Follower (katja9_10). katja9_10 at hotmail dot com 67. I follow your blog! 68. I follow your blog! ❤ Sandra Sweet Times in First 69. I follow you on Pinterest! ❤ Sandra Sweet Times in First 70. I follow LR on FB! ❤ Sandra Sweet Times in First 71. I follow LR on Twitter! ❤ Sandra Sweet Times in First 72. I follow you on TpT! ❤ Sandra Sweet Times in First 73. I follow LR on Pinterest! ❤ Sandra Sweet Times in First 74. I follow First Grade Fever! 75. I follow your TpT Shop! 76. I follow Learning Resources on Twitter! 77. I "liked" Learning Resources on Facebook! 78. I follow Learning Resources on Pinterest! 79. I follow you on Pinterest! 80. I follow your blog. 81. I follow your TPT store. 82. I follow Learining Resources on FB. 83. I follow you on Pinterest. 84. I follow Learning Resources on Pinterest. 85. Great blog!! 86. Just gave you an award!! Come by and check it out!! 1...2...3...Teach With Me 87. sponsor's fb liker amramazon280 at yahoo dot com 88. your fb liker 89. sponsor's pinterest follower amramazon280 at yahoo dot com 90. your pinterest follower amramazon280 at yahoo dot com 91. Hi Christie! Kristi and I follow your blog and we'd like to give you some awards - One Lovely Blog Award and Versatile Blogger Award! Come and get them! Teaching Little Miracles P.S. Love the chevron! 92. I follow your blog! The First Grade Dream 93. I follow your TPT store! The First Grade Dream
Class: BioMap

Return count of read sequences aligned to reference sequence in BioMap object

Syntax

Count = getCounts(BioObj, StartPos, EndPos)
GroupCount = getCounts(BioObj, StartPos, EndPos, Groups)
GroupCount = getCounts(BioObj, StartPos, EndPos, Groups, R)
... = getCounts(..., Name,Value)

Description

Count = getCounts(BioObj, StartPos, EndPos) returns Count, a nonnegative integer specifying the number of read sequences in BioObj, a BioMap object, that align to a specific range or set of ranges in the reference sequence. The range or set of ranges are defined by StartPos and EndPos. StartPos and EndPos can be two nonnegative integers such that StartPos is less than EndPos, and both integers are smaller than the length of the reference sequence. StartPos and EndPos can also be two column vectors representing a set of ranges (overlapping or segmented). By default, getCounts counts each read only once. Therefore, if a read spans multiple ranges, that read instance is counted only once. When StartPos and EndPos specify overlapping ranges, the overlapping ranges are considered as one range.

GroupCount = getCounts(BioObj, StartPos, EndPos, Groups) specifies Groups, a row vector of integers or strings, the same size as StartPos and EndPos. This vector indicates the group to which each range belongs. GroupCount is a column vector containing a number of elements equal to the number of unique elements in Groups. GroupCount specifies the number of reads that align to each group, in the ascending order of unique groups in Groups. Each group is treated independently. Therefore, a read can be counted in more than one group.

GroupCount = getCounts(BioObj, StartPos, EndPos, Groups, R) specifies a reference for each of the segmented ranges defined by StartPos, EndPos, and Groups.

... = getCounts(..., Name,Value) returns counts with additional options specified by one or more Name,Value pair arguments.

Input Arguments

BioObj
Object of the BioMap class.
StartPos Either of the following: ● Nonnegative integer that defines the start of a range in the reference sequence. StartPos must be less than EndPos, and smaller than the total length of the reference sequence. ● Column vector of nonnegative integers, each defining the start of a range in the reference sequence. EndPos Either of the following: ● Nonnegative integer that defines the end of a range in the reference sequence. EndPos must be greater than StartPos, and smaller than the total length of the reference sequence. ● Column vector of nonnegative integers, each defining the end of a range in the reference sequence. Groups Row vector of integers or strings, the same size as StartPos and EndPos. This vector indicates the group to which each range belongs. R Vector of positive integers indexing the SequenceDictionary property of BioObj, or a cell array of strings specifying the actual names of references. R must be ordered and have the same number of elements as the unique elements in Groups. If R has the same number of elements as Groups, then all of the entries in R for each unique value in Groups must be the same. Name-Value Pair Arguments Specify optional comma-separated pairs of Name,Value arguments. Name is the argument name and Value is the corresponding value. Name must appear inside single quotes (' '). You can specify several name and value pair arguments in any order as Name1,Value1,...,NameN,ValueN. 'Independent' Logical that specifies whether to treat the ranges defined by StartPos and EndPos independently. If true, Count is a column vector containing the same number of elements as StartPos and EndPos. In this case, a read that spans multiple ranges, is counted once in each range. │ Note: This name-value pair argument is ignored when using the Groups input argument, because getCounts assumes that each group of ranges is independent. 
│ Default: false 'Overlap' Specifies the minimum number of base positions that a read must overlap in a range or set of ranges, to be counted. This value can be any of the following: ● Positive integer ● 'full' — A read must be fully contained in a range or set of ranges to be counted. ● 'start' — A read's start position must lie within a range or set of ranges to be counted. Default: 1 'Spliced' Logical specifying whether short reads are spliced during mapping (as in mRNA-to-genome mapping). N symbols in the Signature property of the object are not counted. Default: false 'Method' String specifying the method to measure the abundance of reads. Choices are: ● 'raw' — Raw counts ● 'rpkm' — Counts of reads per kilobase pairs per million aligned reads ● 'mean' — Average coverage depth computed base-by-base ● 'max' — Maximum coverage depth computed base-by-base ● 'min' — Minimum coverage depth computed base-by-base ● 'sum' — Sum of all aligned bases in all the reads Default: 'raw' Output Arguments Count Either of the following: ● When Independent is false, this value is a nonnegative integer. The integer specifies the number of reads that align to a range or set of ranges (overlapping or segmented) of the reference sequence in BioObj, a BioMap object. Each read is counted only once, even if the read spans multiple ranges. ● When Independent is true, this value is a column vector of nonnegative integers. This vector indicates the number of reads that align to the independent ranges specified by StartPos and EndPos. This vector contains the same number of elements as StartPos and EndPos. GroupCount Column vector containing a number of elements equal to the number of unique elements in Groups. The vector specifies the number of reads that align to each group, in the order of unique groups in Groups. The groups of ranges are treated independently. Therefore, a single read can be counted in more than one group. 
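The 'rpkm' method above normalizes raw counts by range length and sequencing depth. As a rough illustration of that normalization (hypothetical numbers; this is not the toolbox's internal code, and the exact normalization used by getCounts may differ in detail):

```python
def rpkm(raw_count, range_length_bp, total_mapped_reads):
    """Reads per kilobase of range per million mapped reads."""
    kilobases = range_length_bp / 1e3
    millions = total_mapped_reads / 1e6
    return raw_count / (kilobases * millions)

# 200 reads in a 2 kb range, out of 10 million aligned reads:
print(rpkm(200, 2000, 10_000_000))  # 10.0
```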
Examples

Construct a BioMap object, and then return the number of reads that align to at least one base position in two ranges of the reference sequence:

% Construct a BioMap object from a SAM file
BMObj1 = BioMap('ex1.sam');
% Return the number of reads that align to the segmented range 1:50 and 71:100
counts_1 = getCounts(BMObj1,[1;71],[50;100])

counts_1 =

Construct a BioMap object, and then return the number of reads that align to at least one base position in two independent ranges of the reference sequence:

% Construct a BioMap object from a SAM file
BMObj1 = BioMap('ex1.sam');
% Return the number of reads that align to each of the ranges,
% 1:50 and 71:100, independent of each other
counts_2 = getCounts(BMObj1,[1;71],[50;100],'independent',true)

counts_2 =

Notice that the total number of reads reported in counts_2 is greater than the number of reads reported in counts_1. This difference occurs because there are four reads that span the two ranges, and are counted twice in the second example.

Construct a BioMap object, and then return the number of reads that align to two separate groups of ranges of the reference sequence:

% Construct a BioMap object from a SAM file
BMObj1 = BioMap('ex1.sam');
% Return the number of reads that align to a group containing range 30:60,
% and also the number of reads that align to a group containing range 1:10
% and range 50:60
counts_3 = getCounts(BMObj1,[1;30;50],[10;60;60],[2 1 2])

counts_3 =

Construct a BioMap object, and then return the total number of reads aligned to the reference sequence:

% Construct a BioMap object from a SAM file
BMObj1 = BioMap('ex1.sam');
% Return the number of sequences that align to the entire reference sequence

ans =

See Also

align2cigar | BioMap | cigar2align | getAlignment | getBaseCoverage | getCompactAlignment | getIndex

How To

Related Links
July 16th, 2013

An interesting problem I ran into when designing methods to bin and average our anisotropy data into a grid format stemmed from the fast-axis calculations: the range of the arctan function, and the fact that each trend representing the fast-axis orientation can be described by two angles over the azimuth range [0, 360]. Gabi and I thought through two approaches to this problem. Because the first approach is best explained by visuals, and the second requires a combination of visuals and equations, the rest of this blog entry is, somewhat unorthodoxly, shown and described in the following figures.

Also - this is a work in progress. If you have interesting suggestions or see an error in logic, I'd be happy to hear about it and make adjustments to my current set of calculations (which is based on the geometric explanation below).

Approach 1: [figure]

Approach 2: [figure]
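The post's own calculation lives in the figures, but a standard fix for the ambiguity it describes (a fast axis at θ and at θ + 180° is the same axis, and plain arctan only returns half the circle) is to average doubled angles and recover the direction with atan2. A sketch of that approach, which may or may not match the geometric construction in the figures:

```python
import math

def mean_axis(angles_deg):
    """Average axial data (orientations defined modulo 180 degrees).
    Doubling each angle maps theta and theta + 180 to the same direction,
    so the two-angles-per-trend ambiguity disappears before averaging."""
    xs = sum(math.cos(math.radians(2 * a)) for a in angles_deg)
    ys = sum(math.sin(math.radians(2 * a)) for a in angles_deg)
    # atan2 returns the full (-180, 180] range, unlike plain arctan
    return (math.degrees(math.atan2(ys, xs)) / 2) % 180

print(mean_axis([10, 190]))  # the same axis reported twice -> 10
print(mean_axis([150, 10]))  # 150 is the axis -30, so the mean is 170, not 80
```

A naive arithmetic mean of 150 and 10 would give 80, an axis nearly perpendicular to both inputs; the doubled-angle average avoids that.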
Geometry Topics That Engage Students

1:00 PM – 1:15 PM
A Geometry Based Math/Art Course with a Studio Component
Judith Silver, Marshall University; Jonathan Cox, Marshall University
Marshall University offers a four hour freshman honors seminar in mathematics and art, with emphasis on geometry. The beauty and usefulness of mathematics is enhanced by a studio component taught by an art professor. Topics covered include perspective, symmetry, mathematical themes in art, and studio skills.

1:20 PM – 1:35 PM
Geometry in an Historical Frame
Ockle Johnson, Keene State College
In this talk I will describe the course I developed that teaches geometry within an historical context. This approach has many positive features. Students read the first book of Euclid's Elements to review some basic Euclidean geometry. Students learn that important contributions were made by mathematicians from various civilizations. Students discover that geometry has developed over the years and that every development in mathematics, e.g., algebra, calculus, and abstract algebra, has provided new tools and raised new questions for geometry. Students realize that geometry is much broader than they thought and remains an area of active mathematical research and development.

1:40 PM – 1:55 PM
Analyzing Floor Plans: A Geometry Lab
Emma Smith Zbarsky, Wentworth Institute of Technology
I will describe a project that I developed for my geometry course. I collaborated with an architecture professor to select a number of floor plans designed by architecture students. Then I let the math students loose to prepare a lease space analysis applying basic concepts from two and three dimensional geometry.

2:00 PM – 2:15 PM
Symmetry and Shape: Geometry for Non-majors
Penelope Dunham, Muhlenberg College
What should a geometry course for non-majors look like?
In particular, what topics will convey the beauty of geometry and, at the same time, attract students from the humanities who only want to satisfy their general reasoning requirement? My solution is "Symmetry and Shape," a 100-level course that examines geometric concepts as it engages students with hands-on explorations and examples from art and nature. Although I originally designed the course to appeal to students from the arts, it has also been a popular choice for preservice elementary teachers and majors in biology, theatre, and history. This talk will address issues in designing the course, including topic selection and assessment options. I'll list the major topics covered and give examples of innovative assignments, in-class explorations, technology-based labs, and available resources. I'll also describe assessment components, including two portfolio projects: one focused on examples of symmetry from students' environment, and another featuring original student art based on concepts studied in the course (culminating in a display for the campus Arts Week).

2:20 PM – 2:35 PM
Geometry Via Modeling
Marian Anton, Centre College
According to Euclid, if two points are taken at random on the circumference of a circle, then the straight line joining the points falls within the circle. Starting from this example, we show how topological data analysis could reshape the teaching of geometry. In particular, we outline a course in elementary geometry aiming to engage students in learning via modeling.

2:40 PM – 2:55 PM
Using "Arts and Crafts" to Reinforce Geometric Concepts
Kristen Sellke, Saint Mary's University of Minnesota
My geometry course is a whirlwind tour through different geometries such as Euclidean, hyperbolic, analytic, finite, and transformational. Geometry is offered every two years so the students enter the course with a very diverse mathematical background.
This presentation will examine a series of hands-on activities done throughout the course, including constructions, paper folding, and the creation of hyperbolic paper. We will discuss how the goals of the activities (to develop visualization skills for all students, to motivate proof-writing for second-year majors, and to give pre-service teachers examples of activities they can use in their future classes) were met, and look at student responses to the activities.

3:00 PM – 3:15 PM
Using Paper Folding to Explore Euclidean Geometry
Carroll G. Wells, Lipscomb University
Presentation Withdrawn (August 4, 2011)

3:20 PM – 3:35 PM
Kinesthetically Experiencing Geometry
Todd D. Oberg, Illinois College
With the increased emphasis on Transformational Geometry in the PreK-12 curriculum, and the continued need to study Euclidean Geometry, a rethinking of Geometry courses for preservice teachers may be necessary. One way of preparing future teachers, as well as current teachers, is to combine Euclidean and Transformational Geometries into a single study rather than treating each as a separate topic. In this presentation, I will share some paper folding and Patty Paper activities that invite preservice students to more actively engage in the study of Geometry and also provide these students with opportunities to explore both Transformational and Euclidean techniques for creating proofs. In addition, some of these activities can be extended to explore ideas in Non-Euclidean Geometries.

3:40 PM – 3:55 PM
All Hands on Deck: In Praise of Toys
Thomas Q Sibley, St. John's University
Geometry students benefit from "playing" with geometrical objects. I have students use mirrors, basketballs, approximations of hyperbolic planes, both knitted and plastic, and other toys. These experiences help them develop valuable geometrical intuition and make conjectures. I will discuss how I have used hands-on experiences to help students develop geometrical approaches to proofs and understand mathematical ideas.
Nordic Topology Meeting
27 - 28 November 2008

Abstract: If you cover a space by open sets, a continuous function can be given by defining the function on each open set and demanding that the values agree at intersection points. A vector bundle can be defined by defining transition isomorphisms on intersections and demanding that they agree on triple intersections. A two-vector bundle is a natural extrapolation of these ideas, where the isomorphisms occur at triple intersections and the cocycle condition on quadruple intersections. In fact, two-vector bundles are what you get if you - when defining vector bundles - replace the ring of complex numbers by the category of finite-dimensional complex vector spaces, with sum and tensor as operations. Otherwise said, they are to vector bundles what gerbes are to line bundles. The strange fact is that this very naïve setup has homotopy-theoretical content.

The chromatic picture gives a hierarchy of cohomology theories according to how deep a structure of stable homotopy theory the cohomology theory detects. Chromatic filtrations zero, one and infinity have nice geometric interpretations through functions, vector bundles and bordisms, and the geometric origin is important for the analysis of key problems. Such geometric interpretations have been missing in higher finite filtrations. Elliptic cohomology and topological modular forms are examples at chromatic filtration two, and Segal conjectured that there ought to be a geometric interpretation of elliptic cohomology through quantum field theories.

Recently it was shown that two-vector bundles give rise to a cohomology theory which is represented by the algebraic K-theory of topological K-theory ku. By results of Ausoni and Rognes, K(ku) is of (a connected version of) chromatic filtration two. Hence we have a naturally defined geometric theory of the desired sort. The connection to quantum field theories and the set-up of Stolz and Teichner is, however, still mysterious.
There was a hope that an "integration of determinants through loops" construction would give a functor from two-vector bundles to quantum field theories, but this is unfortunately not the case. Whereas commutative rings support determinants, this is not (in the most naïve sense) true for commutative ring spectra. In fact, neither the sphere spectrum nor topological K-theory supports determinants. The latter is important for us since it rules out the conjectured integral functor to quantum field theories. However, the reason for its failure is very interesting: it stems from an observation by Rognes that Ausoni's calculations of K(ku) imply that the group of gerbes on the three-dimensional sphere does not split off as a direct summand of the group of "virtual" two-vector bundles. This leaves one speculating about the geometry of two-vector bundles, even over very simple spaces.

Rationally, this problem vanishes, and Ausoni and Rognes have pushed through the program in this case: giving a virtual two-vector bundle on X is rationally the same as giving its virtual "dimension bundle" and an "anomaly bundle" on the free loop space of X.

The theory of two-vector bundles is in its infancy, and much is still left to explore. In particular, the geometric and analytic aspects are so far largely terra incognita. In my talk I will try to explain some of these ideas and results.
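The transition-function description in the abstract's opening can be made concrete in the classical, uncategorified case. A toy numerical sketch (the transition matrices are made up and tied to no particular bundle) of the Čech cocycle condition that "agreeing on triple intersections" expresses for a rank-2 bundle:

```python
import math

def rot(t):
    # 2x2 rotation matrix as nested lists: a made-up GL(2)-valued transition
    return [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def close(a, b, tol=1e-12):
    return all(abs(a[i][j] - b[i][j]) < tol for i in range(2) for j in range(2))

# A rank-2 bundle over patches U_1, U_2, U_3 is given by transition
# matrices g_ij on double overlaps; "agreeing on triple intersections"
# is the cocycle condition g_12 g_23 = g_13 on U_1 ∩ U_2 ∩ U_3.
g12, g23 = rot(0.3), rot(0.6)
g13 = matmul(g12, g23)                     # chosen so the condition holds
print(close(matmul(g12, g23), g13))        # True: this data defines a bundle
print(close(matmul(g12, g23), rot(1.5)))   # False: generic data does not
```

For a two-vector bundle, as the abstract says, this equality is itself replaced by an isomorphism, and the equation migrates one level up, to quadruple intersections.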
Hermite Polynomial

Background for the Hermite Polynomial

The cubic Hermite polynomial p(x) has the interpolative properties p(a) = f(a), p(b) = f(b), p'(a) = f'(a), and p'(b) = f'(b); that is, it matches both the function values and the slopes of f(x) at the endpoints of the interval [a, b]. The polynomials are named for Charles Hermite (1822-1901), and are referred to as a "clamped cubic," where "clamped" refers to the slope at the endpoints being fixed. This situation is illustrated in the figure below. [figure]

Theorem (Cubic Hermite Polynomial). [The statement and proof appear in the original as images.]

Remark. The cubic Hermite polynomial is a generalization of both the Taylor polynomial and the Lagrange polynomial, and it is referred to as an "osculating polynomial." Hermite polynomials can be generalized to higher degrees by requiring the use of more nodes.

Computer Programs: Hermite Polynomial Interpolation

Example 1. Find the cubic Hermite polynomial or "clamped cubic" that satisfies [conditions given in the original as an image].

More Background. The Clamped Cubic Spline

A clamped cubic spline is obtained by forming a piecewise cubic function which passes through the given set of knots, with the slope fixed at the two endpoints.

Example 2. Find the "clamped cubic spline" that satisfies [conditions given in the original as an image]. Solution 2. [image]

More Background. The Natural Cubic Spline

A natural cubic spline is obtained by forming a piecewise cubic function which passes through the given set of knots, with the second derivative set to zero at the two endpoints.

Example 3. Find the "natural cubic spline" that satisfies [conditions given in the original as an image]. Solution 3. [image]

Old Lab Project (Hermite polynomial interpolation). Internet hyperlinks to an old lab project.

Research Experience for Undergraduates: Hermite Polynomial Interpolation. Internet hyperlinks to web sites and a bibliography of articles.

Download this Mathematica Notebook: Hermite Polynomial Interpolation

Return to Numerical Methods - Numerical Analysis

(c) John H. Mathews 2004
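Since the page's worked examples survive only as images, here is a sketch (in Python rather than the page's Mathematica, with assumed sample data) of building the clamped cubic from endpoint values and slopes via the standard cubic Hermite basis on [0, 1]:

```python
import math

def hermite_cubic(a, b, fa, fb, da, db):
    """Clamped cubic p with p(a)=fa, p(b)=fb, p'(a)=da, p'(b)=db,
    built from the standard cubic Hermite basis functions."""
    h = b - a
    def p(x):
        t = (x - a) / h
        h00 = 2*t**3 - 3*t**2 + 1   # value basis at the left endpoint
        h10 = t**3 - 2*t**2 + t     # slope basis at the left endpoint
        h01 = -2*t**3 + 3*t**2      # value basis at the right endpoint
        h11 = t**3 - t**2           # slope basis at the right endpoint
        return h00*fa + h*h10*da + h01*fb + h*h11*db
    return p

# assumed sample data: clamp to f(x) = sin(x) on [0, 1.5]
a, b = 0.0, 1.5
p = hermite_cubic(a, b, math.sin(a), math.sin(b), math.cos(a), math.cos(b))
print(p(a) == math.sin(a), p(b) == math.sin(b))   # endpoint values reproduced
eps = 1e-6                                        # finite-difference slope check
print(abs((p(a + eps) - p(a)) / eps - math.cos(a)) < 1e-4)
```

The same four interpolation conditions could equally be imposed by solving a 4x4 linear system for the cubic's coefficients; the basis-function form just makes the "clamped" conditions visible directly.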
More by Bartels on 2-Bundles
Posted by Urs Schreiber

A while ago, Toby Bartels had a paper on the arXiv in which a notion of a categorified bundle, a 2-bundle, was defined ($\to$)

Toby Bartels, Categorified gauge theory: 2-bundles

Meanwhile this material has evolved. A draft of a refined version is now available:

Toby Bartels, Higher gauge theory: 2-Bundles (draft)

There are various refinements of the original definition. As far as I am aware, the most central one is that concerning the concept of 2-bundles associated to 2-transitions. This is a matter of picking one of several a priori possible ways of encoding the structure of an ordinary bundle in terms of commuting diagrams, and then internalizing these diagrams in some 2-category of "2-spaces", where a 2-space is, essentially, a smooth category.

In this respect the crucial diagram now is (32) (beware that since this is a draft, I cannot guarantee that this number remains meaningful in the future). This encodes how a given transition between trivial bundles on local patches of a good covering may be associated (in a slightly nonstandard sense) to a bundle on the entire base space. The categorification of this to 2-bundles is given in diagram (119), which of course (that is the whole point of this internalization approach to categorification) looks precisely like the original one, with the only exception that the commutativity condition is replaced with the existence of a coherent 2-isomorphism filling the diagram.

There is a more or less obvious way to define a 2-category of the 2-bundles thus defined. The punchline of the whole enterprise is then supposed to be theorem 3 (currently in section 3.3 "Gerbes"), which says that the 2-category of 2-bundles over some base space is equivalent to a suitable 2-category of nonabelian gerbes over that space. The basic idea here (to my mind) is that a 2-bundle should be to a gerbe what an ordinary bundle is to its "sheaf of local retrivializations".
I have tried to sketch how this should work here. Unfortunately, the proof of this important theorem is not yet given in the present stage of Toby's draft. Apparently an important new ingredient necessary to make this work is that in the definition of morphisms of 2-bundles one uses, instead of naive smooth functors, so-called smooth anafunctors. These are functors between smooth categories which locally are naturally isomorphic to ordinary smooth functors.

There would probably be more to say, but I need to get this entry here finished. If I find the time I might try to elaborate a little on how the notion of 2-transition developed by Toby Bartels relates to the concept of transition which I am using in the theory of 2-transport, as described in section 1.2 of these notes. The kind of (2-)transition defined there immediately gives a relation for instance to bundle gerbes with connection and curving, but also to other structures involving higher order transport. In fact, one can understand the curious definition of a bundle gerbe with connection as defining precisely a 2-trivialization with 2-transition for a line 2-bundle. This is described in these notes.

There would be more to say. But I have to run now.

Posted at April 4, 2006 7:48 PM UTC
First grade math skills set foundation for later math ability

Children who failed to acquire a basic math skill in first grade scored far behind their peers by seventh grade on a test of the mathematical abilities needed to function in adult life, according to researchers supported by the National Institutes of Health. The basic math skill, number system knowledge, is the ability to relate a quantity to the numerical symbol that represents it, and to manipulate quantities and make calculations. This skill is the basis for all other mathematics abilities, including those necessary for functioning as an adult member of society, a concept called numeracy.

The researchers reported that early efforts to help children overcome difficulty in acquiring number system knowledge could have significant long-term benefits. They noted that more than 20 percent of U.S. adults do not have the eighth grade math skills needed to function in the workplace.

"An early grasp of quantities and numbers appears to be the foundation on which we build more complex understandings of numbers and calculations," said Kathy Mann Koepke, Ph.D., director of the Mathematics and Science Cognition and Learning: Development and Disorders Program at the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), the NIH institute that sponsored the research. "Given the national priority on education in science, technology, engineering and math fields, it is crucial for us to understand how children become adept at math and what interventions can help those who struggle to build these skills."

Senior author David C. Geary, Ph.D., of the University of Missouri, Columbia, conducted the research with colleagues Mary K. Hoard, Ph.D., and Lara Nugent, and with Drew H. Bailey, Ph.D., of Carnegie Mellon University, Pittsburgh. The study appears online in PLoS One. These results are part of a long-term study of children in the Columbia, Mo., school system.
Initially, first graders from 12 elementary schools were evaluated on their number system knowledge. Number system knowledge encompasses several core principles:
• Numbers represent different magnitudes (five is bigger than four).
• Number relationships stay the same, even though numbers may vary. For example, the difference between 1 and 2 is the same as the difference between 30 and 31.
• Quantities (for example, three stars) can be represented by symbolic figures (the numeral 3).
• Numbers can be broken into component parts (5 is made up of 2 and 3 or 1 and 4).

The researchers also evaluated such cognitive skills as memory, attention span, and general intelligence. The researchers found that by seventh grade, children who had the lowest scores on an assessment of number system knowledge in first grade lagged behind their peers. They noted that these differences in numeracy between the two groups were not related to intelligence, language skills or the method students used to make their computations.

For the testing at age 13, 180 of the students took timed assessments that included multiple-digit addition, subtraction, multiplication, and division problems; word problems; and comparisons and computations with fractions. Previous studies have shown that these tests evaluate functional numeracy: skills that adults need to join and succeed in the workplace. This might include the limited understanding of algebra needed to make change, such as being able to answer a question like: "If an item costs $1.40 and you give the clerk $2, how many quarters and how many dimes should you get back?" Other aspects of functional numeracy include the ability to manipulate fractions, as when doubling the ingredients in a recipe (for example, adding 1 ½ cups water when doubling a recipe that calls for ¾ cup water) or finding the center of a wall when wanting to center a painting or a shelf.
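As an aside, both everyday calculations quoted above are easy to mechanize; a small Python sketch (assuming standard U.S. quarter and dime values) of the change-making question and the recipe-doubling fraction:

```python
from fractions import Fraction

def change_in_quarters_and_dimes(cost_cents, paid_cents):
    """Greedy change for the article's example: largest coins first."""
    change = paid_cents - cost_cents
    quarters, rest = divmod(change, 25)
    dimes, leftover = divmod(rest, 10)
    return quarters, dimes, leftover

q, d, leftover = change_in_quarters_and_dimes(140, 200)
print(q, d, leftover)        # 2 quarters and 1 dime cover the 60 cents exactly

print(2 * Fraction(3, 4))    # doubling 3/4 cup of water gives 3/2 = 1 1/2 cups
```

The point of the study, of course, is not the code but that the underlying number sense these calculations require traces back to first-grade number system knowledge.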
The researchers' analysis showed that a low score on the assessment of number system knowledge in first grade significantly increased a student's risk of getting a low functional numeracy score as a seventh grader.
{"url":"http://phys.org/news/2013-02-grade-math-skills-foundation-ability.html","timestamp":"2014-04-21T02:35:23Z","content_type":null,"content_length":"70508","record_id":"<urn:uuid:1feb2f37-2c66-4bae-aebe-5bab5a1332f9>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Michael Shulman discrete object An object $A$ of a 2-category $K$ is discrete if the category $K(X,A)$ is equivalent to a discrete set for all objects $X$ of $K$. Discrete objects are also called 0-truncated objects since they are characterized by $K(X,A)$ being a 0-category (a set). More explicitly, an object $A$ is discrete if and only if every pair of parallel 2-cells $\alpha,\beta:f \;\rightrightarrows\; g:X\;\rightrightarrows\;A$ are equal and invertible. If $K$ has finite limits, this can be expressed equivalently by saying that $A\to A^{ppr}$ is an equivalence, where $ppr$ is the “walking parallel pair of arrows.” We write $disc(K)$ for the full sub-2-category of $K$ on the discrete objects; it is equivalent to a 1-category, and is closed under limits in $K$. A morphism $A\to B$ is called discrete if it is discrete as an object of the slice 2-category $K/B$. Revised on June 12, 2012 11:10:00 by Andrew Stacey?
{"url":"http://ncatlab.org/michaelshulman/show/discrete+object","timestamp":"2014-04-19T04:58:33Z","content_type":null,"content_length":"13549","record_id":"<urn:uuid:232e03e4-5e51-43b7-8f75-3bf3914488e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
data Design a Source Eq a => Eq (Design a) Ord a => Ord (Design a) Show a => Show (Design a) incidenceMatrix :: Eq t => Design t -> [[Int]]Source The incidence matrix of a design, with rows indexed by blocks and columns by points. (Note that in the literature, the opposite convention is sometimes used instead.)
{"url":"http://hackage.haskell.org/package/HaskellForMaths-0.3.2/docs/Math-Combinatorics-Design.html","timestamp":"2014-04-18T08:47:48Z","content_type":null,"content_length":"18129","record_id":"<urn:uuid:35f28eb2-095d-4bd1-b5fb-81d02ad62729>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00198-ip-10-147-4-33.ec2.internal.warc.gz"}
Download discrete math through applications instructors manual Download discrete math through applications instructors manual - 0 views #1 Paul Love on 05 Oct 12 Filename: discrete math through applications instructors manual Date: 16/8/2012 Type of compression: zip Total downloads: 11086 Nick: queani File checked: Kaspersky Download speed: 31 Mb/s Price: FREE Instructor's Solutions Manual for Discrete Mathematics with Applications Third Edition by Susanna S. Epp discrete math through applications instructors manual Publisher: International Thomson Publishi; 3rd Ed edition. "A good source of topics for discrete mathematics, and many topics are covered in very good breadth and depth.", H.K. Dai, Oklahoma State University "This text is. Jan 19, 2011 · Discrete Mathematics and Its Applications 2Nd Edition Susanna Solutions Manual document sample discrete mathematics and discrete math through applications instructors manual its applications 6th edition instructor solution manual - Picktorrent.com - Search Torrents and Download Torrents. Download Music, TV … Discrete Mathematics And Its Applications Instructor Solutions Manual rapidshare links available for download. Daily checked working links for downloading discrete. Download discrete mathematics and its applications 6th edition instructor solution manual files from rapidshare, megaupload, mediafire. Working discrete mathematics. Discrete Mathematics and Its Applications sixth edition complete solution manual free PDF ebook downloads. eBooks and manuals for Business, Education,Finance. Instructor's Manual Discrete Mathematics book download Richard Johnsonbaugh Download Instructor's Manual Discrete Mathematics Amazon.com: Discrete Mathematics … New Members: smolnikm joined 12 minutes ago. AndesonGH joined 14 minutes ago. SGMKEV joined 32 minutes ago. parrothead joined 41 minutes ago. expertoffnetu … Discrete Mathematics and Its Applications: Instructor's Manual [Kenneth H. Rosen] on Amazon.com. 
*FREE* super saver shipping on qualifying offers. This text provides. An Introduction to Cryptography, Second Edition (Discrete Mathematics and Its Applications) by Richard A. Mollin cbi00185 James W. Cortada Papers, circa 1890-2007. Finding Aid. Prepared by Stephanie Horowitz, 2007-2010. University of Minnesota Libraries 2007, 2010 * pdf Rosen, Discrete Mathematics and Its Applications, 6th Edition. Rosen, Discrete Mathematics and Its Applications, 6th edition. discrete mathematics and its applications edition instructor solution manual Discrete Mathematics and Its Applications, 7th Edition Publisher: McGraw-Hill. Instructors Manual for Discrete Mathematics With Applications, Third Edition - Free ebook download as PDF File (.pdf), text file (.txt) or read book online for free. Instructor's Resource Guide for Discrete Mathematics and Its Applications By Kenneth H. Rosen Publisher: Mc/Gra.w-Hil.l; 5th edition 2003 | 524 Pages | ISBN. rosen Discrete Mathematics and Its Applications sixth edition solution manual instructor free PDF ebook downloads. eBooks and manuals for Business, …
{"url":"https://groups.diigo.com/group/pelnaemanfo93/content/download-discrete-math-through-applications-instructors-manual-6723372","timestamp":"2014-04-19T12:12:55Z","content_type":null,"content_length":"28384","record_id":"<urn:uuid:c7f463e8-dc65-47a4-8c24-5806f0c6f1be>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Prize-winning Research in Computational Science and Engineering April 11, 2007 Chi-Wang Shu of Brown University received the 2007 SIAM/ACM Prize in Computational Science and Engineering in February at the SIAM Conference on Computational Science and Engineering, Costa Mesa, California. Shu, shown here with SIAM vice president at large David Keyes, was cited for “the development of numerical methods that have had a great impact on scientific computing, including TVD temporal discretizations, ENO and WENO finite difference schemes, discontinuous Galerkin methods, and spectral methods.” Philip J. Davis It was recently my pleasure to step into the office adjacent to mine and congratulate a genial colleague who had just received an important prize: the SIAM/ACM Prize in Computational Science and Engineering. For some time I had wondered what Professor Chi-Wang Shu "did" and with what material he confronted the steady stream of graduate students who have worked with him. By "interviewing" him in preparation for this article, I gained some insight into the work mentioned in the prize citation, along with a tutorial in computational methods for fluids. Despite coming to maturity in China during the disastrous "cultural revolution," which severely downplayed science and technology, Chi-Wang got some mathematics in high school. Happily, this dark period in China's history came to an end in 1976, and the traditional college curriculum was restored. At the age of twenty Shu entered the University of Science and Technology of China, in Hefei, Anhui Province, where he received a bachelor's degree in mathematics in 1982. (One of his teachers in Hefei was Geng-Zhe Chang, whom I knew and collaborated with on several projects.) Applied mathematics courses were available only in the students' last year. Shortly thereafter, Shu came to the U.S. as a graduate student in the mathematics department at UCLA. At the end of his first year, he fell in with Stanley Osher. 
He liked Osher's area of applied mathematics and in 1986 received his PhD under Osher's guidance, with a thesis titled "Numerical Solutions of Conservation Laws." In 1987 Shu accepted a position in the Division of Applied Mathematics at Brown; in rapid succession he achieved professorship and chairmanship, all the while turning out research papers and PhD students. Over the years his research interests have been consistently focused on numerical methods for the solution of "convection-dominated" hyperbolic and mixed, time-dependent partial differential equations arising in fluid theory and other application areas. He has found collaboration easy and natural, working constantly not only with his students but also with such researchers as David Gottlieb, Bernardo Cockburn, Joseph Jerome, and Irene Gamba. Asked how he places himself in the spectrum of mathematicians, Shu replied, "I think of myself as half a numerical analyst, while the other half is more applied: designing algorithms for applications and not necessarily involving analysis. I believe I'm a specialist and not a generalist." In performing computational experiments, he and his group write their own codes in Fortran, relying occasionally on Mathematica for arduous algebraic manipulations. The SIAM/ACM prize (the most recent of a number of honors accorded Shu), was presented in February at the 2007 SIAM Conference on Computational Science and Engineering. The prize recognized his accumulated work, particularly on the TVD temporal discretization, ENO and WENO finite difference schemes, discontinuous Galerkin methods, and spectral methods. In the course of our interview, I learned a bit about each method, including benefits and drawbacks, as well as recent improvements and applications. 
TVD (total variation diminishing) refers to an idea from spatial discretization that allows the scheme to maintain stability in the bounded variation semi-norm for discontinuous solutions, thus avoiding the nonphysical oscillations around discontinuities (Gibbs phenomenon) typical of traditional high-order accurate schemes. Even though a scheme is TVD for the first-order time discretization, however, it is not easy to maintain this stability when a higher-order time discretization is applied. Shu began to design schemes for such time discretization, starting from his PhD thesis; this work led to a paper in SIAM Journal on Scientific and Statistical Computing covering multistep methods and a joint paper with Osher in Journal of Computational Physics (JCP) covering Runge–Kutta methods (both in 1988). Shu has continued to work in this area with students, most notably with Sigal Gottlieb, whose thesis is based mainly on this work. ENO stands for "essentially non-oscillatory"; WENO adds "weighted" to this description. The popular TVD schemes do not increase the bounded variation semi-norm but have the drawback of being limited, basically, to second-order accuracy. ENO schemes were originally developed in a seminal 1987 JCP paper by Ami Harten, Björn Engquist, Osher, and Sukumar Chakravarthy (the first three were among Shu's teachers at UCLA). The idea is essentially to use interpolation stencils adaptively. Traditional schemes typically use a fixed stencil: To interpolate at i, you might use i – 1, i, and i + 1 for a second-order polynomial; of course, you could also use i – 2, i – 1, and i. The idea of ENO is to choose automatically the best stencil to use locally. An ENO scheme can provide high-order accuracy without oscillations around shocks. 
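The TVD time-discretization idea survives today under the name strong-stability-preserving (SSP) Runge–Kutta. As a rough illustration (this is the widely cited third-order Shu–Osher scheme written from its published coefficients, not code from any of the papers above), each stage is a convex combination of forward-Euler steps, so any stability bound satisfied by forward Euler carries over to the full step:

```python
def ssp_rk3_step(L, u, dt):
    # Third-order TVD/SSP Runge-Kutta of Shu and Osher (1988).
    # Each stage is a convex combination of forward-Euler steps,
    # so a TVD (or any norm) bound for forward Euler is preserved.
    u1 = u + dt * L(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * L(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * L(u2))

# Quick sanity check on the scalar ODE u' = -u, whose solution is exp(-t)
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssp_rk3_step(lambda v: -v, u, dt)
```

For a PDE, L(u) would be the spatial discretization (for instance an ENO flux difference); here it is just a scalar right-hand side used to check the accuracy of the time stepping.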
Shu's main work on ENO schemes, described in two JCP papers with Osher, employs a conservative finite difference framework that allows simple and efficient computation for multidimensional problems---the reason for the popularity of ENO schemes. WENO schemes automatically pick linear combinations of the stencils with suitable locally determined weights. Xu-Dong Liu, Osher, and Tony Chan designed the first WENO scheme, in third-order finite volume form, in a 1994 JCP paper. Shu, with his then PhD student Guang-Shan Jiang, formulated a general strategy for finite difference WENO schemes of arbitrary accuracy in a 1996 JCP paper; the fifth-order WENO scheme from this paper is the one used most often in later applications. Finite difference schemes have the advantages of simplicity and low storage requirements; they can be used on domains with regular geometries that can be put onto a very smooth mesh. The finite element method, by contrast, is a bit more complicated to design and harder to code, but it fits arbitrary geometries and accommodates adaptive methods more easily. Shu has been working with his University of Minnesota colleague and friend Bernardo Cockburn for almost 20 years on the design, analysis, and application of the discontinuous Galerkin (DG) finite element method, which is particularly suitable for convection-dominated partial differential equations. In the past few years, the DG method has become tremendously popular among those working in applications. Shu has also been collaborating with his Brown colleague David Gottlieb on spectral methods for discontinuous problems, mainly on the recovery of uniform spectral accuracy from spectral expansion coefficients of discontinuous but piecewise-smooth functions. As to applications, Shu works with people in the computational fluid group at NASA, Langley, in particular with Harold Atkins. He mentioned the exterior of an airfoil as an example of the geometries he considers.
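To make the "weighted" part concrete, here is a sketch of the fifth-order WENO reconstruction in the spirit of the Jiang–Shu paper, using the standard published coefficients; treat it as an illustration rather than a transcription of anyone's production code. Three third-order candidate stencils are blended with weights that collapse onto the smoothest stencils near a discontinuity:

```python
def weno5_left(um2, um1, u0, up1, up2, eps=1e-6):
    # Fifth-order WENO reconstruction of u at the face i+1/2 from the
    # five values u_{i-2}, ..., u_{i+2} (Jiang-Shu smoothness indicators).
    # Three third-order candidate stencils:
    q0 = (2*um2 - 7*um1 + 11*u0) / 6.0
    q1 = (-um1 + 5*u0 + 2*up1) / 6.0
    q2 = (2*u0 + 5*up1 - up2) / 6.0
    # Smoothness indicators (large where a stencil crosses a jump):
    b0 = 13/12*(um2 - 2*um1 + u0)**2 + 0.25*(um2 - 4*um1 + 3*u0)**2
    b1 = 13/12*(um1 - 2*u0 + up1)**2 + 0.25*(um1 - up1)**2
    b2 = 13/12*(u0 - 2*up1 + up2)**2 + 0.25*(3*u0 - 4*up1 + up2)**2
    # Nonlinear weights built from the linear weights (1/10, 6/10, 3/10):
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    s = a0 + a1 + a2
    return (a0*q0 + a1*q1 + a2*q2) / s
```

On smooth data the weights approach the linear values and the blend is fifth-order accurate; across a jump the stencils that straddle the discontinuity receive nearly zero weight, which is what suppresses the Gibbs oscillations.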
Emphasizing the experimental nature of his work, he pointed out that he may get "crazy" ideas for many situations. He tries them out; some work and some do not. In the case of hyperbolic problems, theoretical mathematics sometimes provides guidelines, but they are not always sufficient. Many aspects of algorithm design depend on intuition and heuristics, he said. It is rewarding to see that algorithms are playing important roles in applications. While the "digital wind tunnel" may not have completely replaced the physical wind tunnel, computational methods are making a substantial contribution to the craft of engineering. Shu is delighted by the growing popularity of the high-order, high-resolution schemes for shock calculations and for general convection-dominated problems on which his research is focused. And I, in turn, was delighted to learn of developments in a field in which I was engaged years ago, as a young aerodynamicist at Langley. Philip J. Davis, professor emeritus of applied mathematics at Brown University, is an independent writer, scholar, and lecturer. He lives in Providence, Rhode Island, and can be reached at
Piecewise Cubic Hermite Interpolating Polynomial (PCHIP) for given data and then finding area
Hello all, I am trying to do Piecewise Cubic Hermite Interpolation on the data given below, and then I want to get the area enclosed by the polynomials and the x axis. I think I am misunderstanding the meaning of the coefficients returned by the pchip command, but I'm not sure. Does anyone know what the problem could be?
x = [5.8808 6.5137 7.1828 7.8953]; y = [31.2472 33.9977 36.7661 39.3567]; pp = pchip(x,y)
If I look at pp, it has the form: 'pp' breaks: [5.8808 6.5137 7.1828 7.8953] coefs: [3x4 double] pieces: 3 order: 4 dim: 1 and pp.coefs is -0.0112 -0.1529 4.4472 31.2472 -0.3613 0.0884 4.2401 33.9977 -0.0422 -0.3028 3.8731 36.7661
I think these are the polynomials representing the three intervals. But when I find the y values corresponding to x values using these polynomials, it gives wrong values. It gives negative y values for the second polynomial. Even the third polynomial does not seem to satisfy the points. I used these commands for obtaining the values. For example (for the second polynomial): xs = linspace(6.5137, 7.1828, 200); y = polyval(pp.coefs(2,:),xs);
I want to find the area under the curve covered by this plot; that's why I am trying to find the polynomial. Is there any other way to do it, or if anyone could find the problem in the commands that I am using, please let me know. Bhomik Luthra
0 Comments 1 Answer
Edited by Andrei Bobrov on 17 Jun 2013 Accepted answer
x = [5.8808 6.5137 7.1828 7.8953]; y = [31.2472 33.9977 36.7661 39.3567]; pch = pchip(x,y);
out = fnval(fnint(pch),x([2,3]))*[-1;1]; % if you have 'Curve Fitting Toolbox'
out = integral(@(x)ppval(pch,x),x(2),x(3)); %
2 Comments
Thanks for your valuable answer, but I still have the same doubt. What exactly are pch.coefs? Could you please tell me what they represent? Thanks, Bhomik
Please read the listing of the function ppval (in Command Window: >> open ppval).
b = pp.breaks; c = pp.coefs; l = pp.pieces; k = pp.order;
xx = linspace(b(2),b(3),200);
lx = numel(xx);
xs = reshape(xx,1,lx);
[~,index] = histc(xs,[-inf,b(2:l),inf]);
xs = xs - b(index);        % shift into the LOCAL coordinate of each piece
v = c(index,1);
for i = 2:k
    v = xs(:).*v + c(index,i);   % Horner evaluation in the local variable
end
v = reshape(v,size(xx));   % same result as ppval(pp,xx)
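The point of that listing is that MATLAB stores each piece's polynomial in a local variable s = x - breaks(j). A small language-neutral sketch (Python here, with a made-up quadratic pp-form rather than the asker's data) shows why polyval with the raw x goes wrong and how to get the area once the local coordinate is used:

```python
import numpy as np

def ppval(breaks, coefs, xq):
    # MATLAB-style pp-form evaluation: on piece j the polynomial is
    # written in the LOCAL variable s = x - breaks[j].
    j = np.clip(np.searchsorted(breaks, xq, side='right') - 1, 0, len(coefs) - 1)
    s = np.asarray(xq, dtype=float) - breaks[j]
    v = np.zeros_like(s)
    for k in range(coefs.shape[1]):        # Horner's rule, highest power first
        v = v * s + coefs[j, k]
    return v

def pp_piece_area(breaks, coefs, j):
    # Integral of piece j over its whole subinterval [breaks[j], breaks[j+1]].
    h = breaks[j + 1] - breaks[j]
    n = coefs.shape[1]
    return sum(coefs[j, k] * h**(n - k) / (n - k) for k in range(n))

# pp-form of f(x) = x**2 on [0, 2] with a break at 1:
# piece 0: s**2 with s = x - 0;  piece 1: (s + 1)**2 = s**2 + 2s + 1 with s = x - 1
breaks = np.array([0.0, 1.0, 2.0])
coefs = np.array([[1.0, 0.0, 0.0],
                  [1.0, 2.0, 1.0]])
```

Here ppval(breaks, coefs, np.array([1.5])) returns 2.25 = 1.5**2, whereas np.polyval(coefs[1], 1.5), the raw-x mistake from the question, returns 6.25. pp_piece_area(breaks, coefs, 1) gives 7/3, the integral of x**2 from 1 to 2.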
On the shape of mathematical arguments, Volume 445 - TURKU CENTRE FOR COMPUTER SCIENCE, 1996
Cited by 23 (0 self)
It is generally accepted that in principle it's possible to formalize completely almost all of present-day mathematics. The practicability of actually doing so is widely doubted, as is the value of the result. But in the computer age we believe that such formalization is possible and desirable. In contrast to the QED Manifesto however, we do not offer polemics in support of such a project. We merely try to place the formalization of mathematics in its historical perspective, as well as looking at existing praxis and identifying what we regard as the most interesting issues, theoretical and practical.
- Int. Workshop on First-Order Theorem Proving (FTP'98), Technical Report E1852-GS-981, 1998
Our long-range goal is to implement a program for the machine verification of textbook proofs. We study the task from both the linguistics and deduction perspective and give an in-depth analysis for a sample textbook proof. A three phase model for proof understanding is developed: parsing, structuring and refining. It shows that the combined application of techniques from both NLP and AR is quite successful. Moreover, it allows to uncover interesting insights that might initiate progress in both AI disciplines.
Keywords: automated reasoning, natural language processing, discourse analysis 1 Introduction In [12], John McCarthy notes that "Checking mathematical proofs is potentially one of the most interesting and useful applications of automatic computers". In the first half of the 1960s, one of his students, namely Paul Abrahams, implemented a Lisp program for the machine verification of mathematical proofs [1]. The program, named Proofchecker, "was primarily directed
Help with tiling
• I've just converted one of my C++ AMP algorithms to use tiling, and I got a nice 80% speed up, great! However, I'm unsure if I'm doing it correctly (most efficiently), since my code looks quite different from the various examples I have seen, and would therefore appreciate some feedback. Basically my scenario is that I have a one-dimensional array filled with consecutive 8x8 dct matrices laid out in linear order on which I need to perform idct (basically matrix multiplication). So matrix1 is array[0..63] and matrix 2 is array[64..127] etc.. My current code looks as follows. Note that TS = 64 = 8*8 is the only tile size that works for this implementation (unlike others I have seen where the algorithm works regardless).
array_view<float> coeffs = /*...*/;
float idct[8][8] = /*...*/;
static const int TS = 64;
concurrency::parallel_for_each(coeffs.extent.tile<TS>(), [=](const concurrency::tiled_index<TS>& t_idx) restrict(amp)
{
    using namespace concurrency;
    auto local_idx = t_idx.global % 64;
    auto row = (t_idx.local % 8)[0];
    auto col = (t_idx.local / 8)[0];
    tile_static float local_coeffs[8][8];
    local_coeffs[row][col] = coeffs[t_idx.global];
    float tmp = 0.0;
    for (int i = 0; i < 8; ++i)
    {
        float tmp2 = 0.0;
        tmp2 += local_coeffs[0][i] * idct[row][0];
        tmp2 += local_coeffs[1][i] * idct[row][1];
        tmp2 += local_coeffs[2][i] * idct[row][2];
        tmp2 += local_coeffs[3][i] * idct[row][3];
        tmp2 += local_coeffs[4][i] * idct[row][4];
        tmp2 += local_coeffs[5][i] * idct[row][5];
        tmp2 += local_coeffs[6][i] * idct[row][6];
        tmp2 += local_coeffs[7][i] * idct[row][7];
        tmp += idct[col][i] * tmp2;
    }
    dest[t_idx.global] = tmp;
});
□ Edited by Dragon89 Wednesday, August 22, 2012 5:35 PM Wednesday, August 22, 2012 4:04 PM
• Hi Dragon89, I would suggest the following towards improving the performance of this implementation. □ Increase tile size: This computation processes an 8X8 matrix and a choice of 8X8 tile makes the implementation simple.
However, GPUs rely on availability of a large number of threads to hide latency of global memory accesses or other high latency arithmetic operations. Current GPU hardware typically has a limit on the number of tiles that can be scheduled for concurrent execution on a compute unit (a SIMD engine essentially) and hence small tile sizes may result in inadequate occupancy which in turn may translate into sub-optimal performance. As you mentioned, most of the samples you may have seen work on larger problem sizes which require chunks of data to be iteratively brought into tile_static memory and hence are able to use larger tile sizes in a natural way. I would suggest experimenting with larger tile sizes (256 would be a good number to begin). For example you can use a tile size of 256 and consider it to be 4 teams of 64 threads each processing a different 8X8 matrix. □ Secondly, I noticed that much of the computation performed by each thread is also performed by all other threads in the same "row" (tmp2 += local_coeffs[0][i]*idct[row][0];). It will be interesting to get rid of the redundant computations and have each thread in a row perform this computation for a different "i" and store the result in tile_static memory to be subsequently shared by other threads in that row. Hope this helps Amit K Agarwal □ Proposed as answer by Zhu, Weirong Thursday, August 23, 2012 8:45 PM □ Marked as answer by Dragon89 Thursday, August 23, 2012 9:04 PM Thursday, August 23, 2012 2:11 AM
All replies
• Removing the redundant calculation gave me a 50% speedup and increasing tile size to 256 gave me another ~40%. □ Edited by Dragon89 Saturday, September 01, 2012 8:23 PM Friday, August 31, 2012 6:58 AM
• An interesting thing I found: explicitly using mad is faster than letting the kernel compiler optimize it. How come? i.e.
instead of:
row_sum += dct[y][0][z] * idct[x][0][z];
row_sum += dct[y][1][z] * idct[x][1][z];
row_sum += dct[y][2][z] * idct[x][2][z];
row_sum += dct[y][3][z] * idct[x][3][z];
the following is faster:
row_sum = mad(dct[y][0][z], idct[x][0][z], row_sum);
row_sum = mad(dct[y][1][z], idct[x][1][z], row_sum);
row_sum = mad(dct[y][2][z], idct[x][2][z], row_sum);
row_sum = mad(dct[y][3][z], idct[x][3][z], row_sum);
□ Edited by Dragon89 Saturday, September 01, 2012 8:23 PM Friday, August 31, 2012 7:35 AM
• This is related to IEEE strictness of the floating point behavior in the generated code. This can be controlled by the /fp compiler switch. By default the C++ AMP compiler uses /fp:precise and hence cannot use the faster "mad" hardware instruction (which is not guaranteed to be IEEE strict) and uses a mul + add instead. Compiling with /fp:fast will result in the "mad" instruction being used. If the big /fp:fast hammer (for your entire code) is undesirable, you can explicitly use the mad intrinsic to accelerate specific multiply add operations in your kernel. Amit K Agarwal Friday, September 14, 2012 6:43 PM
Enumerative Geometry and String Theory String theory is a very contentious topic among physicists now, judging by the heat it has generated in scores of related books, articles and blogs over the last few years. Whatever its ultimate fate in physics, string theory has led to a good deal of new mathematics and an upsurge of interest in enumerative geometry. According to the author, the basic question of enumerative geometry is — very broadly speaking — “How many geometric structures of a given type satisfy a given collection of geometric conditions?” For example, what is the number of intersection points of two planar curves in two-dimensional projective space? Enumerative Geometry and String Theory is part of the Student Mathematical Library series, in this case published jointly by the American Mathematical Society and the Institute for Advanced Study's Park City Mathematics Institute , where the lectures on which this book is based were first given. The books in the series are intended to address non-standard mathematical topics accessible to talented undergraduates with two or more years of college mathematics. While many of the books in the SML series would be challenging even for strong undergraduates, this one is over the top. For example, Chapters 4 and 5 are called “crash courses”, and they attempt to cover the basics of topology, manifolds and cohomology in thirty-three pages. Chapter 10 — on mechanics (classical and quantum) — should be called a crash course since it races through a treatment of mechanics based on the action principle in about ten pages. Another indication of the difficulty of the subject matter is the author’s comment that his lectures were heavily attended by graduate students specializing in this area who wished to solidify their knowledge of Gromov-Witten theory. Where is the author going with this? He clearly loves the subject and writes about it with enthusiasm. 
It is a little hard to find the thread, but it goes something like this: In string theory, Calabi-Yau manifolds are ubiquitous and key to building models of the universe. A particular kind of Calabi-Yau manifold is a quintic threefold, which is a hypersurface of degree 5 in four-dimensional projective space. The number of rational curves on the quintic threefold plays a part in a particularly important calculation in string theory. So, how many rational curves of degree d are there on a quintic threefold? For degree d = 1 it was known in the 19th century that there are 2875 lines on the general quintic threefold. Values for d = 2 and d = 3 had been found by the early 1990s, but it appeared that the standard techniques could not handle the general case. In 1991 a group of physicists announced a solution to this problem using ideas from string theory. There was no proof, but since then the calculation has been formalized and understood as the Mirror Theorem. The process of connecting physics and enumerative geometry comes to a conclusion in the final chapter where the author applies quantum cohomology to investigate the enumerative geometry of the complex projective plane. Along the way we get a brief introduction to supersymmetry and topological field theory. Even top notch undergraduates might feel like they'd been hit by a speeding bus by the time they get to the end. The topics flash by very quickly and the thread of the author's argument is difficult to follow. It would have helped considerably if the author had given better indications of his plan along the way, telling us where he's going and how he plans to get there. The author notes that the book is not self-contained. This is something of an understatement. Only a background in calculus and some linear algebra is assumed. Bill Satzer (wjsatzer@mmm.com) is a senior intellectual property scientist at 3M Company, having previously been a lab manager at 3M for composites and electromagnetic materials.
His training is in dynamical systems and particularly celestial mechanics; his current interests are broadly in applied mathematics and the teaching of mathematics.
Im curious about this. Why are there always so many people online in Math than anything else..
• It's a question we've been trying to answer ourselves (rather, an imbalance we're trying to improve). Partly, it's a self-reinforcing cycle. Partly, it's the fact that it seems more people search for math than anything else on Google and such. Partly, it's that we have more partners in math displaying OpenStudy widgets than in the other topics.
• Well thats a epic fail is so many ways. :(
• Math is radical! ~Bumper Sticker
• that is really good question Carniel, It was because a lot people hates math and it was really hard for them. They thought they were going to fail if they don't know the question is. They went to math and post their question and want someone who is really good at math can answer his or her question.
• there's also this dire problem we face called the brain drain. You see a lot of people in other groups ask questions but dont receive answers. Thus they migrate to the Mathematics group in hope of finding one. This results to a decrease in number in the other groups and a gain in Mathematics group. Also, some enthusiasts of the other groups get bored in their groups because a lot of the people are migrating to the Math group, so the enthusiast themselves migrate to the Mathematics group. This is the sad truth of economics my friend.
Electron interactions : a field theoretic generalization of the Gell-Mann-Brueckner theory and a calculation of exchange effects DuBois, Donald F. (1959) Electron interactions : a field theoretic generalization of the Gell-Mann-Brueckner theory and a calculation of exchange effects. Dissertation (Ph.D.), California Institute of Technology. http://resolver.caltech.edu/CaltechETD:etd-02082006-103252 The method of Gell-Mann and Brueckner for treating electron interactions in a degenerate electron gas is generalized using the Feynman-Dyson techniques of field theory. A Feynman propagator is constructed for the effective interaction between electrons which takes into account the polarizability of the medium of unexcited particles in the Fermi sea. The well-known plasmon excitation appears as a singularity in this propagator. The plasmon is seen to be a correlated, resonant oscillation of the electron density field which is damped by transferring its energy to less correlated, multiple excitations. General expressions for the plasmon dispersion relation and for the plasmon level width are derived in terms of the polarizability of the many body medium. The self-energies of the lowest states of the electron gas are discussed by using the adiabatic theorem. This enables us to derive an exact expression for the ground state energy in terms of the polarizability. Because of the degeneracy of the excited states of the non-interacting system, the adiabatic transforms of these states are not stationary states of the interacting system. However, as the momenta of the excited particles approach the Fermi momenta these states become asymptotically stationary. For states with only a few excited particles present an independent particle model is valid with the result that only the Feynman propagator for the physical one particle state is needed. 
This propagator, which is corrected for the virtual polarization of the medium by the particle, provides all the information concerning the energies and damping of the single particle states. The second part of the paper is concerned with the detailed calculation of the effects of the interaction on the properties of an electron gas. The lowest order exchange correction to the plasmon energy is computed and found to be small in all cases of physical interest. However, the lowest order contributions to the plasmon damping are seen to modify the observed cut-off for plasmon excitation in electron energy loss experiments in a not negligible way. In applying the formalism to such experiments we also discuss the stopping power of an electron gas and derive the exact lowest order contribution to the single particle damping rate. Using the self-energy method, the correction to the low temperature specific heat of an electron gas is computed exactly to one higher order in r(s) (the interelectron spacing) beyond the calculation of Gell-Mann. It appears that the series in orders of r(s) converges reasonably well only for r(s) < 2. For r(s) < 0.8 the specific heat is reduced from the value for non-interacting electrons while for r(s) > 0.8 the specific heat is enhanced from this value. The change in sign appears to be a result of the Pauli Principle. We conclude from these calculations that the procedure of expansion in orders of r(s) gives useful results for values of r(s) < 2. A formal calculation of the third order correction to the correlation energy is also carried out which will give a further clue concerning the convergence of the method if the integrations can be evaluated. For intermediate densities (2 < r(s) < 6) the general perturbation approach may still be valid but a different approximation procedure for treating the polarization effects is needed.
Item Type: Thesis (Dissertation (Ph.D.))
Degree Grantor: California Institute of Technology
Division: Physics, Mathematics and Astronomy
Major Option: Physics
Thesis Availability: Public (worldwide access)
Research Advisor(s): Gell-Mann, Murray
Thesis Committee: Unknown, Unknown
Defense Date: 1 January 1959
Record Number: CaltechETD:etd-02082006-103252
Persistent URL: http://resolver.caltech.edu/CaltechETD:etd-02082006-103252
Default Usage Policy: No commercial reproduction, distribution, display or performance rights in this work are provided.
ID Code: 548
Collection: CaltechTHESIS
Deposited By: Imported from ETD-db
Deposited On: 08 Feb 2006
Last Modified: 26 Dec 2012 02:30
Thesis Files: PDF (Dubois_df_1959.pdf) - Final Version
MathGroup Archive: May 2006 [00114] [Date Index] [Thread Index] [Author Index]
Re: When is x^y = != E^(y*Log[x])
• To: mathgroup at smc.vnet.net
• Subject: [mg66262] Re: When is x^y = != E^(y*Log[x])
• From: "Nagu" <thogiti at gmail.com>
• Date: Sat, 6 May 2006 01:54:35 -0400 (EDT)
• References: <e3f555$s5h$1@smc.vnet.net>
• Sender: owner-wri-mathgroup at wolfram.com
x^y = exp(y*log(x)) is the y-th power function of the complex number x, where log(x) denotes a branch of the logarithm. x^y is a multifunction, and there are many determinations of the y-th power of x. A general form looks like x^y = ( exp(y*ln|x| + i y*arg(x)) )*(exp(i 2*pi*y))^k, where k is an integer. If a y-th power function x^y is defined and continuous on an open set S, then it is a branch of the y-th power function. Note that S cannot contain the point zero, since every epsilon-neighborhood of zero contains points with many preimages. The principal branch can be defined as follows: x^y = ( exp(y*ln|x| + i y*arg(x)) ), for -pi < arg(x) < pi. This definition is consistent with the usual ones: if y is an integer, it agrees with the polynomial power, and if y = 1/n, the principal branch is the same as the inverse of x^n.
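The branch structure is easy to see numerically. In Python's cmath (used here just as a convenient stand-in for any system that implements the principal branch), x**y, exp(y*log x) and the extra factor exp(i*2*pi*y*k) behave exactly as described:

```python
import cmath

x, y = complex(-1.0), 0.5                 # the two square roots of -1
principal = cmath.exp(y * cmath.log(x))   # principal branch: log(-1) = i*pi, so this is i
# Other determinations differ by exp(i*2*pi*y*k) with k an integer:
other = principal * cmath.exp(2j * cmath.pi * y * 1)   # k = 1 gives -i
built_in = x ** y                         # the built-in power also uses the principal branch
```

Both principal and other square to -1, and built_in agrees with principal. For integer y the extra factor exp(i*2*pi*y*k) equals 1, which is why integer powers are single-valued.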
Reply to comment
Submitted by Anonymous on September 6, 2013.
If p is prime and 2^p - 1 is also prime (a Mersenne prime), then 2^(p-1)(2^p - 1) is an even perfect number. The infinitude of primes alone is not enough, though: 2^p - 1 can be composite for prime p (for example 2^11 - 1 = 2047 = 23*89), so "there are infinitely many even perfect numbers" would follow only from there being infinitely many Mersenne primes, which is an open question. Another open question is whether there are any odd perfect numbers.
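The even half of the story is easy to check by brute force. Below is a small sketch (ordinary trial division, nothing optimized) of Euclid's construction; note the p = 11 case, which shows why "p prime" alone is not enough — 2^p - 1 must itself be prime:

```python
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

def proper_divisor_sum(n):
    return sum(d for d in range(1, n // 2 + 1) if n % d == 0)

perfects = []
for p in [2, 3, 5, 7]:
    m = 2**p - 1                 # Mersenne number
    if is_prime(m):              # only then does Euclid's construction apply
        perfects.append(2**(p - 1) * m)

# p = 11 is prime, but 2**11 - 1 = 2047 = 23 * 89 is composite,
# so this p yields no perfect number.
```

Here perfects comes out as [6, 28, 496, 8128], and each entry equals the sum of its proper divisors. Whether infinitely many Mersenne primes (and hence even perfect numbers) exist remains open.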
Infinite/Maclaurin Series problem!!
February 20th 2013, 04:32 PM #1 Jan 2013 United States
Evaluate the following: Riemann Sum k=2 to infinity, 2/(k^2 - 1). I'm not really sure where to begin... I rearranged the equation to -2/(1 - k^2) and used the Maclaurin series for 1/(1 - x) to obtain -2 * k^(2n). The interval of convergence for k is -1 < k < 1. Now I'm stuck as to how to evaluate it.
Re: Infinite/Maclaurin Series problem!!
Remember partial fractions: ${2\over k^2-1}={1\over k-1}-{1\over k+1}$ Now see if you have a telescoping series. By the way, this is a series, not a Riemann sum.
Re: Infinite/Maclaurin Series problem!!
Achiu17, for k=2 till infinity, Σ[2/(k^2 - 1)] = 3/2. Easy.
February 20th 2013, 08:05 PM #2 Super Member Dec 2012 Athens, OH, USA
February 20th 2013, 08:35 PM #3 Senior Member Feb 2013 Saudi Arabia
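The telescoping can be checked numerically. A quick sketch (plain Python, not tied to anything in the thread): the partial sum up to N should equal 3/2 - 1/N - 1/(N+1), which tends to 3/2:

```python
# sum_{k=2}^{N} 2/(k^2 - 1), using 2/(k^2 - 1) = 1/(k-1) - 1/(k+1):
# all interior terms cancel, leaving 1 + 1/2 - 1/N - 1/(N+1).
N = 10_000
partial = sum(2.0 / (k * k - 1) for k in range(2, N + 1))
closed_form = 1.5 - 1.0 / N - 1.0 / (N + 1)
```

The two values agree to machine precision, and both approach 3/2 as N grows.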