South Gate Calculus Tutors

...I make my students understand the concepts of derivative and integral first. Then I start to derive or integrate functions with them and help them acquire the skills required to deal with them.
11 Subjects: including calculus, statistics, algebra 2, geometry

...I was an algebra tutor at St. John's College for 2 years, a teaching assistant for astronomy at Oklahoma State University for 3 years, a teaching assistant for general physics at OSU for 3 years, and currently I am a physics professor at Marymount College in Palos Verdes, CA. I have had glowing ...
10 Subjects: including calculus, physics, geometry, algebra 1

...I delivered the speeches in English, Spanish or Portuguese. I am passionate about marketing for its impact on an organization's positioning in the global marketplace. Being an engineer, I was hired by a technology company to direct their international marketing operations.
20 Subjects: including calculus, Spanish, linear algebra, algebra 1

...I like to adapt to students' needs: I listen attentively to students in order to form a strategy that helps them cope with mathematics; it is a skill I have acquired through many years of classroom teaching. I have a degree in Physics from the University of Ca...
8 Subjects: including calculus, Spanish, physics, algebra 1

...I love Pre-Calculus. I have extensive experience tutoring Pre-Calculus because it is one of the subjects I am most commonly asked to tutor. Helping students build the mathematical foundation for the challenging material in Calculus is very rewarding, as is working with the talented high school students who take this course.
18 Subjects: including calculus, English, writing, algebra 1
{"url":"http://www.purplemath.com/South_Gate_calculus_tutors.php","timestamp":"2014-04-21T02:41:43Z","content_type":null,"content_length":"23990","record_id":"<urn:uuid:995219af-8a58-47d6-8fe2-ea61f36914c3>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00346-ip-10-147-4-33.ec2.internal.warc.gz"}
Boundedness of Solutions to $\Delta u = f u$ on $\mathbb{R}^2$

Consider the Laplacian $\Delta = \partial^2/\partial x^2 + \partial^2/\partial y^2$ on $\mathbb{R}^2$. The following is true: let $f$ be a nonnegative function, not identically zero. Then any positive solution of $\Delta u = f u$ is unbounded.

Question: Let $g$ be any function such that $\Delta u = g u$ has a positive solution. Is it the case that any positive solution of $\Delta u = (f + g) u$ is unbounded?

Tags: dg.differential-geometry, ap.analysis-of-pdes, laplacian

2 Answers

Answer 1. The answer to the first question is yes. If $u$ is positive and $f$ is non-negative then the right-hand side is non-negative, thus $u$ is subharmonic. A subharmonic function bounded from above must be constant ("Liouville's theorem" for subharmonic functions). If it is constant then the left-hand side is $0$, so the right-hand side is $0$, and this is a contradiction.

Answer 2. The second problem can be solved using the following Liouville-type theorem, which was first used in works related to the De Giorgi conjecture (see for example L. Ambrosio and X. Cabré). Let $u$ be a positive solution of $\Delta u=gu$ and $v$ a bounded solution of $\Delta v=fv+gv$. Then consider $\psi=v/u$. It satisfies $$\operatorname{div}(u^2\nabla \psi)=fu^2\psi.$$ Using $\psi\eta^2$ as a test function with $\eta\in C_0^\infty$, we get $$\int fu^2\psi^2\eta^2+u^2|\nabla\psi|^2\eta^2\leq\int v^2|\nabla\eta|^2.$$ So if $v$ is bounded, you can use a classical technique by choosing a good test function $\eta_R$ with some $\log$ dependence to show the right-hand side goes to $0$ as the radius $R\to\infty$. Note that this is something special to dimension $2$. The existence of a positive solution $u$ is also related to the stability of $\Delta-g$.
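The identity $\operatorname{div}(u^2\nabla\psi)=fu^2\psi$ used in the second answer can be checked directly; a sketch of the computation, assuming only $\Delta u=gu$, $\Delta v=(f+g)v$ and $\psi=v/u$ as above:
$$\Delta v=\Delta(u\psi)=\psi\,\Delta u+2\,\nabla u\cdot\nabla\psi+u\,\Delta\psi=g\,u\psi+2\,\nabla u\cdot\nabla\psi+u\,\Delta\psi,$$
so that
$$u\,\Delta\psi+2\,\nabla u\cdot\nabla\psi=\Delta v-g\,u\psi=(f+g)\,u\psi-g\,u\psi=f\,u\psi.$$
Multiplying by $u$ gives
$$u^2\,\Delta\psi+2u\,\nabla u\cdot\nabla\psi=\operatorname{div}(u^2\nabla\psi)=f\,u^2\psi.$$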
{"url":"http://mathoverflow.net/questions/132259/boundedness-of-solutions-to-delta-u-f-u-on-r2","timestamp":"2014-04-18T14:40:42Z","content_type":null,"content_length":"52933","record_id":"<urn:uuid:929ecdb2-f3fb-489f-8371-20db10f9bf3b>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00510-ip-10-147-4-33.ec2.internal.warc.gz"}
Public Function HyperGeometricRandom( _
  ByVal vNumSmp As Variant _
  , ByVal vPopSuc As Variant _
  , ByVal vNumPop As Variant _
  ) As Variant

Random Number with Hypergeometric Distribution

See also: HyperGeometricRandomTest Subroutine, HyperGeometricInverse Function, HyperGeometricCDF Function, Declarations Topic

vNumSmp: Size of the sample. The number is truncated to the nearest integer. The function returns Null if vNumSmp is less than one (<1) or greater than vNumPop.

vPopSuc: Number of successes in the population. The number is truncated to the nearest integer. The function returns Null if vPopSuc is less than one (<1) or greater than vNumPop.

vNumPop: Size of the population. The number is truncated to the nearest integer. The function returns Null if vNumPop is less than one (<1) or less than either vNumSmp or vPopSuc.

Copyright 1996-1999 Entisoft. Entisoft Tools is a trademark of Entisoft.
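For readers outside Visual Basic, here is a minimal Python sketch of the same behaviour. The function name and the use of NumPy's hypergeometric sampler are illustrative choices, not part of Entisoft's library; the parameter checks mirror the rules described above, returning None where the VB function returns Null.

import math
import numpy as np

def hypergeometric_random(v_num_smp, v_pop_suc, v_num_pop, rng=None):
    """Draw one hypergeometric random number, mirroring the documented checks."""
    # Truncate each argument toward zero, as the documentation describes.
    n_smp, n_suc, n_pop = (math.trunc(v) for v in (v_num_smp, v_pop_suc, v_num_pop))

    # Return None (the VB doc returns Null) for out-of-range arguments.
    if n_smp < 1 or n_smp > n_pop:
        return None
    if n_suc < 1 or n_suc > n_pop:
        return None
    if n_pop < 1 or n_pop < n_smp or n_pop < n_suc:
        return None

    rng = rng or np.random.default_rng()
    # NumPy parameterises the draw as (ngood, nbad, nsample).
    return rng.hypergeometric(n_suc, n_pop - n_suc, n_smp)

# Example: successes observed in a sample of 10 from a population of 50 containing 20 successes.
print(hypergeometric_random(10, 20, 50))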
{"url":"http://www.entisoft.com/ESTools/MathProbability_HyperGeometricRandom.HTML","timestamp":"2014-04-19T22:58:58Z","content_type":null,"content_length":"2630","record_id":"<urn:uuid:ce3abdc0-ccd3-4f6e-940f-285913b842b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00289-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving two floor problems equal each other

Question: Excuse the confusing text, but I don't know how to input math on this forum (yet). My problem is: prove that for a positive real number $x$, $\lfloor\lfloor x/2\rfloor/2\rfloor=\lfloor x/4\rfloor$. I figured I have to prove the cases where the remainder of $x/4$ is 0, 1, 2 or 3. My question is, how do I get started?

Reply 1: It helps to draw a graph: first of $x/2$, then $\lfloor x/2\rfloor$, then $\lfloor x/2\rfloor/2$, and finally $\lfloor\lfloor x/2\rfloor/2\rfloor$. The remainder is not often used for the division of real numbers. I believe it is sufficient to consider two cases: when the function argument has the form $4n+x$ or $4n+2+x$ where $n\in\mathbb{Z}$ and $0\le x<2$. Then, obviously, $0\le x/2<1$ and $0\le x/4<1/2$.

Original poster: We never learned how to draw graphs of floor or ceiling functions, so I can't really prove it that way. I asked my teacher and he said I was on the right track with proving the cases where the remainder is 0, 1, 2 or 3, but I still don't know how to start. How can I create a function that always has a remainder? I'm really lost on this question.

Reply 2: The remainder of a real number divided by 4 is a real number from 0 (included) to 4 (excluded). It is not just 0, 1, 2 or 3. For example, what is the remainder of $\pi$ divided by 4? It may make sense to say that the quotient of $\pi$ and 4 is 0 since $\pi<4$, and the remainder is $\pi$. As I said, remainders are rarely considered for real numbers. To continue my suggestion, suppose that $x=4n+y$ where $n\in\mathbb{Z}$ and $0\le y<2$. Then $x/2=2n+y/2$. Since $0\le y/2<1$, $\lfloor x/2\rfloor=2n$. Therefore, $\lfloor\lfloor x/2\rfloor/2\rfloor=n$. Now show that $\lfloor (4n+y)/4\rfloor=n$. Also, show that $\lfloor\lfloor x/2\rfloor/2\rfloor=\lfloor x/4\rfloor$ when $x=4n+2+y$ where $n\in\mathbb{Z}$ and $0\le y<2$.
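A quick numerical sanity check of the identity being proved (not part of the thread, just an illustration) can be done with Python's math.floor over many random positive reals:

import math
import random

def lhs(x: float) -> int:
    """Nested form: floor(floor(x/2)/2)."""
    return math.floor(math.floor(x / 2) / 2)

def rhs(x: float) -> int:
    """Direct form: floor(x/4)."""
    return math.floor(x / 4)

# Spot-check the identity on many random positive reals.
random.seed(0)
assert all(lhs(x) == rhs(x) for x in (random.uniform(0, 1000) for _ in range(100_000)))
print("floor(floor(x/2)/2) == floor(x/4) held on all samples")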
{"url":"http://mathhelpforum.com/discrete-math/156804-proving-two-floor-problems-equal-each-other.html","timestamp":"2014-04-16T11:15:00Z","content_type":null,"content_length":"45994","record_id":"<urn:uuid:89f2a7a3-4d11-4242-af01-a8f766ba74f0>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
The breath of Julius Caesar

How much air did Julius Caesar breathe in his lifetime?

Fact: He was killed at the age of 56 (Et tu, Brute?).
Fact: On average, a human being uses between 1 and 2 m^3 of air per hour. Part of the exhaled air will be re-inhaled, so let's say it is 1 m^3/hour.
Conclusion: In a 56-year lifetime that is 56 x 365.25 x 24 = ca. 500,000 m^3.

What fraction of the Earth's atmosphere is that?

Fact: The Earth's circumference is 40,000 km, so its surface area is 40000^2/pi = ca. 5 x 10^8 km^2 (a sphere of circumference C has surface area 4 pi R^2 = C^2/pi).
Fact: The air pressure at ground level is 1 kg/cm^2 and the density of air at ground level is 1.3 kg/m^3.
Conclusion: The air column above 1 cm^2 contains 1 kg of air, which occupies 1/1.3 = ca. 0.77 m^3, and 0.77 m^3/(1 cm^2) = 7700 m, so the effective thickness of the atmosphere is 7.7 km. This leads to a total effective atmosphere volume of 7.7 x 5 x 10^8 = 3.85 x 10^9 km^3 = 3.85 x 10^18 m^3.

Thus, the fraction of the atmosphere breathed by Julius Caesar is 5 x 10^5 / (3.85 x 10^18) = 1.3 x 10^-13, so about 1.3 in every 10,000,000,000,000 molecules of the Earth's atmosphere have been inhaled and exhaled by Julius Caesar. The oxygen component of the air is an essential part of biological cycles, but the 80% nitrogen is chemically inert, so let's reduce this result to 10^-13 = 1 in 10,000,000,000,000.

What does this mean? Since the weather is a very turbulent global phenomenon (it takes only a week for a Caribbean hurricane to become a European depression), all of Julius Caesar's exhaled air has been mixed with the atmosphere very homogeneously in the more than 2000 years that have passed since he died. Your own lungs contain ca. 3 to 4 litres of air right now; let's calculate with 4 litres. That is (using Avogadro's law: 6 x 10^23 molecules occupy ca. 25 litres at room temperature) a total of 6 x 10^23 x 4/25 = ca. 10^23 molecules.

Combining these results leads to the following conclusion: wherever you are on Earth, right now, at this very moment, your own lungs contain 10,000,000,000 of the very same molecules that were inhaled and exhaled by Julius Caesar!
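The arithmetic above is easy to reproduce. A short Python sketch using the article's own figures (the constants are the ones stated in the text, not independently sourced):

import math

# Figures taken from the text above.
years = 56                      # Caesar's lifetime
air_per_hour_m3 = 1.0           # effective air use, m^3 per hour
circumference_km = 40_000       # Earth's circumference
pressure_kg_per_cm2 = 1.0       # mass of the air column above 1 cm^2
air_density_kg_per_m3 = 1.3     # air density at ground level
lung_volume_l = 4.0             # air currently in your lungs
molecules_per_25_l = 6e23       # Avogadro's number per ~25 litres at room temperature

breathed_m3 = years * 365.25 * 24 * air_per_hour_m3                           # ~5e5 m^3
surface_km2 = circumference_km ** 2 / math.pi                                 # ~5e8 km^2
effective_thickness_m = (pressure_kg_per_cm2 / air_density_kg_per_m3) / 1e-4  # ~7700 m
atmosphere_m3 = surface_km2 * 1e6 * effective_thickness_m                     # ~3.85e18 m^3

fraction = breathed_m3 / atmosphere_m3                                        # ~1.3e-13
lung_molecules = molecules_per_25_l * lung_volume_l / 25                      # ~1e23
shared_molecules = fraction * lung_molecules                                  # ~1e10

print(f"{fraction:.1e} of the atmosphere, {shared_molecules:.1e} shared molecules")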
{"url":"http://www.henk-reints.nl/caesar.htm","timestamp":"2014-04-17T12:58:32Z","content_type":null,"content_length":"4360","record_id":"<urn:uuid:10d8d76f-cb64-4cdf-ab43-b00cdc3b09b0>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00253-ip-10-147-4-33.ec2.internal.warc.gz"}
What is Apple's Realized P/E ratio?
Oct 12, '12 2:13 PM

The Price/Earnings ratio is a very simple measure of the "value" a company has. The Price is the current share price and the Earnings is usually the sum of the last 12 months' earnings per share. In other words, it measures how many of the last year's earnings are built into the share price. Put yet another way, it's the answer to the question "If earnings don't change, how many years will I have to wait before I'm paid back for my share purchase with retained earnings?" So a company with a P/E of 10 implies that if nothing changes, in 10 years a share owner would "earn" back the price they paid for the share. Any earnings after 10 years would be "profit" for the share owner.

You can imagine it even more simply as buying not shares but an actual small business of your own. You pay up front for it and then wait until it pays you back. After getting paid back for the initial purchase you then make money that you can set aside.

Obviously this figure of P/E is very sensitive to growth in earnings. Consider paying $100 for a share of a company having just earned $10/share last year. It would have a P/E of 10. If earnings stayed at $10/yr for 10 years, you'd "get your money back" in 10 years. However, if earnings grow at 20% then next year the earnings would be $12, then $14.4, then $17.3, then $20.7, etc. Adding these up means you'd get your $100 back in roughly five years, not 10. So with a company growing at 20% the "realized P/E" is about 5. You realized the price of $100 in five years' worth of earnings.

In the scenario above you paid expecting to wait 10 years but you got paid in five. If that's your retirement plan then you can retire five years early. Not bad.

Let's then look at what Apple gave investors as "realized P/E." If you bought shares on the first Friday of 2006 you would have paid $76.30/share. At the time the company had $9.36 in cash per share, so you actually paid $66.94 for any future earnings. The P/E ratio at the time was around 35. The company went on to earn $2.78 per share in 2006, another $4.63 in 2007, $7.47 in 2008, $10.24 in 2009 and $17.91 in 2010. Then in 2011 it earned $35.11. If you add these values up you realize that Apple reached your price about two thirds of the way into 2011. So the company earned your purchase price in 5.7 years. That becomes the realized P/E if you bought in early 2006.

The following diagram shows the time period as a box encompassing the earnings (and cash) for a share purchase in early 2006.

I repeated this for a purchase in early 2007. The result is a realized P/E of 4.9. I did it for 2008 as well[1] and got 5.4, then 2009 yielded an astonishing 2.9. 2010 was not much different with 3.2. Investing in Apple between 2006 and 2010 meant obtaining a payback period of less than 4.5 years, on average. In other words, regardless of the trailing or forward P/Es being quoted at the time (trailing is illustrated below), buyers actually paid for only about 4.5 years of earnings. In other words, they actually bought Apple for a P/E of about 4.5.

Using our small business analogy, buying Apple in the past few years has meant getting paid back in less than five years. That makes it a very low-risk opportunity.

I could try to repeat the process for purchases from 2011 onward, but it would require making forecasts beyond 2013, something I leave as an exercise to the reader.

1. I had to make assumptions about full-year 2012 and 2013 earnings. I used $55.40 earnings per share for full calendar year 2012 and 60% growth for 2013.
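The payback-period arithmetic described above is easy to mechanize. A minimal Python sketch, using the early-2006 figures from the article as example inputs (the function and variable names are illustrative, not from the article):

def realized_pe(price, cash_per_share, annual_eps):
    """Years of earnings needed to pay back the effective purchase price.

    Interpolates within the year in which cumulative EPS crosses the price,
    as the article does for the early-2006 purchase.
    """
    remaining = price - cash_per_share
    for year, eps in enumerate(annual_eps):
        if remaining <= eps:
            return year + remaining / eps
        remaining -= eps
    return None  # price not yet earned back over the period given

# Early-2006 purchase: $76.30 share price, $9.36 cash per share, EPS for 2006-2011.
print(realized_pe(76.30, 9.36, [2.78, 4.63, 7.47, 10.24, 17.91, 35.11]))  # ~5.7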
It's surprising that Apple's P/E remained so low even after the visible success of the iPhone… Apple had a hit product, with high margins, in the fastest-growing industry (smartphones), and yet its P/E remains low….

• http://www.isophist.com/ I know that it is really surprising; it depends on how emotional and irrational the stock market is. But isn't it weird that the P/E should be the contrary of what buyers actually pay? Horace, smart as ever, shows that even if Apple has a P/E of about 15, the realized P/E is about 5 and diminishing. As a consequence, we all say that the P/E should be higher because the stock is low risk: the lower the realized P/E, the higher the price, and hence the P/E, should be. So the P/E is just an index of risk: the higher it is, the lower Wall Street sees the risk of investing in that company; the lower it is, the higher the perceived risk. But the realized P/E is the outcome of that risk, and history says that Apple was not risky at all. Rational people should therefore lower their risk expectations, so the P/E should increase based on historic data. But the irrational reasoning goes like this: this success cannot continue (why? no one knows, but it has run through pundits' commentary since 1992/93), so the greater the success, the higher the risk, because a fall must come. That's nonsense, obviously, but it is the only reason I can imagine for such a low P/E.

You're right, Emilio. It is fear that's keeping investors away from Apple. But it's fear that's purposefully manufactured. There are many who are threatened by Apple's ascendancy, and for good reason! And then there are the rabid anti-Apple hordes, who jump at even the slightest opportunity to find fault, and who swell the blogs with their posts. In this day and age, that is not a force to take lightly. Finally, there are some who are uniquely placed to benefit from volatility in Apple's stock price. And the more the volatility, the more "risk" is associated with a stock. It isn't provable, but the potential is there for these individuals to profit handsomely by tipping the "news" in a positive or negative direction at certain opportune times.

• http://twitter.com/fivetonsflax I am grateful to the fear-mongers, who have allowed me to enlarge my AAPL position at a relatively modest cost. Without them, the stock would have been priced more rationally, and I would not have had the same opportunity to profit. Tongue somewhat in cheek here…

According to my model, if we buy now we would cover the investment by Q4 2017. Not bad!

aharon, can you please share what your model assumptions are? Thanks.
Even Steve Jobs didn’t have that kind of RDF. □ http://twitter.com/JessiDarko You’re onto something. Amazon is a ponzi scheme. I have inside info. Not the kind of stuff that would let me write an expose, but an understanding of the players, such as Bezos, whome I’ve Amazon is a retailer like Walmart, with walmart type margins. But it pretends to be a “high tech” company and gets a high tech PE. Everything about this is designed to get good margins on Amazon’s real product: Their own company stock. So as the board gives Bezos and others more and more stock, that stock sells really handsomly in the market, and the insiders profit. Amazon as a company is never going to be very profitable. It simply can’t, given its business. Further, as a company it is very incompetantly managed, and thus it can’t compete with Apple and other companies that are competantly managed, even if it somehow decided to be a high tech business. Things like the kindle (outsourced design) and AWS (a low margin business, commoditized already.) will never compete with the iPad or even something like Google. Jessica, you’re out of your mind about Amazon. It’s extremely well managed, very strong leader in its field, though today in a low margin business. But also expanding in higher margin Her comments are entirely within the scope of reasoned investment analysis of Amazon. She may not be right, but she is not “out of her mind.” Im more distrustful of opinions from somebody who doesn’t know how to communicate civilly. □ http://twitter.com/JessiDarko I’m out of my mind? I used to work there. I have talked to Bezos. I have seen the quality of people they have managing the company, from Bezos down several levels. I’m speaking from experience. Amazon has an excellent PR team– and this creates the perception that they are a great company and “well managed” and that Bezos is a “visionary”, etc. But the PR is not reality. □ http://www.asymco.com Can you name these higher margin businesses? Rumor today has it that they are targeting TI’s chip business which TI is exiting due to low margins. I imagine that Mr. Bezos believes as many of his shareholders do, that Amazon is inventing a new business model enough ahead of the competition that his share and profitability will grow As Walmart did a few years before him. Will it work as well as he wants? Time will tell. (And I don’t give investment advice.) Can’t even begin to argue with this. Couple this insight with the reality that the severe compression in Apple’s P/E is largely due to huge increases in the E, especially in Apple’s 1st and 2nd quarter (see Andy Zaky’s latest missive), and you see a juggernaut of a company creating massive returns into the far future for those gutsy enough to jump on the Apple freight train. It’s literally rescued my wife and I from a prospective hard-scrabble retirement to one of considerable ease, and I’m certain we’re not alone – and it’s not done yet! Comparison with amazon may be an extreme case, but maybe with msft and goog could be more useful. If stock does not appreciate in accordance with earnings growth, or is fully paid back to stockowners via dividends, this discussion may be theoretical, since those earnings never arrive to the Given the fact that the YoY growth of aapl has been shrinking dramatically in the last Q’s (now approaching 30% from the 95% of some Qs ago), the important factor is the expectancy of future growth. 
Until now, the company has largely beaten those expectations, and thus stockowners have been rewarded ( not in proportion of this growth, the p/e shrinking). Now, how the company will deal with the commoditization of its main products, smartphones and tablets, and how/what will be the post to the next big thing is what matters in first place. My bet: mobile consumption of content growing exponentially, and done mainly from IOS devices, Appl will take a big part of goog business in the mid-term. • http://twitter.com/fivetonsflax If the physical devices are good enough, the product will have to improve along other dimensions. Services? Software? I think both are run at break-even or less—can they be monetized directly? Alternatively, can they convince people to spend more on Apple devices than on competitors with similar hardware specifications? The distribution and manufacturing brilliance help hold the line, but the real prize is new jobs-to-be-done. Isn’t that what people mean when they talk about “vision”? Ask Google whether services are run better than breakeven, or are maybe a growth industry. Ditto, Facebook (its particular valuations notwithstanding). Or Oracle, Salesforce, IBM… these firms are all profitable and growing. Apple *used* to be in the manufacturing business, but got driven out by its small volumes and need to change techniques so rapidly and dramatically. (Think 6502 —> 68000 —> PowerPC —> X86 —> ARM and now whatever it’s doing with its own silicon.) The company seems to have done a superb job in contracting out, but increasingly integrating into, the manufacturing process over the past 5+ years, making investments that are fully rewarded by the sales of devices they manufacture. Manufacturing has gone from being a high-cost nuisance to a moderate-cost facilitator of differentiation and distribution. With manufacturing a modest part of the dollar value of Apple’s product, investments in software, networks and services are taking increasing shares of Apple’s expenditures. Tim Cook’s role is still valuable, but he is CEO primarily for having shown that he can leverage a charter into excellence. I trust that most of his attention is going elsewhere. □ http://twitter.com/JessiDarko I agree with most of what you say, but I think manufacturing has a higher cost than you’re accounting for. While it’s true it is a small sliver of the cost of production, the limited manufacturing capacity Apple has as compared to the android rivals has allowed android to take significant marketing share. I’ve been hoping that Tim Cook is the kind of genius that Jobs is and has been working on a method to get Apple out of the constrained situation it finds itself in so that it can bring supply more in line with demand and thus have much larger market share. I’ve been hoping for a “made with robots” or the acquisition of upstream component suppliers or something to change this situation which, frankly, has been chronic since 2007. But maybe what I want is simply not possible due to the nature of the product and the industry… but the market is flooded with android crap sold on a “buy one get one free” basis, while Apple has trouble meeting demand. That’s a problem that Tim Cook should be uniquely capable of fixing. □ http://search.websonar.com:8080/ I expect that is “job one” but he will want to savour that cake, eating it one piece at a time. □ http://twitter.com/fivetonsflax Google and Facebook make money on services by selling their users’ attention to third parties. 
My favorite thing about Apple, in contrast, is that their user is their customer. • http://twitter.com/Truthwayseeker I don’t expect that Apple will face serious commodization of smart phones and tablets. Apple can win any price war (as someone previously suggested), because of supply chain efficiencies, economies of scale from large production runs, and a large war chest. What kind of commoditisation do you mean? Is it like the commoditisation of a Mercedes Benz car within the automotive industry domain? Or is it like the commoditisation of refrigerators and washing machines? At what stage, and how, will any Apple product become anything even remotely like a commodity? For the last 3 decades, or more, Apple has been a loner maverick defining the best of breed benchmarks in all the product sectors it competes in. That is the very opposite of embracing commoditisation. It is all about embracing differentiation by defining the high ground – the differences that make a difference, if you will. If we harbour any concerns about the commoditisation of the smartphone and tablet, it would be a better use of our anxieties to pity the future of Android and its OHA members. Now there is a crowded market in which the combatants can only ever arrive at eventual commoditisation. High tech white goods in the making. • http://twitter.com/WalterMilliken I don’t think there’s much evidence that Apple is being much affected by commoditization yet, and in recent US market poll data, I see some signs it may actually be pulling share back from Android “commodity” phones now, though I don’t think there’s enough definitive data to call a trend. Outside the US, the market is mostly less mature, and different factors are in play, so it’s hard to extrapolate much. @twitter-14291351:disqus I agree that Apple is likely to push into more services and software to help keep their edge, but I think they may simply keep their current monetization model: buying the device basically gets you free or lost-cost services of significant value to the user, and that enable the device to perform new jobs. More importantly, I think, low app software prices make the risk to the user very low for *trying* to use the device for new jobs, which makes it easy for the hardware to migrate into new niches without Apple doing much at all. If the services and software at least break even, and entice people to join the ecosystem, or at least stay there, then Apple can continue to make money on the hardware margin, which is the user’s entry cost into the value of the larger ecosystem. This is sort of the inverse of the Amazon model, which breaks even on the device at best, and makes money selling the services (content). Apple has also trashed the old desktop economic model, where the software on a machine was often worth significantly more than the machine’s hardware price, at least if the user had more than just the bundled software (and Microsoft made all the money off that part). • http://twitter.com/JessiDarko Earnings not paid out in dividends, but retained by the company benefit shareholders because the cash-per-share ends up going up and up. Which means the shareholder can get that cash by simply selling the shares. (EG: if apple retains $5 per share, then the price of the shares will go up $5, and you will be able to sell the shares for $5 more, and you get the $5…. without there ever being a dividend.) Assuming the PE stays the same, etc. 
But the risk of PE contraction is much less when it is so low that the cash per share is forcing the share price up dramatically all the time! I get your point that, under the assumption that the P/E stays constant, retained earnings will drive up the share price. And Horace’s post about shareholder value relative to P/E assumes (I think) either a dividend payout or share appreciation. I wonder how either of these assumptions are sound. Isn’t it a safe bet that we won’t be seeing a regular dividend from Apple anytime soon? And why should we assume P/E will remain near its present level? I’m essentially questioning how a P/E analysis applies these days to a stock like AAPL. Or, to re-quote Horace, I don’t see how P/E really answers the “If earnings don’t change, how many years will I have to wait until I’m paid back for my share purchase with retained earnings.” I apologize if I’m missing something basic here, as I am somewhat new to this world. But I have already asked one investment analyst about this and his answer was that P/E isn’t a great value metric for a business like AAPL. □ http://twitter.com/WalterMilliken Apple *is* paying a regular dividend now, as of August, I think it was. P/E isn’t necessarily a good metric for *any* company, since it’s really just an emotionally-driven value divided by a fact-based business metric. To some extent, P/E is a “garbage in, garbage out” calculation. Though on the average, and especially for companies whose business is relatively stable (and therefore predictable), the P value can be semi-reasonable. If you look at stable, boring businesses, Apple’s P/E is roughly comparable, which suggests that the emotionally-driven valuation completely discounts any growth possibility, and treats Apple like a company in a mature commodity market where growth only comes from stealing marketshare. This doesn’t seem to be a rational evaluation, at least to most of us who hang out here. And it certainly hasn’t been borne out in the recent past. I think Apple and Amazon’s P/E values have one thing in common — they’re both based almost entirely on future expectations driven by factors other than actual data, i.e. sheer speculation about the future course of their business. Aah, good point about the dividend. It’s so small that I forgot about it! I think most people know that the AAPL dividend will never function in that blue-chip “pay back my investment” manner Horace is talking about. Well, to a new investor anyway. I guess if you bought in at under $100, a mere 5 years ago, then the $10.60 annual dividend is pretty nice. Actually, as a retiree who no longer can risk or add to my holdings, the dividend is very welcome, since it represents income that I don’t need to acquire by selling to “make ends meet”. For my particular circumstance, the more I can hold, and the longer I can hold it, the better (at least until Apple stops growing), and even a small dividend helps me do that. □ http://twitter.com/JessiDarko I suggest you read Vick’s “Invest like Warren Buffett” boosk, or Mary Buffett’s “Buffetology” or “New Buffetology”. Your questions are reasonable, but “debating” them with you will take more bandwidth than this kind of forum allows, but in those books are explanations that will make this clearer to you (And also, I suspect, open your eyes up to a lot of things) □ http://twitter.com/JessiDarko Spock, I think the best bet would be for you to go read some of Horace’s past articles about Apple’s PE compression. 
In your response you said “If earnings don’t change”. But earnings are changing, and growing, but more to the point, so is cash. With the PE as low as it is (and has been) this forces up the stock price in a way that was ably demonstrated by Horace in past articles. But think about it– if Apple’s valued at $100B and it has $105B in cash– then it is priced under its cash, right? Who cares what the PE is (in this hypothetical the PE could be 1,000 because Apple is making no money, or it could be 0.5 because it is making $50B a year in profit.) As the cash goes up, if the price doesn’t go up, the proportion of the price that becomes cash goes up too. Sometimes you see companies where they trade for less than the cash they have on hand– but these are usually companies that are bleeding money really fast and the investors are discounting that cash because they expect management is going to spend it before they could get it. It is axiomatic that a stock with $10 in cash is worth at least $10 presuming the company is profitable…. In fact, it is probably worth much more than that… and even a conservative multiple of price-to-cash is forcing Apple’s stock up dramatically. That’s part of the reason why it has gone from $350 last year to $700 this year. @Sacto_Joe:disqus @twitter-110885782:disqus I certainly see how the dividend is welcome income, Sacto. I was just addressing the idea that it can be a way to cover the outlay for an investment in AAPL stock. Jessica, you’re right to assume I haven’t read Horace’s articles on P/E compression yet. I just started listening to his podcast a couple weeks ago, and since I’m going chronologically starting with the first episode it will be a while before I get caught up. I just started reading the Asymco site a couple days ago. I’m checking out the Buffett books now. Thanks for the recommendation! “Given the fact that the YoY growth of aapl has been shrinking dramatically in the last Q’s (now approaching 30% from the 95% of some Qs ago)” does not appear to be a factual statement. The average growth during fy 2009 was 34%, during fy 2010 was 67%, and during fy 2011 was a remarkable 83%. Assuming that this quarter’s eps come in at about $9.50/share, that would give us an average growth during fy 2012 of 63%, or more than twice the 30% figure you are quoting. You find these numbers by calculating the total earnings for each fiscal year, subtracting the year earlier’s earnings, and dividing the result by the year earlier’s earnings. If aapl profit is 9 and 17 eps on Q4/12, Q1/13 (both numbers, very possible), the YoY growth will have shrinked to 36%. Then, to maintain that 36%, Q2/13 would have to be 20.45 eps (compared with the exceptionally good 12.30 of Q2/12). That would mean returning to 66% YoY growth that Q. Unlikely, although possible if ip5 and pad mini are huge successes. In order to get that 17 eps, christmas Q sales of ip5 should exceed 50m. To keep the 66% YoY growth figure, the ip5 sales of that Q should be a number that Apple has proven it can’t produce right now. (that’s why they should use the cash to increase capex abroad). I may be the bulliest Apple investor out there, but facts are facts. Even so, Apple should still be a good investment, since at that time the P/E shrink should stop, imho. About commoditization: mp3 market is already s commodity. There is no question smart phones and tablets will also be there in the future; and Samsung/Google are doing all that they can to be it that way. 
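The year-over-year growth calculation the commenter describes is straightforward to reproduce. A small illustration (not from the comment; the example EPS figures are the calendar-year values quoted in the article above, so they will not match the fiscal-year percentages cited here):

def yoy_growth(annual_eps):
    """Year-over-year EPS growth: (this year - last year) / last year."""
    return [(cur - prev) / prev for prev, cur in zip(annual_eps, annual_eps[1:])]

# Calendar-year EPS from the article: 2006 through 2011.
eps = [2.78, 4.63, 7.47, 10.24, 17.91, 35.11]
print([f"{g:.0%}" for g in yoy_growth(eps)])  # growth for 2007..2011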
That’s Google business; the cheaper the phones are, the larger its business is. “If aapl profit is 9 and 17 eps on Q4/12, Q1/13 (both numbers, very possible), the YoY growth will have shrinked to 36%.” “I may be the bulliest Apple investor out there, but facts are facts.” Yes, facts are facts. You’re assuming both numbers. I can pull numbers out of the air as well. Doesn’t make them “facts”. The facts are what I’ve already quoted. Apple is likely to have a growth in EPS for fy 2012 of well over 60%. We’ll know exactly how much over in a couple of weeks. And forecasting next year’s earnings on the low growth of Apple’s 3rd and forth quarter this year is not intellectually honest. The reality is that Apple changed the equation in the 4th quarter of fy 2011 by moving the iPhone release date up by three full months. Consequently, fy quarter to fy quarter comparsions no longer make sense. Re: commoditization: It’s way, way too soon since the disruption for commoditization to have taken place. Right now, Apple is struggling just to keep up with demand in spite of its best efforts to increase production of the highest quality devices in the market, for which it can, does, and should charge a premium price. Until it can get on top of production, it won’t even begin to worry about the commoditizers like Google and Samsung – nor should it. Please do primer on Exponention Growth and what happens to the pond when the pond is half full on the 59th day. The fish become rich? Which particular pond are you referring to? □ http://twitter.com/JessiDarko He’s deliberately not referring to a specific pond, because he’s exercising a belief that requires he keep himself carefully ignorant of certain information… like the fact that Apple’s share of the total phone market is %5, or of the total computer market is %10, or that while the iPad is %80 of the “tablet” market, the pond isn’t really “tablets”, but portable computing….where the iPad has a lot of room to grow. It is this kind of careful ignorance that has people believing Apple cannot grow much and thus only merits a 15 PE. @r.d., I am assuming you’re talking about the notion of a container doubling its contents daily, starting from 0.5^60 of the container. Yes, a constant 100% growth per day, 50% full on the 59th day, and no possibility of continuing the growth rate after day 60. I guess. You might have been a bit less obtuse. As @Jessica Darko notes below, the analogy is to a rather different situation, where Apple’s two most significant products decidedly DO have room to grow. I guess I’d say that when Apple sells 3 billion iPhones and 3 billion iPads per day, it will have reached saturation, and your point about a breakdown of growth will be real. I’ll guess that something else is more important first. Horace has noted two things: first, Apple has learned Disruption Theory pretty damn well, and has shown at least *some* signs of even being able to disrupt itself. Just 5 years ago, Apple was a computer company with “ordinary” computers, no phones or tablets. Who knows what products they will offer five years from now? Second, and somewhat related, Horace has noted Apple “skimming” the high-margin phone business, not trying to meet the needs of people whose income and/or needs doesn’t support buying a $400–$800 device. 
This cannot be unconscious, so the question is whether they expect the overall market to grow into their price range, demolishing the cheapo smartphones at about the same rate that feature phones are being replaced, whether they have some strategy for a lower-cost lineup based on Siri but with a minimal screen, or … Whatever, unlike the relatively static Microsoft of the last two decades (making minor, mostly unsuccessful forays into growth markets), or Google, which mostly doubled down on advertising even as it promotes out-there ideas such as cars, all companies are kinda committed to prospering in a very different tech and business climate in 5 years’ time. Linear, or log-linear extrapolation will be a decent projection until it is utterly not. □ http://www.facebook.com/people/Shameer-Mulji/1685212657 “I guess I’d say that when Apple sells 3 billion iPhones and 3 billion iPads per year (a 2-year replacement cycle for the planet), it will have reached saturation, and your point about a breakdown of growth will be real.” Considering that Android is growing rapidly and MS will be coming on strong with Windows Phone 8 / Windows 8, that’s a pretty big if. “Horace has noted at least two: first, Apple has learned Disruption Theory pretty damn well….” It would more correct to say that Steve Jobs learned Disruption Theory very well, plus the fact that he had vision to “see” years down the road and take steps to manifest that vision. As for the rest of the executive team, we don’t know if they have that IT factor yet. Remember that big jump in 08 was the result of abandoning the subscription earnings for the iPhone though… • http://www.asymco.com The data shown includes restatements of previous years’ earnings. To the comments regarding dividends: there seems to be some confusion about retained cash/earnings, dividends, and investor returns. Earnings which are retained and REINVESTED at attractive return-on-invested-capital enhance shareholder wealth. Examples would be investments in profitable new products, or repurchase of shares at a price below intrinsic value. Earnings which are retained and used for low-return ventures squander shareholder wealth. Examples are poor-profitability acquisitions, or sitting on huge hoards of cash earning less than the rate of inflation. This is where Apple is now…letting the value of their previous earnings slip away like sand in an hour-glass as the return does not keep pace with inflation. Hopefully someone in management gets a clue. The final use of capital is giving back some of the profits to the owners of the company; how that turns out financially is then up to each individual investor and how well they deploy that capital. @commoncents wrote, “Hopefully someone in management gets a clue.” While *I* hope that they take their cues from somebody better versed in running real-world businesses for the long term. Apple has done a splendid job of internal, organic growth, reportedly *only* spending $150 million on iPhone/iOS. Committing that amount to an utterly unproven, impossible market in 2005, rather than paying dividends, would’ve scared many investors blind, very likely including you, had you known. I have no inside track on what Apple’s plans are for its $100 billion of cash, but I’m quite happy to have it in the hands of people who have shown themselves genius-level experts on timing new products and new markets. 
The “poor-profitability acquisitions” you cite are non-existent, and instead they are buying and building expertise in silicon, manufacturing, retailing and other core Some day in the next couple of years, I expect to see Apple either pay out half of its mountain of cash (which many investors will piss away on Facebooks, Groupons or other hot deals), or else move into a business where they can bring substantially new value-added. I’m willing to be patient. This idea that they should hold on to what is now roughly $120,000,000,000 so that they can develop new lines of business simply doesn’t hold water. How much cash did it take for them to develop the iPhone? The iPad? I’d guess about 0.1% of that. So why in the world would one expect that the next big thing would somehow require 1000X as much? And it’s not like there isn’t going to be a continuing tsunami of cash coming in. Frankly, if they for some bizarre reason they get to where there isn’t a gusher of new cash continually pouring in, as an investor you’d be quite happy that some of the cash was returned to you during the good ole days. □ http://twitter.com/JessiDarko Dude, it is their cash, not yours. You can’t have it. Sorry. If you don’t like the way Apple is being run, you can resolve the problem very simply: Sell your shares and exit the stock. Then you’ll never have to worry about it again. As for those of us who remain long, we believe you are completely wrong, and you’ve been giving several reasonable arguments for why. Since you’re still arguing it, the only correct solution is for you to sell your shares. □ http://twitter.com/endsofinvention Sorry Jess, but the business is owned by the shareholders not the managers. The cash belongs to the shareholders. That’s why its called share holder because you OWN a share in the company; not because you own some abstract financial instrument. Holding a share gives you the right to say what, how or where a company does what it does. Sure a single share gives you a very small voice but a 1 share shareholder has no fewer rights than someone with a million shares. Well that’s how it works in the first-world.; not sure about the US. • http://twitter.com/WalterMilliken You are making the presumption that Apple can do something with all that cash that actually earns more than keeping pace with inflation. That is a *very* questionable assumption. Horace has analyzed this several times in the past, and the options boil down to: 1) Give the cash to the stockholders (resulting in an immediate -30% return on that money as it gets taxed going back into the US). No thanks. They seem to be returning a large part of the US cash flow now, which has already been taxed, increasing the dividends beyond that will simply transfer shareholder-owned cash to the US government. 2) Buy something worth several 10s of billions of dollars — of which there are very few examples, and none that appear to add significant value to the company. In fact, most giant corporate acquisitions in recent years seem to have had negative returns for shareholders. Look at Microsoft’s giant ad arm writedown this year, or Google’s $12B acquisition of a money-losing phone company with no obvious route to return to profitability. I’m *glad” as a stockholder that Apple isn’t doing this kind of typical mega-corp “buy into new lines of business” nonsense. 3) Invest it in R&D to expand into new areas of business (or increase sales and profits in old ones). But spending that kind of money on more R&D is almost *certain* to be wasted. 
Look at how successful Google and Microsoft are at throwing money into R&D…. I worked in R&D for many years, and there are limits to how much growth into new areas you can constructively absorb. Apple simply has too much cash to invest this way. You are also overlooking that they *are* investing some part of their cash flow in one of the highest-return businesses in the world — Apple. The put some of it to work pre-buying components to get better prices and lock in supplies. Some goes to production equipment that keeps their cutting edge designs ahead of competitors. And some goes into R&D. Ultimately, their *overall* return on investor assets is still excellent, you’re only looking at the part that doesn’t generate these astronomical increases in value. As a stockholder, I’m not at all worried about how they’re using their cash — they seem to be doing much better than just about any other cash-rich company has done, overall. So some of their assets aren’t making any significant profit. But they’re not pouring that money down ratholes, either. I’m very happy with that. The idea that somehow the Apple and its shareholders can benefit by not paying tax on repatriating overseas funds is flawed. Unless somehow those taxes are eliminated (not very likely; we’ll know in a few weeks whether there is even a chance on reductions), you either have to pay the tax at some point, or NEVER see that cash. And look what opportunities Apple has squandered in the meantime. With their EXCESS cash (that which is not reasonably foreseen to be needed for operations) two years ago, even after paying the repatriation taxes, they would have been able to buyback shares around $300; so that money would have basically doubled, even with the taxes. Same thing is likely happening now. While they sit on far, far more money the could reasonably use for expanding Apple’s business, they miss the opportunity to buy shares at today’s low prices. When they finally start getting serious about buy backs, they won’t even be able to buyback half as many shares for any given amount of cash. As to your last comment, I don’t think it is too much to ask them to walk and chew gum at the same time. Fact One: buy producing the best-quality, innovative computing devices, Apple has hit a grandslam home run with their products and ecosystem. Fact Two; buy being ridiculously conservative with the EXCESS cash, they have squandered to opportunity to serve their shareholders even better. Whether you can appreciate this truth or not, let’s leave it at that…so as to not pollute what is generally an excellent places for comments. @ commoncents, I understand your point. I even agree with it to a limited degree (see my statement below). However, as the old saying goes, hindsight is always 20/20. It was, and remains, literally impossible to predict with absolute certainty when Apple’s earnings growth will level off. I can practically guarantee you that even Steve Jobs was surprised by the enormity of Apple’s success, especially with a history like Apple’s in the rear view mirror. And perhaps you are right, and in a few years we’ll look back on this as a missed opportunity to add value by dramatically decreasing Apple’s outstanding stock (or, looked at differently, investing in their own stock). But there’s also the possibility that earnings growth will start to flatline sooner than you or I expect it to. And as an Apple shareholder, that burgeoning cash pile represents the ultimate rainy day fund that will underpin my investment for years to come. 
Now, I’ve already spoken elsewhere of our being on fixed income, with little hope of adding to our savings and every indication that we will now be “burning” savings to make ends meet. So for me, a dividend is a wonderful thing. It means I can afford to sell less AAPL and hold it longer. Heck, if the dividend got big enough fast enough, I would even be able to add to my AAPL holdings! At the same time, I need AAPL to be stable for the long haul, preferably another twenty years or more. And so I’m not in favor of overdoing stock buybacks at this level of cash. If we get to $200 billion, I would be more inclined to agree that stock buybacks and dividends need to be amplified dramatically, holding cash at that level. Just my 2 cent’s worth…. One more thing… …given that AAPL stock is trading at such attractive levels already, and assuming further P/E compression (due to general anxiety that the party must end, Steve Jobs is gone, people freaking out when the share price surpasses $1000/share, etc), if Apple finally gets a clue and starts buying back their shares aggressively, the shareholders could see even far greater profits than they’ve seen to date. Hopefully the next group in the U.S. government makes this decision easier for Apple management by lowering the repatriation taxes on their foreign-held cash; it’s certainly not doing the U.S. any good over there. Share buy-backs are essentially—mathematically identical to—reverse fractional splits plus a one-time dividend. Fewer shares out, money out of the corporate coffers and into investors’ hands. Say, a 5:6 split ratio, resulting in 5 shares for every 6 you hold today, but at the same ~ $660 price because each of the now 5/6ths of a billion shares no longer has $110 of cash each. Except that Apple no longer has an easy way to weather a surprise initiative from Google, Microsoft, Samsung, Lenovo or ???. No way to pick up a major global network over which to distribute content, nor a way to guarantee revenues to producers so as to wean them from the cable companies. They’d be substantially more stuck in their current business, one that @r.d. and others are worried are already closer to saturation. In an industry that’s undergoing such sharp changes, that’d seem the pinnacle of stupidity. This strikes a chord. I’ve recently been reading “Great by Choice” (Collins and Hansen, Random House Business Books) where the data shows that the best results come from sticking to a steady pace through good times and bad. Collins also wrote “Good to Great”, another excellent read. I liked “Good to Great” at the time, but I think it would’ve been better had it been written post-Christensen. Horace has occasionally described how a group of very smart businesspeople all suddenly turned “stupid” simultaneously with the introduction of a new, unexpected technology. I think it’s very helpful to understand companies by breaking up their success into macro, industry and firm effects; G2G mostly talks up the latter. Unquestionably important, especially as it talks about resilience in the face of exogenous changes, but maybe not recognizing how a firm (like Apple) disrupts an industry, even an entire Thanks for the note, tho. I’m sure “Great by Choice” will reflect those factors more. • http://twitter.com/JessiDarko Share buybacks have not helped Microsoft or Intel, despite plowing billions of dollars into these programs, their share prices have not appreciated anywhere close to the amount of money spent. Apple has much better places to deploy its cash. 
The problem is, people think that Apple’s cash is just going to waste– it is not. It is a strategic asset, and at the level they are playing at, havin $100B in cash is extremely valuable. It means they can buy any company they need, should they need to. Thus, Sharp, TSMC, etc, know that Apple could buy a competitor if the don’t play ball… so they play ball. That $100B is earning a return, you just don’t see it. (and of course at the same time, it is also earning returns itself… ) Being able to write a check for $10B for a new manufacturing plant is really, profoundly, strategic. @Jessica Darko wrote, “Share buybacks have not helped Microsoft or Intel…” Of course, the purpose is to help the investors of the companies, who get their money subject only to cap gains tax, and can reinvest it however they see fit. There are all sorts of little “but…”s with this, but every investor can roll his/her own buyback: it’s called a share sale. If you want the stock, but without it holding all the cash, you can make your own synthetic shares by borrowing cash in proportion to the cash that come with the shares. The claim always comes down to the notion that the individual can make better investment decisions than Management. In some cases, this is true. With Apple, it’s really hard to understand the □ http://twitter.com/JessiDarko Share buybacks have not helped investors. The buybacks do not drive up the share price, as you seem to be proposing, in the case of the two companies I mentioned– intel and microsoft– they’ve been flat to down, despite tens of billions of buybacks. In theory they may work as expected, causing a boost to the price and thus more money for the investors– but I don’t think they really work, in practice, consistently enough to recommend one for Apple. But at the end of the day, this is a quibble, we don’t seem to disagree on the real issues. “Share buybacks have not helped investors.” That’s not a smart statement. Or a correct statement. Or one that reflects well on the poster’s investment knowledge. I’m sure you’d be happy to share your wisdom. Why post it as an empty challenge? I myself haven’t evaluated whether the share buybacks helped or hurt investors; I merely go from the notion that investors may be fooled by short-term cosmetics, but what really matters is the actual business success the firm can have, plus their financial decisions, which mostly —in a Modigliani-Miller sort of sense—don’t affect the real risk/reward tradeoffs much at all. So please, share the justification for your claims that “That’s not a smart statement. Or a correct statement.” Seems it’s incumbent on you to raise the ante if you raise the contradiction □ http://twitter.com/JessiDarko I gave examples to support my statement. You have nothing but derogatory characterizations of me, along with a dishonest representation of what I was saying. I consider that a conclusion of the debate with the point in my favor. I don’t see how you thought you could win by refusing to play! I think the discussion on Apple’s cash is appropriate. I would caution all sides to keep it civil. It is quite possible to have a legitimate difference of opinion. For myself, I’ve been puzzling over this issue of Apple’s growing stash for some time. A few things to keep in mind: 1. Apple stock is not being valued properly even if it had no cash. Its future earning capability is not being priced in. 2. One can think of the excess cash being generated as a by-product of an unbelievably successful business plan. 3. 
That business plan is still in effect, and there is every reason to believe that Apple’s cash stash is going to be getting much, much larger. 4. To paraphrase an old saying, if money isn’t used to fertilize green and growning things, then like fertilizer its just so much sh!t. 5. Even as say, a U.S. bond, it’s doing some good. 6. I don’t see money going to taxes as a waste. I may be alone on this forum with that opinion…. 7. I see a legitimate argument in any excess cash (however that should be defined) being distributed back to the investors. 8. Improving the value of the stock by “drying up” shares via a buyback is a means for Apple to invest in themselves. And improving the value of the stock gives more “punch” to stock options, which helps them attract and retain the best available talent. It also “signals” the investment community that they think their stock is worth the price. I probably can think of others, but that should get the ball rolling…. I also suspect strongly that P/E compression is ending. Andy Zaky made this case in his last blog. Discounting future earnings is behind a lot of the compression to date, but there was no ignoring last Holiday Season, just as there’s no ignoring the upcoming Holiday Season. January’s earnings report is going to be mammoth, and everyone knows it, and April’s will be as well, and everybody knows that too. Indeed, the present “selloff” is like the tide receding just before the tidal wave. I’m expecting the P/E to come close to 20 just before earnings in January. And, of course, I’m expecting the P/E to instantaneously drop three or more points when the E part gets posted. By May or June, we’ll see a “compression” again as profit taking rips through. Again, my 2 cent’s worth…. The majority of posters on this board believe Apple to be uniquely prepared to meet the challenges of the smart phone market. Horace did produce a graph a few months ago that showed how many companies have dropped by the way side trying to compete in the smart phone and personal computer market. There were literally dozens of companies that have fallen on hard times. Wall Street is not unreasonably discounting Apples share price after watching this blood bath over the last 30 years. Perhaps that graph is all the explanation as to why Apples PE remains so stubbornly You could say that about any company. Apple has a far better likelihood of future success than most. Ergo, that ‘s not the reason. But is that likelihood of success obvious to the wider market? If the expectation continues to be that someone else will come along and take Apple’s lunch a la Microsoft in the 90s AND that it’s difficult to make money in the mobile arena AND that tech stocks are too risky anyway, then it’s entirely reasonable to assume such expectations are a major cause of the low P/E. We all know that investing is as much about psychology as it is about the numbers being reported. • http://www.asymco.com This makes sense to me. The tech industry overall is discounted deeply and has been so since the crash of 2000. The hangover from the 90s is still felt because the PC industry and IT in general overshot their markets. Devices have also not been a great story so far with notable exceptions. But painting Apple with the same broad brush is assuming a lot. 
We should have another data point at the end of the quarter: how difficult is it for a competitor to come into the market (which it previously got driven out of), after spending many billions in R&D, marketing and a deal with the previous giant of manufacturing? Certainly, the early anecdotes are that well-funded, well-scaled, well-experienced and well-managed Microsoft/Nokia are NOT endangering Apple in any significant way. At this point, Apple can even be said to enjoy a head start in corporations’ development of proprietary apps for “mobile” use, at least as long as “mobile” apps are different from what the Enterprise’s desktops need. I’m frankly astonished that Microsoft hasn’t pulled out all the stops to utterly dominate the business arena, and to make it at least a tossup in the consumer space (where so many devices are now My own guess about the rising concern is whether at- or below-cost Android tablets can squeeze the profit margin out of iPads, or relegate Apple to “skimming” the high-margin, top decile of customer needs. Another possibility is that mutual funds and retirement vehicles have reached the limit of how much Apple stock they can hold. Once these highly regulated companies have 5% in one company they are forced to sell to maintain a low exposure to any one company. As an investor, that’s a good reason to abandon mutual funds, which is exactly what, and why, I did. This is oversimplified, but I thought I would share an opinion. One of the assumptions that I always hear about Apple is that smartphone consumers will get a new phone every 2 years. Historically, this is wrong—look at the demand curve for televisions, phones, cars, laptops etc…. One of 2 things will happen to the smartphone space: pricing will collapse, or the upgrade cycle will slow by 50%, because each new iPhone sans Steve Jobs has been mildly better, but given that most are new to iOS, they won’t notice at first purchase. They will on the second, which is where you will flatten the curve. I don’t pretend to know that date but it is probably evident sometime next year. Assuming you reach the total addressable market, if you slow down that 24 months to 30 months you effectively cut the sales by 25% on a unit basis, or the company will have to eat into the margin, to keep the upgrade cycle going. Previous consumer product messaging upgrade cycles were driven by huge disruptive changes in features: pagers, the original cell phones, Palm Pilot and contacts, the beautiful hardware of the RAZR vs other feature phones, RIMM and messaging/security adoption driven by government and finance, . Then, the market provided the general consumer access to all that iOS could offer-music, an internet browser, applications, photo and video sharing accelerated through superior Data networks, and arguably SIRI (or not, if you have ever used Google NOW). What is next? Many people, Steve Wozniak included, think that the much larger screens were an improvement on that scale; one only need look at the S3 sales to see that as a differentiator. NFC is clearly another feature not included (like LTE last year) that may be included next year, if it fits Tim Cook’s gross margin requirements. We can argue about Android share and China all day long, but personally I think Apple is so bad at the cloud (mobile me, calendar sync issues, maps, etc) in contrast with Google that I feel comfortable saying it is not in their DNA. The slowdown/flatlining will come; its just a question of when. 
That may not be happening yet, but portfolio managers will look out 1 year and if they can see it happening, you have no more buyers for the stock unless Tim Cook miraculously discovers a new market on a different planet. I think the maps disaster was such an unforced error that most managers said “Hey, if the guy does that, there will be more unforced errors on crucial decisions going forward, and there are no huge features coming, I think the party is over here.” And given that Apple’s market cap is so large that finding other risk/reward opportunities around them is easier, you will have a hard time locating the incremental next buyer of the shares. Ultimately, if AAPL misses on numbers this time (because of all the tailwinds), the stock will be at 500 so fast it will make everyone’s heads spin. You may get the mini and the iTV, but that is in every buyer’s script, so the units are priced in. Getting beyond a 10x multiple in a flagrantly disruptive sector is going to be tough and risky. • http://www.asymco.com This is wrong on every point raised. Regarding upgrade cycle time: Phones have had 18 mo. to 2 year cycles for decades. The reason is because they wear out physically and because they break. The iPhone has fewer moving parts than phones used to have but they are still prone to breakage and ports, home buttons and audio jacks wear out. Batteries are another point. I don’t have figures, but battery life does decrease over time and iPhone batteries have a finite rated life. The better analogy would be the difference between “mature” desktops and “mature” notebook computers. Notebooks are replaced far more frequently even if they can still operate reasonably well with existing software. This is also due to the increased wear and tear. A cursory glance at the budgets of corporate IT departments can confirm this. Pricing has historically eroded for phones for decades. That did not affect the ability of innovations to be priced at a premium. Pricing reflects value and has always done so. Your mistake is in assuming that there can be no increase in value because there can no longer be any innovation. The iPhone launched into a commoditized smartphone market and it was perceived as a ridiculous bauble by incumbents. The people who had the sharpest visibility into roadmaps of components, software trends and laboratories full of prototypes could not conceive of what value an iPhone could bring, even when they saw it. Horace, I would be cautious about your reasoning these two points. First, reliability. Good design attacks weak points; that’s where a dollar of engineering usually delivers the best bang-for-the-buck. Five or ten years ago, failure points were spinning disk drives (they’re gone), laptop hinges and wires that go through them (ditto), batteries (hugely improved), assembly failures such as omitted screws or poor solder joints (tackled through better design, materials and processes) and many others that don’t leap to mind. As a kid, I got a parts box of electronics when our console radio caught fire due to some electrical failure; by 25 years ago stereos essentially stopped failing. Today’s devices have benefit of understanding almost a half-century of failure modes, so I no longer have my heart go into my throat when I drop my phone onto a carpeted floor. And I’ve estimated it might add less than $1 to a phone to make it water-resistant, meaning that spilling a beer onto it, or seeing it slip into the sink or lake could/should soon be non-failures, too. 
Reliability is a small, decreasing concern in products of Apple’s quality. Second, pricing of premium ideas. You’ve commented about Apple’s “skimming” strategy; can lower-priced, maybe more job-focussed devices be far behind? This should provide a long horizon for Apple. But at some point, the industry will have created products & institutions that serve people’s needs well; the market for devices will shift to a zero-sum game of Apple vs Googlarola. I have no doubt that Apple would compete very well in that world, but it’s still a low-replacement, relatively static market. If there’s one thing I find most fascinating about Apple, it’s their ability to detect emerging megatrends, and to shape and ride the waves that result. Was this more than exceptionally good luck or individuals’ unique skills? Is that special something somehow pervasive in the company? Apple may have to move to entirely different areas to deliver the type of premium ideas for which they’re famous. This talent is exceptionally difficult to observe, either from the inside (too many biases to cite), or the outside (too little data; too much noise). Here’s where your models help separate trends from disruptions. And I think this is the whole crux of the matter of Apple’s future. The multiple was twice what it is now prior to 2008. Things were just as “flagrantly disruptive” back then, if not more so. • http://www.facebook.com/people/Chris-Greene/620255997 3 months later and the stock opened @ 504. Looks like we have a winner. The problem is that his 2012 number is way too aggressive (as proved by Apple’s earnings), and 60% growth for 2013 is very high. This is a good analysis, but he should use more conservative numbers going forward. I believe the volatility of Apple stock has a lot to do with analysts. They pump up the stock by giving too high an estimate, and then when Apple misses, the stock drops like a stone. I also believe that Apple has a low P/E ratio because of the risk associated with the huge swings of the stock. If Apple did not have such huge swings daily, the stock price would be higher, since it would attract more investors. Currently, the way things are, one needs iron nerves to invest in Apple.
{"url":"http://www.asymco.com/2012/10/12/what-is-apples-realized-pe-ratio/","timestamp":"2014-04-18T15:50:21Z","content_type":null,"content_length":"125306","record_id":"<urn:uuid:49a24e26-6ca7-4cca-b339-6552245388c2>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00259-ip-10-147-4-33.ec2.internal.warc.gz"}
Very fast mixture-model-based clustering using multiresolution kd-trees Results 1 - 10 of 11 - In Proceedings of the 17th International Conf. on Machine Learning , 2000 "... Despite its popularity for general clustering, K-means suffers three major shortcomings; it scales poorly computationally, the number of clusters K has to be supplied by the user, and the search is prone to local minima. We propose solutions for the first two problems, and a partial remedy for the t ..." Cited by 267 (5 self) Add to MetaCart Despite its popularity for general clustering, K-means suffers three major shortcomings; it scales poorly computationally, the number of clusters K has to be supplied by the user, and the search is prone to local minima. We propose solutions for the first two problems, and a partial remedy for the third. Building on prior work for algorithmic acceleration that is not based on approximation, we introduce a new algorithm that efficiently searches the space of cluster locations and number of clusters to optimize the Bayesian Information Criterion (BIC) or the Akaike Information Criterion (AIC) measure. The innovations include two new ways of exploiting cached sufficient statistics and a new very efficient test that in one K-means sweep selects the most promising subset of classes for refinement. This gives rise to a fast, statistically founded algorithm that outputs both the number of classes and their parameters. Experiments show this technique reveals the true number of classes in the underlying distribution, and that it is much faster than repeatedly using accelerated K-means for different values of K. - JOURNAL OF THE AMERICAN STATISTICAL ASSOCIATION , 2000 "... Cluster analysis is the automated search for groups of related observations in a data set. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures and most clustering methods available in commercial software are also of this type. However, there is little ..." Cited by 260 (24 self) Add to MetaCart Cluster analysis is the automated search for groups of related observations in a data set. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as “How many clusters are there?”, “Which clustering method should be used?” and “How should outliers be handled?”. We outline a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology, a... , 2001 "... We present efficient algorithms for all-point-pairs problems, or “N-body”-like problems, which are ubiquitous in statistical learning. We focus on six examples, including nearest-neighbor classification, kernel density estimation, outlier detection, and the two-point correlation. ..." Cited by 90 (12 self) Add to MetaCart We present efficient algorithms for all-point-pairs problems, or “N-body”-like problems, which are ubiquitous in statistical learning.
We focus on six examples, including nearest-neighbor classification, kernel density estimation, outlier detection, and the two-point correlation. - In Twelfth Conference on Uncertainty in Artificial Intelligence , 2000 "... This paper is about metric data structures in high-dimensional or non-Euclidean space that permit cached sufficient statistics accelerations of learning algorithms. ..." - in ACM Multimedia’s Multimedia Information Retrieval Workshop , 2004 "... In this paper, we present an index structure-based method to fast and robustly search short video clips in large video collections. First we temporally segment a given long video stream into overlapped matching windows, then map extracted features from the windows into points in a high dimensional f ..." Cited by 30 (3 self) Add to MetaCart In this paper, we present an index structure-based method to fast and robustly search short video clips in large video collections. First we temporally segment a given long video stream into overlapped matching windows, then map extracted features from the windows into points in a high dimensional feature space, and construct index structures for these feature points for querying process. Different from linear-scan similarity matching methods, querying process can be accelerated by spatial pruning brought by an index structure. A multi-resolution kd-tree (mrkd-tree) is employed to complete exact K-NN Query and range query with the aim of fast and precisely searching out all short video segments having the same contents as the query. In terms of feature representation, rather than selecting representative key frames, we develop a set of spatial-temporal features in order to globally capture the pattern of a short video clip (e.g. a commercial clip, a lead in/out clip) and combine it with the color range feature to form video signatures. Our experiments have shown the efficiency and effectiveness of the proposed method that the very first instance of a given 10-sec query clip can be identified from a 10.5hour video collection in tens of milliseconds. The proposed method has been also compared with the fast sequential search algorithm. - In Proceedings of the 18th International Conf. on Machine Learning , 2001 "... Previous work in mixture model clustering has focused primarily on the issue of model selection. Model scoring functions (including penalized likelihood and Bayesian approxi- mations) can guide a search of the model pa- rameter and structure space. Relatively lit- tle research has addressed th ..." Cited by 15 (1 self) Add to MetaCart Previous work in mixture model clustering has focused primarily on the issue of model selection. Model scoring functions (including penalized likelihood and Bayesian approxi- mations) can guide a search of the model pa- rameter and structure space. Relatively lit- tle research has addressed the issue of how to move through this space. Local optimization techniques, such as expectation maximization, solve only part of the problem; we still need to move between different local optima. , 2003 "... This short report, compiled upon request from Dave Siegrist and Ted Senator, surveys the spectrum of technologies that can help with Biosurveillance. We indicate which we have chosen, so far, to use in our development of analysis methods and our reasons. 1 Time-weighted averaging This is directly ap ..." 
Cited by 4 (2 self) Add to MetaCart This short report, compiled upon request from Dave Siegrist and Ted Senator, surveys the spectrum of technologies that can help with Biosurveillance. We indicate which we have chosen, so far, to use in our development of analysis methods and our reasons. 1 Time-weighted averaging This is directly applicable to a scalar signal (such as “number of respiratory cases today”). This method, more commonly used in computational finance, simply compares the count during the current time period with the weighted average of the counts of recent days. Exponential weighting is typically used, where the half-life is known as the “time window” parameter. This time-window parameter is typically chosen by hand. We prefer the Serfling and Univariate HMM methods described below. 2 Serfling method This method (Serfling, 1963) is a cyclic regression model, and is the standard CDC algorithm for flu detection. It is, again, applicable to scalar signals. It assumes that the signal follows a sinusoid with a period of one year, and thus finds the four parameters of the fit, where the parameters are chosen to minimize the sum of squares of residuals. It is an easy matter of regression analysis to determine, on any date, whether , 2000 "... into a new, fundamentally impossible realm where the data sources are just too large to assimilate by humans. This situation is ironic given the large investment the US has put into gathering scientific data. The only alternative is automated discovery. It is our thesis that the emerging technology ..." Cited by 1 (0 self) Add to MetaCart into a new, fundamentally impossible realm where the data sources are just too large to assimilate by humans. This situation is ironic given the large investment the US has put into gathering scientific data. The only alternative is automated discovery. It is our thesis that the emerging technology of cached sufficient statistics will be critical to developing automated discovery on massive data. A cached sufficient statistics representation is a data structure that summarizes statistical information in a database. For example, human users, or statistical programs, often need to query some quantity (such as a mean or variance) about some subset of the attributes (such as size, position and shape) over some subset of the records. When this happens, we want the cached sufficient statistic representation to intercept the request and, instead of answering it slowly by database accesses over billions of records, answer it immediately. The interesting technical challenge is: given that there "... We also introduce a new algorithm for optimization of similarity-based data. In a problem where only the similarity metric is defined, a gradient is rarely possible. The essence of MBR is the similarity metric among stored examples. The new algorithm, Pairwise Bisection, uses all pairs of stored examples to divide the space into many smaller spaces and uses a nonparametric statistic to decide on their promise. The nonparametric statistic is Kendall's tau, which is used to measure the probability that a given point is at an optimum.
Because it is fundamentally nonparametric, the algorithm is also robust to non-Gaussian noise and outliers. - Journal of the American Statistical Association , 2000 "... Cluster analysis is the automated search for groups of related observations in a data set. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures and most clustering methods available in commercial software are also of this type. However, there is little ..." Add to MetaCart Cluster analysis is the automated search for groups of related observations in a data set. Most clustering done in practice is based largely on heuristic but intuitively reasonable procedures and most clustering methods available in commercial software are also of this type. However, there is little systematic guidance associated with these methods for solving important practical questions that arise in cluster analysis, such as “How many clusters are there?”, “Which clustering method should be used?” and “How should outliers be handled?”. We outline a general methodology for model-based clustering that provides a principled statistical approach to these issues. We also show that this can be useful for other problems in multivariate analysis, such as discriminant analysis and multivariate density estimation. We give examples from medical diagnosis, minefield detection, cluster recovery from noisy data, and spatial density estimation. Finally, we mention limitations of the methodology, a...
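The Serfling method described in the biosurveillance report above is easy to reproduce numerically. The following is a rough sketch, not taken from any of the cited papers: it assumes the usual four-parameter cyclic-regression form (an intercept, a linear trend, and one sine/cosine pair with a one-year period) and fits it by ordinary least squares; the weekly sampling and the synthetic data are choices made purely for illustration.

    import numpy as np

    # Synthetic weekly counts: baseline + trend + annual cycle + noise (illustrative only).
    rng = np.random.default_rng(0)
    t = np.arange(5 * 52)                      # five years of weekly observations
    y = 100 + 0.05 * t + 30 * np.cos(2 * np.pi * t / 52) + rng.normal(0, 5, t.size)

    # Design matrix for a + b*t + c*cos(2*pi*t/52) + d*sin(2*pi*t/52).
    X = np.column_stack([np.ones_like(t), t,
                         np.cos(2 * np.pi * t / 52), np.sin(2 * np.pi * t / 52)])

    # Least squares minimizes the sum of squared residuals, as in the report.
    params, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ params

    # A simple alarm rule: flag dates whose observed count sits far above the fitted curve.
    threshold = 3 * residuals.std()
    print(params, np.where(residuals > threshold)[0])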
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1776987","timestamp":"2014-04-18T21:16:33Z","content_type":null,"content_length":"39625","record_id":"<urn:uuid:5aac50fa-a9cb-46b8-8bb1-bdec7c6da3a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Caristi’s fixed point theorem and selections of set-valued contractions. (English) Zbl 0916.47044 Let $(X,d)$ be a complete metric space and $T:X\to X$ a map which need not be continuous but satisfies $d(x,Tx)\le \phi(x)-\phi(Tx)$ for all $x\in X$, for some lower semicontinuous function $\phi:X\to[0,\infty)$. Caristi’s theorem asserts that such a map $T$ has a fixed point; Caristi proved this result using transfinite induction. W. A. Kirk [Colloq. Math. 36, 81-86 (1976; Zbl 0353.53041)] defined a partial ordering on $X$ by $x\le_{\phi}y$ iff $d(x,y)\le \phi(x)-\phi(y)$ in order to prove this theorem. His proof uses Zorn’s lemma. F. E. Browder [in: Fixed point theorem, Appl. Proc. Sem. Halifax 1975, 23-27 (1976; Zbl 0379.54016)] gave a constructive proof using the axiom of choice only for countable families. R. Mańka [Rep. Math. Logic 22, 15-19 (1988; Zbl 0687.04003)] then gave a constructive proof based on Zermelo’s theorem. The present author gives a simple derivation of Caristi’s theorem from Zermelo’s theorem in case $T$ is continuous. On the other hand, the author describes examples of set-valued contractions which admit (not necessarily continuous) selections which satisfy the assumptions of Caristi’s theorem. Finally, the author answers a question posed by W. A. Kirk by proving the following result: Let $\eta:[0,\infty)\to[0,\infty)$ be a function satisfying $\eta(0)=0$. Then the right hand lower Dini derivative of $\eta$ at 0 (i.e., $\liminf_{s\to 0^{+}}[\eta(s)-\eta(0)]/s=\liminf_{s\to 0^{+}}\eta(s)/s$) vanishes if and only if there is a complete metric space $(X,d)$, a continuous and asymptotically regular mapping $T:X\to X$ which has no fixed points, and a continuous function $\phi:X\to[0,\infty)$ such that $\eta(d(x,Tx))\le \phi(x)-\phi(Tx)$ for all $x\in X$. 47H10 Fixed point theorems for nonlinear operators on topological linear spaces 54H25 Fixed-point and coincidence theorems in topological spaces 47H04 Set-valued operators 03E25 Axiom of choice and related propositions (logic)
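For readers unfamiliar with Kirk's device, the following short verification (not part of the review) spells out why the relation $\le_{\phi}$ above is a partial order; it uses only the triangle inequality. Reflexivity: $d(x,x)=0\le \phi(x)-\phi(x)$. Antisymmetry: if $d(x,y)\le \phi(x)-\phi(y)$ and $d(y,x)\le \phi(y)-\phi(x)$, adding the two inequalities gives $2d(x,y)\le 0$, so $x=y$. Transitivity: $d(x,z)\le d(x,y)+d(y,z)\le (\phi(x)-\phi(y))+(\phi(y)-\phi(z))=\phi(x)-\phi(z)$. The Caristi condition says precisely that $x\le_{\phi}Tx$ for every $x$, so any element that is maximal for $\le_{\phi}$ (producing such an element is where Zorn's lemma or Zermelo's theorem, together with completeness and lower semicontinuity, enters Kirk's argument) must satisfy $Tx=x$.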
{"url":"http://zbmath.org/?q=an:0916.47044","timestamp":"2014-04-21T12:18:27Z","content_type":null,"content_length":"25560","record_id":"<urn:uuid:3b438acc-f34c-4eff-8696-fa24ba92d7bf>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00055-ip-10-147-4-33.ec2.internal.warc.gz"}
What is the simplest oscillatory integral for which sharp bounds are unknown? up vote 6 down vote favorite I have either heard or read that sharp asymptotics and bounds for oscillatory integrals of the form $ \int e^{i \lambda \Phi(x)} \psi(x) dx \quad \lambda \to \infty $ are unknown when the critical points for the phase function are not isolated. If this impression is correct, what are the simplest / most important integrals of this form for which the optimal decay rate and asymptotic have not been proven? E.g. are there examples with $\Phi$ being a polynomial? (I would also appreciate recommendations of references for estimating multidimensional oscillatory integrals if anyone has them.) fourier-analysis harmonic-analysis fa.functional-analysis ca.analysis-and-odes 1 I think C.D.Sogge's Fourier integrals in classical analysis is a good choice. – user23078 Jul 26 '12 at 7:32 add comment 2 Answers active oldest votes As soon as the Hessian is not full rank, the problem becomes quickly messy: • if the hessian has rank $n-1$, then one can treat the one direction separately since we have explicit bound for a one-dimensional integral where the taylor expansion of $\Phi$ near a critical point $x_0$ looks like $(x-x_0)^p$ for any $p\ge 2$, the other directions will always give you $\lambda^{-\frac{1}{2}}$. • when the rank is less, then one must first identify those directions where the phase isn't quadratic, and look at the next terms in the expansion. V.I. Arnold then classifies the simple jets of functions in terms of their corresponding maximal decay in $\lambda$ in the following paper: V.I. Arnold, Remarks on the stationary phase method and coxeter numbers, Russian Math. Surveys, 28 (1973), p. 19 See also J.J. Duistermaat, Oscillatory integrals, Lagrange immersions and unfolding of singularities, CPAM vol XXVII, 207-281 (1974) The classification is algebraic and does not rely on estimating integrals, so it never tells you how to obtain the estimate corresponding to that optimal decay. For the simpler classes up vote 5 of degenerate critical points, Popov has worked on estimating the oscillatory integrals: down vote D.A. Popov, Estimates with constants for some classes of oscillatory integrals, Russian Math. Surveys, 52, pp. 73–145. D.A. Popov, Remarks on uniform combined estimates oscillatory integrals with simple singularities, Izv. Math., 72, pp. 793–816. These papers helped me for the following problem, where the oscillatory integral I had to study had an interesting degenerate behavior. F. Monard-G. Bal "Inverse transport with isotropic time-harmonic sources", SIAM J. Math. Anal., Vol. 44, No. 1, pp. 134-161 (2012). Along the way, another paper I found interesting for direct estimates: G.I. Arkhipov, A.A. Karatsuba, and V.N. Chubarikov, Trigonometric integrals, Izv. Akad. Nauk SSSR Ser. Mat., 43 (1979), pp. 971–1003 (in Russian); Math. USSR-Izv., 15 (1980), pp. 211–239 (in English). (I think they also have a multidimensional counterpart). Wow, thank you so much for the response and all the references! I will have to look into them. – Phil Isett Aug 3 '12 at 5:22 add comment On multidimensional oscillatory integral, A. Averbuch, E. Braverman, M. Israeli, R. Coifman, On Efficient Computation of Multidimensional Oscillatory Integrals with Local Fourier Bases, Preprint submited to Elsevier, 2001. up vote 3 down vote Also, S. Olver, Numerical Approximation of Highly Oscillatory Integrals, PhD Thesis, University of Cambridge, 2008. 1 Thanks for the references. 
I had actually never thought about oscillatory integrals from the point of view of numerical approximation. Right now what I'm mostly interested in is knowing which oscillatory integrals do not even have theoretical bounds that are sharp up to a constant. But I find these interesting. – Phil Isett Jul 27 '12 at 12:12
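As a concrete illustration of the non-degenerate case mentioned in the first answer (each full-rank direction of the Hessian contributes a factor of $\lambda^{-1/2}$), here is a small numerical sketch, not from the thread itself: it evaluates $\int e^{i\lambda x^2}\psi(x)\,dx$ for a Gaussian cutoff $\psi$ and checks that $\sqrt{\lambda}\,|I(\lambda)|$ levels off. The particular cutoff, integration window, and quadrature settings are assumptions for the demo only.

    import numpy as np
    from scipy.integrate import quad

    def I(lam, a=6.0):
        # I(lam) = integral of exp(i*lam*x^2) * psi(x) dx with cutoff psi(x) = exp(-x^2).
        re, _ = quad(lambda x: np.exp(-x**2) * np.cos(lam * x**2), -a, a, limit=2000)
        im, _ = quad(lambda x: np.exp(-x**2) * np.sin(lam * x**2), -a, a, limit=2000)
        return complex(re, im)

    for lam in [1, 4, 16, 64, 256]:
        approx = abs(I(lam))
        exact = np.sqrt(np.pi) * (1 + lam**2) ** -0.25   # closed form for this particular psi
        # Stationary phase predicts |I| ~ sqrt(pi/lam), so sqrt(lam)*|I| should approach sqrt(pi).
        print(lam, approx, np.sqrt(lam) * approx, exact)
    # Much larger lam calls for dedicated oscillatory quadrature, cf. the numerical references above.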
{"url":"http://mathoverflow.net/questions/103138/what-is-the-simplest-oscillatory-integral-for-which-sharp-bounds-are-unknown?sort=oldest","timestamp":"2014-04-21T04:59:56Z","content_type":null,"content_length":"59009","record_id":"<urn:uuid:6cc2f63d-8e8f-46b2-b687-e2d0aade73e9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00462-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: April 2005 [00533] [Date Index] [Thread Index] [Author Index] Re: Re: Infinite sum of gaussians • To: mathgroup at smc.vnet.net • Subject: [mg56173] Re: Re: Infinite sum of gaussians • From: Maxim <ab_def at prontomail.com> • Date: Sun, 17 Apr 2005 03:07:39 -0400 (EDT) • References: <200504120926.FAA27573@smc.vnet.net><d3ibr6$9un$1@smc.vnet.net> <200504141254.IAA28085@smc.vnet.net> <200504150847.EAA11453@smc.vnet.net> <d3qheo$oeh$1@smc.vnet.net> • Sender: owner-wri-mathgroup at wolfram.com On Sat, 16 Apr 2005 08:13:12 +0000 (UTC), Andrzej Kozlowski <akoz at mimuw.edu.pl> wrote: > I now believe the equation is not true on mathematical grounds. But I > would not trust any numerical verifications of it. To see why consider > the following two series. > Sum[Exp[-(30 - k)^2/2], {k, -Infinity, Infinity}] > and > Sum[Exp[-(-k)^2/2], {k, -Infinity, Infinity}] > Now as k runs through integer values from -Infinity to +Infinity -(20 - > k)^2/2 and -(-k)^2/2 must run though precisely the same set of values, > so since the series are absolutely convergent they should be equal. > However, Mathematica gives: > N[FullSimplify[Sum[Exp[-(-k)^2/2],{k,-Infinity,Infinity}]],100] > 2.5066282880429055448306790538639603781474512715189099785077187561072857 > 447639\ > 10390142584776971960969 > N[FullSimplify[Sum[Exp[-(30 - k)^2/2], > {k, -Infinity, Infinity}]], 100] > various messages > -2.0771591956161771304`2.107209964269863*^-48 > so one hundred digits of precision is insufficient to show that these > two values are the same. This problem appears to be very ill posed and > therefore I do not think numerical arguments are convincing. > Nevertheless I think the identity is not satisfied. This can be best > proved by an argument involving Fourier series mentioned in Carl Woll's > posting. However, I would like to return again to my original argument > to try to understand where I went wrong. Consider again the function > f[z_] := Sum[E^((-(1/2))*(z - k)^2), > {k, -Infinity, Infinity}] - Cos[2*Pi*z]* > (EllipticTheta[3, 0, 1/Sqrt[E]] - Sqrt[2*Pi])-Sqrt[2 Pi] > we certainly have > FullSimplify[f[0]] > 0 > What sort of function is f? Well, it is clearly not complex analytic. > In fact the sum Sum[E^((-(1/2))*(z - k)^2), > {k, -Infinity, Infinity}] cannot converge for all complex z, since > E^((-(1/2))*(z - k)^2) is certainly complex analytic as a function of > k, and one can prove that if g is a complex analytic function in the > entire complex plane we must have Sum[g[k],{k,-Infinity,Infinity}]==0. > So f is not defined for all complex values of z. But it is defined for > all real values and the function so obtained is real analytic. I think > I can prove that but I admit I have not considered this carefully. But > if it is a real analytic function it is also determined by its value at > just one point and the valus of the derivatives at that point. Note > also that the function f has obviously period 1. > So let's consider again what happens that the point 0. We know that the > f itself takes the value 0 there. > Mathematica also returns: > FullSimplify[D[f[x], {x, 3}] /. x -> 0] > Sum[k^3/E^(k^2/2) - (3*k)/E^(k^2/2), > {k, -Infinity, Infinity}] > This is clearly zero, and so are all the odd derivatives. What about > the even ones. Well, I believe now i was wrong to say that they are 0 > but I think they are extremely small. Let's look again at the second > derivative: > FullSimplify[D[f[x], {x, 2}] /. 
x -> 0] > Sum[k^2/E^(k^2/2) - E^(-(k^2/2)), {k, -Infinity, > Infinity}] + 4*Pi^2*(-Sqrt[2*Pi] + > EllipticTheta[3, 0, 1/Sqrt[E]]) > N[%] > -2.2333485739105708*^-14 > I think this really is an extremly small number rather than 0. If this > is indeed so and if the function is really real analytic, as I believe > than we can see what happens. The function is 0 for integer values of > x. For non-integer x it can be expressed as a power series in odd > powers in x-Floor[x], with extremely small coefficients. So the values > of f remain always very close to zero, to an extent that is impossible > to reliably determine by numerical means. > Andrzej Kozlowski This is wrong on several points. In fact, Sum[E^(-(z - k)^2/2), {k, -Infinity, Infinity}] is analytic everywhere in the complex plane. Since we already know that this sum is equal to Sqrt[2*Pi]*EllipticTheta[3, Pi*z, E^(-2*Pi^2)], all its properties, including analyticity, follow from the properties of EllipticTheta. Actually, since the sum of E^(-(z - k)^2/2) is very well-behaved (the terms decay faster than, say, E^(-k^2/4)), it is trivial to prove the uniform convergence in z and therefore the validity of the termwise differentiation as well as analyticity directly. The fact that the series is double infinite is of no importance; we can always rewrite it as two series from 1 to +Infinity. Also it's not correct that a real infinitely differentiable function can be defined by its value and the values of its derivatives at a point. If we take f[z] == E^-z^-2 for z != 0 and f[0] == 0, then all the (real) derivatives at 0 vanish. As to using N[Sum[...], prec], there is probably a minor inconsistence in the semantics: if N has to call NSum, then it cannot guarantee that all (or even any) of the returned digits will be correct, unlike, say, N[Pi, prec]. WorkingPrecision in NSum, NIntegrate, NDSolve, etc. only determines the precision used at each computation step; while it is useful for keeping track of precision (for example, may detect the loss of digits), in general it cannot give a reliable estimate of the error in the final I believe it is better to just use NSum, where we have more control over the settings: for example, NSum[Exp[-(30 - k)^2/2], {k, -Infinity, Infinity}, NSumTerms -> 50, WorkingPrecision -> 100] gives the result with 100 digits of precision, that is, no digits are lost at all. This is hardly surprising, since the sum converges very rapidly. (Using N[Sum[...]] is just out of the question: thus, the correct value of f''[0] is around -10^-32, not -10^-14). In principle a rigorous proof can be numerical; if we take z == 1/2 and expand EllipticTheta[3, 0, 1/Sqrt[E]] into a series, the identity becomes Sum[E^(-(1/2 - k)^2/2) + E^(-k^2/2), {k, -Infinity, Infinity}] == Sum[E^(-(1/2 - k)^2/2) + E^(-k^2/2), {k, -#, #}] > 2*Sqrt[2*Pi]&, 1] This means that the left-hand side is already greater than 2*Sqrt[2*Pi] if we take the sum from -12 to 12; since all the terms are positive, the identity cannot be true. The point is that Less/Greater use significance arithmetic; if we evaluate N[Sum[E^(-(1/2 - k)^2/2) + E^(-k^2/2), {k, -12, 12}] - 2*Sqrt[2*Pi], 10] then Mathematica tells us that the accuracy of the result is greater than 43 and so the absolute error is less than 10^-43, therefore the result is verifiably different from zero. So this can be a proof, but only barring the defects of Mathematica's significance arithmetic. Also it's possible to use Interval. Maxim Rytin m.r at inbox.ru • Follow-Ups: • References:
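For readers without Mathematica, the theta-function identity quoted in this thread is easy to sanity-check with arbitrary-precision arithmetic. The sketch below is mine, not part of the original exchange; it uses Python's mpmath, and the truncation point K = 60 and the test value z = 1/3 are arbitrary choices.

    from mpmath import mp, mpf, exp, sqrt, pi, jtheta

    mp.dps = 50                       # 50 decimal digits of working precision

    def gaussian_comb(z, K=60):
        # Truncated version of Sum[Exp[-(z - k)^2/2], {k, -Infinity, Infinity}];
        # the dropped tail is below exp(-K^2/2), i.e. utterly negligible here.
        return sum(exp(-(z - k)**2 / 2) for k in range(-K, K + 1))

    z = mpf(1) / 3
    lhs = gaussian_comb(z)
    rhs = sqrt(2 * pi) * jtheta(3, pi * z, exp(-2 * pi**2))
    print(lhs - rhs)                  # agrees to the full working precision

    # The comb is not exactly constant in z: the difference below is roughly
    # 4*sqrt(2*pi)*exp(-2*pi^2), about 2.7e-8 -- tiny, which is why the naive
    # numerical comparisons discussed in the thread needed so much care.
    print(gaussian_comb(0) - gaussian_comb(mpf(1) / 2))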
{"url":"http://forums.wolfram.com/mathgroup/archive/2005/Apr/msg00533.html","timestamp":"2014-04-17T21:51:57Z","content_type":null,"content_length":"42573","record_id":"<urn:uuid:1d04801f-7510-4ef2-946f-5b647bb3b875>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00229-ip-10-147-4-33.ec2.internal.warc.gz"}
Basic Elements The basic elements of geometry are objects you see every day but probably never think about (unlike your Nintendo DS, which you think about every day and still can't seem to find). We're talking about lines, angles, and shapes—and lots of 'em. That being the case, geometry will require you to draw sometimes. In fact, one of the best things to do if you're ever unsure about a geometry problem is to draw a picture. You don't have to be the next Claude Monet or Andy Warhol, but we'd advise you to steer clear of the Pablo Picasso Salvador Dali neck of the woods. You'll also use proofs, fact-based arguments that lead to a logical conclusion, to dissect and discover the properties of these shapes. Chances are good that you've probably never written a proof before, so we'll cover exactly what proofs are and how best to tackle them. (Hint: get a running start.) While geometry does primarily work in the visible arena, we'll still need the math tools we've been gathering so far. Hopefully addition, subtraction, multiplication, and division go without saying, but we said them anyway just to be safe. Basic algebra will definitely come in handy also—especially using linear equations and manipulating variables. We'll also touch on coordinates and the x-y plane (and even the x-y-z plane), so we're hoping you kept the distance formula in a safe somewhere. If not, don't worry your pretty little head since it comes from the Pythagorean theorem anyway.
{"url":"http://www.shmoop.com/geometry-introduction/basic-geometry-elements.html","timestamp":"2014-04-17T06:58:26Z","content_type":null,"content_length":"34063","record_id":"<urn:uuid:c329ee0a-40eb-446d-a55a-c758feb06812>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Proceedings of the American Mathematical Society ISSN 1088-6826(online) ISSN 0002-9939(print) Directed inverse limits of spatial locales Authors: Wei He and Till Plewe Journal: Proc. Amer. Math. Soc. 130 (2002), 2811-2814 MSC (2000): Primary 18B30, 54B30, 54D30, 54D45 Published electronically: May 8, 2002 MathSciNet review: 1908261 Full-text PDF Free Access Abstract | References | Similar Articles | Additional Information Abstract: In this note we consider spatiality of directed inverse limits of spatial locales. We give an example which shows that directed inverse limits of compact spatial locales are not necessarily spatial. This answers a question posed by John Isbell. We also give a condition which, if satisfied by the maps of a directed inverse system, implies that taking limits preserves local compactness and hence produces spatial locales. Similar Articles Retrieve articles in Proceedings of the American Mathematical Society with MSC (2000): 18B30, 54B30, 54D30, 54D45 Retrieve articles in all journals with MSC (2000): 18B30, 54B30, 54D30, 54D45 Additional Information Wei He Affiliation: Department of Mathematics, Shaan Xi Normal University, Xi’an 710062, People’s Republic of China Address at time of publication: Department of Mathematics, Nanjing Normal University, Nanjing 210097, People’s Republic of China Email: weihe@snnu.edu.cn, weihe@njnu.edu.cn Till Plewe Affiliation: Department of Science and Engineering, Ritsumeikan University, Noji Higashi 1-1-1, Kusatsu-shi, Shiga 525, Japan Email: till@theory.cs.ritsumei.ac.jp DOI: http://dx.doi.org/10.1090/S0002-9939-02-06196-8 PII: S 0002-9939(02)06196-8 Keywords: Directed inverse limits, spatial locales, locally compact spaces, locally compact locales, compact locales Received by editor(s): May 17, 1998 Received by editor(s) in revised form: October 30, 2000 Published electronically: May 8, 2002 Additional Notes: The first author was supported by a grant of the NSF of China Communicated by: Alan Dow Article copyright: © Copyright 2002 American Mathematical Society
{"url":"http://www.ams.org/journals/proc/2002-130-10/S0002-9939-02-06196-8/","timestamp":"2014-04-19T12:23:07Z","content_type":null,"content_length":"25588","record_id":"<urn:uuid:967c46e8-54d9-44da-af18-527b06181da1>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00132-ip-10-147-4-33.ec2.internal.warc.gz"}
Gary, IN Calculus Tutor Find a Gary, IN Calculus Tutor I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece. 41 Subjects: including calculus, chemistry, physics, English ...I have taught a wide variety of courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus, advanced placement calculus, integrated chemistry/physics, and physics. I also have experience teaching physics at the college level and have taught an SAT math preparation course. For the past three years I have served as the math department chair. 12 Subjects: including calculus, physics, geometry, algebra 1 ...I have worked with them in the classroom, individually, and in groups. Over the years I have become familiar with most types of disabilities. I have worked with teenagers with ADD/ADHD for the past 10 years. 24 Subjects: including calculus, chemistry, special needs, study skills ...My passion for education comes through in my teaching methods, as I believe that all students have the ability to learn a subject as long as it is presented to them in a way in which they are able to grasp. I use both analytical as well as graphical methods or a combination of the two as needed ... 34 Subjects: including calculus, reading, writing, statistics ...Other topics in which I am well versed are formulation of proofs, which is a major component of most discrete math courses, as well as introductory logic. I've been programming ever since I was a child (1990 or so, I was 9 years old). I began programming in GWBASIC, and graduated to more complex... 22 Subjects: including calculus, geometry, statistics, precalculus Related Gary, IN Tutors Gary, IN Accounting Tutors Gary, IN ACT Tutors Gary, IN Algebra Tutors Gary, IN Algebra 2 Tutors Gary, IN Calculus Tutors Gary, IN Geometry Tutors Gary, IN Math Tutors Gary, IN Prealgebra Tutors Gary, IN Precalculus Tutors Gary, IN SAT Tutors Gary, IN SAT Math Tutors Gary, IN Science Tutors Gary, IN Statistics Tutors Gary, IN Trigonometry Tutors
{"url":"http://www.purplemath.com/Gary_IN_calculus_tutors.php","timestamp":"2014-04-21T11:14:50Z","content_type":null,"content_length":"23900","record_id":"<urn:uuid:ef234776-96bc-4c60-8981-98a02cde721c>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Abington, MA Science Tutor Find an Abington, MA Science Tutor I am currently a high school biology and chemistry teacher, so I am very familiar with the Massachusetts state frameworks, and also the MCAS tests. I have had a lot of success preparing students from several districts (including ELL and special education students) for the biology and chemistry MCAS tests. I also teach AP Chemistry, so I am very comfortable helping students taking that 9 Subjects: including chemistry, algebra 1, biology, calculus ...The other two are linear algebra and the stochastic systems (statistics), which come together in advanced courses. Everyone intending to pursue studies in basic science (including life science s), engineering or economics should have a good foundation in introductory calculus. I did not really b... 7 Subjects: including physics, algebra 2, calculus, astronomy ...I am working on an dual degree with a major in Chemistry, and History. I intend to become a college professor in one of these two subjects after I complete my education. I am a native English speaker, have been studying Spanish for twelve years, and Chinese for two. 9 Subjects: including chemistry, physics, biology, organic chemistry ...I love to see the light bulb go on when my students get it. I strive to inspire students and enhance their love of learning. I like to make learning fun and aim to educate my students in a dynamic fashion. 5 Subjects: including physical science, biology, anatomy, physiology ...Prior to my move to Boston, I received a B.S. (with high honors) in Biomedical Sciences from Rochester Institute of Technology (RIT). I am passionate about teaching and have worked as an undergraduate teaching assistant for the following courses: Drugs & Behavior, Cellular Biology, Intro to Neur... 22 Subjects: including pharmacology, genetics, ACT Science, physiology Related Abington, MA Tutors Abington, MA Accounting Tutors Abington, MA ACT Tutors Abington, MA Algebra Tutors Abington, MA Algebra 2 Tutors Abington, MA Calculus Tutors Abington, MA Geometry Tutors Abington, MA Math Tutors Abington, MA Prealgebra Tutors Abington, MA Precalculus Tutors Abington, MA SAT Tutors Abington, MA SAT Math Tutors Abington, MA Science Tutors Abington, MA Statistics Tutors Abington, MA Trigonometry Tutors Nearby Cities With Science Tutor Avon, MA Science Tutors Brockton, MA Science Tutors East Bridgewater Science Tutors East Weymouth Science Tutors Hanover, MA Science Tutors Hanson, MA Science Tutors Holbrook, MA Science Tutors Kingston, MA Science Tutors North Abington, MA Science Tutors Norwell Science Tutors Pembroke, MA Science Tutors Randolph, MA Science Tutors Rockland, MA Science Tutors South Weymouth Science Tutors Whitman, MA Science Tutors
{"url":"http://www.purplemath.com/abington_ma_science_tutors.php","timestamp":"2014-04-18T13:43:56Z","content_type":null,"content_length":"24008","record_id":"<urn:uuid:d8ff72ba-5ccd-4a22-a31d-e12a7010c0e6>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00368-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence of a convex hull of few points in R^n ?
January 12th 2008, 01:56 AM
Existence of a convex hull of few points in R^n ?
I have a set of vectors V = {v1,...,vk} where every element of V is from R^n. Please consider that k could be smaller than n. Does the convex hull C exist for the vectors in V, so that v1,...,vk are elements of C? If I write a vector v' as a convex combination of the vectors in V, v' = c1*v1 + ... + ck*vk (where c1+...+ck = 1 and c1,...,ck positive), will v' be an element of C as well? From my point of view it should work. Consider for instance R^3. - Two random vectors v1 and v2 form a convex set: the line segment which connects v1 and v2. Thus any vector on the segment can be expressed as a convex combination of v1 and v2. - Three random vectors v1, v2, v3 form a triangle, and any convex combination will be inside the triangle. - Vector counts higher than the dimension would form a polyhedron, and there could be interior vectors as well (vectors inside the convex hull). It is clear to me for trivial examples in low-dimensional spaces. However I'm not sure if this is valid for any dimensionality as well. kind regards,
January 18th 2008, 04:26 AM
Does the convex hull C exist for the vectors in V, so that v1,...,vk are elements of C? Yes, of course it does. If I write a vector v' as a convex combination of the vectors V, will v' be an element of C as well? Most certainly. - Three random vectors v1, v2, v3 form a triangle: the convex hull is the whole triangle. - Vector counts higher than the dimension: a random tetrahedron (for four points in R^3), which, sides and interior included, forms the convex hull. But note that in more general spaces, the convex hull does not only consist of the set of convex combinations. Remember the Krein-Milman theorem.
January 18th 2008, 05:17 AM
Hi Rebesques, thank you for your feedback. So in other words: if I have random vectors V = {v1,...,vn} in R^n where V contains both extreme points and inner points, then I can write a vector v' as a convex combination v' = c1*v1+...+cn*vn and it will be sure that v' is in the same convex set which contains the vectors V as well (without needing to calculate the convex hull of this convex set / without needing to distinguish the extreme points from the inner points). Is this correct? And also, the convex set which contains the vectors V will be a subspace of R^n, right?
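For what it's worth, the affirmative answers in this thread are easy to check numerically. The following sketch is not from the thread: it draws k random points in R^n with k > n, forms a random convex combination, and verifies that it lies in the convex hull. The routine used (scipy's Delaunay triangulation) is just one convenient membership test and assumes the points span the space; for k <= n, as in the original question, the hull is lower-dimensional and one would instead test membership with a small linear program.

    import numpy as np
    from scipy.spatial import Delaunay

    rng = np.random.default_rng(1)
    n, k = 3, 6                                # ambient dimension and number of points (k > n here)
    V = rng.random((k, n))                     # rows are v1, ..., vk

    c = rng.random(k)
    c /= c.sum()                               # convex weights: nonnegative and summing to 1
    v_prime = c @ V                            # the convex combination v' = c1*v1 + ... + ck*vk

    tri = Delaunay(V)                          # triangulates conv(V)
    print(tri.find_simplex(v_prime) >= 0)      # True: v' lies inside the convex hull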
{"url":"http://mathhelpforum.com/advanced-math-topics/25933-existence-convex-hull-few-points-r-n-print.html","timestamp":"2014-04-20T03:20:16Z","content_type":null,"content_length":"6801","record_id":"<urn:uuid:ad6a4a81-7698-467e-93a8-53a398c3e13b>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
Royal Palm Beach, FL Algebra 2 Tutor Find a Royal Palm Beach, FL Algebra 2 Tutor ...This translated to working with around 700 students each week, helping them improve their level of conversational English. Through this experience, I worked with 7-8 native Korean teachers in planning lessons and carrying them out in the classroom. It was a year of growth change for the better to say the least. 15 Subjects: including algebra 2, reading, literature, algebra 1 ...I have 8 years of teaching experience, 5 at the University level. From teaching hundreds of students, I have learned several techniques to help inspire students understanding and learning. I have learned tremendously from the hundreds of students I have taught math, and completely understand how it can be challenging for some students. 39 Subjects: including algebra 2, chemistry, reading, writing Hi, I'm Tyrell. I've held over four jobs helping students with Mathematics and Science at both the High School and College level. I recently graduated for college with a Bachelors Degree in Mathematics, so math is still fresh in my head. 12 Subjects: including algebra 2, chemistry, physics, calculus ...I am looking to become a Spanish Teacher either through a University, or within a school district. Spanish is my second language, so I understand the difficulties that arise when learning a second language. Conjugation, grammar, vocabulary, conversation, writing, whatever you need help with, I am willing to help! 10 Subjects: including algebra 2, Spanish, English, writing ...Math is FUN! I try to get my students to develop confidence in their math skills as we build up from scratch a good foundation to step up to higher math subjects. Through problems and homework, then a step by step method of approaching any of the given problems. 16 Subjects: including algebra 2, Spanish, physics, calculus
{"url":"http://www.purplemath.com/Royal_Palm_Beach_FL_algebra_2_tutors.php","timestamp":"2014-04-20T04:39:31Z","content_type":null,"content_length":"24659","record_id":"<urn:uuid:fab8ceaa-0724-4ecc-9851-d77ce282a7ea>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
1.15: Problem-Solving Models

Suppose you're taking a standardized test to get into college and you encounter a type of problem that you've never seen before. What tools could you use to help solve the problem? Is there anything you should do before trying to solve the problem? Is there anything you should do afterwards? In this Concept, you'll be presented with a step-by-step guide to problem solving and some strategies that you can use to solve any problem.

A Problem-Solving Plan

Much of mathematics applies to real-world situations. To think critically and to problem solve are mathematical abilities. Although these capabilities may be the most challenging, they are also the most rewarding. To be successful in applying mathematics in real-life situations, you must have a “toolbox” of strategies to assist you. Many algebra lessons are devoted to filling this toolbox so you become a better problem solver and can tackle mathematics in the real world.

Step #1: Read and Understand the Given Problem

Every problem you encounter gives you clues needed to solve it successfully. Here is a checklist you can use to help you understand the problem.

$\surd$ Look for key words: operation words such as sum, difference, and product, and mathematical verbs such as equal, more than, less than, and is. Key words also include the nouns the situation is describing, such as time, distance, people, etc. Visit the Wylie Intermediate Website (http://wylie.region14.net/webs/shamilton/math_clue_words.htm) for more clue words.

Once you have discovered what the problem is about, the next step is to declare what variables will represent the nouns in the problem. Remember to use letters that make sense!

Step #2: Make a Plan to Solve the Problem

The next step in problem-solving is to make a plan or develop a strategy. How can the information you know assist you in figuring out the unknown quantities? Here are some common strategies that you will learn.

• Drawing a diagram
• Making a table
• Looking for a pattern
• Using guess and check
• Working backwards
• Using a formula
• Reading and making graphs
• Writing equations
• Using linear models
• Using dimensional analysis
• Using the right type of function for the situation

In most problems, you will use a combination of strategies. For example, drawing a diagram and looking for patterns are good strategies for most problems. Also, making a table and drawing a graph are often used together. The “writing an equation” strategy is the one you will work with the most frequently in your study of algebra.

Step #3: Solve the Problem and Check the Results

Once you develop a plan, you can use it to solve the problem. The last step in solving any problem should always be to check and interpret the answer. Here are some questions to help you to do that.

• Does the answer make sense?
• If you substitute the solution into the original problem, does it make the sentence true?
• Can you use another method to arrive at the same answer?

Step #4: Compare Alternative Approaches

Sometimes a certain problem is best solved by using a specific method. Most of the time, however, it can be solved by using several different strategies. When you are familiar with all of the problem-solving strategies, it is up to you to choose the methods that you are most comfortable with and that make sense to you. In this book, we will often use more than one method to solve a problem.
This way we can demonstrate the strengths and weaknesses of different strategies when applied to different types of problems. Regardless of the strategy you are using, you should always implement the problem-solving plan when you are solving word problems. Here is a summary of the problem-solving plan.

Step 1: Understand the problem.
Step 2: Devise a plan – Translate. Come up with a way to solve the problem. Set up an equation, draw a diagram, make a chart, or construct a table as a start to begin your problem-solving plan.
Step 3: Carry out the plan – Solve.
Step 4: Check and Interpret: Check to see if you have used all your information. Then look to see if the answer makes sense.

Solve Real-World Problems Using a Plan

Example A

Jeff is 10 years old. His younger brother, Ben, is 4 years old. How old will Jeff be when he is twice as old as Ben?

Solution: Begin by understanding the problem. Highlight the key words. Jeff is 10 years old. His younger brother, Ben, is 4 years old. How old will Jeff be when he is twice as old as Ben? The question we need to answer is: “What is Jeff’s age when he is twice as old as Ben?” You could guess and check, use a formula, make a table, or look for a pattern. The key is “twice as old.” This clue means two times, or double, Ben’s age. Begin by doubling possible ages. Let’s look for a pattern. $4 \times 2 = 8$, $5 \times 2 = 10$, $6 \times 2 = 12$. Jeff will be 12 years old when he is twice as old as Ben.

Example B

Another way to solve the problem above is to write an algebraic equation. Let $x$ represent Ben’s age. Jeff is always 6 years older than Ben, so Jeff’s age is $x+6$, and twice Ben’s age is $2x$, which gives the equation $x+6=2x$. What value of $x$ makes this equation true? $x=6$. When $x=6$, we have $x+6=6+6=12$. When Jeff is 12, he will be twice Ben's age, since 12 is twice the age of 6.

Example C

Matthew is planning to harvest his corn crop this fall. The field has 660 rows of corn with 300 ears per row. Matthew estimates his crew will have the crop harvested in 20 hours. How many ears of corn will his crew harvest per hour?

Solution: Begin by highlighting the key information. Matthew is planning to harvest his corn crop this fall. The field has 660 rows of corn with 300 ears per row. Matthew estimates his crew will have the crop harvested in 20 hours. How many ears of corn will his crew harvest per hour? You could draw a picture (it may take a while), write an equation, look for a pattern, or make a table. Let’s try to use reasoning. We need to figure out how many ears of corn are in the field: $660(300) = 198,000$. Then $\frac{198,000}{20} = 9,900$. The crew can harvest 9,900 ears per hour.

Guided Practice

The sum of angles in a triangle is 180 degrees. If the second angle is twice the size of the first angle and the third angle is three times the size of the first angle, what are the measures of the angles in the triangle?

Step 1 is to read and determine what the problem is asking us. After reading, we can see that we need to determine the measure of each angle in the triangle. We will use the information given to figure this out.

Step 2 tells us to devise a plan. Since we are given a lot of information about how the different pieces are related, it looks like we can write some algebraic expressions and equations in order to solve this problem. Let $a$ be the measure of the first angle; then the second angle measures $2a$ and the third measures $3a$. The three angles must add up to 180 degrees. From this we will write an equation, adding together the expressions of the three angles and setting them equal to 180: $a + 2a + 3a = 180$.

Step 3 is to solve the problem. Simplifying this we get $6a = 180$, so $a = 30$. Now we know that the first angle is 30 degrees, which means that the second angle is 60 degrees and the third is 90 degrees.
Let's check whether these three angles add up to 180 degrees: $30 + 60 + 90 = 180$, so the three angles do add up to 180 degrees. Step 4 is to consider other possible methods. We could have used guess and check and possibly found the correct answer. However, there are many choices we could have made. What would have been our first guess? There are so many possibilities for where to start with guess and check that solving this problem algebraically was the simplest way. Sample explanations for some of the practice exercises below are available by viewing the following video. Note that there is not always a match between the number of the practice exercise in the video and the number of the practice exercise listed in the following exercise set. However, the practice exercise is the same in both. CK-12 Basic Algebra: Word Problem-Solving Plan 1 (10:12)
1. What are the four steps to solving a problem?
2. Name three strategies you can use to help make a plan. Which one(s) are you most familiar with already?
3. Which types of strategies work well together? Why?
4. Suppose Matthew's crew takes 36 hours to harvest the field. How many ears per hour will they harvest?
5. Why is it difficult to solve Ben and Jeff's age problem by drawing a diagram?
6. How do you check a solution to a problem? What is the purpose of checking the solution?
7. There were 12 people on a jury, with four more women than men. How many women were there?
8. A rope 14 feet long is cut into two pieces. One piece is 2.25 feet longer than the other. What are the lengths of the two pieces?
9. A sweatshirt costs $35. Find the total cost if the sales tax is 7.75%.
10. This year you got a 5% raise. If your new salary is $45,000, what was your salary before the raise?
11. It costs $250 to carpet a room that is $14 \ ft \times 18 \ ft$. How much does it cost to carpet a room that is $9 \ ft \times 10 \ ft$?
12. A department store has a 15% discount for employees. Suppose an employee has a coupon worth $10 off any item and she wants to buy a $65 purse. What is the final cost of the purse if the employee discount is applied before the coupon is subtracted?
13. To host a dance at a hotel, you must pay $250 plus $20 per guest. How much money would you have to pay for 25 guests?
14. It costs $12 to get into the San Diego County Fair and $1.50 per ride. If Rena spent $24 in total, how many rides did she go on?
15. An ice cream shop sells a small cone for $2.92, a medium cone for $3.50, and a large cone for $4.25. Last Saturday, the shop sold 22 small cones, 26 medium cones, and 15 large cones. How much money did the store take in?
Mixed Review
16. Choose an appropriate variable for the following situation: It takes Lily 45 minutes to bathe and groom a dog. How many dogs can she groom in a 9-hour day?
17. Translate the following into an algebraic inequality: Fourteen less than twice a number is greater than or equal to 16.
18. Write the pattern of the table below in words and using an algebraic equation. $\begin{array}{c|cccc} x & -2 & -1 & 0 & 1 \\ \hline y & -8 & -4 & 0 & 4 \end{array}$
20. Check whether $y=4$ is a solution to the inequality $3y-11 \ge -3$.
21. What is the domain and range of the graph shown?
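The "guess and check" and "writing an equation" strategies described in this lesson can also be tried on a computer. The short Python sketch below is an added illustration, not part of the original lesson; the function names are just for this example, and it uses the fact from Example A that Jeff is 6 years older than Ben.

    # Guess and check: try Ben's age = 1, 2, 3, ... until doubling it
    # matches Jeff's age (Ben's age plus the 6-year difference).
    def guess_and_check(age_difference=6):
        for ben in range(1, 100):
            if 2 * ben == ben + age_difference:
                return ben + age_difference   # Jeff's age at that moment
        return None

    # Writing an equation: 2x = x + 6 simplifies to x = 6, so Jeff is x + 6 = 12.
    def solve_equation(age_difference=6):
        x = age_difference                    # from 2x = x + difference
        return x + age_difference

    print(guess_and_check())   # 12
    print(solve_equation())    # 12

Both strategies give the same answer, which is exactly the kind of cross-check Step #3 of the plan asks for.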
{"url":"http://www.ck12.org/book/CK-12-Basic-Algebra-Concepts/r14/section/1.15/","timestamp":"2014-04-24T11:13:25Z","content_type":null,"content_length":"158009","record_id":"<urn:uuid:c59bf476-0647-4b7d-a417-950345ffaf53>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
From OLPC Feature Requests RPN mode needed for scientific calculations.--Mokurai 22:01, 19 May 2007 (EDT) (not sure whether this still applies to the new version Rwh 10:27, 15 July 2007 (EDT)) I suggest that a better solution would be to allow multistep calculations with saved 'variables'. This can do everything that RPN enables, but is easier to teach to children, closer to the global standard language of mathematics, and it leads directly to programming as a logical outgrowth of multistep calculations. Integrate material from and link to Derivations of Applied Mathematics. -- Jpritikin 07:52, 9 September 2007 (EDT) 6 December 2007 The essential operational model (user interface) underlying RPN is an implicit stack with postfix operator application. Both of these conceptual models much more closely resemble typical programming paradigms in actual use, both at the machine level and higher levels of abstraction, than simplistic multistep variable saving scripts, which remind me of infantile "languages" like Basic or Cobol or similar such. Keeping in mind that the goal is to provide a useful numerical tool for children, and bearing in mind that children, contrary to historical habit, are not imbeciles who need to be coddled, but rather intelligent conscious beings who simply have an unprejudiced blank slate to be filled, would it not be better to devise a calculator which fills that blank slate with the very best the world has to offer, in an intuitive compelling manner, and avoids dumping the same old trash that most of us have had to endure, sift through, and ultimately jettison? With that in mind, and remembering that the physical limitations of commodity calculators are not in play, but rather a high resolution display with multiple megabytes of memory are at our disposal, I make the following suggestions: 1) Separate the keypad, display, and mode selection function keys into three separate windows which may be independently positioned (or hidden), and opened in multiple instances. 2) By all means, have a rudimentary operating mode suitable for the very youngest of children where the keypad displays only ten digits and four functions with the ancillary keys necessary to formulate an infix operator paradigm using a single display. 3) Provide a mode selection key to convert the key pad into optional prefix and postfix operator paradigms, after all, the goal is education and enlightenment, not sheltering, confining, and hiding the great diversity of truth. 4) When I work with typical calculator programs, I typically use three of four instances of the program to compute along three or four threads, and then cut and paste to merge the calculations in a fourth (or fifth) instance of the program. The point here is the singleton memory of typical calculator programs is inadequate for complex calculations, and the cheat I use quickly clutters the screen, apart from being inefficient, although it does obviate the need to use pencil and paper as an auxiliary memory. With a separate display window, one need not replicate the keypad and mode keys. By including a mode key to create a new display, and by connecting a particular display to the keypad by clicking on it and highlighting the display, the cheat I use can be much more efficiently and elegantly implemented. 
As for memory, the stack paradigm can be intuitively and graphically mimicked and extended by imagining the display to be a single numeric window on a virtual "paper tape" connected in a loop under the window (a circular buffer implemented by a dynamic doubly linked list in programmer parlance). By putting a toggle widget on the side of the display, the window can be moved backward and forward to navigate previous intermediate results on the "tape". Better still, show one or two of those other results grayed out above and below the window. The keypad intuitively obviously only operates on the contents of the current window, and a visual model of stack, queue, and buffer operations is intuitively made available. In postfix mode (RPN) the value above the window can be colored instead of shaded, indicating it can also be operated upon, as well as eliminating the need to remember what the "entered" (stack) values are. Beyond the intuitive appeal, the "paper tape" paradigm eliminates the need to define, name, and remember variables, making the whole interface easier to use for complex computations. 5) Finally, the same paper tape paradigm can be used in a keystroke window, which logs keystrokes, so that a complex calculation may be rerun with different data, once the first sequence is worked 6) Needless to say, there should be mode keys to expand the keypad for various more specialized computations, e.g. scientific, hexadecimal, statistical, complex numbers, or even quaternions, octonions, or vector operations. Obviously all this can't be done in the first cut, but by using multiple windows to start with, a modular design paradigm is established which can easily accommodate future directions that none of us now can even imagine. Moreover, the design paradigm is intrinsically conceptually hierarchical, so a child can work at his or her own level, but always have the opportunity to grow and learn, and perhaps ultimately even customize the calculator for his or her own purposes. To conclude, my main point here is to please, please avoid patronizing children. They are imaginative, extremely capable beings, and they will mirror whatever you show them. Give them mediocrity and that is what they will give in return, give them your best, and they will take that places no stodgy adult could ever imagine. 6 December 2007 This is my own feelings about the above: RPN: my first exposure to RPN was in 1975-6 when the first HP pocket calculator appeared with it. It is okay for us engineers, but my wife hates it. It's totally natural for the technologist and absolutely not the way folks write equations. I believe that the TI graphing pocket calculators you see in classrooms (that my kids used, as a matter of fact) enter formulas the "normal" way. Graphing activity: What you do want is a graphing function, so the kids can learn about X-Y coordinates Spreadsheet activity: I'm not a developer. But I've been doing a lot of writing for wikipedia (see next post, too). I would suggest the simple "pocket calculator", but also a spreadsheet using relations/functions as well as typical non-RPN, somewhat akin to Excel. Apparently a simple spreadsheet is an activity to be designed for the future. I strongly encourage this. Most kids (e.g. my own and my nephews etc etc) in our schools by 7th grade if not earlier have encountered simple spreadsheet work. Request to change "square" to "exponent" or add a new exponent function : i.e. 3^3 = 27 rather than "square" causing 3^2 and the kid having to change the "2" to "3". 
(I had to discover that 3^3 would indeed evaluate correctly). I do understand that simple work around the Pythagorean theorem etc uses ^2 so there is an advantage to having a separate function "square". What's going on with the "Label" in the "algebra" function?: I don't see how this works, yet. I entered "x=3" in the "label" box, and 3+x into where the formula appears to go. It returns this into the box: x=3: 3+x Error at 2: Variable 'x' not defined. My guess all of this will evolve/resolve as developers set their minds to it. But perhaps some feedback such as this will be useful. Bill Wvbailey 17:21, 21 December 2007 (EST) Boolean operations: request NOT (3) I have some suggestions for the activity's functionality, but am not sure they should be here. I will enter them here anyway. The philosophy behind this is to provide the kids with the same symbolism and functionality that they will find on wikipedia. (3a) The symbol | (stroke): Why use the "Sheffer stroke" for OR? this symbol was classically used for NAND (NOT AND). Why not use V for "von", the more classical symbol? I've never seen | used for (3b) Logical NOT: I was rather ... stunned shall we say ... to see XOR rather than NOT (i.e. ~ or "bent bar" 2-shift-alt, or whatever). XOR would be an okay addition but not without NOT; NOT is virtually mandatory. I am quite aware that the three functions chosen (AND, OR, XOR) are sufficient, but hey, so is NAND (stroke) by itself or NOR by itself or implication by itself -- and using them by themselves is ugly. (3c) What happens when we plug in numbers not { 0, 1 }? The numerical results such as 3&4 => 4, and 3|4 => 3 are rather peculiar; they seem reversed (from a Venn-diagram point-of-view). For example, as "4" contains "3" we would expect that 3|4 might be "4". I will pursue this with my wikipedia cohorts to get their opinions. Usually Boolean functions have to do with predicates in particular "equality" that evaluate to { TRUE, FALSE } or values { 1, 0 }, e.g. =AND(3,4) yields TRUE in Excel. Bill Wvbailey 14:19, 21 December 2007 (EST) ": I'm not sure about the other issues, but I assume the reason they chose the vertical bar for "or" is that a single vertical bar is traditionally used as the (binary, not logical!) "or" operator in a number of computer languages (C/C++, Java, and Python, among others). Using a keyboard symbol rather than a letter or more complex symbol has the advantage that it can be easily entered into formulas without being confused for variable names (e.g. "V"). —Joe 15:07, 21 December 2007 (EST) The "and" and "or" operators seem to be using the Python short-circuiting interpretation for logical operators, which I agree makes little sense in a calculator application. If the first operand of an "or" operator is true (i.e. non-zero), then the value of the expression is the value of the first operand, otherwise it is the second. Likewise, if the first operand of an "and" operator is false, then the value of the expression is the value of the second operand, otherwise it is the first. This means that "3|4" will not give the same value as "4|3", for instance. The behavior gives the expected result when using zero and one, but I agree it is confusing with other values. (I note the activity also interprets "True" and "False" as variables rather than boolean values...) —Joe 21:23, 21 December 2007 (EST) I double-checked and yes, Joe is correct: the "eq" and "neq" functions return "False" and "True" but these output values are interpreted as variables when used as input. 
This is a shame because if "False" and "True" were recognized as input values rather than the names of variables then the following would be possible (here "input_A" and "input_B" are variables to be filled in with any old number): ("input_A" != 0)&("input_B" != 0) For example (3 != 0)&(4 != 0) yields the value "True" whereas (3 != 0)&(4 != 0) yields the value "False". Given this correction, one could substitute the first equation into another equation, and proceed with as complex formulas as they would like. Strangely, this is sort of happening now in the sense that the examples above do compute correctly. A bug: The above also seems related to the glitch that if one is using the Boolean activity, pulls down "Help" and clicks on it, the word "Help" enters as a variable. Moreover if an "indicating function" (see http://en.wikipedia.org/wiki/Indicator_function) is available to convert "False" into "0" and "True" into "1" then true Boolean math is possible (here I will use "IND", thus IND(False)=0, IND(True)=1: IND(3 != 0) will yield 1 IND(0 != 0) will yield 0 Given this ability to convert "False" to "0" and True to "1", then the classical Boolean equivalences will result from the arithmetic operations { +, -, * } (in the following, ~ is NOT, x and y can only have values {1, 0} ). These equivalences also appear in the article cited above (also see http://en.wikipedia.org/wiki/Propositional_formula for more): ~(x) = 1-x x & y = x*y x|y = x+y-x*y x|y = (1 - ((1-x)*(1-y))) = (1-(1 -x -y +x*y)) = x+y-x*y x^y = x+y-2*x*y (x & ~y) | (~x & y) = (x*(1-y))|((1-x)*y) = (x-x*y)|(y-x*y) = (x-x*y)+(y-x*y)-(x-x*y)*(y-x*y) = x+y-2*x*y - (x*y-x*y*y-x*x*y+x*x*y*y). As x*x=x and y*y=y in Boolean math, we end up with: x^y = x+y-2*x*y - (x*y-x*y-x*y+x*y) = x+y-2*x*y - 0 = x+y-2*x*y. Of course the kids don't have to see all this, but I'm arguing that this should be "in the background" for an educator who is interested, plus it would be a thorough and correct emendaton of the Boolean activity. Bill Wvbailey 15:01, 22 December 2007 (EST) Here is my wikipedia correspondent's take on (3a)-(3c) above: The output when the inputs aren't 0 or 1 do seem strange, but somehow I have become accustomed to software implementations using strange notation and conventions. Including NOT would certainly be a good idea. — Carl (User:CBM|CBM · User talk:CBM|talk) 23:35, 21 December 2007 (UTC) Bill Wvbailey 10:28, 22 December 2007 (EST) This is from the Python web-site http://docs.python.org/lib/boolean.html: Operation Result Notes "x or y" is defined as follows: if x is false, then y, else x (1) "x and y" is defined as follows: if x is false, then x, else y (1) "not x" is defined as follows: if x is false, then True, else False (2) (1) These only evaluate their second argument if needed for their outcome. (2) "not" has a lower priority than non-Boolean operators, so not a == b is interpreted as not (a == b), and a == not b is a syntax error Bill Wvbailey 12:31, 23 December 2007 (EST) Implementation Discussion gallery of Calculate examples It would be great to have a place for a gallery of successfull examples for each mode. For example, plot(ln(x*100)-sin(x*180)/2-ln((1-x)*100),x=0..1) (I'll keep looking around on the wiki for them too....) If there is a better place, I'll also post screenshots here too. --ixo 08:45, 30 December 2007 (EST) The gallery picture that Rwh posted is very helpful. Am confused about what "commands" (functions, instructions) the Calculate activity can respond to, their syntax, and where/how to input them. 
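The two points made in the posts above can be checked directly: Python's "and"/"or" return one of their operands rather than a strict 0/1, and the arithmetic equivalences listed by Bill hold once x and y are restricted to {0, 1}. The snippet below is only an illustration added here, not part of the original talk page.

    # Python's "and"/"or" return an operand, so the result depends on operand order:
    print(3 or 4, 4 or 3)     # 3 4   (value of the first truthy operand)
    print(3 and 4, 4 and 3)   # 4 3   (value of the last operand when all are truthy)

    # The arithmetic versions of NOT, AND, OR, XOR agree with the logical ones on {0, 1}.
    for x in (0, 1):
        for y in (0, 1):
            assert 1 - x == (not x)
            assert x * y == (x and y)
            assert x + y - x * y == (x or y)
            assert x + y - 2 * x * y == (x ^ y)
    print("all identities hold on {0, 1}")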
Is there a list somewhere? Thanks, Bill Wvbailey 16:01, 30 December 2007 (EST) Use of Label and Programming I come under the category of learning by experimenting. With regard to the use of Labels, I have found that to program an expression, first define the variables with default values, then define the expression, The variables are defined by entering te name followed by a colon in the main window. The variable name will then appear in the Label window. Enter a default value in the main window, press Enter and the variable will be assigned that value. Do this for all of the variables you are going to use. Next, define the name of the expression by entering it's name followed by a colon. The expression name will appear in the Label window. Now, enter your expression with the variable names and press Enter. The expression will be evaluated with the default values. To evaluate the expression for different values simply enter them like the default values, enter the expression name and voila! For example, to compute the hypotenuse of a right triangle from the two sides, first define the sides with defauilt values, ie., sideA: (Press Enter, this places sideA in the Label window) 3 (Press enter, this assigns 3 to sideA) sideB: 4 Now, the default values of sides A and B are defined. Hyp: (Press enter, this is the name of the expression and appears as a Label) sqrt(sideA^2 +sideB^2) (Press enter, this defines Hyp and produces an answer, 5) Now try, other values of sides A and B by reassigning the values and entering Hyp to evaluate it. There may be other sublties with this but this my first attempt. The graphing functionality is unfortunately lacking here. Do any of you know the Grapher program on Mac OS-X (I forget the name of the similar linux program). It is relatively simple and can export the graphs. Here, kids need to be able to take and plot data (rainfall, height, etc) as well as play with plotting in x-y. Then they need to add the plots to reports. Examples will be useful too. This program has great potential, especially in explaining Fourier analysis :)
{"url":"http://wiki.laptop.org/index.php?title=Talk:Calculate&oldid=187382","timestamp":"2014-04-19T02:12:59Z","content_type":null,"content_length":"39592","record_id":"<urn:uuid:f18060e0-5536-4b8d-b046-3a090a3a11f1>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Recent Homework Questions About Geometry Post a New Question | Current Questions Geometry HELP HELP Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 PLEASE HELP ME BELLS Tuesday, February 18, 2014 at 6:56am original dimensions: l, w, h volume = lwh new length --- 2l/3 new width ---- w/10 new height = h/4 new volume + (2l/3)(h/4)(w/10) = 2 lwh/120 = lwh/60 or (1/60)lwh so , yes, it is B Monday, February 17, 2014 at 8:44pm Monday, February 17, 2014 at 8:44pm V = L w H new V = (2/3)L * (1/10) w * (1/4) H = (2/120) LwH = (1/60) LwH Monday, February 17, 2014 at 8:44pm You are welcome :) Monday, February 17, 2014 at 8:40pm Is it B? Monday, February 17, 2014 at 8:36pm How does the volume of a rectangular prism change if the width is reduced to 1/10 of its original size, the height is reduced to 1/4 of its original size, and the length is reduced to 2/3 of its original size? A. V=1/120lwh B. V=1/60lwh C. V= 2/3lwh D. V=3/4lwh Monday, February 17, 2014 at 8:35pm Thank you so much oh my god Monday, February 17, 2014 at 8:33pm add six feet at the end tan 25 = h/(x+10) tan 36 = h/x h = (x+10)(.466) h = x(.727) so .727 x = .466 x + 4.66 .261 x = 4.66 x = 17.9 ft so h = .727 (17.9) = 13 13 + 6 = 19 ft Monday, February 17, 2014 at 8:23pm The angle of depression from a hot air ballon in the air to a person on the ground is 36 degrees. If the person steps back 10ft the new angle of depression is 25 degrees. If the person is 6ft tall, how far off the ground is the hot air ballon? Monday, February 17, 2014 at 8:14pm V = (1/3)base area * height new V =(1/3) (4 base area)(height/9) so 4/9 of original Monday, February 17, 2014 at 8:00pm How does the volume of a square pyramid change if the base area is quadrupled and the height is reduced to 1/9 of its original size? A. V= 1/27Bh B. V= 4/27Bh C. V= 2/9Bh D. V= 4/9Bh Monday, February 17, 2014 at 7:37pm Gowith Steve - Geometry messed up my typing don't know why my 1/3 suddenly became 2/3 Monday, February 17, 2014 at 5:48pm let the base of the pyramid and the cube be x by x let the height of the pyramid be h then the height of the cube is 2h volume of pyramid = (1/3)x^2 h = (2/3)h x^2 volume of cube = x^2 (2h) = 2h x^2 ratio of cube : pyramid = 2hx^2 : (2/3hx^2 = 2: 2/3 = 6:2 = 3:1 so it is 3 ... Monday, February 17, 2014 at 5:45pm cube: v = Bh pyramid: 1/3 B(h/2) = 1/6 Bh So, (A) Monday, February 17, 2014 at 5:44pm but that's not a choice. A. 1/6 B. 1/3 C. 3 D. 6 Monday, February 17, 2014 at 5:29pm its is equal to 2 times the volume because the square pyramid is HALF the height of the cube. Next time, read the question a little more carefully, because the answer is in the question. Monday, February 17, 2014 at 5:26pm The volume of a square pyramid is equal to _____ times the volume of a cube where the bases are the same, but the square pyramid is half the height of the cube. Monday, February 17, 2014 at 5:14pm a = 2pi r (r+h) If r and h are shrunk by a factor of 3 each, than we have 2pi (r/3)(r/3 + h/3) = 2/9 pi r (r+h) = 1/9 a as with all geometric figures, when the linear dimensions are scaled by a factor of f, the area is scaled by f^2 and the volume is scaled by f^3. Monday, February 17, 2014 at 5:12pm If a cylinder s radius and height are each shrunk down to a third of the original size, what would be the formula to find the modified surface area? 
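A quick numerical re-check of the balloon answer above (the one built from tan 25 and tan 36), written as a small Python sketch. It is a verification added here, not part of the original thread.

    import math

    # tan(36°) = h/x and tan(25°) = h/(x + 10); eliminate h and solve for x.
    t36, t25 = math.tan(math.radians(36)), math.tan(math.radians(25))
    x = 10 * t25 / (t36 - t25)       # ground distance to the point below the balloon
    h = x * t36                      # height above the observer's eye level
    print(round(x, 1), round(h, 1))  # roughly 17.9 and 13.0
    print(round(h + 6))              # about 19 ft after adding the person's height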
Monday, February 17, 2014 at 4:17pm been there, did that http://www.jiskha.com/display.cgi?id=1392638255 Monday, February 17, 2014 at 10:11am solve it yourself. Monday, February 17, 2014 at 10:10am geometry circles equation i dont know points on circle: (0,0), (5,0) find intersections of lines for third point on circle 2(y) + 3 y = 10 5 y = 10 y = 2 then x = 2 so third point is (2,2) (0,0) (5,0) (2,2) (x-a)^2 + (y-b)^2 = r^2 plug and chug a^2 + b^2 = r^2 (5-a)^2 + b^2 = r^2 (2-a)^2 +(2-b)^2 = r^2 a^2 + b^2... Monday, February 17, 2014 at 10:10am Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 Monday, February 17, 2014 at 8:50am Geometry? Very incomplete! Monday, February 17, 2014 at 8:40am Geometry Circles Equation Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 I beg you Monday, February 17, 2014 at 8:31am geometry circles equation i dont know please he lp Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 this is my first timein this website please help me need it badly my teacher will be angry at me so much Monday, February 17, 2014 at 6:57am Did you mean positive whole numbers ? let the base be y and each of the equal sides be x To be a triangle, x > 2y or y < x/2 2x + y = 99 y = 99-2x y-intercept is 99, x-intercept is 49.5 so the x, or the base, can only be a number from 1 to 49 but to even have a triangle... Sunday, February 16, 2014 at 8:12pm How many distinct isosceles triangles exist with a perimeter of 99 inches and side lengths that are positive while numbers? Sunday, February 16, 2014 at 7:44pm 2x+2y = 102 x^2+y^2 = 39^2 A 5-12-13 right triangle looks promising. Scale that up to a 15-36-39 size, and we see that 15+36=51, so our rectangle is 15 by 36 Sunday, February 16, 2014 at 5:20pm The perimeter of a rectangle is 102 inches, and the length of the diagonal is 39 inches. Find the dimensions of the rectangle. Sunday, February 16, 2014 at 5:07pm Sunday, February 16, 2014 at 8:01am algebra 1/BGe Geometry.the figures below are squares.find an expression for the area of each shaded region.write your answers in standard form. Friday, February 14, 2014 at 8:06pm Thursday, February 13, 2014 at 7:50pm geometry - incomplete since I know neither the angles nor their relation to x, you better provide a bit more description of the figure, eh? Thursday, February 13, 2014 at 2:58pm Geometry true or false I believe that is false Thursday, February 13, 2014 at 12:53pm if the measurement of angle fgk is 3y-4 and measurement of angle kgh is 2y+7 find x Thursday, February 13, 2014 at 10:29am V = πr^2 h h = 335/(π r^2) SA = 2π r^2 + 2πrh = 2πr^2 + 2πr(335/(πr^2) = 2πr^2 + 670/r d(SA)/dr = 4πr - 670/r^2 = 0 for a min of SA 4πr = 670/r^2 4πr^3 = 670 r = (670/(4π))^(1/3) = 3.763757606.. subbing back into h h... Thursday, February 13, 2014 at 8:55am a cylindrical soup can has a volume of 335cm^3.find the dimensions(radius r and height h)that minimise the surface area of such a can. Thursday, February 13, 2014 at 5:51am elaine and Daniel are building a rectangular greenhouse.they want the are of the floor to be 36metres squared.since the glass walls are expensive,they want to minimise the amount of glass wall they use.they have commissioned you to design a greenhouse which minimizes the cost... 
Thursday, February 13, 2014 at 5:42am I'd say C Thursday, February 13, 2014 at 12:02am Which theorem or postulate is the construction of parallel lines based upon? A. Consecutive Interior Angles Converse Theorem B. Corresponding Angles Converse Postulate C. In the same plane, if two lines are perpendicular to the same line, then they are parallel to each other. ... Wednesday, February 12, 2014 at 9:36pm Thanks to both of you Wednesday, February 12, 2014 at 8:41pm Wednesday, February 12, 2014 at 8:39pm D looks good but of course, an equilateral triangle with sides of 10, has a height of 8.66 So, the triangle is isosceles, but not equilateral. Wednesday, February 12, 2014 at 8:37pm Wednesday, February 12, 2014 at 8:35pm What is the area of an equilateral triangle with sides of 10 inches and height of 7 inches? Can you please check my answer and my reasoning? A. 44 sq. in. B. 20 sq. in. C. 70 sq. in. D. 35 sq. in. I picked D, because I took 10x7=70and divided that by 2, which equals 35 sq. in... Wednesday, February 12, 2014 at 8:33pm A circle has a diameter of 20 inches and a central angle AOB that measures 160°. What is the length of the intercepted arc AB? Use 3.14 for pi and round your answer to the nearest tenth Wednesday, February 12, 2014 at 4:03pm 40 cm : 18 mm = 40 cm : 1.8 cm = 40 : 1.8 = 400 : 18 = 200 : 9 Wednesday, February 12, 2014 at 10:15am Write the ratio of the first measurement to the second measurement. Diameter of care tire:40cm Diameter of care tire:18mm I'm not sure hoow to do this. May someone explain how to do this problem. Wednesday, February 12, 2014 at 9:20am Wednesday, February 12, 2014 at 2:09am You say nothing about how you know that Q is "just above" L, or what that means. I'd say (c) is the choice. You draw arcs centered at P and T, such that they intersect above and below L. Note that the arcs must have radius greater than LT=LP. I get the feeling ... Monday, February 10, 2014 at 6:28pm All of this should be covered in his book. http://www.enchantedlearning.com/math/geometry/shapes/ http://www.mathsisfun.com/geometry/ Monday, February 10, 2014 at 5:57pm An architect plans to make a drawing of the room of a house. The segment LM represents the floor of the room. He wants to construct a line passing through Q and perpendicular to side LM to represent a wall of the room. He uses a straightedge and compass to complete some steps ... Monday, February 10, 2014 at 4:13pm 7/x = cos60 = 1/2 Sunday, February 9, 2014 at 9:39pm A ladder leaning against a house makes an angle of 60 degrees with the ground. The foot of the ladder is 7 ft from the house. How long is the ladder? Sunday, February 9, 2014 at 7:35pm Geometry - insufficient data What is the size of the yard? Sunday, February 9, 2014 at 12:59pm Andrea has a yard shaped like parallelogram ABCD. The garden area, parallelogram EFGB, has an area of 105 ft. If Andrea wants to sod the rest of her yard, how many square feet of sod should she order? A. 765 ft B. 840 ft C. 945 ft D. 1,515 ft Sunday, February 9, 2014 at 12:42pm Area of a kite = (1/2)product of the diagonals So the width and height would be diagonals let the height diagonal be h (1/2)(15)h = 60 15h = 120 h = 8 , looks like C Sunday, February 9, 2014 at 10:59am Maria is making a stained glass window in the form of a kite. The width of the window must be 15 in., and she only has enough stained glass to cover 60 in. What should the height of the window be? A. 4 in. B. 6 in. C. 8 in. D. 12 in. 
Sunday, February 9, 2014 at 9:58am Saturday, February 8, 2014 at 4:45pm A flower bed in the corner of your yard is in the shape of a right triangle. The perpendicular sides of the bed measure 10 feet and 12 feet. Calculate the area of the flower bed. Friday, February 7, 2014 at 8:17pm !@#$%^&s you Friday, February 7, 2014 at 7:36am find AC IF AB = 6 cm Thursday, February 6, 2014 at 6:43pm it is 8.94 Thursday, February 6, 2014 at 5:48pm opposite angles are equal, so 8x-2 = 104 Thursday, February 6, 2014 at 2:39pm My problem is with a parallelogram that has three numbers which are 88 degrees,104,and 80. Then the fourth on is 8x minus 2 how do I slove this Thursday, February 6, 2014 at 10:30am 5^2 + h^2 = 16^2 Thursday, February 6, 2014 at 5:29am Larry has a ladder that is 16' long. If he sets the base of the ladder on level ground 5 feet from the side of the house, how many feet above the ground will the top of the ladder reach when it rests against the house? Thursday, February 6, 2014 at 12:24am An advertising blimps hovers over stadium at the altitude of 152 m.the pilot sites a tennis court at in 80 degree angle of depression. Find the ground distance in the straight line between the stadium and the tennis court. (note: in an exercise like this one, and answers ... Thursday, February 6, 2014 at 12:09am geometry - still stumped I agree that bcd is 94° similarly, angle acd is x, so x+y+66=180. Still looking for another angle (as it were) to connect x and y. Wednesday, February 5, 2014 at 5:23pm Coordinate geometry Hmmm. Your question implies you have not yet read the lesson. Seems like that ought to be the next step. Any explanation you receive here will probably not be any better than the explanation in your text. Plus, your text probably has some examples. Wednesday, February 5, 2014 at 1:07pm Coordinate geometry Our lesson coordinate geom already now analytic proof . HOW TO DO ANALYTIC PROOF ? PLEASE PLEASE HELP THANKS VERY MUCH Wednesday, February 5, 2014 at 10:15am wE nEED tHE cHART??? Wednesday, February 5, 2014 at 10:00am Hmmm. This makes no sense to me. Tuesday, February 4, 2014 at 10:37pm Jenny is 5ft 2in tall to find the height of The light pole she measured her shadow And the poles shadow what is The height of the light pole Tuesday, February 4, 2014 at 10:35pm 180 - 55.1 = 124.9 http://www.mathsisfun.com/geometry/supplementary-angles.html Tuesday, February 4, 2014 at 6:38pm Length of the diameter of a circle with the endpoints A(4, -3) and B(4, 3) Tuesday, February 4, 2014 at 7:40am how to do eucliodean geometry Tuesday, February 4, 2014 at 3:18am Identify the sequence of transformations that maps quadrilateral abcd onto quadrilateral a"b"c"d" Answers 180 rotation around the origin; reflection over the x-axis translation (x,y) -> (x - 2, y + 0); reflection over the line x = -1 enlargement; ... Monday, February 3, 2014 at 10:59pm 11/8.5 = 1.294 77/53 = 1.452 So, the 8.5x11 paper is closer to square than the 53x77 paper The scale will have to be such that the height of 77cm will fit in 11in That is 11in:77cm or 1in:7cm = 2.54:7 = 1:2.75 Monday, February 3, 2014 at 5:52am Coordinate geometry let A be at (0,0) Then if B is at (xb,yb), the length of AB is √(xb^2 + yb^2) Then, if C is at (xc,yc), the midpoint of AC is (xc/2,yc/2) and the midpoint of BC is at ((xb+xc)/2,(yb+yc)/2) If M is the midpoint of AC and N is the midpoint of BC, then the slope of MN is ((... 
Monday, February 3, 2014 at 5:44am Coordinate geometry Help me i this please the segment joining the midpoint of 2 sides f a triagle is parallel to the 3rd side and half as long. thaks Monday, February 3, 2014 at 4:37am Trying to find the scale for a 77 centimeter by 53 centimeter painting to fit on a 8.5 by 11 inch paper. Please show me how- Sunday, February 2, 2014 at 10:40pm how does the volume of an oblique cylinder change if the radius is reduced to 2/9 of it's original size and the height is quadrupled? Sunday, February 2, 2014 at 6:03pm Sunday, February 2, 2014 at 3:04pm Geometry - insufficient data Saturday, February 1, 2014 at 2:34pm Jason wants to walk the shortest distance to get from the parking lot to the beach. a.How far is the spot on the beach from the parking lot? b. How far will he have to walk from the parking lot to get to the refreshment stand? Saturday, February 1, 2014 at 2:32pm COURSE HELP PLEASE ms sue qq In high school, these courses are usually called Algebra I Algebra II Geometry Pre-calculus AP Calculus Biology AP Biology Chemistry AP Chemistry Physics AP Physics Work with your counselor to make sure you take the required courses for graduation AND as many of these as ... Saturday, February 1, 2014 at 9:41am GEOMETRY MIDPOINT FORMULA steve how did this hapn thankssssss steve :) <33 Saturday, February 1, 2014 at 9:34am GEOMETRY MIDPOINT FORMULA steve how did this hapn thank you reinyyyyy <3 Friday, January 31, 2014 at 9:51am GEOMETRY MIDPOINT FORMULA steve how did this hapn looks like Steve was using vector geometry Perhaps the following approach might make sense to you: make a sketch. since AC = 4AB AB : BC = 1 : 3 for the x's : (2 - (-2))/(-2-x) = 1/3 12 = -2-x x = -14 for the y's: (3-0)/(0-y) = 1/3 9 = -y y = -9 so point C is (-14, -9) Friday, January 31, 2014 at 9:22am GEOMETRY MIDPOINT FORMULA steve how did this hapn If a line is extended from A (2,3) through B ( -2, 0 ) to a point so that AC = 4ab Find the coordinates of C Please help thanks so much GEOMETRY MIDPOINT FORMULA - Steve, Thursday, January 30, 2014 at 11:58am B-A = (-4,-3) C-A = 4(B-A) = (-16,-12) C = A+(C-A) = (-14,-9) Friday, January 31, 2014 at 8:53am Changing a number to scientific notation gives you a number less than 10 multiplied by a power of 10: 3500 = 3.5*10^3 3.5 is less than 10; the exponent(3) means that moving the decimal 3 places to the right restores the number to standard form. 0.0035 = 3.5*10^-3. The negative... Thursday, January 30, 2014 at 10:26pm Thursday, January 30, 2014 at 2:45pm If the pole's shadow is 14 times as long as her shadow, then the pole is 14 times as tall as she is. 14(5.5) = 77 feet Thursday, January 30, 2014 at 12:00pm B-A = (-4,-3) C-A = 4(B-A) = (-16,-12) C = A+(C-A) = (-14,-9) Thursday, January 30, 2014 at 11:58am Dru is challenged by her geometry teacher to estimate the height of the school flag pole without measuring. She decided to walk off the length of the shadow cast by the pole by successively walking the noted length of her shadow. If dry is 5ft 6 in.tall and she estimates the ... Thursday, January 30, 2014 at 11:10am If a line is extended from A (2,3) through B ( -2, 0 ) to a point so that AC = 4ab Find the coordinates of C Please help thanks so much Thursday, January 30, 2014 at 6:31am Thursday, January 30, 2014 at 4:16am Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>> Post a New Question | Current Questions
{"url":"http://www.jiskha.com/math/geometry/?page=4","timestamp":"2014-04-18T18:48:32Z","content_type":null,"content_length":"31756","record_id":"<urn:uuid:5e920e8d-6cbc-461c-93f2-f777e866abe8>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Proposition 35 If two numbers measure any number, then the least number measured by them also measures the same. Let the two numbers A and B measure any number CD, and let E be the least that they measure. I say that E also measures CD. If E does not measure CD, let E, measuring DF, leave CF less than itself. Now, since A and B measure E, and E measures DF, therefore A and B also measure DF. But they also measure the whole CD, therefore they measure the remainder CF, which is less than E; and this is impossible. Therefore E cannot fail to measure CD. Therefore it measures it. Therefore, if two numbers measure any number, then the least number measured by them also measures the same.
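A concrete instance, added here as an illustration with arbitrary numbers: if A = 4 and B = 6 both measure CD = 36, the least number they both measure is E = 12, and 12 does measure 36. The short Python sketch below samples many such cases and checks the same statement.

    import math, random

    def lcm(a, b):
        return a * b // math.gcd(a, b)

    # If a and b both measure (divide) n, then their least common measure also divides n.
    for _ in range(2000):
        a, b = random.randint(1, 30), random.randint(1, 30)
        n = a * random.randint(1, 200)      # a measures n by construction
        if n % b == 0:                      # keep only cases where b also measures n
            assert n % lcm(a, b) == 0       # then lcm(a, b) measures n as well
    print("Proposition VII.35 holds in every sampled case")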
{"url":"http://aleph0.clarku.edu/~djoyce/java/elements/bookVII/propVII35.html","timestamp":"2014-04-21T07:05:31Z","content_type":null,"content_length":"3791","record_id":"<urn:uuid:dc9972a5-dc1b-4cdd-9811-050e61a94faa>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00180-ip-10-147-4-33.ec2.internal.warc.gz"}
complex solutions Could someone help me with this? Find all solutions to z = (6 − 6√3)^(1/5). I am guessing you mean the fifth roots of $z=6(1-i\sqrt{3})$. We use De Moivre's formula $z^{\frac{1}{n}}=r^{\frac{1}{n}}\left[ \cos(\frac{x+2\pi k}{n})+ i \sin(\frac{x+2\pi k}{n})\right]$ In the above formula x is the reference angle, so ours is $-\frac{\pi}{3}$; r is the modulus, which here is $|6(1-i\sqrt{3})|=12$; and k goes from 0 to n−1. $z^{\frac{1}{5}}=12^{\frac{1}{5}}\left[ \cos(\frac{-\frac{\pi}{3}+2\pi k}{5})+ i \sin(\frac{-\frac{\pi}{3}+2\pi k}{5})\right]$ If you plug in k = 0, 1, 2, 3, 4 you will get all five solutions.
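A small numerical cross-check of the answer above, assuming the number in question really is 6(1 − i√3); this snippet is an addition, not part of the original thread.

    import cmath

    w = 6 * (1 - 1j * cmath.sqrt(3))            # the number whose fifth roots we want
    r, theta = abs(w), cmath.phase(w)           # modulus 12, argument -pi/3
    roots = [r ** (1 / 5) * cmath.exp(1j * (theta + 2 * cmath.pi * k) / 5) for k in range(5)]
    for z in roots:
        print(z, abs(z ** 5 - w) < 1e-9)        # each root raised to the 5th power gives w back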
{"url":"http://mathhelpforum.com/trigonometry/35139-complex-solutions.html","timestamp":"2014-04-18T13:34:36Z","content_type":null,"content_length":"33810","record_id":"<urn:uuid:ced1031b-ad2e-4b5e-be58-bb9d5f9c23fe>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00191-ip-10-147-4-33.ec2.internal.warc.gz"}
Meaning of Slope for a p-t Graph As discussed in the previous part of Lesson 3, the slope of a position vs. time graph reveals pertinent information about an object's velocity. For example, a small slope means a small velocity; a negative slope means a negative velocity; a constant slope (straight line) means a constant velocity; a changing slope (curved line) means a changing velocity. Thus the shape of the line on the graph (straight, curving, steeply sloped, mildly sloped, etc.) is descriptive of the object's motion. In this part of the lesson, we will examine how the actual slope value of any straight line on a graph is the velocity of the object. Consider a car moving with a constant velocity of +10 m/s for 5 seconds. The diagram below depicts such a motion. Now consider a car moving at a constant velocity of +5 m/s for 5 seconds, abruptly stopping, and then remaining at rest (v = 0 m/s) for 5 seconds. Both of these examples reveal an important principle. The principle is that the slope of the line on a position-time graph is equal to the velocity of the object. If the object is moving with a velocity of +4 m/s, then the slope of the line will be +4 m/s. If the object is moving with a velocity of -8 m/s, then the slope of the line will be -8 m/s. If the object has a velocity of 0 m/s, then the slope of the line will be 0 m/s. The widget below plots the position-time plot for an object moving with a constant velocity. Simply enter the velocity value, the intial position, and the time over which the motion occurs. The widget then plots the line with position on the vertical axis and time on the horizontal axis.
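As a small numerical illustration of the principle (added here; the sample points are made up to match the examples above), the slope of a position-time line can be computed directly from any two points on it, and that slope is the velocity:

    def slope(t1, x1, t2, x2):
        """Rise over run on a position-time graph, i.e. the velocity in m/s."""
        return (x2 - x1) / (t2 - t1)

    print(slope(0, 0, 5, 50))    # the +10 m/s car: 50 m of progress in 5 s -> 10.0
    print(slope(0, 0, 5, 25))    # the +5 m/s car over its first 5 s        -> 5.0
    print(slope(5, 25, 10, 25))  # the same car at rest for the next 5 s    -> 0.0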
{"url":"http://www.physicsclassroom.com/Class/1Dkin/U1L3b.cfm","timestamp":"2014-04-17T01:21:59Z","content_type":null,"content_length":"53983","record_id":"<urn:uuid:c8594055-d0b4-4973-9577-f58290f2d818>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00522-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Best ("Most Standard") Choice for LegendreType? Replies: 0 AES Best ("Most Standard") Choice for LegendreType? Posted: Aug 31, 1996 12:40 AM Posts: 72 Registered: 12/7/04 I'm writing up an engineering analysis in which the results are expressed in terms of Legendre polynomials LegendreP[n,m,z] with z real and >1, that is, outside the more common -1<z<1 range. I realize that there are multiple choices used in the literature, and that in the region of interest to me the only difference is the phase angle of the answer; but would anyone want to recommend a "preferred choice" or "most common choice" as between LegendreType->Real or Complex? I'm influenced myself by the fact that formula 8.6.16 on page 334 of Abramowitz and Stegun seems to match up with the Complex and not the Real type; but are there any counterarguments? On the other hand Mathematica's default is Real, not Complex. Any other arguments either way?
{"url":"http://mathforum.org/kb/thread.jspa?threadID=223917","timestamp":"2014-04-18T09:21:53Z","content_type":null,"content_length":"14271","record_id":"<urn:uuid:5a91b3f7-8573-4dbf-8b5b-045473cf18d6>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00556-ip-10-147-4-33.ec2.internal.warc.gz"}
Polynomial Equation Solver
Recently I came across a situation where I needed to solve a 4th degree polynomial equation in .NET, and to my surprise I couldn't find any code written in C# or VB .NET that contained either the explicit algebraic formulas or the numerical algorithm Jenkins-Traub. (However, I did find the Jenkins-Traub algorithm from the original Netlib site in FORTRAN, translated into C++ by Laurent Bartholdi for real coefficients, and a C version written by Henrik Vestermark. Both of these applications were converted from C++ and C into C# and VB code by me.) NB: There might be a licence attached to the Jenkins-Traub algorithm for commercial use; please check before you use it in a program that you want to sell to others. The explicit algebraic formulas that are also implemented are nasty, and would definitely clog up your day if you ever had a need for one, so I decided to share them with you. I also have a rather lengthy story to follow it, as I have read "The equation that couldn't be solved" by Mario Livio. The title of the book refers to the story of the insolvability of the 5th degree polynomial equation, but it also goes through the historical development of the solutions to lower degree polynomial equations. I won't give any explanations of how these formulas were derived, as the formulas get quite lengthy; so long, in fact, that even Tartaglia (one of the people that had a part in solving the cubic equation) had problems remembering all the rules he had discovered. As for the numerical algorithm Jenkins-Traub, it has been completely translated from the C++ and C versions into VB.NET and C# by me, and as far as I have tested it, it seems to be working fine. It should be mentioned that the Jenkins-Traub algorithm generally uses explicit formulas for 1st and 2nd degree, while using the numerical approximation on 3rd degree and above.
The 1st degree polynomial equation
This is fairly easy to solve mathematically, so if you are experiencing trouble here, you should probably not download the code. I am of course talking about equations of the form 2x + 3 = 7; such simple equations are called linear equations, as they can be represented by lines when graphed (or drawn). However, the story behind equations is quite interesting, so I'm going to take you back to 2000 BC - 600 BC, to the Babylonian civilization in Mesopotamia. The word equation should be used with care in this context, as the Babylonians didn't actually use algebra to solve these equations; instead they ventured into lengthy debates and logic to solve the problems. This might be a way of making math even more difficult and incomprehensible than it already is, especially when dealing with higher order polynomial equations. The result of this way of solving mathematical problems without using algebra was that the Babylonians could not find any general patterns or formulas behind the various mathematical problems. Despite the cumbersome formulations of the mathematical problems, they did manage to solve pairs of linear equations (meaning equations with both x and y as unknowns). The Babylonians didn't seem that keen on producing many texts on the 1st degree equation, unlike, for instance, the Egyptians, as it seems that the Babylonians thought it too elementary for any detailed discussion. In Egypt, however, there exist large manuscripts on the subject that present mathematical "recipes" with the solutions to some problems, as a kind of cookbook.
It should be mentioned that in the Chinese collection Nine Chapters on the Mathematical Art (Jiu zhang suan shu), one can find the solutions to no fewer than three linear equations with three unknowns, which is quite a feat considering how cumbersome the procedure was.
The 2nd degree polynomial equation
The origin of the famous solution to the second degree polynomial equation is actually not known, but it is known that the Babylonians gave quite detailed accounts of how to solve the equation, though not where they had got it from or how they discovered it. The Babylonians understood how to solve a quadratic equation of the form x^2 - x = 870 and could find the positive solution, as the solutions were intended to be used in, for instance, land measurements and problems like that. They did, however, ignore equations of this type that had two positive solutions, as these were seen to be illogical solutions. This is also true of the very early Greek mathematician Euclid, as he solved the quadratic equation using geometry, and not by algebra. To compare the mathematical knowledge in the ancient world: the Egyptians, on the other hand, only knew how to solve equations of the form x^2 = 4 and not how to solve x^2 + x = 4, and they too could only find the positive solution. The Greek civilization soon managed to resolve some of the issues with the help of the brilliant mathematician Diophantus. He effectively advanced the way solutions are presented, as a halfway point between the Babylonian wording of the equations and the modern way of using algebra. His book Arithmetica shows solutions to three different types of quadratic equations, as well as the famous Diophantine equations, of which Fermat's Last Theorem is an example. Fermat in fact read Arithmetica, and it was in this very book by Diophantus that he wrote his famous last theorem in the margin. As for Diophantus himself, we actually know very little of him; one can't even be certain when he lived, except that it was probably in Alexandria in the period between A.D. 200 and 214 to 284 or 298. With the fall of the Greek civilization the mathematical progress in the west came to a halt, and went into hibernation for nearly a millennium. The progress of mathematics now turned east, to one of the great mathematicians of his age, Brahmagupta from India, who managed to solve some of the Diophantine equations, as well as being the first to give solutions of 2nd degree polynomial equations that also involved negative numbers. He realized that negative numbers could be seen as "debts" and positive numbers as "fortune", very much like an accountant of today would, and thus made a huge breakthrough in mathematics. The next great step in solving equations came with the development of algebra, which got its name from the Arab mathematician Muhammad ibn Musa al-Khwarizmi, or rather his book "Kitab al-jabr wa al-muqabalah", the word al-jabr being the basis of the modern word algebra. Al-jabr means "restoration" or "completion", which is quite fitting considering the importance of the mathematical development this would entail. His books weren't groundbreaking in new material; instead it was the systematic treatment of the solutions to the quadratic equation that was the real genius in them. However, the complete set of solutions of the quadratic equation didn't appear in Europe until the twelfth century, in Spain. The implementation of the quadratic formula on a computer is, however, not as straightforward as one would initially think.
We assume the equation is of the form a·x^2 + b·x + c = 0. The solution is the well known formula, and I'm of course thinking about this one: x = (-b ± sqrt(b^2 - 4ac)) / (2a). However, you'll be asking for trouble if you actually implemented the formula this way on a computer. The reason is that if the coefficients a or c are very close to zero, you could get huge truncation errors. The correct implementation to find the roots on the computer is: q = -(1/2)·[ b + sign(b)·sqrt(b^2 - 4ac) ], with the two roots given by x1 = q/a and x2 = c/q. Even if the coefficients are complex the computer formula still holds, although one needs to take into account how to take the sign of the square root: q = -(1/2)·[ b + sgn( Re( b*·sqrt(b^2 - 4ac) ) )·sqrt(b^2 - 4ac) ]. In the formula above, Re stands for the real part and the asterisk is the complex conjugate of the complex number. (A short numerical sketch of this scheme, in Python, is included at the end of this article.)
The 3rd and 4th degree polynomial equation
This is often referred to as the cubic or the quartic equation or function and, not surprisingly, it turns up when we want to find the volume of something. The actual general equation wasn't solved until the sixteenth century, although a few special cases were solved by the Babylonians and a few more were given by the Persian poet Omar Khayyam in the twelfth century. However, there wasn't a real tangible need for the solution to the cubic equation. No one was waiting for it to be discovered; it was more of a mental challenge, sort of the Olympic games of mathematics, that would determine the greatest intellect of the day. The first partial solution to the cubic equation would come from the oldest currently open university, the University of Bologna, which has been open since its establishment in 1088. After having been presented with a problem in 1501 that involved third degree polynomial equations, Scipione del Ferro decided to work on the problem, and around 1515 he managed to find a method for solving the cubic equations that had the form x^3 + mx = n. Del Ferro did not publish his result, which unfortunately was quite normal in those days, and only told his student Antonio Fiore and his son-in-law about his discovery on his own death bed. Fiore then seemed to think that the formula was his to use as he pleased from that point on, but did not publish it right away, instead waiting for the right moment to appear. So when Niccolò Tartaglia in 1530 announced that he could solve some problems concerning the cubic equations, Fiore decided that his moment had come and challenged Tartaglia to a mathematical dispute. Each contestant would give the other one 30 problems to solve, and the loser would pay a monetary prize to the winner. Each of them would have forty or fifty days to solve the problems. By the time the problems were handed over to Tartaglia, he managed to solve all of them within the space of two hours! Fiore, on the other hand, could not solve any of the problems given to him, since he did not know how to handle equations of the form x^3 + mx^2 = n, so Tartaglia won the competition. In 1539, after a massive persuasion campaign, a character by the name of Gerolamo Cardano (he actually earned his allowance by gambling when he was a student, and was known for being loud and rude to the people around him) managed to persuade Tartaglia to reveal the formula, under the condition that Cardano would not reveal it. However, Cardano found out about del Ferro's solution from del Ferro's son-in-law, and decided that he was not bound by the agreement with Tartaglia, as he would present del Ferro's solution and not Tartaglia's, so he published the result in his book Ars Magna.
This book is by many contemporary mathematicians considered to be the start of modern algebra, and it involves solutions with complex numbers, although Cardano did not understand this in detail, as some of the solutions from Tartaglia involved the square root of a negative number. (Rafael Bombelli is often considered the discoverer of complex numbers, as he did far more studies of the subject.) After the book was published, Tartaglia immediately posed a challenge to Cardano, who was a rather poor mathematician by the standards of Tartaglia, and he promptly refused. However, Cardano's student, Lodovico Ferrari, sent numerous public challenges to Tartaglia, which Tartaglia declined. Eventually Tartaglia was offered a job at a university, on the condition that he would beat Ferrari in a dispute. Ferrari had discovered a general way of solving the cubic equation, which was not known to Tartaglia; he had also found the solution to the quartic equation as early as 1540, but this required the solution of the cubic equation, so it wasn't published until Cardano found out about del Ferro's solutions. Ferrari won the dispute with Tartaglia, who left just before the first day of the dispute was over. (Ferrari was quite a character, as he had lost all the fingers on his right hand at the age of seventeen in a brawl.) His career skyrocketed from this point on, though he was allegedly later poisoned by his sister and died. Who the real discoverer of the third degree formula was is, as you can see, quite complicated, although the solution to the 4th degree equation seems to be Ferrari's work alone. These formulas are not usually implemented on a computer, as the solutions found by numerical techniques are nearly always better than those obtained with them. If you would still like to use the explicit formulas, you should implement Viète's formula, which uses trigonometric functions, instead of Ferrari's solutions.
Higher degree polynomial equations
After the quartic equation was solved using algebra, many tried to solve the quintic, or 5th degree, polynomial equation, and they all failed. That it actually can't be done was not proved until Abel found a general proof in 1823. The theorem is known today as the Abel–Ruffini theorem or as Abel's impossibility theorem. The reason for the double name is that Ruffini gave an incomplete proof of the theorem in 1799, which Abel didn't know about until 1826. After reading and studying it, he said that Ruffini's work was so complicated to understand that he wasn't sure whether it was correct. It is, however, important to know what the proof actually means, and crucially, what it doesn't mean. Abel's proof simply states that one cannot find a general solution to all the roots of a quintic or any higher order polynomial equation by the use of algebra (that is, in terms of radicals). Abel used a generalization of the Euler integrals in his work, and the German mathematician Jacobi was beside himself that this discovery had gone unnoticed by the mathematical community. One can, however, find the solutions to 5th degree equations by the use of numerical methods (Newton-Raphson) or by the use of elliptic integrals. If one uses Évariste Galois' theory, one can also find out what type of solutions one would find.
Évariste Galois also proved, independently of Abel and Ruffini's work, that the 5th degree polynomial equation could not be solved in general, and his proof was published posthumously in 1846.
The problem with explicit formulas
There are problems with the explicit formulas that have to do with the finite storage space on the computer, which can in some instances give such a high error that it renders the calculated solutions far from the true values. In practice, solutions for the 3rd degree and higher are almost always better with the Jenkins-Traub algorithm or other similar techniques than the solutions calculated with the explicit formulas. There are, however, problems with the numerical estimation of polynomial roots as well, since anyone with sufficient understanding of the algorithms behind them could easily construct an example that would fail to converge. Take the equation (1) below: Given that it is a 5th degree polynomial, solutions can only be found using a numerical algorithm, but we know, from the Fundamental Theorem of Algebra, that the equation will have exactly 5 real or complex solutions, as Gauss and others have proved. The exact solutions to equation (1) are: Jenkins-Traub in C# and VB would not converge (most likely due to the double precision that is chosen for the storage of data, as well as the epsilon, min and max values that were "exported" from the float library in VC++). In fact, it can always be problematic for an algorithm of this kind to converge, given the number of multiple roots. The reason is that the algorithm extracts (deflates) the found roots from the original equation, and with too large a numerical error this can result in no further solutions being found. So make a note of the fact that any method which uses Newton type iterations could have problems with multiple roots. This also applies to circumstances where the roots are close together (or very far apart from each other). Jenkins-Traub indeed uses a Newton type method to find the roots, and this could also have other problems. If a polynomial with no linear term is zero at zero, the iterative technique would fail to converge, given that both the function and its derivative are zero at zero. There is another quite surprising thing about polynomials that have multiple roots: the derivative polynomial will have one fewer of the same roots. Take the example from equation (1); if we differentiate the equation we get:
Several different kinds of methods are employed to find the roots, among them Ferrari's solution, the depressed biquadratic equation and many others. The Wolfram site provides some of the formulas used, and there are further links to the numerical algorithm that is used: C++ source code: Other links:
{"url":"http://www.codeproject.com/Articles/552678/Polynomial-Equation-Solver?msg=4515137","timestamp":"2014-04-17T17:06:12Z","content_type":null,"content_length":"144866","record_id":"<urn:uuid:b68736ec-9279-4e26-a5a1-5ec524799f15>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00464-ip-10-147-4-33.ec2.internal.warc.gz"}
S.western Moving and Storage wants to have enough money to purchase a new tractor-trailer in 5 years at a cost of $290,000. If the company sets aside $100,000 in year 2 and $75,000 in year 3, how much will the company need to set aside in year 4 in order to have the money it needs if the money set aside earns 9% per year?
{"url":"http://openstudy.com/updates/52cb88efe4b00a421e4b6963","timestamp":"2014-04-16T22:43:12Z","content_type":null,"content_length":"35397","record_id":"<urn:uuid:e2270e5a-32ce-4bac-8b4a-1c8a721dd596>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00662-ip-10-147-4-33.ec2.internal.warc.gz"}
Gary, IN Calculus Tutor Find a Gary, IN Calculus Tutor I earned High Honors in Molecular Biology and Biochemistry as well as an Ancient History (Classics) degree from Dartmouth College. I then went on to earn a Ph.D. in Biochemistry and Structural Biology from Cornell University's Medical College. As an undergraduate, I spent a semester studying Archeology and History in Greece. 41 Subjects: including calculus, chemistry, physics, English ...I have taught a wide variety of courses in my career: prealgebra, math problem solving, algebra 1, algebra 2, precalculus, advanced placement calculus, integrated chemistry/physics, and physics. I also have experience teaching physics at the college level and have taught an SAT math preparation course. For the past three years I have served as the math department chair. 12 Subjects: including calculus, physics, geometry, algebra 1 ...I have worked with them in the classroom, individually, and in groups. Over the years I have become familiar with most types of disabilities. I have worked with teenagers with ADD/ADHD for the past 10 years. 24 Subjects: including calculus, chemistry, special needs, study skills ...My passion for education comes through in my teaching methods, as I believe that all students have the ability to learn a subject as long as it is presented to them in a way in which they are able to grasp. I use both analytical as well as graphical methods or a combination of the two as needed ... 34 Subjects: including calculus, reading, writing, statistics ...Other topics in which I am well versed are formulation of proofs, which is a major component of most discrete math courses, as well as introductory logic. I've been programming ever since I was a child (1990 or so, I was 9 years old). I began programming in GWBASIC, and graduated to more complex... 22 Subjects: including calculus, geometry, statistics, precalculus Related Gary, IN Tutors Gary, IN Accounting Tutors Gary, IN ACT Tutors Gary, IN Algebra Tutors Gary, IN Algebra 2 Tutors Gary, IN Calculus Tutors Gary, IN Geometry Tutors Gary, IN Math Tutors Gary, IN Prealgebra Tutors Gary, IN Precalculus Tutors Gary, IN SAT Tutors Gary, IN SAT Math Tutors Gary, IN Science Tutors Gary, IN Statistics Tutors Gary, IN Trigonometry Tutors
{"url":"http://www.purplemath.com/Gary_IN_calculus_tutors.php","timestamp":"2014-04-21T11:14:50Z","content_type":null,"content_length":"23900","record_id":"<urn:uuid:ef234776-96bc-4c60-8981-98a02cde721c>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00120-ip-10-147-4-33.ec2.internal.warc.gz"}
Past Analysis Seminars Fall 2011 Wednesday, October 26, 2011 at 10:00 am in Hume 331 Ali Al-Sharadqah Statistical Analysis of Curve Fitting in Errors-In-Variables Models Fall 2010 Thursday, December 2, 2010 at 2:00 pm in Hume 331 M. M. Czerwinska Noncommutative Symmetric Spaces of Measurable Operators Friday, November 19, 2010 at 3:30 pm in Hume 331 Richard M. Aron Smooth Surjections from Non-Separable Banach Spaces Thursday, November 11, 2010 at 8:00 am in Hume 321 Michael Northington V An Asymptotic Formula for the Taylor Series Coefficients of Functions with Algebraic Singularites Thursday, November 4, 2010 at 8:00 am in Hume 321 Erwin Mi˜na - Diaz Strong Asymptotics for Orthogonal Polynomials in Weighted Bergman Spaces of the Unit Disk Thursday, October 28, 2010 at 8:00 am in Hume 321 Iwo Labuda Vector Measures Without Weak Compactness: Part III Thursday, September 23, 2010 at 8:00 am in Hume 321 Iwo Labuda Vector Measures Without Weak Compactness: Part II Thursday, September 16, 2010 at 8:00 am in Hume 321 Iwo Labuda Vector Measures Without Weak Compactness Friday, September 10, 2010 at 2:00 pm in Hume 321 Peter Dragnev Ping pong balayage and convexity of the Riesz and logarithmic equilibrium measures Spring 2010 Wednesday, April 21, 2010 at 1:00 pm in Hume 321 Gerard Buskes Orthomorphisms, Polynomials, and Ortho–Symmetry Wednesday, April 7, 2010 at 1:00 pm in Hume 321 Vlad Timofte Remainder Maps II Graduate Semimar Wednesday, March 31, 2010 at 4:00 pm in Hume 321 Elisabeth Udemgba The Henstock–Kurzweil Integral in Riesz Spaces Wednesday, March 24, 2010 at 1:00 pm in Hume 321 Dr. Guillermo Curbera, University of Sevilla, Spain Extensions of the classical Cesàro operator on Hardy spaces Departmental Colloquium Friday , March 26, 2010 at 3:00 pm in Hume 101 Dr. Guillermo Curbera, University of Sevilla, Spain Mathematicians of the World: Unite! Wednesday, March 10, 2010 at 1:00 pm in Hume 321 Dr. Vlad Timofte Remainder Maps Wednesday, March 3, 2010 at 1:00 pm in Hume 321 Dr. Erwin Mi˜na–Diaz Asymptotics of polynomials orthogonal over the complex unit disk with respect to a positive polynomial weight--PART II Wednesday, February 25, 2010 at 1:00 pm in Hume 321 Dr. Bernardo Cascales, Universidad de Murcia, Spain, Domination by Second Countable Spaces Departmental Colloquium Friday, February 27, 2010 at 4:00 pm in Hume 321 Dr. Bernardo Cascales, Universidad de Murcia, Spain, Scalar, vector and multi-valuedintegration Wednesday, February 18, 2010 at 1:00 pm in Hume 331 Dr. Vladimir Troitsky, University of Alberta, Invariant subspaces of finitely strictly singular operators Wednesday, February 10, 2010 at 1:00 pm in Hume 331 Dr. Erwin Mi˜na –Diaz Asymptotics of polynomials orthogonal over the complex unit disk with respect to a positive polynomial weight-Part II Wednesday, February 3, 2010 at 1:00 pm in Hume 331 Dr. Erwin Mi˜na –Diaz Asymptotics of polynomials orthogonal over the complex unit disk with respect to a positive polynomial Wednesday, January 27, 2010 at 3:00 pm in Hume 331 Organizational Meeting Fall 2009 Thursday, November 5, 2009 at 9:30 am in Hume 331 Dr. Aida Timofte Monday , October 19, 2009 at 2:00 PM in Hume 331 Dr. Laurent Baratchart, Vanderbilt University Rational approximation and inverse problems Thursday, October 15, 2009 at 9:30 AM in Hume 321 Dr. Iwo Labuda Geometry of L^1(\mu)) for vector valued measure \mu Part III Thursday, September 24, 2009 at 9:30 AM in Hume 321 Dr. 
Iwo Labuda Geometry of L^1(\mu)) for vector valued measure \mu Part II Thursday, September 24, 2009 at 9:30 AM in Hume 321 Dr. Iwo Labuda Geometry of L^1(\mu)) for vector valued measure \mu Thursday, September 17, 2009 at 9:30 AM in Hume 331 Dr. Aida Timofte Friday, September 11, 2009 at 3:00 pm in Hume 331 Dr. Przemo Kranz 25 Years-A Reflection Thursday, September 3, 2009 at 9:30 am in Hume 331 Prof. Wen Song, Harbin Normal University Bregman distance, approximate compactness and Chebyshev sets in Banach spaces Spring 2009 Friday, March 27, 2009 at 2:00 pm in Hume 331 Dr. Anton Schep, University of South Carolina Duality of Integral Operators in Tuesday, March 3, 2009 at 2:00 pm in Hume 321 Dr. Vlad Timofte The solution of a long standing open problem: Finding a good differentiation theory on locally convex spaces Thursday, February 19, 2009 at 2:00 pm in Hume 331 Dr. Maxim Zinchenko, Cal Tech Spectral Theory for Jacobi Matrices Wednesday, February 4, 2008 at 3:00 pm in Hume 331 Dr. Vlad Timofte Fall 2008 Wednesday, November 12, 2008 at 2:00 pm in Hume 331 Dr. Qingying Bu The Grothendieck Property for Injective Tensor Products Wednesday, October 29, 2008 at 2:00 pm in Hume 331 Dr. Gerard Buskes Vector Lattices in Boolean Algebras Friday, October 17, 2008 at 2:00 pm in Hume 331 Dr. Abey Lopez, Vanderbilt University Friday, October 10, 2008 at 2:00 pm in Hume 331 Dr. Peter Dragnev, Indiana-Purdue University, Fort Wayne, Indiana Electrons, Buckyballs, and Orifices: Nature's Way of Minimizing Energy Wednesday, September 24, 2008 at 2:00 pm in Hume 331 Erwin Miña-Díaz Conformal maps and orthogonal polynomials for planar regions with analytic boundaries, part II Wednesday, September 10, 2008 at 2:00 pm in Hume 331 Erwin Miña-Díaz Conformal maps and orthogonal polynomials for planar regions with analytic boundaries Fall 2007 A. Louise Perkins, Pippin Welmon, & Keelia Altheimer Long Beach, MS Thursday, April 13, 2004 in Hume Hall Room 331 at 3:00 pm Rings of Real Analytic and Real Entire Functions Professor Melvin Henriksen Harvey Mudd College Wednesday, March 29, 2004 in Hume Hall Room 331 at 4:00 pm Dr. Thomas Schlumprecht Texas A&M University Thursday, March 24, 2004 in Hume Hall Room 331 at 3:00 pm Dr. Thomas Schlumprecht Texas A&M University Wednesday, March 23, 2004 in Hume Hall Room 331 at 3:00 pm Dr. George Anastassiou University of Memphis Department of Mathematics Wednesday, March 9, 2004 in Hume Hall Room 331 at 3:00 pm Dr. Vlad Timofte University of Mississippi Department of Mathematics Wednesday, February 16, 2004 in Hume Hall Room 331 at 3:00 pm Dr. Iwo Labuda University of Mississippi Department of Mathematics Wednesday, February 2, 2004 in Hume Hall Room 331 at 3:00 pm Dr. Iwo Labuda Department of Mathematics University of Mississippi Friday, January 30, 2004 in Hume Hall Room 331 at 3:00 pm Fall 2003 Parallel Thinking, a Mathematics Mini-Conference November 14, 2003 3:00 PM Michael M. Neumann Weighted composition and partial differential operators from an algebraic point of view 3:35 PM Robert Page Title: Bilinear maps of order bounded variation 4:45 PM Gerard Buskes and Koos Grobler Polar Decomposition of order bounded disjointness preserving maps 5:20 PM Hathai Wattanataweekul Flows in infinite networks November 14, 2003 Titchmarsh theorem on functions holomorphic in the upper half-plane and their boundary functions Dr. 
Iwo Labuda Department of Mathematics University of Mississippi Wednesday, November 5, 2003 in Hume Hall Room 331 at 2:00 pm Fixed point free nonexpansive mappings in Banach spaces Dr. Chris Lennard Department of Mathematics University of Pittsburgh Thursday, October 16, 2003 in Hume Hall Room 331 at 4:00 pm Some more theorems on disjointness preserving operators Dr. Gerard Buskes Department of Mathematics The University of Mississippi Wednesday, October 1, 2003 in Hume Hall Room 331 at 4:15 pm Operators representable by random measures and conditional expectations Dr. Koos Grobler University of Mississippi Wednesday, September 17, 2003 in Hume Hall Room 331 at 4:15 pm Some theorems on disjointness preserving operators Dr. Gerard Buskes Department of Mathematics The University of Mississippi Wednesday, September 17, 2003 in Hume Hall Room 331 at 4:15 pm Spring 2003 Support Planes and a Wonderful Theorem of Bishop and Phelps Dr. Joe Diestel Kent State University Kent, Ohio Thursday, March 6, 2003 in Hume Hall Room 331 at 3:00 pm A Representation Theorem for D-Rings Dr. Karim Boulabiar Départment de Mathématiques Université de Carthage Monday, March 3, 2003 in Hume Hall Room 331 at 2:15 pm Some Properties of Transitive Operators Gleb Sirotkin Department of Mathematics The University of Mississippi Thursday, January 23, 2003 in Hume Hall Room 331 at 2:30 pm
{"url":"http://www.olemiss.edu/depts/mathematics/seminars-analysis.html","timestamp":"2014-04-21T12:15:35Z","content_type":null,"content_length":"53225","record_id":"<urn:uuid:8dcfd323-a010-431f-b4f0-bca4cc92f198>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00319-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions
Topic: Help with Manipulate
Replies: 1   Last Post: Nov 15, 2013 6:54 AM
Help with Manipulate
Posted: Nov 14, 2013 1:50 AM
I need to create a simple demonstration based upon an exercise on poisson counting processes. Here is the code

Manipulate[n/Total[z],Style["Poisson Arrival Times",18,Bold],"",Delimiter,{{z,Flatten[{0,RandomVariate[Quiet@ExponentialDistribution[\[Lambda]],n-1]}]},Button["random",z=Flatten[{0,RandomVariate

The idea is to generate a sequence of random numbers, then take the sum of it and keep the last value. Every time I hit "random" a new sequence is created. The above code returns the following error msg

RandomVariate::array: "The array dimensions -1 + n given in position 2 of RandomVariate[ExponentialDistribution[λ], -1 + n] should be a list of non-negative machine-sized integers giving the dimensions for the result."

and some weird output. After hitting "random", the output is what I expect but the whole Manipulate output is still red indicating that there are problems. Could you be so kind to point out what I did wrong and how to fix it? I feel that I did not quite get how Manipulate works. Many thanks
{"url":"http://mathforum.org/kb/thread.jspa?threadID=2605903&messageID=9324474","timestamp":"2014-04-16T07:27:03Z","content_type":null,"content_length":"18276","record_id":"<urn:uuid:b509a97e-4fb8-42e7-9a31-f8ccdab3106d>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00543-ip-10-147-4-33.ec2.internal.warc.gz"}
This post is a bit of a divergence from my norm. I'm going to editorialize a bit about mathematics, type classes and the tension between different ways of fitting ideas together, rather than about any one algorithm or data structure. I apologize in advance for the fact that my examples are written from my perspective, and as I'm writing about them unilaterally my characterization is likely to be unfair. Something in here is likely to offend everyone. The term "generalized abstract nonsense" was originally coined by Norman Steenrod as a term of endearment, rather than of denigration. e.g. Saying in passing "this is true by abstract nonsense" when referring to a long-winded proof that offers no insight into the domain at hand. Now, some mathematicians like to refer to category theory as generalized abstract nonsense, not out of endearment, but because they do not find immediate value in its application. Among category theorists this view is seen as somewhat daft, as they view category theory as a sort of Rosetta Stone for mapping ideas from one area of mathematics to another -- a lingua franca that lets you express commonalities and cross-cutting concerns across domains. To me category theory serves as a road map to new domains. I don't know much about rational tangles, but if I know that with 2 dimensions of freedom, they form a braided monoidal category letting me tie myself in er.. knots, but in 3 or more dimensions they form a symmetric monoidal category, letting me untie all knots. With this, I can work with them and derive useful results without caring about irrelevant details and dealing with rope burn. Just as some general mathematicians look down with various degrees of seriousness upon generalized abstract nonsense, even some category theorists look down upon what the analyst Antoni Zygmund famously referred to as centipede mathematics: You take a centipede and pull off ninety-nine of its legs and see what it can do. In this sense working with a Semigroup is just working with a neutered Monoid that has had its unit removed. The usual critique of "centipede mathematics" is that it lacks taste. With such colorful metaphors, it'd be hard to argue otherwise! The negative view of this practice seems to stem from the era of folks evaluating grant proposals to see whether or not it was likely to lead to interesting research that they could use. With fewer parts to use, it would seem that one would be unlikely to find new results that benefit those solely concerned with the larger mathematical object. But in many ways, all of modern abstract algebra can be seen as an exercise in centipede mathematics. Mathematicians started with the real numbers, which sit on a line, and, coincidentally, with suitable markers, look an awful lot like a centipede. Starting from there at the turn of the last century, mathematicians kept ripping off legs to get fields, rings, groups, monoids, etc. Of course, all of these folks are mathematicians, and many mathematicians famously look down their noses at applied mathematicians. Consider the famous claim by G. H. Hardy: I have never done anything 'useful'. No discovery of mine has made, or is likely to make, directly or indirectly, for good or ill, the least difference to the amenity of the world. 
Mind you that didn't stop folks from later putting his ideas to work in thermodynamics and quantum physics, and the gap between pure mathematics and theoretical physics seems to be narrowing every To me the difference between a mathematician and an applied mathematician is one of focus. Often a mathematician will start with an abstraction and try to find things that fit, to help give intuition for folks for the more general concept. Conversely, an applied mathematician will typically start with a thing, and try to find abstractions that capture its essence, giving rise to insight into its behavior. Mathematics is used to provide a bit of vaseline for the lens, blurring away the parts you don't want to focus on. We aren't pulling legs off a centipede and seeing if it can go. We're trying to understand the behavior of a spider without gluing an extra 92 legs onto its thorax and then wondering why it lacks the strength to climb back into its web. While not all centipede mathematicians are applied mathematicians, some really do just want to pull apart their abstractions in a Mengelean fashion and understand why they tick, but the art of applying mathematics is largely an exercise in centipede mathematics. At one extreme, roboticists often pull or blow the legs off their centipedes literally to see how they'll adjust their gait. Of course, all of these folks are mathematicians, so they can look down on the lowly computer scientist, the practitioner of an artform that is delightfully neither truly about computers nor, properly, a science. Virtually everything we touch in computer science arose from a form of centipede mathematics. Constructive logic is the natural vocabulary of computer science. You obtain it by ripping double-negation out of classical logic and watching the system hobble along. The idealized category of Haskell types Hask is effectively used as a constructive analogue to Set. We can, however, turn this perspective around. In Greek mythology, Procrustes served as the final trial of Theseus during his travels to Athens. He would offer travelers along the road food and a night's lodging in a bed he promised would be a perfect fit. Upon their repose, he would proceed to set to work on them with a hammer to stretch them to fit, or an axe to cut off any excess length, forcing them to fit the bed. Worse, Procrustes kept two beds, so no traveler could ever measure up. While Theseus ultimately triumphed over the giant Procrustes, forcing him to fit his own bed by cutting off his head and feet, the concept lives on. In literary analysis, a Procrustean bed is an arbitrary standard to which exact conformity is enforced. Traditional mathematicians finds themselves often forced into the role of Procrustes with many such theoretical beds at their disposal, they can proceed by cutting off bits or adjoining pieces to satisfy the requirements of whatever mathematical construct in which they want to work. Conversely, the applied mathematician or computer scientist often finds themself in a situation where they care more about their problem patient, but can't find a bed to fit. Being a bit less psychopathic, they must set aside mathematical taste and adjust the bed. This is an exercise in centipede mathematics, the bed itself doesn't fit, so they rip parts off of it until it does. We don't do this because we hate the bed, but because we are concerned for the patient. The mathematician is looking out for the needs of the abstraction. 
The applied mathematician or computer scientist is looking out to maximize the fit for a given domain. In a recent thread on Reddit it was suggested that every Semigroup should be extended to a Monoid and then we could be done with the need for the more refined concept. We can actually often accomplish this. Consider this Semigroup:

newtype First a = First { getFirst :: a }

instance Semigroup (First a) where
  m <> _ = m

If you really want a Monoid you can lay about with Procrustes' hammer and adjoin a unit.

newtype First a = First { getFirst :: Maybe a }

instance Semigroup (First a) where
  First Nothing <> m = m
  m <> _ = m

instance Monoid (First a) where
  mempty = First Nothing
  First Nothing `mappend` m = m
  m `mappend` _ = m

Now our object has grown a bit more complicated, but we can use it with all the usual Foldable machinery. Sometimes the patient may be better off for his extra parts, but you may kill other properties you want along the way, or have to consider impossible cases. Having Semigroup as a more fine-grained constraint does enable us to handle the empty case once and for all, and lets us fold over a NonEmpty container with more things and capture the non-empty nature of something via Foldable1, and it simplifies First's implementation considerably, but requires you to use something like Option to lift it into a Monoid you can use for a general purpose list. Here this is simply a matter of taste, but that isn't always the case. If you try to stretch any particular Comonad to fit Alternative, you have to deal with the fact that

extract empty :: a

This strongly implies no such beast can exist, and so the Comonad must die to make room for the Alternative instance. We have to give up either empty or extract. The type system has taken on the role of the serial killer Saw, sadistically forcing us to choose which of our friends will lose a limb. Even if you want to disavow centipede mathematics, you're going to be forced to occasionally put your abstractions on the chopping block, or abandon rigor. You may be able to upgrade an Applicative parser to one that is a Monad, perhaps at the cost of parallelism. Haskell's type system is very good at expressing a few well chosen abstractions. Monad used to be the golden hammer of the Haskell community, until we found out that Applicative functors exist and are useful for capturing context-free code, where the control flow doesn't vary based on previous results. Arrow was introduced along the way, but later had a smaller Category class carved out of it. Typeclasses in Haskell tend to force us into a small set of bed sizes, because it is relatively bad at code reuse across fine-grained class hierarchies. Each attempt at refining the class hierarchy carries with it a price: library implementors and users who instantiate the classes must now write more methods. Worse, they must often do so without access to the full gamut of extra laws obtained further down in the class hierarchy, because they don't have a tenable way of offering defaults for superclass methods when they write a subclass. Even with one of the superclass default proposals, you get no real code reuse for any form of transformer, and the existing default signature mechanism runs "the wrong way" in such a way that it even forces you to put everything in the same module.
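To make the Option lifting mentioned above concrete, here is a minimal sketch of the idea: adjoining a unit to an arbitrary Semigroup once and for all, so the empty case is handled in a single place. It is written from memory of the semigroups package, so the names and class layout may differ from the real library; treat it as illustrative rather than as that package's actual source.

-- Illustrative only: a wrapper that adjoins a unit (Nothing) to any
-- Semigroup, turning it into a Monoid without repeating the empty case
-- for each type. With a recent GHC, Semigroup is in the Prelude; older
-- compilers need the semigroups package.
newtype Option a = Option { getOption :: Maybe a }

instance Semigroup a => Semigroup (Option a) where
  Option Nothing  <> b               = b
  a               <> Option Nothing  = a
  Option (Just x) <> Option (Just y) = Option (Just (x <> y))

instance Semigroup a => Monoid (Option a) where
  mempty  = Option Nothing
  mappend = (<>)

With this, a fold over a possibly empty container can accumulate in Option a and be unwrapped with getOption at the end, while Foldable1 over a NonEmpty container can keep working with the bare Semigroup.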
The initial arguments against a fine-grained class hierarchy in Haskell arose from the same place as the denigration of centipede mathematics, but they are buttressed by the pragmatic concern that there is real pain in an accurate class hierarchy caused by the design of the language. These are valid concerns! Arguments in favor of a finer-grained hierarchy arise from a desire to avoid flooding the namespace with redundant operations, and to capture the relationship between things. It arises from caring about the needs of the things you want to be able to reason about, rather than capturing just the examples that happen to measure up to an arbitrary standard. These are also valid concerns! My semigroupoids package was originally written because I couldn't work with product categories in Haskell, but needed them in code. I still can't, due to the presence of Any as a distinguished member of every kind in Haskell. Along the way, I dug up the -oid-ification of a Semigroup, also known as a semicategory, to capture the portions of the product category that we can write nicely in Haskell today. When we look at the Kleisli category of things that are not quite a Monad, or the static arrow category of things that are not quite Applicative, we wind up with merely a Semigroupoid rather than a Category.

But where do we find such examples? Consider the lowly IntMap. It cannot be made an instance of Applicative or Monad directly. We have three options to proceed if we want to work with something like (<*>) or (>>=) on it.

1. We can clutter the namespace with a completely ad hoc combinator that we can't abstract over.
2. We can try to adjoin a universal default. This means that you have to kill the Foldable and Traversable instances for it, or deal with the fact that they basically return nonsense. It also means that you either have to give up the ability to delete from the map, or accept the fact that you aren't really modeling (Int -> Maybe b) any more.
3. We can engage in a bit of centipede mathematics, ripping off the pure and return from Applicative and Monad respectively to get a semi-Applicative and a semi-Monad, which I unartfully called Apply and Bind, in semigroupoids.

Now we've respected the needs of the domain object, at the expense of a finer grained class hierarchy. In a perfect world, from the perspective of the centipede mathematician, Apply and Bind would occupy a place of privilege in the class hierarchy. However, to the Procrustean mathematicians who are only really concerned with Applicative and Monad, and who can't be bothered to deal with the finer grained hierarchy, such a refinement of the hierarchy merely adds cognitive overhead. They are happy to discard these examples in favor of a simpler, more teachable, meta-theory. Both of these perspectives are valid. To unfairly cast Oleg Kiselyov in the role of Procrustes with Edwin Brady as his understudy, we can look at the modeling of extensible effects in this same light. Lawvere theories offer us too small a bed to fit many of the effects we care to model, such as continuation passing. This is why that effect is ignored by Edwin Brady's handling of effects for Idris. They just don't fit. On the other hand, Oleg offers us a second bed that is much bigger; his Eff monad is the result of applying Codensity to a Lawvere Theory. Now it's the job of the handler to deal with the impossible cases. We're forced to set about with Procrustes' hammer to embed many monads, like Reader, into a domain that is too large. Codensity ((->) s) a ~ forall r.
(a -> s -> r) -> s -> r is strong enough to implement all of CPS'd State. If you pass it (,) for its first argument you get s -> (a, s)! It is only by convention and hiding that we can restrict such a reader down to size. This means that the compiler has really no chance of ever optimizing the code properly, as it must always live in fear that you could change the environment, even though the handler never will. This forces a single thread of execution through otherwise parallelizable code. We improve the adjustability of this bed by switching from the Codensity construction to an arbitrary right Kan extension, like my old monad-ran package did. This lets the bed conform to the shape of

Ran Identity ((->) s) a ~ Yoneda ((->) s) a ~ forall r. (a -> r) -> s -> r

This is just a CPS'd function; when passed id we recover (s -> a). It is no longer large enough to model all of State and properly captures the features we want. Yet even this bed is still too small for some patients. The infinite use cases of lazy writer and lazy state monads still cannot be made to fit at all. This destroys many interesting use cases of the Tardis. Admittedly they are the kinds of things that tie users in knots. Perhaps like an appendix removal, your patients will not miss those parts. However, many monads for syntax trees, like the transformer used by bound, cannot be adapted without an asymptotic performance hit, causing you to redo whole calculations every time you want to pattern match on the result. From the standpoint of the Procrustean mathematician, the extensible effects approach is fairly elegant: it provides a single bed into which many common effects can fit. The elegance of this approach makes it very appealing! However, it isn't roomy enough to hold all of the current effects we can capture with monad transformers. Without upgrading to Ran, many effects are forced into a model that is too big, where you have to handle many impossible conditions that are merely ruled out by convention. On the other side, the inability to handle a number of cases that we do use in practice is also somewhat of a bad thing. This is why I can bring myself to view extensible effects as a useful way to think about effects, but I can't view it as a full replacement for the monad transformer approach. Monad transformers do pay an O(n^2) complexity tax, describing how everything commutes over everything else, but n is typically not that big of a number. The ability to handle the extra cases that don't fit the extensible effects approach means I can't bring myself to just relegate them to the waste bin of history. As an applied mathematician / computer scientist, I still need to handle those effects! Concerns about the need to write lift . lift . lift seem to arise from a particularly awkward style of use that I frankly never see in real code. There are ways to handle this and the multiple-state issues using tools we have, such as lenses. I'll relegate both of these concerns to another post. The lens package is very much an exercise in building a very fine grained set of distinctions and trying to make it so you can say very precisely what constraints you want to impose on each operator. Alternative designs like fclabels capture a different trade-off between factors. I, personally, find that the ability to express a Fold or Getter is worth the added complexity of the representation. Your mileage may vary.
fclabels forces you to stretch each such thing into a form where you cannot express any laws, and then lays about with Procrustes' axe, cutting off a number of abstractions we've found useful at the top end of the lens ecosystem, for Indexed traversals and the like. It is a pretty clean exercise in Procrustean mathematics, though. If you fit into the abstraction it models comfortably, you'll never feel the bite of the axe. Even lens, with its deep hierarchy, occasionally cuts you with Procrustes' axe. There are some constructions that just don't fit the domain. For instance lens offers no tool for validating input -- a Lens' s a must accept any such a. There is a tension between these design criteria as with Comonad and Alternative. Something had to give. Lens chooses a different point on the design curve than fclabels. The choice we made was to gain a great deal of expressive power and ability to reason about the code with laws in exchange for added complexity. It is important to realize that modifying the abstraction/bed or modifying the problem/patient are both options. Sometimes you can easily adapt the problem to the mathematical construct, and sometimes the mathematical construct can be easily adapted to the problem. When we add laws and operations to an abstraction, we wind up with fewer examples. If we work parametrically over an abstraction, the weaker the requirements we put on our inputs, the more scenarios we can cover. Other times one or the other is up against a hard constraint. It is very important to distinguish between the normative concerns of taste and the very real concerns that sometimes one or the other of these things cannot give, as in the Comonad/Alternative case above. To argue against a straw man, giving up either one of these degrees of freedom unilaterally strikes me as absurd. A hundred years ago, nobody cared about the foundations of mathematics; then came Russell and Whitehead, but their encoding was in many ways too dense and full of incredibly fine-grained distinctions. Competing tensions like this gave us the mathematical world we inhabit today. The choice of how to balance these factors, to abstract over enough problem domains to be useful without needlessly quibbling over impossibly fine-grained distinctions, is really the true role of taste in library design and in mathematics, and tastes vary over time.

October 25th, 2013
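As a footnote to the IntMap discussion above, here is roughly what the "semi-Applicative" idea looks like in code. The class name follows semigroupoids, but the method and the instance below are re-sketched for illustration and are not guaranteed to match that library's actual definitions.

import qualified Data.IntMap as IM

-- A Functor that supports zippy application but has no 'pure'.
class Functor f => Apply f where
  (<.>) :: f (a -> b) -> f a -> f b

-- IntMap has no sensible 'pure' (at which key would the value live?),
-- but applying a map of functions to a map of arguments on the keys
-- they share is perfectly well behaved.
instance Apply IM.IntMap where
  fs <.> xs = IM.intersectionWith ($) fs xs

A Bind class giving (>>=) without return can be sketched along the same lines.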
{"url":"https://www.fpcomplete.com/user/edwardk/editorial/procrustean-mathematics","timestamp":"2014-04-18T10:34:49Z","content_type":null,"content_length":"53978","record_id":"<urn:uuid:7da2601f-9455-4543-96fd-d0753fb80eae>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00576-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Recent Homework Questions About Geometry Post a New Question | Current Questions Geometry HELP HELP Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 PLEASE HELP ME BELLS Tuesday, February 18, 2014 at 6:56am original dimensions: l, w, h volume = lwh new length --- 2l/3 new width ---- w/10 new height = h/4 new volume + (2l/3)(h/4)(w/10) = 2 lwh/120 = lwh/60 or (1/60)lwh so , yes, it is B Monday, February 17, 2014 at 8:44pm Monday, February 17, 2014 at 8:44pm V = L w H new V = (2/3)L * (1/10) w * (1/4) H = (2/120) LwH = (1/60) LwH Monday, February 17, 2014 at 8:44pm You are welcome :) Monday, February 17, 2014 at 8:40pm Is it B? Monday, February 17, 2014 at 8:36pm How does the volume of a rectangular prism change if the width is reduced to 1/10 of its original size, the height is reduced to 1/4 of its original size, and the length is reduced to 2/3 of its original size? A. V=1/120lwh B. V=1/60lwh C. V= 2/3lwh D. V=3/4lwh Monday, February 17, 2014 at 8:35pm Thank you so much oh my god Monday, February 17, 2014 at 8:33pm add six feet at the end tan 25 = h/(x+10) tan 36 = h/x h = (x+10)(.466) h = x(.727) so .727 x = .466 x + 4.66 .261 x = 4.66 x = 17.9 ft so h = .727 (17.9) = 13 13 + 6 = 19 ft Monday, February 17, 2014 at 8:23pm The angle of depression from a hot air ballon in the air to a person on the ground is 36 degrees. If the person steps back 10ft the new angle of depression is 25 degrees. If the person is 6ft tall, how far off the ground is the hot air ballon? Monday, February 17, 2014 at 8:14pm V = (1/3)base area * height new V =(1/3) (4 base area)(height/9) so 4/9 of original Monday, February 17, 2014 at 8:00pm How does the volume of a square pyramid change if the base area is quadrupled and the height is reduced to 1/9 of its original size? A. V= 1/27Bh B. V= 4/27Bh C. V= 2/9Bh D. V= 4/9Bh Monday, February 17, 2014 at 7:37pm Gowith Steve - Geometry messed up my typing don't know why my 1/3 suddenly became 2/3 Monday, February 17, 2014 at 5:48pm let the base of the pyramid and the cube be x by x let the height of the pyramid be h then the height of the cube is 2h volume of pyramid = (1/3)x^2 h = (2/3)h x^2 volume of cube = x^2 (2h) = 2h x^2 ratio of cube : pyramid = 2hx^2 : (2/3hx^2 = 2: 2/3 = 6:2 = 3:1 so it is 3 ... Monday, February 17, 2014 at 5:45pm cube: v = Bh pyramid: 1/3 B(h/2) = 1/6 Bh So, (A) Monday, February 17, 2014 at 5:44pm but that's not a choice. A. 1/6 B. 1/3 C. 3 D. 6 Monday, February 17, 2014 at 5:29pm its is equal to 2 times the volume because the square pyramid is HALF the height of the cube. Next time, read the question a little more carefully, because the answer is in the question. Monday, February 17, 2014 at 5:26pm The volume of a square pyramid is equal to _____ times the volume of a cube where the bases are the same, but the square pyramid is half the height of the cube. Monday, February 17, 2014 at 5:14pm a = 2pi r (r+h) If r and h are shrunk by a factor of 3 each, than we have 2pi (r/3)(r/3 + h/3) = 2/9 pi r (r+h) = 1/9 a as with all geometric figures, when the linear dimensions are scaled by a factor of f, the area is scaled by f^2 and the volume is scaled by f^3. Monday, February 17, 2014 at 5:12pm If a cylinder s radius and height are each shrunk down to a third of the original size, what would be the formula to find the modified surface area? 
Monday, February 17, 2014 at 4:17pm been there, did that http://www.jiskha.com/display.cgi?id=1392638255 Monday, February 17, 2014 at 10:11am solve it yourself. Monday, February 17, 2014 at 10:10am geometry circles equation i dont know points on circle: (0,0), (5,0) find intersections of lines for third point on circle 2(y) + 3 y = 10 5 y = 10 y = 2 then x = 2 so third point is (2,2) (0,0) (5,0) (2,2) (x-a)^2 + (y-b)^2 = r^2 plug and chug a^2 + b^2 = r^2 (5-a)^2 + b^2 = r^2 (2-a)^2 +(2-b)^2 = r^2 a^2 + b^2... Monday, February 17, 2014 at 10:10am Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 Monday, February 17, 2014 at 8:50am Geometry? Very incomplete! Monday, February 17, 2014 at 8:40am Geometry Circles Equation Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 I beg you Monday, February 17, 2014 at 8:31am geometry circles equation i dont know please he lp Find the equation of a circle circumscribes a triangle determined by the line y= 0 , y= x and 2x+3y= 10 this is my first timein this website please help me need it badly my teacher will be angry at me so much Monday, February 17, 2014 at 6:57am Did you mean positive whole numbers ? let the base be y and each of the equal sides be x To be a triangle, x > 2y or y < x/2 2x + y = 99 y = 99-2x y-intercept is 99, x-intercept is 49.5 so the x, or the base, can only be a number from 1 to 49 but to even have a triangle... Sunday, February 16, 2014 at 8:12pm How many distinct isosceles triangles exist with a perimeter of 99 inches and side lengths that are positive while numbers? Sunday, February 16, 2014 at 7:44pm 2x+2y = 102 x^2+y^2 = 39^2 A 5-12-13 right triangle looks promising. Scale that up to a 15-36-39 size, and we see that 15+36=51, so our rectangle is 15 by 36 Sunday, February 16, 2014 at 5:20pm The perimeter of a rectangle is 102 inches, and the length of the diagonal is 39 inches. Find the dimensions of the rectangle. Sunday, February 16, 2014 at 5:07pm Sunday, February 16, 2014 at 8:01am algebra 1/BGe Geometry.the figures below are squares.find an expression for the area of each shaded region.write your answers in standard form. Friday, February 14, 2014 at 8:06pm Thursday, February 13, 2014 at 7:50pm geometry - incomplete since I know neither the angles nor their relation to x, you better provide a bit more description of the figure, eh? Thursday, February 13, 2014 at 2:58pm Geometry true or false I believe that is false Thursday, February 13, 2014 at 12:53pm if the measurement of angle fgk is 3y-4 and measurement of angle kgh is 2y+7 find x Thursday, February 13, 2014 at 10:29am V = πr^2 h h = 335/(π r^2) SA = 2π r^2 + 2πrh = 2πr^2 + 2πr(335/(πr^2) = 2πr^2 + 670/r d(SA)/dr = 4πr - 670/r^2 = 0 for a min of SA 4πr = 670/r^2 4πr^3 = 670 r = (670/(4π))^(1/3) = 3.763757606.. subbing back into h h... Thursday, February 13, 2014 at 8:55am a cylindrical soup can has a volume of 335cm^3.find the dimensions(radius r and height h)that minimise the surface area of such a can. Thursday, February 13, 2014 at 5:51am elaine and Daniel are building a rectangular greenhouse.they want the are of the floor to be 36metres squared.since the glass walls are expensive,they want to minimise the amount of glass wall they use.they have commissioned you to design a greenhouse which minimizes the cost... 
Thursday, February 13, 2014 at 5:42am I'd say C Thursday, February 13, 2014 at 12:02am Which theorem or postulate is the construction of parallel lines based upon? A. Consecutive Interior Angles Converse Theorem B. Corresponding Angles Converse Postulate C. In the same plane, if two lines are perpendicular to the same line, then they are parallel to each other. ... Wednesday, February 12, 2014 at 9:36pm Thanks to both of you Wednesday, February 12, 2014 at 8:41pm Wednesday, February 12, 2014 at 8:39pm D looks good but of course, an equilateral triangle with sides of 10, has a height of 8.66 So, the triangle is isosceles, but not equilateral. Wednesday, February 12, 2014 at 8:37pm Wednesday, February 12, 2014 at 8:35pm What is the area of an equilateral triangle with sides of 10 inches and height of 7 inches? Can you please check my answer and my reasoning? A. 44 sq. in. B. 20 sq. in. C. 70 sq. in. D. 35 sq. in. I picked D, because I took 10x7=70and divided that by 2, which equals 35 sq. in... Wednesday, February 12, 2014 at 8:33pm A circle has a diameter of 20 inches and a central angle AOB that measures 160°. What is the length of the intercepted arc AB? Use 3.14 for pi and round your answer to the nearest tenth Wednesday, February 12, 2014 at 4:03pm 40 cm : 18 mm = 40 cm : 1.8 cm = 40 : 1.8 = 400 : 18 = 200 : 9 Wednesday, February 12, 2014 at 10:15am Write the ratio of the first measurement to the second measurement. Diameter of care tire:40cm Diameter of care tire:18mm I'm not sure hoow to do this. May someone explain how to do this problem. Wednesday, February 12, 2014 at 9:20am Wednesday, February 12, 2014 at 2:09am You say nothing about how you know that Q is "just above" L, or what that means. I'd say (c) is the choice. You draw arcs centered at P and T, such that they intersect above and below L. Note that the arcs must have radius greater than LT=LP. I get the feeling ... Monday, February 10, 2014 at 6:28pm All of this should be covered in his book. http://www.enchantedlearning.com/math/geometry/shapes/ http://www.mathsisfun.com/geometry/ Monday, February 10, 2014 at 5:57pm An architect plans to make a drawing of the room of a house. The segment LM represents the floor of the room. He wants to construct a line passing through Q and perpendicular to side LM to represent a wall of the room. He uses a straightedge and compass to complete some steps ... Monday, February 10, 2014 at 4:13pm 7/x = cos60 = 1/2 Sunday, February 9, 2014 at 9:39pm A ladder leaning against a house makes an angle of 60 degrees with the ground. The foot of the ladder is 7 ft from the house. How long is the ladder? Sunday, February 9, 2014 at 7:35pm Geometry - insufficient data What is the size of the yard? Sunday, February 9, 2014 at 12:59pm Andrea has a yard shaped like parallelogram ABCD. The garden area, parallelogram EFGB, has an area of 105 ft. If Andrea wants to sod the rest of her yard, how many square feet of sod should she order? A. 765 ft B. 840 ft C. 945 ft D. 1,515 ft Sunday, February 9, 2014 at 12:42pm Area of a kite = (1/2)product of the diagonals So the width and height would be diagonals let the height diagonal be h (1/2)(15)h = 60 15h = 120 h = 8 , looks like C Sunday, February 9, 2014 at 10:59am Maria is making a stained glass window in the form of a kite. The width of the window must be 15 in., and she only has enough stained glass to cover 60 in. What should the height of the window be? A. 4 in. B. 6 in. C. 8 in. D. 12 in. 
Sunday, February 9, 2014 at 9:58am Saturday, February 8, 2014 at 4:45pm A flower bed in the corner of your yard is in the shape of a right triangle. The perpendicular sides of the bed measure 10 feet and 12 feet. Calculate the area of the flower bed. Friday, February 7, 2014 at 8:17pm !@#$%^&s you Friday, February 7, 2014 at 7:36am find AC IF AB = 6 cm Thursday, February 6, 2014 at 6:43pm it is 8.94 Thursday, February 6, 2014 at 5:48pm opposite angles are equal, so 8x-2 = 104 Thursday, February 6, 2014 at 2:39pm My problem is with a parallelogram that has three numbers which are 88 degrees,104,and 80. Then the fourth on is 8x minus 2 how do I slove this Thursday, February 6, 2014 at 10:30am 5^2 + h^2 = 16^2 Thursday, February 6, 2014 at 5:29am Larry has a ladder that is 16' long. If he sets the base of the ladder on level ground 5 feet from the side of the house, how many feet above the ground will the top of the ladder reach when it rests against the house? Thursday, February 6, 2014 at 12:24am An advertising blimps hovers over stadium at the altitude of 152 m.the pilot sites a tennis court at in 80 degree angle of depression. Find the ground distance in the straight line between the stadium and the tennis court. (note: in an exercise like this one, and answers ... Thursday, February 6, 2014 at 12:09am geometry - still stumped I agree that bcd is 94° similarly, angle acd is x, so x+y+66=180. Still looking for another angle (as it were) to connect x and y. Wednesday, February 5, 2014 at 5:23pm Coordinate geometry Hmmm. Your question implies you have not yet read the lesson. Seems like that ought to be the next step. Any explanation you receive here will probably not be any better than the explanation in your text. Plus, your text probably has some examples. Wednesday, February 5, 2014 at 1:07pm Coordinate geometry Our lesson coordinate geom already now analytic proof . HOW TO DO ANALYTIC PROOF ? PLEASE PLEASE HELP THANKS VERY MUCH Wednesday, February 5, 2014 at 10:15am wE nEED tHE cHART??? Wednesday, February 5, 2014 at 10:00am Hmmm. This makes no sense to me. Tuesday, February 4, 2014 at 10:37pm Jenny is 5ft 2in tall to find the height of The light pole she measured her shadow And the poles shadow what is The height of the light pole Tuesday, February 4, 2014 at 10:35pm 180 - 55.1 = 124.9 http://www.mathsisfun.com/geometry/supplementary-angles.html Tuesday, February 4, 2014 at 6:38pm Length of the diameter of a circle with the endpoints A(4, -3) and B(4, 3) Tuesday, February 4, 2014 at 7:40am how to do eucliodean geometry Tuesday, February 4, 2014 at 3:18am Identify the sequence of transformations that maps quadrilateral abcd onto quadrilateral a"b"c"d" Answers 180 rotation around the origin; reflection over the x-axis translation (x,y) -> (x - 2, y + 0); reflection over the line x = -1 enlargement; ... Monday, February 3, 2014 at 10:59pm 11/8.5 = 1.294 77/53 = 1.452 So, the 8.5x11 paper is closer to square than the 53x77 paper The scale will have to be such that the height of 77cm will fit in 11in That is 11in:77cm or 1in:7cm = 2.54:7 = 1:2.75 Monday, February 3, 2014 at 5:52am Coordinate geometry let A be at (0,0) Then if B is at (xb,yb), the length of AB is √(xb^2 + yb^2) Then, if C is at (xc,yc), the midpoint of AC is (xc/2,yc/2) and the midpoint of BC is at ((xb+xc)/2,(yb+yc)/2) If M is the midpoint of AC and N is the midpoint of BC, then the slope of MN is ((... 
Monday, February 3, 2014 at 5:44am Coordinate geometry Help me i this please the segment joining the midpoint of 2 sides f a triagle is parallel to the 3rd side and half as long. thaks Monday, February 3, 2014 at 4:37am Trying to find the scale for a 77 centimeter by 53 centimeter painting to fit on a 8.5 by 11 inch paper. Please show me how- Sunday, February 2, 2014 at 10:40pm how does the volume of an oblique cylinder change if the radius is reduced to 2/9 of it's original size and the height is quadrupled? Sunday, February 2, 2014 at 6:03pm Sunday, February 2, 2014 at 3:04pm Geometry - insufficient data Saturday, February 1, 2014 at 2:34pm Jason wants to walk the shortest distance to get from the parking lot to the beach. a.How far is the spot on the beach from the parking lot? b. How far will he have to walk from the parking lot to get to the refreshment stand? Saturday, February 1, 2014 at 2:32pm COURSE HELP PLEASE ms sue qq In high school, these courses are usually called Algebra I Algebra II Geometry Pre-calculus AP Calculus Biology AP Biology Chemistry AP Chemistry Physics AP Physics Work with your counselor to make sure you take the required courses for graduation AND as many of these as ... Saturday, February 1, 2014 at 9:41am GEOMETRY MIDPOINT FORMULA steve how did this hapn thankssssss steve :) <33 Saturday, February 1, 2014 at 9:34am GEOMETRY MIDPOINT FORMULA steve how did this hapn thank you reinyyyyy <3 Friday, January 31, 2014 at 9:51am GEOMETRY MIDPOINT FORMULA steve how did this hapn looks like Steve was using vector geometry Perhaps the following approach might make sense to you: make a sketch. since AC = 4AB AB : BC = 1 : 3 for the x's : (2 - (-2))/(-2-x) = 1/3 12 = -2-x x = -14 for the y's: (3-0)/(0-y) = 1/3 9 = -y y = -9 so point C is (-14, -9) Friday, January 31, 2014 at 9:22am GEOMETRY MIDPOINT FORMULA steve how did this hapn If a line is extended from A (2,3) through B ( -2, 0 ) to a point so that AC = 4ab Find the coordinates of C Please help thanks so much GEOMETRY MIDPOINT FORMULA - Steve, Thursday, January 30, 2014 at 11:58am B-A = (-4,-3) C-A = 4(B-A) = (-16,-12) C = A+(C-A) = (-14,-9) Friday, January 31, 2014 at 8:53am Changing a number to scientific notation gives you a number less than 10 multiplied by a power of 10: 3500 = 3.5*10^3 3.5 is less than 10; the exponent(3) means that moving the decimal 3 places to the right restores the number to standard form. 0.0035 = 3.5*10^-3. The negative... Thursday, January 30, 2014 at 10:26pm Thursday, January 30, 2014 at 2:45pm If the pole's shadow is 14 times as long as her shadow, then the pole is 14 times as tall as she is. 14(5.5) = 77 feet Thursday, January 30, 2014 at 12:00pm B-A = (-4,-3) C-A = 4(B-A) = (-16,-12) C = A+(C-A) = (-14,-9) Thursday, January 30, 2014 at 11:58am Dru is challenged by her geometry teacher to estimate the height of the school flag pole without measuring. She decided to walk off the length of the shadow cast by the pole by successively walking the noted length of her shadow. If dry is 5ft 6 in.tall and she estimates the ... Thursday, January 30, 2014 at 11:10am If a line is extended from A (2,3) through B ( -2, 0 ) to a point so that AC = 4ab Find the coordinates of C Please help thanks so much Thursday, January 30, 2014 at 6:31am Thursday, January 30, 2014 at 4:16am Pages: <<Prev | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | Next>> Post a New Question | Current Questions
{"url":"http://www.jiskha.com/math/geometry/?page=4","timestamp":"2014-04-18T18:48:32Z","content_type":null,"content_length":"31756","record_id":"<urn:uuid:5e920e8d-6cbc-461c-93f2-f777e866abe8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Research Interests Relativistic Astrophysics Relativistic astrophysics is the application of the theory of general relativity (the theory of strong gravitational fields) to problems in astrophysics. The strongest gravitational fields in the universe are associated with compact objects (neutron stars and black holes). The main focus is on providing a clearer understanding of the electromagnetic and gravitational radiation produced by compact objects. My research is theoretical and I use a mix of analytical and numerical techniques. • Relativistic Effects in Accreting Neutron Stars and Black Holes: The motivation for some of my recent work has been the observation of kHz quasi-periodic x-ray oscillations originating from accreting neutron stars and black holes by NASA's Rossi X-ray Timing Explorer satellite. I have been investigating the possibility of detecting strong-field relativistic effects such as the precession of an accretion disk due to frame-dragging. • Neutron Star Oscillations and Instabilities: I am also working on studies of non-radial oscillations of rotating neutron stars, including mechanisms for the production of gravitational radiation. If a neutron star is rotating, gravitational radiation can drive an instability which will cause the star to slow down. A great surprise was a recent result that perturbations driven by the Coriolis force can be unstable at arbitrarily small angular velocities. Further work which included the effect of viscosity showed the gravitational radiation driven instability of the Coriolis modes is important for the class of neutron stars which are born rapidly rotating (such as the pulsar found in the supernova remnant N157B). My present work involves an investigation of the damping effect of weak turbulence on linear modes in young neutron stars. I am using techniques first developed for the study of convection in the sun in order to determine the maximum mode amplitudes allowed by mode-mode couplings. I have written some reviews of recent results in Matters of Gravity (the newsletter of the APS topical group on gravity). You can read slightly revised versions of these reviews through the links below. Sharon Morsink
{"url":"http://www.ualberta.ca/~morsink/research.html","timestamp":"2014-04-20T09:58:14Z","content_type":null,"content_length":"3705","record_id":"<urn:uuid:650bbdc4-53a8-4994-9ec2-0c6a1920c868>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Performance Evaluation of Mobile Processes via Abstract Machines
Chiara Nottegar, Corrado Priami, Pierpaolo Degano
IEEE Transactions on Software Engineering, vol. 27, no. 10, pp. 867-889, October 2001. DOI: 10.1109/32.962559
Keywords: Calculi for mobility, enhanced operational semantics, formal methodology, performance evaluation, stochastic models.
Abstract—We use a structural operational semantics which drives us in inferring quantitative measures on system evolution. The transitions of the system are labeled and we assign rates to them by only looking at these labels. The rates reflect the possibly distributed architecture on which applications run. We then map transition systems to Markov chains, and performance evaluation is carried out using standard tools. As a working example, we compare the performance of a conventional uniprocessor with a prefetch pipeline machine. We also consider two case studies from the literature involving mobile computation to show that our framework is feasible.
[1] Netperf: A Network Performance Benchmark, Revision 2.1. Information Networks Division, Hewlett-Packard, 1996.
[2] K. Ahlers, D.E. Breen, C. Crampton, E. Rose, M. Tucheryan, R. Whitaker, and D. Greer, "An Augmented Vision System for Industrial Applications," Proc. SPIE Photonics for Industrial Applications Conf., Oct. 1994.
[3] A.O. Allen, Probability, Statistics, and Queueing Theory with Computer Science Applications. New York: Academic, 1978.
[4] M. Baldi and G.P. Picco, "Evaluating the Tradeoffs of Mobile Code Design Paradigms in Network Management Applications," Proc. Conf. Software Eng., pp. 146-155, Apr. 1998.
[5] M. Bernardo, L. Donatiello, and R. Gorrieri, "A Formal Approach to the Integration of Performance Aspects in the Modeling and Analysis of Concurrent Systems," Information and Computation, vol. 144, pp. 83-154, 1998.
[6] G. Berry and L. Cosserat, "The ESTEREL Synchronous Programming Language and Its Mathematical Semantics," Seminar Concurrency, S.D. Brookes, A.W. Roscoe, and G. Winskel, eds., Lecture Notes in Computer Science 197, pp. 389-448. Springer-Verlag, 1985.
[7] R. Borgia, P. Degano, C. Priami, L. Leth, and B. Thomsen, "Understanding Mobile Agents via a Non-Interleaving Semantics for Facile," Proc. Third Int'l Static Analysis Symp. (SAS '96), R. Cousot and D.A. Schmidt, eds., pp. 98-112, 1996.
[8] G. Boudol and I. Castellani, "A Non-Interleaving Semantics for CCS Based on Proven Transitions," Fundamenta Informaticae, vol. XI, no. 4, pp. 433-452, 1988.
[9] E. Brinksma, J.-P. Katoen, R. Langerak, and D.
Latella, “A Stochastic Causality Based Process Algebra,” The Computer J., vol. 38, no. 6, pp. 552–565, 1995. [10] L. Brodo, P. Degano, and C. Priami, “A Tool for Quantitative Analysis of$\pi{\hbox{-}}\rm calculus$Processes,” Proc. Eighth Int'l Workshop Process Algebra and Performance Modelling (PAPM '00), [11] P. Buchholz, “On a Markovian Process Algebra,” technical report, Informatik IV, Univ. of Dortmund, 1994. [12] L. Cardelli and A.D. Gordon, "Mobile Ambients," Foundations of Software Science and Computation Structures, Lecture Notes in Computer Science, vol. 1378, Springer-Verlag, Berlin, 1998, pp. [13] A. Carzaniga, G.P. Picco, and G. Vigna, "Designing Distributed Applications with Mobile Code Paradigms," Proc. 19th Conf. Software Eng. (ICSE'97), R. Taylor, ed., pp. 22-32, ACM Press, 1997. [14] P. Cenciarelli, A. Knapp, B. Reus, and M. Wirsing, “An Event-Based Structural Operational Semantics of Multithreaded Java,” Formal Syntax and Semantics of Java, 1998. [15] M. Chiodo et al., "Hardware/Software Codesign of Embedded Systems," IEEE Micro, Aug. 1994, pp. 26-36. [16] G. Clark, “Formalising the Specification of Rewards with PEPA,” Proc. Fourth Int'l Workshop Process Algebra and Performance Modelling (PAPM '96), pp. 139–160, 1996. [17] E. Coste-Maniere and B. Faverjon, “A Programming and Simulation Tool for robotics Workcells,” Proc. Int'l Conf. Automation, Robotics, and Computer Vision, 1990. [18] L. de Alfaro, “Stochastic Transition Systems,” Proc. Ninth Int'l Conf. Concurrency Theory (CONCUR '98), 1998. [19] P. Degano, R. De Nicola, and U. Montanari, “Partial Ordering Derivations for CCS,” Proc. Fifth Int'l Conf. Fundamentals of Computation Theory (FCT '85), pp. 520–533, 1985. [20] P. Degano and R. Gorrieri, “A Causal Semantics of Action Refinement,” Information and Computation, vol. 122, no. 1, pp. 97–121, 1995. [21] P. Degano and C. Priami, “Proved Trees,” Proc. 19th Int'l Colloquium Automata, Languages, and Programming (ICALP '92), pp. 629–640, 1992. [22] P. Degano and C. Priami, “Enhanced Operational Semantics,” ACM Computing Surveys, vol. 28, no. 2, pp. 352–354, 1996. [23] P. Degano and C. Priami, “Non Interleaving Semantics for Mobile Processes,” Theoretical Computer Science, vol. 216, pp. 237–270, 1999. [24] P. Degano, C. Priami, L. Leth, and B. Thomsen, “Causality for Debugging Mobile Agents,” Acta Informatica, 1999. [25] D.L. Eager, E.D. Lazowska, and J. Zahorjan, "Adaptive Load Sharing in Homogeneous Distributed Systems," IEEE Trans. Software Eng., vol. 12, no. 5, pp. 662-675, May 1986. [26] W. Feller, An Introduction to Probability Theory and its Applications. Wiley, 1970. [27] C. Fournet and G. Gonthier, “The Reflexive Chemical Abstract Machine and the Join-Calculus,” Proc. ACM SIGPLAN-SIGACT Symp. Principles of Programming Languages (POPL '96), pp. 372-385, 1996. [28] C. Fournet, G. Gonthier, J.-J. Lévy, L. Maranget, and D. Rémy, "A Calculus of Mobile Agents," Proc. Int'l Conf. Concurrency Theory. Lecture Notes in Computer Science 1,119, pp. 406-421. Springer-Verlag, 1996. [29] A. Fuggetta, G. Picco, and G. Vigna, "Understanding Code Mobility," IEEE Trans. Software Eng., May 1998, pp. 352-361. [30] N. Götz, U. Herzog, and M. Rettelbach, “TIPP—A Language for Timed Processes and Performance Evaluation,” Technical Report 4/92, IMMD VII, Univ. of Erlangen-Nurnberg, 1992. [31] R. Want et al., "The Active Badge Location System," ACM Trans. Information Systems, vol. 10, no. 1, Jan. 1992, pp. 91-102. [32] P.G. Harrison and B. 
Strulo, “Stochastic Process Algebra for Discrete Event Simulation,” Quantitative Methods in Parallel Systems, pp. 18–37, 1995. [33] C. Harvey, “Performance Engineering as an Integral Part of System Design,” BT Technology J., vol. 4, no. 3, pp. 143–147, 1986. [34] O.M. Herescu and C. Palamidessi, “Probabilistic Asynchronous$\pi{\hbox{-}}\rm calculus$,” Proc. Third Int'l Conf. Foundations of Software Science and Computation Structure (FOSSACS '2000), pp. 146–210, 2000. [35] H. Hermanns, U. Herzog, and V. Mertsiotakis, “Stochastic Process Algebras—between LOTOS and Markov Chains,” Computer Networks and ISDN Systems, vol. 30, nos. 9-10, pp. 901–924, 1998. [36] J. Hillston, A Compositional Approach to Performance Modelling. Cambridge Univ. Press, 1996. [37] C.A.R. Hoare, Communicating Sequential Processes, Prentice Hall, Englewood Cliffs, N.J., 1985. [38] R. Howard, Dynamic Probabilistic Systems: Semi-Markov and Decision Systems, volume II.Wiley, 1971. [39] O.C. Ibe and K.S. Trivedi, “Stochastic Petri Net Models of Polling Systems,” IEEE J. Selected Areas of Comm., vol. 8, no. 9, 1990. [40] L.J. Jagadeesan, C. Puchol, and J.E. Von Olnhausen, “A Formal Approach to Reactive Systems Software: A Telecommunications Application in Esterel,” Proc. Workshop Industrial-Strength Formal Specification Techniques, 1995. [41] C. B. Jones,Systematic Software Development Using VDM. Englewood Cliffs, NJ: Prentice-Hall, 1990, 2nd ed. [42] K.G. Larsen and A. Skou, “Compositional Verification of Probabilistic Processes,” Proc. Third Int'l Conf. Concurrency Theory (CONCUR '92), 1992. [43] A. Maggiolo-Schettini and S. Tini, “Applying Techniques of Asynchronous Concurrency to Synchronous Languages,” Fundamenta Informaticae, vol. 40, pp. 221–250, 1999. [44] M.Ajmone Marsan,S. Donatelli,F. Neri,, and U. Rubino,“On the construction of abstract GSPNs: Anexercise in modeling,” Proc. Fourth Int’l Workshop Petri Nets and Performance Models, pp. 2-17,Melbourne, Australia, Dec.2-5, 1991. [45] R. Milner, Communication and Concurrency, Prentice-Hall, Englewood Cliffs, N.J., 1989. [46] R. Milner, Communicating and Mobile Systems: The p calculus, Cambridge Univ. Press, Cambridge, UK, 1999. [47] R. Milner, J. Parrow, and D. Walker, “A Calculus of Mobile Processes,” Information and Computation, vol. 100, pp. 1-77, 1992. [48] R. Milner, J. Parrow, and D. Walker, “Modal Logics for Mobile Processes,” Theoretical Computer Science, vol. 114, pp. 149–171, 1993. [49] D.S. Milojicic, F. Douglis, Y. Paindaveine, R. Wheeler, and S. Zhou, “Process Migration,” technical report, HP Labs, 1998. [50] G. Murakami and R. Sethi, “Terminal Call Processing in Esterel,” Proc. IFIP 92 World Computer Congress, 1992. [51] R. Nelson, Probability, Stochastic Processes, and Queueing Theory. New York: Springer-Verlag, 1995. [52] X. Nicollin and J. Sifakis, “An Overview and Synthesis on Timed Process Algebras,” Real Time: Theory in Practice, pp. 526–548, 1991. [53] F. Nielson and H.R. Nielson, “From CML to Its Process Algebra,” Theoretical Computer Science, vol. 155, pp. 179–219, 1996. [54] B.C. Pierce and D.N. Turner, “Pict: A Programming Language Based on the Pi-Calculus,” Technical Report CSCI 476, Computer Science Dept., Indiana Univ., 1997. To appear Proof, Language and Interaction: Essays in Honour of Robin Milner, Gordon Plotkin, Colin Stirling, and Mads Tofte, eds., MIT Press, 1998. [55] G. Plotkin, “A Structural Approach to Operational Semantics,” Technical Report DAIMI FN-19, Aarhus Univ., Denmark 1981. [56] C. 
Priami, “Stochastic$\pi{\hbox{-}}\rm calculus$,” The Computer J., vol. 38, no. 6, pp. 578–589, 1995. [57] C. Priami, “Stochastic$\pi{\hbox{-}}\rm calculus$with General Distributions,” Proc. Fourth Int'l Workshop Process Algebra and Performance Modelling (PAPM '96), pp. 41–57, 1996. [58] A. Reibman, R. Smith, and K. Trivedi, “Markov and Markov Reward Model Transient Analysis: An Overview of Numerical Approaches,” European J. Operations Research, vol. 40, pp. 257–267, 1989. [59] J. Riely and M. Hennessy, “A Typed Language for Distributed Mobile Processes,” Proc. 25th ACM Principles of Programming Languages (POPL '98), pp. 378–390 1998. [60] D. Sangiorgi, “Expressing Mobility in Process Algebras: First-Order and Higher-Order Paradigms,” PhD thesis, Univ. of Edinburgh, 1992. [61] W.J. Stewart, Introduction to the Numerical Solution of Markov Chains. Princeton Univ. Press, 1994. [62] J.-P. Talpin, “The Calumet Experiment in Facile—A Model for Group Communication and Interaction Control in Cooperative Applications,” Technical Report ECRC-94-26, European Computer-Industry Research Centre, 1994. [63] J.-P. Talpin, P. Marchal, and K. Ahlers, “Calumet—A Reference Manual,” Technical Report ECRC-94-30, European Computer-Industry Research Centre, 1994. [64] B. Thomsen, “Plain CHOCS: A Second Generation Calculus for Higher Order Processes,” Acta Informatica, vol. 30, no. 1, pp. 1–59, 1993. [65] B. Thomsen, L. Leth, S. Prasad, T.-M. Kuo, A. Kramer, F. Knabe, and A. Giacalone, “Facile Antigua Release Programming Guide,” Technical Report ECRC-93-20, European Computer-Industry Research Centre, 1993. [66] K.S. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Science Applications. Prentice Hall, 1982. [67] R.J. van Glabbeek, S.A. Smolka, B. Steffen, and C.M.N. Tofts, “Reactive, Generative and Stratified Models of Probabilistic Processes,” Information and Computation, 1995. Index Terms: Calculi for mobility, enhanced operational semantics, formal methodology, performance evaluation, stochastic models. Chiara Nottegar, Corrado Priami, Pierpaolo Degano, "Performance Evaluation of Mobile Processes via Abstract Machines," IEEE Transactions on Software Engineering, vol. 27, no. 10, pp. 867-889, Oct. 2001, doi:10.1109/32.962559 Usage of this product signifies your acceptance of the Terms of Use
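The abstract's pipeline (label the transitions, assign rates from the labels, map the transition system to a Markov chain, then use standard tools) can be illustrated with a small numerical sketch. This is not taken from the paper; it is only a generic construction of a continuous-time Markov chain generator and its steady-state distribution from a list of rate-annotated transitions, and all states, labels and rates below are invented placeholders.

```python
import numpy as np

# Hypothetical labelled transitions: (source state, label, rate, target state).
transitions = [(0, "send", 2.0, 1), (1, "ack", 3.0, 0),
               (1, "fail", 0.5, 2), (2, "reset", 1.0, 0)]
n = 3  # number of states

# Build the CTMC generator matrix Q from the rates.
Q = np.zeros((n, n))
for src, _label, rate, dst in transitions:
    Q[src, dst] += rate
for i in range(n):
    Q[i, i] = -Q[i].sum()          # diagonal = minus the total outgoing rate

# Steady-state distribution: solve pi Q = 0 together with sum(pi) = 1.
A = np.vstack([Q.T, np.ones(n)])
b = np.zeros(n + 1)
b[-1] = 1.0
pi, *_ = np.linalg.lstsq(A, b, rcond=None)
print(pi)                          # long-run probabilities, ready for reward/performance measures
```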
{"url":"http://www.computer.org/csdl/trans/ts/2001/10/e0867-abs.html","timestamp":"2014-04-19T02:31:35Z","content_type":null,"content_length":"68047","record_id":"<urn:uuid:eaed000e-6665-4e6e-b50a-3fd6e0bd33c7>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Permutations and Combinations PC # 1 There are 7 men and 3 ladies. Find the number of ways in which a committee of 6 persons can be formed if the committee is to have atleast 2 ladies. Character is who you are when no one is looking. Re: Permutations and Combinations first, I thank you ganesh for making this new themes. (But problems and solutions is best, too) IPBLE: Increasing Performance By Lowering Expectations. Re: Permutations and Combinations So if there are 2 ladies in the com, we have (7)C(6-2) combinations. Multiply this by the number of 2-ladies:3C2 And simular, if we have 3l: Is this the answer? IPBLE: Increasing Performance By Lowering Expectations. Re: Permutations and Combinations Two ladies must be in the committee. If two ladies are chosen, the number of combinations is 3C2. The remaining 4 members can be chosen from the 7 men and the number of combinations is 7C4. Hence, the number of ways in which two ladies can form part of the committee is 3C2*7C4. If three ladies are chosen, the number of combinations is 3C3. The remaining 3 members can be chosen from the remaining 7 men and the number of combinations is 7C3. Hence, the number of ways in which three ladies can form part of the committee is 3C3*7C3. The total number of combinations is 3C2*7C4 + 3C3*7C3. That is, 105 + 35 = 140. Character is who you are when no one is looking. Re: Permutations and Combinations One simple muistake may change the answer badly: IPBLE: Increasing Performance By Lowering Expectations. Re: Permutations and Combinations PC # 2 There are 6 books on Economics, 3 on Mathematics and 2 on Accountancy. In how many ways can they be arranged on a shelf if the books of the same subject are always to be together? Character is who you are when no one is looking. Re: Permutations and Combinations They must be tigether so we have 3 subjects that must be arranged: IPBLE: Increasing Performance By Lowering Expectations. Re: Permutations and Combinations Read the question fully......and..... Character is who you are when no one is looking. Re: Permutations and Combinations PC # 3 How many numbers are there between 100 and 1000 such that atleast one of their digits is 6? Character is who you are when no one is looking. Re: Permutations and Combinations PC # 3 All of the numbers from 600 to 699 have a 6. This leaves 800 numbers between 100 and 999. Of those, 1/10 will have 6 as their tens digit. 800/10 = 80, so that leaves 720 more. Of those, 1/10 will have 6 as their units digit. 720/10 = 72, and adding this to 80 and 100 will give the answer. 100+80+72 = 252. Why did the vector cross the road? It wanted to be normal. Re: Permutations and Combinations Character is who you are when no one is looking. Re: Permutations and Combinations PC # 4 Bus number plates contain three distinct English alphabets followed by four digits with the first digit not zero. How many different number plates can be formed? Character is who you are when no one is looking. Re: Permutations and Combinations PC # 5 The figures 4, 5, 6,7, and 8 are written in every possible order. How many of the numbers so formed will be greater than 56,000? Character is who you are when no one is looking. Star Member Re: Permutations and Combinations Guess to PC#4 (26) × (25) × (24) × (9) × (10) × (10) × (10) (Algebra never should have used x's, it sure makes multiplying more confusing.) igloo myrtilles fourmis Re: Permutations and Combinations You're correct, John! Well done You could use . instead of x for multiplication sign! 
Character is who you are when no one is looking. Full Member Re: Permutations and Combinations ganesh wrote: PC # 5 The figures 4, 5, 6,7, and 8 are written in every possible order. How many of the numbers so formed will be greater than 56,000? First, find ALL the combinations Since the formula is where r is (the number of positions)-1 so now we see: combinations which don't accept 4's at the start and 54's. and we have: Last edited by landof+ (2007-09-17 21:23:23) I shall be on leave until I say so... Star Member Re: Permutations and Combinations Woops I forgot some 57xyz, 58xyz, so add 12 to my answer sorry. Last edited by John E. Franklin (2007-09-18 09:11:15) igloo myrtilles fourmis Full Member Re: Permutations and Combinations Which is 90, same I suppose I shall be on leave until I say so... Re: Permutations and Combinations krassi_holmz wrote: They must be tigether so we have 3 subjects that must be arranged: the answer should be !6 * !3 *!2 * !3 the books can be arranged among themselves as !6 Economics, !3 on Mathematics !2 on Accountancy and !3 among themselves Re: Permutations and Combinations There are 3 boys and 3 girls. In how many ways can they be arranged so that each boy has at least one girl by his side? Re: Permutations and Combinations There are 10 boxes numbered 1, 2, 3, …10. Each box is to be filled up either with a black or a white ball in such a way that at least 1 box contains a black ball and the boxes containing black balls are consecutively numbered. The total number of ways in which this can be done is.. Power Member Re: Permutations and Combinations ganesh wrote: Two ladies must be in the committee. If two ladies are chosen, the number of combinations is 3C2. The remaining 4 members can be chosen from the 7 men and the number of combinations is 7C4. Hence, the number of ways in which two ladies can form part of the committee is 3C2*7C4. If three ladies are chosen, the number of combinations is 3C3. The remaining 3 members can be chosen from the remaining 7 men and the number of combinations is 7C3. Hence, the number of ways in which three ladies can form part of the committee is 3C3*7C3. The total number of combinations is 3C2*7C4 + 3C3*7C3. That is, 105 + 35 = 140. There are three ladies from which to choose the two slots designated for ladies. After those two slots are filled, there are eight unchosen members from which to randomly assign the remaining four committee seats. Why isn't the answer nCr(3,2)*nCr(8,4)=3*70=210? What am I missing? If I'm counting 70 possibilities twice, which possibilities am I double counting? Edit to add: I'm reasonably confident the method I proposed is wrong, and Ganesh's solution is indeed correct. I'm interested in understanding why the method I proposed is wrong. Last edited by All_Is_Number (2008-10-14 17:32:34) You can shear a sheep many times but skin him only once. Power Member Re: Permutations and Combinations Sudeep wrote: There are 10 boxes numbered 1, 2, 3, …10. Each box is to be filled up either with a black or a white ball in such a way that at least 1 box contains a black ball and the boxes containing black balls are consecutively numbered. The total number of ways in which this can be done is.. You can shear a sheep many times but skin him only once. Re: Permutations and Combinations This problem is old but... The correct answer is 55, by direct count. You can compute it by: In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. 
All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
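For anyone who wants to sanity-check the counting arguments in this thread, a brute-force enumeration is easy. The sketch below verifies PC #1 (committees of 6 from 7 men and 3 ladies with at least 2 ladies, answer 140) and PC #3 (numbers between 100 and 1000 containing at least one digit 6, answer 252); the variable names are arbitrary.

```python
from itertools import combinations

# PC #1: 7 men + 3 ladies, committees of 6 with at least 2 ladies.
people = ["M"] * 7 + ["L"] * 3
count = sum(1 for c in combinations(range(10), 6)
            if sum(people[i] == "L" for i in c) >= 2)
print(count)  # 140, matching 3C2*7C4 + 3C3*7C3

# PC #3: three-digit numbers with at least one digit equal to 6.
print(sum("6" in str(n) for n in range(100, 1000)))  # 252
```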
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=95240","timestamp":"2014-04-17T12:47:40Z","content_type":null,"content_length":"37858","record_id":"<urn:uuid:d4594603-c394-4236-ae24-e5d24e2a5b33>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00029-ip-10-147-4-33.ec2.internal.warc.gz"}
Carl Gottfried Neumann Born: 7 May 1832 in Königsberg, Germany (now Kaliningrad, Russia) Died: 27 March 1925 in Leipzig, Germany Click the picture above to see two larger pictures Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index Carl Neumann was the son of Franz Neumann who has a biography in this archive. His mother was Bessel's sister-in-law. Carl was born and received his school education at Königsberg where his father was the Professor of Physics. Neumann entered the University of Königsberg where he became close friends with two of his teachers, Otto Hesse and F J Richelot who taught mathematical analysis. After graduating with a qualification to teach mathematics in secondary schools, Neumann continued to study at Königsberg for his doctorate which was awarded in 1855. After receiving his doctorate, Neumann studied for his habilitation and he submitted his thesis to the University of Halle. He received his habilitation giving him the right to lecture in 1858 when he became a Privatdozent at Halle. He was promoted to extraordinary professor in 1863. Neumann did not remain at Halle for long after his promotion for he was offered a professorship at the University of Basel. Arriving in Basel in 1863 he only spent two years at the university there before being offered a professorship at the University of Tübingen. However, during these two years in Basel he married Mathilde Elise Kloss in 1875. A slightly longer time, namely three years, spent in Tübingen, from 1865 to 1868, and then Neumann was on the move again, this time to a chair at the University of Leipzig. Appointed to Leipzig in the autumn of 1868 he gave his inaugural lecture, called an Antrittsvorlesung, in 1869 with the title On the principles of the Galileian-Newtonian theory of mechanics. The German text of this lecture is given in [2]. Neumann held the chair at Leipzig until he retired in 1911 but sadly his wife died in 1875. Wussing writes in [1]:- Neumann, who led a quite life, was a successful university teacher and a productive researcher. More than two generations of future Gymnasium teachers received their basic mathematical education from him. He worked on a wide range of topics in applied mathematics such as mathematical physics, potential theory and electrodynamics. He also made important pure mathematical contributions. He studied the order of connectivity of Riemann surfaces. During the 1860s Neumann wrote papers on the Dirichlet principle and the 'logarithmic potential', a term he coined. In 1890 Émile Picard used Neumann's results to develop his method of successive approximation which he used to give existence proofs for the solutions of partial differential equations. This is discussed in detail in [4]. In addition to his research and teaching, Neumann made another important contribution to mathematics as an editor of Mathematische Annalen. He was honoured with membership of several academies and societies, including the Berlin Academy and the societies in Göttingen, Munich and Leipzig. Article by: J J O'Connor and E F Robertson Click on this link to see a list of the Glossary entries for this page List of References (6 books/articles) Mathematicians born in the same country Cross-references in MacTutor Previous (Chronologically) Next Main Index Previous (Alphabetically) Next Biographies index History Topics Societies, honours, etc. 
JOC/EFR © May 2000, School of Mathematics and Statistics, University of St Andrews, Scotland.
{"url":"http://www-history.mcs.st-andrews.ac.uk/Biographies/Neumann_Carl.html","timestamp":"2014-04-16T10:37:41Z","content_type":null,"content_length":"12781","record_id":"<urn:uuid:1c08b13b-8528-489e-850f-9c1a727cf2bb>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00359-ip-10-147-4-33.ec2.internal.warc.gz"}
Download the FitOO latest version
Download CorelPoly, preliminary work on FitOO, letting you fit polynomials and exponential sums
FitOO is a non-linear curve fitting tool for OpenOffice.org. It is a Calc template with the necessary structure and macros to implement the Levenberg-Marquardt algorithm. This algorithm solves the underlying least squares minimization problem. This document provides some basic information to help you use FitOO. Of course, OpenOffice.org must be installed. FitOO was developed with OOo version 1.1 beta on a computer running Microsoft Windows. I would like some feedback on its use on other systems.
Setup and starting
Because FitOO is implemented as a Calc template, you can directly load the file. A new document based on the template will be created. If FitOO.stc is copied to the [ooo]/user/Template directory, you can start a FitOO session using File/New/Templates... and choosing FitOO. When FitOO starts, you must allow the macros to be run. The opening screen is shown in figure 1.
Operating with FitOO
A detailed description of the Levenberg-Marquardt algorithm is beyond the scope of this document. This section is only a brief overview of the method. Assume that you have observed or sampled XY data points and a general mathematical expression (an equation with a set of parameters) that is assumed to describe the data. The goal is to calculate the parameters such that the analytical curve optimally fits the data points. This problem leads to the least squares minimization method. With simple functions (linear, polynomials, exponentials, ...) we can analytically program the partial derivatives with respect to each parameter and solve the problem. More complicated functions, or combinations of simpler functions, require an iterative algorithm that minimizes, step by step, the difference between the data points and the approximating mathematical expression by adjusting the parameters. To facilitate convergence, an iterative process usually requires a reasonable initial guess.
This new 1.1 version of FitOO handles multidimensional theoretical equations, so equations such as Y = F(X1...Xn; A1...Am) are now possible. This enhancement implies that the real-time fitting curve is no longer displayed. Moreover, the method for using FitOO has changed slightly.
Using FitOO
The cells in the sheets that you should modify are in red. Do not modify any other cells. You will need to perform three steps to use FitOO:
• Define the theoretical equation
• Provide the data points
• Drive the resolution
Define the theoretical equation
The equation is defined in the FitOO sheet in cell B4. For efficiency, I did not use a Calc cell to evaluate the function each time. Instead, I dynamically create a function macro, avoiding repetitive and time-consuming access to the sheet. There are two essential things to know when writing the mathematical expression to be optimized:
• The parameters to be optimized are written &1, &2, ..., &m.
• The function variables are written #1, #2, ..., #n (because of the function Exp).
So FitOO manages equations of several dimensions with respect to the unknowns X. Thus, the function given as the example, &1*#1*cos(#2+&2)+&3, corresponds in fact to Y = a X1 Cos(X2+b) + c. The optimized parameters are a, b and c. This notation allows as many parameters as we want; they are however limited to 99, which should be sufficient. The number of variables X is also limited to 99. After the equation has been defined, it must be validated and working columns must be built.
This is done by clicking the Initialize button. The associated macro calculates the number of parameters in the equation (cell FitOO:B10) and then builds the list for the initial guess of the parameters to be optimized (sheet FitOO, cells B16 and following).
Provide the data points
The previous action resets the DATAS sheet and builds as many columns as there are variables detected in the equation. The user now has to fill these columns with data points (don't forget the Y column). Sample data points for the example equation can be found in column AA of this sheet. You just have to copy-paste them to perform a test.
Drive the resolution
Before solving the problem, adjust the initial guess (cells B16 and following). The blue curve representing the optimized mathematical equation is updated in real time on the « Fitted curve » chart as you modify the initial guess. This allows you to visually find reasonable initial values for the parameters. The maximum number of iterations and the convergence criterion can be left as is for now (cells B13 and B14). The convergence criterion is the sum of the squared differences between the experimental XY data points and the function evaluation at those points. This is not an "objective" criterion because it depends on the value of the function at each abscissa (x) value. When you are ready, start the iterative process by clicking on the Calculate button. The diagram of convergence is refreshed on every iteration, which can slow down the process. The Stop button allows you to stop the iterative process when the current iteration is finished.
Some remarks
When designing the mathematical expression that will approximate the data, be certain to use the fewest parameters possible. For example, the function f(x)=(ax+b)/(cx+d) should be simplified to f(x)=(a'x+b')/(c'x+1) (all parameters divided by d). If you do not do this, the algorithm may oscillate between different solutions, leading to a loss of convergence. The problem is the same for trigonometric functions defined modulo k.pi.
Systems solving
As FitOO handles as many dimensions as needed for the equation, it can be used to solve linear and non-linear systems. The theoretical equation has to represent the general layout of the system equations (aX1+bX2+cX3+d). Then the Y column has to be filled with 0 values. FitOO will then be usable as described before.
FitOO includes a dynamic engine for translation. If you want to translate it to another language, you just have to add a new column in the "Translation" sheet and send the file to oooconv@free.fr.
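Outside OpenOffice, the same kind of fit can be reproduced with standard numerical libraries, which is handy for checking FitOO's results. The sketch below fits the example equation Y = a X1 Cos(X2+b) + c with SciPy's curve_fit, which uses a Levenberg-Marquardt-type least-squares solver for unconstrained problems; the synthetic data, noise level and starting guess are made up for the illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(X, a, b, c):
    x1, x2 = X                                  # two independent variables, as in &1*#1*cos(#2+&2)+&3
    return a * x1 * np.cos(x2 + b) + c

# Synthetic data generated from known parameters plus a little noise.
rng = np.random.default_rng(0)
x1 = np.linspace(1, 10, 50)
x2 = np.linspace(0, 3, 50)
y = 2.0 * x1 * np.cos(x2 + 0.5) + 1.0 + rng.normal(0, 0.1, 50)

# p0 plays the role of FitOO's initial guess cells.
popt, pcov = curve_fit(model, (x1, x2), y, p0=[1.0, 0.0, 0.0])
print(popt)                                      # estimated a, b, c
```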
{"url":"http://oooconv.free.fr/fitoo/fitoo_en.html","timestamp":"2014-04-16T18:56:34Z","content_type":null,"content_length":"13098","record_id":"<urn:uuid:53fe5019-6176-444a-a3ee-1d7bf23d844d>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00479-ip-10-147-4-33.ec2.internal.warc.gz"}
Java Modelling Tools - JWAT
JWAT allows workload analysis using clustering techniques. Input data can be collected from any kind of textual file: support for standard Apache log files is provided, but users can specify customized formats. During the input phase, users can provide the following extraction criteria:
• All: selects every observation
• Interval: selects only observations in a given range
• Random: given a number of observations, selects that number at random from the data
• N every K: picks at random N observations every K observations in the input file (K>N)
The application supports a complete environment for statistical analysis. In that environment it is possible to calculate univariate and bivariate statistics and to draw frequency, quantile and scatter plots. Data can be normalized with the following transformations:
• Logarithmic
• Minimum-Maximum
• Standard deviation
and trimmed to a selected percentile. Clustering is performed with the following algorithms: JWAT also provides an interface to the similarity clustering tool CLUTO. This is important for workloads including qualitative data. Data analysis is guided with a wizard interface.
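As a rough illustration of the extraction criteria, the "N every K" rule can be written in a few lines. This is only a guess at the intended behaviour (pick N observations at random from each consecutive block of K observations), not JWAT's actual code, and the function name is arbitrary.

```python
import random

def n_every_k(observations, n, k):
    """Pick n observations at random from every block of k observations (requires k > n)."""
    sample = []
    for start in range(0, len(observations), k):
        block = observations[start:start + k]
        sample.extend(random.sample(block, min(n, len(block))))
    return sample
```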
{"url":"http://jmt.sourceforge.net/JWAT.html","timestamp":"2014-04-19T22:07:54Z","content_type":null,"content_length":"7282","record_id":"<urn:uuid:78f1da8d-3d5f-41b3-a389-982762a5ef0c>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Calculate Operating Leverage Edit Article Edited by IngeborgK, Razor Blade, BR, Maluniu and 2 others The operating leverage of a business is the ratio of the change in operating income (EBIT) to the change in sales. Operating leverage is a way to measure the volatility of the business's earnings in relation to sales. A business with higher operating leverage is riskier than an equivalent business with lower operating leverage. You can calculate the operating leverage of a business with a few quick steps - get started with Step 1 below. 1. 1 Define the business' revenue and variable costs per unit sold, and its fixed costs. □ For our example, we'll use a factory that made and sold 1,000 widgets last year and had revenues of $100,000. 2. 2 Divide the total revenues by the number of units sold to determine the revenue per unit sold, or the sale price per unit. □ For example, the amount $100,000 in total revenues divided by 1,000 widgets sold equals $100, so the factory sold each widget for $100. 3. 3 Subtract the fixed costs and operating expenses from the total revenues. □ Fixed costs are costs which don't change based on the number of widgets made. Examples include rent and marketing or advertising expenses. □ For example, if the fixed costs were $20,000, and the operating expenses were $10,000, then the revenue of $100,000 minus fixed costs of $20,000 and operating expenses of $10,000 equals 4. 4 Divide the difference between revenues and fixed costs and operating earnings by the number of units made to determine the variable cost per unit made. □ Variable costs are different based on the number of units made. These costs can include materials. □ In our example, the difference between revenues and fixed costs and operating earnings was $70,000; divide $70,000 by 1,000 widgets to determine that variable costs per widget are $70. 5. 5 Calculate the contribution margin, or the variable profit per unit sold. □ This is calculated as the difference between the sales price per unit and the variable cost per unit. □ For example, the sales price for each widget was $100, and the variable cost for each widget was $70, so the contribution margin was $30. 6. 6 Multiply the variable profit per unit sold by the number of units sold to determine the total variable profit. □ For example, the variable profit per until sold was $30, and the number of units sold was 1,000, so the total variable profit was $30,000. 7. 7 Divide the total variable profit by the operating earnings. □ For example, the total variable profit of $30,000 divided by the operating earnings of $10,000 equals 3. This is the degree of operating leverage, or the ratio at which a $1 increase in sales will increase operating earnings. Things You'll Need • Calculator • Pen or Pencil • Paper Sources and Citations • Wikipedia entry: Operating Leverage Article Info Categories: Financial Ratios Recent edits by: Chris, Maluniu, BR In other languages: Español: Cómo calcular el apalancamiento operativo Thanks to all authors for creating a page that has been read 35,528 times. Was this article accurate?
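The arithmetic in the steps above is easy to wrap in a small function. The sketch below reproduces the article's example, treating its $10,000 figure as the operating earnings that its later steps refer to (revenue $100,000, variable costs $70,000, fixed costs $20,000), which gives a degree of operating leverage of 3. The function name is arbitrary.

```python
def degree_of_operating_leverage(revenue, variable_costs, fixed_costs):
    contribution_margin = revenue - variable_costs        # total variable profit
    operating_income = contribution_margin - fixed_costs  # operating earnings (EBIT)
    return contribution_margin / operating_income

# Article's example: $100,000 revenue, $70,000 variable costs, $20,000 fixed costs.
print(degree_of_operating_leverage(100_000, 70_000, 20_000))  # 3.0
```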
{"url":"http://www.wikihow.com/Calculate-Operating-Leverage","timestamp":"2014-04-21T12:18:10Z","content_type":null,"content_length":"62019","record_id":"<urn:uuid:476212e2-97e5-49cd-a354-c5f5332a4047>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00196-ip-10-147-4-33.ec2.internal.warc.gz"}
Poincare, Jules Henri (1854-1912) The mind uses its faculty for creativity only when experience forces it to do so. Poincare, Jules Henri (1854-1912) Mathematical discoveries, small or great, are never born of spontaneous generation. They always presuppose a soil seeded with preliminary knowledge and well prepared by labour, both conscious and Poincare, Jules Henri (1854-1912) Absolute space, that is to say, the mark to which it would be necessary to refer the earth to know whether it really moves, has no objective existence.... The two propositions: "The earth turns round" and "it is more convenient to suppose the earth turns round" have the same meaning; there is nothing more in the one than in the other. La Science et l'hypothese. Poincare, Jules Henri (1854-1912) ...by natural selection our mind has adapted itself to the conditions of the external world. It has adopted the geometry most advantageous to the species or, in other words, the most convenient. Geometry is not true, it is advantageous. Poisson, Simeon (1781-1840) Life is good for only two things, discovering mathematics and teaching mathematics. Mathematics Magazine, v. 64, no. 1, Feb. 1991. Polya, George (1887-1985) Mathematics consists of proving the most obvious thing in the least obvious way. In N. Rose Mathematical Maxims and Minims, Raleigh NC:Rome Press Inc., 1988. Polya, George (1887-1985) The traditional mathematics professor of the popular legend is absentminded. He usually appears in public with a lost umbrella in each hand. He prefers to face the blackboard and to turn his back to the class. He writes a, he says b, he means c; but it should be d. Some of his sayings are handed down from generation to generation. "In order to solve this differential equation you look at it till a solution occurs to you." "This principle is so perfectly general that no particular application of it is possible." "Geometry is the science of correct reasoning on incorrect figures." "My method to overcome a difficulty is to go round it." "What is the difference between method and device? A method is a device which you used twice." How to Solve It. Princeton: Princeton University Press. 1945. Polya, George (1887-1985) Mathematics is the cheapest science. Unlike physics or chemistry, it does not require any expensive equipment. All one needs for mathematics is a pencil and paper. D. J. Albers and G. L. Alexanderson, Mathematical People, Boston: Birkhauser, 1985. Polya, George (1887-1985) There are many questions which fools can ask that wise men cannot answer. In H. Eves Return to Mathematical Circles, Boston: Prindle, Weber and Schmidt, 1988. Polya, George (1887-1985) When introduced at the wrong time or place, good logic may be the worst enemy of good teaching. The American Mathematical Monthly, v. 100, no. 3.
{"url":"http://www.maa.org/quote_alphabetical/p?page=7&device=mobile","timestamp":"2014-04-21T13:05:13Z","content_type":null,"content_length":"31013","record_id":"<urn:uuid:424681b6-5278-43d6-b93e-de1edeb7b09c>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00515-ip-10-147-4-33.ec2.internal.warc.gz"}
Wolfram Demonstrations Project R-Trees for Indexing Multidimensional Data An R-tree is a data structure used to index multidimensional data in database systems for spatial queries. Branches in an R-tree maintain a minimum bounding rectangle of all the child branches and nodes. Queries against the R-tree traverse the tree by performing relatively inexpensive intersection operations against the minimum bounding rectangles. On the top, you can see the data and the minimum bounding rectangles containing the data. On the bottom, you can see the tree itself. Clicking anywhere in the panel at the top adds new data points to the tree. Hover over a node in the tree to see its label. Clicking a branch in the tree highlights the minimum bounding rectangle to which it corresponds. The R-tree was first proposed by Guttman in 1984. It is used in many spatial database systems, including PostGIS and JTS, to efficiently index and query multidimensional data. R-trees create a hierarchical decomposition of the data space that minimizes the area of rectangles needed to group the data. Each branch in an R-tree maintains a minimum bounding rectangle for all of the children of that branch, including sub-branches and data elements. Each node in an R-tree has a configurable maximum number of elements. The insertion algorithm requires that nodes be split when they are full. There are multiple optimal splitting techniques. The one used in this Demonstration is the quadratic cost splitting criterion, which trades off speed for optimality. A split could cause splits to propagate up the tree. When this happens, the tree grows at the root to accommodate the new data. R-trees are particularly suited for computing results for nearest-neighbor queries. As such, they are effective for performing rough clustering of multidimensional data in a machine-learning algorithm. [1] A. Guttman, "R-Trees: A Dynamic Index Structure for Spatial Searching," Proceedings of the 1984 ACM SIGMOD International Conference on Management of Data—SIGMOD , New York: ACM, 1984 pp. 47–57. [2] H. Samet, Foundations of Multidimensional and Metric Data Structures, San Francisco: Morgan Kaufmann, 2005.
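A minimal sketch of the bounding-rectangle bookkeeping described above: each branch keeps the minimum bounding rectangle (MBR) of its children, and a range query only descends into branches whose MBR intersects the query window. This is illustrative only; it ignores insertion and node splitting, and the node attributes `entries` and `is_leaf` are assumptions, not part of any particular library.

```python
class MBR:
    """Axis-aligned minimum bounding rectangle."""
    def __init__(self, xmin, ymin, xmax, ymax):
        self.xmin, self.ymin, self.xmax, self.ymax = xmin, ymin, xmax, ymax

    def intersects(self, other):
        return (self.xmin <= other.xmax and other.xmin <= self.xmax and
                self.ymin <= other.ymax and other.ymin <= self.ymax)

    def union(self, other):
        # Smallest rectangle covering both; a branch's MBR is the union of its children's.
        return MBR(min(self.xmin, other.xmin), min(self.ymin, other.ymin),
                   max(self.xmax, other.xmax), max(self.ymax, other.ymax))

    def area(self):
        return (self.xmax - self.xmin) * (self.ymax - self.ymin)

def range_query(node, window, hits):
    # node.entries is assumed to hold (MBR, child-or-data) pairs; node.is_leaf marks leaves.
    for mbr, payload in node.entries:
        if mbr.intersects(window):
            if node.is_leaf:
                hits.append(payload)
            else:
                range_query(payload, window, hits)
    return hits
```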
{"url":"http://demonstrations.wolfram.com/RTreesForIndexingMultidimensionalData/","timestamp":"2014-04-16T21:52:57Z","content_type":null,"content_length":"42575","record_id":"<urn:uuid:6d2c82cf-e72a-45a7-b72d-e36b8ae03756>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00544-ip-10-147-4-33.ec2.internal.warc.gz"}
Kids.Net.Au - Encyclopedia > Mathematical topos (plural: topoi or toposes - this is a contentious topic) in is a type of which allows the formulation of all of mathematics inside it. See introduction to topos theory for an account of the genesis of this concept. Traditionally, mathematics is built on set theory, and all objects studied in mathematics are ultimately sets and functions. It has been argued that category theory could provide a better foundation for mathematics. By analyzing precisely which properties of the category of sets and functions are needed to express mathematics, one arrives at the definition of topoi, and one can then formulate mathematics inside any topos. Of course, the category of sets forms a topos, but that is boring. In more interesting topoi, the axiom of choice may no longer be valid, or the the law of the excluded middle (every proposition is either true or false) may break down. It is thus of some interest to collect those theorems which are valid in all topoi, not just in the topos of sets. One may also work in a particular topos in order to concentrate only on certain objects. For instance, constructivists may be interested in the topos of all "constructible" sets and functions in some sense. If symmetry under a particular group G is of importance, one can use the topos consisting of all G-sets. Other important examples of topoi are categories of sheaves on a topological space. The historical origin of topos theory is algebraic geometry. Alexander Grothendieck generalized the concept of a sheaf. The result is the category of sheaves with respect to a Grothendieck topology - also called Grothendieck topos. F. W. Lavwere realized the logical content of this structure, and his axioms (elementary topos) lead to the current notion. Note that Lavwere's notion is more general than Grothendieck's, and it is the one that's nowadays simply called "topos". A topos is a category which has the following additional properties: • John Baez: Topos theory in a nutshell, http://math.ucr.edu/home/baez/topos (http://math.ucr.edu/home/baez/topos). A gentle introduction. • Robert Goldblatt: Topoi, the Categorial Analysis of Logic (Studies in logic and the foundations of mathematics vol. 98.), North-Holland, New York, 1984. A good start. • Saunders Mac Lane and Ieke Moerdijk: Sheaves in Geometry and Logic: a First Introduction to Topos Theory, Springer, New York, 1992. More complete, and more difficult to read. • Michael Barr and Charles Wells: Toposes, Theories and Triples,Springer, 1985. Corrected online version at http://www.cwru.edu/artsci/math/wells/pub/ttt (http://www.cwru.edu/artsci/math/wells/pub /ttt). More concise than Sheaves in Geometry and Logic All Wikipedia text is available under the terms of the GNU Free Documentation License
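For reference, the standard defining properties of an elementary topos in Lawvere's sense (as developed in Mac Lane and Moerdijk's book listed above) can be stated briefly: the category has all finite limits, it is cartesian closed (every exponential B^A exists), and it has a subobject classifier, an object Ω with a morphism true: 1 → Ω such that every monomorphism S → X is the pullback of true along a unique characteristic map:

\[
\begin{array}{ccc}
S & \longrightarrow & 1 \\
\downarrow & & \downarrow{\scriptstyle\ \mathrm{true}} \\
X & \xrightarrow{\ \chi_S\ } & \Omega
\end{array}
\]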
{"url":"http://encyclopedia.kids.net.au/page/ma/Mathematical_topos?title=Subobject_classifier","timestamp":"2014-04-16T13:21:54Z","content_type":null,"content_length":"17696","record_id":"<urn:uuid:7c576df7-604c-4545-b4d4-002185edfaa1>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00410-ip-10-147-4-33.ec2.internal.warc.gz"}
Union, NJ Precalculus Tutor Find an Union, NJ Precalculus Tutor Hi! I recently completed a master's in applied mathematics and have started working in a cancer research lab in Hackensack. I can tutor all areas of K-12 math including calculus and SAT/ACT math. I have tutored several students, some as long as 6-8 weeks, but any time frame is fine. 12 Subjects: including precalculus, chemistry, algebra 2, biology ...From my time teaching, I have accumulated many resources from various websites, books, and articles to holistically address these problems. I believe I can present seemingly difficult math concepts in a very tangible and understandable manner. The key is to put it into real-world situations, which are actually meaningful and tangible to us. 26 Subjects: including precalculus, calculus, writing, statistics ...Currently, I am completing my final classes for the music technology major at The College Of Staten Island. My coursework has covered composition in classical and jazz settings. I also was in an all original rock band as the primary co-composer for 4 years, writing many songs throughout that time. 33 Subjects: including precalculus, physics, calculus, GRE ...Over the course of my college career, I have had the privilege of tutoring a high school student into an English honors class, a significant personal accomplishment for his strong math skill set. In addition to tutoring I have edited a section of a student newspaper and have led educational prog... 25 Subjects: including precalculus, chemistry, reading, algebra 2 I just graduated college with a BA in Applied and Pure Mathematics from Rutgers University with a 3.9/4.0 GPA. I would be starting a Pure Mathematics PhD program at the University of Oklahoma this fall. In short, I love mathematics. 12 Subjects: including precalculus, calculus, geometry, statistics Related Union, NJ Tutors Union, NJ Accounting Tutors Union, NJ ACT Tutors Union, NJ Algebra Tutors Union, NJ Algebra 2 Tutors Union, NJ Calculus Tutors Union, NJ Geometry Tutors Union, NJ Math Tutors Union, NJ Prealgebra Tutors Union, NJ Precalculus Tutors Union, NJ SAT Tutors Union, NJ SAT Math Tutors Union, NJ Science Tutors Union, NJ Statistics Tutors Union, NJ Trigonometry Tutors Nearby Cities With precalculus Tutor Chestnut, NJ precalculus Tutors Cranford precalculus Tutors East Orange precalculus Tutors Elizabeth, NJ precalculus Tutors Hillside, NJ precalculus Tutors Irvington, NJ precalculus Tutors Kenilworth, NJ precalculus Tutors Linden, NJ precalculus Tutors Maplewood, NJ precalculus Tutors Millburn precalculus Tutors Roselle Park precalculus Tutors Roselle, NJ precalculus Tutors Springfield, NJ precalculus Tutors Union Center, NJ precalculus Tutors Vauxhall precalculus Tutors
{"url":"http://www.purplemath.com/Union_NJ_precalculus_tutors.php","timestamp":"2014-04-16T07:23:04Z","content_type":null,"content_length":"24085","record_id":"<urn:uuid:691ba595-d93e-4bdc-96b2-3cc83591bfdc>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:
HIII can someone please help me with numbers 5 and 6!? The answer to number 4 is 846.3 so you can use that to help you with parts 5 and 6? Please? - Sincerely extremely confused!
The next resonant frequencies in an open organ pipe are given by lengths: 3L, 5L, ... where L corresponds to the first resonance. The second resonance is given by 3L = 30 cm; the third is given by ...
I am getting the value of the frequency of the tuning fork as 862.5. How did you calculate 846.3?
{"url":"http://openstudy.com/updates/50fba1b9e4b010aceb331188","timestamp":"2014-04-18T03:52:38Z","content_type":null,"content_length":"33723","record_id":"<urn:uuid:4fcaddb4-808f-4914-8139-33d7d7f8bd1b>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00348-ip-10-147-4-33.ec2.internal.warc.gz"}
quick question
January 5th 2008, 05:52 AM #1
I'm working on an ANOVA problem. I have a count of 8 and a sum of 646, therefore an average of 80.75. I just can't remember how to get the variance; any help appreciated. What exactly does variance signify in this case? The apparent answer is 70.2143, and I don't understand how that was achieved.
Last edited by question; January 5th 2008 at 06:11 AM.
January 6th 2008, 04:33 AM #2
The variance here can be anything from zero (all values equal) to any positive real value. If we consider different distributions such as Poisson, truncated Poisson, hypergeometric, etc., then there will be different variances.
January 6th 2008, 05:18 AM #3
Thanks for that, but the answer that must be arrived at is the one I mentioned above. This is one-way ANOVA, if that helps.
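For the record, the count and the sum alone do not determine the variance; the usual sample variance with an (n - 1) denominator also needs the individual values, or equivalently the sum of squares, which is presumably where the quoted 70.2143 comes from:

\[
s^2 \;=\; \frac{1}{n-1}\sum_{i=1}^{n}\bigl(x_i-\bar{x}\bigr)^2
\;=\; \frac{1}{n-1}\left(\sum_{i=1}^{n}x_i^2-\frac{\bigl(\sum_{i=1}^{n}x_i\bigr)^2}{n}\right)
\]

With n = 8, a sample variance of 70.2143 corresponds to a corrected sum of squares of roughly 491.5.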
{"url":"http://mathhelpforum.com/advanced-statistics/25574-quick-question.html","timestamp":"2014-04-18T08:19:49Z","content_type":null,"content_length":"33811","record_id":"<urn:uuid:f9cfc68e-772c-47bc-b570-2228a82e1247>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00535-ip-10-147-4-33.ec2.internal.warc.gz"}
Student Support Forum: 'problem when Plotting a List of functions...' topic
I am desperately trying to plot several functions -- listed in a variable of type List -- with the Plot command. As a Mathematica beginner, I thought it would be straightforward, ... but I am facing a strange problem related to lists of functions and the way Plot handles them. Let's define a list of two functions: l := {Cos[x], Sin[x]} Now let's try to plot the two functions on the same graphics: Plot[l, {x, 0, 10}] ... well, it doesn't work! Mathematica displays a series of messages: "Plot::plnr: l is not a machine-size real number at x = 4.16666666666666607`*^-7." and then an empty graphics. Can someone explain why this doesn't work? Note that the following commands work: Plot[{Cos[x], Sin[x]}, {x, 0, 10}] Plot[{First[l], Last[l]}, {x, 0, 10}] ... even though {Cos[x], Sin[x]} == {First[l], Last[l]} == l !!!!! As in general my list of functions is of a priori unknown size, I can't use the trick with First and Last. Any idea on how to plot my list of functions? Thanks a lot for your time.
{"url":"http://forums.wolfram.com/student-support/topics/2674","timestamp":"2014-04-17T09:53:52Z","content_type":null,"content_length":"27305","record_id":"<urn:uuid:96add905-b178-45d1-b995-2208c65a1b27>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Patente US6084986 - System and method for finding the center of approximately circular patterns in images The present invention is directed to the field of image processing and, more particularly, to finding the center of approximately circular patterns in images. In automated clinical diagnostics, a droplet of some fluid to be tested is placed on a reactive medium. The resulting reaction produces information relevant to the clinical test being performed. This information is obtained automatically by measuring qualities related to the optical density of the test medium after the reaction has taken place or while the reaction is occurring. However, since the droplet does not spread uniformly, the density pattern on the substrate is not uniform, nor is the droplet placed in precisely the same spot on each measurement. Thus, it is desirable to locate the center of the density pattern, as this corresponds to the position at which the droplet is placed. All diagnostic measurements can then be made relative to this position. This will reduce the variability of the measurements and improve their quality. Typically, the density patterns are not precisely circular, there is noise in the measurement of the pattern, and the nature of the pattern will be different for different diagnostic procedures. In general, density patterns tend to be a collection of digital values related to the optical densities at specific locations in the original image. Techniques are known for estimating the center and radius of a circular arc in a binary image, and for finding the centers and radii of multiple such arcs using; for example, the widely known Hough transform. Unfortunately, this requires applying thresholds to the gradient images in order to create binary images and loses information about the strength of the gradient and its local direction. In addition, these techniques find a center for each circular arc, whereas what is often needed is the best overall center for the entire pattern. It is seen then that it would be desirable to have a modified system and method for determining the best central location in an image. The present invention is a method and system for determining the best central location in an image, which comprises determining a score for each of a set of candidate center locations. The center location with the highest score is the most likely center of those in the set of candidates. The center location can then be refined by evaluating the score on a more finely spaced set of candidate locations, or by interpolating amongst those locations for which the score has already been determined. In accordance with one aspect of the present invention, a center of an approximately circular pattern in a physical image can be determined by scanning the physical image to produce an array of digital values. The array of digital values are used to calculate a score at each of a set of candidate center locations. The candidate center location having the highest score is selected as the best overall center for the entire pattern. Typically, the scores are computed on discrete rectangular grid locations, which gives the location of the center to the resolution of the discrete grid. In alternative embodiments, the location can be determined to a finer resolution by recomputing the scores over a grid that has a finer resolution, but only extends over a small neighborhood around the first estimate of the center. 
This refinement can be repeated an arbitrary number of times, after which the resulting estimate of the center location can be taken as the final estimate, or a final interpolation can be performed. Accordingly, it is an object of the present invention to provide a system and method useful in image processing. The concept of the present invention can be used for the processing of images where it is desired to determine the center of the image, such as for images obtained in clinical diagnostic systems. The present invention is useful in finding the best overall center for an entire pattern, particularly for density patterns which are not precisely circular. Other objects and advantages of the invention will be apparent from the following description, the accompanying drawings and the appended claims. FIG. 1 illustrates contour lines of a roughly circular pattern for which the center is to be estimated, in accordance with the present invention; and FIG. 2 is a block diagram illustrating the steps for estimating the center of the roughly circular pattern of FIG. 1. Referring to FIG. 1, in accordance with the present invention, the center of an approximately circular pattern 10 in a physical image 12 can be estimated. FIG. 1 illustrates contour lines 14, 16, 18, 20 of the pattern 10 for which the center is to be estimated. Block diagram 22 of FIG. 2 illustrates the steps for estimating the center of the roughly circular pattern 10 of FIG. 1. As illustrated in the block diagram of FIG. 2, the initial step 24 in determining the center of approximately circular pattern 10 in physical image 12 is to scan the image 12 to produce an array of digital values, i.e., measure the physical density at each of a rectangular grid of locations in the physical image. For the purposes of this illustration, it can be assumed that the density measurements are contained in an array, A[i,j], where j is the row (0≦j&lt;Nrows), and i is the column (0≦i &lt;Ncols). Two derivative arrays can be then computed as follows: DY[i,j]=smedian (A[i+1,j+1]-A[i+1,j-1], A[i,j+1]-A[i,j-1], A[i-1,j+1]-A[i-1,j-1]); DX[i,j]=smedian (A[i+1,j+1]-A[i-1,j+1], A[i+1,j]-A[i-1,j],A[i+1,j-1]-A[i-1,j-1]), where smedian (a,b,c)= a if abs(b)&lt;abs(a)&lt;abs(c) or abs(c)&lt;abs(a)&lt;abs(b) or a=b or a=c; b if abs(a)&lt;abs(b)&lt;abs(c) or abs(c)&lt;abs(b)&lt;abs(a) or b=c; c if abs(a)&lt;abs(c)&lt;abs(b) or abs(b)&lt;abs(c)&lt;abs(a); 0 if (a=-b and a !=c and b!=c) or (a=-c and a !=b and c !=b) or (b=-c and a !=b and a !=c). In accordance with the present invention, a score is then computed, as shown at block 26 of FIG. 2, for a given location of the center, (x,y) where x represents the column coordinate and y represents the row coordinate. It should be noted that x and y need not be integers. The score can be computed in any of a variety of ways, including, for example, according to the following equations: F[x,y]=Sum on i,j of (abs(DX[i,j]*(x-i)+DY[i,j]*(y-j))/sqrt((x-i)*(x-i)+(y-j)*(y-j))); F[x,y]=Sum on i,j of (abs(DX[i,j]*(x-i)+DY[i,j]*(y-j))/((x-i)*(x-i)+(y-j)*(y-j))). The determination on how the score should be computed will depend on the data being used. For example, to more heavily weight data further from the center, the equation using the square root of the denominator may be preferred. In the equations for the calculation of the location of the center, any terms that have a zero denominator are omitted. 
Furthermore, it may be that some elements of A are invalid measurements, because, for example, they may lie outside an area of usable densities. In that case, the elements of DX and DY which depend on those elements of A are also invalid and any term of the sum above which depends on those invalid elements is omitted. The equations above give a score for any given location (x,y) of the center. To find the location of the center, the value of the score, i.e., the value of F[x,y], is computed for a set of locations. As can be seen from the equations above, the score will increase as the location approaches the center of the circular pattern, and will be higher for patterns which are more circular. Hence, the location with the largest value of F is selected as the most likely location of the center, as shown at block 28 of FIG. 2. Typically, F[x,y] is computed on discrete rectangular grid locations. This will give the location (x0,y0) of the center, to the resolution of the discrete grid. The location can be determined to a finer resolution by recomputing F[x,y] over a grid that has a finer resolution but only extends over a small neighborhood around the first estimate of the center, (x0,y0). This will give an improved estimate (x1,y1) of the center location. This refinement can be repeated an arbitrary number of times. After the final refinement, the resulting estimate of the center location (xn,yn) can be taken as the final estimate, or a final interpolation can be performed by fitting the values of F[x,y] computed over the neighborhood to a quadratic or cubic surface and determining the peak of that surface.
The present invention is useful in the field of image processing in that it determines the center of approximately circular patterns in images. Although the present application is useful in a variety of image processing situations, the present invention is particularly useful in the processing of images obtained in clinical diagnostic systems, where density patterns are not precisely circular. The present invention has the advantage of determining the best overall center for an entire pattern. Having described the invention in detail and by reference to the preferred embodiment thereof, it will be apparent that other modifications and variations are possible without departing from the scope of the invention defined in the appended claims.
10 Pattern 12 Image 14 Contour lines 16 Contour lines 18 Contour lines 20 Contour lines 22 Block 24 Block 26 Block 28 Block
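The procedure described above can be sketched in code. The following Python fragment is an illustration only, not text from the patent: the array layout, the handling of border pixels, and the refinement schedule are choices made for the example, and only the first (square-root) form of the score is shown.

import numpy as np

def smedian(a, b, c):
    # Signed median of three values, with the cancellation rules given in the text:
    # if two arguments are exact negatives of each other (and differ from the third),
    # the result is zero; otherwise return the argument of middle absolute value.
    if (a == -b and a != c and b != c) or \
       (a == -c and a != b and c != b) or \
       (b == -c and a != b and a != c):
        return 0
    return sorted((a, b, c), key=abs)[1]

def gradients(A):
    # Derivative arrays DX, DY from 3x3 neighborhoods; border pixels are left at zero here.
    nrows, ncols = A.shape
    DX = np.zeros((nrows, ncols))
    DY = np.zeros((nrows, ncols))
    for j in range(1, nrows - 1):
        for i in range(1, ncols - 1):
            DY[j, i] = smedian(A[j+1, i+1] - A[j-1, i+1],
                               A[j+1, i]   - A[j-1, i],
                               A[j+1, i-1] - A[j-1, i-1])
            DX[j, i] = smedian(A[j+1, i+1] - A[j+1, i-1],
                               A[j,   i+1] - A[j,   i-1],
                               A[j-1, i+1] - A[j-1, i-1])
    return DX, DY

def score(DX, DY, x, y):
    # First (square-root) form of F[x,y]; terms with a zero denominator are omitted.
    total = 0.0
    nrows, ncols = DX.shape
    for j in range(nrows):
        for i in range(ncols):
            r2 = (x - i) ** 2 + (y - j) ** 2
            if r2 > 0.0:
                total += abs(DX[j, i] * (x - i) + DY[j, i] * (y - j)) / np.sqrt(r2)
    return total

def find_center(A, passes=3, points=11):
    # Coarse pass on the integer grid (a direct, unoptimized scan),
    # then successively finer local grids around the current estimate.
    DX, DY = gradients(A)
    nrows, ncols = A.shape
    _, x0, y0 = max((score(DX, DY, x, y), x, y)
                    for y in range(nrows) for x in range(ncols))
    half = 1.0
    for _ in range(passes):
        xs = np.linspace(x0 - half, x0 + half, points)
        ys = np.linspace(y0 - half, y0 + half, points)
        _, x0, y0 = max((score(DX, DY, x, y), x, y) for y in ys for x in xs)
        half /= 10.0
    return x0, y0

Each refinement pass re-evaluates the score on an 11 by 11 grid one tenth the size of the previous one, which mirrors the coarse-to-fine idea described in the text; a final surface fit could be added in place of the last pass.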
{"url":"http://www.google.es/patents/US6084986?dq=flatulence","timestamp":"2014-04-17T15:33:48Z","content_type":null,"content_length":"65955","record_id":"<urn:uuid:cb3cb786-e6d7-456b-9d72-7db3494dc60f>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
Another programming problem!!! hi guys i am trying to make a program to solve a Sudoku.i hope you know what that is. now i have finished the code but there seems to be a logical error because i entered a valid one and it outputted the message 'The Sudoku cannot be solved!!!' here's the code: program Sudoku_solver; {$mode objfpc}{$H+} {$IFDEF UNIX}{$IFDEF UseCThreads} { you can add units after this }; {$IFDEF WINDOWS}{$R Sudoku.rc}{$ENDIF} niz=array[1..max] of integer; matrica=array[1..max] of niz; function row(a:matrica;i1,j1:integer):boolean; for j:=1 to 9 do if j<>j1 then if a[i1][j]=a[i1][j1] then function column(a:matrica;i1,j1:integer):boolean; for i:=1 to 9 do if i<>i1 then if a[i][j1]=a[i1][j1] then function square(a:matrica;i1,j1:integer):boolean; case i1 of 1,2,3: begin 4,5,6: begin 7,8,9: begin case j1 of 1,2,3: begin 4,5,6: begin 7,8,9: begin while (k<=i) do while (l<=j) do if (k<>i1) and (l<>j1) then if a[k][l]=a[i1][j1] then function pos(a:matrica;i,j:integer):boolean; poz:=row(a,i,j) and column(a,i,j) and square(a,i,j); procedure sudoku(var a:matrica;n,i1,j1:integer;ok:boolean); if poz(a,i,j) then while (b=true) and (i<=max) do while (b=true) and (j<=max) do if a[i][j]=0 then if b=false then while (k<=9) and not ok do writeln('Enter the Sudoku: '); for i:=1 to 9 do for j:=1 to 9 do while (k<=9) and not ok do if ok then writeln('The solution is: ') for i:=1 to 9 do for j:=1 to 9 do else writeln('Sudoku cannot be solved!!!'); note that functions row,column and square check if there are same numbers as the number we are looking at in the same row,column and square. my second question is: is there another (better) way to make the Sudoku solver,because this one is fairly long and complex !? Last edited by anonimnystefy (2011-10-01 07:14:47) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! Hi anonimnystefy; That is not long at all for a sudoku solver. What is the sudoku problem that you are testing this on? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! hi bobbym the answer should be: The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! just for practice: Last edited by anonimnystefy (2011-09-30 06:13:23) The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! Funny thing is my program does noot get that one either. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! 
but why won't it work? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! I do not know. I checked the rows and columns of the solution as well as every 3 x 3 grid. It should have got that answer?! In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! have you checked it compleely.it uses recursion. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! Hi anonimnystefy; No, I haven't. I am looking at mine as to why it does not get that answer. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! what does it get? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! Spits it out as if it does not have a solution! Where does that solution come from? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! from me! The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! That is what I was thinking. Did you try it on any others? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! nope! :embarrassed! The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! I can give you some examples if you do not have any. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! i have bunch of them and i tried this one: it won't work! 
The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! hi bobbym i found an error on my side in the main code and i fixed it and i edited it but it still won't do it! The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! Something must be wrong with yours. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! ya think? The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! That is about all I can do. I do not speak Java. Even my C++ is too rusty. Use your debugger to go line by line watching the variables as you go. In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! where did you get Java? i'm using Pascal.very basic. i changed my code a bit and got a program that solves only the first row.it won't go to the next one! The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! Hi anonimnystefy; Forgive my sense of humor, I know, you told me you were using Lazarus. In the other thread, just out of curiosity, I asked if lazarus has a debugger? Does it? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! think so. The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment Re: Another programming problem!!! Does it have a variable pane? Or a variable watch? In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof. Re: Another programming problem!!! yes,yes it does. The limit operator is just an excuse for doing something you know you can't. 
“It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
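For the question raised earlier in the thread, whether there is a shorter way to write the solver, a compact backtracking approach is sketched below. This is not a translation of the Pascal program above; it is an independent illustration in Python, with the puzzle given as a 9x9 list of lists and 0 marking empty cells.

def valid(grid, r, c, v):
    # v may be placed at (r, c) if it does not repeat in the row, the column,
    # or the 3x3 box containing (r, c).
    if any(grid[r][j] == v for j in range(9)):
        return False
    if any(grid[i][c] == v for i in range(9)):
        return False
    br, bc = 3 * (r // 3), 3 * (c // 3)
    return all(grid[br + i][bc + j] != v for i in range(3) for j in range(3))

def solve(grid):
    # Find the first empty cell; if there is none, the puzzle is solved.
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                for v in range(1, 10):
                    if valid(grid, r, c, v):
                        grid[r][c] = v
                        if solve(grid):
                            return True
                        grid[r][c] = 0   # undo and try the next candidate
                return False             # no candidate fits: backtrack
    return True

# Usage: solve(puzzle) returns True and fills `puzzle` in place,
# or False if the puzzle has no solution.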
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=192043","timestamp":"2014-04-20T01:10:53Z","content_type":null,"content_length":"45309","record_id":"<urn:uuid:f5d21470-7e97-4d62-abf0-c4dcf2b364cc>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimized Sections for High-Strength Concrete Bridge Girders--Effect of Deck Concrete Strength CHAPTER 4. TASK 3: ANALYSES OF PRESTRESS LOSSES AND LONG-TERM DEFLECTIONS Analyses to determine the effect of high-performance concrete on prestress losses and long-term deflections were performed using a computer program known as PBEAM. ^(29) The program PBEAM is capable of analyzing composite prestressed concrete structures of any cross sectional shape having one axis of symmetry. The program accounts for the effects of nonlinearity of stress-strain response of materials and their variations of strength, stiffness, creep, and shrinkage of concrete, and relaxation of steel with time. A step-by-step method is used in the time-dependent analysis, and a tangent stiffness method is implemented for solving nonlinear response. Precast, prestressed bridge girders with composite cast-in-place decks are modeled using a discrete element method. Element deformations and forces are estimated by analyzing stress-strain relationships of a series of rectangular fibers distributed over the depth of a cross section. Strain in each fiber is assumed to be constant at the centroidal axis of the fiber, and strain distribution varies linearly through the depth of a section. For each time step, the equilibrium at each element is maintained by determining the time-dependent stress corresponding to the level of strain in each fiber. The stress multiplied by area is summed over all fibers and force equilibrium is checked. If necessary, the strain distribution is adjusted and the process is repeated until all forces balance. A more detailed description of the program PBEAM and its verification against experimental data are given in references 29 and 30. The following assumptions were utilized in the program PBEAM: • Girdersare simply supported. • Calculations are based on a typical interior girder. • Release of the prestressing strands occurs at an age of one day in several increments. Dead load is added at each increment. • Concrete deck is cast in place and is cast when the girder is 83 days old. At age 90 days, the concrete deck acts compositely with the girder. Deck formwork is considered to be supported on the • Strands are low relaxation Grade 270 with a 12.7 m (0.5 inches) diameter spaced at 51‑mm (2-inch) centers. Minimum cover to center of strands is 51 mm (2 inches). • Girder cross section is a BT-72. Material properties are varied according to the discussion in section 4.2. Analyses were performed for the following variables: • Girder concrete compressive strength: 41, 55, 69, and 83 MPa (6,000, 8,000, 10,000, and 12,000 psi, respectively). • Deck concrete compressive strength: 28, 41, 55, and 69 MPa (4,000, 6,000, 8,000 and 10,000 psi, respectively). • Span lengths: 24.4, 44.5, and 53.3 m (80, 146, and 175 ft, respectively). The combination of variables are defined in tables 16 and 17. Cross sections of the girders are shown in Figure 23. Series A through D represent a complete parametric study of girder concrete strength and deck concrete strength for constant cross section and span length. Series E is an investigation of span lengths for a constant concrete strength. Design of the cross sections for series A and E were based on the analyses performed in task 1. Table 16. Task 3 variables (SI units). │Series│Girder Strength (MPa) │Deck Strength (MPa)│Span (m)│No. 
of Strands*│ │ │41 │28 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │41 │41 │44.5 │41 │ │ A ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │41 │55 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │41 │69 │44.5 │41 │ │ │55 │28 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │55 │41 │44.5 │41 │ │ B ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │55 │55 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │55 │69 │44.5 │41 │ │ │69 │28 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │69 │41 │44.5 │41 │ │ C ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │69 │55 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │69 │69 │44.5 │41 │ │ │83 │28 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │83 │41 │44.5 │41 │ │ D ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │83 │55 │44.5 │41 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │83 │69 │44.5 │41 │ │ │83 │55 │24.4 │20 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ E │83 │55 │44.5 │77 │ │ ├──────────────────────┼───────────────────┼────────┼───────────────┤ │ │83 │55 │53.3 │77 │ * For consistency between tasks, the odd number of strands calculated by the program BRIDGE in task 1 were retained in task 3. Table 17. Task 3 variables (English units). │Series│Girder Strength (psi) │Deck Strength (psi)│Span (ft)│No. of Strands*│ │ │6,000 │4,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │6,000 │6,000 │146 │41 │ │ A ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │6,000 │8,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │6,000 │10,000 │146 │41 │ │ │8,000 │4,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │8,000 │6,000 │146 │41 │ │ B ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │8,000 │8,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │8,000 │10,000 │146 │41 │ │ │10,000 │4,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │10,000 │6,000 │146 │41 │ │ C ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │10,000 │8,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │10,000 │10,000 │146 │41 │ │ │12,000 │4,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │12,000 │6,000 │146 │41 │ │ D ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │12,000 │8,000 │146 │41 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │12,000 │10,000 │146 │41 │ │ │12,000 │8,000 │80 │20 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ E │12,000 │8,000 │146 │77 │ │ ├──────────────────────┼───────────────────┼─────────┼───────────────┤ │ │12,000 │8,000 │175 │77 │ * For consistency between tasks, the odd number of strands calculated by the program BRIDGE in task 1 were retained in task 3. To satisfy design stress conditions at the ends of the girders, every strand within the width of the web was draped upwards at the ends. 
The drape started at a distance of 30 percent of the span from the end of the girders. The center 40 percent of the span length had the strands at maximum and constant eccentricity. The computer program PBEAM allows for a variety of inputs for material properties and also contains default values. Because the properties of high-performance concrete may be different from those used as the basis for the default properties, a study was made to select the most appropriate material properties for analysis. This study involved selecting appropriate properties for modulus of elasticity, shrinkage, and creep, and their variation with time.
Modulus of Elasticity
In task 2, complete stress-strain curves for various strengths of concrete were established. The slope of the ascending portion of the stress-strain curve is the modulus of elasticity. The following equations were utilized for calculation of the modulus:
For f'[c] of 28 and 41 MPa (4,000 and 6,000 psi, respectively): in English units ^(20) (1)
For f'[c] of 55, 69, and 83 MPa (8,000, 10,000, and 12,000 psi): in English units ^(19) (4)
For the girder concrete, the variation of compressive strength with time was determined from the following equation:
(f'[c])[t] = compressive strength at a concrete age of t days
(f'[c])[28] = compressive strength at a concrete age of 28 days
The above relationship was based on the recommendations of ACI 209 and corresponds to a compressive strength at 1 day equal to 75 percent of the compressive strength at 28 days. ^(31)
Figure 23 (part 1). Cross section of series A through D girders (BT-72) analyzed in task 3. All dimensions are in millimeters (inches).
Figure 23 (part 2). Cross section of series E girder (BT-72), 24.4 m (80-ft) span, analyzed in task 3. All dimensions are in millimeters (inches).
Figure 23 (part 3). Cross section of series E girder (BT-72), 44.5 m (146-ft) span, analyzed in task 3. All dimensions are in millimeters (inches).
Figure 23 (part 4). Cross section of series E girder (BT-72), 53.3 m (175-ft) span, analyzed in task 3. All dimensions are in millimeters (inches).
For the deck concrete, the variation of compressive strength was assumed to be in accordance with ACI 209 as follows: ^(31) Equation 17 reflects a slower strength gain for moist-cured concrete compared with equation 16, which applies to rapid strength development. Consequently, for a specified 28-day compressive strength, the compressive strength at any other age may be calculated. Using this value of compressive strength, the corresponding modulus of elasticity for the concrete can be determined. In this manner, the variation of modulus of elasticity with time can be calculated.
Shrinkage
Most research has indicated that the final shrinkage of high-strength concretes is of the same order of magnitude as that for lower strength concretes. ^(1) Consequently, the values proposed by ACI 209 were utilized in the program PBEAM analysis. ACI 209 recommends that, in the absence of specific creep and shrinkage data for local aggregates and conditions, an average value of 780 millionths be utilized for the shrinkage of a 153- by 305-mm (6- by 12-inch) cylinder exposed to drying at 40 percent relative humidity. ^(31) This value was then corrected for the effects of girder size and relative humidity in accordance with the procedures of ACI 209. An average mean annual relative humidity of 70 percent was taken as representing a large portion of the United States. Consequently, a relative humidity correction factor of 0.7 was applied. A size correction factor of 0.837 was also applied as representing a volume-to-surface-area ratio of 3.0 for a BT-72. These two correction factors resulted in a final shrinkage strain for the girder of 457 millionths. The shrinkage strain of the girder concrete was assumed to vary with time according to the following equation: ^(31)
([sh])[t] = shrinkage at time t
([sh])[u] = final shrinkage strain
For the concrete in the deck, a relative humidity correction factor of 0.7 was applied along with a size correction for a 190-mm (7.5-inch) thick deck of 0.77, resulting in a final shrinkage of 420 millionths. The deck shrinkage was assumed to vary with time according to the following equation: ^(31) It is possible that, with the higher-strength concretes and the use of fly ash or silica fume to obtain the strengths, the concrete may take longer to dry out than the lower strength concretes. Consequently, the assumed variation of shrinkage with time may not truly reflect actual behavior. However, a lack of data for steam-cured, high-strength concretes precluded the determination of an alternative equation.
Creep of Girder Concrete
Creep of concrete can be expressed in terms of creep coefficients or specific creep. The creep coefficient is the ratio of creep strain to the initial strain at loading. For most concretes, the values vary between 1.30 and 4.15. Specific creep is defined as creep strain per unit stress and varies between 15 and 220 millionths/MPa (0.1 and 1.5 millionths per psi). The relationship between creep coefficient and specific creep is as follows:
Creep coefficient = specific creep x modulus of elasticity at age of loading
The computer program PBEAM allows the input of creep as a creep coefficient. However, for purposes of selecting appropriate creep values for use in the analyses, the following discussion is based on specific creep. Specific creep data for 153- by 305-mm (6- by 12-inch) cylinders published by several authors are shown in Figure 24. These data have been obtained for a variety of concrete constituent materials, cured under different conditions, loaded at different ages, and maintained under constant load for different lengths of time. To partially eliminate the variable associated with the length of time under load, the published data were corrected to final values based on variations of creep with time following the equations listed above. A plot of the same data including this correction factor is shown in Figure 25. All of the data are for cylinders maintained at 50 percent relative humidity while under load. A comparison with the predicted values according to ACI 209 for 50 percent relative humidity is also included in Figure 25. This curve is very close to the best fit for all data. The solid symbols shown in Figure 25 are for concrete specimens obtained by steam curing. ^(14, 30, 32) Since it is anticipated that high-strength concrete prestressed girders will either be produced by steam curing or will achieve relatively high temperatures from heat of hydration, the effects of curing temperatures on the properties of concrete are important.
Hanson indicated that the effect of atmospheric steam curing was to reduce the creep of concrete cylinders containing type I cement by 20–30 percent and that of concretes containing type III cements by 30–40 percent below that of the same concretes moist cured for 6 days. ^(32) It is also apparent from Figure 25 that the reduction in specific creep as compressive strength increases is more rapid with the steam-cured concretes than with the moist-cured concretes. Figure 25 shows a best-fit curve to the data for steam-cured concretes alone. This curve indicates a very rapid change in the specific creep as the concrete compressive strength increases. However, no data are available for concrete compressive strengths above 69 MPa (10,000 psi), so the validity of the extrapolation beyond 69 MPa (10,000 psi) is questionable. Consequently, in the PBEAM analyses, a variation of specific creep with concrete compressive strength was selected that lay between the ACI 209 values and that for steam cured concrete alone. This line is labeled in Figure 25 as PBEAM. Since most of the data in Figure 25 represent concrete loaded at 28 days, this age was selected as the age for which the specific creep values would be selected. Values of specific creep and creep coefficient for 28-day age of loading at 50 percent relative humidity and a volume-to-surface ratio of 1.5 are listed in table 18. These data were then corrected using the procedures of ACI 209 for a relative humidity of 70 percent, a volume-to-surface ratio of 3.0 corresponding to a BT-72, and a loading age of 1 day. The corrected calculated creep coefficients for each concrete strength are tabulated in table 18. These values were used in the PBEAM analyses.
Figure 24. Variation of specific creep with compressive strength as published.
Figure 25. Variation of ultimate specific creep with compressive strength.
Table 18. Values of creep used in PBEAM.
│28-Day Compressive Strength│Unit Weight│28-Day Modulus of Elasticity│Specific Creep (Loading Age = 28 Days, RH = 50%, V/S = 1.5)│Creep Coefficient (RH = 50%, V/S = 1.5)│Creep Coefficient (RH = 70%, V/S = 3.0)│
│MPa │kg/m^3 │GPa │millionths/MPa │ │ │
│41 │2,370 │31.7 │72.5 │2.30 │1.95 │
│55 │2,420 │35.9 │56.9 │2.04 │1.73 │
│69 │2,480 │39.1 │47.0 │1.93 │1.63 │
│83 │2,500 │44.3 │40.3 │1.79 │1.51 │
│psi │lb/ft^3 │10^6 psi │millionths/psi │ │ │
│6,000 │148 │4.60 │0.500 │2.30 │1.95 │
│8,000 │151 │5.20 │0.392 │2.04 │1.73 │
│10,000 │155 │5.97 │0.324 │1.93 │1.63 │
│12,000 │156 │6.43 │0.278 │1.79 │1.51 │
The variation of creep with time was assumed to be in accordance with the following equation by ACI 209:
[t] = creep at time t
[u] = final value of creep
t = number of days under load
The effect of age of loading was also assumed to be in accordance with ACI 209 as follows:
[la] = correction factor for age of loading
t[la] = age of concrete at loading
The resulting relationship between specific creep and age for different strength concretes and two ages of loading is shown in Figure 26.
Creep of Deck Concrete
Since the equations utilized by ACI for creep coefficient represented a good fit for the data shown in Figure 25, it was decided to use the ACI 209 values for the creep properties of the concrete used in the deck. The calculated creep coefficient for a deck with a thickness of 190 mm (7.5 inches) was 1.44.
Steel Relaxation
Since steel relaxation was not a primary parameter in the evaluation, the default values contained within PBEAM were utilized. These are based on the PCI recommendations. ^(39)
Figure 26. Variation of specific creep with age.
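Two of the figures quoted in this section can be checked with a few lines of arithmetic. The short, purely illustrative script below reproduces the corrected final shrinkage strains (457 and 420 millionths) and verifies the Table 18 creep coefficients against the stated relation, creep coefficient = specific creep x modulus of elasticity at the age of loading.

# Final shrinkage strains: the ACI 209 base value times the stated correction factors.
base = 780e-6
girder = base * 0.7 * 0.837      # RH = 70 percent, V/S = 3.0 (BT-72)
deck   = base * 0.7 * 0.77       # RH = 70 percent, 190-mm (7.5-inch) deck
print(round(girder * 1e6), round(deck * 1e6))    # -> 457 420, as quoted in the text

# Creep coefficient = specific creep x modulus at the age of loading
# (English-unit rows of Table 18; millionths/psi times 10^6 psi is dimensionless).
table18 = [(4.60, 0.500, 2.30), (5.20, 0.392, 2.04),
           (5.97, 0.324, 1.93), (6.43, 0.278, 1.79)]
for E, specific, quoted in table18:
    assert abs(specific * E - quoted) < 0.005
print("Table 18 creep coefficients are consistent with the stated relation")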
The variation of prestressing strand stress with time for the bottom layer of strands for two BT‑72 girders with concrete compressive strengths of 41 and 83 MPa (6,000 and 12,000 psi, respectively) and a span of 44.5 m (146 ft) is shown in Figure 27. Each curve consists of four stages. The first stage comprises the initial elastic shortening caused by release of the prestressing. In PBEAM, this is accomplished by applying the prestressing force in a series of stages corresponding to the addition of the girder dead load. The force is applied in increments to prevent cracking of the concrete. The second stage of the curves consists of prestress losses between the time of release and the time when the deck is cast on the girder. The third stage is the elastic change in stress caused by application of the dead load of the deck concrete to the girder cross section. The fourth and final stage of the curve consists of losses in strand stresses as the composite girder is loaded by the dead load of the deck and girder. For this analysis, no additional dead load was assumed after the deck was added. The general shape of the curve was the same for all span lengths and concrete strengths analyzed. At each level of girder concrete compressive strength, the variation of deck concrete compressive strength did not have any effect on prestress losses. This occurs because the deck does not become an effective part of the composite section until the fourth stage of each curve. At the beginning of the fourth stage, the compressive stress in the deck is zero. The only increase in compressive stress occurs as the concrete girder shortens and tries to shorten the deck with a corresponding force transferred into the deck. However, at the same time, the deck is also shrinking and this shrinkage is of the same order of magnitude as the shortening of the top flange of the girder. Consequently, there is very little transfer of force between the girder and the deck, and the deck does not have a significant impact on the prestress losses. It should be noted that all the analyses in this investigation were based on a composite section becoming effective at 90 days. It is possible that the effect of the deck concrete compressive strength may be greater for earlier ages of loading. The variation of strand stress with time for the three girders containing 83-MPa (12,000-psi) concrete compressive strength and varying span lengths is shown in Figure 28. It can be seen that the prestress losses varied with span length, the 44.5-m (146-ft) length having the largest total loss. This is consistent with the magnitude of stress at the level of the bottom layer of strands following release. For the girder with the span length of 44.5 m (146 ft), the concrete compressive stress at release was the highest of the three girders. Consequently, the elastic shortening and the creep shortening were also higher. Figure 27. Prestressing strand stress versus time for varying girder concrete strength, 28-MPa deck strength, and 44.5-m span. Figure 28. Prestressing strand stress versus time for 83-MPa girder concrete strength, 55-MPa deck strength, and varying spans. The prestress losses determined for each level of girder concrete compressive strength are tabulated in tables 19 and 20. The tabulated losses are the calculated losses for the lower layer of prestressing strand in the girder cross section. 
The total losses are those determined from the program PBEAM at an age of 25 years starting with an initial stress of 1.30 GPa (189,000 psi) in the prestressing strand. The shrinkage stresses were calculated from the assumed shrinkage strain based on a modulus of elasticity of the prestressing strand of 193 GPa (28,000,000 psi). The elastic shortening at release corresponds to the prestress loss determined from the program PBEAM during application of the prestressing force. Because of the manner in which the analyses are performed, a small amount of relaxation is included in these stresses. The creep and relaxation losses represent the net difference between the total losses and the shrinkage and elastic losses. Because of the manner in which the program PBEAM calculates the interactive stresses from creep and relaxation, it is not possible to separate the two effects in the analysis. Consequently, they are listed together in tables 19 and 20. From the analyses, it can be seen that the direct substitution of a higher-strength concrete for one of lower strength reduces the prestress losses, part of the reduction being caused by the lower elastic losses and part by the lower creep losses. It may also be concluded from tables 19 and 20 that the magnitude of the total prestress losses will not be greater through the use of high-strength concrete in the girders and are likely to be less. Prestress losses calculated according to AASHTO standard specifications are also shown in tables 19 and 20 for comparison with the losses calculated according to PBEAM. ^(16) In the current specifications, the creep losses are calculated based on the concrete stresses at the center of gravity of the prestressing steel. For purposes of comparison, the AASHTO losses shown in tables 19 and 20 were calculated using the procedure detailed in AASHTO specifications, but the stresses were calculated at the level of the bottom layer of prestressing steel. For the shorter span lengths, the AASHTO calculations show reasonable agreement with the PBEAM calculations. However, for the higher-strength concretes at the longer span lengths considerable deviation exists. For all calculations, the elastic losses compare favorably. However, AASHTO underestimates the losses caused by shrinkage and overestimates considerably the losses caused by creep. These data indicate that a revision of the AASHTO specification to take into account the different creep properties of the high-strength concretes is needed.
Table 19. Comparison of prestress losses (SI units).
│ │ │ Losses (MPa) │
│Girder Strength (MPa)│Span (m)│Elastic│Shrinkage│Creep│Relaxation│Total│
│ │ │ PBEAM │
│ 41 │44.5 │104 │88 │119 │311 │
│ 55 │44.5 │92 │88 │99 │279 │
│ 69 │44.5 │81 │88 │85 │254 │
│ 83 │44.5 │75 │88 │76 │239 │
│ 83 │24.4 │54 │88 │60 │202 │
│ 83 │44.5 │130 │88 │116 │334 │
│ 83 │53.3 │106 │88 │93 │287 │
│ │ │ AASHTO │
│ 41 │44.5 │98 │45 │126 │16 │285 │
│ 55 │44.5 │83 │45 │128 │17 │273 │
│ 69 │44.5 │72 │45 │130 │19 │266 │
│ 83 │44.5 │66 │45 │131 │19 │261 │
│ 83 │24.4 │45 │45 │91 │23 │204 │
│ 83 │44.5 │112 │45 │212 │10 │379 │
│ 83 │53.3 │92 │45 │181 │14 │332 │
Table 20. Comparison of prestress losses (English units).
│Girder Strength (psi) │ │ Losses (ksi) │ │ │Span (ft)├───────┬─────────┬─────┬──────────┬─────┤ │ │ │Elastic│Shrinkage│Creep│Relaxation│Total│ │ │ │ PBEAM │ │ 6,000 │146 │15.1 │12.8 │17.2 │45.1 │ │ 8,000 │146 │13.4 │12.8 │14.4 │40.6 │ │ 10,000 │146 │11.8 │12.8 │12.4 │37.0 │ │ 12,000 │146 │10.9 │12.8 │11.0 │34.7 │ │ 12,000 │80 │7.8 │12.8 │8.7 │29.3 │ │ 12,000 │146 │18.8 │12.8 │16.8 │48.4 │ │ 12,000 │175 │15.4 │12.8 │13.5 │41.7 │ │ │ │ AASHTO │ │ 6,000 │146 │14.2 │6.5 │18.2 │2.3 │41.2 │ │ 8,000 │146 │12.1 │6.5 │18.6 │2.5 │39.7 │ │ 10,000 │146 │10.5 │6.5 │18.8 │2.7 │38.5 │ │ 12,000 │146 │9.6 │6.5 │19.0 │2.8 │37.9 │ │ 12,000 │80 │6.5 │6.5 │13.2 │3.4 │29.6 │ │ 12,000 │146 │16.2 │6.5 │30.7 │1.5 │54.9 │ │ 12,000 │175 │13.4 │6.5 │26.2 │2.0 │48.1 │ The variation of midspan deflection with time for four girders of varying girder concrete compressive strength and at a constant deck concrete strength of 28 MPa (4,000 psi) is shown in Figure 29. These curves consist of four stages. The initial stage corresponds to an upward deflection at release of prestressing and includes the effects of prestressing and dead load of the girder. The second stage consists of continued upward deflection as a result of creep in the concrete. The third stage consists of a downward deflection caused by the dead load of the deck at the time the concrete deck is placed at 83 days. The fourth stage consists of further downward deflection as a result of creep and shrinkage followed by a period in which the deflections essentially level off. By age 1,000 days, the maximum net deflection was +6 mm (0.25 inch). These analyses indicate that very little change occurs after 180 days, consistent with results obtained by Bruce. ^(14) The effect of deck concrete compressive strength on midspan deflection is shown in Figure 30. This figure shows the concrete compressive strength of the deck had very little effect on the midspan deflections (which was true for all girder strength levels). The variation of midspan deflection with time for the 83-MPa (12,000 psi) concrete girders with varying span lengths is shown in Figure 31. The deflections of the 24.4-m (80-ft) girder are relatively small compared with the deflections of the girder for other span lengths. This results partly from the shorter span length but also from the relatively low number of strands (only 20). The initial camber of the 44.5-m (146-ft) girder is similar to the camber of the girders shown in Figure 29 for the same span length. A slight difference in camber occurs because of the different number of strands: 77 for the girder in Figure 31 compared with 41 for the girders in Figure 29. The downward deflection caused by casting the deck and the subsequent creep and shrinkage are larger for the girder shown in Figure 31 compared with that in Figure 29 because of the larger girder spacing. The net result for the girder shown in Figure 31 is a downward deflection of approximately 20 mm (0.8 inch). Figure 29. Midspan deflection versus time for varying girder concrete strengths, 28-MPa deck strength, and 44.5-m span. The 53.3-m (175-ft) span girder shows a deflection pattern after release that is different from the other girders. For a short time, the girder creeps upwards but it then reverses direction. Following release of the prestress, the stress distribution across the depth of the girder is nearly constant. This is different from the other girders where the compressive stress in the bottom flange is always greater than the stress in the top flange. 
Following a small amount of prestress loss, the compressive stress in the top flange exceeds the stress in the bottom flange and the girder begins to creep downwards. A large deflection occurs when the deck is cast because of the long span. The final result is a downward deflection of approximately 90 mm (3.5 inches). This deflection is small compared with the span length (1 in 600) and could be compensated for by cambering the deck formwork. However, it indicates that there may be deflection considerations that could limit the span length for which high-strength concrete girders can be used. Based on the task 3 analyses, the following conclusions are made: • The use of high-strength concrete in the decks did not change the magnitude of the prestress losses. • Prestress losses in high-strength concrete girders will generally be less than the losses in lower strength concrete girders. • The current AASHTO procedure for calculations of prestress losses needs to be modified to account for the properties of high-strength concrete. • The use of high-strength concrete in the decks did not affect the magnitude of the long-term deflections. • The use of high-strength concrete in girders in place of lower strength concrete will result in less initial camber and similar long-term deflections for the same span lengths. • There may be deflection requirements that limit the span lengths for which high-strength concrete girders can be used. Figure 30. Midspan deflection versus time for 41-MPa girder concrete strength, varying deck concrete strengths, and 44.5-m span. Figure 31. Midspan deflection versus time for 83-MPa girder concrete strength, 55-MPa deck strength, and varying spans. Previous Table of Contents Next
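As a mechanical cross-check on the loss tables above, the AASHTO loss components can be summed and compared with the reported totals. The illustrative script below uses the English-unit rows of Table 20.

# Loss components and reported totals (ksi) for the AASHTO rows of Table 20.
rows = [
    (14.2, 6.5, 18.2, 2.3, 41.2),
    (12.1, 6.5, 18.6, 2.5, 39.7),
    (10.5, 6.5, 18.8, 2.7, 38.5),
    ( 9.6, 6.5, 19.0, 2.8, 37.9),
    ( 6.5, 6.5, 13.2, 3.4, 29.6),
    (16.2, 6.5, 30.7, 1.5, 54.9),
    (13.4, 6.5, 26.2, 2.0, 48.1),
]
for *parts, total in rows:
    assert abs(sum(parts) - total) < 0.05, (parts, total)
print("all component sums agree with the reported totals")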
{"url":"https://www.fhwa.dot.gov/publications/research/infrastructure/structures/05058/04.cfm","timestamp":"2014-04-18T00:50:48Z","content_type":null,"content_length":"73097","record_id":"<urn:uuid:35c8a2dd-fac7-4b50-8f8d-abb4012357da>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00089-ip-10-147-4-33.ec2.internal.warc.gz"}
Page:Scientific Memoirs, Vol. 3 (1843).djvu/684 This page has been , but needs to be L. F. MENABREA ON BABBAGE'S ANALYTICAL ENGINE. cases the difficulty will disappear, if we observe that for a great number of functions the series which represent them may be rendered convergent; so that, according to the degree of approximation desired, we may limit ourselves to the calculation of a certain number of terms of the series, neglecting the rest. By this method the question is reduced to the primitive case of a finite polynomial. It is thus that we can calculate the succession of the logarithms of numbers. But since, in this particular instance, the terms which had been originally neglected receive increments in a ratio so continually increasing for equal increments of the variable, that the degree of approximation required would ultimately be affected, it is necessary, at certain intervals, to calculate the value of the function by different methods, and then respectively to use the results thus obtained, as data whence to deduce, by means of the machine, the other intermediate values. We see that the machine here performs the office of the third section of calculators mentioned in describing the tables computed by order of the French government, and that the end originally proposed is thus fulfilled by it. Such is the nature of the first machine which Mr. Babbage conceived. We see that its use is confined to cases where the numbers required are such as can be obtained by means of simple additions or subtractions; that the machine is, so to speak, merely the expression of one^[1] particular theorem of analysis; and that, in short, its operations cannot be extended so as to embrace the solution of an infinity of other questions included within the domain of mathematical analysis. It was while contemplating the vast field which yet remained to be traversed, that Mr. Babbage, renouncing his original essays, conceived the plan of another system of mechanism whose operations should themselves possess all the generality of algebraical notation, and which, on this account, he denominates the Analytical Engine. Having now explained the state of the question, it is time for me to develope the principle on which is based the construction of this latter machine. When analysis is employed for the solution of any problem, there are usually two classes of operations to execute: firstly, the numerical calculation of the various coefficients; and secondly, their distribution in relation to the quantities affected by them. If, for example, we have to obtain
{"url":"http://en.wikisource.org/wiki/Page:Scientific_Memoirs,_Vol._3_(1843).djvu/684","timestamp":"2014-04-17T23:44:49Z","content_type":null,"content_length":"25258","record_id":"<urn:uuid:5036859d-3d9a-475a-aee5-7c6cb91777d5>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
Coin tosses don't really give you fifty-fifty odds
The coin toss, the great equalizer of all odds, is not as random as it seems. When flipped by a human being, the odds are slightly stacked towards one side. One famous Futurama episode showed a universe where everything is the same, except the result of each coin toss. If a tossed coin came up heads in one universe, it came up tails in the other. The comedy in the episode came from the changes that this random event - entirely up to the physical laws of the universe - caused. It turns out, though, that this mirror universe is impossible. While coin tosses represent fifty-fifty odds, they don't actually work out that way. As usual, it's humanity that messes things up. When flipped by a machine, coins come up heads a solid fifty percent of the time, and tails the other fifty percent. Put the fate of the coin in grubby human hands and the odds tip slightly in favor of the side that faces up just before the coin is flipped. The side that was face up at the beginning of the flip has a fifty-one percent chance of landing face-up at the end. Humans are not as precise as machines, and so the coin rotates around several axes instead of one. The extra rotation favors the side that was in the original face-up position, to a measurable degree. This is independent of the material that the coin is made out of. Scientists have tried the experiment using coins made out of balsa wood (and probably the labor of some very tired interns), and gotten the same results. So if the initial placement of the coin matters to the flip, how did that alternate Futurama universe work out the difference? Did a character who liked to place the coin on their hand heads-up in one universe switch to tails-up in another? Suddenly that universe was not just about the effect of random chance on people's actions, but the effect of subtle psychological decisions that had an effect one percent of the time. Via The Washington Post. Photo by Kirsty Pargeter/Shutterstock
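For a sense of how small a one-percentage-point edge is, the back-of-the-envelope sketch below (an illustration, not from the article) estimates roughly how many tosses are needed before a 51 percent bias stands out from ordinary fifty-fifty noise.

p_same_side = 0.51                 # probability reported in the article
edge = p_same_side - 0.5
# After n tosses the standard deviation of the observed fraction is about 0.5/sqrt(n).
# Asking the edge to be roughly two standard deviations gives:
n = (2 * 0.5 / edge) ** 2
print(round(n))                    # about 10,000 tosses for a two-sigma hint of the bias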
{"url":"http://io9.com/5826157/coin-tosses-arent-really-fifty-fifty","timestamp":"2014-04-21T07:57:19Z","content_type":null,"content_length":"84158","record_id":"<urn:uuid:f6fc337c-1166-47f8-ae54-b06a076f78f5>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
'Senior mathematical challenge' Issue 20 May 2002 The UK National Mathematics Contest 1988-1996 As Tony Gardiner says in at the beginning of this book, "the last ten years or so has seen a remarkable blossoming of public interest in mathematics [but] most of the books produced have been for adults, rather than for students. Moreover, most are in prose format - for those who want to 'read about' mathematics, rather than those who want to get their hands dirty solving problems." There is only so far a student or teacher can go reading about maths - maths is not primarily a spectator sport. But where to turn for really good, challenging problems? The preference of publishers for "new" books means that classic problem books - which in maths have a long shelf-life - are allowed to go out of print. Still, this book fills the gap nicely. It contains the National Mathematics Contest papers from 1988 to 1996, together with comments, hints and full solutions. The problems are multiple choice in format, and are aimed at students aged 15 to 18. In the solutions, Gardiner deliberately sticks to techniques that GCSE students should already know, eschewing "slick" solutions in favour of ones that will not discourage beginners. Clearly this book will be of use as practice material to those intending to sit the paper in the future, but it could also be used as a source of challenging and interesting problems - for homework or for the sheer joy of intellectual mastery. A useful feature is a list of books of problems and puzzles and books which cover mathematical material in a readable way. So what are the problems like? Here are two chosen pretty much at random: How many planes of symmetry does an octahedron possess? A 3 B 5 C 8 D 9 E 12. How many letters are there in the correct answer to this question? A one B two C three D four E five If you like these, now you know where to find lots more. Book details: Senior Mathematical Challenge - the UK National Mathematics Contest 1988-1996 Tony Gardiner paperback - 184 pages (2002) Cambridge University Press ISBN: 0-521-66567-1 You can buy the book and help Plus at the same time by clicking on the link on the left to purchase from amazon.co.uk, and the link to the right to purchase from amazon.com. Plus will earn a small commission from your purchase.
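The second of the two sample problems quoted above is self-referential, so it can simply be checked by brute force; a tiny illustrative sketch:

choices = {"A": "one", "B": "two", "C": "three", "D": "four", "E": "five"}
numbers = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
# A choice can only be correct if its word really has that many letters.
consistent = [k for k, w in choices.items() if len(w) == numbers[w]]
print(consistent)   # -> ['D'], since "four" is the only self-describing option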
{"url":"http://plus.maths.org/content/senior-mathematical-challenge","timestamp":"2014-04-17T06:46:11Z","content_type":null,"content_length":"25725","record_id":"<urn:uuid:2747a835-dc43-4274-a206-eb85bd0d338b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00524-ip-10-147-4-33.ec2.internal.warc.gz"}
Volume Velocity
The volume velocity is the particle velocity times the cross-sectional area of the flow. When a flow is confined within an enclosed channel, as it is in an acoustic tube, volume velocity is conserved when the tube changes cross-sectional area, assuming the density remains constant. This follows from conservation of mass in a flow: the total mass passing a given point per unit time is the density times the particle velocity times the cross-sectional area. As a simple example, consider a constant flow through two cylindrical acoustic tube sections having cross-sectional areas A1 and A2; if the particle velocities in the two sections are u1 and u2, conservation requires u1 A1 = u2 A2. It is common in the field of acoustics to denote volume velocity by an upper-case U = u A, so that U1 = U2 would express the conservation of volume velocity from one tube segment to the next.
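A quick numerical illustration of the conservation statement, with example values only:

A1, A2 = 2.0e-4, 0.5e-4     # cross-sectional areas of the two sections, m^2 (example values)
u1 = 0.3                    # particle velocity in the first section, m/s
U = u1 * A1                 # volume velocity, conserved across the area change
u2 = U / A2                 # particle velocity in the narrower section
print(U, u2)                # same U in both sections; here u2 is four times u1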
{"url":"https://ccrma.stanford.edu/~jos/pasp/Volume_Velocity_Gas.html","timestamp":"2014-04-21T10:59:20Z","content_type":null,"content_length":"10023","record_id":"<urn:uuid:20f64ece-4fc6-4394-9c37-85e6f8103248>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Items tagged with parameter So I've got a numerical solution to a D.E. and want to find which parameters (v, theta) cause the second event to occur. Here's my dsolve > soln:= dsolve({eqx, eqy, inty, intx, x(0)=0, y(0)=0}, {x(t), y(t)}, numeric, method = rkf45, output=listprocedure, parameters=[v, theta], events=[ [ [diff(y(t),t)=0, y(t) The first event is "Does the trajectory reach its max at a height less than 3.05m? If so, stop computation"...
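The Maple call is cut off above, but the underlying idea, integrate the trajectory, stop at the apex, and test whether the apex is below 3.05 m while scanning (v, theta), can be sketched in Python with SciPy as a rough analogue. This is not the poster's code: the drag-free dynamics and the parameter grid are assumptions made for illustration.

import numpy as np
from scipy.integrate import solve_ivp

g = 9.81

def rhs(t, s):
    # s = [x, y, vx, vy]; drag-free projectile kept simple for the illustration.
    return [s[2], s[3], 0.0, -g]

def apex(t, s):
    return s[3]              # vertical velocity crosses zero at the apex
apex.terminal = True         # stop the integration there, like the Maple event
apex.direction = -1

def apex_height(v, theta):
    s0 = [0.0, 0.0, v * np.cos(theta), v * np.sin(theta)]
    sol = solve_ivp(rhs, (0.0, 10.0), s0, events=apex, max_step=0.01)
    if sol.t_events[0].size == 0:
        return None
    return sol.y_events[0][0][1]     # y coordinate of the state at the apex event

# Scan a grid of launch parameters and keep those whose apex stays below 3.05 m.
hits = [(v, th)
        for v in np.linspace(5.0, 12.0, 15)
        for th in np.linspace(0.2, 1.3, 23)
        if (h := apex_height(v, th)) is not None and h < 3.05]
print(len(hits))

SciPy's event mechanism plays the role of Maple's events option here: the terminal flag halts the integration at the apex, and the explicit loop over (v, theta) stands in for Maple's parameters facility.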
{"url":"http://www.mapleprimes.com/tags/parameter?page=2","timestamp":"2014-04-18T15:48:34Z","content_type":null,"content_length":"99040","record_id":"<urn:uuid:e731c8dd-589a-47a0-88c0-450801623752>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00205-ip-10-147-4-33.ec2.internal.warc.gz"}
Grid Walk Sponsoring Company: Challenge Description There is a monkey which can walk around on a planar grid. The monkey can move one space at a time left, right, up or down. That is, from (x, y) the monkey can go to (x+1, y), (x-1, y), (x, y+1), and (x, y-1). Points where the sum of the digits of the absolute value of the x coordinate plus the sum of the digits of the absolute value of the y coordinate are lesser than or equal to 19 are accessible to the monkey. For example, the point (59, 79) is inaccessible because 5 + 9 + 7 + 9 = 30, which is greater than 19. Another example: the point (-5, -7) is accessible because abs(-5) + abs (-7) = 5 + 7 = 12, which is less than 19. How many points can the monkey access if it starts at (0, 0), including (0, 0) itself? Input sample: There is no input for this program. Output sample: Print the number of points the monkey can access. It should be printed as an integer — for example, if the number of points is 10, print "10", not "10.0" or "10.00", etc.
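One way to compute the count is a breadth-first flood fill outward from the origin over accessible points; the sketch below is an illustration, not an official solution.

from collections import deque

def digit_sum(n):
    return sum(int(d) for d in str(abs(n)))

def accessible(x, y):
    return digit_sum(x) + digit_sum(y) <= 19

def count_reachable():
    # Flood fill from (0, 0); the accessible region around the origin is bounded,
    # so the search terminates.
    seen = {(0, 0)}
    queue = deque([(0, 0)])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if (nx, ny) not in seen and accessible(nx, ny):
                seen.add((nx, ny))
                queue.append((nx, ny))
    return len(seen)

print(count_reachable())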
{"url":"https://www.codeeval.com/public_sc/60/","timestamp":"2014-04-18T15:39:12Z","content_type":null,"content_length":"13301","record_id":"<urn:uuid:cfa34586-217e-40e0-9118-d38bea8e45b6>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
1. Introduction The transconductor is a versatile building block employed in many analog and mixed-signal circuit applications, such as continuous-time filters, delta-sigma modulators, variable gain-amplifier or data converter. The transconductor is to perform voltage-to-current conversion. Linearity is one of most critical requirements in designing transconductor. Especially in designing delta-sigma modulators for high resolution Analog/Digital converters, it needs high linearity transconductors to accomplish the required signal-to-(noise+distortions) ratio. The tuning ability of transconductor is also mandated to adjust center frequency and quality factor in filter applications. The portable electronic equipments are the trend in comsumer markets. Therefore, the low power consumption and low supply voltage becomes the major challenge in designing CMOS VLSI circuitry. However, designing for low-voltage and highly linear transconductor, it requires to consider many factors. The first factor is the linear input range. The range of linear input is justified by the constant transconductance, G[m] . Since the distortion of transconductor is determined by the ratio of output currents versus input voltage. The second factor is the control voltage of transconductor. This voltage can greatly impact the value of transconductance, linear range, and power consumption. For example, when the control voltage increases, the transconductance also increase but the linear input range of transconductor is reduced and power consumption is increased. Hence it is critical in designing transconducotr operated at low supply voltage. The third factor is the symmetry of the two differential outputs. If the transconductance of the positive and negative output is G[m+]=I[O+]/V[i] and G[m−]=I[O−]/V[i] , then how close G[m+] and G[m−] should be is a critical issue, where I[O+] is the positive output current, I[O−] is the negative output current, and V[i] is the input differential voltage. This factor is the major cause of common-mode distortion of transconductor which occurs at outputs. In general, the design of differential transconductor can be classified into triode-mode and saturation-mode methods depending on operation regions of input transistors. Triode-mode transconductor has a better linearity as well as single-ended performance. On the other hand, saturation-mode transconductor has better speed performance. However, it only exhibits moderate linearity performance. Furthermore, the single-ended transconductor of saturation-mode suffers from significant degradation of linearity. Several circuit design techniques for improving the linearity of transconductors have been reported in literatures. The linearization methods include: source degeneration using resistors or MOS transistors [Krummenacher & Joeh, 1988; Leuciuc & Zhang, 2002; Leuciuc, 2003; Furth & Andreou, 1995], crossing-coupling of multiple differential pairs [Nedungadi & Viswanathan, 1984; Seevinck & Wassenaar, 1987] class-AB configuration [Laguna et al., 2004; Elwan et al., 2000; Galan et al., 2002], adaptive biasing [Degrauwe et al., 1982; Ismail & Soliman, 2000; Sengupta, 2005], constant drain-source voltages [Kim et al., 2004; Fayed & Ismail, 2005; Mahattanakul & Toumazou, 1998; Zeki, 1999; Torralba et al., 2002; Lee et al., 1994; Likittanapong et al., 1998], pseudo differential stages [Gharbiya & Syrzycki, 2002], and shift level biasing [Wang & Guggenbuhl, 1990]. 
Source degeneration using resistors or MOS transistors is the simplest method to linearize transconductor. However, it requires a large resistor to achieve a wide linear input range. In addition, MOS used as resistor exhibits considerable varitions affected by process and temperture and results in the linearity degradation. Crossing-coupling with multiple differential pairs is designed only for the balanced input signals. The Class-AB configuration can achieve low power consumption. On the other hand, the linearity is the worst due to the inherited Class-AB structure. The adaptive biasing method generates a tail current which is proportional to the square of input differential voltage to compensate the distortion caused by input devices. However, the complication of square circuitry makes this technique hard to implement. The constant drain-source voltage of input devices is a simple structure. It can achieve a better linearity with tuning ability. However, it needs to maintain V[DS] of input devices in low voltage and triode region. Therefore, this technique is difficult to implement in low supply voltage. Hence, a new transconductor using constant drain-source voltage in low voltage application is proposed to achieve low-voltage, highly linear, and large tuning range abilities. In section 2, basic operatrion and disadvantage of the linerization techniques are described. The proposed new transconductor is presented in section 3. The simulation results and conclusion are given in section 4 and 5. 2. Linearization techniques In this section, reviews of common linearization techniques reported in literatures are presented. The first one is the transconductor using constant drain-source voltage. The second one is using regulated cascode to replace the auxiliary amplifier. The third one is transconductor with source degeneration by using resistors and MOS transistors. The last one is the linear MOS transconductor with a adaptive biasing scheme. Besides introducing their theories and analyses, the advantages and disadvantages of these linearization techniques are also discussed. 2.1. Transconductor using constant drain-source voltage The idea of transconductors using constant drain-source voltages is to keep the input devices in triode region such that the output current is linearized. The schematic of this method is shown in Figure 1. Considering that transistors M[1], M[2] operate at triode region, M[3], M[4] are biased at saturation region, channel length modulation, body effect, and other second-order effects are ignored, the drain current of M[1] and M[2] is given by where β =μ[n]C[OX] (W/L), V[GS] is the gate-to-source voltage, V[T] is the threshold voltage, and V[DS] is the drain-to-source voltage. If the two amplifiers in Figure 1 are ideal amplifiers, then The transfer characteristic of this transconductor is given by The transconductance value is In fact, it is difficult to design an ideal amplifier implemented in this circuits. However, it can force V[DS1] =V[DS2] =V[DS] by using two auxiliary amplifiers controlled with the same V[C] to keep V[DS] at the constant value. Therefore, the transfer characteristic of this transconductor is changed as follows: where V[GS1]= V[in1] and V[GS2]= V[in2]. Therefore, the new transconductance value is The linearity of this transconductor is moderated. It is also easy to implement in circuit. However, V[DS] of the input devices must be small enough to keep transistors in triode region. 
The following condition has to be satisfied: On the other hand, the auxiliary amplifiers need to design carefully to reduce the overhead of extra area and power. 2.2. Transconductor using regulated cascode to replace auxiliary amplifier In Figure 2(a) regulating amplifier keeps V[DS] of M[1] at a constant value determined by V[C] . It is less than the overdrive voltage of M[1]. The voltage can be controlled from V[C] so as to place M[3] in current-voltage feedback, thereby increasing output impedance. The concept is to drive the gate of M[3] by an amplifier that forces V[DS1] to be equal to V[C] . Therefore, the voltage variations at the drain of M[3] affect V[DS1] to a lesser extent because amplifiers “regulate” this voltage. With the smaller variations at V[DS1] the current through M[1] and hence output current remains more constant, yielding a higher output impedance [Razavi, 2001] It is one of solutions using regulated cascode to replace the auxiliary amplifier in order to overcome restrictions on Figure 1. The circuit in Figure 2(b) proposed in [Mahattanakul & Toumazou, 1998] uses a single transistor, M[5], to replace the amplifier in Figure 2(a). This circuit called regulated cascode which is abbreviated to RGC. The RGC uses M[5] to achieve the gain boosting by increasing the output impedance without adding more cascode devices. V[DS1] is calculated by follows: Assuming M[5] is in saturation region in Figure 2(b). It can be shown that From (6) Gm=β1VDS1=β1(VC+2ICβ5+VT5) . Thus, G[m] can be tuned by using a controllable voltage source V[C] or current source I[C] . However, it is preferable in practice to use a controllable voltage source V[C] for lowering power consumption since V[DS1] only varies as a square root function of I[C] . Simple RGC transconductor using a single transistor to achieve gain boosting can reduce area and power wasted by the auxiliary amplifiers. However, it still has some disadvantages. First, it will cause an excessively high supply-voltage requirement and also produce an additional parasitic pole at the source of transistors. Therefore, it can not apply to the low-supply voltage design. Second, the tuning range of V[DS1] is restricted. The smallest value of V[DS1] is 2ICβ5+VT when V[C] = 0. In other words, V[DS1] can not be set to zero. Owing to the restriction of (7), V[DS] is as low as possible and the best value is zero. Third, V[T] dependent G[m] may be a disadvantage due to the substrate noise and V[T] mismatch problems [Lee et al., 1994]. In Figure 3, another RGC transconductor that can apply to the low-voltages applications is proposed in [Likittanapong et al., 1998]. The circuit overcomes the disadvantages mentioned above is to utilize PMOS transistor that can operate in saturation region as gain boosting. The use of this PMOS gain boosting in the feedback path can result in a circuit with a wide transconductance tuning range even at the low supply voltage. In [Likittanapong et al., 1998], it mentions that at the maximum input voltage, M[3] may be forced to enter triode region, especially if the dimension of M[2] is not properly selected, resulting in a lower dynamic range. Besides, β[2] may be chosen to be larger for a very low distortion transconductor. It means that the tradeoff between linearity and bandwidth of transconductor is controlled by β[2] Therefore, β[2] should be selected to compromise these two characteristics for a given application. V[DS1] is calculated by follows. Assuming M[3] is in saturation region in Figure 3. 
From (6) Gm=β1VDS1=β1[VC−(2ICβ3+VT3)] It shows that V[DS1] can be set to zero when VC=2ICβ3+VT3 Therefore, this transconductor has a wider tuning range compared to that of RGC transconductor and is capable of working in low-supply voltage (3V). However, this transconductor still has some drawbacks. The major drawback is the tuning ability. For example, it is difficult to control VC=2ICβ3+VT3 if V[DS1] is set to zero. The minor drawback is that V[T] depends on the G[m] . It also may cause substrate noise and V[T] mismatch problems [Lee et al., 1994]. 2.3. Transconductor using source degeneration A simple differential transconductor is shown in Figure 4(a). Assuming that M[1] and M[2] are in saturation and perfectly matched, the drain current is given by The transfer characteristic using (5) is given by where V [i] = (V[ in1] −V[ in2]) If V[GS] is large enough, the higher linearity can be achieved. Unfortunately, it can not be used in the low-voltage application and the linear input range is limited. Simplest techniques to linearize the transfer characteristic of MOS transconductor is the one with source degeneration using resistors as shows in Figure 4(b). The circuit is described by A transfer characteristic derived from (13) is given by The transconductance G[m] is where g[m] is the transconductance of transistor M[1] and M[2]. We should notice that in (14), the nonlinear term depends on V[i] − RI[out] rather than V[i] . Higher linearity can be achieved when R >> 1/g[m] . The disadvantage of this transconductor is that large resistor value is needed in order to maintain a wider linear input range. Owing to G[m] ≈ 1/R, the higher transconductance is limited by the smaller resistor. Hence, there is a tradeoff between wide linear input range and higher transconductance which is mainly determined by a resistor. Another method to linearize the transfer characteristic of MOS transconductor is using source degeneration to replace the degeneration resistor with two MOS transistors operating in triode region. The circuit is shown in Figure 5. Notice that the gates of transistor M[3] and M[4] connect to the differential input voltage rather than to a bias voltage. To see that M[3] and M[4] are generally in triode region, we look at the case of the equal input signals (V[in1]=V[in2] ), resulting in Therefore, the drain-source voltages of M[3] and M[4] are zero. However, V[DS] of M[3] and M[4] equal those of M[1] and M[2]. Owing to (7), M[3] and M[4] are indeed in triode region. Assuming M[3], M [4] are operating in triode region, the small-signal drain-source resistance of M[3], M[4] is given by It must be noted that in this circuit the effect of varying V[DS] of M[1] and M[2] can not be ignored since the drain currents are not fixed to a constant value. The small-signal source resistance of M[1], M[2] is given by Using small-signal T model, the small-signal output current, i[o1] , is equal to Assuming M[1] is in saturation region, the drain current of M[1] is given by Using (20) substitutes for (19), that leads to The transconductance G[m] is Linearity can be enhanced (assuming r[ds3]>> r[s1] ) compared to that of a simple differential pair because transistors operated in triode region exhibits higher linearity than the source resistances of transistors operated in saturation region. When the input signal is increased, the small-signal resistance in one of two triode transistors in parallel, M[3] or M[4], is reduced. 
Meanwhile, the reduced resistance results in the lower linearity and the larger transconductance. As discussed in [Krummenacher & Joeh, 1988], if the proper size ratio of β[1] /β[3] is chosen, the balance between higher linearity and stable transconductance can be achieved. How to choose the optimum size ratio of β[1] /β[3] for the best linearity performance becomes slightly dependent on the quiescent overdrive voltage, V[GS]−V[T]. The size ratio of β[1] /β[3] =6.7 is used to achieve the best linearity performance. According to (22), the transconductance can be tuned by changing I[SS] and size ratio of β[1] /β[3] . Nevertheless, the nonlinearity error is up to 1% for I[out] /I[SS] < 80%. It is required to have a better linearity so as to achieve a THD of -60 dB or less in some filtering applications [Kuo & Leuciuc, 2001]. 2.4. Transconductor using adaptive biasing The transconductor using adaptive biasing is shown in Figure 6. All transistors are assumed to be operated in saturation region, neglecting channel lengh modulation effect. First, transistor M[3] is absent, and output current as a function of two input voltages V[in1] and V[in2] is obtained as where I[SS] is a tail current and equals I[B] . An adaptive biasing technique is using a tail current containing an input dependent quadratic component to cancel the nonlinear term in (23). Consequently, the circuit in Figure 6 changes the tail current by adding transistor M[3]. The tail current will be changed by where I[B] is tail current of differential pair and I[C] is the compensating tail current that cancel nonlinear term. Therefore, the transfer characteristic is changed by 3. New transconductor The conventional structure which uses the constant drain source-voltage such as RGC with NMOS or PMOS can not operate at 1.8V or below. The main reason is that auxiliary amplifier under the low supply voltage can’t provide enough gain to keep the constant drain-source voltage. Therefore, we propose a triode transconductor which uses new structure to replace the auxiliary amplifier. Figure 7 shows the proposed triode transconductor structure. MOS M[5], M[7], M[9] and M[11] are made up a two-stage amplifier to replace the auxiliary amplifier. The two-stage amplifier is implemented using M[9] with the active loads M[11] formed the first stage and M[5] with the active load M[7] formed the second stage. The first and second stages exhibit gains equal to Therefore, the overall gain is The proposed transconductor is shown in Figure 8. Considering that the large gain is achieved and is able to keep transistors M[1] and M[2] in triode region, the drain current of M[1] and M[2] is given by The transfer characteristic is given by where β[1] = β[2] , V[T1] =V[T2] , and V[DS1] = V[DS2] . Assuming that current I[9] flows from M[11] through M[9] and MOS M[9] is in saturation region, V[DS1] can be found in (33) According to (32) The transconductance G[m] is From (35), the transconductance can be tuned by control voltage V[C] To keep M[1] and M[2] in triode region, the relation (36) needs to be satisfied. Using (33) to substitute (36) The proposed transconductor is suitable for low supply voltage and we choose 1.8V to achieve a wide linear range. Moreover, M[9] is needed to obtain a negative feedback to keep the drain-source voltage of M[1], V[DS1], constant. This new structure can provide enough gain to keep V[DS1] constant at 1.8V supply voltage. 
It has a low control voltage V[C] between 0.69V~0.72V and the large transconductance tuning range depending on applications. Besides, it has a simple structure so as to save area. 4. Simulation results The circuits in Figure 8 have been designed by using TSMC CMOS 0.18μm process with a single 1.8V supply voltage and simulated by Hspice. Figure 9. shows the curve of input voltage transferring to the output current at V[C] = 0.7V. The slope of the curve is linear when the input voltage varies from −1V to 1V. The slope in Figure 9. is equal to the transconductance in Figure 10. In order to verify the performance of the proposed transconductor, we define transconductance error (Equation 39) as the linearity of the transconductance’s output current. The transconductance error is less than 1% among ±0.9V input voltage, so the input linear range is up to 1.8V. In Figure 11. it shows the drain-source voltage of the input transistors M[1] and M[2], V[DS1] and V[DS2], changes with the input voltage. Within ±1V input voltage, V[DS1] and V[DS2] are very small. According to equation (40), V[DS1] and V[DS2] are too small such that transistors M[1] and M[2] can be set in triode region. Once the input voltage exceeds ±1V, V[DS1] and V[DS2] will increase rapidly. It results in that transistors M[1] and M[2] enter in saturation region. In other words, when M[1] and M[2] entering saturation region the proposed transconductor can not maintain the high When V[C] is set between 0.69V and 0.72V, the linear input range is up to 2.6V and the transconductance error is less than 1%. The smallest transconductance is 3.4μs and linear input range is 1.2V when V[C] is 0.720V. The highest transconductance is 542μs and linear input range is 1.4V when V[C] is 0.690V. Table 1 shows the linear input range and the transconductance tuned by different V[C] . Therefore, the proposed transconductor achieve a large tuning range. V [C ](V) Linear input range (V) Transconductance (µS) 0.690 1. 4 542 0.695 1.8 434 0.700 1.8 326 0.705 2.2 219 0.710 2.4 122 0.715 2.6 42 0.720 1.2 3.4 In Figure 12., the simulated THD as a function of the input frequency and input signal amplitude is plotted. The best THD is achieved at the low input voltage and the low frequency. When V[C] is 0.7V, the linearity of the proposed transconductor is less than −60dB for 0.7Vpp at 100KHz. Figure 13. shows the linearity of transconductor in three linearization techniques. The transconductor using source degeneration with resistor is shown in Figure 4(b), and the transconductance in Figure 13(a) is tuned by different resistors. The transconductor using source degeneration with MOS transistors is shown in Figure 5, and the transconductance in Figure 13(b) is tuned by the different size ratio of β[1] /β[3] . The transconductor using adaptive biasing is shown in Figure 6, and the transconductance in Figure 13(c) is tuned by the different compensating tail current, I[C] . Figure 14. Shows the simulation result of the proposed technique and other techniques. Figure 14(a) is the full plot of the different linearization techniques. From Figure 14(b) it can be easily seen that the linearity achieved by the newly proposed technique is better than all other implementations. The simulated THD of the output differential current versus the input signal amplitude for the four linearized transconductors is plotted in Figure 15. 
The proposed transconductor achieves THD less than −61dB for the 0.7Vpp input voltage, 11dB better than the one using source degeneration using resistor, 24dB better than the one using source degeneration using MOS, and 31dB better than the one using adaptive biasing, at the same input range. Table 2. shows the power consumption of the four linearized transconductors at the same transconductance. Power consumption changes with the different transconductances. Therefore, the same transconductance is chosen to be compared in each configuration. Table 3. shows different power consumption at the different transconductance of the proposed transconductor. Source degeneration using MOS Source degeneration using resistor Adaptive biasing Proposed Power (mW) 1.3 1 1.19 1.38 1. 58 V C (V) Power (mW) G m (µA/V) 0.690 1.759 542 0.695 1.7 14 434 0.700 1.5 86 326 0.705 1.4 42 219 0.710 1.2 63 122 0.715 0. 9 54 42 0.720 0.733 3.4 Table 4. shows the comparison of performance with other transconductors at the low supply voltage (under 2V). The transconductor in [Fayed & Ismail 2005] also uses constant drain-source voltage. It modifies the basic structure of constant drain source voltage and uses the moderate amplifier. The proposed transconductor modifies the auxiliary amplifiers to obtain high gain under low supply The layout including proposed transconductor, Common Mode Feedback, and bandgap is shown in Figure 16. The proposed transconductor uses STC pure 1.8V linear I/O library in 0.18μm CMOS process. The chip area is 0.516mm^2. [Galan et. al 2002] [Leuciuc & Chang 2002] [Laguna et. al 2004] [Sengupta 2005] [Fayed & Ismail 2005] Proposed Process 0.8µm 0.25µm 0.8µm 0.18µm 0.18µm 0.18µm Power supply 2V 1.8V 1.5V 1.8V 1.8V±10% 1.8V THD -40dB @10MHz -80dB, 0.8Vpp, @2.5MHz -33dB, 0.2Vpp, @5MHz -65dB, 1Vpp, @1MHz -50dB, 0.9Vpp, @50KHz - 60 dB, 0.7Vpp, @1 00KH z G m (µA/V) 0.6~207 200~600 67~155 770 5~110 3.4 ~ 542 Linear input range 0.6Vpp 1.4Vpp 0.6Vpp 1Vpp 1.8Vpp 2.4 Vpp Year 2002 2002 2004 2005 2005 200 9 5. Conclusion The proposed low-voltage, highly linear, and tunable triode transconductor achieves the wide linear input range up to 2.4V. The total harmonic distortion is −60dB with a 0.7V[pp] input voltage. The design uses TSMC 0.18μm CMOS technology and supply voltage is 1.8V. Moreover, it exhibits a large G[m] tuning range from 3.4μS to 542μS and also keeps a wide linear input range. Finally, the performance comparison with other linear techniques shows that the proposed technique achieves better linearity, wider tuning range, and wider linear input range.
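As a closing illustration of the triode-region relation the chapter relies on (equations (5)–(6), Gm = β·VDS), the short Python sketch below evaluates the ideal transfer characteristic of the constant-VDS stage of Figure 1. The device parameters (β, VT, VDS, common-mode level) are illustrative values chosen only for the example, not figures taken from the chapter.

```python
# Ideal constant-VDS transconductor of Figure 1 (triode-region input pair).
# Illustrative device values; not taken from the chapter.
beta = 2e-3    # A/V^2, device parameter mu_n * Cox * (W/L)
VT   = 0.5     # V, threshold voltage
VDS  = 0.1     # V, drain-source voltage held constant by the feedback loop

def id_triode(vgs, vds=VDS):
    """Triode-region drain current: I_D = beta*(V_GS - V_T - V_DS/2)*V_DS."""
    return beta * (vgs - VT - vds / 2.0) * vds

VCM = 0.9      # V, input common-mode level
for vid in (-0.2, -0.1, 0.1, 0.2):              # differential input V_in1 - V_in2
    i1 = id_triode(VCM + vid / 2.0)
    i2 = id_triode(VCM - vid / 2.0)
    iout = i1 - i2                               # differential output current
    print(f"Vid = {vid:+.2f} V   Iout = {iout*1e6:+7.2f} uA   "
          f"Iout/Vid = {iout/vid*1e6:.2f} uA/V")
# With V_DS held truly constant, Iout/Vid equals beta*VDS (here 200 uA/V) for
# every input level, i.e. the stage is ideally perfectly linear.
```

The measured curves of Figures 9–11 deviate from this ideal once the feedback loop can no longer hold VDS constant and the input devices leave the triode region, which is exactly the behaviour the chapter describes for inputs beyond about ±1 V.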
{"url":"http://www.intechopen.com/books/advances-in-solid-state-circuit-technologies/transconductor","timestamp":"2014-04-20T03:11:21Z","content_type":null,"content_length":"183194","record_id":"<urn:uuid:0a465647-8c4d-4074-93b5-2b2265f8eeda>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00546-ip-10-147-4-33.ec2.internal.warc.gz"}
xamples of Last edited 7/15/2009 7:21:00 PM The examples given here: ¨ are basic types of functions that are used everywhere in abstract math ¨ are functions you need to be familiar with ¨ are (sigh) boring. For any set A, the identity function is the function that takes an element to itself; in other words, for every element , . Its graph is the diagonal of . ¨ Warning: The word identity has two other commonly used meanings. This causes trouble because people may refer to the identity function as simply “the identity”, especially in conversation. ¨ The notation for the identity function on A is fairly common, but so is . The identity function is injective and surjective. Understanding the identity function ¨ The identity function on a set A is the function that does nothing to each element of A. ¨ The identity function on is the familiar function defined by . Its graph in the plane is the diagonal line from lower left to upper right through the origin. Its derivative is the constant function defined by g(x) = 1. ¨ There is a different identity function for each different set. See overloaded notation. These functions all have the “same” formula: for every a in A. But they are technically different functions because they have different domains. If (see inclusion), then there is an inclusion function that takes every element in A to the same element. In other words, inc(a) = a for every element . This fits the property of codomain that requires (in this case) that because that is what “ ” means every element of A is an element of B. ¨ The notation for the identity function on A show which set A we are using, but the notation “inc” does not show either set. ¨ The notation “inc” is my own and is not common. Other notations I have seen are: and ¨ Many mathematicians who use the looser definition of function never talk about the inclusion function. For them, it is merely the identity function. The inclusion function is injective. It is surjective if and only if A = B. Understanding the inclusion function ¨ The definition says the inclusion function “takes every element in A to the same element.” I could have worded it this way: “The inclusion function takes each element of A to the same element regarded as an element of B.” This wording incorporates elements of “how you think about X” into the definition of X. This is loose and unrigorous. But I’ll bet a lot of readers would understand it more quickly that way! ¨ The graph of inc is the same as the graph of and they have the same domain, so that the only difference between them is what is considered the codomain (A for , B for the inclusion of A in B). So inc is different from if you require that functions with different codomains be different (discussed here). If A and B are nonempty sets and b is a specific element of B, then the constant function is the function that takes every element of A to b; that is, for all . The notation is not common. There is no standard notation for constant functions. The constant function is injective only if A has exactly one element. It is surjective only if B has exactly one element. How to understand constant functions ¨ A constant function takes everything to the same thing. It has a one-track mind. ¨ A constant function from to has a horizontal line as its graph. ¨ The constant function is not the same thing as the element b of B. If A is any set, there is exactly one function . Such a function is an empty function. Its graph is empty, and it has no values. An identity function does nothing. 
An empty function has nothing to The empty function is vacuously injective. It is surjective only if A is empty. If A and B are sets, there are two coordinate functions (or projection functions) and . Thus for and , and . (See cartesian product). In general for an n-fold cartesian product, the function takes an n-tuple to its i th coordinate. ¨ is injective if and only if either A is empty or B has at most one element. ¨ It is surjective if and only if A is empty or B is nonempty. ¨ If A (or B) is empty, then so is . In that case is the empty function. ¨ For any set S, there are two different coordinate functions and . For example, if S is the set of real numbers, then and . The coordinate function may be denoted by or sometimes (for projection). A binary operation on a set S is a function . (See cartesian product). ¨ The operation of adding two real numbers gives a binary operation ¨ Subtraction is also a binary operation on the real numbers. Observe that, unlike addition, it cannot be regarded as a binary operation on the positive real numbers. ¨ Multiplication of real numbers is also a binary operation . ¨ Division is not a binary operation on the real numbers because you can’t divide by 0. However, it is a binary operation on the nonzero real numbers ( is standard notation for the nonzero reals). You could also look at the function since 0 / y is defined even though y / 0 is not. But it is not a binary operation because by definition a binary operation has to fit the pattern where all three sets are the same. ¨ For any set S, the two projections and are both binary operations on S. Notation and usage ¨ With a binary operation symbol, infix notation is usually used: the name of the binary operation is put between the arguments. For example we write 3 + 5 = 8, not +(3, 5) = 8. ¨ Binary operations are the basis of most of algebra. See groups for more examples. In this section I give you examples of really weird functions that you may never have thought of as functions before, because if you are a beginner in abstract math, you probably need to: Loosen up narrowminded ideas about what a function is Other consciousness-expanding examples of functions are listed in an appendix. Let be defined by The graph of this function is pictured on the right. ¨ F is given by a split definition. It is defined by one formula for part of its domain and by another on the rest. F is nevertheless one function, defined on the closed interval [0,1]. ¨ F is discontinuous at . ¨ F is neither injective nor surjective. ¨ F does not have a derivative at x = 0.5. ¨ The graph does not and cannot show the precise behavior of the function near . a. The point is on the graph, because the definition of F says that F(x) is for x between 0 and 0.5 inclusive. b. For any point x to the right of 0.5, . For example, … correct to eighteen decimal places. In fact, c. but . d. Nevertheless, F(0.5) is 1, not 0.75. That implies that F is not continous at x = 0.5. ¨ It would be wrong to say something like: “ starting at the first point to the right of x = 0.5”. There is no first point to the right of x = 0.5. See density. A function can be given by different rules on different parts of its domain. It is still one function. Let the function F be defined on the set as follows: . ¨ F is defined only for inputs 1,2,3 and 6. For example, is not defined. ¨ F is not injective since F(1) = F(2). ¨ F is not defined by a formula. F(2) = 3 because the definition says it is. ¨ F could be defined by the formula for . 
(This is given by an interpolation formula (MW, Wik)). But it is not obligatory that a function be defined by a formula, only that a mathematical definition of the function be given. See Conceptual and Computational. ¨ You could give the function as a table, as in (a). ¨ You can show the function in an picture, with arrows going from each input to its output, as in (b). A function does not have to be given by a formula. Another finite function is studied here. Let S be some set of English words, for example the set of words in a given dictionary. Then the length of a word is a function; call it L. ¨ L takes words as inputs. ¨ L outputs the number of letters in the word. For example, and . ¨ L is not injective. For example, . ¨ L is not surjective onto since there is a longest word in the set of words in any dictionary. ¨ This function illustrates the fact that a function can have one kind of input and another kind of output. ¨ There is a method of computation for this function (count the number of letters) but most people would not call it a formula. A function can have one kind of input and another kind of output. Let F be defined on the natural numbers by requiring that is the nth prime in order. Thus and . There is a procedure for calculating . For example to calculate F(100), make a list of primes in order (check each natural number in order to see if it is divisible by some natural number other than itself and 1) and stop when you get to the 100^th one. This procedure is ridiculously slow and difficult to use but it doesn’t matter. The definition “F(n) is the nth prime in order” gives a precise definition of F, and that is enough to make it a legitimate function. The definition of a function must tell you what the value is at every element of the domain, but it doesn’t have to tell you how to calculate that value. For example, is the prime in order, but there is no way in the world you will ever find out the decimal representation of that prime. There are faster methods for calculating F(n), in particular the sieve method. but the number is so humongous that no method could calculate in anyone’s lifetime. See Conceptual and Computational. Note that we know F is injective even though we can’t calculate its value for large n. for real numbers . Its graph is shown to the right. It has asymptotes (shown in green) at . Now let Since E(x) is a continuous function on the interval, this integral exists for every t. So G(t) is a properly defined function of the real variable t. ¨ G is a function of t, not of x. The variable x is a bound variable (dummy variable) used in the integral. The definition of G(t) therefore depends on the value of E(x) for every value of x from 0 to t (or from t to 0). After all, the integral is the area under the curve between those values of x, so every little twist in the curve matters. ¨ If you try to use methods you learned in Calc 1 to find the indefinite integral of with respect to x, you will fail. It’s known that this integral cannot be expressed in terms of familiar functions (polynomials, rational functions, log, exp, trig functions.) Nevertheless, for all real t with , the integral exists and and has a specific value. ¨ The definition of G(t) makes it very easy to find the derivative (!): . ¨ This function is an example of an elliptic integral. Elliptic integrals have a long (190 years) and rich history, and are best studied as functions of complex variables. 
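The exact formula for E(x) was lost from this copy of the page, so purely for illustration the sketch below substitutes E(x) = 1/√(1 − x⁴), a typical elliptic integrand with vertical asymptotes at x = ±1; the substitution is an assumption, but the point being made survives it: G(t) = ∫₀ᵗ E(x) dx is a perfectly well-defined function of t even though no elementary antiderivative exists, and its derivative is just E(t).

```python
from scipy.integrate import quad

def E(x):
    """Stand-in integrand (assumed here, not the page's original formula)."""
    return 1.0 / (1.0 - x**4) ** 0.5

def G(t):
    """G(t) = integral of E(x) dx from 0 to t, defined pointwise by quadrature."""
    value, _error = quad(E, 0.0, t)
    return value

t = 0.7
h = 1e-5
numeric_derivative = (G(t + h) - G(t - h)) / (2 * h)   # central difference
print(G(t))                        # a specific, well-defined number
print(numeric_derivative, E(t))    # these agree: G'(t) = E(t)
```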
A definition integral may still be meaningful even if you don’t know a “formula” for the antiderivative Let for all real x. ¨ (F(1/3)=1, F(42) = 1, but because is not rational. ¨ If all you know about x is that it is 3.14159 correct to five decimal places, then you don’t know what F(x) is. No matter how many decimal places you are given for x, you cannot tell what F(x) is. You need to have other information about x (whether it is rational or irrational) to determine its value. ¨ There is no way to draw the graph of this function since both the rationals and the irrationals are dense in the set of real numbers. ¨ This function is not continuous, and therefore does not have a derivative. ¨ This function is not injective. ¨ You can read more about this function here. A function need not have a drawable graph. Let . The frequency goes up rapidly as you get close to the y-axis from the left, since grows very rapidly as x moves toward 0. Drawing the graph near the y-axis is impractical because the curve between x = 0 and x = any bigger number is infinitely long even though it occurs in a finite interval. The graph of a real valued function on a finite interval can be an infinitely long curve. Let f be a function that has a derivative, and let D(f) be its derivative. Then D is a function from a set of functions to a set of functions. ¨ If then , or, using barred arrow notation, . ¨ If then , or . These are pictured below. D takes a function as input and outputs another function, namely the derivative of the first one. The whole function is the input, not some value of the function, not the rule that defines the function, not the graph. You have to think of the function as a thing, in other words as a math object. ¨ Functions whose inputs are complicated structures such as functions may be called operators . (Usage varies in different specialties.) This function D is the differentiation operator. ¨ The differentiation operator is not injective. For example, and have the same derivative, namely 2x. ¨ The domain of D must include only differentiable function (duh). A function can have a set of functions as its domain or codomain. Two functions and their derivatives In each picture, the differentiation operator takes the blue function thought of as a single math object to the red one. More pictures here like those above Other consciousness-expanding examples of functions ¨ The functions denoted (a, b). Bigger graph of the sine blur function
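As a quick illustration of the operator D described above, here is a small SymPy sketch in which the inputs and outputs of D are whole functions rather than numbers. The first two example functions are placeholders (the page's originals were images); the pair x²+1 and x²+3 comes from the text itself.

```python
import sympy as sp

x = sp.symbols('x')

def D(f):
    """The differentiation operator: takes a function, returns its derivative."""
    return sp.Lambda(x, sp.diff(f(x), x))

f = sp.Lambda(x, x**2 * sp.sin(x))     # placeholder example function
g = sp.Lambda(x, 2**x)                 # placeholder example function

print(D(f))    # Lambda(x, x**2*cos(x) + 2*x*sin(x))
print(D(g))    # Lambda(x, 2**x*log(2))

# D is not injective: two different functions can have the same image under D.
h1 = sp.Lambda(x, x**2 + 1)
h2 = sp.Lambda(x, x**2 + 3)
print(D(h1) == D(h2))   # True -- both map to Lambda(x, 2*x)
```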
{"url":"http://abstractmath.org/MM/MMFuncExamples.htm","timestamp":"2014-04-18T10:36:38Z","content_type":null,"content_length":"376247","record_id":"<urn:uuid:b18e45bf-9816-44e4-978b-9d977ba33ad7>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00473-ip-10-147-4-33.ec2.internal.warc.gz"}
2006.414: The problem of differentiation of an Abelian function over its parameters

Victor Buchstaber and Dmitry Leykin (2006) The problem of differentiation of an Abelian function over its parameters.

The theory of Abelian functions was a central topic of nineteenth-century mathematics. In the mid-seventies of the last century a new wave of investigation arose in this field, in response to the discovery that Abelian functions provide solutions to a number of challenging problems of modern theoretical and mathematical physics. In a cycle of our joint papers published in 2000–05, we have developed a theory of the multivariate sigma-function, an analogue of the classic Weierstrass sigma-function. A sigma-function is defined on a cover of U, where U is the space of a bundle p : U → B defined by a family of plane algebraic curves of fixed genus. The base B of the bundle is the space of the family parameters, and the fiber J_b over b ∈ B is the Jacobi variety of the curve with parameters b. A second logarithmic derivative of the sigma-function along the fiber is an Abelian function on J_b. Thus one can generate a ring F of fiber-wise Abelian functions on U. The problem of finding derivations of the ring F along the base B is a reformulation of the classic problem of differentiation of Abelian functions over parameters. Its solution is relevant to a number of topical applications. This work presents a solution of this problem recently found by the authors. Our method of solution essentially employs results from singularity theory about vector fields tangent to the discriminant of a singularity y^n − x^s, gcd(n, s) = 1.
{"url":"http://eprints.ma.man.ac.uk/664/","timestamp":"2014-04-16T19:13:34Z","content_type":null,"content_length":"10867","record_id":"<urn:uuid:a4eb8973-68ac-42b8-b9db-7e0dc9d749e1>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00322-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts from May 2008 on Dr. Myron Evans

Subject: Lemaitre / Tolman / Bondi Metric
Date: Sat, 31 May 2008 12:18:25 EDT

This describes a spherically symmetric cloud of dust alleged to be expanding or collapsing under gravitation. It is an exact solution of the Einstein field equation found by Lemaitre in 1933, Tolman in 1934 and Bondi in 1947:

ds squared = c squared dt squared – A dr squared – R squared d cap omega squared

A = (R’) squared / (1 – 2E)

R = R(t, r), R’ = partial R / partial r, E = E(r)

We already know that the Friedmann Lemaitre Robertson Walker metric failed the dual identity test, so this one will fail also. The Maxima code will work out all the functional derivatives needed.
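The Maxima code the post refers to is not included here. Purely as an illustration of how this metric can be set up for machine checking, the SymPy sketch below builds the Lemaitre–Tolman–Bondi line element in coordinates (t, r, θ, φ), with dΩ² = dθ² + sin²θ dφ²; it stops at constructing the metric tensor, which is the input any tensor package (Maxima's or otherwise) would need.

```python
import sympy as sp

t, r, theta, phi, c = sp.symbols('t r theta phi c', positive=True)

R = sp.Function('R')(t, r)          # areal radius R(t, r)
E = sp.Function('E')(r)             # energy function E(r)

A = sp.diff(R, r)**2 / (1 - 2*E)    # A = (R')^2 / (1 - 2E)

# Metric with signature (+,-,-,-):  ds^2 = c^2 dt^2 - A dr^2 - R^2 dOmega^2
g = sp.diag(c**2, -A, -R**2, -R**2 * sp.sin(theta)**2)

sp.pprint(g)
# From here one would hand g to a tensor routine to compute the connection,
# the curvature, and the identities discussed in the post.
```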
{"url":"http://drmyronevans.wordpress.com/2008/05/","timestamp":"2014-04-20T08:14:54Z","content_type":null,"content_length":"40624","record_id":"<urn:uuid:bc6dc199-bfa7-4e76-ad13-5a1c23652b09>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
What does ¶ Mean!? (thread from August 7th 2008)

Post #1:
I am very stumped. I need to know what this symbol means. Here it is in an equation:
1 radian = 180 degrees / ¶
5 x 1 radian = 5 x 180 degrees / ¶
5 radians = 286.5 degrees
Does anyone know the value, meaning or name of this symbol? If someone could help me that would be great, thank you!!

Post #2:
Actually the symbol you're looking for is $\pi$. What you have is a symbol that marks the end of lines/paragraphs (such as in Microsoft Word). Here's an entire article for you: Pi

Post #3:
Are you sure that the symbol means pi? Because this is the exact formula from the lesson in Math 30 Applied that I am working on right now:
1 radian = 180 degrees / ¶
If it was pi, why wouldn't they just put pi... instead of confusing the

Post #4:
Yes, it is pi and its symbol is $\pi$. What you've learned is a conversion between two different ways to express angles: 1 radian is equal to $\frac{180}{\pi} \: \text{degrees}$, or equivalently, 1 degree is equal to $\frac{\pi}{180}$ radians. With this new notation, we can see that, for example, $30^{\circ} = 30^{\circ} \cdot \frac{\pi}{180^{\circ}} = \frac{\pi}{6}$. When writing in terms of radians, usually we leave it unitless (if you want, add the unit 'rad' to the end).

Post #5:
Ok... if you say so, I'll give that a try. Thank you very much.

Post #6:
That's just the beginning. Wait till you encounter other weird symbols that mathematicians use.
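For anyone who wants to check the arithmetic in the thread, a two-line Python confirmation of the conversion:

```python
import math

print(5 * 180 / math.pi)   # 286.4788975654116 -- i.e. about 286.5 degrees
print(math.degrees(5))     # the same conversion using the standard-library helper
```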
{"url":"http://mathhelpforum.com/math-topics/45524-what-does-mean.html","timestamp":"2014-04-17T21:53:52Z","content_type":null,"content_length":"41463","record_id":"<urn:uuid:8eb044dc-939c-4ec5-9b7f-456a43eb80fd>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
How to find the function if the derivative is given? Given f'(x) = 36x^5 + 3x^2, determine f(x). - Homework Help - eNotes.com

Answer 1:
The problem provides the derivative of the function, hence you need to recover the function using anti-derivatives, such that:
`int f'(x) dx = f(x) => int (36x^5+3x^2)dx = f(x)`
You need to use the linearity of the integral, hence you split it into two simpler integrals, such that:
`int (36x^5) dx + int (3x^2)dx = 36 int x^5 dx + 3 int x^2 dx`
`int (36x^5) dx + int (3x^2)dx = 36 x^6/6 + 3 x^3/3 + c`
Reducing duplicate factors yields:
`int (36x^5) dx + int (3x^2)dx = 6x^6 + x^3 + c`
Hence, evaluating the function f(x) using anti-derivatives yields `f(x) = x^3(6x^3 + 1) + c`.

Answer 2:
Basically, what you are trying to find is the antiderivative of the function. To find the antiderivative, you will have to solve using integral rules. The antiderivative is f(x) = 36x^6/6 + 3x^3/3, which simplifies to 6x^6 + x^3.

Answer 3:
By definition, f(x) can be determined by evaluating the indefinite integral of f'(x):
Int (36x^5+3x^2)dx
We'll apply the additive property of integrals:
Int (36x^5+3x^2)dx = Int (36x^5)dx + Int (3x^2)dx
We'll re-write the sum of integrals, taking out the constants:
Int (36x^5+3x^2)dx = 36 Int x^5 dx + 3 Int x^2 dx = 36*x^6/6 + 3*x^3/3
We'll simplify and we'll get:
Int (36x^5+3x^2)dx = 6x^6 + x^3 + C
The function f(x) is: f(x) = 6x^6 + x^3 + C
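A quick symbolic check of the answers above (SymPy omits the arbitrary constant C, which must be added by hand):

```python
import sympy as sp

x = sp.symbols('x')
f_prime = 36*x**5 + 3*x**2

f = sp.integrate(f_prime, x)      # antiderivative; SymPy leaves out the constant C
print(f)                          # 6*x**6 + x**3
print(sp.diff(f, x))              # 36*x**5 + 3*x**2 -- differentiating recovers f'(x)
```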
{"url":"http://www.enotes.com/homework-help/how-find-function-derivative-given-398701","timestamp":"2014-04-19T23:28:41Z","content_type":null,"content_length":"29784","record_id":"<urn:uuid:0cef5f9e-5905-4c2e-9327-f1510c23d51f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00109-ip-10-147-4-33.ec2.internal.warc.gz"}
[Solved] if condition doesn't work on formula generated cell

Weird behavior of Excel. I have the below information in my Excel sheet:
A1: String value of 10+20
A2: Number 5
In A3 I am reading the number before "+" in A1 using "=LEFT(A1, FIND("+",A1)-1)". I got the number 10.
I want to have a formula in A4 whereby I compare A3 with A2: if A3 is less than or equal to A2, return "True", else return "False". So I had the standard if condition "=IF(A3<=A2,"True","False")".
But the if condition does not work, since the number in A3 is a formula-generated number and not a keyed-in number. How to fix the problem??

Try this:
=LEFT(A1,FIND("+",A1)-1)+0
or
=LEFT(A1,FIND("+",A1)-1)*1
or
=IF(A3+0<=A2,"True","False")
or
=IF(A3*1<=A2,"True","False")
Since the LEFT function returns text, you have to force Excel to convert it to a number before you can perform a <= operation on it.
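The same pitfall is easy to reproduce outside Excel. The snippet below mirrors the spreadsheet logic in Python: the piece that LEFT() extracts is still text, and it has to be converted before a numeric comparison means anything. (This is only an analogy -- Excel's comparison rules quietly rank any text above any number, whereas Python refuses to compare the two at all.)

```python
a1 = "10+20"          # A1: text value
a2 = 5                # A2: number

a3_text = a1[:a1.find("+")]    # like =LEFT(A1,FIND("+",A1)-1) -> "10", still text
a3_number = float(a3_text)     # like appending +0 or *1 in the Excel formula

print(a3_number <= a2)         # False: 10 is not <= 5, compared numerically
# print(a3_text <= a2)         # TypeError in Python 3; in Excel the text "10"
#                              # would silently compare as greater than any number.
```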
{"url":"http://www.computing.net/answers/office/if-condition-doesnt-work-on-formula-generated-cell/17721.html","timestamp":"2014-04-16T16:11:44Z","content_type":null,"content_length":"34649","record_id":"<urn:uuid:8f88ddfe-0dc7-4209-b8ab-3395b6d868a5>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Based on part of the GeotechniCAL reference package by Prof. John Atkinson, City University, London

Stiffness

Stiffness is the relationship between changes of stress and changes of strain. The stiffness E' is the gradient of the stress-strain curve. Stiffnesses may be described by a tangent modulus E'[tan] = ds' / de or by a secant modulus E'[sec] = Ds' / De.
Note: If the material is linearly elastic the stress-strain curve is a straight line and E'[tan] = E'[sec].

Change of size: bulk modulus
K' = ds'[mean] / de[v], where s'[mean] = (s'[x] + s'[y] + s'[z]) / 3. In soils volumetric strains are due to changes of effective stress.

Change of shape: shear modulus
G = dt / dg. Since water has no shear strength, the value of the shear modulus, G, remains the same, independent of whether the loading process is drained or undrained.

Uniaxial loading: Young's modulus and Poisson's ratio
In uniaxial loading the radial stress is held constant, i.e. ds'[r] = 0.
Young's modulus E' = ds'[a] / de[a]
Poisson's ratio n' = - de[r] / de[a]
If the material is incompressible, e[v] = 0 and Poisson's ratio n = 0.5. Uniaxial compression is the only test in which it is possible to measure Poisson's ratio with any degree of simplicity.

Typical values of E
These are a function of the stress level and the loading history; however, a range is given below.

Material                                Typical E
Unweathered overconsolidated clays      20 ~ 50 MPa
Boulder clay                            10 ~ 20 MPa
Keuper Marl (unweathered)               >150 MPa
Keuper Marl (moderately weathered)      30 ~ 150 MPa
Weathered overconsolidated clays        3 ~ 10 MPa
Organic alluvial clays and peats        0.1 ~ 0.6 MPa
Normally consolidated clays             0.2 ~ 4 MPa
Steel                                   205 GPa
Concrete                                30 GPa

Relationships between stiffness moduli
In bodies of elastic material the three stiffness moduli (E', K' and G') are related to each other and to Poisson's ratio (n'). It is assumed that the material is elastic and isotropic (i.e. linear stiffness is equal in all directions). The following relationships can be demonstrated (for proofs refer to a text on the strength of materials).
G' = E' / 2(1 + n')
K' = E' / 3(1 - 2n')

Material behaviour

Labels on the stress-strain curve figure:
OA: linear and recoverable
ABC: non-linear and irrecoverable
BCD: recoverable with hysteresis
DE: continuous shearing

The three basic theories which are relevant to soil behaviour are: elasticity, plasticity and viscous flow (often referred to as creep). In addition, the theories of elasticity and plasticity are combined into elasto-plasticity. If strains are zero the behaviour is rigid.

Elasticity
In linear-elastic behaviour the stress-strain curve is a straight line and strains are fully recovered on unloading, i.e. there is no hysteresis. The elastic parameters are the gradients of the appropriate stress-strain curves and are constant.
Shear modulus G = dt / dg
Bulk modulus K' = ds'[mean] / de[v]
Young's modulus E' = ds'[a] / de[a] (where ds'[r] = 0)
Poisson's ratio n' = - de[r] / de[a] (where ds'[r] = 0)

Perfect plasticity
Labels on the figure: OA - rigid, AB - plastic.
During perfectly plastic straining, plastic strains continue indefinitely at constant stress. The ratio of plastic strains is related to the yield stress, which also represents the failure stress.
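Before moving on to yield and plasticity, here is a small sketch of the stiffness-moduli relationships quoted above, evaluated for a hypothetical soil; the numerical values are arbitrary and chosen only to show the formulas in use.

```python
def shear_modulus(E, nu):
    """G' = E' / (2 * (1 + nu')) -- isotropic linear elasticity."""
    return E / (2.0 * (1.0 + nu))

def bulk_modulus(E, nu):
    """K' = E' / (3 * (1 - 2 * nu')) -- isotropic linear elasticity."""
    return E / (3.0 * (1.0 - 2.0 * nu))

E_soil = 40.0   # MPa, e.g. a stiff overconsolidated clay (illustrative value)
nu     = 0.25   # effective Poisson's ratio (illustrative value)

print(shear_modulus(E_soil, nu))   # 16.0 MPa
print(bulk_modulus(E_soil, nu))    # 26.7 MPa
# As nu' approaches 0.5 -- the incompressible case mentioned above -- the
# denominator of K' goes to zero and the bulk modulus becomes unbounded.
```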
Yield
Yield stress is the stress at the end of elastic behaviour (in a perfectly plastic material this is the same as the failure stress). In a 2-dimensional stress system the combination of yield stresses forms a yield curve, inside which failure cannot occur.

Normality
Normality is also known as associated flow.

Elasto-plasticity
There are simultaneous elastic and plastic strains, and the plastic strains cause the yield stress to change. There are two cases which are typically found:

Strain hardening
The plastic strain de[p] causes an increase in the yield stress. The hardening law is ds'[yield] / de[p]. Soils with loosely packed grains are strain hardening because the disturbance during shearing causes the grains to move closer together.

Strain softening
The plastic strain de[p] causes a decrease in the yield stress. Soils with densely packed grains are strain softening because disturbance during shearing causes the grains to move apart, causing dilation.

Elastic-perfectly plasticity
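The body of this last section appears to have been cut off in this copy. As a stand-in illustration of the idea it introduces -- elastic behaviour up to a yield stress, then plastic straining at constant stress -- here is a minimal numerical sketch; the stiffness and yield stress are arbitrary example values, not figures from the reference package.

```python
def elastic_perfectly_plastic(strain, E=10.0e3, yield_stress=100.0):
    """Stress (kPa) at a given strain: linear-elastic up to yield, then constant."""
    return min(E * strain, yield_stress)

for strain in (0.000, 0.005, 0.010, 0.020, 0.050):
    stress = elastic_perfectly_plastic(strain)
    print(f"strain = {strain:.3f}   stress = {stress:6.1f} kPa")
# Up to a strain of 0.01 the response follows the elastic line stress = E*strain;
# beyond that the stress stays at the yield value while strains continue -- the
# same picture as 'OA - rigid, AB - plastic' above, except that the first branch
# is elastic rather than rigid.
```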
{"url":"http://environment.uwe.ac.uk/geocal/SoilMech/basic/stiffness.htm","timestamp":"2014-04-17T17:08:48Z","content_type":null,"content_length":"15683","record_id":"<urn:uuid:1b1303bd-3a49-4f90-9fa0-a969c41f3bdd>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00452-ip-10-147-4-33.ec2.internal.warc.gz"}
algebraically closed field K is called algebraically closed if every of positive degree has a zero in . (Equivalently, such a polynomial must split into a product of linear factors.) For example the fundamental theorem of algebra says that C is algebraically closed. (Any proof of this eventually boils down to the completeness of C. The simplest proof I know uses Liouville's theorem. If f(z) is a polynomial function on the complex numbers which vanishes nowhere then its reciprocal must be bounded and holomorphic on the plane, hence constant by Liouville.) Despite its name this result is not especially important for algebra. However, it is important to know that given a field K we can construct an algebraic closure of K. So let me explain what this is. Definition For a field K an algebraic closure of K is a algebraic field extension L of K that is an algebraically closed field. That leaves two natural questions: can we construct algebraic closures and are they unique? Here's the answer. Theorem Given a field K there exists an algebraic closure L of K. If M is also an algebraic closure of K then there is an isomorphism of rings f:L-->M such that f(a)=a for all a in K. I'm going to show the construction of an algebraic closure. First we need Lemma If K<=L is a field extension then the set F of elements of L which are algebraic over K form a field extension of K. Proof. Choose a,b in F. By the field extension writeup [K(a):K] is finite. Since b is algebraic over K it is a fortiori algebraic over K(a). Thus again we have that [K(a,b):K(b)] is finite. By the lemma about dimensions of towers of field extensions in the proof of the uniqueness of splitting fields we see that [K(a,b):K] is also finite. It follows that every element of this extension is algebraic over K. For take any such element c. Then 1,c,c^2,... cannot be linearly independent over K. This gives a nonzero polynomial in K[x] with c as a root. It follows that a-b and ab and (for a not zero) a^-1 are all algebraic over K. Hence the result. Proof that algebraic closures exist. First we show that K is a subfield of an algebraically closed field. The argument I give is due to Emil Artin. For each polynomial f in K[x] of positive degree we associate a variable X[f]. Then form the polynomial ring in all these variables R. (Note there will be infinitely many variables here.) Now consider the ideal of R generated by all f(X[f]), where f runs through the polynomials in K[x] of positive degree. I claim this ideal is proper. If not we have an equation 1 = f(X[f[1]])g[1] +...+ f(X[f[n]])g[n] Now we can form a finite dimensional field extension of in which each has a zero (see splitting field ), say . There are finitely many variables involved in the above. If is not one of the then we just put At this point we can for substitute the variables in the equation with the various a[h]. When we do that obviously the right hand side vanishes and we get 1=0. This contradiction shows that the ideal generated by the f(X[f]) is proper. Hence there exists a maximal ideal I of R that contains it. Thus M[1]=R/I is a field and we have a natural injective ring homomorphism K-->M[1]. By construction, in the field extension M[1] of K we have that each polynomial f in K[x] of positive degree has a zero. Iterating we can form a chain of fields M[1] < M[2] < ... so that each polynomial in of positive degree has a zero in . Obviously the union of all these fields is itself a field extension of . Further, by construction it is algebraically closed. 
Finally, we apply the previous lemma to see that the collection F of elements in M that are algebraic over K is an field extension of K. It is also algebraically closed. For suppose that f in F[x] is a polynomial of positive degree. Then f has a root a in M, since M is algebraically closed. But since this means a will be algebraic over F and hence over K we see that a is in F, as was needed to show F algebraically closed. See also proof that the algebraic closure of a field is unique
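The lemma used above -- that sums, products and inverses of elements algebraic over K are again algebraic -- can be watched in action for K = Q with SymPy, which computes minimal polynomials of algebraic numbers. This is evidence rather than proof, of course; the argument via finite-dimensional extensions is what guarantees it in general.

```python
import sympy as sp

x = sp.symbols('x')

a = sp.sqrt(2)          # algebraic over Q: root of x**2 - 2
b = sp.sqrt(3)          # algebraic over Q: root of x**2 - 3

print(sp.minimal_polynomial(a + b, x))   # x**4 - 10*x**2 + 1
print(sp.minimal_polynomial(a * b, x))   # x**2 - 6
print(sp.minimal_polynomial(1 / a, x))   # 2*x**2 - 1
```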
{"url":"http://everything2.com/title/algebraically+closed?showwidget=showCs777233","timestamp":"2014-04-19T20:46:23Z","content_type":null,"content_length":"26620","record_id":"<urn:uuid:ae050987-cec8-473c-92bc-08958d23ba8c>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
The world's worst macro preprocessor Last week I added another plugin to my Blosxom installation. As I wrote before, the sole benefit of Blosxom is that it's incredibly simple and lightweight. So when I write plugins for it, I try to keep them incredibly simple and lightweight, lest I spoil the single major benefit of Blosxom. Sometimes I'm more successful, sometimes less so. This time I think I did a good job. The goal last time was a macro processor. I write a lot of math articles. I get tired of writing <sup>2</sup> every time I want a superscript 2. Even if I bind a function key to that sequence of characters, it's hard to read. But now, with my new Blosxom macro processor, I just insert a line into my article that says: #define ^2 <sup>2</sup> and for the rest of the article, ^2 is expanded to <sup>2</sup>. This has turned out really well, and I'm using it for all sorts of stuff. I use it for math notations, such as for making -> an abbreviation for &rarr; (→), and for making ~ an abbreviation for &not; But I've also used it to #define Godel G&ouml;del. I've used it to #define KK <b>K</b> and #define SS <b>S</b>, which makes an article I'm writing about combinatory logic readable, where it wasn't readable before. In my recent article about job hunting, I used it to #define CV r&eacute;sum&eacute;, which saved me from having to interrupt my train of thought several times in the article. There are some important points about the design that I think I got right on the first try. Whenever you write a macro system, you have to ask about escape sequences: what do you do if you don't want a macro expanded? For example, in the combinatory logic article I defined a macro SS. This meant that if I had written MOUSSE in the article somewhere, it would have turned into MOUSE. How should I prevent that kind of error? Answer: I don't. I'm unlikely to do that. But if I do, I'll pick it up during the article proofreading phase. If I can't avoid writing MOUSSE, I have two choices: I can change the name of the SS macro to something easier to avoid—like S*, say, or I can define a second macro: #define !MOUSSE MOUSSE. But so far, it hasn't come up. One alternative solution is to say that macros are expanded only in certain contexts. For example, SS might only be expanded when it is a complete word, not when it is in the middle of a word, as MOUSSE. I resisted this solution. It is much simpler to remember that every macro is expanded everywhere. And it it is much easier to fix the problem of a macro being expanded when I don't want it than it is to fix the problem of a macro not being expanded when I do want it. So every macro is expanded no matter where it appears. Related to the unintentional-expansion issue is that each article has its own private macro set. I don't have to worry that by defining a macro named -> in one article that I might be sabotaging my opportunity to actually write -> in some unknown future article. Each set of macros can be totally ad hoc. I don't have to worry about global tradeoffs. Do I #define --- &mdash;, knowing that that will foreclose my opportunity to use --- in any other way? I can make the decision based on simple, local information. It would have been tempting to over-engineer the system and add all sorts of complex escape facilities. I think I made the right choice here by not doing any of that. Another escaping issue: What if I want to write something that looks like a definition but isn't? 
Here I avoided the problem by choosing a definition syntax that I was unlikely to write in any other context: #define in the leftmost column indicates a definition. In this article, I had to write some similar text. It was no trouble to indent it a couple of spaces, disabling the special meaning. But HTML is already full of escape mechanisms, and it would have been no trouble to write &#35;define instead of #define if for some reason I had really needed it to appear in the leftmost column. (Unlikely anyway, since HTML has no column semantics.) Another right choice I think I made was not to parametrize the macros. An article on algebra might well have: #define ^2 <sup>2</sup> #define ^3 <sup>3</sup> and it might be oh-so-tempting to try to eliminate the duplication à la C: #define ^(\w+) <sup>$1</sup> I did not do this. It would have complicated the processing substantially. It would also have complicated the use of the package substantially: I would have to worry a lot more than I do about invoking macros unintentionally. And it is not needed. Not so far, anyway. Because macro definitions only last for the duration of the article, there is no pressure to make a complete or consistent set of definitions. If an article happens to use the notations ^2, ^i, and ^N, I can define macros for those and only those notations. Also tempting is to extend the macro system to support something like this: #define BF(.*) <b>$1</b> I have so far resisted this. My feeling is that if I want to do anything like this, I should take it as a sign that I should be writing the articles in some markup system other than HTML. Choice of that markup system should be made carefully, and not organically as an ad-hoc overburdening of the macro system. I did run into one trouble with the macro system. Originally, it was invoked before some of my other plugins and after others. The earlier plugins automatically inserted certain text into the article that sometimes accidentally triggered my macros. I have not had any trouble with this since I changed the plugin order to invoke the macro processor before any of the other plugins. The macro-processing code is about 19 lines long, of which three are diagnostic. It is the world's worst macro system. It has exactly one feature. It is, I think the simplest thing that could possibly work, and so a good companion to Blosxom. For this application, the world's worst macro system is the world's best. [ Addendum 20071004: There's now a one-year retrospective analysis. ] [Other articles in category /prog] permanent link Job hunting stories A guy named Jonathan Rentzsch bookmarked my article about creeping featurism and the ratchet effect, saying "I'd like to hire this guy just so I could fire him." Since I was looking for a new job last month, I sent him my résumé, inquiring about his company's severance package. He didn't reply. Also in the said-it-but-didn't-mean-it department, Anil Dash gave a plenary talk at OSCON in which he mentioned that sixapart was hiring, and "looking for Perl gods". I looked for the Perl god positions on the "jobs" part of their web site, and saw nothing relevant, but I sent my résumé anyway. They didn't reply. A few years ago I was contacted by a headhunter who was offering me a one-year contract in Milford, Iowa. I said I did not want to work in Milford, Iowa. He tried to sell me on the job anyway. I said I did not want to work in Milford, Iowa. He would not take "no". He said, "Look, I understand you are reluctant to consider this. 
But I would like you to take a few days and think it over, and tell me what it would take to get you to agree." Okay. I talked it over with my wife, and we decided that for $750,000 we would be willing for me to spend the year working in Milford, Iowa. $500,000, we decided, would not be sufficient, but $750,000 would. I forget by now how we arrived at this figure, but we took some care in coming up with it. The headhunter called back. "Have you thought it over?" Yes, I had, I said. I had decided that $750,000 would be required to get me to Milford, Iowa. He was really angry that I had wasted his time. I really hate commuting. I had a job once in which I had to commute from Philadelphia (where I live) to New York. I told them when I took the job that if I liked it, I'd move to New York, and if not, I'd quit and move to Taiwan. Commuting to New York wrecked me. For years afterward I couldn't get onto a train without falling asleep immediately. After I quit, I didn't move to Taiwan; I stayed in Philadelphia and became a consultant. Every day I would wake up, put on my trousers, shuffle downstairs, and sit down at the computer. "Ahhh," I would say, "my morning commute is complete!" The novelty of this did not wear off for years. One day I was on a five-week business trip to Asia. (I hate commuting, but I like travelling.) I got email from a headhunter who was offering me a long-term contract in Elkton, Maryland. I had told this headhunter's company repeatedly that I was only interested in working in the Philadelphia area. (Elkton, Maryland is about as close to Philadelphia as you can get and still be in Maryland, which means that the only thing between Elkton and Philadelphia is the state of Delaware.) I wrote back, from Tokyo, and said that his company should stop contacting me, because I had told them over and over that I would only work in Philadelphia, and they kept sending me offers of employment in places like Elkton. I said it was a shame that we should waste each other's time like this. He suggested that the problem could be solved if I could just give him a "courtesy call". From Tokyo. Instead, I solved the problem by putting his company in my spam filter file. When I was about nineteen, a friend of mine asked me to comment on his résumé. I told him it was too long (it was three pages long) and that no prospective employer would care that he had held a job washing dishes for his fraternity. He didn't like my advice. I still think it was good advice. No nineteen-year-old needs a three-page résumé. I wonder if he eventually figured this out. You'd think so, but now that I have to read the résumés of people who are applying to me for jobs, it seems that hardly any of the applicants have figured out that you should leave out the job where you washed dishes for your fraternity. I once explained to someone that I change my résumé and send a different one with each job application. He was really shocked, and said he would consider that dishonest. Huh. I've talked to a couple of people lately who tell me that it's very rare to get a cover letter from an applicant that actually helps their chances of getting the job. At best, it's neutral. At worst, you get to see all their spelling and grammar errors. I think most people don't know how to write a letter, or that they write one letter and send it with every application. Maybe they think it would be dishonest to send a different letter with every application. There's a story about how Robert A.
Heinlein became a writer: he needed money, and saw that some magazine was offering a $50.00 prize for the best story by a new author. He wrote a story, but concluded that that magazine would be swamped with submissions, so he sent it to a different magazine, which bought the story for $70.00. I became a conference speaker and teacher of Perl classes in a similar way. I wanted to go to the second Perl conference, but I couldn't afford it. Someone mentioned to me that there was a $1,000 prize for the best user paper. I thought I could write a good paper, but I also thought that the best paper often doesn't win the prize. But I also found out that conference tutorial speakers were paid $1,500 and given free conference admission and airfare and hotel fees. When you're submitting a proposal for a tutorial, it's perfectly honorable to go talk to the program committee behind the scenes and lobby them to accept your proposal instead of someone else's. If you do that with the contest judges, it's cheating. So I ignored the user paper contest and submitted a proposal for a tutorial, which was accepted. I once applied for a sysadmin job at the College of Staten Island, which is the school they send you to if you aren't qualified for any of the City University schools but they have to let you go to college because the City University system guarantees to admit anyone who can pay the tuition. I got back a peevish letter telling me that I wasn't qualified and to stop wasting the search committee's time. It was signed, in pen, "The Search Committee". I accepted a job with the University of Pennsylvania instead. [Other articles in category ] permanent link More about automorphisms In a recent article, I asserted that "there aren't even any reasonable [automorphisms of R] that preserve addition." This is patently untrue. My proof started by referring to a previous result that any such automorphism f must have f(1) = 1. But actually, I had only proved this for automorphisms that must preserve multiplication. For automorphisms that preserve addition only, f(1) need not be 1; it can be anything. In fact, x → kx is an automorphism of R for all k except zero. It is not hard to show, following the technique in the earlier article, that every continuous automorphism has this form. In hopes of salvaging something from my embarrassing error, I thought I'd spend a little time talking about the other automorphisms of R, the ones that aren't "reasonable". They are unreasonable in at least two ways: they are everywhere discontinuous, and they cannot be exhibited explicitly. To manufacture the function, we first need a mathematical horror called a Hamel basis. A Hamel basis is a set of real numbers H[α] such that every real number r has a unique representation in the form $$r = \sum_{i=1}^n q_i H_{\alpha_i}$$ where all the q[i] are rational. (It is called a Hamel basis because it makes the real numbers into a vector space over the rationals. If this explanation makes no sense to you, please ignore it.) The sum here is finite, so only a finite number of the uncountably many H[α] are involved for any particular r; this is what characterizes it as a Hamel basis. Leaving aside the proof that the Hamel basis exists at all, if we suppose we have one, we can easily construct an automorphism of R. Just pick some nonzero rational numbers m[α], one for each H[α]. Then if, as above, we have $$r = \sum_{i=1}^n q_i H_{\alpha_i},$$ the automorphism is: $$f(r) = \sum_{i=1}^n q_i H_{\alpha_i}m_{\alpha_i}$$ At this point I should probably prove that this is an automorphism.
But it seems unwise, because I think that in the unlikely case that you have understood everything so far, you will find the statement that this is an automorphism both clear and obvious, and will be able to imagine the proof yourself, and for me to spell it out will only confuse the issue. And I think that if you have not understood everything so far, the proof will not help. So I should probably just say "clearly, this is an automorphism" and move on. But against my better judgement, I'll give it a try. Let r and s be real numbers. We want to show that f(s) + f(r) = f(s + r). Represent r and s using the Hamel basis. For each element H of the Hamel basis, let's say that c[H](r) is the (rational) coefficient of H in the representation of r. That is, it's the q[i] in the definition above. By a simple argument involving commutativity and associativity of addition, c[H](r+s) = c[H](r) + c[H](s) for all r, s, and H. Also, c[H](f(r)) = m·c[H](r), for all r and H, where m is the multiplier we chose for H back when we were making up the automorphism, because that's how we defined f. Then c[H](f(r+s)) = m·c[H](r+s) = m·(c[H](r) + c[H](s)) = m·c[H](r) + m·c[H](s) = c[H](f(r)) + c[H](f(s)) = c[H](f(r) + f(s)), for all H. This means that f(r+s) and f(r) + f(s) have the same Hamel basis representation. They are therefore the same number. This is what we wanted to show. If anyone actually found that in the least enlightening, I would be really interested to hear about it. One property of a Hamel basis is that at most one of its uncountably many elements is rational, and we can choose the basis so that exactly one is. Say it's H[0]. Then every rational number q is represented as q = (q/H[0])·H[0]. Then f(q) = (q/H[0])·H[0]·m[0] = m[0]q for all rational numbers q. But in general, an irrational number x will not have f(x) = m[0]x, so the automorphism is discontinuous everywhere, unless all the m[α] are equal, in which case it's just x → mx again. The problem with this construction is that it is completely abstract. Nobody can exhibit an example of a Hamel basis, being, as it is, an uncountably infinite set of mainly irrational numbers. So the discontinuous automorphisms constructed here are among the most utterly useless of all mathematical examples. I think that is the full story about additive automorphisms of R. I hope I got everything right this time. I should add, by the way, that there seems to be some disagreement about what is called a Hamel basis. Some people say it is what I said: a basis for the reals over the rationals, with the properties I outlined above. However, some people, when you say this, will sniff, adjust their pocket protectors, and correct you, saying that a Hamel basis is any basis for any vector space, as long as it has the analogous property that each vector is representable as a combination of a finite subset of the basis elements. Some say one, some the other. I have taken the definition that was convenient for the exposition of this article. [ Thanks to James Wetterau for pointing out the error in the earlier article. ] [ Previous articles in this series: Part 1 Part 2 Part 3 Part 4 ] [Other articles in category /math] permanent link Russell and Whitehead or Whitehead and Russell? In an earlier article, I asked: Everyone always says "Russell and Whitehead". Google results for "Russell and Whitehead" outnumber those for "Whitehead and Russell" by two to one, for example. Why? The cover and the title page [of Principia Mathematica] say "Alfred North Whitehead and Bertrand Russell, F.R.S.".
How and when did Whitehead lose out on top billing? I was going to write that I thought the answer was that when Whitehead died, he left instructions to his family that they destroy his papers; this they did. So Whitehead's work was condemned to a degree of self-imposed obscurity that Russell's was not. I was planning to end this article there. But now, on further reflection, I think that this theory is oversubtle. Russell was a well-known political and social figure, a candidate for political office, a prolific writer, a celebrity, a famous pacifist. Whitehead was none of these things; he was a professor of philosophy, about as famous as other professors of philosophy. The obvious answer to my question above would be "Whitehead lost out on top billing on 10 December, 1950, when Russell was awarded the Nobel Prize." Oh, yeah. That. I'm reminded of the advertising for the movie Space Jam. The posters announced that it starred Bugs Bunny and Michael Jordan, in that order. I reflected for a while on the meaning of this. Was Michael Jordan incensed at being given second billing to a fictitious rabbit? (Probably not, I think; I imagine that Michael Jordan is entirely unthreatened by the appurtenances of any else's fame, and least of all by the fame of a fictitious rabbit.) Why does Bugs Bunny get top billing over Michael Jordan? I eventually decided that while Michael Jordan is a hero, Bugs Bunny is a god, and gods outrank heroes. [Other articles in category /math] permanent link Automorphisms of the complex numbers In an earlier article, I wrote a proof that the only automorphisms of the complex numbers are the identity function and the function a + bi → a - bi. Robert C. Helling points out that there is a much simpler proof that this is the case. Suppose that f is an automorphism, and that x^2 = y. Then f(x^2) = (f(x))^2 = f(y), so that if x is a square root of y, then f(x) is a square root of f(y). As I pointed out, f(1) = 1. Since -1 is a square root of 1, f(-1) must be a square root of 1, and so it must be -1. (It can't be 1, since automorphisms may not map two different arguments to the same value.) Since i is a square root of -1, f(i) must also be a square root of -1. So f(i) must be either ±i, and the theorem is proved. This is a nice example of why I am not a mathematician. When I want to find the automorphisms of C, my first idea is to explicitly write down the general automorphism and then start bashing away on the algebra. This sort of mathematical pig-slaughtering gets the pig cut up all right, but mathematicians are not interested in slaughtering pigs. By which I mean that the approach gets the result I want, usually, but not new or mathematically interesting results. In computer programming, the pig-slaughtering approach often works really well. Most programs are oversubtle, and can be easily improved by doing the necessary tasks in the simplest and most straightforward possible way, rather than in whatever baroque way the original programmer dreamed up. [ Previous articles in this series: Part 1 Part 2 Part 3 Followup article: Part 5 ] [Other articles in category /math] permanent link Imaginary units, again In my earlier discussion of i and -i I said " The point about the square roots of -1 is that there is no corresponding criterion for distinguishing the two roots. This is a theorem." The proof of the theorem is not too hard. What we're looking for is what's called an automorphism of the complex numbers. 
This is a function, f, which "relabels" the complex numbers, so that arithmetic on the new labels is the same as the arithmetic on the old labels. For example, if 3×4 = 12, then f(3) × f(4) should be f(12). Let's look at a simpler example, and consider just the integers, and just addition. The set of even integers, under addition, behaves just like the set of all integers: it has a zero; there's a smallest positive number (2, whereas it's usually 1) and every number is a multiple of this smallest positive number, and so on. The function f in this case is simply f(n) = 2n, and it does indeed have the property that if a + b = c, then f(a) + f(b) = f(c) for all integers a, b, and c. Another automorphism on the set of integers has g(n) = -n. This just exchanges negative and positive. As far as addition is concerned, these are interchangeable. And again, for all a, b, and c, g(a) + g(b) = g(c). What we don't get with either of these examples is multiplication. 1 × 1 = 1, but f(1) × f(1) = 2 × 2 = 4 ≠ f(1) = 2. And similarly g(1) × g(1) = -1 × -1 = 1 ≠ g(1) = -1. In fact, there are no interesting automorphisms on the integers that preserve both addition and multiplication. To see this, consider an automorphism f. Since f is an automorphism that preserves multiplication, f(n) = f(1 × n) = f(1) × f(n) for all integers n. The only way this can happen is if f(1) = 1 or if f(n) = 0 for all n. The latter is clearly uninteresting, and anyway, I neglected to mention that the definition of automorphism rules out functions that throw away information, as this one does. Automorphisms must be reversible. So that leaves only the first possibility, which is that f(1) = 1. But now consider some positive integer n. f(n) = f(1 + 1 + ... + 1) = f(1) + f(1) + ... + f(1) = 1 + 1 + ... + 1 = n. And similarly for 0 and negative integers. So f is the identity function. One can go a little further: there are no interesting automorphisms of the real numbers that preserve both addition and multiplication. In fact, there aren't even any reasonable ones that preserve addition. The proof is similar. First, one shows that f(1) = 1, as before. Then this extends to a proof that f(n) = n for all integers n, as before. Then suppose that a and b are integers. b·f(a/b) = f(b)f(a/b) = f(b·a/b) = f(a) = a, so f(a/b) = a/b for all rational numbers a/b. Then if you assume that f is continuous, you can fill in f(x) = x for the irrational numbers also. (Actually this is enough to show that the only continuous addition-preserving automorphism of the reals is the identity function. There are discontinuous addition-preserving functions, but they are very weird. I shouldn't need to drag in the continuity issue to show that the only addition-and-multiplication-preserving automorphism is the identity, but it's been a long day and I'm really fried.) [ Addendum 20060913: This previous paragraph is entirely wrong; any function x → kx is an addition-preserving automorphism, except of course when k=0. For more complete details, see this later article. ] But there is an interesting automorphism of the complex numbers; it has f(a + bi) = a - bi for all real a and b. (Note that it leaves the real numbers fixed, as we just showed that it must.) That this function f is an automorphism is precisely the content of the statement that i and -i are numerically indistinguishable. The proof that f is an automorphism is very simple. 
We need to show that f(a + bi) + f(c + di) = f((a + bi) + (c + di)) for all complex numbers a+bi and c+di, and similarly that f(a + bi) × f(c + di) = f((a + bi) × (c + di)). This is really easy; you can grind out the algebra in about two steps. What's more interesting is that this is the only nontrivial automorphism of the complex numbers. The proof of this is also straightforward, but a little more involved. The purpose of this article is to present the proof. Let's suppose that f is an automorphism of the complex numbers that preserves both addition and multiplication. Let's say that f(i) = p + qi. Then f(a + bi) = f(a) + f(b)f(i) = a + bf(i) (because f must leave the real numbers fixed) = a + b(p + qi) = (a + bp) + bqi. Now we want f(a + bi) + f(c + di) = f((a + bi) + (c + di)) for all real numbers a, b, c, and d. That is, we want (a + bp + bqi) + (c + dp + dqi) = (a + c) + (b + d)(p + qi). It is, so that part is just fine. We also want f(a + bi) × f(c + di) = f((a + bi) × (c + di)) for all real numbers a, b, c, and d. That means we need:

  (a + b(p + qi)) × (c + d(p + qi)) = f((ac - bd) + (ad + bc)i)
  (a + bp + bqi) × (c + dp + dqi) = ac - bd + (ad + bc)(p + qi)
  ac + adp + adqi + bcp + bdp^2 + 2bdpqi + bcqi - bdq^2 = ac - bd + adp + bcp + adqi + bcqi
  bdp^2 + 2bdpqi - bdq^2 = -bd
  p^2 + 2pqi - q^2 = -1

Equating the real and imaginary parts gives us two equations: 1. p^2 - q^2 = -1 2. 2pq = 0 Equation 2 implies that either p or q is 0. If they're both zero, then f(a + bi) = a, which is not reversible and so not an automorphism. Trying q=0 renders equation 1 insoluble because there is no real number p with p^2 = -1. But p=0 gives two solutions. One has p=0 and q=1, so f(a+bi) = a+bi, which is the identity function, and not interesting. The other has p=0 and q=-1, so f(a+bi) = a-bi, which is the one we already knew about. But we now know that there are no others, which is what I wanted to show. [ Previous articles in this series: Part 1 Part 2 Followup articles: Part 4 Part 5 ] [Other articles in category /math] permanent link Design patterns of 1972 "Patterns" that are used recurringly in one language may be invisible or trivial in a different language. Extended Example: "object-oriented class" C programmers have a pattern that might be called "Object-oriented class". In this pattern, an object is an instance of a C struct.

  struct st_employee_object *emp;

Or, given a suitable typedef:

  EMPLOYEE emp;

Some of the struct members are function pointers. If "emp" is an object, then one calls a method on the object by looking up the appropriate function pointer and calling the pointed-to function:

  emp->method(emp, args...);

Each struct definition defines a class; objects in the same class have the same member data and support the same methods. If the structure definition is defined by a header file, the layout of the structure can change; methods and fields can be added, and none of the code that uses the objects needs to know. There are a bunch of variations on this. For example, you can get opaque implementation by defining two header files for each class. One defines the implementation:

  struct st_employee_object {
    unsigned salary;
    struct st_manager_object *boss;
    METHOD fire, transfer, competence;
  };

The other defines only the interface:

  struct st_employee_object {
    char __SECRET_MEMBER_DATA_DO_NOT_TOUCH[4];
    struct st_manager_object *boss;
    METHOD fire, transfer, competence;
  };

And then files include one or the other as appropriate. Here "boss" is public data but "salary" is private.
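To make the dispatch half of this concrete, here is a minimal, self-contained sketch. It is an editorial illustration, not code from X, Athena, or the kernel; the constructor name employee_new, the two methods, and the trimmed two-member struct are invented for the example.

  /* Editorial sketch of the "object-oriented class" pattern in plain C. */
  #include <stdio.h>
  #include <stdlib.h>

  struct st_employee_object;                         /* forward declaration */
  typedef void (*METHOD)(struct st_employee_object *self);

  struct st_employee_object {
      unsigned salary;                               /* member data                   */
      METHOD fire, transfer;                         /* "methods": function pointers  */
  };

  static void employee_fire(struct st_employee_object *self) {
      printf("firing an employee whose salary was %u\n", self->salary);
      self->salary = 0;
  }

  static void employee_transfer(struct st_employee_object *self) {
      (void)self;
      printf("transferring an employee\n");
  }

  /* The "constructor" allocates the struct and fills in the function pointers. */
  struct st_employee_object *employee_new(unsigned salary) {
      struct st_employee_object *emp = malloc(sizeof *emp);
      if (emp == NULL) abort();
      emp->salary   = salary;
      emp->fire     = employee_fire;
      emp->transfer = employee_transfer;
      return emp;
  }

  int main(void) {
      struct st_employee_object *emp = employee_new(50000);
      emp->fire(emp);     /* method call: look up the pointer, pass the object */
      free(emp);
      return 0;
  }

The call emp->fire(emp) is the plain-C spelling of what C++ would later write as emp->fire(); the object that C++ passes implicitly is passed explicitly here.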
You get abstract classes by defining a constructor function that sets all the methods to NULL or to:

  void _abstract() { abort(); }

If you want inheritance, you let one of the structs be a prefix of another:

  struct st_manager_object;   /* forward declaration */

  #define EMPLOYEE_FIELDS \
    unsigned salary; \
    struct st_manager_object *boss; \
    METHOD fire, transfer, competence;

  struct st_employee_object {
    EMPLOYEE_FIELDS
  };

  struct st_manager_object {
    EMPLOYEE_FIELDS
    unsigned num_subordinates;
    struct st_employee_object **subordinate;
    METHOD delegate_task, send_to_conference;
  };

And if obj is a manager object, you can still treat it like an employee and call employee methods on it. This may seem weird or contrived, but the technique is widely used. The C standard contains guarantees that the common fields of struct st_manager_object and struct st_employee_object will be laid out identically in memory, specifically so that this object-oriented class technique can work. The code of the X window system has this structure. The code of the Athena widget toolkit has this structure. The code of the Linux kernel filesystem has this structure. Rob Pike, one of the primary architects of the Plan 9 operating system (the Bell Labs successor to Unix) and co-author (with Brian Kernighan) of The Unix Programming Environment, recommends this technique in his article "Notes on Programming in C". This is a pattern There's only one way in which this technique doesn't qualify as a pattern according to the definition of Gamma, Helm, Johnson, and Vlissides. They say: A design pattern systematically names, motivates, and explains a general design that addresses a recurring design problem in object-oriented systems. It describes the problem, the solution, when to apply the solution, and its consequences. It also gives implementation hints and examples. The solution is a general arrangement of objects and classes that solve the problem. The solution is customized and implemented to solve the problem in a particular context. Their definition arbitrarily restricts "design patterns" to addressing recurring design problems "in object-oriented systems", and to being general arrangements of "objects and classes". If we ignore this arbitrary restriction, the "object-oriented class" pattern fits the description exactly. The definition in Wikipedia is: In software engineering, a design pattern is a general solution to a common problem in software design. A design pattern isn't a finished design that can be transformed directly into code; it is a description or template for how to solve a problem that can be used in many different situations. And the "object-oriented class" solution certainly qualifies. Codification of patterns Peter Norvig's presentation on "Design Patterns in Dynamic Languages" describes three "levels of implementation of a pattern":

  Invisible: So much a part of language that you don't notice.
  Formal: Implement pattern itself within the language; instantiate/call it for each use; usually implemented with macros.
  Informal: Design pattern in prose; refer to by name, but must be reimplemented from scratch for each use.

In C, the "object-oriented class" pattern is informal. It must be reimplemented from scratch for each use. If you want inheritance, you have to set it up manually. If you want abstraction, you have to set it up manually. The single major driver for the invention of C++ was to codify this pattern into the language so that it was "invisible". In C++, you don't have to think about the structs and you don't have to worry about keeping data and methods private.
You just declare a "class" (using syntax that looks almost exactly like a struct declaration) and annotate the items with "public" and "private" as appropriate. But underneath, it's doing the same thing. The earliest C++ compilers simply translated the C++ code into the equivalent C code and invoked the C compiler on it. There's a reason why the C++ method call syntax is object->method(args...): it's almost exactly the same as the equivalent code when the pattern is implemented in plain C. The only difference is that the object is passed implicitly, rather than explicitly. In C, you have to make a conscious decision to use OO style and to implement each feature of your OOP system as you go. If a program has fifty modules, you need to decide, fifty times, whether you will make the next module an OO-style module. In C++, you don't have to make a decision about whether or not you want OO programming and you don't have to implement it; it's built into the language. Sherman, set the wayback machine for 1957 If we dig back into history, we can find all sorts of patterns. For example:

Recurring problem: Two or more parts of a machine language program need to perform the same complex operation. Duplicating the code to perform the operation wherever it is needed creates maintenance problems when one copy is updated and another is not.

Solution: Put the code for the operation at the end of the program. Reserve some extra memory (a "frame") for its exclusive use. When other code (the "caller") wants to perform the operation, it should store the current values of the machine registers, including the program counter, into the frame, and transfer control to the operation. The last thing the operation does is to restore the register values from the values saved in the frame and jump back to the instruction just after the saved PC value.

This is a "pattern"-style description of the pattern we now know as "subroutine". It addresses a recurring design problem. It is a general arrangement of machine instructions that solves the problem. And the solution is customized and implemented to solve the problem in a particular context. Variations abound: "subroutine with passed parameters". "subroutine call with returned value". "Re-entrant subroutine". For machine language programmers of the 1950s and early 1960's, this was a pattern, reimplemented from scratch for each use. As assemblers improved, the pattern became formal, implemented by assembly-language macros. Shortly thereafter, the pattern was absorbed into Fortran and Lisp and their successors, and is now invisible. You don't have to think about the implementation any more; you just call the functions. Iterators and model-view-controller The last time I wrote about design patterns, it was to point out that although the movement was inspired by the "pattern language" work of Christopher Alexander, it isn't very much like anything that Alexander suggested, and that in fact what Alexander did suggest is more interesting and would probably be more useful for programmers than what the design patterns movement chose to take. One of the things I pointed out was essentially what Norvig does: that many patterns aren't really addressing recurring design problems in object-oriented programs; they are actually addressing deficiencies in object-oriented programming languages, and that in better languages, these problems simply don't come up, or are solved so trivially and so easily that the solution doesn't require a "pattern".
In assembly language, "subroutine call" may be a pattern; in C, the solution is to write result = function(args...), which is too simple to qualify as a pattern. In a language like Lisp or Haskell or even Perl, with a good list type and powerful primitives for operating on list values, the Iterator pattern is to a great degree obviated or rendered invisible. Henry G. Baker took up this same point in his paper "Iterators: Signs of Weakness in Object-Oriented Languages". I received many messages about this, and curiously, some made the same point in the same way: they said that although I was right about Iterator, it was a poor example because it was a very simple pattern, but that it was impossible to imagine a more complex pattern like Model-View-Controller being absorbed and made invisible in this way. This remark is striking for several reasons. It is an example of what is perhaps the most common philosophical fallacy: the writer cannot imagine something, so it must therefore be impossible. Well, perhaps it is impossible—or perhaps the writer just doesn't have enough imagination. It is worth remembering that when Edgar Allan Poe was motivated to investigate and expose Johann Maelzel's fraudulent chess-playing automaton, it was because he "knew" it had to be fraudulent because it was inconceivable that a machine could actually exist that could play chess. Not merely impossible, but inconceivable! Poe was mistaken, and the people who asserted that MVC could not be absorbed into a programming language were mistaken too. Since I gave my talk in 2002, several programming systems, such as Ruby on Rails and Subway have come forward that attempt to codify and integrate MVC in exactly the way that I suggested. Progress in programming languages Had the "Design Patterns" movement been popular in 1960, its goal would have been to train programmers to recognize situations in which the "subroutine" pattern was applicable, and to implement it habitually when necessary. While this would have been a great improvement over not using subroutines at all, it would have been vastly inferior to what really happened, which was that the "subroutine" pattern was codified and embedded into subsequent languages. Identification of patterns is an important driver of progress in programming languages. As in all programming, the idea is to notice when the same solution is appearing repeatedly in different contexts and to understand the commonalities. This is admirable and valuable. The problem with the "Design Patterns" movement is the use to which the patterns are put afterward: programmers are trained to identify and apply the patterns when possible. Instead, the patterns should be used as signposts to the failures of the programming language. As in all programming, the identification of commonalities should be followed by an abstraction step in which the common parts are merged into a single solution. Multiple implementations of the same idea are almost always a mistake in programming. The correct place to implement a common solution to a recurring design problem is in the programming language, if that is possible. The stance of the "Design Patterns" movement seems to be that it is somehow inevitable that programmers will need to implement Visitors, Abstract Factories, Decorators, and Façades. But these are no more inevitable than the need to implement Subroutine Calls or Object-Oriented Classes in the source language. These patterns should be seen as defects or missing features in Java and C++. 
The best response to identification of these patterns is to ask what defects in those languages cause the patterns to be necessary, and how the languages might provide better support for solving these kinds of problems. With Design Patterns as usually understood, you never stop thinking about the patterns after you find them. Every time you write a Subroutine Call, you must think about the way the registers are saved and the return value is communicated. Every time you build an Object-Oriented Class, you must think about the implementation of inheritance. People say that it's all right that Design Patterns teaches people to do this, because the world is full of programmers who are forced to use C++ and Java, and they need all the help they can get to work around the defects of those languages. If those people need help, that's fine. The problem is with the philosophical stance of the movement. Helping hapless C++ and Java programmers is admirable, but it shouldn't be the end goal. Instead of seeing the use of design patterns as valuable in itself, it should be widely recognized that each design pattern is an expression of the failure of the source language. If the Design Patterns movement had been popular in the 1980's, we wouldn't even have C++ or Java; we would still be implementing Object-Oriented Classes in C with structs, and the argument would go that since programmers were forced to use C anyway, we should at least help them as much as possible. But the way to provide as much help as possible was not to train people to habitually implement Object-Oriented Classes when necessary; it was to develop languages like C++ and Java that had this pattern built in, so that programmers could concentrate on using OOP style instead of on implementing it. Patterns are signs of weakness in programming languages. When we identify and document one, that should not be the end of the story. Rather, we should have the long-term goal of trying to understand how to improve the language so that the pattern becomes invisible or unnecessary. [ Thanks to Garrett Rooney for pointing out some minor errors that I have since corrected. - MJD ] [ Addendum 20061003: There is a followup article to this one, replying to a response by Ralph Johnson, one of the authors of the "Design Patterns" book. This link URL is correct, but Johnson's website will refuse it if you come from here. ] [Other articles in category /prog] permanent link Imaginary units, revisited Last night, shortly after posting my article about the fact that i and -i are mathematically indistinguishable, I thought of what I should have said about it—true to form, forty-eight hours too late. Here's what I should have said. The two square roots of -1 are indistinguishable in the same way that the top and bottom faces of a cube are. Sure, one is the top, and one is the bottom, but it doesn't matter, and it could just as easily be the other way around. Sure, you could say something like this: "If you embed the cube in R^3, then the top face is the set of points that have z-coordinate +1, and the bottom face is the set of points that have z -coordinate -1." 
And indeed, once you arbitrarily designate that one face is on the top and the other is on the bottom, then one is on the top, and one is on the bottom—but that doesn't mean that the two faces had any a priori difference, that one of them was intrinsically the top, or that the designation wasn't completely arbitrary; trying to argue that the faces are distinguishable, after having made an arbitrary designation to distinguish them, is begging the question. Now can you imagine anyone seriously arguing that the top and bottom faces of a cube are mathematically distinguishable? [ Previous article in this series: Part 1 Followup articles: Part 3 Part 4 Part 5 ] [Other articles in category /math] permanent link I get a new job Where did my blog go for the past six weeks? Well, I was busy with another project. Usually, when I am busy with a project, it shows up here, because I am thinking about it, and I want to write about what I am thinking. As Hans Arp said, it grows out of me and I keep cutting it off, like toenails. But in this case I could not write about the project here, because it was a secret. I was looking for a new job, and I did not want my old job to find out before I was ready. (Many people have been surprised to learn that I have a job; they remember that for many years I was intermittently a software consultant and itinerant programming trainer. But since January 2004 I have been regularly employed to do maintenance programming for the University of Pennsylvania's Networking and Telecommunications group.) Anyway, the job hunt has come to a close. I accepted a new job, put in my resignation letters at the old one, and can stop thinking about it for a while. The new work will be head software engineer at the Penn Genomics Institute. I will try to develop software for genetic biologists to use in their research. I expect that the new job will suit me somewhat better than the old one. I like that it is connected to science, and that I will be working with scientists. The work itself is important; genomics is going to change everything in the world. Also, it pays rather more than the old one, although that was not the principal concern. So with any luck blog posts will resume here, and eventually some genomics-related articles may start appearing. [Other articles in category /bio] permanent link Imaginary units Yesterday I had a phenomenally annoying discussion with the pedants on the IRC #math channel. Someone was talking about square roots, and for some reason I needed to point out that when you are considering square roots of negative numbers, it is important not to forget that there are two square roots. I should back up and discuss square roots in more detail. The square root of x, written √x, is defined to be the number y such that y^2 = x. Well, no, that actually contains a subtle error. The error is in the use of the word "the". When we say "the number y such that...", we imply that there is only one. But every number (except zero) has two square roots. For example, the square roots of 16 are 4 and -4. Both of these are numbers y with the property that y^2 = 16. In many contexts, we can forget about one of the square roots. For example, in geometry problems, all quantities are positive. (I'm using "positive" here to mean "≥ 0".) 
When we consider a right triangle whose legs have lengths a and b, we say simply that the hypotenuse has length √(a^2 + b^2), and we don't have to think about the fact that there are actually two square roots, because one of them is negative, and is nonsensical when discussing hypotenuses. In such cases we can talk about the square root function, sqrt(x), which is defined to be the positive number y such that y^2 = x. There the use of "the" is justified, because there is only one such number. But pinning down which square root we mean has a price: the square root function applies only to positive arguments. We cannot ask for sqrt(-1), because there is no positive number y such that y^2 = -1. For negative arguments, this simplification is not available, and we must fall back to using √ in its full ambiguity. In high school algebra, we all learn about a number called i, which is defined to be the square root of -1. But again, the use of the word "the" here is misleading, because "the" square root is not unique; -1, like every other number (except 0), has two square roots. We cannot avail ourselves of the trick of taking the positive one, because neither root is positive. And in fact there is no other trick we can use to distinguish the two roots; they are mathematically indistinguishable. The annoying discussion was whether it was correct to say that the two roots are mathematically indistinguishable. It was annoying because it's so obviously true. The number i is, by definition, a number such that i^2 = -1. This is its one and only defining property. Since there is another number which shares this single defining property, it stands to reason that this other root is completely interchangeable with i—mathematically indistinguishable from it, in other words. This other square root is usually written "-i", which suggests that it's somehow secondary to i. But this is not the case. Every numerical property possessed by i is possessed by -i as well. For example, i^3 = -i. But we can replace i with -i and get (-i)^3 = -(-i), which is just as true. Euler's famous formula says that e^ix = cos x + i sin x. But replacing i with -i here we get e^-ix = cos x + -i sin x, which is also true. Well, one of them is i, and the other is -i, so can't you distinguish them that way? No; those are only expressions that denote the numbers, not the numbers themselves. There is no way to know which of the numbers is denoted by which expression, and, in fact, it does not even make much sense to ask which number is denoted by which expression, since the two numbers are entirely interchangeable. One is i, and one is -i, sure, but this is just saying that one is the negative of the other. But so too is the other the negative of the one. One of the #math people pointed out that there is a well-known Im() function, the "imaginary part" function, such that Im(i) = 1, but Im(-i) = -1, and suggested, rather forcefully, that they could be distinguished that way. This, of course, is hopeless. Because in order to define the "imaginary part" function in the first place, you must start by making an entirely arbitrary choice of which square root of -1 you are using as the unit, and then define Im() in terms of this choice. For example, one often defines Im(z) as (z - z̄)/2i. But in order to make this definition, you have to select one of the imaginary units and designate it as i and use it in the denominator, thus begging the question. Had you defined Im() with -i in place of i, then Im(i) would have been -1, and vice versa.
Similarly, one #math inhabitant suggested that if one were to define the complex numbers as pairs of reals (a, b), such that (a, b) + (c, d) = (a + c, b + d), (a, b) × (c, d) = (ac - bd, ad + bc), then i is defined as (0,1), not (0,-1). This is even more clearly begging the question, since the definition of i here is solely a traditional and conventional one; defining i as (0, -1) instead of (0,1) works exactly as well; we still have i^2 = -1 and all the other important properties. As IRC discussions do, this one then started to move downwards into straw man attacks. The #math folks then argued that i ≠ -i, and so the two numbers are indeed distinguishable. This would have been a fine counterargument to the assertion that i = -i, but since I was not suggesting anything so silly, it was just stupid. When I said that the numbers were indistinguishable, I did not mean to say that they were numerically equal. If they were, then -1 would have only one square root. Of course, it does not; it has two unequal, but entirely interchangeable, square roots. The fact that the square roots of -1 are indistinguishable has real content. 1 has two square roots that are not interchangeable in this way. Suppose someone tells you that a and b are different square roots of 1, and you have to figure out which is which. You can do that, because among the two equations a^2 = a, b^2 = b, only one will be true. If it's the former, then a=1 and b=-1; if the latter, then it's the other way around. The point about the square roots of -1 is that there is no corresponding criterion for distinguishing the two roots. This is a theorem. But the result is completely obvious if you just recall that i is merely defined to be a square root of -1, no more and no less, and that -1 has two square roots. Oh well, it's IRC. There's no solution other than to just leave. [ Addenda: Part 2 Part 3 Part 4 Part 5 ] [Other articles in category /math] permanent link
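As a quick numerical sanity check of the claims above (an editorial addition, not part of the original post): the identities hold equally well for i and -i, and conjugation a + bi → a - bi preserves both operations.

  /* Editorial check: compile with a C99 compiler and -lm. */
  #include <stdio.h>
  #include <complex.h>
  #include <math.h>

  int main(void) {
      double complex z = 3.0 - 4.0 * I, w = -2.0 + 0.5 * I;

      /* Both square roots of -1 satisfy the one defining property. */
      printf("i^2    = %+.1f%+.1fi\n", creal(I * I), cimag(I * I));
      printf("(-i)^2 = %+.1f%+.1fi\n", creal((-I) * (-I)), cimag((-I) * (-I)));

      /* i^3 = -i, and (-i)^3 = -(-i) = i, as claimed. */
      printf("i^3    = %+.1f%+.1fi\n", creal(I * I * I), cimag(I * I * I));
      printf("(-i)^3 = %+.1f%+.1fi\n", creal((-I) * (-I) * (-I)), cimag((-I) * (-I) * (-I)));

      /* Conjugation preserves addition and multiplication. */
      printf("|conj(z+w) - (conj z + conj w)| = %g\n", cabs(conj(z + w) - (conj(z) + conj(w))));
      printf("|conj(z*w) - (conj z * conj w)| = %g\n", cabs(conj(z * w) - conj(z) * conj(w)));

      /* Euler's formula holds with -i in place of i. */
      double x = 1.234;
      printf("|e^(-ix) - (cos x - i sin x)|   = %g\n",
             cabs(cexp(-I * x) - (cos(x) - I * sin(x))));
      return 0;
  }

Each of the absolute-value lines prints a value that is zero to machine precision, which is exactly the interchangeability the article describes.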
{"url":"http://blog.plover.com/2006/09/","timestamp":"2014-04-21T04:33:25Z","content_type":null,"content_length":"76485","record_id":"<urn:uuid:1b605df6-b8c6-4abb-8ee9-bfa466d83ef5>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
Hawthorne, CA Precalculus Tutor Find a Hawthorne, CA Precalculus Tutor ...My major is Civil Engineering which adds to my extensive teaching experience. I started tutoring when I was 15. My students have stayed in touch after more than 20 years, being most of them successful professionals in various technical fields such as Engineering. 20 Subjects: including precalculus, Spanish, physics, calculus ...Prior to becoming a teacher I was an Electrical Engineer and a graduate from Carnegie Mellon University in Pittsburgh, PA. I worked for a 8+ years as a teacher in all subjects of Math in High Schools. I have also tutored students during this time frame in Math (pre-Algebra, Algebra 1 and 2, Geo... 11 Subjects: including precalculus, geometry, algebra 2, SAT math ...For the past two years I worked individually with ten students from 6th grade to 12th grade on a weekly basis. The majority of my tutoring experience has been in mathematics (elementary school math through college calculus) and test preparation (ACT, SAT, etc.); however, I have also tutored stud... 28 Subjects: including precalculus, reading, calculus, English ...When assisting with test prep, I teach my students how to efficiently and confidently navigate the test itself, in addition to identifying topics glossed over in class and tricks that cater to a student's learning style. At Brown University, my studies centered on kinesthetic learning via dance,... 60 Subjects: including precalculus, English, Spanish, reading ...During my undergraduate study for a B.S. Chemical Engineering at UCR, I have taken college level courses in pre-calculus, English, and Chemistry. I am most experienced with college students, but can also tutor middle and high school students. 12 Subjects: including precalculus, chemistry, reading, writing
{"url":"http://www.purplemath.com/hawthorne_ca_precalculus_tutors.php","timestamp":"2014-04-20T01:54:17Z","content_type":null,"content_length":"24321","record_id":"<urn:uuid:56f64ae0-c1bd-4789-a7aa-8641af588e27>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Real Math • Students will use a variety of basic and advanced math skills to solve real-world problems • Students will understand the relationship between academic knowledge or skills and future careers 1. Real Math (PDF) Background Discussion (10 minutes) 1. Explain that mathematics doesn't have to be difficult or intimidating, and that accounting and banking are not the only careers that require math knowledge. Many other professionals use math every day without even realizing it. For most people, however, math is easier to understand if it can be applied to everyday life. For example, figuring out how much lumber to buy for a home improvement project may feel easier than completing geometry homework, and calculating what percentage of your paycheck is needed to pay a cell phone bill might seem simpler than finishing a word problem. However, the math involved is the same. 2. Ask: Do you believe that there is a difference between "classroom" math and "real world" math? Discuss your students' responses using examples of math in the real world such as the ones below. Help students see the links between the algebra, geometry, and arithmetic skills they learn at school and the math they use in their everyday lives. a. Construction worker: geometry, algebra, arithmetic, billing, and purchasing b. Hair stylist: calculate percentages to mix chemicals, understand angles and geometry when cutting/shaping different hairstyles c. Retail business owner: calculate prices based on wholesale costs, budgets, planning for holiday purchasing d. Magazine publisher: understand who is reading your magazine, the characteristics of your readers, understanding the market and whether or not there is growth in that market e. Real estate agent: calculate mortgage rates, price trends, taxes Writing Action (20 minutes) 1. Separate students into pairs and distribute Real Math (PDF) Student Reproducible to each pair. 2. Read the introduction aloud and instruct each group to complete the worksheet. 3. Review the answers as a class using the answer key below. Active Wrap-up (10 minutes) Play this quick response game to help students synthesize their understandings of math's role in a wide variety of careers. Choose a timekeeper for this activity. Then read the first career on the list below to a student. Ask the student to tell everyone one way in which the job uses math. Read the next career to the next student and repeat the exercise. When you get to the end of the list go back to the beginning or add careers of your own. Students should be given only five seconds to come up with a way that the job uses math. If they do not respond within the time frame, go back to the beginning and start again. Remind students that no answers should be repeated and challenge them to think as creatively as possible! Career List: • Computer designer • Video game developer • Hair stylist • Radio broadcaster • Marine biologist • Writer • NASA employee • Spy • Ship builder • Military officer • Actor • Bakery owner • Fashion designer • Hotel manager • Teacher • Fireworks designer • Child care worker • Musician Answer Key (For Real Math Student Reproducible 2): 1. A) 410+175+165+175=925; B) Antron Brown by 25 points; C) To determine how long it took Antron to travel one mile, divide the number of seconds in an hour (3600) by the number of miles traveled in that hour (3600/185=19.46). This figure represents how long it took Antron to travel one mile.
Because Antron traveled one mile in 19.46 seconds, divide 19.46 by 4 to determine how long it took Antron to travel a quarter-mile (19.46/4=4.865 or 5 seconds). 2. Double the distance between the two circles and add half the circumference of each circle. (Please note that the two half-circumferences together equal the circumference of one full circle.) (12 x 2) + (4*3.14)=36.56 inches. 3. A) ((32 ounces x 10) + 8 ounces)/32=10.25 quarts; B) $3.75*10.25 quarts=$38.44; C) $38.44*36=$1383.84; D) $1383.84/10000=13.8% or 14% 4. A) (30+30)+x=180, x=120 degrees; B) (30+90)+x=180, x=60 degrees.
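The answer-key arithmetic above can be re-checked mechanically. The short program below is an editorial addition; the inputs (185 mph, 12 inches, 4 inches, 32 ounces, and so on) are inferred from the key itself, since the student worksheet PDF is not reproduced here.

  /* Editorial check of the answer-key arithmetic. */
  #include <stdio.h>

  int main(void) {
      printf("1A: 410+175+165+175       = %d\n",   410 + 175 + 165 + 175);      /* 925     */
      printf("1C: 3600/185              = %.2f\n", 3600.0 / 185.0);             /* 19.46   */
      printf("1C: 19.46/4               = %.3f\n", 19.46 / 4.0);                /* 4.865   */
      printf("2 : (12 x 2) + (4 x 3.14) = %.2f\n", 12.0 * 2 + 4.0 * 3.14);      /* 36.56   */
      printf("3A: ((32 x 10) + 8)/32    = %.2f\n", (32.0 * 10 + 8) / 32.0);     /* 10.25   */
      printf("3B: 3.75 x 10.25          = %.2f\n", 3.75 * 10.25);               /* 38.44   */
      printf("3C: 38.44 x 36            = %.2f\n", 38.44 * 36);                 /* 1383.84 */
      printf("3D: 1383.84/10000         = %.1f%%\n", 1383.84 / 10000 * 100);    /* 13.8%   */
      printf("4A: 180 - (30+30)         = %d\n",   180 - (30 + 30));            /* 120     */
      printf("4B: 180 - (30+90)         = %d\n",   180 - (30 + 90));            /* 60      */
      return 0;
  }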
{"url":"http://www.scholastic.com/browse/lessonplan.jsp?id=701","timestamp":"2014-04-20T08:14:57Z","content_type":null,"content_length":"18759","record_id":"<urn:uuid:b7595a91-ef10-4d9f-9d1d-0814e35d0bdd>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00659-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 67 , 2000 "... Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range- ..." Cited by 1057 (71 self) Add to MetaCart Stochastic Logic Programs (SLPs) have been shown to be a generalisation of Hidden Markov Models (HMMs), stochastic context-free grammars, and directed Bayes' nets. A stochastic logic program consists of a set of labelled clauses p:C where p is in the interval [0,1] and C is a first-order range-restricted definite clause. This paper summarises the syntax, distributional semantics and proof techniques for SLPs and then discusses how a standard Inductive Logic Programming (ILP) system, Progol, has been modified to support learning of SLPs. The resulting system 1) finds an SLP with uniform probability labels on each definition and near-maximal Bayes posterior probability and then 2) alters the probability labels to further increase the posterior probability. Stage 1) is implemented within CProgol4.5, which differs from previous versions of Progol by allowing user-defined evaluation functions written in Prolog. It is shown that maximising the Bayesian posterior function involves finding SLPs with short derivations of the examples. Search pruning with the Bayesian evaluation function is carried out in the same way as in previous versions of CProgol. The system is demonstrated with worked examples involving the learning of probability distributions over sequences as well as the learning of simple forms of uncertain knowledge. , 1995 "... This paper firstly provides a re-appraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol ..." Cited by 631 (59 self) Add to MetaCart This paper firstly provides a re-appraisal of the development of techniques for inverting deduction, secondly introduces Mode-Directed Inverse Entailment (MDIE) as a generalisation and enhancement of previous approaches and thirdly describes an implementation of MDIE in the Progol system. Progol is implemented in C and available by anonymous ftp. The re-assessment of previous techniques in terms of inverse entailment leads to new results for learning from positive data and inverting implication between pairs of clauses. - MACHINE LEARNING , 1995 "... Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories f ..." Cited by 81 (7 self) Add to MetaCart Knowledge acquisition is a difficult, error-prone, and time-consuming task. The task of automatically improving an existing knowledge base using learning methods is addressed by the class of systems performing theory refinement. This paper presents a system, Forte (First-Order Revision of Theories from Examples), which refines first-order Horn-clause theories by integrating a variety of different revision techniques into a coherent whole. Forte uses these techniques within a hill-climbing framework, guided by a global heuristic.
It identifies possible errors in the theory and calls on a library of operators to develop possible revisions. The best revision is implemented, and the process repeats until no further revisions are possible. Operators are drawn from a variety of sources, including propositional theory refinement, first-order induction, and inverse resolution. Forte is demonstrated in several domains, including logic programming and qualitative modelling. - Inductive Logic Programming , 1992 "... This paper addresses methods of specialising first-order theories within the context of incremental learning systems. We demonstrate the shortcomings of existing first-order incremental learning systems with regard to their specialisation mechanisms. We prove that these shortcomings are fundamental ..." Cited by 58 (11 self) Add to MetaCart This paper addresses methods of specialising first-order theories within the context of incremental learning systems. We demonstrate the shortcomings of existing first-order incremental learning systems with regard to their specialisation mechanisms. We prove that these shortcomings are fundamental to the use of classical logic. In particular, minimal "correcting " specialisations are not always obtainable within this framework. We propose instead the adoption of a specialisation scheme based on an existing non-monotonic logic formalism. This approach overcomes the problems that arise with incremental learning systems which employ classical logic. As a side-effect of the formal proofs developed for this paper we define a function called "deriv" which turns out to be an improvement on an existing explanation-based-generalisation (EBG) algorithm. Prolog code and a description of the relationship between "deriv" and the previous EBG algorithm are described in an appendix. 1 Introduction ... , 1997 "... The inductive synthesis of recursive logic programs from incomplete information, such as input/output examples, is a challenging subfield both of ILP (Inductive Logic Programming) and of the synthesis (in general) of logic programs from formal specifications. We first overview past and present achie ..." Cited by 34 (8 self) Add to MetaCart The inductive synthesis of recursive logic programs from incomplete information, such as input/output examples, is a challenging subfield both of ILP (Inductive Logic Programming) and of the synthesis (in general) of logic programs from formal specifications. We first overview past and present achievements, focusing on the techniques that were designed specifically for the inductive synthesis of recursive logic programs, but also discussing a few general ILP techniques that can also induce non-recursive hypotheses. Then we analyse the prospects of these techniques in this task, investigating their applicability to software engineering as well as to knowledge acquisition and discovery. - Machine Learning , 1996 "... . Machine learning can be a most valuable tool for improvingthe flexibility and efficiency of robot applications. Many approaches to applying machine learning to robotics are known. Some approaches enhance the robot's high-level processing, the planning capabilities. Other approaches enhance the low ..." Cited by 32 (6 self) Add to MetaCart . Machine learning can be a most valuable tool for improvingthe flexibility and efficiency of robot applications. Many approaches to applying machine learning to robotics are known. Some approaches enhance the robot's high-level processing, the planning capabilities. 
Other approaches enhance the low-level processing, the control of basic actions. In contrast, the approach presented in this paper uses machine learning for enhancing the link between the low-level representations of sensing and action and the high-level representation of planning. The aim is to facilitate the communication between the robot and the human user. A hierarchy of concepts is learned from route records of a mobile robot. Perception and action are combined at every level, i.e., the concepts are perceptually anchored. The relational learning algorithm grdt has been developed which completely searches in a hypothesis space, that is restricted by rule schemata, which the user defines in terms of grammars. Keywords... - SIGART Bulletin , 1993 "... Inductive Logic Programming (ILP) is a research area which investigates the construction of first-order definite clause theories from examples and background knowledge. ILP systems have been applied successfully in a number of real-world domains. These include the learning of structureactivity rules ..." Cited by 31 (3 self) Add to MetaCart Inductive Logic Programming (ILP) is a research area which investigates the construction of first-order definite clause theories from examples and background knowledge. ILP systems have been applied successfully in a number of real-world domains. These include the learning of structureactivity rules for drug design, finite-element mesh design rules, rules for primary-secondary prediction of protein structure and fault diagnosis rules for satellites. There is a well established tradition of learning-in-the-limit results in ILP. Recently some results within Valiant's PAC-learning framework have also been demonstrated for ILP systems. In this paper it is argued that algorithms can be directly derived from the formal specifications of ILP. This provides a common basis for Inverse Resolution, ExplanationBased Learning, Abduction and Relative Least General Generalisation. A new general-purpose, efficient approach to predicate invention is demonstrated. ILP is underconstrained by its logical ... - IN PROCEEDINGS OF THE ELEVENTH INTERNATIONAL CONFERENCE ON MACHINE LEARNING , 1994 "... In this paper we investigate the efficiency of `-- subsumption (` ` ), the basic provability relation in ILP. As D ` ` C is NP--complete even if we restrict ourselves to linked Horn clauses and fix C to contain only a small constant number of literals, we investigate in several restrictions of D. ..." Cited by 31 (3 self) Add to MetaCart In this paper we investigate the efficiency of `-- subsumption (` ` ), the basic provability relation in ILP. As D ` ` C is NP--complete even if we restrict ourselves to linked Horn clauses and fix C to contain only a small constant number of literals, we investigate in several restrictions of D. We first adapt the notion of determinate clauses used in ILP and show that `--subsumption is decidable in polynomial time if D is determinate with respect to C. Secondly, we adapt the notion of k--local Horn clauses and show that `-- subsumption is efficiently computable for some reasonably small k. We then show how these results can be combined, to give an efficient reasoning procedure for determinate k--local Horn clauses, an ILP--problem recently suggested to be polynomial predictable by Cohen (1993) by a simple counting argument. We finally outline how the `--reduction algorithm, an essential part of every lgg ILP--learning algorithm, can be improved by these ideas. 
- JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH , 2002 "... We develop, analyze, and evaluate a novel, supervised, specific-to-general learner for a simple temporal logic and use the resulting algorithm to learn visual event definitions from video sequences. First, we introduce a simple, propositional, temporal, event-description language called AMA that ..." Cited by 30 (3 self) Add to MetaCart We develop, analyze, and evaluate a novel, supervised, specific-to-general learner for a simple temporal logic and use the resulting algorithm to learn visual event definitions from video sequences. First, we introduce a simple, propositional, temporal, event-description language called AMA that is sufficiently expressive to represent many events yet sufficiently restrictive to support learning. We then give algorithms, along with lower and upper complexity bounds, for the subsumption and generalization problems for AMA formulas. We present a positive-examples -- only specific-to-general learning method based on these algorithms. We also present a polynomial-time -- computable "syntactic" subsumption test that implies semantic subsumption without being equivalent to it. A generalization algorithm based on syntactic subsumption can be used in place of semantic generalization to improve the asymptotic complexity of the resulting learning algorithm. Finally - Artificial Intelligence Journal , 1992 "... All generalisations within logic involve inverting implication. Yet, ever since Plotkin's work in the early 1970's methods of generalising first-order clauses have involved inverting the clausal subsumption relationship. However, even Plotkin realised that this approach was incomplete. Since inversi ..." Cited by 26 (2 self) Add to MetaCart All generalisations within logic involve inverting implication. Yet, ever since Plotkin's work in the early 1970's methods of generalising first-order clauses have involved inverting the clausal subsumption relationship. However, even Plotkin realised that this approach was incomplete. Since inversion of subsumption is central to many Inductive Logic Programming approaches, this form of incompleteness has been propagated to techniques such as Inverse Resolution and Relative Least General Generalisation. A more complete approach to inverting implication has been attempted with some success recently by Lapointe and Matwin. In the present paper the author derives general solutions to this problem from first principles. It is shown that clausal subsumption is only incomplete for self-recursive clauses. Avoiding this incompleteness involves algorithms which find "nth roots" of clauses. Completeness and correctness results are proved for a non-deterministic algorithms which constructs nth ro...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=691158","timestamp":"2014-04-19T23:23:34Z","content_type":null,"content_length":"39611","record_id":"<urn:uuid:a5740aed-8ffb-4942-9d1e-48cad9ccb9b9>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Nathan Brixius I have created a simulation model in Microsoft Excel using Frontline Systems’ Analytic Solver Platform to predict the 2014 NCAA Tournament using the technique I described in my previous post. Click here to download the spreadsheet. To try it out, go to solver.com and download a free trial of Analytic Solver Platform by clicking on Products –> Analytic Solver Platform: Once you’ve installed the trial, open the spreadsheet. You’ll see a filled-out bracket in the “Bracket” worksheet: Winners are determined by comparing the ratings of each time, using Excel formulas. Basically…a bunch of IF statements: The magic of simulation is that it accounts for uncertainty in the assumptions we make. In this case, the uncertainty is my crazy rating system: it might be wrong. So instead of a single number that represents the strength of, say, Florida, we actually have a range of possible ratings based on a probability distribution. I have entered these probability distributions for the ratings for each team in column F. Double click on cell F9 (Florida’s rating), and you can see the range of ratings that the simulation considers: The peak of the bell curve (normal) distribution is at 0.1245, the rating calculated in my previous post. Analytic Solver Platform samples different values from this distribution (and the other 63 teams), producing slightly different ratings, over and over again. As the ratings jiggle around for different trials, different teams win games and there are different champions for these simulated tournaments. In fact, if you hit F9 (or the “Calculate Now” button in the ribbon), you can see that all of the ratings change and the NCAA champion in cell Y14 sometimes changes from Virginia to Florida to Duke and so on. Click the “play” button on the right hand side to simulate the NCAA tournament 10,000 times: Now move over to the Results worksheet. In columns A and B you see the number of times each team won the simulated tournament (the sum of column B adds up to 10,000): There is a pivot table in columns E and F that summarizes the results. Right click to Refresh it, and the nifty chart below: We see that even though Virginia is predicted to be the most likely winner, Florida and Duke are also frequent winners. What’s nice about the spreadsheet is that you can change it to do your own simulations. Change the values in columns D and E in the Bracket worksheet to incorporate your own rating system and see who your model predicts will win. The simulation only scratches the surface of what Analytic Solver Platform can do. Go crazy with correlated distributions (perhaps by conference?) or even simulation-optimization models to tune your model. Have fun. I revealed my analytics model’s 2014 NCAA Tournament picks in yesterday’s post. Today, I want to describe how the ratings were determined. (Fair warning: this post will be quite a bit more technical and geeky.) Click here to download the Python model source code. My NCAA prediction model computes a numerical rating for each team in the field. Picks are generated by comparing team ratings: the team with the higher rating is predicted to advance. As I outlined in my preview, the initial model combines two ideas: 1. A “win probability” model developed by Joel Sokol in 2010 as described on Net Prophet. 2. An eigenvalue centrality model based on this post on BioPhysEngr Blog. The eigenvalue centrality model creates a big network (also called a graph) that links all NCAA teams. The arrows in the network represent games between teams. 
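To make the bookkeeping concrete, here is roughly what assembling that network can look like in code. This is a simplified sketch with made-up names, not the actual script (the real thing also has to parse the game files); the edge-weight function is left as a parameter, and its details are described below.

```python
from collections import defaultdict

def build_game_graph(games, weight_fn):
    """Sketch: `games` is an iterable of
    (home_team, away_team, home_score, away_score) tuples, and
    `weight_fn(margin)` turns a home margin of victory into a
    strength-of-victory number between 0 and 1."""
    graph = defaultdict(lambda: defaultdict(float))
    for home, away, home_score, away_score in games:
        margin = home_score - away_score      # positive if the home team won
        w = weight_fn(margin)
        graph[away][home] += w                # credit flows toward the stronger team
        graph[home][away] += 1.0 - w          # and the complement goes the other way
    return graph
```

Everything interesting lives in the choice of weight_fn and in how the resulting network is analyzed.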
Eigenvalue centrality analyzes the network to determine which network nodes (which teams), are strongest. The model I described in my preview was pretty decent, but it failed to address two important issues: • Recently played games should count more than games at the beginning of the season. • Edge weights should reflect the probability one team is stronger than another, rather than probability one will beat another on a neutral floor. The first issue is easy to explain. In my initial model, game-by-game results were analyzed to produce edge weights in a giant network linking teams. The weight was simply the formula given by Joel Sokol in his 2010 paper. However, it seems reasonable that more recently played games are more important, from a predictive perspective, than early season games. To account for this factor, I scale the final margin of victory for more recently played games by a “recency” factor R. If one team beats another by K points at the start of the season, we apply the Sokol formula with K. However, if one team beats another by K points at the end of the season, we apply the formula with R*K. If R=2, that means a 10 point victory at the start of the season is worth the same as a 5 point victory at the end. If the game was in the middle of the season, we’d apply half of the adjustment: 7.5 points. The second issue – regarding edge weights and team strength – is more subtle. As you saw in the “Top 25” from my preview post, there were some strange results. For example, Canisius was rated #24. The reason is that the Sokol formula is not very sensitive to small margins of victory. Let’s look at an example. Here is the Sokol formula: phi(0.0189 * x – 0.0756) If you try the values 1..6 you get the probabilities [0.477, 0.485, 0.492, 0.5, 0.508, 0.515]. This means that the difference between a 1-point home win and a 6-point home win is only 0.515 – 0.477 = 0.0377 ~= 3%. This means that most of the nonzero values in the big adjacency matrix that we create are around 0.5, and consequently our centrality method is determining teams that are influential in the network, rather than teams that are dominant. One way to find teams that are dominant is to scale the margin of victory so that a 6-point victory is worth much more than a 1-point victory. So the hack here is to substitute S*x for x in the formula, where S is a “sensitivity” scaling factor. One last tiny adjustment I made was to pretend that Joel Embiid did not play this year, so that Kansas’s rating reflects their strength without him. Long story short, I subtracted 1.68 points for all games that Joel Embiid appeared in. This post has the details. My Python code implements everything I described in this post and the preview. I generated the picks by choosing the recency parameter R = 1.5 and strength parameter S = 2. Here is a sample call and scoreNcaa(25, 20, 2, 1.5, 0) ['Virginia', 0.13098760857436742] ['Florida', 0.12852960094006807] ['Duke', 0.12656196253849666] ['Kansas', 0.12443601960952431] ['Michigan St', 0.12290861109638007] ['Arizona', 0.12115701603335856] ['Wisconsin', 0.11603580613955565] ['Pittsburgh', 0.11492421298144373] ['Michigan', 0.11437543620057213] ['Iowa St', 0.1128795675290855] If you’ve made it this far, and have the source code, you can figure out what most of the other parameters mean. (Or you can ask in the comments!) The answer to the question, “why did Virginia come out first” is difficult to answer succinctly. Basically: • Virginia, Florida, and Duke are all pretty close. 
• Virginia had a consistently strong schedule. • Their losses were generally speaking close games to strong opponents. • They had several convincing, recent victories over other very strong teams. In a future post, I will provide an Excel spreadsheet that will allow you to build and simulate your own NCAA tournament models! Here are my picks for the 2014 NCAA Tournament, based on the analytics model I described in this post. This post contains the picks and my next post will contain the code and methodology for the geeks among us. I use analytics for my NCAA picks for my own education and enjoyment, and to absolve responsibility for them. No guarantees! Here is a link to picks for all rounds in PDF format. Here is a spreadsheet with all picks and ratings. This year’s model examined every college basketball game played in Division I, II, III, and Canada based on data from Prof. Peter Wolfe and from MasseyRatings.com. The ratings implicitly account for strength of opposition, and explicitly account for neutral site games, recency, and Joel Imbiid’s back (it turned out not to matter). I officially deem these picks “not crappy”. The last four rounds are given at the end – the values next to each team are the scores generated by the model. The model predicts Virginia, recent winners of the ACC tournament, will win it all in 2014 in a rematch with Duke. Arizona was rated the sixth best team in the field but is projected to make it to the Final Four because it plays in the weakest region (the West). Florida, the second strongest team in the field (juuust behind Virginia) joins them. Wichita State was rated surprisingly low (25th) even though it is currently undefeated, basically due to margin of victory against relatively weaker competition (although the Missouri Valley has been an underrated conference over the past several years). Wichita State was placed in the Midwest region, clearly the toughest region in the bracket, and is projected to lose to underseeded Kentucky in the second round. Here is the average and median strengths of the four regions. The last column is the 75th percentile, which is an assessment of the strength of the elite teams in each bracket. Green means easy: Region Avg Med Top Q South 0.0824 0.0855 0.1101 East 0.0816 0.0876 0.1064 West 0.0752 0.0831 0.1008 Midwest 0.0841 0.0890 0.1036 The model predicts a few upsets (though not too many). The winners of the “play-in games” are projected to knock off higher seeded Saint Louis and UMass. Kentucky is also projected to beat Louisville , both of whom probably should have been seeded higher. Baylor is projected to knock off Creighton, busting Warren Buffett’s billion dollar bracket in Round 2. Sweet 16 Elite 8 Florida 0.1285 Florida 0.1285 VA Commonwealth 0.1097 Syracuse 0.1111 Kansas 0.1281 Kansas 0.1244 Virginia 0.1310 Virginia 0.1281 Michigan St 0.1229 Iowa St 0.1129 Iowa St 0.1129 Villanova 0.1060 Arizona 0.1212 Arizona 0.1212 Oklahoma 0.1001 Baylor 0.1013 Wisconsin 0.1160 Wisconsin 0.1160 Kentucky 0.1081 Kentucky 0.1081 Louisville 0.1065 Duke 0.1266 Duke 0.1266 Michigan 0.1144 Final Four Championship Florida 0.1285 Virginia 0.1310 Virginia 0.1310 Duke 0.1266 Arizona 0.1212 Duke 0.1266 Joel Embiid is the starting center of the Kansas Jayhawks and one of the most talented college basketball players in the country. Unfortunately he suffered a stress fracture in his back and is likely to miss at least the first weekend of the upcoming NCAA tournament. 
Some think that Kansas is headed for an early round exit while others think that Kansas’s seed should not be affected at all. Can we use analytics, even roughly, to assess the impact on Kansas’ NCAA tournament prospects? How about looking at win shares? A “win share” is a statistical estimate of the number of team wins that can be attributed to an individual’s performance. According to the amazing Iowa-powered basketball-reference.com, Embiid’s win shares per 40 minutes are an impressive 0.212 (an average player is around .100). HIs primary replacement, Tarik Black, is at 0.169. That’s a difference of 0.042 win shares per 40 minutes. I probably can’t technically do what I am about to do, but who cares. Since Kansas averages 80 points a game, the win share difference is 80 x 0.042 = 3.36 points per game. However, Embiid was only playing around 23 minutes a game, and Black isn’t even getting all of his minutes. Certain other teammates (Wiggins!) may simply play more minutes than usual to compensate. So 3.36 is probably on the high side. If we estimate that Embiid’s presence will be missed for only 20 player-minutes per game, an estimate of 1.68 points per game is probably reasonable. I will use this assumption in my upcoming NCAA Tournament model. If we look at Kansas’s schedule we see that this difference would possibly only have swayed two games (Oklahoma State and Texas Tech). Embiid’s loss should not affect his team’s seeding any more than it already has by having lost to Iowa State in the Big 12 tournament. Kansas is a solid 2 seed, but Embiid’s loss, if prolonged, could delay a fifteenth Final Four appearance. Mae West said that too much of a good thing is wonderful. For we shipbuilders who write numerical code that is certainly true of speed and accuracy. How seldom we find ourselves in the happy situation of a piece of code that is both fast enough and accurate enough! A colleague and I were chatting about speed and accuracy today and I realized that when I am building software, I prefer a piece of code that is accurate but slow over one that is less accurate but faster. With profiling and careful thought applied to new code, it’s usually pretty easy to make it faster. Addressing a wide-spread numerical issue often requires a complete re-think. If I am simply using the software (rather than build it), then all bets are off; it depends on what I am trying to do. My NCAA Tournament Prediction Model posts have traditionally been pretty popular, so I thought I would put in a bit more effort this year. In this post I want to share some “raw materials” that you might find helpful, and describe the methodology behind this year’s model. Here are some resources that you might find helpful if you want to build your own computer-based model for NCAA picks: This year I am going to combine two ideas to build my model. The first is a “win probability” model developed by Joel Sokol which is described on Net Prophet. As the blog post says, this model estimates the probability that Team A will beat Team B on a neutral site given Team A beat Team B at home by a given number of points. So for example if A loses to B by 40 at home, this probability is close to zero. You can hijack this model to assign a “strength of victory” rating: a blowout win is a greater show of team strength than a one-point thriller. The second idea is a graph theoretical approach stolen from this excellent post on BioPhysEngr Blog. The idea here is to create a giant network based on the results of individual games. 
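For reference, the Sokol curve is just a normal CDF applied to a scaled margin of victory. A minimal sketch, using the constants quoted in the post above and scipy only for the CDF (the function name is mine):

```python
from scipy.stats import norm

def sokol_win_probability(home_margin):
    """Estimated probability that the home team would beat the same opponent
    on a neutral floor, given that it won at home by `home_margin` points.
    This is the phi(0.0189 * x - 0.0756) formula discussed above."""
    return norm.cdf(0.0189 * home_margin - 0.0756)

# Sanity check against the values quoted earlier:
# [round(sokol_win_probability(x), 3) for x in range(1, 7)]
# gives approximately [0.477, 0.485, 0.492, 0.5, 0.508, 0.515]
```

In the final model the margin also gets scaled by the recency factor R and the sensitivity factor S described earlier before this formula is applied. Back to the network itself: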
So for example if Iowa beats Ohio State then there are arrows between the Iowa and Ohio State nodes. The weight on the edge is a representation of the strength of the victory (or loss). Given this network we can apply an eigenvalue centrality approach. In English, this means determining the importance of all of the nodes in the network, which in my application means the overall strength of each team. I like this approach because it is easy for me to code: computing the largest eigenvalue using the power method is simple enough for even Wikipedia to describe succinctly. (And shockingly enough, according to the inscription on my Numerical Analysis text written by the great Ken Atkinson, I learned it twenty years ago!) The difference between my approach and the BioPhysEngr approach is that I am using Sokol’s win probability logic to calculate the edge weights. As you’ll see when I post the code, it’s about 150 lines of Python, including all the bits to read in the game data. I ran a preliminary version of my code against all college basketball games up until March 9, and my model’s Top 25 is given below. Mostly reasonable with a few odd results (Manhattan? Canisius? Iona?) I will make a few tweaks and post my bracket after the selection show on Sunday. 1 Wichita St 2 Louisville 3 Villanova 4 Duke 5 Kansas 6 Florida 7 Arizona 8 Virginia 9 Michigan St 10 North Carolina 11 Ohio State 12 Wisconsin 13 Manhattan 14 Syracuse 15 Iowa 16 Kentucky 17 Iona 18 Pittsburgh 19 Creighton 20 VA Commonwealth 21 Tennessee 22 Oklahoma St 23 Michigan 24 Canisius 25 Connecticut Box plots are widely used among data scientists and statisticians. They’re useful because they show variation both between and within data series. R, Python’s matplotlib, and many other charting libraries support box plots right out of the…box, but Excel does not. In Excel 2013, with a little bit of imagination you can create nice looking box plots without writing any code. Read this post to find out how to create box plots that look like this: Here is a workbook that has the finished product if you don’t want to follow along. You’ll need to start with a table containing the data you want to plot. I am using the data from the Michelson-Morley experiment: A box plot shows the median of each data series as a line, with a “box” whose top edge is the third quartile and whose bottom edge is the first quartile. Often we draw “whiskers” at the top and bottom representing the extreme values of each series. If we create an auxiliary data containing this data and follow my advice from my Error Bars in Excel post, we can create a nice looking box plot. Step 1: Calculate Quartiles and Extremes. Create another table with the following rows for each series: min, q1, q2, q3, max. These will be the primary data in your box plot. Min and max are easy – use the =MIN() and =MAX() formulas on each data series (represented as columns A – E in my example). To compute Q1-Q3 use the QUARTILE.INC() function. (INC means “inclusive”. QUARTILE.EXC() would work fine if that’s what you want.) Enter the formulas for the first series and then “fill right”: Step 2: Calculate box and whisker edges We are going to create a stacked column chart with error bars, and “hide” the bottommost column in the stack to make the chart look like a box plot. Therefore we have to calculate the tops and bottoms of our boxes and whiskers: • The bottom of each box is Q1. • The ‘middle’ of each box is Q2 (the median). Since this is a stacked column chart, we actually want to compute Q2 – Q1. 
• The top of each box is Q3. Since we want to represent this as a “slice” in the stacked column chart, we want Q3 – Q2. • The error going “down” in the chart is Q2 – min, since the whiskers start at the median. • The error going “up” is max – Q2. Compute these five quantities as rows and you’ll have this: Step 3: Create a stacked column chart. Go to the INSERT tab and select a stacked column chart: Now right click on the blank chart, choose Select Data Range and select the “box lo, box mid, mix hi range” as your data: Step 4: Make the chart look like a Box Plot. This is simple: the bottom bar (the blue ones in my example) need to go away. So right click on a blue bar and change both the outline and fill to nothing. Step 5: Add Whiskers. Follow the steps in my celebrated “Add Error Bars” post. Click on the “+” next to the chart, select Error Bars. Choose Series 2 (which corresponds to the median). Click on “More options” in the Error Bars flyout menu next to the “+”. In the task pane on the right, for Error Amount choose Custom and then click the Specify Value button: For “Positive Error Value” select the “err up” row and for “Negative Error Value” select “err down”. Both rows contain positive values, and that is totally fine. Here’s what mine looks like: That’s it! You can of course customize the other bars as desired.
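As a point of comparison, here is roughly what the same chart takes in matplotlib, one of the libraries mentioned at the top of this post. The numbers below are placeholders rather than the actual Michelson-Morley values, and matplotlib's default whiskers follow the 1.5 IQR convention instead of the min/max whiskers built above.

```python
import matplotlib.pyplot as plt

# Placeholder data: five series, standing in for columns A-E above.
series = [
    [850, 740, 900, 1070, 930],
    [960, 940, 960, 940, 880],
    [880, 880, 880, 860, 720],
    [890, 810, 810, 820, 800],
    [890, 840, 780, 810, 760],
]

plt.boxplot(series, labels=["A", "B", "C", "D", "E"])
plt.title("Box plot with matplotlib defaults")
plt.show()
```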
{"url":"http://nathanbrixius.wordpress.com/","timestamp":"2014-04-19T00:03:32Z","content_type":null,"content_length":"113324","record_id":"<urn:uuid:8d3f9072-8535-4683-9a8a-13cc7686427b>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Cool furniture, design objects and desiderata post #2671 of 3471 9/9/13 at 12:57pm • Posts: 10,845 • Joined: 1/2008 • Location: The American Gardens Building, West 81st Butcher block normally has the end grain showing, correct? McMaster Carr also has table tops similar to those from Ikea, but in a heavier thickness (1-3/4" or 2-1/4") in maple. They offer a wider range of sizing and IMO sell a very reliable product. In fact I'm considering buying one as a basis for a workbench that I plan to add to from there. OK, I bought one for the basis for my workbench, so if anyone is curious about how it looks before I start adding vices to it and other woodworking implements I can post a pic when I receive it. Sounds to me like he wants an island/cart with a hardwood top. Doesn't necessarily have to be suitable for the commercial production of meat. FWIW, I think most of the time when people are referring to "butcher block" surfaces in a kitchen these days, they are looking at the mcmaster/ikea style laminated wood tops, not specifically the end-grain stuff. Ikea will also sell you Oak/Beech/Birch countertops in 1.5" thickness for quite cheap if you want a DIY solution but don't need the mcmaster thickness. Thanks for the tip on the mcmaster stuff though--I know a few people who work for them, so I'll keep it in mind if I need a surface (was thinking about doing a desk with hairpin legs)...although having them order me lumber is a little different than having them bring me some hinges and bearings. I can't tell if you are joking with this post or not. I would have no hesitation on using one of the IKEA solid beech tops for a semi-serious kitchen where a real built-in butcher block is not really necessary. couple things: my ikea countertop is NOT end-grain. And I don't use it as a cutting board (though my wife occasionally does, when she thinks i'm not looking). Would you really want a true butcher block to be integral to the kitchen counter? I would think a section of it that is set into the counter and removable for cleaning would be better for real use. And of course much more easily replaceable when it's in need of replacement or refinishing. I like the idea of a wood countertop, seems much more tool friendly than granite or marble. We have granite, because the previous owner wanted a selling point rather than a work-surface. The whole idea of the butcher block or any wooden cutting surface is that the knife(or cutteíng object) scores lightly into the surface of the wood which makes for a clean cut of whatever it happens to be that you are preparing. A hard surface does not allow for this. All told I think that granite is the better option because you can sit hot things directly on it. With all wood counter tops you would need a healthy supply of trivets. We are using trivets anyways. I really don't despise granite, but it's not my first choice. I think I would chose a mixed group of materials if working from the ground up at some point. I have four of the tall ones, very practical, fit a lot of books and very stable. I fill them so as not to show the armature so it just looks like huge stacks of books. If you don't have a large quantity of bigger books (artbooks, manuals, magazines, etc.) for the base it looks sort of weird though. Keep in mind I also have built-in bookshelves, some magazines stacks elsewhere and a bunch of books in another part of my place. They look kinda weird if they're the only bookshelves you have IMHO. Also granite is really great for rolling out pie crusts. 
i know this is a popular theory, but in my experience, not so much. we have granite in our test kitchen (with a marble inset slab), and my work counter at home is butcher block. i really don't see that much difference. butcherblock that's adequately floured works just fine. and I really don't like the cold hard look of granite (probably the only thing I'll ever be a 1%er on, I know). my home kitchen is half vintage tile (late 20s) and half butcherblock. when I am a baller, which only happens while I'm asleep, I have marble countertops. laminate table though. Originally Posted by foodguy i know this is a popular theory, but in my experience, not so much. we have granite in our test kitchen (with a marble inset slab), and my work counter at home is butcher block. i really don't see that much difference. butcherblock that's adequately floured works just fine. and I really don't like the cold hard look of granite (probably the only thing I'll ever be a 1%er on, I know). my home kitchen is half vintage tile (late 20s) and half butcherblock. Why must you slaughter these sacred cows with your "knowledge" and "experience"? Have you no decency, sir?
{"url":"http://www.styleforum.net/t/83589/cool-furniture-design-objects-and-desiderata/2670","timestamp":"2014-04-21T01:12:46Z","content_type":null,"content_length":"111265","record_id":"<urn:uuid:995d5131-8f31-4716-a39b-293c1c599ba2>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00378-ip-10-147-4-33.ec2.internal.warc.gz"}
Approved by University Studies Sub Approved by Faculty Senate University Studies Course Approval Proposal Oral Communication Flag The Department of Mathematics and Statistics proposes the following course for inclusion in University Studies courses satisfying the Oral Communication Flag requirement at Winona State University. This was approved by the full department on Thursday, January 18, 2001. Course: Abstract Algebra (MATH 440), 4 s.h. Catalog Description: Axiomatic development of groups, rings, and fields. This is a University Studies course satisfying the Oral Communication Flag requirement. Prerequisite: MATH 210. This is an existing course, previously approved by A2C2. Department Contact Person for this course: Steven D. Leonhardi, Department of Mathematics and Statistics Email leonhardi@winona.edu General Discussion of University Studies Oral Communication Flag in relation to MATH 440: University Studies: Oral Communication Flag The purpose of the Oral Communication Flag requirement is to complete the process of providing graduates of Winona State University with the knowledge and experience required to enable them to become highly competent communicators by the time they graduate. Courses can merit the Oral Communication Flag by demonstrating that they allow for clear guidance, criteria, and feedback for the speaking assignments; that the course requires a significant amount of speaking; that speaking assignments comprise a significant portion of the final course grade; and that students will have opportunities to obtain student and faculty critiques of their speaking. These courses must include requirements and learning activities that promote students abilities to a. Earn significant course credit through extemporaneous oral presentations; Students in this course are required to learn and perform three different types of speaking in this course: (1) the discussion necessary for a group of 2 to 4 students to construct and verify a proof or disproof of a mathematical claim; (2) presenting completed mathematical proofs to the class; and (3) presenting the results of an expository research project to the class. Typically, the expository project and presentation is worth about 10% of the student s final grade, and homework problems that are worked on in groups are worth about 40% of the student s final grade, although this varies somewhat depending upon the instructor and year. A full week of the semester is used for oral presentations of student projects. b. Understand the features and types of speaking in their disciplines; The three types of speaking about mathematics listed above reflect the three major types of speaking needed by mathematicians and mathematics teachers. The first type, discussion among peers about mathematical concepts, examples, and arguments, is the primary means of progress in mathematical research, and is also a highly effective means of helping students at any level construct their own mathematical knowledge. Even to mathematics majors, mathematics is essentially a foreign language. This course is intended to help students learn to use and communicate mathematical terminology and arguments correctly and at a level of rigor and following the stylistic standards appropriate to the discipline. 
The second type of speaking, presenting a complete proof to an audience, corresponds to the type of presentation that a mathematician might give at a specialized conference, or a presentation that a teacher would give to a class after they have had time to work on a problem. The audience is assumed to already know a significant amount of background and terminology, but a complete, step-by-step explanation of the specific claim must be given. This type of speaking requires the highest level of rigor out of the three types. The third type of speaking, presenting results of an expository research project, corresponds to the type of talk one might give at a general mathematics conference, in which the audience is presumed to have only a minimal amount of background knowledge, and in which a large amount of information is condensed and summarized to the main themes and introduction. c. Adapt their speaking to field-specific audiences; Students in Abstract Algebra learn to adapt their speaking to communicate effectively with (1) students working together on the same problems, (2) "experts" (i.e., students in the same class working on different problems) who know the background info but still need to hear details of the speaker s specific work, and (3) "non-experts" with only minimal background in the topic about which they are speaking, as described in item b. above. d. Receive appropriate feedback from teachers and peers, including suggestions for improvement; Students receive immediate feedback from peers during discussion done in groups. The instructor circulates throughout the class during this group work, confirming when terminology and arguments are appropriate, and offering corrections, hints, and suggestions for improvement. The instructor also comments orally after problem solutions and proofs are presented to the class. For the expository project and presentation, the instructor offers comments on a preliminary outline, then comments on a first draft, with suggestions for improvement and suggestions on the oral presentation, and comments on the final presentation. e. Make use of the technologies used for research and speaking in the fields; and Students typically use the blackboard and chalk to present solutions to homework problems; some use overhead slides or printed handouts to supplement their explanations. For their project presentations, many students use Power Point (sometimes with audio supplements), some students use overhead slides, some use printed handouts, and some use the blackboard-- in approximately the same proportions as would be represented at a professional conference for mathematicians or mathematics teachers. f. Learn the conventions of evidence, format, usage, and documentation in their fields. This is a major focus of the course: for students to learn how to evaluate and present evidence, correct usage, and what exactly constitutes a "proof" in mathematics, and for students to learn how to communicate their ideas, conjectures, and conclusions. In particular, students must learn how to move through the process of communicating their informal intuitions based on concrete examples, to developing a formal, rigorous, general proof, and then finally to explaining their proof in a way that others can understand. That is, the type of speaking that is most effective for "discovering" a theorem and/or its proof is very different from the type of speaking that is required for presenting a formal statement of a theorem and its proof. 
Students must learn how to carry out both types of speaking, and learn to recognize which type of speaking (namely, what level of formality in terms of evidence and usage) is appropriate in different situations. Abstract Algebra (MATH 440) 4 s.h. Course Syllabus/Outline Course Title: Abstract Algebra MATH 440 Number of Credits: 4 S.H. Frequency of Offering: offered fall semester Prerequisite(s): MATH 210 Grading: Grade only for all majors, minors, options, concentrations and licensures within the Department of Mathematics and Statistics. The P/NC option is available to others. Course Description: Axiomatic development of groups, rings, and fields. This is a University Studies course satisfying the Oral Communication Flag requirement. University Studies: Oral Communication Flag The purpose of the Oral Communication Flag requirement is to complete the process of providing graduates of Winona State University with the knowledge and experience required to enable them to become highly competent communicators by the time they graduate. Courses can merit the Oral Communication Flag by demonstrating that they allow for clear guidance, criteria, and feedback for the speaking assignments; that the course requires a significant amount of speaking; that speaking assignments comprise a significant portion of the final course grade; and that students will have opportunities to obtain student and faculty critiques of their speaking. These courses must include requirements and learning activities that promote students abilities to a. Earn significant course credit through extemporaneous oral presentations; b. Understand the features and types of speaking in their disciplines; c. Adapt their speaking to field-specific audiences; d. Receive appropriate feedback from teachers and peers, including suggestions for improvement; e. Make use of the technologies used for research and speaking in the fields; and f. Learn the conventions of evidence, format, usage, and documentation in their fields. Course objectives that include such requirements and learning activities are indicated below using lowercase, boldface letters (a-f) corresponding to these. Statement of major focus and objectives of the course: The major focus of this course is to provide students with a) knowledge of the content of abstract algebra. a, b, c, d, f b) skills in carrying out the process of experimentation, conjecture, and verification. b, d, f c) skills in creating, critiquing, and communicating proofs, both orally and in writing. a, b, c, d, e, f Note that a focus of the course will be to prepare students to develop the competencies outlined in the following Minnesota Standards of Effective Teaching Practice for Beginning Teachers: Standard 1 -- Subject Matter; Objectives: To develop within the future teacher ... 
a) the ability to use a problem-solving approach to investigate and understand mathematical content b, d, f b) the ability to communicate mathematical ideas in writing, using everyday and mathematical language, including symbols c) the ability to communicate mathematical ideas orally, using both everyday and mathematical language a, b, c, d, e, f d) the ability to make and evaluate mathematical conjectures and arguments and validate their own mathematical thinking b, d, f e) an understanding of the interrelationships within mathematics f) an understanding of and the ability to apply concepts of number, number theory and number systems g) an understanding of and the ability to apply numerical computational and estimation techniques and the ability to extend them to algebraic expressions h) the ability to use algebra to describe patterns, relations and functions and to model and solve problems b, d, f i) an understanding of the role of axiomatic systems in different branches of mathematics, such as algebra and geometry a, b, d, f j) an understanding of the major concepts of abstract algebra a, b, c, d, e, f k) the ability to use calculators in computational and problem-solving situations e l) the ability to use computer software to explore and solve mathematical problems e m) a knowledge of the historical development of mathematics that includes the contributions of underrepresented groups and diverse cultures a, b, c, d, e, f Course Outline of the Major Topics and Subtopics (ordering at the discretion of the instructor): I. Preliminaries A. Historical origins of abstract algebra B. Basic set theory C. Review of methods of proof and the axiomatic method as applied to: 1. The integers and the Greatest Common Divisor Identity 2. Matrix algebra 3. Complex numbers 4. Functions and compositions 5. Relations and equivalence relations II. Groups A. Permutations, symmetries of a polygon B. Groups and subgroups C. Cyclic groups D. Permutation groups III. Rings A. Rings and subrings B. Factorization, uniqueness of factorization, units, and associates C. Integral domains and fields IV. Homomorphisms and Quotient Structures A. Homomorphisms B. Isomorphisms C. Normal subgroups D. Quotient subgroups E. Ideals F. Quotient rings Method of Instruction: Lecture, discussion, group work, student presentations, computer lab projects. Evaluation Procedure: Hour exams and/or quizzes, homework, student presentation of solved homework, expository research project and presentation, and a final exam. Textbooks or Alternatives: A First Course in Abstract Algebra, Anderson and Feil A First Course in Abstract Algebra, Fraleigh Contemporary Abstract Algebra, Gallian Abstract Algebra, Herstein Abstract Algebra: An Introduction, Hungerford A Book of Abstract Algebra, Pinter List of References and Bibliography: Laboratory Experiences in Group Theory: A Manual to be used with Exploring Small Groups, by Ellen Maycock Parker Approval/Disapproval Recommendations Department Recommendation: Approved Disapproved Date Chairperson Signature Date Dean's Recommendation: Approved Disapproved Date Dean's Signature Date *In the case of a Dean's recommendation to disapprove a proposal, a written rationale for the recommendation to disapprove shall be provided to USS. 
USS Recommendation: Approved Disapproved Date University Studies Director's Signature Date A2C2 Recommendation: Approved Disapproved Date A2C2 Chairperson Signature Date Faculty Senate Recommendation: Approved Disapproved Date FA President's Signature Date Academic VP's Recommendation: Approved Disapproved Date VP's Signature Date President's Decision: Approved Disapproved Date President's Signature Date
{"url":"http://www.winona.edu/ifo/courseproposals/Math_and_Stats/ay2001-2002.htm/Math440.htm","timestamp":"2014-04-20T21:05:26Z","content_type":null,"content_length":"21627","record_id":"<urn:uuid:6d603cd2-0cb4-45b9-b1c5-0f899ab234b7>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
MathFiction: Children of Dune (Frank Herbert) a list compiled by Alex Kasman (College of Charleston) Home All New Browse Search About Children of Dune (1976) Frank Herbert Note: This work of mathematical fiction is recommended by Alex for hardcore fans of science fiction. This third novel in the "Dune" series (which was also made into a TV miniseries) contains a wonderful (but rather brief and not very significant) bit of fictional mathematics. The following quotation is presented as an excerpt from a lecture concerning the mathematical explanation of religious leader Paul Muad'Dib's ability to see possible futures: (quoted from Children of Dune) Only in the realm of mathematics can you understand Muad'Dib's precise view of the future. Thus: first we postulate any number of point-dimensions in space. This is the classic n-fold extended aggregate of n dimensions. With this framework, time as commonly understood becomes an aggregate of one dimensional properties. Applying this to the Muad'Dib phenomenon, we find that we are either confronted by new properties of time or (by reduction through the infinity calculus) we are dealing with separate systems which contain n body properties. For Muad'Dib, we assume the latter. As demonstrated by the reduction, the point dimensions of the n-fold can only have separate existence within different frameworks of time. Separate dimensions of time are thus demonstrated to coexist. This being the inescapable case, Muad'Dib's predictions required that he percieve the n-fold not as extended aggregate but as an operation within a single framework. In effect, he froze his universe into that one framework which was his view of time. -Palimbasha: Lectures at Sietch Tabr The book also explains that Palimbasha is a mathematics professor sanctioned for his attempts to explain Muad'Dib's powers mathematically. Other than this, mathematics does not seem to play any significant role in the story. Furthermore, I cannot really make any sense out of the quote...it is just nonsense. But, interestingly, it is nonsense that really sounds like the sorts of things mathematicians say! Thanks to Eric Heisler for suggesting this addition to the list. Buy this work of mathematical fiction and read reviews at amazon.com. (Note: This is just one work of mathematical fiction from the list. To see the entire list or to see more works of mathematical fiction, return to the Homepage.) Works Similar to Children of Dune According to my `secret formula', the following works of mathematical fiction are similar to this one: 1. Cascade Point by Timothy Zahn 2. Contact by Carl Sagan 3. The Planiverse: computer contact with a two-dimensional world by A.K. Dewdney 4. The Blind Geometer by Kim Stanley Robinson 5. Round the Moon by Jules Verne 6. The Number of the Beast by Robert A. Heinlein 7. Diaspora by Greg Egan 8. Factoring Humanity by Robert J. Sawyer 9. Brave New World by Aldous Huxley 10. Flatland: A Romance of Many Dimensions by Edwin Abbott Abbott Ratings for Children of Dune: Content: Have you seen/read this work of mathematical fiction? Then click here to enter your own votes on its mathematical content and literary quality or send me comments to post on 1/5 (1 votes) this Webpage. Literary Quality: 4/5 (1 votes) Genre Science Fiction, Motif Academia, Topic Geometry/Topology/Trigonometry, Fictional Mathematics, Medium Novels, Home All New Browse Search About Your Help Needed: Some site visitors remember reading works of mathematical fiction that neither they nor I can identify. 
It is time to crowdsource this problem and ask for your help! You would help a neighbor find a missing pet...can't you also help a fellow site visitor find some missing works of mathematical fiction? Please take a look and let us know if you have seen these missing stories (Maintained by Alex Kasman, College of Charleston)
{"url":"http://kasmana.people.cofc.edu/MATHFICT/mfview.php?callnumber=mf563","timestamp":"2014-04-16T18:58:19Z","content_type":null,"content_length":"10686","record_id":"<urn:uuid:a7c0ce0c-fc20-4b16-a05f-61b4729789af>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00073-ip-10-147-4-33.ec2.internal.warc.gz"}
Help with proof
December 14th 2011, 07:29 AM #1
This is a simple proof, and it makes sense logically, but I don't know how to show it. Any tips on how to prove stuff like this?

If $V$ is a vector space with basis $\{v_1,\dots,v_n\}$ and $W$ is the subspace of $V$ given by $W = \mathrm{sp}(v_3,\dots,v_n)$, show that if $w \in W$ and $w = r_1v_1 + r_2v_2$ for $r_1, r_2 \in \mathbb{R}$, then $w = 0$.

Well, since $V$ has the basis $v_1,\dots,v_n$, that means that $\mathrm{sp}(v_1,\dots,v_n) = V$. Every vector in $W$ can be written as a linear combination of $v_3,\dots,v_n$, so if $w$ is a linear combination of $v_1$ and $v_2$ and also lies in $W$, that implies it is the zero vector.

December 14th 2011, 07:45 AM #2
Re: Help with proof
Yes, but how exactly is this implied? If $w=r_1v_1+r_2v_2=r_3v_3+\dots+r_nv_n$, then $-r_1v_1-r_2v_2+r_3v_3+\dots+r_nv_n=0$, so...
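Completing the step the reply leads up to, using nothing beyond linear independence of the basis: from $w=r_1v_1+r_2v_2=r_3v_3+\dots+r_nv_n$ we get

$-r_1v_1 - r_2v_2 + r_3v_3 + \dots + r_nv_n = 0,$

and since $\{v_1,\dots,v_n\}$ is a basis of $V$, it is a linearly independent set, so every coefficient in this relation must be $0$. In particular $r_1 = r_2 = 0$, and therefore $w = r_1v_1 + r_2v_2 = 0$.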
{"url":"http://mathhelpforum.com/advanced-algebra/194249-help-proof.html","timestamp":"2014-04-17T22:23:48Z","content_type":null,"content_length":"33645","record_id":"<urn:uuid:56c1660c-6c73-4459-b03f-7e33afab0b5c>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
I have a quiz
August 14th 2006, 07:43 PM
If the distance to a sound source is halved, how will the sound intensity level change?
a) increase by a factor of 2
b) depends on the actual distance
c) increase by a factor of 4
d) increase by 6 dB
e) increase by 3 dB

August 14th 2006, 08:32 PM
Originally Posted by Candy
If the distance to a sound source is halved, how will the sound intensity level change? a) increase by a factor of 2 b) depends on the actual distance c) increase by a factor of 4. d) increase by 6 dB e) increase by 3 dB

Loosely speaking, the sound intensity of a source is measured in terms of the energy flow across unit surface area perpendicular to the wave front. Let's assume we are in the spherical spreading regime; then the surface area of a sphere of radius $R$ increases as $R^2$, so the energy density crossing a spherical shell of radius $R$ goes as $1/R^2$ (the energy/power is the same crossing shells of all radii, but the area it passes across increases as $R^2$). So halving the distance from the source quadruples the sound intensity.

However, you are asked about the sound intensity level, which is the sound intensity expressed in dB. For intensity levels the definition is $L_I=10\ \log_{10}\left[ \frac{I}{I_0} \right]$, where $I_0$ is the reference intensity (which we don't need to know for this problem). Now at this point we could do some sums, but we should know that a factor of four increase in intensity is equivalent to a $6\ \mbox{dB}$ increase in intensity level, which is the answer.
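Doing the sums mentioned at the end explicitly: the change in intensity level when the intensity goes up by a factor of four is

$\Delta L_I = 10\log_{10}\left(\frac{I_{new}}{I_{old}}\right) = 10\log_{10}(4) \approx 6.02\ \mbox{dB},$

so the level increases by about 6 dB, which is answer (d).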
{"url":"http://mathhelpforum.com/advanced-applied-math/4918-i-have-quiz-print.html","timestamp":"2014-04-20T19:19:00Z","content_type":null,"content_length":"6635","record_id":"<urn:uuid:72f056f1-cccf-4e90-8038-32442d9dafc8>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00363-ip-10-147-4-33.ec2.internal.warc.gz"}
details on make-event expansion Major Section: MAKE-EVENT The normal user of make-event can probably ignore this section, but we include it for completeness. We assume that the reader has read and understood the basic documentation for make-event (see make-event), but we begin below with a summary of expansion. Here is a summary of how we handle expansion involving make-event forms. (make-event form :check-expansion nil) This shows the :check-expansion default of nil, and is typical user input. We compute the expansion exp of form, which is the expansion of the original make-event expression and is evaluated in place of that expression. (make-event form :check-expansion t) The user presumably wants it checked that the expansion doesn't change in the future, in particular during include-book. If the expansion of form is exp, then we will evaluate exp to obtain the value as before, but this time we record that the expansion of the original make-event expression is (make-event form :check-expansion exp) rather than simply exp. (make-event form :check-expansion exp) ; exp a cons This is generated for the case that :check-expansion is t, as explained above. Evaluation is handled as described in that above case, except here we check that the expansion result is the given exp. (Actually, the user is also allowed supply such a form.) The original make-event expression does not undergo any expansion (intuitively, it expands to itself). Now let us take a look at how we expand progn forms (encapsulate is handled similarly). (progn ... (make-event form :check-expansion nil) ...) The expansion is obtained by replacing the make-event form as follows. Let exp be the expansion of form, Then replace the above make-event form, which we denote as F, by (record-expansion F exp). Here, record-expansion is a macro that returns its second argument. (progn ... (make-event form :check-expansion t) ...) The expansion is of the form (record-expansion F exp) as in the nil case above, except that this time exp is (make-event form :check-expansion exp'), where exp' is the expansion of form. (progn ... (make-event form :check-expansion exp) ...) ; exp a cons No expansion takes place unless expansion takes place for at least one of the other subforms of the progn, in which case each such form F is replaced by (record-expansion F exp) where exp is the expansion of F. Detailed semantics In our explanation of the semantics of make-event, we assume familiarity with the notion of ``embedded event form'' (see embedded-event-form). Let's say that the ``actual embedded event form'' corresponding to a given form is the underlying call of an ACL2 event: that is, LOCALs are dropped when ld-skip-proofsp is 'include-book, and macros are expanded away, thus leaving us with a progn, a make-event, or an event form (possibly encapsulate), any of which might have surrounding local, skip-proofs, or with-output calls. Thus, such an actual embedded event form can be viewed as having the form (rebuild-expansion wrappers base-form) where base-form is a progn, a make-event, or an event form (possibly encapsulate), and wrappers are (as in ACL2 source function destructure-expansion) the result of successively removing the event form from the result of macroexpansion, leaving a sequence of (local), (skip-proofs), and (with-output ...) forms. In this case we say that the form ``destructures into'' the indicated wrappers and base-form, and that it can be ``rebuilt from'' those wrappers and base-form. 
Elsewhere we define the notion of the ``expansion result'' from an evaluation (see make-event), and we mention that when expansion concludes, the ACL2 logical world and most of the state are restored to their pre-expansion values. Specifically, after evaluation of the argument of make-event (even if it is aborted), the ACL2 logical world is restored to its pre-evaluation value, as are all state global variables in the list *protected-state-globals-for-make-event*. Thus, assignments to user-defined state globals (see assign) do persist after expansion, since they are not in that list. We recursively define the combination of evaluation and expansion of an embedded event form, as follows. We also simultaneously define the notion of ``expansion takes place,'' which is assumed to propagate upward (in a sense that will be obvious), such that if no expansion takes place, then the expansion of the given form is considered to be itself. It is useful to keep in mind a goal that we will consider later: Every make-event subterm of an expansion result has a :check-expansion field that is a consp, where for this purpose make-event is viewed as a macro that returns its :check-expansion field. (Implementation note: The latest expansion of a make-event, progn, or encapsulate is stored in state global 'last-make-event-expansion, except that if no expansion has taken place for that form then 'last-make-event-expansion has value nil.) If the given form is not an embedded event form, then simply cause a soft error, (mv erp val state) where erp is not nil. Otherwise: If the evaluation of the given form does not take place (presumably because local events are being skipped), then no expansion takes place. Otherwise: Let x be the actual embedded event form corresponding to the given form, which destructures into wrappers W and base-form B. Then the original form is evaluated by evaluating x, and its expansion is as follows. If B is (make-event form :check-expansion val), then expansion takes place if and only if val is not a consp and no error occurs, as now described. Let R be the expansion result from protected evaluation of form, if there is no error. R must be an embedded event form, or it is an error. Then evaluate/expand R, where if val is not nil then state global 'ld-skip-proofsp is initialized to nil. (This initialization is important so that subsequent expansions are checked in a corresponding environment, i.e., where proofs are turned on in both the original and subsquent environments.) It is an error if this evaluation causes an error. Otherwise, the evaluation yields a value, which is the result of evaluation of the original make-event expression, as well as an expansion, E_R. Let E be rebuilt from W and E_R. The expansion of the original form is E if val is nil, and otherwise is the result of replacing the original form's :check-expansion field with E, with the added requirement that if val is not t (thus, a consp) then E must equal val or else we cause an error. If B is either (progn form1 form2 ...) or (encapsulate sigs form1 form2 ...), then after evaluating B, the expansion of the original form is the result of rebuilding from B, with wrappers W, after replacing each formi in B for which expansion takes place by (record-expansion formi formi'), where formi' is the expansion of formi. Note that these expansions are determined as the formi are evaluated in sequence (where in the case of encapsulate, this determination occurs only during the first pass). 
Except, if no expansion takes place for any formi, then the expansion of the original form is itself. Similarly to the progn and encapsulate cases above, book certification causes a book to be replaced by its so-called ``book expansion.'' There, each event ev for which expansion took place during the proof pass of certification -- say, producing ev' -- is replaced by (record-expansion ev ev'). Implementation Note. The book expansion is actually implemented by way of the :expansion-alist field of its certificate, which associates 0-based positions of top-level forms in the book (not including the initial in-package form) with their expansions. Thus, the book's source file is not overwritten; rather, the certificate's expansion-alist is applied when the book is included or compiled. End of Implementation Note. It is straightforward by computational induction to see that for any expansion of an embedded event form, every make-event sub-event has a consp :check-expansion field. Here, by ``sub-event'' we mean to expand macros; and we also mean to traverse progn and encapsulate forms as well as :check-expansion fields of make-event forms. Thus, we will only see make-event forms with consp :check-expansion fields in the course of include-book forms, the second pass of encapsulate forms, and raw Lisp. This fact guarantees that an event form will always be treated as its original expansion.
A note on ttags See defttag for documentation of the notion of ``trust tag'' (``ttag''). Here, we simply observe that if an event (defttag tag-name) for non-nil tag-name is admitted during the expansion phase of a make-event form, then although a ``TTAG NOTE'' will be printed to standard output, and moreover tag-name must be an allowed tag (see defttag), nevertheless such expansion will not cause tag-name to be recorded once the expansion is complete. That is, there will be no lingering effect of this defttag form after the make-event expansion is complete; no certificate written will be affected (where we are certifying a book), and the set of allowed ttags will not be affected. So for example, if this make-event form is in the top-level loop and subsequently we certify or include a book, then tag-name will not be associated with the top-level loop by this make-event form.
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/1/2/language/acl2-html-docs/MAKE-EVENT-DETAILS.html","timestamp":"2014-04-17T12:46:12Z","content_type":null,"content_length":"13156","record_id":"<urn:uuid:1dba7b1a-3a15-4002-b9dd-867621dc40d2>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Zero content subset

I'm trying to find the proof of the following proposition. Let $S^n$ be the unit sphere in $R^n$. If $K \subset S^n$ is $n-1$ dimensional, then K has Lebesgue measure zero. I'd be very thankful if any of you could give me the general idea behind the proof or a link to a textbook or internet source where the proof of this is explained. Last edited by RaisinBread; May 28th 2011 at 09:00 AM.

Are you aware of the theorem that states that if $M$ is an open submanifold of $\mathbb{R}^m$ with $m<n$ and $F:M\to\mathbb{R}^n$ is smooth then $F(M)$ has measure zero? What's the definition of "n-1 dimensional"?

Hm, now that I think about it, the formulation I have written up there may be a little sloppy. The whole situation is the following: I'm trying to prove that a random matrix is always full rank, or, in other words, that if we are in $R^n$, a set of vectors with random coefficients will always be linearly independent, so long as we don't choose more than n vectors. Now I can easily show this in 2 dimensions, and the idea is that, if we choose one vector at random, $V_1=(r,\theta_1)$, we can calculate the probability that a second vector chosen at random will be linearly dependent with $V_1$ by calculating the probability that the vector will be on the one-dimensional subspace spanned by $V_1$. You can then show that this is going to happen if the second vector has an angle of $\theta_1$ or $\theta_1 + \pi$, and this set on the unit circle has measure zero, and thus the integral of any probability distribution over this set is zero. I'm now trying to generalize this idea to $R^n$. Last edited by RaisinBread; May 28th 2011 at 12:23 PM.

I'd first show that, generically, you won't ever have the zero vector as a column in your random matrix. Then go by induction: change the basis so that the first column becomes the first standard basis vector (1,0,...,0). Now unless the second column is all zero except for possibly the first entry, you can change the basis again so that it becomes the second standard basis vector (0,1,0,...,0). And so forth. You're just using over and over again the fact that a point chosen "at random" from R^n won't be the origin.
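A common alternative to the induction suggested in the last reply, sketched here under the assumption that the matrix entries have a jointly continuous (absolutely continuous) distribution: the set of singular matrices is the zero set of a polynomial, and such sets are Lebesgue-null. Writing
\[
Z=\{A\in\mathbb{R}^{n\times n}:\det A=0\},
\]
the determinant is a polynomial in the $n^2$ entries which is not identically zero (it equals $1$ at the identity matrix), and the zero set of a nonzero polynomial has Lebesgue measure zero in $\mathbb{R}^{n^2}$. Hence, for any probability density $p$ on the entries,
\[
\Pr[\det A=0]=\int_{Z}p(A)\,dA=0,
\]
so $n$ random vectors drawn this way are linearly independent with probability $1$ (for $k<n$ vectors, the event of rank deficiency is contained in the vanishing of a fixed $k\times k$ minor, which is again a null set).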
{"url":"http://mathhelpforum.com/differential-geometry/181855-zero-content-subset.html","timestamp":"2014-04-21T06:41:37Z","content_type":null,"content_length":"46533","record_id":"<urn:uuid:0113f84d-2af6-4062-a57a-d155341f9fe7>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Bacliff Calculus Tutor Find a Bacliff Calculus Tutor Hi, my name's Brian, I have a lot of experience tutoring and I'm fun and easy to work with. I can guarantee that I will help you get an A in your course or ace that big test you're preparing for. I am a Trinity University graduate and I have over 4 years of tutoring experience. 38 Subjects: including calculus, English, reading, writing ...I have 3 to 4 years experience in mathematics. I have many interest in mathematics. They range from Differential Geometry to Ordinary differential Equations. 16 Subjects: including calculus, geometry, algebra 1, algebra 2 ...I've also taught SAT and ACT prep. I've tutored students in English, reading, and writing. I enjoy teaching a great deal and am well versed in teaching in different styles to fit the student. 34 Subjects: including calculus, chemistry, reading, English ...I have experience with Matlab applied to the solution of systems of differential equations, control systems modeling, and physical dynamics modeling. I have low-level experience with tuning Matlab code for faster execution. I have documented and submitted product bugs to the publishers. 10 Subjects: including calculus, computer science, differential equations, computer programming ...My approach in working with you on algebra 2 is first to assess your familiarity and comfort with basic concepts, and explain and clarify the ones where you need some improvement; and then to work on the specific areas of your assignments, such as solving equations with radicals, exponents and lo... 20 Subjects: including calculus, writing, algebra 1, algebra 2 Nearby Cities With calculus Tutor Alvin, TX calculus Tutors Beach City, TX calculus Tutors Clear Lake Shores, TX calculus Tutors Dickinson, TX calculus Tutors El Lago, TX calculus Tutors Hitchcock, TX calculus Tutors Kemah calculus Tutors La Marque calculus Tutors League City calculus Tutors Nassau Bay, TX calculus Tutors Santa Fe, TX calculus Tutors Seabrook, TX calculus Tutors Shoreacres, TX calculus Tutors Taylor Lake Village, TX calculus Tutors Webster, TX calculus Tutors
{"url":"http://www.purplemath.com/bacliff_tx_calculus_tutors.php","timestamp":"2014-04-20T06:37:10Z","content_type":null,"content_length":"23658","record_id":"<urn:uuid:b9b581f0-f843-4a0e-94b6-0d17ba96384c>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
DTMCPack: Suite of functions related to discrete-time, discrete-state Markov chains A series of functions which aid in both simulating and determining the properties of finite, discrete-time, discrete-state Markov chains. Two functions (DTMC, MultDTMC) produce n iterations of one or more Markov chains based on transition probabilities and an initial distribution. The function FPTime determines the first passage time into each state. The function statdistr determines the stationary distribution of a Markov chain. Version: 0.1-2 Published: 2013-05-28 Author: William Nicholson Maintainer: William Nicholson <wbnicholson at gmail.com> License: GPL-2 | GPL-3 [expanded from: GPL (≥ 2)] NeedsCompilation: no CRAN checks: DTMCPack results Reference manual: DTMCPack.pdf Package source: DTMCPack_0.1-2.tar.gz MacOS X binary: DTMCPack_0.1-2.tgz Windows binary: DTMCPack_0.1-2.zip
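The package description above is compact, so here is a minimal Python sketch of the two computations it names: simulating a finite discrete-time Markov chain and finding a stationary distribution. The function names, the seed handling and the 3-state example matrix are invented for illustration; this is not the DTMCPack R API, only the underlying idea.

```python
import numpy as np

def simulate_dtmc(P, init, n, seed=None):
    """Simulate n transitions of a finite discrete-time Markov chain.

    P    : (k, k) row-stochastic transition matrix
    init : length-k initial distribution
    """
    rng = np.random.default_rng(seed)
    states = np.arange(P.shape[0])
    x = rng.choice(states, p=init)
    path = [x]
    for _ in range(n):
        x = rng.choice(states, p=P[x])  # next state drawn from row x of P
        path.append(x)
    return np.array(path)

def stationary_distribution(P):
    """Return pi with pi P = pi, from the left eigenvector for eigenvalue 1."""
    vals, vecs = np.linalg.eig(P.T)
    v = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
    return v / v.sum()

# Hypothetical 3-state chain, just to exercise the two functions.
P = np.array([[0.9, 0.1, 0.0],
              [0.2, 0.7, 0.1],
              [0.0, 0.3, 0.7]])
print(stationary_distribution(P))
print(simulate_dtmc(P, init=np.array([1.0, 0.0, 0.0]), n=10, seed=0))
```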
{"url":"http://cran.r-project.org/web/packages/DTMCPack/index.html","timestamp":"2014-04-19T14:33:16Z","content_type":null,"content_length":"2656","record_id":"<urn:uuid:247e783e-6985-4c8a-8202-9e7e690d404e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 25 , 1993 "... This is an introduction to the philosophy and use of OBJ, emphasizing its operational semantics, with aspects of its history and its logical semantics. Release 2 of OBJ3 is described in detail, with many examples. OBJ is a wide spectrum first-order functional language that is rigorously based on ..." Cited by 120 (29 self) Add to MetaCart This is an introduction to the philosophy and use of OBJ, emphasizing its operational semantics, with aspects of its history and its logical semantics. Release 2 of OBJ3 is described in detail, with many examples. OBJ is a wide spectrum first-order functional language that is rigorously based on (order sorted) equational logic and parameterized programming, supporting a declarative style that facilitates verification and allows OBJ to be used as a theorem prover. - Theoretical Computer Science , 2000 "... This paper publicly reveals, motivates, and surveys the results of an ambitious hidden agenda for applying algebra to software engineering. The paper reviews selected literature, introduces a new perspective on nondeterminism, and features powerful hidden coinduction techniques for proving behaviora ..." Cited by 110 (23 self) Add to MetaCart This paper publicly reveals, motivates, and surveys the results of an ambitious hidden agenda for applying algebra to software engineering. The paper reviews selected literature, introduces a new perspective on nondeterminism, and features powerful hidden coinduction techniques for proving behavioral properties of concurrent systems, especially renements; some proofs are given using OBJ3. We also discuss where modularization, bisimulation, transition systems and combinations of the object, logic, constraint and functional paradigms t into our hidden agenda. 1 Introduction Algebra can be useful in many dierent ways in software engineering, including specication, validation, language design, and underlying theory. Specication and validation can help in the practical production of reliable programs, advances in language design can help improve the state of the art, and theory can help with building new tools to increase automation, as well as with showing correctness of the whole e... - In Hartmut Ehrig and Fernando Orejas, editors, Proceedings, Tenth Workshop on Abstract Data Types , 1994 "... This paper surveys our current state of knowledge (and ignorance) on the use of hidden sorted algebra as a foundation for the object paradigm. Our main goal is to support equational reasoning about properties of concurrent systems of objects, because of its simple and ecient mechanisation. We sho ..." Cited by 85 (34 self) Add to MetaCart This paper surveys our current state of knowledge (and ignorance) on the use of hidden sorted algebra as a foundation for the object paradigm. Our main goal is to support equational reasoning about properties of concurrent systems of objects, because of its simple and ecient mechanisation. We show how equational speci cations can describe objects, inheritance and modules; our treatment of the latter topic emphasises the importance of reuse, and the r^ole of the so-called Satisfaction Condition. We then consider how to prove things about objects, how to unify the object and logic paradigms by using logical variables that range over objects, and how to connect objects into concurrent systems. - In Cafe: An Industrial-Strength Algebraic Formal Method , 1998 "... 
This paper explains the design and use of two equational proving tools, namely an inductive theorem prover -- to prove theorems about equational specifications with an initial algebra semantics -- and a Church-Rosser checker---to check whether such specifications satisfy the Church-Rosser property. ..." Cited by 38 (19 self) Add to MetaCart This paper explains the design and use of two equational proving tools, namely an inductive theorem prover -- to prove theorems about equational specifications with an initial algebra semantics -- and a Church-Rosser checker---to check whether such specifications satisfy the Church-Rosser property. These tools can be used to prove properties of order-sorted equational specifications in Cafe [11] and of membership equational logic specifications in Maude [7, 6]. The tools have been written entirely in Maude and are in fact executable specifications in rewriting logic of the formal inference systems that they implement. , 1996 "... We extend the ordinary concept of theory morphism in institutions to extra theory morphisms. Extra theory morphism map theories belonging to different institutions across institution morphisms. We investigate the basic mathematical properties of extra theory morphisms supporting the semantics of log ..." Cited by 26 (7 self) Add to MetaCart We extend the ordinary concept of theory morphism in institutions to extra theory morphisms. Extra theory morphism map theories belonging to different institutions across institution morphisms. We investigate the basic mathematical properties of extra theory morphisms supporting the semantics of logical multiparadigm languages, especially structuring specifications (module systems) a la OBJ-Clear. They include model reducts, free constructions (liberality), co-limits, model amalgamation (exactness), and inclusion systems. We outline a general logical semantics for languages whose semantics satisfy certain "logical" principles by extending the institutional semantics developed within the Clear-OBJ tradition. Finally, in the Appendix, we briefly illustrate it with the concrete example of CafeOBJ. Keywords Algebraic specification, Institutions, Theory morphism. AMS Classifications 68Q65, 18C10, 03G30, 08A70 2 1 Introduction Computing Motivation This work belongs to the research are... , 1994 "... This thesis proposes a general framework for equational logic programming, called categorybased equational logic by placing the general principles underlying the design of the programming language Eqlog and formulated by Goguen and Meseguer into an abstract form. This framework generalises equation ..." Cited by 24 (10 self) Add to MetaCart This thesis proposes a general framework for equational logic programming, called categorybased equational logic by placing the general principles underlying the design of the programming language Eqlog and formulated by Goguen and Meseguer into an abstract form. This framework generalises equational deduction to an arbitrary category satisfying certain natural conditions; completeness is proved under a hypothesis of quantifier projectivity, using a semantic treatment that regards quantifiers as models rather than variables, and regards valuations as model morphisms rather than functions. This is used as a basis for a model theoretic category-based approach to a paramodulation-based operational semantics for equational logic programming languages. 
Category-based equational logic in conjunction with the theory of institutions is used to give mathematical foundations for modularisation in equational logic programming. We study the soundness and completeness problem for module imports i... , 1994 "... Theories with hidden sorts provide a setting to study the idea of behaviour and behavioural equivalence of elements. But there are variants on the notion of theory: many sorted algebras, order sorted algebras and so on; we would like to use the theory of institutions to develop ideas of some general ..." Cited by 18 (3 self) Add to MetaCart Theories with hidden sorts provide a setting to study the idea of behaviour and behavioural equivalence of elements. But there are variants on the notion of theory: many sorted algebras, order sorted algebras and so on; we would like to use the theory of institutions to develop ideas of some generality. We formulate the notion of behavioural equivalence in a more abstract and categorical way, and we give a general explication of "hiding" in an institution. We use this show that both hidden many sorted algebras and hidden order sorted algebras yield institutions. - IN MAGNE HAVERAAEN, OLAF OWE, AND OLE-JOHAN DAHL, EDITORS, RECENT TRENDS IN DATA TYPE SPECIFICATION , 1996 "... This paper exploits the point of view of constraint programming as computation in a logical system, namely constraint logic. We define the basic ingredients of constraint logic, such as constraint models and generalised polynomials. We show that constraint logic is an institution, and we interna ..." Cited by 13 (4 self) Add to MetaCart This paper exploits the point of view of constraint programming as computation in a logical system, namely constraint logic. We define the basic ingredients of constraint logic, such as constraint models and generalised polynomials. We show that constraint logic is an institution, and we internalise the study of constraint logic to the framework of category-based equational logic. By showing that constraint logic is a special case of category-based equational logic, we integrate the constraint logic programming paradigm into equational logic programming. Results include a Herbrand theorem for constraint logic programming characterising Herbrand models as initial models in constraint logic. - Principles of Declarative Programming , 1998 "... : The benefits of the object, logic (or relational), functional, and constraint paradigms ..." - Proceedings Combinatorics, Computation and Logic , 1999 "... : This paper is an introduction to recent research on hidden algebra and its application to software engineering; it is intended to be informal and friendly, but still precise. We first review classical algebraic specification for traditional "Platonic" abstract data types like integers, vectors, ma ..." Cited by 10 (0 self) Add to MetaCart : This paper is an introduction to recent research on hidden algebra and its application to software engineering; it is intended to be informal and friendly, but still precise. We first review classical algebraic specification for traditional "Platonic" abstract data types like integers, vectors, matrices, and lists. Software engineering also needs changeable "abstract machines," recently called "objects," that can communicate concurrently with other objects through visible "attributes" and state-changing "methods." Hidden algebra is a new development in algebraic semantics designed to handle such systems. 
Equational theories are used in both cases, but the notion of satisfaction for hidden algebra is behavioral, in the sense that equations need only appear to be true under all possible experiments; this extra flexibility is needed to accommodate the clever implementations that software engineers often use to conserve space and/or time. The most important results in hidden algebra are ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=241009","timestamp":"2014-04-20T10:06:28Z","content_type":null,"content_length":"37415","record_id":"<urn:uuid:ac5a64aa-97e4-4b3e-a35f-dd2fad4d4332>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Evaluate the limits March 30th 2013, 09:52 AM #1 Evaluate the limits I have such a problem: Given that lim x->a f(x) = 0, lim x->a g(x) = 0, lim x->a h(x) = 1, lim x->a p(x) = INF, lim x->a q(x) = INF, Evaluate these limits: 1. Lim x->a f(x)/g(x) = 2. Lim x->a f(x)/p(x) = 3. Lim x->a h(x)/p(x) = 4. Lim x->a p(x)/f(x) = 5. Lim x->a p(x)/q(x) = I have to mention which of these limits are indeterminate forms, positive INF, Negative INF, or the limit does not exist (DNE) or we don’t have enough info to determine the limit. 1. Lim x->a f(x)/g(x) = 0/0, (DNE) 2. Lim x->a f(x)/p(x) = 0/INF, (Indeterminate form) 3. Lim x->a h(x)/p(x) = 1/INF, (positive infinitive) 4. Lim x->a p(x)/f(x) = INF/0, (Indeterminate form) 5. Lim x->a p(x)/q(x) = INF/INF, (DNE) I'm not quite sure in my reasoning so I need your input, thanks Re: Evaluate the limits Could you remind what an indeterminate form is? And what if f(x) = g(x)? And what if a = 0, f(x) = x and p(x) = 1/x? The bigger the denominator, the bigger the ratio? And what if p(x) = q(x)? Re: Evaluate the limits it is a result that doesn't provide enough info about the actual limit after replacing the values in the expression, so, wait a min, if by indeterminate forms is meant 0, 0/0, inf, inf/inf, it means I have some more indeterminate forms in my problem? you mean what if 0 = 0, ? but this is also an indeterminate form, isn't it? then f(x)=0 and p(x)=1/0 and are indeterminate forms well, actually, the bigger the denominator, more the values of f(x) approaches 0 (from how I understood, lesser and lesser parts of 1 we obtain, e.g. 0,5; 0,001; 0,00001 and so on) but this means that INF = INF => indeterminate forms, too? Re: Evaluate the limits Given that lim x->a f(x) = 0, lim x->a g(x) = 0, lim x->a h(x) = 1, lim x->a p(x) = INF, lim x->a q(x) = INF, Evaluate these limits: 1. Lim x->a f(x)/g(x) = 2. Lim x->a f(x)/p(x) = 3. Lim x->a h(x)/p(x) = 4. Lim x->a p(x)/f(x) = 5. Lim x->a p(x)/q(x) = I have to mention which of these limits are indeterminate forms, positive INF, Negative INF, or the limit does not exist (DNE) or we don’t have enough info to determine the limit. 1. Lim x->a f(x)/g(x) = 0/0, (DNE) 2. Lim x->a f(x)/p(x) = 0/INF, (Indeterminate form) 3. Lim x->a h(x)/p(x) = 1/INF, (positive infinitive) 4. Lim x->a p(x)/f(x) = INF/0, (Indeterminate form) 5. Lim x->a p(x)/q(x) = INF/INF, (DNE) Look I know that I am not as nice an instructor as emakarov. But I must tell you that I think that you have all of these concepts confused. Lets look at #2, You have a fraction. Its numerator is very 'close' to 0. Its denominator is very large in a positive sense. Here is an example: $~\frac{10^{-100}}{10^{100}}$. Of what do you think that fraction is approximation ? Now you have correctly listed Indeterminate forms, of which #2 is none of. So can you repost this so that it makes sense? Re: Evaluate the limits Look I know that I am not as nice an instructor as emakarov. But I must tell you that I think that you have all of these concepts confused. Lets look at #2, You have a fraction. Its numerator is very 'close' to 0. Its denominator is very large in a positive sense. Here is an example: $~\frac{10^{-100}}{10^{100}}$. Of what do you think that fraction is approximation ? Now you have correctly listed Indeterminate forms, of which #2 is none of. So can you repost this so that it makes sense? If I were to have such an example I would write it as [1/10^100]/[10^100] = [1/10^100]*[1/10^100]... 
so, the limit DNE if the lim x->a f(x) approaches o and lim x->a p(x) approaches INF =>the lim DNE 1. Lim x->a f(x)/g(x) = 0/0, (Indeterminate form) 2. Lim x->a f(x)/p(x) = 0/INF, (DNE) 3. Lim x->a h(x)/p(x) = 1/INF, (Indeterminate form) 4. Lim x->a p(x)/f(x) = INF/0, (Indeterminate form) 5. Lim x->a p(x)/q(x) = INF/INF, (Indeterminate form) am I right (bc I feel I'm going to be more confused... ps: don't worry about the niceness while giving me advices, I appreciate much more the advices, not the way they are given Last edited by dokrbb; March 30th 2013 at 06:51 PM. Re: Evaluate the limits wrong post, sorry Re: Evaluate the limits If I were to have such an example I would write it as [1/10^100]/[10^100] = [1/10^100]*[1/10^100]... so, the limit DNE if the lim x->a f(x) approaches o and lim x->a p(x) approaches INF =>the lim DNE 1. Lim x->a f(x)/g(x) = 0/0, (Indeterminate form) 2. Lim x->a f(x)/p(x) = 0/INF, (DNE) 3. Lim x->a h(x)/p(x) = 1/INF, (Indeterminate form) 4. Lim x->a p(x)/f(x) = INF/0, (Indeterminate form) 5. Lim x->a p(x)/q(x) = INF/INF, (Indeterminate form) am I right (bc I feel I'm going to be more confused... Re: Evaluate the limits It truly troubles me that you are seemingly in a calculus course but know so very little about limits. Please study this page on indeterminate forms. LEARN all of those forms. Then complete rework what you posted, because most of your answers are completely wrong. Re: Evaluate the limits Thanks Plato, I got them: 1. Lim x->a f(x)/g(x) = 0/0, (Indeterminate form) 2. Lim x->a f(x)/p(x) = 0/INF = 0 bc as f(x) approaches 0, p(x) becomes large 3. Lim x->a h(x)/p(x) = 1/INF = 0 since when h(x) approaches a finite nr p(x) becomes large 4. Lim x->a p(x)/f(x) = INF/0, DNE (I think we would need to evaluate limit from the right and from the left in order to be able to determine the lim) 5. Lim x->a p(x)/q(x) = INF/INF, (Indeterminate form) Re: Evaluate the limits I got them: 1. Lim x->a f(x)/g(x) = 0/0, (Indeterminate form) 2. Lim x->a f(x)/p(x) = 0/INF = 0 bc as f(x) approaches 0, p(x) becomes large 3. Lim x->a h(x)/p(x) = 1/INF = 0 since when h(x) approaches a finite nr p(x) becomes large 4. Lim x->a p(x)/f(x) = INF/0, DNE (I think we would need to evaluate limit from the right and from the left in order to be able to determine the lim) 5. Lim x->a p(x)/q(x) = INF/INF, (Indeterminate form) And that looks much better. Re: Evaluate the limits The only remark is the following. I am not sure if INF is included in DNE or not. The limit $\lim_{x\to a}p(x)/f(x)$ can be $\pm\infty$ or it may not exist. The former happens when f(x) does not change sign in some neighborhood of a, and the latter happens when there are both positive and negative values f(x) when x is arbitrarily close to a: then p(x) / f(x) changes sign as x tends to a while its absolute value continues to grow. Re: Evaluate the limits The only remark is the following. I am not sure if INF is included in DNE or not. The limit $\lim_{x\to a}p(x)/f(x)$ can be $\pm\infty$ or it may not exist. The former happens when f(x) does not change sign in some neighborhood of a, and the latter happens when there are both positive and negative values f(x) when x is arbitrarily close to a: then p(x) / f(x) changes sign as x tends to a while its absolute value continues to grow. 
well, yes but since I'm allowed to give separate answers: whether $\lim_{x\to a}p(x)/f(x)$ is a) $+\infty$, b) $-\infty$, c) DNE or we don't have enough information to evaluate it, I considered the more appropriate answer in this case to be c), and thanks a lot Last edited by dokrbb; March 31st 2013 at 11:57 AM.

Re: Evaluate the limits When you say that the answer is c), it means that the limit does not exist for all functions p(x) and f(x) under the given assumptions. And this is not true: for some p(x) and f(x), the limit is +∞ or -∞. Therefore, I think the correct answer is that there is not enough information.

Re: Evaluate the limits Frankly I am confused at this point, myself. In the OP we are given that ${\lim _{x \to a}}f(x) = 0\quad \& \quad {\lim _{x \to a}}p(x) = \infty$. We are also told that "c) DNE or don't have enough information to evaluate it". Also "I have to mention which of these limits are indeterminate forms, positive INF, Negative INF, or the limit does not exist (DNE) or we don't have enough info to determine the limit." I did not think that "there is not enough information" was an option.

Re: Evaluate the limits The confusing statement is that for c) both answers are mentioned (1) DNE or 2) not enough info); therefore, whether I consider situation 1) or 2), I have to choose the same answer c), sorry, but that's how the options were stated, and I checked the answers - it was c) the correct answer
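For concreteness (take $a=0$), explicit examples showing the three possible behaviours of the $\infty/0$ form discussed above:
\[
p(x)=\frac{1}{x^2},\quad f(x)=x^2:\qquad \frac{p(x)}{f(x)}=\frac{1}{x^4}\to +\infty,
\]
\[
p(x)=\frac{1}{x^2},\quad f(x)=-x^2:\qquad \frac{p(x)}{f(x)}=-\frac{1}{x^4}\to -\infty,
\]
\[
p(x)=\frac{1}{x^2},\quad f(x)=x^3:\qquad \frac{p(x)}{f(x)}=\frac{1}{x^5},
\]
and in the last case the one-sided limits are $+\infty$ and $-\infty$, so the two-sided limit does not exist, which is exactly the sign-change situation described earlier in the thread.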
{"url":"http://mathhelpforum.com/calculus/216023-evaluate-limits.html","timestamp":"2014-04-20T07:59:48Z","content_type":null,"content_length":"99079","record_id":"<urn:uuid:ee20a92a-a5b2-416a-acfc-1b73ece64e6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Could widespread use of combination antiretroviral therapy eradicate HIV epidemics? Title: Could widespread use of combination antiretroviral therapy eradicate HIV epidemics? Publication Type: Journal Article Year of Publication: 2002 Authors: Velasco-Hernandez, JX, Gershengorn HB, Blower SM Journal: Lancet Infect Dis Volume: 2 Pagination: 487-93 Keywords: Applications, drug resistance, HIV
Abstract: Current combination antiretroviral therapies (ARV) are widely used to treat HIV. However drug-resistant strains of HIV have quickly evolved, and the level of risky behaviour has increased in certain communities. Hence, currently the overall impact that ARV will have on HIV epidemics remains unclear. We have used a mathematical model to predict whether the current therapies are reducing the severity of HIV epidemics, and could even lead to eradication of a high-prevalence (30%) epidemic. We quantified the epidemic-level impact of ARV on reducing epidemic severity by deriving the basic reproduction number (R(0)(ARV)). R(0)(ARV) specifies the average number of new infections that one HIV case generates during his lifetime when ARV is available and ARV-resistant strains can evolve and be transmitted; if R(0)(ARV) is less than one, epidemic eradication is possible. We estimated, for the HIV epidemic in the San Francisco gay community (using uncertainty analysis), the present-day value of R(0)(ARV) and the probability of epidemic eradication. We assumed a high usage of ARV and three behavioural assumptions: that risky sex would (1) decrease, (2) remain stable, or (3) increase. Our estimated values of R(0)(ARV) (median and interquartile range [IQR]) were: 0.90 (0.85-0.96) if risky sex decreases, 1.0 (0.94-1.05) if risky sex remains stable, and 1.16 (1.05-1.28) if risky sex increases. R(0)(ARV) decreased as the fraction of cases receiving treatment increased. The probability of epidemic eradication is high (p=0.85) if risky sex decreases, moderate (p=0.5) if levels of risky sex remain stable, and low (p=0.13) if risky sex increases. We conclude that ARV can function as an effective HIV-prevention tool, even with high levels of drug resistance and risky sex. Furthermore, even a high-prevalence HIV epidemic could be eradicated using current ARV. URL http://www.semel.ucla.edu/sites/all/files/biomedicalmodeling/pdf/eradicate_hiv.pdf
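To illustrate the kind of uncertainty analysis the abstract describes, in which the probability of eradication is the fraction of sampled parameter sets giving a basic reproduction number below one, here is a minimal Python sketch. The parametrization of R(0)(ARV) below is entirely hypothetical (the paper's actual transmission model is not reproduced here); only the Monte Carlo procedure is the point.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical illustrative parametrization of R0 under ARV -- NOT the
# model used in the paper, just a placeholder to show the procedure.
def r0_arv(treat_frac, risk_multiplier, base_r0):
    return base_r0 * risk_multiplier * (1.0 - 0.8 * treat_frac)

# Sample uncertain inputs from plausible ranges (uncertainty analysis).
treat_frac = rng.uniform(0.5, 0.9, N)   # fraction of cases on ARV
risk_mult  = rng.uniform(0.9, 1.1, N)   # change in risky behaviour
base_r0    = rng.uniform(2.0, 5.0, N)   # untreated-epidemic R0

samples = r0_arv(treat_frac, risk_mult, base_r0)
print("median R0:", np.median(samples))
print("IQR:", np.percentile(samples, [25, 75]))
print("P(eradication) = P(R0 < 1):", np.mean(samples < 1.0))
```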
{"url":"http://www.semel.ucla.edu/publication/journal-article/velasco-hernandez/2002/could-widespread-use-combination-antiretroviral-t","timestamp":"2014-04-18T11:05:28Z","content_type":null,"content_length":"32832","record_id":"<urn:uuid:081bbf21-e812-444e-9366-af57998db1e4>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00365-ip-10-147-4-33.ec2.internal.warc.gz"}
Measuring Electrical Potential and Electrical Fields without a computer

Students know how to apply Gauss's law and compute electric field lines, even to the point of constructing closed Gaussian surfaces, when they are given a pencil-and-paper problem. However, when you give them a physical object, they are completely lost and fail to make the connection between what they did on paper and its applications in the real world.

It is a much more general problem. Gauss's law is nothing exceptional among other laws: the majority of students are always lost when they must apply physics to the real world. The way they are taught in schools is so abstract that they are unable to find relations between physical laws and reality. Do you remember Feynman's story about Brazilian students and the polarisation of light reflected from a water surface? In some countries it is a bit better, in some a bit worse, but all over the world the educational problem is the same: kids are taught to memorize formulae, then apply them to "realistic" scenarios in school exercises, which often contradict common experience. So they learn: physics applies to exercises, the real world is ruled by common-sense experience, and those two have little (if any) in common.
{"url":"http://www.physicsforums.com/showpost.php?p=3520414&postcount=4","timestamp":"2014-04-20T08:39:15Z","content_type":null,"content_length":"8986","record_id":"<urn:uuid:bb83f681-a81b-43d2-96b7-fb6b698d8e4d>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00509-ip-10-147-4-33.ec2.internal.warc.gz"}
PUBLICATIONS--Report "Type curves for selected problems of flow to wells in confined aquifers" UNITED STATES DEPARTMENT OF THE INTERIOR GEOLOGICAL SURVEY RESTON, VA. 22092 In Reply Refer To: November 20, 1981 Mail Stop 411 GROUND WATER BRANCH TECHNICAL MEMORANDUM NO. 82.02 Subject: PUBLICATIONS--Report "Type curves for selected problems of flow to wells in confined aquifers" by J. E. Reed This report assembles, under one cover and in a standard format, the more commonly used type-curve solutions for 11 conditions of flow to a well in an infinite confined aquifer. Each solution discussion includes the following: (1) a list of the assumptions made in the mathematical development, (2) the differential equation and the boundary and initial conditions describing the ground-water problem are given in mathematical notation and the physical interpretation of each equation is described in the text, (3) the mathematical solution is given in the form of tables and/or type curves, (4) a "comments" section that describes the conditions for which some of the assumptions are applicable, and (5) in most cases a computer program written in FORTRAN is included that can be used for calculating additional function values. This latter feature should be particularly useful in generating type curves to be used for analyzing water- level change data in response to variable well discharge describable by one of five selected functional forms. Most of the type curves are on a folded plate. The curves are on one of two different plotting scales for ease of comparing solutions. Please note that figures 4.2 and 5.2 on that plate are slightly distorted by a fraction of a percent. It is our intent, when funds are available, to have the type-curve plots put on a mylar sheet and copies distributed to the field. (s) Gordon D. Bennett Chief, Ground Water Branch WRD Distribution: A, S, PO Two copies of memo with report to each District One copy of memo with report to each Subdistrict
{"url":"http://water.usgs.gov/admin/memo/GW/gw82.02.html","timestamp":"2014-04-20T05:47:03Z","content_type":null,"content_length":"3149","record_id":"<urn:uuid:73bc3ae5-efd0-47c3-8485-19deb6f6eff7>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00010-ip-10-147-4-33.ec2.internal.warc.gz"}
Topological Defects in Cosmology - A. Gangui

3.4. Macroscopic string description

Let us recapitulate briefly the microphysics setting before we see its connection with the macroscopic string description we will develop below. We consider a Witten-type bosonic superconductivity model in which the fundamental Lagrangian is invariant under the action of a U(1) × U(1) symmetry group. The first U(1) is spontaneously broken through the usual Higgs mechanism, in which the Higgs field acquires a nonvanishing vacuum expectation value; at the corresponding energy scale $m_\sigma$ (we write $m_\sigma = m$ hereafter) we are left with a network of ordinary cosmic strings with tension and energy per unit length $T \sim U \sim m^2$, as dictated by the Kibble mechanism. The Higgs field is coupled not only with its associated gauge vector but also with a second charged scalar boson, the current-carrier field, which in turn obeys a quartic potential. A second phase transition breaks the second U(1) gauge (or global, in the case of neutral currents) group and, at an energy scale $\sim m_*$, the generation of a current-carrying condensate in the vortex makes the tension no longer constant, but dependent on the magnitude of the current, with the general feature that $T \leq m^2 \leq U$, breaking therefore the degeneracy of the Nambu-Goto strings (more below). The fact that the carrier field has a nonvanishing amplitude in the string core means that either the local U(1) ($A_\mu$ being the electromagnetic potential) or the global U(1) is spontaneously broken in the core, with the resulting Goldstone bosons carrying charge up and down the string.

Macroscopic quantities

So, let us define the relevant macroscopic quantities needed to find the string equation of state. For that, we first have to express the energy-momentum tensor of the configuration. One then calculates the macroscopic quantities internal to the string worldsheet (recall that `internal' means the coordinates t, z) by integrating over the transverse directions. The macroscopic charge density/current intensity $C$ is defined analogously from the integrated current. Now, the state parameter is $w$ (displayed below through $\nu(w) \equiv \mathrm{sign}(w)\,|w|^{1/2}$). For vanishing coupling $e$ we have $w \sim k^2 - \omega^2$, so that the current is dominated either by its charge ($w < 0$) or by its momentum ($w > 0$). We get the energy per unit length $U$ and the tension of the string $T$ by diagonalizing the integrated internal stress tensor.

Figure 1.8. Variation of the relevant macroscopic quantities with the state parameter. In the left panel we show the variation of the amplitude of the macroscopic (integrated) charge density (for w < 0) and current intensity (for w > 0) along the string core versus the state parameter, as represented by $\nu(w) \equiv \mathrm{sign}(w)\,|w|^{1/2}$. In the right panel one can see the corresponding variations of the integrated energy per unit length (upper set of curves) and tension (lower set of curves) for the string. Both the neutral (e = 0) and the charged cases are shown with, in the latter case, a rather exaggerated value of the coupling, in order to distinguish the curves in each set [Peter, 1992].

As shown in Figure (1.8) the general string dynamics in the neutral case does not get much modified when the electromagnetic coupling $e$ is included. Nevertheless, a couple of main features are worth noting:

• In the magnetic regime there is saturation. In this situation (w > 0) the current intensity C reaches a maximum value and, at the same time, T passes through a minimum.

• In the electric regime there is a phase frequency threshold. In this case (w < 0) the charge density of the conducting string diverges as $w \to (-\tilde m^2)^{+}$. An analytic treatment shows that $C \propto (w + \tilde m^2)^{-1}$, with $\tilde m^2 = 2 f (\eta^2 - v^2)$. Note that this threshold changes with the coupling when $e$ is very large.

• We always find T > 0 in the w > 0 case. Hence, there is no place for springs, a conjecture first announced by Peter [1993].
Note that T diminishes just a few percent, and then the current saturates. If this were not the case, $c_T^2 = T/U$ would be negative and this would imply instabilities [Carter, 1989]. Hence, there would be no static equilibrium configurations.

Macroscopic description

Now, let us focus on the macroscopic string description. For a local U(1) we proceed as follows [to stick to the usual notation in the literature, we slightly change here the conventions used in our expressions of previous sections]. The relevant quantity is the conserved Noether current associated with the carrier phase. Now, recall that $A_\mu$ varies little inside the core, as the penetration depth was bigger than the string core radius. We can then integrate across the core to find the macroscopic current, which is well defined even for nonvanishing electromagnetic coupling $e$. The macroscopic dynamics is describable in terms of a Lagrangian function $\mathcal{L}(w)$ depending only on the internal degrees of freedom of the string. It involves the induced metric $\gamma_{ab}$ on the worldsheet, given in terms of the background spacetime metric $g_{\mu\nu}$ and the 4-dimensional background coordinates $x^\mu$ of the worldsheet by $\gamma_{ab} = g_{\mu\nu}\, x^\mu_{,a}\, x^\nu_{,b}$. We use a comma to denote simple partial differentiation with respect to the worldsheet coordinates $\xi^a$, Latin indices being reserved for the worldsheet coordinates (one timelike, one spacelike). The covariant phase gradient $\psi_{|a}$ is expressible in the presence of a background electromagnetic field with Maxwellian gauge covector $A_\mu$ by $\psi_{|a} = \psi_{,a} - e A_\mu\, x^\mu_{,a}$. So, now a key rôle is played by the square of the gradient of the phase, which is precisely what the state parameter $w$ measures. The dynamics of the system is determined by the Lagrangian $\mathcal{L}(w)$. Note that there is no explicit appearance of the phase itself, only of its gradient, such that the corresponding field equation reduces to a conservation law for a worldsheet current $z^a$.
{"url":"http://ned.ipac.caltech.edu/level5/March02/Gangui/Gangui3_4.html","timestamp":"2014-04-16T10:26:26Z","content_type":null,"content_length":"13704","record_id":"<urn:uuid:9f057bec-adfb-4af4-9b44-e0f3e6f2c113>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00262-ip-10-147-4-33.ec2.internal.warc.gz"}
Least prime primitive root up vote 7 down vote favorite For $p$ a prime number, let $G(p)$ be the least prime $q$ such that $q$ is a primitive root mod $p$, that is $q$ generates the multiplicative group $(\mathbb Z/p\mathbb Z$)* . Is it known that $G(p)=O(p)$ ? I don't mind if the answer assumes GRH or any other standard conjecture. I am interested in results true for all $p$, much less (though a little bit) on results which exclude a density $0$ or other smallish set of $p$. I note that it is easier to find bounds in the literature for $g(p)$, the least integer $n$ such that $n$ is a primitive root mod $p$. For example $g(p)=O(p^{1/2+\epsilon})$ was known unconditionally to Vonogradov in the 1930's (we have better unconditional results since), and with GRH we have result of type $g(p)=O(log^A p)$ with $A$ is some small constant. But what are the best results we have for $G(p)$? What are the best expected results ? I am interested by $G(p)$ and not $g(p)$ because I use this problem as a testing ground of various effective forms of Chebotarev's there, and Chebotarev provides prime numbers. The best result I can prove this way is, under GRH, is $G(p)=O(p \log^{6+\epsilon} p)$ (edited: I made a mistake on the exponent of the $\log$), using Proposition 8.3 of the book of Ram Murty and Kumar Murty "Non-vanishing of $L$-functions and applications". With the GRH version of Lagarias-Odlyzko I get only $O(p^2 \log^2 p)$. EDIT: Here is the proof of the estimate using Murty and Murty, as GH asked. Proposition 8.3 of Murty and Murty states that if $G$ is the Galois group of an extension $L$ of $\mathbb Q$, $D$ a union of conjugacy classes in $G$, and $M=\sum \log p$, the sum being on the primes ramified in $L$, then $$| \pi_D(x) - \frac{|D|}{|G|} Li\, x | < C |D|^{1/2} x^{1/2} \log(Mx),$$ where $C$ is an absolute constant, $\pi_D(x)$ the numbers of primes $p \leq x$ such that $Frob_p \in D$. Let us apply this to $L=\mathbb Q(\mu_p)$, $D=$ set of primitive roots in $G=(\mathbb Z/p\mathbb Z)^\ast$. If for some real $x$, the principal term $\frac{|D|}{|G|} Li x = Li(x)/2$ is bigger than the error term $C |D|^{1/2} x^{1/2} \log(p x)$, then $\pi_D(x) > 0$ which means that $G(p)< x$. So we write that inequality, and solve it for $x$, using $|D|=\phi(p-1)$, and replacing $Li(x)$ by $x/\log x$ which just changes the constant $C$. So we want: $$ x/(\log(x) x^{1/2}) > C \phi(p-1)^{1/ 2} (\log p + \log x).$$ Since $\log p \log x > \log p + \log x$ except for $x$ ridiculously small, it is enough to have $$ x/(\log(x) x^{1/2}) > C \phi(p-1)^{1/2} \log p \log x,$$ or, taking the square, $$x / \log^4(x) > C^2 \phi(p-1) \log^2 p$$ which is implied by $$x > C' \phi(p-1) \log^2 p \log^4(\phi(p-1) \log^2 p),$$ Hence $G(p)=O(\phi(p-1) \log^2 p \log^4(\phi(p-1) \log^2 p)) = O(p \ log^{6+\epsilon} p)$. analytic-number-theory nt.number-theory primitive-roots prime-numbers According to mathoverflow.net/questions/834/… , a conjecture of Montgomery implies that, for EVERY residue class $a$ in $\mathbb{Z}/p^{\ast}$, there is a prime $q$ which is $O(p^{1+\epsilon})$ and represents $a$. I'm afraid I don't know more about this. – David Speyer Nov 16 '12 at 17:28 You may want to exclude successors of ( primorials and small multiples of primorials), as they may produce the hardest p to determine G(p). Or tackle them head on, as small values of phi(p-1) suggest potentially large values for G(p). 
Gerhard "But I May Be Wrong" Paseman, 2012.11.16 – Gerhard Paseman Nov 16 '12 at 17:34 @Gerhard: $\phi(p-1)$ is never too small, it is at least $C (p-1)/\log \log (p-1))$. A $\log \log$ term is not really important in those estimations. So I don't think it is "morally" necessary to excludes those primes. – Joël Nov 16 '12 at 17:50 1 @David: Yes. Actually a stronger form of this conjecture says that there is a prime $q$ that represents $a$ wiih $q=O(p \log(p)^2)$. And since (beware ! very bad heuristic follows...) it is much easier to find a prime which modulo $p$ falls in a set of $\phi(p-1)/2$ elements rather than on just one specific element, one could expect that $G(p) = O(p \log(p)^2 / (phi(p-1)/2)) = O(\log(p)^2 \log \log p)$ ! But very likely it's too naive... – Joël Nov 16 '12 at 17:54 1 @GH: I have wrote the answer -- and realized I made a mistake on my back of the envelope computation: the exponent of \log must be $6+\epsilon$, not $4$. Hope I haven't made an other mistake... – Joël Nov 16 '12 at 18:32 show 6 more comments 1 Answer active oldest votes For the expected behavior, see Paszkiewicz and Schinzel's paper "On the least prime primitive root modulo a prime" in Math. Comp. 71 (2002), no. 239, 1307–1321. There they examine a conjecture of Bach that $\limsup \frac{G(p)}{(\log p)(\log\log p)^2}=e^{\gamma}$. It is known that almost always $G(p)$ is bounded by a fixed power of $\log{p}$, and the word `almost' can be removed if we assume GRH. (Under GRH, we in fact have $G(p) \ll (\log{p})^ up vote 10 6$, and one can do better as long as $p-1$ doesn't have atypically many prime factors.) The best results I know in this direction are due to Greg Martin; see down vote accepted arxiv.org/abs/math/9807104 Unconditionally, I believe it's not even known that $G(p)$ is less than $p$ for all large $p$. 1 Greg Martin says $G(p) \ll (\log{p})^6$ under GRH is due to Shoup (1992). He says "Although both authors state their bounds only for primitive roots, the bounds actually hold for prime primitive roots as well." Can you explain why? – GH from MO Nov 16 '12 at 18:13 2 It looks from Wang's paper (which can be found by searching Google books for his collected works) that he actually shows the partial sums of $\Lambda(n) e^{-n/x}$, taken over primitive roots $n$ up to about $(log p)^{C}$, is positive. So this gives a prime small power $n$ which is a primitive root mod $p$. But then the prime which $n$ is a power of must also be a primitive root mod $p$. – Anonymous Nov 16 '12 at 18:43 Thank you very much! – GH from MO Nov 16 '12 at 18:51 Well, many thanks. I find striking how bad the effective chebotarev theorems (even those proved under GRH) are when we try to apply them in special situations, compared to what we can get directly in those situations. Here we have under GRH $O(\log^6 p)$ instead of $O(p \log^{6+\epsilon} p)$. Similarly, for the problem of the least prime in an arithmetic sequence, we expect $O(p \log^2 p)$ but with effective Chebotarev under GRH, we only get $O(p^2 \log^2 p)$. – Joël Nov 16 '12 at 18:57 Just wanted to confirm that Anonymous has said correctly almost everything I could say about unconditional results. Note that Linnik's theorem (with Xylouris's constant), applied to 4 any one primitive-root residue class mod $p$, shows unconditionally that there exists a prime primitive root mod $p$ that is $\ll p^{5.2}$. As far as I know, this is the best unconditional, uniform result for prime primitive roots! 
– Greg Martin Nov 16 '12 at 20:04 show 1 more comment Not the answer you're looking for? Browse other questions tagged analytic-number-theory nt.number-theory primitive-roots prime-numbers or ask your own question.
{"url":"http://mathoverflow.net/questions/112594/least-prime-primitive-root/112601","timestamp":"2014-04-18T23:22:14Z","content_type":null,"content_length":"67899","record_id":"<urn:uuid:27d45263-8a81-4850-907f-872108a168a2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00496-ip-10-147-4-33.ec2.internal.warc.gz"}
Coulhon T., Grigor'yan A., Manifolds and graphs with slow heat kernel decay - Duke Math. J., 2000 "... We prove that a two-sided sub-Gaussian estimate of the heat kernel on an infinite weighted graph takes place if and only if the volume growth of the graph is uniformly polynomial and the Green kernel admits a uniform polynomial decay. ..." Cited by 32 (10 self)
We prove that a two-sided sub-Gaussian estimate of the heat kernel on an infinite weighted graph takes place if and only if the volume growth of the graph is uniformly polynomial and the Green kernel admits a uniform polynomial decay.
- Math. Annalen, 2002 "... We show that a $\beta$-parabolic Harnack inequality for random walks on graphs is equivalent, on one hand, to so-called $\beta$-Gaussian estimates for the transition probability and, on the other hand, to the conjunction of the elliptic Harnack inequality, the doubling volume property, and the fact that the m ..." Cited by 30 (6 self)
We show that a $\beta$-parabolic Harnack inequality for random walks on graphs is equivalent, on one hand, to so-called $\beta$-Gaussian estimates for the transition probability and, on the other hand, to the conjunction of the elliptic Harnack inequality, the doubling volume property, and the fact that the mean exit time in any ball of radius R is of the order $R^{\beta}$. The latter condition can be replaced by a certain estimate of the resistance of annuli.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1636126","timestamp":"2014-04-18T08:50:55Z","content_type":null,"content_length":"14513","record_id":"<urn:uuid:8a7517a6-9362-45bb-b15d-1911731119a2>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Litchfield, NH Prealgebra Tutor Find a Litchfield, NH Prealgebra Tutor ...I always put the student's interest first and adapt my explanations and the level of instructions according to the understanding of the particular person in my charge. My real satisfaction is to see that the student improves to the level that he doesn't need me anymore and can continue without ... 21 Subjects: including prealgebra, French, calculus, physics ...Exponents, Polynomials and Polynomial Functions: 17. Factoring: remove the greatest common factor: 18. Rational Expressions and Functions: 19. 13 Subjects: including prealgebra, physics, probability, algebra 1 ...I have been tutoring & teaching mathematics up to precalculus & calculus. I have a B.S in computer Science & Ms in Physics with Electronics as one of the subjects. Logic is used in both Electronics as well as in computer programming. 18 Subjects: including prealgebra, chemistry, calculus, geometry ...I've worked with Windows 2000, XP, Vista, and Windows 7, and am comfortable using whatever operating system is needed. All throughout high school and currently in college I have been using Windows PowerPoint, Word, Excel, Adobe Photoshop, and Visual Basic (to write/compile code in the C language... 25 Subjects: including prealgebra, reading, writing, geometry ...I have experience working with students with ADHD, autism, and specific learning disabilities. Helping students find success where they have only struggled in the past is what drives me. Please don't hesitate to get in touch if you would like to talk further!One aspect of my job as a special educator and one that I find very important is that I teach study skills. 29 Subjects: including prealgebra, English, reading, SAT math Related Litchfield, NH Tutors Litchfield, NH Accounting Tutors Litchfield, NH ACT Tutors Litchfield, NH Algebra Tutors Litchfield, NH Algebra 2 Tutors Litchfield, NH Calculus Tutors Litchfield, NH Geometry Tutors Litchfield, NH Math Tutors Litchfield, NH Prealgebra Tutors Litchfield, NH Precalculus Tutors Litchfield, NH SAT Tutors Litchfield, NH SAT Math Tutors Litchfield, NH Science Tutors Litchfield, NH Statistics Tutors Litchfield, NH Trigonometry Tutors Nearby Cities With prealgebra Tutor Amherst, NH prealgebra Tutors Auburn, NH prealgebra Tutors Bedford, NH prealgebra Tutors Brookline, NH prealgebra Tutors Chester, NH prealgebra Tutors Hollis, NH prealgebra Tutors Hudson, NH prealgebra Tutors Londonderry, NH prealgebra Tutors Merrimack prealgebra Tutors Milford, NH prealgebra Tutors Pelham, NH prealgebra Tutors Townsend, MA prealgebra Tutors Tyngsboro prealgebra Tutors Weare prealgebra Tutors Windham, NH prealgebra Tutors
{"url":"http://www.purplemath.com/Litchfield_NH_Prealgebra_tutors.php","timestamp":"2014-04-17T11:03:00Z","content_type":null,"content_length":"24075","record_id":"<urn:uuid:3db64f15-2369-42f0-9794-c784d69f4932>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00285-ip-10-147-4-33.ec2.internal.warc.gz"}
Meeting Details For more information about this meeting, contact Nigel Higson, John Roe, Ping Xu, Mathieu Stienon. Title: Weyl Character Formula in K-Theory, III Seminar: Noncommutative Geometry Seminar Speaker: Nigel Higson, Penn State Weyl's formula describes the characters of the irreducible representations of compact connected Lie groups. Atiyah and Bott pointed out a long time ago that there are close connections between the character formula and K-theory. I shall reexamine those connections in these lectures, partly to illustrate some of the basic features of K-theory, and partly to prepare for the case of noncompact groups, where similar connections ought to link the Baum-Connes conjecture to geometric representation theory. I shall begin with an introductory account of the character formula in lecture one, and then discuss basic K-theory in lecture 2 before venturing toward more specialized topics. Room Reservation Information Room Number: MB106 Date: 09 / 30 / 2010 Time: 02:30pm - 03:30pm
{"url":"http://www.math.psu.edu/calendars/meeting.php?id=7908","timestamp":"2014-04-20T06:13:07Z","content_type":null,"content_length":"3856","record_id":"<urn:uuid:cc59e1c7-588b-4763-b6ca-ddc2fc5467b1>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Tyngsboro Geometry Tutor Find a Tyngsboro Geometry Tutor ...I teach all levels of Pre-Algebra, Algebra, Geometry and Pre-Calculus. Test Preparation: - Math SAT - MTEL General Curriculum 03 Math Subtest - MTEL Elementary Math 53, Middle School Math 47 I have fulfilled all state testing requirements for the Massachusetts teaching certificate in middle school mathematics. Excellent recommendations available on WyzAnt. 15 Subjects: including geometry, algebra 1, algebra 2, precalculus ...I can work with students to help them improve their reading levels and become better writers. I have helped students improve their math skills in middle and high school, and I can help them become better organized in the classroom, working on note-taking, test-taking, and time management skills ... 29 Subjects: including geometry, reading, English, writing ...I believe in the power of learning and I instill that in others. I am experienced in test preparation, reading comprehension, essay writing, and quantitative skills in verbal and math. I am SSAT and ISEE trained and have focused much of my recent tutoring in that area of expertise. 43 Subjects: including geometry, reading, English, writing ...My approach to teaching math includes hands-on lessons, strategies, math games and problem solving. I try to bring a little fun to a stressful subject. I have my degree in math and I am currently pursuing a Master's in education. 8 Subjects: including geometry, calculus, algebra 1, algebra 2 ...I won the Bronze medal in the National Olympic Math Contest (primary section) in China. I taught my niece and three relatives' kids high school math from 2004 to 2006. I taught four students College Algebra from 2011 to 2012. 11 Subjects: including geometry, accounting, Chinese, algebra 1
{"url":"http://www.purplemath.com/Tyngsboro_geometry_tutors.php","timestamp":"2014-04-20T19:23:57Z","content_type":null,"content_length":"23791","record_id":"<urn:uuid:311dfb89-239f-4886-b935-cb0713094c12>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00516-ip-10-147-4-33.ec2.internal.warc.gz"}
Boundary integral equation approach for conformal mapping, complex boundary value problems, and reproducing kernels.

Murid, Ali H. M. and Hurmin, Baharudin and Nasser, Mohamed M. S. (2003) Boundary integral equation approach for conformal mapping, complex boundary value problems, and reproducing kernels. The Proceedings of Annual Fundamental Science Seminar 2003. pp. 72-78.

Full text not available from this repository.

Conformal mapping has been a familiar tool of science and engineering for generations. Its ability to map one planar region onto another via an analytic function proves invaluable in applied mathematics. For numerical purposes in conformal mapping, integral equation methods are preferable and effective. Of special interest is the Riemann map, which maps a simply connected region onto a unit disk. The Riemann map is closely related to complex boundary value problems and to some reproducing kernels known as the Szego and the Bergman kernels. The fact that there is an efficient numerical method, based on the Kerzman-Stein-Trummer integral equation, for computing the Szego kernel has been known since 1986. The study of the Kerzman-Stein-Trummer integral equation has led to our discovery of a new integral equation for the Bergman kernel which can be used effectively for numerical conformal mapping. This discovery also motivated a general formulation of integral equations associated with certain boundary relationships which can give rise to various integral equations (classical and new) related to conformal mapping and boundary value problems of interior, exterior and doubly connected regions. This paper presents some of our past discoveries as well as ongoing research activities regarding the integral equation approach for conformal mapping, complex boundary value problems and reproducing kernels.

Item Type: Article
Uncontrolled Keywords: Integral equation, conformal mapping, boundary value problem, reproducing kernels.
Subjects: Q Science > QA Mathematics
Divisions: Science
ID Code: 3849
Deposited By: Assoc. Prof. Dr. Ali Hassan Mohamed Murid
Deposited On: 27 Jun 2007 08:09
Last Modified: 29 Jun 2007 02:03
{"url":"http://eprints.utm.my/3849/","timestamp":"2014-04-21T02:10:07Z","content_type":null,"content_length":"18637","record_id":"<urn:uuid:f5d28695-47a7-43a3-953e-4303cedf58d2>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00102-ip-10-147-4-33.ec2.internal.warc.gz"}
The following content is provided under a Creative Commons license. Your support will help MIT OpenCourseWare continue to offer high quality educational resources for free. To make a donation, or to view additional materials from hundreds of MIT courses, visit MIT OpenCourseWare at ocw.mit.edu. PROFESSOR STRANG: So this is the first true review session in 18.085. The last Wednesday, the first Wedneday afternoon was a brief review of topics in linear algebra. But now we're into the course. We've done four lectures on the first four sections of the textbook and one homework problem in and back and a second homework set for next Monday. So I'm ready for any questions, including questions that are on the homework if necessary. But anything at all. Or maybe I'll just ask whether the pace, so this is really informal. Is the pace of the course, now today's lecture had a lot in it as I realized when I saw that the board with still full of 18.085 and there was a little more still to do because we didn't finish the matrix part. But are you ok with the sort of speed of the course? So that'll be one question. And then, what about specifics? Somebody start off if you will. Anybody. Yes, thanks. AUDIENCE: Well, actually I had a question about the lecture earlier today. PROFESSOR STRANG: Okay, go ahead. AUDIENCE: I was just going to look it up in the text, but. PROFESSOR STRANG: That's alright. AUDIENCE: But haven't had a chance to. But, ok, so I'm not really sure how you take the initial conditions and apply them to the ramp function to actually get a solution. PROFESSOR STRANG: So that's what luckily happens to be still here on the board, right? We've got these boundary, I would call them boundary conditions. So this is definitely, we're in a part of math that's about boundary value problems more than time, isn't in the picture. I mean later time will get into the picture. So with this particular example, the general solution is a standard ramp at the point a where things are happening. Plus the C+Dx, the usual. So that's a particular solution. That particular solution has the right behavior at the impulse. By the right behavior, I mean that the second derivative of the ramp is a delta and when I put the ramp at a, then the delta will show up at a. And when I put on that minus sign it'll mean that the second derivative is minus the delta. So the slope will step down. So that's a particular solution. But that by itself, what does that equal at zero? And what does that equal at one if you remember the ramp? So let me just draw that ramp again. So the ramp was really based on, centered at the point a. And I'll put it with a minus sign. So it came along there and down there. And now suppose this is one end of our interval and that's the other end. So is that ramp the answer to our problem? No. Well it happens to satisfy this boundary condition, happens to start out at zero. But it doesn't end up at zero. So just like every particular solution we need a little more. We have to include more solutions because that was only one. All those are equally good solutions. If I add C+Dx, the second derivative of that is zero, so it doesn't spoil anything at all. On the contrary, it adds more solutions. I mean the great thing we're using here is that our equations are linear and when zero's on the right-hand side-- notice there's no arbitrary constant. I'm not putting an arbitrary constant on that particular solution. Is one particular solution plus a subspace if I use that language. 
A lot of solutions to the, all the solutions to the problem with zero on the right-hand side. And these are the solutions, these are the null space guys, the ones that have zero on the right-hand side. Now we need those. And the effect of those will be to move that ramp. And effectively, what are they going to do? They're going to swing that ramp around, it'll stay a ramp. But instead of being level here, it'll go up. And instead of going down there, it'll go down there. It'll be the same ramp, with still that slope dropped by one. Slope went down by minus one and that's not going to change here. The slope will still go down by minus one. But now I've just adjusted it so that it goes through the, it satisfies the boundary conditions. And in the second problem with the free end, again, here's my ramp. But now I'm going to adjust it. And what happened? It just needed adjustment upwards. Because this was the zero, this was the u=0 fixed guy. And now if I'm doing u'(0)=0, the free guy, I can lift the whole thing up. So you see, I just lifted it up to the point where it came out right there. So in this case I just needed a C. And in this case I just needed a Dx. And in other cases I might have needed both. A little bit of C and a little of Dx. Anyway that's what was, maybe that's sort of repeating what we did today a little bit. But mechanically it's just, we've got a particular solution and we've got the complete solution and then we just have to choose C and D and we've got two conditions to do it with, the two boundary conditions. Could have other boundary conditions. Now, what about, here's a, yeah. I guess what will often happen in these review sessions is I get on a roll and I just keep, carry on with it. You know, you start me with a question and I can't stop. So I'll go a little longer but then I really will stop, ready for the next one. So we haven't discussed the free-free. u(0) u', sorry, free is u'. That's free at the left and free at the right. What's up with that? What's the solution to that? Again, I'm looking for -u'' equal an impulse. With those two boundary conditions, free-free. And do you know what's going to happen? No solution. Now it'd be interesting to see why not. Why no solution? Well one way to do it is try. Other right-hand sides could have a solution. So it's not just that this is something the matter here. Specifically, that right-hand side and most right-hand sides will fail. But let's just see it with this one. Why does that fail? Well you can see I can't, the slope here is zero and the slope here is minus one. If I adjust those I can't get, this is asking me to get slope zero at both ends. I can't do that, right? Yeah, just not possible for me to. This will add a straight line but it's the same straight. I can't get a ramp that comes flat at both ends because there's, once I say what it's doing at the ends, I've got it. And you see what I mean? I'm looking for one that starts flat and ends flat and that's not in my family of solutions. That problem just doesn't have a solution. So that's one way to do it is look at the solutions and realize you can't satisfy both boundary conditions. Another way might be this. This is a little bit deeper way and it leads to something better. Suppose I take the integral. So this is an equation that's supposed to hold. Let me integrate both sides from zero to one. So there's the idea I'm putting in now. To go a little further, to discover when this has a solution. Or let me take more generally, -u''=f(x). So some other load. 
Not necessarily a point load, not necessarily a uniform load, but maybe some other load. And now my boundary conditions, I'm trying to do free-free. And usually no solution. But let's just see why and when there might be a solution. The key idea is integrate from zero to one. What do I get on the right-hand side? Well, I get the integral of f, whatever it is, and I would call that the total load. Fair enough? The total load of if it was a delta function, the total load would be one and it would be all in one spot. If it was uniformly over the whole interval, well I guess that would also integrate to one, so the total load would be one spread out. But it could be a mixture of the two, could be a few delta functions, whatever. What happens when I integrate the left side from zero to one? Can you do that one? The integral from zero to one. There's that dopey minus sign. u''dx. What do I get? Why do I say that's a good idea? If I integrate the second derivative I get the first derivative. The integral of the second derivative will be the first derivative with the minus. So it's minus the first derivative. And what do I do now? I plug in the end points, right? You integrate. I'm integrating zero to one. So I've found the integral. I've put zero to one in there. So what is that? That's minus the derivative at one plus the derivative at zero. And now, what's that? That's zero. By my boundary conditions, that's zero. So what have I found? I've found that if these are the boundary conditions, then when I integrate the left side it's going to give me zero. So when, what loads could be ok? What loads f(x) could allow me to solve this equation? The condition will be I need, what do I need for the total load to be able to solve this equation? The integral of the left side was zero so the integral of the right side had better be zero. So that's the condition. If I have these boundary conditions, then my problem is singular. Usually no solution. It's like having a singular matrix. It's like having this particular singular matrix, of course. Whoops, not that one. Let me get the plus sign in the right position. That's a plus. -1; -1, 2, -1; -1, 1. Right? This is the discrete version with a zero slope at both ends. It's our T matrix. No, what matrix is it? B. It's our B matrix, both ends. I'll just come back here and then I'll do the discrete one. So tell me a load that we could handle? A load we could handle. So the integral has to be zero. So suppose my load has a delta function at a. Well that integral is one. So can you fix that, change that load or do something, maybe put on another load to get a total load of zero? What shall I do? Add another guy with a minus sign. In other words, maybe this, a delta function at some other point B. Well, that would do it. I believe we could solve that problem. Even with these bad boundary conditions. We could solve that problem. Because the total load would be one from that, minus one from that, the total load would be zero. In other words, what would are solution look like? It has to start with zero slope. So it would buzz alone to a and then after b. And what does it have to do here? If I graph the solution to this guy from zero to one it starts with, it's free, so nothing's happening until I get to a. Then what has to happen? Let's see, if I'm graphing u, it'll be ramp. Right? It'll be a ramp, yeah. Because I've two derivatives. And it has to ramp down by one, so it'll ramp whatever it does. I don't know where it stops. Where does it? Wait a minute. 
I haven't practiced this. So I start from the other end. The other end is flat. What's up? They gotta meet here. Oh, the other end is flat, but not at zero! Dumb, stupid. Right. Ok. Yes, the other end is flat, right. And it's, oh yeah, look! Oh, wonderful. You see. That slope dropped by one and the slope there increased by one back to zero. Slope was zero. It dropped to minus one because of that load. Now it increased back to zero because of this load with the minus sign and there's a solution. So that's a solvable problem. Well, you say, okay, that was a little surprising to get an answer for a singular problem. And no, it can happen. If we have a total load zero it'll happen. But, there is still a but, that's not the only answer. That picture is a solution. But not the only one. So what my point is going to be, that when the problem is singular, if there's an answer, you say great. But then something has to go wrong and what goes wrong is too many answers. So tell me some more, what other graphs would draw solutions to this problem. I could shift, I could lift the whole thing. Here I've got a plus C that I haven't used. I could just do the whole thing higher up. Any of these. These would all work. It's like temperature. I don't have an absolute temperature here. All I've got is, I would have to, it's not determined because there's a plus C that, the plus C satisfied everything. A plus C, a constant has zero slope, it has zero slope, its second derivative is zero. So it's like, unseen by this equation. And similarly can I just make the analogy as I always like to do with discrete stuff, so suppose I, tell me a right-hand side that we think would probably, is this going to be the same story for this guy? Yes. If I add those, where I integrated there, here I would add and I get zero, zero, zero. So this has to add to zero if there's a solution. So let me put for example, . That would be kind of like our delta function in one direction and our delta function in the other. I believe I can solve that problem. So I'm just carrying, because I always want you to see the discrete one as well as the continuous. Continuous involve this integration. The finite one just involves adding. The left side adds to zero so the right-hand side better add to zero. That right-hand side does. Tell me a solution. Well, let me start out with a seven there. What's the next guy going to be? See, I want seven. Whatever I put there, I better have a seven there, right? Seven, seven, good. Minus seven, plus 14, oh geez. I didn't know this was going to happen. No, I want to get the answer one. What number goes there? Six, is it six? It's six, yeah, good. Minus seven, 14, minus six is that. And now my claim is that we'll come out right on the third equation. So far I've just matched the first two. Now this one gives minus seven, plus six, that's minus one, good. So there's a solution. And I'll leave this problem alone if you tell me the rest, other solutions. That was a solution to a singular problem with a right-hand side that had total load zero, so it was ok. But now that's a solution, but there are more. Tell me another one. I can shift it, right? I could make it . I could add ten to everything. Right? That's the plus C that I could do over there. That can't change because 17 - 17 is still zero. -17 + 16 will still be minus one. all good. So actually that just like helps our intuition and physically my intuition is this. That I've got this bar and nothing's holding it. 
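As a quick numerical check of the singular example just worked out, here is a minimal NumPy sketch (the variable names are only for illustration): the free-free second-difference matrix B is singular, the right-hand side that adds to zero is solvable, and adding a constant to every entry of a solution gives another solution.

```python
import numpy as np

# Free-free second-difference matrix B (zero slope at both ends)
B = np.array([[ 1., -1.,  0.],
              [-1.,  2., -1.],
              [ 0., -1.,  1.]])

print(np.linalg.matrix_rank(B))        # 2, not 3 -- B is singular

f = np.array([0., 1., -1.])            # right-hand side adds to zero
u = np.array([7., 7., 6.])             # the solution found above
print(B @ u)                           # [ 0.  1. -1.] -- it checks out
print(B @ (u + 10))                    # still [ 0.  1. -1.]: a constant shift is "unseen"

# A load that does not add to zero cannot be matched exactly
g = np.array([1., 0., 0.])
u_ls, *_ = np.linalg.lstsq(B, g, rcond=None)
print(B @ u_ls - g)                    # nonzero residual: no exact solution
```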
So if I put a weight on it, nothing to hold it, it'll just, rigid motion will take it out of sight, no good. But if I put another equal weight on it, no it's not a weight. What do I call it? If I lift it at that point, that's the other delta function that's going the other way, then it will sit there. But it would still be in equilibrium if I just moved it up to there or moved it as I like. I don't know if that is kind of a dumb picture. But it's saying what we've said from math. Well, you see where you're question lead. Yeah, thanks. No, the integral, it was-- Watch what we integrated. We integrated u''. So that's not the area. We integrated u'' and got, it's integral was u'. So that just told us that a difference in slopes at the ends, yeah. Good, because our intuition automatically is if we're integrating something, we're finding an area. But here, if it was u , then I'd be finding the area under u, but we integrated the second derivative. Right, good. Now let's change the subject. Yes, please. Yeah, I guess so. I'll try. Let's see. So my discrete equation was, like -u. Yeah, so let's back up to the beginning. We've got this minus sign and we're using a second difference. So second differences have coefficients one, minus two and one. Now I'm reversing the signs because of my minus. So I have -u at some point. Let's take that as the point to the left. Two u's at what I'll think of as the center point. -u_(i-1) is the load at that center. That center point is i times delta x. That's where I would be looking. So now I'm using subscript. It's a little bit of practice then to take subscripts, take this way of writing the equation and convert it to a matrix way. It's usually clearer once you see it as a matrix. Now this is happening at all the points. At i=1, let's say, I have -u_0+2u_1-u_2 is f at, agrees with the load at the first mesh point. That's the center, the point h, delta x. And then if I want to back up further, I would have -u_-1, but that doesn't really exist, plus 2u_0-u_1 should match the load at zero. And so on forward. But now I want to put in the boundary condition. That's what you want me to do, right? Put in this boundary condition. So what am I going to take as boundary condition? It has to be some approximation to u'(0)=0. Maybe I'm never going to get to minus one. Maybe I don't need minus one. That's right, yeah, exactly. We did. That's what we knew about it. Sending it forward, we knew about forward difference, so I chose to do it. But then I think better of it. I chose to do it because it made the point that we, that at that boundary we were introducing a higher order error, first order error that's going to wreck things. I mean, it's going to spoil the, this is second order accuracy. And, but let me do that first order. So what shall I take? I'm going to approximate that by u-- Shall I take this one as I did in class? Yes. Ah, plus one, thanks, plus one, right, thank you. Thank you, good. Okeydoke. Alright. This is how we got to that equation. If I now bring in this boundary condition-- I guess I don't have to, let me take your eye off of that guy for the moment, I think. We're getting beautiful music here. Is it coming out of this box or? No. Anyway. so I'm going to use this boundary condition to say, well ok, if u_1 is u_0, I'm going to replace this u_0 by u_1. This is the direct way. I replaced that u_0 by u_1 in that first equation. And then what I have is -u_1 and 2u_1, so that's the one and I have the -u_2. 
So do you see that that equation, when I put those together into a one, is going to, if this is u_1, this is u_2, this is u_3 onwards, that first equation is u_1-u_2 and that's what I've got. This is u_1-u_2. So I did it. I got to that matrix. The matrix is actually quite an important matrix. But from the point of view of accuracy in solving this differential equation, it's not the greatest. It's lost accuracy at that point. But the way to recover it turned out to be just a small adjustment at the boundary, so not a problem. Thanks. That's good. Yes, thanks. Sorry? When the boundary-- sorry. Two boundary conditions at the same point? That's a good question. So when would we have two? So instead of a boundary condition at zero and a boundary condition at one, you're putting them, is that what you mean? Put both boundary conditions at the end. Ok. So that would be, that would happen, I would think that would be more, it would be very typical in a, let me see if there's some space here, yeah. That would be very typical and we will do it, can I change x to t? Because that's what, if I have some. What does this problem look like? And u(0)=0 and u'(0) =0. Both at the same, at the start equals zero or whatever. So what kind of a problem is that? Now these are, I would say, initial values. Initial values instead of boundary values, I now have initial values. And can I solve it? Yes. So I'm starting at time zero. This is t=0. I'm starting at rest. No velocity and actually no displacement and just going forward in time. So I could solve that differential equation. I'd be interested in the corresponding difference equation. All fine. It's a different category of problem. This is an initial value problem. It's like tracking some mass that's, some satellite. So that's what you're doing in tracking a satellite or a planet or something. Yeah, tracking a planet or a satellite. You're solving equations like this. Forward in time. You know the initial position and you know the forces acting on it. Probably gravity. And you go forward in time. Yes. What would the matrix be? Good question. What would the matrix look like? So an electrical engineer would call a problem like this, and the kind of matrix that I'm going to write down, I think, would be called causal. That word just popped into my head, so let me mention it. You know, part of science and engineering, a big part of it is learning language, learning words. And you have to learn sort of the math language and the engineering language for whatever you're focusing on. But it's good to also to know a few other languages. Electrical engineering languages of filters and causal and other things that we'll see are important. What would the matrix look like? Here's what I think it would be. I've made this a plus there just so I'll have to remember that. I think, so I'm looking at u_0, u_1, no. Well, u_0 I actually know, so let me start with u_1, u_2, u_3, u_4. What would a typical equal sum, right side, f_1, f_2, f_3, f_4. 1, What do you think, what kind of a matrix am I going to get? Before I put it in there. This is a good question. What's the shape of this matrix? It's going to be triangular. Instead of being symmetric it's going to be triangular. I'm going to find, let's see, a typical value would be, say, u_3 because I've used a plus sign, oops! I can't make myself do it right. 1, -2, 1. That would say u_3-2u_2+u_1 would be the new force. This is the kind of thing we're going to get. One, maybe one something. I don't know what this is. 
This is up in the boundary, in the initial values. But from now on it'll be below the diagonal. It'll be 1, -2, Do you see? We're marching. We're marching forward. We start by knowing these and then the equation tells us the next one. Then the equation tells us the next one. That's what initial value problems do. You're told how you begin and you take a step, you take a step, take a step every time. And the new value just needs to know the older values. Do you see the big difference between that and our problems here? Our problem is looking left and right looking for back and forward. Back for one condition, forward for another. We start with one, but we're, it's more of a, it's like a hitting problem. We start forward, marching forward in our problems, but we have to hit the other end correctly. We don't know the slope, we don't know the starting slope, we know what we want to hit. Whereas these problems, we're told how we start and we just follow it in time. So that's the difference here. Yeah, sure, okay. That's true. So this'll be known. Yeah, that'll be known. Yeah. u_1 will also be known. Yeah, and really, maybe I should have got, let me put even the other known one. So we know this, we know this. So those are sort of not in our, yeah, that shouldn't be in our problem somehow. No, I think, what would we get in the end? You're always looking backwards. That's the point. Lower triangular matrices are always looking, they only look backwards for earlier values and then they give you the current value. So that's why lower triangular matrices are so easy to invert. No problem. If it's lower triangular, you just, like, march forward. And if it's upper triangular, which way do you march? So if you have an upper triangular problem, suppose I gave you the problem, let me make it upper triangular. So x+y+z=7. 2y+3z=12 and z=17. So that's upper triangular. Where do we start in solving that one? From the bottom. From the right-hand end, the bottom. And we march backwards in time. And what I was saying about A, well L times U, yeah, this is worth seeing. What I was saying about A=LU, it was, you remember that? Those letters? What that was saying was that this matrix that's looking both ways can be written as a product of a matrix L that looks behind for old values and you can go forward with it. And a matrix U, like this one, this upper triangular, 1, 1, 1, zeroes below that diagonal, that you go backward with. Somehow that's appealing. That's like aesthetic to break up a two-way problem into a problem like marches one way and then the other. And of course, that's what elimination aims for, is this problem that it can solve by, the words would be back substitution. When you've started with your original problem, got to this one, then you just have a, back substitution, you go backwards. Oh, so much, I'll mention the Kalman filter. That's a similar process of going forward, that's called prediction. Going backward, that's called smoothing. And so, Kalman had the great idea that he could break these problems that were fundamental in space computations for prediction and smoothing. Once again, we've got off. Yes? Oh, the beam. Let me help you even more before the question. I said it's better to draw the beam this way. I like the beam better this way. Because the point of the beam problem is loads are acting, and we'll see this, of course later, loads are acting perpendicular to the direction of the beam. That's why the beam bends. So it'll bend a little, right? 
And that is what leads us, it's bending moments and other stuff. If you haven't met beams, well, it'll be great to just have a very, half a lecture about, or maybe a lecture about beams. That gives a fourth order equation that I'll write down again. Fourth derivative equal the load. Now, ready. Yeah, now here I don't have the negative sign. Because once I've got second derivatives twice, so the second derivative is, in some way, negative. I'll complete that sentence in a second. Somehow the second derivative, which is the guy that has the 1, -2, 1, somehow that's a negative thing. But fourth derivative is second derivative of the second derivative. Yeah, do you want to tell me what the numbers would be? As long as we're wildly looking forward to fourth derivatives, just, it helps. Do you want to guess what will a typical row of the matrix B when I go to finite differences, fourth differences? Probably you've never seen a fourth difference. You may not have seen second differences before. That was a big deal, then, to introduce second differences. Those 1, -2, 1's. That was second differences. Fourth? Yeah, 1, 4, 6, 4, 1 with minus sign. 1, -4, 6, -4, and 1. In some way, I would get that by squaring this guy. So that would be a fourth difference. Oh, what's the deal with boundary conditions? What are you figuring on beams, beam problems for a fourth order equation and a matrix that's stretching out further. What's going to happen at the left-hand boundary? I guess my specific question is, How many boundary conditions do I now need? Four. And the typical is two at each end. That's the balanced way. That's the way that would make this matrix sort of symmetric. So I have maybe at this end I say it's held at zero and maybe it's just sitting on a log there. Right? That boundary condition I would call simply supported. That boundary condition says that u(0)=0. Because it's sitting there. And but the slope doesn't have to be zero. What does have to be zero there? Yeah, sort of the bending moment. Nobody's here twisting it, right? So the other condition in that picture would be second derivative equal zero. Maybe my point is that now you see what I said before, that the getting the boundary conditions into the problem is often the hardest part. Because I have to replace u(0)=0, that shouldn't be too hard to do. But I have to use this other condition somehow, it's going to screw up the 1, -4, 6, -4, 1. I'll have two boundary rows at the top, two boundary rows at the bottom. I don't want to go further today. But I think maybe just mentioning this gives you the picture of sort of the how things fit together. We would still have some nice constant diagonals in the middle, but now we'll have two boundary rows at each end. So that's something to come. Yes, now back to reality which is any questions. Lower triangular guy, yeah. What do I mean by marching forward? So let's see. I'll replace this. Let's see it better. I would replace this by maybe u, I'll use a different letter, n+1 at u_(n+1) -2u_n+u(n-1) is some right-hand side if there's a force acting on my thing. So f_n maybe. By marching forward, I just mean that this equation, that I can go in order. I can start with u_0 and u_1. They come from the boundary conditions. Then this equation will tell me u_2. I use the equation. With n as one. This says u_2, some u_1's, some u_0's, f_1's, all that I know. In other words, once I get started, I'm on a roll. If I have two boundary conditions to get me started, then the equation tells me u_2. 
And then the next time, u_3-2u_2+u_1. I can find u_3. So I can get those, I can go forever. If you give me enough to start on, two things to start, then I march forward. Whereas in our problems, we've only got one thing to start on and we've got one goal to hit. And that's why we have to solve the whole system together. This is, we can solve it step-by-step. This is way faster of course. To be able to just go forward in time. I'll mention that the topic of initial value problems and finite differences for them, we can't get to that. So we're seeing a little bit here, but it's done properly in 18.086 in the second semester is the initial value problem start part. And that has it's own interesting questions. Somehow we've talked about fourth order equations, initial value problems. But no homework problems. So I'm ready for, or even related. But that's fine with me. Is there a question? Yeah, thanks. Or it doesn't have to be a homework question, another question. Oh, good question. You mean I should just send the homework out to Natick where MATLAB is. Do you know that MATLAB is just 15 miles away? I almost get there, I live 2/3 of the way there. Yeah, so we could just send the whole thing out there and get it back. That would save a lot of work. I suppose, I'm ok. Why should I say no? Anything MATLAB can do and you can make it do, I'm ok with that. I don't see that you have to do things by hand if you've got a better way. That's ok. And then probably the answer gets printed and you can graph it. So that's fine. So I mean, somehow a course like this has got two parts to it. Applied math has two parts to it. The modeling part, set up the equation, think, what is it you're supposed to do. And then, step two is do it. The numerical part, the computing part. And that's where MATLAB, Python, Fortran, whatever, is going to do a lot of the heavy lifting. Was there another question? So that first homework was certainly very general intentionally. Because I'm hoping you will read the book. The lecture, you'll be able to match the lectures with the book even later on when they separate a little or separate more. You'll see what we're doing. And those, the homework problems, you should look at some of the others just to see. Do I know how to do that? Right. Let's stop here for this first review. I'm sure we'll have more, questions will build up for the second week.
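The two marching directions from this session can be put into a few lines of Python. This is only an illustrative sketch (the step count and the forcing values are made up): the initial-value recurrence u_(n+1) - 2u_n + u_(n-1) = f_n only looks backward, so it marches forward from u_0 and u_1, while the small upper-triangular system x+y+z=7, 2y+3z=12, z=17 is solved the other way, by back substitution.

```python
import numpy as np

# Forward march: u_(n+1) = 2*u_n - u_(n-1) + f_n, starting from u_0 and u_1
N = 8
f = np.zeros(N); f[1] = 1.0            # one impulse in the forcing, just as an example
u = np.zeros(N + 1)                    # u[0] = u[1] = 0 are the two initial values

for n in range(1, N):
    u[n + 1] = 2*u[n] - u[n - 1] + f[n]    # each new value needs only earlier ones

print(u)    # [0. 0. 1. 2. 3. 4. 5. 6. 7.] -- an impulse in f produces a ramp in u

# Backward march (back substitution) for the upper-triangular example
z = 17.0
y = (12.0 - 3.0*z) / 2.0
x = 7.0 - y - z
print(x, y, z)    # 9.5 -19.5 17.0
```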
{"url":"http://ocw.mit.edu/courses/mathematics/18-085-computational-science-and-engineering-i-fall-2008/video-lectures/recitation-2/","timestamp":"2014-04-20T03:17:02Z","content_type":null,"content_length":"88863","record_id":"<urn:uuid:e22ab33b-22c1-4a50-9bc7-693c0ff53190>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00170-ip-10-147-4-33.ec2.internal.warc.gz"}
Coordinate Geometry – Master Slope and Get Points!

When my classes arrive at the math session that covers coordinate geometry, there is a general outbreak of panic: “I haven’t done this since high school…” “Which one is the y-axis again?” “Seriously, I’m going to grad school for history – why do I need to know this?!”

What people don’t realize is that coordinate geometry only requires you to memorize a few things, and they’re almost all related to the concept of slope. A line’s slope just measures its slant – the larger the slope’s absolute value, the closer it is to a vertical line. Any time the slope is expressed as a fraction, remember the expression “rise over run” – the slope represents the ratio of the vertical rate of change (in the numerator) to the horizontal rate of change (in the denominator). For example: A line with a slope of 1/3 moves up 1 unit for every 3 units that it moves to the right.

Let’s see how one key piece of knowledge can help us – check out the following Quantitative Comparison:

We’re asked to compare the lengths of a right triangle’s legs. Since the legs are perpendicular and start at the origin, we can use what we know about slope to determine the relationship between them. We know that the line’s slope is -9/10 (if that didn’t jump out at you, review slope-intercept form – it’s the other important thing to know for coordinate geometry questions). What does this tell us? It means that for every 9 units the line goes down, it goes 10 units to the right. Well, we can use this to compare AB and BC. Since they cross at a right angle, they represent the vertical and horizontal change of the line – AB is 9 units, to BC’s 10 units. And we’re done – Quantity B is greater.

Remember: Any topic with which a lot of test-takers struggle is high-yield on the GRE. So the more comfortable you are applying the definition of slope, the sooner you’ll fly by the competition!
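Putting the comparison above into symbols (using the segment names from the figure):

$$
\text{slope}=\frac{\text{rise}}{\text{run}}=-\frac{9}{10}
\;\Longrightarrow\;
\frac{AB}{BC}=\frac{9}{10}<1
\;\Longrightarrow\; AB<BC .
$$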
{"url":"http://gre.kaptest.com/2013/07/10/coordinate-geometry-master-slope-and-get-points/","timestamp":"2014-04-21T15:41:30Z","content_type":null,"content_length":"70728","record_id":"<urn:uuid:da6a9274-df7e-426c-ac99-7256bd18f4e4>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Easton, PA Algebra 1 Tutor Find an Easton, PA Algebra 1 Tutor ...I also tutor individual students who are preparing to take the MCAT. When I took the MCAT in preparation for Kaplan training, my overall score was a 41 which ranks in the 99th percentile. I am an experienced public speaker who has delivered technical presentations to large audiences of scientists and technical experts. 36 Subjects: including algebra 1, chemistry, reading, English ...Let me know what concepts you're struggling with before our session, so I can streamline the session as much as possible! In my free time, I like to play with my pet chickens, play Minecraft, code up websites, and write sci-fi creative stories. I participate in NaNoWriMo every year! ** NOTE: I can't travel farther than 10 miles to meet with you, due to an increase in tutees. 26 Subjects: including algebra 1, English, calculus, physics ...Understand FOIL. Discover methods for factoring trinomials quickly and easily. Understand slopes and line equations. 27 Subjects: including algebra 1, calculus, statistics, geometry ...We will then focus on some objectives or lessons that you have struggled with in the past so that you will be able to master them and build upon them. I may assign some homework in between our sessions, so that I can assess your understanding of the lessons we reviewed. I will tailor your sessions to fit your individual needs because we all have unique abilities and learning styles. 4 Subjects: including algebra 1, elementary (k-6th), elementary math, prealgebra ...I have experience teaching others to play piano and have had great success doing so. I am CompTIA Network + Certified. I have received extensive computer networking training from New Horizons Computer Learning Center, and I received a master of applied science in information technology. 16 Subjects: including algebra 1, reading, writing, English Related Easton, PA Tutors Easton, PA Accounting Tutors Easton, PA ACT Tutors Easton, PA Algebra Tutors Easton, PA Algebra 2 Tutors Easton, PA Calculus Tutors Easton, PA Geometry Tutors Easton, PA Math Tutors Easton, PA Prealgebra Tutors Easton, PA Precalculus Tutors Easton, PA SAT Tutors Easton, PA SAT Math Tutors Easton, PA Science Tutors Easton, PA Statistics Tutors Easton, PA Trigonometry Tutors Nearby Cities With algebra 1 Tutor Allentown, PA algebra 1 Tutors Alpha, NJ algebra 1 Tutors Bethlehem, PA algebra 1 Tutors College Hill, PA algebra 1 Tutors Forks Township, PA algebra 1 Tutors Glendon, PA algebra 1 Tutors Harmony Township, NJ algebra 1 Tutors Palmer Township, PA algebra 1 Tutors Phillipsburg, NJ algebra 1 Tutors Piscataway algebra 1 Tutors Reading Station, PA algebra 1 Tutors Stockertown Township, PA algebra 1 Tutors Tatamy algebra 1 Tutors Washington Street, NJ algebra 1 Tutors West Easton, PA algebra 1 Tutors
{"url":"http://www.purplemath.com/easton_pa_algebra_1_tutors.php","timestamp":"2014-04-17T07:47:15Z","content_type":null,"content_length":"24241","record_id":"<urn:uuid:54a6b1dd-cafe-49aa-a6b7-b0a6d8fa7db6>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00031-ip-10-147-4-33.ec2.internal.warc.gz"}
Appendix J

Performance-related specifications require mathematical models to link construction quality to expected life and, ultimately, to value expressed in the form of payment schedules. Although ongoing research efforts continue to advance the state of the art, the type of data needed to develop accurate and precise models may not become available for years. In the interim, present engineering and mathematical knowledge can be used to create rational and practical models that can perform effectively until better models are available. Examples are presented to illustrate how both analytical data and survey data can be used to develop realistic performance models useful for the development of payment schedules for QA specifications. The issue of the proper method to combine the effects of multiple deficiencies is also addressed.

The RQL is essentially defined as a severely deficient level of quality at which the agency reserves the option to require removal and replacement of the construction item. In the HMAC pavement specification for one agency, the RQL for both air voids and thickness had been defined in terms of percent defective as PD > 75. In other words, if either air voids or thickness exhibited this level of quality, the lot could be declared rejectable. However, this leads to the inconsistency shown in table 44.

Table 44. An Inconsistent RQL Provision

Case | Air Voids Quality Level | Thickness Quality Level | Triggers RQL Provision?
-----|-------------------------|-------------------------|------------------------
  1  | PD = 0 (Excellent)      | PD = 75 (RQL)           | Yes
  2  | PD = 75 (RQL)           | PD = 0 (Excellent)      | Yes
  3  | PD = 74 (Almost RQL)    | PD = 74 (Almost RQL)    | No

Clearly, case 3 is far worse than the other two but, under the existing system, it would not trigger the RQL provision whereas the first two cases would. Defining the RQL in a more appropriate way rectified this inconsistency. Intuitively, if both air voids and thickness reach some intermediate value less than PD = 75, say PD[VOIDS] = PD[THICK] = 50, for example, then that might logically be just as detrimental as PD[VOIDS] = 0, PD[THICK] = 75 or PD[VOIDS] = 75, PD[THICK] = 0.

To illustrate how a more suitable RQL provision can be developed, suppose the agency has determined that the conditions listed in table 45 are all likely to severely shorten the life of the pavement and, therefore, are appropriate RQL points. By plotting these three points on a graph, as illustrated by model #1 in figure 61, it can be seen that the model must be in the form of a curve that is concave-downward. Since the purpose of this model is to account for the combined effect of air voids and thickness, a simple way to accomplish this is to include the cross-product term in the RQL provision given by equation 86.

C[1](PD[VOIDS]) + C[2](PD[THICK]) + C[3](PD[VOIDS] x PD[THICK]) > 100    (86)

where:
PD[VOIDS] = air voids percent defective.
PD[THICK] = thickness percent defective.
C[i] terms = coefficients to be determined.

The threshold value of 100 on the righthand side of equation 86 that triggers the rejection provision is chosen arbitrarily and could be any convenient value. To determine the three coefficients C[1], C[2], and C[3], the three predetermined points in table 45 are substituted into equation 86 to obtain equations 87 through 89, thus providing three equations with three unknowns.
75 C[1] + 10 C[2 ]+ 750 C[3 ]= 100 (87) 10 C[1] + 75 C[2] + 750 C[3 ]= 100 (88) 50 C[1] + 50 C[2] + 2500 C[3 ]= 100 (89) Solving these simultaneous equations, and substituting the values of the coefficients back into equation 86, produces equation 90, which is plotted as model #1 in figure 61. 1.273 PD[VOIDS] + 1.273 PD[THICK] - 0.0109 (PD[VOIDS] x PD[THICK])>100 (90) To demonstrate that the model can be made to bend the other way, if desired, and that greater weight can be put on one property, air voids for example, table 46 presents a slightly different set of assumptions that might have been used. Solving the simultaneous equations generated by this data set produces equation 91, which has been plotted as model #2 in figure 61. By defining the rejectable level for thickness at a lower level of quality than the rejectable level for air voids, the coefficient of the thickness term in equation 91 has been reduced, thus giving air voids greater weight in this example. 1.076 PD[VOIDS] + 0.847 PD[THICK] + 0.0144 (PD[VOIDS] x PD[THICK])>100 (91) Note that the coefficient of the cross product term in equation 90 is negative, producing a model that is concave-downward, while the positive cross product coefficient in equation 91 produces a concave-upward model. If there were no cross product term, i.e., if coefficient C[3] in equation 86 were zero, the model would plot as a straight line. Because an equation of this form can produce any of these three shapes, it can be very effective as a performance model when two quality characteristics are involved. The specific application will dictate which shape is appropriate. The method that is discussed in this and following sections is applied to the example of using air voids and thickness as acceptance measures for HMAC pavements. However, the concept that is presented is appropriate for both HMAC and PCC and for other acceptance measures, provided a method exists for estimating the pavement lives for various levels of the quality measure. A highway agency can use whatever model or other method with which it is comfortable to arrive at the estimated lives for the as-constructed pavements. The methods used herein are only examples of possible approaches that can be used. If a performance model has been developed by the agency, then it may be possible to use this model to directly arrive at the expected pavement life for any combination of values for the variables included in the model. Returning to the example using air voids and thickness, to derive a mathematical performance model it will be necessary to have reasonably accurate values of expected life for the four conditions indicated in table 47. The values in this table were obtained by an agency using a simplified computer model that it has developed. The first value is obtained by assuming that the expected life of the pavement will equal the design life of 20 years if both air voids and thickness are at the AQL of 10 PD. Next, using the results obtained with the agency's computer model, expected lives of approximately 10 years each are obtained for the cases in which either air voids or thickness is at the indicated poor level of quality while the other measure is at the AQL. Finally, a method must be found to estimate the joint effect of poor quality in both air voids and thickness to be able to complete the table. 
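The linear solve behind equations 87 through 90 above can be reproduced in a few lines of code. The following NumPy sketch is only illustrative (the function name is not part of the specification); it recovers the coefficients of equation 90 and confirms that each of the three defining points falls exactly on the 100 threshold.

```python
import numpy as np

# Equations 87-89: the three RQL points substituted into equation 86
A = np.array([[75., 10.,  750.],
              [10., 75.,  750.],
              [50., 50., 2500.]])
b = np.array([100., 100., 100.])

C1, C2, C3 = np.linalg.solve(A, b)
print(C1, C2, C3)            # about 1.273, 1.273, -0.0109  (equation 90)

def rql_left_side(pd_voids, pd_thick):
    return C1*pd_voids + C2*pd_thick + C3*pd_voids*pd_thick

for point in [(75, 10), (10, 75), (50, 50)]:
    print(point, rql_left_side(*point))   # each returns 100, the rejection threshold
```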
In the absence of actual data obtained under controlled field conditions, the agency decided that a survey of experienced pavement engineers would be necessary to estimate this missing piece of information.

Table 47. Preliminary Performance Matrix of Expected Life Values for HMAC Pavement Under NJDOT Conditions

Air Voids Quality | Thickness Quality: PD = 10 | Thickness Quality: PD = 90
------------------|----------------------------|---------------------------
PD = 10           | 20 yrs                     | 10 yrs
PD = 75           | 10 yrs                     | ?

Figure 62 shows a completed survey questionnaire of the type that was sent by the agency to the chief engineer (or equivalent position) of all State transportation departments. A brief cover letter described the purpose of the survey and requested that it be forwarded to those individuals having extensive experience in the performance of HMAC pavements. Respondents were asked to estimate the expected life for seven different combinations of pavement quality under the assumption that acceptable quality in all three measures would result in the pavement providing the design life of 20 years.

Responses were received from 35 States, of which 4 indicated that this information was not available. Of the remaining responses, another five were excluded because some of the estimates of expected life were inconsistent with the assumption that, in a rational model, a large decrease in quality of any one parameter with the other parameters held constant would result in a corresponding decrease in expected life (provided it was not already zero). This left a total of 26 responses for the analysis, the averages of which appear on the survey questionnaire in figure 62.

The two matrices on the survey questionnaire provide two opportunities to examine how the effects of deficiencies in air voids and thickness should be combined based on the responses. Three different approaches were examined - additive, average, and product models - and the results are presented in table 48. It can be seen for case 1 in figure 62 that the effect of a change from good to poor in air-voids quality, with the other quality levels held constant, can be expressed as a decrease of 20 - 11.6 = 8.4 years, or as a ratio of expected life to design life of 11.6 / 20 = 0.58. Similarly, the effect due to thickness alone in case 1 is -5.0 years or a ratio of 0.75. Similar results are obtained for case 2.

If the effects were truly additive, the predicted life resulting from poor quality in both air voids and thickness for case 1 would be 20 - 8.4 - 5.0 = 6.6 years, as indicated in the sixth column in table 48. If the effects were averaged, the predicted life would be 20 - (8.4 + 5.0) / 2 = 13.3 years, which appears in the seventh column of table 48. By the product method, the predicted life would be the product of the individual ratios and the design life, or 0.58 x 0.75 x 20 = 8.7 years, which appears in the eighth column of table 48. For case 2, the average response for good quality of 16.1 is used in place of the design life.
Table 48. Comparison of Three Methods for Combining Effects

Case | Air Voids: Years | Air Voids: Ratio | Thickness: Years | Thickness: Ratio | Add | Average | Product | Survey
-----|------------------|------------------|------------------|------------------|-----|---------|---------|-------
  1  | -8.4             | 0.580            | -5.0             | 0.750            | 6.6 | 13.3    | 8.7     | 8.7
  2  | -6.8             | 0.578            | -4.2             | 0.739            | 5.1 | 14.5    | 6.9     | 6.8

The "Air Voids" and "Thickness" columns give the effect on expected life due to a change from good to poor quality ^1; "Add," "Average," and "Product" give the combined predicted life in years by the three combining methods; "Survey" is the expected life based on the survey, in years.

^1 Computed from survey results in figure 62

To judge which method is most appropriate, the last column of this table lists the average values estimated by the respondents of the survey. By comparing the values for predicted life with those in the last column, it is seen that the product method produces an almost exact agreement with the survey values, indicating that this method provides a good approximation of the manner in which experienced engineers believe the effects of multiple deficiencies manifest themselves. The average model greatly underestimates the expected loss of service life in this example, while the additive model, although overestimating the expected loss of service life, produces estimates that are reasonably close to the survey results.

These results suggest how a reasonable estimate for the missing value in the performance matrix in table 47 can be obtained. The values in table 47 indicate that poor quality in either air voids or thickness results in a ratio of expected life to design life of 10 / 20 = 0.50. Therefore, by the product method, the expected life when both air voids and thickness are at the indicated poor values is 0.50 x 0.50 x 20 = 5 years, completing the performance matrix as shown in table 49.

Table 49. Completed Performance Matrix of Expected Life Values

Air Voids Quality | Thickness Quality: PD = 10 | Thickness Quality: PD = 90
------------------|----------------------------|---------------------------
PD = 10           | 20 yrs                     | 10 yrs
PD = 75           | 10 yrs                     | 5 yrs

Based on these four values for expected life, it is possible to develop a realistic performance model, and also to determine the appropriate equation form for the RQL provision discussed earlier. It is once again stressed that the above approach is simply one example of how these estimated pavement lives could be obtained. Any method with which the agency is comfortable can be used to develop the values for estimated pavement life resulting from various levels of the quality measure. For example, if a performance model is available, and the highway agency has confidence in the predictive capability of the model, then it could be used to develop the expected pavement lives. As noted, such performance models are under development but may be a number of years away from widespread use. These models tend to be quite complicated, and will not likely use typical quality measures such as PWL or PD as input variables.

Equation 92, patterned after the general form for the RQL provision in equation 86, is a practical model for a performance equation based on two quality characteristics. The expected pavement life in years is designated by EXPLIF, and all other terms are as previously defined.

EXPLIF = C[0] + C[1] (PD[VOIDS]) + C[2] (PD[THICK]) + C[3] (PD[VOIDS] x PD[THICK])    (92)

The values for expected life in table 49 are used to develop four simultaneous equations that can be solved to provide the four equation coefficients. These are then substituted back into equation 92 to produce the performance model given by equation 93.
EXPLIF = 22.9 - 0.163 PD[VOIDS] - 0.135 PD[THICK] + 0.000961 (PD[VOIDS] x PD[THICK]) (93) It can be seen by inspection that this equation predicts that excellent quality (PD[VOIDS] = PD[THICK] = 0) will extend pavement life beyond its design life of 20 years to almost 23 years. It can also be readily calculated that the worst possible quality level in terms of percent defective (PD[VOIDS] = PD[THICK] = 100) produces an expected life of 2.7 years. Both of these are reasonable results based on the information that is available. As a further check of the derivation of equation 93, the four combinations of quality levels listed in table 49 can be entered into this equation to demonstrate that it returns the table 49 values for expected life. For a clearer picture of the operation of this performance model, figure 63 has been plotted. Here it can be seen that, based on the assumptions listed in table 49, the appropriate shape for the family of curves is concave-downward. This indicates that, of the two possible models for an RQL provision shown in figure 61, the concave-downward model should be selected. If, for example, the agency decided that an expected life of 10 years, or less, was sufficiently detrimental that it warranted outright rejection, then the value of EXPLIF = 10 would be substituted into equation 93, and the results scaled accordingly, to produce the RQL provision given by equation 94. (Equation 94 is very similar to the RQL provision given by equation 90 and plotted as model #1 in figure 61. It is represented by the 10-year-lifeline in figure 63.) 1.264 PD[VOIDS] + 1.047 PD[THICK] - 0.00745 (PD[VOIDS] x PD[THICK])>100 (94) A method was presented illustrating how analytical and/or survey data can be used to develop a mathematical model to predict pavement performance as a function of acceptance test results. The method involved developing a simple matrix of expected life values that are used to construct a set of simultaneous equations that are solved to derive a simplified practical performance model. This appendix also contains a summary of a nationwide survey conducted to determine the appropriate way to estimate the combined effect of multiple deficiencies. For this particular combination of quality characteristics (air voids and thickness), the analysis suggests that the combined effect is close to the sum of the individual effects, and appears to be best represented by the product of the ratios of the individual effects. A third method, based on the average of the individual effects, substantially underestimated the expected loss of service life.
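The same kind of calculation reproduces equations 93 and 94. The NumPy sketch below is illustrative only (the names and printed checks are arbitrary); it fits the four-coefficient model of equation 92 to the four expected-life values of table 49, evaluates it at perfect and worst-case quality, and rescales the 10-year contour so that the trigger value is 100.

```python
import numpy as np

# Four (air voids PD, thickness PD, expected life) points from table 49
pts = [(10, 10, 20.), (10, 90, 10.), (75, 10, 10.), (75, 90, 5.)]

A = np.array([[1., v, t, v*t] for v, t, _ in pts])
life = np.array([y for _, _, y in pts])
c0, c1, c2, c3 = np.linalg.solve(A, life)
print(c0, c1, c2, c3)     # about 22.9, -0.163, -0.135, 0.000961  (equation 93)

def explif(pd_voids, pd_thick):
    return c0 + c1*pd_voids + c2*pd_thick + c3*pd_voids*pd_thick

print(explif(0, 0))       # almost 23 years for excellent quality
print(explif(100, 100))   # about 2.7 years for the worst possible quality

# Rescale the EXPLIF = 10 contour so that the trigger value is 100 (equation 94)
scale = 100. / (c0 - 10.)
print(-c1*scale, -c2*scale, -c3*scale)   # about 1.27, 1.05, -0.0075
```

The rescaled values agree with the coefficients of equation 94 to within rounding.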
{"url":"http://www.fhwa.dot.gov/publications/research/infrastructure/pavements/pccp/02095/appj.cfm","timestamp":"2014-04-17T19:09:32Z","content_type":null,"content_length":"39557","record_id":"<urn:uuid:f7c39a70-012d-4abd-9ed8-232d47fc5507>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00175-ip-10-147-4-33.ec2.internal.warc.gz"}
Lecture 1 Types of scales & levels of measurement Discrete and continuous variables Daniel's text distinguishes between discrete and continuous variables. These are technical distinctions that will not be all that important to us in this class. According to the text, discrete variables are variables in which there are no intermediate values possible. For instance, the number of phone calls you receive per day. You cannot receive 6.3 phone calls. Continuous variables are everything else; any variable that can theoretically have values in between points (e.g., between 153 and 154 lbs. for instance). It turns out that this is not all that useful of a distinction for our purposes. What is really more important for statistical considerations is the level of measurement used. When I say it is more important, I've really understated this. Understanding the level of measurement of a variable (or scale or measure) is the first and most important distinction one must make about a variable when doing statistics! Levels of measurement Statisticians often refer to the "levels of measurement" of a variable, a measure, or a scale to distinguish between measured variables that have different properties. There are four basic levels: nominal, ordinal, interval, and ratio. A variable measured on a "nominal" scale is a variable that does not really have any evaluative distinction. One value is really not any greater than another. A good example of a nominal variable is sex (or gender). Information in a data set on sex is usually coded as 0 or 1, 1 indicating male and 0 indicating female (or the other way around--0 for male, 1 for female). 1 in this case is an arbitrary value and it is not any greater or better than 0. There is only a nominal difference between 0 and 1. With nominal variables, there is a qualitative difference between values, not a quantitative one. Something measured on an "ordinal" scale does have an evaluative connotation. One value is greater or larger or better than the other. Product A is preferred over product B, and therefore A receives a value of 1 and B receives a value of 2. Another example might be rating your job satisfaction on a scale from 1 to 10, with 10 representing complete satisfaction. With ordinal scales, we only know that 2 is better than 1 or 10 is better than 9; we do not know by how much. It may vary. The distance between 1 and 2 maybe shorter than between 9 and 10. A variable measured on an interval scale gives information about more or betterness as ordinal scales do, but interval variables have an equal distance between each value. The distance between 1 and 2 is equal to the distance between 9 and 10. Temperature using Celsius or Fahrenheit is a good example, there is the exact same difference between 100 degrees and 90 as there is between 42 and 32. Something measured on a ratio scale has the same properties that an interval scale has except, with a ratio scaling, there is an absolute zero point. Temperature measured in Kelvin is an example. There is no value possible below 0 degrees Kelvin, it is absolute zero. Weight is another example, 0 lbs. is a meaningful absence of weight. Your bank account balance is another. Although you can have a negative or positive account balance, there is a definite and nonarbitrary meaning of an account balance of 0. One can think of nominal, ordinal, interval, and ratio as being ranked in their relation to one another. 
Ratio is more sophisticated than interval, interval is more sophisticated than ordinal, and ordinal is more sophisticated than nominal. I don't know if the ranks are equidistant or not (probably not). So what kind of measurement level is this ranking of measurement levels? I'd say ordinal. In statistics, it's best to be a little conservative when in doubt.

Two General Classes of Variables (Who Cares?)

Ok, remember I stated that this is the first and most important distinction when using statistics? Here's why. For the most part, statisticians or researchers wind up only caring about the difference between nominal and all the others. There are generally two classes of statistics: those that deal with nominal dependent variables and those that deal with ordinal, interval, or ratio variables. (Right now we will focus on the dependent variable; later we will discuss the independent variable.)

When I describe these two general classes of variables, I (and many others) usually refer to them as "categorical" and "continuous." (Sometimes I'll use "dichotomous" instead of "categorical".) Note also that "continuous" in this sense is not exactly the same as "continuous" used in Chapter 1 of the text when distinguishing between discrete and continuous. It's a much looser term. Categorical and dichotomous usually mean that a scale is nominal. "Continuous" variables are usually those that are ordinal or better. Ordinal scales with few categories (2, 3, or possibly 4) and nominal measures are often classified as categorical and are analyzed with the binomial class of statistical tests, whereas ordinal scales with many categories (5 or more), interval scales, and ratio scales are usually analyzed with the normal theory class of statistical tests. Although the distinction is a somewhat fuzzy one, it is often a very useful distinction for choosing the correct statistical test. There are a number of special statistics that have been developed to deal with ordinal variables with just a few possible values, but we are not going to cover them in this class (see Agresti, 1984, 1990; O'Connell, 2006; Wickens, 1989 for more information on analysis of ordinal variables).

General Classes of Statistics (Oh, I Guess I Do Care)

Ok, so we have these two general categories (i.e., continuous and categorical); what next? Well, this distinction (as fuzzy as it may sound) has very important implications for the type of statistical procedure used, and we will be making decisions based on this distinction all through the course. There are two general classes of statistics: those based on binomial theory and those based on normal theory. Chi-square and logistic regression deal with binomial theory or binomial distributions, and t-tests, ANOVA, correlation, and regression deal with normal theory. So here's a table to summarize.

Type of Dependent Variable (or Scale) | Level of Measurement                      | General Class of Statistic (Binomial or Normal Theory) | Examples of Statistical Procedures
Categorical (or dichotomous)          | nominal; ordinal with 2, 3, or 4 levels   | binomial                                                | chi-square, logistic regression
Continuous                            | ordinal with more than 4 categories; interval; ratio | normal                                      | ANOVA, regression, correlation, t-tests

Survey Questions and Measures: Some Common Examples

In actual practice, researchers and real-life research problems do not tell you how the dependent variable should be categorized, so I will outline a few types of survey questions or other measures that are commonly used.
Yes/No Questions

Any question on a survey that has yes or no as a possible response is nominal, and so binomial statistics will be applied whenever a single yes/no question serves as the dependent variable or one of the dependent variables in an analysis.

Likert Scales

A special kind of survey question uses a set of responses that are ordered so that one response is greater than another. The term Likert scale is named after the inventor, Rensis Likert, whose name is pronounced "Lickert." Generally, this term is used for any question that has about 5 or more possible options. An example might be: "How would you rate your department administrator?" 1=very incompetent, 2=somewhat incompetent, 3=neither competent nor incompetent, 4=somewhat competent, or 5=very competent. Likert scales are either ordinal or interval, and many psychometricians would argue that they are interval scales because, when well constructed, there is an equal distance between each value. So if a Likert scale is used as a dependent variable in an analysis, normal theory statistics such as ANOVA or regression would be used.

Physical Measures

Most physical measures, such as height, weight, systolic blood pressure, and distance, are interval or ratio scales, so they fall into the general "continuous" category. Therefore, normal theory statistics are also used when such a measure serves as the dependent variable in an analysis.

Counts

Counts are tricky. If a variable is measured by counting, such as the case if a researcher is counting the number of days a hospital patient has been hospitalized, the variable is on a ratio scale and is treated as a continuous variable. Special statistics are often recommended, however, because count variables often have a very skewed distribution with a large number of cases with a zero count (see Agresti, 1990, p. 125; Cohen, Cohen, West, & Aiken, 2003, Chapter 13). If a researcher is counting the number of subjects in an experiment (or the number of cases in the data set), a continuous type of measure is not really being used. Counting in this instance is really examining the frequency with which some value of a variable occurs. For example, counting the number of subjects in the data set who report having been hospitalized in the last year relies on a dichotomous variable in the data set that stands for being hospitalized or not being hospitalized (e.g., from a question such as "Have you been hospitalized in the last year?"). Even if one were to count the number of cases based on the question "How many days in the past year have you been hospitalized?", which is a continuous measure, the variable being used in the analysis is really not this continuous variable. Instead, the researcher would actually be analyzing a dichotomous variable by counting the number of people who had not been hospitalized in the past year (0 days) vs. those that had been (1 or more days).
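To make the summary table concrete, here is a brief sketch of how each class of test might be run in Python with scipy. The data are invented for illustration and are not from the lecture; the point is only that a dichotomous dependent variable calls for a binomial-theory procedure (chi-square here) while a continuous dependent variable calls for a normal-theory procedure (a t-test here).

# Sketch with invented data: a binomial-theory test for a categorical outcome
# and a normal-theory test for a continuous outcome.
from scipy import stats

# Dichotomous outcome (e.g., hospitalized in the past year: yes/no) in two groups
# -> chi-square test on the 2x2 table of counts.
observed = [[30, 70],   # group A: 30 yes, 70 no
            [45, 55]]   # group B: 45 yes, 55 no
chi2, p_chi, dof, expected = stats.chi2_contingency(observed)

# Continuous outcome (e.g., systolic blood pressure) in the same two groups
# -> independent-samples t-test.
group_a = [118, 122, 130, 125, 119, 127]
group_b = [131, 128, 135, 129, 133, 126]
t_stat, p_t = stats.ttest_ind(group_a, group_b)

print("chi-square:", round(chi2, 2), "p =", round(p_chi, 4))
print("t:", round(t_stat, 2), "p =", round(p_t, 4))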
{"url":"http://www.upa.pdx.edu/IOA/newsom/pa551/lecture1.htm","timestamp":"2014-04-16T07:53:23Z","content_type":null,"content_length":"24504","record_id":"<urn:uuid:5e5a8c99-5448-4998-a18c-dec058c203bd>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Hodge theory for Lie algebra (co)homology

Let $G$ be a simple Lie group and $P$ its parabolic subgroup such that on the level of Lie algebras we have $\mathfrak{p} = \mathfrak{g}_0 \oplus \mathfrak{n}$. The dual $\mathfrak{n}^{*}$ is identified with the nilradical of the opposite Lie algebra via the Killing form of $\mathfrak{g}$. Now for any representation $V$ of $\mathfrak{g}$ one can define Lie algebra cohomology of $\mathfrak{n}$ with values in $V$ and Lie algebra homology of $\mathfrak{n}^*$ with values in $V$. The cochain and chain spaces are the same: $C^k(\mathfrak{n},V) = C_k(\mathfrak{n}^*,V) = \Lambda^k \mathfrak{n}^* \otimes V$. All details can be found in the paper "Lie Algebra Cohomology and the Generalized Borel-Weil Theorem" by Kostant, where it is proved that for finite-dimensional $V$, the Lie algebra homology differential $\delta$ is adjoint to the Lie algebra codifferential $d$ with respect to some invariant positive definite inner product on the (co)chain spaces. From the fact that these operators are adjoint it follows that there is a sort of Hodge theory on the (co)chain complex for the operator $\square = d\delta + \delta d$. In other words, there is a unique representative in each (co)homology class that is harmonic, i.e. it lies in the kernel of $\square$. The induced isomorphism of the (co)homology groups with $\ker \square$ lies at the heart of Kostant's famous theorem on the structure of $H^*(\mathfrak{n},V)$.

One actually doesn't need $d$ and $\delta$ to be adjoint with respect to an invariant inner product in order to get such a result. As Kostant actually proved in the paper, it is sufficient to prove that these operators are disjoint: $\delta d u = 0 \implies du=0 \quad \&\quad d\delta u = 0 \implies \delta u = 0$.

Let $G$ be a simple complex Lie group and let $P$ be a parabolic subgroup. For which representations $V$ of $(\mathfrak{g},P)$ are the Lie algebra differential and codifferential acting on $C(\mathfrak{n},V)$ disjoint?

Remark: (Co)chain complexes with values in the Segal-Shale-Weil representation are an example showing that these operators are not disjoint in general. This in particular means that there is no $\mathfrak{sp}$-invariant hermitian product on the Segal-Shale-Weil representation.

rt.representation-theory lie-algebras infinite-dim-manifolds

What is your choice for the identification of $C_*(n,V)$ with $C^*(n,V)$? – Vladimir Dotsenko Oct 25 '11 at 23:52

I am sorry for the confusion. The Lie algebra (co)differential is for $C^*(\mathfrak{n},V)$ while the Lie algebra differential is for $C_*(\mathfrak{n}^*,V)$. – Vít Tuček Oct 26 '11 at 12:47

OK, now it has just got worse. Before it was $C_*(n,V)$ and $C^*(n,V)$, the Chevalley-Eilenberg complexes computing the respective (co)homology. I am afraid I do not understand the notation $C_*(n^*,V)$. What I am asking, however, is that you obviously want to identify those two as vector spaces, since for $d$ and $\delta$ to be disjoint they should at least act on the same space. My question is, what is that identification? – Vladimir Dotsenko Oct 26 '11 at 14:20

I've tried to address your comments in the question. I hope it is clear now. – Vít Tuček Oct 26 '11 at 15:01

@Vladimir: I think that r0b0t is doing the following. By $\mathfrak n$ I understand the Lie algebra of "upper triangular" matrices. Then $\mathfrak n^*$ denotes the Lie algebra of "lower triangular" matrices, and the two Lie algebras are in duality via the Killing form. This is part of a general story.
The identification $\mathfrak n^* =$ lower triangulars induces on $\mathfrak n$ the structure of a Lie bialgebra, which is (more or less) equivalent to using the identification $C_\bullet(\mathfrak n)=C^\bullet(\mathfrak n^*)$ to make it into a BV algebra. – Theo Johnson-Freyd Oct 26 '11 at 16:20
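A standard computation makes precise the Hodge-theoretic step invoked in the question. This is only a sketch, assuming the finite-dimensional setting and a positive definite inner product on $\Lambda^k \mathfrak{n}^* \otimes V$ with respect to which $\delta = d^*$ (as in Kostant's paper):

$$\langle \delta d u, u \rangle = \langle d u, d u\rangle = \|du\|^2, \qquad \langle d \delta u, u \rangle = \langle \delta u, \delta u\rangle = \|\delta u\|^2,$$

so $\delta d u = 0$ already forces $du = 0$ and $d\delta u = 0$ forces $\delta u = 0$, which is exactly the disjointness condition. Moreover,

$$\square u = 0 \iff \langle \square u, u\rangle = 0 \iff \|du\|^2 + \|\delta u\|^2 = 0 \iff du = 0 \ \text{and}\ \delta u = 0,$$

so each (co)homology class contains a unique harmonic representative. The question asks for which $V$ the weaker disjointness property survives when no such invariant inner product is available.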
{"url":"http://mathoverflow.net/questions/79022/hodge-theory-for-lie-algebra-cohomology","timestamp":"2014-04-17T07:14:31Z","content_type":null,"content_length":"54279","record_id":"<urn:uuid:a095ef0c-6504-4939-8d6e-3d171f274651>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00550-ip-10-147-4-33.ec2.internal.warc.gz"}
Difference Between Euler Circuits And Euler Paths

An introduction to Euler paths and circuits (Library): An Euler circuit is similar to an Euler path, except that the starting and ending ... the relationship between the nature of the vertices and the kind of path/circuit that ...

Quick Summary: An Euler circuit IS a type of Euler path, but an Euler path is not necessarily an ... The only difference between SEMIEULERIZATION and EULERIZATION is that ...

Euler Circuits and Euler Paths (YouTube): Thank you very much, it really cleared my doubt about Euler paths and Euler circuits. The example was nice too; it cleared up the difference between Euler ...

Euler Paths and Euler Circuits: An Euler path is a path that uses every edge of a graph exactly once. An Euler circuit is a circuit that uses every edge of a graph ...

What is the difference between an Euler circuit and an Euler path: What is an Euler path or circuit? ... An Euler circuit is similar to an Euler path except you must start ... Difference between circuit and network? Circuit is ...

Difference between Hamiltonian path and Euler path (Stack Overflow): In graph theory, an Eulerian trail (or Eulerian path) is a trail in a graph which visits every edge exactly once. Similarly, an Eulerian circuit or Eulerian cycle is an ...

How to Determine an Euler Circuit (eHow): An Euler path is a path that crosses every edge exactly once without ... A Hamiltonian/Eulerian circuit is a path/trail of the appropriate type that ...

Euler Circuit, Euler Path, Graphs (National Curve Bank): When working with vertex-edge graphs, students must determine the difference between Euler circuits and Euler paths. Euler circuits have routes that travel ...

Difference between an Euler path/cycle and a Hamilton path/cycle: The National Curve Bank Project for Students of Mathematics: Graph Theory.

Eulerian path and circuit: In this chapter, Eulerian trails, loosely known as Euler paths and Euler tours ... an Eulerian circuit only exists on an undirected graph if ... Compute the shortest path between each pair of vertices a, b in S.

Euler and Hamilton Cycles; Planar Graphs: Which of the following graphs has an Euler path? ... a simple path containing every edge in G. An Euler circuit (or Euler cycle) is a cycle which is an Euler path. ... Need to insert a cycle between former edges ...

Chinese Postman (doc): What Euler wanted to discover was whether it would be possible to cross all the ... There are some differences between the Königsberg Bridge Problem (KBP) and ... and ends at the same vertex, it is called an Eulerian circuit (or Eulerian tour) ... vertices and at least one path between any pair of vertices in the subgraph.

Euler Circuit Activities (Activities #1-3): Goal: To discover the relationship between a graph's valence and ... Key Words: Graph, vertex, edge, path, circuit, valence, Euler circuit, connected. Activity #4 ...

Lecture 24: Euler and Hamilton Paths: Definition 1. An Euler circuit ... the existence of an Euler circuit (path). We also introduce a few sufficient conditions for the existence of a Hamilton circuit. What is the difference between sufficient ...

Graphs 3: An Euler path or circuit should use every single edge exactly one time. The difference between an Euler path and an Euler circuit is simply whether or not the ...

Graphs: Euler and Hamilton circuits: Euler was one of the first to expand geometry into problems that were ... Sometimes we want a graph to indicate that we can only move between two nodes in ... If the path ends at the same vertex at which you started, it is called an Euler circuit.

Euler Circuit? (Ask): This circuit is also called an Eulerian cycle or Eulerian path. It was first discovered ... What is the difference between a Hamiltonian circuit and an Euler circuit? Simple question ...

Hamilton Circuit/Path (Math Help Forum): Is there a difference between a Hamilton circuit and a Hamilton path? ... well, that's true for an Euler circuit and an Euler path, because an Euler path is when you ...

Euler Hamilton Paths, an Interactive Gizmo: user-defined graphs and interactive node selection in the search of Hamiltonian and Euler paths.

Posted on May 31, 2013 by Prijom Man.
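Several of the snippets above allude to the same degree rule, so here is a short self-contained sketch (written for this summary, not taken from any of the listed pages) that states the difference operationally: in a connected undirected graph, an Euler circuit exists when every vertex has even degree, an Euler path (but no circuit) exists when exactly two vertices have odd degree, and neither exists otherwise.

# Sketch: classify a connected undirected graph using the classic degree criterion.
# Assumes the graph is connected (isolated vertices and disconnected graphs are not handled).
from collections import defaultdict

def euler_classification(edges):
    degree = defaultdict(int)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
    odd = sum(1 for d in degree.values() if d % 2 == 1)
    if odd == 0:
        return "Euler circuit (every vertex has even degree)"
    if odd == 2:
        return "Euler path but no Euler circuit (exactly two odd-degree vertices)"
    return "Neither (more than two odd-degree vertices)"

# Königsberg bridges: four land masses A, B, C, D and seven bridges -> neither.
koenigsberg = [("A","B"), ("A","B"), ("A","C"), ("A","C"), ("A","D"), ("B","D"), ("C","D")]
print(euler_classification(koenigsberg))
# A triangle has all even degrees -> Euler circuit.
print(euler_classification([("A","B"), ("B","C"), ("C","A")]))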
{"url":"http://prijom.com/posts/difference-between-euler-circuits-and-euler-paths.php","timestamp":"2014-04-18T10:37:24Z","content_type":null,"content_length":"32216","record_id":"<urn:uuid:36e984e1-dd8d-43cb-9ca4-2de587ca44db>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Logic Syllabus: Course Description

Course Syllabus
Philosophy 103: Introduction to Logic

Instructor: Lee C. Archie
Office: LC M33
Telephone: 388-8383
Email: larchie@philosophy.lander.edu
ICQ: 14365150
Office Hours: MWF 11:05-11:30, TTh 11:05-11:30

Philosophy Homepage: http://philosophy.lander.edu/index.html
Philosophy Chat: http://philosophy.lander.edu/chat
Logic Help: http://philosophy.lander.edu/logichelp
Logic Help Archive: http://philosophy.lander.edu/logichelp.archive
Lander Philosophy Web: http://philosophy.lander.edu/lander/index.html

I look forward to talking to each of you about our logic course. You are warmly encouraged to stop by my office to discuss classroom lectures, papers, ideas, or problems. If the stated office hours do not fit your schedule, other times can be arranged.

Course Description

I. M. Copi and Carl Cohen, Introduction to Logic, New York: Prentice Hall, 2001 (11th edition). Information about the logic text is available from Prentice Hall.

Purpose of the Course

The general goal is to learn how to differentiate good from bad arguments. The approach is two-sided: (1) the analysis and classification of fallacies and (2) the analysis as well as the construction of valid arguments.

Objectives of the Course

The specific aims of this introductory survey of logic are
[1] to gain an appreciation for the complexity of language,
[2] to learn effective methods of resolution for a variety of disagreements,
[3] to obtain the ability to identify common fallacies in arguments,
[4] to understand the structure of different kinds of arguments,
[5] to recognize and evaluate different kinds of arguments,
[6] to grasp the features of traditional logic,
[7] to apply the principles of logic to ordinary language reasoning,
[8] to obtain some facility in symbolic manipulations,
[9] to develop the ability to think critically, and
[10] to realize that the proper use of logic is a reasonable way to solve problems.

The methods used to obtain these ends are
[1] to solve selected problems which illustrate basic logical principles,
[2] to read carefully and critically the text and/or several papers on logic,
[3] to write analytically about some issues in logical theory,
[4] to test your understanding by means of special examinations, and
[5] to question critically several interpretations of introductory logic.

In this course you will learn the difference between an argument and an explanation, the difference between deduction and induction, and the differences among truth, validity, and soundness in argumentation. You will learn some of the very effective methods of analysis and criticism.
{"url":"http://philosophy.lander.edu/logic/syllabus_description.html","timestamp":"2014-04-18T08:25:31Z","content_type":null,"content_length":"22669","record_id":"<urn:uuid:4437bb87-1423-4075-a94d-14f9809768a7>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00067-ip-10-147-4-33.ec2.internal.warc.gz"}
binomcdf, binompdf

Original post (May 12th 2009, 08:38 PM):
Ok, these problems should be easy, as I can use the calculator functions to solve them, but deriving the numbers has proved to be difficult.
Find the probability that at least 2 computers are defective: 23 computers, probability of a defective is .093.
What is the probability, on a multiple-choice test with 3 possible answers per question and 10 questions, that you will get an 80%?
Couldn't figure out how to type them in right.

Reply:
Let X be the random variable "number of defective computers", X ~ Binomial(n = 23, p = 0.093). Calculate $\Pr(X \geq 2) = 1 - \Pr(X \leq 1)$.
Let Y be the random variable "number of correctly answered questions", Y ~ Binomial(n = 10, p = 1/3). Calculate $\Pr(Y = 8)$.

Reply:
The first one should be hypergeometric since you can't pick the same computer twice. The bottom line is, you need to know if you're sampling with or without replacement. Usually when you sample items looking to see if they are defective, you do not replace them. Like buying milk or eggs: if you walk out with 4 containers of eggs, they were not replaced. The other problem here is that 23 times .093 is not an integer. I had expected it to be an integer, meaning that there were 2 or 3 defective computers here.

Reply:
Even if 23 times 0.093 is an integer, I still don't see why, in your first post, you jump to the conclusion that it's hypergeometric. First, the question is clearly about the whole population, i.e., it's a census, not a sampling problem. Second, with hypergeometric problems, you are given the number of defective parts in the population, not the probability that each sampled item is defective. Third, it's not just about sampling with or without replacement; it's about whether the probability of picking up defective parts is constant from sample to sample. If the probability is constant, it's a binomial problem, even if you're sampling without replacement. If it's not, because the number of defective parts is fixed in the population, then it's hypergeometric.

Reply:
I did not realize we were looking at all of the computers. I thought we were looking at a subset, like there were 23 total machines and 21 good and 2 bad, and how can we pick 2 bad and some good... But it seems we weren't.
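For readers without the calculator, the two binomial answers proposed in the replies can be checked in Python with scipy; the calculator keystroke equivalents are noted in the comments. This is only an illustrative sketch, assuming the binomial models suggested above.

# Sketch: scipy equivalents of the TI binompdf/binomcdf calls discussed in the thread.
from scipy.stats import binom

# P(at least 2 defective), n = 23, p = 0.093; on the calculator: 1 - binomcdf(23, .093, 1)
p_at_least_two = 1 - binom.cdf(1, 23, 0.093)

# P(exactly 8 of 10 correct), p = 1/3; on the calculator: binompdf(10, 1/3, 8)
p_exactly_eight = binom.pmf(8, 10, 1/3)

print(round(p_at_least_two, 4), round(p_exactly_eight, 6))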
{"url":"http://mathhelpforum.com/advanced-statistics/88785-binomcdf-binompdf.html","timestamp":"2014-04-18T13:41:17Z","content_type":null,"content_length":"51797","record_id":"<urn:uuid:42182e34-7255-4809-aa7b-b2fedcc58714>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
Auburndale, MA Statistics Tutor

Find an Auburndale, MA Statistics Tutor

...While at MIT, I won prizes for my writing in both the humanities and the sciences. I've done labwork at MIT's Koch Center for Integrative Cancer Research, Sloan-Kettering Cancer Center, and the McGovern Institute for Brain Research. I've also worked as an editorial assistant at the Boston Review, a national magazine of politics, literature, and the arts.
47 Subjects: including statistics, English, reading, chemistry

...For those of you who do calculations, Excel is "the" scientific calculator with all the bells and whistles. I am a superuser of Excel, and if you can tell me what you are trying to do, I can show you how to do it! I can give you examples from my past work, and even develop examples I can give you to help you with your work at hand.
18 Subjects: including statistics, English, writing, GRE

...I've also helped write and edit textbook teacher's editions and workbooks in prealgebra, so I'm familiar with prealgebra pedagogy and how it can differ from one school district to the next. The topics learned in middle school prealgebra form a foundation of math skills that are used in every mat...
23 Subjects: including statistics, chemistry, calculus, writing

I am a certified math teacher (grades 8-12) and a former high school teacher. Currently I work as a college adjunct professor and teach college algebra and statistics. I enjoy tutoring and have tutored a wide range of students - from middle school to college level.
14 Subjects: including statistics, geometry, algebra 1, algebra 2

...I have a strong background in Math, Science, and Computer Science. I currently work as a software developer at IBM. When it comes to tutoring, I prefer to help students with homework problems or review sheets that they have been assigned.
17 Subjects: including statistics, geometry, algebra 1, economics
{"url":"http://www.purplemath.com/auburndale_ma_statistics_tutors.php","timestamp":"2014-04-17T01:10:50Z","content_type":null,"content_length":"24353","record_id":"<urn:uuid:51336af6-f333-406c-960d-c338ad7f7b04>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00662-ip-10-147-4-33.ec2.internal.warc.gz"}