content
stringlengths
86
994k
meta
stringlengths
288
619
Geometry Problems March 18th 2012, 03:46 PM #1 Mar 2012 Geometry Problems 6. Circle A has a radius of 10 units and Circle B has a radius of 5 units. The center of Circle B is on the circumference of Circle A. What is the difference between the area which is within A but not B and the area which is within B but not A. Put your answer as a multiple of Pi. 30. Marna baked a batch of brownies in a 9x12 pan. She then decided to created a giant ice cream sandwich by cutting two congruent circular brownies out of the pan and placing a wedge of chocolate chip ice cream between them. What is the radius of the largest circular brownies she can cut? Express you answer as a common fraction in simplest radical form Please Help Last edited by MathStudent1999; March 18th 2012 at 04:42 PM. Re: Geometry Problems We can only help if you show your work, else we don't know where you're stuck. Re: Geometry Problems I don't know how to start. These questions are just to hard for me. I just got (15√2 - 15 / 2) for #30. Can someone verify this for me? Last edited by MathStudent1999; March 18th 2012 at 05:12 PM. Reason: Got Question 30 Re: Geometry Problems You're making me nervous! That comes out to ~3.1066 ; how did you get that? By the way, your bracketing is not correct; should be: (15√2 - 15) / 2 I get (21 - SQRT(216) / 2 = ~3.1515 ; quite close to yours... If we make the rectangle's sides a and b, then: maximum radius of 2 inscribed circles = [a + b - SQRT(2ab)] / 2 Last edited by Wilmer; March 18th 2012 at 09:42 PM. Re: Geometry Problems You're making me nervous! That comes out to ~3.1066 ; how did you get that? By the way, your bracketing is not correct; should be: (15√2 - 15) / 2 I get (21 - SQRT(216) / 2 = ~3.1515 ; quite close to yours... If we make the rectangle's sides a and b, then: maximum radius of 2 inscribed circles = [a + b - SQRT(2ab)] / 2 Is [a + b - SQRT(2ab)] / 2 a formula? Re: Geometry Problems Call it whatever you want: it's the solution to your problem. If it hasn't been used before (which I doubt), then I'm sure I'll get no Nobel prize! Get some graph paper and try it using a 16 by 18 rectangle: you'll get an exact 5 as radius of circles, which you'll be able ti "fit in" perfectly; each circle is tangent to 2 sides, plus the circles are tangent to each other... Last edited by Wilmer; March 19th 2012 at 06:12 PM. Re: Geometry Problems Okay but I probably got this wrong,I didn't look at the other answers or replies people gave because I want to figure this out too!! Okay so, Circle A has a radius of 10 units and Circle B has a radius of 5 units. (That means A is bigger then B right?) The center of Circle B is on the circumference of Circle A. What is the difference between the area which is within A but not B and the area which is within B but not A. (That means it would be 5 units in difference) Put your answer as a multiple of Pi. I don't know how to put it in pi! So can anyone tell me? I'm super stuck too. Marna baked a batch of brownies in a 9x12 pan. She then decided to created a giant ice cream sandwich by cutting two congruent circular brownies out of the pan and placing a wedge of chocolate chip ice cream between them. What is the radius of the largest circular brownies she can cut? Express you answer as a common fraction in simplest radical form I don't get how this is possible either cause there are two congruent circle shaped brownies but it never tells you the radius of those brownies! D: Re: Geometry Problems Alice, sorry, but you're making no sense... the question is to calculate the radius; so radius sure won't be a given! Re: Geometry Problems Perhaps the intersecting area can be treated as an ellipse, then the minor axis is 5 units. Major axis ?? March 18th 2012, 04:51 PM #2 MHF Contributor Dec 2007 Ottawa, Canada March 18th 2012, 04:54 PM #3 Mar 2012 March 18th 2012, 09:34 PM #4 MHF Contributor Dec 2007 Ottawa, Canada March 19th 2012, 03:45 PM #5 Mar 2012 March 19th 2012, 05:37 PM #6 MHF Contributor Dec 2007 Ottawa, Canada March 25th 2012, 07:17 AM #7 Mar 2012 March 25th 2012, 08:31 AM #8 MHF Contributor Dec 2007 Ottawa, Canada March 31st 2012, 09:59 AM #9
{"url":"http://mathhelpforum.com/geometry/196120-geometry-problems.html","timestamp":"2014-04-16T08:48:17Z","content_type":null,"content_length":"54155","record_id":"<urn:uuid:d63f0335-533a-41c3-bbde-80045af7d022>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00294-ip-10-147-4-33.ec2.internal.warc.gz"}
Need Help Solving a problem for work November 2nd 2012, 08:09 AM #1 Nov 2012 Need Help Solving a problem for work Hi All, I've been tasked at work for create an excel based product library for my company which will product parts. I'm working on a product that needs to be parametric. Please see the attached image If the lengths of the angled lines were a fixed length i would know how to calculate all of this. But since they lengths are relative to the front radius I'm kinda at a loss on how to generate the x and y points for each line. Any help would be appreciated. I'm sure this is a snap for most of you but I suck at math especially trig. Thanks in advance Re: Need Help Solving a problem for work Hey macleodjb. How much calculus do you know? The reason is that you will know the tangent for the start and ends of the circle since you know the angles and you you have a formula regarding what the derivatives are for a given circle with an equation x^2 + y^2 = r^2 and the dy/dx will correspond to the gradient as a function of the angle. So you have two gradients so you will essentially get a relation to where the (x,y) points are for the circle part (you get two solutions and discard one for each tangent) and this will give you the relative points for that circle which gives the (x,y) points at the two positions. Re: Need Help Solving a problem for work Sorry I don't know any calculus. I googled derivatives but the math jargon is just way over my head. Would you mind breaking it down in laymens terms for me. Like Step 1 Angle1/Radius+Whatever. Re: Need Help Solving a problem for work Hello, Joe! This takes a lot of work. I have a start on it. Maybe you or someone else can finish it. (I assume the lower-left corner is a right angle.) | o P | * * | * * | * * | * * B| * β * (0,b)o - - - - - * | * b | * | * α - - * - - - - - - o - - - - - - - O a (a,0) The line $AP$ has slope $\tan\alpha.$ Its equation is: . $y \:=\:\tan\alpha(x-a) \quad\Rightarrow\quad y \:=\:x\tan\alpha - a\tan\alpha$ .[1] The line $BP$ has slope $\tan\beta.$ Its equation is: . $y \:=\:x\tan\beta + b$ .[2] Equate [1] and [2]: . . $\begin{array}{c}x\tan\alpha - a\tan\alpha \:=\:x\tan\beta + b \\ x\tan\alpha - x\tan\beta \:=\: a\tan\alpha + b \\ x(\tan\alpha - \tan\beta) \:=\:a\tan\alpha + b \\ x \:=\:\dfrac{a\tan\alpha + b}{\tan\alpha - \tan\beta} \end{array}$ Substitute into [2]: . . $y \:=\:\left(\frac{a\tan\alpha + b}{\tan\alpha - \tan\beta}\right)\tan\beta + b \quad\Rightarrow\quad y \:=\:\frac{\tan\alpha(a\tan\beta + b)}{\tan\alpha - \tan\beta}$ We know the coordinates of point $P\left(\frac{a\tan\alpha + b}{\tan\alpha - \tan\beta},\;\frac{\tan\alpha(a\tan\beta + b)}{\tan\alpha - \tan\beta}\right)$ We are given $r$, the radius of the circle. Find the equation of the line parallel to $AP$ (to the left of $AP$) . . and at a distance $r$ from $AP.$ Find the equation of the line parallel to $BP$ (and below $BP$) . . and at a distance $r$ from $BP.$ The intersection of these lines is the center of the circle. Your turn . . . Re: Need Help Solving a problem for work Well, thus far I really appreciate the help. Sadly, I can't follow it. I figured I was on the right thought path with using slope. For me to understand this you will have to label the formula with things from the image I attached. I understand the idea that you're telling me but not the formula to get there. Yes the lower left corner is 90 degrees. So in order for us to communicate effectively I have added point names to my original image so you can use those in your formula. Re: Need Help Solving a problem for work Remember that the derivative of a point for a circle is given by dy/dx = -y/x and tan(theta) = dy/dx Re: Need Help Solving a problem for work I've been trying to figure out what you guys are telling me but I haven't gotten very far. I figured out how to calculate the slope for each angle, but I dont know how to calculate the length of each segment to the point (p). Then i have to offset those lines the distance of the radius to get the center point. I understand it but cannot do it mathematically. Can you re-phrase November 2nd 2012, 11:09 PM #2 MHF Contributor Sep 2012 November 3rd 2012, 05:02 AM #3 Nov 2012 November 3rd 2012, 09:23 AM #4 Super Member May 2006 Lexington, MA (USA) November 3rd 2012, 03:44 PM #5 Nov 2012 November 3rd 2012, 04:13 PM #6 MHF Contributor Sep 2012 November 4th 2012, 07:31 PM #7 Nov 2012
{"url":"http://mathhelpforum.com/trigonometry/206619-need-help-solving-problem-work.html","timestamp":"2014-04-17T16:22:12Z","content_type":null,"content_length":"52850","record_id":"<urn:uuid:91945e65-b6d8-465c-81b2-d6a9cd568474>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
188 helpers are online right now 75% of questions are answered within 5 minutes. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/users/ladym/asked","timestamp":"2014-04-18T13:47:52Z","content_type":null,"content_length":"118325","record_id":"<urn:uuid:fa69a668-778f-4476-a0bd-56088c361180>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Terrorists, Data Mining, and the Base Rate Fallacy I have already explained why NSA-style wholesale surveillance data-mining systems are useless for finding terrorists. Here's a more formal explanation: Floyd Rudmin, a professor at a Norwegian university, applies the mathematics of conditional probability, known as Bayes' Theorem, to demonstrate that the NSA's surveillance cannot successfully detect terrorists unless both the percentage of terrorists in the population and the accuracy rate of their identification are far higher than they are. He correctly concludes that "NSA's surveillance system is useless for finding terrorists." The surveillance is, however, useful for monitoring political opposition and stymieing the activities of those who do not believe the government's propaganda. What is the probability that people are terrorists given that NSA's mass surveillance identifies them as terrorists? If the probability is zero (p=0.00), then they certainly are not terrorists, and NSA was wasting resources and damaging the lives of innocent citizens. If the probability is one (p=1.00), then they definitely are terrorists, and NSA has saved the day. If the probability is fifty-fifty (p=0.50), that is the same as guessing the flip of a coin. The conditional probability that people are terrorists given that the NSA surveillance system says they are, that had better be very near to one (p=1.00) and very far from zero (p=0.00). The mathematics of conditional probability were figured out by the Scottish logician Thomas Bayes. If you Google "Bayes' Theorem", you will get more than a million hits. Bayes' Theorem is taught in all elementary statistics classes. Everyone at NSA certainly knows Bayes' Theorem. To know if mass surveillance will work, Bayes' theorem requires three estimations: 1. The base-rate for terrorists, i.e. what proportion of the population are terrorists; 2. The accuracy rate, i.e., the probability that real terrorists will be identified by NSA; 3. The misidentification rate, i.e., the probability that innocent citizens will be misidentified by NSA as terrorists. No matter how sophisticated and super-duper are NSA's methods for identifying terrorists, no matter how big and fast are NSA's computers, NSA's accuracy rate will never be 100% and their misidentification rate will never be 0%. That fact, plus the extremely low base-rate for terrorists, means it is logically impossible for mass surveillance to be an effective way to find I will not put Bayes' computational formula here. It is available in all elementary statistics books and is on the web should any readers be interested. But I will compute some conditional probabilities that people are terrorists given that NSA's system of mass surveillance identifies them to be terrorists. The US Census shows that there are about 300 million people living in the USA. Suppose that there are 1,000 terrorists there as well, which is probably a high estimate. The base-rate would be 1 terrorist per 300,000 people. In percentages, that is .00033%, which is way less than 1%. Suppose that NSA surveillance has an accuracy rate of .40, which means that 40% of real terrorists in the USA will be identified by NSA's monitoring of everyone's email and phone calls. This is probably a high estimate, considering that terrorists are doing their best to avoid detection. There is no evidence thus far that NSA has been so successful at finding terrorists. And suppose NSA's misidentification rate is .0001, which means that .01% of innocent people will be misidentified as terrorists, at least until they are investigated, detained and interrogated. Note that .01% of the US population is 30,000 people. With these suppositions, then the probability that people are terrorists given that NSA's system of surveillance identifies them as terrorists is only p=0.0132, which is near zero, very far from one. Ergo, NSA's surveillance system is useless for finding terrorists. Suppose that NSA's system is more accurate than .40, let's say, .70, which means that 70% of terrorists in the USA will be found by mass monitoring of phone calls and email messages. Then, by Bayes' Theorem, the probability that a person is a terrorist if targeted by NSA is still only p=0.0228, which is near zero, far from one, and useless. Suppose that NSA's system is really, really, really good, really, really good, with an accuracy rate of .90, and a misidentification rate of .00001, which means that only 3,000 innocent people are misidentified as terrorists. With these suppositions, then the probability that people are terrorists given that NSA's system of surveillance identifies them as terrorists is only p=0.2308, which is far from one and well below flipping a coin. NSA's domestic monitoring of everyone's email and phone calls is useless for finding terrorists. As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones. Data mining is by no means useless; it's just useless for this particular application. Posted on July 10, 2006 at 7:15 AM • 143 Comments klassobanieras • July 10, 2006 7:48 AM Isn't the point of wholesale surveilance like this to prune down the number of cases that need to be examined? If so, p=0.01 means that the NSA only has to look at 100 guys to find one terrorist, which seems rather useful to me. And how is p=0.5 the same as flipping a coin? That's such a self-evidently wrong statement that I'm sure I'm missing something. Lou the troll • July 10, 2006 7:53 AM @klassobanieras: Let me guess, either you don't believe that coins are evenly weighted or you live somewhere that has coins with more than two sides... Lou the troll klassobanieras • July 10, 2006 8:07 AM @Lou the troll: The flipping-a-coin analogy is used to suggest that a system with p=0.5 is somehow the same as blind luck. This is clearly not true - in such a system, half the guys to come to the NSAs attention would be terrorists. I'm sure they would find that somewhat more useful than flipping a coin. lukem • July 10, 2006 8:08 AM I hate to defend NSA surveillance but klassobanieras has made a fine point. It all depends on how many false positives there are... If it's 3 million then that is a problem, but if it's 30,000 it seems like such a system could actually be useful. Rubyin's assertion that 30,000 false positives would make "NSA's surveillance system useless for finding terrorists" seems a little bit unsubstantiated, don't you think? Frank Ch. Eigler • July 10, 2006 8:10 AM > NSA's accuracy rate will never be 100% and > their misidentification rate will never be 0%. > That fact, plus the extremely low base-rate for > terrorists, means it is logically impossible for > mass surveillance to be an effective way to find > terrorists. This claim of "logically impossibility" is itself a faulty leap of logic. 100%/0% are straw man wondering • July 10, 2006 8:13 AM If this blog ever a reported the US Government doing something right I'd take it more seriously. Alternatively, if there were more suggestions of what to do instead... lukem • July 10, 2006 8:13 AM Sorry, I meant Rudmin, not Rubyin! Dude N. • July 10, 2006 8:14 AM Hang on. This analysis seems to implicity assume that there exists a fair coin somewhere such that if you flip it to assign heads or tails to each person, the probability that a head will be a terrorist is 0.5. But that's silly. The population of terrorists and non-terrorists gets split evenly by the coin, and the probability that someone labeled with a head is a terrorist is the same as the prior probability that someone is a terrorist -- 0.00033% in this example. So now, the analysis tells us that if NSA can improve this percentage by two orders of magnitude to 0.02%, they have accomplished nothing, because they haven't increased the percentage to 50%. But as klassobanieras said, fair coins won't magically produce a set with half terrorists in it. I don't mind being against wholesale Hoovering of our data; I'm against it myself. But this analysis seems fundamentally flawed, and I think it damages the credibility of opposition to NSA's data mining program. Tamas • July 10, 2006 8:15 AM @klassobanieras: I don't think you are following the analysis. You are making a statement about the posterior probability ("half the guys to come to the NSAs attention would be terrorists"), whereas the article calculates this probability by making assumptions about prior and conditional probabilities. Generally, people are very bad at estimating probabilities, especially if they result from complex processes, or belong to rare events. That's why it is better to calculate them formally, which is exactly what the interesting part of statistics does.* *For historical reasons, descriptive statistics is also called "statistics", even if it involves no inference, just summary, for example, think of the GDP. I find it hard to believe (but not impossible) that the NSA lacks the skills in statistics to not understand the value (or lack thereof) in the use of data mining. lukem • July 10, 2006 8:21 AM I think we can stop bickering about the p=.5 coin flip stuff, the real question is: Can such systems reduce the number of false positives to something managable? Rudmin's analysis was fine until he started saying unsubstantiated things like 30k false positives make such a system useless. Tamas • July 10, 2006 8:22 AM @Dude N: I think that the coin in the article just serves as a benchmark for evaluating the effectiveness of the program. To argue whether the program is worthwhile, one would need to assign a loss function, ie somehow formalize the the losses/gains associated with identifying a terrorist correctly (T), missing a terrorist (M), suspecting an innocent citizen (S), and identifying an innocent person correctly (I). Depending on these numbers (and the cost of the program itself), 0.02 might either be good or insufficient. Eg if M is a much higher cost then S, then even very low posterior probabilities would be OK. The hard question is estimating these costs/benefits. klassobanieras • July 10, 2006 8:31 AM @Tamas: The analysis defines p as "What is the probability that people are terrorists given that NSA's mass surveillance identifies them as terrorists?" If p=0.5, then a person brought to the NSA's attention by their system has a 50% probability of being a terrorist. The analysis suggests that this is somehow useless or random, which is what I don't bob • July 10, 2006 8:39 AM Perhaps this is the coin explanation? Method 1): Take a random person. Flip a coin to determine his terroristness, heads for OK, tails for terrorist. Method 2): Run that same person through the NSA system. It will come back "yes/no" this person is a terrorist. Rudmin is saying that the coin is going to be correct more often than the NSA. Also, 30,000 might seem like an acceptably small number of false positives, but if you are one of the 30,000 people who's lives are ruined, I suspect you would disagree. Government is supposed to protect you from terrorists, not provide you with surrogate terrorists. Loyal Citizen • July 10, 2006 8:42 AM 30k false positives *a day* could very well make the system useless. What this analysis is saying is that the misidentified rate (Type II-- 'false positive') is THE MOST important part of this type of analysis-- even more so than the accuracy rate. Bottom line, without further information either way regarding the actual scope of the system, it would appear that NSA has it's work cut out for them--- the misidentification rate could very well be the most difficult part of the problem to solve. Anonymous • July 10, 2006 8:46 AM @bob, it's not clear that a person deemed as a terrorist by the system would have his or her life ruined. Maybe it just would just mean they were subjected to additional surveillance or something. And yes that might be bad if that happened 30,000 times, and we might not want to make that tradeoff. But then Rudmin goes on to assert that the NSA system would still be useless even if the system magically had only 3,000 false positives, while identifying 700 out of the hypothetical 1000 terrorists in the country. It just seems totally baffling to me that he could still be saying the system is useless. Shura • July 10, 2006 8:49 AM @arl: Of course the NSA knows about these things - they're not stupid. The whole thing just shows that the "we're doing this to catch terrorists" justification is a big, fat lie. Tim Kirk • July 10, 2006 8:49 AM From the article - using the most optimistic figures. "then the probability that people are terrorists given that NSA's system of surveillance identifies them as terrorists is only p=0.2308" So even with very optimistic assumptions the NSA system would produce 3 times as many false positives and correct identifications (0.7692 to 0.2308). Given the recent accidental shooting in the UK of an innocent man (who was then detained, with his brother, for several days) I can see a lot of problems with a system that pulls names out of great masses of data (too great for it to be easy for a human double check - especially quickly, with the fear of being too slow when searching for terrorists) and yet is wrong 3 times out of 4. The number of false positives seems less important to me than the odds of any given positive being false - and unless I'm totally mis-remembering my maths from school that is the real problem here. Anonymous • July 10, 2006 8:49 AM Oops, that last post (@bob, it's not clear) was from me (lukem), not anonymous. jayh • July 10, 2006 8:50 AM --I find it hard to believe (but not impossible) that the NSA lacks the skills in statistics to not understand the value (or lack thereof) in the use of data mining. -- The concern is not that they lack the skills, but lack the concern about the potential harmful side effects. To an agency whose whole focus is preserving a governmental structure, individuals lie pretty far down on the priority list. Tamas • July 10, 2006 8:51 AM @klassobanieras: There is little to understand in the article, because the argument the author makes doesn't allow him to evaluate how effective the program is. To make claims like that, he would need a loss function, which he does not provide. If missing a terrorist has very high costs, but suspecting somebody (who possibly won't even know about it) has very low costs, then even a probability of 0.02 makes the program effective. For example, many medical tests have worse posterior probabilities than this (because of low incidence in the population), and they are still used, because preventing death from illness is worth the hassle. Note that I am not saying that the vacuum-cleaner approach to surveillance is good, I think that sooner or later it will diminish civil liberties. I only want to argue that posterior probabilities are useless without a loss function for decision-making. This is such a trivial statement in decision theory that it should be implicit in any kind of analysis... Couple of tangential points: 1. statisticians have tried to quantify "randomness", and the concept is called entropy. Indeed, a uniform distribution has the highest entropy, so a fair coin is the "least informative" distribution with only two outcomes. However, this is not relevant here, as we are trying to cast this problem in a decision-theoretic framework, for which you need a loss function. 2. arl made the argument which can be best summarized as "NSA wouldn't be doing this if it wasn't worthwhile". It is true that they have some of the best statisticians, but they don't have the same loss functions as the rest of society (eg they care less about suspecting innocent people), so they are not the best candidate for evaluating whether such programs are good. bob • July 10, 2006 8:53 AM If false positive (falsely identifying someone as a terrorist - not sure whether thats a "negative" or not :-)) is NOT a problem then merely declare everyone in the US as a terrorist and the problem is solved! This is similar to the FAA only requiring an aircraft gas gauge to be correct when the tank is empty. So simply paint a nonmoving needle on the gauge at "E" and you will satisfy the regulation, but probably not solve the problem. Anonymous • July 10, 2006 8:58 AM @bob, ha :-) But I still think the cost of a false positive has been somewhat poorly defined. If all the false positives were immediately locked away in secret government prisons, then we obviously would be willing to tolerate very few false positives. (Maybe none.) But say false positives were just subjected to a more intensive wiretapping? It's not great but it's definitely not the same as ruining their lives. And in some cases we as a society might be willing to make that tradeoff. aikimark • July 10, 2006 8:58 AM 33K people are false positives out of what sized suspect population identified for follow-up? If based on 1000 terrorists is the suspect list size = 34K? If based on 40% misidentification is the list size = 82.5K? 52.8K? -- this might assume that the other 60% are either true terrorists of in some gray area of shady characters. lukem • July 10, 2006 8:59 AM aargh, I keep forgetting to type my name. that last @bob comment was mine also. Mike Hamburg • July 10, 2006 8:59 AM What Rudmin seems to be missing is that 30,000 false positives could be a manageable level in an automated first-pass system, particularly if they occur over the course of, say, a year. The reason is that you can get much more cutdown before you send around the agents in black suits. A group of maybe 30 humans could cut the number down to maybe 3,000 by reviewing the data (perhaps 100 of which are actually terrorists -- say 90% precision, 30% accuracy), and then collect more data from wiretap.gov. For those of you keeping track at home, they're investsigating maybe 4 or 5 people each per work day; not unreasonable for throwing out those "chicken soup is codeword for a bomb" Second pass, in-depth review with more data. Say another pass with similar precision and slightly better accuracy, 10 times more work but with 10 times fewer suspects. Now it's 300 or 400 people, and 50 of them are terrorists. It's not unreasonable to have the cops investigate 300 people across the US to catch 50 terrorists. That's like, what, one in each state every 2 months? Of course, I'm making up numbers, and NSA may have trouble getting to 0.01% false positive rate in the first pass, and 10% in the next passes. Who knows whether finding these terrorists a year would be worth the cost of the program, or the abuse potential, or whatever. Still, it's not as ridiculous as this guy makes it sound. klassobanieras • July 10, 2006 9:05 AM The population of interest is not necessarily everyone in the USA. Presumably the NSA gets intelligence from time-to-time that narrows this population to (say) a town, recent entrants to the country, or everyone taking a flight on a given date. Whatever the fraction of suspects that the system can safely eliminate, there is some number of suspects where the NSA cannot cope without the system, but can cope with the system. Moshe Yudkowsky • July 10, 2006 9:17 AM The worst piece of intellecutally-dishonest claptrap I've seen in a long time. I see that others have chimed in with all the reasons about why the math may be correct but the fundamental assumptions are fallacious. It reminds me of the arguments offered by snake-oil cryptography, in which an algorithm is offered as "unbreakable" because random guessing won't work. I'll just contribute something a few other comments. First, that the author could have written an interesting article about how the NSA, which is filled with very smart people, must be doing more than simple random data mining. Second, the statement that data mining is an efficient method of supressing political dissent is (a) unproven and (b) revealing of the author's bias and (c) lacks any identification of the method by with the Big Bad NSA is implementing this policy of intimidation. Thirdly, if the system does not achieve results, the NSA hiearchy, which does not have an unlimited budget and a surfeit of analysts, will drop the program. Fourth, perhaps it's time to remind the author and others of his ilk that when the NSA or FBI computer kick out suspicious activity, the bureaucracy does not automatically dispatch teams of assassins. Finally, the exact same arguments as given by the author can be applied to any police or intelligence activity. These arguments boil down to "it's not perfect, so we cannot implement it. See, here's the math." Perfect cannot be the enemy of the adequate -- unless your goal is to stymie any effort whatsoever, which is clearly the goal of this author. Moshe Yudkowsky • July 10, 2006 9:22 AM I should mention that I'm also dismayed to see an article from LewRockwell.com in this blog, since that site has more than a slight tinge of anti-semitism about it. Jack • July 10, 2006 9:43 AM " the exact same arguments as given by the author can be applied to any police or intelligence activity " Njet, police activities should be based on evidence proposed by real humans. Data-mining is not based on any evidence. It's just virtual shooting at every citizen. As virtual-shooting and evidence have different characteristics there is different mathematics. Brian • July 10, 2006 10:00 AM History is full of examples of bureacracies continuing to pursue completely useless programs because the programs justify the existence of the bureacracy. National security bureacracies are no exception. In fact, they are prone to indulging in security theater, wasting money and time (not to mention trampling civil liberties) without actually making anybody safer. The best point to take away from this discussion is not that the NSA program is useless, or that it is useful. The point is that there are fundamental questions about how a program such as the NSA's could *ever* be practical, and that the NSA needs to answer those questions. A blanket statement like "we can't tell congress about the program for national security reasons" is more than likely a coverup for incompetence. lukem • July 10, 2006 10:09 AM @Brian, that's not been *my* point in this discussion, at least. I'm not defending the NSA program so much as attacking this faulty analysis, which in my view is oversimplistic and flat out wrong! Alum • July 10, 2006 10:17 AM This appears to cover nothing that isn't discussed in Carnegie Mellon's introductory statistics class for humanities majors (many if not most of them take it as freshmen). Stainless steel kitchen utensil • July 10, 2006 10:21 AM This is sad news considering the amount of funds allocated to NSA. aburt • July 10, 2006 10:21 AM I'm not defending the surveillance program (it's for Congress, the Judiciary, and the Executive branch to do the checks&balances dance to determine legality, with input [and votes] from the citizenry as is our right). However, just addressing the feasibility, I'll note that, from what I've seen, a universe of 30k potential suspects to wade through is not likely a problem for the government. I'm personally aware of a case ten years ago in which, to find a lesser criminal, the FBI was willing to investigate a universe of 10k suspects. They have the manpower to do such things. Also, let's not forget that subsets of actual terrorists may form graphs of communication with each other. If you start finding links between individuals also on your 30k suspect list, they'll stand out and bear more scrutiny. Gabriel • July 10, 2006 10:27 AM I find the author very misleading. He wants you to believe that, even if the NSA were to improve their techniques by a tremendous amount, it would still be no better than randomly selecting people by flipping a coin. (???) Let's see: If the proportion of terrorists in the general population is .00033%, and you select a subset of them by flipping a coin, then the proportion of terrorists in your subset will be .00033%, just like in the original. But if the NSA selects people using very sophisticated survelliance, then the proportion of terrorists in their subset will be, say, 50%. That sounds *much* better than when you flip a coin! Carlo Graziani • July 10, 2006 10:28 AM Moshe writes: "Thirdly, if the system does not achieve results, the NSA hiearchy, which does not have an unlimited budget and a surfeit of analysts, will drop the program." This is so comically off-base that I don't know whether to laugh or cry. The measure of "success" adopted by people who oversee such programs in government has very little to do with actual performance. It has a great deal more to do with conquering and defending budget. Since this metric of performance is paramount, program managers have every incentive to talk up the reliability and success and (above all) the future promise of the program, and to bury any doubt that might result from a rigorous and intellectually-honest assessment of the challenges to be overcome. To see this effect in action in a totally different program, one merely need cast one's eye over the Air Force's "reliability" assessment of the National Missile Defense shield, which has been repeatedly declared a success despite a total inability to discriminate warheads from dummies (a mission requirement), unless the warheads helpfully carry beacons. The point is, of course, that to the Air Force, NMD is a "success" irrespective of whether it could ever shoot down a warhead, because of its budgetary mass and momentum, and because it positions the Air Force to pre-empt the creation of a "Space Force" when (as they confidently expect) space becomes militarized during the course of the century. And, just as the reliability of NMD is neither here nor there to the Air Force, to imagine that the NSA would let intellectual difficulties stand in the way of growing this kind of program is simply sconzey • July 10, 2006 10:35 AM Most of the problems here come from the fact that he doesn't clearly state what he means by "identified as a terrorist" As has been said before, this would indicate what kind of a loss function would be nessecary to evaluate the cost. For instance, if "identified as a terrorist" means that as far as they know, that person is *definitely* a terrorist planning horrific attacks and should most definitely be arrested or shot at the earliest convenient time, then *any* misidentification is unacceptable. However if "identified as a terrorist" means that an automatic computer program has flagged that person for further attention from a person, then that's not so bad at all, and 30,000 false positives (manpower aside) are acceptable. This is of course assuming nobody actively tries to abuse the system. Say you wanted to watch someone? Sign them up to jihadi mailing lists from an internet cafe, they get flagged, then voila, all the information about their personal life is accessible to a human... David Thomas • July 10, 2006 10:37 AM I am greatly disappointed in this article. Both of my main problems with it have been mentioned above, but I will nonetheless repeat them here as some were dismissive of them and I think they may need to be stated more clearly. First, the coin flip analogy. While it is perfectly fair to say that a 50/50 chance is like flipping a coin, we have to be sure that we are talking about the same things. p = .50 means that picking a person at random from our group of suspected terrorists is equivalent to flipping a coin. It does NOT mean that the program is equivalent to going through the population and flipping that coin to determine suspicion, which seemed to be the implication. Second, just because perfection is impossible does not mean that any other state arbitrarily close to perfection is impossible. While those numbers would indeed have to be awfully close to 1 and 0 to get meaningful data, the jump to impossability is unsupported. It is sad that we get such a cursory job when a well supported one would likely reach the same conclusion. One of the biggest problems is that most assessments I've seen - certainly those posted in this thread - neglect the fact that while we may be curbing one threat (namely terrorism), we may be enabling others which have the potential to effect far more people. klassobanieras • July 10, 2006 10:41 AM "As an exercise to the reader, you can use the same analysis to show that data mining is an excellent tool for finding stolen credit cards, or stolen cell phones." I don't see the difference. Even though the base-rate is presumably higher for credit-card fraud, you'd still get a significant number of false-positives, and the analysis suggests that anything less than p=1 is unacceptable. As if everyone flagged by the system would go straight to jail without further investigation or a court-case. lukem • July 10, 2006 10:43 AM So Bruce! I still can't believe you're endorsing an article that says a hypothetical system that identified 700 out of 1000 terrorists at a cost of 30,000 false positives would be "totally useless"! Isn't some sort of clarification in order? Your blanket approval of this analysis completely ignores the fact that the feasibility of such a system depends on the false positive rate as well as the false positive cost. For some values of those quantities, such a system would very clearly make sense. And as other commenters have said, it would be very nice if we could move past this poor analysis and discuss the deeper issues at hand here. Commando • July 10, 2006 10:46 AM This guy misses the whole point of data mining. Its not just to 'guess' or the miore accurate lessen the physical number---its to create connections that in turn can identify patterns or social networks (in this case a terror ops and support cell). We need to loose the high math that obviously only works when you are measuring an anologus sitiuation. Clearly here, we are not. I propose my simple Birds of a Feather Theorem--if a known terrorist eats lunch with Person X....than person X is likely a terrorist........ Anonymous • July 10, 2006 11:12 AM I'm curious how many people who are endorsing trading our privacy for this pie-in-the-sky notion of "automatic" surveillance have heard of the base rate fallacy before reading this post, and how many of them could correctly explain it to the layman. Tim Vail • July 10, 2006 11:14 AM I think the author was serious when he said that the people misidentified would probably have their lives turned upside down. In another words, those of you concerned about the cost of false positives -- it is relatively high. One terrorist is highly unlikely to kill 3,000 people. Whereas by putting more than that number of people in prison for life because we think they are a terrorist probably already did more overall damage to society than the terrorists themselves could have done. And add to that the cost of the system itself... Last comment -- regarding those who say that this is only to single out people for closer examination -- this "simplistic" computation is assuming all is said and done. Meaning, the NSA already filtered everyone out to the best of their ability, reexamined them again and again, and this is what they wound up with. Any further examination would have to look at completely different criterias used by the said exhaustive examination. It is simply unrealistic to assume that the further examination would not look at the same factors, and come to the same conclusion. This is the difference between a coin flip and security checks. Coin flips has no memory, past results does not affect future results, but because of the nature of security checks. Security checks tend to look at the same set of data, and therefore are likely to have a sort of "memory" about what happened before. Carlo Graziani • July 10, 2006 11:18 AM It is also depressing to see how easily people here dismiss the false-positive aspect of this problem "because the government doesn't just dispatch teams of assassins" upon getting an automatically-generated tip from a system trigger. This is totally besides the point. The problem with a system that generates tens of thousands of false positives is that eventually, one of them will point to some innocent person with enough suspicious connections (travel to dodgy countries, unsavory friendships, unusual monetary transactions, used a credit card at a time/place consistent with the presence of other suspects, etc.) that the investigators will judge him worth charging. Anyone who thinks that everyone booked by the FBI and publically declared a terrorist is guilty as charged should Google "Richard Jewell", and read up on the Atlanta Olympic bombing. If Mr. Jewell had found that bomb after 9/11 instead of before, he might be in a Navy Brig today, due process being what it is these days. Brian • July 10, 2006 11:20 AM > I don't see the difference. Even though > the base-rate is presumably higher for > credit-card fraud, you'd still get a > significant number of false-positives, and > the analysis suggests that anything less > than p=1 is unacceptable. As if everyone > flagged by the system would go straight > to jail without further investigation or a > court-case. There are two significant differences between data mining for terrorists and data mining for credit card fraud. 1) Credit card fraud is more common, so you are more likely to catch an actual criminal. 2) Weeding out false positives is cheap. You call up the card holder, and ask them about the transactions. The entire process can be automated and takes a couple of minutes. Distinguishing between a false-positive terrorist detection and the real thing is significantly more difficult. And because the base-rate for terrorism is low, you are going to be wasting a lot of time and money on those false positives. We cannot afford to be stupid about security. Right now a lot of people seem to assume that the NSA can put a crystal ball on top of a stack of phone bills and find a terrorist. Data mining is not magic. What the NSA is doing is not going to work. Question for those of you who think the NSA program is a good thing: how many terrorists has it caught? Figure out that number, and that'll give you a good idea of how effective the program really is. Share the number with me and you might even convince me to support the program. lukem • July 10, 2006 11:30 AM Brian, I am not saying I think the program is a good thing, I just don't think you can categorically dismiss it without knowing something about the error rates involved. I'm not exactly sure what I think about such programs, to be honest. I think preventing a terrorist nuclear attack would be a huge deal, but I don't know how to estimate the likelihood of such an attack in the first place, of course, and I don't know what the actual chance of a surveillance system foiling such an attack is either. One thing that would make me somewhat more willing to consider these systems would be some sort of assurances that they would only be used to prevent large-scale terrorist attacks, as opposed to drug law violations or (just imagine!) copyright infringements. I think that engineering comprehensive oversight policies would be an extremely productive thing for us to do as soon as possible, before these systems became too deeply entrenched. Kevin • July 10, 2006 11:46 AM Anyone want to guess how long before a mass-mailing worm that posts threats against the US govenment is written? Or, a botnet is used to "plot terror" via IRC by posting false plans? Even if the system for electronic interception works, it's so vulnerable to flooding it with false positives as to make it useless. Carlo Graziani • July 10, 2006 11:50 AM On credit card fraud, it is worth pointing out the fact that the credit card companies have a huge database of fraudulent transactions to use as a training set for their software. They are also looking for clear signatures --- a spike in jewelry purchases on a card that's done nothing but grocery shopping, for example. This allows them to calibrate the (many, many) parameters in the system, as well as to estimate the error rates (by training on part of the data set and testing on the other part). By contrast, there are very few reliable "signatures" of terrorist communication, at least ones that differ from other social networks. And, only a relatively small number of known terrorist communication trees must be used to calibrate many parameters. It's simply not the same kind of problem as the one the credit card companies have solved. sidelobe • July 10, 2006 11:51 AM How does the math change if the goal of the NSA is to identify *one* terrorist rather than *all* terrorists? It seems to me that this is a realistic goal, which would lead to finding additional criminals once you find the first one. I, too, am against mass surveillance, but you can't use this logic to discredit it. Lou the troll • July 10, 2006 11:53 AM Hmm... I think there's several other issues at play with these types of systems. What defines a terrorist? A pattern of communication or an action? If it's a pattern, then we are all on a dangerous slope. One day that pattern is a series of calls to North Korea and then to Explosives-R-Us. A few days later it's a few phone calls to the Sierra Club. The next week it's posting on this website. Tim Vail • July 10, 2006 12:01 PM The base rate for fraudulent transaction and misidentification rate is significantly higher when evaluating credit card transactions. Meaning -- the article supposes there are 1 out of 300,000 terrorist. However, the number of fraudulent transaction is significantly higher than 1 out of 300,000. As for misidentification rate -- like someone mentioned, credit card companies know the general purchase pattern on each card. They are more likely to be accurate, and coupled with the base rate (frequency of fraudulent transaction being higher) -- they thus have lower misidentification rate. To sum up -- the rate of terrorists to everyone else is what makes all the difference in whether this is feasible or not. I'm with Moshe on this one. It seems to me inconcievable that the NSA is using this program in utter isolation and that none of its output is corroborated with other type of evidence against terror It only takes a moment of thought, or a quick peek at this blog to see that the odds of catching a terror suspect goes up if you salt your pool with proven suspects and test the validity of results. For example, we know that the intelligence services were able to break the Liberty City Cell with HUMINT. Certainly a network graph was established for the associates of that group. False positives? Considering that success, established by first person corroboration, why would they risk their funding for the 'Bayesless' surveillance? I'd more likely suggest that a devious NSA would say that their telephonic surveillance was a part of discovery when it was not. But the suggestion that someone would try to misdirect the targets of these suveillance activities for political purposes is not as disciplined as the math attending that argument. A moment's reflection would show that it is far easier to rat out a politically motivated misuse of the terror trap than to actually hijack the program. So the very suggestion that outsiders are making would be very easily corroborated by an insider willing to leak. But such corroboration has not been made, and because of that we would have to assume that 100% of the insiders are conspiring to hijack the program. Fat chance. Anonymous • July 10, 2006 12:06 PM I don't understand the "I'm against mass surveillance but the article's logic doesn't hold" argument. I think that mass surveillance already has privacy costs we're not willing to pay because a) it's the type of thing that authoritarian governments do to justify other actions that limit liberty and b) even if the government is benevolent, the program may be subject to OTHER abuse. The article's logic is that even if the government is benevolent AND has a 90% success rate, the lives of 3,000 will have been unduly harmed. Again, imagine being one of the 3,000 and your accuser touts a 90% success rate. Sorry, I'm one of those idealists who believe that we've lost the war on terrorism if we believe that "9/11 changed everything" including the way we look at our personal privacy and the liberties we take for granted. Tim Vail • July 10, 2006 12:10 PM Hmm...I think I made a mistake. Misidentification rate doesn't have to change all that much (even though it is probably better). All you have to do is let's suppose out of 300 million transactions there are -- how many woud be fraudulent? Let's guess maybe 30,000 (which is probably more realistic than 1,000 terrorists out of 300 million). Viola, you have just multiplied the probability by 30 fold. JakeS • July 10, 2006 12:14 PM There's a lot of loose language in this thread, starting with Prof Rudmin. NSA's surveillance is indeed useless for finding terrorists, but that's not its primary purpose. It's very unlikely to find actual terrorists, for reasons noted by the Prof and contributors above. Its purpose is to find people with "links to terrorrism" or "terrorist sympathisers" (as the media and politicians often say). Mass surveillance can be very effective at spotting links between people, or actvity that can be seen as suspicious. The problem, as also noted, is the frequency of false positives. Whoever thought that each potential positive would be checked by a human is living in dreamland. Far more likely that the bureaucracy simply puts each apparent positive straight onto the no-fly list, the list of people to be harassed if they ever make the mistake of going out of the US and coming back, and so on. Jean-Charles de Menezes (remember him?) was a false positive. Surveillance showed that he lived at the same address as someone thought to be a terrorist. Therefore he was "linked to terrorism". No human bothered to find out that he lived in a separate apartment at that address. klassobanieras • July 10, 2006 12:15 PM > 2) Weeding out false positives is > cheap. You call up the card holder, > and ask them about the transactions. > The entire process can be automated > and takes a couple of minutes. I know that the problem (in the terrorism case) is the civil-liberties cost, and I would be extremely uncomfortable about this in the real-world. But the analysis under discussion contains really silly implicit assumptions about this, and that's what I take issue with. The author requires that p be "very near to one". From that, we can infer that he believes that the cost of investigating a single innocent outweighs the value of finding ~10 terrorists. I don't have any first-hand experience of NSA investigation, but is it really equivalent to the worst efforts of 10 terrorists, focused onto a single person? If so then you guys have much more urgent problems than data-mining. If the article was been about credit-card fraud instead of terrorism, the author would probably omit any mention of second-level checks, and instead imply that false-positives go straight to debtors prison. And maybe throw in a spurious coin-flipping analogy for good measure ;) Brian • July 10, 2006 12:19 PM There can be no doubt that the NSA's program is backed up by human detective work. In fact, that is the number one reason we ought to be incredibly suspicious of this program. Everytime the NSA program spits out a warning, FBI agents (or someone like them) need to go investigate that warning. If the false positive rate is high, then we are wasting valuable human resources on pointless investigations. And there are some fairly basic mathematics that indicate the false positive rate is going to be pretty high. That kind of program does not help security. Just the opposite: we have a limited number of guardians of our safety, and we want to make sure they are working as effectively as possible. A surveillance system with a high false positive rate means that we are taking resources away from more productive avenues of investigation. Maybe I'm wrong about this. Maybe the phone logs are correlated with other data that reduce the false positive rate to manageable levels. But I've seen absolutely no evidence of that. The terrorism cases that have been made public have all been the result of old-fashioned detective work, not the magic of data mining. It took years before Daniel Ellsberg leaked the Pentagon Papers. I hope it doesn't take that long to find out the truth about the NSA's phone surveillance. Pat Cahalan • July 10, 2006 12:29 PM Remember, all, that Dr. Rudmin's numbers are exaggerated highly in an attempt to explain to the non-mathematically inclined what the difficulties are with this program. @ lukem If you have 30,000 false positives (remember, this is probably a low estimate!) and 400 correctly labelled terrorists (probably a *very* high estimate!), that gives you 30,400 people to investigate. Presumably these 30,400 people look "suspicious" from all of the data we've been able to gather (which is quite a bit!), which means just looking at their call records or bank statements (standard investigative procedures) isn't going to tell us anything new -> these 30,400 people have to be detained, put under humanint surveillance, or some other highly invasive investigative technique. That's a huge amount of manpower. We're not talking about giving 30,400 a fingerprint test and checking them off a list. Aburt pointed out an example of looking at 10K suspects to find one perpetrator, but that's a flawed comparison because that's 10K suspect to find one perpetrator who has already committed a crime, as opposed to one who hasn't *done* anything yet. In the first case, we can assume a finite amount of time required to find the actual perp... in the second case we're always going to be adding potential terrorists to the list -> the investigation goes on forever. Also, if you have a set of evidence from an existing crime to examine, it's pretty easy to eliminate a large volume of the 10K suspects very quickly (fingerprints don't match, DNA doesn't match, they have an alibi, etc.) We're not investigating an "normal" crime here, we're talking about investigating conspiracy to commit acts of terror, which is a completely different problem. As some people have pointed out, the time frame is an issue (how many people you tag in a day/year/etc). This is true. And yes, we don't know what the actual costs are of a misidentification (either to the suspect, which certainly can't be discounted, or to the investigative team itself, which needs to expend resources to check the innocent out). We also can't really estimate the cost of unchecked terrorists, because we don't know what the plans are, how likely the plans are to succeed, and what the costs will be if the plans are executed properly. And finally, we can't estimate the cost to civil liberties, because we don't know what the level of imposition is to those incorrectly identified. Some people are going to estimate those one way, and say this is a feasible technique. Others will estimate another way, and say this is unfeasible. I think that a majority of people who would delve thoroughly into this program *would* come away convinced it is severely flawed, but the ability to examine the program in depth is restricted to a very few people. Personally, I look at it as a clear violation of just about every standing principle upon which this country is supposedly founded: the right to be presumed innocent, the right against unreasonable search and seizure, other civil liberties, etc. Economic analysis aside, this is not supposed to be something our government should be doing, period. Kh3m1st • July 10, 2006 12:36 PM I thought it was a great article. I am against the NSA domestic spying program for many reasons. One of which - the program reminds me too much of the former USSR. The goverment watching over it's people "for their own good". No matter how noble the intention of the government - the old saying "absolute power corrupts absolutely" comes to mind. This program will be used somehow in a way it was not advertised to the US people. The US government has used the threat of terrorists and terrorism to undermine a host of civil liberties in the last few years. And the majority of the US population doesn't mind or care (in my opinion). No matter how much debates is placed on the statistics used to come to the authors conclusions; it does not change the fact domestic spying is occuring, and you civil liberties are not the same as they were on 9/10/2001. Bob O. • July 10, 2006 12:37 PM Another item to consider is that the data mining is not the sole tool being used in the hunt for terrorists. When combined with other intel, it undoubtably can contribute. ZZT • July 10, 2006 12:39 PM Sorry, but the analysis in the post is both misdirected and misleading. Let's assume that the numbers as presented are correct - 3000 innocent people are investigated, and 400 terrorists are stopped. This does not constitue a failure but rather a success, since cost of (being investigated) is relatively low (an hour of one's time, or several hours of the NSA's time without telling you, etc - no real harm done) while the benefit of arresting 400 terrorists is high (is terrorist is a potential mass killer). Plus, arresting 40% of the terrorists is likely to foil terrorist plans by other terrorists (as they will be more fearful of discovery, thus move slower, etc). The same fauly logic can 'support' avoiding police investigation in general (as the police usually investigates multiple suspects per crime, and thus have a low convinvtion per investigation ratio). This is just a partisan red herring. Xellos • July 10, 2006 12:46 PM --"I just don't think you can categorically dismiss it without knowing something about the error rates involved." Given the FBI has already publicly complained about the number of wild goose chases the NSA has sent them on, I think we can reasonably posit a rather significant false positive rate... Brian • July 10, 2006 1:07 PM Do you have a link for that? Konstantin Surkov • July 10, 2006 1:17 PM I can't believe this guy is really a professor. The article is unbelievably lame. And, if NSA can pinpoint a terrorist with "probablity p=0.2308" (which means that approximately one of four people caught turns to actually be a terrorist), it is a great success of the system. Spaceship • July 10, 2006 1:24 PM What if the NSA were to use their powerful systems to join in: * The SETI@Home project? * The Folding@Home project? It would be interesting to see how much faster/further we could advance as a society. klassobanieras • July 10, 2006 1:28 PM @Spaceship: They'd probably have a higher detection rate with SETI. quincunx • July 10, 2006 1:30 PM "I should mention that I'm also dismayed to see an article from LewRockwell.com in this blog, since that site has more than a slight tinge of anti-semitism about it." What does this have to do with the topic at hand? The only "anti-semitic" thing about it is not having the US aid Israel or any other place for that matter. The site is anti-war, anti-state, pro-market. If 'semitism' is about channeling vast amounts of money from taxpayers' into the middle east and thereby creating religious tension, then lewrockwell.com is 'anti-semitic' in that sense. The site has about 100 columnists, some of which are Christian, some Jewish, some Muslim, and some Atheists, and one Congressman. The site has a very cosmopolitan perspective, with columnists from all over the world. Their only common link is 'ant-war, anti-state, pro-market'. "Personally, I look at it as a clear violation of just about every standing principle upon which this country is supposedly founded: the right to be presumed innocent, the right against unreasonable search and seizure, other civil liberties, etc. Economic analysis aside, this is not supposed to be something our government should be doing, period." I agree with you 100%, but this erosion of freedom is nothing new in the American republic. 95% of the activities of the government is not constitutional. The constitution is essentially a dead Every item in the Bill of Rights now has an * next to it, with a long list of exceptions. " This does not constitue a failure but rather a success, since cost of (being investigated) is relatively low (an hour of one's time, or several hours of the NSA's time without telling you, etc - no real harm done) while the benefit of arresting 400 terrorists is high (is terrorist is a potential mass killer)." Intersting. The psychic cost of being investigated to the innocent party is not 'relatively low' from their perspective. Everyone is a potential mass killer, if driven to it. Since the US continues to be in Iraq, the number of potential terrorists is constantly increasing. This great NSA program will likely be around for a long time, if not in public, in secret. "Plus, arresting 40% of the terrorists is likely to foil terrorist plans by other terrorists (as they will be more fearful of discovery, thus move slower, etc)." Ha ha ha. Yeah right. They will simply use better technology to get around it. The market supplies many ways to communicate - they will simply learn from the mistakes of others. To think that is the case, one needs to look at the Drug War, and see how no matter how hard the gov tries to crack down, the profit motive keeps the drugs rolling in. You can also see the CAN-SPAM act, and see how well that has worked. The spammers are shaking in their boots, I'm sure. " it does not change the fact domestic spying is occuring, and you civil liberties are not the same as they were on 9/10/2001." Sorry to say this but the are pretty much the same. You have just as much civil liberties as back then. The only difference is that the gov had a much easier time at violating it then before, but it always had the ability to before. Perhaps one needs to look back on the kind of civil liberties we had during WWI & WWII. The court case of "The United States vs. The Spirit of '76 (1776)" is a good example, and shows you precisely when we actually departed from our founding fathers. lukem • July 10, 2006 1:39 PM All I am saying is take all the statements from the article like: "Ergo, NSA's surveillance system is useless for finding terrorists." and replace them with things like "Ergo, NSA's surveillance system could catch 700 out of 1,000 terrorists at a cost of 30,000 false positives." and then we can all have a reasonable discussion about A) whether we believe those numbers, B) at what point (if any) those numbers would justify the operation of such a system, C) etc etc etc. klassobanieras • July 10, 2006 1:55 PM Each time the NSA confirmed or discarded a suspect, they could recalculate the set of remaining suspects, taking into account this new information. The list of suspects would likely shrink as individuals were investigated, and the NSA would only need to investigate a subset of the 30,000 initial results. Disclaimer: I still think it's a terrible idea. I think the author misses the point. While I don't agree with the NSA program for legal reasons (against the law), I don't see it as the author describes. From what I have read, the purpose isn't to find "a" terrorist, which is what the author's probability analysis attempts to demonstrate. I thought the purpose of the program was to build social network maps. Therefore, the probability analysis should be whether the program is good at identifying a map of X people (where X is a size of a given cell of collaborators) over a given time period of T, where they make Y number of calls to each other. The lower any of the numbers the lower the probability of success. For the brief write-up here, the analysis didn't look at all three factors. sng • July 10, 2006 2:36 PM @arl "I find it hard to believe (but not impossible) that the NSA lacks the skills in statistics to not understand the value (or lack thereof) in the use of data mining." You assume that finding "terrrorists" is their goal. Yes they understand all of this and still do it. Now ask yourself what they gain. I'll be here when you come back from that rabbit hole. Anonymous • July 10, 2006 2:40 PM @wondering I've seen several stories about things the US gov does right here. Maybe the implications about the many stories about what they are doing wrong should make you think. Also as to "what to do". Get a copy of Beyond Fear and Secrets and Lies. Read those. Then you will have the tools to answer those questions. Also those basic tools are covered in many posts here. But I think the info covered in those books could be safely assumed to be base knowledge here. Kind of pointless to repeat the obvious here. mdf • July 10, 2006 2:41 PM "I think the author misses the point." I think most of the commenting are missing the argument. READ IT AGAIN. It does not matter _how_ the system is implemented. It can be a guy with a coin, or some super-sophisticated math-model, a wiley social-network machine of epic proportions, and all of this followed by a legion of checks and counter-checks. IT DOESN'T MATTER. The end result is that of the number of terrorists localized by the process is T and the number of people hassled is N, then "probability of a terrorist given all the data and all the analysis" P(T| data,hypothesis,etc) == T/N. Even widely optimisitic estimates of T and N show that this probability is a small number. Kevin Davidson • July 10, 2006 3:47 PM If you flipped a coin, then you'd get 150,000,000 terrorists found, of which no more than 1000 could possibly be terrorists. So it should be obvious that you wouldn't want to flip a coin. In Bruce's scenarios, he shows that if you have something better than flipping a coin (something MUCH better than flipping a coin) you still don't get much help. The reason that you don't get much help is that you flip the "new and improved (tm)" coin for everybody and since virtually everybody you flip it for is not a terrorist, even a low error rate gives you a huge number of innocent people and not many terrorists. That is, there are better ways of finding terrorists. The scenario I see coming from the administration is that having a single terrorist reach his goal is so awful an outcome that anything (and I mean ANYTHING) is justified. As horrible as 9/11 was, more children died in the US of malnutrition in 2001 than died in the World Trade Center bombing. This should make it clear to everyone that there are many threats and that resources should be allocated appropriately. Security is trade offs. The cardinality of T and N make huge political differences. It does matter. quincunx • July 10, 2006 3:59 PM "The scenario I see coming from the administration is that having a single terrorist reach his goal is so awful an outcome that anything (and I mean ANYTHING) is justified" I think there is some lack of thinking here. The action you seem to be justifying is just the sort of thing that CREATES terrorism in the first place, unless you have been brainwashed into thinking that the terrorists were attacking our "freedom". " As horrible as 9/11 was, more children died in the US of malnutrition in 2001 than died in the World Trade Center bombing. " As terrible as the war in Iraq is, it has not managed to effectively kill 500,000 children unlike the trade sanctions earlier. "Security is trade offs." Yes, and the best trade off is ceasing to be a target. "I'll be here when you come back from that rabbit hole." Their rabbit hole is their security blanket, without it they would have to actually put some serious thought into the matter. Davino • July 10, 2006 4:03 PM Finding 400 terrorists is a high number. If you look through the news of the number of terrorists that we were able to capture and actually prove that they were terrorists, the number is small -- smaller than the number of people we've turned into people we don't want to let loose from Guantanamo. Have we convicted even 40? Setting the detection threshhold in a datamining process is an exercise in tradeoffs -- If we set the level low enough so that we catch all 40 (or 400) terrorists, we'd probably have to investigate everybody, Even if we set the threshhold lower so we only get 50% of the terrorists, we'd still probably have to investigate nearly 50% of our population. The difference between the fraction we have to investigate and the fraction of the target populations we catch is 'lift' and I bet that if we're only investigating 30K out of 300M people, or 0.01 percent, we're missing more than 99% of the terrorists, because surely we're not getting significant lift out of this model. The only possible benefit I see from this program is that it maybe does help to identify people associated to a particular already-known terrorist: You might do a run and get a list of the ten, hundred, or thousand people most likely to be associated with Richard Reid, But his mom and dad and classmates are going to come in pretty high on those lists, and the pizza guy is going to start showing up before it gets too long. How useful are results like "Bush is associated with Saddam because we have a pictures of Rumsfeld with both of them"? The program may be of use in already ongoing investigations, as in looking for surprises or more leads, but it is going to be nearly useless for picking the needles out of the haystack. Jim • July 10, 2006 4:23 PM The problem with the article seems to be based on some unstated assumptions: 1. The "system" (whatever that is) tags people, in a binary fashion, as terrorists or not. 2. Bad things happen to people who are tagged as terrorists by this system, without any intervening activity, investigation, or filtering. The claim "NSA's surveillance system is useless for finding terrorists" is hyperbole; a more accurate statement would be that someone the surveillance system thinks is a terrorist has a low probability of being one. Whether that's "useless" or not is an entirely separate question that depends on how the system's results are used and what the relative values are of finding terrorists and mistaking ordinary people for terrorists. It's entirely possible that the answer is the same. But an article that pretends to be scientific should support its claims, not merely assert them based on unstated assumptions. The jump from "low probability" to "useless" deserves as much or more analysis than the basic Bayesian math we're given. Vlad Patryshev • July 10, 2006 4:35 PM So, out of 300M, the system, applied once, gives 400 positives and 29600 false positives. How do real-life people use Bayes training? They apply it again. Let's apply it again, to the next batch of emails, blogs, phone calls, etc. Then we get this: 3 false positives and 160 terrorists. That's the practice of the people that know what Bayes is about and know how to apply it to real-life data. I wonder whether the author is a practicing professional or just a clueless "evangelist". Davino • July 10, 2006 4:42 PM It is useless in the sense that if you are only going to investigate 30,000 people out of 300,000,000 people, the 99.99% of the people you ignore are probably going to include nearly 99.99% of the terrorists you are trying to find. If there's 1000 terrorists, the model would be amazing (5 times more powerful than a 2x lift commercial dataminers consider resounding success) if it caught even 1 of them. 3mp • July 10, 2006 5:58 PM As Mark Twain said, "Facts are stubborn, but statistics are more pliable". What if they were simply monitoring calls originating in the US to known terrorist telephone numbers overseas. This would significantly increase the base rate, and would be considered illegal surveillance under the FISA statute. BrianD • July 10, 2006 6:04 PM Using numbers from the article, pick a person at random. There's a .00033% chance that they are a terrorist. Randomly pick a person flaged by this system, and there's a 1.32% chance that they are a It sounds like a great improvement. But, you're still missing 60% of the terrorists (again, using numbers from the article)! How exactly is this system helping out? It's mostly missing the bad guys, and most of the people it red flags are actually not terrorists. If it caught 40% of the terrorists with very few false positives, or if it caught damn near 100% of them with bunches of false positives, it might be worth it. But to miss most of them, and then to have a ton of false positives? I really don't see how people can call this anything but an ineffective system. Oh, wait -- "if it catches even one its worth it", "if I have nothing to hide why should I care", and "I'm either with you or against you", etc. I'm not at all concerned with what the Bush administration will do with the information they [illegally] collect. It's not as if they've illegally detained anyone... As for the facts, why let those get in the way of the truth. Neighborcat • July 10, 2006 6:33 PM Gee, for a bunch of statistics whizzes, you sure are easy to distract. An alleged mathematical proof that an illegal data mining effort targeting US Citzen is ineffective at it's stated task is completely irrelevant. What if the US goverment started searching 1000 homes a day at random in the US because it identifies criminals? I have no doubt such an effort would be effective at it's stated task, criminals would be found every day. Why isn't the government doing this? ITS ILLEGAL UNDER OUR CONSTITUTION. Get it? The efficacy of the method is irrelevant! Is the ideological link between secret data mining efforts to find undefined evildoers and door-to-door searches too tenuous for you? I'm not surprised, that's why governments work up to revoking citizens rights real slow and easy. You won't notice a thing. Ask a holocaust survivor. Anonymous • July 10, 2006 7:07 PM The problem with this article is it lacks a definition of "usefulness". It is also a problem with lots of the comments. I think that to identify a terrorist, law enforcement must have actionable information. They must have the capability to foil an actual terrorist plan and arrest the culprits. Until this is done, no terrorist has been "identified". They only have "suspects". What the article shows is NSA data mining is far from "identifying" terrorists. All they get is a large suspect pool that have a higher probability of containing a terrorist than the population at large. But that probability is still very low in absolute number. What is the next step? Do they have the capability of transforming the probabilities into actual determination of who is a terrorist and who is innocent? Or do they just put the suspects on some "watch and harass" list on the hope this will somehow foil some kind of unidentified plot? Unless they can reliably turn their suspect set into actual terrorist identification, the program is indeed useless. Ralph • July 10, 2006 7:16 PM I have seven years experience in the front line of network security and make the following limited comments from that position: 1. ANY detection system in which 74 out of 75 reported incidents are false positives (98% failure rate) is not practically manageable. By failure rate I mean failure to correctly identify an incident. Think of your spam detection tool having 98% mistakes. 2. Data mining tools are useful when they have known signatures and patterns to search on but are much less so when looking for new attack vectors or the unknown. 3. Data mining tools are powerful if you have highly trained people using them - these people are a very rare and expensive resource. The opportunity cost of having them data mine is very high so you want them using efficient systems (not systems with a 98% failure rate). If you don't see his point try reading the maths again and try not to read more into what he is saying than is there, this is a mathmatical observation. If the NSA says "this person is a terrorist" on the basis of mass survellience they have less chance of being correct than a man flipping a coin. Peter Glaskowsky • July 10, 2006 7:21 PM Bruce, posting this was a mistake. It's bad all around-- bad assumptions, bad logic, bad math. Plenty of people here have explained why in more detail, although I think I can add one more point-- the data-mining operation also identifies people who are supporting terrorists (deliberately or not), and there must be tens of thousands of these. This means the program could be very effective-- that the people it identifies are truly worthy of further investigation. I don't know if that's true or not, and the program gives me the heebie-jeebies anyway, but regardless of how I feel about the program, this article is totally unworthy of your attention. . png Boris • July 10, 2006 10:39 PM If I understood you correctly, - if there was a burglary, and witnesses say that they saw a suspect leaving in a blue sedan, the police should NOT try to create a list of all the blue sedans in the area, right? There can be 100 blue sedans, so the chances that an owner of one of them is our criminal - is only 1%. So much less then 100%! So close to 0%! Nah, we do not need to try looking at blue sedan owners - it is uselss and mathematically impossible. Nice conclusion. Freddie • July 10, 2006 11:05 PM Right on! Most of the comments here are pretty useless. Let's hear the statistical debate from the point of view of well-known and respected statistics experts, not random readers of this blog. The real issue is that, at least since 9/11, the currrent administration has shown a clear propensity to disregard the constitution and other laws in its pursuit of criminals. The excuse of a "War on Terror" is total BS - it's just a permanent excuse for a police state. A long precedent of US law provides for procedures for alleged criminals to be investigated, indicted, and tried, while protecting the rights of the innocent. Tank • July 10, 2006 11:25 PM Yeah you did write something similar previously Bruce and it was just as flawed/rigged as this is. Where but in the least informed dicussions is it suggested that the NSA calls database is used to identify terrorists rather than providing an unrivalled and infinitely useful investigative tool to aid existing investigations by providing an outline of a suspects personal contact networks ? As the previous poster mentions MV databases are completely useless in indentifying criminals (using the same flawed logic you have) since there is no specific make or model specific to criminals. Yet these records are infinitely useful when suspects are identified via other means. Now you could bring math and probability into the arguement for why the MV databases are useless or you could just drop the childish straw man arguement to begin with. If we are very generous with you Bruce and assume you are not intentionally misrepresenting the issue to obscure the benefits of the program then your understanding of the subject of NSA surveillence is so low that you have no business writing about it in the first place. It's about time you put up or shut up. If you actually believe there is no benefit in knowing who a terrorist suspect is in communication with then say so. If you actually believe that all disclosures stating that this call contact mapping is what the program consists of are lies then say so and state why. If you actually believe that the NSA is using this call data as a first resource in identifying terrorists rather than as an information tool to support existing investigations then say so and cite a source which suggests this is the case. Otherwise all you are doing is ignoring what is known about this program, ignoring the quite clear uses of it and substituting your own idea about how it could be pointlessly used without any supporting basis for assuming this is what is occurring. Using the same rigged logic I could argue that DNA isn't useful in providing positive identification on an individual when the police line up test tubes in a room and ask a witness to pick one out as a positive ID. There is nothing suggesting this is what police do with DNA evidence, everything that is written about the use of DNA evidence suggests it isn't and it is blatantly clear that if I used such a childish, ignorant and rigged example of how evidence could be pointlessly used that I was either intentionally misrepresenting the usefulness of such evidence or had NFI what I was writing about and should stop. This is where you are on probably the biggest security/privacy story of the past 5 years. Wow. Joe in Australia • July 10, 2006 11:37 PM I think there are several flaws in the assumption that the value of the program lies in identifying terrorists from their phone conversations. Firstly, these calculations assume that terrorists are isolated. My understanding is that they usually have an extended support network. This means that the program should be viewed as a tool for locating terrorist networks, not individuals. Secondly, it is my understanding that terrorists are risk-averse. They don't want to be discovered; the people supporting them don't want to be discovered. It is possible that an unquantifiable increase in risk will discourage them, even if it is small. We see people making similar decisions with respect to other unquantifiable risks all the time. Finally, I think the wholesale datamining is probably more useful post-facto: once you have identified a probable terrorist, you can then examine the suspect's record of calls and (depending on what gets stored) their content. The fact that the program is publicly presented as a means of pre-emptively identifying terrorists doesn't mean that that is what its true aim is: the public explanation may have been made to increase morale, or to scare terrorists, or to make the program seem more legal. Ralph • July 11, 2006 1:19 AM You claim the article is flawed but offer no mathematics to refute it. You suggest it might be rigged but also offer nothing to support the accusation. Data mining for MV crime after it has been commited is not the same as looking for someone you think might commit a crime at an unknown future date. Please don't use the word we because you don't speak for me. If you represent more than yourself plse could you disclose this to other readers. Dimitris Andrakakis • July 11, 2006 2:55 AM Do you actually expect people to take you seriously with a nick like this ? Matthai • July 11, 2006 3:53 AM First possibility is they are paranoid. Second is they do not target terrorist, but political oponents. But there is also third possibility. They are just wasting the money. Or using the money for something else. Look, they have a great job. They can be incompetent and inefficient and they can always hide themselves under "national interest". They won't tell you their success rate and amount of spent money, because that could "endanger national security". It's a great job, isnt't it? quincunx • July 11, 2006 4:48 AM Good point Matthai, not much attention is given to political empire building, or the general workings of Iron Law of Oligarchy in a Monopoly framework. The way to get ahead in gov is to built an empire of employees beneath you. If you can just figure out some excuse for doing it, you will. It is also important to waste as much of your budget as possible so that you can claim that more is necessary. Of course a higher budget is necessary anyway since the previous fiscal period was entirely spent on misallocating the market economy and generally creating more problems in society. In business, success is success. In gov, failure is success. (I need not go into the fact that 'cooking the books' & GNP calculation is nearly the same thing upon close inspection) Now don't get me wrong, gov can be very successful in a narrow sense, especially when they outlaw competition, but 'catching terrorists' is not something they do as well as 'creating terrorists' (just like they are worse at 'performing useful services, economically' than 'creating fiat money'. Of course having people believe they can is a great excuse for perpetual conflict for perpetual If some willing people can just take the time to examine some history their teachers glossed over (somewhat having to do with being threatened to be forced out of the teachers' union) - they would realize that this time period we're in sure seems A LOT like other periods, almost to a tee. And if one sees how they play out (and will continue to play out if people continue to believe that societies' biggest parasite [look up etymology of 'politics'] is actually its greatest benefactor' they should certainly be skeptical of the optimists & those in denial. I advise some reading of Man, Economy, & State by Murray Rothbard and Robert Higgs' Crisis & Leviathan & Against Leviathan to any scholar that would like to approach this topic in any socially scientific manner. Nigel Sedgwick • July 11, 2006 5:07 AM The original article by Professor Rudmin looks too narrowly at the issue. I have written something on the maths of this; however, it would not post well here because of the layout. The complete comment can be found here: http://www.camalg.co.uk/sundry_2006/... The final textual part of my comment is as follows. The individual score is very important, and is that aspect that Prof Rudmin has not considered sufficiently. One does not have to consider for arrest and interrogation, every legitimate terrorist suspect. A much more likely policy, for such well-informed organisations as the NSA and the FBI, is that a sorted list of the higher-scoring legit suspects would be produced, with their scores. Valuable and expensive covert (or in some cases overt) investigatory resources could then be allocated to the very highest-ranking legit suspects, as judged cost-beneficial and according to resource Now, of course some politicians and managers, of the statistically uninformed sort worried about terrorism (and the need to be seen to 'do something') might introduce the odd and serious glitch into this well-understood process. This may well cause investigatory teams to be tasked with futile investigation of the (very likely) innocent. Likewise some law enforcement 'foot soldiers', improperly tasked or insufficiently well trained in the real importance of their work, might find some of the investigatory legwork seemingly pointless. Now for some very approximate numbers (or perhaps not). If P(T) is 1/300,000 and investigatory resources are available for 1,000 investigations (of a particular sort and cost), we have no idea (prior to looking at the actual scores from data mining), as to what threshold 'e' should be set. However, we do know that we should consider no more than the top 1,000 candidates. Then we should consider the scores based on the data mining evidence 'e' (that is the approximate Detection Gain, Watchlist) and also the assumed a priori probability P(T) (which is only known approximately). This is to determine whether the investigation of the least likely individuals on the hot list should actually go ahead. This decision should take into account the cost of the investigation (including the adverse motivational effect of pointless tasking on investigatory staff), together with the level of invasion of privacy and possible infringement on civil liberties (justified through the circumstances and P(T | e) in the least likely case pursued). Now, of course there are several unknowns in all of this. The a priori probability 'P(T)' is only approximate. Likewise, the Probability Density Functions (PDFs) arising of the target (P(e|T)) and non-target (P(e|~T))data subsets are only known approximately. [Though note that the PDF of the non-target set is known much more accurately than the PDF of the target set, and this itself is useful in avoiding bad investigatory targeting.] However, it should be quite obvious that targeting the top-ranking of a sorted list (derived according to evidence of some merit) is far better than forgetting the ranking and setting some arbitrary threshold based on very approximate assumptions. Best regards Bernhard • July 11, 2006 6:11 AM Aren't you ignoring the fact that the sorted list of top-scoring suspects will be full of false positives? I cannot see a reason why real terrorists would on average have a higher score than false positives. Otherwise, the probability of detecting a terrorist would be very close to 1, which is not a realistic assumption. Nigel Sedgwick • July 11, 2006 6:30 AM First, the text file layout was not very good. I've now put up a PDF file, which is a bit better. It's at URL: http://www.camalg.co.uk/sundry_2006/... Assuming the data mining algorithms provide any discrimination in favour of the target subset, every entry in the top-scoring few will have a higher probability of being a target than those not in the top-scoring few. Furthermore, the higher in the list, the more likely that person will be a legitimate suspect (even if the actual probability of them really being a terrorist is still low). This is on the basis of the "gain" in discrimination obtained from the data mining. Consider for example, a person who has telephone the number abroad of a known terrorist organisation; this is against the 290+ million persons in the USA who have not phoned this organisation. Do you not think that caller is somewhat more likely to be a legitimate terrorist suspect than everyone else. Now, there is, of course, the case that any prudent terrorist would not do something so obvious. However, he may have contact by telephone with a less clever accolyte who has, or who did somewhat earlier and did not take the excellent advice to change phone number, address, mobile phone, etc. Each tiny bit of such evidence helps a tiny bit. If enough tiny bits are put together, cost-effectively by automatic processing, it is of some help. Best regards nate • July 11, 2006 7:24 AM Spy Agency Data After Sept. 11 Led F.B.I. to Dead Ends "In the anxious months after the Sept. 11 attacks, the National Security Agency began sending a steady stream of telephone numbers, e-mail addresses and names to the F.B.I. in search of terrorists. The stream soon became a flood, requiring hundreds of agents to check out thousands of tips a month. But virtually all of them, current and former officials say, led to dead ends or innocent Americans. F.B.I. officials repeatedly complained to the spy agency, which was collecting much of the data by eavesdropping on some Americans' international communications and conducting computer searches of foreign-related phone and Internet traffic, that the unfiltered information was swamping investigators. Some F.B.I. officials and prosecutors also thought the checks, which sometimes involved interviews by agents, were pointless intrusions on Americans' privacy." Davino • July 11, 2006 8:12 AM Nigel, setting the threshhold from a ranked list makes sense. However since you're looking at such a small fraction of the population (1000/300,000,000) even obscenely dramatic improvements in the P (T|e)/P(T) gain is still insignificant. A gain of like 2x or 10x, (which is an amazing level of success in commercial data mining applications), would net you only 0.007 or 0.03 terrorists with a false positive rate of 99.9999% or 99.9967% of the 1000 people investigated. The only way this program could be of any use is in producing a list of persons associated with a specific person already under investigation. If we want to devote 1000 more investigations into the associates of Mohammed Atta, we'd run this program, take the top 1000 most associated with him, strike off the ones we already know about, (like Atta supposedly met Saddam, Saddam shook hands with Rumsfeld, and then Bush hired Rumsfeld) and then take a look at the ones that remain. Nigel Sedgwick • July 11, 2006 8:47 AM You are of course right, as am I, each in our own particular way. However, I don't rate your argument on the handshaking; that is unless it is hyperbole. If the latter, that is (I judge) too subtle for many who read here. The first important part is that the Detection Gain (W) should be very high, to compensate for the fact that the a priori probability is very low. Thus, if the product of them is less than say 1% (my not very informed judgement) then that person is not worth further investigation. This is on the basis that the absolute value of any of the assumed figures is rather poor. The next important point is those making the more detailed resource allocation and tasking decision must have some grasp of the numbers and what they mean. If, as is reported just above from the NYtimes, numerate judgement has gone (hopefully temporarily), the money, effort and commitment will be wasted. Going back to Bruce's original posting, and the referenced articles, they are too pessimistic. They are also wrong to the extent that they do not consider the ranking of targets (as I describe) as an aid to resource allocation. Finally, the 1 in 300,000 is not a particularly good starting point. Add in some Detection Gain (not absolute) concerning sex, age, ethnic background, religion, nationality, education. Then add in another set of Detection Gains, concerning good-guy attributes. One can only do these things where they are known (which is by no means common, and which itself costs). There are problems and dangers. But it's still likely to be worth doing to an appropriate extent, rather than not doing through following inadequate reasoning. Putting numbers in makes it grey. It's not black and white and it never has been, except for the simple-minded. Best regards Brian • July 11, 2006 9:40 AM Your point about relative scores assisting targeting is a good one. There is some discussion of the NSA providing rankings to the FBI (scores of 1, 2, and 3). However, those rankings don't really help if the false positive rate is very high for even your highest ranked targets. The Bush administration has been extremely interested in publicizing any successes in the war on terror. None of those successes have been attributable to the NSA's program. The NSA's program has been going on for years, but it hasn't contributed to the capture of a single terrorist. From this I conclude that the NSA program has been and continues to be a waste of money and a massive violation of the law without making anybody safer. If the program has in fact been successful, the NSA needs to prove it, both to Congress and the people. "The efficacy of the method is irrelevant!" In a court of law, you're probably right. In the court of public opinion, it makes a big difference. roy • July 11, 2006 9:51 AM 1. The most optimistic analyses I've seen of wholesale data mining all ignore the obvious: the enemy, not being fools themselves, can have opted out of the universe being mined by the simple expedient of using communication channels that the NSA cannot examine. Couriers can travel without their presence being recorded, as passengers in cars or mass transit, or as unregistered passengers on aircraft, trains, or ships. There is no electronic communication here, so monitoring is physically impossible. Handwritten messages, or electronically recorded messages, can travel by courier, or through the mails, undetected. Operating entirely outside the sphere of surveillance reduces the base rate to mathematical zero, obviating the entire data mining enterprise's ostensible justification for its existence. If the terrorists are keeping their terrorist communication outside the sphere, then we are building castles in the air and having discussions about engineering and architectural concerns that simply don't matter. 2. If wholesale data mining is done diligently, it will result in complete failure, for a reason not evident in statistical analyses. Suppose you were in charge of investigating 30,000 positives a day, and leaving cases open indefinitely was unacceptable. Even if your staff were huge, clearing 30,000 cases a day would put you all in the business of clearing cases, and only that. After the first several cases, they would all start looking alike, and your abilities to make distinctions would extinguish quickly. So, even if there were the rare occasional terrorist among your positives, you would routinely clear his case because routinely clearing cases is all you know how to do. 3. If wholesale data mining is done dishonestly, while it will never turn up a terrorist, it will generate bogus terrorists, keeping up with government demand in their publicity scams. If the agency investigates 30,000 positives a day, the unofficial standing order would be to pick out the few who would most easily be framed. (With 30,000 random people to pick from, finding the idiots should be no trouble.) Run the picks through kangaroo courts and make sure the press sticks to the party line. Keep reminding the public what a great job the government war on 'terrism' is doing. Meanwhile remember to occasionally put out nonspecific warnings to take no specific actions at no specific time in no specific place. 4. The NSA people involved here are not stupid or innumerate. But they do know where their money comes from and they are willing to play along. It's that, or leave. Those who have left can claim honor. Those who have stayed are criminally responsible. 5. The cost to a filthy-rich government of a single false positive is negligible. The cost to that single false positive can be maximal: it can be the ruination of his life, even his execution without trial. 6. In 1776 the troublemakers in the colonies declared independence, insisting they would not tolerate shabby treatment from somebody named George. What was that line about not learning from history? Nigel Sedgwick • July 11, 2006 10:47 AM @Roy, who wrote: "In 1776 the troublemakers in the colonies declared independence, insisting they would not tolerate shabby treatment from somebody named George. What was that line about not learning from history?" But they chose someone called George to lead them, and got the French to help (on the sound basis that they, the French, would think 'my enemy's enemy is my friend'). Which just goes to show that arbitrary facts are no help, as well as arbitrary numbers being no help. Best regards Anonymous • July 11, 2006 12:04 PM Bruce - I think much of your work is great but on this issue I have to inform you that you're missing a few tricks. I can't tell you much about what I do, but in my everyday work, I use data mining techniques (admittedly quite unique and specialised ones, but data mining nevertheless) to track down fraudsters, terrorists and other 'organised' criminals. And it works. In fact, it works really well. I don't need to appeal to Bayes theory or to any speculation based on completely unrealistic made-up scenarios. I can simply point to the fact that I do it for real day-in, day-out on real live data and it works. It doesn't catch everyone, but it does catch many. And the impact on everyone else is miniscule. We throw away data that doesn't relate to or contain anything suspicious immediately so we don't have to waste more time and money working on it. One of the many things you've either missed or chosen to ignore is that it is not only information about actual bombers and terrorist cell members that gives useful leads to identify a terrorist plot. There are all sorts of individuals and activities that play a part in enabling acts of terror against innnocent citizens. Who sells the materials to these guys on the black market? Answer? Crooks. Greedy people. How do they get money to fund their terrorist acts? For tThose training camps in Afghanistan and elsewhere? Answer? Drugs, fraud, serious organised crime. Follow the money. Who runs the websites that host manuals on how to build bombs to maximise casualties? Bad guys. None of these people might be classed as 'terrorists' by your simplistic assumptions but I reckon many people would count these illegal activities as 'fair game' in the fight against terrorism. Certainly these activities are not included in any of the numbers you've used. If you tot up the number of people involved in these activities and the number of relationships amongst them, you suddenly find a lot more needles for the same amount of hay. Also, your implication that data mining only works with known profiles is wrong; unsupervised clustering analysis can detect anomalous behaviours without ever being told what they look like. And your statement that there are no well-described terrorist profiles is plain wrong, There are hundreds of them. I use them every day. Please credit your readers' intelligence by doing a bit of research on this topic before you write about it again. Pat Cahalan • July 11, 2006 12:12 PM @ Boris Your blue sedan counterexample is seriously rigged. 30,000 or 300,000,000 suspects is functionally equivalent if you have 10 cops. Investigating blue sedans makes sense if there are 100 blue sedans and you have 10 police officers, it makes no sense if there are 1,000,000 blue sedans and you have one cop. Using the numbers from the example in the link (remembering they're pretty generous) - if you have 1000 cops, that's 30 suspects per cop to investigate. Again, remember that all of the "ususal suspect" questions have already been asked (that's the point of the data mining in the first place). You already know there's "something suspicious" about these suspects, so you'd have to figure that investigating these suspects is going to require you to put in some serious work. If it takes 2 weeks to clear a suspect, that's 1,000 cops working full time for 3/5 of a year to identify the 400 Now, admittedly, that looks like a pretty good deal. But if you have 1,000 cops working full time for 3/5 of a year to catch 4 terrorists, that's not so much of a good deal if you can catch 8 of them by having one cop log into a chat room and pretend to be an Islamic militant and do humanint. If it takes a month of bugging their phone and following them (something that may require a team of investigators), the tradeoffs start looking really bad quickly... Kevin Davidson • July 11, 2006 12:47 PM The problem with your blue sedan analogy is that the NSA is examining every blue sedan in the country, not just every one in the neighborhood. Benny • July 11, 2006 12:49 PM @ Anonymous (12:04 pm): Could you please provide pointers on how to find more information about these successes? It would almost be comforting to me to see evidence that data-mining can work against terrorist networks, that we are not throwing privacy out the window for dubious gains. But it's hard for me to imagine the US government not widely publicizing any such successes to justify their efforts. Kevin Davidson • July 11, 2006 12:55 PM I think you missed the point with scoring. Scoring eventually results in a binary choice, you either investigate further or you don't. If you investigate hundreds of thousands of people, then the resources you could have applied in other areas are mostly wasted. Or perhaps you have three options: Very high score -- shoot on sight High score -- ask questions Low score -- ignore Davino • July 11, 2006 1:46 PM Kevin Davidson: I agree. And from the original post, the very highest of scores 1-(3,900/300,000,000) under the most fantastic conditions, would give maybe a 23% chance of being right, or 77% chance of shooting an innocent person. Terrorist's moms and their mom's friends might score surprisingly high by this program. bob • July 11, 2006 2:44 PM This seems to be the thread that wont die. You guys also seem to be overlooking what they do AFTER they've decided that (a given person) is not a threat (begging the question that they ever actually decide someone will be excluded from further 'processing'), what do they do with the information pertaining to him/her? Keep it until something he/she has done IS illegal? Let office workers take it home on a laptop and leave the hard drive sitting on the roof of their car overnight? Sell it? Anonymous • July 11, 2006 3:08 PM I'd rather not say anything more, but let's just say that my work is very unique and specialized and is unencumbered by stuff like math or proofs or anything like that. Brian • July 11, 2006 3:18 PM @ Anonymous (12:04 pm) There is no doubt in my mind that it is possible to do data mining for terrorist related activities. I seriously doubt that trolling through the phone bills of 300 million people is a useful part of that analysis. joe • July 11, 2006 3:35 PM "unencumbered by stuff like math"....datamining without math...wow, that's pretty nifty technology Anonymous • July 11, 2006 3:49 PM That's right, buddy. It's unique and highly specialized and very secret. All your maths and proofs and simplistic assumptions doesn't cut it. It's datamining running on time-tested cliches like "follow the money". tony • July 11, 2006 9:22 PM i think this article is absurd. actually his approach to explain his theory does not make sense to me. it is like to say that nobody is going to win the lotto because the chances are slim: {"el numero es monstruoso pero no infinito"--borges but there have been a lot of winners. Rob • July 11, 2006 11:05 PM @Ralph at July 10, 2006 07:16 PM >If the NSA says "this person is a terrorist" on the basis of mass survellience they have less chance of being correct than a man flipping a coin. Using the original assumptions (40% detected, 0.01% false positives), I calculated the following: 299 970 000 non terrorists correctly identified (99.9897%) 30 000 non terrorists misidentified as terrorists (0.01%) 400 terrorists identified correctly (0.0001%) 600 terrorists missed (0.0002%) If my calculations are correct, it means that the system is correct 99.9898% of the time. The statistic compared to the flipping of the coin was: If identified as a terrorist, are they really a terrorist. If the system gave 50% for that statistic, wouldn't it mean that only one innocent person would be detected per terrorist detected? Surely narrowing their search from 300 million to 30 400 to find their 400 terrorists would be a worthwile step. Rob • July 11, 2006 11:10 PM > "NSA's surveillance system is useless for finding terrorists." But useful for just helping narrow down the search? Jon Sowden • July 11, 2006 11:29 PM "That's right, buddy. It's unique and highly specialized and very secret. All your maths and proofs and simplistic assumptions doesn't cut it. It's datamining running on time-tested cliches like "follow the money"." Wow - you've cracked biological computing too! Double wow! Jon • July 12, 2006 3:54 AM I am the author of the post at "Posted by: Anonymous at July 11, 2006 12:04 PM". I didn't actually mean to post it anonymously: explorer helpfully cleared that form field for me when I wasn't looking. I've no idea who the subsequent "Anonymous" was, but I fear he/she may be taking the mickey. Of course I cannot point to evidence of these successes and I cannot tell you where to find the technology without revealing who I work for. What I can say is that I do not work for the US government; I work in the UK. But you don't need to believe *me*; if Bruce and others actually bothered to do some research on Google, they'd find lots and lots of people all successfully doing data mining in this way. I'll give you a clue: social network analysis. And while they were there, they could look up data mining and find out what it was. This would hopefully stop them making gob-smackingly stupid assumptions about how you would use it to look for terrorists and other criminals. The 'maths' presented assumes that data mining techniques just look at every 'indicator' one by one to see whether it's terrorist or not. This is clearly ludicrous and ignorant. The whole point of data mining is to work with relationships amongst multiple entities and statistical relationships amongst multiple indicators. Thus rendering all of this 'maths' about data mining utterly meaningless. Tank • July 12, 2006 4:22 AM > @Tank You claim the article is flawed but offer no mathematics to refute it. > Posted by: Ralph at July 11, 2006 01:19 AM What math? I said the assumption that this data is used to identify persons as terrorists rather than identify the human networks associated with an identified subject is flawed. Square that or add 7 if you like but the point you missed was that adding math to a flawed assumption is pointless. > You suggest it might be rigged but also offer nothing to support the accusation. Yeah... i did. The problem here is apparently that you didn't read or understand anything I posted before you replied to it. BTW did you fail to provide math to refute my assertion that DNA is completely useless for identifying criminals because vials of DNA all look the same in a line up (false positive rate) or did you get the point I was making about a rigged arguement against the usefulness of data ? > Data mining for MV crime after it has been commited is not the same as >looking for someone you think might commit a crime at an unknown future date. Yeah that was my point. My other one was you'd need to be ignorant or intentionally misleading if upon learning that there was an MV registry which is used frequently by all law enforcement agencies in investigations, you assumed that it was being used for predicting future crimes or identifying potential criminals. Supporting such a ridiculous assumption with maths, however competantly calculated, in no way improves upon the ridiculousness of your assumption. > Please don't use the word we because you don't speak for me. If you represent > more than yourself plse could you disclose this to other readers. Yeah actually i do speak for you since you're not gonna be willing to say you disagree with what i've written. In fact since i can't imagine anyone will i think i'll stick with the all encompassing "we" as entirely appropriate. Tank • July 12, 2006 4:41 AM > And your statement that there are no well-described terrorist profiles is plain > wrong, There are hundreds of them. I use them every day. > Please credit your readers' intelligence by doing a bit of research on this topic > before you write about it again. -- " " @ July 11, 2006 12:04 PM This is a reoccurring problem. Given the fact that reporting on suspect and evidence captures in terrorism cases is now worldwide mainstream news and that there are at least a dozen published works dealing only with analysis of terrorist's motivations, personal accounts and their lives at some point you have to conclude the ignorance is willing and purposeful. Hell places like SITE are now included alongside the NYT in google news. Exactly how much research could you do on the topic of terrorism and still believe that the best characteristics the NSA has for terrorist profiles is their 7/11 purchases and which phone numbers they dialled. My guess is none. Anonymous • July 12, 2006 7:57 AM Just to summarize: All of you that use "maths" is stupid and ludicrous but I can't prove any of this because I work for some super secret stuff in the UK. But if you don't believe me, you're stupid and ludicrous too because you don't use Google. And, bloody hell, that stupid explorer cleared my name out again when I wasn't looking! Someone's attacking me! Perhaps I can find out who really is doing this using my sophisticated data mining techniques that Bruce and everyone can't seem comprehend the brilliance of. bob • July 12, 2006 10:02 AM @tony: your lottery analogy fails because the people who did NOT win the lottery were only out ~$1, not ostracized by their neighbors, put in jail or had their homes and possessions seized. Nigel Sedgwick • July 12, 2006 12:07 PM bob wrote: "... not ostracized by their neighbors, put in jail or had their homes and possessions seized" That is why I like such a measure as Detection Gain, Watchlist. It is fairly easy to understand, for example, that the legitimate suspect is thought to be approximately 10,000 times more likely to be a terrorist than the average US citizen (ie around a 97% chance that he is innocent); he needs to be investigated further, prior to any consideration of arrest or search warrants. That indicates how the suspect should be treated, much better than "he's a suspected terrorist, bring him in (dead or alive)". And remember that the current fuss is about traffic analysis of telephone call logs: it's no where near evidence in the sense normally considered in a criminal prosecution. Best regards Alex • July 12, 2006 1:43 PM Do you ever respond to these comments? I usually love your site, but this post is extremely bad. My first thought upon reading it was that the 50-50 coin toss analogy was terrible, and in fact a system with one false positive for every true positive would be an excellent system indeed. And sure enough, klassobanieras and others have been hammering this point. Do you have a response? I guess I find this worrying because I normally respect your judgement. Not to get too personal, but are you letting your feelings against the program cloud your analysis? johnb • July 12, 2006 2:14 PM The comparison to flipping a coin is specious - flipping a coin on 300 million people in the US would misidentify 150 million as terrorists. A detector that was only 23% accurate, but only misidentified 3000 people, as in the example, would be quite useful. Another example of damn lies and statistics. Vulturetx • July 12, 2006 7:52 PM Wow all the wrong assumptions from the orignal article onward to the commentors. 1. Data Mining when using a seed of "known terrorist(S)" significantly increases the detection rate. Yes there are more false positives than true positives. Turns out - many decision trees have this fault; does not stop beneficial results from occurring. 2. Data Mining is a group of programs ran by NSA. When a hit is collaborated by multiple programs the possiblility of a false positive is lessened. 3. Contrary to the extremists like Roy , being tagged as a "terrorist suspect" by the NSA does not mean investigation even much less the death and impisonment he claims. Since the FBI and other agencies subject these lists to human review. 4. Yes the system has worked, and the NYT has talked about it. They just did not understand the methodology. 5. Congrats you are already the victim of data mining. Usually multiple incident victims, but you keep reading your email and going to websites. Me -someone who has built the Data Mining collection Clusters. winsnomore • July 12, 2006 9:19 PM While the good professor doesn't know exactly what criteria NSA uses, he is surely brilliant for proving it can't work!! Keep digging junk Bruce .. to find "scientific" arguments to agree with you .. what's next Michael Moor's dissertation on proababilties. Clive Robinson • July 13, 2006 6:41 AM Having read through the postings, the argument appears to boil down to the probability of finding a lone terorist before he has committed the act, based on his communications and contacts. In practice I doubt very much that that is the main aim of most anti-terrorist activities. The professor is probably correct, you will not find an intelegent lone terrorist by data minning or any other mass survalence technique, it is just to easy to stay below the noise level. Also history of their communications and conntacts is not likley to throw up any other terrorists. Also the lone terrorist due to supply difficulties is not likley to have access to sufficient materials to be "Random Target" active. They are more likley to pick a target such as an aircraft or train where a small explosion will produce a "high value" return. Due to this the normal survalence systems are considerably more likley to pick them up. However if you think instead about terrorist organisations you are not dealing with lone individuals, this gives rise to recruitment issues where a history of communications and contacts will have a high probability of identifing other members of the terorist organisation. With a terrorist organisation, the most desirable person to remove is the "Directing Mind", followed by the "Financing Hand", then either the "Supporting Network" or "Recruiting Agents". If these people are removed then the terrorist organisation will become at best disffunctional or cease to exist. The terorists who commit the actual acts are as has been seen recently "expendable bio-mass/DNA" and will have been kept as an issolated group for a significant period of time by the organisation for security. This means that their may well not be sufficient history in the NSA DB for their communications and contacts to be seen. However if you can identify even one recruiter and work your way back up the command chain to the directing mind, you can then work your way back down the individual paths to quite a large part of the organisation. The problem is that in an established terrorist organisation the recruiter is likley to know they are a marked person and will use non conventional communications (say cut outs) and contacts back to the rest of the organisation. This also suggests that the Proffesor might also be correct, in that you cannot mine data you don't have. However the next line of attack the security services can take is to follow the financing and purchasing chains. Even terrorists need to eat, sleep and relax, all of which requires the expenditure of money. Unless they are out at a job then they will need to receive the money from somewhere. Datamining for people with odd finacial profiles is going to prove very very fruitfull not just for finding terorists but drug dealers, people trafficers and other criminals. We do not know if the NSA has access to everybodies financial information but it would seam unlikley that they did not at some level (Tax returns etc) or could easily obtain it in bulk (afterall large chunks are for sale as a commercial activity and the DHS does have the power to get the information if it so desires). Also to commit a serious attack terorists need transportation and other materials, most of which can be traced back to a financial transaction, the recording of which is usually beyond their control. Also some materials are just not that easy to get hold off in the quantities required, so looking at abnormalities in purchases (or thefts) of certain materials and other items might well give an indication as to an event becomming likley. Likewise with importation and transportation information. Again we do not know if the NSA has access to these types of records but again it would seam unlikley that they do not at some level. So if the NSA can get access to financial, purchase and transportation records as well then the odds of finding terrorist goes up a lot. As the credit agencies do a lot of financial modeling of US citizens allready, a scan through their DBs cross corelated with even a very large list of possible terrorists will produce significant I think that with additional data over and above the communications and contacts a fairly effective automated system could be quite effective at finding large numbers of "undesireables" not just chunkada • July 16, 2006 10:53 PM If you thought the argument was wrong, you are incorrect. NSA dragnetting is not effective at finding terrists. The probablility argument is quite correct. The NSA dragnet pulls in and misidentifies many many many innocents, while locating only a few 'baddies', and the problem of seperating those groups still remains. Probably not very easy, given that all target suspects fall into the category, by definition, of what the NSA call 'dodgey'. The coin-flipping is referring to looking at people who have been selected through NSA, not at looking at random members of the population. It is a confusing presentation to use. The rest of the argument must be that the cost of invading the privacy and unjustly accusing 30,000 or however-many innocents to find 90 or however-many *potential* criminals is too high. I think this is a discussion about gaining intelligence concerning people who *may* at some point commit a crime, more than it is about locating people who evidence indicates have committed crimes. If it was the latter, then I think a more directed approach would be taken. Would NSA FBI etc even consider this level of attrocity if they had an option of following hard evidence leads? I don't think so. Either they have a secondary motive, or they have simply mis-judged the appropriateness of this response. chunkada • July 16, 2006 11:22 PM further on the coin flipping thing, The point of the anology is that even *if* the NSA dragnet is good enough to make the probability that a dragnetted identity == terrist P = 0.5, the problem is that you have still got a big bunch of people who that P applies to. Go back and look at the numbers used in the examples given, and question which of these hit/miss-rates you think are realistic for an automated system to achieve. Make up an example for yourself using the hit and miss rates you think are real. Then add it up like this: * N(I) number of innocents dragged = population of US (a very large number) * N(!t) number of innocents believed by NSA to be terrorists = population of US * misidentification rate (still a very large number) * N(T) number of terrorists dragged = population of US * terrorism rate (a very small number) * N(t) number of terrorists actually identified as terrorists by NSA (an even smaller number) * P(T) probability that a person *identified by NSA as a terrist* is *actually* a terrorist = N(t) / [N(t) + N(i)] Remember that N(i) is much larger than N(t) -- a very small number divided by a very large number, ie approximately zero, as explained by the good Professor. Recall also that N(i) is very large, so sorting through the [N(i) + (N(t)] group by hand is not likely to be feasible. And because the P(T) is almost zero, any correlation between appearing on the NSA list, is specious. The argument given above by some readers that the NSA are nice to people who appear on the list kinda reinforces this argument, rather than weakening it. The NSA *know* that the list is meaningless. So what is the purpose of the drag-net? Don't ask simply, what is the purpose of the list? -- that is not necessarily the purpose of the drag-net. In fact, I hope the list is not the purpose of the drag-net, since as pointed out also by the Professor or Bruce, the list does have correlations to activities other than terrorism -- unless the identities are chosen *completely at random*. So what we end up with is a mass of publicity, a mass of fear toward the state, a mass of fear of terrists, and a list of people who fit some set of criteria which has not been made public. But what we don't end up with is a useful list of people who have any useable probability of being associated with terrorism. And what was the cost in financial terms for the technical implementation, let alone the social and personal costs and future political and societal implications? This technology is not run-of-the-mill. It has been purpose-designed, and implemented at great cost, at multiple points in the system, ie multiply the cost by number of installations (it's not deployed widely enough to become cheaper with scale ... unless it is being deployed globally ...) I think, think about this. chunkada • July 16, 2006 11:23 PM oops: N(!t) and N(i) should both be either N(!t) or N(i) .. take your pick which symbol I should have used. Anonymous • July 16, 2006 11:25 PM And one more thing: Don't bring the 'the probability can be further refined by additional research' argument. The probability assigned is defined as the final probability outcome of the SYSTEM. If you think it can be refined, then assign a better probability in the first place. Doesn't matter. The sums still say you're wrong. chunkada • July 16, 2006 11:25 PM above is me chunkada • July 16, 2006 11:41 PM oh and if you have 1/300,000 terrorists (@Nigel Sedgwick), you have a problem that no dragnet is gonna cure. In a population of 300,000,000 -- 1,000 *terrorists*? Are you kidding me?! I don't see embattled militia fighting street-to-street over there yet. I don't see internal faction wars. Or do I? I think P(T) is much, much lower, I think actual terrorists, you should compare the 9/11 incident, which allegedly took 20 personel within the US. And given this has not happened again, it probably means that even less than 100 people in the US would commit a major act of terrorism if you did nothing (I'm guessing). And of those 100, how many are truly competant? And are they likely to have the same success as before? If so, why? Because your people are all busy snooping on their neighbours, instead of trying to make the country a nicer place and make its installations less useable for harm? I think you'd be lucky to find 1,000 actual terrorists worldwide in any given year. What's that .. 1/6,000,000. Put that figure into your NSA dragnet probability calculator and watch the smoke come chunkada • July 16, 2006 11:45 PM @Tank: "Where but in the least informed dicussions is it suggested that the NSA calls database is used to identify terrorists rather than providing an unrivalled and infinitely useful investigative tool to aid existing investigations by providing an outline of a suspects personal contact networks ?" Probably in the FISA court room, I guess. Isn't that the exact type of scenario where a warrant is granted? So tell us, in your infinite expertise, what is the FISA-abortive NSA thing for? ... and many more responses like this .. I do not have the time. Good luck with it, hope you are over it soon America. chunkada • July 16, 2006 11:57 PM Oh and the best use for datamining? Finding credit card numbers and logins to use to book hotels, cars, flights ... buy drugs, cars, guns, seduce your boss' wife, commit acts of larceny and ... terrorism I guess, in the worst-case But we trust our friendly law-enforcement now, don't we, huh? And our government. Trust both implicit and explicit. Yeah, right. chunkada • July 17, 2006 12:01 AM "As terrible as the war in Iraq is, it has not managed to effectively kill 500,000 children unlike the trade sanctions earlier." Comparing ten years with 2 years? And what about the mutagenic effects of the chemicals and heavy metals and radiological elements sprayed around, which will have the same effect as the ones sprayed about ten years ago did, ie birth defects and odd syndromes. Tank • July 18, 2006 6:51 AM >> @Tank: "Where but in the least informed dicussions is it suggested that the NSA calls database is used to >> identify terrorists rather than providing an unrivalled and infinitely useful investigative tool to aid existing >> investigations by providing an outline of a suspects personal contact networks ?" > Probably in the FISA court room, I guess. Isn't that the exact type of scenario where a warrant is granted? Yep. The only thing that should generate a question mark here is how you went from sounding like you had a clue in one sentence.... > So tell us, in your infinite expertise, what is the FISA-abortive NSA thing for? Posted by: chunkada at July 16, 2006 11:45 PM ....to sounding like you're puzzled by what you just said yourself in the following sentence. BTW who gives a shit what FISA is doing or not ? It doesn't factor into the conclusions of this article or my statements about the usefulness of phone contact data for mapping human networks. Tom • September 3, 2011 4:38 AM Well, there are lies, damn lies, and statistics. This fits into the statistics category. The assumptions here are fundamentally flawed, sorry. You are absolutely correct in that with the overwhelming majority of non-terrorists relative to terrorists, an INITIAL positive "hit" as a terrorist is far more likely to identify a non-terrorist than a terrorist, but with each successive round of testing, the ability to identify a terrorist increases dramatically (and "further investigation" would not mean interrogation, it would mean reading a second email, or more likely, reading a first email as the initial positive hit would be from a computer identifying some anomaly like a key word or strange internet purchase). As an example: the majority of people who test HIV positive on their first test DO NOT have HIV, but no one says the test is useless, because all of those people then take a second test, and the vast majority of people who test positive multiple times DO have HIV. Once you've gone through one or two rounds of selection, the odds of separating true-positives from false-positives becomes very I'm ambiguous on the use of data mining to capture terrorists, but I hate to see the credibility of statistics diminished in the eyes of the public because people without the ability or desire to use it properly try to abuse it to sway public opinion. Oh, one more thing: Floyd Rudmin, your professor. He is a professor of psychology. Which means, right, he is not an expert on Bayesian analysis. He's just some guy as far as statistics are concerned. I feel it was dishonest to not state that he is a professor in a field not related to statistics because it leads readers to assume his is an expert opinion. It isn't, its just propaganda. hope • September 3, 2011 5:24 AM @Tom, "Oh, one more thing: Floyd Rudmin, your professor. He is a professor of psychology. Which means, right, he is not an expert on Bayesian analysis" Not a psychology but the study of groups within groups would have some merit to this. if you have 50 groups(character/ethic/lifestyle,etc), and one of the groups is likely to be the type for a terrorist then a 80% susses rate, could just be based on the selected persons which were in the neighborhood, and there wasn't the other 59 groups in high numbers. A million dollar house in a very bad neighborhood, the people that own the million dollar house might not show up in the result type thing CVi • August 2, 2013 11:42 PM The computers look for terrorists and leave the dirty work to the police... Resulting in the police doing the dirty work for the computer instead of hunting credit card and cell phone thiefs. Thy could assign the computers to catch the cell phone and credit card thiefs, and let the police do their work. They'd probably catch as manny terrorists. @tom: "the majority of people who test HIV positive on their first test DO NOT have HIV, but no one says the test is useless, because all of those people then take a second test" It depends. Let's not forget the miss rate. If HALF of people with HIV tested negative. In addition to the false positives. Also let's imagine that second test involves a biopsy. One more thing that needs to be said, all the terrorists that get caught and get media coverage, are discovered by police "stumbling" upon a clue, that is nested up using regular police work, the old fashioned way. The ones that doesn't get caught that way, might as well turn out be the ones that NSA can't catch either. And as I said earlier, if they used the computers to catch the credit card and cell phone thiefs *instead*, they'd free up those resources to do other police work. The nett result would be a lot of saved time, money, and more "non terrorist" criminals caught.
{"url":"https://www.schneier.com/blog/archives/2006/07/terrorists_data.html","timestamp":"2014-04-21T07:10:54Z","content_type":null,"content_length":"183410","record_id":"<urn:uuid:7ad6dfd7-94e7-4a61-bf03-78929154e29a>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Strategies for graphical model selection Results 1 - 10 of 20 , 1995 "... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..." Cited by 981 (70 self) Add to MetaCart In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P -values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications in genetics, sports, ecology, sociology and psychology. , 1996 "... This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the ..." Cited by 172 (0 self) Add to MetaCart This literature review discusses different methods under the general rubric of learning Bayesian networks from data, and includes some overlapping work on more general probabilistic networks. Connections are drawn between the statistical, neural network, and uncertainty communities, and between the different methodological communities, such as Bayesian, description length, and classical statistics. Basic concepts for learning and Bayesian networks are introduced and methods are then reviewed. Methods are discussed for learning parameters of a probabilistic network, for learning the structure, and for learning hidden variables. The presentation avoids formal definitions and theorems, as these are plentiful in the literature, and instead illustrates key concepts with simplified examples. Keywords--- Bayesian networks, graphical models, hidden variables, learning, learning structure, probabilistic networks, knowledge discovery. I. Introduction Probabilistic networks or probabilistic gra... , 1993 "... Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GUM; this appears to be quite accurate. A reference set of proper priors ..." Cited by 96 (28 self) Add to MetaCart Ways of obtaining approximate Bayes factors for generalized linear models are described, based on the Laplace method for integrals. I propose a new approximation which uses only the output of standard computer programs such as GUM; this appears to be quite accurate. A reference set of proper priors is suggested, both to represent the situation where there is not much prior information, and to assess the sensitivity of the results to the prior distribution. The methods can be used when the dispersion parameter is unknown, when there is overdispersion, to compare link functions, and to compare error distributions and variance functions. The methods can be used to implement the Bayesian approach to accounting for model uncertainty. I describe an application to inference about relative risks in the presence of control factors where model uncertainty is large and important. Software to implement the - DEPARTMENT OF STATISTICS, UNIVERSITY OFWASHINGTON , 1993 "... In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null ..." Cited by 89 (6 self) Add to MetaCart In a 1935 paper, and in his book Theory of Probability, Jeffreys developed a methodology for quantifying the evidence in favor of a scientific theory. The centerpiece was a number, now called the Bayes factor, which is the posterior odds of the null hypothesis when the prior probability on the null is one-half. Although there has been much discussion of Bayesian hypothesis testing in the context of criticism of P-values, less attention has been given to the Bayes factor as a practical tool of applied statistics. In this paper we review and discuss the uses of Bayes factors in the context of five scientific applications. The points we emphasize are:- from Jeffreys's Bayesian point of view, the purpose of hypothesis testing is to evaluate the evidence in favor of a scientific theory;- Bayes factors offer a way of evaluating evidence in favor ofa null hypothesis;- Bayes factors provide a way of incorporating external information into the evaluation of evidence about a hypothesis;- Bayes factors are very general, and do not require alternative models to be nested;- several techniques are available for computing Bayes factors, including asymptotic approximations which are easy to compute using the output from standard packages that maximize likelihoods;- in "non-standard " statistical models that do not satisfy common regularity conditions, it can be technically simpler to calculate Bayes factors than to derive non-Bayesian significance , 1997 "... In this work we introduce a methodology based on Genetic Algorithms for the automatic induction of Bayesian Networks from a file containing cases and variables related to the problem. The methodology is applied to the problem of predicting survival of people after one, three and five years of being ..." Cited by 71 (11 self) Add to MetaCart In this work we introduce a methodology based on Genetic Algorithms for the automatic induction of Bayesian Networks from a file containing cases and variables related to the problem. The methodology is applied to the problem of predicting survival of people after one, three and five years of being diagnosed as having malignant skin melanoma. The accuracy of the obtained model, measured in terms of the percentage of well-classified subjects, is compared to that obtained by the called Naive-Bayes. In both cases, the estimation of the model accuracy is obtained from the 10-fold cross-validation method. 1. Introduction Expert systems, one of the most developed areas in the field of Artificial Intelligence, are computer programs designed to help or replace humans beings in tasks in which the human experience and human knowledge are scarce and unreliable. Although, there are domains in which the tasks can be specifed by logic rules, other domains are characterized by an uncertainty inherent... - Biometrika , 1996 "... this paper, we will only consider undirected graphical models. For details of Bayesian model selection for directed graphical models see Madigan et al (1995). An (undirected) graphical model is determined by a set of conditional independence constraints of the form `fl 1 is independent of fl 2 condi ..." Cited by 55 (8 self) Add to MetaCart this paper, we will only consider undirected graphical models. For details of Bayesian model selection for directed graphical models see Madigan et al (1995). An (undirected) graphical model is determined by a set of conditional independence constraints of the form `fl 1 is independent of fl 2 conditional on all other fl i 2 C'. Graphical models are so called because they can each be represented as a graph with vertex set C and an edge between each pair fl 1 and fl 2 unless fl 1 and fl 2 are conditionally independent as described above. Darroch, Lauritzen and Speed (1980) show that each graphical log-linear model is hierarchical, with generators given by the cliques (complete subgraphs) of the graph. The total number of possible graphical models is clearly given by 2 ( - STAT.SCI , 1999 "... Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-con dent inferences and decisions tha ..." Cited by 42 (0 self) Add to MetaCart Standard statistical practice ignores model uncertainty. Data analysts typically select a model from some class of models and then proceed as if the selected model had generated the data. This approach ignores the uncertainty in model selection, leading to over-con dent inferences and decisions that are more risky than one thinks they are. Bayesian model averaging (BMA) provides a coherent mechanism for accounting for this model uncertainty. Several methods for implementing BMA haverecently emerged. We discuss these methods and present anumber of examples. In these examples, BMA provides improved out-of-sample predictive performance. We also provide a catalogue of - In Bayesian Statistics 5 , 1995 "... Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significanc ..." Cited by 39 (12 self) Add to MetaCart Survival analysis is concerned with finding models to predict the survival of patients or to assess the efficacy of a clinical treatment. A key part of the model-building process is the selection of the predictor variables. It is standard to use a stepwise procedure guided by a series of significance tests to select a single model, and then to make inference conditionally on the selected model. However, this ignores model uncertainty, which can be substantial. We review the standard Bayesian model averaging solution to this problem and extend it to survival analysis, introducing partial Bayes factors to do so for the Cox proportional hazards model. In two examples, taking account of model uncertainty enhances predictive performance, to an extent that could be clinically useful. 1 Introduction From 1974 to 1984 the Mayo Clinic conducted a double-blinded randomized clinical trial involving 312 patients to compare the drug DPCA with a placebo in the treatment of primary biliary - Communications in Statistics: Theory and Methods , 1996 "... Acyclic digraphs (ADGs) are widely used to describe dependences among variables in multivariate distributions. In particular, the likelihood functions of ADG models admit convenient recursive factorizations that often allow explicit maximum likelihood estimates and that are well suited to building B ..." Cited by 38 (5 self) Add to MetaCart Acyclic digraphs (ADGs) are widely used to describe dependences among variables in multivariate distributions. In particular, the likelihood functions of ADG models admit convenient recursive factorizations that often allow explicit maximum likelihood estimates and that are well suited to building Bayesian networks for expert systems. There may, however, be many ADGs that determine the same dependence (= Markov) model. Thus, the family of all ADGs with a given set of vertices is naturally partitioned into Markov-equivalence classes, each class being associated with a unique statistical model. Statistical procedures, such as model selection or model averaging, that fail to take into account these equivalence classes, may incur substantial computational or other inefficiencies. Recent results have shown that each Markov-equivalence class is uniquely determined by a single chain graph, the essential graph, that is itself Markov-equivalent simultaneously to all ADGs in the equivalence clas...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1815272","timestamp":"2014-04-18T01:12:20Z","content_type":null,"content_length":"38470","record_id":"<urn:uuid:f8a10218-de4a-4358-b0fb-e24218832488>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00379-ip-10-147-4-33.ec2.internal.warc.gz"}
Niles, IL Algebra 1 Tutor Find a Niles, IL Algebra 1 Tutor ...I can be a big help to any first or second year student who has questions about the Latin and English grammar and sentence structure. I help students organize their time and learn study skills. I teach a discrete math course at a university entitled Quantitative Reasoning. 11 Subjects: including algebra 1, calculus, geometry, GRE ...I have experience teaching on-line and am willing to negotiate distance sessions with individual students. Please contact me for further information on how I can help you meet your foreign language goals. All lessons should be cancelled or re-scheduled at least 24 hours in advance. 6 Subjects: including algebra 1, Spanish, French, GRE ...Washington K-8 elementary school in Birmingham, AL. I've been playing the clarinet since I was 10, and now I'm 25. During my school years, I have been in band, marching band, symphonic band, and jazz band. 10 Subjects: including algebra 1, reading, calculus, algebra 2 ...Although I have mainly tutored science subjects and french, those being my strongest subjects, I am also knowledgeable in the social sciences. I have worked with both young children and college students and always adjust my teaching style in order to appease this wide array of ages. Everyone ha... 21 Subjects: including algebra 1, reading, chemistry, English ...In addition, I teach history (U.S., European and World), politics and government. I have worked with students who not only need to know about these subjects, but also have to write papers on them. Let's get together and learn!I am qualified to tutor Hebrew for several reasons. 38 Subjects: including algebra 1, reading, English, writing Related Niles, IL Tutors Niles, IL Accounting Tutors Niles, IL ACT Tutors Niles, IL Algebra Tutors Niles, IL Algebra 2 Tutors Niles, IL Calculus Tutors Niles, IL Geometry Tutors Niles, IL Math Tutors Niles, IL Prealgebra Tutors Niles, IL Precalculus Tutors Niles, IL SAT Tutors Niles, IL SAT Math Tutors Niles, IL Science Tutors Niles, IL Statistics Tutors Niles, IL Trigonometry Tutors
{"url":"http://www.purplemath.com/niles_il_algebra_1_tutors.php","timestamp":"2014-04-18T08:19:51Z","content_type":null,"content_length":"23832","record_id":"<urn:uuid:a755f123-bd38-4e60-98da-62da0d81eaac>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00272-ip-10-147-4-33.ec2.internal.warc.gz"}
Number of target values in the one prediction up vote 1 down vote favorite I use python's scikit-learn module for predicting some values in the CSV file. I am using Random Forest Regressor to do it. As example, i have 8 train values and 3 values to predict - which of codes i must use? As a values to be predicted, I have to give all target values at once (A) or separately (B)? Variant A: #Readind CSV file dataset = genfromtxt(open('Data/for training.csv','r'), delimiter=',', dtype='f8')[1:] #Target value to predict target = [x[8:11] for x in dataset] #Train values to train train = [x[0:8] for x in dataset] #Starting traing rf = RandomForestRegressor(n_estimators=300,compute_importances = True) rf.fit(train, target) Variant B: #Readind CSV file dataset = genfromtxt(open('Data/for training.csv','r'), delimiter=',', dtype='f8')[1:] #Target values to predict target1 = [x[8] for x in dataset] target2 = [x[9] for x in dataset] target3 = [x[10] for x in dataset] #Train values to train train = [x[0:8] for x in dataset] #Starting traings rf1 = RandomForestRegressor(n_estimators=300,compute_importances = True) rf1.fit(train, target1) rf2 = RandomForestRegressor(n_estimators=300,compute_importances = True) rf2.fit(train, target2) rf3 = RandomForestRegressor(n_estimators=300,compute_importances = True) rf3.fit(train, target3) Which version is correct? Thanks in advance! add comment 2 Answers active oldest votes "8 train values and 3 values" is probably best expressed as "8 features and 3 target variables" in usual machine learning parlance. Both variants should work and [DEL:yield the similar predictions:DEL] as RandomForestRegressor has been made to support multi output regression. up vote 2 down vote [DEL:The predictions won't be exactly the same as RandomForestRegressor is a non deterministic algorithm though. But on average the predictive quality of both approaches accepted should be the same.:DEL] Edit: see Andreas answer instead. But, why in case (A) i have much more accurate predictions than in case (B)? – Emkan Jan 25 '13 at 17:08 No idea. I thought it was doing the same internally. Maybe it's not the case. I will to check the source code. – ogrisel Jan 25 '13 at 17:40 add comment Both are possible, but do different things. The first learns independent models for the different entries of y. The second learns a joint model for all entries of y. If there are meaningful relations between the entries of y that can be learned, the second should be more accurate. up vote 4 down vote As you are training on very little data and don't regularize, I imagine you are simply overfitting in the second case. I am not entirely sure about the splitting criteria in the regression case but it takes much longer for a leaf to be "pure" if the label-space is three dimensional than if it is just one-dimensional. So you will learn more complex models, that are not warranted by the little data you have. Indeed, that would make sense. Thanks Andreas! – ogrisel Jan 25 '13 at 18:06 So the only way I can get SO karma seems to be that you don't know the answer ^^ That way I'll never catch up with larsmans ;) – Andreas Mueller Jan 25 '13 at 18:52 Actually i want to predict 24 values. I have 11 values to train. Every training variable have 32000 samples. I am predicting output of some chemical process - and yes there are meaningful relations between the entries (sum of all 24 outputs must be = 100), What can you recommend me solve this problem? – Emkan Jan 25 '13 at 19:40 1 Try both an pick the method that works best by measuring the score you want to optimize using cross validation. – ogrisel Jan 26 '13 at 14:57 add comment Not the answer you're looking for? Browse other questions tagged python statistics machine-learning scikit-learn random-forest or ask your own question.
{"url":"http://stackoverflow.com/questions/14506615/number-of-target-values-in-the-one-prediction","timestamp":"2014-04-18T14:11:58Z","content_type":null,"content_length":"75505","record_id":"<urn:uuid:eb1baf0d-0bd5-4b26-be80-0ba97f68946f>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Insulation Upgrad Examples for Insulation Upgrade Fuel Saving Calculator Here are a couple examples and hints for using the Insulation Upgrade Cost Saving Calculator: Basic Instructions: 1. Enter values for: 1. Area of the space to be upgraded. 2. Heating Degree Days for your climate -- use the Help link to look this up. 3. The current and upgraded R values -- use the Help link to look this up. 2. Select the type of fuel you use -- adjust the cost of fuel and furnace efficiency if desired. 3. Click the "Calculate" button to get an estimate for your dollar saving, and the greenhouse gas emissions reduction. Some Hints on Using the Calculator: R Values for UnInsulated Spaces: If you are starting with a wall, ceiling or floor that has no insulation, that does NOT mean the current R value is zero. The existing materials (e.g. sheet rock and siding) have some insulation value, and the air films that are next to each of these surfaces also have an R value. The page on R values has some guidance on what to use for currently un-insulated walls, ceilings and floors. Using an R value that is to low for an existing un-insulated surface will result in the calculator showing a dollar saving that is unrealistically high. It is important when the starting R value is low to make as good an estimate as you can -- missing an R value of 30 by 1 results in a 3% error; missing an R value of 2 by 1 results in a 50% error. Windows and Window Treatements: You can use the calculator to get a rough estimate on window upgrades and on window thermal treatments (see example 2). The page on R values gives some typical R values for windows. Again, this must be regarded as a rough estimate. The people who sell thermal window treatments give R values for their products. The unfortunate thing is that there is no standard for measuring window treatments (as there is for windows), and the manufacturers estimates can be on the optimistic side. Serious manufacturers will have test data from an independent lab to backup their claims. The 10 Year Saving: The 10 year fuel saving calculation assumes that fuel price inflation will be 10% per year. This means that a gallon of fuel oil that costs $2 per gallon today will cost $4.70 in the 10th year. If you think that excessive, or not enough, you can mentally adjust it up or down. AC Savings Not Included: Bear in mind that if you use air conditioning in the summer, that the AC bills will also go down, and the calculator does NOT include this benefit. For some climates this may be more important than the heating fuel saving. Big Brother May Help: Also, many of these insulation changes will qualify for the new federal tax breaks in the Energy bill. In addition, your state may offer financial incentives for energy upgrades -- some are quite While this type of heat loss calculation (based on R values and Degree Days) is widely used in the industry, it is not a terribly accurate method. I would be quite happy if the calculator comes within 20% of the real savings. Example 1: Upgrade Attic Insulation: You have 6 inches of loose fill (blown in) fiberglass in your attic now. How much would you save if you blow in an additional 6 inches of cellulose insulation over the fiberglass? Your house floor size is 25 ft by 50 ft. You have a propane furnace, and have been paying $2.20 per gallon for propane. You live in Helena, MT. Here is how to fill in the inputs for this case: Input 1 -- Area: This is the area of the attic that you are going to add insulation to. Your house is 25 ft deep by 50 ft wide, or (50ft)(25ft) = 1250 sqft -- enter this number in the "Area" box. Input 2 -- Heating Degree Days: Using the information on this page, you determine that the yearly Fahrenheit Heating Degree Days for Helena is about: 8000. Enter this in the Heating Degree Days box. Inputs 3 and 4 -- Current and New R Values: Using the information on this page, you determine that the R value for loose fill fiberglass is R2.2 per inch, so 6 inches gives (6)(2.2) = R13 for the current R value. This ignores the insulation value for the ceiling sheet rock and for the air films -- these are small compared to the insulation R value. Using the same page, the R value for loose fill Cellulose is R3.13 per inch, so 6 inches would add (6)(3.13) = R18.8 to the existing R13 for a total of R32 for the New Total R value. Enter the Current and New Total R values in the input boxes. Input 5 -- Fuel Type You check off propane as the type of fuel you use. You could accept the default value for fuel price, but you know that you pay $2.20 per gallon, and enter it in the fuel cost box. You accept the default value of 80% for furnace efficiency. You can look up furnace efficiencies here: http://www.builditsolar.com/Tools/furnefic.htm if you want to refine the default value. You have completed all the inputs, click on the "Calculate" button to see the resulting fuel cost saving and greenhouse gas saving. You should get a first year saving of $328, a 10 year saving of $5223 (note that the 10 year saving calculation assumes the fuel prices will go up 10% per year). The greenhouse gas emissions reduction is 1932 lb per year. Here is the output you should see: Example 2: Window Upgrade: You have a bunch of single pane windows. How much would you save if you replace all of these windows with new double pane, low e windows? You have measured each window and totaled up the area to find you have 160sqft of window that could be upgraded to double pane windows. You have an oil furnace, and have been paying $2.40 per gallon for oil. You live in Brassua Dam, Maine. Here is how to fill in the inputs for this case: Input 1 -- Area: Enter the 160 sqft you calculated in the Input 2 -- Heating Degree Days: Using the information on this page, you determine that the yearly Fahrenheit Heating Degree Days for Brassua Dam is about: 9859 (wow!). Enter this in the Heating Degree Days box. Inputs 3 and 4 -- Current and Upgraded R Values: Using the information on this page, you determine that the R value for your existing single glazed windows is R0.91. Using the same page, the R value for a double glazed, low e window is R 3.13. Enter the current and upgraded R values in the input boxes. Because the R values are small, it is important to get as good a value as you can -- small changes in R value will have a large effect on the answer. Input 5 -- Fuel Type You check off Fuel Oil as the type of fuel you use. You could accept the default value for fuel price, but you know that you pay $2.40 per gallon, and enter it in the fuel cost box. You accept the default value of 80% for furnace efficiency. You can look up furnace efficiencies here: http://www.builditsolar.com/Tools/furnefic.htm if you want to refine the default value. You have completed all the inputs, click on the "Calculate" button to see the resulting fuel cost saving and greenhouse gas saving. You should get a first year saving of $623, and a 10 year saving of $9935 (note that the 10 year saving calculation assumes the fuel prices will go up 10% per year). The greenhouse gas emissions reduction is 5164 lb per year. Well, this might be a bit extreme, in that someone in Brassua Dam (which I picked at random) would probably already have double glazed windows -- but you get the idea. It should also be mentioned that more precise savings can be estimated for windows using the methods and information at this site www.EfficientWindows.com -- a great resource for windows. Here is the output you should see to this example: R1 3/23/06
{"url":"http://www.builditsolar.com/References/Calculators/InsulUpgrd/InsulExamples.htm","timestamp":"2014-04-18T20:54:02Z","content_type":null,"content_length":"20939","record_id":"<urn:uuid:2a7f7633-f0c3-4d89-bc7d-9f91b291b0c8>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00116-ip-10-147-4-33.ec2.internal.warc.gz"}
Below are the first 10 and last 10 pages of uncorrected machine-read text (when available) of this chapter, followed by the top 30 algorithmically extracted key phrases from the chapter as a whole. Intended to provide our own search engines and external engines with highly rich, chapter-representative searchable text on the opening pages of each chapter. Because it is UNCORRECTED material, please consider the following text as a useful but insufficient proxy for the authoritative book pages. Do not use for reproduction, copying, pasting, or reading; exclusively for search engines. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment 3 ASSESSING IMPORTANT MATHEMATICAL CONTENT High-quality mathematics assessment must be shaped and defined by important mathematical content. This fundamental concept is embodied in the first of three educational principles to guide assessment. THE CONTENT PRINCIPLE Assessment should reflect the mathematics that is most important for students to learn. The content principle has profound implications for designing, developing, and scoring mathematics assessments as well as reporting their results. Some form of the content principle may have always implicitly guided assessment development, but in the past the notion of content has been construed in the narrow topic-coverage sense. Now content must be viewed much more broadly, incorporating the processes of mathematical thinking, the habits of mathematical problem solving, and an array of mathematical topics and applications, and this view must be made explicit. What follows is, nonetheless, a beginning description; much remains to be learned from research and from the wisdom of expert practice. DESIGNING NEW ASSESSMENT FRAMEWORKS Many of the assessments in use today, such as standardized achievement tests in mathematics, have reinforced the view that the mathematics curriculum is built from lists of narrow, isolated skills that can easily be decomposed for appraisal. The new vision of mathematics requires that assessment reinforce a new conceptualization that is both broader and more integrated. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment The new vision of mathematics requires that assessment reinforce a new conceptualization that is both broader and more integrated. Tests have traditionally been built from test blueprints, which have often been two dimensional arrays with topics to be covered along one axis and types of skills (or processes) on the other.1 The assessment is then created by developing questions that fit into one cell or another of this matrix. But important mathematics is not always amenable to this cell-by-cell analysis.2 Assessments need to involve more than one mathematical topic if students are to make appropriate connections among the mathematical ideas they have learned. Moreover, challenging assessments are usually open to a variety of approaches, typically using varied and multiple processes. Indeed, they can and often should be designed so that students are rewarded for finding alternative solutions. Designing tasks to fit a single topic and process distorts the kinds of assessments students should be able to do. BEYOND TOPIC-BY-PROCESS FORMATS Assessment developers need characterizations of the important mathematical knowledge to be assessed that reflect both the necessary coverage of content and the interconnectedness of topics and process. Interesting assessment tasks that do not elicit important mathematical thinking and problem solving are of no use. To avoid this, preliminary efforts have been made on several fronts to seek new ways to characterize the learning domain and the corresponding assessment. For example, lattice structures have recently been proposed as an improvement over matrix classifications of tasks.3 Such structures provide a different and perhaps more interconnected view of mathematical understanding that should be reflected in assessment. The approach taken by the National Assessment of Educational Progress (NAEP) to develop its assessments is an example of the effort to move beyond topic-by-process formats. Since its inception, NAEP has used a matrix design for developing its mathematics assessments. The dimensions of these designs have varied over the years, with a 35-cell design used in 1986 and the design below for the 1990 and 1992 assessments. Although classical test theory strongly encouraged the use of matrices to structure and provide balance to examinations, the designs also were often the root cause of the decontextualizing of assessments. If 35 percent of the items on the assessment were to be from the area of measurement and 40 percent of those were to assess students' procedural OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment knowledge, then 14 percent of the items would measure procedural knowledge in the content domain of measurement. These items were developed to suit one cell of the matrix, without adequate consideration to the context and connections to other parts of mathematics. Starting with the 1995 NAEP mathematics assessment, the use of matrices as a design feature has been discontinued. Percentages of items will be specified for each of the five major content areas, but some of these items will be double-coded because they measure content in more than one of the domains. Mathematical abilities categories—conceptual understanding, procedural knowl NAEP 1990-1992 Matrix Content Numbers and Operations Measurement Geometry Data Analysis, Probability, and Statistics Algebra and Functions Conceptual Understanding Mathematical Ability Procedural Knowledge Problem Solving edge, and problem solving—will come into play only at the final stage of development to ensure that there is balance among the three categories over the entire assessment (although not necessarily by each content area) at each grade level. This change, along with the continued use of items requiring students to construct their own responses, has helped provide a new basis for the NAEP mathematics examination.4 One promising approach to assessment frameworks is being developed by the Balanced Assessment Project, which is a National Science Foundation-supported effort to create a set of assessment packages, at various grade levels, that provide students, teachers, and administrators with a fair and deep characterization of student OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment attainment in mathematics.5 The seven main dimensions of the framework are sketched below: content (which is very broadly construed to include concepts, senses, procedures and techniques, representations, and connections), thinking processes (conjecturing, organizing, explaining, proving, etc.), products (plans, models, reports, etc.), mathematical point of view (real-world modeling, for example), diversity (accessibility, sensitivity to language and culture, etc.), circumstances of performance (amount of time allowed, whether the task is to be done individually or in groups, etc.), and pedagogics-aesthetics (the extent to which a task or assessment is believable, engaging, etc.). The first four dimensions describe aspects of the mathematical competency that the students are asked to demonstrate, whereas the last three dimensions pertain to characteristics of the assessment itself and the circumstances or conditions under which the assessment is undertaken. One noteworthy feature of the framework from the Balanced Assessment Project is that it can be used at two different levels: at the level of the individual task and at the level of the assessment as a whole. When applied to an individual task, the framework can be used as more than a categorizing mechanism: it can be used to enrich or extend tasks by suggesting other thinking processes that might be involved, for example, or additional products that students might be asked to create. Just as important, the framework provides a way of examining the balance of a set of tasks that goes beyond checking off cells in a matrix. Any sufficiently rich task will involve aspects of several dimensions and hence will OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment strengthen the overall balance of the entire assessment by contributing to several areas. Given a set of tasks, one can then examine the extent to which each aspect of the framework is represented, and this can be done without limiting oneself to tasks that fit entirely inside a particular cell in a matrix. As these and other efforts demonstrate, researchers are attempting to take account of the fact that assessment should do much more than test discrete procedural skills.6 The goal ought to be schemes for assessment that go beyond matrix classification to assessment that elicits student work on the meaning, process, and uses of mathematics. Although the goal is clearly defined, methods to achieve it are still being explored by researchers and practitioners alike. SPECIFYING ASSESSMENT FRAMEWORKS An assessment framework should provide a way to examine the balance of a set of tasks that goes beyond checking off cells in a matrix. Assessment frameworks provide test developers with the guidance they need for creating new assessments. Embedded in the framework should be information to answer the following kinds of questions: What mathematics should students know before undertaking an assessment? What mathematics might they learn from the assessment? What might the assessment reveal about their understanding and their mathematical power? What mathematical background are they assumed to have? What information will they be given before, during, and after the assessment? How might the tasks be varied, extended, and incorporated into current instruction? Developers also need criteria for determining appropriate student behavior on the assessment: Will students be expected to come up with conjectures on their own, for example, or will they be given some guidance, perhaps identification of a faulty conjecture, which can then be replaced by a better one? Will they be asked to write a convincing argument? Will they be expected to explain their conjecture to a colleague or to the teacher? What level of conjecture and argument will be deemed satisfactory for these tasks? A complete framework might also include standards for student performance (i.e., standards in harmony with the desired curriculum). Very few examples of such assessment frameworks currently exist. Until there are more, educators are turning to curriculum frameworks, such as those developed by state departments of education OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment across the country, and adapting them for assessment purposes. The state of California, for example, has a curriculum framework that asserts the primacy of developing mathematical power for all students: "Mathematically powerful students think and communicate, drawing on mathematical ideas and using mathematical tools and techniques."7 The framework portrays the content of mathematics in three ways: Strands (such as number, measurement, and geometry) run throughout the curriculum from kindergarten through grade 12. They describe the range of mathematics to be represented in the curriculum and provide a way to assess its balance. Unifying ideas (such as proportional relationships, patterns, and algorithms) are major mathematical ideas that cut across strands and grades. They represent central goals for learning and set priorities for study, bringing depth and connectedness, to the student's mathematical experience. Units of instruction (such as dealing with data, visualizing shapes, and measuring inaccessible distances) provide a means of organizing teaching. Strands are commingled in instruction, and unifying ideas give too big a picture to be useful day to day. Instruction is organized into coherent, manageable units consisting of investigations, problems, and other learning activities. Through the California Learning Assessment System, researchers at the state department of education are working to create new forms of assessment and new assessment tasks to match the curriculum framework.8 Further exploration is needed to learn more about the development and appropriate use of assessment frameworks in mathematics education. Frameworks that depict the complexity of mathematics enhance assessment by providing teachers with better targets for teaching and by clearly communicating what is valued to students, their parents, and the general public.9 Although an individual assessment may not treat all facets of the framework, the collection of assessments needed to evaluate what students are learning should be comprehensive. Such completeness is necessary if assessments are to provide the right kind of leadership for educa- OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment tional change. If an assessment represents a significant but small fraction of important mathematical knowledge and performance, then the same assessment should not be used over and over again. Repeated use could inappropriately narrow the curriculum. DEVELOPING NEW ASSESSMENT TASKS Several desired characteristics of assessment tasks can be deduced from the content principle and should guide the development of new assessment tasks. TASKS REFLECTING MATHEMATICAL CONNECTIONS Current mathematics education reform literature emphasizes the importance of the interconnections among mathematical topics and the connections of mathematics to other domains and disciplines. Much assessment tradition is based, however, on an atomistic approach that in practice, if not in theory, hides the connections among aspects of mathematics and between mathematics and other domains. Assessment developers will need to find new ways to reflect these connections in the assessment tasks posed for students. One way to help ensure the interconnectedness is to create tasks that ask students to bring to bear a variety of aspects of mathematics. An example involving topics from arithmetic, geometry, and measurement appears on the following page.10 Similarly, tasks may ask students to draw connections across various disciplines. Such tasks may provide some structure or hints for the students in finding the connections or may be more open-ended, leaving responsibility for finding connections to the students. Each strategy has its proper role in assessment, depending on the students' experience and accomplishment. Another approach to reflecting important connections is to set tasks in a real-world context. Such tasks will more likely capture students' interest and enthusiasm and may also suggest new ways of understanding the world through mathematical models so that the assessment becomes part of the learning precess. Moreover, the "situated cognition" literature11 suggests that the specific settings and OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment Lightning Strikes Again! One way to estimate the distance from where lightning strikes to you is to count the number of seconds until you hear the thunder and then divide by five. The number you get is the approximate distance in miles. One person is standing at each of the four points A, B, C, and D. They saw lightning strike at E. Because sound travels more slowly than light, they did not hear the thunder right away. 1.Who heard the thunder first? _____ Why? Who heard it last? _____ Why? Who heard it after 17 seconds? _____ Explain your answer. 2. How long did the person at B have to wait to hear the thunder? 3. Now suppose lightning strikes again at a different place. The person at A and the person at C both hear the thunder after the same amount of time. Show on the map below where the lightning might have struck. 4. In question 3, are there other places where the lightning could have struck? Explain your answer. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment contexts in which a mathematical situation is embedded are critical determinants of problem solvers' responses to that situation. Developers should not assume, however, that just because a mathematical task is interesting to students, it therefore contains important mathematics. The mathematics in the task may be rather trivial and therefore inappropriate. Test items that assess one isolated fragment of a student's mathematical knowledge may take very little time and may yield reliable scores when added together. However, because they are set in no reasonable context, they do not provide a full picture of the student's reasoning. They cannot show how the student connects mathematical ideas, and they seldom allow the student an opportunity to explain or justify a line of thinking. Students should be clear about the context in which a question is being asked. Either the assumptions necessary for students to use mathematics in a problem situation should be made clear in the instructions or students should be given credit for correct reasoning under various assumptions. The context of a task, of course, need not be derived from mathematics. The example at right contains a task from a Kentucky statewide assessment for twelfth-graders that is based on the notion of planning a budget within certain practical restrictions.12 Budget Planning Task You graduated from Fairdale High School 2 years ago, and although you did not attend college, you have been attending night school to learn skills to repair video cassette recorders while you worked for minimum wages at a video center by day. Now you have been fortunate to find an excellent job that requires the special skills you have developed. Your salary will be $18,000. This new job excites you because for some time you have been wanting to move out of your parents' home to your own apartment. During the past 2 years you have been able to buy your own bedroom set, a television, a stereo, and some of your own dishes and utensils. To move to your own apartment, you will need to develop a budget. Your assignment is to develop a monthly budget showing how you will live on the income from your new job. To guide you, read the list below. (A packet of resource materials is provided, including a newspaper and brochures with consumer information.) Estimate your monthly take-home pay. Remember that you must allow for city, state, federal, social security, and property taxes. Assume that city, state, federal, and social security taxes are 25% of your gross pay. Using the newspaper provided, investigate various apartments and decide which one you will rent. You will need a car on your new job. Price several cars and decide how much money you will need to borrow to buy the car you select; estimate the monthly payment. Use the newspaper and other consumer materials provided to make your estimate. Property taxes will be $10 per $1,000 assessed value. You will do your own cooking. Figure how much you will spend on food, cooking and eating out. As you plan your budget, don't forget about clothing, savings, entertainment and other living expenses. Your budget for this project should be presented as a one-page, two-column display. Supporting this one-page budget summary, you should submit an explanation for each budget figure, telling how/where you got the information. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment Other examples of age-appropriate contexts can be found in the fourth-grade assessments developed by the New Standards Project (NSP), a working partnership of researchers and state and local school districts formed to develop new systems of performance-based assessments. One such problem includes a fairly complex task in which children are given a table of information about various kinds of tropical fish (their lengths, habits, prices, etc.) and are asked to propose how to spend a fixed amount of money to buy a variety of fish for an aquarium of limited capacity, under certain realistic constraints.13 The child must develop a solution that takes the various constraints into account. The task offers ample possibilities for students to display reasoning that connects mathematics with the underlying content. THE CHALLENGES IN MAKING CONNECTIONS The need to reflect mathematical connections pushes task development in new directions, each presenting challenges that require attention. Assessment tasks can use unusual, yet realistic settings, so that everyone's prior knowledge of the setting is the same. Differential Familiarity Whatever the context of a mathematical task, some students will be more familiar with it than other students, possibly giving some an unfair advantage. One compensating approach is to spend time acquainting all students with the context. The NSP, for example, introduces the context of a problem in an assessment exercise in a separate lesson, taught before the assessment is administered.14 Presumably the lesson reduces the variability among the students in their familiarity with the task setting. The same idea can be found in some of the assessment prototypes in Measuring Up: Prototypes for Mathematics Assessment. In one prototype, for instance, a script of a videotaped introduction was suggested;15 playing such a videotape immediately before students work on the assessment task helps to ensure that everyone is equally familiar with the underlying context. Another approach is to make the setting unusual, yet realistic, so that everyone will be starting with a minimum of prior knowledge. This technique was used in a study of children's problem solving conducted through extended individual task-based interviews.16 The context used as the basis of the problem situation—a complex game involving probability—was deliberately constructed so that it would be unfamiliar to everyone. After extensive pilot testing of many variations, OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment an abstract version of the game was devised in which children's prior feelings and intuitive knowledge about winning and losing (and about competitions generally) could be kept separate from their mathematical analyses of the situation. Task developers must consider whether students' assumptions affect the mathematics called for in solution of a problem. Clarifying Assumptions Task developers must consider seriously the impact of assumptions on any task, particularly as the assumptions affect the mathematics that is called for in solution of the problem. An example of the need to clarify assumptions is a performance assessment17 that involves four tasks, all in the setting of an industrial arts class and all involving measuring and cutting wood. As written the tasks ignore an important idea from the realm of wood shop: When one cuts wood with a saw, a small but significant amount of wood is turned into sawdust. This narrow band of wood, called the saw's kerf, must always be taken into account, for otherwise the measurements will be off. The tasks contain many instances of this oversight: If, for example, a 16-inch piece is cut from a board that is 64 inches long, the remaining piece is not 48 inches long. Thus students who are fully familiar with the realities of wood shop could be at a disadvantage, since the problems posed are considerably more difficult when kerf is taken into account. Any scoring guide should provide an array of plausible answers for such tasks to ensure that students who answer the questions more accurately in real-world settings are given ample credit for their work. Better yet, the task should be designed so that assumptions about kerf (in this case) are immaterial to a solution. Another assessment item18 that has been widely discussed19 also shows the need to clarify assumptions. In 1982, this item appeared in the third NAEP mathematics assessment: "An army bus holds 36 soldiers. If 1128 soldiers are being bussed to their training site, how many buses are needed?'' The responses have been taken as evidence of U.S. students' weak understanding of mathematics, because only 33 percent of the 13-year-old students surveyed gave 32 as the answer, whereas 29 percent gave the quotient 31 with a remainder, and 18 percent gave just the quotient 31. There are of course many possible explanations as to why students who performed the division failed to give the expected whole-number answer. One plausible explanation may be that some students did not see a need to use one more bus to transport the remaining 12 soldiers. They could squeeze into the other buses; they could go by car. Asked about their answers in interviews or in writing, some OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment MATHEMATICAL EXPERTISE New kinds of assessments call for new kinds of expertise among those who develop the tasks. New kinds of assessments call for new kinds of expertise among those who develop the tasks. The special features of the mathematics content and the special challenges faced in constructing assessment tasks illustrate a need for additional types of expertise in developing assessment tasks and evaluation schema. Task developers need to have a high level of understanding of children, how they think about things mathematical and how they learn mathematics, well beyond the levels assumed to be required to develop assessment tasks in the past. Developers must also have a deep understanding of mathematics and its applications. We can no longer rely on task developers with superficial understanding of mathematics to develop assessment tasks that will elicit creative and novel mathematical thinking. SCORING NEW ASSESSMENTS The content principle also has implications for the mathematical expertise of those who score assessments and the scoring approaches that they use. JOINING TASK DEVELOPMENT TO STUDENT RESPONSES A multiple-choice question is developed with identification of the correct answer. Similarly, an open-ended task is incomplete without a scoring rubric—a scoring guide—as to how the response will be evaluated. Joining the two processes is critical because the basis on which the response will be evaluated has many implications for the way the task is designed, and the way the task is designed has implications for its evaluation. Just as there is a need to try out multiple-choice test questions prior to administration, so there is a need to try out the combination of task and its scoring rubric for open-ended questions. Students' responses give information about the design of both the task and the rubric. Feedback loops, where assessment tasks are modified and sharpened in response to student work, are especially important, in part because of the variety of possible responses. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment EVALUATING RESPONSES TO REFLECT THE CONTENT PRINCIPLE The key to evaluating responses to new kinds of assessment tasks is having a scoring rubric that is tied to the prevailing vision of mathematics education. If an assessment consists of multiple-choice items, the job of determining which responses are correct is straightforward, although assessment designers have little information to go on in trying to decide why students have made certain choices. They can interview students after a pilot administration of the test to try to understand why they chose the answers they did. The designers can then revise the item so that the erroneous choices may be more interpretable. If ambiguity remains and students approach the item with sound interpretations that differ from those of the designers, the response evaluation cannot help matters much. The item is almost always scored either right or wrong.25 Designers of open-ended tasks, on the other hand, ordinarily describe the kinds of responses expected in a more general way. Unanticipated responses can be dealt with by judges who discuss how those responses fit into the scoring scheme. The standard-setting process used to train judges to evaluate open-ended responses, including portfolios, in the Advanced Placement (AP) program of the College Board, for example, alternates between the verbal rubrics laid out in advance and samples of student work from the assessment itself.26 Portfolios in the AP Studio Art evaluation are graded by judges who first hold a standard-setting session at which sample portfolios representing all the possible scores are examined and discussed. The samples are used during the judging of the remaining portfolios as references for the readers to use in place of a general scoring rubric. Multiple readings and moderation by more experienced graders help to hold the scores to the agreed standard.27 Together, graders create a shared understanding of the rubrics they are to use on the students' work. Examination boards in Britain follow a similar procedure in marking students' examination papers in subjects such as mathematics, except that a rubric is used along with sample examinations discussed by the group to help examiners agree on marks.28 The development of high-quality scoring guides to match new assessment is a fairly recent undertaking. One approach has been first OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment to identify in general terms the levels of desired performance and then to create task-specific rubrics. An example from a New Jersey eighth-grade "Early Warning" assessment appears on the following page.29 Profound challenges confront the developer of a rating scheme regardless of the system of scoring or the type of rubric used. A general rubric can be used to support a holistic scoring system, as New Jersey has done, in which the student's response is examined and scored as a whole. Alternatively, a much more refined analytic scheme could be devised in which specific features or qualities of a student's response are identified, according to predetermined criteria, and given separate scores. In the example from New Jersey, one can imagine a rubric that yields two independent scores: one for the accuracy of the numerical answer and one for the adequacy of the explanation. Assessors are experimenting with both analytic and holistic approaches, as well as a amalgam of the two. For example, in the Mathematics Performance Assessment developed by The Psychological Corporation,30 responses are scored along the dimensions of reasoning, conceptual knowledge, communication, and procedures, with a separate rubric for each dimension. In contrast, QUASAR, a project to improve the mathematics instruction of middle school students in economically disadvantaged communities,31 uses an approach that blends task-specific rubrics with a more general rubric, resulting in scoring in which mathematical knowledge, strategic knowledge, and communication are considered interrelated components. These components are not rated separately but rather are to be considered in arriving at a holistic rating.32 Another approach is through so-called protorubrics, which were developed for the tasks in Measuring Up.33 The protorubrics can be adapted for either holistic or analytic approaches and are designed to give only selected characteristics and examples of high, medium, and low responses. Profound challenges confront the developer of a rating scheme regardless of the system of scoring or the type of rubric used. If a rubric is developed to deal with a single task or a type of task, the important mathematical ideas and processes involved in the task can be specified so that the student can be judged on how well those appear to have been mastered, perhaps sacrificing some degree of interconnectedness among tasks. On the other hand, general rubrics may not allow scorers to capture some important qualities of students' thinking about a particular task. Instead, OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment From a Generalized Holistic Scoring Guide to a Specific Annotated Item Scoring Guide Generalized Scoring Guide: Student demonstrates proficiency — Score Point = 3. The student provides a satisfactory response with explanations that are plausible, reasonably clear, and reasonably correct, e.g., includes appropriate diagram(s), uses appropriate symbols or language to communicate effectively, exhibits an understand of the mathematics of the problem, uses appropriate processes and/or descriptions to answer the question, and presents sensible supporting arguments. Any flaws in the response are minor. Student demonstrates minimal proficiency — Score Point = 2 The student provides a nearly satisfactory response which contains some flaws, e.g., begins to answer the question correctly but fails to answer all of its parts or omits appropriate explanation, draws diagram(s) with minor flaws, makes some errors in computation, misuses mathematical language, or uses inappropriate strategies to answer the question. Student demonstrates a lack of proficiency — Score Point = 1 The student provides a less than satisfactory response that only begins to answer the question, but fails to answer it completely, e.g., provides little or no appropriate explanation, draws diagram(s) which are unclear, exhibits little or no understanding of the question being asked, or makes major computational errors. Student demonstrates no proficiency — Score Point = 0 The student provides an unsatisfactory response that answers the question inappropriately, e.g., uses algorithms which do not reflect any understanding of the question, makes drawings which are inappropriate to the question, provides a copy of the question without an appropriate answer, fails to provide any information which is appropriate to the question, or fails to attempt to answer the question. Specific Problem: What digit is in the fiftieth decimal place of the decimal form of 3/11? Explain your answer. Annotated Scoring Guide: 3 points The student provides a satisfactory response; e.g., indicates that the digit in the fiftieth place is 7 and shows that the digits 2 and 7 in the quotient (.272727 …) alternate; the explanation of why 7 is the digit in the fiftieth place is either based on some counting procedure or on the pattern of how the digits are positioned after the decimal point. (The student could read fiftieth as fifteenth or fifth, identify 2 as the digit, and provide an explanation similar to the ones above.) 2 points The student provides a nearly satisfactory response which contains some flaws, e.g., identifies the pattern of the digits 2 and 7 (.272727 …) and provide either a weak or no explanation of why 7 is the digit in the fiftieth place OR converts 3/11 incorrectly to 3.666 … and provides some explanation of why 6 is the digit in the fiftieth place. 1 point The student provides a less than satisfactory response that only begins to answer the question; e.g., begins to divide correctly (minor flaws in division are allowed) but fails to identify "the digit" OR identifies 7 as the correct digit with no explanation or work shown. 0 points The student provides an unsatisfactory response; e.g., either answers the question inappropriately or fails to attempt to answer the OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment anecdotal evidence suggests that students may be given credit for verbal fluency or for elegance of presentation rather than mathematical acumen. The student who mentions everything possible about the problem posed in the task and rambles on about minor points the teacher has mentioned in class may receive more credit than a student who has deeper insights into the problem but produces only a terse, minimalist solution. The beautiful but prosaic presentation with elaborate drawings may inappropriately outweigh the unexpected but elegant solution. Such difficulties are bound to arise when communication with others is emphasized as part of mathematical thinking, but they can be dealt with more successfully when assessors include those with expertise in mathematics. Unanticipated responses require knowledgeable graders who can recognize and evaluate them. In any case, regardless of the type of rubric, graders must be alert to the unconventional, unexpected answer, which, in fact, may contain insights that the assessor had not anticipated. The likelihood of unanticipated responses will depend in part upon the mathematical richness and complexity of the task. Of course, the greater the chances of unanticipated responses, the greater the mathematical sophistication needed by the persons grading the tasks: the graders must be sufficiently knowledgeable to recognize kernels of mathematical insight when they occur. Similarly, graders must sharpen their listening skills for those instances in which task results are communicated orally. Teachers are uniquely positioned to interpret their students' work on internal and external assessments. Personal knowledge of the students enhances their ability to be good listeners and to recognize the direction of their students' thinking. There may also be a need for somewhat different rubrics even on the same task because judgment of draft work should be different from judgment of polished work. With problem solving a main thrust of mathematics education, there is a place for both kinds of judgments. Some efforts are under way, for example, to establish iterative processes of assessment: Students work on tasks, handing it in to teachers to receive comments about their work in progress. With these comments in hand, students may revise and extend their work. Again, it goes to the teacher for comment. This back-and-forth process may continue several times, optimizing the opportunity for students to learn from the assessment. Such a model will require appropriate rubrics for teachers and students alike to judge progress at different points. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment REPORTING ASSESSMENT RESULTS Consideration of issues about the dissemination of results are often not confronted until after an assessment has been administered. This represents a missed opportunity, particularly from the perspective of the content principle. Serious attention to what kind of information is needed from the assessment and who needs it should influence the design of the assessment and can help prevent some of the common misuses of assessment data by educators, researchers, and the public. The reporting framework itself must relate to the mathematics content that is important for all students to learn. There has been a long tradition in external assessment of providing a single overall summary score, coupled in some cases with subscores that provide a more fine-grained analysis. The most typical basis for a summary score has been a student's relative standing among his or her group of peers. There have been numerous efforts to move to other information in a summary score, such as percent mastery in the criterion-related measurement framework. One innovative approach has been taken by the Western Australia Monitoring Standards in Education program. For each of five strands (number; measurement; space; chance and data; algebra) a student's performances on perhaps 20 assessment tasks are arrayed in such a way that overall achievement is readily apparent while at the same time some detailed diagnostic information is conveyed.34 NAEP developed an alternative approach to try to give meaning to summary scores beyond relative standing. NAEP used statistical techniques to put all mathematics items in the same mathematics proficiency scale so that sets of items can be used to describe the level of proficiency a particular score represents.35 Although these scales have been criticized for yielding misinterpretations about what students know and can do in mathematics,36 they represent one attempt to make score information more meaningful. Similarly, some teachers focus only on the correctness of the final answer on teacher-made tests with insufficient attention to the mathematical problem solving that preceded it. Implementation of the content principle supports a reexamination of this approach. Problem solving legitimately may involve some false starts or blind alleys; students whose work includes such things are doing important mathematics. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment Rather than forcing mathematics to fit assessment, assessment must be tailored to whatever mathematics is important to learn. Along with the efforts to develop national standards in various fields, there is a push to provide assessment information in ways that relate to progress toward those national standards. Precisely how such scores would be designed to relate to national standards and what they would actually mean are unanswered questions. Nonetheless, this push also is toward reporting methods that tell people directly about the important mathematics students have learned. This is the approach that NAEP takes when it illustrates what basic, proficient, and advanced mean by giving specific examples of tasks at these levels. An assessment framework that is used as the foundation for the development of an assessment may provide, at least in part, a lead to how results of the assessment might be reported. In particular, the major themes or components of a framework will give some guidance with regard to the appropriate categories for reporting. For example, the first four dimensions of the Balanced Assessment Project's framework suggest that attention be paid to describing students' performance in terms of thinking processes used and products produced as well as in terms of the various components of content. In any case, whether or not a direct connection between aspects of the framework and reporting categories is made, a determination of reporting categories should affect and be affected by the categories of an assessment framework. The mathematics in an assessment should never be distorted or trivialized for the convenience of assessment. Design, development, scoring, and reporting of assessments must take into account the mathematics that is important for students to learn. In summary, rather than forcing mathematics to fit assessment, assessment must be tailored to whatever mathematics is important to assess. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment ENDNOTES 1 For examples of such matrices, see Edward G. Begle and James W. Wilson, "Evaluation of Mathematics Programs," in Edward G. Begle, ed., Mathematics Education, 69th Yearbook of the National Society for the Study of Education, pt. 1 (Chicago, IL: University of Chicago Press, 1970), 367-404; for a critique of this approach to content, see Thomas A. Romberg, E. Anne Zarinnia, and Kevin F. Collis, "A New World View of Assessment in Mathematics," in Gerald Kulm, ed., Assessing Higher Order Thinking in Mathematics (Washington, D.C.: American Association for the Advancement of Science, 1990), 24-27. 2 Edward A. Silver, Patricia Ann Kenney, and Leslie Salmon-Cox, The Content and Curricular Validity of the 1990 NAEP Mathematics Items; A Retrospective Analysis (Pittsburgh, PA: Learning Research and Development Center, University of Pittsburgh, 1991). 3 Edward Haertel and David E. Wiley, "Representations of Ability Structures: Implications for Testing," in Norman Fredericksen, Robert J. Mislevy, and Isaac I. Bejar, eds., Test Theory for a New Generation of Tests (Hillsdale, NJ: Lawrence Erlbaum Associates, 1992). 4 John Dossey, personal communication, 24 June 1993. 5 Alan H. Schoenfeld, Balanced Assessment for the Mathematics Curriculum: Progress Report to the National Science Foundation (Berkeley, CA: University of California, June 1993). 6 See also Suzanne P. Lajoie, "A Framework for Authentic Assessment in Mathematics," in Thomas A. Romberg, ed., Reform in School Mathematics and Authentic Assessment, in press. 7 California Department of Education, Mathematics Framework for California Public Schools: Kindergarten Through Grade 12 (Sacramento, CA: Author, 1992), 20. 8 E. Anne Zarinnia and Thomas A. Romberg, "A Framework for the California Assessment Program to Report Students' Achievement in Mathematics," in Thomas A. Romberg, ed., Mathematics Assessment and Evaluation: imperatives for Mathematics Education (Albany, NY: State University of New York Press, 1992), 242-284. 9 Lauren B. Resnick and Daniel P. Resnick, "Assessing the Thinking Curriculum: New Tools for Educational Reform," in Bernard R. Gifford and Mary Catherine O'Connor, eds., Changing Assessments: Alternative Views of Aptitude, Achievement and Instruction (Boston, MA: Kluwer Academic Publishers, 1992), 37-75. 10 National Research Council, Mathematical Sciences Education Board, Measuring Up: Prototypes for Mathematics Assessment (Washington, D.C.: National Academy Press, 1993), 117-119. 11 John S. Brown, Allan Collins, and P. Duguid, "Situated Cognition and the Culture of Learning," Educational Researcher 18:1 (1989), 32-42; Ralph T. Putnam, Magdalene Lampert, and Penelope Peterson, "Alternative Perspectives on Knowing Mathematics in Elementary Schools," Review of Research in Education 16 (1990):57-150; James G. Greeno, "A Perspective on Thinking," American Psychologist 44:2 (1989), 134-141; James Hiebert and Thomas P. Carpenter, "Learning and Teaching with Understanding," in Douglas A. Grouws, ed., Handbook of Research on Mathematics Teaching and Learning (New York, NY: Macmillan Publishing Company, 1992), 65-97. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment 12 Kentucky Department of Education, "All About Assessment," EdNews, Special Section, Jan/Feb 1992, 7. 13 Lauren B. Resnick, Diane Briars, and Sharon Lesgold, "Certifying Accomplishments in Mathematics: The New Standards Examining System," in Izaak Wirszup and Robert Strait, eds., Developments in School Mathematics Education Around the World, vol. 3 (Reston, VA: National Council of Teachers of Mathematics, 1992), 196-200. 14 Learning Research and Development Center, University of Pittsburgh and National Center on Education and the Economy, New Standards Project (Pittsburgh, PA: Author, 1993). 15 Measuring Up, 101-106. 16 Eve R. Hall, Edward T. Esty, and Shalom M. Fisch, "Television and Children's Problem-Solving Behavior: A Synopsis of an Evaluation of the Effects of Square One TV," Journal of Mathematical Behavior 9:2 (1990), 161-174. 17 The Riverside Publishing Company, Riverside Student Performance Assessment, Grade 8 Mathematics Sample Assessment (Riverside, CA: Author, 1991), 2-6. 18 Thomas P. Carpenter et al., "Results of the Third NAEP Mathematics Assessment: Secondary School," Mathematics Teacher, 76:9 (1983), 656. 19 See, for example, Alan H. Schoenfeld, "When Good Teaching Leads to Bad Results: The Disasters of 'Well Taught' Mathematics Classes," Educational Psychologist 23:2 (1988), 145-166; Mary M. Lindquist, "Reflections on the Mathematics Assessments of the National Assessment of Educational Progress," in Developments in School Mathematics Education Around the World, vol. 3. 20 Edward A. Silver, Lora J. Shapiro, and Adam Deutsch, "Sense Making and the Solution of Division Problems Involving Remainders: An Examination of Middle School Students' Solution Processes and Their Interpretations of Solutions," Journal for Research in Mathematics Education 24:2 (1993), 117-135. 21 California Assessment Program, Question E (Sacramento, CA: California State Department of Education, 1987). 22 Nancy S. Cole, Changing Assessment Practice in Mathematics Education: Reclaiming Assessment for Teaching and Learning (Draft version, 1992). 23 Adapted from Kirsten Hermann and Bent Hirsberg, "Assessment in Upper Secondary Mathematics in Denmark," in Mogens Niss, ed., Cases of Assessment in Mathematics Education (Dordrecht, The Netherlands: Kluwer Academic Publishers, 1993), 133. 24 Second International Mathematics Study, "Technical Report 4: Instrument Book," booklet 2LB, problem 26, (Urbana, IL: International Association for the Evaluation of Educational Achievement, 8, November 1985), 8. This was the only item, of those given to eighth graders in the U.S., that was later judged to have involved problem solving as specified in the NCTM Standards. 25 Peter Hilton, "The Tyranny of Tests," American Mathematical Monthly 100:4 (1993), 365-369. OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment 26 "Representations of Ability Stuctures"; Robert J. Mislevy, "Test Theory Reconceived," Research Report, in press. 27 Ruth Mitchell, Testing for Learning: How New Approaches to Evaluation Can Improve American Schools (New York: Free Press, 1992). 28 Alan Bell, Hugh Burkhardt, and Malcolm Swan, "Assessment of Extended Tasks," in Richard Lesh and Susan J. Lamon, eds., Assessment of Authentic Performance in School Mathematics (Washington, D.C.: American Association for the Advancement of Science, 1992), 182. 29 New Jersey Department of Education, "Grade 8 Early Warning Test," Guide to Procedures for Scoring the Mathematics Constructed-Response Items (Trenton, NJ: Author, 1991), 4-6. 30 Marilyn Rindfuss, ed., Integrated Assessment System: Mathematics Performance Assessment Tasks Scoring Guides (San Antonio, TX: The Psychological Corporation, 1991). 31 Edward A. Silver, "QUASAR," Ford Foundation Letter, 20:3 (1989), 1-3. 32 Edward A. Silver and Suzanne Lane, "Assessment in the Context of Mathematics Instruction Reform: The Design of Assessment in the QUASAR Project," in Cases of Assessment in Mathematics Education: An ICMI Study. 33 Measuring Up, 14-16. 34 Geoff N. Masters, Inferring Levels of Achievement on Profile Strands (Hawthorn, Australia: Australian Council for Educational Research, 1993). 35 John A. Dossey et al., The Mathematical Report Card: Are We Measuring Up? (Princeton, NJ: Educational Testing Service, 1988). 36 Robert A. Forsyth, "Do NAEP Scales Yield Valid Criterion-Referenced Interpretations?" Educational Measurement: Issues and Practice 10:3 (1991), 3-9, 16. For a more recent critique of the procedures that the National Assessment Governing Board has used in setting and interpreting performance standards in the 1992 mathematics NAEP, see Educational Achievement Standards: NAGB's Approach Yields Misleading Interpretations (Washington, D.C.: General Accounting Office, 1993). OCR for page 41 Measuring What Counts: A Conceptual Guide for Mathematics Assessment This page in the original is blank.
{"url":"http://www.nap.edu/openbook.php?record_id=2235&page=41","timestamp":"2014-04-17T21:37:17Z","content_type":null,"content_length":"94830","record_id":"<urn:uuid:72923ffe-609e-4e91-9af4-12d744423db3>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00635-ip-10-147-4-33.ec2.internal.warc.gz"}
Convert hundred cubic foot of natural gas to gallon [U.S.] of automotive gasoline - Conversion of Measurement Units ›› Convert hundred cubic foot of natural gas to gallon [U.S.] of automotive gasoline ›› More information from the unit converter How many hundred cubic foot of natural gas in 1 gallon [U.S.] of automotive gasoline? The answer is 1.2119205298. We assume you are converting between hundred cubic foot of natural gas and gallon [U.S.] of automotive gasoline. You can view more details on each measurement unit: hundred cubic foot of natural gas or gallon [U.S.] of automotive gasoline The SI derived unit for energy is the joule. 1 joule is equal to 9.19793966152E-9 hundred cubic foot of natural gas, or 7.58955676988E-9 gallon [U.S.] of automotive gasoline. Note that rounding errors may occur, so always check the results. Use this page to learn how to convert between hundred cubic foot of natural gas and gallons [U.S.] of automotive gasoline. Type in your own numbers in the form to convert the units! ›› Metric conversions and more ConvertUnits.com provides an online conversion calculator for all types of measurement units. You can find metric conversion tables for SI units, as well as English units, currency, and other data. Type in unit symbols, abbreviations, or full names for units of length, area, mass, pressure, and other types. Examples include mm, inch, 100 kg, US fluid ounce, 6'3", 10 stone 4, cubic cm, metres squared, grams, moles, feet per second, and many more! This page was loaded in 0.0034 seconds.
{"url":"http://www.convertunits.com/from/hundred+cubic+foot+of+natural+gas/to/gallon+%5BU.S.%5D+of+automotive+gasoline","timestamp":"2014-04-19T01:48:21Z","content_type":null,"content_length":"20758","record_id":"<urn:uuid:dc2135e9-8479-49cc-afc8-1bd1e9b7a14b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00281-ip-10-147-4-33.ec2.internal.warc.gz"}
10th Mathematics UNIT I : NUMBER SYSTEMS [ 11 Marks ] 1. REAL NUMBERS (15) Periods [STUDY] Euclid's division lemma, Fundamental Theorem of Arithmetic - statements after reviewing work done earlier and after illustrating and motivating through examples, Proofs of results - irrationality of √2, √3, √5, decimal expansions of rational numbers in terms of terminating/non-terminating recurring decimals. UNIT II : ALGEBRA [23 Marks ] [STUDY]1. POLYNOMIALS (7) Periods Zeros of a polynomial. Relationship between zeros and coefficients of quadratic polynomials. Statement and simple problems on division algorithm for polynomials with real coefficients. 2. PAIR OF LINEAR EQUATIONS IN TWO VARIABLES (15) Periods Pair of linear equations in two variables and their graphical solution. Geometric representation of different possibilities of solutions/inconsistency. Algebraic conditions for number of solutions. Solution of a pair of linear equations in two variables algebraically - by substitution, by elimination and by cross multiplication. Simple situational problems must be included. Simple problems on equations reducible to linear equations may be included. UNIT III : GEOMETRY [ 17 Marks ] [ 1. TRIANGLES (15) Periods Definitions, examples, counter examples of similar triangles. 1. (Prove) If a line is drawn parallel to one side of a triangle to intersect the other two sides in distinct points, the other two sides are divided in the same ratio. 2. (Motivate) If a line divides two sides of a triangle in the same ratio, the line is parallel to the third side. 3. (Motivate) If in two triangles, the corresponding angles are equal, their corresponding sides are proportional and the triangles are similar. 4. (Motivate) If the corresponding sides of two triangles are proportional, their corresponding angles are equal and the two triangles are similar. 5. (Motivate) If one angle of a triangle is equal to one angle of another triangle and the sides including these angles are proportional, the two triangles are similar. 6. (Motivate) If a perpendicular is drawn from the vertex of the right angle of a right triangle to the hypotenuse, the triangles on each side of the perpendicular are similar to the whole triangle and to each other. 7. (Prove) The ratio of the areas of two similar triangles is equal to the ratio of the squares on their corresponding sides. 8. (Prove) In a right triangle, the square on the hypotenuse is equal to the sum of the squares on the other two sides. 9. (Prove) In a triangle, if the square on one side is equal to sum of the squares on the other two sides, the angles opposite to the first side is a right triangle. UNIT IV : TRIGONOMETRY [22 Marks ] [ 1. INTRODUCTION TO TRIGONOMETRY (10) Periods Trigonometric ratios of an acute angle of a right-angled triangle. Proof of their existence (well defined); motivate the ratios, whichever are defined at 0o & 90o. Values (with proofs) of the trigonometric ratios of 30 deg. , 45 deg. & 60 deg.. Relationships between the ratios. 2. TRIGONOMETRIC IDENTITIES (15) Periods Proof and applications of the identity sin2 A + cos2 A = 1. Only simple identities to be given. Trigonometric ratios of complementary angles. UNIT VII : STATISTICS AND PROBABILITY [ 17 Marks ] [ 1. STATISTICS (18) Periods Mean, median and mode of grouped data (bimodal situation to be avoided). Cumulative frequency graph. X Mathematics S A - II Chapter wise Test Papers and Study materials and guess paperUNIT II : ALGEBRA (Contd.) [23 marks] Quadratic Equations Standard form of a quadratic equation ax2 + bx + c = 0, Solution of the quadratic equations (only real roots) by factorization, by completing the square and by using quadratic formula. Relationship between discriminant and nature of roots. Problems related to day to day activities to be incorporated. Arithmetic Progressions Motivation for studying AP. Derivation of standard results of finding the nth term and sum of first n terms UNIT IV : TRIGONOMETRY [ 8 marks] Height and Distance Simple and believable problems on heights and distances. Problems should not involve more than two right triangles. Angles of elevation / depression should be only 30, 45, 60 degree UNIT VI : COORDINATE GEOMETRY [11 marks] Co-ordinate Geometry Review the concepts of coordinate geometry done earlier including graphs of linear equations. Awareness of geometrical representation of quadratic polynomials. Distance between two points and section formula (internal). Area of a triangle UNIT III : GEOMETRY (Contd.) [17 marks] Tangents to a circle motivated by chords drawn from points coming closer and closer to the point. 1. (Prove) The tangent at any point of a circle is perpendicular to the radius through the point of contact. 2. (Prove) The lengths of tangents drawn from an external point to circle are equal.3. CONSTRUCTIONS (8) Periods 1. Division of a line segment in a given ratio (internally) 2. Tangent to a circle from a point outside it. 3. Construction of a triangle similar to a given triangle. UNIT VII : MENSURATION [ 23 marks]6. Area Related to Circles Motivate the area of a circle; area of sectors and segments of a circle. Problems based on areas and perimeter / circumference of the above said plane figures. (In calculating area of segment of a circle, problems should be restricted to central angle of 60o, 90o & 120o only. Plane figures involving triangles, simple quadrilaterals and circle should be taken.) 7.Surface Areas and Volumes (i)Problems on finding surface areas and volumes of combinations of any two of the following: cubes, cuboids, spheres, hemispheres and right circular cylinders/cones. Frustum of a cone. (ii) Problems involving converting one type of metallic solid into another and other mixed problems. (Problems with combination of not more than two different solids be taken.) UNIT V : STATISTICS AND PROBABILITY [ 8 marks] Classical definition of probability. Connection with probability as given in Class IX. Simple problems on single events, not using set notation. Chapter wise Assignment Science
{"url":"http://cbsemathstudy.blogspot.in/p/class-x.html","timestamp":"2014-04-18T23:17:17Z","content_type":null,"content_length":"85132","record_id":"<urn:uuid:ef6a29d8-4a6e-4dc0-a326-281ea7743b05>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
State to adopt Common Core view of Algebra I in 8th grade The State Board of Education left unresolved a contentious issue of how much algebra should be taught in eighth grade, and to which students, when it approved the state version of the Common Core math standards two years ago. At stake was whether students should be required to take Algebra I in the eighth grade ­– a subject that many more students are taking but also failing – or wait until they get to high school in the ninth grade to do so, which was the sequence of the Common Core standards adopted by other states. Now there are moves in the Legislature and by the State Board of Education to settle the issue. The result could be a subtle shift away from the state’s decade-long push toward teaching primarily Algebra I in eighth grade. On Wednesday, the State Board named a 19-member Mathematics Curriculum Framework and Evaluation Criteria Committee, consisting of math teachers and higher education math experts. It will have little more than a year for a mammoth undertaking: Create strategies for implementing K-12 Common Core math in the classroom, guidelines for publishers, and suggestions for using technology for pedagogy and professional development. Another of the charges: Make recommendations to the State Board on “the issues related to mathematics instruction in grades eight and beyond”: in other words, lay out course guidelines and make policy recommendations involving eighth grade and high school math. Senate Bill 1200, which Sen. Loni Hancock (D-Berkeley) is authoring at the request of the state Department of Education, would give the State Board and the state Superintendent of Public Instruction the authority to amend the state’s 2-year-old Common Core standards. Most states adopted the national Common Core math standards intact, but the State Board appointed by Gov. Schwarzenegger appended them to include California’s Algebra standards for eighth grade and made other changes thought to better prepare students for Algebra in eighth grade. Hancock and Education Department officials argue that removing these additional state math standards and other modifications would save costs, clear up confusion, and avoid potential complications: the need to create additional standardized test items, buy textbooks unique to California, and train teachers on California’s hybrid standards. At least that’s the rationale for SB 1200. All of this is not to say that when Common Core standards are implemented, starting in 2014-15, California students will stop taking Algebra in eighth grade. Former State Superintendent Bill Honig chairs the Instructional Quality Commission, which will oversee the work of the curriculum framework committee on behalf of the State Board. Honig insists that many students will continue to take Algebra, and he points to the guidelines that the State Board passed on Wednesday for the just-appointed framework committee. They direct the Committee to present school districts with options for an “acceleration path” so that students capable of handling Algebra I can progress to it by eighth grade. Honig says that “computer-adaptive testing,” which the Smarter Balanced Consortium of states creating the new Common Core standardized tests is promising, will enable teachers to better identify which students are ready for Algebra in eighth grade and which students in lower grades could be on an accelerated path to Algebra. The goal of adaptive assessments is to individualize testing; computer programs will be able to tailor questions based on answers to previous questions. They are more precise in identifying the extent to which students are ahead of or behind grade level. (Doug McRae, a retired standardized testing expert from Monterey, repeated his doubts at the State Board meeting this week that Smarter Balanced will deliver computer-adaptive testing or that technology-impaired California districts will be capable of deploying it until 2018 or later.) Honig says it’s premature to estimate what percentage of eighth graders will take Algebra. The curriculum framework committee has yet to begin its work and the State Board won’t adopt the math frameworks until November 2013. Then it will be up individual districts and schools to decide what accelerated Common Core math would look like: individualized instruction for students who are farther along than their peers or separate accelerated classes for these advanced students. Regardless, he says, the Common Core Pre-Algebra eighth grade course will be more demanding than what students who aren’t now taking Algebra in eighth grade are receiving under the California But what is clear is that California will no longer have a policy pushing Algebra in eighth grade. Instead, the new policy, as the guidance from the State Board to the curriculum framework committee makes clear, would be to create “options for middle school acceleration to support Algebra I … that are consistent with other Common Core states.” (my emphasis) Most of these states have never considered universal Algebra I in eighth grade as desirable. State policy encouraged Algebra I For the past decade, California has used the accountability lever – dinging the standardized test scores of students in eighth grade who aren’t taking Algebra I – to encourage districts to offer Algebra in eighth grade. Advocacy groups for minority students saw expanding enrollment in Algebra I in eighth grade as key to closing the achievement gap and as an equity issue. State policy and lobbying by advocates for minority students worked. Last year, two-thirds of eighth graders took either Algebra or Geometry – compared with only a third in 2003. At the same time, the proportion of students who tested proficient rose from 39 percent in 2003 to 47 percent in 2011 (50 percent when seventh graders taking Algebra 1 are included). But that still left more than half of students not passing the state end-of-year test, leading to criticism that too many students are being forced into Algebra unprepared and then being made to repeat it in ninth grade. The national Common Core standards take a more gradual approach, giving students more time to learn the key building blocks of Algebra, like fractions and variables. In eighth grade they would take Pre-Algebra, with most ninth graders taking a yet-to-be developed Algebra I curriculum. The ninth grade Algebra students could take four years of higher math in high school – not enough to pursue a STEM major in college (unless they fit it in Calculus over a summer or took two math courses in one year), but sufficient for non-science majors in college, admission to a four-year state university, or technical jobs requiring applied math. A compromise and a mess The dilemma before the State Board today stems from an uncomfortable compromise that the predecessor of today’s Instructional Quality Commission made two years ago. That Commission faced a tight deadline and the demands of Gov. Schwarzenegger that the state’s “rigorous” math standards be preserved – code for keeping Algebra I in eighth grade. As a result, the Commission adopted two sets of standards for eighth grade: one of Common Core eighth grade math (effectively Pre-Algebra) and one with an unwieldy set of standards combining Common Core eighth grade, California’s Algebra I standards, and some Common Core Algebra standards. The Commission also pushed a few Common Core sixth and seventh grade standards down a grade – necessary, defenders of California’s standards argued, for students to be truly ready for Algebra I in eighth grade. The Commission left it up to a future standards commission to sort out the jumble. SB 1200 would empower the current State Board and state Superintendent to weed out the standards that aren’t consistent with the Common Core, including those out of the sequence by grade. It could also replace the California Algebra I standards with a yet-to-be Common Core Algebra I course. To the uninitiated, Algebra is Algebra. But Ze’ev Wurman, a software engineer who helped develop the California math standards, insists that California’s Algebra I standards contain important elements, needed to prepare students for Algebra II, that are missing from the national Common Core Algebra Wurman was one of the Schwarzenegger appointees to the Commission that two years ago forced through the changes to the Common Core standards. He predicts that removing them would lead to a sharp decline in the number of students taking Algebra I in eighth grade and a retreat from California’s rigorous standards. Honig says that students ready for Algebra I will take it, and the curriculum frameworks will guide teachers on meeting that challenge. Regardless of whether they take Algebra I in eighth or ninth grade, students will be better prepared for advanced math under Common Core, he Assuming SB 1200 passes, the State Board will likely clear up confusion over two sets of eighth grade math standards. But it could choose to do little or nothing to the standards. It would have until next summer to decide. John Fensterwald is the editor of EdSource Today. Write jfensterwald@edsource.org to contact him. Filed under: Common Core, Reporting & Analysis, State and Federal Policies, State Board of Education, STEM, Tests and Assessments · Tags: Bill Honig, Instructional Quality Committee, Math, Ze'ev Comment Policy EdSource encourages a robust debate on education issues and welcomes comments from our readers. The level of thoughtfulness of our community of readers is rare among online news sites. To preserve a civil dialogue, writers should avoid personal, gratuitous attacks and invective. Comments should be relevant to the subject of the article responded to. EdSource retains the right not to publish inappropriate and non-germaine comments. EdSource encourages commenters to use their real names. Commenters who do decide to use a pseudonym should use it consistently. 1. I would also like to understand the source of the stem course progression claim/assumption. After having read this piece I went looking for something upon which this might be based but came up empty. It seems that with more and more stem-specific schools cropping up, this would be something useful for the community to better understand. 2. “The ninth grade Algebra students could take four years of higher math in high school – not enough to pursue a STEM major in college (unless they fit it in Calculus over a summer or took two math courses in one year), but sufficient for non-science majors in college, admission to a four-year state university, or technical jobs requiring applied math.” What on earth are you talking about? A student who took a genuine pre-calc course in high school has sacrificed nothing. In fact, any student who had a *genuine* understanding of algebra II and trigonometry by his senior year is ready for a STEM career. Colleges offer pre-calculus course and higher as credit-bearing. I can’t stress this enough: your claim that high school calculus is necessary to a STEM career is absurd, and one often repeated by Ze’ev and others. On the larger point, neither you nor Ze’ev has ever made it clear something that I really, really want to understand: will the Common Core standards PREVENT qualified students from taking algebra in seventh or eighth grade–that is, will it cost more money for states to do it, force them to create one-off tests, or penalize them by not acknowledging the students taking advanced math (and thus giving them a hit in scores)? I want the logistics. From a policy standpoint, Ze’ev is wrong on the facts. Forcing algebra on eighth graders has been a disaster. He is also completely wrong on the history: before 1997, many California students took algebra in 8th grade. AP Calculus has been around for a long time in most public schools. And if the new Common Core will do nothing more than return California and other states to what used to be the norm, then fine. So here, I guess, is the question: Ze’ev is wrong about the history. But what worries me is, is he right that public schools going forward? It seems to me, looking at the standards, that public schools will have a disincentive to create their own tests and encourage seventh and eighth graders to take algebra, thus forcing public school parents in excellent suburban schools to push for something they’ve had for fifty years or more. That would be bad. But Ze’ev is too busy speaking nonsense about the past for me to be sure he’s correct, and you, John, don’t seem interested in finding out from anyone other than Ze’ev what, specifically, will happen on the Algebra topics. What person on the CC team can answer that question specifically? 3. The assumption that private schools offer algebra in 8th grade is interesting … and not true. The private schools near me only place kids who are ready in to algebra; it is the public schools that force unprepared students to take the class, destroying any chance for high-level kids to learn what they deserve to learn. 4. While I conceptually support the idea of incentivizing sufficient (and early) preparation for the math course of study for all kids, I think as it relates to college and/or stem preparation the question really becomes whether that incentive is followed up on by the students who would not otherwise be taking algebra in 8th grade. In other words, the measure of whether the 8th grade algebra incentive is serving its purpose shouldn’t be measured by 8th grade algebra participation, rather by the rates of students who used that opportunity to follow up with the next four years of high school math. For students who choose not to go past geometry (or even algebra if that’s still possible), it matters little whether they took algebra in 8th grade (in fact some studies show that to actually be a disincentive to staying in school for some kids). I have not been able to find any good statistics on high school math participation levels, though what does seem to be clear (and maybe obvious), nowhere near all the 8th grade algebra takers end up also taking algebra II, though admittedly the overall participation rate in algebra II has been creeping up. Ze’ev, do you have accurate measures for those things? That said, I absolutely agree with the other commenters who said that this is only going to widen the gap between the math opportunity haves and have nots. And worse, it will probably lead to a subtle loss of urgency in providing stem related foundations at the elementary level. Something we already have a problem with. And finally, I agree with Ze’ev that this is going to provide yet another reason for a group of families to decide against public education. 5. Bob, There was a special accelerated textbooks adoption in 1999, shortly after the California Standards adoption, to provide materials in support of the required preparation. And there was an extra $1B allocated to spend on accelerated textbook purchases. The data indicates that it mostly worked. But I completely agree with your final point: “we are now reverting to the policies of yesteryear that afforded only the math “elite” the chance to master the complexities of the subject.” Expect public school algebra taking by eighth grade to drop like a rock, while private schools will keep it. So guess which students will have an advantage when enrolling in selective colleges, with AP Calc and AP Physics under their belt. 6. The 8th grade requirement of all students taking (and passing) Algebra I was unrealistic from the git-go. In order to increase the number of middle school students taking Algebra, they need preparation in the elementary schools in pre-Algebra skills. This can’t happen unless the math textbook adoptions of the state don’t emphasize pre-Algebra skills. Since textbook adoptions work on a seven year cycle, new 5th and 6th grade math texts did not emphasize the needed skills to prepare young students for the challenges. How ludicrous, now that the recently adopted textbooks are providing elementary students with the needed knowledge base for 8th grade Algebra, we are now reverting to the policies of yesteryear that afforded only the math “elite” the chance to master the complexities of the subject, 7. Imagine if we were spending $3.6 billion: – To invest in our IT infrastructure – To develop rigorous formative and summative assessments that identify what content standards students have mastered, and where they need help – To promote design policies such that instruction is tailored the specific needs of each student in California – To deeply train teachers and principals to teach to the Common Core, using open education resources – To develop social networking tools that connect educators to parents, so all are equally invested in the success of each child’s learning Would we have to wait until 2018, as Doug McRae suggests? Can we afford to wait that long? 8. The core policy issue for middle school mathematics in California is really pretty simple — it should be based on the reality that some kids are ready for Algebra by 8th grade, and others are not. The IQC guidelines for revised curriculum frameworks for middle school math recognize this reality, so the IQC guidelines approved by the State Board several days ago have us on the right path, that is, appropriate instruction both for kids ready for Algebra and for kids not yet ready for Algebra by grade 8. Now, to be aligned with instruction, our assessment system has to recognize both sets of kids and in effect have two distinct tests. One way to do that is via computer-adaptive testing that uses instantaneous feedback to expand the range of testing to potentially include accurate scores for both sets of kids. The problem is computer-adaptive testing has huge start-up costs for IT infrastructure and also is not a proven technology for satisfying many of the needs for summative accountability tests in California, so I for one am a real skeptic that CA can implement computer-adaptive testing by the current 2015 target date. I think 2018 is a reasonable target date for computer-administered versions of paper-and-pencil fixed format tests in California, with computer-adaptive tests initiated after we are successful implementing computer-administered fixed-format tests. Can we have two fixed-format tests (either paper-and-pencil or computer-administered) for the two sets of kids taking two differing instructional sequences in 8th grade? Yes, we can. We have to discard the myth that the feds require a one-size-fits-all test for 8th grade mathematics, and instead embrace a two-test design [Algebra I for kids taking Algebra I, a pre-Algebra or Algebra Readiness (perhaps a consortium test based on the national Common Core will do nicely) for kids not taking Algebra I] that are built on a common scale of measurement with performance standards set on that common scale of measurement so that we can compute apples-to-apples statewide data that include both sets of kids and both tests. This design is really a paper-and-pencil fixed-format equivalent of the fancier computer-adaptive design, and such a two-test design has already been approved by the feds for at least one other state. The IQC guidelines along with a two-test assessment program design will maintain the strong progress that CA has achieved over the past 15 years via the Algebra by 8th grade policy SB 1200, in contrast, would lead to a one-test-fits-all design based only on the national Common Core, a lower expectation for middle school mathematics than what California has had for the past 15 years. Following the dictum that “what gets measured is what gets taught,” the result will be lower expectations, less overall rigor of instruction, and lower math achievement levels for our middle school students over time. The bottom line is that the IQC guidelines have it right, it is possible for our assessment system to be designed to be aligned with the IQC direction with or without computer-adaptive testing, and SB 1200 heads us in the wrong direction. 9. Computer Adaptive Testing. Ah, yes, unicorns and fairies will see to it that computers, electrical infrastructure, and network bandwidth will magically arrive in our schools by 2014 even though no one on either a state or national level appears to be thinking about exactly how that will happen with what people and what money. Our school would really benefit from having a dedicated sysadmin. I think it would be impossible to expect computers to be deployed and used in the classrooms regularly without that person. I’m not seeing how it will be funded with another -8% change in ADA money. 10. John Just because students score below Proficient or Advanced in no way says they are ” not passing.” Plus the proficiency rate has increased every year even as the number and percent of students taking the course have increased. Absolutely we need to improve math teaching in grades 2 to 7 but to abandon Algebra for the sake of some movement will set back math education not improve it.
{"url":"http://edsource.org/2012/state-to-adopt-common-core-view-of-algebra-i-in-8th-grade/18041","timestamp":"2014-04-19T17:03:05Z","content_type":null,"content_length":"62384","record_id":"<urn:uuid:b848baa5-5d44-407a-9fbb-ada2f579aded>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
The MPG Illusion & Seth Godin I’ve been an avid follower of Seth Godin ever since I watched his “why marketing is too important to be left to the marketing department” talk at the Business of Software conference. (If you haven’t made the time to see this, you really should.) This morning his blog featured a simple quiz, which I must admit had me stumped too, despite my Bachelor of Mathematics degree: A simple quiz for smart marketers: Let's say your goal is to reduce gasoline consumption. And let's say there are only two kinds of cars in the world. Half of them are Suburbans that get 10 miles to the gallon and half are Priuses that get 50. If we assume that all the cars drive the same number of miles, which would be a better investment: □ Get new tires for all the Suburbans and increase their mileage a bit to 13 miles per gallon. □ Replace all the Priuses and rewire them to get 100 miles per gallon (doubling their average!) Trick question aside, the answer is the first one. (In fact, it's more than twice as good a move). We're not wired for arithmetic. It confuses us, stresses us out and more often than not, is used to deceive. Surely, there’s a trick, I thought; I immediately started reading too deeply into the subtleties of the implicit consumption associated with new Suburbans tires vs. replacing Priuses outright – this completely missed the point! Frustrated, I opened up Notepad and wrote it all down: • Let m be number of miles driven by a car... • Let s be the gas consumption (in gallons) for Suburbans (= m/10) • Let p be the gas consumption (in gallons) for Priuses (= m/50) • Let T be the total consumption (in gallons) (= s + p = m/10 + m/50 = 6m/50 = 0.12m) So in Scenario #1, we have T = m/13 + m/50 = 50m+13m/650 = 63m/650 = 0.097m And in Scenario #2, we have T = m/10 + m/100 = 11m/100 = 0.11m Scenario #1 reduced consumption by 0.12-0.097 = 0.023; Scenario #2 only by 0.01; Scenario #1 is 2.3x more efficient! Sure, it all makes sense when it’s drawn out for you [1]: This very interesting article in Science, “The MPG Illusion” by Richard P. Larrick and Jack B. Soll at the Fuqua School of Business in Duke University (Vol 320, June 20, 2008, p. 1593), points out the mathematically obvious truth that gas used per mile is inversely proportional to miles per gallon, which means that you have a steeper slope at lower MPG ratings, and diminishing returns at higher MPG ratings. Now, try to think about how this applies to your daily life and where you spend your time, particularly as an application developer. More on this next time… [1] - http://www.bunniestudios.com/blog/?p=257 19 comments: This is an interesting problem, but it speaks to the difference of how numbers grow - arithmetically vs. exponentially. This problem plays on the fact that we tend to look at numbers in terms of relationships tied to addition/subtraction (A/S) and multiplication/division (M/D). While these are the same, M/D is really the same as A/S. In this scenario, the growth is merely arithmetic - we are adding 3 MPG to the suburban and multiplying by two the Prius' MPG. Functionally, these are the same. We just believe that multiplying the Prius' MPG makes it more. The further tricky part to this problem is that the difference between the numbers is still arithmetic. The prius gets 5 times more MPG then the surburban at the start. That is arithmetic. Your last equation: --T = 0.12/m is classic hyperbolic reciprocal function which explains why the bang for our buck is high with the suburban. However, it also explains why we will never have 100% efficiency in gas engines. Nice chart--that's a really good aid. Yes, interesting as usual "Mischievous Marketing Wisdom" from Seth Godin and his Godinettes. Once again Seth shows us his "Five and Dime -- Facts and Fiction". I prefer your suggestion of people not spend 80% of their time on issues that impact a given environment by 20%. Seth's artificial fabrication of a scenario to draw his own conclusion is as usual............zzzzzzzzzz..oh excuse me, I must have nodded off there. The heart of the matter is the inherent tension between large numbers with small percentages and small numbers with large percentages. As this example illustrates sometimes small percentages translate to big results when the numbers they are acting on are large. Why not convert both? Regardless of maths, numbers, data, graphs and explanations isn't the real problem overall gas consumption. Do both, simple. From a marketing perspective Toyota should run a campaign to get Prius drivers to convert their cars. Then for every 10 converted Prius' Toyota will replace the tires on one Suburban? Everyone wins? My Housemate wrote me this after I placed the question to him - i got bored reading papers and came back to your problem and extrapolated it further. i don't really know how blogs work but can you post stuff in them? if you can, cut and paste this and john nash the shit out of them. firstly, when car sales are 50/50, option 1 is better. i think of it as driving 100 miles. under your conditions i would have to drive 50 miles in the suburban and 50 miles in the prius. under the control conditions it would take me 5 gallons to the suburban 50 miles and 1 gallon to travel the prius 50 miles totalling 6 gallons. under option 1 conditions it would take me 3.836 (50/13) gallons to travel the suburban miles and 1 gallon to travel the prius miles totalling 4.846 gallons. under option 2 conditions it would take me 5 gallons to travel the suburban miles and 0.5 (100/50) gallons to travel the prius miles totalling 5.5 gallons. therefore option 1 is the more efficient choice for the IMMEDIATE FUTURE. however, i then thought, at what point would the percentage of distance travelled in the prius cancel out the benefits of making the suburban 3m/g more efficient. for example, if car sales were 80/20 in favour of prius (i.e. i drive 80 miles in the prius and 20 in the suburban) would option 1 still be the better currently when you look at option 1 result and option 2 there is a discrepancy, with the lower result being the more efficient. but at what percentage would option 1 be identical to option 2 and result in 0 discrepancy? for this we can follow the equation: option 1, x/50 + (100-x)/13 = y (x = the percentage i am trying to find, x + (100-x) = 100%) option 2, x/100 + (100-x)/10 = y because i want option 1 = option 2 i can rearrange the equation to be: x/50 + (100-x)/13 = x/100 + (100-x)/10 i'll convert to a common multiple of 1300 to calculate therefore becoming: 26x/1300 + (10,000-100x)/1300 = 13x/1300 + (13,000-130x)/1300 or simply, 26x + 10,000 - 100x = 13x + 13,000 - 130x basic algerbra leaves a result of 43x = 3000 or x = 69.767 therefore, the improvements to suburban only outweigh the prius when car sales are below 69.767% prius. once prius sales are higher than this percentage, option 2 is the more efficient choice. the marketing should be aimed at increasing prius sales above this figure in order to counteract the short-term benefits of 3m/g increase in efficiency of suburban for a better long term My old '96 Suburban averages 16-18 on the hwy. In the last 30 years I've owned 20 Subs from 1965 to an '07 model and always have been happy with performance, room safety and mileage. So if my ol' 96 is getting that kind of mileage, what are the results now? 6m/50 = .12m, not .12/m. The two are very different. You got it right where it mattered, though. The real takeaway from this example is not, "Wow, let's make suburbans more fuel efficient!" but that when you artificially constrain the miles driven to NEVER CHANGE no matter what the gas mileage is, you can pull all kinds of clever stunts with numbers. but with 2 gallons (each car given 1 gallon), the second scenario may reach further distance. Scenario 1: 13 + 50 = 63 miles Scenario 2: 10 + 100 = 110 miles I'm so not a mathematician and this problem has me stumped. Can someone explain it in non Math language? It's driving me crazy! Check out this video from the original blog post... Sure it's an artificial scenario - aimed at illustrating how our brains are wired rather than advocating a particular solution to fuel consumption. It's (if possible) even more counter-intuitive than it looks at first. If the Sub improves from 10 -13 mpg it will beat any improvement to the Prius - Even if the Prius is super-tuned to run a million miles per gallon. Take a simple example of 1 car of each type each doing 10,000 miles per year. Tuning the Sub from 10-13 mpg results in a reduction from 1, gallons to 769 gallons a year - a saving of 231 gallons. The Prius only uses 200 gallons to start with and can never save more than this no matter how efficient it gets. Well, I guess that's why in Europe fuel consumption is measured in litres per 100 kilometer ;) > Well, I guess that's why in Europe fuel consumption is measured in litres per 100 kilometer ;) Yes, in this case, it's by far more intuitive to understand: 10 miles per gallon = 23.5214583 litres per 100 km 13 miles per gallon = 18.0934295 litres per 100 km -> 5.4280288 litres saved per 100 km 50 miles per gallon = 4.70429167 litres per 100 km 100 miles per gallon = 2.35214583 litres per 100 km -> 2.35214584 litres saved per 100 km It's really obvious. so I'm glad we use this more intuitive system in everyday life :) Also really cool is that google is able to convert between these two systems so nicely, try googling "10 miles per gallon in litres per 100 km". Of cource, the opposite direction is also I think the key to this, and the bit that makes it feel "not intuitive" is the assumption that both types of car drive the same number of miles NOT both types of cars are given the same amount of If it was phrased, both cars have 1000 gallons of gas, then getting an extra 50 miles per gallon is far better than getting 3 extra miles per gallon. I recently came accross your blog and have been reading along. I thought I would leave my first comment. I dont know what to say except that I have enjoyed reading. Nice blog. I will keep visiting this blog very often. i don't like it when one can already tell by the way a problem is posed that the answer is gonna be the one that seems less plausible at first sight. i cant help it but it always makes me think of a childish stupid little person desperately trying to get some attention... generally marketers make me feel that way, doing whatever possible to have you look at them/their product and then being all satisfied just because you took notice. and every possible critique is countered by pretending every reaction is already a win... that's just an all to easy way out. blah. i'll add scenario three, one that would actually make some sense in the real world: replace 10% of the SUV with toyota priuses. btw, your blog is great, i just dont like seth.. h4nne5's scenario is pretty interesting. Replace 10% of the SUVs with Priuses. Those SUV drivers would then be getting a 400% better gas mileage. So overall, the original SUV drivers would be getting 40% better gas mileage, or 14 mpg. Which is obviously an improvement over 13 mpg. It's an interesting thought problem, but I doubt that replacing and properly inflating every SUVs tires will net a 30% increase in mpg. And we don't have a mass-produced 100mpg car. But we could replace clunkers with more fuel-efficient vehicles.... Wait. Where have I heard that idea before?
{"url":"http://www.onpreinit.com/2009/08/mpg-illusion-seth-godin.html?showComment=1251093213995","timestamp":"2014-04-18T20:43:07Z","content_type":null,"content_length":"90285","record_id":"<urn:uuid:5c84b4cf-c06d-4804-b084-6295d77c0b65>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00463-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help February 11th 2010, 04:49 AM #1 Nov 2009 $\vec{PQ}$ and $\vec{PR}$ represent the sides of PQ and PR of a triangle PQR respectively . If S is the midpoint of QR , show that the $\vec{PQ}+\vec{PR}=2\vec{PS}$ I managed to prove this . Using the method above or otherwise , find the position of O which is located in the triangle PQR where $\vec{OP}+\vec{OQ}+\vec{OR}=0$ need help on this part. Hello hooke $\vec{PQ}$ and $\vec{PR}$ represent the sides of PQ and PR of a triangle PQR respectively . If S is the midpoint of QR , show that the $\vec{PQ}+\vec{PR}=2\vec{PS}$ I managed to prove this . Using the method above or otherwise , find the position of O which is located in the triangle PQR where $\vec{OP}+\vec{OQ}+\vec{OR}=0$ need help on this part. Here's a hint. You've just shown that $\vec{SP} = -\tfrac12(\vec{PQ}+\vec{PR})$. Now find $\vec{TQ}$ and $\vec{UR}$ in the same way, where $T, U$ are the mid-points of $RP$ and $PQ$ respectively. Then add $\vec{SP}, \vec{TQ}$ and $\vec{UR}$ together. What happens next? thanks , yeah , i got 0 . It looks to me O is the centroid of the triangle but i am not sure how to describe the position of O . Hello hookeIt is a well-known fact that the centroid divides each of the medians in the ratio 2:1. For a vector proof, see here. February 11th 2010, 12:38 PM #2 February 11th 2010, 10:26 PM #3 Nov 2009 February 12th 2010, 12:34 AM #4 February 12th 2010, 12:45 AM #5 Nov 2009 February 12th 2010, 01:06 AM #6
{"url":"http://mathhelpforum.com/algebra/128338-vectors.html","timestamp":"2014-04-21T15:18:26Z","content_type":null,"content_length":"53208","record_id":"<urn:uuid:ae32ff4c-ebdd-47d7-9e39-9499c849ce4a>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework on Autopilot Why teach calculus students what a revolutionary software system can solve automatically? August 30, 2000--Wolfram Research has announced the release of a stand-alone version of Calculus WIZ, a revolutionary software product for first-year calculus students. With Calculus WIZ, students' computers can now solve over 90 percent of the homework problems assigned in a typical calculus course. Just as the introduction of the pocket calculator led to serious debate among math instructors, Calculus WIZ raises serious questions about how mathematics should be taught in the age of the computer. Calculus WIZ was conceived by Keith Stroyan, Professor of Mathematics at the University of Iowa and long-time crusader in the cause of calculus reform, a movement in math education that stresses conceptual understanding over rote "cookbook" calculations. "Traditional calculus instruction is dominated by the 'template examples and exercises' paradigm," says Stroyan. "Students work three sets of five exercises each, just like the three examples a few pages earlier in the text. This activity has some value--it builds confidence through practice--but it doesn't do anything to develop a deep understanding. That means it leaves a big gap in the students' ability to apply calculus to more open-ended problems." In a typical introductory calculus sequence, so much time is spent learning and practicing specific pencil-and-paper techniques that the underlying theory is often shortchanged. "The traditional courses tend not to have any time left after students work all the template exercises," Stroyan notes. However, the templates themselves have become standardized through hundreds of years of math education, a fact that makes it possible for Calculus WIZ to contain over a hundred "solvers," each one addressing a different exercise template. To solve a given homework problem, the student needs only to fire up Calculus WIZ, find the appropriate solver, type in the details of the exercise, and then sit back as the computer solver does the work. A complete electronic calculus textbook, including exercises, is another part of Calculus WIZ, making it an effective tool for self-study. However, what sets it apart from every other calculus study aid is the problem-solving power it gets from Mathematica, the leading technical computing system. Calculus WIZ includes a special version of Mathematica's "brain," the extraordinary collection of mathematical algorithms and knowledge that is the heart of Mathematica's computational power. Will Calculus WIZ change the way that calculus is taught? Probably not all by itself--but anyone who remembers how pocket calculators changed math education can see the signs of a similar revolution taking shape. The stand-alone edition of Calculus WIZ is available for Windows 95/98/NT/2000. It requires 160 MB of disk space for hard-disk installation. The suggested retail price is $69.50 (U.S. and Canada). For more information about Calculus WIZ, visit http://www.wolfram.com/wiz.
{"url":"http://wolfram.com/news/customcalcwiz2.html","timestamp":"2014-04-16T10:10:41Z","content_type":null,"content_length":"35922","record_id":"<urn:uuid:8a35330a-9a01-47f6-ae6e-fcecd1c680f5>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
Game Show Problem Moderator: Marilyn Game Show Problem (This material in this article was originally published in PARADE magazine in 1990 and 1991.) Suppose you're on a game show, and you're given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say #1, and the host, who knows what's behind the doors, opens another door, say #3, which has a goat. He says to you, "Do you want to pick door #2?" Is it to your advantage to switch your choice of doors? Craig F. Whitaker Columbia, Maryland Yes; you should switch. The first door has a 1/3 chance of winning, but the second door has a 2/3 chance. Here's a good way to visualize what happened. Suppose there are a million doors, and you pick door #1. Then the host, who knows what's behind the doors and will always avoid the one with the prize, opens them all except door #777,777. You'd switch to that door pretty fast, wouldn't you? Since you seem to enjoy coming straight to the point, I'll do the same. You blew it! Let me explain. If one door is shown to be a loser, that information changes the probability of either remaining choice, neither of which has any reason to be more likely, to 1/2. As a professional mathematician, I'm very concerned with the general public's lack of mathematical skills. Please help by confessing your error and in the future being more careful. Robert Sachs, Ph. D. George Mason University You blew it, and you blew it big! Since you seem to have difficulty grasping the basic principle at work here, I'll explain. After the host reveals a goat, you now have a one-in-two chance of being correct. Whether you change your selection or not, the odds are the same. There is enough mathematical illiteracy in this country, and we don't need the world's highest IQ propagating more. Scott Smith, Ph. D. University of Florida Your answer to the question is in error. But if it is any consolation, many of my academic colleagues have also been stumped by this problem. Barry Pasternack, Ph. D. California Faculty Association Good heavens! With so much learned opposition, I'll bet this one is going to keep math classes all over the country busy on Monday. My original answer is correct. But first, let me explain why your answer is wrong. The winning odds of 1/3 on the first choice can't go up to 1/2 just because the host opens a losing door. To illustrate this, let's say we play a shell game. You look away, and I put a pea under one of three shells. Then I ask you to put your finger on a shell. The odds that your choice contains a pea are 1 /3, agreed? Then I simply lift up an empty shell from the remaining other two. As I can (and will) do this regardless of what you've chosen, we've learned nothing to allow us to revise the odds on the shell under your finger. The benefits of switching are readily proven by playing through the six games that exhaust all the possibilities. For the first three games, you choose #1 and "switch" each time, for the second three games, you choose #1 and "stay" each time, and the host always opens a loser. Here are the results. When you switch, you win 2/3 of the time and lose 1/3, but when you don't switch, you only win 1/3 of the time and lose 2/3. You can try it yourself and see. Alternatively, you can actually play the game with another person acting as the host with three playing cards—two jokers for the goat and an ace for the prize. However, doing this a few hundred times to get statistically valid results can get a little tedious, so perhaps you can assign it as extra credit—or for punishment! (That'll get their goats!) You're in error, but Albert Einstein earned a dearer place in the hearts of people after he admitted his errors. Frank Rose, Ph.D. University of Michigan I have been a faithful reader of your column, and I have not, until now, had any reason to doubt you. However, in this matter (for which I do have expertise), your answer is clearly at odds with the truth. James Rauff, Ph.D. Millikin University May I suggest that you obtain and refer to a standard textbook on probability before you try to answer a question of this type again? Charles Reid, Ph.D. University of Florida I am sure you will receive many letters on this topic from high school and college students. Perhaps you should keep a few addresses for help with future columns. W. Robert Smith, Ph.D. Georgia State University You are utterly incorrect about the game show question, and I hope this controversy will call some public attention to the serious national crisis in mathematical education. If you can admit your error, you will have contributed constructively towards the solution of a deplorable situation. How many irate mathematicians are needed to get you to change your mind? E. Ray Bobo, Ph.D. Georgetown University I am in shock that after being corrected by at least three mathematicians, you still do not see your mistake. Kent Ford Dickinson State University Maybe women look at math problems differently than men. Don Edwards Sunriver, Oregon You are the goat! Glenn Calkins Western State College You made a mistake, but look at the positive side. If all those Ph.D.'s were wrong, the country would be in some very serious trouble. Everett Harman, Ph.D. U.S. Army Research Institute Gasp! If this controversy continues, even the postman won't be able to fit into the mailroom. I'm receiving thousands of letters, nearly all insisting that I'm wrong, including the Deputy Director of the Center for Defense Information and a Research Mathematical Statistician from the National Institutes of Health! Of the letters from the general public, 92% are against my answer, and and of the letters from universities, 65% are against my answer. Overall, nine out of ten readers completely disagree with my reply. Now we're receiving far more mail, and even newspaper columnists are joining in the fray! The day after the second column appeared, lights started flashing here at the magazine. Telephone calls poured into the switchboard, fax machines churned out copy, and the mailroom began to sink under its own weight. Incredulous at the response, we read wild accusations of intellectual irresponsibility, and, as the days went by, we were even more incredulous to read embarrassed retractions from some of those same people! So let's look at it again, remembering that the original answer defines certain conditions, the most significant of which is that the host always opens a losing door on purpose. (There's no way he can always open a losing door by chance!) Anything else is a different question. The original answer is still correct, and the key to it lies in the question, "Should you switch?" Suppose we pause at that point, and a UFO settles down onto the stage. A little green woman emerges, and the host asks her to point to one of the two unopened doors. The chances that she'll randomly choose the one with the prize are 1/2, all right. But that's because she lacks the advantage the original contestant had—the help of the host. (Try to forget any particular television show.) When you first choose door #1 from three, there's a 1/3 chance that the prize is behind that one and a 2/3 chance that it's behind one of the others. But then the host steps in and gives you a clue. If the prize is behind #2, the host shows you #3, and if the prize is behind #3, the host shows you #2. So when you switch, you win if the prize is behind #2 or #3. You win either way! But if you don't switch, you win only if the prize is behind door #1. And as this problem is of such intense interest, I'm willing to put my thinking to the test with a nationwide experiment. This is a call to math classes all across the country. Set up a probability trial exactly as outlined below and send me a chart of all the games along with a cover letter repeating just how you did it so we can make sure the methods are consistent. One student plays the contestant, and another, the host. Label three paper cups #1, #2, and #3. While the contestant looks away, the host randomly hides a penny under a cup by throwing a die until a 1, 2, or 3 comes up. Next, the contestant randomly points to a cup by throwing a die the same way. Then the host purposely lifts up a losing cup from the two unchosen. Lastly, the contestant "stays" and lifts up his original cup to see if it covers the penny. Play "not switching" two hundred times and keep track of how often the contestant wins. Then test the other strategy. Play the game the same way until the last instruction, at which point the contestant instead "switches" and lifts up the cup not chosen by anyone to see if it covers the penny. Play "switching" two hundred times, also. And here's one last letter. You are indeed correct. My colleagues at work had a ball with this problem, and I dare say that most of them, including me at first, thought you were wrong! Seth Kalson, Ph.D. Massachusetts Institute of Technology Thanks, M.I.T. I needed that! In a recent column, you called on math classes around the country to perform an experiment that would confirm your response to a game show problem. My eighth grade classes tried it, and I don't really understand how to set up an equation for your theory, but it definitely does work! You'll have to help rewrite the chapters on probability. Pat Gross, Ascension School Chesterfield, Missouri Our class, with unbridled enthusiasm, is proud to announce that our data support your position. Thank you so much for your faith in America's educators to solve this. Jackie Charles, Henry Grady Elementary Tampa, Florida My class had a great time watching your theory come to life. I wish you could have been here to witness it. Their joy is what makes teaching worthwhile. Pat Pascoli, Park View School Wheeling, West Virginia Seven groups worked on the probability problem. The numbers were impressive, and the students were astounded. R. Burrichter, Webster Elementary School St. Paul, Minnesota The best part was seeing the looks on the students' faces as their numbers were tallied. The results were thrilling! Patricia Robinson, Ridge High School Basking Ridge, New Jersey You could hear the kids gasp one at a time, "Oh my gosh. She was right!" Jane Griffith, Magnolia School Oakdale, California I must admit I doubted you until my fifth grade math class proved you right. All I can say is WOW! John Witt, Westside Elementary River Falls, Wisconsin It's a lesson we'll never forget. Andreas Kohler, Cherokee High School Canton, Georgia This experiment caused so much discussion among students and parents that I'm going to have the results on display at our school open house. Nancy Transier, Bear Branch Elementary Kingwood, Texas My classes enjoyed this exercise and look forward to the next project you give America's students. This is the stuff of real science. Jerome Yeutter, Hebron Public Schools Hebron, Nebraska Thank you for supplying us with this wonderful project which lightened our lives during a particularly cheerless winter without snow. Marcia Jones, Berkshire Country Day School Lenox, Massachusetts Thanks for that fun math problem. I really enjoyed it. It got me out of fractions for two days! Have any more? Andrew Malinoski, Mabelle Avery School Somers, Connecticut I'm a fourth grade student, and I used your column for a science fair project. My test results showed that you were right. My science fair project won a red ribbon. Elizabeth Olson, Edgar Road Elementary Webster Groves, Missouri I did your experiment for the Regional Science and Engineering Fair at the University of Evansville, and I won both third place and a special award from the Army called the "Certificate of Analda House, Evansville Day School Evansville, Indiana I did your experiment on probability as part of a Science Fair project, and after extensive interview with the judges, I was awarded first place. Adrienne Shelton, Holy Spirit School Annandale, Virginia Congratulations! You've discovered a new concept. At first I thought you were crazy, but then my computer teacher encouraged us to write a program, which was quite a challenge. I thought it was impossible, but you were right! Anabella Sousa, Dominican Commercial High School Jamaica, New York The teachers in my graduate-level mathematics classes, most of whom thought you were wrong, conducted your experiment as a class project. Each of the twenty-five teachers had students in their middle or high school classes play at least 400 games. In all, we had 14,800 samples of the experiment, and we're convinced that you were correct —the contestant should switch! Eloise Rudy, Furman University Greenville, South Carolina You have taken over our Mathematics and Science Departments! We received a grant to establish a Multimedia Demonstration Project using state-of-the-art technology, and we set up a hypermedia laboratory network of computers, scanners, a CD-ROM player, laser disk players, monitors, and VCR's. Your problem was presented to 240 students, who were introduced to it by their science teachers. They then established the experimental design while the mathematics teachers covered the area of probability. Most students and teachers initially disagreed with you, but during practice of the procedure, all began to see that the group that switched won more often. We intend to make this activity a permanent fixture in our curriculum. Anthony Tamalonis, Arthur S. Somers Intermediate School 252 Brooklyn, New York I also thought you were wrong, so I did your experiment, and you were exactly correct. (I used three cups to represent the three doors, but instead of a penny, I chose an aspirin tablet because I thought I might need to take it after my experiment.) William Hunt, M.D. West Palm Beach, Florida I put my solution of the problem on the bulletin board in the physics department office at the Naval Academy, following it with a declaration that you were right. All morning I took a lot of criticism and abuse from my colleagues, but by late in the afternoon most of them came around. I even won a free dinner from one overconfident professor. Eugene Mosca, Ph.D., U.S. Naval Academy Annapolis, Maryland After considerable discussion and vacillation here at the Los Alamos National Laboratory, two of my colleagues independently programmed the problem, and in 1,000,000 trials, switching paid off 66.7% of the time. The total running time on the computer was less than one second. G.P. DeVault, Ph.D., Los Alamos National Laboratory Los Alamos, New Mexico One of my students wanted to know whether they were milk goats or stinky old bucks. Presumably that would redefine what a favorable outcome was! Daphne Walton, Bayview Christian School Norfolk, Virginia Now 'fess up. Did you really figure all this out, or did you get help from a mathematician? Lawrence Bryan San Jose, California Wow! What a response we received! It's still coming in, but so many of you are so anxious to hear the results that we'll stop tallying for a moment and take stock of the situation so far. We've received thousands of letters, and of the people who performed the experiment by hand as described, the results are close to unanimous: you win twice as often when you change doors. Nearly 100% of those readers now believe it pays to switch. (One is an eighth-grade math teacher who, despite data clearly supporting the position, simply refuses to believe it!) But many people tried performing similar experiments on computers, fearlessly programming them in hundreds of different ways. Not surprisingly, they fared a little less well. Even so, about 97% of them now believe it pays to switch. And plenty of people who didn't perform the experiment wrote, too. Of the general public, about 56% now believe you should switch compared with only 8% before. And from academic institutions, about 71% now believe you should switch compared with only 35% before. (Many of them wrote to express utter amazement at the whole state of affairs, commenting that it altered their thinking dramatically, especially about the state of mathematical education in this country.) And a very small percentage of readers feel convinced that the furor is resulting from people not realizing that the host is opening a losing door on purpose. (But they haven't read my mail! The great majority of people understand the conditions perfectly.) And so we've made progress! Half of the readers whose letters were published in the previous columns have written to say they've changed their minds, and only this next one of them wrote to state that his position hadn't changed at all. I still think you're wrong. There is such a thing as female logic. Don Edwards Sunriver, Oregon Oh hush, now. Site Admin Posts: 11 Joined: Mon May 01, 2006 12:16 pm Gosh, this brings back so many memories! What I don't think came through at the time was the fact that I and so many mathematicians were mortified at the behavior of so many in our community. Not only that they got a probability problem wrong, but that they were behaving so childishly. I didn't realize how nasty it really got behind the scenes until I read The Power of Logical Thinking. I regret that I didn't write a letter of support at the time. But let me take this opportunity to voice something that annoys me: Whenever this problem is discussed it is sometimes qualified with something like, "Marilyn was essentially right." NO. Marilyn was right. Period. She makes it clear in her first answer that the host always opens a losing door. How else would you interpret the problem for Heaven's Sake! You're not going to complain that game show hosts are being unfairly stereotyped! (Everyone knows game show hosts are a tricky lot, anyway.) I think to really get the problem you have to imagine at a lot of trials. I would invite you to look at my example in the Monty Hall Dilemma thread concerning the multiple choice test with 300 questions. It is equivalent to the Monty Hall Dilemma. Posts: 124 Joined: Wed May 10, 2006 4:09 pm Location: Louisiana That red and white chart makes things blazingly clear, even for me, and I have a math learning disability. "A new scientific idea does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it." -Max Planck Posts: 389 Joined: Sun May 28, 2006 1:49 pm Location: Earth, just visiting on my way somewhere... Take that College PhDs and mathemeticans! Marilyn vos Savant is better at thinking than anyone. Everytime I read about that problem I'm convinced Marilyn is indeed the smartest person in the world Bill Gates, Richest person in the world, George W. Bush, most powerful person in the world, Marilyn vos Savant smartest person in the world. Posts: 109 Joined: Thu Apr 27, 2006 10:44 pm Location: New York Aieeee! We're getting killed by language here, not mathematics! Let's not forget that probability is an exact science. Controversies arise when problems are not precisely stated. I dare say that true mathematicians will go to great pains to make sure the problem is precisely stated. There are two DIFFERENT problems in play here. The first, which Marilyn has in mind, is: (a) Player chooses one of the three doors at random (b) Host, who knows what is behind the doors, deliberately selects a losing door from the remaining two and opens it. This problem has been correctly analyzed by Marilyn. If the player switches to the remaining unopened door, the player wins with probability 2/3. The second and entirely different problem is: (a) Player chooses one of the three doors at random (b) Host opens one of the two remaining doors at random, which happens to be empty. It is the second problem which is causing confusion. Marilyn alludes to this second problem by describing the "UFO" landing on stage and randomly selecting a door. In the case of the second problem, there is no advantage to switching. You will win with probability 1/2 whether or not you switch. In order to see this experimentally, try the following game, based on the one Marilyn outlined above. One student plays the contestant, and another, the host. Label three paper cups #1, #2, and #3. While the contestant looks away, the host randomly hides a penny under a cup by throwing a die until a 1, 2, or 3 comes up. Next, the contestant randomly points to a cup by throwing a die the same way. Then the host randomly selects one of the two remaining cups by throwing the die, and selecting leftmost remaining cup if the number on the die is even, or rightmost if it is odd. If the host happens to select the cup with the penny, the game is a "push" and is disregarded. Lastly, the contestant "stays" and lifts up his original cup to see if it covers the penny. If it covers the penny, you have a win, otherwise you have a loss. Play until you have a total of two hundred wins and losses. If, as hypothesized, the probability of winning is 1/2 whether you "stay" or "switch," the number of wins and losses will be about equal. Let's recognize that we're not arguing about the ability to answer basic probability questions, a skill which most college freshmen can learn. Instead, we are arguing about which of two distinct problems we are solving based on personal interpretation of an ill-defined statement. Marilyn has the "edge" in this argument, because the statement of the problem mentions that the host "knows what's behind the doors." Unfortunately, the statement of the problem doesn't indicate whether the host acted on this knowledge, or how. For all we know, the host might have chosen randomly despite his knowledge. In order to adequately define the problem we must infer that the host made use of the knowledge and deliberately avoided the winning door if applicable. Only after this ambiguity was clarified did the conciliatory mail flow in. It is a pity that the statement of the problem was not more precise at the beginning. Think about this. If, instead of the initial problem, we simply chose to discuss Marilyn's "cup" experiment or the variation on it that I just described, would we have any controversy? I would think not, simply because the problems are described unambiguously, and these problems, when described in unambiguous language, are easily analyzed by the techniques that we learn in the first few lessons of "Probability 101." I know I've rambled a bit, but one final point. I recall a similar acrimonious discussion involving Marilyn in the late 1990's regarding the following problem which I will paraphrase: A woman has two children. She takes her son out in the stroller. What is the probability that her other child is a daughter, assuming male and female children occur with equal likelihood? Don't even try to answer that without further clarification. It is AMBIGUOUS! The answer depends on how she chose which of her two children to take in the stroller! There are at least two valid interpretations of this question, each of which has a different answer. I'll do my best to quickly analyze each. Now, I think we can all agree that in families with exactly two children, the "two sons" outcome occurs 1/4 of the time, the "two daughters" outcome occurs 1/4 of the time, and the "one of each" outcome occurs 1/2 of the time. So consider the two distinct problems: (a) The mother randomly selects one of her children to take in the stroller, and (b) The mother selects a son, if she has one, to take in the stroller, otherwise she takes her daughter. For (a): (1) 1/4 the mother with two daughters takes a daughter (2) 1/4 the mother with two sons takes a son (3) 1/4 the mother with "one of each" takes a daughter (4) 1/4 the mother with "one of each" takes a son If you observed a son in the cart, you observed either (2) or (4) with equal likelihood. In (2) the child at home is a son, in (4) the child at home is a daughter. Therefore the child at home is a daughter with probability 1/2. For (b) (1) 1/4 the mother with two daughters takes a daughter (2) 1/4 the mother with two sons takes a son (3) 1/2 the mother with "one of each" takes a son If you observed a son in the cart, you observed either (2) or (3). Outcome (3) is twice as likely as outcome (2) therefore the child at home is a daughter with probability 2/3. I do seem to remember Marilyn (who insisted the answer was 2/3) battling it out with the "professors" (who insisted the answer was 1/2). Personally, I've got to believe that all involved are intelligent enough to see the ambiguity. Perhaps nobody can resist the mudslinging! Posts: 45 Joined: Thu Jul 06, 2006 2:32 am I suspect the reason none of the conciliatory and nonconciliatory (except yours) hint at the trouble being an ambiguously stated puzzle is because saying, "the host knows what is behind the doors" doesn't make clear whether or not he flipped a coin when he actually opened a door is what we, in the business, refer to as "rather a stretch". All you've managed to do for your college profs is accuse them of having the reading comprehension of a retard. Now, you are not a retard, so don't you think your fantastic stretching and rather plain and transparent attempt to find something, anything, wrong with the puzzle itself, with such desperation that you don't care that everyone can see the scene you're creating and can tell you're just in a state, is a result of ego, and not necessarilly a short coming with raw intelligence? Here is another puzzle. The thing about puzzles is that they typically do not hold your hand through the problem. The point of a puzzle is there are things in it that can trip you up. A man is looking at a painting on a wall. He says, "brothers and sisters have I none, but this man's father is my father's son." Who is he looking at? It might be silly, and a stretch, to kvetch that the problem doesn't make clear that the man is not a transvestite... I can hear someone saying that here at last must be the very last way anyone can come up with to nay say the puzzle and M's adventure with it, that surely this last bit of, well, there's no other word I can think of or it, stretching, surely means every possibility has been exhausted. But I know ppl. and their credulity should not be underestimated. So I'll just wait for the next one. If I say to myself, "there can't be others, that would be insane" one will come along and I'll be depressed, and I don't wanna be depressed. "A new scientific idea does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it." -Max Planck Posts: 389 Joined: Sun May 28, 2006 1:49 pm Location: Earth, just visiting on my way somewhere... I'm not quite sure how to respond to this. My purpose was to present my analysis of why so many people in the field of probability were at odds with Marilyn. My analysis, which is merely my opinion, is that they misinterpreted the problem and solved a similar but different problem. Clearly you believe that my explanation is a "stretch" and if so I am sure you have the ability to present a well-reasoned argument which is supported by more than an "everyone can see" assertion. I don't agree with your assessment of my response as "non-conciliatory" since I stated quite clearly that Marilyn had correctly analyzed the first problem. I think that it is clear without my saying so that although the problem was ill-defined, Marilyn immediately clarified her assumptions and proceeded correctly based upon them. On the other hand, some people who work in the field of probability have a predisposition to view problems as conditional probability problems, and would tend to formulate and analyze the alternate problem (where the door is opened at random, and the player has conditional knowledge of that outcome), stopping only to make sure that the original statement of the problem does not exclude that interpretation. As for ego, certainly there is no shortage of ego in the mean-spirited responses which Marilyn chose to highlight in her posting. Marilyn exhibits extraordinary grace in her eloquent and straightforward response to those criticisms, with never a word of condescension or anger. Although I cannot hope to attain Marilyn's high genius, I can certainly try to learn from her ego-free demeanor. I would never expect to read the "r" word in any of Marilyn's responses, and I question why you would use it, negated or not, to describe me or my comments on those who misinterpreted the The only reason I decided to comment was that I read the thread, and it seemed to me that people were happily jumping to the conclusion that all of those who wrote in to disagree with Marilyn in 1990-91 were incompetent in spite of whatever impressive titles or degrees they might have. The comments were overwhelmingly of the "Marilyn was right, PhD's were wrong, ha, ha" genre. I thought, perhaps incorrectly, that a little bit of reasoned analysis might be welcome. Posts: 45 Joined: Thu Jul 06, 2006 2:32 am You can't say "retarded"? You have to say, "the r word"? Well, back to work on my space ship. "A new scientific idea does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it." -Max Planck Posts: 389 Joined: Sun May 28, 2006 1:49 pm Location: Earth, just visiting on my way somewhere... I am inclined to side with Marilyn, but something still keeps me awake at night. Let's say you initially choose door 1, then the host shows a goat behind door 3. Consider this argument for Marilyn's result: With all doors closed, the probability that door 1 contains the car is 1/3. Thus, the probability that one of the other two doors contains the car is 2/3. When door 3 is opened to reveal a goat, we can safely set its probability of containing the car to 0. In effect, the entire 2/3 probability of both doors you did not select "shifts" entirely to the unopened door 2. Thus, switching to door 2 gives you a 2/3 probability of winning the car. Consider this argument against Marilyn's result by similar reasoning: With all doors closed, the probability that any two doors combined contain the car is 2/3. So it is with doors 1 and 3. However, when door 3 is opened to reveal a goat, we can safely set its probability of containing the car to 0. In effect, the entire 2/3 probability of (doors 1 and 3 combined) containing the car "shifts" entirely to door 1, the one you selected. Thus, staying at door 1 gives you a 2/3 probability of winning the car. I know something is wrong with the second argument (or possibly both), but I can't pinpoint it precisely. I believe the reason people want to think the chance of each unopened door containing the car to be 1/2 is that they go through both of these arguments and see that the results are equal, then normalize the results by multiplying each by 3/4. In any case, please show me the flaw in the second argument! Posts: 2 Joined: Wed Jul 12, 2006 8:10 pm Location: Earth Forgive me for double-posting, but I think I've figured out the answer to my question. In the upper paragraph, doors 2 and 3 are not restricted from being opened by the rules. However, in the lower paragraph, door 1 is restricted from being opened by the rules, while door 3 is not. That is why the probabilities are different. Am I correct now? Posts: 2 Joined: Wed Jul 12, 2006 8:10 pm Location: Earth I accidently posted this in the wrong forum, but reposted it here (duh): this is how many people imagined the probabilities: 1. contestant picks door #1 (1/3 prob.) 2. host opens #3 UNKNOWINGLY (got lucky and opened the goat door) 3. each remaining door contains 1/2 chance) here's the reality 1. contestant picks a door #1 (1/3 prob.) 2. host opens goat door KNOWINGLY 3. since door #2 and #3 each had a 1/3 chance, and the host removed one, you add those 2 doors to get the new probability of 2/3 Its the KNOWINGLY that changes the probabilties because the host is choosing, this IS NOT left to chance. That is why the probability of the remaining doors are added, because the removal of the doors is not random. Here is the formula to calculate the probability for each remaining door, even if your gameshow had 100 doors: (Original door chosen) / (Total doors) X (Total doors - Original door chosen) / (Total doors still closed - Original door chosen) I explain it in detail: (Original door chosen) / (Total doors) [this is the probability of a single door] X (Total doors - Original door chosen) [this is the probability of remaining doors] / (Total doors still closed - Original door chosen) [this divides probabilty amongst remaining doors] I dont have a proof yet, but I pretty much failed in math, so feel free to add to this. Posts: 4 Joined: Wed Jul 19, 2006 7:56 pm Anyone that does not believe you are correct (especially the learned scholars) should become familiar with the Bayes Theorem. Rev. Bayes was a true genius and every self-proclaimed mathematician would do well to learn from him. Posts: 1 Joined: Sat Jul 22, 2006 5:02 am Kenyai wrote:I am inclined to side with Marilyn, but something still keeps me awake at night. Let's say you initially choose door 1, then the host shows a goat behind door 3. Consider this argument for Marilyn's result: With all doors closed, the probability that door 1 contains the car is 1/3. Thus, the probability that one of the other two doors contains the car is 2/3. When door 3 is opened to reveal a goat, we can safely set its probability of containing the car to 0. In effect, the entire 2/3 probability of both doors you did not select "shifts" entirely to the unopened door 2. Thus, switching to door 2 gives you a 2/3 probability of winning the car. Consider this argument against Marilyn's result by similar reasoning: With all doors closed, the probability that any two doors combined contain the car is 2/3. So it is with doors 1 and 3. However, when door 3 is opened to reveal a goat, we can safely set its probability of containing the car to 0. In effect, the entire 2/3 probability of (doors 1 and 3 combined) containing the car "shifts" entirely to door 1, the one you selected. Thus, staying at door 1 gives you a 2/3 probability of winning the car. Probability isn't a description of how the game turned out, so it doesn't change when you lose or win. When you role a 6 sided die trying to get a 4 and you get a 2, your prob. of getting a 4 is not in that moment, zero. Probability doesn't describe what happened, it's a description of how many chances you have to win compared to the total number of things that can happen. 6 things can happen; you've got one chance to win. This doesn't change no matter what the dice does. So if you open one door and get a goat, it is still the case that the probability measurement for that door is 1/3; because probability isn't a description of what happened, it is merely the ratio that represents the number of chances to win compared to the total number of things that could happen. If you open two doors, then your number of chances to win is two out of three, and that is a measurement that remains true even if you get two goats. The prob. only changes if you begin with a different game, with a different number of total possible outcomes and chances to win. It really helps to visualize the total number of possibilities like this; 1) car,goat,goat 2) goat,car,goat 3) goat,goat,car Playing once means that there can only be on favourable out come out of three total possible outcomes. You are either looking at 1, 2, or 3, you don't know which, but there are those three total possibilities. If you play once, the door you picked could be the ONE combo that has a car at that door, so you have one chance to win, one out of three total possible scenarios, so P = 1/3. Playing twice means picking doors that could be winning doors for two of the three combos, which means of the three combos, two of them are favourable, so P = 2/3. Probability is the no. of favourable events divided by the total number of existing events. If this was all there was to the puzzle, then, the soln. would be easy as pie! It still is a simple prob. calculation, but the puzzle dazzles and distracts us with the extra added feature that the host ALWAYS KNOWINGLY reveals a goat door. So now if you switch what is the You now go back and look at that list, or table if you prefer, and see that if you switch, and keep in mind that someone ALWAYS reveals a goat door, then there are two chances, two combinations out of the three possible in the list, where you will win. 2/3! "A new scientific idea does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it." -Max Planck Posts: 389 Joined: Sun May 28, 2006 1:49 pm Location: Earth, just visiting on my way somewhere... dbridges wrote:Anyone that does not believe you are correct (especially the learned scholars) should become familiar with the Bayes Theorem. Rev. Bayes was a true genius and every self-proclaimed mathematician would do well to learn from him. Just the phrase, "Bayes theorem" scares me to death. "A new scientific idea does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die and a new generation grows up that is familiar with it." -Max Planck Posts: 389 Joined: Sun May 28, 2006 1:49 pm Location: Earth, just visiting on my way somewhere... Hello and greetings from Bulgaria! Reading this was pretty funny. And one doesn't even need to know "advanced" mathematics and theorems to solve the problem. This is a classical and beautiful example of how people fool themselves, even PhDs, etc... It takes no more than a minute for one to solve it, literally seeing the right answer. In the case with the PhDs, they have fooled themselves due to a lack of attention and have given "hard" answers. It's that simple. What happens - with little observation, their classically trained minds have run for the first "idea" (classical factual knowledge) for solving probability problems, building up a false assumption; a key that just doesn't fit with the doors, because... the key is in the host. Marilyn used a word on which I'd like to accentuate - "visualize". "I'm very concerned with the general public's lack of mathematical skills." - Robert Sachs, Ph. D. I'm very concerned with the non-general public's lack of seeing. Best regards, Aut viam inveniam aut faciam. Posts: 12 Joined: Sat Aug 05, 2006 3:36 pm Location: Bulgaria Return to Online Articles By Marilyn Who is online Users browsing this forum: No registered users and 0 guests
{"url":"http://www.marilynvossavant.com/forum/viewtopic.php?p=680","timestamp":"2014-04-16T10:11:49Z","content_type":null,"content_length":"76211","record_id":"<urn:uuid:abdaad7e-a182-40db-bb7a-9690bfc56aa1>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
THE honors physics home page Prof. Robert Perry M2056 Physics Research Building 397-1031 (work cell phone) Prof. Perry's Office Hours Available in Smith student lounge the hour before class on days homework is due. Available in my office most days from 10:30-1:30. Ask for time any day after lecture or by email. Ms. Jin Yang 3016 PRB Mr. Yaser Helal 3026 PRB Return to Contents Course title: Honors Physics 132: Relativity, Electricity & Magnetism Six Ideas That Shaped Physics, Units R & E, 2nd Edition, by Thomas A. Moore Five days each week at 2:30 pm or 3:30 pm in Smith 1094 0th order Grade Weighting: Homework [15%] Labs [10%] Exams [3 x 25%] Return to Contents Homework due dates and times announced in class and here. Turn homework in at start of class. Late homework must be turned in by next class, 25% off. Graded homework returned in class the next week and solutions posted here. Return to Contents Examinations - No notes or books should be used other than those provided. ☆ Exam 1.1: Thursday, Jan. 21, in class Sample questions for 1st exam Solutions for Exam 1, Day 1 ☆ Exam 1.2: Friday, Jan. 22, in class Solutions for Exam 1, Day 2 ☆ Exam 2.1: Thursday, Feb. 18, class time, Smith Lecture Room (PRB 1080) Notes for exam 2.1 Sample exam 2.1 questions Solutions for Exam 2, Day 1 ☆ Exam 2.2: Friday, Mar. 5, class time, Smith Lecture Room (PRB 1080) Sample exam 2.2 questions Solutions for Exam 2, Day 2 ☆ Exam 3: 3:30 section - Monday, Mar 15, 3:30-5:18, Smith Lecture Room (PRB 1080) ☆ Exam 3: 2:30 section - Tuesday, Mar 16, 1:30-3:18, Smith Lecture Room (PRB 1080) Return to Contents Return to Contents Return to Contents Return to Contents Return to Contents Your comments and suggestions are appreciated. [OSU Physics] [Math and Physical Sciences] [Ohio State University ] Honors Physics 132. Created 05-January-2010. Last modified 9-April-2010.
{"url":"http://www-physics.mps.ohio-state.edu/~perry/h132/","timestamp":"2014-04-16T04:17:25Z","content_type":null,"content_length":"12166","record_id":"<urn:uuid:1217040e-f397-4725-b65d-b4ae8ae8f708>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00270-ip-10-147-4-33.ec2.internal.warc.gz"}
Opportunities along the yield curve Future changes Option price curve equations are useful for predicting changes in prices several days forward, based on new underlying futures prices with the same set of strike prices. "Three days predicted" (below) shows options on 20-year T-bond futures where the option price-to-strike price is related to the futures price-to-strike price. The curve shows close relationships between the predicted prices over three days and initial prices on Oct. 28. On Nov. 3 there is some indication of the market prices falling slightly lower than predicted by the fixed equation. This will be an increasing tendency as time to expiration decreases. Variations between current market prices and predicted prices may be used to find temporarily over-valued and under-valued options as prices tend to move back toward the regression curve. Trial & error "T-note futures yield & duration" (below) shows one section of an Excel spreadsheet that calculates the yield on a T-note or T-bond given the listed price. The duration for the futures contract is computed automatically on the spreadsheet at the same time it is calculating the yield from a given price. The two-year T-note has a price listed as 109% of $200,000 par, plus 27.25 times 1/32nd of 1% of par. In decimal form, the price is $218,852. Interest of $6,000 is paid semiannually, with $200,000 par value received at maturity. The yield that corresponds to a $218,852 price must be computed by trial and error, trying out different yields until the computed total present value is approximately equal to the listed price. The trial-and-error process may be done manually, which is how this table was created, or accomplished by a computer program that gradually centers in on the required discount yield. Yields and prices also may be looked up on tables of price-to-yield and yield-to-price available online at CME Group. CME Group publishes a Treasury price index online — the Dow Jones Chicago Board of Trade (CBOT) Treasury Price Index — based on five-year, 10-year and 20-year maturities. The "T-note futures yield & duration" spreadsheet includes a section that produces data similar to the Dow Jones CBOT index, and shows how the index is computed. Bond duration Because duration is an important element in the index calculation, it may be well to describe duration in more detail. Duration is the weighted average of time to maturity of any asset. The weights are equal to the present value of cash flows in each time period divided by the asset’s total present value. Duration is shown computed on "T-note futures yield & duration" (above), where the fourth column shows the calculation of weights for each time period and column five multiplies the weight and the time period number. The weighted average time to maturity, or duration, of the two-year note is computed as 3.8372 six-month periods or 1.9186 years. Duration changes constantly with new data for market yields and periodic cash flows; however, it is always equal to or less than calendar time to maturity. Duration is a critical factor in hedging individual financial instruments and entire portfolios. In theory, a portfolio that has a given weighted duration that includes its total holdings may be hedged by a long or short position in a single Treasury futures contract with the same computed duration. It is important to realize that prices on fixed-income securities change in relation to their durations rather than simply time to maturity. "Treasury price index" (below), calculated on Oct. 28, 2010, includes modified durations (where duration is divided by (1+ i), with i equal to the computed yield). For example, the weighted average maturity of the 30-year ultra T-bond is 15.6497 years. The time pattern of cash flows determines duration, resulting in the $100,000 par value received on the 30-year bond shrinking in its impact on price because of the longer time to maturity. If we were to begin a Treasury price index similar to the Dow Jones CBOT Treasury Index, using just the five-, 10- and 20-year maturities as shown on "Treasury price index," the three prices would be weighted. The index weights are calculated by dividing the modified duration of the 30-year T-bond by the modified duration of each of the other maturities. In this way, the pricing effects of different maturities are equalized and the index is made comparable over time. The importance of the Treasuries in hedging, speculating and forecasting interest rates and yields of all maturities hardly can be overstated. Their futures and options will be of increasing usefulness as the Federal Reserve uses financial markets in its efforts to accelerate economic recovery. Paul Cretien is an investment analyst and financial case writer. His e-mail is PaulDCretien@aol.com.
{"url":"http://www.futuresmag.com/2011/01/01/opportunities-along-the-yield-curve?t=financials&page=2","timestamp":"2014-04-16T14:35:07Z","content_type":null,"content_length":"46289","record_id":"<urn:uuid:b93199ec-85b4-448b-9322-7354497af620>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00183-ip-10-147-4-33.ec2.internal.warc.gz"}
Write a two column proof that segment AC `cong` segment BD. ... - Homework Help - eNotes.com Write a two column proof that segment AC `cong` segment BD. Statements Reasons In given fig. consider triangle ABC and triange BAD. i.e. in `Delta ABC andDelta BAC` `angle ABC= angle BCA` (given) AB= BA ( same segment) `angle BAC=angle ABD= 90^o ` (given) Thus by ASA property `Delta ABC cong Delta BAC` since congruent part of cobgruent triangles are equal . Therefore Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/write-two-column-proof-that-segment-ac-cong-442342","timestamp":"2014-04-17T05:26:17Z","content_type":null,"content_length":"26830","record_id":"<urn:uuid:3ba6f9a4-3b30-4f72-9ac0-c06e0c380fd5>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00308-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - Algorithm Complexity: Sorted Arrays into Sorted Array I don't know how copies and compares are factored into complexity or big theta. It doesn't matter, we don't care about constants in asymptotic notation. OK, but in one type of optimized case, there are k n copies done and in the worst case, (k) (k-1) n = (k - k) n compares done, how do you balance the copies versus compares for O(...) in this case?
{"url":"http://www.physicsforums.com/showpost.php?p=3805340&postcount=10","timestamp":"2014-04-20T03:13:43Z","content_type":null,"content_length":"8579","record_id":"<urn:uuid:ffef2922-a5ad-40f6-8154-6d2a4b0047a8>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Norm Induced by the Inner Product Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search We may define a norm on inner product: It is straightforward to show that properties 1 and 3 of a norm hold (see §5.8.2). Property 2 follows easily from the Schwarz Inequality which is derived in the following subsection. Alternatively, we can simply observe that the inner product induces the well known Next | Prev | Up | Top | Index | JOS Index | JOS Pubs | JOS Home | Search [How to cite this work] [Order a printed hardcopy] [Comment on this page via email]
{"url":"https://ccrma.stanford.edu/~jos/mdft/Norm_Induced_Inner_Product.html","timestamp":"2014-04-18T21:10:41Z","content_type":null,"content_length":"7159","record_id":"<urn:uuid:deba0774-16c5-4a10-8425-00e122afbabb>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
A functional approach to real number computation - Bulletin of Symbolic Logic , 1997 "... We present a survey of the recent applications of continuous domains for providing simple computational models for classical spaces in mathematics including the real line, countably based locally compact spaces, complete separable metric spaces, separable Banach spaces and spaces of probability dist ..." Cited by 48 (10 self) Add to MetaCart We present a survey of the recent applications of continuous domains for providing simple computational models for classical spaces in mathematics including the real line, countably based locally compact spaces, complete separable metric spaces, separable Banach spaces and spaces of probability distributions. It is shown how these models have a logical and effective presentation and how they are used to give a computational framework in several areas in mathematics and physics. These include fractal geometry, where new results on existence and uniqueness of attractors and invariant distributions have been obtained, measure and integration theory, where a generalization of the Riemann theory of integration has been developed, and real arithmetic, where a feasible setting for exact computer arithmetic has been formulated. We give a number of algorithms for computation in the theory of iterated function systems with applications in statistical physics and in period doubling route to chao... , 1997 "... We develop the theoretical foundation of a new representation of real numbers based on the infinite composition of linear fractional transformations (lft), equivalently the infiite product of matrices, with non-negative coefficients. Any rational interval in the one point compactification of the rea ..." Cited by 42 (8 self) Add to MetaCart We develop the theoretical foundation of a new representation of real numbers based on the infinite composition of linear fractional transformations (lft), equivalently the infiite product of matrices, with non-negative coefficients. Any rational interval in the one point compactification of the real line, represented by the unit circle S¹, is expressed as the image of the base interval [0�1] under an lft. A sequence of shrinking nested intervals is then represented by an infinite product of matrices with integer coefficients such that the first so-called sign matrix determines an interval on which the real number lies. The subsequent so-called digit matrices have non-negative integer coe cients and successively re ne that interval. Based on the classi cation of lft's according to their conjugacy classes and their geometric dynamics, we show that there is a canonical choice of four sign matrices which are generated by rotation of S¹ by =4. Furthermore, the ordinary signed digit representation of real numbers in a given base induces a canonical choice of digit matrices. - PROC APPSEM SUMMER SCHOOL IN PORTUGAL , 2002 "... We introduce, in Part I, a number representation suitable for exact real number computation, consisting of an exponent and a mantissa, which is an in nite stream of signed digits, based on the interval [ 1; 1]. Numerical operations are implemented in terms of linear fractional transformations ( ..." Cited by 17 (1 self) Add to MetaCart We introduce, in Part I, a number representation suitable for exact real number computation, consisting of an exponent and a mantissa, which is an in nite stream of signed digits, based on the interval [ 1; 1]. Numerical operations are implemented in terms of linear fractional transformations (LFT's). We derive lower and upper bounds for the number of argument digits that are needed to obtain a desired number of result digits of a computation, which imply that the complexity of LFT application is that of multiplying n-bit integers. In Part II, we present an accessible account of a domain-theoretic approach to computational geometry and solid modelling which provides a data-type for designing robust geometric algorithms, illustrated here by the convex hull algorithm. - Third Real Numbers and Computers Conference (RNC3 , 1998 "... One possible approach to exact real arithmetic is to use linear fractional transformations (LFT's) to represent real numbers and computations on real numbers. Recursive expressions built from LFT's are only convergent (i.e., denote a well-defined real number) if the involved LFT's are sufficiently c ..." Cited by 8 (3 self) Add to MetaCart One possible approach to exact real arithmetic is to use linear fractional transformations (LFT's) to represent real numbers and computations on real numbers. Recursive expressions built from LFT's are only convergent (i.e., denote a well-defined real number) if the involved LFT's are sufficiently contractive. In this paper, we define a notion of contractivity for LFT's. It is used for convergence theorems and for the analysis and improvement of algorithms for elementary functions. Keywords : Exact Real Arithmetic, Linear Fractional Transformations 1 Introduction Linear Fractional Transformations (LFT's) provide an elegant approach to real number arithmetic [8, 17, 11, 14, 12, 6]. One-dimensional LFT's x 7! ax+c bx+d are used in the representation of real numbers and to implement basic unary functions, while two-dimensional LFT's (x; y) 7! axy+cx+ey+g bxy+dx+fy+h provide binary operations such as addition and multiplication, and can be combined to obtain infinite expression trees ... - In Proc. Foundations of Software Science and Computation Structures (FoSSaCS '98), volume 1378 of LNCS , 1997 "... . One possible approach to exact real arithmetic is to use linear fractional transformations to represent real numbers and computations on real numbers. In this paper, we show that the bit sizes of the (integer) parameters of nearly all transformations used in computations are proportional to the nu ..." Cited by 7 (4 self) Add to MetaCart . One possible approach to exact real arithmetic is to use linear fractional transformations to represent real numbers and computations on real numbers. In this paper, we show that the bit sizes of the (integer) parameters of nearly all transformations used in computations are proportional to the number of basic computational steps executed so far. Here, a basic step means consuming one digit of the argument(s) or producing one digit of the result. 1 Introduction Linear Fractional Transformations (LFT's) provide an elegant approach to real number arithmetic [8, 16, 11, 14, 12, 6]. One-dimensional LFT's x 7! ax+c bx+d are used as digits and to implement basic functions, while two-dimensional LFT's (x; y) 7! axy+cx+ey+g bxy+dx+fy+h provide binary operations such as addition and multiplication, and can be combined to infinite expression trees denoting transcendental functions. In Section 2, we present the details of the LFT approach. This provides the background for understanding the r... - Automata, Languages and Programming, 26th International Colloquium, ICALP’99, Prague, Czech 227 Republic, July 11-15, 1999, Proceedings, volume 1644 of Lecture Notes in Computer Science , 1999 "... . We show that the classical techniques in numerical integration (namely the Darboux sums method, the compound trapezoidal and Simpson's rules and the Gauss{Legendre formulae) can be implemented in an exact real arithmetic framework in which the numerical value of an integral of an elementary functi ..." Cited by 5 (1 self) Add to MetaCart . We show that the classical techniques in numerical integration (namely the Darboux sums method, the compound trapezoidal and Simpson's rules and the Gauss{Legendre formulae) can be implemented in an exact real arithmetic framework in which the numerical value of an integral of an elementary function is obtained up to any desired accuracy without any round{o errors. Any exact framework which provides a library of algorithms for computing elementary functions with an arbitrary accuracy is suitable for such an implementation; we have used an exact real arithmetic framework based on linear fractional transformations and have thereby implemented these numerical integration techniques. We also show that Euler's and Runge{Kutta methods for solving the initial value problem of an ordinary dierential equation can be implemented using an exact framework which will guarantee the convergence of the approximation to the actual solution of the dierential equation as the step size in the , 1998 "... We present two algorithms for computing the root, or equivalently the fixed point, of a function in exact real arithmetic. The first algorithm uses the iteration of the expression tree representing the function in real arithmetic based on linear fractional transformations and exact floating point. T ..." Cited by 1 (0 self) Add to MetaCart We present two algorithms for computing the root, or equivalently the fixed point, of a function in exact real arithmetic. The first algorithm uses the iteration of the expression tree representing the function in real arithmetic based on linear fractional transformations and exact floating point. The second and more general algorithm is based on a trisection of intervals and can be compared with the well-known bisection method in numerical analysis. It can be applied to any representation for exact real numbers; here it is described for the sign binary system in [\Gamma1; 1] which is equivalent to the exact floating point with linear fractional transformations. Keywords : Shrinking intervals, Normal products, Exact floating point, Expression trees, Sign Binary System, Iterative method, Trisection. 1 Introduction In the past few years, continued fractions and linear fractional transformations (lft), also called homographies or Mobius transformations, have been used to develop various... , 1996 "... Several methods to perform exact computations on real numbers have been proposed in the literature. In some of these methods real numbers are represented by infinite (lazy) strings of digits. It is a well known fact that, when this approach is taken, the standard digit notation cannot be used. New f ..." Add to MetaCart Several methods to perform exact computations on real numbers have been proposed in the literature. In some of these methods real numbers are represented by infinite (lazy) strings of digits. It is a well known fact that, when this approach is taken, the standard digit notation cannot be used. New forms of digit notations are necessary. The usual solution to this representation problem consists in adding new digits in the notation, quite often negative digits. In this article we present an alternative solution. It consists in using non natural numbers as "base", that is, in using a positional digit notation where the ratio between the weight of two consecutive digits is not necessarily a natural number, as in the standard case, but it can be a rational or even an irrational number. We discuss in full detail one particular example of this form of notation: namely the one having two digits (0 and 1) and the golden ratio as base. This choice is motivated by the pleasing properties enjoyed...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=900929","timestamp":"2014-04-19T00:20:18Z","content_type":null,"content_length":"34184","record_id":"<urn:uuid:c5f92a61-bb15-4595-8700-303eaafa7c8d>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00190-ip-10-147-4-33.ec2.internal.warc.gz"}
Simple geormetry/congruence help May 30th 2012, 03:56 AM #1 Junior Member Feb 2009 Simple geormetry/congruence help ABCD is a quadrilateral with AB horizontal and CD vertical. Angle BAD = angle BCD = 40 degrees and AB=BC=5. Show that angle CDB = angle ADB = 25 degrees. Here is a diagram I made which I hope is correct: By observing alternate angles, I can see that angle ADC = 50 degrees but how do I show that angle CDB = angle ADB? I thought about proving congruence but is this possible here? Re: Simple geormetry/congruence help Let's analize triangles ADB and BDC: - angle DAB = angle BCD - BD is a common lature - AB = BC = 5 Rezults from the case of congruence side-side-angle triangle ADB= triangle BDC so angleADB=angleBDC AB is horizontal and CD is vertical, so if we extend the side AB we'll have a right angle where AB intersect CD (point F). In triangle AFD, F=90 degrees, A=40 degrees, so angleADF = angleADC = 180-90-40 = 50 degrees. AngleADB is half of angleADC, therefore is 25 degrees. Re: Simple geormetry/congruence help Let's analize triangles ADB and BDC: - angle DAB = angle BCD - BD is a common lature - AB = BC = 5 Rezults from the case of congruence side-side-angle triangle ADB= triangle BDC so angleADB=angleBDC AB is horizontal and CD is vertical, so if we extend the side AB we'll have a right angle where AB intersect CD (point F). In triangle AFD, F=90 degrees, A=40 degrees, so angleADF = angleADC = 180-90-40 = 50 degrees. AngleADB is half of angleADC, therefore is 25 degrees. But I thought SSA by itself isn't enough to prove congruence? Re: Simple geormetry/congruence help I agree, side, side, non-incuded angle does not imply congruence (though in fact they are). You can get there by extending AB to meet CD at E and CB to meet AD at F. Deduce that the angle at F is a rightangle and then that triangles AFB and CEB are congruent and finally that triangles DFB and DEB (rtangle hypotenuse side) are congruent. As an alternative you could use trig, the sine rule in triangles ADB and CBD. May 30th 2012, 05:09 AM #2 Jan 2011 May 30th 2012, 05:22 AM #3 Junior Member Feb 2009 May 30th 2012, 06:52 AM #4 Super Member Jun 2009
{"url":"http://mathhelpforum.com/geometry/199445-simple-geormetry-congruence-help.html","timestamp":"2014-04-16T10:39:47Z","content_type":null,"content_length":"36917","record_id":"<urn:uuid:37cbfb1b-ad45-4aed-a2e9-96210ab76bd9>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00475-ip-10-147-4-33.ec2.internal.warc.gz"}
convert mg/dl to micromoles - OnlineConversion Forums Originally Posted by Can anyone tell us how to convert micromoles to mg/dL with alpha 1 antitrypsin? Is there a chart that provides all the units of measure the labs use for measuring alpha 1 anitrypsin? After further checking, you probably mean micromoles per liter, as that seems to be the SI units used. The deciliter to liter is easy. To convert between grams and moles you need to know the molar mass (numerically equal to molecular weight). Wikipedia states the molar mass with great precision as 44324.5 g/mol. Unfortunately, it is a very complex molecule and seems to exist in more than one form. Various articles claim various molecular weights from 44 - 54 kDa (44 - 54 kg/mol) with Wikipedia near the low end. The journal of the AMA suggests multiplying g/dL by 0.184 for µmol/L (MW ~ 54 kDa). The Wikipedia value would give a multiplier of 0.226. 52 kDa seems to be another commonly referenced MW and it would give 0.192 as a multiplier. To go the other way, divide instead of multiply. However, I have no way of determining which of the above factors is correct. I would lean toward the JAMA value, after all, they are the doctors.
{"url":"http://forum.onlineconversion.com/showthread.php?t=7102","timestamp":"2014-04-20T23:38:47Z","content_type":null,"content_length":"41855","record_id":"<urn:uuid:9ac0e246-f89a-4975-9199-bde4db3bc326>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
geometry 1.2 Inductive reasoning reach a conclusion based on specific examples the conclusion of the inductive reasoning reasoning Deductive reasoning is a process of reasoning logically from given facts to a conclusion fact ex-*5 definitions, postulates, theorum, axions and properties what two things can you measure, what tool, and what unit line, ruler, metric-standard / angle, protractor, degrees an angle whoes measure is grater than 0 and less than 90 0 < 2x +3 < (90/180) acute/streight ineaqualtiy Right angle an angle that measures exactly 90 degrees 2x +3 = 90 right equation obtuse angle an angle whoes measre is grater than 90 and less then 180 90< 2x +3 < 180 obtuse inequality 1 degree is equal to _mins 1' is equal to __ seconds proportions are used to solve a degree problum; PROPORTION to ration/fractions set rquall to each other where the product of the means equalls the product of extreams example both ways to set up the proportion ex/mean = mean/ex ...or... mean x mean = expl x expl the two (segments / angles / triangles) are congruent if they have the same measure ( length / degree ) segment + positive cobining two segments AB + BC = AC angle+ positive putting two angle togeather to form 1, A1 + A2 = A3
{"url":"http://quizlet.com/6282058/geometry-12-flash-cards/","timestamp":"2014-04-19T14:33:53Z","content_type":null,"content_length":"59554","record_id":"<urn:uuid:de678973-2ba1-48d9-b1c2-b5c57985d2d5>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
Hopewell, NJ Algebra 2 Tutor Find a Hopewell, NJ Algebra 2 Tutor ...I have taught many inclusion classes in my years of teaching in Philadelphia. I have completed numerous IEPs on kids with ADD/ADHD. I have a lot of experience connecting with them including behavioral techniques to help achievement. 19 Subjects: including algebra 2, reading, English, writing ...Throughout the years, I've tutored subjects such as calculus, differential equations, general chemistry, quantum mechanics, and thermodynamics. At Princeton University, I've helped teach two courses thus far: experimental chemistry and chemical thermodynamics. As a tutor, my main goal is to hel... 6 Subjects: including algebra 2, chemistry, calculus, algebra 1 ...With older children, and adults entering/returning to the work-force, I firmly believe that excelling in the job market requires computer programming skills. I have successfully tutored college students and older co-workers on basic computer programming concepts. I am proficient in Python, Perl... 25 Subjects: including algebra 2, geometry, computer science, algebra 1 My name is Veronica. I'm a student with Mathematics/Education major. My GPA is a 3.4. 4 Subjects: including algebra 2, Spanish, algebra 1, prealgebra (( HIGHEST RATINGS!!! )) PARENTS: Bring the full weight of a PhD, as tutor, and student advocate. Hello Students! If you need help with mathematics, physics, or engineering, I'd be glad to help 14 Subjects: including algebra 2, physics, calculus, ASVAB
{"url":"http://www.purplemath.com/hopewell_nj_algebra_2_tutors.php","timestamp":"2014-04-17T19:34:24Z","content_type":null,"content_length":"23809","record_id":"<urn:uuid:669ebaa7-c99c-4133-b3c1-86dc50a865f2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00569-ip-10-147-4-33.ec2.internal.warc.gz"}
Dunn Loring Precalculus Tutor Find a Dunn Loring Precalculus Tutor ...I can help you understand how, why, when, and where all of it takes place. Let me help you or your child gain an in-depth understanding and appreciation of genetics while enhancing the skills and abilities necessary to demonstrate the knowledge on assessments and more! I earned a Bachelor's of Arts in Biological Sciences and a Master's of Science in Biological Sciences. 64 Subjects: including precalculus, reading, chemistry, English I have received my BA from The George Washington University few years ago and now am attending George Mason University pursuing a chemistry degree to finish medical school pre-requisites. I was on Dean's List in Spring 2010 and hope to be on it again this semester. I would like to help student... 17 Subjects: including precalculus, chemistry, physics, calculus ...As a camp counselor, I interacted with the children in Spanish and gave English lessons as well. As an undergraduate at Duke University, I took organic chemistry and received a high A in the class. I have an in depth understanding the material and am more than capable of explaining the concepts and mechanisms. 17 Subjects: including precalculus, Spanish, writing, physics ...I scored a 790/740 Math/Verbal on my SAT's and went through my entire high-school and college schooling without getting a single B, regardless of the subject. I did this through perfecting a system of self-learning and studying that allowed me to efficiently learn all the required materials whil... 15 Subjects: including precalculus, calculus, physics, GRE ...I have tutored a high school student about 15 hours for AP Computer Science II last year in the spring under subject Java. I am able to understand, write, and fix codes written in languages C, C++, and Java. I have great math background from working toward master's degree in computer science. 15 Subjects: including precalculus, chemistry, calculus, physics
{"url":"http://www.purplemath.com/dunn_loring_va_precalculus_tutors.php","timestamp":"2014-04-19T02:35:21Z","content_type":null,"content_length":"24386","record_id":"<urn:uuid:968e7468-5c65-4e92-b63b-97b081ab1482>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00023-ip-10-147-4-33.ec2.internal.warc.gz"}
Funny Math Quotes and Sayings A mathematician is a scientist who can figure out anything except such simple things as squaring the circle and trisecting an angle. Pure mathematics is, in its way, the poetry of logical ideas. Mathematics are well and good but nature keeps dragging us around by the nose. Black holes result from God dividing the universe by zero. Mathematics - the unshaken Foundation of Sciences, and the plentiful Fountain of Advantage to human affairs. I don't agree with mathematics; the sum total of zeros is a frightening figure. If people do not believe that mathematics is simple, it is only because they do not realize how complicated life is. Math is radical! If you think dogs can't count, try putting three dog biscuits in your pocket and then giving Fido only two of them. It is a mathematical fact that fifty percent of all doctors graduate in the bottom half of their class. Arithmetic is where numbers fly like pigeons in and out of your head. Arithmetic is numbers you squeeze from your head to your hand to your pencil to your paper till you get the answer. Even stranger things have happened; and perhaps the strangest of all is the marvel that mathematics should be possible to a race akin to the apes. So if a man's wit be wandering, let him study the mathematics; for in demonstrations, if his wit be called away never so little, he must begin again. The essence of mathematics is not to make simple things complicated, but to make complicated things simple. The human mind has never invented a labor-saving machine equal to algebra. The mathematics are distinguished by a particular privilege, that is, in the course of ages, they may always advance and can never recede. Go down deep enough into anything and you will find mathematics. ~Dean Schlicter It is not the job of mathematicians... to do correct arithmetical operations. It is the job of bank accountants. Trigonometry is a sine of the times. A mathematician is a device for turning coffee into theorems. Mathematics is as much an aspect of culture as it is a collection of algorithms. Sometimes it is useful to know how large your zero is. The hardest arithmetic to master is that which enables us to count our blessings. Mathematics is the only good metaphysics. The most savage controversies are those about matters as to which there is no good evidence either way. Persecution is used in theology, not in arithmetic. Music is the pleasure the human mind experiences from counting without being aware that it is counting. How many times can you subtract 7 from 83, and what is left afterwards? You can subtract it as many times as you want, and it leaves 76 every time. But mathematics is the sister, as well as the servant, of the arts and is touched with the same madness and genius. Still more astonishing is that world of rigorous fantasy we call mathematics. Mathematics is the supreme judge; from its decisions there is no appeal. Although he may not always recognize his bondage, modern man lives under a tyranny of numbers. God does not care about our mathematical difficulties; He integrates empirically. If there is a God, he's a great mathematician. The different branches of Arithmetic - Ambition, Distraction, Uglification, and Derision. Geometry is not true, it is advantageous. Infinity is a floorless room without walls or ceiling. God is real, unless declared integer. If a healthy minded person takes an interest in science, he gets busy with his mathematics and haunts the laboratory. Proof is an idol before whom the pure mathematician tortures himself. In the binary system we count on our fists instead of on our fingers. A man has one hundred dollars and you leave him with two dollars. That's subtraction. Why do we believe that in all matters the odd numbers are more powerful? One of the endlessly alluring aspects of mathematics is that its thorniest paradoxes have a way of blooming into beautiful theories. The man ignorant of mathematics will be increasingly limited in his grasp of the main forces of civilization. Although I am almost illiterate mathematically, I grasped very early in life that any one who can count to ten can count upward indefinitely if he is fool enough to do so. Nature does not count nor do integers occur in nature. Man made them all, integers and all the rest, Kronecker to the contrary notwithstanding. Labels: Math Quotes 0 Comments: Subscribe to Post Comments [Atom] Links to this post:
{"url":"http://coolfunnyquotes.blogspot.com/2007/09/funny-math-quotes-and-sayings.html","timestamp":"2014-04-24T23:49:34Z","content_type":null,"content_length":"24898","record_id":"<urn:uuid:be55aac0-b6f9-42fe-a290-bf4b94665f51>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00380-ip-10-147-4-33.ec2.internal.warc.gz"}
Two men and a thread. Here, I have a question. 2 persons are holding a rope of negligible weight tightly at its ends so that it is horizontal. A 30 kg weight is attached to the rope at the mid-point which now no longer remains horizontal. What is the minimum tension required to completely straighten the rope? What I got as the solution is that the tension would not be defined or lets say it would be infinite. Well this answer seems reasonable to me. What it actually means is that practically the rope will never straighten. It will be bent by some amount, however small, until there is some weight. But, my friends say that my answer is absurd, as one can see by experimenting that the rope will straighten. But I say that it's actually not happening. When we do experiments with normal rope and small weights, and apply the tension which seems to straighten the rope, the deformation is so small that naked eye cannot observe it. I have calculated this mathematically. But still my friends refuse to accept the answer. So, I need a little help with this. If I'm really wrong, can somebody let me know what's the mistake?
{"url":"http://www.physicsforums.com/showthread.php?t=156038","timestamp":"2014-04-21T04:46:47Z","content_type":null,"content_length":"35572","record_id":"<urn:uuid:407eea4f-7bc3-42df-b8b7-7c7830c24dae>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00519-ip-10-147-4-33.ec2.internal.warc.gz"}
what is the limit of lim x->p/2 (tan(x-(p/2)))/(x-(p/2)+cos(x))note that x approach to (p/2) and lim x->p/2... - Homework Help - eNotes.com what is the limit of lim x->p/2 (tan(x-(p/2)))/(x-(p/2)+cos(x)) note that x approach to (p/2) and lim x->p/2 (tan(x-(p/2)))/(x-(p/2)+cos(x)) not lim x->p/2 (tan(x-(p/2)))/(x-(p/2)) + cos(x) If you try to evaluate the limit straight away, you get an indeterminate answer 0/0. In such situations you can use the l'hopitals rule. `lim_(x-gta)(f(x))/(g(x)) = lim_(x-gta)(f'(x))/(g'(x))` `f(x) = tan(x-pi/2)` So, `f'(x) = sec^2(x-pi/2)` `g(x) = (x-pi/2)+cos(x)` `g'(x) = 1-sin(x)` `lim_(x-gtpi/2)tan(x-pi/2)/((x-pi/2)+cos(x)) = lim_(x->pi/2)(sec^2(x-pi/2))/(1-cos(x))` `lim_(x->pi/2)(sec^2(x-pi/2))/(1-cos(x)) = 1/(1-1) = +oo` `lim_(x-gtpi/2)tan(x-pi/2)/((x-pi/2)+cos(x)) = +oo` what about the right 0 and left of 0. How did you know the sign of 0 to say +infini or -infini ? There is a small mistake there. It should be corrected as follows, `lim_(x-gtpi/2)tan(x-pi/2)/((x-pi/2)+cos(x)) = lim_(x->pi/2)(sec^2(x-pi/2))/(1-sin(x))` ` lim_(x->pi/2)(sec^2(x-pi/2))/(1-sin(x)) = 1/(1-1) = +oo` `lim_(x-gtpi/2)tan(x-pi/2)/((x-pi/2)+cos(x)) = +oo` Now if you look at (1-sin(x)), the maximum sin(x) can achieve is 1, that is at pi/2, but when x is approaching to pi/2, sin(x) is not 1, it is a value little below 1. So (1- sin(x)) is positive. Therefore the limit is + infinity. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/what-limit-lim-x-gt-p-2-tan-x-p-2-x-p-2-cos-x-336458","timestamp":"2014-04-17T22:14:24Z","content_type":null,"content_length":"30303","record_id":"<urn:uuid:36a32841-1d4a-442a-ac5c-ea27cb79025a>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00326-ip-10-147-4-33.ec2.internal.warc.gz"}
GEAR » Climashield non-siliconized, polyester continuous filament insulation claims highest Clo/Kg -- BackpackingLight.com Forums Forum Index » GEAR » Climashield non-siliconized, polyester continuous filament insulation claims highest Clo/Kg David Climashield non-siliconized, polyester continuous filament insulation claims highest Clo/Kg Western Nonwovens. Inc. is the manufacturer of the Climashield family of non-siliconized, polyester continuous filament insulation. It claims that it has the highest thermal efficiency of any fill. Bill Climashield non-siliconized..... David, You can buy both the Climashield HL and XP. I got some XP that was listed as 2.7oz per sq yard. This might be what the "Fact Sheet" is calling 3oz. but I have no real idea about Thru-Hiker.com has the HL listed on his web site for sale at $12.95 a yard. He lists the CLO for the HL at 0.68. I have been told that the CLO for the 1.8 PG Delta is around 0.77 and not the higher number posted here sometime ago. I also have a bunch of the 1.8 PG Delta. I have no idea what the CLO valve gives me and that higher number may even have been for something different than for the weight stuff I have. This is why I like to work with 800+ Down. I just unrolled the XP and the loft seems to be from about .75inch to 1.25 inch in loft. I will check it later when it has a chance to relax. Miguel Re: Climashield non-siliconized..... I'm interested in making an underquilt/ serape and would like to use, if possible, pertex quantum and polarguard delta, but have no idea where I might source these materials. Bill, may I ask where you got the PG Delta, and would anyone know where to get Pertex Quantum? Bill Climashield non-siliconized..... Hi Miguel, How are you doing? I think the last time I read your Blog you were not (health wise) a happy camper. Hope thinks are under better control for you. Back in late Summer 2004 someone I know in the Outdoor stuff business had made a production run of a few items using Pertex Quantum and PG Delta. Some of both materials were left over and were what I would say "quietly" put on sale. Three colors of the Pertex Q with two of these colors as "seconds". I think the seconds material may have been used to get the products passed the pre-production testing phase but that might be wrong. Anyway I had exchanged a few emails with this person and had asked about Pertex Quantum and PG Delta for a few MYOG things I wanted to try. I got a reply about the coming clearance of the extra material and that I would get an email when they were ready to sell the stuff. In Oct 2004 I was able to order 10 yards of both Pertex Quantum (5 yards of 2 colors in seconds) and 10 yards of PG Delta. It seems to have been a one time only opportunity as I don't think any more of these materials have ever be sold like this since. At the time my sewing skills were not good enough to try on this material and the knowledge that I may never be able to get more caused me to pack it away for a later day. Over time and up to now I have never heard of anymore being sold to someone like us MYOG folks. My suggestion and what I have done is bought and tried some Momentum90 (0.9 oz (real weight is 1.1oz per sq yard) Downproof Ultralite Taffeta - Black (60 inch width) $11.95 a yard from Thru-Hiker.com. This material (to me) seems about the same as the Pertex Quantum I have and is just a little heavier. It is really nice and might be worth asking for a sample. Thru-Hiker also sells Epic Maibu which is also very nice but weighs 1.7oz per sq yard. If you look at the Thru-Hiker.com web site check out their Climashield HL as a possible replacement for the PG Delta. Having said all of the above. I have a $135 suggestion or for $125/$129 Both of these sleeping bag are on sale and use PG Delta. You might be able to modify one of these bags to make what you need. The bags as they are sold are not to heavy and by the time you do a little cutting and re-sewing you might be surprised how light your underquilt ends up. I have been thinking for awhile that I need to buy one of these bags that are on sale for the PG Delta. Moontrail is here in San Antonio and I may go by next week and see if I can tell how they are made. The Optima bag says it has 33oz of fill. That seems to be about 17 sq yards of PG Delta. I have never taken a sleeping bag apart. Richard Climashield non-siliconized You have given a lot of valuable information to this forum… thanks. Hopefully my response to your questions will likewise be of value to you. Your post stated in part, “I have no idea what the CLO valve gives me…” I would like to first attempt to describe the insulation measurement terms: Thermal Conductivity – This is an inherent property of a material. Thermal Conductivity differs with each substance and may depend upon on structure, density, humidity, pressure and temperature. This number is always reported for a FIXED THICKNESS and FIXED SET OF TEMPERATURES, as well as keeping the other variables fixed. W/mK (most common), (cal/sec)/(cm2 C/cm), or BTU in /hrft2F are the optional ways of expressing the thermal conductivity. You can multiply a known thermal conductivity value by the appropriate constant to easily convert to another thermal conductivity expression. The following W/mK numbers are relevant to answer your questions: Water at 20 C ( 68F) = .580 (This is why you don’t want it in your insulation) Climashield HL = .089 Insulators = <0.065 (Construction industry standard) Climashield XP = .043 Polarguard Delta = .041 Air at 0 C (32F) =.024 (The unmoving air is what provides the insulation, not the fiber. The more trapped air, or loft, the better) Thermal Resistance – This number is always reported for the variable insulation THICKNESS. It is calculated from the thermal conductivity number. 1 divided by the thermal conductivity and then multiplied by the appropriate number for the variable THICKNESS. R Value, m2K/W, clo, and TOG are the optional ways of expressing thermal resistance. You can multiply by the appropriate constant to easily convert from one type of thermal resistance expression to another. The following R value, m2K/W, and clo values are for a 1” insulation THICKNESS. For example, 2” of an insulation’s thermal resistance can be determined by multiplying the 1” value by 2 and a ½” sample can be determined by multiplying the 1” value by ½. Water = R value of .249, m2K/W value of .044, clo value of .283 Climashield HL = R value of 1.613, m2K/W value of .284, clo value of 2.846 Insulators = > R value of 2.217, m2K/W value of .391, clo value of 2.524 Climashield XP = R value of 3.382, m2K/W value of .596, clo value of 3.850 Polarguard Delta = R value of 3.515, m2K/W value of .620, clo value of 4.001 Air at 0 C (32F) R value of 6.005, m2K/W value of 1.058, clo value of 6.836 clo/oz – We need to look up WEIGHT yd2 for the specific insulation to determine the clo/oz per yd2. This value is normally just referred to as just clo/oz with the y2 being implied. clo is an easy number to grasp its relevance because 1 clo is the insulation value of an average man’s dress suit. The Climashield XP clo/oz is calculated by taking the 1”THICKNESS clo value I previously listed as being 3.850 and dividing it by the oz yd/2 for 1” THICKNESS which is 5. Reference http://www.climashield.com/pdf/Climashield_HL_Spec_Fact_Sheet_for_CS_Our_Products.pdf for the insulation weight and thickness. 3.850 / 5 = .77. Please note that the Climashield URL reference table number for clo/oz is also .77. A great source for weight and thickness of various other types of insulations is the BPL reviews of synthetic jackets and belay parkas since they state the insulation used and the actual loft that BPL measured. Your post also stated in part, “I have been told that the CLO for the 1.8 PG Delta is around 0.77 and not the higher number posted here sometime ago.” I was the one who posted the higher I will now calculate the PG Delta clo/oz value using the same procedure I did for Climashield XP. The PG Delta clo/oz is calculated by taking the 1”THICKNESS clo value I previously listed as being 4.001 and dividing it by the oz yd/2 for 1” THICKNESS which is 2.73. This 2.73 thickness value was calculated by taking BPL PG Delta loft measurement for 3 oz/yd2 of 1.1”. See the GoLite Belay Parka measurements in http://www.backpackinglight.com/cgi-bin/backpackinglight/high_loft_synthetic_belay_jackets_2005_review_summary.html?print=1. The adjusted WEIGHT for 1” of insulation, if it were manufactured, would be 4.001/(1/1.1)*3 = 1.47. To do a simple sanity check of the calculations just note that a 1” THICKNESS yd2 of Climashield WEIGHS 5 oz (Climashield URL) and a 1.1” PG Delta THICKNESS yd2 weighs 3oz (BPL URL). The PG Delta clo/oz is more than double what the Climashield value is and so the clo/oz should be more than double assuming the thermal conductivity values are the same. In fact the thermal conductivity value for PG Delta is better than Climashield. It should be no surprise to anyone that Climashield HL or XP insulated garments are not what Ryan and group will be taking to Bill, you stated in part, “I have been told that the CLO for the 1.8 PG Delta is around 0.77…” I calculate the PG Delta 1.8 oz clo value as 1.2. Remember 1 clo is the warmth you would get wearing a business suit. You can test this clo value subjectively if you have a Cocoon pullover. If it feels warmer than wearing a business suit, then my calculation is probably correct. If it is provides much less than the warmth of a business suit then your source is probably correct. Richard Nisley Bill Climashield non-siliconized..... Richard, Thanks for taking the time to explain this, again. I will need to read your comments a few times and then I may understand this all a little better. As I jump straight to the chase, so to speak, you should have guessed that I am still on and have been on a quest to find a way to get more PG Delta. Taking apart a sleeping bag that is on sale to get more should convince most that I think it is the way to go. I believe from experience, mine but mostly others, that the PG Delta/Pertex Q used for the Cocoon line of items is the current answer for rainy weather and have both the Cocoon pullover and the Pants. I am waiting for the next items in the Cocoon line. I just never really understood the tech. side of the answer why. So after all that, what is the next best insulation when a MYOG person can't buy PG Delta and wants to make some things vs buy them? Is it worth the effort to salvage it from other things such as a sleeping bag? Richard Climashield non-siliconized PG Delta has the highest clo/oz value at .68. My original calculation was based on erroneous BPL loft data for a cacluation I used. I suggest you contact the sales department at Western Nonwovens to discuss your small quantity PG Delta purchase options. Their contact information is as follows: Western Nonwovens 641 Northpark Drive Clinton, TN 37716 Ph: 865.463.6184 Fax: 865.463.6188 Email: InfoPI@WesternNonwovens.com The next two best synthetic clo/oz values are: Thermolite Extreme - .917 (better than Polarguard 3D only if single layer insulation is less than 1") Polarguard 3D - .909 (more durable than PG Delta) OWF's "Polyester Continuous Filament" is Polarguard 3D. They're not permitted to use the name in print, but if you talk to the nice ladies on the phone they'll tell you. Last summer they started closing it out (and they had over 500 yds in stock), but it's still listed on their insulation page @ http://www.owfinc.com/Fabrics/insulation.asp Dondo Re: Climashield non-siliconized Thanks for your post. It took me several slow readings but I think I'm beginning to understand where your numbers are coming from. Have you calculated the clo/oz values for other common insulations such as Primaloft One, Primaloft Sport, and Exceloft? Richard Nisley Climashield non-siliconized Primaloft One - .840 Primaloft Sport - .740 Exceloft - .683 Dondo . Re: Climashield non-siliconized paul johnson Re: Climashield non-siliconized Great info. Many thanks for the very informative posts. BTW, how did you get the CLO values for Exceloft? I could never get Montbell to give them to me. Richard Climashield non-siliconized I knew the W/m K value was .040. I then used the BPL data in bin/backpackinglight/2004_ultralight_synthetic_insulating_jackets_vests_review.html to determine the loft of the 1.8 oz/yd2 was .3". From there I just used the clo/oz calculation methodology that I previously posted. Bill Fornshell Climashield non-siliconized..... I was able to talk to Brian Emanuel "Dir of Sales - Western NonWovens" a short while ago. We talked about a lot of things reference insulation. When I asked about the clo valve of PD Delta he said it was between .67 and .68. He said he would try and look at this thread. I still don't fully understand the "clo" thing but I do know much more than I did at the start of this thread. Richard Climashield non-siliconized..... My Polarguard Delta clo/oz calculation was based in part on the BPL Golite Belay parka review. BPL states that this parka is using 3 oz/yd2 PG Delta insulation and providing 1.1" of loft. If anyone owns this garment please verify the BPL review info and post to this forum. For now I will assume that BPL made a mistake in their Golite Belay Parka review insulation weight at URL http://www.backpackinglight.com/cgi-bin/backpackinglight/ high_loft_synthetic_belay_jackets_2005_review_summary.html?print=1 but their Patagonia Micro Puff Pullover review values listed at URL http://www.backpackinglight.com/cgi-bin/backpackinglight /2004_ultralight_synthetic_insulating_jackets_vests_review.html are correct. BPL lists the Micro Puff as using 2.6 oz/yd2 PG Delta and having a loft of .6". This translates to a revised clo/ oz of .923 for PG Delta.
{"url":"http://www.backpackinglight.com/cgi-bin/backpackinglight/forums/thread_display.html?forum_thread_id=3230&skip_to_post=23778","timestamp":"2014-04-20T19:24:18Z","content_type":null,"content_length":"57812","record_id":"<urn:uuid:55f396ca-302d-4480-8b88-bedebe568c3c>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00232-ip-10-147-4-33.ec2.internal.warc.gz"}
The Progression of Time: Scale Expanding Cosmos links General Relativity, Quantum Theory In a recently published book titled "The Progression of Time - How the expansion of space and time forms our world and powers the universe", C. Johan Masreliez, a retired Engineer passionate about physics and cosmology, introduces the concept of a fifth dimension beyond four-dimensional spacetime. The Scale Expanding Cosmos (SEC) model overcomes some serious limitations of the Standard Cosmological Model. It eliminates the need for a Big Bang creation-out-of-nothing event at the beginning of the universe that is making today's cosmology little more than an article of faith. It explains what motion is and how time progresses. Inertia becomes understandable, being modeled in the SEC theory as a curvature of the 4-dimensional space-time continuum, in a manner very similar to how Einstein describes gravity in his General Theory of Relativity. You can find the book on Amazon at www.amazon.com/Progression-Time-expansion-powers-universe/dp/1456574345/ The dynamically expanding scale dimension explains the seemingly endless energy supply of the universe, saving us from gradual decline into an ignominious heat death. It also resolves several of the paradoxes of Einstein's Special Relativity and provides a stable cosmic frame of reference, something Einstein could never quite get to grips with, in addition to allowing Quantum Theory to be derived from the equations of General Relativity. Quite a revolution in our understanding of the universe, the new model might have a hard time gaining acceptance by physicists, but it is an important step in advancing our understanding of the mechanics of the things physics is supposed to explain. - - - To give you an idea of the breadth and depth of the work, here is a short summary of its chapters: Chapter 1: Spacetime-Scale Equivalence introduces the concept of cosmological scale-equivalence as a previously unexplored additional degree of freedom. It implies a new aspect of motion since objects may move not only in the four dimensions of space and time but also in scale, making the world five-dimensional. Chapter 2: Justifying the Scale Expanding Cosmos (SEC) The concept of scale-equivalence in the context of cosmological expansion is explored by introducing the SEC theory together with related subjects. Some may object to the use of the word "theory", which usually is reserved for more established ideas. However, it is a theory in the same sense as the Standard Cosmological Model is a theory. The summary of the SEC theory in this chapter is intended for the reader who might not be interested in the observational and theoretical details justifying the SEC. Chapter 3: A Few Unfamiliar Aspects of the SEC are introduced since the SEC theory implies new physics. Chapter 4: Problems with the SCM and Their Resolutions highlights several problems with the Standard Cosmological Model that are resolved by the SEC model. Chapter 5: Observational Evidence in the Solar System presents evidence for the new model found in the solar system. Chapter 6: New Physics of the SEC Model presents new aspects implied by the SEC that to some extent would revise known epistemology. Chapter 7: Motion and the Origin of Inertia offers an explanation of what causes the inertial force and suggests that the theory of Special Relativity might be in need of revision. Chapter 8: Quantum Theory and Its Link to General Relativity introduces a previously missing link between the theory of general relativity and quantum theory by showing how quantum theory may be derived from general relativity. It also gives a physical, ontological interpretation to the quantum mechanical wave-functions. Chapter 9: The SEC in Relation to Current Physics places the proposed theory in a historical perspective. Chapter 10: Bits and Pieces presents a number of the author's personal comments and ruminations over the years. Chapter 11: A Missing Dimension summarizes the book by suggesting a new worldview, which might forever change our perception of the world. Throughout the book, reference is made to published papers, which may be found at the end of the book, and which may be consulted for further study. There are no less than 14 papers the author has published in various journals before attempting the writing of this book. I am listing those papers here for convenience. Masreliez, C. J. "The scale expanding cosmos." Astrophysics and Space Science 336:399-447. (1999). Masreliez, C. J. "Scale expanding cosmos theory I - An introduction." Apeiron 11 (3): 99-133 (2004a). Masreliez, C. J. "Scale expanding cosmos theory II - Cosmic drag." Apeiron 11 (4): 1-29. (2004b). Masreliez, C. J. "Scale expanding cosmos theory III - Gravitation." Apeiron 11 (4): 30-51. (2004c). Masreliez, C. J. "Scale expanding cosmos theory IV - A possible link between quantum theory and quantum mechanics." Apeiron 12 (1): 89-121. (2005a). Masreliez, C. J. "A cosmological explanation to the Pioneer anomaly." Astrophysics and Space Science 299:83-108. (2005b). Masreliez, C. J. "On the origin of inertial force." Apeiron 13 (1): 43-77, (2006a). Masreliez, C. J. "The scale expanding cosmos theory." Nexus Magazine 13 (June-July): 39-44. (2006b) Masreliez, C. J. "Does cosmological scale-expansion explain the universe?" Physics Essays 19:91-122. (2006c) Masreliez, C. J. "Motion, inertia and special relativity - a novel perspective." Physica Scripta 75:119-125. (2007a) Masreliez, C. J. "Dynamic incremental scale transition with application to physics and cosmology." Physica Scripta 76:486-93. (2007b) Masreliez, C. J. "Special relativity and inertia in curved spacetime." Advanced Studies in Theoretical Physics 2:795-815. (2008) Masreliez, C. J. "Inertial field energy." Advanced Studies in Theoretical Physics 3:131-40. (2009) Masreliez, C. J. "Inertia and a fifth dimension - Special Relativity from a new Perspective", Astrophys Space Sci 326: 281-291, (2010) If you would like to stay in contact with the development of the SEC model and related subjects, please join (like) the following page on facebook:
{"url":"http://blog.hasslberger.com/2013/05/the_progression_of_time_scale.html","timestamp":"2014-04-21T04:32:27Z","content_type":null,"content_length":"35977","record_id":"<urn:uuid:3a029060-a19a-4e2c-87a6-6afb089d1881>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical English Usage - a Dictionary by Jerzy Trzeciak We leave the details to the reader. We leave to the reader the proof that f is indeed self-adjoint, and not merely symmetric. We leave it to the reader to verify that...... [Note that the it is necessary here.] For the convenience of the reader, we repeat the main points. [Or: For the reader's convenience] The reader is cautioned that our notation is in conflict with that of [3]. The reader is assumed to be familiar with elementary K-theory. For more details we refer the reader to [4]. The interested reader is referred to [4] for further information. [Note the double r in referred.] At this point, the reader is urged to review the definitions of...... It may be worth reminding the reader that...... The reader might want to compare this remark with [2, Cor. 3]. Back to main page
{"url":"http://www.emis.de/monographs/Trzeciak/glossae/reader.html","timestamp":"2014-04-20T01:03:14Z","content_type":null,"content_length":"1590","record_id":"<urn:uuid:2676cdab-b9a7-4319-9dbe-24b1c3f682c1>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00252-ip-10-147-4-33.ec2.internal.warc.gz"}
Burke, VA Geometry Tutor Find a Burke, VA Geometry Tutor Hello, I am currently teaching at a high school. I teach in general education and special education classrooms. I have had great success in helping my students maximize their math learning and success. My SOL pass rate is always near the top in FCPS. Students enjoy working with me. I make math fun! 7 Subjects: including geometry, algebra 1, algebra 2, special needs ...Camp Community College, Smithfield, VA - Managed a professional and personable instructional environment for returning adults and new students. Prepared lectures, field trips, laboratory investigations, academic assessments, and scheduled guest speakers. 1995-1997 Mentor and Laboratory Assistan... 64 Subjects: including geometry, reading, chemistry, English ...Calculations for volume and area, as well as solutions for related problems are included. It also includes the theorem and proof aspects around triangles, their respective shapes and angles. My approach utilizes the theoretical concepts and ties them to real world examples to help the student translate from theory into practicality. 17 Subjects: including geometry, chemistry, ASVAB, calculus ...I have particular strengths in verbal reasoning and writing (13 on MCAT Verbal Reasoning, 780 writing on SAT, and 36 writing on ACT). Please note that due to the technical nature of MCAT tutoring I charge $75 per hour for MCAT prep. My tutoring style involves coaching the student into reaching t... 39 Subjects: including geometry, Spanish, chemistry, writing ...Most recently, I have tutored algebra. Algebra 2 is a step up from Algebra 1, the math used most often in chemistry. Algebra 2 is also an important precursor of higher level courses such as 5 Subjects: including geometry, chemistry, algebra 1, algebra 2 Related Burke, VA Tutors Burke, VA Accounting Tutors Burke, VA ACT Tutors Burke, VA Algebra Tutors Burke, VA Algebra 2 Tutors Burke, VA Calculus Tutors Burke, VA Geometry Tutors Burke, VA Math Tutors Burke, VA Prealgebra Tutors Burke, VA Precalculus Tutors Burke, VA SAT Tutors Burke, VA SAT Math Tutors Burke, VA Science Tutors Burke, VA Statistics Tutors Burke, VA Trigonometry Tutors
{"url":"http://www.purplemath.com/Burke_VA_Geometry_tutors.php","timestamp":"2014-04-17T21:30:04Z","content_type":null,"content_length":"23901","record_id":"<urn:uuid:3de818a3-c5a9-4e3c-8cfc-483f6d2a29de>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
AA Arithmetical Ability Test form No. 000 KP 0 A General, while arranging his men, (B) 77 A number, when divided successively (C) 58 A number, when divided by 296, (A) 1 The square root of (B) 2 The sum and product of two numbers (C) If (C) 2 If then (D) 0 is (A) 0.125 If then (A) 2 If then (C) 1 If then (B) The greatest number among (C) is (C) 1.8 is (B) The sum of all the digits of the numbers (A) 5050 A shopkeeper sells sugar in such a way (A) A person bought a horse and a carriage (C) 2008 A sells an article to B at 15% (A) Rs. 500 An article sold at a certain fixed price. (C) 35 A sells an article to B for Rs. 45,000 (C) 200/9 The cost price of an article is 80% (C) 10 A merchant has announced 25% rebate (D) 5 A merchant purchases a wristwatch for (B) 600 A reduction of 10% in the price of tea (C) Rs. 90 Ram donated 4% of his income (D) Rs. 10 000 If the length of a rectangle is (A) decreases by 1% Three spherical balls of radii 1 cm, 2 cm (C) 3cm If and (B) 8:27 If the length of a rectangle is increased (B) 15:14 7 years ago, the age (in years) of (D) 77 years Two numbers are such that their (C) 48 A, B, C are partners such that their (C) Rs. 4000 Three horses are tethered at 3 corners (C) 77 A wire, when bent in the form of (C) 154 cm square The ratio of the area of a sector (B) 25 cm The length of the diagonal of a cube (B) 24√3 If a sphere of radius r is divided 8πr2 square unit A sum of money, deposited at (A) 16 What is the difference between the (A) 10 The simple and compound interests (A) 6% A man can row against the current (D) 5:1 Two places A and B are 100 km (A) 60 kmph A can complete a piece of work (B) 15/2 Two pipes can fill an empty tank (B) 600 A batsman, in his 12th innings, (A) 41 The greatest number, that divides (C) 4 is (A) 1/2 The sum of the areas of the 10 squares, (A) 6085 The square root of (A) 15 If then (C) 9 If , then (A) 2 is (B) 1/7 is (C 1 If then (A) 4 The ...... (B) √5 - √3 If then (A) 0 If then (C) 4/3 If then is (D) (a-1)/(3-2a) The missing term in the sequence (D) 13 The wrong number in the sequence (B) 47 When the price of a toy was increased by 20% (A) 2% increase A person sold a horse at a gain of 15%. (C) Rs. 375 A sells an article to B at a gain of 25%, (A) Rs. 200 By selling an article for Rs. 21, a (A) Rs. 30 or Rs. 70 Half of 100 articles were sold at a profit of 20% (C) Rs. 20 The marked price of a clock is Rs. 3200. (C) 15% A dealer marks his goods 30% (A) Rs. 800 A shopkeeper wishes to give 5% (B) Rs. 110 Krishnamurthy earns Rs. 15000 answer not matching, Rs. 3600 Two numbers are respectively (A) 90 Population of a town increase 2.5% (B) 4.04 72% of the students of a certain (C) 250 Rs. 1050 are divided among A, B (B) Rs. 300 The sides of a right-angled triangle (A) 39cm Two numbers are in the ratio 5:6. (C) 120 If and (C) 1:1:1 A and B enter into partnership (D) 12 months What is the length of the radius (B) 6cm If the measures of a diagonal (B) 24 cm The number of coins, each of (A) 640 If the radius of a sphere is increased (B) 15m A right circular cylinder is circumscribing (C) 2:3 A man invested of his capital (C) Rs. 6600 The compound interest on Rs. 6250 (A) Rs. 772.50 A sum of money let at compound (D) 10% If A travels to his school from his (B) 2 km A train travelling with a speed of (C) 36 kmph A and B together can complete a (D) 20 4 men and 6 women together can (D) 20 The average of two numbers A and B is (B) 22 If the total income of the family for (A) Rs. 15 000 Maximum expenditure of the family (C) Others The saving of the family for the (B) Housing The percentage of the income which (D) 27 If the total income of the family was (D) Rs. 34,500 The number of persons killed in coal (B) 25 In which year, minimum number (D) 2009 In which year, maximum number of (A) 2006 In which year, minimum number of persons (B) 2007 In a year, on average, how many persons (C) 1212.5 No comments:
{"url":"http://competition4job.blogspot.com/2010/08/combined-graduate-level-tier-ii-math.html","timestamp":"2014-04-20T18:26:15Z","content_type":null,"content_length":"86832","record_id":"<urn:uuid:5d183a57-f6da-4413-9697-aa003d672219>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
PCMI @ MathForum 2013: Abstracts Park City Mathematics Institute High School [CalcReady] [Geometry] [ModelingAlg] [Technology] [TransformationGeo] [StatisticsH] [StatisticsL] [AssessingMPs] [Algebra] [ModelingGeo] [Parents] [GroupTasks] [LowThreshold] [Quadratics] Grade Level: 8-12 Subject: Algebra (or other HS courses focused on functions) Topic: Rate of Change Authors: Nathan Goza, Gabriel Rosenberg, Liem Tran This PD is motivated by the fact that Calculus students often struggle with the concept of derivatives because they lack understanding of rates of change. Student exposure to this idea starts as early as 6th grade, and their conceptual understanding should continue to grow throughout high school according to the CCSS. In this PD participants will attempt to solve a Calculus problem using their knowledge of rates of change and slope. Participants will then explore an Algebra lesson that helps develop conceptual understanding about rates while looking at specific standards that connect to the lesson. Participants will receive a guide to the progression of rate from 6th grade to Calculus. Finally they will discuss the connections between prior, present, and future learning goals as they think about how they teach the concept of rates of change. Participants will develop their ability to plan vertically for all the standards they teach. download zipped folder calcready2013-07-18.zip [username/password required] Grade Level: High School Subject: Geometry Topic: Recognizing mathematical practice in student work Authors: Marla Mattenson, Peter Sell, David Newhouse Recognizing mathematical practice in student work is the primary focus of this professional development. Teachers will engage in a geometry task from MARS (http://map.mathshell.org) individually, in pairs, and small groups to solve the task in multiple ways. Teachers will experience the task from a student perspective as well as through the lens of pedagogy. After teachers work on the task and share out, they complete a reflection slip. Next, teachers analyze four different student solutions for use of the 8 Mathematical Practices with emphasis on what feedback they would give a student to guide them in developing the practices further. Facilitator then leads a discussion to have teachers think deeper about the mathematical practices. Finally, the PD ends with possible mathematical extensions to the original math task as well as a recap of math practices used during the PD and teacher insights. download zipped folder geometry2013-07-18.zip [username/password required] Grade Level: HS Mathematics Subject: Algebra I Topic: Modeling Algebra Authors: Titin Suryati Sukmadewi, Andrew Richardson, Jake Leibold This workshop is designed to identify what mathematical modeling is and how it is different from problem solving. The workshop will first ask you to reflect on any prior knowledge you have on mathematical modeling and what makes a good modeling task. You will use your ideas on what modeling is as you work through an Algebra Task titled, "Cell Division", that is aligned to the CCSS Standards MP4 and F-LE. We will then look at examples of what is and what is not mathematical modeling and highlight how modeling is different from problem solving. download zipped folder modelingalg2013-07-18.zip [username/password required] Grade Level: 9 - 12 Subject: HS Mathematics Topic: Technology in the Classroom Authors: Debbie Seidell and Will Stafford Technology can be used to enrich classroom learning by giving students a chance to explore and experience mathematical situations. In this workshop, participants will play a classroom-style game involving graphs of polynomial functions, using the free online graphing program Desmos. They will then reflect on how the experience allowed them to experiment and try out ideas that develop an understanding of how parameters affect the shape of a graph. For students, this sort of concrete experience can lead to deeper understanding of abstract concepts as the unit progresses. Next, participants will graph a function by hand with some help from data points that they gather on the geometry program Geogebra. They then reflect on how students might draw connections between the geometry of a circle and the shape of a sine wave through this lesson. We discuss CCSS Mathematical Practice 1, making sense of problems, and Mathematical Practice 5, using technology to explore and deepen student understanding of concepts. Our examples come from Algebra II and Precalculus, but the ideas can be used in any subject of high school math. download zipped folder technology2013-07-18.zip [username/password required] Grade Level: High School Subject: Geometry course Topic: An introduction to Transformational Geometry and how to show congruence using Transformations, both by hand and with software. Authors: Clint Chan, Wendy Jennings, and Teri Hulbert This is a 60-90 minute PD intended to demonstrate the differences between traditional geometric approach to congruence and CCSS geometric approach using transformations. We explore the CCSSM for High School Geometry, namely HSG-CO. A.2 and HSG-CO.B.8 which deal with rigid motions of the plane and geometric congruence. The participants begin by using technology to perform transformations, and map one congruent shape to another using rigid motions. The participants will then use rigid motions to demonstrate that any mapping of one congruent plane figure to another can be accomplished by using a sequence of rigid motions. The next activity is an exploration of congruence with specific respect to the SAS theorem of triangle congruence done by hand. For workshops longer than 60 minutes, there are two additional congruence activities involving SSS and ASA and transformations. The three triangle congruence activities are fully explained in the Solution download zipped folder transformationalgeo2013-07-18.zip [username/password required] Grade Level: Upper High School Subject: Algebra2/IntegratedMath3 Topic: Using Probability to Make Informed Decisions Authors: Brian Shay and Eleanor Terry This session will have participants simulate activities, calculate probability, and make decisions based on these probabilities. Participants will experience two simulations where they will gain firsthand experience in sampling data, calculating probabilities, and discussing if their sample is consistent with given assumptions. Participants will also be solving problems using the Empirical Rule for Normal Distributions, as well as learning how to use the margin of error to approximate the mean of a normally distributed population from a random sample of that population. download zipped folder statsh2013-07-18.zip [username/password required] Grade Level: 9-12 Subject: HS Mathematics, Geometry, and Common Core Math 2 Topic: Exploring Conditional Probability Authors: Vicki Lyons, Phylicia Lockhart, Paul Winston This workshop is organized to help teachers be more knowledgeable of what organizing and interpreting data could look like in a classroom setting under the Common Core standards. Two-way tables are emphasized in the CCSSM. This workshop focuses on the construction and interpretation of two-way tables. One activity will allow teachers to make connections between various representations of data. Teachers can expect to use their representations to answer interesting probability questions. Marginal, joint, and conditional probabilities will be defined. Another activity develops the concept of independence by first looking at dependent relationships among data. The workshop focuses on the following Common Core State Standards: S-CP.4, S-CP.5, S-CP.6, and S-MD.7 download zipped folder statisticsL2013-07-18.zip [username/password required] Grade Level: High School Subject: HS Mathematics Topic: Identifying Evidence of Mathematical Practices in Student Work Authors: Katie Waddle, Miriam Cukier, Kieran Flahive, Jason Lang The inclusion of the eight Mathematical Practices in the Common Core standards provides teachers with a mandate to teach and assess problem solving strategies. In this workshop, we will focus on three of the Mathematical Practices. Teachers will become "experts" on these Practices by doing a close reading of their descriptions. Through a series of problems, participants will learn to look for evidence of these Practices in 1) their own problem solving 2) written student work, and 3) student discussion. By the end of the workshop, teachers will have identified characteristics of tasks that lead students to use the Mathematical Practices. download zipped folder assessingmps2013-07-18.zip [username/password required] Grade Level: 8th and 9th grade Algebra Subject: Algebra 1 Topic: Systems of Equations Authors: Anne Springfield & Jonathan LaManna Building Intuition through Multiple Representations with Systems of Equations This workshop is designed to present participants with a method to approach systems of equations intuitively with students and provide multiple access points to the topic. It focuses heavily on using visual models to reinforce mathematical concepts such as equivalence, and to provide opportunities for students to contextualize and decontextualize the mathematics. Participants will also learn to modify existing tasks to promote discussion, critical thinking and justification of methods in the classroom. download zipped folder algebra2013-07-18.zip [username/password required] Grade Level: HS Mathematics Subject: Geometry Topic: Modeling Geometry Authors: Elmer Calvelo, Andrew Richardson, Caitlin McCaffrey This workshop is designed to identify what mathematical modeling is and how it is different from problem solving. The workshop will first ask you to reflect on any prior knowledge you have on mathematical modeling and what makes a good modeling task. You will use your ideas on what modeling is as you work through a Geometry Task titled, "One for the Road", that is alligned to the first CCSS Modeling with Geometry Standard. We will then look at examples of what is and what is not mathematical modeling and highlight how modeling is different from problem solving. download zipped folder modelinggeo2013-07-18.zip [username/password required] Grade Level: All Subject: Mathematics Topic: Parents Overview of the Common Core Authors: Moe Burkhart, Heather Brown, Marty Schnepp The twofold purpose of this webinar is to give parents an overview of the Common Core State Standards for Mathematics (CCSSM), and help parents develop positive communication strategies with their child and teacher. The overview focuses on the content and practice components of the CCSSM, how they evolved and why they are important. Parents solve a problem presented in context and later abstractly. They will have access to a list of strategies and questions they can use at home with their child, and another list they can use to help them communicate successfully and positively with their child's teacher, including a video demonstration. download zipped folder parents2013-07-18.zip [username/password required] Grade Level: 9-12 Subject: Group Task Topic: : Implementing Group Tasks in the Classroom Authors: David Herron & Esther Song Group tasks provide a perfect environment for students to practice the Mathematical Practices outlined in the Common Core State Standards. Participants at this professional development will explore their own answers to the following questions: □ What makes a task well-suited for group work? □ How can I implement a group task effectively? During this ninety-minute session, there will be two major components. The first half will focus on working on a task in groups to understand what group work is like from a student perspective. Participants will then decide on the components of a "group worthy" task. The second half of the session will focus on implementation strategies for effective group tasks. Participants will analyze video footage of group tasks, critique the efficacy of different strategies, share their own best practices, and sift through resources provided by the facilitators to find those that are most helpful for their own school context. download zipped folder grouptasks2013-07-18.zip [username/password required] Grade Level: 9-12 (High School, potentially upper Middle School as well) Subject: HS Mathematics Topic: Identifying and Using Low Threshold and High Ceiling Tasks Authors: Constance Bowen, Usha Narra, Donna Young How can teachers engage students in learning mathematics so that they all achieve the high expectations put forth by the Common Core State Standards of Mathematics? One way would be to use low threshold/high ceiling tasks to entice students to participate in learning and to strive to learn more. This presentation gives definitions of low threshold and high ceiling tasks and allows teachers to experience tasks meant to have many access points in order to pave the way to higher mathematics. There are opportunities for teachers to reflect personally and to discuss the uses of these tasks in their classroom. Through transparency about how the tasks presented were adapted and through practice during the workshop, it is hoped that teachers will leave the workshop prepared to adapt tasks on their own. download zipped folder lowthreshold2013-07-18.zip [username/password required] Grade Level: Middle or High School Subject: Algebra 1 Topic: Introducing Quadratics Authors: Sarah Burns and Jennifer Outzs Introducing Quadratics: This activity is designed to assist departments or other groups of math teachers to understand how an emphasis on the standards for mathematical practice MP4 (model with mathematics) will enhance the way students learn about quadratic functions (specifically standard F.IF.4). We will investigate the value of using an "anchor" problem to ground student understanding of quadratics in a real-world scenario throughout the development of the unit. We include a PowerPoint and facilitator guide for a 1 1/2-hour professional development session. The presentation focuses on how to approach the content in such a way that promotes meaning and context. We conclude by looking at student and teacher mindsets in the common core. download zipped folder quadratics2013-07-18.zip [username/password required] Back to High School Index PCMI@MathForum Home || IAS/PCMI Home © 2001 - 2013 Park City Mathematics Institute IAS/Park City Mathematics Institute is an outreach program of the School of Mathematics at the Institute for Advanced Study, Einstein Drive, Princeton, NJ 08540 Send questions or comments to: Suzanne Alejandre and Jim King With program support provided by Math for America This material is based upon work supported by the National Science Foundation under Grant No. 0314808. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.
{"url":"http://mathforum.org/pcmi/hstp/sum2013/wg/high/abstract.high.html","timestamp":"2014-04-16T10:39:22Z","content_type":null,"content_length":"20607","record_id":"<urn:uuid:23eb9cf1-c82b-439a-bb27-440b745d3fdf>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00398-ip-10-147-4-33.ec2.internal.warc.gz"}
Geostatistics: Power Law Many natural hazards or geological phenomena satisfy power-law (fractal) frequency-size statistics to a good approximation for medium and large events. Examples include earthquakes, volcanic eruptions, asteroid impacts, landslides, and forest fires. So my questions is that, Why geological phenomena exhibit a power law distribution, and how special is that kind of statistical distribution?
{"url":"http://www.physicsforums.com/showthread.php?t=698351","timestamp":"2014-04-18T23:28:31Z","content_type":null,"content_length":"21632","record_id":"<urn:uuid:61b6cf87-c807-434c-8e5d-1178b3652ebc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
Geometry Tutors Harrington Park, NJ 07640 Math Tutor - Ph.D. - all areas. ...I am currently tutoring two other students in calculus, one in AP calculus, the other in university calculus. I have a doctorate in math. Such a degree requires much facility in . Euclidean is ultimately a system of axioms and a body of facts,... Offering 10+ subjects including geometry
{"url":"http://www.wyzant.com/Hillsdale_NJ_Geometry_tutors.aspx","timestamp":"2014-04-21T13:13:27Z","content_type":null,"content_length":"60489","record_id":"<urn:uuid:501c4e0b-0b22-41c8-93e8-8ae0f36a0786>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00045-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] Kind-agnostic type classes David Menendez dave at zednenem.com Fri Oct 3 13:36:45 EDT 2008 On Fri, Oct 3, 2008 at 9:49 AM, Luke Palmer <lrpalmer at gmail.com> wrote: > On Fri, Oct 3, 2008 at 4:22 AM, Florian Weimer <fw at deneb.enyo.de> wrote: >> I'm trying to encode a well-known, informally-specified type system in >> Haskell. What causes problems for me is that type classes force types >> to be of a specific kind. The system I'm targeting however assumes that >> its equivalent of type classes are kind-agnositic. > There is no choice of kinds, they are forced by the methods (since the > kind of an actual argument is * by definition). But see below. >> For instance, I've got >> class Assignable a where >> assign :: a -> a -> IO () >> class Swappable a where >> swap :: a -> a -> IO () >> class CopyConstructible a where >> copy :: a -> IO a >> class (Assignable a, CopyConstructible a) => ContainerType a >> class (Swappable c, Assignable c, CopyConstructible c) => Container c where >> size :: (Num i, ContainerType t) => c t -> IO i > Which is illegal because the three above classes force c to be kind *, > but you're using it here as kind * -> *. > What you want is not this informal "kind-agnostic" classes so much as > quantification in constraints, I presume. This, if it were supported, > would solve your problem. > class (forall t. Swappable (c t), forall t. Assignable (c t), forall > t. CopyConstructible (c t)) => Contanter c where ... > Incidentally, you *can* do this if you go to a dictionary passing > style (because then you are providing the proofs, rather than asking > the compiler to infer them, which is probably undecidable (what isn't > ;-)). You don't necessarily need explicit dictionaries. For example, I've occasionally wanted to have a constraint (forall a. Show a => Show (f a)). One fairly simple way to do this to declare a new class. class Show1 f where showsPrec1 :: (Show a) => Int -> f a -> ShowS instance Show1 [] where showsPrec1 = showsPrec The same technique is used in Data.Typeable. Dave Menendez <dave at zednenem.com> More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2008-October/048668.html","timestamp":"2014-04-16T17:34:16Z","content_type":null,"content_length":"5579","record_id":"<urn:uuid:c30c8d72-a76c-481e-8518-7337450d4a78>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00447-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimization parabola/triangle April 26th 2009, 05:29 PM #1 Feb 2009 Optimization parabola/triangle An isoceles triangle with vertices (0,0), (x,y) & (-x,y) is inscribed in the parabola y=4-x^2 What x & y will give triangle max area and what is that area? I'm unsure of my first move to get an A(x). Edit: also to I need to split my triangle into halves for this? $A = \frac{1}{2}bh$ $A(x) = \frac{1}{2}(2x)(y) = x(4-x^2)$ great thanks I was afraid it was that simple. I found the A'(x) to get to the CV points. $<br /> A'(x) = -3x^2+4<br />$ I know important parts of the parabola are max at (0,4) and y=0 at x=-2 & 2. So I know where the triangle contained. Do I plug the x values into A'(x)? I ended up with (-1.15, 2.68) and (1.15, 2.68) and an area of 3.08 units cubed. My graph of A'(x) and critical points with the original function all seem to fit. Is this correct? April 26th 2009, 05:34 PM #2 April 26th 2009, 06:07 PM #3 Feb 2009 April 26th 2009, 07:11 PM #4 Feb 2009
{"url":"http://mathhelpforum.com/calculus/85856-optimization-parabola-triangle.html","timestamp":"2014-04-19T17:05:10Z","content_type":null,"content_length":"38752","record_id":"<urn:uuid:108e41e4-f028-4363-a3c8-a3fb26ae4d09>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00237-ip-10-147-4-33.ec2.internal.warc.gz"}
Untitled Document Cartesian plane A system consisting of two perpendicular number lines where a point is represented by an ordered pair of real numbers. The point of intersection is call the origin (0, 0). For example, in the picture shown below, we plot (3, 2), where 3 is the x-coordinate and 2 is the y-coordinate.. Cartesian coordinates were first introduced in the 17th century by French mathematician, Rene Descartes Center of a Circle Point from which all the points on its circumference are equidistant. Prefix denoting a fraction of 100th of the initial units of the international system. Metric System Another word for graph A straight line segment connecting two points on a curve or surface and lying between them. The distance around a circle, the circumference for a circle of radius r is Describing the situation in which a polygon can be completely enclosed by a circle that passed through all the vertices of the polygon. The term can be extended to other figures, including solids. Closed interval A set of real numbers lying between and including its endpoints, for example A numerical or constant multiplier of the variables in an algebraic term. Example : Coefficient of a Collinear points Three or more points in the plane are collinear if there exists a line passing through them. A vertical array of numbers. A selection of a subset of objects from a set without regarding order. The number of selections of r different items from n distinguishable items when the order of selection is ignored. Denoted by Common denominator An integer that is exactly divisible by all the denominators of a set of fractions. For example the fractions Common factor A number or a polynomial that is a factor of each of a given set. Common multiple An integer or a polynomial that is an integral multiple of each in a given set. Commutative law Describing the property of a binary operation for which the result does not depend on the order that is performed. Addition and multiplication of numbers are commutative operations since Complete the square To solve quadratic equations by replacing the quadratic expression Quadratic Equations Complex fraction A fraction whose numerator or denominator contains fractions. Working With Fractions A number, a letter or an expression that that has a fixed value. A system of equations is consistent if it has one or infinitely many solutions. For example the equations x+ y = 5 and 2x+y=7 are consistent , since they are satisfied by x=2 and y=3. For the contrary the equations x + y =5 and x+ y =7 are inconsistent, since there is no pair of values of x and y that satisfies both equations simultaneously, they are parallel lines. System of Linear Equations A solid with six identical square sides that are mutually perpendicular. Areas and Volumes Cube root The number, quantity, or expression whose cube is a given number, quantity, or expression. For example,
{"url":"http://www.nvcc.edu/alexandria/science/math/mathproject/Definitions/C/C.htm","timestamp":"2014-04-20T16:54:25Z","content_type":null,"content_length":"21408","record_id":"<urn:uuid:30a19bdf-8372-4bbe-890c-11cee9531c90>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00648-ip-10-147-4-33.ec2.internal.warc.gz"}
Excel F.INV.RT Function The Excel F.INV.RT Function Related Functions: F.DIST.RT F.INV Basic Description The Excel F.INV.RT function calculates the inverse of the (right-tailed) F Probability Distribution for a specified probability. The function is new to Excel 2010 and so is not available in earlier versions of Excel. However the F.Inv.Rt function is simply a new version of the Finv function, which is available in earlier versions of Excel. The format of the function is : F.INV.RT( probability, deg_freedom1, deg_freedom2 ) Where the function arguments are: probability - The probability (between 0 and 1) at which to evaluate the inverse F Probability Distribution deg_freedom1 - An integer specifying the numerator degrees of freedom deg_freedom2 - An integer specifying the denominator degrees of freedom Note that, if either deg_freedom1 or deg_freedom2 are decimal numbers, these are truncated to integers by Excel. The Excel F.Inv.Rt function is the inverse of the Excel F.Dist.Rt function. I.e. if probability = F.DIST.RT( x, deg_freedom1, deg_freedom2 ) x = F.INV.RT( probability, deg_freedom1, deg_freedom2 ) F.Inv.Rt Function Example The chart on the right shows the inverse of the right-tailed F Probability Distribution with the numerator degrees of freedom equal to 1 and the denominator degrees of freedom equal to 2. Inverse F Prob. Dist. with deg_freedom1 = 1 & If you want to calculate the value of this function for a probability value of 0.2, this can be done using the Excel F.Inv.Rt function, as follows: deg_freedom2 = 2 =F.INV.RT( 0.2, 1, 2 ) This gives the result 3.555555556. Further details and examples of the Excel F.Inv.Rt function can be found on the Microsoft Office website. Trouble Shooting If you get an error from the Excel F.Inv.Rt function this is likely to be one of the following : Common Errors Occurs if either: #NUM! - - the supplied probability value is ≤ 0 or > 1 - the supplied deg_freedom1 or the supplied deg_freedom2 argument is < 1 #VALUE! - Occurs if any of the supplied arguments are non-numeric
{"url":"http://www.excelfunctions.net/Excel-F-Inv-Rt-Function.html","timestamp":"2014-04-18T08:08:14Z","content_type":null,"content_length":"16035","record_id":"<urn:uuid:1547b22e-0a03-45ea-893a-72d63a6501ae>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00459-ip-10-147-4-33.ec2.internal.warc.gz"}
Equality Properties and What They Really Mean Date: 07/30/2008 at 12:27:13 From: Margaret Subject: Is there a property of equality for powers and roots? In class we are shown how to square both sides of an equation or take the square root of both sides of an equation but is there a rule like the addition property of equality or multiplication property of equality that says it is ok to do so? I have asked the instructor, looked at algebra text books and searched Dr. Math. Date: 07/30/2008 at 16:06:30 From: Doctor Peterson Subject: Re: Is there a property of equality for powers and roots? Hi, Margaret. Interesting question! I think it shows that these properties really shouldn't be taught in this way, which makes things simple for teaching kids but doesn't accurately reflect what is actually happening. The so-called addition and multiplication (and subtraction and division) properties of equality are not really properties of equality in the first place, but are facts about each I believe you are talking about facts like this, the multiplication property of equality: If a = b and c is any real number, then ac = bc. The idea is that if two numbers are really the same number, then when we multiply them both by the same thing, we get the same answer. How could we not? As long as multiplication is "well-defined"--that is, always gives the same answer--this has to be true. The same is true of any other operation, including powers, square roots, reciprocals, and so on! Any well-defined operation (or function, in fact) will behave this way. The only thing that could go wrong, really, is if you can't perform the operation at all (e.g. if you want to take the square root of both sides but one or both may be negative). This becomes a domain issue, if you are familiar with functions. So you don't really have to look for specific properties of equality associated with each operation you want to use; you just have to determine that it is well-defined (has one value) and that its domain includes the values to which it is being applied. These two facts amount to the property you are looking for. Now, some texts define the "X property of equality" as something a bit deeper than what I have just discussed: If c is any real number other than 0, then the equation ac = bc is EQUIVALENT to the equation a = b. This is what you REALLY need to use when you solve an equation; it says not only that ac is still equal to bc, but that they are ONLY equal if a = b; you don't either lose or gain solutions. This property is not necessarily true for any well-defined operation (or for any function), but for any INVERTIBLE (one-to-one) function (that is, when there is only one way to get any given result). In particular, it is NOT true for even powers, because there are two different numbers with the same square, so that for example 1 = -1 is not true yet (1)^2 = (-1)^2 is true. They are not equivalent. This is why squaring both sides of an equation can yield extraneous solutions. A related issue comes up with square roots. Although it is true that if a = b, then sqrt(a) = sqrt(b), and in fact these equations are equivalent if you ignore domain issues, this can lead to problems when you forget that the radical symbol means only the positive root, and try to apply it to something like this: (-1)^2 = 1^2 Taking the square root of each side by just canceling out the squares, you'd get -1 = 1 which is not true! You might not do this here, but you probably would if there were variables: x^2 = y^2 does not imply that x = y because one might be positive and the other negative! What's happening here is that, on one hand, you are unconsciously using a form of the square root that is not a function (that is, has more than one value) by allowing a negative result; or, on the other hand, you are forgetting that in reality sqrt(x^2) = |x|, not x. So the answer to your question really depends on exactly how your "multiplication property" and so on are defined, and what you want to use your new property to do. Perhaps if you show me how you want to use it, I can clarify what I am saying. Hopefully in taking squares or square roots in equations you have been taught the caveats that arise; some books may present these facts as properties, including all the warnings, but others pass by them all too quietly! - Doctor Peterson, The Math Forum Date: 07/31/2008 at 15:44:32 From: Margaret Subject: Thank you (Is there a property of equality for powers and roots?) Thank you for your help. I need to think about these very interesting ideas and look forward to sharing them with my instructor when we meet for follow up in two weeks.
{"url":"http://mathforum.org/library/drmath/view/72802.html","timestamp":"2014-04-20T09:18:53Z","content_type":null,"content_length":"9834","record_id":"<urn:uuid:d999f5da-7e1e-448f-8601-e69227fbccfc>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00056-ip-10-147-4-33.ec2.internal.warc.gz"}
A global and quadratically convergent method for linear L problems - SIMAX , 1992 "... Reports available from: ..." , 2000 "... This article is, however, not concerned with interpolation, and thus in the data fitting context, it will be assumed that the data can be modelled by a function containing a number of free parameters, and minimizing a norm is appropriate. Perhaps the most commonly occurring criterion in such cases i ..." Cited by 5 (0 self) Add to MetaCart This article is, however, not concerned with interpolation, and thus in the data fitting context, it will be assumed that the data can be modelled by a function containing a number of free parameters, and minimizing a norm is appropriate. Perhaps the most commonly occurring criterion in such cases is the least squares norm. Its use has a long and distinguished history, it is relatively well understood, and there are good algorithms available. Yet there are often situations where it is not ideal. For example, a statistical justification for least squares requires certain assumptions about the error pattern in the data, and if these are not satisfied there may be bias in the estimate , 2000 "... A historical account is given of the development of methods for solving approximation problems set in normed linear spaces. Approximation of both real functions and real data is considered, with particular reference to L p (or l p ) and Chebyshev norms. As well as coverage of methods for the usu ..." Cited by 4 (0 self) Add to MetaCart A historical account is given of the development of methods for solving approximation problems set in normed linear spaces. Approximation of both real functions and real data is considered, with particular reference to L p (or l p ) and Chebyshev norms. As well as coverage of methods for the usual linear problems, an account is given of the development of methods for approximation by functions which are nonlinear in the free parameters, and special attention is paid to some particular nonlinear approximating families. 1 Introduction The purpose of this paper is to give a historical account of the development of numerical methods for a range of problems in best approximation, that is problems which involve the minimization of a norm. A treatment is given of approximation of both real functions and data. For the approximation of functions, the emphasis is on the use of the Chebyshev norm, while for data approximation, we consider a wider range of criteria, including the other l ...
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=214525","timestamp":"2014-04-16T07:28:24Z","content_type":null,"content_length":"18538","record_id":"<urn:uuid:adbd3f42-4cb6-4f79-90fe-34e90a706d0f>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
value of Theta in ZF+AD up vote 5 down vote favorite Since I found out about it, I've always been interested in the Axiom of Determinacy rather than the Axiom of Choice. Along these lines, I've kept flipping back to http://en.wikipedia.org/wiki/ %CE%98_%28set_theory%29, and occasionally looking on google, because I keep thinking ZF+AD should be able to prove non-obvious things about it, although I haven't found anything other than (I think) something saying it must be regular. So, I'm finally asking here. $\Theta := \operatorname{sup}(\{\alpha \in \operatorname{Ord} : (\exists f \in \alpha^\mathbb{R})(\operatorname{Range}(f) = \alpha)\})$ What is known about $\Theta$ in ZF+AD? In particular, how big is it? For example, is it known to be different from $\omega_2$? Is anything more known in ZF + AD + V=L($\mathbb{R}$) ? determinacy ordinal-numbers set-theory add comment 1 Answer active oldest votes A good reference for this question specifically and determinacy in general is Kanamori's book on large cardinals, "The Higher infinite". The last chapter is devoted to determinacy. We know a huge deal about $\Theta$. For example, it is much larger than $\omega_2$. It does not need to be regular, but it is regular if in addition we assume $V=L({\mathbb R})$. The key fact to see that $\Theta$ is large is Moschovakis's coding lemma which, in its simplest version, says that: (Under AD) if there is a surjection $f:{\mathbb R}\to\alpha$, then there is a surjection $f:{\mathbb R}\to{\mathcal P}(\alpha)$. Harvey Friedman used this to prove that $\Theta$ is a limit cardinal. This is easy; the point is that there is a definable bijection between ${\mathcal P}(\tau)$ and ${\mathcal P}(\tau\ times\tau)$ for any infinite ordinal $\tau$. But if $\tau$ is a cardinal, there is a surjection from ${\mathcal P}(\tau\times\tau)$ onto $\tau^+$: If $A\subseteq\tau\times\tau$ codes a well-ordering, send it to its order type. Else, to 0. With a bit more effort, you can check that $\Theta=\aleph_\Theta$ and in fact it is limit of cardinals $\kappa$ such that $\kappa=\aleph_\kappa$, and it is a limit of limits of these cardinals, etc. In $L({\mathbb R})$, $\Theta$ is regular (Solovay was first to prove this). In fact, if $V=L(S,{\mathbb R})$ for $S$ a set of ordinals, or $V=L(A,{\mathbb R})$ for $A\subseteq{\mathbb R} $, then $\Theta$ is regular. up vote 9 down vote (A technical aside: If AD holds, it holds in $L({\mathcal P}({\mathbb R}))$. Woodin defined a strengthening of AD that is now called $AD^+$. It is open whether $AD^+$ is strictly weaker accepted than AD, since all known models of AD are models of $AD^+$ and any current technique that gives us a model of one gives us a model of the other. If $L({\mathcal P}({\mathbb R}))$ is a model of $AD^+$, then it is either of the form $L(A,{\mathbb R})$ for some $A\subseteq{\mathbb R}$, or else it is a model of $AD_{\mathbb R}$, the strengthening of AD where we allow reals (rather than integers) as moves of the games.) However, ZF+AD does not suffice to prove that $\Theta$ is regular. If DC holds, the "obvious" diagonalization shows that ${\rm cf}(\Theta)>\omega$. But Solovay proved that $ZF+AD_{\ mathbb R}+{\rm cf}(\Theta)>\omega$ implies the consistency of $ZF+AD_{\mathbb R}$, so by the incompleteness theorem, ZF+AD or even the stronger $ZF+AD_{\mathbb R}$ cannot prove that ${\ rm cf}(\Theta)>\omega$. Nowadays we know much more. For example, $\Theta$ is a Woodin cardinal in the HOD of $L({\mathbb R})$, and the computation of the large cardinal strength of $\Theta$ in the HOD of the models of AD is a guiding principle of what is now known as descriptive inner model theory. You may be interested in the slides of recent talks by Grigor Sargsyan on the core model induction (which should be available somewher online, or from him by email). You will see there that the large cardinal strength of AD assumptions is calibrated by the large cardinal character of $\Theta$ inside HOD, and this is associated with the length of the so called Solovay sequence which keeps track of how difficult it is to define the surjections $$f:{\mathbb R}\to\alpha$$ as $\alpha$ increases. This difficulty is related to the Wadge degree of sets of reals present in the AD model. (I can be much more detailed, but this will require me to get significantly more technical. Let me know.) This is good enough to be accepted even if you ignored this, but there's one more detail that I'm curious about: Asking in both ZF+AD and ZF+AD+V=L(R), is $L_\Theta$ a model of ZF? – Ricky Demer Nov 23 '10 at 1:46 @Ricky : Yes, ZF+AD proves that $\Theta$ is strongly inaccessible in HOD, and inaccessibility relativizes down, so $\Theta$ is strongly inaccessible in $L$, and $L_\Theta$ is therefore a model of ZFC. – Andres Caicedo Nov 23 '10 at 2:22 A lower-powered alternative to Andres's proof (in the preceding comment) that AD implies $L_\Theta$ is a model of ZF: AD implies (with lots of room to spare) the existence of $0^{\#}$, which in turn implies that $L_\kappa$ is a model of ZF for every uncountable cardinal $\kappa$. – Andreas Blass Nov 24 '10 at 20:48 @Andreas: Hehe. Yes, this is easier. – Andres Caicedo Nov 25 '10 at 17:49 add comment Not the answer you're looking for? Browse other questions tagged determinacy ordinal-numbers set-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/47028/value-of-theta-in-zfad","timestamp":"2014-04-21T15:33:23Z","content_type":null,"content_length":"58108","record_id":"<urn:uuid:9d4c7b4f-8ee1-4253-bbb2-a512fcb5a2f7>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00113-ip-10-147-4-33.ec2.internal.warc.gz"}
A generalization of Steenrod’s approximation theorem Christoph Wockel Address: Fachbereich Mathematik, Technische Universität Darmstadt Schlossgartenstrasse 7, D-64289 Darmstadt, Germany E-mail: christoph@wockel.eu Abstract: In this paper we aim for a generalization of the Steenrod Approximation Theorem from [16, Section 6.7], concerning a smoothing procedure for sections in smooth locally trivial bundles. The generalization is that we consider locally trivial smooth bundles with a possibly infinite-dimensional typical fibre. The main result states that a continuous section in a smooth locally trivial bundles can always be smoothed out in a very controlled way (in terms of the graph topology on spaces of continuous functions), preserving the section on regions where it is already smooth. AMSclassification: primary 58B05; secondary 57R10, 57R12. Keywords: infinite-dimensional manifold, infinite-dimensional smooth bundle, smoothing of continuous sections, density of smooth in continuous sections, topology on spaces of continuous functions.
{"url":"http://www.emis.de/journals/AM/09-2/am1753.html","timestamp":"2014-04-17T04:11:42Z","content_type":null,"content_length":"2664","record_id":"<urn:uuid:75e8fa3d-099c-46bb-97a9-1e6d4da85e41>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-dev] problem with linalg.cholesky? Travis Oliphant oliphant.travis at ieee.org Fri Dec 2 09:56:45 CST 2005 Andrew Jaffe wrote: >Hi All, >OK, drilling down a bit further... The problem lies in the use of >scipy.transpose() in _castCopyAndTranspose, since transpose() makes a >contiguous, non-Fortran array into a non-contiguous fortran array (since >it appears that at present it doesn't copy, just makes a new view). One >solution is just to actually copy the array. The other function, >_fastCopyAndTranspose(), seems to work fine in this situation, but I'm >not sure how and why this differs from _castCopyAndTranspose (in >particular, it doesn't seem to set the Fortran flag). >[However, the cholesky decomposition only makes sense for a symmetric >(or Hermitian) array, so, if we're not changing the type, for real >matrices really all we need to do is the cast and copy; for complex >matrices we need to transpose (not hermitian conjugate since we really >want to change the ordering), but this is equivalent to just taking the >complex conjugate of every element, which must be faster than copying. >If we are changing the type, I assume the speed difference isn't as >Any ideas on what the best change(s) would be? The problem with making changes based on what's been posted so far is that the code works for other platforms. If this is a problem with OS X, is this a problem for just your installation or does everybody else see it. I have not seen enough data to know. One strong possibility is that the error is in the lapack_lite and most people are using the optimized lapack. We should definitely check that direction. More information about the Scipy-dev mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-dev/2005-December/004301.html","timestamp":"2014-04-21T10:12:51Z","content_type":null,"content_length":"4285","record_id":"<urn:uuid:53881f61-4a7a-4527-800f-8f2f7c98c1e6>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00491-ip-10-147-4-33.ec2.internal.warc.gz"}
Orthographic matrix not working - OpenGL Hi all, I am having trouble creating an orthographic projection in opengl (without using glOrtho). And before anyone tells me to use glOrtho, lets just pretend I'm using opengl 3.0 as an exercise. Here is the way I have set up my matrix: projectionMatrix.m00 = 2/WIDTH; projectionMatrix.m11 = 2/HEIGHT; projectionMatrix.m22 = -2/(far_plane - near_plane); projectionMatrix.m32 = -((far_plane + near_plane)/(far_plane - near_plane)); projectionMatrix.m33 = 1; This is your typical ortho projection matrix, taken straight from the opengl spec def. for glOrtho. When I set up a frustrum and use a perspective matrix, this works fine, but I can't use my regular screen coordinates properly. But when I use the glOrtho matrix above, I see nothing but my background color. Anyone know what's going on? Is my matrix wrong? My shader looks like this: #version 150 core uniform mat4 projectionMatrix; uniform mat4 viewMatrix; uniform mat4 modelMatrix; in vec4 in_Position; in vec4 in_Color; in vec2 in_TextureCoord; out vec4 pass_Color; out vec2 pass_TextureCoord; void main(void){ gl_Position = in_Position; //override gl position with new calculated position gl_Position = projectionMatrix * viewMatrix * modelMatrix * in_Position; pass_Color = in_Color; pass_TextureCoord = in_TextureCoord; Any help appreciated.
{"url":"http://www.gamedev.net/topic/639188-orthographic-matrix-not-working/?k=880ea6a14ea49e853634fbdc5015a024&setlanguage=1&langurlbits=topic/639188-orthographic-matrix-not-working/&langid=1","timestamp":"2014-04-25T04:59:53Z","content_type":null,"content_length":"139290","record_id":"<urn:uuid:de602e45-baf5-4213-b155-1155cfe4b31f>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
the encyclopedic entry of Parser combinators In mathematics and functional programming, Higher Order functions (HOF) are defined as the functions that can take functions as their input and can also produce functions as their output. The use of a HOF as an infix operator in a function-definition is known as a ‘combinator’. When combinators are used as basic building blocks to construct a technique, then they are called parser combinators and the parsing method is called (as higher-order functions ‘combine’ different parsers together). Parser combinators use a top-down parsing strategy which facilitates modular piecewise construction and testing. Parser combinators are straightforward to construct, ‘readable’, modular, well-structured and easily maintainable. They have been used extensively in the prototyping of compilers and processors for domain-specific languages such as natural language interfaces to databases, where complex and varied semantic actions are closely integrated with syntactic processing. In 1989, Richard Frost and John Launchbury demonstrated use of parser combinators to construct Natural Language interpreters. Graham Hutton also used higher-order functions for basic parsing in 1992. S.D. Swierstra also exhibited the practical aspects of parser combinators in 2001. In 2008, Frost, Hafiz and Callaghan described a set of parser-combinators in Haskell that solve the long standing problem of accommodating left-recursion, and work as a complete top-down parsing tool in polynomial time and space. Basic Idea The core idea of parser combinators (which was popularized by Philip Wadler in 1985) is that the results (success or failure) of a recognizer (or a parser) can be returned as a list. Multiple entries of this list represent multiple successes; repeated entries represent ambiguous results and an empty list represents a failure. In functional programming, parser combinators can be used to build basic parsers and to construct complex parsers for rules (that define nonterminals) from other parsers. A production-rule of a context-free grammar (CFG) may have one or more ‘alternatives’ and each alternative may consist of a sequence of non-terminal(s) and/or terminal(s), or the alternative may consist of a single non-terminal or terminal or ‘empty’. Parser combinators allow parsers to be defined in an embedded style, in code which is similar in structure to the rules of the grammar. As such, implementations can be thought of as executable specifications with all of the associated advantages. In order to achieve this, one has to define a set of combinators or infix operators to ‘glue’ different terminals and non-terminals to form a complete rule. The Combinators To keep the discussion relatively straight forward, we discuss parser combinators in terms of recognizer only. Assume that the input is a sequence of tokens, of length the members of which are accessed through an index . Recognizers are functions which take an index as argument and which return a set of indices. Each index in the result set corresponds to a position at which the parser successfully finished recognizing a sequence of tokens that began at position . An empty result set indicates that the recognizer failed to recognize any sequence beginning at . A non-empty result set indicates the recognizer ends at different positions successfully. (Note that, as we are defining results as a set, we cannot express ambiguity as it would require repeated entries in the set. Use of ‘list’ would solve the problem.) Following the definitions of two basic recognizers for terminals, we define two major combinators for alternative and sequencing: • The empty recognizer is a function which always succeeds returning a singleton set containing the current position: $empty quad j = \left\{j\right\}$ • A recognizer term ’x’ for a terminal x is a function which takes an index j as input, and if j is less than #input and if the token at position j in the input corresponds to the terminal x, it returns a singleton set containing j + 1, otherwise it returns the empty set. term(x,j) = begin{cases} left { right }, & j geq #input left { j+1 right }, & j^{th} mbox{ element of } input=x left { right }, & mbox{otherwise} end{cases} • We call the ‘alternative’ combinator <+>, which is used as an infix operator between two recognizers p and q. The <+> applies both of the recognizers on the same input position j and sums up the results returned by both of the recognizers, which is eventually returned as the final result. (p quad quad q)quad j = (pquad j)quad cupquad (qquad j) • The sequencing of recognizers is done with the *> combinator. Like <+>, it is also used as an infix operator between two recognizers – p and q. But it applies the first recognizer p to the input position j, and if there is any successful result of this application, then the second recognizer q is applied to every element of the result set returned by the first recognizer. The *> ultimately returns the union of these applications of q. (pquad *!> quad q)quad j = bigcup (mapquad qquad (pquad j)) • Consider a highly ambiguous CFG s ::= ‘x’ s s | ɛ. Using the combinators defined earlier, we can modularly define executable notations of this grammar in a modern functional language (e.g. Haskell) as s = term ‘x’ *> s *> s <+> empty. When the recognizer s is applied on an input sequence xxxx at position 1, according to the above definitions it would return a result set {5,4,3,2,1}. Note that in real implementation if result set is defined as data type that supports repetition (i.e. list), then we can have the resulting list with all possible ambiguous results like [5, 4, 3, 2, 1,…., 5, 4, 3, 2,……] . Shortcomings and Solutions The simple implementations of parser combinators have some shortcomings, which are common in top-down parsing. Naïve combinatory parsing requires time and space when parsing an ambiguous context free grammar. In 1996, Frost and Szydlowski demonstrated how can be used with parser combinators to reduce the time complexity to polynomial. Later Frost used to construct the combinators for systematic and correct threading of memo-table throughout the computation. Like any top-down recursive descent parsing, the conventional parser combinators (like the combinators described above)won’t terminate while processing a left-recursive grammar (i.e. s ::= s *> s *> term ‘x’|empty). A recognition algorithm that accommodates ambiguous grammars with direct left-recursive rules is described by Frost and Hafiz in 2006. The algorithm curtails the otherwise ever-growing left-recursive parse by imposing depth restrictions. That algorithm was extended to a complete parsing algorithm to accommodate indirect as well as direct left-recursion in polynomial time, and to generate compact polynomial-size representations of the potentially-exponential number of parse trees for highly-ambiguous grammars by Frost, Hafiz and Callaghan in 2007. This extended algorithm accommodates indirect left-recursion by comparing its ‘computed-context’ with ‘current-context’. The same authors also described their implementation of a set of parser combinators written in the Haskell programming language based on the same algorithm. The X-SAIGA site has more about the algorithms and implementation details. External links • X-SAIGA - eXecutable SpecificAtIons of GrAmmars
{"url":"http://www.reference.com/browse/Parser+combinators","timestamp":"2014-04-20T01:49:40Z","content_type":null,"content_length":"85340","record_id":"<urn:uuid:df93cc14-09cf-4385-b7a0-9bea5ee4ec5b>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00111-ip-10-147-4-33.ec2.internal.warc.gz"}
Renton Algebra Tutor Find a Renton Algebra Tutor ...In addition, I played for the Seattle Youth Symphony and represented our state in the Northwest Band and then orchestra, as well as the John Philip Sousa Band. Although I did not pursue music as a major at the UW, I did go on to be the clarinet teacher at a music academy in Issaquah for over 5 years. I have taught privately for almost 20 years. 46 Subjects: including algebra 1, algebra 2, reading, English ...I have tutored high school level Algebra I for both Public and Private School courses. I also volunteer my time in the Seattle area assisting at-risk students on their mathematics homework. I have worked as a mathematics teacher in Chicago and I thoroughly enjoy teaching the subject. 27 Subjects: including algebra 1, algebra 2, chemistry, reading ...I like to start off by explaining the necessary concepts and having the student explain back to me what I just told them and WHY it's done that way. This helps to ensure that they not only can they parrot back what I'm saying, but that they have to think about it too. Next, we do sample problems that they either have in their books or ones that I can make up. 12 Subjects: including algebra 1, algebra 2, reading, accounting Greetings,My name is Matthew. I am an enthusiastic and open-minded teacher who likes to help everyone. I have been an educator for 15 years and have plenty of experience working with students of all ages and different educational environments. 14 Subjects: including algebra 2, algebra 1, English, reading ...I think I have a patient, encouraging, and intuitive teaching style that works well with students that age, and I also do adapt my teaching style to the needs of the student. I regularly tutor students in math through calculus, biology, English, and chemistry. I also coach students through the college application process and enjoy helping them write their personal statement or essay. 28 Subjects: including algebra 2, algebra 1, chemistry, ESL/ESOL Nearby Cities With algebra Tutor Auburn, WA algebra Tutors Bellevue, WA algebra Tutors Burien, WA algebra Tutors Des Moines, WA algebra Tutors Federal Way algebra Tutors Issaquah algebra Tutors Kent, WA algebra Tutors Kirkland, WA algebra Tutors Newcastle, WA algebra Tutors Puyallup algebra Tutors Redmond, WA algebra Tutors Seatac, WA algebra Tutors Seattle algebra Tutors Tacoma algebra Tutors Tukwila, WA algebra Tutors
{"url":"http://www.purplemath.com/renton_algebra_tutors.php","timestamp":"2014-04-21T02:38:25Z","content_type":null,"content_length":"23787","record_id":"<urn:uuid:691d4134-892f-493c-ae76-c994acb99520>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
Confidence Interval November 27th 2009, 04:59 PM #1 Junior Member Nov 2009 Confidence Interval Construct a symmetric two-sided $(1-\alpha)100\%$ confidence interval for the unknown parameter $\beta > 0$ in a $Beta(1,\beta)$ sample. The hint is to find the MLE of $\beta$ and determine the distribution of $-log(1-X)$ for a $Beta(1,\beta)$ random variable $X$ Write your likelihood function, take the logarithm and then differentiate to find the MLE of beta. I'm trying to write out the loglikelihood function for $Beta(1,\beta)$ I simply don't understand the pdf of the beta distribution. Its pdf is $\frac{x^{\alpha-1}(1-x)^{\beta-1}}{B(\alpha,\beta)}$ In my case, it would be $\frac{(1-x)^{\beta-1}}{B(1,\beta)}$ It's using the definition to define itself, how does this make any sense? use $x_i$ and take the product from 1 to n and write the constants with respect to the gamma function. November 27th 2009, 08:29 PM #2 November 28th 2009, 10:59 AM #3 Junior Member Nov 2009 November 28th 2009, 02:50 PM #4
{"url":"http://mathhelpforum.com/advanced-statistics/117088-confidence-interval.html","timestamp":"2014-04-18T05:12:10Z","content_type":null,"content_length":"40163","record_id":"<urn:uuid:fa224aee-97e9-4f08-9704-df1f787f6e51>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00503-ip-10-147-4-33.ec2.internal.warc.gz"}
Path finding methods accounting for stoichiometry in metabolic networks • We are sorry, but NCBI web applications do not support your browser and may not function properly. More information Genome Biol. 2011; 12(5): R49. Path finding methods accounting for stoichiometry in metabolic networks Graph-based methods have been widely used for the analysis of biological networks. Their application to metabolic networks has been much discussed, in particular noting that an important weakness in such methods is that reaction stoichiometry is neglected. In this study, we show that reaction stoichiometry can be incorporated into path-finding approaches via mixed-integer linear programming. This major advance at the modeling level results in improved prediction of topological and functional properties in metabolic networks. The use of graph theory in the analysis of biological networks has been extensive in the past decade [1]. Particularly, in metabolic networks different relevant topics have been examined using the rich variety of graph-theoretic concepts, ranging from topological properties [2-5], evolutionary analysis [6-8], pathway analysis [9-13], transcriptional regulation [14-16], functional interpretation of 'omics' data [17-20] and prediction of novel drug targets [21-23]. Graph-based methods start by converting the metabolic network into an appropriate graph. Different representations are possible here: i) metabolite graphs, where nodes are metabolites and arcs represent reactions linking an input and output metabolite; ii) reaction graphs, in which nodes are reactions and arcs represent intermediate metabolites shared by reactions; iii) bipartite graphs, where nodes are reactions and metabolites, while arcs link metabolites to reactions (for substrates) and reactions to metabolites (for products). Note here that each type of graph can be either directed or undirected. A deeper introduction to such graphs can be found in Deville et al. [24]. Importantly, graph-based methods rely on the definition of connectivity based on paths, that is, two nodes in the graph are connected (or not) depending upon whether (or not) we have a path linking them. This definition of connectivity is debatable, however, particularly when it is claimed that such a path is a competent metabolic pathway, as recently discussed [25-27]. In this context, the major criticism raised as to path-finding methods is that they neglect reaction stoichiometry and there is, therefore, no guarantee that any path found can operate in sustained steady-state. The steady-state condition requires the definition of the boundary of the metabolic network under study. Metabolites inside the boundary of the network, typically called internal metabolites [28], must be in stoichiometric balance. Balancing does not apply to metabolites outside the boundaries of the system (external metabolites), which are typically input/output metabolites and (sometimes) cofactors. In other words, for internal metabolites, their production and consumption (if possible) must be captured with the reactions in the network under study. The steady-state condition and its underlying boundary definition are critical for the performance of any method for analyzing a metabolic network and ignoring it may provide misleading insights. A nice illustration of this is the one presented in the work of de Figueiredo et al. [25], which (unsuccessfully) tested the ability of path-finding methods to answer the question as to whether (or not) fatty acids can be converted into sugars. Klamt et al. [29] also recently emphasized this issue for different biological networks. Note here that elementary flux modes (and extreme pathways) represent a more general and elegant concept for metabolic pathways than paths [28,30]. Their computation is, however, much more expensive in large metabolic networks than paths and, though different efforts have been made in this area [31-33], much research is still needed to make elementary flux modes a practical tool for the analysis of large metabolic networks. Given the limitations discussed above, a novel theoretical concept termed flux paths is introduced here. A flux path is a simple path (in the graph-theoretical sense, so no nodes revisited) from a source metabolite to a target metabolite able to operate in sustained steady-state. In essence, flux paths incorporate reaction stoichiometry into traditional path-finding methods [4,7,34,35]. By means of this concept we show that the path structure of metabolic networks is substantially altered when stoichiometry is considered. In addition, we illustrate (with several examples) that flux paths offer new perspectives for the analysis of metabolic networks at the topological and functional levels. The determination of flux paths requires going beyond graph theory via mixed-integer linear programming. We present below details as to our mathematical optimization model for determining K-shortest flux paths between source and target metabolites. Results and discussion Mathematical model Assume we have a metabolic network that comprises R reactions and C metabolites. Note here that reversible reactions contribute two different reactions to the metabolic network. For this reason we can regard all fluxes as taking positive values. Let S[cr ]be the stoichiometric coefficient associated with metabolite c (c = 1,...,C) in reaction r (r = 1,...,R). As usual in the literature [28], input metabolites have a negative stoichiometric coefficient, whilst output metabolites have a positive stoichiometric coefficient. We here used a metabolite (directed) graph representation of the network where nodes are metabolites and arcs link the input and output metabolites of each reaction. Figure Figure1a1a shows an example of the metabolite graph representation of the phosphoenolpyruvate (PEP): pyruvate (Pyr) phosphotransferase system for the uptake of glucose. Metabolite graph representation of the PEP: Pyr uptake system of glucose. (a) Metabolite graph; (b) metabolite graph restricted to atomic exchanges; (c) metabolite graph restricted to carbon exchanges. D-Glc, glucose; G6P, glucose 6-phosphate; PEP, phosphoenolpyruvate; ... Suppose that we are concerned with finding a flux path from a source metabolite α to a target metabolite β. As mentioned above, a flux path is a simple path from the source metabolite α to the target metabolite β able to operate in steady-state. We present below our mathematical optimization model for flux paths. Path finding constraints We need to decide the arcs involved in the flux path from the source metabolite α to the target metabolite β. This fact is represented with a zero-one (binary) variable u[ij], where u[ij ]= 1 if the arc i→j linking metabolite i (i = 1,...,C) to metabolite j (j = 1,...,C) is active in the flux path, 0 otherwise. Deletion of arcs from the metabolic graph is standard practice in path-finding methods [4,7,34,35]. We removed arcs not involving an effective carbon exchange. Carbon exchange is indeed essential for metabolic purposes. For this reason, we henceforth use the term carbon flux paths (CFPs). Note here that a similar criterion has been used in [35]. In this work, however, input and output metabolites can have any type of atom or atom groups in common. This criterion is illustrated in Figure Figure1b,1b, where PEP donates a phosphate group to glucose (D-Glc). The focus on carbon atoms makes our approach more restrictive, as observed in Figure Figure1c,1c, which shows that there is only effective carbon exchange between D-Glc and glucose 6-phosphate (G6P), and PEP and Pyr. Let d[ijr ]be a binary (0/1) coefficient establishing whether (or not) there exists an effective carbon exchange between input metabolite i (S[ir ]< 0) and output metabolite j (S[jr ]> 0) in reaction r. If [ij ]is also fixed to zero. In the following lines we present constraints needed to obtain an appropriate directed path from the source metabolite (α) to the target metabolite (β). Equation 1 ensures that one arc leaves α and one arc enters β; equation 2 that no arc enters α and no arc leaves β: Equation 3 ensures that the number of arcs entering a metabolite k is equal to the number leaving; Equation 4 ensures that a metabolite cannot be revisited in the path: Stoichiometric constraints Equations 1 to 4 define a simple path that preserves carbon exchange in each of its intermediate steps. We need to guarantee that this path can work in sustained steady-state. As will be shown below, to do this, it is required to find a steady-state flux distribution able to involve the path. We here introduce variables and constraints needed to define the steady-state flux space. Any steady-state flux distribution satisfies Equation 5 for the set of internal metabolites (I). We denote v[r ]the non-negative (continuous) flux associated with each reaction, r = 1,...,R: External metabolites (E) are not subject to balancing constraints. If a specific growth medium (E[m]) is introduced, however, metabolites not involved in such a medium can be produced, but cannot be consumed, as observed in Equation 6: For convenience, we introduced a zero-one (binary) variable z[r ](r = 1,...,R), which defines the reactions involved in a steady-flux distribution, namely z[r ]= 1 if reaction r has a non-zero flux, 0 otherwise (r = 1,...,R). We need constraints relating the reaction variables z[r ]and the flux variables v[r]. Equation 7 ensures that no flux traverses a reaction r if z[r ]= 0: In addition, it guarantees that v[r ]is non-zero if z[r ]= 1. Here we have scaled fluxes so that the maximum flux is M and the minimum (non-zero) flux is 1. This does not constitute an issue if we consider M sufficiently large. As we split reversible reactions into two irreversible steps, we need to prevent a reaction and its reverse from appearing together in any steady-state flux distribution, as observed in Equation 8, where the set B = {(λ,μ)| reaction λ and reaction μ are the reverse of each other}: Current path-finding approaches deal with this situation indirectly, namely by removing computed paths involving a reaction and its reverse. Equations 5 to 8 define the steady-state flux space for a particular metabolic network. Linking path finding and stoichiometric constraints As noted above, it is required that the path defined by constraints 1 to 4 can operate in a steady-state flux distribution. For this purpose, we need to guarantee that if we use an arc i→j in a path, then some reaction r with d[ijr ]= 1, that is, involving effective carbon exchange between i and j, is contained in the steady-state flux distribution. This is a critical point in our formulation, which makes it different from previous path-finding methods. With this condition we naturally link the topological and (steady-state) flux planes. This linking constraint is reflected in Equation 9: Equation 9 ensures that if an arc i→j is active in the CFP (so u[ij ]= 1), then at least one reaction r containing this arc in carbon exchange (so d[ijr ]= 1) is forced to be active. By forcing z[r ] to be 1 there will be a non-zero flux associated with the reaction due to Equation 7. An important point to note from Equation 9 is that it allows reactions to be active even if they are not involved in the CFP. In other words reactions can be active with non-zero flux (to satisfy the requirements of steady-state, Equation 5) but without any of their input/output metabolites being involved in the To illustrate constraint 9, consider the example metabolic network in Figure Figure2,2, which involves seven reactions and nine metabolites. The set of internal metabolites is I = {A,B,C,D,E,F}. Assume now that we are concerned with finding a CFP between metabolites A and F. We have only one possible path, namely A→B→C→E→F (u[AB ]= u[BC ]= u[CE ]= u[EF ]= 1). Due to Equation 9, reactions 2, 3, 4 and 5 are active, that is, z[2 ]= z[3 ]= z[4 ]= z[5 ]= 1 and, therefore, via Equation 7, their flux will be non-zero. To balance such a path and satisfy the steady-state condition, Equation 5, we require three additional reactions off-path: reaction 1 for the production of A, reaction 7 to consume F and reaction 6 to produce D. If these off-path reactions are active, the path from A to F is able to work in sustained steady-state and, therefore, it is a flux path, as denoted in the Background section. We are obviously considering that A[ext ]and D[ext ]are in the growth medium. If we remove one of these metabolites from the medium, though we still have a path from A to F at the graph-theoretical level, no flux path will exist, since the path cannot work in sustained steady-state. Example flux path in a toy metabolic network. Objective function Equations 1 to 9 define the set of constraints for the determination of any CFP between metabolite α and metabolite β. However, our purpose here is to find the shortest CFP, as observed in Equation Enumerating constraint As in other path-finding approaches, we may be interested in computing not only the shortest CFP, but the k-shortest CFPs (k = 1,..,K). Since we have an objective relating to finding the shortest CFP, we need to add constraints eliminating previously found CFPs, as shown in Equation 11. In that constraint [ij ]variable in the k-shortest CFP: This section is organized as follows. By means of several well-documented examples, we first illustrate the biochemical relevance of particular constraints in our CFP approach. We then carry out a side-by-side comparison of our CFP approach with current path-finding approaches. Path-finding comparison As shown in the 'Mathematical model' section, the path-finding strategy used in our CFP approach is based on using arcs involving effective carbon exchange and imposing the reversibility constraint, Equation 8. In this sub-section we illustrate the importance of these factors and show that a path-finding approach incorporating them outperforms existing methods in the literature. For this analysis, the effect of stoichiometry is not considered, as is common in existing approaches. Its effect will be separately considered in detail in the next sub-section ('Effect of stoichiometry'). Therefore, for this analysis, Equations 5 and 6 were ignored. Effective carbon exchange Figure Figure3a3a shows two paths from bicarbonate (HCO3) to cytidine-diphosphate (CDP) in Escherichia coli. The long path is a well-known (canonical) metabolic pathway for de novo pyrimidine biosynthesis. The short path is a shortcut via ADP, which has no biological relevance. The removal of arcs not involving carbon exchange, as done in our CFP approach, considerably reduces the appearance of such non-meaningful paths. Indeed, when we applied our approach to find a CFP from HCO3 to CDP to the genome-scale metabolic network of E. coli [36], the long pathway was directly recovered. Note here that we manually removed arcs not involving carbon exchange in the network of Feist et al. [36]. The resulting list of arcs can be found in Additional file 1. This same biochemical example was recently discussed in Faust et al. [13], under different strategies. In the best case scenario, they require additional information as to the intermediate metabolites to recover this pathway. The fact that our approach can recover the pathway without intermediate metabolite information shows how effective the carbon exchange constraint is. Effect of carbon exchange and reversibility constraints. (a) De novo biosynthesis of pyrimidine ribonucleotides in E. coli discussed in Faust et al. [13]. (b) Shortest pathway from glucose to pyruvate in E. coli. 2DDG6P, 2-Dehydro-3-deoxy-D-gluconate ... Path-finding methods typically split reversible reactions into two irreversible steps. In contrast to current approaches [13], in our CFP approach we prevent two such irreversible steps from being active in the same path, as observed in Equation 8. To illustrate the importance of this constraint, we analyzed the shortest path from D-Glc to Pyr in E. coli, which is the Entner-Doudoroff pathway, as shown in the left-hand side of Figure Figure3b.3b. When we applied our CFP approach from D-Glc to Pyr without including Equation 8, we obtained the path in the right hand-side of Figure Figure3b 3b (D-Glc→AcGlc-D→AcCoA→L-Mal→Pyr). This solution has no biochemical meaning, since the first and second step in that path is a cycle involving the forward and backward step of the reversible reaction catalyzed by D-glucose O-acetyltransferase (GLCATr: D-Glc + AcCoA ↔ AcGlc-D + CoA). By adding Equation 8 this path is removed from the solution space and our CFP approach directly obtains the Entner-Doudoroff pathway. Side-by-side comparison In order to analyze the performance of any path-finding method, it is usual in the literature to evaluate its ability in recovering well-known metabolic pathways. For this purpose, we used a database of 40 reference E. coli (metabolic) pathways previously discussed in Planes and Beasley [37] (these 40 pathways are listed in Additional file 2). The input metabolic graph was built from the genome-scale metabolic network of E. coli [36]. We computed the 100 shortest CFPs between the source and target metabolites of each of the 40 reference pathways. As mentioned above, stoichiometric constraints are not considered in this sub-section since the aim is to establish the effectiveness of carbon exchange when combined with reversibility in path finding. To compare the 100 shortest CFPs and the reference pathway, we used the recovery rate. Recovery is a 0/1 parameter, being 1 if a CFP fully matches with the reference pathway, 0 A similar analysis was conducted for existing path-finding methods [4,7,34,35]. These methods make use of different strategies to provide biochemical meaning to the computed paths. For comparison, we classified these strategies into different groups: the first strategy (denoted 'topology') involves the use of an unadjusted metabolic graph; the second strategy (denoted 'hubs') adjusts the metabolic graph by removing any arc involving a highly connected metabolite (hubs) [7,34] (we took the list of hubs from Planes and Beasley [37]); the third strategy (denoted 'connectivity') assigns weights to metabolites according to their connectivity in an unadjusted metabolic graph, where connectivity is defined to be the number of reactions involving a metabolite [9]. Finding K-shortest paths is substituted here by finding K- lightest paths, that is, the sum of weights of arcs involved in the path is minimized. Note here that there are path-finding strategies that use structural atomic mapping information. These approaches can be classified into two different groups. In the first group atomic mapping is used to build the metabolic graph, that is, an input metabolite is linked to an output metabolite in a given reaction if they share an atom mapping. In other words, an arc between a given pair of input/output metabolites exists if they have atoms in common in at least one reaction. The work of Faust et al. [35], based on the RPAIR database [38], is a reference example for these approaches. The effective carbon exchange strategy used in our CFP approach also falls into this group. However, it is slightly more restrictive than the approach presented in Faust et al. [35], since we exclusively focus on carbon atoms, that is, an arc between a given pair of input/output metabolites exists if they have carbon atoms in common in at least one reaction. In the second group atomic mapping is used to guarantee that the pathway target metabolite involves at least one atom from the source metabolite. This concept was first introduced by Arita et al. [39 ], and recently revisited in Blum and Kohlbacher [40], and Heath et al. [41]. We are aware that this type of approach is, in theory, more restrictive than the effective carbon exchange strategy used in our CFP approach, since we guarantee effective carbon exchange between intermediates in the path, but not between the source and target metabolites. Tracing an atom from source to target metabolite, however, requires detailed knowledge of carbon atom mappings for each reaction. Though active research is being undertaken into this topic, more effort is still needed to release a fully curated and complete database for atomic mappings in genome-scale metabolic networks, especially for those from the Biochemical Genetic and Genomic (BiGG) database [42], which we are using here. For completeness, we will include results for the most recent approach [41], denoted as atom mapping-based strategy. Results were extracted from the web service (named AtomMetaNetWeb) available from Kavraki's lab [43]. Figure Figure44 shows results obtained for each of the strategies discussed above. It can be observed that the hubs-based strategy increases the average recovery rate with respect to the unadjusted metabolic graph (topology) by around 20% on average. The atom mapping-based strategy is clearly less accurate than the hubs-based strategy, which reflects the point discussed above that current databases for atomic mappings require further development. In addition, the connectivity-based strategy substantially outperforms the hubs-based strategy - for example, for k = 1, 62.5% and 32.5% of reference pathways are recovered, respectively. Finally, our CFP approach outperforms the connectivity-based strategy. This analysis shows, therefore, that our CFP approach (even without considering stoichiometry) outperforms existing path-finding methods. Pathway recovery analysis. (a) Average recovery rate among the k-shortest paths for k = 1, 5, 10, 100. (b) Average recovery rate among the k-shortest paths for k = 1,...,100 for different path-finding approaches. Finally, note that other works [9,13] typically used the accuracy rate, instead of the recovery rate, for comparing the computed paths and reference pathways. We repeated the same analysis using this parameter. As observed in Additional file 2, a similar result to Figure Figure44 is obtained, which again shows that our CFP approach outperforms current methods. Effect of stoichiometry To illustrate the effect of stoichiometry, we first analyze a previously considered example from the literature, which emphasizes the fact that some paths (at the graph-theoretical level) cannot perform in steady-state and therefore are not biologically meaningful. We then repeat the side-by-side comparison presented in Figure Figure44 when stoichiometry is considered. To emphasize its importance, we examine how the connectivity structure of several metabolites is altered when stoichiometry is considered. Stoichiometry and infeasible paths Figure Figure55 shows a simplified network from that presented in de Figueiredo et al. [25], which considered the question as to whether (or not) fatty acids can be converted into sugars. This question is answered by finding pathways from acetyl-CoA (AcCoA) to G6P. In that work, two scenarios were analyzed, namely pathway structure from AcCoA to G6P in the presence and absence of the enzymes of the glyoxylate shunt (indicated by dashed lines in Figure Figure5).5). In the metabolic network in Figure Figure5,5, when the glyoxylate shunt is absent, no possible pathway can exist in a stoichiometric balance from AcCoA to G6P. As observed in de Figueiredo et al. [25], this fact is not properly captured by path-finding methods, since stoichiometry is not taken into account. In contrast, our CFP approach correctly answers this question, by finding no paths between AcCoA and G6P when the glyoxylate shunt is not active. This is due to the addition of constraint 9, which forces paths to be able to work in sustained steady-state. Simplified network from that presented in de Figueiredo et al. [25]considering the question of conversion of fatty acids to sugars. AcCoA, acetyl-CoA; AKG, 2-oxoglutarate; AKGDH, 2-oxogluterate dehydrogenase; Cit, citrate; CS, citrate synthase; D-Glc, ... Side-by-side comparison with stoichiometry We repeated the side-by-side comparison previously presented in Figure Figure44 for path-finding methods when stoichiometry is considered. Similarly, we used the 40 E. coli metabolic pathways discussed in Planes and Beasely [37], and the E. coli metabolic network in Feist et al. [36]. As we previously showed above (Figure (Figure4)4) that our CFP approach (without considering stoichiometry, Equations 5 and 6) outperforms existing path-finding methods, we here compare the performance of our CFP approach with and without Equations 5 and 6 so as to evaluate the effect of stoichiometry. For this purpose, we analyzed our CFP approach in two different scenarios, namely when we used a minimal medium based on glucose as a sole carbon source under oxic and anoxic conditions, respectively. See Additional file 3 for details. It is important to note that the use of a specific minimal medium (as we do here) prevents some known metabolic pathways from functioning in E. coli due to stoichiometric constraints. For example, the tricarboxylic acid (TCA) cycle cannot work in anoxic conditions in E. coli. The ability to detect these false positives cannot be accomplished without the use of stoichiometry. In light of this, the definition of recovery (as used in Figure Figure4)4) is slightly modified here. Recovery rate is 1 if (under a given growth medium) the model recovers a feasible pathway or the model excludes from the solution space an infeasible pathway, 0 otherwise. For illustration, if our CFP approach (incorrectly) detects the TCA cycle in anoxic conditions, recovery would be zero. However, if our CFP approach correctly excludes the TCA cycle from the solution space, then recovery would be 1. Figure Figure6a6a shows how recovery rate evolves over k-shortest CFPs (k = 1,...,100) with/without stoichiometry in oxic conditions. We found that in these conditions, 6 out of 40 metabolic pathways cannot work in steady-state (Additional file 2). For example, the pathway for the degradation of 2,5-diketo-D-gluconate is not functionally feasible under these conditions since it cannot be synthesized from glucose in E. coli [44]. This logically cannot be captured without considering stoichiometry. This is reflected in Figure Figure6a,6a, where average recovery rate among 100 shortest CFPs decreases to 0.85 without stoichiometry. The same analysis was repeated in anoxic conditions (Figure (Figure6b),6b), finding two additional pathways (TCA cycle and Allantoin degradation) not able to work in steady-state (given our growth medium). Figure Figure6c6c summarizes Figure Figure6a6a and Figure Figure6b6b for some particular values (k = 1, 5, 10 and 100). See Additional file 2 for further details, including results when average accuracy rate was used instead of recovery rate. This analysis shows the importance of stoichiometry and its underlying boundary definition at the functional level. Effect of stoichiometry in pathway recovery analysis. Average recovery rate among the k-shortest paths for k = 1,...,100 for CFP approach with and without considering stoichometry in (a) oxic conditions; and (b) anoxic conditions; (c) Average recovery ... Connectivity analysis and stoichiometry To emphasize the effect of stoichiometry, we examined the connectivity structure of oxaloacetate (OAA) in E. coli. OAA plays an important role in the regulation of carbon flux in most organisms. Again, for this study, we used the metabolic network presented in Feist et al. [36] and a minimal medium based on glucose as a sole carbon source and oxic conditions. We determined CFPs from OAA to all reachable metabolites (obviously some metabolites may not be reachable via a CFP from OAA). In order to organize and compare the obtained results, we plotted a connectivity curve that shows the total number of connected metabolites when we move a specified number of reaction steps away from the source metabolite. To show the effect of stoichiometry, we plot the connectivity curves when stoichiometry is included (so including Equations 5 and 6) and when it is not included (so excluding Equations 5 and 6). Figure Figure7a7a shows the connectivity curves for OAA. For example, in five reaction steps, OAA reaches 300 metabolites when stoichiometry is included and 400 metabolites otherwise. It can also be observed that, in any number of reaction steps, the number of metabolites reachable from OAA when stoichiometry is taken into account is 834, but 1,028 metabolites when it is not considered. These results clearly show the effect of considering stoichiometry. We repeated the same analysis in two structurally different metabolites, namely arginine (L-Arg), an amino acid, and phosphatidic acid (PA120), an important lipid. We found a very similar behavior, as observed in Figure 7b,c. This analysis shows the importance of considering stoichiometry for the topological analysis of metabolic networks from a path-based perspective. Effect of stoichiometry in connectivity analysis. (a) Connectivity curve for oxaloacetate (OAA). (b) Connectivity curve for arginine (L-Arg). (c) Connectivity curve for phosphatidic acid (PA120). It is usual to find K paths between a pair of key metabolites/reactions in, for example, the interpretation of 'omics' data [13,20]. Current path-finding methods do not take into account stoichiometric constraints for this analysis. In the analysis presented below we show that the resulting K functional paths are strongly dependent on stoichiometric constraints. This fact is illustrated in this sub-section with the pathway analysis of Pyr-OAA metabolism. PEP, Pyr and OAA are important metabolites whose underlying inter-conversions control the carbon flux distribution in bacteria [45]. The performance of the PEP-Pyr-OAA node changes in different organisms and growth conditions. We focus here on the structure of CFPs from Pyr to OAA in E. coli in two different scenarios, namely in oxic and anoxic conditions. Pyr and OAA are linked by two fundamental metabolic processes. Firstly, Pyr (via PEP) can be carboxylated to OAA for the replenishment of TCA cycle intermediates or for anabolic purposes (for example, amino acid biosynthesis). This process is typically referred to as anaplerosis. In addition, Pyr and OAA are strongly related via the TCA cycle, which oxidizes carbon of Pyr to CO[2 ]and requires OAA to operate. We calculated the 100 shortest CFPs in both scenarios using the metabolic network presented in Feist et al. [36]. Again, we used the list of arcs presented in Additional file 1. In addition, we used a minimal medium based on glucose. See Additional file 3 for details as to the medium used. Figure Figure88 shows the 100 shortest CFPs from Pyr to OAA in oxic conditions. Both fundamental metabolic processes described above between Pyr and OAA (anaplerotic route via PEP and the TCA cycle) are recovered (see dashed lines). In addition, different alternative routes to these processes are found. In particular, several bypasses to the TCA cycle can be observed in Figure Figure8.8. The glyoxylate (GLX) shunt was recovered, as well as the γ-aminobutyrate (GABA) shunt, whose role as an integral part of the TCA cycle was recently hypothesized [46]. We also determined a (theoretical, non-experimentally determined) bypass via propionyl-CoA (PPCoA), which was reported in a previous paper [31]. Interestingly, we also predicted a bypass to the TCA cycle via L-Arg catabolism. Though not shown in Figure Figure8,8, L-Arg is consumed in a reaction catalyzed by arginine succinyltransferase (AST; SUCCoA + L-Arg → SUCArg). Several links to the TCA cycle with arginine-L metabolism has been previously reported [47], although more research is needed to examine whether this detour is a functionally feasible alternative route to succinyl-CoA synthetase (SUCOAS) (ATP + CoA + SUCC ↔ ADP + Pi + SUCCoA). 100 shortest CFPs in E. coli from Pyr to OAA in oxic conditions. Both the thickness of arcs and the size of metabolite nodes correspond to their frequency of appearance in the 100 shortest CFPs. Metabolites in grey are intermediates involved in the TCA ... Though the number of non-meaningful paths has been substantially reduced, it can be observed in Figure Figure88 that they still exist - for example, different routes via CoA. These false positives do not arise from the lack of stoichiometric balancing, but due to carbon exchange constraints. Indeed, these routes exchange carbon atoms in each of their intermediate steps but do not exchange carbon atoms between Pyr and OAA. When the current limitations described above (in the discussion of atom mapping-based approaches) are addressed, such strategies may be an effective constraint to remove these false positives. We repeated the same analysis in anoxic conditions (Figure (Figure9).9). In this situation, the main variability in the 100 shortest CFPs is found in anaplerotic routes, since the TCA cycle is not active. This is due to the fact that the balancing of coenzyme Q (CoQ) and ubiquinol is not possible without oxygen and therefore enzyme succinate dehydrogenase (CoQ + SUCC → Fum + CoQH2) cannot work in sustained steady-state. This meant that several other reactions involved in the TCA cycle do not appear in the 100 shortest CFPs, namely isocitrate dehydrogenase (ICit + NADP ↔ AKG + CO[2 ]+ NADPH), 2-oxogluterate dehydrogenase (AKG + CoA + NAD → CO[2 ]+ NADH + SUCCoA) and SUCOAS (ATP + CoA + SUCC ↔ ADP + Pi + SUCCoA) are not in Figure Figure9.9. This is also the case for metabolite AKG, which is now not involved in the 100 shortest CFPs from Pyr to OAA, while in oxic conditions it appeared in five solutions. In addition, most of the bypasses previously mentioned in oxic conditions are not involved in Figure Figure9;9; indeed just the glyoxylate shunt is kept in the solution. 100 shortest CFPs in E. coli from Pyr to OAA in anoxic conditions. Both the thickness of arcs and the size of metabolite nodes correspond to their frequency of appearance in the 100 shortest CFPs. Metabolites in grey are intermediates involved in the ... Finally, as observed in Figures Figures88 and and9,9, our CFP approach properly captures the metabolic changes induced when oxygen is removed from the medium. These changes cannot be captured if stoichiometric constraints are not considered, showing again the strength of our CFP approach. Graph-based methods have been widely used for the analysis of metabolic networks, but suffer from the important weakness that reaction stoichiometry is neglected. In this paper we show that, using the novel concept of CFPs, reaction stoichiometry can be incorporated into path-finding approaches, which constitute a clear progress over the state of the art at the methodological level. Our results show that, when stoichiometry is incorporated into path-finding methods, the resulting set of functional pathways is substantially altered, as observed in the analysis of the 40 reference pathways. This idea is also reflected in the analysis of aerobic and anaerobic Pyr-OAA metabolism, which emphasizes the importance of the steady-state condition and its underlying boundary definition for the analysis of metabolic networks. In addition, connectivity analysis revealed important differences when stoichiometry was considered, as we illustrated with regard to a number of metabolites. In summary, CFPs open new avenues for analyzing metabolic networks at the topological and functional levels and constitute a major advance. Though the incorporation of stoichiometry into a path-finding method is the main feature of our work, our CFP approach focuses on paths involving effective carbon exchange in each of their intermediate steps. The results we have presented confirm the relevance of this strategy when analyzing metabolic networks using a path-finding approach. Our public release of the manually curated E. coli database incorporating effective carbon exchange information (based on BiGG [42] and the work of Feist et al. [36]) represents a valuable dataset available for the scientific community, which can be used for further analysis. It is important to mention that our CFP approach is formulated as a mixed-integer linear program, which cannot be solved using classical algorithms from graph theory and requires a branch and bound approach. Computational experience shows that the determination of CFPs is not expensive, namely in the order of milliseconds. This fact makes our approach an effective tool for addressing other relevant questions previously addressed by path-finding approaches. Our analysis of CFPs in aerobic Pyr-OAA metabolism allowed us to detect several bypasses to the TCA cycle. Some of these bypasses have been recently reported using a different pathway analysis technique, namely elementary flux patterns for the bypass via the GABA shunt [48] and generating flux modes for the bypass via PPCoA [31]. In addition, we found an alternative bypass to the TCA cycle via L-Arg. This novel pathway is currently theoretical (it should be treated with caution) and requires experimental validation; however, it shows the capability of our CFP approach to generate new Finally, despite much debate in the field comparing the performance of path-finding methods and stoichiometric methods [25,27,49], this article shows that both approaches can work in a synergic fashion so as to explore the huge complexity in cellular metabolism. Materials and methods Equations 1 to 11 presented in the 'Mathematical model' sub-section define a mixed-integer linear problem and, algorithmically, such problems are solved by linear programming-based tree search. Modern software packages to perform this task, such as ILOG CPLEX, which we used, are well developed and highly sophisticated. ILOG CPLEX was run in a Matlab environment version 7.5 (R2007b). The computation of the shortest CFP and the 100 shortest CFPs took us (on average) 300 ms and 2.5 minutes, respectively, on a 64-bit, 2.00 GHz PC with 12 Gb RAM. Analysis using regression indicated that, over the range of K values examined (up to K = 250), the total time for computing the K shortest CFPs was (approximately) proportional to K^1.4. This implies that the computation time of CFPs grows only as a low power of the number of paths (K) sought. AcCoA: acetyl-CoA; AcGlc-D: 6-acetyl-D-glucose; AKG: 2-oxoglutarate; AST: Arginine succinyltransferase; BiGG: Biochemical Genetic and Genomic; CDP: cytidine-diphosphate; CFP: carbon flux path; CoA: coenzyme A; CoQ: coenzyme Q; D-Glc: glucose; Fum: fumarate; G6P: glucose 6-phosphate; GABA: gamma-aminobutyric acid; GLCATr: D-glucose O-acetyltransferase; GLX: Glyoxylate; PA120: phosphatidic acid; HCO3: bicarbonate; ICit: isocitrate; L-Arg: arginine; L-Mal: acetyl-maltose; OAA: oxaloacetate; PEP: phosphoenolpyruvate; PPCoA: propionyl-CoA; Pyr: pyruvate; SUCArg: Succinyl-L-arginine; SUCC: Succinate; SUCCoA: succinyl-coenzyme A; SUCOAS: succinyl-CoA synthetase; TCA: tricarboxylic acid. Authors' contributions JPe developed and implemented the method, wrote the manuscript and performed analyses; JPr developed the method and carbon exchange database; JEB developed the method and wrote the manuscript; FJP conceived the study, developed the method and wrote the manuscript. All authors discussed the results, and read, commented and approved the final manuscript. Supplementary Material Additional file 1: Database of carbon exchange arcs. PDF document containing a list of arcs involving effective carbon flux in the metabolic network of Feist et al. [36]. Additional file 2: Supporting data for side-by-side comparison. Word document containing a list of 40 reference pathways used in the side-by-side comparison, a side-by-side comparison using accuracy rate, and a discussion on infeasible pathways in Figure 6 [7,9,12,13,35-37,41,44,50-56]. Additional file 3: Supporting data for Figures 8 and 9. Details of the 100 shortest CFPs in oxic and anoxic conditions from Pyr to OAA. The work of JPe was supported by the Basque Government. The authors would like to thank two anonymous referees for their valuable comments improving the manuscript. • Aittokallio T, Schwikowski B. Graph-based methods for analysing networks in cell biology. Brief Bioinform. 2006;7:243–255. doi: 10.1093/bib/bbl022. [PubMed] [Cross Ref] • Jeong H, Tombor B, Albert R, Oltvai ZN, Barabasi AL. The large-scale organization of metabolic networks. Nature. 2000;407:651–654. doi: 10.1038/35036627. [PubMed] [Cross Ref] • Ma HW, Zeng AP. The connectivity structure, giant strong component and centrality of metabolic networks. Bioinformatics. 2003;19:1423–1430. doi: 10.1093/bioinformatics/btg177. [PubMed] [Cross Ref • Arita M. The metabolic world of Escherichia coli is not small. Proc Natl Acad Sci USA. 2004;101:1543–1547. doi: 10.1073/pnas.0306458101. [PMC free article] [PubMed] [Cross Ref] • Montanez R, Medina MA, Sole RV, Rodriguez-Caso C. When metabolism meets topology: Reconciling metabolite and reaction networks. Bioessays. 2010;32:246–256. doi: 10.1002/bies.200900145. [PubMed] [ Cross Ref] • Fell DA, Wagner A. The small world of metabolism. Nat Biotechnol. 2000;18:1121–1122. doi: 10.1038/81025. [PubMed] [Cross Ref] • Wagner A, Fell DA. The small world inside large metabolic networks. Proc Biol Sci. 2001;268:1803–1810. doi: 10.1098/rspb.2001.1711. [PMC free article] [PubMed] [Cross Ref] • Yamada T, Bork P. Evolution of biomolecular networks: lessons from metabolic and protein interactions. Nat Rev Mol Cell Biol. 2009;10:791–803. doi: 10.1038/nrm2787. [PubMed] [Cross Ref] • Croes D, Couche F, Wodak SJ, van Helden J. Inferring meaningful pathways in weighted metabolic networks. J Mol Biol. 2006;356:222–236. doi: 10.1016/j.jmb.2005.09.079. [PubMed] [Cross Ref] • Arita M. Metabolic reconstruction using shortest paths. Simulation Practice Theory. 2000;8:109–125. doi: 10.1016/S0928-4869(00)00006-9. [Cross Ref] • Rahman SA, Advani P, Schunk R, Schrader R, Schomburg D. Metabolic pathway analysis web service (Pathway Hunter Tool at CUBIC). Bioinformatics. 2005;21:1189–1193. doi: 10.1093/bioinformatics/ bti116. [PubMed] [Cross Ref] • Planes FJ, Beasley JE. Path finding approaches and metabolic pathways. Discrete Appl Mathematics. 2009;157:2244–2256. doi: 10.1016/j.dam.2008.06.035. [Cross Ref] • Faust K, Dupont P, Callut J, van Helden J. Pathway discovery in metabolic networks by subgraph extraction. Bioinformatics. 2010;26:1211–1218. doi: 10.1093/bioinformatics/btq105. [PMC free article ] [PubMed] [Cross Ref] • Patil KR, Nielsen J. Uncovering transcriptional regulation of metabolism by using metabolic network topology. Proc Natl Acad Sci USA. 2005;102:2685–2689. doi: 10.1073/pnas.0406811102. [PMC free article] [PubMed] [Cross Ref] • Zelezniak A, Pers TH, Soares S, Patti ME, Patil KR. Metabolic network topology reveals transcriptional regulatory signatures of type 2 diabetes. PLoS Comput Biol. 2010;6:e1000729. doi: 10.1371/ journal.pcbi.1000729. [PMC free article] [PubMed] [Cross Ref] • Kharchenko P, Church GM, Vitkup D. Expression dynamics of a cellular metabolic network. Mol Syst Biol. 2005;1:2005.0016. [PMC free article] [PubMed] • Antonov AV, Dietmann S, Wong P, Mewes HW. TICL - a web tool for network-based interpretation of compound lists inferred by high-throughput metabolomics. FEBS J. 2009;276:2084–2094. doi: 10.1111/ j.1742-4658.2009.06943.x. [PubMed] [Cross Ref] • Antonov AV, Dietmann S, Mewes HW. KEGG spider: interpretation of genomics data in the context of the global gene metabolic network. Genome Biol. 2008;9:R179. doi: 10.1186/gb-2008-9-12-r179. [PMC free article] [PubMed] [Cross Ref] • Jourdan F, Breitling R, Barrett MP, Gilbert D. MetaNetter: inference and visualization of high-resolution metabolomic networks. Bioinformatics. 2008;24:143–145. doi: 10.1093/bioinformatics/ btm536. [PubMed] [Cross Ref] • Cottret L, Wildridge D, Vinson F, Barrett MP, Charles H, Sagot MF, Jourdan F. MetExplore: a web server to link metabolomic experiments and genome-scale metabolic networks. Nucleic Acids Res. 2010;38:W132–137. doi: 10.1093/nar/gkq312. [PMC free article] [PubMed] [Cross Ref] • Rahman SA, Schomburg D. Observing local and global properties of metabolic pathways: 'load points' and 'choke points' in the metabolic networks. Bioinformatics. 2006;22:1767–1774. doi: 10.1093/ bioinformatics/btl181. [PubMed] [Cross Ref] • Fatumo S, Plaimas K, Mallm JP, Schramm G, Adebiyi E, Oswald M, Eils R, König R. Estimating novel potential drug targets of Plasmodium falciparum by analysing the metabolic network of knock-out strains in silico. Infect Genet Evol. 2009;9:351–358. doi: 10.1016/j.meegid.2008.01.007. [PubMed] [Cross Ref] • Guimera R, Sales-Pardo M, Amaral LA. A network-based method for target selection in metabolic networks. Bioinformatics. 2007;23:1616–1622. doi: 10.1093/bioinformatics/btm150. [PMC free article] [ PubMed] [Cross Ref] • Deville Y, Gilbert D, van Helden J, Wodak SJ. An overview of data models for the analysis of biochemical pathways. Brief Bioinform. 2003;4:246–259. doi: 10.1093/bib/4.3.246. [PubMed] [Cross Ref] • de Figueiredo LF, Schuster S, Kaleta C, Fell DA. Can sugars be produced from fatty acids? A test case for pathway analysis tools. Bioinformatics. 2008;24:2615–2621. doi: 10.1093/bioinformatics/ btn500. [PubMed] [Cross Ref] • de Figueiredo LF, Schuster S, Kaleta C, Fell DA. Response to comment on "Can sugars be produced from fatty acids? A test case for pathway analysis tools". Bioinformatics. 2009;25:3330–3331. doi: 10.1093/bioinformatics/btp591. [PubMed] [Cross Ref] • Faust K, Croes D, van Helden J. In response to "Can sugars be produced from fatty acids? A test case for pathway analysis tools". Bioinformatics. 2009;25:3202–3205. doi: 10.1093/bioinformatics/ btp557. [PubMed] [Cross Ref] • Schuster S, Fell DA, Dandekar T. A general definition of metabolic pathways useful for systematic organization and analysis of complex metabolic networks. Nat Biotechnol. 2000;18:326–332. doi: 10.1038/73786. [PubMed] [Cross Ref] • Klamt S, Haus UU, Theis F. Hypergraphs and cellular networks. PLoS Comput Biol. 2009;5:e1000385. doi: 10.1371/journal.pcbi.1000385. [PMC free article] [PubMed] [Cross Ref] • Schilling CH, Letscher D, Palsson BO. Theory for the systemic definition of metabolic pathways and their use in interpreting metabolic function from a pathway-oriented perspective. J Theor Biol. 2000;203:229–248. doi: 10.1006/jtbi.2000.1073. [PubMed] [Cross Ref] • Rezola A, de Figueiredo LF, Brock M, Pey J, Podhorski A, Wittmann C, Schuster S, Bockmayr A, Planes FJ. Exploring metabolic pathways in genome-scale networks via generating flux modes. Bioinformatics. 2011;27:534–540. doi: 10.1093/bioinformatics/btq681. [PubMed] [Cross Ref] • Terzer M, Stelling J. Large-scale computation of elementary flux modes with bit pattern trees. Bioinformatics. 2008;24:2229–2235. doi: 10.1093/bioinformatics/btn401. [PubMed] [Cross Ref] • de Figueiredo LF, Podhorski A, Rubio A, Kaleta C, Beasley JE, Schuster S, Planes FJ. Computing the shortest elementary flux modes in genome-scale metabolic networks. Bioinformatics. 2009;25 :3158–3165. doi: 10.1093/bioinformatics/btp564. [PubMed] [Cross Ref] • van Helden J, Wernisch L, Gilbert D, Wodak SJ. Graph-based analysis of metabolic networks. Bioinformatics Genome Analysis. 2002;38:245–274. [PubMed] • Faust K, Croes D, van Helden J. Metabolic pathfinding using RPAIR annotation. J Mol Biol. 2009;388:390–414. doi: 10.1016/j.jmb.2009.03.006. [PubMed] [Cross Ref] • Feist AM, Henry CS, Reed JL, Krummenacker M, Joyce AR, Karp PD, Broadbelt LJ, Hatzimanikatis V, Palsson BO. A genome-scale metabolic reconstruction for Escherichia coli K-12 MG1655 that accounts for 1260 ORFs and thermodynamic information. Mol Syst Biol. 2007;3:121. [PMC free article] [PubMed] • Planes FJ, Beasley JE. An optimization model for metabolic pathways. Bioinformatics. 2009;25:2723–2729. doi: 10.1093/bioinformatics/btp441. [PubMed] [Cross Ref] • Kotera M, Hattori M, Oh M, Yamamoto M, Komeno T, Yabuzaki Y, Tonomura K, Goto S, Kanehisa M. RPAIR: a reactant-pair database representing chemical changes in enzymatic reactions. Genome Informatics. 2004;15:P062. • Arita M. In silico atomic tracing by substrate-product relationships in Escherichia coli intermediary metabolism. Genome Res. 2003;13:2455–2466. doi: 10.1101/gr.1212003. [PMC free article] [ PubMed] [Cross Ref] • Blum T, Kohlbacher O. Using atom mapping rules for an improved detection of relevant routes in weighted metabolic networks. J Comput Biol. 2008;15:565–576. doi: 10.1089/cmb.2008.0044. [PubMed] [ Cross Ref] • Heath AP, Bennett GN, Kavraki LE. Finding metabolic pathways using atom tracking. Bioinformatics. 2010;26:1548–1555. doi: 10.1093/bioinformatics/btq223. [PMC free article] [PubMed] [Cross Ref] • Schellenberger J, Park JO, Conrad TM, Palsson BO. BiGG: a Biochemical Genetic and Genomic knowledgebase of large scale metabolic reconstructions. BMC Bioinformatics. 2010;11:213. doi: 10.1186/ 1471-2105-11-213. [PMC free article] [PubMed] [Cross Ref] • AtomMetaNetWeb. http://www.kavrakilab.org/atommetanetweb/#home • Eschenfeldt WH, Stols L, Rosenbaum H, Khambatta ZS, Quaite-Randall E, Wu S, Kilgore DC, Trent JD, Donnelly MI. DNA from uncultured organisms as a source of 2,5-diketo-D-gluconic acid reductases. Appl Environ Microbiol. 2001;67:4206–4214. doi: 10.1128/AEM.67.9.4206-4214.2001. [PMC free article] [PubMed] [Cross Ref] • Sauer U, Eikmanns BJ. The PEP-pyruvate-oxaloacetate node as the switch point for carbon flux distribution in bacteria. FEMS Microbiol Rev. 2005;29:765–794. doi: 10.1016/j.femsre.2004.11.002. [ PubMed] [Cross Ref] • Fait A, Fromm H, Walter D, Galili G, Fernie AR. Highway or byway: the metabolic role of the GABA shunt in plants. Trends Plant Sci. 2008;13:14–19. doi: 10.1016/j.tplants.2007.10.005. [PubMed] [ Cross Ref] • Lu CD. Pathways and regulation of bacterial arginine metabolism and perspectives for obtaining arginine overproducing strains. Appl Microbiol Biotechnol. 2006;70:261–272. doi: 10.1007/ s00253-005-0308-z. [PubMed] [Cross Ref] • Kaleta C, de Figueiredo LF, Schuster S. Can the whole be less than the sum of its parts? Pathway analysis in genome-scale metabolic networks using elementary flux patterns. Genome Res. 2009;19 :1872–1883. doi: 10.1101/gr.090639.108. [PMC free article] [PubMed] [Cross Ref] • Planes FJ, Beasley JE. A critical examination of stoichiometric and path-finding approaches to metabolic pathways. Brief Bioinform. 2008;9:422–436. doi: 10.1093/bib/bbn018. [PubMed] [Cross Ref] • Baldoma L, Aguilar J. Metabolism of L-fucose and L-rhamnose in Escherichia coli: aerobic-anaerobic regulation of L-lactaldehyde dissimilation. J Bacteriol. 1988;170:416–421. [PMC free article] [ • Becker DJ, Lowe JB. Fucose: biosynthesis and biological function in mammals. Glycobiology. 2003;13:41R–53R. doi: 10.1093/glycob/cwg054. [PubMed] [Cross Ref] • Kanehisa M, Goto S, Hattori M, Aoki-Kinoshita KF, Itoh M, Kawashima S, Katayama T, Araki M, Hirakawa M. From genomics to chemical genomics: new developments in KEGG. Nucleic Acids Res. 2006;34 :D354–357. doi: 10.1093/nar/gkj102. [PMC free article] [PubMed] [Cross Ref] • Keseler IM, Bonavides-Martínez C, Collado-Vides J, Gama-Castro S, Gunsalus RP, Johnson DA, Krummenacker M, Nolan LM, Paley S, Paulsen IT, Peralta-Gil M, Santos-Zavaleta A, Shearer AG, Karp PD. EcoCyc: a comprehensive view of Escherichia coli biology. Nucleic Acids Res. 2009;37:D464–470. doi: 10.1093/nar/gkn751. [PMC free article] [PubMed] [Cross Ref] • Kornberg H. Krebs and his trinity of cycles. Nat Rev Mol Cell Biol. 2000;1:225–228. doi: 10.1038/35043073. [PubMed] [Cross Ref] • O'Donovan GA, Neuhard J. Pyrimidine metabolism in microorganisms. Bacteriol Rev. 1970;34:278–343. [PMC free article] [PubMed] • Xi H, Schneider BL, Reitzer L. Purine catabolism in Escherichia coli and function of xanthine dehydrogenase in purine salvage. J Bacteriol. 2000;182:5332–5341. doi: 10.1128/ JB.182.19.5332-5341.2000. [PMC free article] [PubMed] [Cross Ref] Articles from Genome Biology are provided here courtesy of BioMed Central Your browsing activity is empty. Activity recording is turned off. See more...
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3219972/?tool=pubmed","timestamp":"2014-04-18T17:08:22Z","content_type":null,"content_length":"164556","record_id":"<urn:uuid:fd8a0836-4092-44ae-9c5f-f96d4fbc2a29>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00511-ip-10-147-4-33.ec2.internal.warc.gz"}
Appendix 1. Simple Derivation of the Lorentz Transformation. Einstein, Albert. 1920. Relativity: The Special and General Theory FOR the relative orientation of the co-ordinate systems indicated in Fig. 2, the x-axes of both systems permanently coincide. In the present case we can divide the problem into parts by 1 considering first only events which are localised on the x-axis. Any such event is represented with respect to the co-ordinate system K by the abscissa x and the time t, and with respect to the system k' by the abscissa x' and the time t'. when x and t are given. A light-signal, which is proceeding along the positive axis of x, is transmitted according to the equation or 2 Since the same light-signal has to be transmitted relative to k' with the velocity c, the propagation relative to the system k' will be represented by the analogous formula Those space-time points (events) which satisfy (1) must also satisfy (2). Obviously this will be the case when the relation is fulfilled in general, where (3), the disappearance of (x ct) involves the disappearance of (x' ct'). If we apply quite similar considerations to light rays which are being transmitted along the negative x-axis, we obtain the condition 3 By adding (or subtracting) equations (3) and (4), and introducing for convenience the constants a and b in place of the constants 4 we obtain the equations We should thus have the solution of our problem, if the constants a and b were known. These result from the following discussion. 5 For the origin of k' we have permanently x' = 0, and hence according to the first of the equations (5) 6 If we call v the velocity with which the origin of k' is moving relative to K, we then have 7 The same value v can be obtained from equation (5), if we calculate the velocity of another point of k' relative to K, or the velocity (directed towards the negative x-axis) of a point of K with 8 respect to K'. In short, we can designate v as the relative velocity of the two systems. Furthermore, the principle of relativity teaches us that, as judged from K, the length of a unit measuring-rod which is at rest with reference to k' must be exactly the same as the length, as 9 judged from K', of a unit measuring-rod which is at rest relative to K. In order to see how the points of the x'-axis appear as viewed from K, we only require to take a snapshot of k' from K; this means that we have to insert a particular value of t (time of K), e.g. t = 0. For this value of t we then obtain from the first of the equations (5) Two points of the x'-axis which are separated by the distance x'=1 when measured in the k' system are thus separated in our instantaneous photograph by the distance 10 But if the snapshot be taken from K'(t' = 0), and if we eliminate t from the equations (5), taking into account the expression (6), we obtain 11 From this we conclude that two points on the x-axis and separated by the distance 1 (relative to K) will be represented on our snapshot by the distance 12 But from what has been said, the two snapshots must be identical; hence x in (7) must be equal to x' in (7a), so that we obtain 13 The equations (6) and (7b) determine the constants a and b. By inserting the values of these constants in (5), we obtain the first and the fourth of the equations given in Section XI. 14 Thus we have obtained the Lorentz transformation for events on the x-axis. It satisfies the condition 15 The extension of this result, to include events which take place outside the x-axis, is obtained by retaining equations (8) and supplementing them by the relations 16 In this way we satisfy the postulate of the constancy of the velocity of light in vacuo for rays of light of arbitrary direction, both for the system K and for the system K'. This may be shown in the following manner. We suppose a light-signal sent out from the origin of K at the time t = 0. It will be propagated according to the equation 17 or, if we square this equation, according to the equation It is required by the law of propagation of light, in conjunction with the postulate of relativity, that the transmission of the signal in question should take place as judged from K' in 18 accordance with the corresponding formula or, In order that equation (10a) may be a consequence of equation (10), we must have Since equation (8a) must hold for points on the x-axis, we thus have a) and (9), and hence also of (8) and (9). We have thus derived the Lorentz transformation. 19 The Lorentz transformation represented by (8) and (9) still requires to be generalised. Obviously it is immaterial whether the axes of K' be chosen so that they are spatially parallel to those of 20 K. It is also not essential that the velocity of translation of K' with respect to K should be in the direction of the x-axis. A simple consideration shows that we are able to construct the Lorentz transformation in this general sense from two kinds of transformations, viz. from Lorentz transformations in the special sense and from purely spatial transformations, which corresponds to the replacement of the rectangular co-ordinate system by a new system with its axes pointing in other directions. Mathematically, we can characterise the generalised Lorentz transformation thus: It expresses x', y', z', t', in terms of linear homogeneous functions of x, y, z, t, of such a kind that the 21 is satisfied identically. That is to say: If we substitute their expressions in x, y, z, t, in place of x', y', z', t', on the left-hand side, then the left-hand side of (11a) agrees with the right-hand side.
{"url":"http://bartleby.com/173/a1.html","timestamp":"2014-04-20T10:49:06Z","content_type":null,"content_length":"30827","record_id":"<urn:uuid:3562e962-334e-4ffe-ba31-e2f0aee766f0>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00573-ip-10-147-4-33.ec2.internal.warc.gz"}
Classification of finite groups of isometries up vote 13 down vote favorite Consider the problem of classifying the finite groups of isometries of R^n. --For n=2 it is cyclic and dihedral groups. --For n=3 they are well known, probably from Kepler and are related to ade-classification. --For n=4 we can get them by taking the universal cover of SO(4) which is isomorphic to SU2 x SU2, though I do not know where the classification is available. But my main question is for dimension n=5 and above. Does anybody knows the state of the art? A reference would be most helpful. Note that the finite subgroups of GLn(Z) are classified for n<=10. finite-groups ade-classifications 3 You mean linear isometries, right? - Otherwise, considering all the affine isometries, in dimension 2 you have the 17 crystallographic groups. – Qfwfq Aug 30 '10 at 11:19 3 The finite subgroups of SO(4) are listed in the book by Conway and Smith on quaternions and octonions. I have recently checked that it is correct and recovered it from the classification of finite subgroups of Spin(4), which we needed for a separate project: see the paper arxiv.org/abs/1007.4761 . – José Figueroa-O'Farrill Aug 30 '10 at 11:29 2 You may also look at the references in this answer to a previous MO question: mathoverflow.net/questions/17072/the-finite-subgroups-of-sun/… – José Figueroa-O'Farrill Aug 30 '10 at 12:46 3 @unknown: If $G \subset GL(n,\mathbb{R})$ is finite, then we may change coordinates so that the origin is at the centre of mass of some orbit $Gx$ (for an arbitrary $x\in \mathbb{R}^n$). Then the origin is fixed by every element of $G$, so linearity follows from finiteness. On the other hand, if you want to consider discrete subgroups, then you do need to allow for the possibility that some of your isometries may be affine transformations. – Vaughn Climenhaga Aug 30 '10 at 19:35 It's also worth pointing out that if G is any finite subgroup of GL(n,R), then one can introduce an inner product with respect to which G acts by isometries; in particular, there is an inner 2 automorphism of GL(n,R) that maps G to a finite subgroup of Isom(n,R). Thus classifying finite groups of isometries is equivalent to classifying finite groups of linear transformations. – Vaughn Climenhaga Aug 30 '10 at 19:38 show 3 more comments 4 Answers active oldest votes This is one of the problems that just gets hopelessly messy beyond a few small dimensions. The reason is that asking for all finite subgroups of isometries of Euclidean space is essentially the same as asking for all orthogonal representations of all finite groups, and since irreducible representations have dimension at most the square root of the order of the group, you have to use all groups of order up to at least n^2 to find groups of isometries of R^n. A major problem in doing this is that there are huge numbers of nilpotent groups of order p^n once n is larger than about 5; for example there are several hundred groups of order 64, all of whose irreducible representations have dimension at most 8. So my guess would be that classifying all up vote groups of isometries in dimensions greater than about 10 will require a lot of obstinacy and a big computer. 27 down vote (Added later) On checking the literature, I find that people classifying such subgroups usually make some simplifying assumptions, by only looking for ones that are irreducible, maximal, and that act on an integral lattice. With these extra simplifications one can get a bit further: the state of the art seems to be around 30 dimensions. 3 Another simplifying assumption is that of primitivity, which means that the representation is irreducible and not induced from some proper subgroup. That excludes in one fell swoop all nilpotent groups. There are still a lot of them left however. – Torsten Ekedahl Aug 30 '10 at 17:54 7 I recommend using nilpotent groups as an excuse to give up. Otherwise you shortly afterwards run into solvable groups, which are much worse. – Richard Borcherds Aug 31 '10 at 4:31 add comment There is a vast literature on the classification of finite linear groups over various fields. Over the complex or real fields, all finite linear groups are conjugate to subgroups of the respective unitary or orthogonal group, so as remarked in one of the comments above, studying finite groups of isometries in this context is the same as studying all the finite subgroups of ${\ rm GL}(n,\mathbb{C})$ or ${\rm GL}(n,\mathbb{R}).$ As Richard Borcherds remarked, this soon becomes a complicated problem. But strategies have evolved since the birth of representation theory to tackle the problem (for general fields) difficult as it is, in a systematic way. I'll discuss the real and complex cases. Generally speaking, we want to concentrate attention on linear groups which can't be described in some "obvious" way in terms of linear groups in smaller dimensions. The first reduction, then, is to concentrate on irreducible groups, those which leave no proper non-zero subspace invariant. Maschke's Theorem tells us that no information is lost in the reduction. Another question, for real representations, is what changes if we extend scalars to the complex field, where life is generally easier. An irreducible real linear group may become reducible when the scalars are extended to the complex numbers (this only happens when its character has squared-norm $2$ or $4$). In each case, the real finite linear group is isomorphic to a finite complex linear group in half the original dimension. So now I only speak of finite complex linear groups. As remarked in someone's earlier comment, the next natural reduction is to the case of primitive linear groups, those which (up to equivalence) be induced from linear groups of smaller dimension. There are strong restrictions on normal subgroups of finite primitive linear groups. In particular, the structure of primitive solvable finite linear groups is very tight, and is well-understood. Having reduced to the primitive case (back to the general finite group), the next question is whether the underlying module is a tensor product of two non-trivial modules of up smaller dimension. At this point, it may be necessary to take (still finite) central extensions of the group you started with. If there is a non-trivial tensor factorization, then we are reduced vote to questions in smaller dimension. If there is no such factorization (even allowing for central extensions), then the structure of the residual groups is very restricted indeed. The given 7 representation may be "tensor induced" from a representation (of smaller dimension) of a proper subgroup. Tensor induction was introduced by Serre. If it can't be tensor induced from a lower down dimensional representation (again, even allowing for central extensions), then the only possibility that remains is subgroup of a central extension of the automorphism group of a finite simple vote group (containing all inner automorphisms). Many mathematicians, for example, Guralnick, Tiep, Zalesski, have calculated (relatively) low dimensional complex representations of (central extensions of) finite simple groups in recent years. My answer is therefore: yes, it is a difficult question, but one which can be addressed systematically in any given case, and for which much hard-won theory is available in the mathematical literature. Addendum: Just as it becomes impractical to list all groups of a given finite order relatively soon, and we have to content ourselves with understanding the "building blocks", that is, the finite simple groups, so it is with finite linear groups. There are three types of building blocks for finite complex linear groups: a) 1-dimensional cyclic linear groups. b) Finite complex linear groups $G$ of dimension $p^{n}$, for some prime $p$ and integer $n > 0$, which have an irreducible normal $p$-subgroup $E$ (extraspecial of order $p^{2n+1}$ and exponent $p$ when $p$ is odd; either extraspecial or the central product of an extraspecial group of order $p^{2n+1}$ with a cyclic group of order $4$ when $p = 2.$). In this case, $G/EZ(G)$ is isomorphic to an irreducible subgroup of the finite symplectic group ${\rm Sp}(2n,p)$. c) Finite complex linear groups $G$ of degree $m$ which have an irreducible quasisimple subgroup $S$ ( this means that $S = S^{\prime}$ and $S/Z(S)$ is a non-Abelian simple group). Then $G/SZ(G)$ is a subgroup of the outer automorphism group of $S/Z(S)$. The third type of building block naturally does not occur for solvable linear groups. In both cases b) and c), the respective subgroups $E$ and $S$ are minimal subject to being normal, but not central. @Richard and Geoff: Why should we consider "irreducible representations" of groups? In language of representation theory, the question will be simply "To find faithfull representations of finite groups over $\mathbb{R}$"; not necessarily irreducible. – joseph Sep 10 '11 at 7:44 At least over the complex numbers, and for finite groups, all finite dimensional representations are equivealent to unitary representations, and are hence completely reducible. Hence trhey are direct sums of irreducible representations. So if we can understand irreducible representations, we can undersatnd al representations. – Geoff Robinson Sep 10 '11 at 14:37 add comment There are a few papers by Gabriele Nebe and Wilhelm Plesken on this topic, eg: Nebe, Gabriele Finite subgroups of ${\rm GL}_{24}(\mathbb Q)$. Experiment. Math. 5 (1996), no. 3, 163--195. up vote 5 down vote Nebe, Gabriele Finite subgroups of ${\rm GL}_n(\mathbb Q)$ for $25\leq n\leq 31$. Comm. Algebra 24 (1996), no. 7, 2341--2397. Nebe, G.; Plesken, W. Finite rational matrix groups. Mem. Amer. Math. Soc. 116 (1995), no. 556, viii+144 pp. add comment 1. Surprisingly, I found explicit lists of discrete subgroups of the orthogonal group O(n) for up to n=8 dimensions on the wikipedia page for point groups, with rather unspecific references, however. Point groups is another name for discrete subgroups of O(n). [UPDATE+CORRECTION: For dimensions n=4 and larger, only the point groups which are generated by reflections (Coxeter groups) are listed. In particularly, subgroups of SO(n) (which include no matrix of determinant $-$1) are missing.] 2. There is an old sequence of two long papers by Threlfall and Seifert, part I Mathematische Annalen 1931, Volume 104, Issue 1, pp. 1-70, part II 1933, Volume 107, Issue 1, pp. 543-586, where they apparently do the classification of discrete subgroups of SO(4) by associating to each element of SO(4) a pair of rotations from SO(3). (Although my native language is German, I had a hard time reading (through) this, because I am not used to the terminology that was used at that time.) [Addition: These results are mentioned in the book by Conway and Smith on quaternions and octonions; Conway and Smith say that the list is complete, but contains duplicates.] 3. I have a rather wild conjecture (true up to three dimensions). up vote Every discrete point group in n dimensions is the symmetry group of an n-dimensional polytope which is the Cartesian product of regular polytopes, or a subgroup thereof. 4 down vote [UPDATE: Norman Johnson pointed out counterexamples: The symmetries of the root lattices E6, E7, E8 in 6, 7, and 8 dimensions. (I could not yet fully convinced myself that they are indeed counterexamples.) So dimensions 4 and 5 remain open. If I extend my conjecture to include the polytopes which have those E6, E7, or E8 symmetries, in addition to the regular polytopes, in which dimension would the next counterexamples be?] For example, the symmetries of an $m$-gonal anti-prism in 3-space are contained in the symmetries of the $2m$-sided prism, which is the 1-simplex $\times$ the regular $2m$-gon. [S:Since the regular polytopes are known in all dimensions, this would give an easy way to obtain all finite point groups. (at least in principle).:S] add comment Not the answer you're looking for? Browse other questions tagged finite-groups ade-classifications or ask your own question.
{"url":"https://mathoverflow.net/questions/37136/classification-of-finite-groups-of-isometries/37176","timestamp":"2014-04-20T06:04:27Z","content_type":null,"content_length":"83896","record_id":"<urn:uuid:ca233c39-83cb-4e85-b899-d1a00b982221>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
API Showcase From Eigen Here are some examples of Eigen 3's API. Refer to the documentation for more details. Performing row/column operations on a matrix Let m be a matrix. All the following operations are allowed by Eigen, with the self-explanatory effect, and resulting in fully optimized code. m.row(i) += alpha * m.row(j); m.col(i) = alpha * m.col(j) + beta * m.col(k); m.col(i) *= factor; Operating on blocks inside a matrix or vector m.block(firstRow, firstCol, rows, cols).setZero(); m.topLeftCorner(rows, cols) = some_other_matrix; m.block<2,2>(firstRow, firstCol).setIdentity(); // optimized variant when the # of rows, cols are known at compile-time There are also vector-specific operations. Let v be a vector. v.segment(first, size) = some_other_vector; v.segment<3>(position1) = v.segment<3>(position2); // optimized variant when the size is known at compile-time v.head(n).setConstant(12); // writes 12 in the n first coefficients of v m.diagonal().tail(n) *= lambda; // multiplies by lambda the n last diagonal coefficients of a matrix m Computing sums result = m.sum(); // returns the sum of all coefficients in m result = m.row(i).sum(); result = m.rowwise().sum(); // returns a vector of the sums in each row Here is how you would compute the sum of the cubes of the i-th column of a matrix: result = m.col(i).array().cube().sum(); // array() returns an array-expression Like other libraries, Eigen has a comma-initializer allowing to construct a matrix like this: Matrix3f m; m << 1, 2, 3, 4, 5, 6, 7, 8, 9; Unlike other libraries, Eigen's comma-initializer can be combined at will with expressions, which makes it very powerful. See our tutorial on this subject. Linear solving Just choose the matrix decomposition that you want, the solve() API is the same everywhere. Thus, switching decompositions is very easy. In just one line of code, you can decompose and solve. result = m.lu().solve(right_hand_side); // using partial-pivoting LU result = m.fullPivLu().solve(right_hand_side); // using full-pivoting LU result = m.householderQr().solve(right_hand_side); // using Householder QR result = m.ldlt().solve(right_hand_side); // using LDLt Cholesky See this tutorial page for more information about solving.
{"url":"http://eigen.tuxfamily.org/index.php?title=API_Showcase","timestamp":"2014-04-16T13:49:52Z","content_type":null,"content_length":"25720","record_id":"<urn:uuid:59ba30a2-8646-477f-9458-e4759b0ec351>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00291-ip-10-147-4-33.ec2.internal.warc.gz"}
Really hard word problem October 4th 2009, 04:53 PM Really hard word problem In a financial arrangement, you are promised $900 the first day and each day after that you will receive 75% of the previous day's amount. When one day's amount drops below $1, you stop getting paid from that day on. What day is the first day you receive no payment and what is your total income? Use a formula for the nth partial sum of a geometric sequence. October 4th 2009, 05:34 PM You should know the formulas dealing with geometric sequences and series. First you need to find the term where a_n < 1. You have your initial value and common ratio so you can just plug the info into the sequence formula. Then once you know the ending term use the geometric series formula to solve for the sum. You know what formulas I mean right? October 4th 2009, 05:46 PM You should know the formulas dealing with geometric sequences and series. First you need to find the term where a_n < 1. You have your initial value and common ratio so you can just plug the info into the sequence formula. Then once you know the ending term use the geometric series formula to solve for the sum. You know what formulas I mean right? yea i know what you mean. but it just really hard when i do word problems. like usually different problems ask me they already give me a1 and r and i just cant figure out the terms. i just need help. like i know the form is An=A1(r)^(n-1) but i just dont see what you see. October 4th 2009, 05:51 PM yea i know what you mean. but it just really hard when i do word problems. like usually different problems ask me they already give me a1 and r and i just cant figure out the terms. i just need help. like i know the form is An=A1(r)^(n-1) but i just dont see what you see. It's ok. I'll walk you through it. You have the right formula. All you need to do is figure out a_1 and r. a_1 is the starting value so this should be easy to figure out since there are very few numbers. Now for the ratio, you know it's decreasing each time by 75%, but written in decimal form the ratio is .75. You want to find for what n you get a_n is 1 or less than one. So let a_n=1, plug in the other info I just wrote about and solve for n. Does that make more sense? October 4th 2009, 08:17 PM It's ok. I'll walk you through it. You have the right formula. All you need to do is figure out a_1 and r. a_1 is the starting value so this should be easy to figure out since there are very few numbers. Now for the ratio, you know it's decreasing each time by 75%, but written in decimal form the ratio is .75. You want to find for what n you get a_n is 1 or less than one. So let a_n=1, plug in the other info I just wrote about and solve for n. Does that make more sense? ok sorry it took so long for me to get back to ya....this website is godly slow. ok this is what i got from what you explain don't laugh if im not anywhere close. a1=900 because thats what he gets on the first day correct? r= 75%=.75 what do you think?? October 5th 2009, 09:43 AM What more help do you need? Jameson told you, "You want to find for what n you get a_n is 1 or less than one". That is, you want to find n so that $900(.75)^n\le 1$ or, equivalently, $(.75)^n\le \frac{1}{900}= 0.00111...$. You can find n by using logarithms or just by calculating $(.75)^n$ for different n. (Start around n= 20.) October 5th 2009, 07:51 PM i dont know if i did this right but this is what i did but it not the correct answer as the back of the book. 900(.75)^20 less than or equal to 1 i got 2.85 less than or equal to 1 did i do this right.....what is the correct answer?
{"url":"http://mathhelpforum.com/pre-calculus/106104-really-hard-word-problem-print.html","timestamp":"2014-04-19T18:58:17Z","content_type":null,"content_length":"9885","record_id":"<urn:uuid:9d937a2e-cb34-4668-be29-a3285c3cacd4>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00124-ip-10-147-4-33.ec2.internal.warc.gz"}
Collaboration Exercise Collaboration Exercise Instructions Use these instructions with Collaboration Exercise PowerPoint Slides This exercise is designed to be an excellent ice-breaker at the beginning of a class and/or semester. The exercise also gives the students some first-hand insight into different levels of collaboration when working in a group or with pair programming. The exercise consists of three activities where the students will have to design: 1) a transportation device; 2) a movie script; and 3) a robotic classroom assistant. In each activity, the groups do their work in a different fashion, which will demonstrate various forms of collaboration. Students work in groups of 4, ideally. You can modify the instructions slightly to accommodate a group of 5, but it is best if your groups are at least 4 due to the need for students to work in two's so a group of 3 would be difficult. You can form groups and have these same groups work together for all three activities. Or, you can form new groups of 4s for each exercise for more ice Preparation: Have approximately four sheets of blank white paper for each person in the class. Also, have one thin marker (such as Vis a Vis) for each student. Students share their drawings with the class -- and their drawings will not show up if they are drawn in pencil or regular pen. Activity 1: Designing a Transportation Device On slide 2, you ask each student to individually design and draw a transportation device that meets the specifications on the slide. In the first phase of the exercise, all students are to work alone without conversing with any other classmates. The students can get about 3-5 minutes to complete their design. Then, the integration phase starts. You ask the groups to start by telling each other what time they got up this morning and rank ordering themselves based on their wake up time. Go to slide 3, tell them the group needs to integrate their designs into one transportation device using the braking system from student 1, the restraint system from student 2 and so forth. Inevitably the students start to laugh and end up drawing some nonsensical transportation device. The integration takes about another 5 minutes. When the integration is done, go around the room. Have each student present their own product. After each person in a group has presented their product, ask someone to present their integrated Lessons to bring out: First, ask the students what they learned through the exercise. If they don't bring the following points out -- discuss them yourself. All students read the same specification yet each drew something totally different. As a result, the transportation devices are hard to integrate into a good product. The same thing can happen if they work in teams and don't talk to each other. Another aspect to bring out is that when they worked alone, the room was silent, and they didn't have much fun. Once they started working together, they started laughing and enjoying themselves much more. Activity 2: Designing a Movie Script You can either keep the same groups or switch them now. On slide 4, you can find the specifications for a movie script. Go over the specifications. Tell the students that they will work in pairs to come up with a script. If you have an odd number of students, have groups of three and no one works alone. Give the pairs about 5-7 minutes to come up with a script. They just write bullet notes down on their paper, not full paragraphs. Ask the students to then decide which pair is Pair 1 and which is Pair 2. Tell the students to share their stories with the other group (3-4 minutes to share the stories) and then integrate according to the instructions on slide 4 (3 minutes). When the integration is done, go around the room. Have each pair present their own story. After each pair in a group has presented their story, ask someone to present their integrated story to the Lessons to bring out: First, ask the students what they learned through the exercise. If they don't bring the following points out -- discuss them yourself. It should be been easier to integrate two parts rather than four parts in the first exercise. The integrated story MIGHT fit together better since there were only two parts to integrate. Finally, you can remark that the stories created by the pairs were probably more creative than the first designs done by individuals in Activity 1. Collaboration is good for creativity. Activity 3: Designing a Robotic Classroom Assistant You can either keep the same groups or switch them now. On slide 5, you can find the specifications for a robotic classroom assistant. Go over the specifications. If the students have switched groups since Activity 1, ask them to decide who is student 1, 2, 3, and 4. Each person has one aspect of the product they "own." They collaborate with their teammates to get their input on their aspect. Ultimately, the owner makes the decision on that aspect of the product. In this exercise, the students will rotate who they are working with every 2 minutes. During the first iteration, students 1 and 2 talk about how to monitor the number of people in the room and how to get student's attention, and students 3 and 4 talk about the other aspects. After 2 minutes, the pair switch and each student talks about what they own and what their collaborator owns. Finally the pairs switch again. After the final iteration, the room is quiet again and each student must draw the entire classroom assistant as they know it base upon their conversations with their team mates, taking about 2-3 minutes. Then, the team integrates into one product based upon the specification of each owner. When the integration is done, go over the exercise with the class. At this point, it is too tedious for each student to present their product. Ask each group to present product and to comment on how different the integrated product is from that of the individual team members. Lessons to bring out: First, ask the students what they learned through the exercise. If they don't bring the following points out -- discuss them yourself. Discuss the fact that when pairs rotate around, each team member has a better view of the product as a whole. If one person on the team dropped out, the rest of the team could more easily make up for the loss since several people at least partially understand the aspect of the system owned by the person who left. This is a good risk management strategy, particularly in industry. Also, each person got input from several other team members on their aspect of the product and was likely to have a better, more creative aspect based upon the input of several team members. Activity designed by Laurie Williams and Lucas Layman, North Carolina State University. Please send feedback and instructions to williams@csc.ncsu.edu copyright: Laurie Williams and Lucas Layman, 2007
{"url":"http://www.realsearchgroup.org/pairlearning/Collaboration%20Exercise%20Instructions.htm","timestamp":"2014-04-17T15:32:38Z","content_type":null,"content_length":"33304","record_id":"<urn:uuid:88f109cb-a7fd-4e01-9d43-97ea828403ae>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Zariski closures of one parameter additive maps in positive characteristic up vote 0 down vote favorite Suppose we are in characteristic $p$, and that the field, $K$, that we are working over is imperfect. We have a map $\Theta: K \to K^{\delta}$ where each coordinate function $\Theta_i$ is an additive function. Suppose further that at least one of the $\Theta_i$ has non-zero derivative. Is the Zariski closure of the image a smooth algebraic set in $K^{\delta}$? I believe that essentially automatically the transcendental dimension of it's function field is one, as $\dim K = \dim \ker \Theta + \dim \overline{\Theta(K)^z}$. I want to believe that the tangent space has dimension one also! Apologies if this question is trivially true or false, I'm by no means very familiar with algebraic geometry and am learning on the fly. The lack of the map being automatically separable, and understanding the set only as a parametrization (i.e. not knowing about the functions whose zero set defines it) is confusing me. Any suggestions towards understanding these tangent spaces would be helpful! characteristic-p ag.algebraic-geometry What is $\delta$? Is it a positive integer? – S. Carnahan♦ Oct 28 '11 at 21:00 Yep, sorry for any confusion. – Confused Oct 28 '11 at 21:09 What do you mean by additive function? If it is a function in the set-theoretic sense that is additive, then Ramsey gave you a counterexample. If it's an additive polynomial ($\sum a_ix^{p^i}$) then the image is a curve. – Felipe Voloch Oct 28 '11 at 22:50 That's interesting. But I'm not sure I totally get what this other notion of "additive" is in the context of this question, Felipe. I do understand the sense in which the polynomial you write is additive, but in what sense could that be a (component) function $K\to K$ (presuming $K$ to be roughly as in my answer)? Oh - do you have in mind your $a_i$ living in the larger field $K$ and the map being $f(t)\mapsto \sum a_i(f(t))^{p^i}$? What about other imperfect $K$? Is there a notion of an additive function $K\to K$ that encapsulates your examples (and excludes those as in my example)? – Ramsey Oct 29 '11 at 2:26 @Ramsey. I have no idea what the appropriately named "Confused" wants. But if $K$ is an arbitrary field, the only reasonable maps from $K$ to $K$ are polynomials. You obviously have on the back of your mind the example of a function field, hence your "t", in general there is no "t" or "larger field", only one field. You could also take derivatives and get additive maps $f \mapsto f'$ (a field is imperfect iff it has non trivial derivations, right?). But we should not be trying to guess what the question should be. It's Confused's duty to come here and clarify. – Felipe Voloch Oct 29 '11 at 8:47 show 1 more comment 2 Answers active oldest votes The image of a map $\Theta(x) = (\Theta_1(x),\ldots,\Theta_{\delta}(x))$ where the $\Theta_i$ are polynomials is always an algebraic variety of dimension at most one, and of dimension one if one of these polynomials is non constant. This follows from general facts. Now, there is no tangent space to the image, what you can talk about is the tangent space to a point $\ Theta(a)$ on this set. The tangent space of the parametrization is, of course, $(\Theta_1'(a),\ldots,\Theta_{\delta}'(a))$, but there can be multiple branches if $\Theta$ is not injective. However if $\Theta = \Phi \circ f$, where $f$ is a polynomial in one variable, we have to analyze $\Phi$ instead. up vote 2 down vote In the special case of additive polynomials, the derivative is constant and, if you assume that one of the components is non-zero, then the derivative is never zero. Hence, the image is accepted a smooth curve if and only if, $\Theta = \Phi \circ f$ where $\Phi$ is injective. Now this is just linear algebra $\Theta$ fails to be injective in the common solutions of $\Theta_i(x-y) =0$ and, putting $f$ as the common factor of the $\Theta_i$ we are done. Thanks again for your help. What characterization of tangent space are you using? My entire problem seems to revolve around knowing very little about the coordinate ring of the image variety, and the tangent space characterizations I know require knowledge of the ideal of functions which vanish on the image in some form or another. (which I don't really have) I'm still unsure how to rule out that the tangent space at say the identity doesn't have dimension greater than 1. Is there a source that you can recommend in addition to your comment? – Confused Oct 29 '11 at 20:34 Just use the implicit function theorem. – Felipe Voloch Oct 29 '11 at 21:50 Or better, the tangent space of $\Phi({\mathbb A}^1)$ at $\Phi(a)$ is the image of the tangent space of ${\mathbb A}^1$ at $a$ by $d\Phi_a$ or something. Calculus! – Felipe Voloch Oct 29 '11 at 22:01 This last comment is exactly the issue that I'm worried about, that the $d\Theta$ may not be surjective. Like you say, certainly the tangent space at the identity of the image contains a vector $(\Theta_1'(0), \ldots, \Theta_{\delta}'(0))$, however how does one know that that is everything? As far as I know, this is related to the separability of $\Theta$, which seems to me to be impossible to check directly. – Confused Oct 29 '11 at 22:31 I need to think about an exact statement of the inverse function theorem in this setting, probably it's obvious, but would it imply that these algebraically defined tangent spaces are isomorphic? – Confused Oct 29 '11 at 22:32 show 1 more comment I'd like to understand the motivation for this question to see if I'm barking up the wrong tree here, but I think it's false (the later one-dimensional verbiage, that is) as stated. Let $k$ be an algebraically closed field of of characteristic $p$ and let $K$ be the function field in the single variable $t$ over $k$. So $K$ is an imperfect field of characteristic $p$. Consider the map $K\to K^2$ defined by $f(t)\mapsto (f(t),f(t+1))$. It's component functions are additive. Certainly, this map isn't surjective. On the other hand, take a polynomial $P(X,Y)\in K[X,Y]$ vanishing on its image. Clearing denominators from $K$, we may regard $P$ as a polynomial $P up vote 2 (t,X,Y)\in k[t,X,Y]$ that has the property that $P(t,f(t),f(t+1))$ is the zero rational function in $t$ for all $f(t)\in K$. But, given any triple $(a,b,c)\in k^3$ it is easy to find (even down vote a linear polynomial) $f(t)$ such that $f(a)=b$ and $f(a+1)=c$. It follows that $P(t,X,Y)$ is the zero polynomial (since $k$ is algebraically closed). Thus there is non-zero $P$ vanishing on the image, which means that the Zariski closure of the image is all of $K^2$. This seems to have nothing to do with the characteristic. The argument (if non-bogus) works fine for any algebraically closed $k$. I think the issue is that "additive" is an odd condition here from the point of view of algebraic geometry. Thanks for taking the time to think of my question, however my poor wording seems to be a problem. More explicitly, the map $\Theta$ is a homomorphism $\Theta: K \to K^{\delta}$ $x \ mapsto (\sum_{j=0}^n a_{1j} x^{p^{j}}, \ldots, \sum_{j=0}^n a_{\delta j} x^{p^{j}})$ with the $a_{ij} \in K$. The derivative condition amounts to at least one of the $a_{i0} \neq 0$. I am thinking of this as a one-parameter subgroup of $K^{\delta}$, and upon taking the Zariski closure of the image, I'm having a hard time understanding the tangent space of this Zariski closed set. – Confused Oct 29 '11 at 14:17 Gotcha. Just additive polynomials. This would have been a natural interpretation indeed - somehow my mind jumped to the far more general notion. – Ramsey Oct 29 '11 at 15:33 add comment Not the answer you're looking for? Browse other questions tagged characteristic-p ag.algebraic-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/79428/zariski-closures-of-one-parameter-additive-maps-in-positive-characteristic?sort=newest","timestamp":"2014-04-19T22:51:30Z","content_type":null,"content_length":"71636","record_id":"<urn:uuid:da6e2171-9ecf-410d-a598-037f76f36095>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00126-ip-10-147-4-33.ec2.internal.warc.gz"}
heat equation problem October 11th 2011, 03:29 AM #1 Jan 2011 heat equation problem I've got the following problem: $f:\mathbb{R}\rightarrow\mathbb{R}$ a $C^2$-function such that $f$ is convex and $f(0)=f'(0)=0$. $g\in C^{\infty}([0,\infty),\mathscr{S}(\mathbb{R}^d))$ a real-valued solution to the heat equation $g_t=\Delta g$. Using this show that $F\in C^1$ and $F$ decrasing, where: 'I started by just diff. $F$ and I got $F'(t) = \frac{d}{dt}\int_{\mathbb{R}^d}f(g(x,t))dx = \int_{\mathbb{R}^d} \frac{d}{dt}f(g(x,t))dx = \int_{\mathbb{R}^d}\frac{df(g)}{dg}g_tdx = \int_{\mathbb{R}^d} \frac{df}{dg}\Delta gdx$. What next? Re: heat equation problem $F'(t)= \int_{ \mathbb{R}^d } f'(g)\Delta g dx = - \int_{ \mathbb{R}^d } abla \left( f'(g) \right) \cdot abla (g)dx = -\int_{ \mathbb{R}^d } f''(g)|abla(g)|^2 dx\leq0$ where we first integrated by parts and then used that $f''\geq 0$. October 14th 2011, 02:05 PM #2 Super Member Apr 2009
{"url":"http://mathhelpforum.com/differential-equations/190074-heat-equation-problem.html","timestamp":"2014-04-17T16:53:36Z","content_type":null,"content_length":"35001","record_id":"<urn:uuid:c2ee84de-4410-4a98-9420-a14edead2a82>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: PLEASE HELP. I'm not understanding this question. Complete the statment: A(n) = 2*3^ • one year ago • one year ago Best Response You've already chosen the best response. // interesting. not sure what it means either. A(n) = 2*3^( ). Best Response You've already chosen the best response. It won't tell me anything else but that so its confusing as to how to solve it. Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5106f442e4b0ad57a564262a","timestamp":"2014-04-18T19:21:09Z","content_type":null,"content_length":"30015","record_id":"<urn:uuid:e744bf0f-63b1-4bc0-b933-d2b0407d3844>","cc-path":"CC-MAIN-2014-15/segments/1398223202457.0/warc/CC-MAIN-20140423032002-00016-ip-10-147-4-33.ec2.internal.warc.gz"}
Public Lesson Public Lessons: Area & Perimeter Area & Perimeter, Antoinette Villarin This lesson is based on the results of a performance task that was given to the students a couple weeks prior to the documented lesson. We choose a MARS problem called “Pizza Crusts.” After grading and analyzing student work, we realized that students’ understanding of area and perimeter was mostly procedural. We wanted to create a lesson that allowed them to make connections between the procedure and a model, while justifying their thinking. Therefore the purpose of this re-engagement lesson was to address student misconceptions and deepen student understanding of area and perimeter. The standards addressed in this lesson involve finding perimeter and area of various shapes, finding the perimeter when given a fixed area, and using a formula in a practical context. Challenges for our students included decoding the language in the problem and proving their thinking. Related MARS Task: • Pizza Crusts Packet
{"url":"http://www.insidemathematics.org/index.php/classroom-video-visits/public-lessons-area-a-perimeter","timestamp":"2014-04-17T21:22:19Z","content_type":null,"content_length":"15225","record_id":"<urn:uuid:aa7dc348-daf0-4165-b76a-34916822dba4>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00174-ip-10-147-4-33.ec2.internal.warc.gz"}
st: RE: Re: Problem putting enclosed double quotes (was: Problem putting Notice: On March 31, it was announced that Statalist is moving from an email list to a forum. The old list will shut down at the end of May, and its replacement, statalist.org is already up and [Date Prev][Date Next][Thread Prev][Thread Next][Date Index][Thread Index] st: RE: Re: Problem putting enclosed double quotes (was: Problem putting enclosed brackets). From "Nick Cox" <n.j.cox@durham.ac.uk> To <statalist@hsphsun2.harvard.edu> Subject st: RE: Re: Problem putting enclosed double quotes (was: Problem putting enclosed brackets). Date Thu, 1 Apr 2010 12:45:05 +0100 I see. -xml_tab- is from SJ and SSC. I can't help there, as I've never used the program and it's far too substantial a program to hack at. Amadou DIALLO Dear Nick, Thanx for your answer. I have several purposes in doing that. But the most urgent is that what I am currently doing is to pass a series of matrices to Zurab's xml_tab (it is the code that suits most my needs actually) and I want to preserve the labels of the modalities of my hundreds of variables. For short labels, there is no problem. But for long, spaced, labels, it's more complicated. See the example below: sysuse auto tab foreign rep78 , matcell(A) loc vall : val la for xml_tab A, rn("`: lab `vall' 0'" "`: lab `vall' 1'") // for my code rn will be rn(`names') 2010/4/1, Nick Cox <n.j.cox@durham.ac.uk>: > The problem is with double quotation marks "", informally often known as > (double) quotes. > (The word brackets even at its most generous doesn't I think extend beyond > round brackets or parentheses () > [square] brackets [] > and curly brackets or braces {}.) > The problem is that Stata uses " " in two ways, as string delimiters and as > literal characters. First time round, the " " are being interpreted as > delimiters and as such stripped. > Although I tried a few solutions using compound double quotes `" "' I also > failed to get precisely what Amadou wants. So, I am tempted to change the > question. Why do you want precisely this? I suspect that there are other > ways of getting the same ultimate result. > Nick > n.j.cox@durham.ac.uk > Amadou B. DIALLO, PhD. > I want to have the following labels of my variables enclosed into > encapsulated brackets but the code fails: > Instead of having the following desired form: > "Augmenté" "Inchangé" "Diminué" "Non concerné" > I have : > Augmenté "Inchangé" "Diminué" "Non concerné" > I.e. stata keeps ignoring the first label value. > My code is as follows: > qui foreach i of local vars { > loc vall : val la `i' > if "`vall'" ~= "" { > levelsof `i', l(l) > foreach k of local l { > loc lab : lab `vall' `k' > loc names `names' "`lab'" // loc names `names' "`: lab `vall' `k''" > } > } > } > di `"`names'"' > * > * For searches and help try: > * http://www.stata.com/help.cgi?search > * http://www.stata.com/support/statalist/faq > * http://www.ats.ucla.edu/stat/stata/ * For searches and help try: * http://www.stata.com/help.cgi?search * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2010-04/msg00013.html","timestamp":"2014-04-20T18:35:41Z","content_type":null,"content_length":"10672","record_id":"<urn:uuid:498eb3c9-e004-49e1-ad5a-0105dfa32219>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00621-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix multiplication in official Android OpenGL ES 2.0 tutorial http://stackoverflow.com – I'm reading official Android OpenGL ES 2.0 tutorial, and I noticed something. The code in the tutorial multiplies rotation matrix into the view-projection matrix like this: Matrix.multiplyMM(mMVPMatrix, 0, mRotationMatrix, 0, mMVPMatrix, 0); But, the documentation for this method specifies that the result matrix, and either of operands should not overlap, or the result is undefined: Multiply two 4x4 matrices together and store the result in a third 4x4 matrix. In matrix notation: result = lhs x rhs. Due to the way matrix multiplication works, the result matrix will have the same effect (HowTos) Login or register to post comments
{"url":"http://www.linuxine.com/story/matrix-multiplication-official-android-opengl-es-20-tutorial","timestamp":"2014-04-20T02:34:34Z","content_type":null,"content_length":"27547","record_id":"<urn:uuid:6a2a7881-c4b2-4d48-b56f-9261e565e822>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00161-ip-10-147-4-33.ec2.internal.warc.gz"}
Bonney Lake Algebra 2 Tutor Find a Bonney Lake Algebra 2 Tutor I am a former math teacher with 7 years of experience teaching middle and high school. I have 18 years of tutoring experience with students of all ages, including adults. I have a Bachelors Degree in Mathematics and a Masters Degree in Math Education. 16 Subjects: including algebra 2, geometry, GRE, algebra 1 ...I am highly qualified to tutor for the GED test. I am currently tutoring physics, chemistry, and mathematics at levels ranging from Algebra 1 through Calculus. I am very experienced with reading, writing, and proofreading business documents and technical documents. 21 Subjects: including algebra 2, English, chemistry, physics ...They are important for success in school and career and family and personal life. Having taught in the public schools for several years, I am called upon to teach not just subject matter, but also study skills that optimize the learning of that subject matter. There have been a variety of study... 13 Subjects: including algebra 2, reading, Spanish, algebra 1 ...For the past two years, I have helped numerous students not only pass their classes - but improve in their general math skills, leaving a lasting effect. I have worked as the Lead Math Tutor at the Math Resource Center of Highline Community College for one year, taking upon quite a load of respo... 10 Subjects: including algebra 2, calculus, physics, algebra 1 ...I have experience in a wide range of subjects with my strong points being math, sciences and writing. I also have many years of experience with various computer systems and applications. I am currently employed as a Computer Support Technician at a moving company, as well as a writing tutor. 30 Subjects: including algebra 2, chemistry, English, physics
{"url":"http://www.purplemath.com/bonney_lake_algebra_2_tutors.php","timestamp":"2014-04-20T02:18:56Z","content_type":null,"content_length":"23906","record_id":"<urn:uuid:1c9fb6d5-d8a1-4c88-b09a-5ba88373f6df>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00613-ip-10-147-4-33.ec2.internal.warc.gz"}
Google Answers: Engineeringing Physics Important Disclaimer: Answers and comments provided on Google Answers are general information, and are not intended to substitute for informed professional medical, psychiatric, psychological, tax, legal, investment, accounting, or other professional advice. Google does not endorse, and expressly disclaims liability for any product, manufacturer, distributor, service or service provider mentioned or any opinion expressed in answers or comments. Please read carefully the Google Answers Terms of Service.
{"url":"http://answers.google.com/answers/threadview?id=184693","timestamp":"2014-04-17T00:50:58Z","content_type":null,"content_length":"6990","record_id":"<urn:uuid:2f4436da-b902-4bd4-a3b8-6b4864a35121>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Implied Volatility: Buy Low And Sell High In the financial markets, options are rapidly becoming a widely accepted and popular investing method. Whether they are used to insure a portfolio, generate income or leverage stock price movements, they provide advantages other financial instruments don't. Aside from all the advantages, the most complicated aspect of options is learning their pricing method. Don't get discouraged - there are several theoretical pricing models and option calculators that can help you get a feel for how these prices are derived. Read on to uncover these helpful tools. What Is Implied Volatility? It is not uncommon for investors to be reluctant about using options because there are several variables that influence an option's premium. Don't let yourself become one of these people. As interest in options continues to grow and the market becomes increasingly volatile, this will dramatically affect the pricing of options and, in turn, affect the possibilities and pitfalls that can occur when trading them. Implied volatility is an essential ingredient to the option pricing equation. To better understand implied volatility and how it drives the price of options, let's go over the basics of options Option Pricing Basics Option premiums are manufactured from two main ingredients: intrinsic value and time value. Intrinsic value is an option's inherent value, or an option's equity. If you own a $50 call option on a stock that is trading at $60, this means that you can buy the stock at the $50 strike price and immediately sell it in the market for $60. The intrinsic value or equity of this option is $10 ($60 - $50 = $10). The only factor that influences an option's intrinsic value is the underlying stock's price versus the difference of the option's strike price. No other factor can influence an option's intrinsic value. Using the same example, let's say this option is priced at $14. This means the option premium is priced at $4 more than its intrinsic value. This is where time value comes into play. Time value is the additional premium that is priced into an option, which represents the amount of time left until expiration. The price of time is influenced by various factors, such as time until expiration, stock price, strike price and interest rates, but none of these is as significant as implied volatility. Implied volatility represents the expected volatility of a stock over the life of the option. As expectations change, option premiums react appropriately. Implied volatility is directly influenced by the supply and demand of the underlying options and by the market's expectation of the share price's direction. As expectations rise, or as the demand for an option increases, implied volatility will rise. Options that have high levels of implied volatility will result in high-priced option premiums. Conversely, as the market's expectations decrease, or demand for an option diminishes, implied volatility will decrease. Options containing lower levels of implied volatility will result in cheaper option prices. This is important because the rise and fall of implied volatility will determine how expensive or cheap time value is to the option. How Implied Volatility Affects Options The success of an options trade can be significantly enhanced by being on the right side of implied volatility changes. For example, if you own options when implied volatility increases, the price of these options climbs higher. A change in implied volatility for the worse can create losses, however, even when you are right about the stock's direction! Each listed option has a unique sensitivity to implied volatility changes. For example, short-dated options will be less sensitive to implied volatility, while long-dated options will be more sensitive. This is based on the fact that long-dated options have more time value priced into them, while short-dated options have less. Also consider that each strike price will respond differently to implied volatility changes. Options with strike prices that are near the money are most sensitive to implied volatility changes, while options that are further in the money or out of the money will be less sensitive to implied volatility changes. An option's sensitivity to implied volatility changes can be determined by Vega - an option Greek. Keep in mind that as the stock's price fluctuates and as the time until expiration passes, Vega values increase or decrease, depending on these changes. This means that an option can become more or less sensitive to implied volatility changes. How to Use Implied Volatility to Your Advantage One effective way to analyze implied volatility is to examine a chart. Many charting platforms provide ways to chart an underlying option's average implied volatility, in which multiple implied volatility values are tallied up and averaged together. For example, the volatility index (VIX) is calculated in a similar fashion. Implied volatility values of near-dated, near-the-money S&P 500 Index options are averaged to determine the VIX's value. The same can be accomplished on any stock that offers options. Figure 1 shows that implied volatility fluctuates the same way prices do. Implied volatility is expressed in percentage terms and is relative to the underlying stock and how volatile it is. For example, General Electric stock will have lower volatility values than Apple Computer because Apple's stock is much more volatile than General Electric's. Apple's volatility range will be much higher than GE's. What might be considered a low percentage value for AAPL might be considered relatively high for GE. Because each stock has a unique implied volatility range, these values should not be compared to another stock's volatility range. Implied volatility should be analyzed on a relative basis. In other words, after you have determined the implied volatility range for the option you are trading, you will not want to compare it against another. What is considered a relatively high value for one company might be considered low for another. Source: www.prophet.net Figure 2 : An implied vvolatility range using relative values Figure 2 is an example of how to determine a relative implied volatility range. Look at the peaks to determine when implied volatility is relatively high, and examine the troughs to conclude when implied volatility is relatively low. By doing this, you determine when the underlying options are relatively cheap or expensive. If you can see where the relative highs are (highlighted in red), you might forecast a future drop in implied volatility, or at least a reversion to the mean. Conversely, if you determine where implied volatility is relatively low, you might forecast a possible rise in implied volatility or a reversion to its mean. Implied volatility, like everything else, moves in cycles. High volatility periods are followed by low volatility periods, and vice versa. Using relative implied volatility ranges, combined with forecasting techniques, helps investors select the best possible trade. When determining a suitable strategy, these concepts are critical in finding a high probability of success, helping you maximize returns and minimize risk. Using Implied Volatility to Determine Strategy You've probably heard that you should buy undervalued options and sell overvalued options. While this process is not as easy as it sounds, it is a great methodology to follow when selecting an appropriate option strategy. Your ability to properly evaluate and forecast implied volatility will make the process of buying cheap options and selling expensive options that much easier. When forecasting implied volatility, there are four things to consider: 1. Make sure you can determine whether implied volatility is high or low and whether it is rising or falling. Remember, as implied volatility increases, option premiums become more expensive. As implied volatility decreases, options become less expensive. As implied volatility reaches extreme highs or lows, it is likely to revert back to its mean. 2. If you come across options that yield expensive premiums due to high implied volatility, understand that there is a reason for this. Check the news to see what caused such high company expectations and high demand for the options. It is not uncommon to see implied volatility plateau ahead of earnings announcements, merger and acquisition rumors, product approvals and other news events. Because this is when a lot of price movement takes place, the demand to participate in such events will drive option prices price higher. Keep in mind that after the market-anticipated event occurs, implied volatility will collapse and revert back to its mean. 3. When you see options trading with high implied volatility levels, consider selling strategies. As option premiums become relatively expensive, they are less attractive to purchase and more desirable to sell. Such strategies include covered calls, naked puts, short straddles and credit spreads. By contrast, there will be times when you discover relatively cheap options, such as when implied volatility is trading at or near relative to historical lows. Many option investors use this opportunity to purchase long-dated options and look to hold them through a forecasted volatility 4. When you discover options that are trading with low implied volatility levels, consider buying strategies. With relatively cheap time premiums, options are more attractive to purchase and less desirable to sell. Such strategies include buying calls, puts, long straddles and debit spreads. The Bottom Line In the process of selecting strategies, expiration months or strike price, you should gauge the impact that implied volatility has on these trading decisions to make better choices. You should also make use of a few simple volatility forecasting concepts. This knowledge can help you avoid buying overpriced options and avoid selling underpriced ones. comments powered by Disqus
{"url":"http://www.investopedia.com/articles/optioninvestor/08/implied-volatility.asp","timestamp":"2014-04-17T22:36:51Z","content_type":null,"content_length":"89054","record_id":"<urn:uuid:f1ff03b1-d745-4473-b1fa-9409027a5111>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00402-ip-10-147-4-33.ec2.internal.warc.gz"}
Northborough Science Tutor Find a Northborough Science Tutor ...I am willing to work any day during the week, including weekends. I teach theatre out of Exploration Schools Inc. Additionally I have extensive experience with theatrical makeup and special 23 Subjects: including nutrition, Latin, study skills, elementary math ...With this experience, I'm able to tailor our geometry sessions to your student's strengths and weaknesses. We'll find the methods of understanding geometry that work for your student. Since 2005, I've tutored over thirty students in 8th and 9th grade physics, physics for the MCAS and for the SA... 23 Subjects: including chemical engineering, GRE, biology, calculus ...I earned my high school diploma from a French high school, as well as a bachelor of science in Computer Science from West Point. My academic strengths are in mathematics and French. I can tutor any subject area from elementary math to college level.I got an A+ in Discrete Mathematics in College and an A in the graduate course 6.431 Applied Probability at MIT last year. 16 Subjects: including physics, French, elementary math, algebra 1 ...I believe anyone who understands concepts and who practices skills can achieve success. My roles as a tutor are to make sure my students understand their subjects, and to encourage the students to work hard and maintain success-conducive habits such as doing homework. I have tutored students at the high school level in English, mathematics, and physics. 26 Subjects: including physics, ACT Science, English, Spanish ...I obtained a Bachelor of Science and Master of Science Degree, respectively, in Geophysics and Geology from the University of Delaware. I have obtained Teaching Licensure in the subject areas of Mathematics from the State of Delaware and Massachusetts. My teaching experience will allow me to tutor you in Geometry. 19 Subjects: including geology, astronomy, physical science, physics
{"url":"http://www.purplemath.com/Northborough_Science_tutors.php","timestamp":"2014-04-20T10:52:43Z","content_type":null,"content_length":"24067","record_id":"<urn:uuid:4276db91-6067-4a8e-ae74-4d1505a83f13>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00394-ip-10-147-4-33.ec2.internal.warc.gz"}
QuickMath.com - Automatic Math Solutions The inequalities section of QuickMath allows you to solve virtually any inequality or system of inequalities in a single variable. In most cases, you can find exact solutions. Even when this is not possible, QuickMath may be able to give you approximate solutions to almost any level of accuracy you require. In addition, you can plot the regions satisfied by one or more inequalities in two variables, seeing clearly where the intersections of those regions occur. What are inequalities? Inequalities consist of two or more algebraic expressions joined by inequality symbols. The inequality symbols are : < less than > greater than <= less than or equal to >= greater than or equal to != or <> not equal to Here are a few examples of inequalities : │ 2 x - 9 > 0 │ x^2 - 3 x + 5 <= 0 │ │ | 5x - 1 | <> 5 │ x^3 + 1 <= 0 │ The Solve command can be used to solve either a single inequality for a single unknown from the basic solve page or to simultaneously solve a system of many inequalities in a single unknown from the advanced solve page. The advanced command allows you to specify whether you want approximate numerical answers as well as exact ones, and how many digits of accuracy (up to 16) you require. Multiple inequalities in the advanced section are taken to be joined by AND. For example, the inequalities 2 x - 1 > 0 x^2 - 5 < 0 on two separate lines in the advanced section are read by QuickMath as 2 x - 1 > 0 AND x^2 - 5 < 0 In other words, QuickMath will try to find solutions satisfying both inequalities at once. Go to the Solve page The Plot command, from the Graphs section, will plot any inequality involving two variables. In order to plot the region satisfied by a single inequality involving x and y, go to the basic inequality plotting page, where you can enter the inequality and specify the upper and lower limits on x and y that you want the graph to be plotted for. The advanced inequality plotting page allows you to plot the union or intersection of up to 8 regions on the one graph. You have control over such things as whether or not to show the axes, where the axes should be located and what the aspect ratio of the plot should be. In addition, you have the option of showing each individual region on its own. Go to the Inequalities Plotting page
{"url":"http://quickmath.com/pages/modules/inequalities/index.php","timestamp":"2014-04-19T22:07:19Z","content_type":null,"content_length":"13474","record_id":"<urn:uuid:d8a61de8-2b2f-4885-90cb-548161068e7f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00317-ip-10-147-4-33.ec2.internal.warc.gz"}
Wittmann SAT Math Tutor ...Often it is necessary for the reader to go back to a previous statement to understand the new statement clearly. The teacher must help his student to develop logical choices of understanding. My students and I do that by means of the Socratic method. 30 Subjects: including SAT math, reading, chemistry, English ...I continue to draw and sketch by leisure. I trained for 3 years at Mission: Renaissance school in Southern California. At Harvard University, I showcased in art shows and worked as set 8 Subjects: including SAT math, French, SAT reading, drawing ...I am also currently teaching GLG 101 and GLG 103 at Mesa Community College. This is one of favorite subjects to tutor as the majority of students need help in the same course that I'm teaching. I have been playing the piano since I was 4 years old, and I love this instrument. 28 Subjects: including SAT math, English, algebra 1, algebra 2 I have taught at a valley high school for several years. I have taught all levels of high school math. I try to explain math in a way that the student can understand instead of using too much math 10 Subjects: including SAT math, geometry, algebra 1, algebra 2 ...The time at the community college helped to refocus the most effective teaching and learning methods. My teaching style is very similar to the way I tutor. I believe most students, after seeing the material presented, have the answers to the questions in their brain, but don't know how to access it. 20 Subjects: including SAT math, chemistry, physics, calculus
{"url":"http://www.purplemath.com/wittmann_sat_math_tutors.php","timestamp":"2014-04-19T05:28:09Z","content_type":null,"content_length":"23564","record_id":"<urn:uuid:f82ef423-1ef6-43e0-a991-4f422a9011ff>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00012-ip-10-147-4-33.ec2.internal.warc.gz"}
Roots, Radicals, and Square Root Equations: Summary of Key Concepts The square root of a positive number x x is a number such that when it is squared, the number x x results. Every positive number has two square roots, one positive and one negative. They are opposites of each other. Principal Square Root x x ((Reference)) If x x is a positive real number, then x x represents the positive square root of x x . The positive square root of a number is called the principal square root of the number. Secondary Square Root − x − x ((Reference)) − x − x represents the negative square root of x x . The negative square root of a number is called the secondary square root of the number. Radical Sign, Radicand; and Radical ((Reference)) In the expression x , x , √ is called the radical sign. x x is called the radicand. x x is called a radical. The horizontal bar that appears attached to the radical sign, √, is a grouping symbol that specifies the radicand. Meaningful Expressions ((Reference)) A radical expression will only be meaningful if the radicand (the expression under the radical sign) is not negative: −25 −25 is not meaningfuland −25 −25 is not a real number Simplifying Square Root Expressions ((Reference)) If a a is a nonnegative number, then a 2 =a a 2 =a Real numbers that are squares of rational numbers are called perfect squares. Any indicated square root whose radicand is not a perfect square is an irrational number. 2 , 5 2 , 5 and 10 10 are irrational numbers The Product Property ((Reference)) xy = x y xy = x y The Quotient Property ((Reference)) x y = x y y≠0 x y = x y y≠0 x+y ≠ x + y ( 16+9 ≠ 16 + 9 ) x−y ≠ x − y ( 25−16 ≠ 25 − 16 ) x+y ≠ x + y ( 16+9 ≠ 16 + 9 ) x−y ≠ x − y ( 25−16 ≠ 25 − 16 ) A square root that does not involve fractions is in simplified form if there are no perfect squares in the radicand. A square root involving a fraction is in simplified form if there are no 1. perfect squares in the radicand, 2. fractions in the radicand, or 3. square root expressions in the denominator Rationalizing the Denominator ((Reference)) The process of eliminating radicals from the denominator is called rationalizing the denominator. Multiplying Square Root Expressions ((Reference)) The product of the square roots is the square root of the product. x y = xy x y = xy 1. Simplify each square root, if necessary. 2. Perform the multiplication. 3. Simplify, if necessary. Dividing Square Root Expressions ((Reference)) The quotient of the square roots is the square root of the quotient. x y = x y x y = x y Addition and Subtraction of Square Root Expressions ((Reference)) a x +b x =( a+b ) x a x −b x =( a−b ) x a x +b x =( a+b ) x a x −b x =( a−b ) x Square Root Equation ((Reference)) A square root equation is an equation that contains a variable under a square root radical sign. Solving Square Root Equations ((Reference)) 1. Isolate a radical. 2. Square both sides of the equation. 3. Simplify by combining like terms. 4. Repeat step 1 if radical are still present. 5. Obtain potential solution by solving the resulting non-square root equation. 6. Check potential solutions by substitution.
{"url":"http://cnx.org/content/m21971/latest/?collection=col11427/latest","timestamp":"2014-04-17T12:44:00Z","content_type":null,"content_length":"110589","record_id":"<urn:uuid:a42de375-7c93-487d-be51-8a9469e2c375>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00658-ip-10-147-4-33.ec2.internal.warc.gz"}
Many experimental investigations of the drag and inertia force coefficients have relied on the determination of water particle kinematics from measured wave forms. Since the pioneering work of Airy (1845), Stokes (1847, 1880) and others, a number of wave theories have been developed for predicting water particle kinematics. Clearly, the use of a certain wave theory will lead to corresponding force coefficients. Therefore, a wave theory that provides more accurate water particle kinematics is very important. Reid (1958) developed the simple superposition method for predicting water particle kinematics from a measured sea surface that could be either random or periodic. The method is based upon linear long-crested wave theory. Borgman (1965, 1967, 1969a, 1969b) introduced the linearized spectral density of wave force on a pile due to a random Gaussian sea. The drag force component has been approximated in the simplest form by a linear relation. This method, however, cannot calculate properties of the wave field and wave force above the mean water level. Wheeler (1969) applied simple superposition with a stretching factor in the vertical coordinate position for hurricane-generated wave data during Wave Project II. With this method it was possible to evaluate the wave force above the mean water level. linear theory; linear wave; modified stretched wave Full Text: This work is licensed under a Creative Commons Attribution 3.0 License
{"url":"http://journals.tdl.org/icce/index.php/icce/article/view/4040","timestamp":"2014-04-19T01:51:57Z","content_type":null,"content_length":"16088","record_id":"<urn:uuid:45132891-5051-4d59-9d23-c367121aa312>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00446-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Etudes ”Mathematical Etudes” develop Russian traditions in popularization of mathematics.This site presents 3D animated films which tell about mathematics and its applications in exciting and interesting way. So, dear spectator, we invite you to plunge into the world of beautiful mathematical problems. Their statements understandable for schoolchildren but so far scientists have not solved some of
{"url":"http://www.etudes.ru/en/","timestamp":"2014-04-19T12:17:43Z","content_type":null,"content_length":"19358","record_id":"<urn:uuid:b8d2165c-bc9d-42d6-98e6-02013539447a>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00220-ip-10-147-4-33.ec2.internal.warc.gz"}
Everyone knows that a dozen may be either twelve or thirteen, a score either twenty or twenty-one, a hundred either one hundred or one hundred and twenty, and a thousand either one thousand or one thousand two hundred. The higher numbers are the old Teutonic computations. Hickes tells us that the Norwegians and Icelandic people have two sorts of decad, the lesser and the greater called “Tolfræd.” The lesser thousand = 10 x 100, but the greater thousand = 12 x 100. The word tolf, equal to tolv, is our twelve. (Institutiones Grammaticæ, p. 43.) Five score of men, money, or pins, Six score of all other things. Old Saw. Source: Dictionary of Phrase and Fable, E. Cobham Brewer, 1894 More on Thousand from Fact Monster:
{"url":"http://www.factmonster.com/dictionary/brewers/thousand.html","timestamp":"2014-04-19T05:15:37Z","content_type":null,"content_length":"21892","record_id":"<urn:uuid:da618fc0-98e2-431d-bf74-19f0755a38d6>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00517-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts about numerical integration on Xi'an's Og Diego Salmerón and Juan Antonio Cano from Murcia, Spain (check the movie linked to the above photograph!), kindly included me in their recent integral prior paper, even though I mainly provided (constructive) criticism. The paper has just been arXived. A few years ago (2008 to be precise), we wrote together an integral prior paper, published in TEST, where we exploited the implicit equation defining those priors (Pérez and Berger, 2002), to construct a Markov chain providing simulations from both integral priors. This time, we consider the case of a binomial regression model and the problem of variable selection. The integral equations are similarly defined and a Markov chain can again be used to simulate from the integral priors. However, the difficulty therein follows from the regression structure, which makes selecting training datasets more elaborate, and whose posterior is not standard. Most fortunately, because the training dataset is exactly the right dimension, a re-parameterisation allows for a simulation of Bernoulli probabilities, provided a Jeffreys prior is used on those. (This obviously makes the “prior” dependent on the selected training dataset, but it should not overly impact the resulting
{"url":"http://xianblog.wordpress.com/tag/numerical-integration/","timestamp":"2014-04-20T06:46:04Z","content_type":null,"content_length":"65060","record_id":"<urn:uuid:9fbbae26-c73c-4322-88f2-a49b20575318>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Classification of Homology 3-Spheres? Take the 2-minute tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Certainly. There's a general description of all compact 3-manifolds now that geometrization is about. So for homology 3-sphere's you have the essentially unique connect sum decomposition into primes. A prime homology 3-sphere has unique splice decomposition (Larry Siebenmann's terminology). The splice decomposition is just a convienient way of encoding the JSJ-decomposition. The tori of the JSJ-decomposition cut the manifold into components that are atoroidal, so you form a graph corresponding to these components (as vertices) and the tori as edges. up vote 11 down vote The splice decomposition you can think of as tree where the vertices are decorated by pairs (M,L) where M is a homology 3-sphere and L is a link in M such that M \ L is an atoroidal By geometrization there's not many candidates for pairs (M,L). The seifert-fibred homology spheres that come up this way are the Brieskorn spheres, in that case L will be a collection of fibres in the Seifert fibering. Or the pair (M,L) could be a hyperbolic link in a homology sphere. That's a pretty big class of manifolds for which there aren't quite as compact a description, compared to, say, Brieskorn spheres. An excellent example of how this decomposition can be used to say something about general homology 3-spheres: front.math.ucdavis.edu/0606.5220 Ryan Budney Nov 10 '09 at 2:10 add comment On the other hand, there is no particular classification of hyperbolic homology 3-spheres, much less hyperbolic links in homology 3-spheres, other than in general terms that they all come from hyperbolic groups. For instance, part of geometrization establishes that if a finite group acts freely on $S^3$, then it is equivalent to an action by isometries on a round $S^3$ and is a subgroup of $\mathrm up vote {SO}(4)$. Before geometrization, Milnor and Lee established severe restrictions on how a finite group $G$ can act freely on any homology 3-sphere, with the case of $S^3$ particularly in 8 down mind. Either $G$ is a spherical group, or it is one other family that hasn't been excluded. For all we know, if $G$ acts freely on any homology 3-sphere, then it acts on $S^3$ too. I think vote that this is still an open problem, and geometrization by itself doesn't settle it. The working description homology 3-spheres for many purposes, in particular quantum topological invariants, is rather different. In practice, a homology 3-sphere is often given by surgery on a link in $S^3$ (or in some other homology 3-sphere) whose matrix has determinant 1. The big drawback of course is that the description is far from unique. What geometrization does do for you in this case (as in the paper I included a link for) is reduce problems like this to questions about symmetry groups of hyperbolic links in homology Ryan Budney Nov 10 '09 at 2:41 add comment A nice historical note - Dehn observed that if M and N are knot complements and if you glue M to N switching meridian and longitude then the result is a homology sphere. Of course up vote 5 down this is a special case of what Ryan was saying. Another nice fact: the Poincare homology sphere is the only one with finite fundamental group. add comment Another way to represent homology spheres is to take a Heegaard splitting for $S^3$, cut and reglue by an element of the Torelli group. This is not canonical, but any two Heegaard splittings up vote are equivalent after some number of stabilizations. If you wanted to enumerate every homology sphere, you could list elements of the Torelli group, and construct 3-manifolds, then throw away 4 down repeats by using some solution to the homeomorphism problem for 3-manifolds. This is not really feasible to carry out in practice, but is one way to give a "general description" of homology vote spheres at least in theory, by giving a recursive enumeration of them. add comment
{"url":"http://mathoverflow.net/questions/4798/classification-of-homology-3-spheres","timestamp":"2014-04-21T10:20:43Z","content_type":null,"content_length":"62919","record_id":"<urn:uuid:22655886-02a6-4d18-a736-66a5c4082e35>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00290-ip-10-147-4-33.ec2.internal.warc.gz"}
Pancakes, graphs and the genome of plants Malkevitch, Joseph (2001) Pancakes, graphs and the genome of plants. In: Robert J. Bumcrot Festschrift, 11 May 2001, Hofstra University. Full text available as: In 1975, Jacob Eli Goodman posed the following problem in the American Mathematical Monthly under the pseudonym of Harry Deweighter (harried waiter): The chef in our place is sloppy, and when he prepares a stack of pancakes they come out all different sizes. Therefore, when I deliver them to a customer, on the way to the table I rearrange them (so that the smallest winds up on top, and so on, down to the largest on the bottom) by grabbing several from the top and flipping them over, repeating this (varying the number I flip) as many times as necessary. If there are n pancakes, what is the maximum number of flips (as a function of n) that I shall ever have to use to rearrange them? What is exciting about this problem is that not only is it a problem in pure mathematics of great charm, but also it has led to a variety of ideas of great applicability. Furthermore, the original problem Goodman posed is not completely resolved despite the simplicity in stating the problem. The purpose of this note is to survey some of what is known about the pancake flipping problem, to suggest some projects that might make interesting work for high school students and undergraduates, and to show how questions such as this are valuable ones to show students in a wide variety of classroom settings. The problem connects with many important and fundamental ideas in computer science and mathematics and affords an illustration of how these ideas can be looked at in an attractive context. Item Type: Conference or Workshop Item (Paper) Additional Information: This paper was given at the Robert J. Bumcrot Festschrift on May 11, 2001. Uncontrolled Keywords: combinatorics graph theory genetics Subjects: Q Science > Q Science (General) Q Science > QA Mathematics ID Code: 43 Deposited By: Admin HofPrints Deposited On: 03 January 2006 Repository Staff Only: edit this item
{"url":"http://hofprints.hofstra.edu/43/","timestamp":"2014-04-19T06:52:41Z","content_type":null,"content_length":"8787","record_id":"<urn:uuid:68d49690-8d40-44de-88b2-831e6b3a9044>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00070-ip-10-147-4-33.ec2.internal.warc.gz"}
Book Review: The Art of Computer Programming 4A asgard4 (1665145) writes "Statistics Title: The Art of Computer Programming. Volume 4A: Combinatorial Algorithms Part 1 Author: Donald E. Knuth Rating: 9/10 Publisher: Addison-Wesley Publishing http://www.awl.com/ ISBN-10: 0-201-03804-8 ISBN-13: 978-0-201-03804-0 Price: $74.99 US Summary: Knuth's latest masterpiece. Almost all there is to know about combinatorial search algorithms. Decades in the making, Donald Knuth presents the latest few chapters in his by now classic book series "The Art of Computer Programming". The computer science pioneer's latest book on combinatorial algorithms is just the first in an as-of-yet unknown number of parts to follow. While these yet-to-be-released parts will discuss other combinatorial algorithms, such as graph and network algorithms, the focus of this book titled "Volume 4A Combinatorial Algorithms Part 1" is solely on combinatorial search and pattern generation algorithms. Much like the other books in the series, this latest piece is undoubtedly an instant classic, not to be missing in any serious computer science library or book collection. The book is organized into four major parts, an introduction, a chapter on Boolean algebra, a chapter on algorithms to generate all possibilities (the main focus of the book), and finally 300 pages of answers to the many exercises at the end of every section in the book. These exercises and answers make this work an excellent companion for teachers of a university course. The book begins with some introductory examples of combinatorial searching and then gives various definitions of graphs and directed acyclic graphs (DAGs) since a lot of combinatorial algorithms conveniently use graphs as the data structures they operate on. Knuth's writing style is terse and to the point, especially when he presents definitions and proofs. However, the text is sprinkled with toy problems and puzzles that keep it interesting. After the introduction, the first chapter of the book (out of only two) is titled "Zeros and Ones" and discusses Boolean algebra. Most readers that have studied computer science in some form should be intimately familiar with most of the discussed basics, such as disjunctive normal forms and Boolean functions and their evaluation. The reader might be surprised to find a discussion of such an elemental foundation of computer science in a book on combinatorial algorithms. The reason is that storage efficiency is especially important for these types of algorithms and understanding the basic storage unit of computer systems nowadays (as the decimal computer is a definite thing of the past) is of importance. After covering the basics of Boolean algebra and Boolean functions in quite some detail, Knuth gets to the most fun part of this chapter in my opinion: the section on bitwise tricks and techniques on integer numbers. Being a software engineer in the video games industry, I recognized a lot of the techniques from my day-to-day work, such as bit packing of data and various bit shifting and bit masking tricks. There is also a discussion of some interesting rasterization-like algorithms, such as the shrinking of bitmaps using Levialdi's transformation or filling of regions bounded by simple curves. The chapter concludes with Binary Decision Diagrams that represent an important family of data structures for representing and manipulating Boolean functions. This topic was also quite interesting to me since I have never been exposed to it before. The second and main chapter of the book is titled "Generating All Possibilities". In this particular volume of the "The Art of Computer Programming" series, the only subsection of the chapter in this volume is on generating basic combinatorial patterns, or more specifically generating all n-tuples, permutations, combinations, partitions, and trees. We can expect more on this topic from Knuth in his continuation in Volume 4B and beyond. The discussion on n-tuples starts out with a lengthy focus on Gray codes, which are binary strings of n bits arranged in an order such that only one bit changes from string to string. A quite fun example for generating all permutations presented in this part of the book is alphametics, also sometimes known as verbal arithmetic — a kind of puzzle where every letter of a word stands for a digit and words are used in equations. The goal is to assign digits to letters in such a way that the equation is correct. A classic example is SEND + MORE = MONEY (the solution is left as an exercise for the reader). The next section deals with generating all combinations. Given a set of n elements, the number of all possible combinations of distinct subsets containing k elements is the well-known binomial coefficient, typically read as "n choose k". One of the more interesting algorithms in this section of the book to me was generating all feasible ways to fill a rucksack, which can come in quite handy when going camping :P After combinations, Knuth moves on to briefly discuss integer partitions. Integer partitions are ways to split positive integer numbers into sums of positive integers, disregarding order. So, for example 3, 2+1, and 1+1+1 are the three possible partitions of the integer 3. Knuth, in particular, focuses on generating all possible integer partitions and determining how many there are for a given number. The book continues with a concise presentation of the somewhat related topic of set partitions, which refer to ways of subdividing a set of elements into disjoint subsets. Mathematically, a set partition defines an equivalence relation and the disjoint subsets are called equivalence classes; concepts that should be familiar to any mathematics major. Again, the focus is on generating all possible set partitions and determining how many partitions can be generated. The main part of the book closes with a discussion of how to exhaustively generate all possible trees, which is a topic that I have never given much thought to. I am familiar with generating permutations, combinations, and partitions, but have never really been confronted with generating all possible trees that follow a certain pattern. One main example used throughout this part of the book is generating all possible strings of nested parentheses of a certain length. Such strings can be represented equivalently as binary trees. Knuth's latest book is comprehensive and almost all encompassing in its scope. It compiles an incredible amount of computer science knowledge on combinatorial searching from past decades into a single volume. As such, it is an important addition to any computer science library. This book is not necessarily an easy read and requires a dedicated reader with the intention of working through it from front to back and a considerable amount of time to fully digest. However, for those with patience, this book contains a lot of interesting puzzles, brain teasers, and almost everything there is to know on generating combinatorial patterns. On a final note, if you don't have volumes 1-3 yet you can get all volumes in a convenient box set (http://www.amazon.com/Computer-Programming-Volumes-1-4A-Boxed/dp/0321751043). About the review author: Martin Ecker has been involved in real-time graphics programming for more than 10 years and works as a professional video game developer for High Moon Studios http://www.highmoonstudios.com/ in sunny
{"url":"http://beta.slashdot.org/submission/1503758","timestamp":"2014-04-18T00:15:13Z","content_type":null,"content_length":"74628","record_id":"<urn:uuid:e0dc3c05-29b9-45b8-b8b6-d878fbf79715>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00560-ip-10-147-4-33.ec2.internal.warc.gz"}
Some Prime Quality Facts! September 21st, 2007 Some Prime Quality Facts! Time for some prime facts roundup, or prime numbers round up to be arithmetically exact. Take the following sciensational mathematics fact: The largest prime number is 9,808,358 digits long; more than the number of atoms in the universe. The basics first. What is a prime number? It’s when a number can be divided only by itself, with no remainder, and by the number 1. You know, 1, 3, 5, 7, 11… The list literally goes on. What makes discovering a new, long prime so sciensational? Well, you have to prove that a new big Prime Suspect number is indeed a prime, by doing the simple formula of “divided only by itself and 1″ which means showing the world that when divided by all the previous primes, you didn’t get a whole number as a remainder. That’s quite a task to prove a prime number, but thanks to computers and all, we are discovering new, long primes frequently. The largest prime above was confirmed to be a new big prime as late as September, 2006. The fact above has another fascinating fact in itself: the number of atoms in the universe. Though it is a discussion for a different post altogether, let’s just know that the number of atoms in the universe can be written in just 80 digits. That makes our biggest prime find even greater. Another sciensational maths fact says: 2 and 5 are the only primes that end in 2 or 5. That is true for all decimal numbers for all known primes!
{"url":"http://www.sciensational.com/blog/2007/09/some-prime-quality-facts.html","timestamp":"2014-04-18T04:03:46Z","content_type":null,"content_length":"17653","record_id":"<urn:uuid:16828510-78d1-4c47-bd7b-4c0e164b1c33>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00323-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] two previously unresolved issues Stefan van der Walt stefan@sun.ac... Tue Mar 27 01:45:01 CDT 2007 I just went through my mail archive and found these two minor outstanding issues. Thought I'd ask for comments before the new From: "Charles R Harris" <charlesr.harris@gmail.com> Subject: Re: [Numpy-discussion] Assign NaN, get zero On 11/11/06, Lisandro Dalcin <dalcinl@gmail.com> wrote: > On 11/11/06, Stefan van der Walt <stefan@sun.ac.za> wrote: > > NaN (or inf) is a floating point number, so seeing a zero in integer > > representation seems correct: > > > > In [2]: int(N.nan) > > Out[2]: 0L > > > Just to learn myself: Why int(N.nan) should be 0? Is it C behavior? In [1]: int32(0)/int32(0) Warning: divide by zero encountered in long_scalars Out[1]: 0 In [2]: float32(0)/float32(0) Out[2]: nan In [3]: int(nan) Out[3]: 0L I think it was just a default for numpy. Hmmm, numpy now warns on integer division by zero, didn't used to. Looks like a warning should also be raised when casting nan to integer. It is probably a small bug not to. I also suspect int(nan) should return a normal python zero, not 0L. From: "Bill Baxter" <wbaxter@gmail.com> To: numpy-discussion@scipy.org Subject: [Numpy-discussion] linalg.lstsq for complex Is this code from linalg.lstsq for the complex case correct? lapack_routine = lapack_lite.zgelsd lwork = 1 rwork = zeros((lwork,), real_t) work = zeros((lwork,),t) results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond, 0, work, -1, rwork, iwork, 0) lwork = int(abs(work[0])) rwork = zeros((lwork,),real_t) a_real = zeros((m,n),real_t) bstar_real = zeros((ldb,n_rhs,),real_t) results = lapack_lite.dgelsd(m, n, n_rhs, a_real, m, bstar_real, ldb, s, rcond, 0, rwork, -1, iwork, 0) lrwork = int(rwork[0]) work = zeros((lwork,), t) rwork = zeros((lrwork,), real_t) results = lapack_routine(m, n, n_rhs, a, m, bstar, ldb, s, rcond, The middle call to dgelsd looks unnecessary to me. At the very least, allocating astar_real and bstar_real shouldn't be necessary since they aren't referenced anywhere else in the lstsq function. The lapack documentation for zgelsd also doesn't mention any need to call dgelsd to compute the size of the work array. More information about the Numpy-discussion mailing list
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-March/026836.html","timestamp":"2014-04-19T08:01:03Z","content_type":null,"content_length":"5485","record_id":"<urn:uuid:76d99c53-98a1-451f-b7f0-a1a39e4a403c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Minimum Expected Risk Estimation for Near-neighbor Classification Maya Gupta, Santosh Srivastava, Luca Cazzanti near-neighbor learning, local learning, Bayesian estimation, LIME We consider the problems of class probability estimation and classification when using near-neighbor classifiers, such as k-nearest neighbors (kNN). This paper investigates minimum expected risk estimates for neighborhood learning methods. We give analytic solutions for the minimum expected risk estimate for weighted kNN classifiers with different prior information, for a broad class of risk functions. Theory and simulations show how significant the difference is compared to the standard maximum likelihood weighted kNN estimates. Comparisons are made with uniform weights, symmetric weights (tricube kernel), and asymmetric weights (LIME kernel). Also, it is shown that if the uncertainty in the class probability is modeled by a random variable, and the expected misclassification cost is minimized, the result is equivalent to using a classifier with a minimum expected risk estimate. For symmetric costs and uniform priors, it is seen that minimum expected risk estimates have no advantage over the standard maximum likelihood estimates. For asymmetric costs, simulations show that the differences can be striking. Download the PDF version Download the Gzipped Postscript version
{"url":"https://www.ee.washington.edu/techsite/papers/refer/UWEETR-2006-0006.html","timestamp":"2014-04-17T06:43:42Z","content_type":null,"content_length":"3524","record_id":"<urn:uuid:2cd89111-cf85-4122-9ec5-2be945491748>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00354-ip-10-147-4-33.ec2.internal.warc.gz"}
item response theory Imagine for a second that you’re teaching a math remediation course full of fourth graders. You’ve just administered a test with 10 questions. Of those 10 questions, two questions are trivial, two are incredibly hard, and the rest are equally difficult. Now imagine that two of your students take this test and answer nine of the 10 questions correctly. The first student answers an easy question incorrectly, while the second answers a hard question incorrectly. How would you try to identify the student with higher ability? Under a traditional grading approach, you would assign both students a score of 90 out of 100, grant both of them an A, and move on to the next test. This approach illustrates a key problem with measuring student ability via testing instruments: test questions do not have uniform characteristics. So how can we measure student ability while accounting for differences in questions? Item response theory (IRT) attempts to model student ability using question level performance instead of aggregate test level performance. Instead of assuming all questions contribute equally to our understanding of a student’s abilities, IRT provides a more nuanced view on the information each question provides about a student. What kind of features can a question have? Let’s consider some First, think back to an exam you have previously taken. Sometimes you breeze through the first section, work through a second section of questions, then battle with a final section until the exam ends. In the traditional grading paradigm described earlier, a correct answer on the first section would count just as much as a correct answer on the final section, despite the fact that the first section is easier than the last! Similarly, a student demonstrates greater ability as she answers harder questions correctly; the traditional grading scheme, however, completely ignores each question’s difficulty when grading students! The one-parameter logistic (1PL) IRT model attempts to address this by allowing each question to have an independent difficulty variable. It models the probability of a correct answer using the following logistic function: 1. For a given ability level, the probability of a correct answer increases as item difficulty decreases. It follows that, between two questions, the question with a lower beta value is easier. 2. Similarly, for a given question difficulty level, the probability of a correct answer increases as student ability increases. In fact, the curves displayed above take a sigmoidal form, thus implying that the probability of a correct answer increases monotonically as student ability increases. Now consider using the 1PL model to analyze test responses provided by a group of students. If one student answers one question, we can only draw information about that student’s ability from the first question. Now imagine a second student answers the same question as well as a second question, as illustrated below. We immediately have the following additional information about both students and both test questions: 1. We now know more about student 2’s ability relative to student 1 based on student 2’s answer to the first question. For example, if student 1 answered correctly and student 2 answered incorrectly we know that student 1’s ability is greater than student 2’s ability. 2. We also know more about the first question’s difficulty after student 2 answered the second question. Continuing the example from above, if student 2 answers the second question correctly, we know that Q1 likely has a higher difficulty than Q2 does. 3. Most importantly, however, we now know more about the first student! Continuing the example even further, we now know that Q1 is more difficult than initially expected. Student 1 answered the first question correctly, suggesting that student 1 has greater ability than we initially estimated! This form of message passing via item parameters is the key distinction between IRT’s estimates of student ability and other naive approaches (like the grading scheme described earlier). Interestingly, it also suggests that one could develop an online version of IRT that updates ability estimates as more questions and answers arrive! But let’s not get ahead of ourselves. Instead, let’s continue to develop item response theory by considering the fact that students of all ability levels might have the same probability of correctly answering a poorly-written question. When discussing IRT models, we say that these questions have a low discrimination value, since they do not discriminate between students of high- or low-ability. Ideally, a good question (i.e. one with a high discrimination) will maximally separate students into two groups: those with the ability to answer correctly, and those without. This gets at an important point about test questions: some questions do a better job than others of distinguishing between students of similar abilities. The two-parameter logistic (2PL) IRT model incorporates this idea by attempting to model each item’s level of discrimination between high- and low-ability students. This can be expressed as a simple tweak to the 1PL: How does the addition of alpha, the item discrimination parameter, affect our model? As above, we can take a look at the item response function while changing alpha a bit: As previously stated, items with high discrimination values can distinguish between students of similar ability. If we’re attempting to compare students with abilities near zero, a higher discrimination sharply decreases the probability that a student with ability < 0 will answer correctly, and increases the probability that a student with ability > 0 will answer correctly. We can even go a step further here, and state that an adaptive test could use a bank of high-discrimination questions of varying difficulty to optimally identify a student’s abilities. As a student answers each of these high-discrimination questions, we could choose a harder question if the student answers correctly (and vice versa). In fact, one could even identify the student’s exact ability level via binary search, if the student is willing to work through a test bank with an infinite number of high-discrimination questions with varying difficulty! Of course, the above scenario is not completely true to reality. Sometimes students will identify the correct answer by simply guessing! We know that answers can result from concept mastery or filling in your Scantron like a Christmas tree. Additionally, students can increase their odds of guessing a question correctly by ignoring answers that are obviously wrong. We can thus model each question’s “guess-ability” with the three-parameter logistic (3PL) IRT model. The 3PL’s item response function looks like this: where chi represents the item’s “pseudoguess” value. Chi is not considered a pure guessing value, since students can use some strategy or knowledge to eliminate bad guesses. Thus, while a “pure guess” would be the reciprocal of the number of options (i.e. a student has a one-in-four chance of guessing the answer to a multiple-choice question with four options), those odds may increase if the student manages to eliminate an answer (i.e. that same student increases her guessing odds to one-in-three if she knows one option isn’t correct). As before, let’s take a look at how the pseudoguess parameter affects the item response function curve: Note that students of low ability now have a higher probability of guessing the question’s answer. This is also clear from the 3PL’s item response function (chi is an additive term and the second term is non-negative, so the probability of answering correctly is at least as high as chi). Note that there are a few general concerns in the IRT literature regarding the 3PL, especially regarding whether an item’s “guessability” is instead a part of a student’s “testing wisdom,” which arguably represents some kind of student ability. Regardless, at Knewton we’ve found IRT models to be extremely helpful when trying to understand our students’ abilities by examining their test performance. de Ayala, R.J. (2008). The Theory and Practice of Item Response Theory, New York, NY: The Guilford Press. Kim, J.S., Bolt, D (2007). “Estimating Item Response Theory Models Using Markov Chain Monte Carlo Methods.” Educational Measurement: Issues and Practices 38 (51). Sheng, Y (2008). “Markov Chain Monte Carlo Estimation of Normal Ogive IRT Models in MATLAB.” Journal of Statistical Software 25 (8). Also, thanks to Jesse St. Charles, George Davis, and Christina Yu for their helpful feedback on this post!
{"url":"http://www.knewton.com/tech/blog/tag/item-response-theory/","timestamp":"2014-04-20T20:57:12Z","content_type":null,"content_length":"26499","record_id":"<urn:uuid:8f992be7-4f4c-4bed-bd18-7848c6100dc9>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00537-ip-10-147-4-33.ec2.internal.warc.gz"}
What are the invariant definitions of spinorial quantities from mathematical physics? up vote 3 down vote favorite When physicists write expressions involving spinors $\psi \in S \otimes V$, where $S=S_+ \oplus S_-$ is a complex spinor representation of a spin group $Spin(2d)$ and $V$ is a complex representation of a non-Abelian Lie group $G$ preserving an inner product, they often continue to make use of explicit bases for the Clifford algebra. What is the standard mathematically invariant way of writing and defining expressions like $\bar \psi, \psi^\dagger, \psi^*, \psi^\dagger \gamma^0, \psi^\dagger \gamma^0 \gamma^5, etc... $ without using such explicit bases, but using only • The (presumably Hermitian) inner product on $S$. • The (presumably Hermitian) inner product on $V$. • Tensor products and direct sums of vector spaces and their elements. • Invariants inside the Clifford Algebra and other algebraic structures etc... • Clearly specified complex vector spaces like $V$, $\bar V$, $V^\star$, $\bar V^\star$. In particular, what is the standard mathematically invariant definition of the spinorial source term $J(\psi)$ in the Yang-Mills Equation $$ d_A^* F_A = J(\psi) $$ where $V$ represents a non-trivial representation of a non-abelian $G$? add comment 1 Answer active oldest votes I don't have the book in front of me but I think a good place to start for this question and similar ones is Quantum Fields and Strings: A Course for Mathematicians by Pierre Deligne up vote 1 down vote The relevant chapter is on the arXiv: Deligne, Freed, Supersolutions (arXiv:hep-th/9901094) – Urs Schreiber Apr 23 '13 at 19:03 add comment Not the answer you're looking for? Browse other questions tagged mp.mathematical-physics clifford-algebras dg.differential-geometry or ask your own question.
{"url":"http://mathoverflow.net/questions/128512/what-are-the-invariant-definitions-of-spinorial-quantities-from-mathematical-phy","timestamp":"2014-04-17T01:23:29Z","content_type":null,"content_length":"52739","record_id":"<urn:uuid:004e36c5-837f-4c3f-84c0-5e6c2a75ed09>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00632-ip-10-147-4-33.ec2.internal.warc.gz"}
Multiway Ratio Cut (Software Demonstration) Graph Clustering Using Multiway Ratio Cut (Software Demonstration) Roxborough, Tom and Sen, Arunabha (1998) Graph Clustering Using Multiway Ratio Cut (Software Demonstration). In: Graph Drawing 5th International Symposium, GD '97, September 18-20, 1997, Rome, Italy , pp. 291-296 (Official URL: http://dx.doi.org/10.1007/3-540-63938-1_71). Full text not available from this repository. Identifying the natural clusters of nodes in a graph and treating them as supernodes or metanodes for a higher level graph (on an abstract graph) is a technique used for the reduction of visual complexity of graphs with a large number of nodes. In this paper we report on the implementation of a clustering algorithm based on the idea of ratio cut, a well known technique used for circuit partitioning in the VLSI domain. The algorithm is implemented in WINDOWS95/NT environment. The performance of the clustering algorithm on some large graphs obtained from the archives of Bell Laboratories is presented. Repository Staff Only: item control page
{"url":"http://gdea.informatik.uni-koeln.de/85/","timestamp":"2014-04-19T01:48:42Z","content_type":null,"content_length":"21466","record_id":"<urn:uuid:d959f534-500d-4cda-9db8-e3bea2ab7a9b>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00144-ip-10-147-4-33.ec2.internal.warc.gz"}
Transfer independance from $\mathbb{N}$ to $\mathbb{N}^2$ up vote 4 down vote favorite Here is a little problem for which I have no clue, and I don't even know if it is difficult. Does there exist a measurable (!) function $\psi:[0,1]^2\mapsto [0,1]$ such that if $(X_i)_i$ is a sequence of iid uniform variables on $[0,1]$, then the $\psi(X_i,X_j), i< j$ are indepdendant (and of course identically distributed) variables? The setting of the problem ($[0,1]$, uniform law, ...) can be changed, the only requirement is that the support of the $X_i$ and the arrival space have at least two values. For info, the examples 1) $X_i$ are Bernoulli and $\psi(x,y)=x y$ and 2) $X_i$ are uniform and $\psi(x,y)$ is the congruence of $x+y$ modulo $1$ do not work. Any remark is welcome too... pr.probability measure-theory Any constant function $\psi$ works. You presumably do not want that. – Emil Jeřábek Aug 18 '11 at 13:53 @Emil: No because the arrival space has to take two values (maybe it is poorly expressed, but it means the constant function does not work). – Raphael L Aug 18 '11 at 14:02 Then it is poorly expressed, indeed. What does “arrival space” mean? Is it the same as the range of the function? And why is this condition stated only in the description of the generalization, not in the original question about $\psi\colon[0,1]^2\to[0,1]$? Anyway, what about $$\psi(x,y)=\begin{cases}x,&\text{if }y=0,\\0,&\text{otherwise}\end{cases}$$ – Emil Jeřábek Aug 18 '11 at 15:31 Ok then let's just say that $\psi$ should not be constant...Concerning your example i assume you are working by default with uniform variables, but then what happens with $y=0$ should not matter because it happens almost never, so then we're back to a constant function equal to $0$. – Raphael L Aug 18 '11 at 15:53 So what you really mean is that $\psi$ should not be constant almost everywhere. – Robert Israel Aug 18 '11 at 18:22 show 1 more comment 1 Answer active oldest votes If the sequence $X_1,X_2,\ldots$ has finite range, then this is impossible except in the trivial way mentioned by Emil. The "input" random variables (e.g. $X_1$) all have equal and finite entropy $a$ and the "output" random variables (e.g. $\phi(X_1,X_2)$) all have equal entropy $b$. By independence the up vote 5 down entropy of $(X_1,\ldots, X_n)$ is $na$. There are $\binom{n}{2}$ independent output random variables defined deterministically in terms of these $n$ inputs, so their total entropy can vote accepted be no greater: $\binom{n}{2}b \leq na$. Letting $n$ go to infinity we obtain $b=0$, i.e., the outputs are constant almost surely. Indeed, very elegant solution, thank you very much. It should also apply to variables with density, no? – Raphael L Aug 18 '11 at 22:08 The lemma that the entropy of a function of a discrete random variable is at most the entropy of the random variable itself is false if you replace "discrete" with "continuous" and "entropy" with "differential entropy". For example you can scale a Gaussian random variable to produce one with any given variance and thereby any differential entropy. So at least this proof does not apply to variables with density. – Noah Stein Aug 18 '11 at 22:30 Then I think one should look for a function $\psi$ that scatters the points somehow. I think it could be related to find Borel sets $A$ and $B$ occupying half the measure everywhere, meaning for all Borel $C$, $${\rm leb}(A\cap C)={\rm leb}(B\cap C)={\rm leb}(C)/2,$$ but I guess this is impossible. – Raphael L Aug 19 '11 at 10:35 add comment Not the answer you're looking for? Browse other questions tagged pr.probability measure-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/73148/transfer-independance-from-mathbbn-to-mathbbn2/73186","timestamp":"2014-04-16T16:24:35Z","content_type":null,"content_length":"59950","record_id":"<urn:uuid:a48651d2-b449-4621-9f08-4439f0aef7f4>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
Examples of non-normal observations for which a studentized sample mean has a student’s Find out how to access preview-only content August 2011 Volume 69 Issue 2 pp 175-183 Examples of non-normal observations for which a studentized sample mean has a student’s T-distribution Purchase on Springer.com $39.95 / €34.95 / £29.95* Rent the article at a discount Rent now * Final gross prices may vary according to local VAT. Get Access Statistical methodologies and their practice often rely upon tests and confidence interval procedures based on Studentized sample means of independent observations from a normal parent population and their Student’s t distributions. This is specially so when the sample size n is small. An unmistakable impression one is left with, whether implied or not, is that such exact Student’s t distributions may not be valid when the observations are dependent or non-normal. We show that one cannot discard the possibility of an exact Student’s t distribution for a Studentized sample mean, or its suitable multiple, simply because the observations may be dependent or non-normal. In arriving at this conclusion, we have uncovered a very interesting and seemingly unknown feature (Theorem 2.1) of an n-dimensional multivariate t distribution with equi-correlation p, arbitrary degree of freedom v, and arbitrary n. 1. Ferguson, T. S. (1962) A representation of the symmetric bivariate Cauchy distribution, Annals of Mathematical Statistics, 33, 1256–1266. CrossRef 2. Johnson, N. L. and Kotz, S. (1972) Distributions in Statistics: Continuous Multivariate Distributions, New York, Wiley. 3. Kelker, D. (1970) Distribution theory of spherical distributions and a location-scale parameter generalization, Sankhya, Series A, 32, 419–430. 4. Kimbal, A. W. (1951) On dependent tests of significance in the analysis of variance, Annals of Mathematical Statistics, 22, 600–602. CrossRef 5. Mukhopadhyay, N. (2000) Probability and Statistical Inference, New York, Dekker. 6. Mukhopadhyay, N. (2009) Onpx1 dependent random variables having each (p − 1) × 1 sub-vector made up of iid observations with examples, Statistics and Probability Letters, 79, 1585–1589. CrossRef 7. Mukhopadhyay, N. and Chattopadhyay, B. (2011) Selected examples of non-normal data for which a Studentized sample mean has a Student’s t-distribution, Technical Report No. 11-04, Department of Statistics, University of Connecticut-Storrs. 8. Rao, C. R. (1973) Linear Statistical Inference and Its Applications, 2^nd edition, New York, Wiley. CrossRef 9. Tong, Y. L. (1990) The Multivariate Normal Distribution, New York, Springer. CrossRef Examples of non-normal observations for which a studentized sample mean has a student’s T-distribution Cover Date Print ISSN Online ISSN Additional Links □ Dependent data □ Equi-correlated t □ Multi-modal data □ Multivariate Cauchy □ Multivariate F □ Multivariate t □ Non-normal data □ Peaked data Author Affiliations □ 1. Department of Statistics, University of Connecticut Storrs, Connecticut Storrs, CT 06269-4120, U.S.A
{"url":"http://link.springer.com/article/10.1007/BF03263555","timestamp":"2014-04-18T08:05:13Z","content_type":null,"content_length":"44616","record_id":"<urn:uuid:335b04f8-2970-4983-8ca6-8678dfbb697a>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00407-ip-10-147-4-33.ec2.internal.warc.gz"}
It's the summer holidays, and there is no school for six weeks! The downside is that once you've finished going on holiday, having barbecues, paddling, sunbathing and doing other summery-type things, you reach the point where whatever you do, you've already done it so many times that you are almost instantly bored to the point of insanity. Don't worry! When you reach this stage, do this wordsearch, the 'massive' one. Not only that, but it times you, so once you've done it you can do it again to try to beat that record! I advise you to only do this when extremely bored.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=11822","timestamp":"2014-04-16T13:15:16Z","content_type":null,"content_length":"20790","record_id":"<urn:uuid:428e97c8-2320-460e-8e58-0c4d694543b8>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
On a Cut-Matching Game for the Sparsest Cut Problem Rohit Khandekar, Subhash A. Khot, Lorenzo Orecchia and Nisheeth K. Vishnoi EECS Department University of California, Berkeley Technical Report No. UCB/EECS-2007-177 December 22, 2007 We study the following game between a ``cut'' player C and a ``matching'' player M. The game starts with an empty graph G on a set V of n vertices. In each round, the cut player chooses a bisection (S,V\S) of vertices and the matching player then adds a perfect matching M (not necessarily belonging to G) between S and V\S to the (multi-)graph G. The choices of the players in each round may depend on those in the previous rounds. The game ends when G becomes an edge-expander. The value of this game, denoted by Val(n,C,M), is the total number of rounds in the game before it ends. We study this game for its connection with the Sparsest Cut problem in undirected graphs: if there is a polynomial-time cut player C such that Val(n,C,M) < f(n) for all M, then there is a polynomial-time O(f(n))-approximation algorithm for the Sparsest Cut problem. We show that there is no cut player C, even unbounded-time, that can ensure Val(n,C,M) = o(GAP(n) ^1/2) for all matching players M, where GAP(n) is the integrality gap of the well-studied SDP with triangle inequality constraints for the Sparsest Cut problem. Recall that GAP(n) = Omega(log log n). Thus, we prove that this approach cannot yield a o(GAP(n) ^1/2)-approximation (and in particular, o((log log n) ^1/2)-approximation) algorithm for this problem. Furthermore, we show that there is a (super-polynomial time) cut player C* such that, for all M, we have Val(n,C*,M) = O(log n). BibTeX citation: Author = {Khandekar, Rohit and Khot, Subhash A. and Orecchia, Lorenzo and Vishnoi, Nisheeth K.}, Title = {On a Cut-Matching Game for the Sparsest Cut Problem}, Institution = {EECS Department, University of California, Berkeley}, Year = {2007}, Month = {Dec}, URL = {http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-177.html}, Number = {UCB/EECS-2007-177}, Abstract = {We study the following game between a ``cut'' player C and a ``matching'' player M. The game starts with an empty graph G on a set V of n vertices. In each round, the cut player chooses a bisection (S,V\S) of vertices and the matching player then adds a perfect matching M (not necessarily belonging to G) between S and V\S to the (multi-)graph G. The choices of the players in each round may depend on those in the previous rounds. The game ends when G becomes an edge-expander. The value of this game, denoted by Val(n,C,M), is the total number of rounds in the game before it ends. We study this game for its connection with the Sparsest Cut problem in undirected graphs: if there is a polynomial-time cut player C such that Val(n,C,M) < f(n) for all M, then there is a polynomial-time O(f(n))-approximation algorithm for the Sparsest Cut We show that there is no cut player C, even unbounded-time, that can ensure Val(n,C,M) = o(GAP(n)<SUP>1/2</SUP>) for all matching players M, where GAP(n) is the integrality gap of the well-studied SDP with triangle inequality constraints for the Sparsest Cut problem. Recall that GAP(n) = Omega(log log n). Thus, we prove that this approach cannot yield a o(GAP(n)<SUP>1/2</SUP>)-approximation (and in particular, o((log log n)<SUP>1/2</SUP>)-approximation) algorithm for this problem. Furthermore, we show that there is a (super-polynomial time) cut player C* such that, for all M, we have Val(n,C*,M) = O(log n).} EndNote citation: %0 Report %A Khandekar, Rohit %A Khot, Subhash A. %A Orecchia, Lorenzo %A Vishnoi, Nisheeth K. %T On a Cut-Matching Game for the Sparsest Cut Problem %I EECS Department, University of California, Berkeley %D 2007 %8 December 22 %@ UCB/EECS-2007-177 %U http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-177.html %F Khandekar:EECS-2007-177
{"url":"http://www.eecs.berkeley.edu/Pubs/TechRpts/2007/EECS-2007-177.html","timestamp":"2014-04-19T14:33:44Z","content_type":null,"content_length":"7792","record_id":"<urn:uuid:fcf8c26b-b871-4e0f-b14b-44da1a0c21d3>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00416-ip-10-147-4-33.ec2.internal.warc.gz"}
FRT construction in the case of super algebras up vote 4 down vote favorite I'm looking on papers which are talking about the super quantum algebra osp(2|1). I want to understand how one applies the FRT construction in the case of osp(2|1). Of course there is a super permutation $P$, but if I'm right, taking a matrix $T$ whose entries represent the generators of an algebra of functions $A$ on a formal super-group and the universal $R$-matrix of the quantum superalgebra osp(2|1) and writing down the equation $$PR(T\otimes T)=(T\otimes T)PR,$$ I should find the relations defining $A$. Now, looking what happens when the quantum parameter $q$ of $R$ goes to $1$, I should find the commutation relations between the entries of a matrix in $OSp(2|1)$, but it is not so. So my question is : are there other sign contributions in the equation$$PR(T\otimes T) =(T\otimes T)PR$$ as those coming from $P$ ? qa.quantum-algebra mp.mathematical-physics super-algebra reference-request 2 May be just guess something like this : be careful with notation ToT-- may be you should think it leaving in algebra o supermatrices o supermatrices. Hence sign might appear from usual way. : AoB*CoD=(-1)^sgn(BC)ACoBD. Ill try to think laater – Alexander Chervov Nov 30 '12 at 16:35 I think you're right Alexander. The matrix element of the tensor product of even matrices $\left\{ F_{ij}\right\}$ and $\left\{ G_{kl}\right\}$ has the form $$(F\otimes G)_{ik,jl}=(−1)^{p(k)(p(i) +p(j))}F_{ij}G_{kl}.$$ – kieffer Dec 1 '12 at 11:34 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged qa.quantum-algebra mp.mathematical-physics super-algebra reference-request or ask your own question.
{"url":"https://mathoverflow.net/questions/114976/frt-construction-in-the-case-of-super-algebras","timestamp":"2014-04-17T13:10:25Z","content_type":null,"content_length":"49697","record_id":"<urn:uuid:91981b06-bd41-4e73-befd-f1e2a6bf616b>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00518-ip-10-147-4-33.ec2.internal.warc.gz"}
Hurrah for MATHS!Hurrah for MATHS! Hurrah for MATHS! It’s not rocket science to understand that maths skills are at the root of success in business, finance, building, science, banking, medicine, upholstery, plumbing, football league tables… and rockets. And, having done training sessions with adults working at an insurance company who couldn’t calculate percentages, it’s easy to understand why the government is keen to support Prof Alison Wolf’s recommendation that students keep studying maths till they reach a level equivalent to a C at GCSE. As a nation we don’t seem to be very good at numbers. It’s also easy to understand why so many parents and students must be sinking their heads in their hands and groaning. Most people, at some time, experience the miserable mist of incomprehension that falls over one when asked, for example to describe the two points where a given parabola intersects the x and y axes; or forced to try and imagine what a nifty way of finding a value for cos might be when you have values for sin and tan (and really, why would any one care?). It’s like a marathon runner hitting “the Back in Scotland in the 1970’s they had O Grades and I remember doing one in arithmetic – I think they were compulsory. We all passed, though not everyone passed the mathematics O Grade. It meant we could all add, subtract, multiply and divide. We could also do percentages and compound interest. It was a jolly sensible exam that made sure you could work out useful things – like how much curtain fabric was needed for a pair of bedroom curtains, or how much income you could expect from your savings account. I think this sort of maths, “Household Maths” or “Tesco Maths”, which helps people work out the real value of a special offer, measure things and understand consumer issues, is excellent. Everybody should be able to work out whether 20 washes from a bottle of Persil concentrate for £6.38 is better or worse value than 32 washes of multi-colour gels from Ariel for £9.25. (The answer is Yes: Persil would come in at 0.319p per wash and Ariel at 0.289p per wash, however if one is in pre-measured gels amounts rather than pouring it into the little drawer, the gels are probably better value because we tend to over-fill, all calculations need to be reviewed if you live in a hard water area…) This sort of maths can be extended to include food, eg how many bowls of lentil soup can you make from 200g lentils, 2 slices streaky bacon, 3 carrots, 2 onions and a stock cube, and what does it cost compared to buying a carton of ready-made stuff from Covent Garden soups. It does occur to me that many adults do these calculations automatically and accurately while protesting that they can’t “do” maths. Most children have an innate understanding of pattern and division too, witness the three year old lining up his toy cars in series; or the outrage of the 4 year old not given a fair slice of cake. Once you can do these kinds of calculations you will have mastered some useful life skills, though if you then move beyond arithmetic to geometry and algebra the beauty really begins. While students are still in formal education there is opportunity to continue to support their learning at whatever level they can manage, we shouldn’t give up. The obstacles, and misery, have come about in part because of rubbish PR on the part of maths. There is also a tendency to perceive numeracy skills as being too geek-like to be cool. Somehow parents and grandparents find it easy to excuse poor results in maths. “Oh, don’t worry – all our family are rubbish at maths” you hear parents of Year 1 children say as their child has become confused with numbers up to 10. Or “They’ve changed the way they do maths – it doesn’t make sense to anyone anymore, they have ‘bus shelters’ and ‘chunking’, that’s not proper maths” from Year 5 parents. Or, to a girl “Aren’t you clever doing maths and physics, girls aren’t normally any good with numbers”. So, the Real Life Skills Maths Plan has three components: 1) Really support our children learning arithmetic right from an early age – bribe them to learn their tables; nag them to do their maths homework; let them weigh, measure and estimate everyday things everyday.* 2) Be positive about maths and stop using the language of inherited incompetence – just because a child’s father or mother couldn’t multiply doesn’t mean the child won’t ever be able to. 3) Have confidence that maths – the abstract science of number, quantity and space – is beautiful, creative and philosophically rewarding. It’s magic. *My mother pointed out recently that when she was a child not only did many families do their housekeeping with a system of jam jars where people literally divided their income for things like rent, food and utilities, they also did it all in base 12 (pounds, shillings and pence).
{"url":"http://reallifeskills.co.uk/blog/commentary/hurrah-for-maths/","timestamp":"2014-04-19T01:48:27Z","content_type":null,"content_length":"22969","record_id":"<urn:uuid:d2594938-d2fe-4995-af2b-3459094f773f>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
South Gate Geometry Tutor Find a South Gate Geometry Tutor ...If requested I can present references and letters of recommendation. I am bilingual! English-Spanish. 6 Subjects: including geometry, algebra 1, SAT math, elementary (k-6th) ...I take my classes very personally and work together with parents to identify my students' weakness areas. I am a strong believer that by giving tutoring a holistic approach, tutors are successful in changing students' views of life in the most important aspects, preparing them to face the future with more confidence. This is my approach. 20 Subjects: including geometry, Spanish, calculus, physics I'm a former tenured Community College Professor with an M.A. degree in Math from UCLA. I have also taught university level mathematics at UCLA, the University of Maryland, and the U.S. Air Force 9 Subjects: including geometry, calculus, algebra 1, algebra 2 ...Though my academic background weighs heavily toward writing and verbal, I naturally excel in mathematics, and am comfortable teaching that as well (fun fact: I have received a perfect score on the SAT, ACT, PSAT, and GRE math sections!). Specifically, I have the most experience teaching Pre-Algeb... 49 Subjects: including geometry, reading, writing, English ...Again, I applied lessons I had learned in order to push them beyond what was expected from them. Later, at Columbia University I worked closely with colleagues. Often, serving as their reader or in many ways their tutor. 17 Subjects: including geometry, English, GED, writing
{"url":"http://www.purplemath.com/South_Gate_Geometry_tutors.php","timestamp":"2014-04-19T09:46:09Z","content_type":null,"content_length":"23709","record_id":"<urn:uuid:9c23d41f-92cd-4bd0-8593-3620cb82a6b3>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00466-ip-10-147-4-33.ec2.internal.warc.gz"}
Verify solution of equation November 24th 2007, 05:49 PM #1 Junior Member Nov 2007 Verify solution of equation could anyone please help? If you would please explain one part, I can attempt to do the other. Thanks in advance! Directions: verify that the x-values are solutions of the equation. (a.) x= pi/3 (b.) x= 5pi/3 Thanks again! First, convert the radians to degrees Then, use your calculator to find the decimal values for cosx For example, in question a) cos(pi/3) or cos60 = 0.5 Then sub in the value of 0.5 in the equation "2cosx - 1 = 0" And solve: 2cos(0.5) - 1 = 0 Owwch Mommy, my head hurts!! I dearly hope that anyone that would be expected to be able to solve such an equation would already have memorized $cos(60^o)= \frac{1}{2}$ and not need to rely on a calculator to give them that answer. November 24th 2007, 05:57 PM #2 Nov 2007 November 25th 2007, 06:15 AM #3
{"url":"http://mathhelpforum.com/trigonometry/23422-verify-solution-equation.html","timestamp":"2014-04-18T17:46:07Z","content_type":null,"content_length":"36785","record_id":"<urn:uuid:32dce367-10af-43ea-bbab-1b996f4b0e5b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Ranking relations using analogies in biological and information networks Ricardo Silva, Katherine Heller, Zoubin Ghahramani and E M Airoldi Annals of Applied Statistics Volume 4, Number 2, , 2010. Analogical reasoning depends fundamentally on the ability to learn and generalize about relations between objects. We develop an approach to rela- tional learning which, given a set of pairs of objects S = {A(1) : B (1) , A(2) : B(2),...,A(N) :B(N)}, measures how well other pairs A:B fit in with the set S. Our work addresses the following question: is the relation between objects A and B analogous to those relations found in S? Such questions are particularly relevant in information retrieval, where an investigator might want to search for analogous pairs of objects that match the query set of inter- est. There are many ways in which objects can be related, making the task of measuring analogies very challenging. Our approach combines a similarity measure on function spaces with Bayesian analysis to produce a ranking. It requires data containing features of the objects of interest and a link matrix specifying which relationships exist; no further attributes of such relation- ships are necessary. We illustrate the potential of our method on text analysis and information networks. An application on discovering functional interac- tions between pairs of proteins is discussed in detail, where we show that our approach can work in practice even if a small set of protein pairs is provided. PDF - Requires Adobe Acrobat Reader or other PDF viewer.
{"url":"http://eprints.pascal-network.org/archive/00007857/","timestamp":"2014-04-17T15:43:36Z","content_type":null,"content_length":"8969","record_id":"<urn:uuid:c19e4741-176d-4e94-a709-9fb0f54172c5>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00095-ip-10-147-4-33.ec2.internal.warc.gz"}
Bulletin - Courses Home Course ID: RMIN 4100. 3 hours. Course Title: The Theory of Interest Course An introduction to actuarial cash flow models. Simple, compound, and effective interest functions are analyzed and used in the calculation of present value and future values of various Description: types of annuities as well as more complex cash flow streams. Oasis Title: THEORY OF INTEREST Prerequisite: MATH 2260 or permission of department Semester Offered every year. Grading A-F (Traditional) Course 1) Teach actuarial cash flow models, which constitute the Objectives: theoretical foundation of actuarial science. 2) Describe different annuities and their cash flows. 3) Provide an understanding of the time value of money among different types of annuities as well as more complex cash flow 4) Develop analytical problem solving skills to solve complex problems from first principles rather than memorization. 5) Incorporate examples and problems both in class and in 6) Give an opportunity for students to work together in groups to complete homework. This will allow students to learn how to work together in a team and how to effectively communicate quantitative ideas. 7) Cover most of the material and prepare students for Exam 2 of the Casualty Actuarial Society and Exam FM of the Society of Topical This course provides an introduction to actuarial cash flow Outline: models. The course covers most of the material for Exam 2 of the Casualty Actuarial Society and Exam FM of the Society of 1) The measurement of interest 2) Solution of problems in interest 3) Basic annuities 4) More general annuities 5) Amortization schedules and sinking funds 6) Generalized cash flow models 7) Yield rates 8) More advanced financial analysis 9) The term structure of interest rates 10) Risk Management applications 11) Overview of financial instruments and their cash flows
{"url":"http://bulletin.uga.edu/link.aspx?cid=RMIN4100","timestamp":"2014-04-18T19:10:45Z","content_type":null,"content_length":"15883","record_id":"<urn:uuid:32568900-5031-4d5f-a2b4-15885dd0e936>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
[SciPy-User] 2D interpolation question Zachary Pincus zachary.pincus@yale.... Wed Mar 17 11:12:01 CDT 2010 > I'd like to do a 2D interpolation but not sure to use > scipy.ndimage.map_coordinates or scipy.interpolate Your help is > greatly appreciated. > I have a large evenly spaced 2D array (say 8000x8000). I need to > interpolate over a smaller-but-finer grid (say 500x500). The grid is > an evenly spaced rectangular, but it is rotated (i.e., with an angle). > Which scipy function should I call? map_coordinates is likely going to be the best and simplest for tasks like this that are basically "image resampling". Basically, for each point in your output array, calculate the x,y position in the input array (as floating point), and put it into the input format described by the map_coordinates documentation (a 2x500x500 array, IIRC). Then choose the order of spline interpolation: sound choices include 0 (nearest neighbor), 1 (linear), and 3 or 5 (higher order). The higher-order splines will give "nicer looking" results on smooth patches, but are susceptible to ringing artifacts near sharp edges in the input array. The rest should be self-explanatory, but if not, ask away. PS. if you're doing a lot of resampling of the same input array, and speed for these operations is an absolute priority, and linear interpolation is acceptable, you could also consider do this on the graphics card with opengl. (I like pyglet for these tasks, but I bet pyCUDA would work as well?) You'd just have to slice the input texture a bit so as to not be over 4096x4096. More information about the SciPy-User mailing list
{"url":"http://mail.scipy.org/pipermail/scipy-user/2010-March/024732.html","timestamp":"2014-04-18T20:45:37Z","content_type":null,"content_length":"4122","record_id":"<urn:uuid:3630c271-c835-414e-ad2f-afa07418874b>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00065-ip-10-147-4-33.ec2.internal.warc.gz"}
deepika padukone pictures indian reservation photos lake jocassee pictures, Which each bin horizontal axis responsegray lake coloring pictures, Imagethis matlab image mar , displaying of an representation ofvertical Colorjan , describe edge direction histogram jessica simpson photos, bins hasnot the operation in image but for each bin Rgbhist displays data from image equal-sized answer fromislanders team photo File exchange, matlab Connected text i am new in an imagethis matlab file information equalisation filename mean aug , imagematlab codelocal histogram For one bins hasnot the rgbi , message image histogram histogram equalization on theunivariate histograms fori Range of histogram equalisation filename mean aug , rgbhist equinique photo Suppose we have images that look nearly identical tagged image processing toolbox Matlabs function of imadjust, histeq and find a grayscalefile One image using imhist for thehistogram are calculated mean aug Black i was trying to findjava image editor Course - possible to implement house spider pictures, rgbhist histogram is a direction histogram for thehistogram are constructed Links, and then save them asthresholds, and then Asthresholds, and then save them asthresholds, and then save them asthresholds , my question is not clear which the color Horizontal axis responsegray scale image analyst white color image filematlab m files image access solutions, Does equalization on the similarity using a ago thehistogram internet photo release, Rgbdev tools matlab imadjust histeq Programmingjun , ago processing and blogs for deck lighting pictures, Was trying to use histogram falls under this faststone photo viewer, handheld photo scanner, Y-nov , im tryingimages new enterprise, duracell photo contest, load rgb values levels - calculated Suppose we have images that look nearly identical background It possible to create histograms File size kb horizontal axis frequency at which each rgbdev tools Sizei, file information axis frequencyinspirational quote wallpaper Hiding a grayscalefile information toolbox histograms figure illustrates the color asthresholds With light histogram vlues to find the idea that Histeq and draw the matlabimage sun edwardsville, Two simple lsbfeb , , lets Lets suppose we have Using imhist image src c rgbi mar Want to find a cover image using color image histogram matlab, Src rgbhisti rgbhist histogram Hours ago rgbdev tools matlab image but for hiding Lsbfeb , a color image used the probabilitycalculate an image image guided radiation, Themotivation for b w image Frequencyjayalalitha wallpaper, Create histograms figure this matlab image itfeb , at which each hannspree digital photo Programmingjun , and draw the data Which each rgbdev tools matlab http pages feb Will display a using euclideanfeb , cover image Create histograms figure this matlab nearly identical compute the similarity Adapthisteq and then save them asthresholds, and find Into equally spaced ofvertical axis frequency at which each rgbdev find desktop wallpaper, Showingin image src c the filename Hours ago used the similarityimage histogram matlab code, Size kb aug , connected text i have File exchange, matlab function imhist for each bin horizontal axis responsegray Images and image itfeb X y- aug , hasnot the histogram vlues to create histograms Hiding a cover image, counts for your , compute the use of Pixel, based on theunivariate histograms figure illustrates the levels glass elevator picture, , imhisti,n displays a histogram equalisation filename rgbi mar fema train pictures iphone picture folders, Visualizing the histogram aug Function displays is aindian wallpaper pics, Wesep , responsegray scale image hours ago under this cover image images hair extensions, Suppose we have image an image in image analyst by splitting Or camera asimage processing toolbox illustrates the hello i house elevation photos, Simple lsbfeb , inimage in tools Adjustments on theunivariate histograms for your Other questions tagged image src rgbhisti rgbhist Want to use lena or camera asimage processing and find the imhisti,n 3d image histogram matlab, funshine bear pictures, kirsten kreuk pictures, Questions tagged image src rgbhisti rgbhist internet map wallpaper, Frequency at which the effect of -grey levels - tagged image histogram image event Colorjan , mar , course - , mean aug glitter myspace photos, drywood termite photos, Created by splitting the function displays each bin horizontal axis responsegray scale Extract data into equal-sized features are constructed Cover image hours ago retrieval matlab meanjan , cumulativemar kawasaki zx7r pictures, Bin horizontal axis frequency, counts for a simple lsbfebhd wallpaper download, , read code does equalization and for your , cumulativemar , similarity using euclideanfeb , itfeb , rgbi mar , under this matlab Thresholding approach falls under this splitting Function imhist image of rgb image file exchange Code does equalization and then save them illustrates the effect of connected ghost pictures website, Exchange, matlab code does equalization , cumulativemar , matlabs function Lets suppose we have used fd photo countsjohn hancock wallpaper, Does equalization on input image in hence wesep kenyi cichlid pictures,
{"url":"http://designshopbb.com/Scripts/image-histogram-matlab","timestamp":"2014-04-17T12:29:03Z","content_type":null,"content_length":"15188","record_id":"<urn:uuid:1894c28a-5173-4b1c-92d1-0e37f1958d5c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00590-ip-10-147-4-33.ec2.internal.warc.gz"}
Help understanding proof November 9th 2010, 05:03 AM Help understanding proof I came across a proof in my notes that I will be more then happy to try on my own but I don't really know what for sure they are looking for. If anyone could just point me in the right direction I would appreciate it. Let $a$ be some irrational number. Given any $M\in \mathbb{N}$, there is a $\gamma >0$, such that for all $p\in \mathbb{Z}$ and $q\in \mathbb{N}, q\leq M\Rightarrow |a-p/q|\geq \gamma$. [Hint: There are only finitely many $p$ such that $a-\gamma \leq p/q\leq a+\gamma$.] Like I said, I just don't know where to get started. All we have talked about up to this point in the chapter was the basics continuity, composition rule, and removable discontinuities. Any help would be appreciated. November 9th 2010, 10:58 AM I came across a proof in my notes that I will be more then happy to try on my own but I don't really know what for sure they are looking for. If anyone could just point me in the right direction I would appreciate it. Let $a$ be some irrational number. Given any $M\in \mathbb{N}$, there is a $\gamma >0$, such that for all $p\in \mathbb{Z}$ and $q\in \mathbb{N}, q\leq M\Rightarrow |a-p/q|\geq \gamma$. [Hint: There are only finitely many $p$ such that $a-\gamma \leq p/q\leq a+\gamma$.] Like I said, I just don't know where to get started. All we have talked about up to this point in the chapter was the basics continuity, composition rule, and removable discontinuities. Any help would be appreciated. There are only finitely many values of $q$ from 1 to $M$ (obviously). For each of these, look at the multiple of 1/ $q$ that is closest to $a$. Let $d_q$ be the distance of this point from $a$. Then choose $\gamma$ so that $0<\gamma<\min\{d_q:1\leqslant q\leqslant M\}$, and explain why that $\gamma$ has the required property.
{"url":"http://mathhelpforum.com/differential-geometry/162633-help-understanding-proof-print.html","timestamp":"2014-04-20T19:09:09Z","content_type":null,"content_length":"9490","record_id":"<urn:uuid:ad520015-936b-4c82-9340-2c6efeb91013>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00302-ip-10-147-4-33.ec2.internal.warc.gz"}
Length of an arc December 22nd 2013, 02:52 PM #1 Junior Member Nov 2011 Length of an arc The radius of a circle is 10 meters. What is the length of a 135° arc? 135/360 = 0.375 * d * pi -> 0.375 * 20 meters * 3.14 = 23.55 meters. I'm doing this problem on an online Geometry program. 23.55 is the right answer but it will only take it in the form of 15pi/2 meters. Here's a screenshot: http://prntscr.com/2d8pi3 How do I arrive to that simplification from 23.55? Re: Length of an arc 23.55 is only an APPROXIMATION. The EXACT answer is \displaystyle \begin{align*} \frac{15\pi}{2} \end{align*}. Just multiply \displaystyle \begin{align*} \frac{135}{360} \cdot 20 \end{align*}, cancel any common factors, and multiply by \displaystyle \begin{align*} \pi \end{align*}. Re: Length of an arc The radius of a circle is 10 meters. What is the length of a 135° arc? 135/360 = 0.375 * d * pi -> 0.375 * 20 meters * 3.14 = 23.55 meters. I'm doing this problem on an online Geometry program. 23.55 is the right answer but it will only take it in the form of 15pi/2 meters. Here's a screenshot: http://prntscr.com/2d8pi3 How do I arrive to that simplification from 23.55? you don't. you recognize that 135 degrees is 3pi/4 radians and that the length of your arc is given by $arclen=10 \cdot \left(\frac{3\pi}{4}\right)=\frac{30\pi}{4}=\frac{ 15\pi}{2}$ Re: Length of an arc What the OP did is fine (and the OP probably has not encountered radian measurement before). It is perfectly correct to use your angle as the proportion of the circle swept out, and so the arclength is also the same proportion of the circumference. The problem was the converting to decimals (in particular, approximating \displaystyle \begin{align*} \pi \end{align*}). Re: Length of an arc Okay, so for the next problem I got: Radius is 7 and the arc is 135 degrees. $135/360 * 14 = 5.25$ I'm stumped on what common factors to cancel out there. Re: Length of an arc Re: Length of an arc I see, confused that with something else, so answer would be 3pi/8 according to post #2? This is high school Geometry if it helps. Re: Length of an arc Re: Length of an arc Re: Length of an arc Re: Length of an arc Something that may help you: the arc-length of a circular arc is proportional to both the radius and the angle swept out. This is what is meant by the formula: Circumference = 2*pi*r (where r is the length of the radius). So if the angle swept out is an entire circle, the the arc-length is 2*pi*r. If the angle swept out is part of the circle, say n degrees, then the arc-length is: For example, if we have a circle of radius 2, and the angle swept out is 180 degrees (a semi-circle), the arc-length is: (180/360)(2*pi*2) = (1/2)(2*pi*2) = (1/2)(4*pi) = 2*pi. This formula works for both problems given above: (135/360)(2*pi*10) = [(3*3*3*5)/(2*2*2*3*3*5)](2*pi*10) = (3/(2*2*2))(2*pi*10) = (3/8)(20*pi) = (60/8)pi = [(4*15)/(4*2)]pi = (15/2)pi <---same answer as above (135/360)(2*pi*7) = (3/8)(14*pi) = [(2*3*7)/(2*2*2)]pi = [(3*7)/(2*2)]pi = (21/4)pi. Really, this is just "math with fractions". You should hopefully know the following: 1) the total circumference of a circle of radius r is 2*pi*r 2) the total angle swept out in one turn of a complete circle is 360 degrees. So if your arc has an angle of n degrees, it is an arc that is n/360-ths of a circle, and we multiply THAT fraction times the total circumference (so 180 degrees is "half a circle" so for an arc of 180 degrees, we halve the total circumference). For many commonly used angles (such as 30, 60, 90 for example) the fraction n/360 can be "reduced" considerably, by cancelling common prime factors from the numerator and denominator. This seems to be where you are struggling. December 22nd 2013, 03:32 PM #2 December 22nd 2013, 03:34 PM #3 MHF Contributor Nov 2013 December 22nd 2013, 04:09 PM #4 December 22nd 2013, 05:34 PM #5 Junior Member Nov 2011 December 22nd 2013, 05:57 PM #6 MHF Contributor Nov 2013 December 22nd 2013, 06:18 PM #7 Junior Member Nov 2011 December 22nd 2013, 08:01 PM #8 MHF Contributor Nov 2013 December 22nd 2013, 09:30 PM #9 Junior Member Nov 2011 December 22nd 2013, 09:56 PM #10 MHF Contributor Nov 2010 December 23rd 2013, 04:29 AM #11 MHF Contributor Mar 2011
{"url":"http://mathhelpforum.com/geometry/225193-length-arc.html","timestamp":"2014-04-20T00:01:27Z","content_type":null,"content_length":"66016","record_id":"<urn:uuid:4d869591-8c05-400d-9194-8768afea5899>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00655-ip-10-147-4-33.ec2.internal.warc.gz"}
Does 3DSMAX export conveniently normal vectors to .ASE ? [Archive] - OpenGL Discussion and Help Forums 08-21-2001, 04:55 AM i'm experimenting my ASE loader and I've met the following problem : When I look at my model's most lighted face, some of its meshes are very dark, whereas the other meshes are very well lighted. And when I look at the opposite face, these meshes which were dark are now well lighted. My conclusion is that these meshes have their normal vectors in the wrong sense. Enabling two-sided lighting has not worked. Do you have any idea ?
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-133132.html","timestamp":"2014-04-17T04:10:28Z","content_type":null,"content_length":"8987","record_id":"<urn:uuid:0a17244c-6f86-4612-ba46-28625434781f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00508-ip-10-147-4-33.ec2.internal.warc.gz"}
GHK current equation 34,117pages on this wiki Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Biological: Behavioural genetics · Evolutionary psychology · Neuroanatomy · Neurochemistry · Neuroendocrinology · Neuroscience · Psychoneuroimmunology · Physiological Psychology · Psychopharmacology (Index, Outline) The Goldman-Hodgkin-Katz current equation (or GHK current equation) describes the current carried by an ionic species across a cell membrane as a function of the transmembrane potential and the concentrations of the ion inside and outside of the cell. Since both the voltage and the concentration gradients influence the movement of ions, this process is called electrodiffusion. The eponyms of the equation The American David E. Goldman of Columbia University, and the English Nobel laureates Alan Lloyd Hodgkin and Bernard Katz derived this equation. Assumptions underlying the validity of the equation Several assumptions are made in deriving the GHK current equation: • The membrane is a homogeneous substance • The electrical field is constant so that the transmembrane potential varies linearly across the membrane • The ions access the membrane instantaneously from the intra- and extracellular solutions • The permeant ions do not interact • The movement of ions is affected by both concentration and voltage differences The equation The GHK current equation for an ion S: $I_{S} = P_{S}z_{S}^2\frac{V_{m}F^{2}}{RT}\frac{[\mbox{S}]_{i} - [\mbox{S}]_{o}\exp(-z_{S}V_{m}F/RT)}{1 - \exp(-z_{S}V_{m}F/RT)}$ • I[S] is the current across the membrane carried by ion S, measured in amperes (A = C·s^-1) • P[S] is the permeability of ion S measured in m^3·s^-1 • z[S] is the charge of ion S in elementary charges • V[m] is the transmembrane potential in volts • F is the Faraday constant, equal to 96,485 C·mol^-1 or J·V^-1·mol^-1 • R is the gas constant, equal to 8.314 J·K^-1·mol^-1 • T is the absolute temperature, measured in Kelvins (= degrees Celsius + 273.15) • [S][i] is the intracellular concentration of ion S, measured in mol·m^-3 or mmol·l^-1 • [S][o] is the extracellular concentration of ion S, measured in mol·m^-3 Rectification and the GHK current equation Since one of the assumptions of the GHK current equation is that the ions move independently of each other, the total flow of ions across the membrane is simply equal to the sum of two oppositely directed fluxes. Each flux (or current) approaches an asymptotic value as the membrane potential diverges from zero. These asymptotes are $I_{S|i\to o} = P_{S}z_{S}^2 \frac{V_{m}F^{2}}{RT}[\mbox{S}]_{i}\ \mbox{for}\ V_{m} \gg \; 0$ $I_{S|o\to i} = P_{S}z_{S}^2 \frac{V_{m}F^{2}}{RT}[\mbox{S}]_{o}\ \mbox{for}\ V_{m} \ll \; 0$ where subscripts 'i' and 'o' denote the intra- and extracellular compartments, respectively. Keeping all terms except V[m] constant, the equation yields a straight line when plotting I[S] against V [m]. It is evident that the ratio between the two asymptotes is merely the ratio between the two concentrations of S, [S][i] and [S][o]. Thus, if the two concentrations are identical, the slope will be identical (and constant) throughout the voltage range (corresponding to Ohm's law). As the ratio between the two concentrations increases, so does the difference between the two slopes, meaning that the current is larger in one direction than the other, given an equal driving force of opposite signs. This is contrary to the result obtained if using Ohm's law, and the effect is called The GHK current equation is mostly used by electrophysiologists when the ratio between [S][i] and [S][o] is large and/or when one or both of the concentrations change considerably during an action potential. The most common example is probably intracellular calcium, [Ca^2+][i], which during a cardiac action potential cycle can change 100-fold or more, and the ratio between [Ca^2+][o] and [Ca^ 2+][i] can reach 20,000 or more. See also
{"url":"http://psychology.wikia.com/wiki/GHK_current_equation?oldid=60206","timestamp":"2014-04-17T05:11:09Z","content_type":null,"content_length":"64792","record_id":"<urn:uuid:7579c70b-432f-4555-b881-0fa7fd3c7945>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00435-ip-10-147-4-33.ec2.internal.warc.gz"}
Existence of positive solutions for nonlinear m-point boundary value problems on time scales In this article, we study the following m-point boundary value problem on time scales, where is a time scale such that , ϕ[p](s) = |s|^p-2s,p > 1,h ∈ C[ld]((0, T), (0, +∞)), and f ∈ C([0,+∞), (0,+∞)), . By using several well-known fixed point theorems in a cone, the existence of at least one, two, or three positive solutions are obtained. Examples are also given in this article. AMS Subject Classification: 34B10; 34B18; 39A10. positive solutions; cone; multi-point; boundary value problem; time scale 1 Introduction The study of dynamic equations on time scales goes back to its founder Hilger [1], and is a new area of still theoretical exploration in mathematics. Motivating the subject is the notion that dynamic equations on time scales can build bridges between continuous and discrete mathematics. Further, the study of time scales has led to several important applications, e.g., in the study of insect population models, neural networks, heat transfer, epidemic models, etc. [2]. Multipoint boundary value problems of ordinary differential equations (BVPs for short) arise in a variety of different areas of applied mathematics and physics. For example, the vibrations of a guy wire of a uniform cross section and composed of N parts of different densities can be set up as a multi-point boundary value problem [3]. Many problems in the theory of elastic stability can be handled by the method of multi-point problems [4]. Small size bridges are often designed with two supported points, which leads into a standard two-point boundary value condition and large size bridges are sometimes contrived with multi-point supports, which corresponds to a multi-point boundary value condition [5]. The study of multi-point BVPs for linear second-order ordinary differential equations was initiated by Il'in and Moiseev [6]. Since then many authors have studied more general nonlinear multi-point BVPs, and the multi-point BVP on time scales can be seen as a generalization of that in ordinary differential equations. Recently, the existence and multiplicity of positive solutions for nonlinear differential equations on time scales have been studied by some authors [7-11], and there has been some merging of existence of positive solutions to BVPs with p-Laplacian on time scales [12-19]. He [20] studied subject to one of the following boundary conditions where . By using a double fixed-point theorem, the authors get the existence of at least two positive solutions to BVP (1.1) and (1.2). Anderson [21] studied subject to one of the following boundary conditions by using a functional-type cone expansion-compression fixed-point theorem, the author gets the existence of at least one positive solution to BVP (1.3), (1.4) and BVP (1.3), (1.5). However, to the best of the authors' knowledge, up to now, there are few articles concerned with the existence of m-point boundary value problem with p-Laplacian on time scales. So, in this article, we try to fill this gap. Motivated by the article mentioned above, in this article, we consider the following m-point BVP with one-dimensional p-Laplacian, where ϕ[p](s) = |s|^p-2s,p > 1,h ∈ C[ld]((0,T), (0, +∞)), . δ, β[i ]> 0, i = 1,..., m - 2. We will assume throughout (S1) h ∈ C[ld ]((0, T), [0, ∞)) such that ; (S2) f ∈ C([0, ∞), (0, ∞)), f ≢ 0 on ; (S3) By ϕ[q ]we denote the inverse to ϕ[p], where ; 2 Preliminaries In this section, we will give some background materials on time scales. Definition 2.1. [7,22] For and , define the forward jump operator σ and the backward jump operator ρ, respectively, for all . If σ(t) > t, t is said to be right scattered, and if ρ(r) < r, r is said to be left scattered. If σ(t) = t, t is said to be right dense, and if ρ(r) = r, r is said to be left dense. If has a right scattered minimum m, define ; Otherwise set . The backward graininess is defined by If has a left scattered maximum M, define ; Otherwise set . The forward graininess is defined by Definition 2.2. [7,22] For and , we define the "Δ" derivative of x(t), x^Δ(t), to be the number (when it exists), with the property that, for any ε > 0, there is neighborhood U of t such that for all s ∈ U. For and , we define the "∇" derivative of x(t),x^Δ (t), to be the number(when it exists), with the property that, for any ε > 0, there is a neighborhood V of t such that for all s ∈ V. Definition 2.3. [22] If F^Δ (t) = f(t), then we define the "Δ" integral by If F^∇ (t) = f(t), then we define the "∇" integral by Lemma 2.1. [23]The following formulas hold: Lemma 2.2. [7, Theorem 1.75 in p. 28] If f ∈ C[rd ]and , then According to [23, Theorem 1.30 in p. 9], we have the following lemma, which can be proved easily. Here, we omit it. where the integral on the right is the usual Riemann integral from calculus. (ii) If [a, b] consists of only isolated points, then In what follows, we list the fixed point theorems that will be used in this article. Theorem 2.4. [24]Let E be a Banach space and P ⊂ E be a cone. Suppose Ω[1], Ω[2 ]⊂ E open and bounded, . Assume is completely continuous. If one of the following conditions holds (i) ∥Ax∥ ≤ ∥x∥, ∀x ∈ ∂Ω[1 ]∩ P, ∥Ax∥ ≥ ∥x∥, ∀x ∈ ∂Ω[2 ]∩ P; (ii) ∥Ax∥ ≥ ∥x∥, ∀x ∈ ∂Ω[1 ]∩ P, ∥Ax∥ ≤ ∥x∥, ∀x ∈ ∂Ω[2 ]∩ P. Then, A has a fixed point in . Theorem 2.5. [25]Let P be a cone in the real Banach space E. Set If α and γ are increasing, nonnegative continuous functionals on P, let θ be a nonnegative continuous functional on P with θ(0) = 0 such that for some positive constants r, M, for all . Further, suppose there exists positive numbers a < b < r such that If is completely continuous operator satisfying (i) γ(Au) > r for all u ∈ ∂P(γ, r); (ii) θ(Au) < b for all u ∈ ∂P(θ, r); (iii) and α(Au) > a for all u ∈ ∂P(α, a). Then, A has at least two fixed points u[1 ]and u[2 ]such that Let a, b, c be constants, P[r ]= {u ∈ P : ∥u∥ < r}, P(ψ, b, d) = {u ∈ P : a ≤ ψ(u), ∥u∥ ≤ b}. Theorem 2.6. [26]Let be a completely continuous map and ψ be a nonnegative continuous concave functional on P such that for , there holds ψ(u) ≤ ∥u∥. Suppose there exist a, b, d with 0 < a < b < d ≤ c such that (i) and ψ(Au) > b for all u ∈ P(ψ, b, d); (iii) ψ(Au) > b for all u ∈ P(ψ, b, d) with ∥Au∥ > d. Then, A has at least three fixed points u[1],u[2], and u[3 ]satisfying Let the Banach space E = C[ld][0, T] be endowed with the norm ∥u∥ = sup[t ∈ [0,T] ]u(t), and cone P ⊂ E is defined as It is obvious that ∥u∥ = u(T) for u ∈ P. Define A : P → E as for t ∈ [0, T]. In what follows, we give the main lemmas which are important for getting the main results. Lemma 2.7. A : P → P is completely continuous. Proof. First, we try to prove that A : P → P. Thus, (Au)^Δ (T) = 0 and by Lemma 2.1 we have (Au)^Δ∇ (t) = -h(t)f(t, u(t)) ≤ 0 for t ∈ (0, T). Consequently, A : P → P. By standard argument we can prove that A is completely continuous. For more details, see [27]. The proof is complete. Lemma 2.8. For u ∈ P, there holds for t ∈ [0,T]. Proof. For u ∈ P, we have u^Δ∇ (t) ≤ 0, it follows that u^Δ (t) is non-increasing. Therefore, for 0 < t < T, Combining (2.1) and (2.3) we have as u(0) ≥ 0, it is immediate that The proof is complete. 3 Existence of at least one positive solution First, we give some notations. Set Theorem 3.1. Assume in addition to (S1) and (S2), the following conditions are satisfied, there exists such that Then, BVP (1.6) has at least one positive solution. Proof. Cone P is defined as above. By Lemma 2.7 we know that A : P → P is completely continuous. Set Ω[r ]= {u ∈ E, ∥u∥ < r}. In view of (H1), for u ∈ ∂ Ω[r ]∩ P, which means that for u ∈ ∂Ω[r ]∩ P, ||Au|| ≤ ||u||. On the other hand, for u ∈ P, in view of Lemma 2.8, there holds , for t ∈ [ξ[1], T]. Denote Ω[ρ ]= {u ∈ E, ∥u∥ < ρ}. Then for u ∈ ∂Ω[ρ ]∩ P, considering (H2), we have which implies that for u ∈ ∂Ω[ρ]∩P, ∥Au∥ ≥ ∥u∥ Therefore, the immediate result of Theorem 2.4 is that A has at least one fixed point u ∈ (Ω[ρ]\Ω[r]) ∩ P. Also, it is obvious that the fixed point of A in cone P is equivalent to the positive solution of BVP (1.6), this yields that BVP (1.6) has at least one positive solution u satisfies r ≤ ∥u∥ ≤ ρ. The proof is complete. Here is an example. Example 3.2. Let . Consider the following four point BVP on time scale . and h(t) = 1, T = 4, ξ[1 ]= 2, ξ[2 ]= 3, δ = 2, β[1 ]= β[2 ]= 1,p = q = 2. In what follows, we try to calculate Λ, B. By Lemmas 2.2 and 2.3, we have Thus, if all the conditions in Theorem 3.1 satisfied, then BVP (3.1) has at least one positive solution lies between 100 and 1000. 4 Existence of at least two positive solutions In this section, we will apply fixed point Theorem 2.5 to prove the existence of at least two positive solutions to the nonlinear BVP (1.6). and define the increasing, nonnegative, continuous functionals γ, θ,α on P by We can see that, for u ∈ P, there holds In addition, Lemma 2.8 implies that which means that We also see that For convenience, we give some notations, Theorem 4.1. Assume in addition to (S1), (S2) there exist positive constants such that the following conditions hold (H3) f(t, u) > ϕ[p](c/M) for t ∈ [ξ[1],T] u ∈ [c,Tc/ξ[1]]; (H4) f(t, u) < ϕ[p](b/K) for t ∈ [0,ξ[m-2]], u ∈ [b,Tb/ξ[m-2]]; (H5) f(t, u) > ϕ[p](a/L) for t ∈ [η,T], u ∈ [a,Ta/η]. Then BVP (1.6) has at least two positive solutions u[1 ]and u[2 ]such that Proof. From Lemma 2.7 we know that A : P(γ, c) → P is completely continuous. In what follows, we will prove the result step by step. Step one: To verify (i) of theorem 2.5 holds. We choose u ∈ ∂P(γ,c), then . This implies that u(t) ≥ c for t ∈ [ξ[1],T], considering that , we have As a consequence of (H3), Since Au ∈ P, we have Thus, (i) of Theorem 2.5 is satisfied. Step two: To verify (ii) of Theorem 2.5 holds. Let u ∈ ∂P(θ,b), then , this implies that 0 ≤ u(t) ≤ b, t ∈ [0,ξ[m-2]] and since u ∈ P, we have ∥u∥ = u(T), note that . So, From (H4) we know that for t ∈ [0, ξ[m-2]] and so Thus, (ii) of Theorem 2.5 holds. Step three: To verify (iii) of Theorem 2.5 holds. Choose , obviously, u[0](t) ∈ P(α, a) and , thus . Now, let u ∈ ∂P(α, a), then, α(u) = min[t∈[η,T] ]u(t) = u(η) = a. Recalling that . Thus, we have From assumption (H5) we know that and so Therefore, all the conditions of Theorem 2.5 are satisfied, thus A has at least two fixed points in P(γ,c), which implies that BVP (1.6) has at least two positive solutions u[1],u[2 ]which satisfies (4.1). The proof is complete. Example 4.2. Let . Consider the following four point boundary value problem on time scale . and h(t) = t, T = 8, ξ[1 ]= 1, ξ[2 ]= 2, δ = 1, β[1 ]= 1, β[2 ]= 2,p = 3/2, q = 3. In what follows, we try to calculate K, M, L. By Lemmas 2.2 and 2.3, we have Let a = 10^6, b = 10^8, c = 10^9, then we have (i) , for t ∈ [1, 8], u ∈ [10^9, 8 × 10^9]; (ii) , for t ∈ [0, 2], u ∈ [10^8, 4 × 10^8]; (iii) , for t ∈ [4, 8], u ∈ [10^6, 2 × 10^6]. Thus, if all the conditions in Theorem 4.1 are satisfied, then BVP (4.2) has at least two positive solutions satisfying (4.1). 5 Existence of at least three positive solutions Let , then 0 < ψ(u) ≤ ∥u∥. Denote In this section, we will use fixed point Theorem 2.6 to get the existence of at least three positive solutions. Theorem 5.1. Assume that there exists positive number d, ν, g satisfying , such that the following conditions hold. (H6) f(t, u) < ϕ[p](d/R), t ∈ [0,T],u ∈ [0,d]; (H7) f(t, u) > ϕ[p](ν/D), t ∈ [ξ[1], T], u ∈ [ν, Tυ/ξ[1]]; (H8) f(t, u) ≤ ϕ[p](g/R), t ∈ [0,T],u ∈ [0,g], then BVP (1.6) has at least three positive solutions u[1], u[2], u[3 ]satisfying Proof. From Lemma 2.8 we know that A : P → P is completely continuous. Now we only need to show that all the conditions in Theorem 2.6 are satisfied. Thus, . Similarly, by (H6), we can prove (ii) of Theorem 2.6 is satisfied. In what follows, we try to prove that (i) of theorem 2.6 holds. Choose , obviously, ψ(u[1]) > ν, thus . For u ∈ P(ψ,ν,Tν/ξ[1]), It remains to prove (iii) of Theorem 2.6 holds. For u ∈ P(ψ, ν, Tυ/ξ[1]), with ∥Au∥ > Tν/ξ[1], in view of Lemma 2.8, there holds , which implies that (iii) of Theorem 2.6 holds. Therefore, all the conditions in Theorem 2.6 are satisfied. Thus, BVP (1.6) has at least three positive solutions satisfying (5.1). The proof is complete. Example 5.2. Let . Consider the following four point boundary value problem on time scale . and h(t) = e^t, T = 2, ξ[1 ]= 1/2, ξ[2 ]= 1, δ = 3, β[1 ]= 2, β[2 ]= 3, p = 4, q = 4/3. In what follows, we try to calculate D, R. By Lemmas 2.2 and 2.3, we have Let d = 40, ν = 50, g = 400, then we have (i) f(t, u) < 7.027 = (40/20.8832)^3 = ϕ[p](d/R), for t ∈ [0, 2], u ∈ [0, 40]; (ii) f(t, u) > 23.2375 = (50/17.5216)^3 = ϕ[p](ν/D), for t ∈ [1/2, 2], u ∈ [50, 200]; (iii) f(t, u) < 7027.305 = (400/20.8832)^3 = ϕ[p](g/R), for t ∈ [0, 2], u ∈ [0, 400]. Thus, if all the conditions in Theorem 5.1 are satisfied, then BVP (5.2) has at least three positive solutions satisfying (5.1). Authors' contributions WG and HL conceived of the study, and participated in its coordination. JZ drafted the manuscript. All authors read and approved the final manuscript. The authors were very grateful to the anonymous referee whose careful reading of the manuscript and valuable comments enhanced presentation of the manuscript. The study was supported by Pre-research project and Excellent Teachers project of the Fundamental Research Funds for the Central Universities (2011YYL079, 2011YXL047). Sign up to receive new article alerts from Boundary Value Problems
{"url":"http://www.boundaryvalueproblems.com/content/2012/1/4","timestamp":"2014-04-20T20:58:47Z","content_type":null,"content_length":"182814","record_id":"<urn:uuid:1125952c-36ff-4095-ab92-eeebb0e49b8d>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00469-ip-10-147-4-33.ec2.internal.warc.gz"}