Don't know; don't care: Whatever

The Maybe monad type (covered in this blog in May) is a practical data type in Haskell and is used extensively. Since it is also a monadic type, it is immediately useful for logic programming, particularly of the semideterministic variety. Personally, I view Maybe as a second-order Boolean type, which, in itself, is a very powerful mode of expression, as George Boole demonstrated. Do yourself a favor: reread his works on the algebra, or perhaps some musings from others. Just as Donald Knuth showed that any n-tree could be transformed into an equivalent binary tree, and Paul Tarau put that to good clause and effect, the Boolean algebra can model many other forms of algebras ... and does, for, after all, last I checked, all my calculations are reduced to Boolean representations (some call them binary digits). That's all well and good (and for the most part, it is — it's very well-suited to describe a large class of problems). But, to a dyed-in-the-wool logic programmer, Haskell has a bit of a nagging problem capturing the concept of logic variables ... that is, a thing that either may be grounded to a particular value (which Maybe already has when the instance is Just x) or may be free (which Maybe does not have). In short, the problem is with Nothing. And what is the problem? Values resolving to Nothing in a monadic computation, because they represent failure, propagate throughout the entire (possibly chained) computation, forcing it to abort. Now, under the Maybe protocol, this is the desired result, but, when doing logic programming, it is often desirable to proceed with the computation until the value is grounded.

Now, deferred computation in Haskell is not a novel concept. In fact, there are several approaches to deferred, or nondeterministic, assignment, including my own unification recipe as well as the credit card transformation (which the author immediately disavows). The problem with these approaches is that they require resolution within that expression. What we need is a data type that captures succinctly this decided or undecided state. This we do with the Whatever data type:

data Whatever x = Only x | Anything

Just like Maybe, Whatever can be monadic:

instance Functor Whatever where
  fmap f (Only x) = Only (f x)
  fmap f Anything = Anything

instance Monad Whatever where
  return x = Only x
  Only x >>= f = f x
  Anything >>= f = Anything

You'll note that the monadic definition of Whatever is the same as Maybe's, with the exception that Nothing carries the semantics of failure, whereas Anything has no such semantics. Even though Whatever is monadic, the interesting properties of this data type come to the fore in its more ordinary usage ...

instance Eq x ⇒ Eq (Whatever x) where
  Only x ≡ Only y = x ≡ y
  Anything ≡ _ = True
  _ ≡ Anything = True

... and with that definition, simple logic puzzles, such as the following, can be constructed:

In a certain bank the positions of cashier, manager, and teller are held by Brown, Jones and Smith, though not necessarily respectively. The teller, who was an only child, earns the least. Smith, who married Brown's sister, earns more than the manager. What position does each man fill?^1

An easy enough puzzle solution to construct in Prolog, I suppose, and now, given the Whatever data type, easy enough in Haskell, too! Starting from the universe of pairs that contain the solution ...

data Man = Smith | Jones | Brown deriving (Eq, Show)
data Position = Cashier | Manager | Teller deriving (Eq, Show)

type Sibling = Bool
type Ans = (Position, Man)

universe :: [Ans]
universe = [(pos, man) | pos ← [Cashier, Manager, Teller],
                         man ← [Smith, Jones, Brown]]

...
we restrict that universe under the constraint of the first rule ("The teller, who was an only child ...") by applying "only child"-ness both to the Position and to the Man holding that Position (as obtained from the fact: "... Brown's sister ..."), but abstaining from defining a sibling restriction on the other Men ...

-- first rule definition: The teller must be an only child
sibling :: Position → Whatever Sibling
sibling Teller = Only False
sibling _ = Anything

-- fact: Brown has a sibling
hasSibling :: Man → Whatever Sibling
hasSibling Brown = Only True
hasSibling _ = Anything

-- seed: the universe constrained by the sibling relation
seed :: [Ans]
seed = filter (λ(pos, man).sibling pos ≡ hasSibling man) universe

And, given that, we need to define the rest of the rules. The first implied rule, that each Position is occupied by a different Man, is straightforward when implemented by the monadic choose operator defined elsewhere; the other rule concerns the pecking order when it comes to earnings: "The teller ... earns the least" (earnings rule 1) and "Smith, ..., earns more than the manager." (earnings rule 2). This "earnings" predicate we implement monadically, to fit in the domain of StateT [Ans] []:

-- the earnings predicate, a souped-up guard
makesLessThan :: Ans → Ans → StateT [Ans] [] ()
-- earning rule 1: the teller makes less than the others
(Teller, _) `makesLessThan` _ = StateT $ λans. [((), ans)]
_ `makesLessThan` (Teller, _) = StateT $ λ_. []
-- and the general ordering of earnings, allowed as long as it's
-- different people and positions
(pos1, man1) `makesLessThan` (pos2, man2) = StateT $ λans.
    if pos1 ≡ pos2 ∨ man1 ≡ man2 then [] else [((), ans)]

... we then complete the earnings predicate in the solution definition:

rules :: StateT [Ans] [] [Ans]
rules = do
    teller@(Teller, man1) ← choose
    mgr@(Manager, man2) ← choose
    guard $ man1 ≠ man2
    cashier@(Cashier, man3) ← choose
    guard (man1 ≠ man3 ∧ man2 ≠ man3)
    -- we've extracted a unique person for each position,
    -- now we define earnings rule 2:
    -- "Smith makes more than the manager"
    -- using my good-enough unification^2 to find Smith
    let k = const^3
    let smith = (second $ k Smith)∈[teller,mgr,cashier]
    mgr `makesLessThan` smith
    return [teller, mgr, cashier]

... and then to obtain the solution, we simply evaluate the rules over the seed:

evalStateT rules seed

The pivotal rôle that the Whatever data type plays is in the sibling relation (defined by the involuted darts). We can abduce the derived fact that Brown is not an only child, and this fact reduces the universe, removing the possibility that Brown may be the teller:

universe \\ seed ≡ [(Teller,Brown)]

... but what about the other participants? For Jones, when we are doing the involution, we don't know his familial status (we eventually follow a chain of reasoning that leads us to the knowledge that he is an only child) and, at that point in the reasoning, we don't care. For Smith, we never have enough information derived from the problem to determine if he has siblings, and here again, we don't care. That is the power that the Whatever data type gives us: we may not have enough information reachable from the expression to compute the exact value in question, but we may proceed with the computation anyway. Unlike Maybe's resolution to one of the members of a type (Just x) or to no solution (Nothing), the Whatever data type allows the resolution (Only x), but it also allows the value to remain unresolved within the type constraint (Anything).^4 So, I present to you, for your logic problem-solving pleasure, the Whatever data type.^5

^1 Problem 1 from 101 Puzzles in Thought and Logic, by C. R. Wylie, Jr. Dover Publications, Inc. New York, NY, 1957.
^2 f ∈ list = head [x | x ← list, f x ≡ x]. When x is atomic, of course f ≡ id, but when x is a compound term, then there are many definitions of f for each data type of x that satisfy the equality f x ≡ x and that do not equate to id. We see one such example in the code above.

^3 I really must define a library of combinators for Haskell, à la Smullyan's To Mock a Mockingbird, as I've done for Prolog.

^4 This semantic is syntactically implemented in Prolog as anonymous logic variables.

^5 I find it ironic that Maybe says everything about the resolution of a value, but Whatever allows you to say Nothing about that value's resolution.

7 comments:

I hope you are aware that cyan text on white is almost invisible, especially for people with colour-blindness in red.

I propose a shorter keyword: Meh.

> hasSibling :: Teller → Whatever Sibling
You have a mistake there, should be Man -> Whatever Sibling

Enjoyable reading. Have you seen the LogicT monad transformer? http://okmij.org/ftp/Computation/monads.html#LogicT I haven't done very much logic programming, but I think if I did, I'd do it in Haskell with LogicT.

@david, thanks for the fix ... yes, that was an error in the transcription of the Haskell source to the blog. Fixed it on the blog.

@denis, yes, I have reviewed LogicT and worked with it a bit. Not my cup of tea, so I've been working entirely within Haskell (arrows, monads, data types, and recently comonads) to get typeful logic programming and all of Haskell at any point in the program. My experiments so far have been promising: efficient runtime results and compact and consistent syntax. Still working on my system, as these blog entries demonstrate, so let me know how your explorations with LogicT go.

@barry, as we discussed off line, the cyan font colour is now darkened to stand out better from the white background.
{"url":"http://logicaltypes.blogspot.com/2008/07/dont-know-dont-care-whatever.html?showComment=1217609760000","timestamp":"2014-04-18T10:34:51Z","content_type":null,"content_length":"104807","record_id":"<urn:uuid:83b26562-705f-427e-b5dc-991d31eab768>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00044-ip-10-147-4-33.ec2.internal.warc.gz"}
Help Needed

February 13th, 2013, 02:48 AM

Help Needed: IF Statements

Write the program logic that will compute the paycheck of an employee. This is as much an exercise in designing your logic as it is a programming exercise in the grammar and syntax. Your program's logic will prompt a user to perform the following:

1. Enter the employee's LAST Name
2. Enter the employee's FIRST Name
3. Enter the employee's ID Number
4. Enter the employee's Total Hours Worked, up to a maximum of 60 hours
5. Enter the employee's Pay Per Hour. The Pay Per Hour cannot be more than $100.00 per hour. It cannot be less than the minimum wage (i.e. $8.50 per hour). Detect any erroneous pay rate and set it to a default: if less than the minimum hourly wage, set it to the minimum hourly wage; if greater than the maximum rate, then set that employee to the maximum rate. Display an appropriate "error message" telling the data entry clerk what they did.
6. Federal tax rate and State tax rate range from zero to 40%. State and Federal tax rates cannot exceed a combined 80%.
7. Calculations include:
a) Compute the Regular Pay (any hours up to 40 hours): Regular Hours X Pay Per Hour
b) Compute the Overtime Pay, if ANY (hours OVER 40 hours): Overtime Hours X Pay Per Hour X 1.25 ("time and a quarter" for overtime)
c) Compute Gross Pay: Regular Pay + Overtime Pay (if NO overtime then Overtime Pay is zero)
d) Compute Federal Taxes Withheld (assume 20% rate, i.e. 0.2): Gross Pay X Federal Tax Rate
e) Compute State Taxes Withheld (assume 5% rate, i.e. 0.05): Gross Pay X State Tax Rate
f) Compute Net Pay: Gross Pay - Federal Taxes Withheld - State Taxes Withheld

Output the employee's LAST Name, FIRST Name, ID Number, Total Hours, Regular Hours, Overtime Hours, if any (if NO overtime hours then don't print it), Gross Pay, Regular Pay, Overtime Pay, if any (if NO overtime hours then don't print it), Federal Taxes Withheld, State Taxes Withheld, Net Pay. Output format, I leave to your imagination (i.e. some sort of if statement?).

The output is where I have to show last name, first name, id, total hours, regular hours, then regular hours and overtime, then gross pay, then federal tax and state tax. I need the constants declared.

February 13th, 2013, 06:16 AM

Re: Help Needed

Do you have any specific questions about your assignment? Please post your code and any questions about problems you are having. What have you tried?

February 13th, 2013, 12:38 PM

Re: Help Needed

// Assignment 2
import java.io.*;

public class payment {

    public static void main(String args[]) throws IOException {

        BufferedReader input = new BufferedReader(new InputStreamReader(System.in));

        System.out.println("Enter your last name: ");
        String last = input.readLine();
        System.out.println("Enter your first name: ");
        String first = input.readLine();
        System.out.println("Enter your ID: ");
        String ID = input.readLine();

        System.out.println("Enter your total work hrs: ");
        Double hr = Double.parseDouble(input.readLine());
        if (hr > 60) {
            System.out.println("Total Hours can not be more than 60hrs");
            System.out.println("Enter your total work hrs: ");
            hr = Double.parseDouble(input.readLine());
        }

        System.out.println("Enter pay per hrs rate: ");
        Double rate = Double.parseDouble(input.readLine());
        if (rate > 100.00) {
            System.out.println("Rate per hr. not correct");
            rate = 100.00;
        }
        if (rate < 8.5) {
            System.out.println("Rate per hr. not correct");
            rate = 8.5;
        }

        System.out.println("Enter Federal tax(0% - 40%): ");
        Double fd = Double.parseDouble(input.readLine());
        System.out.println("Enter State tax (0% - 40%): ");
        Double st = Double.parseDouble(input.readLine());
        if (fd > 40 || st > 40 || fd + st > 80) {
            System.out.println("Wrong tax rate enterd");
        }

        Double rpay;
        Double otime = 0.0;
        rpay = 40 * rate;
        otime = (hr - 40) * rate * 1.25;
        Double totalworkhours = hr + otime;
        Double gpay = rpay + otime;
        Double ftw = gpay * fd / 100;
        Double stw = gpay * st / 100;
        Double netpay = gpay - ftw - stw;

        System.out.println("Last Name" + last + "First Name" + first);
        System.out.println("ID number: " + ID);
        System.out.println("Total Hours Worked " + totalworkhours);
        System.out.println("Regular time: " + (hr - otime));
        System.out.println("Over Time: " + otime);
        System.out.println(" Federal tax: " + ftw);
        System.out.println(" State tax : " + stw);
        System.out.println("Net Pay: " + netpay);
    }
}

This is the code but it's not working.

February 13th, 2013, 12:41 PM

Re: Help Needed

> it's not working
Please explain. If there are error messages, please copy the full text and paste it here. If the output is wrong, copy the output and paste it here and add some comments saying what is wrong with the output and show what the output should be. Please edit your post and wrap your code with <YOUR CODE HERE> to get highlighting and preserve formatting.

February 13th, 2013, 12:46 PM

Re: Help Needed

Last Name johalFirst Name navi
ID number: 123de
Total Hours Worked 636.0
Regular time: -524.0
Over Time: 580.0
Federal tax: 348.0
State tax : 174.0
Net Pay: 1218.0

When it's more than 40 hours, it gives me a negative answer for the regular time. How can I fix it?

February 13th, 2013, 12:55 PM

Re: Help Needed

> Regular time: "+ (hr-otime));
What are the expected and actual values of hr and otime? If otime > hr, then the result will be a negative number. Look at where each of those variables get their values and see why they are a problem. Define what is in the hr variable and in the otime variable. Are they compatible?

> it gives me a negative answer for the regular time
What should the answer be? Your unformatted code is hard to read and understand. Please edit your post and wrap your code with <YOUR CODE HERE> to get highlighting and preserve formatting.
{"url":"http://www.javaprogrammingforums.com/%20whats-wrong-my-code/23765-help-needed-printingthethread.html","timestamp":"2014-04-19T22:32:14Z","content_type":null,"content_length":"12047","record_id":"<urn:uuid:efaa72dd-e8f0-40ad-8729-025338100c78>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00534-ip-10-147-4-33.ec2.internal.warc.gz"}
Re: Constraints and Functional Dependencies From: Marshall <marshall.spight_at_gmail.com> Date: 24 Feb 2007 16:23:10 -0800 Message-ID: <1172362990.289544.267480@p10g2000cwp.googlegroups.com> On Feb 24, 10:08 am, Bob Badour <bbad..._at_pei.sympatico.ca> wrote: > Marshall wrote: > > One can express unboundedness, but since I was proposing > > limiting what one can quantify over to named relations, and > > since the natural numbers are something other than that, > > (an infinite set) my expressiveness restrictions make it > > impossible to express the unboundedness *of the natural > > numbers* specifically. > Suppose one has an extent function that returns a relation representing > the extent of a type. Then "extent(natural)" is the name for a relation > representing the set of all natural numbers. > However, in a real computer, the natural type will be finite not > infinite and the unbounded constraint above would always fail (except > for empty relations ironically). I would phrase that differently: in a real computer, the basic numeric type will be a finite moduo type, like the familiar 32 bit in from C. But yeah. There are languages out there (scheme?) for which the basic integral type is an arbitrary precision integer. Of course "arbitrary precision" is not the same thing as "infinite". But an arbitrary precision type is nonetheless qualitatively different from a modular int type, because in the case of the second, the limitation is on the type itself, but in the case of the first, the limitation is on the machine, not the type. (Actually most arbitrary precision types are probably also finite and hence not truly arbitrary precision. I would expect a size limit of 2^32 bytes or words. Even in that case though, the type limit exceeds or approaches the machine limit. And even that limit can be broken, say with run length encoding.) (Another aside: arbitrary precision isn't as useful as one might guess. In fact I can only recall one instance in my entire career in which a 64 bit int wasn't sufficient: representing $250,000 in millionths of old Turkish lira. Ha ha! In another forum I recall someone saying that just 40 digits of pi was sufficient to calculate the circumference of the universe to within one centimeter.) > How does one express that R has at least one tuple? Or that a specific > instance exists? > Would the following do? > exists R(a): true > exists R(a): a = literal Yes and yes, I would say. I asked that first question a while ago and Jan Hidders answered with "an empty existential constraint", which I take to mean exists R(): > If that works, I suppose one could describe the extent of the natural > numbers as: > (exists extent(natural)(value): value = 1) > and > (forall extent(natural)(value'): > exists extent(natural)(value''): > value'' = value' + 1 > ) > Of course, a finite computer would have to amend the above as follows: > (exists extent(natural)(value): value = 1) > and > (forall extent(natural)(value'): > value' = max(natural) > or exists extent(natural)(value''): > value'' = value' + 1 > ) > Assuming a max function that returns the largest value in an ordered type. That modification would be necessary if the software was going to actually do the computation. However there are algebra and proof assistant systems that don't need to make that modification because they don't do the computation but rather they instead just manipulate the symbols. 
Maybe later I'll see if I can coax Coq into verifying a proof of the unboundedness of the naturals.

Marshall

Received on Sat Feb 24 2007 - 18:23:10 CST
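The fact in question is simple enough that a proof assistant dispatches it in one line. The post mentions Coq; purely as an illustration, here is the same statement written in Lean 4 instead (a swap of tools, not of content): for every natural number there is a strictly larger one.

theorem nat_unbounded : ∀ n : Nat, ∃ m : Nat, n < m :=
  fun n => ⟨n + 1, Nat.lt_succ_self n⟩

This is, of course, a statement about the mathematical naturals; as discussed above, a finite machine type such as a 32-bit int satisfies no such property.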
{"url":"http://www.orafaq.com/usenet/comp.databases.theory/2007/02/24/0917.htm","timestamp":"2014-04-21T03:08:14Z","content_type":null,"content_length":"11351","record_id":"<urn:uuid:11eae88f-276c-413c-8a3a-24aa9498ee93>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00082-ip-10-147-4-33.ec2.internal.warc.gz"}
Bargain Mom

If you've been following me on Facebook then you know that I promised to post this giveaway for my fans tonight. I'm feeling generous, so that means one lucky Bargain Mom fan will win $250 via Paypal from me!

1. Fan Bargain Mom on Facebook (1 entry)
2. Follow Bargain Mom on Twitter (1 entry)
3. Subscribe to Bargain Mom by email (1 entry)
4. Fan my Soylicious.com page on Facebook (1 entry)
5. Follow @soyliciousmommy on Twitter (1 entry)
6. Write a review for Bargain Mom on Alexa (3 entries)
7. Tweet about this giveaway 1x per day and include @mibargainmom and a link to this post (1 entry/tweet)
8. Subscribe to Bargain Mom through Feedburner (1 entry)
9. Enter the Soylicious.com or Mary Kay giveaway (1 entry/giveaway)

You must leave a separate comment for each entry. I will check each entry to make sure they are valid. If your email does not show in your profile, please leave it in a comment. Add mi.bargainmom at yahoo.com to your email contacts so you don't miss out if you win! Giveaway is open to all! Contest ends Friday, August 31, 2012 at 11:59 EST. The winner will be chosen on or after Saturday, September 1, 2012 by Random.org. Winner will have 48 hours to respond by email or another winner will be chosen. Good Luck!

Disclosure: This giveaway is sponsored by Bargain Mom. Paypal and Facebook are in no way affiliated with this giveaway. Prize fulfillment is the sole responsibility of Bargain Mom.
{"url":"http://www.bargainmom.net/2012/08/250-giveaway-via-paypal-worldwide-ends.html","timestamp":"2014-04-18T13:06:17Z","content_type":null,"content_length":"356988","record_id":"<urn:uuid:770e763a-640b-4239-b6fe-c836dd758255>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Is the nc torus a quantum group? up vote 9 down vote favorite The non-commutative n-torus appears in many applications of non-commutative geometry. To stay in the setting $n=2$: it is a C$^\ast$-algebra generated by unitaries $u$ and $v$, satisfying $u v = e^{i \theta} v u$. It is the deformation of the 2-torus, i.e. a group. So my question is: besides viewing the nc torus as a 'non-commutative space', is it also a compact quantum group? That is, is there Hopf algebraic structure in it? quantum-groups noncommutative-geometry add comment 3 Answers active oldest votes The $C^*$-algebra versions are treated in this paper by Piotr Soltan: up vote 10 down vote The abstract reads: We prove that some well known compact quantum spaces like quantum tori and some quantum two-spheres do not admit a compact quantum group structure. This is achieved by considering existence of traces, characters and nuclearity of the corresponding $\mathrm{C}^*$-algebras. add comment I don't know about the $C^*$-algebra version, but I can tell you about the algebraic version (the algebra generated by $u$ and $v$, invertible, such that $uv = qvu$). It is not a Hopf algebra but a "braided group", that is, a Hopf algebra in some braided category (classical Hopf algebras being, in this parlance, "Hopf algebras in the category of vector spaces with the trivial twist"). Concretely, there is a map of algebras $A \to A \otimes A$ satisfying all the axioms you want, except that $A \otimes A$ is not made into an algebra in the way you think. up vote 9 If I were allowed a bit of self-advertising, I'd recommend §4 of down vote Majid's book on quantum groups may have some formulae about the codiagonal in the quantum tori. add comment Despite the negative result quoted by MTS, there have been some attempts to put a Hopf-like structure on the quantum torus. One of these attemps, which seems orthogonal to the one mentioned by Pierre in his answer, is via Hopfish algebras. To be short, Hopfish algebras (after Tang-Weinstein-Zhu) are unital algebras equipped a coproduct, a counit and an antipode that are morphisms in the Morita category (they are bimodules,rather than actual algebra morphisms). The Hopfish structure on the quantum torus has been studied in details in this paper. up vote 3 To be complete, let me emphazise the following point (taken from the above paper): down vote It is important to note that, although the irrational rotation algebra may be viewed as a deformation of the algebra of functions on a 2-dimensional torus, our hopfish structure is not a deformation of the Hopf structure associated with the group structure on the torus. Rather, the classical limit of our hopfish structure is a second symplectic groupoid structure on $T^∗\mathbb{T}^2$ (...), whose quantization is the multiplication in the irrational rotation algebra. We thus seem to have a symplectic double groupoid which does not arise from a Poisson Lie group. add comment Not the answer you're looking for? Browse other questions tagged quantum-groups noncommutative-geometry or ask your own question.
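A quick computation, standard and easy to verify, makes Pierre's point that $A \otimes A$ is not given the obvious algebra structure concrete. Try to copy the group-like coproduct of the classical torus, $\Delta(u) = u \otimes u$ and $\Delta(v) = v \otimes v$, and ask it to be an algebra map into the ordinary tensor product. Using $uv = e^{i\theta} vu$,

$$\Delta(u)\,\Delta(v) = uv \otimes uv = e^{2i\theta}\, vu \otimes vu, \qquad \Delta(v)\,\Delta(u) = vu \otimes vu,$$

so $\Delta(u)\Delta(v) = e^{2i\theta}\,\Delta(v)\Delta(u)$, while compatibility with the defining relation would require the factor $e^{i\theta}$. Hence the naive coproduct fails to be multiplicative for the usual tensor product whenever $e^{i\theta} \neq 1$, which is one way to see why a braided tensor product (or a Hopfish structure) has to enter.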
{"url":"http://mathoverflow.net/questions/76913/is-the-nc-torus-a-quantum-group?sort=votes","timestamp":"2014-04-19T00:04:53Z","content_type":null,"content_length":"56821","record_id":"<urn:uuid:a4103c9e-ebf3-4be5-b7af-08b528f68476>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00179-ip-10-147-4-33.ec2.internal.warc.gz"}
Simulated Annealing Example in C#

Simulated annealing (SA) is an AI algorithm that starts with some solution that is totally random, and changes it to another solution that is "similar" to the previous one. It makes slight changes to the result until it reaches a result close to the optimal. Simulated annealing is a stochastic algorithm, meaning that it uses random numbers in its execution. So every time you run the program, you might come up with a different result. It produces a sequence of solutions, each one derived by slightly altering the previous one, or by rejecting a new solution and falling back to the previous one without any change. When SA starts, it will even accept a new solution that is worse than the previous one. However, the probability with which it will accept a worse solution decreases with time (the cooling process) and with the "distance" the new (worse) solution is from the old one. It always accepts a new solution if it is better than the previous one. The probability used is derived from the Maxwell-Boltzmann distribution, which is the classical distribution function for the distribution of an amount of energy between identical but distinguishable particles. Its value for a state of energy E is proportional to exp(-E / kT), where T is the temperature and k is the Boltzmann constant; in the algorithm below this becomes the acceptance probability exp(-delta / temperature) for a move that worsens the tour by delta. Besides the presumption of distinguishability, classical statistical physics postulates further that:

• There is no restriction on the number of particles which can occupy a given state.
• At thermal equilibrium, the distribution of particles among the available energy states will take the most probable distribution consistent with the total available energy and total number of particles.
• Every specific state of the system has equal probability.

The name "simulated annealing" is derived from the physical heating of a material like steel. This material is subjected to high temperature and then gradually cooled. The gradual cooling allows the material to cool to a state in which there are few weak points. It achieves a kind of "global optimum" wherein the entire object achieves a minimum energy crystalline structure. If the material is rapidly cooled, some parts of the object retain areas of high-energy structure, and the object is easily broken. The object has achieved some local areas of optimal strength, but is not strong throughout, with rapid cooling.

In my program, I took the example of the travelling salesman problem: file tsp.txt. The matrix designates the total distance from one city to another (nb: the diagonal is 0 since the distance of a city to itself is 0). As for the program, I tried developing it as simply as possible so that it is understandable. You could change the starting temperature, decrease or increase epsilon (the temperature threshold at which the cooling stops) and alter alpha (the cooling factor) to observe the algorithm's performance. The program calculates the minimum distance to reach all cities (TSP). The best minimal distance I got so far using that algorithm was 17. Can you calculate a better distance?
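Before looking at the code, it helps to know roughly how long the cooling loop runs. The temperature follows T_n = T_0 · alpha^n, so the loop stops once T_0 · alpha^N ≤ epsilon, i.e.

N ≈ ln(epsilon / T_0) / ln(alpha) = ln(0.001 / 400) / ln(0.999) ≈ 12,900 iterations

for the default values used in the code below (T_0 = 400, epsilon = 0.001, alpha = 0.999). Raising alpha towards 1 (slower cooling) or lowering epsilon increases this count and usually improves the final tour, at the cost of run time.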
The Code

// class-level random number generator shared by both methods
Random rnd = new Random();

public string StartAnnealing()
{
    // primary configuration of cities
    int[] current = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14 };
    // the next configuration of cities to be tested
    int[] next = new int[15];

    int iteration = -1;
    // the acceptance probability
    double proba;
    double alpha = 0.999;
    double temperature = 400.0;
    double epsilon = 0.001;
    double delta;
    double distance = TspDataReader.computeDistance(current);

    // while the temperature did not reach epsilon
    while (temperature > epsilon)
    {
        iteration++;

        // get the next random permutation of cities
        computeNext(current, next);

        // compute the distance of the new permuted configuration
        delta = TspDataReader.computeDistance(next) - distance;

        if (delta < 0)
        {
            // the new distance is better: accept it and assign it
            Array.Copy(next, current, next.Length);
            distance = delta + distance;
        }
        else
        {
            proba = rnd.NextDouble();
            // if the new distance is worse, accept it but with a probability level:
            // accept if the probability is less than E to the power -delta/temperature,
            // otherwise the old value is kept
            if (proba < Math.Exp(-delta / temperature))
            {
                Array.Copy(next, current, next.Length);
                distance = delta + distance;
            }
        }

        // cooling process on every iteration
        temperature *= alpha;

        // print every 400 iterations (requires using System.Diagnostics)
        if (iteration % 400 == 0)
            Debug.WriteLine(distance);
    }

    return "best distance is " + distance;
}

/// <summary>
/// compute a new next configuration: copy the current
/// configuration and swap two randomly chosen cities
/// </summary>
/// <param name="c">current configuration</param>
/// <param name="n">next configuration</param>
void computeNext(int[] c, int[] n)
{
    for (int i = 0; i < c.Length; i++)
        n[i] = c[i];

    int i1 = (int)(rnd.Next(14)) + 1;
    int i2 = (int)(rnd.Next(14)) + 1;
    int aux = n[i1];
    n[i1] = n[i2];
    n[i2] = aux;
}

Make sure the debug window is opened to observe the algorithm's behavior through iterations. Happy programming!
{"url":"http://www.codeproject.com/Articles/13789/Simulated-Annealing-Example-in-C","timestamp":"2014-04-23T17:08:35Z","content_type":null,"content_length":"86771","record_id":"<urn:uuid:312a05b9-ed7a-4444-b359-91089a948dba>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00136-ip-10-147-4-33.ec2.internal.warc.gz"}
Sparse canonical methods for biological data integration: application to a cross-platform study

BMC Bioinformatics. 2009; 10: 34.

In the context of systems biology, few sparse approaches have been proposed so far to integrate several data sets. It is however an important and fundamental issue that will be widely encountered in post genomic studies, when simultaneously analyzing transcriptomics, proteomics and metabolomics data using different platforms, so as to understand the mutual interactions between the different data sets. In this high dimensional setting, variable selection is crucial to give interpretable results. We focus on a sparse Partial Least Squares approach (sPLS) to handle two-block data sets, where the relationship between the two types of variables is known to be symmetric. Sparse PLS has been developed either for a regression or a canonical correlation framework and includes a built-in procedure to select variables while integrating data. To illustrate the canonical mode approach, we analyzed the NCI60 data sets, where two different platforms (cDNA and Affymetrix chips) were used to study the transcriptome of sixty cancer cell lines. We compare the results obtained with two other sparse or related canonical correlation approaches: CCA with Elastic Net penalization (CCA-EN) and Co-Inertia Analysis (CIA). The latter does not include a built-in procedure for variable selection and requires a two-step analysis. We stress the lack of statistical criteria to evaluate canonical correlation methods, which makes biological interpretation absolutely necessary to compare the different gene selections. We also propose comprehensive graphical representations of both samples and variables to facilitate the interpretation of the results. sPLS and CCA-EN selected highly relevant genes and complementary findings from the two data sets, which enabled a detailed understanding of the molecular characteristics of several groups of cell lines. These two approaches were found to bring similar results, although they highlighted the same phenomena with a different priority. They outperformed CIA, which tended to select redundant information.

Background

In systems biology, it is particularly important to simultaneously analyze different types of data sets, specifically when the different kinds of biological variables are measured on the same samples. Such an analysis enables a real understanding of the relationships between these different types of variables, for example when analyzing transcriptomics, proteomics or metabolomics data using different platforms. Few approaches exist to deal with these high throughput data sets. The application of linear multivariate models such as Partial Least Squares regression (PLS, [1]) and Canonical Correlation Analysis (CCA, [2]) is often limited by the size of the data set (ill-posed problems, CCA), the noisy and multicollinear characteristics of the data (CCA), but also the lack of interpretability (PLS). However, these approaches still remain extremely interesting for integrating data sets. First, because they allow for the compression of the data into 2 to 3 dimensions for a more powerful and global view. And second, because their resulting components and loading vectors capture dominant and latent properties of the studied process.
They may hence provide a better understanding of the underlying biological systems, for example by revealing groups of samples that were previously unknown or uncertain. PLS is an algorithmic approach that has often been criticized for its lack of theoretical justifications. Much work still needs to be done to demonstrate all statistical properties of the PLS (see for example [3,4], who recently addressed some theoretical developments of the PLS). Nevertheless, this computational and exploratory approach is extremely popular thanks to its efficiency. Recent integrative biological studies applied Principal Component Analysis, or PLS [5,6], but for a regression framework, where prior biological knowledge indicates which type of omic data is expected to explain the other type (for example transcripts and metabolites). Here, we specifically focus on a canonical correlation framework, when there is either no assumption on the relationship between the two sets of variables (exploratory approach), or when a reciprocal relationship between the two sets is expected (e.g. cross platform comparisons). Our interests lie in integrating these two high dimensional data sets and performing variable selection simultaneously. Some sparse associated integrative approaches have recently been developed to include a built-in selection procedure. They adapt the lasso penalty [7] or combine lasso and ridge penalties (Elastic Net, [8]) for feature selection in integration studies. In this study, we propose to apply a sparse canonical approach called "sparse PLS" (sPLS) for the integration of high throughput data sets. Methodological aspects and evaluation of sPLS in a regression framework were presented in [9]. This novel computational method provides variable selection of two-block data sets in a one step procedure, while integrating variables of two types. When applying canonical correlation-based methods, most validation criteria used in a regression context are not statistically meaningful. Instead, the biological relevancy of the results should be evaluated during the validation process. In this context, we compare sparse PLS with two other canonical approaches: penalized CCA adapted with Elastic Net (CCA-EN [10]), which is a sparse method that was applied to relate gene expression with gene copy numbers in human gliomas, and Co-Inertia Analysis (CIA, [11]), which was first developed for ecological data, and then for canonical high-throughput biological studies [12]. This latter approach does not include feature selection, which has to be performed in a two-step procedure. This comparative study has two aims: first, to better understand the main differences between each of these approaches and to identify which method would be appropriate to answer the biological question; second, to highlight how each method is able to reveal the underlying biological processes inherent to the data. This type of comparative analysis renders biological interpretation mandatory to strengthen the statistical hypothesis, especially when there is a lack of statistical criteria to assess the validity of the results. We first recall some canonical correlation-based methods, among which the two sparse methods sPLS and CCA-EN will be compared with CIA on the NCI60 cell lines data set. We propose to use appropriate graphical representations to discuss the results. The different gene lists are assessed, first with some statistical criteria, and then with a detailed biological interpretation. Finally, we discuss the pros and cons of each approach before concluding.

Canonical correlation-based methods

We focus on two-block data matrices denoted X (n × p) and Y (n × q), where the p variables x^j and q variables y^k are of two types and measured on the same samples or individuals n, for j = 1 ... p and k = 1 ... q. Prior biological knowledge on these data allows us to settle into a canonical framework, i.e. there exists a reciprocal relationship between the X variables and the Y variables. In the case of high throughput biological data, the large number of variables may affect the exploratory method, due to numerical issues (as is the case for example with CCA) or lack of interpretability (PLS). We next recall three types of multivariate methods (CCA, PLS, CIA). For CCA and PLS, we describe the associated sparse approaches that were proposed, either to select variables from each set or to deal with the ill-posed problem commonly encountered in high dimensional data sets.

CCA

Canonical Correlation Analysis [2] studies the relationship between two sets of data. The CCA n-dimensional score vectors (Xa[h], Yb[h]) come in pairs and solve the objective function

(a[h], b[h]) = argmax cor(Xa[h], Yb[h]),

where the p- and q-dimensional vectors a[h] and b[h] are called canonical factors, or loading vectors, and h is the chosen CCA dimension. As cor(Xa[h], Yb[h]) = cov(Xa[h], Yb[h]) / sqrt(var(Xa[h]) var(Yb[h])), the aim of CCA is to simultaneously maximize cov(Xa[h], Yb[h]) and minimize the variances of Xa[h] and Yb[h]. It is known that the CCA loadings are not directly interpretable [13]. It is however very instructive to interpret these components by calculating the correlation between the original data set X and {a[1], ..., a[H]}, and similarly between Y and {b[1], ..., b[H]}, to project variables onto correlation circles. Easier interpretable graphics are then obtained, as shown in the R package cca [14]. In the p + q >> n framework, CCA suffers from high dimensionality as it requires the computation of the inverse of the two covariance matrices X'X and Y'Y, which are singular. This implies numerical difficulties, since the canonical correlation coefficients are not uniquely defined. One solution proposed by [15] was to introduce l2 penalties in a ridge CCA (rCCA) on the covariance matrices, so as to make them invertible. rCCA was recently applied to genomic data [16], but was not adapted in our study as it does not perform feature selection. We focused instead on another variant, called CCA with Elastic Net penalization (see below).

PLS

Partial Least Squares regression [1] is based on the simultaneous decomposition of X and Y into latent variables and associated loading vectors. The latent variable methods (e.g. PLS, Principal Component Regression) assume that the studied system is driven by a small number of n-dimensional vectors called latent variables. These may correspond to some underlying biological phenomena related to the study [17]. Like CCA, the PLS latent variables are linear combinations of the variables, but the objective function differs as it is based on the maximization of the covariance:

(a[h], b[h]) = argmax cov(X[h-1]a[h], Y[h-1]b[h]),   with ||a[h]|| = ||b[h]|| = 1,

where X[h-1] (and similarly Y[h-1]) is the residual (deflated) X matrix for each PLS dimension h. We denote ξ[h] and ω[h] the n-dimensional vectors called "latent variables", which are associated to each loading vector a[h] and b[h]. In contrast to CCA, the loading vectors (a[h], b[h]) are interpretable and can give information about how the x^j and y^k variables combine to explain the relationships between X and Y. Furthermore, the PLS latent variables (ξ[h], ω[h]) indicate the similarities or dissimilarities between the individuals, related to the loading vectors. Many PLS algorithms exist, not only for different shapes of data (SIMPLS [18], PLS1 and PLS2 [1], PLS-SVD [19]) but also for different aims (predictive, like PLS2, or modelling, like PLS-mode A; see [10,20,21]). In this study we especially focus on a modelling aim ("canonical mode") between the two data sets, by deflating X and Y in a symmetric way (see Additional file 1).

CCA-EN

[10] proposed a sparse penalized variant of CCA using Elastic Net [8,22] for a canonical framework. To do so, the authors used the PLS-mode A formulation [20,21] to introduce penalties. Note that Elastic Net is well adapted to this particular context. It combines the advantages of the ridge regression, which penalizes the covariance matrices X'X and Y'Y so that they become non-singular, and of the lasso [7], which allows variable selection, in a one step procedure. However, when p + q is very large, the resolution of the optimization problem requires intensive computations, and [8,10] proposed instead to perform a univariate thresholding, which leaves only the lasso estimates to compute (see Additional file 1).

sparse PLS

[9] proposed a sparse PLS approach (sPLS) based on a PLS-SVD variant, so as to penalize both loading vectors a[h] and b[h] simultaneously. For any matrix M (p × q) of rank r, the SVD of M is given by

M = A Δ B',

where the columns of A (p × r) and B (q × r) are orthonormal and contain the eigenvectors of MM' and M'M, and Δ (r × r) is a diagonal matrix whose entries are the square roots of the eigenvalues of MM' or M'M (the singular values of M). Now if M = X'Y, then the column vectors of A (resp. B) correspond to the loading vectors of the PLS a[h] (resp. b[h]). Sparsity can then be introduced by iteratively penalizing a[h] and b[h] with a soft-thresholding penalization, as [23] proposed for a sparse PCA using SVD computation. Both regression and canonical deflation modes were proposed for sPLS [9]. In this paper, we will focus on the canonical mode only (see Additional file 1 for more details of the algorithm). The regression mode has already been discussed in [9] with a thorough biological interpretation of the results.

CIA

Co-Inertia analysis (CIA) was first introduced by [11] in the context of ecological data, before being applied to high throughput biological data by [12]. CIA is suitable for a canonical framework, as it is adapted for a symmetric analysis. It involves analyzing each data set separately, either with principal component analyses or with correspondence analyses, such that the covariance between the two new sets of projected score vectors (which maximize either the projected variability or inertia) is maximal. This results in two sets of axes, where the first pair of axes are maximally co-variant and are orthogonal to the next pair [24]. CIA does not propose a built-in variable selection, but we can instead perform a two-step procedure by ordering the weight vector (loadings) for each CIA dimension and selecting the top variables.

Differences between the approaches

These three canonical-based approaches, CCA-EN, sPLS and CIA, profoundly differ in their construction, and hence their aims. On the one hand, CCA-EN looks for canonical variate pairs (Xa[h], Yb[h]) such that a penalized version of the canonical correlation is maximized. This explains why a non monotonic decreasing trend in the canonical correlation can sometimes be obtained [10]. On the other hand, sPLS (canonical mode) and CIA aim at maximizing the covariance between the score vectors, so that there is a strong symmetric relationship between both sets. However, here CIA is based on the construction of two Correspondence Analyses, whereas sPLS is based on a PLS analysis.

Parameters tuning

In CCA-EN, the authors proposed to tune the penalty parameters for each dimension, such that the canonical correlation cor(Xa[h], Yb[h]) is maximized. In practice, they showed that the correlation did not change much when more variables were added in the selection. Therefore, an appropriate way of tuning the parameters would be to choose instead the degree of sparsity (i.e. the number of variables to select), as previously proposed for sparse PCA by [22,23] (see the elasticnet R package for example), and hence to rely on the biologists' needs. Thus, depending on the aim of the study (focus on few genes or on groups of genes such as whole pathways) and on the ability to perform follow-up studies, the size of the selection can be adapted. When focusing on groups of genes (e.g. pathways, transcription factor targets, variables involved in the same biological process), we believe that the selection should be large enough to avoid missing specific functions or annotations. The same strategy will be used for sPLS (see also [9], where the issue of tuning sPLS parameters is addressed). No parameters other than the number of selected variables are needed in CIA either.

Graphical representations are crucial to help interpreting the results. We therefore propose to homogenize all outputs to enable their comparison. Samples are represented with the score or latent variable vectors, in a superimposed manner, as proposed in the R package ade4 [25]: first to show how samples are clustered based on their biological characteristics, and second to measure whether both data sets strongly agree according to the applied approach. In these graphical representations, each sample is indicated using an arrow. The start of the arrow indicates the location of the sample in the X data set in one plot, and the tip of the arrow the location of the sample in the Y data set in the other plot. Thus, short (long) arrows indicate that the two data sets strongly agree (disagree). Variables are represented on correlation circles, as previously proposed by [14]. Correlations between the original data sets and the score or latent variable vectors are computed, so that highly correlated variables cluster together in the resulting graphics. Only the selected variables in each dimension are represented. This type of graphic not only allows for the identification of interactions between the two types of variables, but also for identifying the relationship between variable clusters and associated sample clusters. Note that for large variable selections, the use of interactive plotting, color codes or representations limited to user-selected variables may be required to simplify the outputs.

Cross-platform study

Data sets and relevance for a canonical correlation analysis

We chose to compare the three canonical correlation-based methods (CCA-EN, CIA and sPLS) for their ability to highlight the relationships between two gene expression data sets both obtained on a panel of 60 cell lines (NCI60) from the National Cancer Institute (NCI).
This panel consists of human tumor cell lines derived from patients with leukaemia (LE), melanomas (ME) and cancers of ovarian (OV), breast (BR), prostate (PR), lung (LU), renal (RE), colon (CO) and central nervous system (CNS) origin. The NCI60 is used by the Developmental Therapeutics Program (DTP) of the NCI to screen thousands of chemical compounds for growth inhibition activity and it has been extensively characterized at the DNA, mRNA, protein and functional levels. The data sets considered here have been generated using Affymetrix [26,27] or spotted cDNA [28] platforms. These data sets are highly relevant to an analysis in a canonical framework since 1) there is some degree of overlap between the genes measured by the two platforms, but also a large degree of complementarity through the screening of different gene sets representing common pathways or biological functions [12] and 2) they play fully symmetric roles, as opposed to a regression framework where one data set is explained by the other. We assume that the data sets are correctly normalized, as described below. The Ross Data set [28] used spotted cDNA microarrays containing 9,703 human cDNAs to profile each of the 60 cell line in the NCI60 panel [28]. Here, we used a subset of 1,375 genes that has been selected using both non-specific and specific filters described in [29]. In particular, genes with more than 15% of missing values were removed and the remaining missing values were imputed by k-nearest neighbours [12]. The pre-processed data set containing log ratio values is available in [12]. The Staunton Data set Hu6800 Affymetrix microarrays containing 7,129 probe sets were used to screen each of the 60 cell lines in another study [26,27]. Pre-processing steps are described in [27] and [12]. They include 1) replacing average difference values less than 100 by an expression value of 100, 2) eliminating genes whose expression was invariant across all 60 cell lines and 3) selecting the subset of genes displaying a minimum change in expression across all 60 cell lines of at least 500 average difference units. The final analyzed data set contained the average difference values for 1,517 probe sets, and is available in [12]. Application of the three sparse canonical correlation-based methods We applied CCA-EN, CIA and sPLS to the Ross (X) and Staunton (Y) data sets. For each dimension h, h = 1 ... 3, we selected 100 genes from each data set. The number of dimensions was arbitrarily chosen, as when H ≥ 4, the analysis of the results becomes difficult given the high number of graphical outputs. Indeed, for higher dimensions, the cell lines did not cluster by their tissue of origin, which made their interpretation more difficult. The size of the selection (100) was judged small enough to allow for the identification of individual relevant genes and large enough to reveal gene groups belonging to the same functional category or pathway. Results and Discussion We apply the three canonical correlation-based approaches to the NCI60 data set and assess the results in two different ways. First we examine some statistical criteria, then we provide a biological interpretation of the results from each method, using graphical representations along with database mining. How to assess the results? Canonical correlation-based methods are statistically difficult to assess. 
Firstly, because they do not fit into a regression/prediction framework, meaning that the prediction error cannot be estimated using cross-validation to evaluate the quality of the model. Secondly, because in many two-block biological studies, the number of samples n is very small compared to the number of variables p + q. This makes any statistical criteria difficult to compute or estimate. This is why graphical outputs are important to help analyze the results (see for example [12,20]). When working with biological data, a natural way of assessing the results is therefore to rely strongly on biological interpretation. Indeed, our aim is to show that each approach is applicable and to assess whether it answers the biological question. We therefore propose to base most of our comparative study on the biological interpretation of the results, using appropriate graphical representations of the samples and the selected variables.

Link between two-block data sets

Variance explained by each component

[20] proposed to estimate the variance explained in each data set X and Y in relation to the "opposite" component scores or latent variables (ω[1], ..., ω[H]) and (ξ[1], ..., ξ[H]), where ξ[h] = Xa[h] and ω[h] = Yb[h] in all approaches. The redundancy criterion Rd, or part of explained variance, is computed as follows:

Rd(X, ω[h]) = (1/p) Σ_{j=1..p} cor²(x^j, ω[h]),    Rd(Y, ξ[h]) = (1/q) Σ_{k=1..q} cor²(y^k, ξ[h]).

Similarly, one can compute the variance explained in each data set in relation to its associated component:

Rd(X, ξ[h]) = (1/p) Σ_{j=1..p} cor²(x^j, ξ[h]),    Rd(Y, ω[h]) = (1/q) Σ_{k=1..q} cor²(y^k, ω[h]).

Figure 1 displays the Rd criterion for h = 1 ... 3 for each set of components (ξ[1], ξ[2], ξ[3]) and (ω[1], ω[2], ω[3]) and for each approach. While there seems to be a great difference in the first dimension between CCA and the other methods, the components in dimensions 2 and 3 explain the same amount of variance in both X and Y for CCA-EN and sPLS. This suggests a strong similarity between these two approaches at this stage. However, CIA differs from these two methods: the components computed from the "opposite" set explain more variance than for CCA/sPLS, and less in their respective set. Overall, we can observe that more information seems to be present in the X (Ross) rather than in the Y (Staunton) data set. Indeed, similarly to [12], we noticed that a hierarchical clustering of the samples from the Ross data set allows a better clustering of the cell lines based on their tissue of origin than from the Staunton data set (Figure 2).

Figure 1 (Rd). Cumulative explained variance (Rd criterion) of each data set in relation to its component score (CCA-EN, CIA) or latent variable (sPLS).

Figure 2 (Hierarchical clustering of the two data sets using all expression profiles). Hierarchical clustering of the cell lines with the Ward method and correlation distance using the expression profiles from the Ross (left) and Staunton (right) data sets. The tissues ...

Correlations between each component

The canonical correlations between the pairs of score vectors or latent variables were very high (>0.93) for any approach and in any dimension (see Table 1). This confirms our hypothesis regarding the canonical aim of each method. The non monotonic decreasing trend of the canonical correlations in CCA-EN is not what one would expect from a CCA variant. This fact was also pointed out by [10], as the optimization criterion in CCA-EN differs from ordinary CCA. However, the computations of the Rd criterion (Figure 1) seem to indicate that the cumulative variance explained by the latent variables increases with h.
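To make these quantities concrete, the following schematic Python/numpy sketch illustrates the soft-thresholded SVD step that sPLS applies to M = X'Y, as described in the methods above, together with the Rd criterion just defined. It is an illustration only, not the authors' implementation: the penalty values, the toy data sizes and the absence of deflation between dimensions are simplifying assumptions.

import numpy as np

rng = np.random.default_rng(0)
n, p, q = 60, 200, 150            # samples, X-variables, Y-variables (toy sizes)
X = rng.standard_normal((n, p))
Y = rng.standard_normal((n, q))

def soft(v, lam):
    """Soft-thresholding: shrink towards zero and zero out small entries."""
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def spls_step(X, Y, lam_a=0.1, lam_b=0.1, n_iter=100):
    """One sPLS dimension: penalized left/right singular vectors of M = X'Y."""
    M = X.T @ Y
    b = np.linalg.svd(M, full_matrices=False)[2][0]   # leading right singular vector
    for _ in range(n_iter):
        a = soft(M @ b, lam_a)
        a /= np.linalg.norm(a) + 1e-12
        b = soft(M.T @ a, lam_b)
        b /= np.linalg.norm(b) + 1e-12
    return a, b                   # sparse loading vectors a[h], b[h]

def rd(data, component):
    """Rd criterion: mean squared correlation of each column with a component."""
    cors = np.array([np.corrcoef(col, component)[0, 1] for col in data.T])
    return np.mean(cors ** 2)

a1, b1 = spls_step(X, Y)
xi1, omega1 = X @ a1, Y @ b1      # latent variables xi[1] and omega[1]
print("Rd(X, xi1) =", rd(X, xi1), " Rd(X, omega1) =", rd(X, omega1))

With real data, X and Y would be the (centered, possibly scaled) Ross and Staunton matrices, and the degree of sparsity would be set through the number of variables kept per dimension, as discussed above.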
sPLS and CIA also highlight very strongly correlated components, as their aim is to maximize the covariance. This suggests that the associated loading vectors may also bring related information regarding the variables (genes) from both data sets. The maximal canonical correlation (...)

Table 1 (Correlations). Correlations between the score vectors (CCA-EN, CIA) or between latent variables (sPLS) for each dimension.

Interpretation of the observed cell line clusters

Graphical representation of the samples

Figures 3 and 4 display the graphical representations of the samples in dimensions 1 and 2 (a), or 1 and 3 (b), for CCA-EN (Figure 3) and sPLS (Figure 4). CIA showed similar patterns to sPLS and to those presented in [12]. All graphics show that both data sets are strongly related (short arrows), but the components differ, depending on the applied method. In dimension 1, the pair (ξ[1], ω[1]) tends to separate the melanoma cell lines from the other cell lines in CCA-EN (Figure 3(a)), whereas sPLS and CIA tend to separate the LE and CO cell lines on one side from the RE and CNS cell lines on the other side (Figure 4(a)). As previously proposed by [12], we interpreted this latter clustering as the separation of cell lines with epithelial characteristics (mainly LE and CO) from those with mesenchymal characteristics (in particular RE and CNS). Epithelial cells generally form layers by making junctions between them and interacting with the extracellular matrix (ECM), whereas mesenchymal cells are able to migrate through the ECM and are found in the connective tissues. In dimension 2, we observe the opposite tendency: the CCA-EN score vectors (ξ[2], ω[2]) separate the cell lines with epithelial characteristics from the cell lines with mesenchymal characteristics (Figure 3(a)), while the sPLS or CIA pair (ξ[2], ω[2]) separates the melanoma samples from the other samples (Figure 4(a), not shown for CIA). Finally, in dimension 3 all three methods separate the LE from the CO cell lines.

Figure 3 (Graphical representations of the samples using CCA-EN). Graphical representations of the cell lines by plotting the component scores from CCA-EN for dimensions 1 and 2 (a) or 1 and 3 (b). The component scores computed on each data set are displayed in ...

Figure 4 (Graphical representations of the samples using sPLS). Graphical representations of the cell lines by plotting the latent variable vectors from sPLS for dimensions 1 and 2 (a) or 1 and 3 (b). The latent variable vectors computed on each data set are displayed ...

Hierarchical clustering of the samples

To further understand this difference between the methods, we separately performed hierarchical clustering of the 60 cell lines for each data set (Figure 2). The main clusters that we identified corresponded to the three groups of cell lines which were previously highlighted by the three methods (Figures 3 and 4): 1) cell lines with epithelial characteristics (mainly LE and CO), 2) cell lines with mesenchymal characteristics (in particular RE and CNS) and 3) ME cell lines, which systematically clustered with MDA_N and MDA_MB435. These latter cell lines are indeed melanoma metastases derived from a patient diagnosed with breast cancer. As previously reported [12,28,29], ME cell lines (including MDA_N and MDA_MB435) form a compact and homogeneous cluster which is strictly identical between the two data sets.
Only the LOXIMVI cell line, which lacks melanin and several typical markers of melanoma cells [30], did not cluster with the other ME cell lines (Figure 2). CCA-EN first focused on separating ME vs. the other cell lines, a cluster that seems consistent in both data sets. In contrast, sPLS and CIA first focused on the separation between epithelial vs. mesenchymal cell line characteristics, even though most OV and LU cell lines clustered either with the mesenchymal-like cell lines (Ross data set) or with the epithelial-like cell lines (Staunton data set) in Figure 2. This illustrates an important difference between CCA-EN and sPLS/CIA: by maximizing the correlation, CCA-EN first focuses on the most conserved clusters between the two data sets. To evaluate this hypothesis, we artificially reduced the consistency of the ME clustering by permuting some of the labels of the melanoma cell lines with other randomly selected cell lines in one of the data sets. The resulting graphics in CCA-EN happened to be similar to those obtained for sPLS and CIA in the absence of permutation (Figure 3(a)), separating epithelial-like vs. mesenchymal-like cell lines on the first dimension. By contrast, the sPLS and CIA graphics remained the same after the permutations. Thus it seems that the maximal correlation can only be obtained through a high consistency of the clusterings between the two data sets. However, CCA-EN may be more strongly affected by the few samples that would not cluster similarly in the two data sets, that is, by a low consistency between the two data sets.

Interpretation of the observed gene clusters

Graphical representation of the genes
We computed the correlations between the original data sets and the score vectors or latent variables (ξ[1], ξ[2], ξ[3]) and (ω[1], ω[2], ω[3]) to project the selected genes onto correlation circles. Figures 5 and 6 provide an illustrative example of these types of figures in the case of sPLS. These graphical outputs, proposed by [31], improve the interpretability of the results in the following manner. First, they allow for the identification of correlated gene subsets from each data set, i.e. genes with similar expression profiles. Second, they help reveal the correlations between gene subsets from both data sets (by superimposing both graphics). And third, they help relate these correlated subsets to the associated tumor cell lines by combining the information contained in Figures 5, 6 and Figure 4(a). For example, the genes that were selected on the second sPLS dimension for both data sets should help discriminate melanoma tumors from the other cell lines.

Figure 5: Graphical representations of the variables selected by sPLS, Ross data set. Example of graphical representation of the genes selected on the first two sPLS dimensions. The coordinates of each gene are obtained by computing the correlation between the ...

Figure 6: Graphical representations of the variables selected by sPLS, Staunton data set. Example of graphical representation of the genes selected on the first two sPLS dimensions. The coordinates of each gene are obtained by computing the correlation between ...

If the loading vectors are orthogonal (i.e. if cor(a[s], a[r]) = 0 and cor(b[s], b[r]) = 0 for r < s), there is only a small degree of overlap between the genes selected in each dimension (Table 2). In this case, this means that each selection focuses on a specific aspect of the data set, for example a specific tumor type.
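As a brief aside, the correlation-circle coordinates used in Figures 5 and 6 are straightforward to compute. The sketch below is a generic illustration rather than the authors' code; the names X (an n x p expression matrix) and xi1, xi2 (the first two latent variable vectors of length n) are assumptions made only for the example.

import numpy as np

def correlation_circle_coords(X, xi1, xi2):
    """Correlation of each column (gene) of X with two latent variables.

    Returns a (p, 2) array whose rows give the coordinates of each gene
    on the correlation circle.
    """
    def cor_with(component):
        Xc = X - X.mean(axis=0)           # centre the expression matrix
        cc = component - component.mean()
        num = Xc.T @ cc
        den = np.sqrt((Xc ** 2).sum(axis=0) * (cc ** 2).sum())
        return num / den

    return np.column_stack([cor_with(xi1), cor_with(xi2)])

Genes plotted close to the unit circle and close to one another are strongly correlated with the components and with each other, which is what makes superimposing the two plots informative.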
This valuable orthogonality property of the loading vectors, noted above, is kept in the sparse methods (sPLS, CCA-EN), which is not often the case, for example, with the sparse PCA approaches [8,23,32]. The gene lists selected with CCA-EN and sPLS are hence almost not redundant across the dimensions. In fact, only 0 to 2 genes overlap between dimensions 1–2 and 1–3 in the Ross data set, and 1 to 13 genes in the Staunton data set, for both approaches (Table 2). On the contrary, there is no orthogonality between the CIA loading vectors, leading to a high number of overlapping genes (up to 31 between dimensions 1 and 2).

Table 2: Comparisons between gene lists.

Analysis of the gene lists
Based on the interpretation of the cell line clusters, we analysed three sets of gene lists (3 methods × 2 data sets = 6 lists of 100 genes per set; see Additional files 2, 3, 4 for the heat map of each gene list):
- Set 1: the lists associated with the separation of cell lines with epithelial (mainly LE and CO) vs. mesenchymal (mainly RE and CNS) characteristics (CCA-EN dimension 2, CIA and sPLS dimension 1),
- Set 2: the lists associated with the separation of the melanoma cell lines (ME, BR_MDAN and BR_MDAMB435) from the other cell lines (CCA-EN dimension 1, CIA and sPLS dimension 2),
- Set 3: the lists associated with the separation of the LE cell lines from the CO cell lines (dimension 3 for each method, see Figures 3(b) and 4(b)).
For each set of gene lists we evaluated the number of genes that were commonly selected by the different methods. For example, Figure 7 displays the Venn diagrams for the lists of genes characterizing the melanoma cell lines (Set 2). These Venn diagrams revealed a very strong similarity between the CCA-EN and sPLS gene lists, whereas CIA selected different genes characterizing the cell lines. Similar results were obtained for Set 1 and Set 3, and the same trend was observed when more than 100 variables were selected on each dimension (data not shown).

Figure 7: Venn Diagrams. Venn diagrams for 100 selected genes associated with melanoma vs. the other cell lines for each data set (top). These lists were then decomposed into up- and down-regulated genes.

For each dimension and each method, we evaluated the overlap between the gene lists obtained from the two initial data sets. We would expect from such canonical correlation-based methods that they identify high correlations between features selected from the two platforms when these features actually measure the expression of the same gene. To evaluate this aspect, the identifiers of the features from each platform were mapped to unique gene identifiers using the Ingenuity Pathways Analysis application (IPA, http://www.ingenuity.com). For each dimension, CCA-EN and sPLS selected approximately 20 features from the Ross and Staunton data sets that corresponded to identical genes. In contrast, CIA selected 15 to 17 identical genes between the two data sets. The heat maps for each of the 18 gene lists (Additional files 2, 3, 4) illustrated well the general finding that CCA-EN and sPLS yield highly similar lists of genes exhibiting expression patterns which characterize well the cell lines separated along each dimension. In contrast, CIA tends to select genes with a higher variance across all cell lines compared to CCA-EN and sPLS.

Analysis of the gene lists with IPA
Finally, we evaluated the biological relevance of the genes selected by each method. The 3 sets of gene lists were loaded into IPA along with their corresponding log ratios (i.e.
Set 1: mean expression in LE+CO/mean expression in RE+CNS, Set 2: mean expression in ME+BR_MDAN+BR_MDAMB435/mean expression in the other cell lines, Set 3: mean expression in LE/mean expression in CO). We focused on: 1) biological functions that were significantly over-represented (right-tailed Fisher's exact test) in the gene lists compared to the initial data sets, 2) canonical pathways in which the selected genes were significantly over-represented compared to the genes in the initial data sets, and 3) the first networks generated by IPA from the gene selections. These networks are built by combining the genes into small networks (35 molecules maximum) that maximize their specific connectivity [33]. This results in highly interconnected networks.

Over-represented biological functions
For the three methods, the over-represented biological functions in the different gene lists were generally relevant to the cell lines separated along each corresponding dimension. The epithelial to mesenchymal transition (EMT, Set 1), a key process for tissue remodelling during embryonic development, could contribute to establishing the metastatic potential of carcinoma cells [34]. Studying the events underlying the EMT is thus of primary importance to better understand tumor malignancy. During the EMT, cells acquire morphological and biochemical characteristics that enable them to limit their contacts with neighbouring cells and to invade the extracellular matrix. Accordingly, for Set 1, the three methods identified biological functions related to cellular movement, connective tissue development and cell-to-cell signalling and interaction (see Additional files 5 and 6), which directly relate to the EMT. Melanomas (Set 2) originate from skin melanocytes, which are pigment cells producing melanin, the synthesis of which involves the amino acids tyrosine and cysteine. Accordingly, for Set 2, the different methods identified biological functions related to skin biology and to amino acid metabolism (not shown). Finally, LE cell lines represent leukaemias, which result from the abnormal proliferation of blood cells, while CO cell lines represent colon carcinomas, which originate from epithelial cells of the colon (Set 3). Not surprisingly, the different methods identified lists of genes linked to the functions and diseases of the haematological and immunological systems which were differentially expressed between LE and CO cell lines (not shown). The three methods extracted complementary findings from the two data sets. In particular, they frequently identified similar biological functions supported by different genes from the two platforms. One major finding from this analysis was that CIA identified many more significant biological functions compared to CCA-EN/sPLS. For example, for the Ross/Set 1 data, CCA-EN and sPLS identified 7 functions with p < 0.001 while CIA identified 21 different functions using the same threshold. However, the functions identified by CIA were highly redundant between the three sets, as a result of substantial overlaps in the gene lists selected by this method (Table 2). Additionally, CIA recurrently identified categories representing relatively general functions for tumor cells such as cell death, cancer or cell morphology. Overall, the findings obtained by CCA-EN and sPLS were much more specific and allowed a deeper understanding of the biological processes characterizing the different cell lines.
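The over-representation test mentioned above is a right-tailed Fisher's exact test on a 2 x 2 contingency table. The snippet below is a generic illustration with made-up counts (they are not values from the paper), showing how such a test can be run:

from scipy.stats import fisher_exact

# Hypothetical example: 12 of 100 selected genes are annotated to a given
# biological function, versus 90 of the remaining 1,900 genes in the
# initial data set.
table = [[12, 100 - 12],
         [90, 1900 - 90]]

odds_ratio, p_value = fisher_exact(table, alternative="greater")
print(f"odds ratio = {odds_ratio:.2f}, right-tailed p = {p_value:.3g}")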
Canonical pathways
In accordance with this observation, CCA-EN and sPLS generally found more significant canonical pathways than CIA. This likely results from the redundant and less specific genes contained in the CIA gene selections, which limits the enrichment of a sufficient number of genes in any given pathway. In particular, the integrin and actin cytoskeleton pathways were only identified by CCA-EN and sPLS for Set 1. These two pathways are central to cellular movement and to the interactions with the extracellular matrix. Consistently, several genes from these pathways, including integrins α and β, caveolin, α-actinin and vinculin, are over-expressed in RE and CNS cell lines (mesenchymal) compared to LE and CO cell lines (epithelial). For Set 2, all three methods identified the over-expression of genes from the tyrosine metabolism pathway in melanoma cell lines, in particular tyrosinase, tyrosinase-related proteins 1 and 2 and dopachrome tautomerase, which are involved in melanin synthesis. However, only CCA-EN and sPLS identified the glycosphingolipid (ganglioside and globoside) biosynthesis pathways as characterizing the melanoma cell lines. Melanoma tumors are known to be rich in these glycosphingolipids [35]. Indeed, their presence at the cell membrane makes them interesting targets for immunotherapy and vaccination strategies [30]. Among the pathways identified for Set 3, only sPLS identified the tight junction signalling pathway (in particular Claudin 4 and Zona occludens 1) as characterizing CO cell lines compared to LE cell lines. This is consistent with the typical epithelial characteristics of the CO cell lines.

We explored the networks generated by IPA from each gene list. For Set 1, the first networks for each method were highly connected and were mainly related to cellular movement. Interestingly, all networks pointed to the extracellular signal-regulated kinase (ERK) as a central player in the expression of the selected genes, which is consistent with the role of the ERK pathway in cell migration [36]. When we merged the first networks obtained from the three methods, highly similar networks were obtained for the two platforms (Additional files 7 and 8), but only the Staunton data set highlighted the transforming growth factor-β (TGF-β) pathway, which is thought to be a primary inducer of the EMT [34]. Despite this difference, the most connected nodes (including integrins α and β, α-actinin, connective tissue growth factor, fibronectin 1, SERPINE1, plasminogen activator urokinase, Ras and ERK) were found in both networks. These likely represent central players in establishing the different phenotypes of LE and CO cell lines on the one hand, and of RE and CNS cell lines on the other. The networks characterizing melanoma cell lines (Set 2, not shown) highlighted several markers used for the diagnosis of melanomas, including the over-expressed MITF, vimentin, S-100A1, S-100B and Melan-A and the under-expressed keratins 7, 8, 18 and 19. Finally, the networks generated for Set 3 highlighted many genes involved in cell-cell contacts, cell adhesion and cellular movement which were generally expressed at higher levels in CO compared to LE cell lines.

The analysis of the NCI60 data sets with CCA-EN, CIA and sPLS highlighted the main differences between these methods. CIA does not propose a built-in variable selection procedure and requires a two-step analysis to perform variable selection. The main individual effects were identified.
However, the loading or weight vectors obtained were not orthogonal, contrary to CCA-EN and sPLS. This resulted in some redundancy in the gene selections, which may be a limitation for the biological interpretation, as it led to less specific results. CCA-EN first captured the main robust effect on the individuals that was present in the two data sets. As a consequence, it may hide stronger individual effects that are present in only one data set. We observed a strong similarity between CCA-EN and sPLS in the gene selections, except that the first two axes were permuted. In fact, we believe that CCA-EN can be considered as a sparse PLS variant with a canonical mode. Indeed, the elastic net is approximated with a univariate threshold, which is similar to a lasso soft-thresholding penalization, and the whole algorithm uses PLS and not CCA computations. This explains why the canonical correlations do not monotonically decrease. The only difference that distinguishes the sPLS canonical mode from CCA-EN is the initialization of the algorithm for each dimension: CCA-EN maximizes the correlation between the latent variables, whereas sPLS maximizes the covariance. We found that sPLS made a good compromise between all these approaches: it includes variable selection and its loading vectors are orthogonal. Although sPLS and CCA-EN do not order the axes in the same manner, both approaches were highly similar, except for slight but significant differences when studying LE vs. CO (Set 3). In this particular case, the resulting gene lists clearly provided complementary information. Based on the present study, we would primarily recommend the use of CCA-EN or sPLS when gene selection is an issue. Like CCA-EN, sPLS includes a built-in variable selection procedure, but it captures subtle individual effects. Therefore, these two approaches may differ when computing the first axes. All approaches are easy to use and fast to compute. These approaches would benefit from the development of an R package to harmonize their inputs and outputs so as to facilitate their use and their comparison.

Authors' contributions
KALC developed the algorithm and performed the statistical analyses. PGPM performed the biological interpretation. KALC and PGPM wrote the manuscript. CRG, PGPM and PB participated in the design of the study. All authors read and approved the final manuscript.

Supplementary Material
Additional File 1: Algorithms. The algorithms PLS, sPLS and CCA-EN are detailed.
Additional File 2: Hierarchical clusterings, epithelial vs. mesenchymal-like (Set 1). Heat map displays of hierarchical clustering results with the Ward method and correlation distance, with genes in rows and cell lines in columns. Samples are clustered according to the dendrograms obtained in Figure 2. The red (green) colour represents over-expressed (under-expressed) genes. Genes from Set 1 are displayed for each method.
Additional File 3: Hierarchical clusterings, melanoma (Set 2). Heat map displays of hierarchical clustering results with the Ward method and correlation distance, with genes in rows and cell lines in columns. Samples are clustered according to the dendrograms obtained in Figure 2. The red (green) colour represents over-expressed (under-expressed) genes. Genes from Set 2 are displayed for each method.
Additional File 4: Hierarchical clusterings, LE vs. CO cell lines (Set 3). Heat map displays of hierarchical clustering results with the Ward method and correlation distance, with genes in rows and cell lines in columns.
Samples are clustered according to the dendrograms obtained in Figure 2. The red (green) colour represents over-expressed (under-expressed) genes. Genes from Set 3 are displayed for each method.
Additional File 5: Biological functions from Set 1 for the Ross data set. Biological functions significantly over-represented in the gene lists selected from the Ross data set by the three methods CCA-EN, CIA and sPLS (Set 1 of gene lists). Only the biological functions with a p-value lower than 0.001 for all three methods are presented. "x" indicates how the genes were selected. The analysis was performed using the Ingenuity Pathways Analysis application (http://www.ingenuity.com), which evaluates the over-representation of functional categories through a right-tailed Fisher's exact test.
Additional File 6: Biological functions from Set 1 for the Staunton data set. Biological functions significantly over-represented in the gene lists selected from the Staunton data set by the three methods CCA-EN, CIA and sPLS (Set 1 of gene lists). Only the biological functions with a p-value lower than 0.001 for all three methods are presented. "x" indicates how the genes were selected. The analysis was performed using the Ingenuity Pathways Analysis application (http://www.ingenuity.com), which evaluates the over-representation of functional categories through a right-tailed Fisher's exact test.
Additional File 7: Network from the Ross gene list, Set 1. Molecular network obtained from the Ross gene lists from Set 1. For each canonical method (CCA-EN, CIA or sPLS), molecular networks were built from the Ross gene lists (focus genes) of Set 1 using Ingenuity Pathways Analysis (IPA, http://www.ingenuity.com). The first networks obtained from each method were merged into the presented network. Green and red colors indicate under- and over-expression, respectively, in the LE/CO cell lines compared to the RE/CNS cell lines for the genes that were selected by sPLS. Genes that were selected by CCA-EN or CIA are in grey and were all under-expressed in the LE/CO cell lines compared to the RE/CNS cell lines. Genes in white have been added by IPA based on their high connectivity with focus genes.
Additional File 8: Network from the Staunton gene list, Set 1. Molecular network obtained from the Staunton gene lists from Set 1. For each canonical method (CCA-EN, CIA or sPLS), molecular networks were built from the Staunton gene lists (focus genes) of Set 1 using Ingenuity Pathways Analysis (IPA, http://www.ingenuity.com). The first networks obtained from each method were merged into the presented network. Green and red colors indicate under- and over-expression, respectively, in the LE/CO cell lines compared to the RE/CNS cell lines for the genes that were selected by sPLS. Genes that were selected by CCA-EN or CIA are in grey and were all under-expressed in the LE/CO cell lines compared to the RE/CNS cell lines. Genes in white have been added by IPA based on their high connectivity with focus genes.

Acknowledgements
We would like to thank Dr. Sandra Waaijenborg, who kindly provided the CCA-EN program, and the anonymous reviewers for their helpful comments that improved the manuscript.

References
• Wold H. In: Krishnaiah PR, editor. Multivariate Analysis. New York: Academic Press; 1966.
• Hotelling H. Relations between two sets of variates. Biometrika. 1936;28:321–377.
• Krämer N. An overview of the shrinkage properties of partial least squares regression. Computational Statistics. 2007;22:249–273. doi: 10.1007/s00180-007-0038-z. [Cross Ref]
• Chun H, Keles S. Tech rep. Department of Statistics, University of Wisconsin, Madison, USA; 2007. Sparse Partial Least Squares Regression with an Application to Genome Scale Transcription Factor
• Bylesjö M, Eriksson D, Kusano M, Moritz T, Trygg J. Data integration in plant biology: the O2PLS method for combined modeling of transcript and metabolite data. The Plant Journal. 2007;52:1181–1191. [PubMed]
• Vijayendran C, Barsch A, Friehs K, Niehaus K, Becker A, Flaschel E. Perceiving molecular evolution processes in Escherichia coli by comprehensive metabolite and gene expression profiling. Genome Biology. 2008;9:R72. doi: 10.1186/gb-2008-9-4-r72. [PMC free article] [PubMed] [Cross Ref]
• Tibshirani R. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B. 1996;58:267–288.
• Zou H, Hastie T. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B. 2005;67:301–320. doi: 10.1111/j.1467-9868.2005.00503.x. [Cross Ref]
• Lê Cao KA, Rossouw D, Robert-Granié C, Besse P. A Sparse PLS for Variable Selection when Integrating Omics data. Stat Appl Genet Mol Biol. 2008;7:Article 35. [PubMed]
• Waaijenborg S, de Witt Hamer PC, Zwinderman A. Quantifying the Association between Gene Expressions and DNA-Markers by Penalized Canonical Correlation Analysis. Stat Appl Genet Mol Biol. 2008;7:Article 3. [PubMed]
• Dolédec S, Chessel D. Co-inertia analysis: an alternative method for studying species-environment relationships. Freshwater Biology. 1994;31:277–294. doi: 10.1111/j.1365-2427.1994.tb01741.x. [Cross Ref]
• Culhane A, Perriere G, Higgins D. Cross-platform comparison and visualisation of gene expression data using co-inertia analysis. BMC Bioinformatics. 2003;4:59. doi: 10.1186/1471-2105-4-59. [PMC free article] [PubMed] [Cross Ref]
• Gittins R. Canonical Analysis: A Review with Applications in Ecology. Springer-Verlag; 1985.
• González I, Déjean S, Martin PGP, Baccini A. CCA: An R Package to Extend Canonical Correlation Analysis. Journal of Statistical Software. 2008;23.
• Vinod HD. Canonical Ridge and Econometrics of Joint Production. Journal of Econometrics. 1976;4:147–166. doi: 10.1016/0304-4076(76)90010-5. [Cross Ref]
• Combes S, González I, Déjean S, Baccini A, Jehl N, Juin H, Cauquil L, Gabinaud B, Lebas F, Larzul C. Relationships between sensorial and physicochemical measurements in meat of rabbit from three different breeding systems using canonical correlation analysis. Meat Science. 2008. [PubMed]
• Wold S, Eriksson L, Trygg J, Kettaneh N. Tech rep. Umea University; 2004. The PLS method – partial least squares projections to latent structures – and its applications in industrial RDP (research, development, and production).
• de Jong S. SIMPLS: An alternative approach to partial least squares regression. Chemometrics and Intelligent Laboratory Systems. 1993;18:251–263. doi: 10.1016/0169-7439(93)85002-X. [Cross Ref]
• Lorber A, Wangen L, Kowalski B. A theoretical foundation for the PLS algorithm. Journal of Chemometrics. 1987;1:13.
• Tenenhaus M. La régression PLS: théorie et pratique. Editions Technip; 1998.
• Wegelin J. Tech Rep 371. Department of Statistics, University of Washington, Seattle; 2000. A survey of Partial Least Squares (PLS) methods, with emphasis on the two-block case.
• Zou H, Hastie T, Tibshirani R. Sparse principal component analysis. Journal of Computational and Graphical Statistics. 2006;15:265–286. doi: 10.1198/106186006X113430. [Cross Ref]
• Shen H, Huang JZ. Sparse Principal Component Analysis via Regularized Low Rank Matrix Approximation. Journal of Multivariate Analysis. 2008;99:1015–1034. doi: 10.1016/j.jmva.2007.06.007. [Cross Ref]
• Robert P, Escoufier Y. A unifying tool for linear multivariate statistical methods: the RV-coefficient. Applied Statistics. 1976;25:257–265. doi: 10.2307/2347233. [Cross Ref]
• Thioulouse J, Chessel D, Dolédec S, Olivier J. ADE-4: a multivariate analysis and graphical display software. Statistics and Computing. 1997;7:75–83. doi: 10.1023/A:1018513530268. [Cross Ref]
• Butte A, Tamayo P, Slonim D, Golub T, Kohane I. Discovering functional relationships between RNA expression and chemotherapeutic susceptibility using relevance networks. Proc Nat Acad Sci U S A. 2000;97:12182–12186. doi: 10.1073/pnas.220392197. [PMC free article] [PubMed] [Cross Ref]
• Staunton J, Slonim D, Coller H, Tamayo P, Angelo M, Park J, Scherf U, Lee J, Reinhold W, Weinstein J, Mesirov J, Lander E, Golub T. Chemosensitivity prediction by transcriptional profiling. Proceedings of the National Academy of Sciences. 2001;98:10787. doi: 10.1073/pnas.191368598. [PMC free article] [PubMed] [Cross Ref]
• Ross D, Scherf U, Eisen M, Perou C, Rees C, Spellman P, Iyer V, Jeffrey S, Van de Rijn M, Waltham M, Pergamenschikov A, Lee J, Lashkari D, Shalon D, Myers T, Weinstein J, Botstein D, Brown P. Systematic variation in gene expression patterns in human cancer cell lines. Nat Genet. 2000;24:227–35. doi: 10.1038/73432. [PubMed] [Cross Ref]
• Scherf U, Ross D, Waltham M, Smith L, Lee J, Tanabe L, Kohn K, Reinhold W, Myers T, Andrews D, Scudiero D, Eisen M, Sausville E, Pommier Y, Botstein D, Brown P, Weinstein J. A gene expression database for the molecular pharmacology of cancer. Nat Genet. 2000;24:236–244. doi: 10.1038/73439. [PubMed] [Cross Ref]
• Fredman P, Hedberg K, Brezicka T. Gangliosides as Therapeutic Targets for Cancer. BioDrugs. 2003;17:155. doi: 10.2165/00063030-200317030-00002. [PubMed] [Cross Ref]
• González I, Déjean S, Martin P, Goncalves O, Besse P, Baccini A. Highlighting Relationships Between Heterogeneous Biological Data Through Graphical Displays Based On Regularized Canonical Correlation Analysis. Journal of Biological Systems. 2008.
• Jolliffe I, Trendafilov N, Uddin M. A Modified Principal Component Technique Based on the LASSO. Journal of Computational & Graphical Statistics. 2003;12:531–547. doi: 10.1198/1061860032148. [Cross Ref]
• Calvano S, Xiao W, Richards D, Felciano R, Baker H, Cho R, Chen R, Brownstein B, Cobb J, Tschoeke S, Miller-Graziano C, Moldawer L, Mindrinos M, Davis R, Tompkins R, Lowry S. A network-based analysis of systemic inflammation in humans. Nature. 2005;437:1032. doi: 10.1038/nature03985. [PubMed] [Cross Ref]
• Yang J, Weinberg R. Epithelial-Mesenchymal Transition: At the Crossroads of Development and Tumor Metastasis. Developmental Cell. 2008;14:818–829. doi: 10.1016/j.devcel.2008.05.009. [PubMed] [Cross Ref]
• Portoukalian J, Zwingelstein G, Dore J. Lipid composition of human malignant melanoma tumors at various levels of malignant growth. Eur J Biochem. 1979;94:19–23. doi: 10.1111/j.1432-1033.1979.tb12866.x. [PubMed] [Cross Ref]
• Juliano R, Reddig P, Alahari S, Edin M, Howe A, Aplin A. Integrin regulation of cell signalling and motility. Biochemical Society Transactions. 2004;32:443–446. doi: 10.1042/BST0320443. [PubMed] [Cross Ref]
{"url":"http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2640358/?tool=pubmed","timestamp":"2014-04-21T15:38:04Z","content_type":null,"content_length":"151083","record_id":"<urn:uuid:06943b31-a933-4852-82ee-09cfe216b4dd>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Mind Blowing: Mathematicians Prove Sum of Infinity is… Negative One-Twelfth? (VIDEO)

You might think that 1 + 2 + 3 + 4 + 5… (and on into infinity) would equal a very big number. But what if someone told you that the sum of all natural numbers is actually negative one-twelfth! That’s right, according to physicists, the following is true:

1 + 2 + 3 + 4 + 5 + … (all the way to infinity) = -1/12

What? How? Huh? No way! This must be some kind of mathematical hocus pocus, right? Not at all. In fact, it’s a solution that physicists can not only prove on paper, but one they also have observed in the natural world. Watch this fascinating video, as physicists explain why such a counter-intuitive answer is actually correct: If you understood all of that, give yourself a gold star. (via Sploid) (Image: Bluedharna.Flickr)

Here is a relatively simple explanation of what is wrong with this proof: http://www.databonanza.com/2014/01/why-sum-of-all-natural-numbers-is-not.html.

Thanks, I knew there was some mathematical shenanigans going on but was not able to explain it. I did think the shifting was wrong in some way, but you nailed it. It's the same type of stuff as proving that 1 = 2, where somewhere in the proof you divide by zero, which makes the whole proof false. Usually it's hidden by x - x, etc.
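For readers who want the standard bookkeeping behind the headline value: the usual route is analytic continuation of the Riemann zeta function rather than ordinary summation. The lines below are a generic sketch of that argument, not part of the original post:

\[
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}} \quad (\operatorname{Re}\, s > 1),
\qquad
\zeta(-1) \;=\; -\tfrac{1}{12},
\]

where the second equality refers to the analytic continuation of ζ to the rest of the complex plane. That is the precise sense in which the divergent series 1 + 2 + 3 + ⋯ is "assigned" the value -1/12; the partial sums themselves grow without bound, exactly as intuition says.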
{"url":"http://www.thecollegefix.com/post/15961/","timestamp":"2014-04-18T20:45:08Z","content_type":null,"content_length":"25654","record_id":"<urn:uuid:3570543b-befa-4bbe-ad49-feaa63e1da9d>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00625-ip-10-147-4-33.ec2.internal.warc.gz"}
ratio close to 2. (In fact, some researchers in the 19th century, suspecting that rationality was associated with instability, speculated that a slight change in the distance between Saturn and Jupiter would be enough to send Saturn shooting out of our solar system.) But the KAM theorem, combined with Birkhoff's earlier work, showed that there are little windows of stability within the "unstable zones" associated with rationality; such a window of stability could account for the planets' apparent stability. (The theorem also answers questions about the orbit of Saturn's moon Hyperion, the gaps in Saturn's rings, and the distribution of asteroids; it is used extensively to understand the stability of particles in accelerators.) In the years following the KAM theorem the emphasis in dynamical systems was on stability, for example in the work in the 1960s by topologist Stephen Smale and his group at the University of California, Berkeley. But more and more, researchers in the field have come to focus on instabilities as the key to understanding dynamical systems, and hence the world around us. A key role in this shift was played by the computer, which has shown mathematicians how intermingled order and disorder are: systems assumed to be stable may actually be unstable, and apparently chaotic systems may have their roots in simple rules. Perhaps the first to see evidence of this, thanks to a computer, was not a mathematician but a meteorologist, Edward Lorenz, in 1961 (Lorenz, 1963). The story as told by James Gleick is that Lorenz was modeling Earth's atmosphere, using differential equations to estimate the impact of changes in temperature, wind, air pressure, and the like. One day he took what he thought was a harmless shortcut: he repeated a particular sequence but started halfway through, typing in the midpoint output from the previous printout—but only to the three decimal places displayed on his printout, not the six decimals calculated by the program. He then went for a cup of coffee and returned to find a totally new and dramatically altered outcome. The small change in initial conditions—figures to three decimal places, not six—produced an entirely different answer.

Sensitivity to Initial Conditions
Sensitivity to initial conditions is what chaos is all about: a chaotic system is one that is sensitive to initial conditions. In his talk McMullen described a mathematical example, "the simplest dynamical system in the quadratic family that one can study," iteration of the polynomial x^2 + c when c = 0. To iterate a polynomial or other function, one starts with a number, applies the function to it, and then applies the function again to the result, repeating the process indefinitely.
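As a concrete illustration of sensitivity to initial conditions, the short sketch below iterates the quadratic map x → x² + c for c = -2, a parameter value at which the real quadratic map is chaotic on the interval [-2, 2] (the choice of c and the starting values are mine, for illustration only). Two starting points differing by one part in a million are compared, in the spirit of Lorenz's rounding experiment:

def iterate(x, c, steps):
    """Repeatedly apply x -> x*x + c and record the trajectory."""
    out = [x]
    for _ in range(steps):
        x = x * x + c
        out.append(x)
    return out

c = -2.0
a = iterate(0.4, c, 40)          # original initial condition
b = iterate(0.4 + 1e-6, c, 40)   # perturbed by one part in a million

for n in (0, 10, 20, 30, 40):
    print(f"step {n:2d}: {a[n]: .6f} vs {b[n]: .6f}   (gap {abs(a[n] - b[n]):.2e})")

After roughly twenty iterations the two trajectories bear no resemblance to each other, even though they started essentially at the same point.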
{"url":"http://www.nap.edu/openbook.php?record_id=1859&page=49","timestamp":"2014-04-20T03:24:05Z","content_type":null,"content_length":"36762","record_id":"<urn:uuid:41fda747-994b-4571-8dc8-fda5c7ea7ac9>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Integration involving complex numbers

October 24th 2009, 08:53 PM   #1
Junior Member
Aug 2009

Integration involving complex numbers

I am trying to integrate this equation
$\int\frac{dx}{x^2-6x+34}$
i have tried to factorise the polynomial on the denominator and thus
$x=\frac{-b\pm\sqrt{b^2-4c}}{2}=3\pm5i$
where b = -6, c = 34
using my knowledge of complex functions this can be written as
$e^{3x} (c_1 \cos 5x + c_2 \sin 5x)$
thus the integral can be written as
$\int\frac{dx}{e^{3x} (c_1 \cos 5x + c_2 \sin 5x)}$
not sure where to go now? any ideas?

October 24th 2009, 09:24 PM   #2
Quote: (the original post is quoted in full)
$x^2 - 6x + 34 = (x - 3)^2 + 25$. This should suggest a standard form to you, especially if you first make the substitution $u = x - 3$.
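For completeness (this step is not part of the original thread), the suggested substitution leads to the standard arctangent form:

\[
\int \frac{dx}{x^{2} - 6x + 34}
= \int \frac{du}{u^{2} + 5^{2}} \Big|_{u = x - 3}
= \frac{1}{5} \arctan\!\left(\frac{x - 3}{5}\right) + C.
\]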
{"url":"http://mathhelpforum.com/calculus/110221-integration-involving-complex-numbers.html","timestamp":"2014-04-20T10:37:47Z","content_type":null,"content_length":"37844","record_id":"<urn:uuid:7dc6254e-ad56-49bb-a692-49b20e06204f>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00060-ip-10-147-4-33.ec2.internal.warc.gz"}
In statistics, a histogram is a graphical display of tabulated frequencies, shown as bars. It shows what proportion of cases fall into each of several categories. A histogram differs from a bar chart in that it is the area of the bar that denotes the value, not the height, a crucial distinction when the categories are not of uniform width (Lancaster, 1974). The categories are usually specified as non-overlapping intervals of some variable. The categories (bars) must be adjacent. The word histogram is derived from Greek: histos 'anything set upright' (as the masts of a ship, the bar of a loom, or the vertical bars of a histogram); gramma 'drawing, record, writing'. The histogram is one of the seven basic tools of quality control, which also include the Pareto chart, check sheet, control chart, cause-and-effect diagram, flowchart, and scatter diagram. A generalization of the histogram is given by kernel smoothing techniques, which construct a smooth probability density function from the supplied data.

As an example we consider data collected by the U.S. Census Bureau on time to travel to work (2000 census, Table 5). The census found that there were 124 million people who work outside of their homes. Reported travel times tend to be rounded to convenient values; this rounding is a common phenomenon when collecting data from people.

Data by absolute numbers

Interval  Width  Quantity  Quantity/width
0         5      4180      836.0
5         5      13687     2737.4
10        5      18618     3723.6
15        5      19634     3926.8
20        5      17981     3596.2
25        5      7190      1438.0
30        5      16369     3273.8
35        5      3212      642.4
40        5      4122      824.4
45        15     9200      613.3
60        30     6461      215.4
90        60     3435      57.3

This histogram shows the number of cases per unit interval, so that the height of each bar is equal to the proportion of total people in the survey who fall into that category. The area under the curve represents the total number of cases (124 million). This type of histogram shows absolute numbers.

Data by proportion

Interval  Width  Quantity (Q)  Q/total/width
0         5      4180          0.0067
5         5      13687         0.0220
10        5      18618         0.0300
15        5      19634         0.0316
20        5      17981         0.0289
25        5      7190          0.0115
30        5      16369         0.0263
35        5      3212          0.0051
40        5      4122          0.0066
45        15     9200          0.0049
60        30     6461          0.0017
90        60     3435          0.0004

This histogram differs from the first only in the vertical scale. The height of each bar is the decimal percentage of the total that each category represents, and the total area of all the bars is equal to 1, the decimal equivalent of 100%. The curve displayed is a simple density estimate. This version shows proportions, and is also known as a unit area histogram. In other words, a histogram represents a frequency distribution by means of rectangles whose widths represent class intervals and whose areas are proportional to the corresponding frequencies. The bars are placed next to each other simply to make the data easier to compare.

Activities and demonstrations: various online resource pages contain a number of hands-on interactive activities demonstrating the concept of a histogram using Java applets.

Mathematical definition
In a more general mathematical sense, a histogram is a mapping $m_i$ that counts the number of observations that fall into various disjoint categories (known as bins), whereas the graph of a histogram is merely one way to represent a histogram. Thus, if we let $n$ be the total number of observations and $k$ be the total number of bins, the histogram $m_i$ meets the following condition:
$n = \sum_{i=1}^{k} m_i.$

Cumulative histogram
A cumulative histogram is a mapping that counts the cumulative number of observations in all of the bins up to the specified bin.
That is, the cumulative histogram $M_i$ of a histogram $m_i$ is defined as:
$M_i = \sum_{j=1}^{i} m_j.$

Number of bins and width
There is no "best" number of bins, and different bin sizes can reveal different features of the data. Some theoreticians have attempted to determine an optimal number of bins, but these methods generally make strong assumptions about the shape of the distribution. You should always experiment with bin widths before choosing one (or more) that illustrate the salient features in your data. The number of bins $k$ can be calculated directly, or from a suggested bin width $h$:
$k = \left\lceil \frac{\max x - \min x}{h} \right\rceil,$
where the brackets indicate the ceiling function.

Sturges' formula: $k = \lceil \log_2 n + 1 \rceil$, which implicitly bases the bin sizes on the range of the data and can perform poorly if n is small.

Scott's choice: $h = \frac{3.5 s}{n^{1/3}}$, where $h$ is the common bin width and $s$ is the sample standard deviation.

Freedman-Diaconis' choice: $h = 2\, \frac{\operatorname{IQR}(x)}{n^{1/3}}$, which is based on the interquartile range.

Continuous data
The idea of a histogram can be generalized to continuous data. Let $f \in L^1(\mathbb{R})$ (a function in the Lebesgue space of integrable functions); then a cumulative histogram operator $H(f)(y)$ can be defined, and for $f$ with only finitely many intervals of monotonicity it can be rewritten as a finite sum over those intervals.
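The three bin-width rules above are easy to apply in practice. The sketch below is a generic illustration (the sample data and function name are arbitrary choices, not part of the original article), computing the suggested number of bins for a sample x under each rule:

import math
import numpy as np

def bin_counts(x):
    """Suggested number of bins under Sturges, Scott and Freedman-Diaconis."""
    x = np.asarray(x, dtype=float)
    n = x.size
    data_range = x.max() - x.min()

    sturges_k = math.ceil(math.log2(n) + 1)

    scott_h = 3.5 * x.std(ddof=1) / n ** (1 / 3)
    scott_k = math.ceil(data_range / scott_h)

    iqr = np.percentile(x, 75) - np.percentile(x, 25)
    fd_h = 2 * iqr / n ** (1 / 3)
    fd_k = math.ceil(data_range / fd_h)

    return {"sturges": sturges_k, "scott": scott_k, "freedman_diaconis": fd_k}

print(bin_counts(np.random.default_rng(0).normal(size=500)))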
{"url":"http://www.reference.com/browse/set+upright","timestamp":"2014-04-20T08:06:24Z","content_type":null,"content_length":"85867","record_id":"<urn:uuid:2b96d5d1-413e-4381-bd05-1f002e2b7f41>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
Each data type has an external representation determined by its input and output functions. Many of the built-in types have obvious external formats. However, several types are either unique to PostgreSQL, such as open and closed paths, or have several possibilities for formats, such as the date and time types. Most of the input and output functions corresponding to the base types (e.g., integers and floating-point numbers) do some error-checking. Some of the input and output functions are not invertible. That is, the result of an output function may lose precision when compared to the original input. Some of the operators and functions (e.g., addition and multiplication) do not perform run-time error-checking in the interests of improving execution speed. On some systems, for example, the numeric operators for some data types may silently underflow or overflow.

Numeric types consist of two-, four-, and eight-byte integers, four- and eight-byte floating-point numbers, and fixed-precision decimals. Table 5-2 lists the available types.

Table 5-2. Numeric Types
│ Type name        │ Storage size │ Description                      │ Range                                        │
│ smallint         │ 2 bytes      │ small range fixed-precision      │ -32768 to +32767                             │
│ integer          │ 4 bytes      │ usual choice for fixed-precision │ -2147483648 to +2147483647                   │
│ bigint           │ 8 bytes      │ large range fixed-precision      │ -9223372036854775808 to 9223372036854775807  │
│ decimal          │ variable     │ user-specified precision, exact  │ no limit                                     │
│ numeric          │ variable     │ user-specified precision, exact  │ no limit                                     │
│ real             │ 4 bytes      │ variable-precision, inexact      │ 6 decimal digits precision                   │
│ double precision │ 8 bytes      │ variable-precision, inexact      │ 15 decimal digits precision                  │
│ serial           │ 4 bytes      │ autoincrementing integer         │ 1 to 2147483647                              │
│ bigserial        │ 8 bytes      │ large autoincrementing integer   │ 1 to 9223372036854775807                     │

The syntax of constants for the numeric types is described in Section 1.1.2. The numeric types have a full set of corresponding arithmetic operators and functions. Refer to Chapter 6 for more information. The following sections describe the types in detail.

The types smallint, integer, and bigint store whole numbers, that is, numbers without fractional components, of various ranges. Attempts to store values outside of the allowed range will result in an error. The type integer is the usual choice, as it offers the best balance between range, storage size, and performance. The smallint type is generally only used if disk space is at a premium. The bigint type should only be used if the integer range is not sufficient, because the latter is definitely faster. The bigint type may not function correctly on all platforms, since it relies on compiler support for eight-byte integers. On a machine without such support, bigint acts the same as integer (but still takes up eight bytes of storage). However, we are not aware of any reasonable platform where this is actually the case. SQL only specifies the integer types integer (or int) and smallint. The type bigint, and the type names int2, int4, and int8 are extensions, which are shared with various other SQL database systems.

Note: If you have a column of type smallint or bigint with an index, you may encounter problems getting the system to use that index. For instance, a clause of the form
... WHERE smallint_column = 42
will not use an index, because the system assigns type integer to the constant 42, and PostgreSQL currently cannot use an index when two different data types are involved. A workaround is to single-quote the constant, thus:
... WHERE smallint_column = '42'
This will cause the system to delay type resolution and will assign the right type to the constant.

The type numeric can store numbers with up to 1,000 digits of precision and perform calculations exactly. It is especially recommended for storing monetary amounts and other quantities where exactness is required. However, the numeric type is very slow compared to the floating-point types described in the next section.

In what follows we use these terms: The scale of a numeric is the count of decimal digits in the fractional part, to the right of the decimal point. The precision of a numeric is the total count of significant digits in the whole number, that is, the number of digits to both sides of the decimal point. So the number 23.5141 has a precision of 6 and a scale of 4. Integers can be considered to have a scale of zero.

Both the precision and the scale of the numeric type can be configured. To declare a column of type numeric use the syntax
NUMERIC(precision, scale)
The precision must be positive, the scale zero or positive. Alternatively, NUMERIC(precision) selects a scale of 0. Specifying NUMERIC without any precision or scale creates a column in which numeric values of any precision and scale can be stored, up to the implementation limit on precision. A column of this kind will not coerce input values to any particular scale, whereas numeric columns with a declared scale will coerce input values to that scale. (The SQL standard requires a default scale of 0, i.e., coercion to integer precision. We find this a bit useless. If you're concerned about portability, always specify the precision and scale explicitly.)

If the precision or scale of a value is greater than the declared precision or scale of a column, the system will attempt to round the value. If the value cannot be rounded so as to satisfy the declared limits, an error is raised.

The types decimal and numeric are equivalent. Both types are part of the SQL standard.

The data types real and double precision are inexact, variable-precision numeric types. In practice, these types are usually implementations of IEEE Standard 754 for Binary Floating-Point Arithmetic (single and double precision, respectively), to the extent that the underlying processor, operating system, and compiler support it. Inexact means that some values cannot be converted exactly to the internal format and are stored as approximations, so that storing and printing back out a value may show slight discrepancies. Managing these errors and how they propagate through calculations is the subject of an entire branch of mathematics and computer science and will not be discussed further here, except for the following points:
• If you require exact storage and calculations (such as for monetary amounts), use the numeric type instead.
• If you want to do complicated calculations with these types for anything important, especially if you rely on certain behavior in boundary cases (infinity, underflow), you should evaluate the implementation carefully.
• Comparing two floating-point values for equality may or may not work as expected.
Normally, the real type has a range of at least -1E+37 to +1E+37 with a precision of at least 6 decimal digits. The double precision type normally has a range of around -1E+308 to +1E+308 with a precision of at least 15 digits. Values that are too large or too small will cause an error. Rounding may take place if the precision of an input number is too high.
Numbers too close to zero that are not representable as distinct from zero will cause an underflow error.

The serial data type is not a true type, but merely a notational convenience for setting up identifier columns (similar to the AUTO_INCREMENT property supported by some other databases). In the current implementation, specifying
CREATE TABLE tablename (
    colname SERIAL
);
is equivalent to specifying:
CREATE SEQUENCE tablename_colname_seq;
CREATE TABLE tablename (
    colname integer DEFAULT nextval('tablename_colname_seq') NOT NULL
);
Thus, we have created an integer column and arranged for its default values to be assigned from a sequence generator. A NOT NULL constraint is applied to ensure that a null value cannot be explicitly inserted, either. In most cases you would also want to attach a UNIQUE or PRIMARY KEY constraint to prevent duplicate values from being inserted by accident, but this is not automatic. To use a serial column to insert the next value of the sequence into the table, specify that the serial column should be assigned the default value. This can be done either by excluding the column from the list of columns in the INSERT statement, or through the use of the DEFAULT keyword.

The type names serial and serial4 are equivalent: both create integer columns. The type names bigserial and serial8 work just the same way, except that they create a bigint column. bigserial should be used if you anticipate the use of more than 2^31 identifiers over the lifetime of the table. The sequence created by a serial type is automatically dropped when the owning column is dropped, and cannot be dropped otherwise. (This was not true in PostgreSQL releases before 7.3. Note that this automatic drop linkage will not occur for a sequence created by reloading a dump from a pre-7.3 database; the dump file does not contain the information needed to establish the dependency link.) Furthermore, this dependency between sequence and column is made only for the serial column itself; if any other columns reference the sequence (perhaps by manually calling the nextval() function), they may be broken if the sequence is removed. Using serial columns in this fashion is considered bad form.

Note: Prior to PostgreSQL 7.3, serial implied UNIQUE. This is no longer automatic. If you wish a serial column to be UNIQUE or a PRIMARY KEY it must now be specified, just as with any other data type.
{"url":"http://www.postgresql.org/docs/7.3/static/datatype.html","timestamp":"2014-04-18T13:19:39Z","content_type":null,"content_length":"33427","record_id":"<urn:uuid:8ace13cb-4548-40e3-9b7a-6825bdefe60e>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Summary: A diagrammatic Alexander invariant of tangles

Abstract: We give a new construction of the one-variable Alexander polynomial of an oriented knot or link, and show that it generalizes to a vector-valued invariant of oriented tangles.
AMS Classification: 57M27; Keywords: Alexander polynomial, tangle, skein theory, planar algebra.

1 Introduction
The Alexander polynomial is the unique invariant of oriented knots and tangles that is one for the unknot and satisfies the Alexander-Conway skein relation
Δ(L+) - Δ(L-) = (q - q^{-1}) Δ(L0),
where L+, L- and L0 are diagrams that differ only at a single crossing (positive, negative, and smoothed, respectively). Many other equivalent definitions are known. The aim of this paper is to give yet another definition of the Alexander polynomial, which we will prove is equivalent to the above skein theoretic definition. An advantage of our definition is that it generalizes immediately to give an invariant of oriented tangles. Other generalizations of the Alexander polynomial to tangles have been given in [CT07] and [Arc08]. Their definitions are for the multivariable Alexander polynomial, whereas this paper only concerns the single variable Alexander polynomial.
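As a quick illustration of how the skein relation is used (this example is not taken from the paper, and it assumes the usual Conway normalization in which a two-component unlink has invariant 0): applying the relation at one crossing of the positive Hopf link H, the three diagrams L+, L- and L0 are H, the two-component unlink U2, and the unknot U, so

\[
\Delta(H) - \Delta(U_2) = (q - q^{-1})\,\Delta(U)
\quad\Longrightarrow\quad
\Delta(H) = (q - q^{-1}) \cdot 1 + 0 = q - q^{-1}.
\]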
{"url":"http://www.osti.gov/eprints/topicpages/documents/record/991/1383752.html","timestamp":"2014-04-19T02:12:58Z","content_type":null,"content_length":"8158","record_id":"<urn:uuid:31ad5a7c-8d4f-4d5e-8d11-f56d2d6b35e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00103-ip-10-147-4-33.ec2.internal.warc.gz"}
What If That Asteroid Were to Hit Earth?

At 6:28 p.m. Eastern time today, an asteroid ¼ of a mile long will pass between the Earth and the Moon (see graphic) and roughly 201,700 miles from our planet. By any reckoning of astronomical distance standards this is what's called a "grazing pass" or, if you prefer more colorful parlance, a "near miss". Again, two hundred thousand miles doesn't sound like a "near miss", but by the astronomical standards of astronomical units (1 AU = 93 million miles) and light years, it certainly is. We understand from the data that no major effects or influences will occur, including gravitational ones - which should mean no tides or any earthquake activity triggered by a differential gravitational pull on Earth's crust.

However, it is instructive to consider the possible effects if this moderate asteroid actually struck the Earth at 46,400 kilometers per hour (12,900 meters per second), and with a putative or effective mass of 1.2 x 10^11 kg (after ablation, i.e. evaporation of some mass on entry through the atmosphere). First, the effective kinetic energy (½ mv^2) would be:

½ (1.2 x 10^11 kg) (12,900 m/s)^2 ≈ 9.9 x 10^18 J

This is roughly the equivalent of some 2,400 one-megaton H-bombs going off, and from my computations, would carve out a crater nearly 7 miles wide and 3,000' deep. In other words, if such a monster hit Manhattan, that would pretty well be the end of it. If it struck Barbados, nearly a third of the island would be cratered. A water or ocean strike, meanwhile, would trigger a 90'-high tsunami, though some low-balled values of 70' have been circulated. (But bear in mind, in the scheme of "rogue waves", 70' is considered more or less high normal, and we're talking about an object 200' longer than the Nimitz aircraft carrier striking the ocean at more than 12,000 meters per second!)

It is known that a scale exists, called the "Torino scale", to measure and reference the potential destructive scale of asteroids, much like the Richter scale does for earthquakes. Some of the Torino scale levels and gradations (registered by mass and velocity) are as follows:

Regionally devastating impact, e.g. the June 30, 1908 Tunguska impact. Devastation range: approx. 10,000 sq. kilometers, killing crops, humans, and animals. Size of object: 20 m (~66') to 100 m (~330') diameter. Explosive release: 1 megaton to 100 megatons TNT equivalent. Collision probability: between ~1 in 100 yrs. and 1 in 1,000 yrs.

Mass extinction impact: e.g. the KT-boundary impact of 65 million years ago. Devastation range: ~10 million sq. km., killing all extant dinosaurs and hundreds of other species. Size of object: 100 m (~330') diameter to 1 km (3330') dia. Explosive release: 100 megatons to 100,000 megatons TNT equivalent. Collision probability: between ~1 in 1,000 yrs. and 1 in 100,000 yrs.

Earth sterilizing impact: no example yet. Would annihilate every last species on the planet, and sterilize it for thousands of years to come. Devastation-affected area: > 50 x 10^6 sq. km. Size of object: >> 1 km (3330') dia. (Likely source: any of one hundred Apollo asteroids whose orbits intersect with Earth's.) Explosive release: >> 100,000 megatons TNT equivalent. Collision probability: unknown, but at least one asteroid specialist (Dr. Basil Booth) has predicted an Apollo asteroid collision some time in the next 250,000 yrs.

As one sees by surveying the preceding, this asteroid falls about midway into the "Mass extinction impact" range.
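To make the kinetic-energy arithmetic above easy to check, here is a small sketch; the mass and speed are the figures quoted in the post, and the conversion of 4.184 x 10^15 joules per megaton of TNT is a standard equivalence assumed for the comparison:

mass = 1.2e11            # kg, effective mass quoted above
speed = 12_900.0         # m/s

kinetic_energy = 0.5 * mass * speed ** 2     # joules
joules_per_megaton = 4.184e15                # standard TNT equivalence

print(f"kinetic energy ~ {kinetic_energy:.2e} J")
print(f"              ~ {kinetic_energy / joules_per_megaton:,.0f} one-megaton bombs")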
In other words, if it struck the Earth, we'd be looking at a serious disaster indeed, whether a water or land strike. Some bloggers have opined and worried about "How can any scientists actually know that this thing will miss us?" The answer is that they make use of the highly precise branch of astronomy known as celestial mechanics, especially that branch that has the objective of obtaining the perturbations which an astronomical object is likely to experience under the combined influence of the gravitational forces of other (e.g. larger, more massive) objects acting upon it. Once the object's orbital elements are known, they can be processed through the perturbation equations, and the knowledge obtained of whether there will be any direct interaction or not. Given I have used celestial mechanics over many decades myself, first in an undergrad astronomy course to compute the position of Jupiter in the year 2010 (from the year 1970) and later to find the perturbations on Halley's comet in 1986, I have no fear...ZERO...that any mistakes will be made.

However, I do believe this close pass endorses once again the saner position of not giving up on manned space flight. As both the late Arthur C. Clarke and Isaac Asimov have observed, it makes little rational sense - given we do have the technology or the potential to develop it - to keep all our eggs on this one little orb in the hope that we will never ever face a mass-sterilizing impact and be wiped out. The dinos had no choice, as they were merely dumb beasts with brains the size of walnuts. We have no similar excuses, and saying "we don't have the money" merits a swift kick in the posterior, and nothing more!
Inferring the Context for Evaluating Physics Algebraic Equations When the Scaffolding Is Removed

C. W. Liew, Joel A. Shapiro, and D. E. Smith

This paper describes our continuing work on enabling a tutor to evaluate algebraic solutions to word problems in physics. Current tutoring systems require students to explicitly define each variable that is used in the algebraic equations. We have developed a constraint-propagation-based heuristic algorithm that finds the possible dimensions and physics concepts for each variable. In earlier work we developed techniques that worked for a small set of problems and evaluated them on a small number of students. The work described here covers an extension to and evaluation of a much larger class of problems and a larger number of students. The results show that our technique uniquely determines the dimensions of all the variables in 89% of the sets of equations. By asking the student for dimension information about one variable, an additional 3% of the sets can be determined. Thus a physics tutoring system can use this technique to reason about a student's answers even when the scaffolding and context are removed.
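The abstract does not give the algorithm itself, but the core idea - propagating dimensional constraints through equations until each variable's dimensions are pinned down - can be illustrated with a minimal sketch. Everything below (the representation of dimensions as exponent maps, the two propagation rules, the example equation d = v*t) is an illustrative assumption of mine, not the authors' implementation.

```python
# Minimal illustration of dimension inference by constraint propagation.
# Dimensions are exponent maps, e.g. velocity = {"L": 1, "T": -1}.

def mul(a, b):
    """Dimension of a product: exponents add."""
    out = dict(a)
    for k, v in b.items():
        out[k] = out.get(k, 0) + v
    return {k: v for k, v in out.items() if v != 0}

def div(a, b):
    """Dimension of a quotient: exponents subtract."""
    return mul(a, {k: -v for k, v in b.items()})

# Hypothetical student equation: d = v * t, with d and t already known.
known = {"d": {"L": 1}, "t": {"T": 1}}

# Constraint from 'd = v * t': dim(v) = dim(d) / dim(t).
known["v"] = div(known["d"], known["t"])
print(known["v"])   # {'L': 1, 'T': -1}, i.e. a velocity
```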
More on String Theory String Theory 101 The basic idea of string theory is simple. According to particle physics, there are a fairly large number of elementary constituents – building blocks from which everything is constructed: the electron, the photon, the quarks, and so forth. As far as experiments can tell, all these particles are point-like, or zero-dimensional – they have no spatial extent at all. According to string theory, on the other hand, there is only one elementary constituent: a tiny, one-dimensional "string," which can either have both its ends free, called an open string, or both its ends joined to form a loop, called a closed string. In simplified form, these can be pictured as shown in the figure below. Oscillating closed and open strings. Click image to enlarge How might an elementary string account for all known elementary particles? Again, the basic idea is simple: an elementary string, open or closed, can oscillate in many different ways, similar to the strings of a guitar or violin. Just as different oscillations of a musical string produce notes of different pitch, and combinations produce tones of different timbre, different oscillations of an elementary string correspond to different values for the physical properties of the string, such as its mass or spin. To get a feel for how this works, let's recall that in the quantum world there is a close relationship between the frequency with which something oscillates, and its energy. Einstein provided the first example, suggesting that the energy of a quantum of light – a photon – is related to its colour, or frequency of oscillation: the higher the frequency, the higher the energy. Moreover, in Einstein’s relativistic world, there is also a relationship between energy and mass: E = mc^2. If we combine these two relationships, we see that in a world that is both quantum and relativistic (our world), there is a close relationship between the frequency with which something oscillates – an elementary string in our case, and its physical mass, i.e., the mass of the elementary particle it is mimicking. Thus, the different elementary particles we see – the electron, the photon, the quarks, and so on – may all be the same entity: an elementary string, just singing different notes. This potential to unify particle physics is one of the very compelling aspects of string theory. If string theory is true, why have our most detailed probes of the elementary particles never revealed any such "stringy" structure? Why do we see only point-like entities? The answer is simple: elementary strings, if they exist, are far too tiny to see. It is unlikely that even the most sophisticated microscope or particle accelerator that could be constructed would ever have enough resolution to directly "see" the strings of string theory. This is similar to the fact that we typically do not see the individual pixels on our computer screen when looking at it. They are simply too small. We must be clever and find convincing indirect evidence. Why All the Interest in String Theory? Besides the potential to unify particle physics, there are several other reasons string theory is extremely intriguing. For one thing, it is naturally free of the sort of "infinities" that plagued particle physics in the mid-twentieth century. Consider two electrons, which repel because they have a like charge. The closer they are, the greater the repulsive force. 
If electrons are truly point particles, they can be brought infinitely close together, resulting in an infinite repulsive force. It turns out this property of point particles wreaks havoc with trying to extract sensible predictions from the theory. Eventually, a "fix" was found for this, but one that works well for only three of the four fundamental forces of nature; it works also for the fourth force, gravity, but only when gravity is so weak that it can be neglected. A significant new idea is needed. String theory does not have this problem of infinities, because strings are extended objects. For example, two nearby strings may "interact" by exchanging a third string, as shown in the figure. This interaction is spread out smoothly over space and time, and there are no nasty infinities. This figure shows two closed loop strings “interacting” by exchanging a third closed loop string. [More]. Figure courtesy of Steuard Jensen. Click image to enlarge Moreover, there is one particular oscillation of the closed string, which must occur in every well-behaved (mathematically self-consistent) string theory and has just the right properties to give a sensible quantum mechanical description of the gravitational attraction between matter particles. This is an extraordinary property of string theory, for after all, formulating a theory of quantum gravity is widely seen as the most important open problem in theoretical physics. String theory automatically contains a theory of quantum gravity! Whether it is the correct theory of quantum gravity is still not known. Another amazing feature of string theory is that, in order for an elementary string to be able to mimic both force particles (photon, gluon, etc.) and matter particles (electron, quarks, etc.), the theory makes a definite prediction about the types of particles that must exist in nature. Roughly speaking, for every type of force particle, there must exist a type of matter particle with certain properties, and vice versa. For example, corresponding to the photon, there must exist a matter particle called a photino; corresponding to the electron, there must exist a force particle called the selectron. This hypothetical relationship between force and matter particles is called "supersymmetry," and is what the "super" means in "superstring" theory (which is what string theory is sometimes called). While no experimental evidence has yet been found for such supersymmetric partner particles, this might simply be because all such particles are too massive to have been produced (and observed) in any previous particle accelerator. This is one reason why there is such excitement about the Large Hadron Collider (LHC) recently constructed at CERN. The hope is to discover evidence for supersymmetry, which would bolster the case for superstring theory. Perhaps the most astounding property is that, in string theory, physicists found a theory of nature so deep that it is able to make a prediction about something at the very foundations of the structure of the universe. Not just a prediction about things that may or may not exist in spacetime, but a fundamental prediction about spacetime itself, namely, the number of dimensions it must have. This is unprecedented. It should be added that this prediction is intimately connected with the quantum nature of the universe. In contrast, Einstein's theory of general relativity – our best (non-quantum) theory of space, time, and gravity – works just as well in 4 spacetime dimensions as in 24. It has no preference. 
String theory, on the other hand, works in only a certain number of spacetime dimensions: 10 (or 11, depending on how you look at it). Whether this prediction of spacetime dimension is correct or not is beside the point. It's significant because it raises hope that a mathematical description of nature, with a predictive power sufficient to be called a "theory of everything," may very well exist. Where Are the Extra Dimensions? As to the correctness of the prediction, that’s another matter. On the surface, it appears to be dead wrong. There are "obviously" only 3 space dimensions, not 9. But the situation is considerably more subtle (and delightful) than this. Perhaps the extra 6 dimensions are simply "small" enough to have escaped our notice. To see how this might be possible, imagine that, instead of 3-dimensional beings living in a 3-dimensional space, we are 2-dimensional beings living in a 2-dimensional space, as shown in the figure. All of our movements are confined to up-down and left-right; there is no such thing as forward-backward. One day, a magician arrives who makes our world 3-dimensional by slightly "thickening" it in the forward-backward direction. If the additional freedom of motion this affords us were sufficiently small compared to the size of our bodies, our new freedom would be imperceptible to our senses. String theorists have in mind something like 10^-35 m, which is clearly in the "sufficiently small" category, even for the most precise experiments we could imagine! The first figure (left) shows a 2-dimensional being living in a 2-dimensional space [more] Click image to enlarge Moreover, such extra spatial dimensions would be "curled up." In our previous example, we had only one extra dimension: movement forward or backward a short distance. If we represent this dimension of movement as a short line segment, we can curl it up by connecting the two ends of the segment, thus forming a circle. Now when we move forward, instead of encountering a boundary – the "end of space" – we simply re-emerge into the same space from the other end – a finite-sized space, but with no boundary! With two extra dimensions, we begin with a two-dimensional square instead of a one-dimensional line segment (see figure). If we connect its front and back edges, we get a cylinder; then connecting the remaining left and right edges gives us a torus (donut), as shown. Curling up a 2-dimensional space into a torus. Click image to enlarge Again, we have a finite-sized space with no boundary. With 6 extra dimensions, there are many ways to curl them up; the figure below attempts to illustrate the potential complexity. A 3-dimensional projection of a 6-dimensional "Calabi-Yau manifold," [more] Click image to enlarge In addition to the elementary string oscillations discussed above, the precise way in which these extra dimensions are curled up affects the predictions string theory makes regarding the kind and properties of the elementary particles we observe in our familiar 4-dimensional spacetime. On a positive note, this increases the possibility that at least one of these ways will yield the properties of the elementary particles we know. The problem is that string theory does not seem to prefer one way over another. In other words, nature has many choices, but why she made the choice she did might be beyond string theory to explain. At the other extreme, these 6 extra dimensions might be "large." 
Our 3-dimensional world might be floating through a 9-dimensional space, like a 2-dimensional sheet of paper through 3-dimensional space. In fact, string theory predicts the existence of extended objects called branes (short for "membranes"), which can have zero dimensions (like particles), one dimension (like the elementary strings), or any other number up to the number of space dimensions itself. Our world could be a 3-dimensional brane, and what we detect as point-like elementary particles could be the ends of open strings that are trapped on the brane. Below is a schematic depiction of our world brane (shown as a 2-dimensional surface) floating through a higher (three)-dimensional space, with another brane right behind it. A "brane-world" scenario, depicting our world as a 2-dimensional brane [more] Click image to enlarge Might it be possible to somehow detect such large extra dimensions? By definition, we – and all of our measuring instruments – are confined to our 3-dimensional world brane, and so it would seem the answer is no. But it could be yes. One idea is that gravity, which has always been an "outsider" to the particle physics world, might be an outsider quite literally. Unlike the other three fundamental forces, which, along with matter, are confined to the brane, gravity might live in the higher-dimensional space such that a nearby brane world could affect our brane world via gravity. Perhaps our experience of gravity is like the shadows in Plato’s famous allegory of the cave. Remarkably, there may be a measurable signature of this fantastic scenario, and experimental research is currently underway! In another spectacular demonstration of its potential, string theory has led to a possible concrete realization of the holographic principle, called the Maldacena conjecture. Proposed by Gerard 't Hooft and Leonard Susskind, the basic idea of the holographic principle is that in order for quantum theory and Einstein’s theory of gravity to coexist in our universe, there must be much less information about what’s physically happening inside any given 3-dimensional volume of space (e.g., objects moving this way and that) than we had previously expected. In fact, the amount cannot exceed what we would expect of a physical reality existing in a 2-dimensional surface – the surface bounding that volume. While it's a bit subtler than this, it's nevertheless quite analogous to the way the information in a 3-dimensional scene can be stored in a 2-dimensional hologram. In the context of string theory, Juan Maldacena discovered connections of exactly this sort between certain physical theories in different numbers of dimensions. String theory is, if nothing else, certainly a mathematically rich theory that has had considerable impact on many areas of mathematics. For instance, the study of the different ways the extra dimensions can be curled up has led to important new insights into exotic types of geometry of great interest to pure mathematicians. It is also a potentially very rich theory from a physics perspective. It touches deeply on many aspects of the universe, ranging from the building blocks of all matter and radiation to the dimensions of spacetime, the quantum nature of gravity, and even the holographic principle. A hologram (left) can be made by illuminating a 3-dimensional subject with laser light [more] Click image to enlarge But mathematical elegance and tantalizing physical insights, as deep as they may be, are not enough. 
To be successful, string theory needs to be related to observations and testable predictions. Like many other theories of quantum gravity, most of its predictions are currently not accessible by present-day experiments. However, string theory is young. Researchers are still trying to understand various ways of testing it to see if it matches reality. One promising line of research that may produce testable predictions relates string theory to cosmology. For example, researchers are trying to understand if string theory might be able to explain inflation in the early universe, and if so, what observational signatures this might have left behind. If, one day, string theory is confirmed by experiments, scientists will have found what is arguably the "holy grail of physics": a single theory that describes the nature of the universe at the most fundamental level. To learn more about string theory at Perimeter Institute and the researchers, please click here. Perimeter Institute Resources The following selection of Perimeter Institute multi-media presentations by leading scientists is particularly relevant to superstring theory. Click on the link to read a full description of each talk and choose your viewing format. Specially for Teachers and Students These multi-media talks by Perimeter Institute researchers and visiting scientists were presented to youth and educators during Perimeter Institute's ISSYP, EinsteinPlus or other occasions. Suggested External Resources
A Unified Algebraic Framework for Classical Geometry

The study of relations among Euclidean, spherical and hyperbolic geometries dates back to the beginning of last century. The attempt to prove Euclid's fifth postulate led C. F. Gauss to discover hyperbolic geometry in the 1820's. Only a few years passed before this geometry was rediscovered independently by N. Lobachevski (1829) and J. Bolyai (1832). The strongest evidence given by the founders for its consistency is the duality between hyperbolic and spherical trigonometries. This duality was first demonstrated by Lambert in his 1770 memoir [L1770]. Some theorems, for example the law of sines, can be stated in a form that is valid in spherical, Euclidean, and hyperbolic geometries [B1832].

To prove the consistency of hyperbolic geometry, people built various analytic models of hyperbolic geometry on the Euclidean plane. E. Beltrami [B1868] constructed a Euclidean model of the hyperbolic plane, and using differential geometry, showed that his model satisfies all the axioms of hyperbolic plane geometry. In 1871, F. Klein gave an interpretation of Beltrami's model in terms of projective geometry. Because of Klein's interpretation, Beltrami's model is later called Klein's disc model of the hyperbolic plane. The generalization of this model to n-dimensional hyperbolic space is now called the Klein ball model [CFK98]. In the same paper Beltrami constructed two other Euclidean models of the hyperbolic plane, one on a disc and the other on a Euclidean half-plane. Both models are later generalized to n dimensions by H. Poincaré [P08], and are now associated with his name. All three of the above models are built in Euclidean space, and the latter two are conformal in the sense that the metric is a point-to-point scaling of the Euclidean metric. In his 1878 paper [K1878], Killing described a hyperboloid model of hyperbolic geometry by constructing the stereographic projection of Beltrami's disc model onto the hyperbolic space. This hyperboloid model was generalized to n dimensions by Poincaré. There is another model of hyperbolic geometry built in spherical space, called the hemisphere model, which is also conformal. Altogether there are five well-known models for the n-dimensional hyperbolic geometry:

- the half-space model,
- the conformal ball model,
- the Klein ball model,
- the hemisphere model,
- the hyperboloid model.

The theory of hyperbolic geometry can be built in a unified way within any of the models. With several models one can, so to speak, turn the object around and scrutinize it from different viewpoints. The connections among these models are largely established through stereographic projections. Because stereographic projections are conformal maps, the conformal groups of n-dimensional Euclidean, spherical, and hyperbolic spaces are isomorphic to each other, and are all isomorphic to the group of isometries of hyperbolic (n+1)-space, according to observations of Klein [K1872], [K1872]. It seems that everything is worked out for unified treatment of the three spaces. In this chapter we go further.
We unify the three geometries, together with the stereographic projections, various models of hyperbolic geometry, in such a way that we need only one Minkowski space, where null vectors represent points or points at infinity in any of the three geometries and any of the models of hyperbolic space, where Minkowski subspaces represent spheres and hyperplanes in any of the three geometries, and where stereographic projections are simply rescaling of null vectors. We call this construction the homogeneous model. It serves as a sixth analytic model for hyperbolic geometry. We constructed homogeneous models for Euclidean and spherical geometries in previous chapters. There the models are constructed in Minkowski space by projective splits with respect to a fixed vector of null or negative signature. Here we show that a projective split with respect to a fixed vector of positive signature produces the homogeneous model of hyperbolic geometry. Because the three geometries are obtained by interpreting null vectors of the same Minkowski space differently, natural correspondences exist among geometric entities and constraints of these geometries. In particular, there are correspondences among theorems on conformal properties of the three geometries. Every algebraic identity can be interpreted in three ways and therefore represents three theorems. In the last section we illustrate this feature with an example. The homogeneous model has the significant advantage of simplifying geometric computations, because it employs the powerful language of Geometric Algebra. Geometric Algebra was applied to hyperbolic geometry by H. Li in [L97], stimulated by Iversen's book [I92] on the algebraic treatment of hyperbolic geometry and by the paper of Hestenes and Ziegler [HZ91] on projective geometry with Geometric Algebra.
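For readers who want a concrete anchor for the phrase "null vectors represent points": in one common notation for the conformal (homogeneous) model - which may differ from the notation used in this chapter - a point $x$ of Euclidean $\mathbb{R}^n$ is represented, up to scale, by the null vector

$$X \;=\; e_0 \;+\; x \;+\; \tfrac{1}{2}\,x^{2}\,e_\infty ,$$

where $e_0$ and $e_\infty$ are two additional null vectors with $e_0 \cdot e_\infty = -1$, adjoined to the Euclidean directions to form a Minkowski space. A direct expansion gives $X^2 = x^2 - x^2 = 0$, and rescaling $X$ does not change the Euclidean point it represents - which is the sense in which stereographic projections reduce to rescalings of null vectors. The sign conventions here are an assumption on my part; only the general shape of the construction is intended.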
Figure 3. Volumetric Changes of the Caudate. The plots show the significant caudate volumetric differences in the high vs. low frequency migraine subjects (left: p < 0.025; right: p < 0.006). The volumes have been normalized to the total intracranial volume to scale for the brain volume of each subject. Bar heights represent the mean value for each volumetric measurement. Error bars represent the 95% confidence interval of the mean. * denotes significance. (Maleki et al., Molecular Pain 2011, 7:71, doi:10.1186/1744-8069-7-71)
Corona, NY Algebra 1 Tutor Find a Corona, NY Algebra 1 Tutor ...Lastly, I feel comfortable teaching material in many different ways to ensure full understanding. In the past, for example, I've drawn pictures, sung songs, broken down information into bullet points, acted it out, made flash cards, and so many more! Whatever works best for the student works for me. 37 Subjects: including algebra 1, chemistry, physics, calculus ...I also have experience tutoring elementary school students on their reading and writing, and teaching English to Spanish-speaking adults in Manizales, Colombia. I have taught at educational summer camps in Texas and California. I base my teaching style on making our sessions relaxed and fun. 13 Subjects: including algebra 1, Spanish, English, algebra 2 ...I was fortunate enough to have graduated high school from a school of performing arts and then returned there to complete my student teaching. With the guidance of my professors and cooperating teachers, I became prepared to teach Albegra 1, Geometry, Algebra2/ Trigonometry, and Precalculus. I ... 5 Subjects: including algebra 1, geometry, algebra 2, precalculus ...I have been teaching/tutoring the math section of SSAT for many years. During the first session, I evaluate the student(s) to see which areas of the test the student needs to improve on. Afterwards, I use an individualized approach focusing on strengthening student's weakness as well as mastering other parts. 23 Subjects: including algebra 1, English, reading, geometry ...I have a knack for traps and advanced strategy, and although I'm not as practiced with all the openings, I have a talent for utilizing different defenses in creative ways. If your a beginner or intermediate player looking for a consistent challenge and advice on how to formalize, systematize, and improve your game, please contact me. My chess rates are extremely competitive. 5 Subjects: including algebra 1, ASVAB, GRE, chess
Re: st: margins after stcox with time-dependent covariate

From: Steve Samuels <sjsamuels@gmail.com>
To: statalist@hsphsun2.harvard.edu
Subject: Re: st: margins after stcox with time-dependent covariate
Date: Thu, 7 Mar 2013 12:53:50 -0500

Correction: -stcurve- after -stcox- _can_ plot smoothed estimates of the hazard function. I prefer -stpm2- because it has more flexible models for non-proportional hazards.

On Mar 6, 2013, at 11:05 AM, Steve Samuels wrote:

Mario Petretta asked about -margins- after -stcox-. Maarten (http://www.stata.com/statalist/archive/2013-03/msg00000.html) stated that -margins- after -stcox- can deal with quantities related to the estimated hazard ratios. This is because -margins- operates on e(b). He recommends -stpm2-, but -margins- after -stpm2- cannot operate on all functions of the baseline hazard. For example,

. margins i.x, predict(meansurv)

works, but

. margins i.x, predict(hazard)

does not.

I agree with Maarten that -stpm2- is preferable to -stcox- for computing descriptive statistics for a survival distribution. Notably, -stpm2- has a -predict, hazard- command with at() options. I recommend that Mario use these to describe the two-way interaction in his model. He will need to center the remaining variables and also use the "zero" option.

For time-dependent covariates, I think that the plotted hazard function is the only useful descriptive summary. (Hazard ratios are comparative summaries.) The survival function starting from time 0 is not descriptive when covariate values change. What would the curve S(t | X(t) = c) describe? It would describe survival only for a population for whom the value of X was c throughout.

Data with time-dependent covariates are multiple-record data. -stcox- has a cluster(id) option that ensures correct standard errors for such data, but -stpm2- does not. An earlier version, -stpm- by Patrick Royston (from SSC), does have the cluster option, so Mario should use that for estimating standard errors. Like -stpm2-, -stpm- has a tvc() option for estimating time-varying coefficients of some variables. This is useful for checking for non-proportionality. The algorithm in -stpm2- is more flexible than that in -stpm-, so I would do the check with -stpm2-.

One other point: Mario used -stsplit- at failures. This works for -stcox- because only values at failures are relevant. But it will not work for -stpm- or -stpm2-, which require the exact times of change for time-dependent covariates. The last example in the Manual entry for -stset- shows how to prepare this kind of data. There is also a Stata Tip by Ben Jann: Stata tip 8: Splitting time-span records with categorical time-varying covariates. The Stata Journal (2004), 4, Number 2, pp. 221–222. This can be downloaded for free.
Alviso Trigonometry Tutor Find an Alviso Trigonometry Tutor ...I can also teach Chinese at all levels. I am patient and kind. I care about the student's academic growth as well as their personal growth. 11 Subjects: including trigonometry, calculus, statistics, geometry ...I like to talk through examples and discuss the problems, to ensure there is a true understanding of the concepts. I'm currently in school to gain my credentialing in teaching in order to teach Mathematics for grades 6-12, and have been tutoring for over 10 years. I do have a passion for Math because I have my Bachelors of Science in Mathematics and Masters of Science in Actuarial 9 Subjects: including trigonometry, geometry, algebra 1, algebra 2 ...I've taught several students in differential equations before and am well versed with the methods involved in solving all common types of differential equation - be it analytic, numerical, or symbolic (computational). I work as a mathematician and machine learning expert in an analysis group at ... 28 Subjects: including trigonometry, chemistry, physics, statistics ...I have taught junior high, high school, and community college, and I have developed and taught high school courses for homeschoolers. I also was a MATHCOUNTS coach for fourteen years and worked with the Charlotte Math Club for almost 20 years. I am a very patient person and enjoy working with students who have lots of questions about math. 15 Subjects: including trigonometry, calculus, geometry, algebra 1 ...In addition, I've prepared many students for the SAT 1, SAT 2, and ACT standardized tests including some who achieved perfect scores. If my experience as an educator has taught me anything, it has taught me that every student is different: different personalities, different motivations, differen... 14 Subjects: including trigonometry, chemistry, calculus, physics
Differential Equations: x(dy/dx) = 6y + (12x^4)[y^(2/3)] I'm stuck right now on this equation: x(dy/dx) = 6y + (12x^4)[y^(2/3)] The above equation looks nonlinear, and not separable. At the moment, I can't think of any method to employ when it comes to solving this equation for y, of the few methods I've learned so far up to this point. Again, separating variables didn't work for me, and it doesn't look like the integrating factor method will work either. Plus, it doesn't look exact at all. Any ideas? Re: Differential Equations: x(dy/dx) = 6y + (12x^4)[y^(2/3)] Substitute u^3 = y. Hence 3u^2 (du/dx) = dy/dx. Equation becomes 3xu^2 (du/dx) = 6u^3 + 12x^4u^2 Divide out 3u^2 and the equation is linear and can be solved for u in the usual way for linear first order equations. Then backsubstitute u to get y. Re: Differential Equations: x(dy/dx) = 6y + (12x^4)[y^(2/3)] Karl wrote:Substitute u^3 = y. Hence 3u^2 (du/dx) = dy/dx. Equation becomes 3xu^2 (du/dx) = 6u^3 + 12x^4u^2 Divide out 3u^2 and the equation is linear and can be solved for u in the usual way for linear first order equations. Then backsubstitute u to get y. Aren't you the same Karl that runs Karl's Calculus Tutor? Anyway, I'm just making sure that I've done everything correctly as you said: u^3 = y 3u^2 (du/dx) = dy/dx 3xu^2 (du/dx) = 6u^3 + 12x^4u^2 Now, the next part, dividing out 3u^2 as you said I should leaves me with: x(du/dx) = 2u + 4x^2 Dividing both sides by x gives: du/dx = (2/x)u + 4x (du/dx) - (2/x)u = 4x I suppose the next step is to solve for the integrating factor (which I'll let be p(x)): p(x) = e^integral[(-2/x)] dx p(x) = e^(-2ln|x| + C) p(x) = Ce^ln(1/x^2) p(x) = C/x^2 Multiplying the ODE by this integrating factor (omitting the C since it's sort of extraneous anyway, so now it's just x^-2) gives me: (x^-2)(du/dx) - (2x^-3)u = 4x^-1 The left side can be converted to: (d/dx)[ux^-2] = 4x^-1 Multiplying both sides by dx: d(ux^-2) = 4x^-1 dx Integrating both sides: ux^-2 = 4ln|x| + C Dividing both sides by x^-2 leaves: u = 4x^2ln|x| + Cx^2 u^3 = (4x^2ln|x| + Cx^2)^3 y = (4x^2ln|x| + Cx^2)^3 That should be the solution, from what I calculated. Re: Differential Equations: x(dy/dx) = 6y + (12x^4)[y^(2/3)] Yes I am the same Karl as on KCT. In my previous post on this thread I got as far as 3xu^2 (du/dx) = 6u^3 + 12x^4u^2 where u^3 = y Dividing out 3u^2: x (du/dx) = 2u + 4x^4 You seem to have miscopied the last exponent in your solution. This equation is linear. To put it into standard form, divide by x and rearrange: du/dx - 2u/x = 4x^3 Your integrating factor appears to be correct. So other than a clerical mistake, you appear to have it nailed. Run your steps with the correction, then try putting the resulting solution back into the original equation to see if checks out. Re: Differential Equations: x(dy/dx) = 6y + (12x^4)[y^(2/3)] Karl wrote:Yes I am the same Karl as on KCT. In my previous post on this thread I got as far as 3xu^2 (du/dx) = 6u^3 + 12x^4u^2 where u^3 = y Dividing out 3u^2: x (du/dx) = 2u + 4x^4 You seem to have miscopied the last exponent in your solution. This equation is linear. To put it into standard form, divide by x and rearrange: du/dx - 2u/x = 4x^3 Your integrating factor appears to be correct. So other than a clerical mistake, you appear to have it nailed. Run your steps with the correction, then try putting the resulting solution back into the original equation to see if checks out. 
I probably should've done the work on paper instead of typing it all out; maybe then, I could have avoided that careless error early on. But yes, now I see my mistake. I'll continue from (du/dx) - (2/x)u = 4x^3: x^-2(du/dx) - (2x^-3)(u) = 4x (d/dx)(ux^-2) = 4x d(ux^-2) = 4x dx ux^-2 = integral(4x)dx ux^-2 = 2x^2 + C u = 2x^4 + Cx^2 u^3 = (2x^4 + Cx^2)^3 y = (2x^4 + Cx^2)^3 x(dy/dx) = 6y + (12x^4)[y^(2/3)] 3x[(2x^4 + Cx^2)^2](8x^3 + 2Cx) = 6[(2x^4 + Cx^2)^3] + (12x^4)(2x^4 + Cx^2)^2 Next step: factoring out (2x^4 + Cx^2)^2 from each side: (2x^4 + Cx^2)^2[3x(8x^3 + 2Cx)] = (2x^4 + Cx^2)^2[6(2x^4 + Cx^2) + 12x^4] Dividing both sides by (2x^4 + Cx^2)^2 leaves: 3x(8x^3 + 2Cx) = 6(2x^4 + Cx^2) + 12x^4 24x^4 + 6Cx^2 = 12x^4 + 6Cx^2 + 12x^4 Subtracting 6Cx^2 from both sides leaves: 24x^4 = 12x^4 + 12x^4 24x^4 = 24x^4 So everything checks out. Re: Differential Equations: x(dy/dx) = 6y + (12x^4)[y^(2/3)] Excellent work. The more general lesson here is that any time you have an equation in the form of: $p(x)\frac{dy}{dx} = q(x)y + r(x)y^s$ the way to attack it is to substitute $u^{\left(\frac{1}{1-s}\right)} = y$ so that $\left(\frac{1}{1-s}\right)u^{\left(\frac{s}{1-s}\right)}\left(\frac{du}{dx}\right) = \frac{dy}{dx}$ When you make this substitution, you will always be able to divide $u^{\left(\frac{s}{1-s}\right)}$ out of the equation to make it into a first-order linear. The only exception is when $s=1$, but then the equation is already linear.
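If anyone wants a machine check of the final answer, here is a minimal SymPy sketch (assuming SymPy is available; the symbols are declared positive so that the y^(2/3) power simplifies cleanly):

```python
import sympy as sp

x, C = sp.symbols('x C', positive=True)
y = (2*x**4 + C*x**2)**3          # the solution found above

# Residual of x*y' - 6*y - 12*x^4 * y^(2/3); it should simplify to 0.
residual = x*sp.diff(y, x) - 6*y - 12*x**4 * y**sp.Rational(2, 3)
print(sp.simplify(residual))      # expected output: 0
```

A printed 0 confirms that y = (2x^4 + Cx^2)^3 satisfies the original equation, matching the hand check done above.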
Reverse Polish Calcullator 11-19-2012 #1 Good evening. I'm studying the reverse polish calculator of KnR but there are some lines I'm not understanding very well. #include <stdio.h> #include <stdlib.h> #ifndef MAXOP #define MAXOP 100 #ifndef NUMBER #define NUMBER '0' int getop(char[]); void push(double); double pop(void); int main(void) int type; double op2; char s[MAXOP]; while( (type = getop(s)) != EOF) case NUMBER: case '+': push(pop() + pop()); case '-': op2 = pop(); push(pop() - op2); case '*': push(pop() * pop()); case '/': op2 = pop(); if(op2 != 0.0) push(pop() / op2); printf("error: zero divisor\n"); case '\n': printf("\t%.8g\n", pop()); printf("error: unknown command %s\n", s); return 0; #define MAXVAL 100 int sp = 0; double val[MAXVAL]; void push(double f) if(sp < MAXVAL) val[sp++] = f; printf("error: stack full, can't push %g\n", f); double pop(void) if(sp > 0) return val[--sp]; else { printf("error: stack empty\n"); return 0.0; #include <ctype.h> int getch(void); void ungetch(int); /* getop: get next operator or numeric operand */ int getop(char s[]) int i, c; while( (s[0] = c = getch()) == ' ' || c == '\t') s[1] = '\0'; if(!isdigit(c) && c != '.') return c; /* not a number */ i = 0; if(isdigit(c)) /* collect integer part */ while(isdigit(s[++i] = c = getch())) if(c == '.') /* collect fraction part */ while(isdigit(s[++i] = c = getch())) s[i] = '\0'; if(c != EOF) return NUMBER; #define BUFSIZE 100 char buf[BUFSIZE]; /* buffer for ungetch */ int bufp = 0; /* next free position in buf */ int getch(void) return (bufp > 0)? buf[--bufp] : getchar(); void ungetch(int c) if(bufp >= BUFSIZE) printf("Ungetch: too many characters\n"); buf[bufp++] = c; this section while( (s[0] = c = getch()) == ' ' || c == '\t') s[1] = '\0'; if(!isdigit(c) && c != '.') return c; /* not a number */ i = 0; if(isdigit(c)) /* collect integer part */ while(isdigit(s[++i] = c = getch())) Why did he insert the null terminator in s[1] ? to stop getting blanks ? after that, did he reset i to overwrite '\0' with an input ? (isdigit(s[++i]) ) ... if so, why did he insert the null terminator in the first place? I don't get it ... This code in my eyes does the following (line by line) □ Line 1 --> It will read a character.It will store this character to c.Then c will be stored in s[0].Then i will check if what was read was a space or a tab.If so read again and execute all the procedure again.In other words he 'eats' whitespaces and tabs by this way. □ Line 2 --> An empty while body, because all work is done in line 1. □ Line 3 --> Set s[1] to the null terminator, because no matter how many times the while loop is going to be executed we always will assign c to s[0] □ Line 4 --> If c is not a digit AND it is not a dot, then go to line 5 □ Line 5 --> return c because it is not a number □ Line 6 --> Assign to i the value of zero □ Line 7 --> If c is a digit go to line 8 □ Line 8 --> First increment i by one.Then read a char, assign it to c, and then assign c to s[i].Check if s[i] is digit.If so the while loop will be executed Hmm.. I admit it is not clear why he sets s[1] to null terminator.But if you look the whole function, when on line 98 and 101 the if conditions are false, then this means that i still has value of zero,thus he will assign first element with null terminator ( it was read by the first while in the function ).But again i am not really convinced that s[1] = '\0' was mandatory... EDIT : Can you please provide me with the page of the book where this function lies to? 
Last edited by std10093; 11-19-2012 at 05:01 PM. when on line 98 and 101 the if conditions are false But if the input isn't a digit, then c will be returned. by the away, congratulations for the 1000th post !!! [open champagne] Haha,yeah you are right....!! I see your point.. can you tell me the page ? page 90 of the second edition. have a good night people! Reading symbols from /home/ethereal/C/DataStructures/reversepolish...done. (gdb) break 101 Breakpoint 1 at 0x4008fe: file reversepolish.c, line 101. (gdb) break 111 Breakpoint 2 at 0x4009ba: file reversepolish.c, line 111. (gdb) break 114 Breakpoint 3 at 0x4009d6: file reversepolish.c, line 114. (gdb) run Starting program: /home/ethereal/C/DataStructures/reversepolish 2.5 3.5 + 4.0 * Breakpoint 2, getop (s=0x7fffffffe640 "2.5 ") at reversepolish.c:111 111 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "2.5") at reversepolish.c:114 114 return NUMBER; (gdb) cont Breakpoint 2, getop (s=0x7fffffffe640 "3.5 ") at reversepolish.c:111 111 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "3.5") at reversepolish.c:114 114 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "+") at reversepolish.c:101 101 return c; /* not a number */ (gdb) cont Breakpoint 2, getop (s=0x7fffffffe640 "4.0 ") at reversepolish.c:111 111 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "4.0") at reversepolish.c:114 114 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "*") at reversepolish.c:101 101 return c; /* not a number */ (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "\n") at reversepolish.c:101 101 return c; /* not a number */ (gdb) cont I think I understood the reason of assigning '\0' to s[1]. he's eliminating ' ' or '\t'. As you can see, Those don't appear inside s after the digits start to be input. My idea is strengthened by the fact the null terminator is overwritten with the first digit input. Almost in the same way, he overwrites the next char after the digit with '\0' (like the program did with ' '). That's the meaning of assigning '\0' to s[i]. Last edited by thames; 11-20-2012 at 06:31 AM. Still i am not convinced.Which will the problem if we remove this line? Still i am not convinced.Which will the problem if we remove this line? the problem, I don't know. But I noticed a difference in the last three breaks. There's a ".0" after some chars. without s[1] = '\0' gdb -q reversepolish Reading symbols from /home/ethereal/C/DataStructures/reversepolish...done. (gdb) break 99 Breakpoint 1 at 0x40086f: file reversepolish.c, line 99. (gdb) break 108 Breakpoint 2 at 0x4008e5: file reversepolish.c, line 108. (gdb) break 111 Breakpoint 3 at 0x400901: file reversepolish.c, line 111. 
(gdb) run Starting program: /home/ethereal/C/DataStructures/reversepolish 1.5 2.5 + 4.0 * Breakpoint 2, getop (s=0x7fffffffe640 "1.5 ") at reversepolish.c:108 108 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "1.5") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 2, getop (s=0x7fffffffe640 "2.5 ") at reversepolish.c:108 108 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "2.5") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "+.5") at reversepolish.c:99 99 return c; /* not a number */ (gdb) cont Breakpoint 2, getop (s=0x7fffffffe640 "4.0 ") at reversepolish.c:108 108 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "4.0") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "*.0") at reversepolish.c:99 99 return c; /* not a number */ (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "\n.0") at reversepolish.c:99 99 return c; /* not a number */ with s[1] = '\0'; 1.5 2.5 + 4.0 * Breakpoint 2, getop (s=0x7fffffffe640 "1.5 ") at reversepolish.c:108 108 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "1.5") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 2, getop (s=0x7fffffffe640 "2.5 ") at reversepolish.c:108 108 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "2.5") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "+") at reversepolish.c:99 99 return c; /* not a number */ (gdb) cont Breakpoint 2, getop (s=0x7fffffffe640 "4.0 ") at reversepolish.c:108 108 s[i] = '\0'; (gdb) cont Breakpoint 3, getop (s=0x7fffffffe640 "4.0") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "*") at reversepolish.c:99 99 return c; /* not a number */ (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "\n") at reversepolish.c:99 99 return c; /* not a number */ (gdb) cont Oh, now i see your point. Great observation.Bravo Thank you for the compliment. But what does that ".0" mean? a kind of junk ? When you encounter code such as this one, you can rewrite/refactor it to make it more readable: /* getop: get next operator or numeric operand */ int getop(char s[]) int i, c; do { c = getch(); } while (c == ' ' || c == '\t'); s[0] = c; s[1] = '\0'; if(!isdigit(c) && c != '.') return c; i = 0; /* Integer part */ do { s[++i] = c; c = getch(); } while (isdigit(c)); /* Fractional part */ if(c == '.') do { s[++i] = c; c = getch(); } while (isdigit(c)); s[i] = '\0'; if(c != EOF) return NUMBER; When the function is written this way, the intent becomes clear. On lines 6-8 the function reads the next character, skipping all spaces and tabs. On lines 10-11, the function constructs a string of that character into the char array specified by the caller. On lines 13-14, if the character was not a number, the function returns the character code. (The char array was filled with a string, to make other operations easier.) On lines 17-22, the function adds the integer part, if any, of a number into the character array. On lines 25-29, the function adds the fractional part, if any, starting with the decimal point, into the character array. On line 31, the function adds a NUL to the character array, so that the contents form a proper string. (The number just read.) The first character not part of the number is still in variable c. Therefore, on lines 33-34, if there really was a character (and not just end of input), that character is unread: put back into the stream. 
(This is internal to the C library, just something that makes parsing complicated stuff easier. It is not visible outside the process in any way, it is just part of the buffering mechanisms the C library does.) Compare the original function and my version side-by-side; perhaps the contrast will make it easier to decipher the intent behind the code? (I do take this approach when I see obtuse code. One must be extra careful to test that the rewritten code works exactly like the original. To my shame I admit I was too lazy to do that step here, so there may be bugs or typos in my version.) Not really. This is what was left by previous call. I will use your own - beautiful - in my mind explanation ( which i really appreciate and thank you for that, because it really gave me the answer in a question i was thinking all the time in bed the last night. Thank you again .) See the following breakpoints Without the s[1] = '\0'; Breakpoint 3, getop (s=0x7fffffffe640 "2.5") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "+.5") at reversepolish.c:99 99 return c; /* not a number */ (gdb) cont Now with the s[1] = '/0'; Breakpoint 3, getop (s=0x7fffffffe640 "2.5") at reversepolish.c:111 111 return NUMBER; (gdb) cont Breakpoint 1, getop (s=0x7fffffffe640 "+") at reversepolish.c:99 99 return c; /* not a number */ (gdb) cont So in the first call we have 2.5 inside s <- the parameter of getop. Then we get the plus operator ( + ) . If we remove line s[1] = '\0' (first piece of code) then we have in the buffer 2.5 , but then when we read the + operator, because it is only one element - one cell, only the first element of the buffer is going to be overwritten, while the the second (had the '.' from previous call) and third (had the '5' from previous call) elements will not be overwritten. As a result in the second call of getop we will have inside the buffer +.5 Now ,with the line s[1]='\0'; we have in the first call of getop 2.5 inside the buffer and in the second call we have + inside the buffer , which is what we want Then in the last three breakpoints you have , you receive in the first call of the function 4.0 , thus buffer gets the 4.0 as content of it.Then you read * , but without the s[1]='\0' line you will get *.0 . The dot and the zero come from the 4.0 , because again only the first element of buffer (array s) will be overwritten. Can you understand what i am saying? Again thank for this beautiful question and your nice breakpoints (This is internal to the C library, just something that makes parsing complicated stuff easier. It is not visible outside the process in any way, it is just part of the buffering mechanisms the C library does.) actually, getch and ungetch were coded. If we remove line s[1] = '\0' (first piece of code) then we have in the buffer 2.5 , but then when we read the + operator, because it is only one element - one cell, only the first element of the buffer is going to be overwritten, while the the second (had the '.' from previous call) and third (had the '5' from previous call) elements will not be overwritten. Thank you. As clear as water!!! "many thanks high power. " Nominal code was very enlightening. I couldn't picture the getch char was being assigned to c before assigning it to s[0]. To be honest, that simultaneous assigning confused me a bit. Last edited by thames; 11-20-2012 at 04:52 PM. 
Cumulative Probability & Mean (Discrete & Continuous)

October 21st 2010, 01:59 PM

Hi Members, I am in a panic trying to work out the formulas below. I am unsure if I have posted in the correct place; my lecturer said it is basic primary statistics (which I can't believe - not in my primary school). I have to solve a similar formula for my "Simulation of Multi-media Networks" exam at university. The lecturer went over it all at breakneck speed and I could not grasp it. I am a mature student and it has been a long time since I did maths, and as far as statistics are concerned I have only learned to do quartiles, mean & median. I know it is a big ask, but I was hoping someone could help break down the formulas and explain the process the lecturer went through to get her answers. (I will be provided with the formula in the exam, which will be no good unless I know how to use it.) I have copied the details below as they are from handouts.

Cumulative Probability Distribution: cumulative probabilities from a (discrete) probability distribution.

$\begin{tabular}{|c|c|c|c|} \hline x & 0 & 1 & 2 \\ \hline P(x) & 0.2 & 0.5 & 0.3 \\ \hline \sum P(x) & 0.2 & 0.7 & 1 \\ \hline \end{tabular}$

Note: I can kind of see how the row $\sum P(x)$ is worked out, but I can't figure out how the row $P(x)$ is worked out at all.

Mean or Expected Value:

$\begin{tabular}{|c|c|c|c|} \hline x & 0 & 1 & 2 \\ \hline P(x) & 0.2 & 0.5 & 0.3 \\ \hline \end{tabular}$

Note: I can kind of see the correlation of the answer to the table, e.g. $x$ multiplied by $P(x)$ in each column, then added together, gives the answer 1.1, but on how this was worked out using the formula I am totally lost.

For the continuous variate: $f(x)=2x$, $0\leq x \leq 1$,

$E(x)=\int_0^1 2x^2\,dx=\left[ \frac{2x^3}{3} \right]_0^1=\frac{2}{3}$

Note: On the continuous variate I am also totally lost. Any help anyone has to offer to better my understanding of this formula would be gratefully appreciated.

October 21st 2010, 05:56 PM

For your first question, the values of P(X) at x = 0, 1, and 2 are given to you, from which you find $\sum P(x)$.

To find the mean (of a discrete distribution), which is also called the Expected Value of X:

$\sum x \times P(X=x)$

As stated by your question, this discrete random variable X takes the values 0, 1 and 2. Then the Mean or the Expected Value is calculated by multiplying each x value by its probability and summing them up:

$E[X] = 0P(X=0)+1P(X=1)+2P(X=2)= 0(0.2)+1(0.5)+2(0.3)$

The expected value or the mean of a continuous random variable, X, that takes values between a and b (in this case, 0 and 1) is computed by integrating x times its probability density function (p.d.f.) over the interval [a,b]. Your pdf here is f(x)=2x over the interval [0,1]:

$\displaystyle{E(X) = \int_0^1 x \times f(x)\, dx = \int_0^1 x \times 2x \,dx = \int_0^1 2x^2 \,dx}$

October 23rd 2010, 12:06 PM

Thanks very much harish21.
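For anyone who wants to double-check both answers numerically, here is a minimal Python sketch (SymPy is assumed for the integral; the variable names are mine):

```python
import sympy as sp

# Discrete case: E[X] = sum of x * P(X = x)
xs = [0, 1, 2]
ps = [0.2, 0.5, 0.3]
mean_discrete = sum(x*p for x, p in zip(xs, ps))
print(mean_discrete)                    # 1.1

# Continuous case: E[X] = integral of x * f(x) dx with f(x) = 2x on [0, 1]
x = sp.symbols('x')
mean_continuous = sp.integrate(x * 2*x, (x, 0, 1))
print(mean_continuous)                  # 2/3
```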
Nonlinearity of QFT

The Maxwell equations don't lead to self-interaction. That is perfectly fine, as there is no direct self-interaction of electromagnetic fields (i.e., none without other fields involved). In addition, "nonlinearity of QFT produces interactions" does not mean the reverse ("interactions in QFT have to come from nonlinearity") has to be true.
Chocolate Chip Probability Word Problem

Date: 9/14/95 at 10:32:39
From: Anonymous
Subject: Combinatorics

This is a puzzle that was given to me about two years ago. Given 12 chocolate cookies, there are 7 chocolate chips that are randomly placed into the cookies. What is the probability that at least one cookie has at least two chips? We've not found a way to calculate this, but have simulated it using the Monte Carlo method. Thanks for your help!!

Date: 9/17/95 at 15:42:0
From: Doctor Robin
Subject: Re: Combinatorics

My interpretation of the problem is that each of the 7 chips is equally likely to be in each of the 12 cookies, independently of the others. For an exact answer in this case, simply drop in the chips one at a time and see whether each chip goes into a different cookie. The probability that the second chip goes somewhere other than the first is 11/12; then the third chip goes somewhere different from either of the first two with probability 10/12, and so on. This shows that the probability of all seven landing in different cookies is

(1)  (11 * 10 * 9 * 8 * 7 * 6) / 12^6 = .1114...

and thus that the probability you want is 1 - .1114... = .8886.

In general, if the number of chips is k and the number of cookies is n, you can get a reasonable approximation as follows. There are k*(k-1)/2 pairs of chips; each pair will fall in the same cookie with probability 1/n; thus the probability that no two chips fall in the same cookie would be

(2)  (1 - 1/n)^[k*(k-1)/2] = approx. exp(-k(k-1)/(2n))

if all the pairs acted independently. While they don't, you still get a good approximation. In the above example, for instance, (2) yields .1609 = approx. .1738 instead of the exact .1114. The approximation gets better as n and k grow.

One could also interpret the problem under the assumption that the chips are inherently indistinguishable, so that every distribution of 7 chips into cookies 1 thru 12 is equally likely. There are 18 choose 7 such arrangements, and 12 choose 7 of these have no two chips in a cookie, so this gives

11 * 10 * 9 * 8 * 7 * 6 / (18*17*16*15*14*13) = approx. .0249,

so here the probability you're looking for is .9751. This is a reasonable interpretation only if the chips behave like photons or other bosons being partitioned into quantum mechanical states, which is a dubious assumption.

Finally, if both cookies and chips are indistinguishable, one gets only 1 arrangement with no double chips out of a total of 15 arrangements, giving a chance of 14/15 or .9333.

--- Dr. Robin
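Since the original question mentions a Monte Carlo simulation, here is a minimal Python sketch for the first interpretation (distinguishable chips, independent and uniform over cookies); the trial count and variable names are my own choices.

```python
import random
from math import prod

n, k = 12, 7   # cookies, chips

# Exact: P(all chips land in different cookies) = (11*10*...*6) / 12^6
p_all_different = prod(range(n - k + 1, n)) / n**(k - 1)
print(1 - p_all_different)            # ≈ 0.8886

# Monte Carlo check: fraction of trials with at least one shared cookie
trials = 100_000
hits = sum(
    len(set(random.randrange(n) for _ in range(k))) < k
    for _ in range(trials)
)
print(hits / trials)                  # ≈ 0.889, give or take sampling noise
```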
{"url":"http://mathforum.org/library/drmath/view/56514.html","timestamp":"2014-04-21T15:30:32Z","content_type":null,"content_length":"7481","record_id":"<urn:uuid:4b3d660e-daeb-4fd9-819a-ef579bb14e2f>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00112-ip-10-147-4-33.ec2.internal.warc.gz"}
Problem 338. Specific toolboxes

Given a string that is the name of a MATLAB toolbox, return true if it is available on the Cody solvers evaluation system, false otherwise.

Problem Comments

David Young on 18 Feb 2012: I'm having difficulty understanding why solution 45103 to this problem fails - am I doing something really stupid?

David Young on 20 Feb 2012: The results are quite misleading. Although the IPT is shown as available using the ver command, it actually is not in the path - and Helen Chen confirms that this is deliberate in her reply to my question at http://www.mathworks.co.uk/matlabcentral/answers/29536-cody-toolbox-availability

Solution Comments

David Young on 18 Feb 2012: I'd be grateful if someone could explain why this solution fails.

Alfonso Nieto-Castanon on 18 Feb 2012: despite being 'installed', the image processing toolbox does not seem to be in the path, and the cody machines do not seem to have access to a license for it...

David Young on 20 Feb 2012: Thanks Alfonso - that seems to be the right explanation, and Helen Chen has commented at http://www.mathworks.co.uk/matlabcentral/answers/29536-cody-toolbox-availability
{"url":"http://www.mathworks.nl/matlabcentral/cody/problems/338-specific-toolboxes","timestamp":"2014-04-21T02:07:03Z","content_type":null,"content_length":"39632","record_id":"<urn:uuid:a223fcb4-e32e-4450-b3ba-9b5778ded131>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00500-ip-10-147-4-33.ec2.internal.warc.gz"}
Annoying algebra
May 19th 2013, 04:57 PM

It's late and I really am at an end on this, I just can't seem to get it.
Note: A + B = C + D. So, I need to find B in terms of A.

(A+B)/(A-B) = k1/k2 * (C+D)/(C-D)

cross multiply:
(A+B)k2 / (A-B)k1 = (C+D)/(C-D)

but A+B = C+D:
(A+B)k2 / (A-B)k1 = (A+B)/(C-D)

=> [(A+B)k2 / (A-B)k1] / (A+B) = C-D
(A+B)k2 / [(A-B)k1 * (A+B)] = C-D

Sorry, I can't do more, please help.

Re: Annoying algebra
May 19th 2013, 10:52 PM

Hey froodles01. Just for clarification, can you tell us all the information you started off with? You said A+B=C+D, which is one piece of information: is there any more? Since you have four variables to start off with, it means you will need one more to get a result in two variables. Can we assume that your second result involving A-B is given? (Sorry if it seems stupid to ask, but you said assume one thing and then you state something completely different.)

Re: Annoying algebra
May 20th 2013, 12:07 AM

Yes, there's always more . . . .

A steady beam of particles travels in the x-direction and is incident on a finite square barrier of height V0, extending from x = 0 to x = L. Each particle in the beam has mass m and total energy E0 = 2V0. Outside the region of the barrier, the potential energy is equal to 0. In the stationary-state approach, the beam of particles is represented by an energy eigenfunction of the form

A e^(i k1 x) + B e^(-i k1 x)   for x < 0
C e^(i k2 x) + D e^(-i k2 x)   for 0 <= x <= L
F e^(i k1 x)                   for x > L

where A, B, C, D and F are constants and k1 and k2 are wave numbers appropriate for the different regions.

Express the wave numbers k1 and k2 in terms of V0, m and ħ. (Which I have done - hooray: k1 = √(2mE0)/ħ and k2 = √(2m(E0 - V0))/ħ.)

Now consider the special case where k2 L = π/2 (which does not correspond to a transmission resonance). Use your answer to part (b) to show that

(A+B)/(A-B) = k1/k2 * (C+D)/(C-D)

Use your answer for part (a) and the above equation to find B in terms of A. Hence calculate the reflection coefficient, R, of the beam and deduce the value of the transmission coefficient, T.

Useful equations include
A + B = C + D   and   k1 A - k1 B = k2 C - k2 D
C e^(i k2 L) + D e^(-i k2 L) = F e^(i k1 L)   and   k2 C e^(i k2 L) - k2 D e^(-i k2 L) = k1 F e^(i k1 L)
k2 L = π/2

. . . well, you did ask! I should be able to do this, but left myself too little time & now I'm too thick to think enough.
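For what it's worth, here is a small SymPy sketch (my own check, not from the thread) that feeds the four matching conditions listed in the post into a solver for the special case of the problem, where E0 = 2V0 (so k1 = √2·k2) and k2·L = π/2, and reads off B in terms of A together with R and T.

```python
import sympy as sp

A = sp.symbols('A')
B, C, D, F = sp.symbols('B C D F')
k2, L = sp.symbols('k2 L', positive=True)
k1 = sp.sqrt(2) * k2            # from E0 = 2*V0:  k1^2/k2^2 = E0/(E0 - V0) = 2
kL = sp.pi / 2                  # the special case k2*L = pi/2

eqs = [
    sp.Eq(A + B, C + D),                                    # psi continuous at x = 0
    sp.Eq(k1 * (A - B), k2 * (C - D)),                      # psi' continuous at x = 0
    sp.Eq(C * sp.exp(sp.I * kL) + D * sp.exp(-sp.I * kL),
          F * sp.exp(sp.I * k1 * L)),                       # psi continuous at x = L
    sp.Eq(k2 * (C * sp.exp(sp.I * kL) - D * sp.exp(-sp.I * kL)),
          k1 * F * sp.exp(sp.I * k1 * L)),                  # psi' continuous at x = L
]

sol = sp.solve(eqs, [B, C, D, F], dict=True)[0]
B_over_A = sp.simplify(sol[B] / A)
R = sp.simplify(sp.Abs(B_over_A) ** 2)
print(B_over_A)        # expect 1/3
print(R, 1 - R)        # expect R = 1/9 and T = 1 - R = 8/9
```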
{"url":"http://mathhelpforum.com/algebra/219096-annoying-algebra-print.html","timestamp":"2014-04-20T17:36:01Z","content_type":null,"content_length":"6170","record_id":"<urn:uuid:ae7edd59-986e-4ea2-9d78-ec026e590ea0>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00097-ip-10-147-4-33.ec2.internal.warc.gz"}
Sample exercises from the listed worksheets:
4) The first two terms of an arithmetic sequence are 11 and 9.5; find the tenth term.
5) The first and second terms of an arithmetic sequence are 7 and 4, respectively. [question truncated]

Matching worksheets and handouts:
• Great Discoveries in Mathematics (CTY Course Syllabus) - Week 1, Day 1: Introductions and Getting Started; Origin of Numbers; the origin of the ...
• Linear Equations and Arithmetic Sequences - Step 3: the points (n, u_n) of an arithmetic sequence are always collinear, because to get from one point to the next you always move over 1 unit and up b units, where b is the ...
• MAC2-PS-823590 - Identify the rule: begin with 1 and multiply each term by 5; classify the sequence as arithmetic, geometric, or neither. Hint: the first term is 1 and the second is 5.
• Emerging Order: Accumulated Growth Worksheet Solutions - 1. Identify each of the following as geometric or arithmetic and write down a general formula for the nth term u_n and the n ...
• Honors Algebra 2 Unit Calendar beginning August 25th, 2010 ...
• Honors Geometry: Arithmetic Sequences and Series - Sums from Rectangles: a geometric approach to sums of arithmetic sequences with common ...; what this lesson is about: definition of arithmetic sequence ...
• Comparing Arithmetic and Geometric Sequences (Algebra 2, Chapter 11 Worksheets) - classify each sequence as arithmetic, geometric, or neither; then find the next two terms ...
• Algebra 2B - Arithmetic Sequence: the difference ...
• Sequences and Series: Worksheet #2 - using your calculator to find partial sums; there are convenient formulas for finding partial sums of arithmetic and ...
• Geometric Sequences (Worksheet by Kuta Software LLC) - given a term in a geometric sequence and the ...; given the first term and the common ratio of a geometric sequence ...
• Sequences and Series - the patterns of interest are arithmetic and geometric sequences and series; a sequence is a list of numbers, such as 1, 2, 3, 4, which may stop at some point or continue ...
• Geometric Sequences - a geometric sequence begins with 5 and has a common ratio of 1/4; what is the sequence's 4th term? 10. Standardized Test Practice: the 15th term of a geometric sequence is ...
• Quiz: Arithmetic and Geometric Sequences and Series - find d or r, a_n and S_n ...
• Teaching and Learning Guide 5: Finance and Growth - Section 2: Arithmetic and Geometric Sequences and Series ...
• Sequences and Summations (Stanford University, CS103A Handout #41, November 3, 2008, Robert Plummer) - "A mathematician, like a poet or a painter, is a maker of patterns." (G. H. Hardy) ...
• Math 110, Worksheet/Final Preparation: Sequences and Series - given the following sequence, determine if it is arithmetic, geometric or neither ...
• Section 6-3: Arithmetic and Geometric Sequences - Exercise 6-3A: in Problems 1 and 2, determine whether the following can be the first three terms of an arithmetic or geometric sequence, and, if so, find the ...
• Basic Terms (Central Valley Christian Schools, Visalia, CA; Precalculus Chapter 12, Section 12.1 - Arithmetic Sequences and Series) - a sequence is an ordered list of numbers; each number in a sequence is called a term.
• Yosemite High School, 50200 Road 427, Oakhurst, CA 93644, (559) 683-4667 - course title: Algebra 2; department: Mathematics; requirement satisfied: high school ...
• Sequence and Series Worksheet Solutions - ... such that a, b, c, d form an increasing arithmetic sequence and a, b, d form a geometric ...; Problem 1: the sum of 27 consecutive integers is 2646; what is the largest of those integers? In any arithmetic sequence, the average is the median.
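A worked answer to the first exercise quoted above (my own, not taken from any of the listed worksheets):

```python
# a1 = 11 and a2 = 9.5 give common difference d = 9.5 - 11 = -1.5,
# so the tenth term is a1 + 9*d.
a1, d = 11, 9.5 - 11
print(a1 + 9 * d)     # -2.5
```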
{"url":"http://www.cawnet.org/docid/arithmetic+and+geometric+sequence+worksheet/","timestamp":"2014-04-21T07:05:25Z","content_type":null,"content_length":"55487","record_id":"<urn:uuid:f2fab434-f82c-49b6-b91f-937f8555fd56>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
COMP 205: Official Syllabus (UNC-CH Computer Science)

COMP 205: Scientific and Geometric Computation (3 hours)
Official Syllabus approved April 1984

Course Objectives: Teach error analysis and efficiency analysis via a driving problem that requires geometric algorithms.
Prerequisites: COMP 122, MATH 166.

Teach error analysis and efficiency analysis via a driving problem that requires geometric algorithms. Bring up error analysis approaches when they are relevant to the driving problem. Thus lectures should mix the motivating problem, the algorithms, and the methods of analysis. In addition, it is desirable that the course give an experience in scientific computation. Thus it should involve solving a real scientific problem, should require the solution of a range of numerical problems, and should involve the student in programming solutions to real parts of the problem, experiencing the speeds and accuracies that result from the choice of methods.

Driving Problem

In the syllabus below, the geometric algorithms, error analysis subjects, and efficiency analysis subjects are prescriptive. The particular driving problem is not. Here we give a particular driving problem to illustrate the feasibility of the approach, but the instructor is free to change the driving problem.

Rigid body simulation using the laws of Newtonian dynamics. It consists of
• Collision detection
• Contact determination (assume only point contacts)
• Expressing the motion constraints of the bodies at each contact point as a system of constraint equations. Main emphasis on how to formulate and solve this constraint system.
• Use of a first-order differential equation relating position and orientation with velocity and angular velocity, inertia tensor and mass. Setting up the partial differential equation system.
• Solving differential equations. Dealing with ill-conditioned Jacobians whenever an impulse is to be applied.
• Use of quadratic programming to solve the linear complementarity problem and use of mathematical programming methods (an optimization problem). Use of matrix inversion. Dealing with singular systems.

To solve the driving problem we need to know
• Convex hulls. Issues in computing the convex hull in three dimensions. Problems of numerical stability, which eventually boil down to computing the sign of a 4 × 4 determinant. Use of perturbation in floating point arithmetic. Use of exact arithmetic.
• Voronoi region computations. Issues of numeric stability. How to circumvent problems caused by floating point computation.
• Contact determination with imprecise information. Because of our representations, we have error in the model and error accumulated in computations. For example, due to floating point arithmetic we cannot exactly represent rotations for almost all angles (as the sines and cosines used in the matrix correspond to approximate representations). Problems with degenerate configurations.
• Some knowledge of Newtonian dynamics.
• Solving first and second order differential equations. Integrating the equations by taking finite steps.
• Dealing with ill-conditioned Jacobians.
• Solving optimization problems, in particular quadratic programming problems. This needs knowledge of different methods of solving linear systems and their comparison in terms of efficiency and accuracy.

There is no course project. Students should implement the driving problem as a series of assignments.
For example, start with easy dynamic simulation, such as air-hockey pucks (2D and circularly symmetric). Then add complexities week by week, so there is always a visualized dynamic simulation.

Typical Text

There is no single textbook that will cover this course. Text materials might include selected chapters from
• Pizer with Wallace, To Compute Numerically
• Preparata and Shamos, Computational Geometry
• Hoffman, Geometric and Solid Modeling

Course Outline
• Convex sets, convex hull in 2D and 3D, Voronoi regions, proximity problems, applications to collision detection between polyhedral objects, contact determination.
• Robust and error-free geometric computations, the problem of consistency in geometric computation due to floating point errors. Line intersection conditioning.
• Application to dynamic simulation.
• Error analysis: generation (arithmetic and representation), approximation, propagation, stability and convergence.
• Algorithm analysis: O() analysis.
• Solution of linear systems, ordinary and partial differential equations, function evaluation, nonlinear equations.

Syllabus (in 75 minute lectures)

This syllabus does not adequately intermix driving problem issues and subjects of algorithms and their analysis within lectures. Materials from adjacent lectures will need to be intermixed to accomplish this goal.
• Lecture 1: Driving problem overview. Absolute and relative error.
• Lecture 2: Generated error, propagated error, and their interaction.
• Lecture 3: Robust and error-free geometric operations. Problems of inconsistency in geometric computations, using line-intersection problems. Different sources of error (in the model, in the floating point computations).
• Lectures 4, 5: Arithmetic and representation error, floating point computing.
• Lectures 6, 7: Convex sets and computing the convex hull of points in 2D and 3D. Algorithm analysis.
• Lecture 8: Problems in computing the convex hull in 3D due to finite precision arithmetic and degenerate data. Use of perturbation methods and exact arithmetic.
• Lectures 9-12: Error propagation, partial derivatives, calculation graphs, sensitivity, condition number, analysis of Gaussian elimination with pivoting. QR method and singular value decomposition.
• Lecture 13: Voronoi regions and proximity problems. Algorithm analysis.
• Lecture 14: Collision detection between polyhedral models. Emphasis on convex polytopes only.
• Lecture 15: Closest features computation and use of Voronoi regions. Application to collision detection and contact determination.
• Lectures 16, 17: Approximation error and function evaluation, Taylor series error analysis. (If the course needs to be shortened, these are lectures that might be cut to one or omitted, replacing them with a short statement of the existence of error in the approximation of functions and the existence of methods to bound that error.)
• Lecture 18: Rigid body simulation. Formulation of constraint equations.
• Lecture 19: How to solve the differential equations and take care of ill-conditioned Jacobians.
• Lectures 20, 21: Stability analysis for recurrent methods.
• Lecture 22: Contact force computation as a problem requiring the solution of nonlinear equations.
• Lectures 23, 24: Convergence and error analysis for iterative methods, one method for solving partial differential equations, applications of error analysis to partial differential equations.
• Lecture 25: Contact force computation as an optimization problem. How to solve the quadratic programming problem (which is a convex programming problem).
• Lecture 26: Summary of error analysis, with application to the problem of contact force optimization; overall error analysis and designing algorithms taking efficiency as well as numerical accuracy into account.
• Lecture 27: Putting the driving problem together; system implementation.
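As a small illustration of the robustness issue that Lectures 3 and 8 revolve around, here is a sketch of my own (not part of the syllabus): an orientation predicate whose float result is trusted only when it is safely away from zero, with exact rational arithmetic as the fallback. The cutoff used here is a crude placeholder, not a rigorous error bound.

```python
from fractions import Fraction

def det3_float(p, q, r):
    """Twice the signed area of triangle pqr, evaluated in doubles."""
    return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])

def det3_exact(p, q, r):
    """Same determinant, evaluated exactly over the rational values of the inputs."""
    px, py = map(Fraction, p)
    qx, qy = map(Fraction, q)
    rx, ry = map(Fraction, r)
    return (qx - px) * (ry - py) - (qy - py) * (rx - px)

def orientation(p, q, r, eps=1e-12):
    """Filtered predicate: use the float sign when |det| is comfortably
    nonzero, otherwise decide with exact arithmetic."""
    d = det3_float(p, q, r)
    if abs(d) > eps:
        return (d > 0) - (d < 0)
    e = det3_exact(p, q, r)
    return (e > 0) - (e < 0)

print(orientation((0, 0), (1, 0), (1, 1)))    # +1: left turn
print(orientation((0, 0), (2, 1), (4, 2)))    #  0: exactly collinear
```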
{"url":"http://www.cs.unc.edu/Admin/Courses/descriptions/205.html","timestamp":"2014-04-17T01:40:05Z","content_type":null,"content_length":"10827","record_id":"<urn:uuid:2052a229-be48-4d53-bd8f-0c85d4f363d6>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00080-ip-10-147-4-33.ec2.internal.warc.gz"}
Mattapan Calculus Tutor Find a Mattapan Calculus Tutor ...Through getting a grade of "A" in at least four or five college courses I bring a good understanding of the deeper principles that algebra grapples with, and I enjoy tutoring the subject at all levels. I have a masters degree in math, which included student teaching several elementary undergradu... 29 Subjects: including calculus, reading, English, geometry ...This translates to the way I have always approached teaching situations, which is to guide students to the answer, but allow them to arrive at it themselves. In this way, they are able to fully grasp the concepts behind the problem and gains a deeper understanding of the subject. Feel free to shoot me an email if you’re interested in studying with me, or if you have any questions. 38 Subjects: including calculus, reading, English, writing ...I can help you with your legal studies. I have many years of experience tutoring the SAT. I have many years of experience tutoring the SAT. 29 Subjects: including calculus, reading, geometry, GED ...I have worked with students with ADD & ADHD extensively in my private tutoring. Some have been on meds.; others not, some have been on school IEPs, some not, some have been high school students, others middle and elementary students. My tutoring work for the Lexington public school system, run... 34 Subjects: including calculus, reading, English, geometry ...During my MBA studies, I worked as a Research and Teaching Assistant for Marketing and Strategy courses. I researched material for class lectures and exams, and I graded exams. I have taken and excelled at numerous sociology classes during my years of schooling. 67 Subjects: including calculus, English, statistics, reading
{"url":"http://www.purplemath.com/Mattapan_calculus_tutors.php","timestamp":"2014-04-18T00:39:45Z","content_type":null,"content_length":"23817","record_id":"<urn:uuid:b0a9bc56-b604-4573-9e8d-9658e66dc1fd>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00413-ip-10-147-4-33.ec2.internal.warc.gz"}
Projection of Points - Engineering Drawing - Learn Engineering Drawing Online

Projection of Points – The point's position will be given in the data; we need to find its projection on the H.P. (horizontal plane) and on the V.P. (vertical plane).

Problem 4.1, Projection of Points – Draw the projections of the points whose positions are given in the data.
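A minimal illustration of the setup (my own, not from the page): for a point given by its distance in front of the V.P. and its height above the H.P., the front view records the height and the top view records the distance from the V.P.

```python
def point_views(x, front_of_vp, above_hp):
    """Return (front view, top view) coordinates for a point given its
    distance in front of the V.P. and its height above the H.P."""
    front_view = (x, above_hp)      # elevation: keeps the height above H.P.
    top_view = (x, front_of_vp)     # plan: keeps the distance in front of V.P.
    return front_view, top_view

print(point_views(30, 20, 40))      # e.g. 20 mm in front of V.P., 40 mm above H.P.
```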
{"url":"http://www.engineeringdrawing.org/category/projection_of_points/","timestamp":"2014-04-19T04:20:09Z","content_type":null,"content_length":"32192","record_id":"<urn:uuid:ab37ca3a-42da-4e5b-a4ff-6bce0b7740a1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00169-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability Tutors Irving, TX 75062 Math Certified & Experienced Tutor for Math, Sciences, ELA, Test Prep ...trices, introduction to complex numbers, exponential and logarithmic functions, rational expressions, polynomials, conic sections, , introduction to sequences and series. I have enjoyed chemistry since I was in elementary school and received a chemistry... Offering 10+ subjects including probability and statistics
{"url":"http://www.wyzant.com/Grand_Prairie_Probability_tutors.aspx","timestamp":"2014-04-21T00:17:47Z","content_type":null,"content_length":"59726","record_id":"<urn:uuid:4775e488-aeb3-41ff-bcd9-f3244f8685e9>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
A General Approach for Hyperchaotifying n-dimensional Continuous-Time Systems

Z. Elhadj^1, J. C. Sprott^2
^1 Department of Mathematics, University of Tébessa, (12002), Algeria. E-mail: zeraoulia@mail.univ-tebessa.dz / zelhadj12@yahoo.fr
^2 Department of Physics, University of Wisconsin, Madison, WI 53706, USA. E-mail: sprott@physics.wisc.edu

This paper is concerned with the rigorous construction of hyperchaotic attractors in n-dimensional continuous-time systems. The method of analysis is based on the construction of a matrix controller and on the standard definition of Lyapunov exponents for an asymptotically stable limit set of the original system. The relevance of this approach is that it shows mathematically the possibility of controlling asymptotically stable limit sets of n-dimensional continuous-time systems to hyperchaos. This general approach is valid for all n-dimensional continuous-time systems with at least one asymptotically stable limit set.

Ref: E. Zeraoulia and J. C. Sprott, SciTech Journal of Science & Technology, 106-109 (2012)
The complete paper is available in PDF format.
{"url":"http://sprott.physics.wisc.edu/pubs/paper384.htm","timestamp":"2014-04-20T16:01:42Z","content_type":null,"content_length":"3197","record_id":"<urn:uuid:aa3471a8-d5aa-45ae-b896-a4457948f6ad>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00540-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: February 2011

Re: NIntegrate and speed
• To: mathgroup at smc.vnet.net
• Subject: [mg116787] Re: NIntegrate and speed
• From: Daniel Lichtblau <danl at wolfram.com>
• Date: Mon, 28 Feb 2011 04:59:54 -0500 (EST)

----- Original Message -----
> From: "Marco Masi" <marco.masi at ymail.com>
> To: mathgroup at smc.vnet.net
> Sent: Sunday, February 27, 2011 3:35:46 AM
> Subject: [mg116780] NIntegrate and speed
> I have the following problems with NIntegrate.
> 1) I would like to make the following double numerical integral converge without errors
>
> R = 8000; Z = 1; rd = 3500;
> NIntegrate[Exp[-k Abs[Z]]/(1 + (k rd)^2)^1.5 (NIntegrate[Cos[k R Sin[\[Theta]]], {\[Theta], 0, \[Pi]}]), {k, 0, \[Infinity]}]
>
> It tells non numerical values present and I don't understand why, since it evaluates finally a numerical value? 0.000424067

You presented it as an iterated integral. Mathematically that is fine but from a language semantics viewpoint you now have a small problem. It is that the outer integral cannot correctly do symbolic analysis of its integrand but it may try to do so anyway. In essence, the outer integrand "looks" to be nonnumerical until actual values are plugged in for the outer variable of integration.

There are (at least) two ways to work around this. One is to recast as a double (as opposed to iterated) integral.

Timing[i1 = NIntegrate[Exp[-k*Abs[Z]]/(1 + (k*rd)^2)^(3/2)*Cos[k*R*Sin[\[Theta]]], {\[Theta], 0, \[Pi]}, {k, 0, \[Infinity]}]]

{39.733, 0.0004240679194556497}

An alternative is to define the inner function as a black box that only evaluates for numeric input. In that situation the outer NIntegrate will not attempt to get cute with its integrand.

ff[t_?NumericQ] := NIntegrate[Cos[t*R*Sin[\[Theta]]], {\[Theta], 0, \[Pi]}]

In[90]:= Timing[i2 = NIntegrate[Exp[-k*Abs[Z]]/(1 + (k*rd)^2)^1.5*ff[k], {k, 0, \[Infinity]}]]
Out[90]= {26.63, 0.0004240673399701612}

> 2) Isn't the second integrand a cylindrical Bessel function of order 0? So, I expected that
> NIntegrate[Exp[-k Abs[Z]]/(1 + (k rd)^2)^1.5 BesselJZero[0, k R], {k, 0, \[Infinity]}]
> would do the same job. But it fails to converge and gives 0.00185584 - i 4.96939*10^-18. Trying with WorkingPrecision didn't make things better. How can this be fixed?

Use the correct symbolic form of the inner integral. It involves BesselJ rather than BesselJZero.

In[91]:= ff2[t_] = Integrate[Cos[t*Sin[\[Theta]]], {\[Theta], 0, \[Pi]}, Assumptions -> Element[t, Reals]]
Out[91]= \[Pi] BesselJ[0, Abs[t]]

In[92]:= Timing[i3 = NIntegrate[Exp[-k Abs[Z]]/(1 + (k*rd)^2)^(3/2)*ff2[k*R], {k, 0, \[Infinity]}]]
Out[92]= {0.7019999999999982, 0.0004240679192434893}

Not surprisingly this is much faster, and will help to get you past the speed bumps you allude to below.

> 3) The above NIntegrate calls will go into a loop and should be evaluated as fast as possible. How? With Compile, CompilationTarget -> "C", Parallelization, etc.?
> Any suggestions?
> Marco.

Compile will not help because most of the time will be spent in NIntegrate code called from the virtual machine or the run time library (the latter if you compile to C). Evaluating in parallel should help. Also there might be option settings that allow NIntegrate to handle this faster than by default but without significant degradation in quality of results. Here is a set of timings using a few different methods, with PrecisionGoal set fairly low (three digits).

In[109]:= Table[Timing[NIntegrate[Exp[-k Abs[Z]]/(1 + (k*rd)^2)^(3/2)*\[Pi] BesselJ[0, Abs[k*R]], {k, 0, \[Infinity]}, PrecisionGoal -> 3, Method -> meth]], {meth, {Automatic, "DoubleExponential", "Trapezoidal", "RiemannRule"}}]

During evaluation of In[109]:= NIntegrate::ncvb: NIntegrate failed to converge to prescribed accuracy after 9 recursive bisections in k near {k} = {0.0002724458978988764}. NIntegrate obtained 0.00042483953211734914` and 0.000012161444876769691` for the integral and error estimates. >>

Out[109]= {{0.6709999999999923, 0.0004240678889181539}, {0.0150000000000432, 0.0004240644189596502}, {0.03100000000000591, 0.0004240644189596502}, {0.04699999999996862, ...

I rather suspect there are more option tweaks that could make this faster still without appreciable degradation in quality of results.

Daniel Lichtblau
Wolfram Research
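As a side note (my own check, not part of the archived message): the identity behind ff2 above — that the inner integral over theta equals Pi times BesselJ[0, |t|] — is easy to confirm numerically, for example with SciPy.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

# Check Integrate[Cos[t Sin[theta]], {theta, 0, Pi}] == Pi * BesselJ[0, |t|]
# at a few sample values of t (i.e. of k*R in the thread's integrand).
for t in (0.5, 3.0, 10.0):
    lhs, _ = quad(lambda th: np.cos(t * np.sin(th)), 0.0, np.pi)
    rhs = np.pi * j0(abs(t))
    print(t, lhs, rhs)      # the two columns should agree to quad's tolerance
```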
{"url":"http://forums.wolfram.com/mathgroup/archive/2011/Feb/msg00740.html","timestamp":"2014-04-17T09:53:05Z","content_type":null,"content_length":"29127","record_id":"<urn:uuid:85b0cee8-6de4-4a75-83f9-f497c64e0e7a>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Des Moines, WA Prealgebra Tutor Find a Des Moines, WA Prealgebra Tutor ...I have been writing business documents and technical documents on science and engineering topics for over 40 years. The documents include business letters (of course), proposals for engineering projects, project reports, business and conference presentations, peer-reviewed scientific papers, tec... 21 Subjects: including prealgebra, chemistry, physics, English ...I thoroughly enjoy teaching, and tutored all through my college years. My goal as an instructor is to ensure that the student is comfortable with the material, understands the theory/process behind it, and through whatever means available (rote, practice, modeling, schematics, discussion, debate... 12 Subjects: including prealgebra, chemistry, geometry, ASVAB ...This covered a very large range of knowledge, so while I am educated in biology I am also well versed in other aspects of the life and physical sciences. I also managed undergraduates in my laboratory and taught them scientific techniques, data analysis, 'laboratory math', and safety training. ... 14 Subjects: including prealgebra, writing, geometry, algebra 1 ...Working with them and going over multiple problems until they understood the concepts they were struggling with. I have also taken a leadership program at the University of Berkeley and through it gained skills to successfully lead others through their challenges. Challenges such as ropes cours... 15 Subjects: including prealgebra, reading, Spanish, geometry ...Throughout my college career I had a special focus in mathematics. Outside of school, current events, video games, and the financial markets catches most of my attention--with the exception of my 5 month old dog, Misha. Tutoring: I currently tutor 9 students on a regular basis in a variety of s... 17 Subjects: including prealgebra, chemistry, physics, calculus
{"url":"http://www.purplemath.com/Des_Moines_WA_Prealgebra_tutors.php","timestamp":"2014-04-19T15:09:51Z","content_type":null,"content_length":"24218","record_id":"<urn:uuid:e24b159e-0481-4a7f-afd2-2da73a2ab21e>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00329-ip-10-147-4-33.ec2.internal.warc.gz"}
Fernwood, PA Algebra 2 Tutor Find a Fernwood, PA Algebra 2 Tutor Hi, my name is Jem. I hold a B.S. in Mathematics from Rensselear Polytechnic Institute (RPI), and I offer tutoring in all math levels as well as chemistry and physics. My credentials include over 10 years tutoring experience and over 4 years professional teaching experience. 58 Subjects: including algebra 2, reading, chemistry, calculus I have successfully tutored students from elementary school through college. With my patient and effective approach, students get personal attention tailored to their individual needs. I always encourage students to look beyond the obstacles and turn them into as solutions. 21 Subjects: including algebra 2, English, reading, algebra 1 ...Plus, over the years I have gained experienced working with students (through tutoring, teaching, and co-teaching) in various math subjects ranging from pre-algebra to calculus and several math subjects in between. I have a degree in mathematics. This includes many classes more advanced than Al... 16 Subjects: including algebra 2, English, physics, calculus ...Unlike the one-size-fits-all test-prep courses, and the overly-structured national tutoring companies, I always customize my methods and presentation for the student at hand. There is a big difference between a student striving for 700s on the SAT and one hoping to reach the 500s. Likewise, a student struggling with Algebra I is a far cry from one going for an A in honors 23 Subjects: including algebra 2, English, calculus, geometry ...The bottom line for the sciences is to utilize and apply its principles to the benefit of mankind. Sciences by itself is of no use to the common man unless its principles are applied to produce/manufacture goods that will benefit him directly or indirectly. Having industrial experience, I can e... 17 Subjects: including algebra 2, chemistry, physics, calculus Related Fernwood, PA Tutors Fernwood, PA Accounting Tutors Fernwood, PA ACT Tutors Fernwood, PA Algebra Tutors Fernwood, PA Algebra 2 Tutors Fernwood, PA Calculus Tutors Fernwood, PA Geometry Tutors Fernwood, PA Math Tutors Fernwood, PA Prealgebra Tutors Fernwood, PA Precalculus Tutors Fernwood, PA SAT Tutors Fernwood, PA SAT Math Tutors Fernwood, PA Science Tutors Fernwood, PA Statistics Tutors Fernwood, PA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Briarcliff, PA algebra 2 Tutors Bywood, PA algebra 2 Tutors Carroll Park, PA algebra 2 Tutors Darby, PA algebra 2 Tutors East Lansdowne, PA algebra 2 Tutors Eastwick, PA algebra 2 Tutors Kirklyn, PA algebra 2 Tutors Lansdowne algebra 2 Tutors Llanerch, PA algebra 2 Tutors Overbrook Hills, PA algebra 2 Tutors Primos Secane, PA algebra 2 Tutors Primos, PA algebra 2 Tutors Secane, PA algebra 2 Tutors Westbrook Park, PA algebra 2 Tutors Yeadon, PA algebra 2 Tutors
{"url":"http://www.purplemath.com/Fernwood_PA_algebra_2_tutors.php","timestamp":"2014-04-20T10:57:41Z","content_type":null,"content_length":"24253","record_id":"<urn:uuid:2653004b-cc46-43a8-be8c-30b2b9859f3c>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00554-ip-10-147-4-33.ec2.internal.warc.gz"}
October 11, 2008 This Week's Finds in Mathematical Physics (Week 270) John Baez Greg Egan has a new novel out, called "Incandescence" - so I want to talk about that. Then I'll talk about three of my favorite numbers: 5, 8, and 24. I'll show you how each regular polytope with 5-fold rotational symmetry has a secret link to a lattice living in twice as many dimensions. For example, the pentagon is a 2d projection of a beautiful shape that lives in 4 dimensions. Finally, I'll wrap up with a simple but surprising property of the number 12. But first: another picture of Jupiter's moon Io! Now we'll zoom in much closer. This was taken in 2000 by the Galileo probe: 1) A continuous eruption on Jupiter's moon Io, Astronomy Picture of the Day, http://apod.nasa.gov/apod/ap000606.html Here we see a vast plain of sulfur and silicate rock, 250 kilometers across - and on the left, glowing hot lava! The white dots are spots so hot that their infrared radiation oversaturated the detection equipment. This was the first photo of an active lava flow on another world. If you like pictures like this, maybe you like science fiction. And if you like hard science fiction - "diamond-scratching hard", as one reviewer put it - Greg Egan is your man. His latest novel is one of the most realistic evocations of the distant future I've ever read. Check out the website: 2) Greg Egan, Incandescence, Night Shade Books, 2008. Website at http://www.gregegan.net/INCANDESCENCE/Incandescence.html The story features two parallel plots. One is about a galaxy-spanning civilization called the Amalgam, and two of its members who go on a quest to our Galaxy's core, which is home to enigmatic beings that may be still more advanced: the Aloof. The other is about the inhabitants of a small world orbiting a black hole. This is where the serious physics comes in. I might as well quote Egan himself: "Incandescence" grew out of the notion that the theory of general relativity - widely regarded as one of the pinnacles of human intellectual achievement - could be discovered by a pre-industrial civilization with no steam engines, no electric lights, no radio transmitters, and absolutely no tradition of astronomy. At first glance, this premise might strike you as a little hard to believe. We humans came to a detailed understanding of gravity after centuries of painstaking astronomical observations, most crucially of the motions of the planets across the sky. Johannes Kepler found that these observations could be explained if the planets moved around the sun along elliptical orbits, with the square of the orbital period proportional to the cube of the length of the longest axis of the ellipse. Newton showed that just such a motion would arise from a universal attraction between bodies that was inversely proportional to the square of the distance between them. That hypothesis was a close enough approximation to the truth to survive for more than three centuries. When Newton was finally overthrown by Einstein, the birth of the new theory owed much less to the astronomical facts it could explain - such as a puzzling drift in the point where Mercury made its closest approach to the sun - than to an elegant theory of electromagnetism that had arisen more or less independently of ideas about gravity. Electrostatic and magnetic effects had been unified by James Clerk Maxwell, but Maxwell's equations only offered one value for the speed of light, however you happened to be moving when you measured it. 
Making sense of this fact led Einstein first to special relativity, in which the geometry of space-time had the unvarying speed of light built into it, then general relativity, in which the curvature of the same geometry accounted for the motion of objects free-falling through space. So for us, astronomy was crucial even to reach as far as Newton, and postulating Einstein's theory - let alone validating it to high precision, with atomic clocks on satellites and observations of pulsar orbits - depended on a wealth of other ideas and technologies. How, then, could my alien civilization possibly reach the same conceptual heights, when they were armed with none of these apparent prerequisites? The short answer is that they would need to be living in just the right environment: the accretion disk of a large black hole. When SF readers think of the experience of being close to a black hole, the phenomena that most easily come to mind are those that are most exotic from our own perspective: time dilation, gravitational blue-shifts, and massive distortions of the view of the sky. But those are all a matter of making astronomical observations, or at least arranging some kind of comparison between the near-black-hole experience and the experience of other beings who have kept their distance. My aliens would probably need to be sheltering deep inside some rocky structure to protect them from the radiation of the accretion disk - and the glow of the disk itself would also render astronomy immensely difficult. Blind to the heavens, how could they come to learn anything at all about gravity, let alone the subtleties of general relativity? After all, didn't Einstein tell us that if we're free-falling, weightless, in a windowless elevator, gravity itself becomes impossible to detect? Not quite! To render its passenger completely oblivious to gravity, not only does the elevator need to be small, but the passenger's observations need to be curtailed in time just as surely as they're limited in space. Given time, gravity makes its mark. Forget about black holes for a moment: even inside a windowless space station orbiting the Earth, you could easily prove that you were not just drifting through interstellar space, light-years from the nearest planet. How? Put on your space suit, and pump out all the station's air. Then fill the station with small objects - paper clips, pens, whatever - being careful to place them initially at rest with respect to the walls. Wait, and see what happens. Most objects will eventually hit the walls; the exact proportion will depend on the station's spin. But however the station is or isn't spinning, some objects will undergo a cyclic motion, moving back and forth, all with the same period. That period is the orbital period of the space station around the Earth. The paper clips and pens that are moving back and forth inside the station are following orbits that are inclined at a very small angle to the orbit of the station's center of mass. Twice in every orbit, the two paths cross, and the paper clip passes through the center of the space station. Then it moves away, reaches the point of greatest separation of the orbits, then turns around and comes back. This minuscule difference in orbits is enough to reveal the fact that you're not drifting in interstellar space. 
A sufficiently delicate spring balance could reveal the tiny "tidal gravitational force" that is another way of thinking about exactly the same thing, but unless the orbital period was very long, you could stick with the technology-free approach and just watch and wait. A range of simple experiments like this - none of them much harder than those conducted by Galileo and his contemporaries - were the solution to my aliens' need to catch up with Newton. But catching up with Einstein? Surely that was beyond hope? I thought it might be, until I sat down and did some detailed calculations. It turned out that, close to a black hole, the differences between Newton's and Einstein's predictions would easily be big enough for anyone to spot without sophisticated instrumentation. What about sophisticated mathematics? The geometry of general relativity isn't trivial, but much of its difficulty, for us, revolves around the need to dispose of our preconceptions. By putting my aliens in a world of curved and twisted tunnels, rather than the flat, almost Euclidean landscape of a patch of planetary surface, they came better prepared for the need to cope with a space-time geometry that also twisted and curved. The result was an alternative, low-tech path into some of the most beautiful truths we've yet discovered about the universe. To add to the drama, though, there needed to be a sense of urgency; the intellectual progress of the aliens had to be a matter of life and death. But having already put them beside a black hole, danger was never going to be far behind. As you can tell, this is a novel of ideas. You have to be willing to work through these ideas to enjoy it. It's also not what I'd call a feel-good novel. As with "Diaspora" and "Schild's Ladder", the main characters seem to become more and more isolated and focused on their work as they delve deeper into the mysteries they are pursuing. By the time the mysteries are unraveled, there's almost nobody to talk to. It's a problem many mathematicians will recognize. Indeed, near the end of "Diaspora" we read: "In the end, there was only mathematics". So, this novel is not for everyone! But then, neither is This Week's Finds. In fact, I was carrying "Incandescence" with me when in mid-September I left the scorched and smoggy sprawl of southern California for the cool, wet, beautiful old city of Glasgow. I spent a lovely week there talking math with Tom Leinster, Eugenia Cheng, Danny Stevenson, Bruce Bartlett and Simon Willerton. I'd been invited to the University of Glasgow to give a series of talks called the 2008 Rankin Lectures. I spoke about my three favorite numbers, and you can see the slides here: 3) John Baez, My favorite numbers, available at http://math.ucr.edu/home/baez/numbers/ I wanted to explain how different numbers have different personalities that radiate like force fields through diverse areas of mathematics and interact with each other in surprising ways. I've been exploring this theme for many years here. So, it was nice to polish some things I've written and present them in a more organized way. These lectures were sponsored by the trust that runs the Glasgow Mathematical Journal, so I'll eventually publish them there. I plan to add a lot of detail that didn't fit in the talks. 
I began with the number 5, since the golden ratio and the five-fold symmetry of the dodecahedron lead quickly to a wealth of easily enjoyed phenomena: from Penrose tilings and quasicrystals, to Hurwitz's theorem on approximating numbers by fractions, to the 120-cell and the Poincare homology sphere.

After giving the first talk I discovered the head of the math department, Peter Kropholler, is a big fan of Rubik's cubes. I'd never been attracted to them myself. But his enthusiasm was contagious, especially when he started pulling out the unusual variants that he collects, eagerly explaining their subtleties. My favorite was the Rubik's dodecahedron, or "Megaminx":

4) Wikipedia, Megaminx, http://en.wikipedia.org/wiki/Megaminx

Then I got to thinking: it would be even better to have a Rubik's icosahedron, since its symmetries would then include M[12], the smallest Mathieu group. And it turns out that such a gadget exists! It's called "Dogic":

5) Wikipedia, Dogic, http://en.wikipedia.org/wiki/Dogic

The Mathieu group M[12] is the smallest of the sporadic finite simple groups. Someday I'd like to understand the Monster, which is the biggest of the lot. But if the Monster is the Mount Everest of finite group theory, M[12] is like a small foothill. A good place to start.

Way back in "week20", I gave a cute description of M[12] lifted from Conway and Sloane's classic book. If you get 12 equal-sized balls to touch a central one of the same size, and arrange them to lie at the corners of a regular icosahedron, they don't touch their neighbors. There's even room to roll them around in interesting ways! For example, you can twist 5 of them around clockwise, so that one arrangement becomes another.

We can generate lots of permutations of the 12 outer balls using twists of this sort - in fact, all even permutations. But suppose we only use moves where we first twist 5 balls around clockwise and then twist 5 others counterclockwise. These generate a smaller group: the Mathieu group M[12].

Since we can do twists like this in the Dogic puzzle, I believe M[12] sits inside the symmetry group of this puzzle! In a way it's not surprising: the Dogic puzzle has a vast group of symmetries, while M[12] has a measly 8 × 9 × 10 × 11 × 12 = 95040 elements. But it'd still be cool to have a toy where you can explore the Mathieu group M[12] with your own hands!

The math department lounge at the University of Glasgow has some old books in the shelves waiting for someone to pick them up and read them and love them. They're sort of like dogs at the pound, sadly waiting for somebody to take them home. I took one that explains how Mathieu groups arise as symmetries of "Steiner systems":

6) Thomas Beth, Dieter Jungnickel, and Hanfried Lenz, Design Theory, Cambridge U. Press, Cambridge, 1986.

Here's how they get M[12]. Take a 12-point set and think of it as the "projective line over F11" - in other words, the integers mod 11 together with a point called infinity. Among the integers mod 11, six are perfect squares:

{0, 1, 3, 4, 5, 9}

Call this set a "block". From this, get a bunch more blocks by applying fractional linear transformations:

z |→ (az + b)/(cz + d)

where the matrix

(a b)
(c d)

has determinant 1. These blocks then form a "(5,6,12) Steiner system". In other words: there are 12 points, 6 points in each block, and any set of 5 points lies in a unique block.

The group M[12] is then the group of all transformations of the projective line that map points to points and blocks to blocks!
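As a concrete check of this construction (my own sketch, not from the post or the book), one can enumerate the blocks by brute force and confirm the Steiner property:

```python
# Act on the base block {0,1,3,4,5,9} by all maps z -> (az+b)/(cz+d) with
# determinant 1 over F_11, collect the distinct images, and verify that they
# form an S(5,6,12): 132 blocks, every 5-point set in exactly one block.
from itertools import combinations, product

p = 11
INF = 'inf'
points = list(range(p)) + [INF]
base_block = frozenset({0, 1, 3, 4, 5, 9})     # the squares mod 11 (including 0)

def apply(a, b, c, d, z):
    """Apply z -> (a z + b)/(c z + d) on the projective line over F_11."""
    if z == INF:
        return INF if c == 0 else (a * pow(c, -1, p)) % p
    num, den = (a * z + b) % p, (c * z + d) % p
    return INF if den == 0 else (num * pow(den, -1, p)) % p

blocks = set()
for a, b, c, d in product(range(p), repeat=4):
    if (a * d - b * c) % p == 1:
        blocks.add(frozenset(apply(a, b, c, d, z) for z in base_block))

print(len(blocks))                             # expect 132

counts = {five: 0 for five in combinations(sorted(points, key=str), 5)}
for blk in blocks:
    for five in combinations(sorted(blk, key=str), 5):
        counts[five] += 1
print(all(n == 1 for n in counts.values()))    # expect True
```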
If I make more progress on understanding this stuff I'll let you know. It would be fun to find deep mathematics lurking in mutant Rubik's cubes.

Anyway, in my second talk I turned to the number 8. This gave me a great excuse to tell the story of how Graves discovered the octonions, and then talk about sphere packings and the marvelous E[8] lattice, whose points can also be seen as "integer octonions". I also sketched the basic ideas behind Bott periodicity, triality, and the role of division algebras in superstring theory.

If you look at my slides you'll also see an appendix that describes two ways to get the E[8] lattice starting from the dodecahedron. This is a nice interaction between the magic powers of the number 5 and those of the number 8. After my talk, Christian Korff from the University of Glasgow showed me a paper that fits this relation into a bigger pattern:

7) Andreas Fring and Christian Korff, Non-crystallographic reduction of Calogero-Moser models, Jour. Phys. A 39 (2006), 1115-1131. Also available as arXiv:hep-th/0509152.

They set up a nice correspondence between some non-crystallographic Coxeter groups and some crystallographic ones:

the H[2] Coxeter group and the A[4] Coxeter group,
the H[3] Coxeter group and the D[6] Coxeter group,
the H[4] Coxeter group and the E[8] Coxeter group.

A Coxeter group is a finite group of linear transformations of R^n that's generated by reflections. We say such a group is "non-crystallographic" if it's not the symmetries of any lattice. The ones listed above are closely tied to the number 5:

H[2] is the symmetry group of a regular pentagon.
H[3] is the symmetry group of a regular dodecahedron.
H[4] is the symmetry group of a regular 120-cell.

Note these live in 2d, 3d and 4d space. Only in these dimensions are there regular polytopes with 5-fold rotational symmetry! Their symmetry groups are non-crystallographic, because no lattice can have 5-fold rotational symmetry.

A Coxeter group is "crystallographic", or a "Weyl group", if it is symmetries of a lattice. In particular:

A[4] is the symmetry group of a 4-dimensional lattice also called A[4].
D[6] is the symmetry group of a 6-dimensional lattice also called D[6].
E[8] is the symmetry group of an 8-dimensional lattice also called E[8].

You can see precise descriptions of these lattices in "week65" - they're pretty simple. Both crystallographic and noncrystallographic Coxeter groups are described by Coxeter diagrams, as explained back in "week62". The H[2], H[3] and H[4] Coxeter diagrams look like this:

    5
  o---o

    5
  o---o---o

    5
  o---o---o---o

The A[4], D[6] and E[8] Coxeter diagrams (usually called Dynkin diagrams) have twice as many dots as their smaller partners H[2], H[3] and H[4]:

  o---o---o---o

              o
              |
  o---o---o---o---o

                  o
                  |
                  o
                  |
  o---o---o---o---o---o

I've drawn these in a slightly unorthodox way to show how they "grow". In every case, each dot in the diagram corresponds to one of the reflections that generates the Coxeter group. The edges in the diagram describe relations - you can read how in "week62".

All this is well-known stuff. But Fring and Korff investigate something more esoteric. Each dot in the big diagram corresponds to 2 dots in its smaller partner:

    5
  o---o          o---o---o---o
  A   B          B'  A"  B"  A'

                             o C"
    5                        |
  o---o---o      o---o---o---o---o
  A   B   C      C'  B'  A"  B"  A'

                                     o D"
                                     |
                                     o C"
    5                                |
  o---o---o---o      o---o---o---o---o---o
  A   B   C   D      D'  C'  B'  A"  B"  A'

If we map each generator of the smaller group (say, the generator D in H[4]) to the product of the two corresponding generators in the bigger one (say, D'D" in E[8]), we get a group homomorphism. In fact, we get an inclusion of the smaller group in the bigger one!
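To make the smallest case concrete, here is a tiny check of my own (not from the post). The Weyl group of A[4] is the symmetric group S_5, with the adjacent transpositions as its simple reflections; sending the two H[2] generators to the commuting products s2·s4 and s1·s3 — one consistent choice of the pairing drawn above — gives two involutions whose product has order 5, so the defining relations of the pentagon's symmetry group are satisfied.

```python
def compose(p, q):            # (p o q)(i) = p[q[i]], permutations as tuples
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    e = tuple(range(len(p)))
    q, n = p, 1
    while q != e:
        q, n = compose(p, q), n + 1
    return n

def transposition(i, j, n=5):
    a = list(range(n))
    a[i], a[j] = a[j], a[i]
    return tuple(a)

s1, s2, s3, s4 = (transposition(i, i + 1) for i in range(4))   # simple reflections of A4
A = compose(s2, s4)           # image of one H2 generator (s2 and s4 commute)
B = compose(s1, s3)           # image of the other
print(order(A), order(B), order(compose(A, B)))   # expect 2 2 5
```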
This is just the starting point of Fring and Korff's work. Their real goal is to show how certain exactly solvable physics problems associated to crystallographic Coxeter groups can be generalized to these three noncrystallographic ones. For this, they must develop more detailed connections than those I've described. But I'm already happy just pondering this small piece of their paper.

For example, what does the inclusion of H[2] in A[4] really look like? It's actually quite beautiful. H[2] is the symmetry group of a regular pentagon, including rotations and reflections. A[4] happens to be the symmetry group of a 4-simplex. If you draw a 4-simplex in the plane, it looks like a pentagram. So, any symmetry of the pentagon gives a symmetry of the 4-simplex. So, we get an inclusion of H[2] in A[4].

People often say that Penrose tilings arise from lattices in 4d space. Maybe I'm finally starting to understand how! The A[4] lattice has a bunch of 4-simplices in it - but when we project these onto the plane correctly, they give pentagrams. I'd be very happy if this were the key.

What about the inclusion of H[3] in D[6]? Here James Dolan helped me make a guess. H[3] is the symmetry group of a regular dodecahedron, including rotations and reflections. D[6] consists of all linear transformations of R^6 generated by permuting the 6 coordinate axes and switching the signs of an even number of coordinates. But a dodecahedron has 6 "axes" going between opposite pentagons! If we arbitrarily orient all these axes, I believe any rotation or reflection of the dodecahedron gives an element of D[6]. So, we get an inclusion of H[3] in D[6].

And finally, what about the inclusion of H[4] in E[8]? H[4] is the symmetry group of the 120-cell, including rotations and reflections. In 8 dimensions, you can get 240 equal-sized balls to touch a central ball of the same size. E[8] acts as symmetries of this arrangement. There's a clever trick for grouping the 240 balls into 120 ordered pairs, which is explained by Fring and Korff and also by Conway's "icosian" construction of E[8] described at the end of my talk on the number 8. Each element of H[4] gives a permutation of the 120 faces of the 120-cell - and thanks to that clever trick, this gives a permutation of the 240 balls. This permutation actually comes from an element of E[8]. So, we get an inclusion of H[4] in E[8].

My last talk was on the number 24. Here I explained Euler's crazy "proof" that 1 + 2 + 3 + ... = -1/12 and how this makes bosonic strings happy when they have 24 transverse directions to wiggle around in. I also touched on the 24-dimensional Leech lattice and how this gives a version of bosonic string theory whose symmetry group is the Monster: the largest sporadic finite simple group.

A lot of the special properties of the number 24 are really properties of the number 12 - and most of these come from the period-12 behavior of modular forms. I explained this back in "week125". I recently ran into these papers describing yet another curious property of the number 12, also related to modular forms, but very easy to state:

8) Bjorn Poonen and Fernando Rodriguez-Villegas, Lattice polygons and the number 12. Available at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.2555

9) John M. Burns and David O'Keeffe, Lattice polygons in the plane and the number 12, Irish Math. Soc. Bulletin 57 (2006), 65-68. Also available at http://www.maths.tcd.ie/pub/ims/bull57/M5700.pdf

Consider the lattice in the plane consisting of points with integer coordinates.
Draw a convex polygon whose vertices lie on this lattice. Obviously, the differences of successive vertices also lie on the lattice. We can create a new convex polygon with these differences as vertices. This is called the "dual" polygon. Say our original polygon is so small that the only lattice point in its interior is (0,0). Then the same is true of its dual! Furthermore, the dual of the dual is the original polygon!

But now for the cool part. Take a polygon of this sort, and add up the number of lattice points on its boundary and the number of lattice points on the boundary of its dual. The total is 12.

You can see an example in Figure 1 of the paper by Poonen and Rodriguez-Villegas; note that p[2] - p[1] = q[1], and so on. The first polygon has 5 lattice points on its boundary; the second, its dual, has 7. The total is 12.

I like how Poonen and Rodriguez-Villegas' paper uses this theorem as a springboard for discussing a big question: what does it mean to "explain" the appearance of the number 12 here? They write:

Our reason for selecting this particular statement, besides the intriguing appearance of the number 12, is that its proofs display a surprisingly rich variety of methods, and at least some of them are symptomatic of connections between branches of mathematics that on the surface appear to have little to do with one another. The theorem (implicitly) and proofs 2 and 3 sketched below appear in Fulton's book on toric varieties. We will give our new proof 4, which uses modular forms instead, in full.

Addenda: I thank Adam Glesser and David Speyer for catching mistakes. The only noncrystallographic Coxeter groups are the symmetry groups of the 120-cell (H[4]), the dodecahedron (H[3]), and the regular n-gons where n = 5,7,8,9,... The last list of groups is usually called I[n] - or better, I[2](n), so that the subscript denotes the number of dots in the Dynkin diagram, as usual. But Fring and Korff use "H[2]" as a special name for I[2](5), and that's nice if you're focused on 5-fold symmetry, because then H[2] forms a little series together with H[3] and H[4].

If you examine Poonen and Rodriguez-Villegas' picture carefully, you'll see a subtlety concerning the claim that the dual of the dual is the original polygon. Apparently you need to count every boundary point as a vertex! Read the papers for more precise details.

For more discussion visit the n-Category Café.

When the blind beetle crawls over the surface of a globe, he doesn't realize that the track he has covered is curved. I was lucky enough to have spotted it. - Albert Einstein

© 2008 John Baez
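As a quick illustration of the boundary-point count above, here is a small check of my own (not the example from Figure 1 of the paper). Per the subtlety mentioned in the addenda, every boundary lattice point should be treated as a vertex; for the triangle used here, the three vertices are already its only boundary lattice points, and (0,0) is its only interior lattice point.

```python
from math import gcd

def boundary_count(vertices):
    """Number of lattice points on the boundary of a lattice polygon,
    given its vertices in order: sum of gcd(|dx|, |dy|) over the edges."""
    n = len(vertices)
    return sum(gcd(abs(vertices[(i + 1) % n][0] - vertices[i][0]),
                   abs(vertices[(i + 1) % n][1] - vertices[i][1]))
               for i in range(n))

def dual(vertices):
    """'Dual' polygon in the sense above: successive differences of the vertices."""
    n = len(vertices)
    return [(vertices[(i + 1) % n][0] - vertices[i][0],
             vertices[(i + 1) % n][1] - vertices[i][1])
            for i in range(n)]

P = [(1, 0), (0, 1), (-1, -1)]
Q = dual(P)
print(boundary_count(P), boundary_count(Q))      # 3 and 9
print(boundary_count(P) + boundary_count(Q))     # 12
```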
{"url":"http://math.ucr.edu/home/baez/week270.html","timestamp":"2014-04-21T02:24:19Z","content_type":null,"content_length":"29111","record_id":"<urn:uuid:761a0996-7b92-4724-ad6f-2002e6b4fa10>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00437-ip-10-147-4-33.ec2.internal.warc.gz"}
Extensive GMAT Journey Author Message Extensive GMAT Journey [#permalink] 23 Oct 2011, 16:23 This post received I was encourage to post this on the forum by other users. Its basically my detailed GMAT log. I will post every Sunday night what I do during the week. GMAT Journey Log August 26, 2011 I have started to prepare for the GMAT. I followed the advice from Mark on the forum and did the Math Diagnostic test. It is 20 questions of math in the 300-500 range. I got 19/20 which means that I should skim over the foundations of GMAT math book and not spend too much time on it. I’ve decided to take the GMAT Club diagnostic test on the 28th. It is very long and it’s been a long day and I am too tired to take the diagnostic. On the 28th I plan on taking the diagnostic and going through the Foundations Math. I plan on finishing the book by the end of the weekend and start the Foundations of Verbal by Labour Day. As soon as I finish both books I will be taking the first MGMAT CAT. September 5, 2011 I took a while to restart this GMAT journey. However I am going to start getting serious after today. I scored 19/45 in the diagnostic test. This is quite low although I did not put that much focus to it. I also ran out of time. I am briefing my answers and I will be briefing them more in depth tomorrow. I will start making notes on foundations of Math GMAT. I got to take this seriously if I plan on writing on December. This is much harder than I thought. September 10, 2011 I reviewed the diagnostic test in detail to find out what I got wrong and how to improve my speed. I had to review pretty much everything, so I moved to the MGMAT Foundations of Math book. I’ve done chapter 1: equations on the book and it seems pretty elemental….however when I did the first 2 drill sets; most of my mistakes were careless. I have to pay more attention to detail when I do this massive math review. I will complete the entire Foundations of Math book before moving to foundations of verbal. September 24, 2011 I’ve found a better strategy to tackle foundations + strategy. Do the chapters that relate to the strategy. So I’ve finished chapters 1 – 3, 8 on the foundations, then I moved to the strategy book called Equations, Inequalities, and VICs. I have to be careful in the word problems as I tend to put the variable on the wrong side of the equations. Some of my questions were wrong based on small math errors, but I understood all of the foundations of math questions. Chapter 1 of the strategy book was good; it has a lot of tips. I do need a ton of September 25, 2011 I’ve done chapter 2 of E, I, VIC, but I need to go back and review the basics of exponents, in order to increase speed + accuracy. I will be using the calculus book for that as it was Manager very good. I will do this before I advance to chapter 3. I need to pick up my pace. I got stuck in a lot of the questions; I need to practice to be able to do them more naturally. Joined: 23 Jul October 1, 2011 I did chapters 3 -6 of E, I, VICs. Will try to finish the basic part of this book tomorrow so I can start with word translations next. Posts: 59 October 8, 2011 Finance, I decided to do odd problems of the strategy book in order to speed up the process. Finished the E, I, VICs book. October 9, 2011 GMAT 1: 630 Q40 V36 I Finished Chapters 1 and 2 of the Word Translations book, and I will require some more practice. I will do the first MGMAT practice test tomorrow to gauge which areas I will need to put more time going forward. 
I'm also very curious where I stand right now. Note that I have yet to do any verbal.

October 10, 2011 (1st Test Day)
I finally completed the first MGMAT practice test. I did quite poorly, scoring a 550 (Q33, V33). I did not finish the quant section, but I finished the verbal section with 20 minutes remaining. Verbal was not that bad considering I have yet to review it, but this is going to be an uphill battle.

For verbal there's no point in reviewing, as I have not yet studied for it. I got basically the same average for all three different sections, which was between 50% and 60%. I believe that is a strong foundation, and I will be happy if, after actually studying in detail, I can average a score between 40 and 45, meaning I will have to increase my score by 8-12 points. We will see if it is doable.

For quant, I focused on reviewing the questions I should have known. I made a few careless mistakes, as well as one question I went about completely wrong. I only got half of the questions I should have known, which is not good. I've decided to switch my strategy a little bit. After seeing some of the questions that show up, I realized that the hardest questions come from the WT and E, I, VIC sections. I'm going to take a step back and continue my review by doing the easier parts: number properties and FDP. I will finish the Foundations of Math book, with the exception of chapter 9, before moving to the strategy books. Once I finish the NP and FDP books, I will go back and finish the WT book. I need to speed up the process, as there is a lot of material to cover. After finishing the entire quant review, I will move to the verbal foundations, followed by CR, RC, and SC in that order. I will then take the next MGMAT to see if my score improved. Target date for MGMAT 2: Saturday, November 12.

Finished Chapter 4 of Foundations of GMAT Math with a near perfect drill set score.

October 12, 2011
I finished chapter 5, exponents and roots, and I'm glad I did, as I needed good practice simplifying roots. The key is to do prime factorization, something I haven't done since 10th grade. I also forget that even powers when rooted have 2 answers. I need to ingrain that in my head.

October 16, 2011
I finished chapters 6 and 7 of Foundations of Math. I will finish chapter 9 once I finish all the other strategy books.

October 17, 2011
I finished Chapters 1 and 2 of Number Properties, and I only had 1 mistake in both problem sets.

October 18, 2011
I finished Chapters 3 and 4 of Number Properties, with 2 silly mistakes and 2 conceptual mistakes.

Re: Extensive GMAT Journey - 26 Oct 2011, 09:21 (Unforseen)

GMAT Test Date: January 7

Re: Extensive GMAT Journey - 01 Nov 2011, 13:57 (Unforseen)

October 25, 2011
Finished chapter 5 of NP; the problem set was flawless. I do, however, have to pay attention to detail when these questions come up.

October 26, 2011
Chapter 6 of NP is done; however, I need to be careful about the relationship between even exponents and even roots. This is one of the trickiest things I've found about the GMAT.

October 29, 2011
I finished the Number Properties book.
This should be the book everyone starts with when beginning to review quant if you're not using the Foundations of Math book. I also finished chapters 1-3 of the Fractions, Decimals, and Percents book. This book was fairly easy for me because these subjects are used heavily in finance. However, I made careless mistakes because I'm not yet agile at multiplying and dividing 3- and 4-digit numbers fast.

Re: Extensive GMAT Journey - 06 Nov 2011, 16:10 (Unforseen)

November 6, 2011
I skipped a week of working on the GMAT and I'm not very happy about that, especially after signing up for the test. I will need to make up approximately 10 lost hours of studying this month. My target is to hit 50 hours of studying during November, or approximately 2 hours a day.

I've decided that to make up lost time I will be going to work 1 hour early and using that hour to study the Foundations of GMAT Verbal book. Once I finish the MGMAT strategy guides, which I'm hoping will be on November 20, I will be staying one hour after work just to do questions; that way I avoid the temptation of getting lazy once I get home.

I've finished the FDP book. While doing combinatorics and probabilities, I struggled. I finished chapters 3-6 of the Word Translations book.

Re: Extensive GMAT Journey - 13 Nov 2011, 08:40 (Unforseen)

November 7, 2011
I finished Chapters 1 and 2 of the Foundations of Verbal book. Very basic, but it has some good tips. I also finished chapter 7 of Word Translations.

November 8, 2011
Finished chapter 3 of Foundations of GMAT Verbal.

November 9, 2011
I finished chapter 4 of Foundations of GMAT Verbal and the Word Translations book. I need more practice on the minor problem types.

November 10, 2011
Finished chapter 5 of Foundations of Verbal.

November 12, 2011
I finally finished the Foundations of GMAT Math book. I recommend this to everyone who hasn't done basic math in a long time, as it has helped me brush up immensely. On the last chapter's drill set, I kept messing up how to get the area of the triangle because I kept forgetting to add the 1/2. In geometry problems, taking a little more time to find the intermediate information is key to getting every question right. I plan to use the step process they suggest in the book.

I finished chapters 1-4 of the Geometry strategy book. I had very few conceptual errors in the problem sets. I think that I have figured this section out pretty well. Know the formulas to find angles, areas, perimeters, surface areas, and volumes really well, and when tackling a question, get as much info written down (intermediate steps, they call it in the guide) as you can as fast as possible. The question then becomes elemental.

November 13, 2011
I finally finished the Geometry book. Note that I have yet to do the advanced sections of the strategy guides. I plan to write MGMAT 2 next Saturday morning and I have crafted a study schedule for the upcoming week. In the mornings I will keep hammering through the Foundations of Verbal book, while in the afternoons I will be doing a 2-hour cram review of each guide in the following order: Monday NP, Tuesday E, I, VICs, Wednesday FDP, Thursday WT, and Friday Geometry.
After writing MGMAT 2, I plan to do an extensive review to find my quant weaknesses, and I will be using Sunday morning to craft my strategy for how to tackle the OG books. By next Sunday I will have a month and a half left, and if I keep the momentum I feel confident I will be ready for January 7.

Re: Extensive GMAT Journey - 20 Nov 2011, 13:39 (Unforseen)

November 14, 2011
Finished chapter 6 of Foundations of Verbal.

November 15, 2011
Finished the sentence correction chapter of Foundations of Verbal.

November 16, 2011
I finished chapter 8 of Foundations of Verbal. I will be doing a massive review of the math strategy guides on Saturday, and I will be doing MGMAT test 2 on Sunday instead.

November 17, 2011
I finished chapter 9 of Foundations of Verbal. I really like this section, as I've done it before in a logic class.

November 18, 2011
I finished chapters 10 and 11 of Foundations of Verbal. If I find myself in trouble with CR, I will go and review my logic book, which is much more explicit at finding argument flaws than the MGMAT book.

November 19, 2011
I started reviewing math prior to MGMAT test 2 and didn't finish. I did 2 of the MGMAT question banks, then spent a lot of time setting up the error log, which is great because you input the data and it spits out your weaknesses pretty clearly. I felt really weak on some sections, and I still suck at Data Sufficiency. I will populate my error log, do the qbank for FDP, and do a timed MGMAT test tomorrow.

November 20, 2011
I spent the day populating the error log. I also planned my attack for the next week, where I am starting to do OG 12 questions early at work.

Re: Extensive GMAT Journey - 27 Nov 2011, 11:34 (Unforseen)

November 21, 2011
I did the qbank for the FDP section. Overall I did much better than on NP and E, I, VICs. I feel like I made a lot of concentration mistakes, which I should be able to correct through practice.

November 22, 2011
I completed the first 14 questions of the OG diagnostic test. It took me 40 minutes. I got 5 wrong, which is OK in my opinion and a decent start. I will take the WT qbank again tomorrow.

November 23, 2011
I finished chapter 12 of Foundations of Verbal. It didn't say much other than that skimming through the passage is a formula for failure. I also did questions 15-28 of the OG guide diagnostic test in 30 minutes, closing in on the 2-minute-per-question mark. My accuracy was very poor, which means that I am still eons from the optimal accuracy for a 45+ quant score. I will finish the diagnostic test tomorrow and will be doing a major error dissection after I do the MGMAT 2 test on Saturday. I did the WT qbank and it went fairly well, except most of my mistakes were conceptual. So far E, I, VIC and WT are my weakest sections, and they were also the sections that I started with. I will review these strategy books again.

My goals for Saturday night are the following:
- Finish Foundations of Verbal
- Finish OG guide diagnostic test
- Finish MGMAT quant qbanks
- Do a timed MGMAT Test 2 (skipping AWA)
- Dissect errors from the qbanks, MGMAT 2, and the diagnostic test and start finding major quant weaknesses to start tackling on Sunday

November 24, 2011
I finished the diagnostic test from the OG guide, timed, and I was a little disappointed with the result, especially the PS.
My average time per question was a little over 2 minutes, and my number of questions right per section was 13 out of 24. That being said, I will be reviewing the diagnostic test along with MGMAT 2 on Saturday afternoon to ensure that my conceptual mistakes are covered and I know what to review.

November 25, 2011
I finished chapters 13 and 14 of Foundations of GMAT Verbal, leaving vocabulary and idioms to do. The sections were small and easy, but they gave me a great idea. Since I read a lot at work, I'm going to read articles and tackle them GMAT style. I will read them timed and I will write down on a pad what I get from the articles. This way I will be doing both work and GMAT practice at the same time. I'm looking forward to MGMAT CAT 2 tomorrow. I've also downloaded the MGMAT flashcards.

November 26, 2011
I finished the qbank for Geometry. I didn't do as well as I thought, mostly because of careless mistakes.

I finally completed MGMAT 2 and got a 600, but I'm very disappointed with the quant section, where I only achieved a 35 and finished only 30 questions. I was not expecting this at all, and the breakdown leaves me even more confused. Even though I just finished a qbank on geometry, I got 0 geometry questions right. I am very frustrated that I had to leave 7 questions blank. As someone who considers himself a fast thinker, this is a huge blow. I did, however, improve on the data sufficiency questions. In verbal I had great improvements in RC and CR, and they definitely come from the tips in the Foundations of Verbal chapters. SC remains unchanged, although for some reason this exam says that 90% of the questions were in the 700-800 range. That is very strange to me... are these questions a lot harder than the MGMAT 1 verbal? I reviewed my wrong answers for RC and CR, and most had to do with not reading the question or choices properly. Some were really tough.

I set some goals to finish by Saturday night. Let's revisit whether I accomplished them all:
- Finish Foundations of Verbal - DONE
- Finish OG guide diagnostic test - DONE
- Finish MGMAT quant qbanks - DONE
- Do a timed MGMAT Test 2 (skipping AWA) - DONE
- Dissect errors from the qbanks, MGMAT 2, and the diagnostic test and start finding major quant weaknesses to start tackling on Sunday

So I did not do the last one, and I'm glad I didn't because I'm exhausted. Tomorrow I will start with dissecting MGMAT 2. I will also set up a schedule for Monday-Thursday questions before work.

November 27, 2011
I finished reviewing MGMAT CAT 2; it took me over three hours, but it was very extensive. I've also set up a schedule for Monday-Thursday.

Re: Extensive GMAT Journey - 03 Dec 2011, 12:02 (Unforseen)

November 28, 2011
I did the first 20 questions of the OG guide and they were very easy, although I had a couple of dumb mistakes. I also did a few of the GMAT Club daily questions. I have a backlog of about 2 months on those questions, so I'll have plenty to do in my down time at work. I also corrected the first 15 questions of the diagnostic test. Reviewing questions is taking me a lot of time, but I will now be using my down time to populate the error log. I hope this error log is useful for my purpose: to find my biggest weaknesses and review them again with the MGMAT strategy guides.

November 29, 2011
I did the first 20 DS questions of the OG guide.
Only 3 wrong and slightly over time, which is so far my best performance.

November 30, 2011
I did the next 30 questions of PS. I got over 80% accuracy with 6 minutes left, which is the best performance yet. This gave me a confidence boost, although I feel that the first questions are a lot easier. I've checked and reviewed some more of the diagnostic test questions. I'm happy I did 47 hours of timed studying this month. My goal for December is 60.

December 1, 2011
I finished questions 21-50 of the OG guide, but I was 17 minutes over time. This concerns me even though I only got 5 wrong. I need to do something about this.

December 3, 2011
I went through all my errors in the diagnostic test that I had left over, reviewed what types of questions I get wrong, and wrote MGMAT 3. I got a 630 (Q37, V38), leaving quant questions 27-37 blank. I still can't finish the damned quant section even though I did 100 questions of the OG guide, most of them under time. I also made a schedule for the upcoming week regarding questions before work. I will not be doing any work tomorrow, as I want to take a break and use the next two weekends for full-on study + review.

Re: Extensive GMAT Journey - 11 Dec 2011, 15:04 (Unforseen)

December 6, 2011
I finished 20 more questions of the PS section, only got 1 wrong, and took 30 minutes to do them. Is it me, or are these questions very easy?

December 7, 2011
One month from now I will be free from this GMAT prison that is my life. I will also put forth an effort to study much harder than I have until today. Luckily, I am not going home for the holidays (or perhaps not so lucky?), so I will have plenty of days to study. I finished 20 more questions from the DS sections; I had trouble with some, but I managed to skip the ones I had trouble with and move on. What's good about DS questions is that you can usually narrow it down to 2 possible answers... so your chances of getting a question wrong diminish. I got 15/20 right with 1 minute over time. There is obviously room for improvement.

I've also 'read carefully' an article, either from the Harvard mag, the U of C mag, or Scientific American. What I mean by 'carefully' is reading it and timing myself, writing down the main point of each paragraph, then reading it carefully to see if I got the main points right. I think this keeps my RC fresh. I did part of RC, but again I'm too tired at night to learn something new. I used the rest of my allotted time populating the error log.

December 8, 2011
I finished another 20 questions of OG 12 with only 3 wrong. I still find them quite easy. I also spent the afternoon populating the error log.

December 9, 2011
I finished another 20 DS questions of the OG guide. Got 8 wrong, and I have a busy weekend ahead.

December 10, 2011
I finished correcting the 90 questions that I've done in OG 12, Problem Solving section. While correcting, I found out that I was getting a lot of the Variables in Choices questions wrong, especially the ones that are easier through direct algebra. This will be the first quant section I'll be reviewing. I also finished correcting the 90 questions that I've done in OG 12, Data Sufficiency section. I did over 6 hours of work today.
Tomorrow, my plan is to correct MGMAT 3, do questions 91-110 of the OG guide in both PS and DS and correct them immediately after I do them so they are fresh, and then move into the sentence correction book. I'm also having trouble setting up the questions algebraically.

December 11, 2011
I finished reviewing MGMAT CAT 3. As I mentioned, the summary states that 3/27 questions were at the 600-700 level, while the rest were at the 700-800 level. Some of the wrong questions were extremely tough; for others, I think I have a knowledge gap in VICs and algebraic translations, because I keep getting these wrong. 2 were careless mistakes. I'm going to assume that the Q37 is due to not finishing 10 of the questions. Also, some of the questions I got right were extremely hard and I've yet to see them in the OG guide; however, they usually took me between 3 and 4 minutes. This is a big issue.

I finished both PS and DS questions 91-110 from OG guide 12. I got 80% and 65% respectively. In DS, I'm starting to get questions wrong based on reading the question wrong or using information from statement 1 to answer statement 2. This is due to the fact that I'm doing them at a faster speed. Practice makes perfect, I guess. I need to find time to cram the Sentence Correction guide. I'm running out of time!

Re: Extensive GMAT Journey - 18 Dec 2011, 11:20 (Unforseen)

December 12, 2011
I did half of the Verbal Diagnostic, scoring average in RC, and I already have 4 wrong in CR. They are different from MGMAT for sure. I also did chapters 1 and 2 of the SC book.

December 13, 2011
I finished the diagnostic test, getting 10 done in CR and 11 in SC. I'm going to need to cram verbal hard in the upcoming three weeks.

December 14, 2011
I finished questions 111-130 of OG guide PS. Did fairly well, and it definitely boosted my confidence.

December 16, 2011
I finished questions 111-130 of OG guide DS. The questions are getting tricky and tough! I got 12 right.

December 17, 2011
I debriefed the OG guide questions I did over the week. PS is getting better while DS seems stagnant. The diagnostic test for verbal was in line with expectations. I did MGMAT 4 and I got a 660 (Q44, V36). What a difference it makes to finish the quant section! I had to do the last 4 questions in a combined time of 2:10. I got 3 wrong, so my timing isn't that good just yet. My verbal got slightly worse, but that is because I haven't yet targeted my weakness: SC.

I have 20 days to go and I have to tackle my identified weaknesses: algebraic translations, Variables in Choices, and sentence correction. I really need to get going on the SC strategy book. I underestimated how hard it would be to study over the break... so many things come up! I also populated the error log with the OG questions and started debriefing MGMAT 4. I don't know if I should do MGMAT 5 tomorrow, but I think I just might, in full test mode (including AWA).

December 18, 2011
I decided to forgo MGMAT 5 and instead did 3 hours of review of MGMAT 4. Again it came out that I need to review algebraic translations and VICs. I will do that next week. This week I'm focusing on the SC book. I won't be doing much, only work on Monday, Wednesday, and Thursday.
I will be back home for the 24th, and I hope to do a couple of GMAT Club tests.

Re: Extensive GMAT Journey - 18 Dec 2011, 17:02 (Intern)

Sorry to bring this up, but I don't think you are being too efficient with your time. I like keeping track of my studies as well, but think of the time that could be spent studying a little bit more...

Re: Extensive GMAT Journey - 18 Dec 2011, 17:39 (Unforseen)

I appreciate your thoughts, man. I was already thinking of tracking less and studying more. My goal is to write the first GMAT Prep on the 27th, which means I need to spend a significant amount of time studying for SC as well as reviewing the fundamentals of algebraic translations (my main weakness). There is essentially no time for tracking until then.

Re: Extensive GMAT Journey - 25 Dec 2011, 15:04 (Unforseen)

Merry Christmas!

December 19, 2011
I worked on the SC strategy book and did the questions they recommended in the OG guide. I'm in chapter 3 so far. I also did the MGMAT RC qbank and did OK. Finally, I finished debriefing the MGMAT 4 verbal section.

December 20-22, 2011
Worked exclusively on SC, both the strategy guide and the OG guide.

December 24-25, 2011
I took advantage of the GMAT Club math tests being free on the 24th and did one of the tests, scoring 23/27. I also reviewed the questions I've gotten wrong from the OG Guide. I also reviewed ChineseBurned's AWA guide in anticipation of doing MGMAT CAT 5.

I took my first full test today (AWA included) and scored a 640 (Q44, V34). It's a little demoralizing to score less than on my previous test, but after reviewing I saw that I did significantly worse on CR than in previous tests. With two weeks to go, getting to 700 will be an uphill battle.

Tomorrow and Tuesday are crucial days. On Tuesday, I plan to write GMAT Prep 1. That leaves tomorrow to prepare all day for the test. I plan to go buy the things I will be bringing to the test centre. I also plan to review the algebraic translations sections in the MGMAT strategy guides, followed by MGMAT SC and some questions from the OG Guide. Those three things will keep me busy tomorrow.

Re: Extensive GMAT Journey - 31 Dec 2011, 13:31 (Unforseen)

December 26, 2011
I did all of the OG Guide questions that the MGMAT SC Guide suggested for chapter 4, and my result was decent at 33/43. I then reviewed the algebraic translations and VIC chapters from the MGMAT quant guides. Tomorrow I will wake up and do a few advanced VIC questions, followed by 10 questions in the OG Guide from each of the following sections: PS, DS, RC, and CR, and finally an overview of ChineseBurned's guide on AWA. I will take a short break and then move to GMAT Prep 1. Ideally I would not do any questions beforehand, but I have no choice. I also bought everything I will be eating the day of the exam.

December 27, 2011
Today is my 23rd birthday. I started this wonderful day by eating breakfast and finishing the advanced VIC questions in the MGMAT strategy guide, making a few mistakes. I then continued with questions from the OG guide. I did 10 PS, DS, and CR, and 11 RC.
I scored 7/10, 7/10, 10/10, and 10/11 respectively. I then took a 40-minute break, eating EXACTLY what I plan to eat prior to the test. I finished GMAT Prep 1 and got a 670 (Q47, V35). I'm happy that I finished all the quantitative questions with no problem and didn't really find them too hard. I've realized that MGMAT questions have too many "math" steps and too few "logic" steps, while GMAT Prep and the OG Guide are the opposite. Verbal is still low, and I have 10 days to improve it. I want a V40, and I'm going to get it. Plenty of my mistakes were careless, and some I still don't know why I got wrong. I got 7 SC wrong, 2 RC wrong, and 3 CR wrong. I will be reviewing RC and CR shortly. I feel good about improving verbal if I keep hammering the SC guide and the OG Guide questions.

December 29, 2011
I did 20 questions of the OG Guide and chapter 5 of the MGMAT SC.

December 30, 2011
I did a few questions of RC and CR from OG Guide 12, getting only 1 wrong. I also did a takeaway on what I got wrong from GMAT Prep 1.

December 31, 2011
I woke up and did 10 PS and 10 DS questions, getting 5 DS questions wrong. I reviewed ChineseBurned's AWA guide again, and after a short break I hammered GMAT Prep 2. AWA was fine, except I didn't click submit on the second one. On quant, I was feeling too confident and was ahead of pace up until question 12, then took my sweet time on 3 of the next 5 questions and screwed up my pace completely. I pretty much had to hurry to finish quant, guessing a few of the last 10 questions but completing the exam. On verbal I felt really good in RC and CR, as I kept seeing harder and harder questions show up. Overall I got a 700 (Q46, V40), which is what I predict my score will be next Saturday.

For the next 7 days, I plan to spend the next two days trying to finish the MGMAT SC guide and in between just do questions in batches of 20, reviewing them right after. I don't want to do more CATs; just questions from the GMAT official guides and review. I will be posting my last post on Friday. Happy New Year, GMAT Club!

Re: Extensive GMAT Journey - 05 Jan 2012, 17:24 (Unforseen)

January 3, 2012
I did the last DS questions of the OG Guide. Tricky! But I got most of them. I went over my mistakes and my mistakes from GMAT Prep 2.

January 4, 2012
I did 20 PS questions of OG Guide 12. I got two wrong, one of them careless, and I also finished with 3 minutes to spare. This gave me confidence, as this was my last timed quant practice. I also went over every single quant question of GMAT Prep 2 and every wrong question of verbal. Some of the verbal questions that I got wrong were very tough!

January 5, 2012
Did some CR and RC questions in the morning, and SC at night. This is my last entry before my debrief. I feel good and confident for test day. I hope you're looking forward to my debrief.

Re: Extensive GMAT Journey - 05 Jan 2012, 18:31 (Student)

Ready for your debrief! I think you're looking at a pretty strong score... stay motivated, and make sure to master the intangibles. You're almost there!!
{"url":"http://gmatclub.com/forum/extensive-gmat-journey-122311.html","timestamp":"2014-04-19T07:22:15Z","content_type":null,"content_length":"209227","record_id":"<urn:uuid:fa954016-e8f7-4b82-bd60-42608bd4b7bf>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00598-ip-10-147-4-33.ec2.internal.warc.gz"}
GAO-06-24, Mortgage Financing: Additional Action Needed to Manage Risks of FHA-Insured Loans with Down Payment Assistance This is the accessible text file for GAO report number GAO-06-24 entitled 'Mortgage Financing: Additional Action Needed to Manage Risks of FHA-Insured Loans with Down Payment Assistance' which was released on November 14, 2005. This text file was formatted by the U.S. Government Accountability Office (GAO) to be accessible to users with visual impairments, as part of a longer term project to improve GAO products' accessibility. Every attempt has been made to maintain the structural and data integrity of the original printed product. Accessibility features, such as text descriptions of tables, consecutively numbered footnotes placed at the end of the file, and the text of agency comment letters, are provided but may not exactly duplicate the presentation or format of the printed version. The portable document format (PDF) file is an exact electronic replica of the printed version. We welcome your feedback. Please E-mail your comments regarding the contents or accessibility features of this document to Webmaster@gao.gov. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. Because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately. Report to the Chairman, Subcommittee on Housing and Community Opportunity, Committee on Financial Services, House of Representatives: November 2005: Mortgage Financing: Additional Action Needed to Manage Risks of FHA-Insured Loans with Down Payment Assistance: GAO Highlights: Highlights of GAO-06-24, a report to the Chairman, Subcommittee on Housing and Community Opportunity, Committee on Financial Services, House of Representatives. Why GAO Did This Study: The Federal Housing Administration (FHA) permits borrowers to obtain down payment assistance from third parties; but, research has raised concerns about the performance of loans with such assistance. Due to these concerns, GAO examined the (1) trends in the use of down payment assistance with FHA-insured loans, (2) the impact that the presence of such assistance has on purchase transactions and house prices, (3) how such assistance influences the performance of these loans, and (4) FHA’s standards and controls for these loans. What GAO Found: Almost half of all single-family home purchase mortgages that FHA insured in fiscal year 2004 had down payment assistance. Nonprofit organizations that received at least part of their funding from sellers provided assistance for about 30 percent of these loans and represent a growing source of down payment assistance. However, assistance from seller-funded nonprofits alters the structure of the purchase transaction. First, because many seller-funded nonprofits require property sellers to make a payment to their organization; assistance from these nonprofits creates an indirect funding stream from property sellers to homebuyers. Second, GAO analysis indicated that FHA-insured homes bought with seller-funded nonprofit assistance were appraised at and sold for about 2 to 3 percent more than comparable homes bought without such assistance. 
Regardless of the source of assistance and holding other variables constant, GAO analysis indicated that FHA-insured loans with down payment assistance have higher delinquency and claim rates than do similar loans without such assistance. Furthermore, loans with assistance from seller-funded nonprofits do not perform as well as loans with assistance from other sources. This difference may be explained, in part, by the higher sales prices of comparable homes bought with seller-funded assistance. Although FHA has implemented some standards and controls on loans with down payment assistance, stricter standards and additional controls could help in managing the risks these loans pose. FHA standards permit assistance from seller-funded nonprofits; in contrast, mortgage industry participants restrict such assistance. Further, government guidelines call for routine identification of risks that could impede meeting program objectives; however, FHA has not conducted routine analysis of the performance of loans with down payment assistance. What GAO Recommends: The Secretary of Housing and Urban Development should direct the FHA Commissioner to implement additional controls to manage the risks associated with loans that involve down payment assistance. Such controls could involve considering the presence and source of down payment assistance when underwriting loans. Further, the FHA Commissioner should consider additional controls for loans with down payment assistance from seller-funded nonprofits. In written comments, HUD generally agreed with the report’s findings. HUD also commented on certain aspects of selected recommendations. To view the full product, including the scope and methodology, click on the link above. For more information, contact William B. 
Shear at (202) 512-8678 or shearw@gao.gov.

[End of section]

Results in Brief:
The Percentage of Purchase Loans in FHA's Portfolio with Down Payment Assistance Has Been Increasing Since 2001:
Seller-Funded Assistance Affects Home Purchase Transactions and Can Raise House Prices:
FHA-Insured Loans with Down Payment Assistance, particularly from Seller-Funded Nonprofits, Do Not Perform as Well as Similar Loans without Assistance:
Stricter Standards and Additional Controls Could Help FHA Manage the Risks Posed by Loans with Down Payment Assistance:
Recommendations for Executive Action:
Agency Comments and Our Evaluation:

Appendix I: Objectives, Scope, and Methodology:
Appendix II: Automated Valuation Model Analysis:
Appendix III: Loan Performance Analysis:
Appendix IV: Comments from the Department of Housing and Urban Development:
Appendix V: GAO Contact and Staff Acknowledgments:

Table 1: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, National Sample, Fiscal Years 2000, 2001, and 2002:
Table 2: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, MSA Sample, Fiscal Years 2000, 2001, and 2002:
Table 3: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, Atlanta MSA Sample, Fiscal Years 2000, 2001, and 2002:
Table 4: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, National Sample, March 2005:
Table 5: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, National Sample, Fiscal Years 2000, 2001, and 2002:
Table 6: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, MSA Sample, Fiscal Years 2000, 2001, and 2002:
Table 7: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, Atlanta MSA Sample, Fiscal Years 2000, 2001, and 2002:
Table 8: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, National Sample, March 2005:
Table 9: Names and Definitions of the Variables Used in Our Regression Analyses:
Table 10: Delinquency Regression Results--National Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables:
Table 11: Delinquency Regression Results--National Sample, Model Based on TOTAL Mortgage Scorecard Variables:
Table 12: Delinquency Regression Results--National Sample, Augmented GAO Actuarial Model:
Table 13: Delinquency Regression Results--National Sample, GAO Actuarial Model:
Table 14: Delinquency Regression Results--MSA Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables:
Table 15: Delinquency Regression Results--MSA Sample, Model Based on TOTAL Mortgage Scorecard Variables:
Table 16: Delinquency Regression Results--MSA Sample, Augmented GAO Actuarial Model:
Table 17: Delinquency Regression Results--MSA Sample, GAO Actuarial Model:
Table 18: Claim Regression Results--National Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables:
Table 19: Claim Regression Results--National Sample, Model Based on TOTAL Mortgage Scorecard Variables:
Table 20: Claim Regression Results--National Sample, Augmented GAO Actuarial Model:
Table 21: Claim Regression Results--National Sample, GAO Actuarial Model:
Table 22: Claim Regression Results--MSA Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables:
Table 23: Claim Regression Results--MSA Sample, Model Based on TOTAL Mortgage Scorecard Variables:
Table 24: Claim Regression Results--MSA Sample, Augmented GAO Actuarial Model:
Table 25: Claim Regression Results--MSA Sample, GAO Actuarial Model:
Table 26: Prepayment Regression Results--Quarterly Conditional Probability of Prepayment, National Sample:
Table 27: Prepayment Regression Results--Quarterly Conditional Probability of Prepayment, MSA Sample:
Table 28: Loss Regression Results--Loss Rate Given Default, National Sample:
Table 29: Loss Regression Results--Loss Rate Given Default, MSA Sample:

Figure 1: Number of FHA-Insured Single-Family Purchase Money Loans, Fiscal Years 2000 through 2005:
Figure 2: Number of FHA-Insured Single-Family Purchase Money Loans and Percentage of Loans with Down Payment Assistance, by Source (Loans with LTV Ratio Greater Than 95 percent, Fiscal Years 2000-2005):
Figure 3: Percentage of FHA-Insured Single-Family Purchase Money Loans Using Nonprofit Down Payment Assistance and House Price Appreciation Rates, by State:
Figure 4: Structure of FHA Individual Purchase Transaction, with Nonseller-Funded Down Payment Assistance and with Seller-Funded Down Payment Assistance:
Figure 5: Generic Illustration of Addendum to the Sales Contract Completed Prior to Closing that Facilitates Seller's Commitment to Providing Financial Payment to the Nonprofit Organization after Closing:
Figure 6: Example of LTV Ratio Calculations for FHA-Insured Loans, by Source of Down Payment Funds:
Figure 7: Delinquency and Claim Rates, by Maximum Age of Loan and Source of Down Payment Funds:
Figure 8: Effect of Down Payment Assistance on the Probability of Delinquency and Claim, Controlling for Selected Variables:

ARM: adjustable rate mortgage:
AVM: Automated Valuation Model:
CHUMS: Computerized Homes Underwriting Management System:
FHA: Federal Housing Administration:
GSE: government-sponsored enterprises:
HAND: Homeownership Alliance of Nonprofit Downpayment Providers:
HUD: The U.S. Department of Housing and Urban Development:
IRS: Internal Revenue Service:
LTV: loan-to-value:
MSA: Metropolitan Statistical Area:
OIG: Office of Inspector General:
RHS: U.S. Department of Agriculture's Rural Housing Service:
TOTAL: Technology Open to Approved Lenders:
VA: Department of Veterans Affairs:

November 9, 2005:

The Honorable Bob Ney:
Chairman:
Subcommittee on Housing and Community Opportunity:
Committee on Financial Services:
House of Representatives:

Dear Mr. Chairman:

Mortgage insurance provided by the Federal Housing Administration (FHA) of the U.S. Department of Housing and Urban Development (HUD) insures billions of dollars in private home loans each year. One of FHA's primary goals is to expand homeownership opportunities for first-time homebuyers and other borrowers who would not otherwise qualify for conventional mortgages on affordable terms. Homebuyers who receive FHA-insured mortgages often have limited funds and, to meet the 3 percent borrower investment FHA requires, may obtain down payment assistance from a third party, including not only a relative but also a charitable organization (nonprofit) that is funded by the property seller.
A purpose of a down payment is to create "instant equity" for the new homeowner, and our work and others have shown that loans with greater owner investment generally perform better.[Footnote 1] HUD's Office of Inspector General (OIG) has raised concerns about the performance of FHA-insured loans with down payment assistance from seller-funded nonprofits.[Footnote 2] In light of these concerns, you asked us to evaluate how FHA-insured home loans with down payment assistance perform compared with loans that are originated without such assistance. The insurance program is supported in part through insurance premiums that FHA charges its borrowers, and FHA estimates that the mortgage insurance fund operates at a profit. In response to your request, this report examines (1) trends in the use of down payment assistance in FHA-insured loans (e.g., volume and source), (2) the impact that the presence of down payment assistance has on the structure of the purchase transaction and the house price of FHA- insured loans, (3) the effect of down payment assistance on the performance of FHA-insured loans, and (4) the extent to which FHA standards and controls for loans with down payment assistance are consistent with government internal control guidelines and mortgage industry practices. To describe trends in the use of down payment assistance with FHA- insured loans, we obtained loan-level data from HUD on single-family purchase money mortgage loans.[Footnote 3] We analyzed the data by source of assistance to determine trends in loan volume and the proportion of loans with down payment assistance (including geographic variations). To examine the structure of the purchase transaction for loans with and without down payment assistance, we reviewed HUD policy guidebooks and reports and interviewed real estate agents, lenders, appraisers, and other key players involved in transactions with down payment assistance. To examine how down payment assistance impacted the house price of FHA-insured loans, we examined the sales prices of homes by the use and source of down payment assistance using property value estimates derived from an Automated Valuation Model (AVM).[Footnote 4] To examine how down payment assistance influences the performance of FHA-insured loans, we obtained from HUD a sample of single-family purchase money loans endorsed in fiscal years 2000, 2001, and 2002 and performance data on those loans (current as of June 30, 2005).[Footnote 5] To examine the extent to which FHA standards and controls for loans with down payment assistance were consistent with government internal control guidelines, we reviewed FHA regulations and guidelines for loans with down payment assistance and compared these with certain internal control standards.[Footnote 6] We also interviewed mortgage industry participants about the controls they used to manage the risks associated with affordable loan products that permit down payment assistance and, as appropriate, compared their practices with FHA's. We did not verify that these institutions did in fact use these controls. We selected these entities because they offered products intended to expand affordable homeownership opportunities in part by permitting down payment assistance. Appendix I provides a full description of our scope and methodology. 
We performed our audit work in Boston, Massachusetts, and Washington, D.C., from January 2005 to September 2005 in accordance with generally accepted government auditing standards.

Results in Brief:

The proportion of FHA-insured loans that are financed in part by down payment assistance from various sources has increased substantially in the last few years, while the overall number of loans that FHA insures has fallen dramatically. Assistance from nonprofit organizations funded by sellers has accounted for a growing percentage of that assistance.[Footnote 7] From 2000 to 2004, the total proportion of FHA-insured loans with down payment assistance grew from 35 to nearly 50 percent. Approximately 6 percent of FHA-insured loans in 2000 received down payment assistance from seller-funded nonprofits, but by 2004 nonprofit assistance had grown to about 30 percent. Our analysis showed that those states where the use of nonprofit down payment assistance, primarily from seller-funded nonprofits, was higher than average tended to have lower-than-average house price appreciation rates.

Down payment assistance provided by a seller-funded nonprofit can alter the structure of the purchase transaction in important ways. First, when a homebuyer receives assistance from a seller-funded nonprofit, many nonprofits require the property sellers to make a payment to the nonprofit that equals the amount of assistance the homebuyer receives plus a service fee, after the closing. This requirement creates an indirect funding stream from property sellers to homebuyers that does not exist in other transactions, even those involving some other type of down payment assistance. Second, mortgage industry participants reported, and a HUD contractor study found, that property sellers who provided down payment assistance through nonprofits often raised the sales price of the homes involved in order to recover the required payments that went to the organizations.[Footnote 8] Our AVM analyses found that homes bought with seller-funded nonprofit assistance appraised at and sold for higher prices than comparable homes bought without assistance, resulting in larger loans for the same collateral and higher effective loan-to-value (LTV) ratios.[Footnote 9] Specifically, we found that homes with seller-funded down payment assistance were appraised and sold for about 2 to 3 percent more than comparable homes without such assistance. That is, homebuyers would have less equity in the transaction than would otherwise be the case. FHA requires lenders to inform appraisers of the presence and source of down payment assistance but does not require that lenders identify whether the down payment assistance provider receives funding from property sellers. Without this information, appraisers cannot consider the impact that such assistance could have on the purchase price of a home and potentially on the appraiser's estimate of the home's market value.

Loans with down payment assistance do not perform as well as loans without down payment assistance; this may be explained, in part, by the homebuyer having less equity in the transaction. Holding other variables constant, our analysis indicated that FHA-insured loans with down payment assistance had higher delinquency and claim rates than similar loans without such assistance. These differences in performance may be explained, in part, by the higher sales prices of comparable homes bought with seller-funded down payment assistance.
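The price and equity effect described above can be illustrated with a small numerical sketch. The dollar amounts, the roughly 3.5 percent markup, and the function below are assumptions chosen only for exposition; they are not drawn from GAO's data or models.

# Illustrative sketch (not GAO's analysis): how a seller-funded "gift" that is
# recovered through a higher sales price can erase the buyer's effective equity.
def loan_to_market_value(sale_price, down_payment, market_value):
    """Loan amount (price minus down payment) relative to an assumed market value."""
    return (sale_price - down_payment) / market_value

market_value = 100_000.0  # assumed underlying value of the home

# Case 1: no assistance -- the buyer's 3 percent investment comes from savings.
no_gift_ltv = loan_to_market_value(100_000.0, 3_000.0, market_value)

# Case 2: seller-funded assistance -- the seller raises the price by roughly the
# gift plus a service fee, so the same house secures a larger loan.
gift, service_fee = 3_000.0, 500.0
with_gift_ltv = loan_to_market_value(100_000.0 + gift + service_fee, gift, market_value)

print(round(no_gift_ltv, 3))    # 0.97  -- about 3 percent equity
print(round(with_gift_ltv, 3))  # 1.005 -- loan exceeds the assumed market value

Under these assumed figures the borrower with seller-funded assistance starts out owing more than the home's assumed market value, which is the mechanism behind the higher effective LTV ratios described in this section.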
FHA has implemented some standards and internal controls to manage the risks associated with loans with down payment assistance, but stricter standards and additional controls could help FHA better manage risks posed by loans with down payment assistance while meeting its mission of expanding homeownership opportunities. First, with regard to standards, like other mortgage industry participants, FHA generally applies the same underwriting standards to loans with down payment assistance that it applies to loans without such assistance. One important exception is that FHA, unlike others, does not limit the use of down payment assistance from seller-funded nonprofits. Some mortgage industry participants view down payment assistance from seller-funded nonprofits as a seller inducement to the sale and, therefore, either restrict or prohibit its use. FHA has not viewed such assistance as a seller inducement and, therefore, does not subject this assistance to the limits it otherwise places on contributions from sellers. Although FHA, like others, applies the same underwriting standards to loans with down payment assistance as it applies to loans without such assistance, because FHA's portfolio is heavily weighted toward loans with down payment assistance, stricter standards may be warranted for such loans. Second, with regard to controls, FHA has taken steps to assess and manage the risks associated with loans with down payment assistance, but additional controls may be warranted. For example, FHA has conducted ad hoc loan performance analyses of loans with down payment assistance and contracted for two studies to assess the use of such assistance with FHA-insured loans, but FHA has not routinely assessed the impact that the widespread use of down payment assistance has had on loan performance. Also, FHA has targeted monitoring of appraisers that do a high volume of loans with down payment assistance, but FHA has not targeted its monitoring of lenders that do a high volume of loans with down payment assistance, even though FHA holds lenders, as well as appraisers, accountable for ensuring a fair valuation of the property it insures. We make recommendations designed to better manage the risks of loans with down payment assistance generally and more specifically from seller-funded nonprofits. Overall, we recommend that in considering the cost and benefit of its policy permitting down payment assistance, FHA also consider risk mitigation techniques such as including down payment assistance as a factor when underwriting loans or monitoring more closely loans with such assistance. With regard to down payment assistance providers that receive funding from property sellers, we recommend that FHA take additional steps to mitigate the risk associated with these loans. These controls include treating such assistance as a seller contribution and, therefore, subject to existing limits on seller contributions. We provided a draft of this report to HUD, and the Assistant Secretary for Housing--Federal Housing Commissioner provided written comments, which are discussed later in this report and reprinted in appendix IV. HUD generally agreed with the report's findings, stating that the report confirmed its own analysis of loan performance and the findings of an independent contractor hired by FHA to evaluate how seller-funded down payment assistance programs operate. 
HUD also agreed to take steps to better identify the source of down payment assistance, which would permit it to better monitor the performance of these loans. HUD also agreed to consider incorporating the presence and source of down payment assistance when underwriting loans. HUD also commented on certain aspects of selected recommendations. First, although HUD agreed with the report's recommendation to perform routine and targeted loan performance analyses of loans with down payment assistance, it stated that FHA already monitors the performance of these loans. We recognized in our report that FHA does perform ad hoc analyses of loan performance, but because of the substantial number of FHA loans that involve some form of down payment assistance, and the risk of these loans, we continue to believe that FHA should more routinely monitor the performance of these loans. Second, HUD disagreed with our recommendation that it should revise its standards to treat assistance from a seller-funded nonprofit organization as a seller inducement to purchase, arguing, based on advice of HUD's Office of the General Counsel, that if the gift of down payment assistance is made by the nonprofit entity to the buyer before closing, while the seller's contribution to the nonprofit entity occurs after the closing, then the buyer has not received funds that can be traced to the seller's contribution. We realize that FHA relies on this advice to authorize sellers to do indirectly what they cannot do directly. Nevertheless, because gifts of down payment assistance from seller-funded nonprofits are ultimately funded by the sellers, they are like gifts of down payment assistance made directly by sellers. We, therefore, continue to believe that assistance from a seller-funded entity should be treated as a seller inducement to purchase. Finally, while the draft report was with the agency for comment, HUD's contractor completed the 2005 Annual Actuarial Review. Consistent with our recommendation, the contractor included the presence and source of down payment assistance as a factor in estimating loan performance--finding that it is a very important factor. However, in reviewing the contractor's methodology, we found certain limitations may understate the impact that down payment assistance has on estimates of loan performance. We, therefore, modified our recommendation to address one of these weaknesses and to emphasize the continuing need to consider the presence and source of down payment assistance in future loan performance models. Mortgage insurance, a commonly used credit enhancement, protects lenders against losses in the event of default. Lenders usually require mortgage insurance when a homebuyer has a down payment of less than 20 percent of the value of the home. FHA, the U.S. Department of Veterans Affairs (VA), the U.S. Department of Agriculture's Rural Housing Service (RHS), and private mortgage insurers provide this insurance. In 2003, lenders originated $3.8 trillion in single-family mortgage loans, of which more than 60 percent were for refinancing. Of all the insured loans originated in 2003, including refinancings, private companies insured about 64 percent, FHA about 26 percent, VA about 10 percent, and RHS a very small number. One of FHA's primary goals is to expand homeownership opportunities for first-time homebuyers and other borrowers who would not otherwise qualify for conventional mortgages on affordable terms. 
As a result, FHA plays a particularly large role in certain market segments, including first-time and low-income homebuyers. During 2001 to 2003, FHA insured about 3.7 million mortgages with a total value of about $425 billion. FHA insures most of its single-family mortgages under its Mutual Mortgage Insurance Fund (Fund), which is primarily funded with borrowers' insurance premiums and proceeds from the sale of foreclosed properties. FHA's mortgage insurance program is currently a negative subsidy program--that is, the Fund is self-financed and FHA estimates that it operates at a profit; however, the Fund is experiencing higher-than-estimated claims. The economic value of the Fund that supports FHA's guarantees depends on the relative size of cash outflows and inflows over time. Cash flows out of the Fund from payments associated with claims on defaulted loans and refunds of up-front premiums on prepaid mortgages. To cover these outflows, FHA receives cash inflows from borrowers' up-front and annual insurance premiums and net proceeds from recoveries on defaulted loans. If the Fund were to be exhausted, the U.S. Treasury would have to cover lenders' claims directly. We reported that FHA submitted a $7 billion reestimate for the Fund's credit subsidy and interest as of the end of 2003, primarily due to an increase in estimated and actual claims over what FHA previously estimated.[Footnote 10] Several recent events may help explain the increase in claims, including changes to underwriting guidelines, competition from the private sector, and an increase in down payment assistance. A program assessment included with the 2006 President's Budget noted that FHA's loan performance model is neither accurate nor reliable because it consistently underpredicts claims. Since 1990, the National Housing Act has required an annual and independent actuarial analysis of the economic net worth and soundness of the Fund.[Footnote 11]

FHA has been backing mortgages with low down payments for many years. For example, almost 90 percent of FHA-insured mortgages originated in 2000 had an LTV ratio greater than 95 percent. LTV ratios are important because of the direct relationship that exists between the amount of equity borrowers have in their homes and the likelihood of default. The higher the LTV ratio, the less cash borrowers will have invested in their homes and the more likely it is that they may default on mortgage obligations, especially during times of economic hardship.

The number of loans that FHA insures each year has fallen dramatically since 2000 (fig. 1). This decline is likely due, in part, to greater availability of low and no down payment products from the conventional market. Specifically, in 1992 Congress authorized HUD to establish housing goals for Fannie Mae and Freddie Mac that direct them to contribute to the affordability and availability of housing for low- and moderate-income families, underserved areas, and special affordable housing for very low-income families.[Footnote 12] In the 1990s, private mortgage insurers began insuring loans with low down payments; concurrently, Fannie Mae and Freddie Mac began purchasing these loans. More recently, the conventional market has introduced products such as zero-down payment loans that have attracted homebuyers who might otherwise have applied for an FHA-insured mortgage. Certain conventional mortgage products also permit down payment assistance.
Figure 1: Number of FHA-Insured Single-Family Purchase Money Loans, Fiscal Years 2000 through 2005: [See PDF for image] Note: Loans insured by FHA's 203(b) program, its main single-family program, and its 234(c) condominium program. Small specialized programs, such as 203(k) rehabilitation and 221(d) subsidized mortgages, were not included. [End of figure] Homebuyers with FHA-insured loans need to make a 3 percent contribution toward the purchase of the property. FHA, like many conventional mortgage lenders, permits homebuyers to obtain these funds from certain third-party sources and use the money for the down payment and closing costs. Generally, mortgage industry participants accept as third-party sources relatives, a borrower's employer, government agencies, and charitable organizations (nonprofits).[Footnote 13] Among nonprofits that provide down payment assistance, some receive contributions from property sellers. When a homebuyer receives down payment assistance from one of these organizations, the organization requires the property seller to make a financial payment to the organization. These nonprofits are commonly called "seller-funded" down payment assistance providers. Examples of seller-funded nonprofits that provide the most down payment assistance to homebuyers with FHA-insured mortgages include Nehemiah Corporation of America; AmeriDream, Incorporated; and The Buyers Fund, Incorporated. A 1998 memorandum from HUD's Office of the General Counsel found that funds from a seller-funded nonprofit were not in conflict with FHA's guidelines that prohibit down payment assistance from sellers.[Footnote 14] In contrast, some nonprofits do not require property sellers to make a financial payment to their organization in return for providing down payment assistance to a homebuyer. Examples of these nonprofits that provide the most down payment assistance to homebuyers with FHA-insured mortgages include the Clay Foundation, Incorporated; and Family Housing Resources, Incorporated. For a nonprofit to provide down payment assistance to a homebuyer, regardless of its funding source, FHA requires that the organization have a Taxpayer Identification Number.[Footnote 15] FHA does not approve down payment assistance programs administered by nonprofits; instead, lenders are responsible for assuring that the gift to the homebuyer from a nonprofit meets FHA requirements. FHA relies on lenders to underwrite the loans and determine their eligibility for FHA mortgage insurance. Lenders wanting to participate in FHA's mortgage programs receive approval from HUD. As of August 2004, over 10,000 lending institutions had been approved. These lenders review loan applications and assess applicants' creditworthiness and ability to make payments. FHA relies on these lenders to ensure compliance with FHA standards. Lenders often initiate the use of down payment assistance from seller-funded down payment assistance providers. Additionally, FHA and its lenders rely upon appraisers to provide an independent and accurate valuation of properties. A primary role of appraisals in the loan underwriting process is to provide evidence that the collateral value of a property is sufficient to avoid losses if the borrower is unable to repay the loan. Legislation sets certain standards for FHA-insured loans.
Currently, depending on a property's appraised value and the average closing costs within a state, the LTV limits range from 97.15 to 98.75 percent.[Footnote 16] However, because FHA allows financing of the up-front insurance premium, borrowers can receive a mortgage with an effective LTV ratio of close to 100 percent. FHA also has flexibility in how it implements changes to an existing product. For example, the HUD Secretary can change underwriting requirements for existing products and has done this many times. Specific examples include a decrease in the items considered as a borrower's debts and an expanded definition of what can be included as a borrower's effective income when lenders calculate qualifying ratios. Additionally, HUD is supporting a legislative proposal that would enable HUD to insure mortgages with no down payment. Borrowers would also be able to finance certain closing costs. FHA would charge borrowers premiums that would be higher than those for FHA's regular 203(b) mortgage product. The program is targeted to first-time homebuyers, and borrowers would be required to participate in homebuyer counseling. According to HUD, a zero down payment program would provide FHA with a better way to serve families in need of down payment assistance. We previously recommended that Congress and FHA consider a number of means to mitigate the risks that a no down payment product and any other new single-family insurance product may pose. Such means may include limiting the initial availability of new products, requiring higher premiums, and requiring stricter underwriting and enhanced monitoring. Such risk mitigation techniques would help protect the Fund while allowing FHA time to learn more about the performance of such loans.[Footnote 17] The mortgage industry is increasingly using credit scoring, automated underwriting, and mortgage scoring. Credit scoring models, which estimate the credit risk of individuals, use statistical analyses that identify the characteristics of borrowers who are most likely to make loan payments and then create a weight or score for each characteristic. Credit scores, also known as FICO scores because they are generally based on software developed by Fair, Isaac and Company, range from 300 to 850, with higher scores indicating a better credit history. Automated underwriting is the automated collection and processing of the data used in the underwriting process. During the 1990s, private mortgage insurers, the GSEs, and larger financial institutions developed automated underwriting systems, and by 2002 more than 60 percent of all mortgages were underwritten using these systems. This percentage continues to rise.[Footnote 18] Mortgage scoring is a technology-based tool that relies on the statistical analysis of millions of previously originated mortgage loans to determine how key attributes such as credit history, property characteristics, and mortgage terms affect future loan performance. FHA has developed and recently implemented a mortgage scoring tool, called the Technology Open to Approved Lenders (TOTAL) Mortgage Scorecard, that can be used in conjunction with existing automated underwriting systems. We identified and reviewed three studies that evaluated the extent to which the presence of down payment assistance affects loan performance, but these analyses have been limited in that they do not consider other variables that may be important to delinquency and claim, such as borrowers' credit scores and the period during which a loan is observed.
HUD's OIG conducted two studies looking at defaults on FHA- insured loans with down payment assistance.[Footnote 19] In the first study, the OIG found that the default rate for a sample of FHA-insured loans with down payment assistance provided by Nehemiah, a seller- funded nonprofit, was more than double that of loans that did not get assistance from this nonprofit (4.64 percent and 2.11 percent, respectively). The second more recent study found that the default rate for the same sample of Nehemiah-assisted loans had quadrupled to 19.42 percent. Moreover, this default rate was double the default rate for loans that did not get assistance from this nonprofit (9.7 percent). The OIG's studies did not adjust for other variables that could potentially explain these differences in loan performance, such as differences in borrowers' credit scores or house price appreciation after the loans were originated. In response to the OIG's findings, FHA contracted for analysis of a sample of FHA-insured loans to identify the presence and source of down payment assistance. A coalition of down payment assistance nonprofits, Homeownership Alliance of Nonprofit Downpayment Providers (HAND), released a study which found that delinquency rates for loans with assistance from nonprofits were about 11 percent higher than for loans with gifts from relatives. HAND also noted that the delinquency rates on loans with assistance from nonprofits were about the same as the delinquency rates on loans receiving other forms of assistance.[Footnote 20] The HAND study adjusted for geographic distribution, but not for other factors, such as borrowers' credit scores or the age of the loans. Because loans with assistance from nonprofits were a small portion of FHA's portfolio until 2000, most of the loans in this sample with assistance from nonprofits would have had little time in which to experience a delinquency, unlike other loans in the sample. The Percentage of Purchase Loans in FHA's Portfolio with Down Payment Assistance Has Been Increasing Since 2001: As the number of home mortgages FHA insures each year has fallen, the number of FHA-insured single-family purchase money loans with nonprofit down payment assistance has not. As a result, the proportion of loans with down payment assistance that FHA insures each year has increased significantly. From 2000 to 2004, the total proportion of FHA-insured single-family purchase money loans that had an LTV ratio greater than 95 percent and that also involved down payment assistance, from any source, grew from 35 to nearly 50 percent (fig. 2).[Footnote 21] Assistance from nonprofit organizations, about 93 percent of which were funded by sellers, accounted for an increasing proportion of this assistance. 
Approximately 6 percent of FHA-insured loans received down payment assistance from nonprofit organizations in 2000, but by 2004 this figure had grown to about 30 percent.[Footnote 22] Our analysis of a sample of FHA-insured loans from 2000 to 2002 showed that the average amount of down payment assistance, regardless of source, was about $3,400 and that the amount of down payment assistance relative to sales price was about 3 percent.[Footnote 23] Figure 2: Number of FHA-Insured Single-Family Purchase Money Loans and Percentage of Loans with Down Payment Assistance, by Source (Loans with LTV Ratio Greater Than 95 percent, Fiscal Years 2000-2005): [See PDF for image] Note: Percentages of loans with down payment assistance by source for 2000, 2001, and 2002 are based on a representative sample of FHA-insured purchase money loans with an LTV ratio greater than 95 percent. Of the loans in the sample with nonprofit assistance, 93.5 percent had seller-funded assistance, 1.8 percent had nonseller-funded assistance, 0.5 percent had assistance from a nonprofit with both seller-funded and nonseller-funded programs, and 4.2 percent had assistance from nonprofits with a status that we could not identify. For these years, our category "nonprofit" includes only loans with assistance from nonprofit organizations we could verify as requiring funds from sellers as a condition of providing assistance. All other loans with nonprofit assistance were included in the nonseller-funded (other sources) group. Percentages of loans with down payment assistance by source for 2003 through April 2005 are based on the total universe of FHA-insured purchase money loans with an LTV ratio greater than 95 percent. For these years, our category "nonprofit" includes loans with assistance from all nonprofit organizations. We reviewed the nonprofit assistance provider for 95.2 percent of the loans with nonprofit assistance. Of these loans, 93.5 percent had seller-funded assistance, 1.5 percent had nonseller-funded assistance, 1.1 percent had assistance from a nonprofit with both seller-funded and nonseller-funded programs, and 3.9 percent had assistance from nonprofits with a status that we could not identify. We did not review nonprofit organizations that provided a low volume of assistance. [End of figure] As figure 2 illustrates, the total number of FHA-insured loans originated fell dramatically between 2001 and 2005. Realtors we spoke with from across the country told us that fewer homebuyers were using FHA-insured mortgages, opting instead for conventional low and zero down payment mortgage products and loans with secondary financing that do not require private mortgage insurance. In addition, officials from government agencies that provide down payment assistance noted either a decrease in the use of FHA mortgage insurance, an increase in the demand for conventional mortgages, or both. Although the number of FHA-insured loans decreased markedly from 2001 to 2004, the number of FHA-insured loans with down payment assistance did not. As a result, these loans constitute a growing share of FHA's total portfolio. Growth in the number of seller-funded nonprofit providers and the growing acceptance of this type of assistance have contributed to the increase in the use of down payment assistance.
According to industry professionals, relatives have traditionally provided such assistance, but in the last 10 years other sources have emerged, including not only seller-funded nonprofit organizations, but also government agencies and employers. The mortgage industry has responded by developing practices to administer this type of assistance, such as FHA's policies requiring gift letters and documentation of the transfer of funds. Lenders also reported that seller-funded down payment assistance providers, in particular, have developed practices accepted by FHA and lenders. For example, seller- funded programs have standardized gift letter and contract addendum forms for documenting both the transfer of down payment assistance funds to the homebuyer and the financial contribution from the property seller to the nonprofit organization. As a result, for FHA-insured loans, lenders are increasingly aware of and willing to accept down payment assistance, including from seller-funded nonprofits. States that have higher-than-average percentages of FHA-insured loans with nonprofit down payment assistance, primarily from seller-funded programs, tend to be states with lower-than-average house price appreciation rates (fig. 3).[Footnote 24] From May 2004 to April 2005, 34.6 percent of all FHA-insured purchase money loans nationwide involved down payment assistance from a nonprofit organization, and 15 states had percentages that were higher than this nationwide average. Fourteen of these 15 states also had house price appreciation rates that were below the median rate for all states. In addition, the eight states with the lowest house appreciation rates in the nation all had higher-than-average percentages of nonprofit down payment assistance. Generally, states with high proportions of FHA-insured loans with nonprofit down payment assistance were concentrated in the Southwest, Southeast, and Midwest. Figure 3: Percentage of FHA-Insured Single-Family Purchase Money Loans Using Nonprofit Down Payment Assistance and House Price Appreciation Rates, by State: [See PDF for image] [End of figure] Some real estate agents we spoke with commented that in housing markets with low house appreciation rates, sellers do not typically receive multiple offers for their properties. As a result, they may turn to seller-funded down payment assistance providers to attract and expand the pool of potential homebuyers and facilitate purchase transactions that can result in higher sales prices. In contrast, in real estate markets with high house appreciation rates, such as San Francisco and New York City, mortgage industry participants reported that they generally see more assistance in the form of secondary financing involving first and second mortgages. This assistance is often provided by government agencies and nonprofit instrumentalities of government. In addition, lenders and private mortgage insurers described housing markets located on the coasts, and in urban areas in general as having higher proportions of homebuyers utilizing down payment assistance in the form of secondary financing. Purchase transactions in which the seller was a builder had higher usage of nonprofit down payment assistance than did other purchase transactions. In our sample of loans endorsed in 2000, 2001, and 2002, homes sold by builders were more than twice as likely to involve down payment assistance from seller-funded nonprofits as homes sold by nonbuilder property sellers. 
Specifically, of the home purchase transactions involving nonbuilder property sellers, 8.3 percent had seller-funded down payment assistance, compared with 19.3 percent of transactions with homes sold by builders. Ninety-seven percent of the loans originated by one lender that was affiliated with a builder involved nonprofit down payment assistance. Seller-Funded Assistance Affects Home Purchase Transactions and Can Raise House Prices: The presence of down payment assistance from seller-funded nonprofits can alter the structure of purchase transactions and often results in higher house prices. As we have seen, homebuyers may receive down payment assistance from a variety of sources besides seller-funded nonprofits, including relatives and various government and nonprofit homebuyer assistance programs. When buyers receive assistance from sources other than seller-funded nonprofits, the home purchase takes place like any other purchase transaction--buyers use the funds to pay part of the house price, the closing costs, or both, reducing the mortgage by the amount they pay and creating "instant equity." However, seller-funded down payment assistance programs typically require property sellers to make a financial contribution and pay a service fee after the closing, creating an indirect funding stream from property sellers to homebuyers that does not exist in a typical transaction. Further, our analysis indicated, and mortgage industry participants we spoke with reported, that property sellers often raised the sales price of their properties in order to recover the contribution to the seller-funded nonprofit that provided the down payment assistance. In these cases, homebuyers may have had mortgages that were higher than the true market value of the house and would have acquired no equity through the transaction. Seller-Funded Down Payment Assistance Changes the Structure of the Purchase Transaction: FHA guidelines state that providers of down payment assistance may not have an interest in the sale of the property, noting that assistance from sellers, real estate agents, builders, and associated entities is considered an inducement to buy.[Footnote 25] FHA guidelines do allow sellers to contribute up to 6 percent of the sales price toward closing costs, although none of this money can be used to meet the 3 percent borrower contribution requirement.[Footnote 26] Contributions from sellers exceeding 6 percent of the sales price or exceeding the actual closing costs result in a dollar-for-dollar reduction to the sales price when calculating the loan's LTV ratio. In spite of these requirements, FHA lists among acceptable providers not only relatives, a borrower's employer, and homeownership programs but also charitable organizations (nonprofits)--including those that are funded by contributions from property sellers. As with down payment assistance from all other sources, FHA does not limit the amount of assistance from seller-funded nonprofits, and homebuyers can use this assistance for the down payment and closing costs. As a result, individuals and entities that HUD has described as having an interest in the sale of a property may provide gift assistance to homebuyers indirectly through these nonprofits, effectively circumventing the 6 percent rule. The presence of this type of assistance changes the way a property is purchased by creating an indirect funding stream from the seller to the buyer (fig. 4).
That is, after the closing, these organizations commonly require property sellers to provide both a financial payment equal to the amount of assistance paid to the homeowner and a service fee. Before the sale of the property, sellers that partner with these nonprofits often complete an addendum to the sales contract that outlines, as a condition of the sale, their commitment to providing a financial payment and fee after closing (fig. 5). Figure 4: Structure of FHA Individual Purchase Transaction, with Nonseller-Funded Down Payment Assistance and with Seller-Funded Down Payment Assistance: [See PDF for image] [End of figure] Figure 5: Generic Illustration of Addendum to the Sales Contract Completed Prior to Closing that Facilitates Seller's Commitment to Providing Financial Payment to the Nonprofit Organization after Closing: [See PDF for image] [End of figure] Seller-Funded Down Payment Assistance Often Results in Higher Sales Prices: When a homebuyer receives down payment assistance from a seller-funded nonprofit, property sellers often raise the sales price of the property to recover the required payment to the nonprofit providing the assistance. GAO analysis of a national sample of FHA-insured loans endorsed in 2000, 2001, and 2002 suggests that homes with seller-funded assistance were appraised and sold for about 3 percent more than comparable homes without such assistance.[Footnote 27] Additionally, our analysis of more recent loans, a sample of FHA-insured loans settled in March 2005, indicates that homes sold with nonprofit assistance were appraised and sold for about 2 percentage points more than comparable homes without nonprofit assistance.[Footnote 28] To examine the possibility that sales prices of homes with seller-funded assistance were in fact higher than sales prices of comparable homes without such assistance, we contracted with First American Real Estate Solutions to provide estimates of the value of homes in a sample of FHA-insured loans. The values were calculated for the month prior to the closing, using an automated valuation model (AVM). AVMs, which use statistical processes to estimate property values based on property characteristics and trends in sales prices in the surrounding areas, are widely used in the mortgage industry for quality control and other purposes. We examined the ratio of the estimated AVM values to the appraisal values and sales prices and found that the ratios for loans with seller-funded nonprofit down payment assistance were about 2 to 3 percentage points lower than the ratios for loans without such assistance. In other words, for loans with seller-funded down payment assistance, the appraised value and sales price were higher than for loans without such assistance. See appendix II for the details of our analysis. In addition, some mortgage industry participants told us that homes purchased with down payment assistance from seller-funded nonprofits may be appraised for higher values than if the same homes were purchased without assistance. Appraisers we spoke with said that lenders, realtors, and sellers sometimes pressured them to "bring in the value" in order to complete the sale. Additionally, a prior HUD contractor study corroborates the existence of these pressures.[Footnote 29] FHA requires lenders to provide information to appraisers about the source and amount of assistance.
However, FHA reporting requirements do not require lenders to inform appraisers whether the source of the assistance is a seller-funded nonprofit.[Footnote 30] HUD has issued several Mortgagee Letters that provide clarifications regarding FHA standards and requirements for loans with down payment assistance.[Footnote 31] For example, in January 2005, HUD issued a Mortgagee Letter to clarify FHA's standards requiring that appraisers be informed of the presence and source of down payment assistance, regardless of its source.[Footnote 32] Also in January 2005, HUD issued a Mortgagee Letter to reiterate that lenders are required to ensure that appraisals comply with FHA requirements.[Footnote 33] Lenders we spoke with reported that they document the source of the assistance--a relative, a nonprofit, or a borrower's employer, for instance--but typically do not inform appraisers about the relationship between the seller and the down payment assistance provider. Marketing materials from seller-funded nonprofits often emphasize that property sellers using these down payment assistance programs earn a higher net profit than property sellers who do not. These materials show sellers receiving a higher sales price that more than compensates for the fee typically paid to the down payment assistance provider. For homebuyers who receive assistance from seller-funded nonprofits, the higher sales prices result in mortgages that are higher than mortgages made using other types of down payment assistance, such as a gift from a relative, or with no assistance at all. Additionally, several mortgage industry participants we interviewed noted that when homebuyers obtained down payment assistance from seller-funded nonprofits, property sellers increased their sales prices to recover their payments to the nonprofits providing the assistance. Again, a prior HUD contractor study corroborates the existence of this practice.[Footnote 34] A higher sales price results in a larger loan for the same collateral and, therefore, a higher effective LTV ratio (fig. 6). Figure 6: Example of LTV Ratio Calculations for FHA-Insured Loans, by Source of Down Payment Funds: [See PDF for image] [End of figure] The higher sales price that often results from a transaction involving seller-funded down payment assistance can have the perverse effect of denying buyers any equity in their properties and creating higher effective LTV ratios. As we have seen, FHA guidance stipulates that any financial assistance provided by a party with an interest in the sale of the property is limited to 6 percent of the sales price and can be used only for closing costs. Contributions from interested parties, such as sellers, that exceed 6 percent of the sales price or the actual closing costs result in a dollar-for-dollar reduction to the sales price when calculating the loan's LTV ratio. Along with the maximum allowable LTV ratio, the effect of this requirement is to ensure that FHA homebuyers obtain a certain amount of "instant equity" at closing. That is, when the sales price represents the fair market value of the house, and the homebuyer contributes 3 percent of the sales price at the closing, the LTV ratio is less than 100 percent. But when a seller raises the sales price of a property to accommodate a contribution to a nonprofit that provides down payment assistance to the buyer, the buyer's mortgage may represent 100 percent or more of the property's true market value.
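The arithmetic behind this effect can be sketched in a few lines. The dollar figures below are hypothetical and are not drawn from figure 6 or from the report's loan samples; they simply illustrate, under assumed numbers, how building the seller's contribution into the sales price can push the effective LTV ratio to roughly 100 percent or more of a home's true market value, particularly once the up-front insurance premium is financed.

    # Hypothetical illustration only; the dollar amounts and the 1.5 percent
    # up-front premium rate are assumptions, not figures from the GAO analysis.

    def effective_ltv(sales_price, true_market_value, down_payment, financed_premium=0.0):
        """Loan amount (plus any financed up-front premium) divided by true market value."""
        loan = sales_price - down_payment + financed_premium
        return loan / true_market_value

    true_value = 100_000          # assumed true market value of the home
    contribution_rate = 0.03      # FHA's 3 percent borrower contribution

    # Case 1: no seller-funded assistance. The home sells at its market value
    # and the buyer supplies the 3 percent contribution from personal funds.
    price = true_value
    ltv_without = effective_ltv(price, true_value, down_payment=contribution_rate * price)

    # Case 2: seller-funded assistance. The seller raises the price about
    # 3 percent to recover the contribution to the nonprofit, and the "gift"
    # supplies the buyer's 3 percent contribution.
    price_assisted = true_value * 1.03
    gift = contribution_rate * price_assisted
    ltv_with = effective_ltv(price_assisted, true_value, down_payment=gift)

    # Financing an assumed 1.5 percent up-front premium pushes the ratio
    # above 100 percent of the true market value.
    base_loan = price_assisted - gift
    ltv_with_premium = effective_ltv(price_assisted, true_value, gift, financed_premium=0.015 * base_loan)

    print(f"Without assistance: {ltv_without:.1%}")                          # 97.0%
    print(f"With seller-funded assistance: {ltv_with:.1%}")                  # 99.9%
    print(f"With assistance and financed premium: {ltv_with_premium:.1%}")   # about 101%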
FHA-Insured Loans with Down Payment Assistance, Particularly from Seller-Funded Nonprofits, Do Not Perform as Well as Similar Loans without Assistance: Holding other variables constant, FHA-insured loans with down payment assistance do not perform as well as similar loans without such assistance. Furthermore, loans with down payment assistance from seller-funded nonprofits do not perform as well as loans with assistance from other sources. This difference in performance may be explained, in part, by the higher sales prices of comparable homes bought with seller-funded down payment assistance. For our analyses, we used two samples (i.e., national and MSA) of FHA-insured single-family purchase money loans endorsed in 2000, 2001, and 2002.[Footnote 35] We grouped the loans into the following three categories:

* loans with assistance from seller-funded nonprofit organizations,

* loans with assistance from nonseller-funded sources, and:

* loans without assistance.[Footnote 36]

We analyzed loan performance by source of down payment assistance, controlling for the maximum age of the loan. As shown in figure 7, in both samples and in each year, loans with down payment assistance from seller-funded nonprofit organizations had the highest rates of delinquency and claims, and loans without assistance the lowest. Specifically, between 22 and 28 percent of loans with seller-funded assistance had experienced a 90-day delinquency, compared to 11 to 16 percent of loans with assistance from other sources and 8 to 12 percent of loans without assistance. The claim rates ranged from 6 to 18 percent for loans with seller-funded assistance, from 5 to 10 percent for loans with other sources of assistance, and from 3 to 6 percent for loans without assistance. Figure 7: Delinquency and Claim Rates, by Maximum Age of Loan and Source of Down Payment Funds: [See PDF for image] Note: Analysis based on data from two samples of loans drawn for a file review study funded by HUD and conducted by the Concentrance Consulting Group. The sampled loans were purchase money loans endorsed in 2000, 2001, and 2002 with LTV ratios greater than 95 percent. The national sample consisted of just over 5,000 loans, and the MSA sample consisted of 1,000 loans for each of the three MSAs: Atlanta, Indianapolis, and Salt Lake City. [End of figure] Even when other variables relevant to loan performance were held constant, loans with down payment assistance and, in particular, seller-funded assistance, had higher delinquency and claim rates. To account for other factors correlated with the receipt of seller-funded assistance--for example, the concentration of these loans in slowly appreciating areas--we used regression analyses that controlled for these and other potentially relevant variables (see app. III for the details of our analyses).[Footnote 37] As figure 8 illustrates, seller-funded assistance was found to have a substantial impact on claim and delinquency rates in both the national and MSA samples. Specifically, the results from the national sample indicated that assistance from a seller-funded nonprofit raised the probability that the loan had gone to claim by 76 percent relative to similar loans with no assistance. Differences in the MSA sample were even larger; the probability that loans with seller-funded nonprofit assistance would go to claim was 166 percent higher than it was for comparable loans without assistance.
Similarly, results from the national sample showed that down payment assistance from a seller-funded nonprofit raised the probability of delinquency by 93 percent compared with the probability of delinquency in comparable loans without assistance. For the MSA sample, this figure was 110 percent.[Footnote 38] Loans with down payment assistance from nonseller-funded sources did not perform as well as loans without assistance when other variables relevant to loan performance were held constant. We found that this type of down payment assistance had a substantial impact on the probability of claim and delinquency in both the national and MSA samples (see fig. 8). In the national sample, it raised the probability of claim by 49 percent and the probability of delinquency by 21 percent relative to similar loans with no down payment assistance.[Footnote 39] In the MSA sample, it raised the probability of claim by 45 percent and the probability of delinquency by 36 percent compared with loans without assistance.[Footnote 40] Figure 8: Effect of Down Payment Assistance on the Probability of Delinquency and Claim, Controlling for Selected Variables: [See PDF for image] Note: Loans without down payment assistance are set at 100 percent. The results show the effect of a change in the variable on the odds ratio--that is, the probability of a claim (or delinquency) divided by the probability of not experiencing a claim (or delinquency). However, the probability of experiencing a claim or delinquency in any given quarter is fairly small, so the change in the odds ratio is very close to the change in the probability. The analysis is based on data from two samples of loans drawn for a file review study funded by HUD and conducted by the Concentrance Consulting Group. The loans in the samples were endorsed in 2000, 2001, and 2002 and had LTV ratios greater than 95 percent. The national sample consisted of just over 5,000 loans, and the MSA sample consisted of 1,000 purchase money loans for each of the three MSAs: Atlanta, Indianapolis, and Salt Lake City. The loan performance data (current as of June 2005) are from HUD's Single-Family Data Warehouse. For a detailed description of the regression model and other data sources, see appendix III. [End of figure] The higher probability of claims in the MSA sample, as compared with the national sample, may be attributable to lower house price appreciation rates in the three MSAs than at the national level. Research suggests that delinquent borrowers who have accumulated equity in their properties are more likely than other borrowers to prepay in order to avoid claims.[Footnote 41] During the 5-year period from the first quarter of 2000 to the last quarter of 2004, the median house price increase in the national sample was about 39 percent. During the same period, the Salt Lake City, Indianapolis, and Atlanta MSAs realized increases in the median price of existing homes of 11 percent, 18 percent, and 32 percent, respectively. On average, then, borrowers in the national sample could be expected to have accumulated more equity than those in the MSAs and to be more likely to sell their homes and prepay their mortgages if they faced delinquency.
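The odds-ratio language in the note to figure 8 can be made concrete with a short sketch. The 76 percent figure below comes from the text; the baseline quarterly claim probability is a hypothetical number chosen only to show why, when the underlying probability is small, a given percentage change in the odds is nearly the same as the percentage change in the probability itself.

    # Illustrative only: the 0.5 percent baseline quarterly claim probability
    # is an assumption, not a figure from the GAO samples.

    def odds(p):
        """Odds of an event: probability it occurs divided by probability it does not."""
        return p / (1.0 - p)

    def probability_from_odds(o):
        return o / (1.0 + o)

    baseline_probability = 0.005                     # assumed quarterly claim probability
    higher_odds = 1.76 * odds(baseline_probability)  # odds 76 percent higher (national sample)
    higher_probability = probability_from_odds(higher_odds)

    print(f"Baseline probability: {baseline_probability:.4f}")
    print(f"Probability implied by 76 percent higher odds: {higher_probability:.4f}")
    print(f"Relative increase in probability: {higher_probability / baseline_probability - 1:.1%}")
    # Prints a relative increase of roughly 75 percent -- close to the 76 percent
    # change in the odds, which is why the note treats the two as interchangeable.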
The effect of the increased LTV ratio associated with loans with seller-funded down payment assistance may be less important in the presence of substantial accumulated equity.[Footnote 42] The effect of seller-funded down payment assistance on loan performance is substantial; achieving an equivalent decline in loan performance would require substantial changes in other factors. For example, the presence of seller-funded down payment assistance increased claims by 76 percent. Adjusting other factors to increase claims by 76 percent would require lowering a borrower's credit score by about 60 points, for example, or raising the payment-to-income ratio by about 25 percentage points. Both of these adjustments to a loan are significant. We also examined differences in loss severities between loans with seller-funded assistance and unassisted loans. Although our analysis was tentative because many claims had not yet completed the property disposition process, it suggested that the ultimate losses from loans with seller-funded assistance were greater than those from other loans. We could determine the net profit or loss for only 184 loans from the national sample and for only 205 loans from the MSA sample. We used a regression to predict the loss rate, or the dollar amount of loss (or profit, in a few cases) divided by the original mortgage balance.[Footnote 43] The loss rate for loans with seller-funded assistance was about 5 percentage points higher in both samples. The differences were not statistically significant in the national sample but were in the MSA sample. Our analysis of loss severities indicated no significant differences in loss rates between unassisted loans and loans with nonseller-funded assistance in the national sample. In the MSA sample, loans with nonseller-funded assistance did have statistically significantly higher loss rates. The weaker performance of loans with seller-funded down payment assistance may be explained, in part, by the higher sales prices of homes when buyers receive such assistance, resulting in higher effective LTV ratios. Prior GAO analysis has found that, controlling for other factors, high LTV ratios lead to increased claims.[Footnote 44] Our analysis of AVM data in the national sample of loans endorsed in 2000, 2001, and 2002 indicated that the sales prices of homes with seller-funded down payment assistance were 3 percent higher than the sales prices of comparable homes without it, leading to higher effective LTV ratios for these loans. GAO analysis suggests that this 3 percent difference in sales price translates into a 16 percent increase in claims--an increase that may explain a large part of the difference in claim rates between seller-funded and other forms of assistance. Claim rates for loans with seller-funded assistance in the 2000-2002 national sample were about 19 percent to 39 percent higher than claim rates for loans with other forms of assistance.[Footnote 45] Stricter Standards and Additional Controls Could Help FHA Manage the Risks Posed by Loans with Down Payment Assistance: FHA has implemented some standards and internal controls to manage the risks associated with loans with down payment assistance, but stricter standards and additional controls could help the agency better manage the risks these loans pose. First, FHA applies the same standards to loans with down payment assistance that it applies to all loans but is less restrictive in the sources of down payment assistance it permits than other mortgage industry participants.
Government internal control guidelines advise agencies to consider and recognize the value of industry practices that may be applicable to agency operations.[Footnote 46] Private mortgage insurers, Fannie Mae, and Freddie Mac offer practices that could be instructive in this instance. Mortgage industry participants told us that they viewed down payment assistance from seller-funded nonprofits as an inducement and, therefore, either restricted or prohibited its use. FHA does not share this view and has not held this assistance to the same limits it places on funds from sellers. Second, FHA has assessed, on an ad hoc basis, the performance of loans with down payment assistance. In contrast, government internal control guidelines recommend that agencies routinely identify risks that could impede efficient and effective management and develop approaches to analyze and manage risk. Finally, although FHA has implemented targeted monitoring of appraisers that do a high volume of loans with down payment assistance, the agency has not implemented targeted monitoring of lenders that do a high volume of loans with down payment assistance. FHA Standards Permit Borrowers to Obtain Down Payment Assistance from Seller-Funded Sources: Government internal control guidelines do not prescribe standards specifically for loans with down payment assistance but do advise agencies to consider and recognize the value of industry practices that may be applicable to agency operations. FHA practices related to down payment assistance are in many ways comparable to industry practices. The agency applies the same standards to loans with down payment assistance as it does to other FHA-insured loans--for example, placing a 6 percent cap on the amount of funds sellers can contribute to loan transactions and requiring borrowers to meet the same underwriting requirements as other borrowers. FHA does not consider the presence, source, or amount of down payment assistance as a factor in its underwriting guidelines; more specifically, FHA does not include down payment assistance as a variable in its TOTAL Mortgage Scorecard.[Footnote 47] Similarly, mortgage industry participants reported not imposing additional underwriting criteria for loans with down payment assistance. FHA's standards regarding sources of down payment assistance differ from those of key mortgage industry participants in one important respect--while FHA permits down payment assistance from seller-funded sources, mortgage industry participants restrict or prohibit such assistance. FHA, like other mortgage industry participants, does not permit homebuyers to obtain down payment assistance directly from property sellers but does permit them to get it from nonprofits that receive contributions from property sellers. Further, FHA does not include down payment assistance from seller-funded nonprofits in the 6 percent limit that it has imposed on seller contributions. In contrast, some mortgage industry participants we met with told us that they viewed down payment assistance from seller-funded nonprofits as an inducement and, therefore, either restricted or prohibited its use. Although some mortgage industry participants do permit homebuyers to use seller-funded nonprofits, these entities typically impose restrictions on the amount of assistance a homebuyer may receive and how the funds can be used. 
For example, Fannie Mae and Freddie Mac permit homebuyers to obtain funds provided by seller-funded nonprofits but only up to 3 percent of the sales price and only for closing costs. FHA standards for other sources of down payment assistance are similar to those of mortgage industry participants we spoke with. Specifically, neither limits the amount of assistance a homebuyer may receive from sources such as relatives, and this money can be used for the down payment as well as the closing costs. Also, as mentioned earlier, FHA applies the same underwriting standards to loans with down payment assistance as it applies to loans without such assistance. Mortgage industry participants we spoke with cited three reasons for restricting down payment assistance from seller-funded nonprofits. First, some mortgage industry participants noted that seller-funded nonprofits are not disinterested third parties because of the contingency requiring contributions from sellers after the loan closes. Second, some mortgage industry participants noted that homebuyers receiving down payment assistance from seller-funded nonprofits often finance larger loan amounts than they would otherwise because sellers increase the sales price to compensate for the contribution. Third, some mortgage industry participants noted that, in effect, seller-funded nonprofits can be used as intermediaries to enable sellers to contribute funds in excess of HUD's 6 percent limit on seller contributions. Additionally, another HUD program has more restrictive standards on permitted sources of down payment assistance. The American Dream Downpayment Initiative, a program administered by HUD's Office of Community Planning and Development that provides grants for down payment assistance programs, does not permit seller-funded nonprofits to administer its funds.[Footnote 48] And, in 1999, HUD proposed a rule that would prohibit borrowers from obtaining down payment assistance from organizations that received funds from sellers. HUD stated that this rule was "intended to prevent a seller from providing funds to an organization as a quid pro quo for that organization's down payment assistance for purchase of one or more homes from the seller."[Footnote 49] HUD later withdrew this rule after receiving 1,871 public comments on the proposed rule; all but 21 opposed it. HUD officials noted that HUD permits seller-funded down payment assistance because the assistance does not compromise FHA guidance prohibiting homebuyers from using funds from property sellers and other interested parties toward a down payment. FHA considers seller contributions to the homebuyer in excess of 6 percent of the sales price, as well as direct seller down payment assistance, to be inducements to purchase that must be factored into the purchase transaction.[Footnote 50] These funds result in a dollar-for-dollar reduction to the sales price before the LTV ratio is calculated. Further, FHA requires that any down payment assistance be essentially a gift that is not subject to repayment. HUD officials stated that seller-funded nonprofits are not sellers and do not require homebuyers to pay back the funds. In addition, these officials noted that the seller and buyer--in a transaction involving seller-funded down payment assistance--agree on the sales price and pointed out that the contribution the nonprofit receives from the seller after the closing supports future homebuyers.
For these reasons, we were told, HUD did not recognize a direct relationship between the property seller and the homebuyer stemming from the activities of the seller-funded nonprofit organization. Although FHA applies many of the same standards to loans with down payment assistance as it applies to other loans, it does impose additional documentation requirements on loans with down payment assistance. Lenders must obtain a "gift letter" that includes the donor's name and contact information; an explanation of the donor's relationship to the borrower; the dollar amount of the assistance; and a statement that specifies that no repayment is required. They must ensure that the down payment assistance meets FHA's requirements, document the Taxpayer Identification Numbers for all nonprofits, and provide evidence of the transfer of funds from the donor to the borrower.[Footnote 51] As noted earlier, lenders must also tell appraisers when a transaction involves down payment assistance and its source, and appraisers must include this information in their reports. However, FHA guidance does not require lenders to inform appraisers if the source of the assistance is a seller-funded nonprofit. FHA Does Not Conduct Routine Loan Performance Analyses on Loans with Down Payment Assistance: Government risk assessment guidelines recommend that agencies routinely identify risks that could impede efficient and effective management and develop approaches, either qualitative or quantitative, to analyze and manage these risks. Additionally, some mortgage industry participants reported that they conducted some quantitative loan performance analyses on loans with down payment assistance in order to understand the risks associated with these loans. FHA has conducted some risk analysis on its loans with down payment assistance. For example, FHA officials recently told us that they had been analyzing the performance of loans with down payment assistance on an ad hoc basis. FHA's Office of Evaluation has been conducting analyses since February 2000, comparing the performance of loans with down payment assistance with that of loans made without assistance. For example, from January through July 2005, FHA carried out four ad hoc loan performance analyses of all FHA-insured loans. FHA's analyses indicate that loans with down payment assistance do not perform as well as loans without down payment assistance. However, according to FHA officials, FHA has not undertaken ongoing, periodic loan performance analyses that consider the presence and source of down payment assistance. HUD has also initiated two research efforts to evaluate down payment assistance as it relates to FHA-insured loans. The first study evaluated the accuracy of loan-level data maintained in HUD's information systems and collected information on sources and amounts of gift assistance.[Footnote 52] The study included a comparison of data found in key documents FHA maintained with the information lenders had transmitted via the Computerized Homes Underwriting Management System (CHUMS).[Footnote 53] This research found that, for loans with down payment assistance, the gift amounts and sources in HUD's information system were frequently missing or different from the information in the documents. The study also found that needed Taxpayer Identification Numbers were missing for 74 percent of loans reviewed that involved assistance from nonprofit organizations. As a result of the study, HUD clarified the data requirements for loans with down payment assistance.
For example, in January 2005 HUD reiterated its requirement for lenders to provide information on the presence, amount, and source of down payment assistance. The second study evaluated the influence of assistance from seller-funded nonprofits on the origination of FHA-insured loans through interviews with various mortgage industry participants.[Footnote 54] This study found that seller-funded down payment assistance providers serve primarily as conduits for the transfer of down payment funds between buyers and sellers in order to meet HUD's gift eligibility requirement. Additionally, the study found that many appraisers, mortgage lenders, underwriters, seller-funded down payment assistance providers, and real estate agents reported that homes sold with seller-funded down payment assistance had inflated appraised values and property sales prices. The second study resulted in a report issued in March 2005 and included several recommendations to FHA. FHA is currently assessing whether HUD should approach loans with down payment assistance differently (e.g., apply an enhanced risk-based premium structure on loans with down payment assistance from certain sources), but as of September 2005 FHA had not taken any action. FHA annually contracts for an actuarial review. A key component of this review is an assessment of loan performance. These analyses of loan performance--which also help in estimating program subsidy costs--consider a number of factors, including the loan's LTV ratio and mortgage age. However, the presence and source of down payment assistance were not included in these loan performance analyses prior to the actuarial review for 2005.[Footnote 55] This actuarial review indicates that down payment assistance has a significant impact on the performance of these loans. Specifically, when the actuarial review incorporated down payment assistance into the econometric model, the estimated value of FHA's insurance fund for 2005 decreased by $1.8 billion. The actuarial review also stated that down payment assistance "has had a major economic impact on the fund" and that these loans should be closely monitored. However, the analysis in the actuarial review may understate the magnitude of the effect of down payment assistance on claim rates because the gift letter source variable used in the actuarial review understates the number of loans with gift assistance for loans endorsed between 2000 and 2002, according to HUD's contractors. Additionally, the impact of down payment assistance may be greater than the actuarial review found. Specifically, the actuarial review's estimates of loan performance are based on the historical experience of loans made with down payment assistance, most of which were originated between 2000 and 2005--a period marked by rapid house price appreciation. However, because down payment assistance has a greater impact in areas of low price appreciation, should the rate of house price appreciation decline in the future, the effects of down payment assistance may be greater. Further, the actuarial review does not examine the impact that the presence and source of down payment assistance may have on claim severity.
As noted earlier, FHA recently took action to clarify data reporting requirements regarding the source and amount of down payment assistance, but these FHA reporting requirements do not differentiate seller-funded nonprofits from nonseller-funded nonprofits.[Footnote 56] FHA's Monitoring of Down Payment Assistance Lending Is Limited: Government internal control guidelines advise agencies to monitor external entities that perform critical functions, in part to ensure that these entities are accountable for their operations. FHA relies on numerous outside entities--including lenders and appraisers--to perform critical functions, including functions specific to loans with down payment assistance. As we have seen, lenders must ensure that assistance provided by nonprofit organizations meets FHA requirements and that the nonprofits have current Taxpayer Identification Numbers. Furthermore, FHA and its lenders rely upon appraisers to provide an independent and accurate valuation of properties, including confirmation of sales and financing concessions such as down payment assistance and seller contributions. Two recent GAO reviews found that FHA performs some oversight of both lenders and appraisers, but that opportunities exist for improved monitoring.[Footnote 57] As we have seen, additional opportunities still exist for improving FHA's monitoring of loans with down payment assistance. FHA carries out risk-based monitoring of lenders and appraisers that are involved in the process of endorsing FHA-insured loans, using loan performance data (e.g., higher early defaults and claims), complaints of irregularities or fraudulent practices, the results of technical reviews of individual loans, and other factors to target lenders for review. However, FHA has not implemented targeted monitoring of lenders that do a high volume of loans with down payment assistance. HUD monitors appraisers that it has determined pose risks to FHA's insurance fund, targeting individual appraisers based on several risk factors, such as involvement with loans that have early defaults and with loans insured under HUD programs known to be at a higher risk of fraud and abuse. FHA has also implemented targeted monitoring of appraisers that do a high volume of loans with down payment assistance. When an appraiser is targeted, FHA first conducts a desk review and then, if necessary, a field review. Homebuyers receiving down payment assistance from seller-funded nonprofits pay higher purchase prices, reducing their initial equity in the home. In effect, these homebuyers are financing the down payment assistance and paying for it over time. Moreover, loans with down payment assistance--particularly from seller-funded sources--perform significantly worse than loans without such assistance. These loans have higher claim and delinquency rates--meaning that some households receiving assistance ultimately lose their homes. However, down payment assistance has helped some households become homeowners, or become homeowners sooner than they might have without such assistance. Down payment assistance can pose additional risks to the loans FHA insures, and FHA has taken steps toward managing these risks by conducting ad hoc loan performance analyses and studies. More recently, HUD has supported legislation for a no down payment product that would help homebuyers who lack down payment funds, obviating the need for down payment assistance.
This legislation includes tools for mitigating the risks of such loans with higher premiums and homebuyer counseling. We previously recommended that Congress and FHA consider a number of means, such as enhanced monitoring, to mitigate the risks that a no down payment product and any other new single-family insurance product may pose. Such techniques would help protect the Fund while allowing FHA time to learn more about the performance of such loans.[Footnote 58] Likewise, such tools may be useful in mitigating the risks associated with loans with down payment assistance. Although FHA has taken some steps to understand the risks associated with loans with down payment assistance, it could take additional steps to understand and manage the risks that loans with down payment assistance represent, while still meeting its mission of expanding homeownership opportunities. Furthermore, because the proportion of loans FHA insures that involve some form of down payment assistance has increased dramatically in the last 5 years, and because the risks associated with down payment assistance are substantial, the need for FHA to better manage these risks has become increasingly important. For example, FHA requires lenders to collect and report information on the presence and source of down payment assistance, but it does not require them to collect and report whether the entity providing the assistance is funded by property sellers. Without this information, FHA cannot, on a regular basis, monitor and evaluate the prevalence of this form of assistance or its impact on loan performance. More routine and systematic analysis of the impact that all forms of down payment assistance have on loan performance would also provide FHA with an ongoing assessment of the effect that the increasing use of down payment assistance is having on loan performance. Though we found that the presence and source of down payment assistance is an important predictor of loan performance, FHA does not now include it as a factor in its TOTAL Mortgage Scorecard automated underwriting tool. We recommended in our September 2005 report that FHA assess and report the impact that including the presence of down payment assistance would have on the forecasting ability of the loan performance models used in FHA's actuarial reviews of the Fund.[Footnote 59] Consistent with our recommendation, in October 2005, FHA, for the first time, included down payment assistance as a factor in its annual actuarial review estimates of loan performance. However, because data on the use and source of down payment assistance is still limited, the review may underestimate the impact that down payment assistance has on claims. Further the review does not consider the impact that down payment assistance may have on the severity of claims. Finally, although FHA holds lenders and appraisers accountable for the quality of appraisals, appraisers may not have complete information affecting the sales price of the home. Specifically, FHA requires lenders to inform appraisers of all contract terms, including seller concessions, which may include down payment assistance. However, FHA does not require lenders to inform appraisers when down payment assistance is provided by a seller-funded nonprofit. Further, as we have seen, such assistance creates an indirect funding stream from the seller to the buyer and, thus, becomes, in effect, a seller inducement. 
However, because FHA does not consider down payment assistance from a seller-funded nonprofit an inducement to purchase, it does not require that lenders reduce the sales price before applying the appropriate LTV ratio.

Recommendations for Executive Action:

While balancing the goals of providing homeownership opportunities and managing risk, FHA should consider implementing additional controls to manage the risks associated with loans that involve "gifts" of down payment assistance, especially from seller-funded nonprofit organizations, as these loans pose additional risks to the FHA mortgage insurance fund. Specifically, given the increased risks posed by loans with down payment assistance from any source, we recommend that the Secretary of HUD direct the Assistant Secretary for Housing (Federal Housing Commissioner) to consider the following four actions to better understand and manage these risks:

* To provide FHA with data that would permit the agency to identify whether down payment assistance is from a seller-funded down payment assistance provider, modify FHA's "gift letter source" categories to include "nonprofit seller-funded" and "nonprofit nonseller-funded" and require lenders to accurately identify and report this information when submitting loan information to FHA;

* To more fully consider the risks posed by down payment assistance when underwriting loans, include the presence and source of down payment assistance as a loan variable in FHA's TOTAL Mortgage Scorecard during the underwriting process;

* To ensure that FHA has an ongoing understanding of the impact that down payment assistance has on loan performance, implement routine and targeted performance monitoring of loans with down payment assistance, including analyses that consider the source of assistance; and

* To more accurately reflect the impact that down payment assistance has on loan performance, continue to include the presence and source of down payment assistance in future loan performance models. To enhance the actuarial reviews' estimates of claims, consider including, in the annual review of actuarial soundness, the impact that the presence and source of down payment assistance has on claim severity.

We further recommend that the Secretary of HUD direct the Assistant Secretary for Housing (Federal Housing Commissioner) to take the following two actions to balance the goals of expanding homeownership and sustaining the actuarial soundness of the Fund by managing the risks associated with loans that involve "gifts" of down payment assistance from nonprofit organizations that receive funding from sellers:

* To ensure that appraisers have the information necessary to establish the market value of the properties, require lenders to inform appraisers about the presence of down payment assistance from a seller-funded source; and

* Because down payment assistance provided by seller-funded entities is, in effect, a seller inducement, revise FHA standards to treat assistance from seller-funded nonprofits as a gift from the seller and, therefore, subject to the prohibition against using seller contributions to meet the 3 percent borrower contribution requirement.

Agency Comments and Our Evaluation:

We provided a draft of this report to HUD for its review and comment. We received written comments from HUD's Assistant Secretary for Housing (Federal Housing Commissioner), which are reprinted in appendix IV.
HUD generally agreed with the report's findings, noting that the analysis of loan performance is consistent with its own findings regarding the performance of loans with down payment assistance and how seller-funded down payment assistance programs operate. HUD also agreed to take steps that will improve its oversight of down payment assistance lending. Specifically, HUD will modify its information systems to document assistance from seller-funded nonprofits, and HUD will consider incorporating down payment assistance into FHA's TOTAL Mortgage Scorecard and requiring lenders to inform appraisers when assistance is provided by seller-funded nonprofits. The department commented on certain aspects of selected recommendations. First, although HUD agreed with the report's recommendation to perform routine and targeted loan performance analyses of loans with down payment assistance, it maintained that FHA already performs monitoring of these loans. We recognize that FHA has conducted ad hoc risk analyses of its loans with down payment assistance. Additionally, the actuarial review of FHA's insurance Fund for 2005 includes, for the first time, down payment assistance as a variable in its model of loan performance. Consistent with our findings, the 2005 actuarial review found the presence of down payment assistance to be a significant factor in explaining loan performance. Further, the 2005 actuarial review states that loans with down payment assistance should be closely monitored. We agree. Because the proportion of loans FHA insures that involve some form of down payment assistance is growing dramatically, and because the risks associated with down payment assistance are substantial, we continue to recommend that FHA more routinely monitor the performance of loans with down payment assistance. Second, HUD disagreed with our recommendation that it should revise its standards to prohibit the use of down payment assistance from seller-funded nonprofit organizations to meet the 3 percent borrower contribution requirement. Our recommendation was based on our conclusion that the down payment assistance provided by seller-funded nonprofits was, in effect, a seller inducement to purchase. As the basis of its disagreement with our recommendation, FHA cites a 1998 internal HUD Office of the General Counsel memorandum, which is acknowledged in our report. The 1998 HUD memorandum reasoned that as long as seller-funded down payment assistance is provided to the buyer before closing, and the seller's contribution to the nonprofit entity occurs after closing, the buyer has not received funds that can be directly traced to the seller's contribution. We realize that FHA relies on HUD's 1998 memorandum to authorize sellers to do indirectly what they cannot do directly, namely, provide gifts of down payment assistance to buyers. We continue to believe that HUD should recognize that because gifts of down payment assistance from seller-funded nonprofits are ultimately funded by the sellers, they are like gifts of down payment assistance made directly by sellers. We, therefore, continue to believe that FHA should revise its standards to treat assistance from a seller-funded entity as a seller inducement to purchase. In addition, as noted in our report, HUD agreed with our conclusion and recommendation after it issued its 1998 memorandum.
In 1999, HUD proposed a rule that would have prohibited use of gifts from nonprofit organizations for buyers' down payment assistance, if the organizations received funds for the gifts--directly or indirectly--from sellers. Although HUD later withdrew the rule without substantive explanation, we continue to believe HUD's rationale in proposing the rule was sound. Third, in its comment letter, HUD stated that FHA has incorporated the source of down payment assistance in the 2005 actuarial review of the Mutual Mortgage Insurance Fund, which was published during the course of obtaining HUD's comments on a draft of this report. In response, we have added information describing the analyses contained in the 2005 actuarial review and modified our recommendation to address a weakness in the actuarial review's analysis of down payment assistance and to emphasize the need to continue considering the presence and source of down payment assistance in future loan performance models. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the appropriate Congressional Committees and the Secretary of Housing and Urban Development. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at [Hyperlink, http://www.gao.gov]. If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [Hyperlink, shearw@gao.gov]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix V.

Sincerely yours,

Signed by:

William B. Shear:
Director, Financial Markets and Community Investment:

[End of section]

Appendix I: Objectives, Scope, and Methodology:

To examine trends in the use of down payment assistance with loans insured by the Federal Housing Administration (FHA), we obtained loan data from the U.S. Department of Housing and Urban Development (HUD) on single-family purchase money mortgage loans--that is, loans used for the purchase of a home rather than to refinance an existing mortgage. First, to measure the use of down payment assistance from fiscal year 2000 to 2002, we used two samples of loans originally drawn for a file review study funded by HUD and conducted by the Concentrance Consulting Group (Concentrance).[Footnote 60] That study found that FHA's Single-Family Data Warehouse was not a reliable source for identifying loans with down payment assistance. A review of paper files indicated that down payment assistance was frequently not recorded in the database and that the source of the assistance (government, nonprofit, relative, etc.) was often miscoded. Therefore, we limited our review to the 8,294 files reviewed by Concentrance for which the presence, source, and amount of assistance had been ascertained from a review of the paper files.
The national sample consisted of just over 5,000 loans from a simple random sample of FHA purchase money loans endorsed in fiscal years 2000, 2001, and 2002, while the Metropolitan Statistical Area (MSA) sample consisted of just over 1,000 purchase money loans from each of the three MSAs (Atlanta, Indianapolis, and Salt Lake City) endorsed over the same time period.[Footnote 61] Only loans with loan-to-value (LTV) ratios greater than 95 percent were sampled. The sample included loans insured by FHA's 203(b) program, its main single-family program, and its 234(c) condominium program. Small specialized programs, such as 203(k) rehabilitation and 221(d) subsidized mortgages, were not included in the sample. Second, to measure the use of down payment assistance for fiscal years 2003, 2004, and 2005, we obtained from HUD loan-level data for single-family purchase money loans with an LTV ratio greater than 95 percent. We utilized HUD's loan-level data for these years because, in January 2003, FHA implemented changes to its data collection requirements for loans with down payment assistance, and we believed that these changes should lead to improved data quality. We analyzed the data, by source of assistance, for trends in loan volume and in the proportion of loans with down payment assistance. For fiscal years 2000, 2001, and 2002, we generalized the percentage breakouts from the representative sample to the universe of FHA-insured single-family purchase money loans endorsed in these years. We also analyzed state-by-state variations in the proportion of loans with nonprofit down payment assistance; loans endorsed from May 2004 through April 2005 were included in this analysis. We met with appropriate FHA officials to discuss the quality of the data. Based on these discussions, we determined that the FHA data we used were sufficiently reliable for our analysis. To examine the structure of the purchase transaction for loans with and without down payment assistance, we reviewed HUD policy guidebooks and reports on down payment assistance. We also interviewed HUD officials; staff from Fannie Mae and Freddie Mac; staff from selected conventional mortgage providers, private mortgage insurers, mortgage industry groups representing realtors and appraisers, state and local government agencies, and nonprofit down payment assistance providers; and individual real estate agents and appraisers. During the interviews, we asked a structured set of questions designed for the particular type of industry participant. We also reviewed the Web sites of selected mortgage industry participants. To examine how down payment assistance affects the prices of houses purchased with FHA-insured loans, we examined the sales prices of homes by the use and source of down payment assistance, using property value estimates derived from an Automated Valuation Model (AVM).[Footnote 62] We contracted with First American Real Estate Solutions to obtain property value estimates derived from their AVMs on two samples of FHA-insured single-family purchase money loans. One sample included the data set of 8,294 loans endorsed in fiscal years 2000, 2001, and 2002--the sample developed by Concentrance.
The second sample included a stratified random sample of 2,000 FHA purchase money loans with first amortization dates in April 2005, extracted from FHA's Single-Family Data Warehouse.[Footnote 63] We used the AVM data as benchmarks to determine if a relationship existed between property valuation and the presence and source of down payment assistance by examining the ratio of the estimated AVM value to the appraised value and the sales price of the home. We met with staff of First American Real Estate Solutions to discuss the data and models in their AVM, including the steps the firm takes to verify the accuracy and maintain the integrity of the data. Based on these discussions, we determined that the AVM data we used were sufficiently reliable for our analysis. For a detailed description of our data sources and analysis, see appendix II. To evaluate the influence of down payment assistance on the performance of FHA-insured home mortgage loans, we conducted multiple loan performance analyses on HUD data for the sample of loans endorsed in fiscal years 2000, 2001, and 2002. We used information on the source of down payment funds--data developed by Concentrance; delinquency, claim, and loss data; and other factors that research had indicated can affect loan performance. The loan performance data we used were current through June 30, 2005. First, we analyzed loan performance by source of down payment assistance, controlling for the maximum age of the loan. Second, we compared the performance of the loans by the presence and source of down payment assistance while holding other variables constant. Third, we examined the size of the effect of down payment assistance on loan performance relative to the size of the effect of other variables that influence loan performance, including LTV ratio and credit score. Fourth, using AVM data obtained from First American Real Estate Solutions for these loans, we also assessed the extent to which higher sales prices explained any difference in the performance of FHA-insured loans with down payment assistance. For a detailed description of our data sources, performance measures, and risk models, see appendix III. To examine the extent to which FHA standards and controls for loans with down payment assistance are consistent with government internal control guidelines and, as appropriate, mortgage industry practices, we first assessed whether key FHA controls were consistent with the guidelines in GAO's August 2001 Internal Control Management and Evaluation Tool.[Footnote 64] These guidelines include (1) ensuring that an agency's operations are consistent with any applicable industry or business norms; (2) using qualitative and quantitative methods to identify risk and determine relative risk rankings on a scheduled and periodic basis; (3) ensuring that adequate mechanisms exist to identify risks to the agency arising from its reliance on external parties to perform critical agency operations; and (4) ensuring that statutory requirements--as well as agency requirements, policies, and regulations--are applied properly. Second, we compared FHA's standards and controls to mortgage industry practices, as appropriate. We interviewed officials from HUD, Fannie Mae, Freddie Mac, conventional mortgage providers, private mortgage insurers, state and local government agencies, and nonprofit down payment assistance providers. 
These entities provided us with information about the controls they reported using to manage the risks associated with affordable loan products that permit down payment assistance. We did not verify that these entities, in fact, used these controls. We also reviewed descriptions of mortgage products permitting down payment assistance that are supported by mortgage industry participants and compared the standards used by these entities.

[End of section]

Appendix II: Automated Valuation Model Analysis:

This appendix describes our analysis of differences in the sales prices and appraised values of homes purchased with and without down payment assistance and insured by the Federal Housing Administration (FHA). The U.S. Department of Housing and Urban Development's (HUD) Office of Inspector General (OIG) and others have indicated that appraisals and sales prices may be higher for homes with seller-funded assistance, relative to comparable homes without such assistance. Higher prices for comparable collateral can lead to higher loan amounts when supported by higher appraisals, which may cause higher delinquency, claim, and loss rates for loans with seller-funded assistance. To examine this possibility, we contracted with First American Real Estate Solutions (First American) to provide estimated house values from their Automated Valuation Models (AVM). AVMs from First American and other vendors are widely used by lenders, mortgage insurers, HUD, and government-sponsored enterprises for quality control and other purposes. First American obtains data from local governments, large lenders, and other sources on house sales prices and property characteristics across most of the United States. These data are used in statistical analyses that model the sales prices of properties as a function of their characteristics and of appreciation trends in the surrounding neighborhoods. The models estimate a property's value on a given date, along with a likely range for that value and a confidence score indicating the probability that the property's true value is within 10 percent of the estimated value. First American used four models to value the transactions we submitted, with about 95 percent of the cases relying on one of two models. Both of these are hybrid models, in that they combine hedonic regression with repeat sales methods to produce a more precise estimate of a property's value.[Footnote 65] Hedonic regression places values on the characteristics of a property, such as square footage, number of bathrooms, and presence of a garage, to use when examining comparable properties. The repeat sales method uses multiple sales of the same properties over time to estimate price growth rates and then applies these growth rates to a property's previous sales prices to estimate its current value. In about 5 percent of the cases, when these two models could not provide a value estimate, two other models that rely on neural net methods to produce value estimates were used.[Footnote 66] GAO provided First American with addresses for the 8,294 loans in the Concentrance Consulting Group (Concentrance) sample of loans endorsed in fiscal years 2000, 2001, and 2002.[Footnote 67] First American was asked to provide an estimate of each home's value with an "as-of" date 2 weeks before the loan's actual settlement date.
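To make the valuation approach described above more concrete, the sketch below shows, in highly simplified form, how a hedonic regression and a repeat-sales adjustment can each produce a property value estimate. It is a minimal illustration with made-up property characteristics, prices, and growth rates; it is not First American's proprietary model, and every variable name here is hypothetical.

# Illustrative only: a toy hedonic regression and repeat-sales roll-forward,
# using invented numbers. This is not First American's AVM.
import numpy as np

# Hypothetical sample of sold properties: square feet, bathrooms, garage (0/1), sale price.
sqft   = np.array([1200, 1500, 1750, 2000, 2400, 1100, 1650, 1900], dtype=float)
baths  = np.array([1,    2,    2,    2.5,  3,    1,    2,    2.5 ], dtype=float)
garage = np.array([0,    1,    1,    1,    1,    0,    0,    1   ], dtype=float)
price  = np.array([110e3, 142e3, 160e3, 185e3, 225e3, 99e3, 150e3, 178e3])

# Hedonic step: regress log(price) on property characteristics.
X = np.column_stack([np.ones_like(sqft), sqft, baths, garage])
coef, *_ = np.linalg.lstsq(X, np.log(price), rcond=None)

# Value a subject property from its characteristics ...
subject = np.array([1.0, 1800.0, 2.0, 1.0])
hedonic_value = np.exp(subject @ coef)

# ... and, repeat-sales style, roll a prior sale forward with an area growth rate.
prior_sale_price, quarters_since_sale, quarterly_growth = 150e3, 6, 0.012  # hypothetical
repeat_sales_value = prior_sale_price * (1 + quarterly_growth) ** quarters_since_sale

print(round(hedonic_value), round(repeat_sales_value))

A hybrid model of the kind described above would combine these two kinds of estimates (and report a confidence score); the sketch keeps them separate only to show the two ideas side by side.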
GAO also provided addresses from a stratified random sample of 2,000 FHA purchase money loans extracted from FHA's Single-Family Data Warehouse with first amortization dates in April 2005. The stratification was based on the gift letter source code in FHA's system, so that 1,000 loans had gift assistance from a nonprofit, and 1,000 did not.[Footnote 68] As GAO did not have the settlement dates for this sample, we asked the contractor to value the homes as of March 1, 2005.[Footnote 69] We did not provide First American with any information pertaining to the source of the purchaser's down payment funds. First American might not be able to estimate the value of a particular property for a variety of reasons. For example, a data entry error or unusual address might prevent a match between FHA's database and the contractor's, or a local jurisdiction might not allow public access to property transaction records, reducing the number of properties in the contractor's database. In addition, there might be too few transactions in an area to allow a precise estimate of a property's value. "Hit rate" refers to the percentage of loans for which First American was able to make an estimate of property value. The hit rates were over 70 percent for the 2000, 2001, and 2002 national and Metropolitan Statistical Area (MSA) samples and 65 percent for the 2005 stratified national sample (tables 1-8). Hit rates were low for the Indianapolis component of the MSA sample, and confidence scores for Indianapolis were much lower than for the other two MSAs and for both national samples. Further, in Indianapolis, estimated values were much higher than sales prices for the loans that were valued. First American told us that Indiana is a nondisclosure state--that is, state law prohibits access to property transaction records by the general public.[Footnote 70] For this reason, the contractor used secondary sources to value properties in this state. Utah is also a nondisclosure state. Although hit rates and confidence scores were higher for Salt Lake City than for Indianapolis, sales price ratios were also high for this MSA. Therefore, we dropped the Indianapolis and Salt Lake City components from one set of MSA results, and we present one table with just the Atlanta results. While some nondisclosure states, such as Indiana and Kansas, had low confidence scores, others did not. For example, Texas is a nondisclosure state but had a high hit rate and high confidence scores. First American has an arrangement that allows them to access Multiple Listing Service data for several urban counties in Texas, providing a substitute for government records. For two cases that clearly represented outliers in the Concentrance data files, we replaced a value from the Concentrance review with a value from the Single-Family Data Warehouse.[Footnote 71] To examine the possibility that the presence of seller-funded nonprofit down payment assistance might increase appraisals and sales prices, we calculated the ratio of the AVM estimate of property value to the sales price and the appraised value from FHA's records. Both the numerator and denominator(s) were random variables. The AVM estimate was a model estimate with an associated error, and sales prices and appraisals reflected the buyer's or appraiser's estimate of a home's true value, which may have errors of varying magnitudes. The ratio of two normally distributed random variables has a Cauchy distribution (a distribution with fat tails and an undefined mean). 
Hence, tests of the difference in medians are generally more informative than tests of differences in means.[Footnote 72] We tested the difference in medians with a Kruskal-Wallis test and the difference in means with a T-test. We also tested the difference in medians or in means using only records with confidence scores of more than 50, rejecting transactions with low confidence; we report these results in tables 1-8 as the high confidence median and the high confidence mean. We also tested for differences in the trimmed means, rejecting the top and bottom 1 percent of the transactions; we report these results in tables 1-8 as the trimmed mean.[Footnote 73] Because of the statistical problems inherent in testing the mean of a ratio of random variables, we relied on the difference in medians as our primary indicator of a significant difference in valuations. The results of the analysis are presented in tables 1-4, which show the difference in the ratio of the AVM estimate to the appraised value and sales price for loans with and without nonprofit down payment assistance. The median ratio of the AVM estimate to the appraised value was slightly over 1, except for the MSA sample with Indianapolis included, for which the ratio was about 1.1.[Footnote 74] The median ratio of the AVM value to the sales price was generally 1 or 2 percentage points higher than the ratio of the AVM value to the appraised value, as appraised values were the same as sales prices for about half the transactions but were up to 4 percentage points higher than sales prices for most of the other half. In the national sample for 2000, 2001, and 2002, the price and appraisal ratios were both about 3 percentage points lower for loans with seller-funded assistance, indicating that sales prices and appraisals were typically about 3 percentage points higher for transactions with seller-funded assistance than they were for comparable homes without such assistance. The appraisal ratio was also 3 percentage points lower when the sample was restricted to estimated values with confidence scores above 50; in these cases, the sales price ratio was 4 percentage points lower, indicating that homes with seller-funded assistance sold for about 4 percentage points more than comparable homes without assistance. Differences in the MSA sample for these years were not as large, with a 1 percentage point difference in the median appraisal ratio and differences of about 2 percentage points for the price ratio and for the appraisal ratio when the sample was restricted to estimated values with high confidence scores. Kruskal-Wallis tests for a difference in medians were always significant at 1 percent in one-tailed tests.[Footnote 75] T-tests for differences in means were generally significant at 5 percent or better in one-tailed tests, except for the national sample appraisal ratio. T-tests were also conducted on differences in means with the top and bottom 1 percent of the ratio distribution excluded. These trimmed mean results were similar to the mean results but with higher significance levels and sometimes larger differences. For the March 2005 national sample, median differences in both sales price and appraisal ratios were about 2.3 percentage points and were statistically significant with p-values of less than 1 percent in one-tailed tests. These findings indicate that sales prices and appraisals were about 2.3 percentage points higher for transactions with nonprofit assistance than they were for comparable homes without nonprofit assistance.
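As a rough illustration of the ratio construction and the comparisons described above (not GAO's actual analysis code), the sketch below applies the Kruskal-Wallis test, the t-test, the high-confidence restriction, and the trimmed mean from SciPy to fabricated data. The group sizes, distributions, and the 3 percent price markup are all hypothetical.

# Illustrative only: AVM-to-price ratios compared across two made-up groups.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 500

def make_group(price_markup):
    # Hypothetical true values, sales prices, AVM estimates, and confidence scores.
    true_value = rng.lognormal(mean=11.8, sigma=0.3, size=n)
    sale_price = true_value * price_markup * rng.lognormal(0, 0.05, n)
    avm_value  = true_value * rng.lognormal(0, 0.10, n)
    confidence = rng.integers(30, 100, n)
    return avm_value / sale_price, confidence

ratio_no_dpa,   conf_no  = make_group(price_markup=1.00)
ratio_with_dpa, conf_yes = make_group(price_markup=1.03)  # prices ~3 percent higher

# Median comparison (Kruskal-Wallis) and mean comparison (t-test).
print(stats.kruskal(ratio_no_dpa, ratio_with_dpa))
print(stats.ttest_ind(ratio_no_dpa, ratio_with_dpa, equal_var=False))

# Restrict to high-confidence valuations, and trim 1 percent from each tail of the means.
hi_no, hi_yes = ratio_no_dpa[conf_no > 50], ratio_with_dpa[conf_yes > 50]
print(np.median(hi_no) - np.median(hi_yes))
print(stats.trim_mean(ratio_no_dpa, 0.01) - stats.trim_mean(ratio_with_dpa, 0.01))

In this stylized setup, the group whose prices carry the markup has the lower AVM-to-price ratio, which is the direction of the differences reported in tables 1-4 below.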
Mean differences were slightly smaller, ranging between 1 and 2 percentage points. The mean price difference was statistically significant at 5 percent in a one-tailed test, while appraisal ratio differences in means were not significant. Again, because of the statistical difficulties inherent in testing the ratio of two random variables, we relied primarily on tests of the difference in medians.

Table 1: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, National Sample, Fiscal Years 2000, 2001, and 2002:
Hit rate: 78 percent; median confidence score: 78.
Type: Appraisal value ratio;
Nonprofit assistance: No; Mean: 1.071; Median: 1.030; High confidence mean: 1.068; Trimmed mean: 1.063.
Nonprofit assistance: Yes; Mean: 1.055; Median: 1.002; High confidence mean: 1.041; Trimmed mean: 1.043.
Difference; Mean: 0.016; Median: 0.028; High confidence mean: 0.027; Trimmed mean: 0.020.
p-value; Mean: 0.084; Median: 0.001; High confidence mean: 0.008; Trimmed mean: 0.006.
Type: Sales price ratio;
Nonprofit assistance: No; Mean: 1.095; Median: 1.046; High confidence mean: 1.090; Trimmed mean: 1.084.
Nonprofit assistance: Yes; Mean: 1.067; Median: 1.012; High confidence mean: 1.053; Trimmed mean: 1.053.
Difference; Mean: 0.028; Median: 0.034; High confidence mean: 0.037; Trimmed mean: 0.031.
p-value; Mean: 0.011; Median: 0.001; High confidence mean: 0.001; Trimmed mean: 0.001.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; p-values statistically significant at 5% or better are bold.
[End of table]

Table 2: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, MSA Sample, Fiscal Years 2000, 2001, and 2002:
Hit rate: 85 percent; median confidence score: 78.
Type: Appraisal value ratio;
Nonprofit assistance: No; Mean: 1.106; Median: 1.080; High confidence mean: 1.086; Trimmed mean: 1.102.
Nonprofit assistance: Yes; Mean: 1.096; Median: 1.067; High confidence mean: 1.068; Trimmed mean: 1.093.
Difference; Mean: 0.010; Median: 0.013; High confidence mean: 0.018; Trimmed mean: 0.009.
p-value; Mean: 0.093; Median: 0.024; High confidence mean: 0.008; Trimmed mean: 0.058.
Type: Sales price ratio;
Nonprofit assistance: No; Mean: 1.126; Median: 1.095; High confidence mean: 1.105; Trimmed mean: 1.123.
Nonprofit assistance: Yes; Mean: 1.110; Median: 1.078; High confidence mean: 1.081; Trimmed mean: 1.107.
Difference; Mean: 0.016; Median: 0.017; High confidence mean: 0.024; Trimmed mean: 0.016.
p-value; Mean: 0.015; Median: 0.003; High confidence mean: 0.001; Trimmed mean: 0.006.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; p-values statistically significant at 5% or better are bold.
[End of table]

Table 3: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, Atlanta MSA Sample, Fiscal Years 2000, 2001, and 2002:
Hit rate: 95 percent; median confidence score: 85.
Type: Appraisal value ratio;
Nonprofit assistance: No; Mean: 1.037; Median: 1.013; High confidence mean: 1.035; Trimmed mean: 1.035.
Nonprofit assistance: Yes; Mean: 1.025; Median: 0.989; High confidence mean: 1.022; Trimmed mean: 1.012.
Difference; Mean: 0.012; Median: 0.024; High confidence mean: 0.013; Trimmed mean: 0.023.
p-value; Mean: 0.165; Median: 0.001; High confidence mean: 0.130; Trimmed mean: 0.002.
Type: Sales price ratio;
Nonprofit assistance: No; Mean: 1.057; Median: 1.028; High confidence mean: 1.056; Trimmed mean: 1.057.
Nonprofit assistance: Yes; Mean: 1.039; Median: 1.001; High confidence mean: 1.036; Trimmed mean: 1.026.
Difference; Mean: 0.018; Median: 0.027; High confidence mean: 0.020; Trimmed mean: 0.031.
p-value; Mean: 0.079; Median: 0.001; High confidence mean: 0.056; Trimmed mean: 0.001.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; p-values statistically significant at 5% or better are bold.
[End of table]

Table 4: The Ratio of AVM Value to Appraisal Value and Sales Price--Nonprofit Down Payment Assistance, National Sample, March 2005:
Hit rate: 65 percent; median confidence score: 77.
Type: Appraisal value ratio;
Nonprofit assistance: No; Mean: 1.051; Median: 1.024; High confidence mean: 1.049; Trimmed mean: 1.047.
Nonprofit assistance: Yes; Mean: 1.037; Median: 1.001; High confidence mean: 1.036; Trimmed mean: 1.025.
Difference; Mean: 0.014; Median: 0.023; High confidence mean: 0.013; Trimmed mean: 0.022.
p-value; Mean: 0.116; Median: 0.007; High confidence mean: 0.156; Trimmed mean: 0.006.
Type: Sales price ratio;
Nonprofit assistance: No; Mean: 1.079; Median: 1.044; High confidence mean: 1.075; Trimmed mean: 1.070.
Nonprofit assistance: Yes; Mean: 1.058; Median: 1.021; High confidence mean: 1.057; Trimmed mean: 1.048.
Difference; Mean: 0.021; Median: 0.023; High confidence mean: 0.018; Trimmed mean: 0.022.
p-value; Mean: 0.049; Median: 0.008; High confidence mean: 0.084; Trimmed mean: 0.013.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; p-values statistically significant at 5% or better are bold.
[End of table]

We also tested for the differences in ratios between transactions with no gift assistance versus transactions with gift assistance from sources other than nonprofits (tables 5-8). We found no significant differences in any of the samples that we examined and no consistent pattern in the signs of the differences. Transactions with assistance had differences in medians that were sometimes slightly positive and sometimes slightly negative.

Table 5: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, National Sample, Fiscal Years 2000, 2001, and 2002:
Hit rate: 78 percent; median confidence score: 78.
Type: Appraisal value ratio;
Other assistance: No; Mean: 1.072; Median: 1.032; High confidence mean: 1.069; Trimmed mean: 1.064.
Other assistance: Yes; Mean: 1.070; Median: 1.027; High confidence mean: 1.065; Trimmed mean: 1.060.
Difference; Mean: 0.002; Median: 0.005; High confidence mean: 0.004; Trimmed mean: 0.004.
p-value; Mean: 0.430; Median: 0.420; High confidence mean: 0.280; Trimmed mean: 0.255.
Type: Sales price ratio;
Other assistance: No; Mean: 1.094; Median: 1.046; High confidence mean: 1.091; Trimmed mean: 1.083.
Other assistance: Yes; Mean: 1.096; Median: 1.046; High confidence mean: 1.089; Trimmed mean: 1.086.
Difference; Mean: -0.002; Median: 0.000; High confidence mean: 0.002; Trimmed mean: -0.003.
p-value; Mean: 0.500; Median: 0.397; High confidence mean: 0.420; Trimmed mean: 0.500.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; no differences were statistically significant at 5% or better.
[End of table]

Table 6: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, MSA Sample, Fiscal Years 2000, 2001, and 2002:
Hit rate: 85 percent; median confidence score: 78.
Type: Appraisal value ratio;
Other assistance: No; Mean: 1.108; Median: 1.079; High confidence mean: 1.085; Trimmed mean: 1.103.
Other assistance: Yes; Mean: 1.101; Median: 1.080; High confidence mean: 1.087; Trimmed mean: 1.101.
Difference; Mean: 0.007; Median: -0.001; Trimmed mean: 0.002.
p-value; Mean: 0.194; Median: 0.381; High confidence mean: 0.500; Trimmed mean: 0.392.
Type: Sales price ratio;
Other assistance: No; Mean: 1.128; Median: 1.097; High confidence mean: 1.105; Trimmed mean: 1.124.
Other assistance: Yes; Mean: 1.122; Median: 1.093; High confidence mean: 1.106; Trimmed mean: 1.121.
Difference; Mean: 0.006; Median: 0.004; Trimmed mean: 0.003.
p-value; Mean: 0.224; Median: 0.375; High confidence mean: 0.500; Trimmed mean: 0.340.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; no differences were statistically significant at 5% or better.
[End of table]

Table 7: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, Atlanta MSA Sample, Fiscal Years 2000, 2001, and 2002:
Hit rate: 95 percent; median confidence score: 85.
Type: Appraisal value ratio;
Other assistance: No; Mean: 1.037; Median: 1.012; High confidence mean: 1.036; Trimmed mean: 1.035.
Other assistance: Yes; Mean: 1.036; Median: 1.017; High confidence mean: 1.033; Trimmed mean: 1.036.
Difference; Mean: 0.001; Median: -0.005; High confidence mean: 0.003; Trimmed mean: -0.001.
p-value; Mean: 0.450; Median: 0.500; High confidence mean: 0.400; Trimmed mean: 0.500.
Type: Sales price ratio;
Other assistance: No; Mean: 1.056; Median: 1.026; High confidence mean: 1.056; Trimmed mean: 1.057.
Other assistance: Yes; Mean: 1.058; Median: 1.030; High confidence mean: 1.056; Trimmed mean: 1.058.
Difference; Mean: -0.002; Median: -0.004; High confidence mean: 0.000; Trimmed mean: -0.001.
p-value; Mean: 0.500; Median: 0.500; High confidence mean: 0.500; Trimmed mean: 0.500.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; no differences were statistically significant at 5% or better.
[End of table]

Table 8: The Ratio of AVM Value to Appraisal Value and Sales Price--Down Payment Assistance from Other Sources, National Sample, March 2005:
Hit rate: 65 percent; median confidence score: 77.
Type: Appraisal value ratio;
Other assistance: No; Mean: 1.051; Median: 1.026; High confidence mean: 1.031; Trimmed mean: 1.049.
Other assistance: Yes; Mean: 1.053; Median: 1.024; High confidence mean: 1.021; Trimmed mean: 1.044.
Difference; Mean: -0.002; Median: 0.002; High confidence mean: 0.010; Trimmed mean: 0.005.
p-value; Mean: 0.500; Median: 0.354; High confidence mean: 0.373; Trimmed mean: 0.379.
Type: Sales price ratio;
Other assistance: No; Mean: 1.073; Median: 1.040; High confidence mean: 1.069; Trimmed mean: 1.069.
Other assistance: Yes; Mean: 1.095; Median: 1.045; High confidence mean: 1.091; Trimmed mean: 1.072.
Difference; Mean: -0.022; Median: -0.005; Trimmed mean: -0.003.
p-value; Mean: 0.500; Median: 0.500; High confidence mean: 0.500; Trimmed mean: 0.500.
Source: GAO.
Notes: p-value is for one-tailed test; p-value of .001 means .001 or less; p-value of .5 means .5 or greater; no differences were statistically significant at 5% or better.
[End of table]

[End of section]

Appendix III: Loan Performance Analysis:

This appendix describes the econometric models that we built and the analysis that we conducted to examine the performance of mortgage loans that received down payment assistance and were insured by the U.S. Department of Housing and Urban Development's (HUD) Federal Housing Administration (FHA). We developed multiple regression models to forecast delinquency, claim, prepayment, and loss on two samples of FHA single-family purchase money loans endorsed in 2000, 2001, and 2002.[Footnote 76] The national sample included all 50 states and the District of Columbia but excluded U.S. territories. The Metropolitan Statistical Area (MSA) sample consisted of loans in three MSAs where the use of down payment assistance was relatively high: Atlanta, Indianapolis, and Salt Lake City. The data were current as of June 30, 2005. Our forecasting models used observations on loan quarters--that is, information on the characteristics and status of an insured loan during each quarter of its life--to predict conditional foreclosure and prepayment probabilities.[Footnote 77] Our model used a pair of binary logistic regressions to predict the probability of claim or prepayment as a function of several key predictor variables. Some of these variables, such as the initial loan-to-value (LTV) ratio, credit score, and the presence of down payment assistance, do not vary over the life of a loan, while others, such as accumulated equity from amortization and price appreciation, may change and are updated each quarter.

Data and Sample Selection:

For our analysis, we used the 8,294 loans in the Concentrance Consulting Group's (Concentrance) sample of FHA single-family purchase money mortgage loans endorsed in fiscal years 2000, 2001, and 2002, for which the presence, source, and amount of assistance had been ascertained through a loan file review.[Footnote 78] Only loans with LTV ratios greater than 95 percent were sampled.
The national sample consisted of just over 5,000 loans from a simple random sample of purchase money loans, while the MSA sample consisted of just over 1,000 purchase money loans from each of three MSAs: Atlanta, Indianapolis, and Salt Lake City. Concentrance's loan file review also recorded the borrowers' credit scores, an important predictor of loan performance that, at the time, was not captured in FHA's Single-Family Data Warehouse.

We supplemented these files with information from FHA's Single-Family Data Warehouse. We then merged variables reflecting delinquency, claim, and prepayment information with the Concentrance files, along with information on borrowers' assets and data on national and local economic conditions. We obtained state-level unemployment rates from the Bureau of Labor Statistics, 30-year fixed rate mortgage rates from Freddie Mac, 1- and 10-year Treasury interest rates from the Federal Reserve, the Personal Consumption Expenditure Deflator from the Bureau of Economic Analysis, and median existing house prices at the state level from Global Insights, Inc., in order to measure house price appreciation over time. Table 9 lists the names and definitions of the variables used in the models.

Table 9: Names and Definitions of the Variables Used in Our Regression Models:

Constructed risk; Combines the variables used in a prior GAO report to predict claim probability, including initial LTV ratio, price appreciation after origination, loan size, location, interest rate, unemployment rate, loan type, and other variables[A].
FICO score; FICO score of borrower in case binder (if two scores, it is the lower score; if three scores, it is the median score).
No FICO score; Equals 1 if no FICO score was available for the borrower.
Borrower reserves; Equals 1 if the borrower had less than 2 months of mortgage payment in liquid assets after closing.
Front-end ratio; Housing payments divided by income.
Seller-funded down payment assistance; Equals 1 if the borrower received down payment assistance from a seller-funded program[B].
Nonseller-funded down payment assistance; Equals 1 if the borrower received down payment assistance from a source other than a seller-funded program.
Underserved area; Equals 1 if the home is in a census tract designated by HUD as underserved.
Condominium; Equals 1 if the loan is a 234(c) condominium loan.
First-time homebuyer; Equals 1 if the borrower was flagged in HUD's database as a first-time homebuyer.
LTV ratio; The ratio of the original mortgage amount to the sales price of the home.
15-year mortgage; Equals 1 if the mortgage term is 25 years or less (mostly 15-year mortgages).
Endorsed in fiscal year 2000; Equals 1 if endorsed in fiscal year 2000.
Endorsed in fiscal year 2001; Equals 1 if endorsed in fiscal year 2001.
House price appreciation rate; Growth rate in the median price of existing housing, reduced by 0.5 percent per quarter to adjust for increasing quality of the housing stock.
First 6 quarters; Number of quarters since origination, up to 6.
Next 6 quarters; Number of quarters since the sixth quarter after origination, up to 12.
Following quarters; Number of quarters since the twelfth quarter after origination.
Adjustable Rate Mortgage (ARM); Equals 1 if adjustable rate mortgage.
Atlanta MSA; Equals 1 if in the Atlanta MSA sample.
Salt Lake City MSA; Equals 1 if in the Salt Lake City MSA sample.
Relatively high equity; The ratio of the market value of the mortgage to the book value of the mortgage, when greater than 1.2; measures the incentive of the borrower to refinance the loan.
Relatively low equity; The ratio of the market value of the mortgage to the book value of the mortgage, when less than 1.2.
Initial interest rate; The initial interest rate on the mortgage.
Original mortgage amount; The balance of the mortgage at time of origination.
Source: GAO.
[B] In a small number of cases borrowers received both types of assistance. In these cases, the record was assigned to the category with the larger amount of assistance.
[C] Global Insights, Inc.
[End of table]

Specification of Delinquency and Claim Models:

The models we estimated used logistic regression to predict the probability of a loan becoming seriously delinquent or resulting in a claim on FHA's insurance coverage, as a function of credit score, equity, and other variables. Equity and credit scores have consistently been found to be important predictors of mortgage credit risk, and some studies have found that other variables, such as qualifying ratios, are important.[Footnote 79] The dependent variable is the conditional probability of a loan becoming 90 days delinquent, or resulting in a claim, in a given quarter, conditional on the loan having survived until that quarter.[Footnote 80] We estimated the delinquency and claim regressions using both national and MSA samples of loans. For each of these samples, we developed four different delinquency regressions and four different claim regressions.

We based the first model for the delinquency and claim regressions on the variables that FHA's Technology Open to Approved Lenders (TOTAL) Mortgage Scorecard, an automated underwriting algorithm, uses as predictors of credit risk. These variables were initial LTV ratio, credit score, housing payment-to-income ratio (the front-end ratio), borrower reserves, and mortgage term (15-year or 30-year term). To these, we added variables for house price appreciation, variables reflecting the passage of time, and variables indicating the presence and source of down payment assistance.

For the second model, we augmented the model based on the FHA TOTAL Mortgage Scorecard variables with indicators of whether the mortgage was an adjustable rate mortgage, the property was located in an underserved area, the property was a condominium, and the purchaser was a first-time homebuyer.

We based the third regression model on GAO's model of FHA actuarial soundness that we estimated in 2001.[Footnote 81] That model used, among others, the initial LTV ratio, loan type (30-year fixed, 15-year fixed, investor, or adjustable rate mortgage), property type (one or multiple unit), Census division, accumulated equity stemming from house price appreciation and amortization, and a set of variables reflecting the passage of time, to predict the annual probability of a loan terminating in a claim. We created a variable called constructed risk, using the results of the 2001 actuarial study. Because that study used millions of loans in the model estimation, its estimates of the effects of certain variables, such as accumulated equity, may be more precise than those produced using the thousands of loans in the Concentrance sample. However, the actuarial study did not use credit score as a predictor variable or consider down payment assistance. Therefore, we included the constructed risk variable along with credit score information, borrower reserves, front-end ratio, and presence and source of down payment assistance.

The fourth model augments GAO's actuarial model by adding three variables: underserved area, condominium, and first-time homebuyer.
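As an illustration of how these four specifications differ, the sketch below writes each delinquency regression as a model formula over the Table 9 variables and fits it as a binary logit on a loan-quarter data set. The DataFrame `panel`, the lower-case column names, and the use of the statsmodels formula interface are assumptions of this sketch, not the report's actual estimation code; the claim regressions would use the same right-hand sides with a claim indicator as the dependent variable.

```python
import statsmodels.formula.api as smf

# Illustrative formulas for the four delinquency specifications described above.
# Column names are hypothetical stand-ins for the Table 9 variables.
BASE_TOTAL = (
    "delinquent_90 ~ ltv_ratio + fico_score + no_fico_score + front_end_ratio"
    " + borrower_reserves + mortgage_15yr + hpa_rate"
    " + first_6_qtrs + next_6_qtrs + following_qtrs"
    " + endorsed_fy2000 + endorsed_fy2001"
    " + seller_funded_dpa + nonseller_funded_dpa"
)
BASE_ACTUARIAL = (
    "delinquent_90 ~ constructed_risk + fico_score + no_fico_score"
    " + borrower_reserves + front_end_ratio"
    " + seller_funded_dpa + nonseller_funded_dpa"
)
SPECS = {
    # Model 2 adds ARM, underserved area, condominium, and first-time homebuyer to model 1;
    # model 4 adds the last three of those to model 3.
    "total_scorecard":           BASE_TOTAL,
    "total_scorecard_augmented": BASE_TOTAL + " + arm + underserved_area + condominium + first_time_buyer",
    "gao_actuarial":             BASE_ACTUARIAL,
    "gao_actuarial_augmented":   BASE_ACTUARIAL + " + underserved_area + condominium + first_time_buyer",
}

def fit_all(panel):
    """Fit each specification as a binary logit on a loan-quarter DataFrame."""
    return {name: smf.logit(formula, data=panel).fit(disp=False)
            for name, formula in SPECS.items()}
```

The exponentiated coefficients from fits like these correspond to the odds ratio point estimates reported in tables 10 through 25.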
GAO estimated prepayments and losses twice, once in a national sample and once in an MSA sample.

The LTV ratio calculated from FHA's database will tend to understate the true LTV ratio of the mortgage if homes with seller-funded down payment assistance are sold for higher prices than are comparable homes without such assistance.[Footnote 82] Comparable homes would have the same value, yet the home purchased with assistance may have a larger loan. For example, FHA regulations allow the borrower to take out a mortgage for about $99,000 on a $100,000 home. With seller-funded down payment assistance, the same home might sell for $103,000 and qualify for a $102,000 loan.[Footnote 83] The calculated LTV ratio would be about 99 percent in each case ($99,000/$100,000 or $102,000/$103,000), but the transaction with seller-funded assistance would have a larger mortgage, backed by the same collateral. In such cases, the initial LTV ratio would be understated, the borrower's equity subsequent to origination would be overstated, and the risk of delinquency or claim for such loans should be higher than for loans with comparable LTV ratios and subsequent price appreciation.

To test for this possibility, we included a variable, seller-funded down payment assistance, which was set equal to 1 for loans that received seller-funded down payment assistance. To test for the possibility that down payment assistance in general, and not just seller-funded assistance, raised delinquency and claim probabilities, we included a variable, nonseller-funded down payment assistance, which was set equal to 1 for loans that received down payment assistance from relatives, a borrower's employer, government programs, nonprofits that were not seller-funded, or nonprofits with a source of funding that was not ascertained.

Estimation Results:

Tables 10 through 17 present the estimation results for our 90-day delinquency regressions, and tables 18 through 25 present the results for our claim regressions for the national samples and MSA samples. Our results are consistent with other research that finds credit scores and accumulated equity to be important variables predicting delinquency and claims.[Footnote 84] In specifications that use the constructed risk variable (tables 12, 13, 16, 17, 20, 21, 24, and 25), we find it a statistically significant predictor of delinquency or claim. Additionally, credit score is highly significant. The front-end ratio, which FHA uses in its underwriting, is also very important. Borrower reserves, however, generally have the wrong sign and are statistically insignificant. In some specifications, indicators for condominium loans, for loans to first-time homebuyers, and for loans in underserved areas are added, and they are also found to be insignificant.

In specifications that use TOTAL Mortgage Scorecard variables (tables 10, 11, 14, 15, 18, 19, 22, and 23), credit score has a statistically significant effect of the expected sign. The front-end ratio is also an important predictor with the expected sign. Again, reserves are not an important predictor; neither are the 15-year loan indicator, the initial LTV ratio, or the indicators for condominiums or underserved areas. The failure to find a significant effect for short-term loans is not surprising, as such loans constitute only about 1 percent of the loans in each sample.
The lack of a significant effect for LTV ratio is also not surprising. The Concentrance samples are restricted to high-LTV loans, and about 85 percent of loans in the sample had LTV ratios in a very narrow range (98 to 100 percent). Over 99 percent of loans had LTV ratios between 96 and 102 percent. The lack of variation in this variable meant that the regression had little ability to identify its effect.

The lack of a significant effect for reserves in the claim and delinquency regressions is surprising. It may indicate that down payment assistance alters the relationship between reserves and credit risk. Without assistance, borrowers with substantial liquid assets may have few reserves after a down payment is made. With assistance, borrowers with substantial liquid assets may retain those assets by not making a down payment with their own funds. If liquid assets are a better measure of risk than are reserves, then reserves may be a less useful risk indicator when substantial numbers of loans have down payment assistance.

Delinquency Results:

In both the national and MSA samples, down payment assistance substantially increased the likelihood of 90-day delinquency. Using the augmented GAO actuarial model, results in the national sample indicated that down payment assistance from a seller-funded nonprofit raised the delinquency rate by 100 percent, compared with similar loans with no assistance (table 12).[Footnote 85] Assistance from other sources raised the delinquency rate by 20 percent, relative to similar loans with no assistance. With the model based on the augmented TOTAL Mortgage Scorecard variables, the results indicated that assistance from a seller-funded nonprofit raised the delinquency rate by 93 percent, while assistance from other sources raised the delinquency rate by 21 percent (table 10). The differences between loans with seller-funded assistance and loans without it were significant with a one-tailed test at a level of 1 percent in all variations of the model. The differences between seller-funded assistance and assistance from other sources were large and also significant at 1 percent in a one-tailed test in all variations.

Differences in delinquency rates in the MSA sample were also substantial. Considering the augmented GAO actuarial model, loans with seller-funded down payment assistance had delinquency rates that were 105 percent higher than the delinquency rates on comparable loans without assistance, while loans with assistance from other sources had delinquency rates that were 34 percent higher than the delinquency rates of loans without assistance (table 16). The differences between seller-funded assistance and no assistance, and between seller-funded assistance and other assistance, were both significant at 1 percent in one-tailed tests in all variations.[Footnote 86]

Claim Results:

Down payment assistance also had a substantial impact on claims in both the national and MSA samples. Results from the national sample using the augmented GAO actuarial model indicated that assistance from a seller-funded nonprofit raised the claim rate by 81 percent, relative to similar loans with no assistance, as shown in the odds ratio point estimate column of table 20.[Footnote 87] Assistance from other sources raised the claim rate by 44 percent, relative to similar loans with no down payment assistance.
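The percentage increases quoted in these results correspond to the odds ratio point estimates in the tables that follow, which are simply the exponentiated coefficient estimates from the logit models. The short check below recomputes a few of them from the published estimates; the figures are taken from tables 12 and 20, and the use of Python here is only for illustration.

```python
import math

# Exponentiating a logit coefficient gives the odds ratio reported in the tables;
# (odds ratio - 1) x 100 is the percentage increase quoted in the text.
published = {
    # (table, variable): coefficient estimate from the regression output
    ("table 12", "seller-funded DPA"):    0.6961,  # reported odds ratio 2.006 (roughly doubled)
    ("table 12", "nonseller-funded DPA"): 0.1839,  # reported odds ratio 1.202 (about 20 percent)
    ("table 20", "seller-funded DPA"):    0.5947,  # reported odds ratio 1.812 (about 81 percent)
    ("table 20", "nonseller-funded DPA"): 0.3641,  # reported odds ratio 1.439 (about 44 percent)
}
for (table, variable), beta in published.items():
    odds_ratio = math.exp(beta)
    print(f"{table}, {variable}: exp({beta}) = {odds_ratio:.3f} "
          f"-> about {100 * (odds_ratio - 1):.1f} percent higher odds")
```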
With the model based on the augmented TOTAL Mortgage Scorecard variables, we found that assistance from a seller-funded nonprofit raised the claim rate by 76 percent, while assistance from other sources raised the claim rate by 49 percent (table 18). The differences between loans with down payment assistance and those without it were statistically significant with a one-tailed test at a level of 1 percent. Seller-funded assistance had a larger impact on claims than did assistance from other sources. Those differences, while large, were not quite significant at conventional levels.[Footnote 88]

Differences in the MSA sample were even larger for seller-funded nonprofit assistance. Using the GAO actuarial model, loans with seller-funded down payment assistance had claim rates that were 134 percent higher than the claim rates on comparable loans without assistance, while loans with down payment assistance from other sources had claim rates that were 24 percent higher than the claim rates on loans without assistance (table 25). The difference between seller-funded assistance and no assistance, and the difference between seller-funded assistance and other assistance, were both significant at 1 percent in one-tailed tests in all variations of the model.

Several explanations are possible for the increase in delinquency and claim rates associated with down payment assistance from nonseller-funded sources. It is possible that the gifts from relatives were actually loans, despite the inclusion of a gift letter indicating that repayment is not expected. In these cases, the LTV ratio would be misstated, not because the collateral value was overstated, but because the total amount of debt incurred in the transaction was understated. It is also possible that borrowers who could save for a down payment differed in key respects from borrowers who could not. For example, some researchers have suggested that households may increase their savings rates prior to purchasing a home.[Footnote 89] Others have found evidence that young households increased their earnings and savings by working more hours prior to purchasing their first home.[Footnote 90] It may be the case that households that can more easily increase earnings or reduce consumption in order to accumulate savings enter homeownership when a down payment is required, but that both flexible and inflexible households purchase homes when no down payment is required. The inclusion of households with less flexibility would tend to increase delinquencies and claims.

While delinquency differences are about the same for the MSA sample and the national sample, claim rate differences for seller-funded nonprofit assistance are much larger in the MSA sample than they are in the national sample. Research suggests that delinquencies are more likely to cure, or to prepay, than to claim if the borrower is projected to have accumulated equity.[Footnote 91] The rate of house price appreciation in the national sample is much higher than in the MSA samples, so that borrowers in the national sample would have accumulated more equity. Over the 5-year period from the first quarter of fiscal year 2000 to the last quarter of fiscal year 2004, the median house price of existing houses increased 11 percent in the Salt Lake City MSA, 18 percent in the Indianapolis MSA, and 32 percent in the Atlanta MSA. The median increase in the national sample was about 39 percent and the mean increase was 51 percent.
It is possible that substantial house price appreciation in the national sample weakened the effect of seller-funded down payment assistance on claims, as the assisted loans that became delinquent were more likely to be resolved without a claim in rapidly appreciating markets. Table 10: Delinquency Regression Results--National Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 2.9662; Analysis of maximum likelihood estimates: Standard error: 3.8624; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.4425; Odds ratio estimates: Point estimate: [Empty]. Parameter: LTV ratio; Analysis of maximum likelihood estimates: Estimate: -0.00214; Analysis of maximum likelihood estimates: Standard error: 0.038; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.9551; Odds ratio estimates: Point estimate: 0.998. Parameter: 15-year mortgage; Analysis of maximum likelihood estimates: Estimate: 0.096; Analysis of maximum likelihood estimates: Standard error: 0.2587; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.7105; Odds ratio estimates: Point estimate: 1.101. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.0119; Analysis of maximum likelihood estimates: Standard error: 0.000716; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.988. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.5569; Analysis of maximum likelihood estimates: Standard error: 0.1259; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.745. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: 0.0634; Analysis of maximum likelihood estimates: Standard error: 0.0895; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.4789; Odds ratio estimates: Point estimate: 1.065. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 1.477; Analysis of maximum likelihood estimates: Standard error: 0.5467; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0069; Odds ratio estimates: Point estimate: 4.38. Parameter: Endorsed in fiscal year 2000; Analysis of maximum likelihood estimates: Estimate: -0.1064; Analysis of maximum likelihood estimates: Standard error: 0.1047; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.3096; Odds ratio estimates: Point estimate: 0.899. Parameter: Endorsed in fiscal year 2001; Analysis of maximum likelihood estimates: Estimate: -0.0332; Analysis of maximum likelihood estimates: Standard error: 0.0979; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.7346; Odds ratio estimates: Point estimate: 0.967. Parameter: ARM; Analysis of maximum likelihood estimates: Estimate: - 0.3078; Analysis of maximum likelihood estimates: Standard error: 0.1678; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0667; Odds ratio estimates: Point estimate: 0.735. Parameter: Underserved area; Analysis of maximum likelihood estimates: Estimate: 0.0703; Analysis of maximum likelihood estimates: Standard error: 0.0785; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.3706; Odds ratio estimates: Point estimate: 1.073. Parameter: Condominium; Analysis of maximum likelihood estimates: Estimate: -0.2547; Analysis of maximum likelihood estimates: Standard error: 0.1843; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1669; Odds ratio estimates: Point estimate: 0.775. 
Parameter: First-time homebuyer; Analysis of maximum likelihood estimates: Estimate: -0.0448; Analysis of maximum likelihood estimates: Standard error: 0.1064; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.6736; Odds ratio estimates: Point estimate: 0.956. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.6583; Analysis of maximum likelihood estimates: Standard error: 0.1111; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.932. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.1911; Analysis of maximum likelihood estimates: Standard error: 0.0935; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.041; Odds ratio estimates: Point estimate: 1.211. Parameter: House price appreciation rate; Analysis of maximum likelihood estimates: Estimate: -0.9398; Analysis of maximum likelihood estimates: Standard error: 0.7716; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2232; Odds ratio estimates: Point estimate: 0.391. Parameter: First 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.1997; Analysis of maximum likelihood estimates: Standard error: 0.0259; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.221. Parameter: Next 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.00186; Analysis of maximum likelihood estimates: Standard error: 0.0492; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.9698; Odds ratio estimates: Point estimate: 1.002. Parameter: Following quarters; Analysis of maximum likelihood estimates: Estimate: 0.0558; Analysis of maximum likelihood estimates: Standard error: 0.0496; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2603; Odds ratio estimates: Point estimate: 1.057. Source: GAO. [End of table] Table 11: Delinquency Regression Results--National Sample, Model Based on TOTAL Mortgage Scorecard Variables: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 0.3717; Analysis of maximum likelihood estimates: Standard error: 3.7917; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.9219; Odds ratio estimates: Point estimate: [Empty]. Parameter: LTV ratio; Analysis of maximum likelihood estimates: Estimate: 0.0249; Analysis of maximum likelihood estimates: Standard error: 0.037; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.4997; Odds ratio estimates: Point estimate: 1.025. Parameter: 15-year mortgage; Analysis of maximum likelihood estimates: Estimate: 0.1153; Analysis of maximum likelihood estimates: Standard error: 0.2585; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.6555; Odds ratio estimates: Point estimate: 1.122. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.012; Analysis of maximum likelihood estimates: Standard error: 0.000714; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.988. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.5782; Analysis of maximum likelihood estimates: Standard error: 0.1251; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.783. 
Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: 0.0545; Analysis of maximum likelihood estimates: Standard error: 0.0892; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5414; Odds ratio estimates: Point estimate: 1.056. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 1.4461; Analysis of maximum likelihood estimates: Standard error: 0.5442; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0079; Odds ratio estimates: Point estimate: 4.246. Parameter: Endorsed in fiscal year 2000; Analysis of maximum likelihood estimates: Estimate: -0.1498; Analysis of maximum likelihood estimates: Standard error: 0.1039; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1493; Odds ratio estimates: Point estimate: 0.861. Parameter: Endorsed in fiscal year 2001; Analysis of maximum likelihood estimates: Estimate: -0.0351; Analysis of maximum likelihood estimates: Standard error: 0.0979; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.7197; Odds ratio estimates: Point estimate: 0.965. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.6384; Analysis of maximum likelihood estimates: Standard error: 0.1101; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.894. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.1911; Analysis of maximum likelihood estimates: Standard error: 0.0933; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0405; Odds ratio estimates: Point estimate: 1.211. Parameter: House price appreciation rate; Analysis of maximum likelihood estimates: Estimate: -1.0039; Analysis of maximum likelihood estimates: Standard error: 0.7676; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.191; Odds ratio estimates: Point estimate: 0.366. Parameter: First 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.1994; Analysis of maximum likelihood estimates: Standard error: 0.0259; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.221. Parameter: Next 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.000889; Analysis of maximum likelihood estimates: Standard error: 0.0493; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.9856; Odds ratio estimates: Point estimate: 1.001. Parameter: Following quarters; Analysis of maximum likelihood estimates: Estimate: 0.0563; Analysis of maximum likelihood estimates: Standard error: 0.0497; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.257; Odds ratio estimates: Point estimate: 1.058. Source: GAO. [End of table] Table 12: Delinquency Regression Results--National Sample, Augmented GAO Actuarial Model: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 1.8293; Analysis of maximum likelihood estimates: Standard error: 0.498; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0002; Odds ratio estimates: Point estimate: . Parameter: Constructed risk; Analysis of maximum likelihood estimates: Estimate: 0.1162; Analysis of maximum likelihood estimates: Standard error: 0.0175; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.123. 
Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.0116; Analysis of maximum likelihood estimates: Standard error: 0.000711; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.988. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.5583; Analysis of maximum likelihood estimates: Standard error: 0.1255; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.748. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: 0.0476; Analysis of maximum likelihood estimates: Standard error: 0.0893; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5943; Odds ratio estimates: Point estimate: 1.049. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 1.2325; Analysis of maximum likelihood estimates: Standard error: 0.5399; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0224; Odds ratio estimates: Point estimate: 3.43. Parameter: Underserved area; Analysis of maximum likelihood estimates: Estimate: 0.0415; Analysis of maximum likelihood estimates: Standard error: 0.0783; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5961; Odds ratio estimates: Point estimate: 1.042. Parameter: Condominium; Analysis of maximum likelihood estimates: Estimate: -0.2416; Analysis of maximum likelihood estimates: Standard error: 0.1713; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1584; Odds ratio estimates: Point estimate: 0.785. Parameter: First-time homebuyer; Analysis of maximum likelihood estimates: Estimate: -0.047; Analysis of maximum likelihood estimates: Standard error: 0.1062; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.6583; Odds ratio estimates: Point estimate: 0.954. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.6961; Analysis of maximum likelihood estimates: Standard error: 0.1086; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 2.006. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.1839; Analysis of maximum likelihood estimates: Standard error: 0.0932; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0484; Odds ratio estimates: Point estimate: 1.202. Source: GAO. [End of table] Table 13: Delinquency Regression Results--National Sample, GAO Actuarial Model: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 1.8124; Analysis of maximum likelihood estimates: Standard error: 0.4868; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0002; Odds ratio estimates: Point estimate: [Empty]. Parameter: Constructed risk; Analysis of maximum likelihood estimates: Estimate: 0.118; Analysis of maximum likelihood estimates: Standard error: 0.0174; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.125. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.0117; Analysis of maximum likelihood estimates: Standard error: 0.000708; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.988. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.5652; Analysis of maximum likelihood estimates: Standard error: 0.1247; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.76. 
Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: 0.0448; Analysis of maximum likelihood estimates: Standard error: 0.0891; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.615; Odds ratio estimates: Point estimate: 1.046. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 1.191; Analysis of maximum likelihood estimates: Standard error: 0.5363; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0264; Odds ratio estimates: Point estimate: 3.29. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.6979; Analysis of maximum likelihood estimates: Standard error: 0.1083; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 2.01. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.1835; Analysis of maximum likelihood estimates: Standard error: 0.0929; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0483; Odds ratio estimates: Point estimate: 1.201. Source: GAO. [End of table] Table 14: Delinquency Regression Results--MSA Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: -8.4601; Analysis of maximum likelihood estimates: Standard error: 8.0246; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2918; Odds ratio estimates: Point estimate: [Empty]. Parameter: LTV ratio; Analysis of maximum likelihood estimates: Estimate: 0.0457; Analysis of maximum likelihood estimates: Standard error: 0.079; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5634; Odds ratio estimates: Point estimate: 1.047. Parameter: 15-year mortgage; Analysis of maximum likelihood estimates: Estimate: 0.2383; Analysis of maximum likelihood estimates: Standard error: 0.5188; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.6461; Odds ratio estimates: Point estimate: 1.269. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.011; Analysis of maximum likelihood estimates: Standard error: 0.000826; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.989. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.4604; Analysis of maximum likelihood estimates: Standard error: 0.1437; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0014; Odds ratio estimates: Point estimate: 1.585. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: -0.0184; Analysis of maximum likelihood estimates: Standard error: 0.1163; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.8742; Odds ratio estimates: Point estimate: 0.982. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 2.2265; Analysis of maximum likelihood estimates: Standard error: 0.672; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0009; Odds ratio estimates: Point estimate: 9.268. Parameter: Endorsed in fiscal year 2000; Analysis of maximum likelihood estimates: Estimate: -0.2113; Analysis of maximum likelihood estimates: Standard error: 0.1344; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.116; Odds ratio estimates: Point estimate: 0.81. 
Parameter: Endorsed in fiscal year 2001; Analysis of maximum likelihood estimates: Estimate: -0.0661; Analysis of maximum likelihood estimates: Standard error: 0.113; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5586; Odds ratio estimates: Point estimate: 0.936. Parameter: ARM; Analysis of maximum likelihood estimates: Estimate: - 0.0869; Analysis of maximum likelihood estimates: Standard error: 0.1367; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5249; Odds ratio estimates: Point estimate: 0.917. Parameter: Underserved area; Analysis of maximum likelihood estimates: Estimate: 0.1458; Analysis of maximum likelihood estimates: Standard error: 0.0918; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1124; Odds ratio estimates: Point estimate: 1.157. Parameter: Condominium; Analysis of maximum likelihood estimates: Estimate: 0.3403; Analysis of maximum likelihood estimates: Standard error: 0.2298; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1387; Odds ratio estimates: Point estimate: 1.405. Parameter: First-time homebuyer; Analysis of maximum likelihood estimates: Estimate: -0.1141; Analysis of maximum likelihood estimates: Standard error: 0.1258; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.3643; Odds ratio estimates: Point estimate: 0.892. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.741; Analysis of maximum likelihood estimates: Standard error: 0.1146; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 2.098. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.3074; Analysis of maximum likelihood estimates: Standard error: 0.1346; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0224; Odds ratio estimates: Point estimate: 1.36. Parameter: Atlanta MSA; Analysis of maximum likelihood estimates: Estimate: -0.1697; Analysis of maximum likelihood estimates: Standard error: 0.1149; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1399; Odds ratio estimates: Point estimate: 0.844. Parameter: Salt Lake City MSA; Analysis of maximum likelihood estimates: Estimate: 0.2951; Analysis of maximum likelihood estimates: Standard error: 0.1265; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0197; Odds ratio estimates: Point estimate: 1.343. Parameter: House price appreciation rate; Analysis of maximum likelihood estimates: Estimate: 4.9561; Analysis of maximum likelihood estimates: Standard error: 2.2367; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0267; Odds ratio estimates: Point estimate: 142.033. Parameter: First 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.2025; Analysis of maximum likelihood estimates: Standard error: 0.0294; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.224. Parameter: Next 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.0374; Analysis of maximum likelihood estimates: Standard error: 0.0589; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5256; Odds ratio estimates: Point estimate: 1.038. Parameter: Following quarters; Analysis of maximum likelihood estimates: Estimate: -0.0242; Analysis of maximum likelihood estimates: Standard error: 0.0626; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.6988; Odds ratio estimates: Point estimate: 0.976. Source: GAO. 
[End of table] Table 15: Delinquency Regression Results--MSA Sample, Model Based on TOTAL Mortgage Scorecard Variables: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: -4.3569; Analysis of maximum likelihood estimates: Standard error: 5.0712; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.3903; Odds ratio estimates: Point estimate: [Empty] . Parameter: LTV ratio; Analysis of maximum likelihood estimates: Estimate: 0.00335; Analysis of maximum likelihood estimates: Standard error: 0.0465; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.9425; Odds ratio estimates: Point estimate: 1.003. Parameter: 15-year mortgage; Analysis of maximum likelihood estimates: Estimate: 0.2403; Analysis of maximum likelihood estimates: Standard error: 0.5184; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.643; Odds ratio estimates: Point estimate: 1.272. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.0109; Analysis of maximum likelihood estimates: Standard error: 0.000822; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.989. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.463; Analysis of maximum likelihood estimates: Standard error: 0.1423; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0011; Odds ratio estimates: Point estimate: 1.589. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: -0.0185; Analysis of maximum likelihood estimates: Standard error: 0.1164; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.8735; Odds ratio estimates: Point estimate: 0.982. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 2.1509; Analysis of maximum likelihood estimates: Standard error: 0.6709; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0013; Odds ratio estimates: Point estimate: 8.593. Parameter: Endorsed in fiscal year 2000; Analysis of maximum likelihood estimates: Estimate: -0.1969; Analysis of maximum likelihood estimates: Standard error: 0.1292; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1273; Odds ratio estimates: Point estimate: 0.821. Parameter: Endorsed in fiscal year 2001; Analysis of maximum likelihood estimates: Estimate: -0.0439; Analysis of maximum likelihood estimates: Standard error: 0.1114; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.6936; Odds ratio estimates: Point estimate: 0.957. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.7357; Analysis of maximum likelihood estimates: Standard error: 0.1138; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 2.087. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.3091; Analysis of maximum likelihood estimates: Standard error: 0.1343; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0214; Odds ratio estimates: Point estimate: 1.362. Parameter: Atlanta MSA; Analysis of maximum likelihood estimates: Estimate: -0.1443; Analysis of maximum likelihood estimates: Standard error: 0.114; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2054; Odds ratio estimates: Point estimate: 0.866. 
Parameter: Salt Lake City MSA; Analysis of maximum likelihood estimates: Estimate: 0.3253; Analysis of maximum likelihood estimates: Standard error: 0.1244; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0089; Odds ratio estimates: Point estimate: 1.384. Parameter: House price appreciation rate; Analysis of maximum likelihood estimates: Estimate: 4.9592; Analysis of maximum likelihood estimates: Standard error: 2.2289; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0261; Odds ratio estimates: Point estimate: 142.478. Parameter: First 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.2026; Analysis of maximum likelihood estimates: Standard error: 0.0294; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.225. Parameter: Next 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.038; Analysis of maximum likelihood estimates: Standard error: 0.059; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5197; Odds ratio estimates: Point estimate: 1.039. Parameter: Following quarters; Analysis of maximum likelihood estimates: Estimate: -0.0256; Analysis of maximum likelihood estimates: Standard error: 0.0628; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.6838; Odds ratio estimates: Point estimate: 0.975. Source: GAO. [End of table] Table 16: Delinquency Regression Results--MSA Sample, Augmented GAO Actuarial Model: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 1.0815; Analysis of maximum likelihood estimates: Standard error: 0.5621; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0543; Odds ratio estimates: Point estimate: [Empty]. Parameter: Constructed risk; Analysis of maximum likelihood estimates: Estimate: 0.1411; Analysis of maximum likelihood estimates: Standard error: 0.0239; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.152. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.0108; Analysis of maximum likelihood estimates: Standard error: 0.000819; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.989. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.45; Analysis of maximum likelihood estimates: Standard error: 0.143; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0016; Odds ratio estimates: Point estimate: 1.568. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: -0.0195; Analysis of maximum likelihood estimates: Standard error: 0.1159; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.8666; Odds ratio estimates: Point estimate: 0.981. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 2.1455; Analysis of maximum likelihood estimates: Standard error: 0.6696; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0014; Odds ratio estimates: Point estimate: 8.546. Parameter: Underserved area; Analysis of maximum likelihood estimates: Estimate: 0.1265; Analysis of maximum likelihood estimates: Standard error: 0.0914; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1665; Odds ratio estimates: Point estimate: 1.135. Parameter: Condominium; Analysis of maximum likelihood estimates: Estimate: 0.3149; Analysis of maximum likelihood estimates: Standard error: 0.1905; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0983; Odds ratio estimates: Point estimate: 1.37. 
Parameter: First-time homebuyer; Analysis of maximum likelihood estimates: Estimate: -0.1261; Analysis of maximum likelihood estimates: Standard error: 0.1256; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.3152; Odds ratio estimates: Point estimate: 0.882. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.719; Analysis of maximum likelihood estimates: Standard error: 0.1125; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 2.052. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.2932; Analysis of maximum likelihood estimates: Standard error: 0.1342; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0289; Odds ratio estimates: Point estimate: 1.341. Parameter: Atlanta MSA; Analysis of maximum likelihood estimates: Estimate: -0.1538; Analysis of maximum likelihood estimates: Standard error: 0.1071; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1508; Odds ratio estimates: Point estimate: 0.857. Parameter: Salt Lake City MSA; Analysis of maximum likelihood estimates: Estimate: 0.1268; Analysis of maximum likelihood estimates: Standard error: 0.1222; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2991; Odds ratio estimates: Point estimate: 1.135. Source: GAO. [End of table] Table 17: Delinquency Regression Results--MSA Sample, GAO Actuarial Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 0.987; Analysis of maximum likelihood estimates: Standard error: 0.5534; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0745; Odds ratio estimates: Point estimate: [Empty]. Parameter: Constructed risk; Analysis of maximum likelihood estimates: Estimate: 0.1419; Analysis of maximum likelihood estimates: Standard error: 0.0238; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.152. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.0107; Analysis of maximum likelihood estimates: Standard error: 0.000814; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.989. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.4409; Analysis of maximum likelihood estimates: Standard error: 0.1415; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0009; Odds ratio estimates: Point estimate: 1.554. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: -0.017; Analysis of maximum likelihood estimates: Standard error: 0.1158; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.8831; Odds ratio estimates: Point estimate: 0.983. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 2.0408; Analysis of maximum likelihood estimates: Standard error: 0.6682; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0023; Odds ratio estimates: Point estimate: 7.697. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.7131; Analysis of maximum likelihood estimates: Standard error: 0.1118; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 2.04. 
Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.2924; Analysis of maximum likelihood estimates: Standard error: 0.1338; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0289; Odds ratio estimates: Point estimate: 1.34. Parameter: Atlanta MSA; Analysis of maximum likelihood estimates: Estimate: -0.131; Analysis of maximum likelihood estimates: Standard error: 0.1065; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2185; Odds ratio estimates: Point estimate: 0.877. Parameter: Salt Lake City MSA; Analysis of maximum likelihood estimates: Estimate: 0.1711; Analysis of maximum likelihood estimates: Standard error: 0.1203; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1549; Odds ratio estimates: Point estimate: 1.187. Source: GAO. [End of table] Table 18: Claim regression results - National Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 4.6847; Analysis of maximum likelihood estimates: Standard error: 4.4388; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2912; Odds ratio estimates: Point estimate: . Parameter: LTV ratio; Analysis of maximum likelihood estimates: Estimate: -0.0575; Analysis of maximum likelihood estimates: Standard error: 0.0431; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1816; Odds ratio estimates: Point estimate: 0.944. Parameter: 15-year mortgage; Analysis of maximum likelihood estimates: Estimate: 0.4688; Analysis of maximum likelihood estimates: Standard error: 0.3668; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2012; Odds ratio estimates: Point estimate: 1.598. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.00926; Analysis of maximum likelihood estimates: Standard error: 0.00116; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.991. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.7271; Analysis of maximum likelihood estimates: Standard error: 0.1946; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0002; Odds ratio estimates: Point estimate: 2.069. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: -0.0933; Analysis of maximum likelihood estimates: Standard error: 0.1558; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5492; Odds ratio estimates: Point estimate: 0.911. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 2.1398; Analysis of maximum likelihood estimates: Standard error: 0.8949; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0168; Odds ratio estimates: Point estimate: 8.498. Parameter: Endorsed in fiscal year 2000; Analysis of maximum likelihood estimates: Estimate: 0.0121; Analysis of maximum likelihood estimates: Standard error: 0.1814; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.9468; Odds ratio estimates: Point estimate: 1.012. Parameter: Endorsed in fiscal year 2001; Analysis of maximum likelihood estimates: Estimate: 0.1217; Analysis of maximum likelihood estimates: Standard error: 0.1696; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.473; Odds ratio estimates: Point estimate: 1.129. 
Parameter: ARM; Analysis of maximum likelihood estimates: Estimate: - 0.7761; Analysis of maximum likelihood estimates: Standard error: 0.345; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0245; Odds ratio estimates: Point estimate: 0.46. Parameter: Underserved area; Analysis of maximum likelihood estimates: Estimate: 0.0268; Analysis of maximum likelihood estimates: Standard error: 0.1304; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.837; Odds ratio estimates: Point estimate: 1.027. Parameter: Condominium; Analysis of maximum likelihood estimates: Estimate: -0.3245; Analysis of maximum likelihood estimates: Standard error: 0.3088; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.2933; Odds ratio estimates: Point estimate: 0.723. Parameter: First-time homebuyer; Analysis of maximum likelihood estimates: Estimate: -0.3168; Analysis of maximum likelihood estimates: Standard error: 0.1663; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0567; Odds ratio estimates: Point estimate: 0.728. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.5664; Analysis of maximum likelihood estimates: Standard error: 0.1924; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0032; Odds ratio estimates: Point estimate: 1.762. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.3995; Analysis of maximum likelihood estimates: Standard error: 0.148; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.007; Odds ratio estimates: Point estimate: 1.491. Parameter: House price appreciation rate; Analysis of maximum likelihood estimates: Estimate: -1.6943; Analysis of maximum likelihood estimates: Standard error: 1.0614; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1104; Odds ratio estimates: Point estimate: 0.184. Parameter: First 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.448; Analysis of maximum likelihood estimates: Standard error: 0.0545; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.565. Parameter: Next 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.1178; Analysis of maximum likelihood estimates: Standard error: 0.0554; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0333; Odds ratio estimates: Point estimate: 1.125. Parameter: Following quarters; Analysis of maximum likelihood estimates: Estimate: 0.0879; Analysis of maximum likelihood estimates: Standard error: 0.0543; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1052; Odds ratio estimates: Point estimate: 1.092. Source: GAO. [End of table] Table 19: Claim Regression Results--National Sample, Model Based on TOTAL Mortgage Scorecard Variables: Parameter: Intercept; Analysis of maximum likelihood estimates: Estimate: 3.0763; Analysis of maximum likelihood estimates: Standard error: 5.0183; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.5399; Odds ratio estimates: Point estimate: . Parameter: LTV ratio; Analysis of maximum likelihood estimates: Estimate: -0.0413; Analysis of maximum likelihood estimates: Standard error: 0.0488; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.398; Odds ratio estimates: Point estimate: 0.96. 
Parameter: 15-year mortgage; Analysis of maximum likelihood estimates: Estimate: 0.5144; Analysis of maximum likelihood estimates: Standard error: 0.3667; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1607; Odds ratio estimates: Point estimate: 1.673. Parameter: FICO score; Analysis of maximum likelihood estimates: Estimate: -0.00929; Analysis of maximum likelihood estimates: Standard error: 0.00116; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 0.991. Parameter: No FICO score; Analysis of maximum likelihood estimates: Estimate: 0.7393; Analysis of maximum likelihood estimates: Standard error: 0.1927; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 2.094. Parameter: Borrower reserves; Analysis of maximum likelihood estimates: Estimate: -0.1276; Analysis of maximum likelihood estimates: Standard error: 0.1552; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.4108; Odds ratio estimates: Point estimate: 0.88. Parameter: Front-end ratio; Analysis of maximum likelihood estimates: Estimate: 1.9601; Analysis of maximum likelihood estimates: Standard error: 0.8969; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0288; Odds ratio estimates: Point estimate: 7.1. Parameter: Endorsed in fiscal year 2000; Analysis of maximum likelihood estimates: Estimate: -0.0442; Analysis of maximum likelihood estimates: Standard error: 0.181; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.807; Odds ratio estimates: Point estimate: 0.957. Parameter: Endorsed in fiscal year 2001; Analysis of maximum likelihood estimates: Estimate: 0.1316; Analysis of maximum likelihood estimates: Standard error: 0.1698; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.4384; Odds ratio estimates: Point estimate: 1.141. Parameter: Seller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.5012; Analysis of maximum likelihood estimates: Standard error: 0.1904; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0085; Odds ratio estimates: Point estimate: 1.651. Parameter: Nonseller-funded down payment assistance; Analysis of maximum likelihood estimates: Estimate: 0.3786; Analysis of maximum likelihood estimates: Standard error: 0.1475; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0102; Odds ratio estimates: Point estimate: 1.46. Parameter: House price appreciation rate; Analysis of maximum likelihood estimates: Estimate: -1.8949; Analysis of maximum likelihood estimates: Standard error: 1.0561; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.0728; Odds ratio estimates: Point estimate: 0.15. Parameter: First 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.4486; Analysis of maximum likelihood estimates: Standard error: 0.0545; Analysis of maximum likelihood estimates: Pr > ChiSq: <.0001; Odds ratio estimates: Point estimate: 1.566. Parameter: Next 6 quarters; Analysis of maximum likelihood estimates: Estimate: 0.1189; Analysis of maximum likelihood estimates: Standard error: 0.0554; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.032; Odds ratio estimates: Point estimate: 1.126. Parameter: Following quarters; Analysis of maximum likelihood estimates: Estimate: 0.0863; Analysis of maximum likelihood estimates: Standard error: 0.0543; Analysis of maximum likelihood estimates: Pr > ChiSq: 0.1118; Odds ratio estimates: Point estimate: 1.09. Source: GAO. 
Table 20: Claim Regression Results--National Sample, Augmented GAO Actuarial Model:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -2.2855; 0.8291; 0.0058; [Empty].
Constructed risk; 0.2665; 0.0244; <.0001; 1.305.
FICO score; -0.0088; 0.00116; <.0001; 0.991.
No FICO score; 0.7538; 0.1937; <.0001; 2.125.
Borrower reserves; -0.1405; 0.1559; 0.3674; 0.869.
Front-end ratio; 1.8786; 0.8691; 0.0307; 6.544.
Underserved area; -0.0771; 0.1308; 0.5559; 0.926.
Condominium; -0.2178; 0.2986; 0.4659; 0.804.
First-time homebuyer; -0.2937; 0.1662; 0.0771; 0.745.
Seller-funded down payment assistance; 0.5947; 0.1887; 0.0016; 1.812.
Nonseller-funded down payment assistance; 0.3641; 0.1483; 0.0141; 1.439.

Source: GAO.

[End of table]

Table 21: Claim Regression Results--National Sample, GAO Actuarial Model:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -2.6245; 0.8108; 0.0012; [Empty].
Constructed risk; 0.2656; 0.0242; <.0001; 1.304.
FICO score; -0.00861; 0.00115; <.0001; 0.991.
No FICO score; 0.7174; 0.1917; 0.0002; 2.049.
Borrower reserves; -0.1575; 0.1555; 0.3111; 0.854.
Front-end ratio; 1.7053; 0.8705; 0.0501; 5.503.
Seller-funded down payment assistance; 0.5894; 0.1878; 0.0017; 1.803.
Nonseller-funded down payment assistance; 0.3443; 0.1477; 0.0197; 1.411.

Source: GAO.

[End of table]

Table 22: Claim Regression Results--MSA Sample, Model Based on Augmented TOTAL Mortgage Scorecard Variables:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -20.5482; 9.2777; 0.0268; [Empty].
LTV ratio; 0.0309; 0.0906; 0.7334; 1.031.
15-year mortgage; 0.5153; 0.6032; 0.393; 1.674.
FICO score; -0.00643; 0.00108; <.0001; 0.994.
No FICO score; 0.6042; 0.1723; 0.0005; 1.83.
Borrower reserves; 0.179; 0.1498; 0.2322; 1.196.
Front-end ratio; 1.3785; 0.9056; 0.128; 3.969.
Endorsed in fiscal year 2000; -0.6808; 0.1897; 0.0003; 0.506.
Endorsed in fiscal year 2001; -0.1985; 0.1533; 0.1954; 0.82.
ARM; -0.3282; 0.1857; 0.0771; 0.72.
Underserved area; 0.1533; 0.1218; 0.2083; 1.166.
Condominium; 0.0761; 0.2989; 0.799; 1.079.
First-time homebuyer; -0.0626; 0.1733; 0.7179; 0.939.
Seller-funded down payment assistance; 0.9768; 0.1576; <.0001; 2.656.
Nonseller-funded down payment assistance; 0.3724; 0.1864; 0.0457; 1.451.
Atlanta MSA; -0.5987; 0.1764; 0.0007; 0.55.
Salt Lake City MSA; 0.7624; 0.1591; <.0001; 2.143.
House price appreciation rate; 13.3648; 2.8741; <.0001; >999.999.
First 6 quarters; 0.4356; 0.0503; <.0001; 1.546.
Next 6 quarters; 0.1633; 0.0513; 0.0015; 1.177.
Following quarters; -0.00873; 0.0535; 0.8704; 0.991.

Source: GAO.

[End of table]

Table 23: Claim Regression Results--MSA Sample, Model Based on TOTAL Mortgage Scorecard Variables:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -18.9754; 7.322; 0.0096; [Empty].
LTV ratio; 0.018; 0.0691; 0.7941; 1.018.
15-year mortgage; 0.5857; 0.6014; 0.3301; 1.796.
FICO score; -0.00645; 0.00108; <.0001; 0.994.
No FICO score; 0.6401; 0.1701; 0.0002; 1.897.
Borrower reserves; 0.191; 0.1499; 0.2027; 1.21.
Front-end ratio; 1.3525; 0.9035; 0.1344; 3.867.
Endorsed in fiscal year 2000; -0.6919; 0.1865; 0.0002; 0.501.
Endorsed in fiscal year 2001; -0.1672; 0.1517; 0.2703; 0.846.
Seller-funded down payment assistance; 0.9732; 0.1569; <.0001; 2.647.
Nonseller-funded down payment assistance; 0.3778; 0.1863; 0.0426; 1.459.
Atlanta MSA; -0.5566; 0.1748; 0.0014; 0.573.
Salt Lake City MSA; 0.765; 0.1571; <.0001; 2.149.
House price appreciation rate; 13.0362; 2.8596; <.0001; >999.999.
First 6 quarters; 0.4345; 0.0503; <.0001; 1.544.
Next 6 quarters; 0.1621; 0.0513; 0.0016; 1.176.
Following quarters; -0.00953; 0.0536; 0.8589; 0.991.

Source: GAO.

[End of table]

Table 24: Claim Regression Results--MSA Sample, Augmented GAO Actuarial Model:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -4.3382; 0.7653; <.0001; [Empty].
Constructed risk; 0.3949; 0.0292; <.0001; 1.484.
FICO score; -0.00628; 0.00108; <.0001; 0.994.
No FICO score; 0.5396; 0.1727; 0.0018; 1.715.
Borrower reserves; 0.1587; 0.1491; 0.2872; 1.172.
Front-end ratio; 1.1872; 0.9057; 0.1899; 3.278.
Underserved area; 0.0986; 0.1218; 0.4181; 1.104.
Condominium; 0.1499; 0.2587; 0.5625; 1.162.
First-time homebuyer; -0.0842; 0.1732; 0.6267; 0.919.
Seller-funded down payment assistance; 0.8555; 0.154; <.0001; 2.352.
Nonseller-funded down payment assistance; 0.2174; 0.1885; 0.2488; 1.243.
Atlanta MSA; -0.4073; 0.1515; 0.0072; 0.665.
Salt Lake City MSA; 0.3428; 0.1506; 0.0229; 1.409.

Source: GAO.

[End of table]

Table 25: Claim Regression Results--MSA Sample, GAO Actuarial Model:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -4.3714; 0.7543; <.0001; [Empty].
Constructed risk; 0.3956; 0.0291; <.0001; 1.485.
FICO score; -0.00625; 0.00108; <.0001; 0.994.
No FICO score; 0.5421; 0.1696; 0.0014; 1.72.
Borrower reserves; 0.1634; 0.1486; 0.2716; 1.177.
Front-end ratio; 1.12; 0.9029; 0.2148; 3.065.
Seller-funded down payment assistance; 0.8486; 0.153; <.0001; 2.336.
Nonseller-funded down payment assistance; 0.2154; 0.1879; 0.2518; 1.24.
Atlanta MSA; -0.3964; 0.1512; 0.0087; 0.673.
Salt Lake City MSA; 0.3626; 0.1488; 0.0148; 1.437.

Source: GAO.

[End of table]

Prepayment Model:

Modeling conditional claim rates has a substantial advantage: it allows time-varying covariates, such as post-origination increases in house prices, to be incorporated into the regression model. But the use of conditional claim rates also poses one possible disadvantage. If certain borrowers, such as recipients of seller-funded assistance, had high rates of prepayment, their conditional claim rates could be high not because they had higher credit risk but because a small number of loans survived and eventually went to claim. That is, the hazard rate would be large because the denominator was small, not because the numerator was large. To examine this possibility, we estimated the competing risk of loans terminating in prepayment, using a logistic regression that predicted the quarterly conditional probability of prepayment. The results are presented in tables 26 and 27. The regressions included two explanatory variables representing the incentive to refinance--the ratio of the book value of the mortgage to the value of the mortgage payments evaluated at currently prevailing interest rates, split into two segments: one segment represented book value exceeding market value, and the other represented book value less than market value. The regressions also included variables that measured the passage of time, the constructed risk variable, credit scores, and indicators for down payment assistance. Results were as expected. Loans with an interest-rate-driven incentive to refinance had significantly higher rates of prepayment. High-risk loans and those with low credit scores prepaid more slowly. We also found that loans with seller-funded assistance prepaid more slowly than comparable loans without assistance, demonstrating that our estimate of the effect of assistance on loan performance was not inflated by rapid prepayment in this group of loans.
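To make the quarterly conditional (discrete-time hazard) setup concrete, the minimal sketch below fits a logistic regression to a loan-quarter panel, with one row per loan per quarter the loan remained active, and reports exponentiated coefficients as odds ratios. It illustrates the general approach rather than the estimation code used for tables 26 and 27; the file name and column names (prepaid, equity_high, equity_low, constructed_risk, fico, no_fico, arm, condo, underserved, first_time, seller_dpa, nonseller_dpa, first6, next6, later) are assumed placeholders.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical loan-quarter panel: one row per loan per quarter survived, with
# 'prepaid' equal to 1 if the loan prepaid in that quarter and 0 otherwise.
# A claim model would use the same structure with a claim indicator instead.
loan_quarters = pd.read_csv("loan_quarters.csv")  # placeholder input file

prepay_model = smf.logit(
    "prepaid ~ equity_high + equity_low + constructed_risk + fico + no_fico"
    " + arm + condo + underserved + first_time + seller_dpa + nonseller_dpa"
    " + first6 + next6 + later",
    data=loan_quarters,
).fit()

print(prepay_model.summary())       # coefficient estimates, standard errors, p-values
print(np.exp(prepay_model.params))  # odds ratios, as in the "point estimate" column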
Table 26: Prepayment Regression Results--Quarterly Conditional Probability of Prepayment, National Sample:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -15.0802; 0.7199; <.0001; [Empty].
Relatively high equity; 3.2059; 0.1639; <.0001; 24.679.
Relatively low equity; 5.5491; 0.6957; <.0001; 256.998.
Constructed risk; -0.0232; 0.0137; 0.0911; 0.977.
FICO score; 0.00374; 0.000293; <.0001; 1.004.
No FICO score; -0.2327; 0.0691; 0.0008; 0.792.
ARM; 0.6987; 0.0853; <.0001; 2.011.
Condominium; 0.2131; 0.0611; 0.0005; 1.238.
Underserved area; -0.1744; 0.0365; <.0001; 0.84.
First-time homebuyer; -0.1203; 0.0445; 0.0068; 0.887.
Seller-funded down payment assistance; -0.2284; 0.0641; 0.0004; 0.796.
Nonseller-funded down payment assistance; -0.0562; 0.0412; 0.173; 0.945.
First 6 quarters; 0.2094; 0.0146; <.0001; 1.233.
Next 6 quarters; -0.0471; 0.0236; 0.0461; 0.954.
Following quarters; -0.049; 0.0243; 0.044; 0.952.

Source: GAO.

[End of table]

Table 27: Prepayment Regression Results--Quarterly Conditional Probability of Prepayment, MSA Sample:

Parameter; Estimate; Standard error; Pr > ChiSq; Odds ratio (point estimate).
Intercept; -16.4562; 0.8684; <.0001; [Empty].
Relatively high equity; 2.7283; 0.2334; <.0001; 15.307.
Relatively low equity; 6.6288; 0.8161; <.0001; 756.558.
Constructed risk; 0.0174; 0.0228; 0.4469; 1.018.
FICO score; 0.00527; 0.000391; <.0001; 1.005.
No FICO score; -0.3271; 0.0882; 0.0002; 0.721.
ARM; 0.3858; 0.1059; 0.0003; 1.471.
Condominium; -0.1332; 0.0929; 0.1514; 0.875.
Underserved area; -0.2159; 0.0486; <.0001; 0.806.
First-time homebuyer; 0.1096; 0.0615; 0.0745; 1.116.
Seller-funded down payment assistance; -0.2208; 0.0556; <.0001; 0.802.
Nonseller-funded down payment assistance; 0.064; 0.0579; 0.2689; 1.066.
First 6 quarters; 0.1487; 0.0193; <.0001; 1.16.
Next 6 quarters; -0.0359; 0.0366; 0.3256; 0.965.
Following quarters; -0.1246; 0.0404; 0.0021; 0.883.

Source: GAO.

[End of table]

Loss Given Default Model:

We also examined the severity of the loss for loans that resulted in a claim. The results of this analysis are limited because FHA's Single-Family Data Warehouse had profit or loss amounts for only 389 loans.[Footnote 92] We ran a regression to predict the loss rate, defined as the profit or loss amount divided by the original mortgage amount. Explanatory variables included the initial LTV ratio, credit score, initial interest rate, original mortgage amount, house price appreciation since the time of origination, and indicators for whether the loan had seller-funded nonprofit down payment assistance, assistance from another source, or no assistance. The results of this analysis are in tables 28 and 29. In the national sample, loans with seller-funded nonprofit assistance had loss rates that were about 5 percentage points worse than those for loans with no assistance. Loans with assistance from other sources had loss rates about 2 percentage points better. Neither result was statistically significant. In the MSA sample, loans with seller-funded nonprofit assistance also had loss rates about 5 percentage points worse, while loans with assistance from other sources had loss rates about 7 percentage points worse. Both results were significant in a one-tailed test.
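As an illustration of the loss-rate regression just described, the minimal sketch below assumes a hypothetical claim-level data set; the file name and column names (profit_or_loss, original_mortgage_amount, ltv, fico, no_fico, initial_interest_rate, house_price_appreciation, seller_dpa, nonseller_dpa) are placeholders rather than actual Single-Family Data Warehouse field names.

import pandas as pd
import statsmodels.formula.api as smf

claims = pd.read_csv("claims_with_loss.csv")  # placeholder input file

# Loss rate as defined above: profit or loss amount divided by the original mortgage amount.
claims["loss_rate"] = claims["profit_or_loss"] / claims["original_mortgage_amount"]

loss_model = smf.ols(
    "loss_rate ~ ltv + seller_dpa + nonseller_dpa + fico + no_fico"
    " + house_price_appreciation + initial_interest_rate + original_mortgage_amount",
    data=claims,
).fit()

print(loss_model.summary())  # parameter estimates, t values, Pr > |t|, and R-squared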
Table 28: Loss Regression Results--Loss Rate Given Default, National Sample:

Variable; Parameter estimate; t Value; Pr > |t|.
Intercept; 0.41124; 0.22; 0.8259.
LTV ratio; -0.01548; -0.79; 0.4308.
Seller-funded down payment assistance; -0.04978; -1.08; 0.2828.
Nonseller-funded down payment assistance; 0.02139; 0.61; 0.5417.
FICO score; 0.00023796; 0.82; 0.4147.
No FICO score; -0.01532; -0.32; 0.7456.
House price appreciation rate; 0.66013; 2.17; 0.0312.
Initial interest rate; -0.03729; [Empty]; 0.0583.
Original mortgage amount; 0.00000218; 5.11; <.0001.
R-squared = 0.1897.

Source: GAO.

Note: N=184.

[End of table]

Table 29: Loss Regression Results--Loss Rate Given Default, MSA Sample:

Variable; Parameter estimate; t Value; Pr > |t|.
Intercept; 0.020059; 0.01; 0.9911.
LTV ratio; -0.0251; -1.32; 0.1895.
Seller-funded down payment assistance; -0.05107; -1.78; 0.0769.
Nonseller-funded down payment assistance; -0.0698; -2.05; 0.0416.
FICO score; 0.00027289; 1.11; 0.2703.
No FICO score; 0.02254; 0.72; 0.4743.
House price appreciation rate; 1.54727; 3.41; 0.0008.
Initial interest rate; 0.0048; 0.35; 0.7261.
Original mortgage amount; 0.00000261; 5.44; <.0001.
R-squared = 0.182.

Source: GAO.

Note: N=205.

[End of table]

[End of section]

Appendix IV: Comments from the Department of Housing and Urban Development:

U.S. Department of Housing and Urban Development:
Washington, DC 20410-8000:
Assistant Secretary for Housing-Federal Housing Commissioner:

October 25, 2005:

Mr. William B. Shear:
Financial Markets and Community Investment:
United States Government Accountability Office:
441 G Street, NW:
Washington, D.C. 20548:

Dear Mr. Shear:

Thank you for permitting FHA to respond to the GAO Draft Report 06-24, "MORTGAGE FINANCING: Additional Action Needed to Manage Risks of FHA-insured Loans with Down Payment Assistance." As you know, FHA has been examining these types of down payment assistance programs for the past several years. The report confirms FHA's own analysis of loan performance and the findings of an independent contractor hired by FHA to evaluate how seller-funded gift programs operate. The GAO report provides additional analysis and reiterates that borrowers receiving seller-funded down payment assistance pay more for their homes than homebuyers who receive no such assistance or assistance from down payment programs funded without seller involvement.

Borrowers who rely on seller-funded down payment assistance are representative of the population that FHA was established to serve, families who are otherwise underserved by the private sector. Because of this fact, FHA has determined that additional requirements or restrictions that would prevent these borrowers from obtaining FHA financing would not be beneficial, leaving this population with financing options that are more costly and riskier than FHA. Therefore, FHA has determined that charging a higher premium on these types of loans would be a more palatable alternative, compensating FHA for the additional risk while still permitting these borrowers the advantage of a more affordable, less risky loan.

FHA has also determined that a Zero Down program would better serve borrowers who have little savings for a down payment but who have steady incomes and acceptable credit. The proposed Zero Down program was designed to address the concerns that GAO raises in the report--that buyers using seller-funded gifts are paying too much for their homes and putting themselves in a risky position, as evidenced by the historical loan performance--and to ensure that FHA was keeping pace with the rest of the mortgage market, where 100% financing products have become increasingly common.
That said, although the report reaffirms FHA's own findings, the agency is disappointed that the recommendations do not acknowledge that a Zero Down program would provide FHA with a better way to serve families in need of down payment assistance. FHA represents a better, safer financing alternative for many families with blemished credit. Providing a new product would serve these families well by offering consumer protections to ensure that these families would not pay more than they should for their homes or their financing, and that these families would have the benefits of loss mitigation to help them stay in their homes should they experience any future financial hardship. FHA's responses to the individual recommendations are as follows:

GAO Recommendation: To provide FHA with data that would permit it to identify whether down payment assistance is from a seller-funded down payment assistance provider, modify FHA's "gift letter source" categories to include "nonprofit seller-funded" and "nonprofit non-seller-funded," and require lenders to accurately identify and report this information when submitting loans to FHA.

FHA Response: FHA agrees with this recommendation and will modify the systems to collect this additional information.

GAO Recommendation: To more fully consider the risk posed by down payment assistance when underwriting loans, include the presence and source of down payment assistance as a loan variable in FHA's TOTAL Mortgage Scorecard.

FHA Response: Consistent with past practice, HUD will consider and incorporate into TOTAL all appropriate factors, including the presence and source of down payment assistance, that can be shown, with historical data, to be empirically relevant for assessing borrower credit risk with respect to loan performance.

GAO Recommendation: To ensure that FHA has an ongoing understanding of the impact that down payment assistance has on loan performance, implement routine and targeted performance monitoring of loans with down payment assistance, including analyses that consider the source of the assistance.

FHA Response: FHA agrees and believes that it already performs monitoring of portfolios of such mortgages based on the information residing in its system of records. Obviously, FHA's concern, based on loan performance data, resulted in seeking the services of a contractor to analyze and explore these down payment assistance programs.

GAO Recommendation: To improve the forecasting ability of the loan performance models used in the annual review of actuarial soundness, consider the presence and source of down payment assistance.

FHA Response: FHA incorporated the source of down payment assistance into its FY 2005 Actuarial Review of the Mutual Mortgage Insurance Fund; this variable has proved to have considerable explanatory power. FHA informed GAO during its interviews about down payment assistance that it planned to incorporate this variable.

GAO Recommendation: To ensure appraisers have the information necessary to establish the market value of the property, require lenders to inform appraisers about the presence of down payment assistance from a seller-funded source.

FHA Response: Lenders are required to inform appraisers about all seller concessions, including down payment assistance. Appraisers are aware of seller-funded down payment assistance providers in their markets, as evidenced by the findings of the Concentrance study referenced several times in the GAO report.
Regardless, FHA will consider imposing the additional requirement that the lender inform the appraiser when down payment assistance is provided by a nonprofit that relies on contributions from the seller.

GAO Recommendation: Because down payment assistance provided by seller-funded entities is, in effect, a seller inducement, revise FHA standards to treat assistance from a seller-funded nonprofit as a seller contribution, and therefore subject to the 6 percent limit on seller contributions and the prohibition against using seller contributions to meet the 3 percent borrower contribution requirement.

FHA Response: HUD's Office of General Counsel has advised that the timing of the payments is a key point in determining whether there is a seller contribution that is an inducement to purchase. If a gift is made from a nonprofit entity (either directly or through an entity such as the closing agent), from the nonprofit's own funds, prior to the completion of the closing, the gift becomes the homebuyer's property so the buyer can make the three percent required down payment. After completion of the closing, a seller makes a contribution (perhaps through the closing agent as well) from the gross sales proceeds to the nonprofit entity. The donation is commingled with other nonprofit funds that later become a source of donations to buyers other than the buyer who has just closed the purchase of the seller's property. Because the buyer has not received funds from the nonprofit that can be traced to the seller's contribution, there has not been an inducement to purchase provided by the seller.

Thank you again for the opportunity to review the GAO report. Consistent with the spirit of your report and its recommendations, HUD will continue to take all steps needed for responsible financial management of its down payment assistance programs, while ensuring that FHA programs effectively serve families who are otherwise underserved by the private sector.

Signed by:

D. Montgomery:
Assistant Secretary for Housing-Federal Housing Commissioner:

[End of section]

Appendix V: GAO Contact and Staff Acknowledgments:

GAO Contact:

William B. Shear (202) 512-8678:

Staff Acknowledgments:

In addition to the individual named above, Mathew Scirè, Assistant Director; Anne Cangi; Emily Chalmers; Susan Etzel; Austin Kelly; John McGrail; Marc Molino; Heddi Nieuwsma; and Mitchell Rachlis made key contributions to this report.

[1] GAO, Mortgage Financing: FHA's Fund Has Grown, but Options for Drawing on the Fund Have Uncertain Outcomes, GAO-01-460 (Washington, D.C.: Feb. 28, 2001); GAO, Mortgage Financing: FHA Has Achieved Its Home Mortgage Capital Reserve Target, GAO/RCED-96-50 (Washington, D.C.: Apr. 12, 1996); Dennis R. Capozza, Dick Kazarian, and Thomas A. Thomson, "Mortgage Default in Local Markets," Real Estate Economics, vol. 25, no. 4 (Winter 1997).
[2] HUD Office of Inspector General, Final Report of National Audit, Down Payment Assistance Programs, Office of Insured Single Family Housing, 2000-SE-121-0001 (Seattle, Wash.: Mar. 31, 2000); HUD Office of Inspector General, Follow Up of Down Payment Assistance Programs Operated by Private Nonprofit Entities, 2002-SE-0001 (Seattle, Wash.: Sept. 25, 2002).
[3] Purchase money mortgage loans are used for the purchase of a home rather than to refinance an existing mortgage. In this report, we analyze purchase money mortgage loans.
[4] Automated Valuation Model (AVM) is a broad term used to describe a range of computerized econometric models that are designed to provide estimates of residential real estate property values. AVMs may use regression, adaptive estimation, neural networking, expert reasoning, and artificial intelligence to estimate the market value of a residence. We assessed the reliability of the HUD and AVM data by discussing the data with knowledgeable HUD officials and staff from the contractor that provided the AVM data and, when possible, comparing the data with similar publicly available data. We determined that the data were sufficiently reliable for our analyses.
[5] All years are fiscal years unless otherwise indicated.
[6] GAO, Internal Control Management and Evaluation Tool, GAO-01-1008G (Washington, D.C. August 2001).
[7] Seller-funded down payment assistance programs are supported, in part, by financial contributions and service fees collected by nonprofit organizations from participating property sellers.
[8] Concentrance Consulting Group, An Examination of Downpayment Gift Programs Administered by Nonprofit Organizations, prepared for the U.S. Department of Housing and Urban Development (Washington, D.C.: March 2005).
[9] We define effective LTV ratio to equal the loan amount divided by the true market value of the home that would exist without the presence of down payment assistance.
[10] See GAO, Mortgage Financing: FHA's $7 Billion Reestimate Reflects Higher Claims and Changing Loan Performance Estimates, GAO-05-875 (Washington, D.C. Sept. 2, 2005).
[11] 12 U.S.C. Section 1711(g).
[12] Fannie Mae and Freddie Mac are government-sponsored enterprises (GSE) chartered by Congress that purchase mortgages from lenders across the country, financing their purchases by borrowing or issuing mortgage-backed securities that are sold to investors.
[13] Some mortgage industry participants consider secondary financing a type of down payment assistance. Secondary financing may take the form of an additional mortgage or secured loan that pays for a down payment, closing costs, or both. For the purposes of this report, we do not include secondary financing as a type of down payment assistance because the funds are not a gift.
[14] HUD Office of the General Counsel, April 7, 1998; Subject: Nehemiah Homeownership 2000 Program--Downpayment Assistance.
[15] A Taxpayer Identification Number is an identification number used by the IRS in the administration of tax laws.
[16] 12 U.S.C. 1709(b)(2)(B)(ii).
[17] See GAO, Mortgage Financing: Actions Needed to Help FHA Manage Risks from New Mortgage Products, GAO-05-194 (Washington, D.C. Feb. 11, 2005).
[18] Susan Wharton Gates, Vanessa Gail Perry, and Peter Zorn, "Automated Underwriting in Mortgage Lending: Good News for the Underserved," Housing Policy Debate, vol. 13, no. 2 (2002).
[19] The first HUD OIG study evaluated a sample of Nehemiah loans in four cities that were originated between August 1997 and May 1999; the OIG evaluated the performance of these loans as of October 25, 1999. For the second study, the HUD OIG generated a random sample of FHA-insured loans originated in October 1997 through March 2001 and reevaluated the performance of the sample of FHA-insured loans in the first study as of February 2002.
[20] This study analyzed loans endorsed in October 1997 through September 2001 and evaluated their performance as of May 15, 2003.
[21] The data sample we relied on included only FHA-insured, single-family purchase money loans with an LTV ratio greater than 95 percent. Loans with an LTV ratio greater than 95 percent account for almost 90 percent of FHA's total portfolio.
[22] Loans insured by FHA's 203(b) program, its main single-family program, and its 234(c) condominium program. Small specialized programs, such as 203(k) rehabilitation and 221(d) subsidized mortgages, were not included. For 2000, 2001, and 2002, our analysis is based on a representative sample of FHA-insured purchase money loans with an LTV ratio greater than 95 percent. For 2003, 2004, and 2005, our analysis is based on the total universe of FHA-insured purchase money loans with an LTV ratio greater than 95 percent. HUD data do not differentiate between nonprofit down payment assistance providers that receive funding from sellers and those that do not. See the note to figure 2 for details on the proportions of loans in the samples with seller-funded assistance.
[23] Ninety percent of assistance from seller-funded nonprofit organizations was between 2.8 and 5.5 percent of the sales price; however, 90 percent of assistance from other sources was between 1.0 percent and 8.8 percent of the sales price.
[24] We measured house price appreciation using data from Global Insight, Inc., for the end of the fourth quarter of 2003 to the end of the fourth quarter of 2004.
[25] HUD, Mortgage Credit Analysis for Mortgage Insurance, One to Four Family Properties, Handbook 4155.1 Rev-5, Chapter 2, Section 3, "Borrower's Cash Investment in the Property" (October 2003).
[26] HUD, Handbook 4155.1 Rev-5, Chapter 1, Section 2, "Maximum Mortgage Amounts" (October 2003).
[27] We drew the sample of loans for this analysis from a national sample of FHA-insured loans developed through a file review study funded by HUD and conducted by the Concentrance Consulting Group. The sample consisted of just over 5,000 purchase money loans endorsed in 2000, 2001, and 2002 with LTV ratios greater than 95 percent.
[28] The sample of loans for this analysis is a stratified random sample of 2,000 FHA-insured purchase money loans with first amortization dates in April 2005, extracted from FHA's Single-Family Data Warehouse.
[29] Concentrance Consulting Group, An Examination of Downpayment Gift Programs Administered by Nonprofit Organizations, prepared for the U.S. Department of Housing and Urban Development (Washington, D.C.: March 2005).
[30] HUD, Mortgagee Letter 2005-02, Seller Concessions and Verification of Sales, Jan. 4, 2005.
[31] HUD issues Mortgagee Letters to inform mortgage industry participants of changes in FHA's operations, policies, and procedures.
[32] HUD, Mortgagee Letter 2005-02, Seller Concessions and Verification of Sales, Jan. 4, 2005.
[33] HUD, Mortgagee Letter 2005-06, Lender Accountability for Appraisals, Jan. 28, 2005.
[34] Concentrance Consulting Group, An Examination of Downpayment Gift Programs Administered by Nonprofit Organizations, prepared for the U.S. Department of Housing and Urban Development (Washington, D.C.: March 2005).
[35] The data (current as of June 30, 2005) consisted of loans insured by FHA's 203(b) program, its main single-family program, and its 234(c) condominium program. Small specialized programs, such as 203(k) rehabilitation and 221(d) subsidized mortgages, were not in the sample. The national sample included all 50 states and the District of Columbia, but not U.S. territories.
The Metropolitan Statistical Area (MSA) sample consisted of loans from three MSAs with high rates of down payment assistance (Atlanta, Indianapolis, and Salt Lake City). Performance is measured by claim rate, 90-day delinquency rate, and rate of loss given default.
[36] HUD data do not differentiate between nonprofit down payment assistance providers that receive funding from sellers and those that do not. The group of seller-funded nonprofit organizations includes only nonprofit organizations we could verify as requiring funds from sellers as a condition of providing assistance. All other nonprofits were included in the nonseller-funded (other sources) group. In the national and MSA samples combined, 1,655 loans had at least one gift letter source indicating a nonprofit. Of those, 1,548 (93.5 percent) were seller-funded, 29 (1.8 percent) were not seller-funded, 8 (0.5 percent) were from a nonprofit with both seller-funded and nonseller-funded programs, and 70 (4.2 percent) were from nonprofits with a status that we could not identify.
[37] We built four econometric models with differing variables as predictors of the conditional probability of a loan becoming 90 days delinquent or resulting in a claim. For the analysis presented here, we used a model based on variables used in FHA's TOTAL Mortgage Scorecard, augmented with other variables. The variables included in the model based on the augmented TOTAL Mortgage Scorecard variables were: LTV (the initial loan-to-value ratio), FICO score (and an indicator variable for borrowers without a FICO score), borrower reserves, front-end ratio (housing payment to income ratio), year of endorsement, mortgage term (15 or 30 years), mortgage type (adjustable or fixed-rate), underserved area, condominium and first-time homebuyer indicators, house price appreciation measured at the state level, variables reflecting the passage of time, and variables indicating the presence and source of down payment assistance.
[38] The differences between seller-funded down payment assistance and no down payment assistance are statistically significant with a one-tailed test at a level of 1 percent.
[39] The differences between nonseller-funded assistance and no assistance in the national sample are statistically significant for claims and delinquencies at 1 percent and 5 percent, respectively, in one-tailed tests.
[40] The differences between nonseller-funded assistance and no assistance in the MSA sample are statistically significant at 5 percent in one-tailed tests.
[41] Brent W. Ambrose and Charles A. Capone, "The Hazard Rates of First and Second Defaults," Journal of Real Estate Finance and Economics, vol. 20, no. 3 (May 2000), 275-93; Michelle A. Danis and Anthony Pennington-Cross, "A Dynamic Look at Subprime Loan Performance," Federal Reserve Bank of St. Louis Working Paper 2005-029A (May 2005), available at http://research.stlouisfed.org/wp/2005/2005-029.pdf.
[42] Our claim probability findings for nonseller-funded down payment assistance were similar with the national and MSA samples.
[43] The other explanatory variables were the LTV ratio at the time the loan was originated, the interest rate on the mortgage at the time the loan was originated, the original mortgage balance, the borrower's credit score, and the estimated appreciation in house prices since the time the loan was originated, along with indicators for a gift from a seller-funded nonprofit or a gift from another source.
[44] GAO-01-460.
[45] The poorer performance of loans with down payment assistance from nonseller-funded sources relative to loans without assistance may be related to factors not captured by our regression models (see app. III).
[46] GAO-01-1008G.
[47] Although FHA's TOTAL Mortgage Scorecard does not directly consider the presence of down payment assistance, it is possible that a loan with down payment assistance "looks better" as compared with a loan without assistance, because (1) the effective LTV ratio is higher than the LTV ratio entered into the TOTAL Mortgage Scorecard, because the dollar value used for the property value may be higher for transactions utilizing seller-funded down payment assistance, and (2) the borrower reserves are higher (because the borrower does not have to use their own funds to make the down payment)--both of which would raise the borrower's score.
[48] HUD's Office of Community Planning and Development administers this grant program, which provides down payment assistance funds to homebuyers. Initially, HUD awards funds to state and local governments that are participating jurisdictions. These jurisdictions may choose to designate nonprofit organizations to administer the funds, but not seller-funded nonprofits.
[49] Proposed Rule, The U.S. Department of Housing and Urban Development, 24 C.F.R. Part 203, 64 F.R. 49956 (Sept. 14, 1999).
[50] Other inducements can include repair allowances, moving costs, and items such as cars, furniture, and televisions.
[51] Our review of FHA loan-level data found that a small percentage (less than 1 percent) of loans with down payment assistance from nonprofits did not have a documented Taxpayer Identification Number, but instead included the number "999999999." Additionally, we found that at least 1.97 percent of the loans had Taxpayer Identification Numbers that were not associated with a tax-exempt organization. The loan-level data analyzed include FHA single-family mortgages originated from October 2003 through April 2005.
[52] Concentrance Consulting Group, Audit of Loans with Downpayment Assistance, prepared for the U.S. Department of Housing and Urban Development (Washington, D.C. Feb. 6, 2004).
[53] FHA tracks the presence and source of down payment assistance in an information system (CHUMS).
[54] Concentrance Consulting Group, An Examination of Downpayment Gift Programs Administered by Non-profit Organizations, prepared for the U.S. Department of Housing and Urban Development (Washington, D.C. March 2005).
[55] Technical Analysis Center, Inc. with Integrated Financial Engineering, Inc., "An Actuarial Review of the Federal Housing Administration Mutual Mortgage Insurance Fund for Fiscal Year 2005," prepared for the U.S. Department of Housing and Urban Development (Fairfax, VA: Oct. 14, 2005).
[56] HUD, Mortgagee Letter 2005-02, Seller Concessions and Verification of Sales, Jan. 4, 2005.
[57] GAO, Single-Family Housing: Progress Made, but Opportunities Exist to Improve HUD's Oversight of FHA Lenders, GAO-05-13 (Washington, D.C. Nov. 12, 2004); GAO, Single-Family Housing: HUD's Risk-Based Oversight of Appraisers Could be Enhanced, GAO-05-14 (Washington, D.C. Nov. 5, 2004).
[58] GAO-05-194.
[59] GAO-05-875.
[60] For a full description of this sample, see Concentrance Consulting Group, Audit of Loans with Downpayment Assistance, prepared for the U.S. Department of Housing and Urban Development, Feb. 6, 2004.
[61] According to HUD officials, HUD selected the Atlanta and Indianapolis MSAs for the Concentrance review because the use of down payment assistance was relatively high in those MSAs. HUD chose the Salt Lake City MSA because it had relatively high rates of down payment assistance and relatively high claim rates. [62] AVM is a broad term used to describe a range of computerized econometric models that are designed to provide estimates of residential real estate property values. AVMs may use regression, adaptive estimation, neural networking, expert reasoning, and artificial intelligence to estimate the market value of a residence. [63] The date of first amortization is generally the first day of the month after settlement, so that most of these loans would have been settled during March 2005. [64] GAO-01-1008G. [65] See, for example, Bradford Case, Henry Pollakowski, and Susan Wachter, "On Choosing Among House Price Index Methodologies," AREUEA Journal, vol. 19 (1991), 286-307. [66] Neural nets are discussed in Paul Kershaw and Peter Rossini, "Using Neural Networks to Estimate Constant Quality House Price Indices," Fifth Annual Pacific Rim Real Estate Society Conference, Kuala Lumpur, Malaysia, January 1999. [67] For a more detailed description of the data developed by Concentrance, see appendix I: Objectives, Scope, and Methodology. [68] For the 2000, 2001, and 2002 Concentrance sample, when we had the name and often the Taxpayer Identification Number of the nonprofit, we divided the sample between seller-funded nonprofits and nonseller- funded sources, so that a gift from a nonprofit that was not clearly seller-funded was included in the nonseller-funded category. For the 2005 transactions, we used FHA's Single-Family Data Warehouse, which records the Taxpayer Identification Number but not the name of the nonprofit. Hence, for this analysis the samples were split between the categories "gift from a nonprofit" and "gift from a source other than nonprofit." We were able to link the Taxpayer Identification Number to the name of the nonprofit for almost 90 percent of the records in the Single-Family Data Warehouse sample, and found that the nonprofit was seller-funded in about 94 percent of those cases. [69] The date of first amortization is generally the first day of the month after settlement, so that most of these loans would have settled during March 2005. [70] The others are Alaska, Kansas, Mississippi, Missouri, New Mexico, Texas, and Utah. [71] In one case, a mortgage amount was recorded as $12,937, although the Single-Family Data Warehouse recorded it as $128,937, and the sales price was $130,000. In the other case, a sales price was recorded as $783,300, and the Single-Family Data Warehouse recorded the sales price as $78,300, and the mortgage was $77,362. [72] The problem of testing means for a Cauchy distribution and the use of medians as an alternative are discussed in E.L. Lehmann, Theory of Point Estimation. (West Sussex, England: John Wiley and Sons, Inc., 1983). In particular see 352-353 and 423. [73] Use of the trimmed mean for non-normal distributions is discussed in the National Institute of Standards and Technology's Engineering Statistics Handbook chapter on "Exploratory Data Analysis," [74] There may be a slight upward bias to the AVM estimates for a sample consisting solely of FHA-insured loans. 
Because FHA has a maximum loan amount, only the least expensive homes in a high-priced neighborhood will qualify for FHA, so there will be some tendency for FHA-insured properties to have lower values than neighboring properties. This tendency should not have a differential impact on assisted and unassisted transactions. [75] The level of statistical significance is shown as the p-value in tables 1-8. [76] The data consisted of loans insured in FHA's 203(b) program, FHA's main single-family program, and the 234(c) condominium program. Small specialized programs, such as 203(k) rehabilitation and 221(d) subsidized mortgages were not in the sample. [77] These probabilities are conditional because they are subject to the condition that the loan has remained active until a given quarter. [78] For a more detailed description of the data developed by Concentrance, see appendix I: Objectives, Scope, and Methodology. For a full description of the data, see Concentrance, Audit of Loans with Downpayment Assistance, prepared for the U.S. Department of Housing and Urban Development, Feb. 6, 2004. [79] See GAO-05-194 for a review of what published research indicates about the variables that are most important when estimating the risk level associated with individual mortgages. [80] Such termination probabilities are called hazard rates in statistical mortgage modeling. [81] This model is fully documented in GAO-01-460 (Washington, D.C. Feb. 28, 2001). [82] And the higher price is supported by an appraisal. [83] FHA requires a buyer contribution of about 3 percent, but allows the borrower to finance some closing costs and the mortgage insurance [84] GAO-05-194. [85] This can be calculated from the regression coefficients for seller- funded down payment assistance and non-seller-funded down payment assistance in table 12, by taking the exponent of the coefficient. See Betty Kirkwood and Johnathan Sterne, Essential Medical Statistics, 2ND edition (Oxford UK: Blackwell Publishing, 2003), 197- 198. [86] The model based on the TOTAL Mortgage Scorecard variables found even larger differences, with seller-funded nonprofit assistance loans having claim rates 109 percent higher and loans with assistance from other sources having claim rates 36 percent higher than comparable loans without assistance. [87] The odds ratio is the probability that an event, such as a claim or a prepayment, will occur, divided by the probability that the event will not occur. [88] The p values for a one-tailed test range from 0.11 to .12 with the constructed risk variable, and .2 to .27 with the TOTAL Mortgage Scorecard variables. [89] Ronald J. Krumm and Austin Kelly, "Effects of Homeownership on Household Savings," Journal of Urban Economics, vol. 26, (1989), 281- [90] Don Haurin, Pat Hendershott, and Susan Wachter, "Wealth Accumulation and Housing Choices of Young Households: An Exploratory Investigation" Journal of Housing Research, vol. 7, no. 1, (1996), 33- [91] Brent W. Ambrose and Charles A. Capone, "The Hazard Rates of First and Second Defaults," Journal of Real Estate Finance and Economics, vol. 20, no. 3 (May 2000), 275-293; Michelle A. Danis and Anthony Pennington-Cross, "A Dynamic Look at Subprime Loan Performance" Federal Reserve Bank of St. Louis Working Paper 2005-029A (May 2005), available at http://research.stlouisfed.org/wp/2005/2005-029.pdf . 
[92] These included 184 in the national sample and 205 in the MSA sample.
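For readers who want the arithmetic behind notes [85] through [87] spelled out: exponentiating a logit regression coefficient gives the corresponding odds ratio. The coefficient value below is purely illustrative and is not taken from table 12:

$\text{odds ratio} = e^{\hat{\beta}},\qquad \hat{\beta}=0.5 \;\Rightarrow\; e^{0.5}\approx 1.65,$

that is, a coefficient of 0.5 on a down payment assistance indicator would correspond to odds of a claim roughly 65 percent higher than for otherwise comparable loans without assistance.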
{"url":"http://www.gao.gov/assets/250/248464.html","timestamp":"2014-04-21T04:40:42Z","content_type":null,"content_length":"276730","record_id":"<urn:uuid:30971b7f-cb6a-4222-a964-7a2319f39627>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00657-ip-10-147-4-33.ec2.internal.warc.gz"}
MathGroup Archive: December 2010 [00329] [Date Index] [Thread Index] [Author Index] Re: Replacement Rule with Sqrt in denominator • To: mathgroup at smc.vnet.net • Subject: [mg114635] Re: Replacement Rule with Sqrt in denominator • From: Noqsi <noqsiaerospace at gmail.com> • Date: Sat, 11 Dec 2010 01:53:33 -0500 (EST) • References: <idl6vd$mkp$1@smc.vnet.net> <idnqq6$q5i$1@smc.vnet.net> <idsku3$6ko$1@smc.vnet.net> On Dec 10, 12:30 am, AES <sieg... at stanford.edu> wrote: > In article <idnqq6$q5... at smc.vnet.net>, > Noqsi <noqsiaerosp... at gmail.com> wrote: > > It is easy to see the kind of chaos the vague and ambiguous "rules > > should be interpreted semantically in a way that makes mathematical > > sense" would cause. How should > > a + b I /. I->-I > > be interpreted *semantically*? > I do not possess anything like the depth of knowledge of symbolic > algebra or the understanding of the principles of semantics that would > embolden me to offer any answer to the preceding question. Oh, come on. This really isn't hard to understand. > But I will offer the following opinion: However the above rule is to b= > interpreted, in any decent symbolic algebra system, assuming a and b > have not yet been assigned any values, the symbol I should be > interpreted (i.e., modified) identically -- i.e., in *exactly* the same > fashion -- for either of the inputs > a + b I /. I->-I OR a + 2 b I /. I->-I > This is NOT the case in Mathematica. And this is trivially understandable by looking at FullForm. "I" is part of the number in this case: it is not a separate symbol. As well expect 22/.2->1 to yield 11. > This behavior is a "gotcha" that > can be responsible for large and hard to trace difficulties for many > users Sure. And if you use a power saw carelessly, you'll cut your fingers off. That's a worse "gotcha", but it can't be helped in a foolproof way without crippling the saw. Just as this can't be helped without crippling Mathematica. > Furthermore, I believe that Mathematica WILL interpret (i.e. , modify) > the two inputs above in exactly the same fashion if the character I in > thee two expressions is replaced by ANY OTHER single upper or lower case > letter in the alphabet. Does anyone else find this not to be true? It's a consequence of two very simple considerations: 1. Pattern matching works on FullForm. 2. Pattern matching can't split atoms. What's so hard about this? There are only six kinds of atoms. When using a tool, it's preferable to exploit the way it actually works, rather than make up some impossible notion and complain that it should work that way.
{"url":"http://forums.wolfram.com/mathgroup/archive/2010/Dec/msg00329.html","timestamp":"2014-04-18T19:12:31Z","content_type":null,"content_length":"27666","record_id":"<urn:uuid:a6be98bd-60f1-47f5-8f4a-b3c7bc75ec1a>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00425-ip-10-147-4-33.ec2.internal.warc.gz"}
A Short Rant about Electromagnetism Texts

I'd like to step aside from the main line to make one complaint. In refreshing my background in classical electromagnetism for this series I've run into something that bugs the hell out of me as a mathematician. I remember it from my own first course, but I'm shocked to see that it survives into every upper-level treatment I've seen. It's about the existence of potentials, and the argument usually goes like this: as Faraday's law tells us, for a static electric field we have $\nabla\times E=0$; therefore $E=\nabla\phi$ for some potential function $\phi$ because the curl of a gradient is zero.

Let's break this down to simple formal logic that any physics undergrad can follow. Let $P$ be the statement that there exists a $\phi$ such that $E=\nabla\phi$. Let $Q$ be the statement that $\nabla\times E=0$. The curl of a gradient being zero is the implication $P\implies Q$. So here's the logic:

$\displaystyle\begin{aligned}&Q\\&P\implies Q\\&\therefore P\end{aligned}$

and that doesn't make sense at all. It's a textbook case of "affirming the consequent". Saying that $E$ has a potential function is a nice, convenient way of satisfying the condition that its curl should vanish, but this argument gives no rationale for believing it's the only option.

If we flip over to the language of differential forms, we know that the curl operator on a vector field corresponds to the operator $\alpha\mapsto*d\alpha$ on $1$-forms, while the gradient operator corresponds to $f\mapsto df$. We indeed know that $*ddf=0$ automatically — the curl of a gradient vanishes — but knowing that $d\alpha=0$ is not enough to conclude that $\alpha=df$ for some $f$. In fact, this question is exactly what de Rham cohomology is all about!

So what's missing? Full formality demands that we justify that the first de Rham cohomology of our space vanish. Now, I'm not suggesting that we make physics undergrads learn about homology — it might not be a terrible idea, though — but we can satisfy this in the context of a course just by admitting that we are (a) being a little sloppy here, and (b) the justification is that (for our purposes) the electric field $E$ is defined in some simply-connected region of space which has no "holes" one could wrap a path around. In fact, if the students have had a decent course in multivariable calculus they've probably seen the explicit construction of a potential function for a vector field whose curl vanishes subject to the restriction that we're working over a simply-connected space.

The problem arises again in justifying the existence of a vector potential: as Gauss' law for magnetism tells us, for a magnetic field we have $\nabla\cdot B=0$; therefore $B=\nabla\times A$ for some vector potential $A$ because the divergence of a curl is zero. Again we see the same problem of affirming the consequent. And again the real problem hinges on the unspoken assumption that the second de Rham cohomology of our space vanishes. Yes, this is true for contractible spaces, but we must make mention of the fact that our space is contractible! In fact, I did exactly that when I needed to get ahold of the magnetic potential once.

Again: we don't need to stop simplifying and sweeping some of these messier details of our arguments under the rug when dealing with undergraduate students, but we do need to be honest that those details were there to be swept in the first place.
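For concreteness, here is a sketch of that explicit construction, under the assumption that the field is defined on a region star-shaped about the origin (a special case of simply-connected) and using the sign convention $E=\nabla\phi$ from above:

$\displaystyle\phi(x)=\int_0^1 E(tx)\cdot x\,dt.$

Differentiating under the integral sign and using the curl-free condition $\partial_j E_i=\partial_i E_j$ turns the $j$-th partial derivative of the integrand into $\frac{d}{dt}\bigl(t\,E_j(tx)\bigr)$, so $\partial_j\phi(x)=E_j(x)$, i.e. $\nabla\phi=E$. The star-shaped hypothesis is what guarantees that the segments $tx$, $0\le t\le 1$, stay inside the region; this is exactly the topological assumption the usual textbook argument leaves unstated.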
The alternative most texts and notes choose now is to include statements which are blatantly false, and to rely on our authority to make students accept them unquestioningly. 27 Comments » 1. Exactly what’s wrong with almost everything these days. Avery Andrews | February 18, 2012 | Reply 2. Thank you!! In this particular case it’s completely inexcusable because for many students, this material is already so intimidating, any who are alert enough to catch the mistake will probably assume they’re misunderstanding something elsewhere. (There’s some sort of weird opposite Occam’s Razor at work here: if you’re neck deep in computations and something doesn’t add up, the error (one is tempted to reason) must be in the complicated computations, not in the basic logic!) Sam Alexander | February 18, 2012 | Reply 3. This is your _one_ problem with E&M texts? How about all the integrating through the singularity at the origin with nary a mention, or, for that matter, the fact that the entire subject is a fiction. I once read an article that pointed out that if one follows through the arguments of classical E&M, a point particle will accelerate on its own to infinity (something about the interaction of its electric and magnetic fields). Greg Friedman | February 18, 2012 | Reply 4. It’s not the only problem, Greg, but it’s among the most universal. Many, if not most, texts introduce delta functions and are explicit about the fact that they’re not going into the whole theory of singular distributions. They also do tend to bring up divergent integrals involved in self-interactions, and point out that resolving these problems is beyond their scope. My problem here is not just that the standard physics pedagogy plays fast and loose with the math; it’s that they don’t admit they’re playing fast and loose with the math the way they do Incidentally, I have a similar problem with multivariable calculus courses fudging the difference between points and position vectors, so mathematicians aren’t perfect either. John Armstrong | February 18, 2012 | Reply 5. This is an incredible post, to be reblogged immediately! I think that contemporary science education is veritably plagued by poor presentations. Just think of the illustration of gravity as a bowling ball on a mattress. It’s gravity that suppresses it in the first place! Thanks for this, notedscholar | February 20, 2012 | Reply 6. Hi, John, I wonder if we could have a compiled version of all your notes on electromagnetism from the past few months’ blog entries. Shuhao Cao | February 20, 2012 | Reply 7. Admittedly I don’t remember reading the textbook for the course 20 years ago, but I distinctly remember the words “Helmholtz decomposition” from at least one of the lectures. No mention of de Rham cohomology, of course. Given that the Helmholtz theorem is “the fundamental theorem of vector calculus”, I think it’s reasonable to state it in a physics class without proof, as long as you actually state it. Pseudonym | February 21, 2012 | Reply 8. Helmholtz is one possibility, and it works fine in real, three-dimensional space. It doesn’t generalize so well, though. John Armstrong | February 21, 2012 | Reply 9. [...] said should be all that new to a former physics major, although at some points we’ve infused more mathematical rigor than is typical. But now I want to go in a different [...] Pingback by Maxwell’s Equations in Differential Forms « The Unapologetic Mathematician | February 22, 2012 | Reply 10. 
Here’s the fundamental reason for your discomfort: as a mathematician, you don’t realize that scalar and vector potentials have *no physical significance* (or for that matter, do you understand the distinction between objects of physical significance and things that are merely convenient mathematical devices?). It really doesn’t matter how scalar and vector potentials are defined, found, or justified, so long as they make it convenient for you to work with electric and magnetic fields, which *are* physical (after all, if potentials were physical, gauge freedom would make no sense). On rare occasions (e.g. Aharonov-Bohm effect), there’s the illusion that (vector) potential has actual physical significance, but when you realize it’s only the *differences* in the potential, it ought to become obvious that, once again, potentials are just mathematically convenient devices to do what you can do with fields alone. P.S. We physicists are very happy with merely achieving self-consistency, thankyouverymuch. Experiments will provide the remaining justification. Peter Erge | March 8, 2012 | Reply □ Eh, here’s *my* discomfort – what in the world does *physical significance* mean? Talk about vacuous handwaving…. Occam's Strop | June 17, 2012 | Reply ☆ I’m sorry. Here I was, thinking I was among scientists, albeit those with more theoretical bent than most are comfortable with. Among scientists, something with “physical significance” means something that can be measured in a lab. (And among scientists, this simple fact needs no elaboration, unless I wanted to insult your intelligence, or, more likely, a lack thereof.) Peter Erge | June 17, 2012 | Reply 12. I thought my courses in Electro-Magnetism were the most confusing I had as an under-grad. If memory serves me correctly we worked our way through about one problem per 90 minute class with the TA going up to the Professor before, during break and after class and correcting his mistakes which were duly noted at the start of the next class period. It seems we need to do two things: a) Explain Vector Calculus very, very well – double the amount of time in class and work out the physics behind the calculus step by step and let the students struggle with problems in class instead of just going through one proof after another. b) Have very good and very explicit practical problems in Electro-Magnetism so that it is as clear as can be what is going on and why the calculus reveals the physics and vice versa. The amount of time students in the sciences spend in the classroom and lab should not be modeled after the amount of time that Liberal Arts students spend in class. 15 credit hours a terms may translate into 15 hours of lecture for them, but 15 credit hours of science should translate into 30 hours of class and lab time. Instead of 4 years to earn B.S. why not 5 years – there is so much more to learn and it is so much more important to learn it well. For the typical English Major – who is not going on to Grad School what real difference in their future work will it make if they confuse Dryden with Donne ? George Watson | July 8, 2012 | Reply 13. “I have a similar problem with multivariable calculus courses fudging the difference between points and position vectors” Of course the scalar product of two points in space has intrinsic meaning. Doesn’t it? And naturally you can multiply a point by a number and get another point. Right? 
This really bothered me in first semester physics, not because I knew anything about vector spaces, but simply because I could not make sense of the various things they were doing with those Naturally I was too proud to memorize formulas or do calculations by rote. 40+ years later, I still don’t know whether I was being pig-headed or sensible. Ralph Dratman | July 19, 2012 | Reply 14. How about when computing the electric field INSIDE a volume charge distribution using the integral form of Coulomb’s Law? The denominator of the integrand contains the term (r-r’), where r and r’ are the “field point” and “source point”, respectively. The field point is treated as a constant during this integration, as integration is with respect to the source point. If the field point r is allowed to be within the domain of integration, then the integrand is undefined at said when r’ coincides with r! Division by zero! What the hell! This would never fly in my vector analysis I took 2 quarters undergrad engineering E&M this past year. I just realized this problem approx. 1 week ago and it is driving me crazy. Matthew Kvalheim | August 12, 2012 | Reply 15. Peter Erge’s comment, if I understand it correctly, suggests that a physicist’s use of mathematics does not require that the physics and the math correspond precisely at every possible point — only that the mathematics can be used to make some predictions which have been found to match experimental observations over a certain range of configurations and measurement. Although Erge’s approach leaves familiar logical gaps, I think it does describe how physicists have to use mathematics. Ralph Dratman | August 12, 2012 | Reply □ That’s exactly my point—and mathematicians take something away from this exchange, too: what use would distributions have been except that some of physicists’ favorite “functions” (such as the Dirac delta function, which, BTW, is the result of doing the 1/r integral for a point charge (going from the differential form to integral form of Maxwell’s equations) in the complaint by Mr. Kvalheim above) are poorly defined as traditional functions? When one happens to know the answer (or has a way to check the answer), one can afford to be a bit sloppy—for the sake of faster progress, not of sloppiness itself. Peter Erge | August 12, 2012 | Reply 16. Ralph and Peter, you’re still ignoring what I’ve said over and over and over and over again here: I’m talking about how physics is taught, not how it’s used. And I’m not even asking for rigor in how it’s taught; I’m asking for a explicit mention of the fact that a given point is not rigorous. Seriously, you’re not even arguing the same point. Just stop before you embarrass yourselves further. John Armstrong | August 12, 2012 | Reply □ Nice dodge, but that can’t be it. Physics textbooks seldom claim the kind of mathematical rigor you are insisting on. For example, most electrodynamics textbooks (the classic textbooks Griffiths (undergrad level) and Jackson (grad level) both do this) will introduce electric potential in electrostatics—i.e. many chapters before Faraday’s law has even been introduced (after magnetostatics) and hence there were no reasons to worry about changing electric fields in the first place. You have inserted the rigor/logic that physicists don’t even bother to imagine and demanded that physicists hold up to mathematicians’ standards. 
That is unreasonable, not because physicists aren't capable of that, but because we like to teach something substantial—and have the class end within a semester at the same time. Peter Erge | August 12, 2012 | Reply □ I also strongly agree with John Armstrong. The texts absolutely should point out, in some way, that the physics and the math differ. In fact, I remember reading the strange assertion you mention (that some potential function must exist because the curl of the gradient is zero). It immediately made no sense to me, even though I had no idea how the statement needed to be qualified. In fact, I am pretty surprised to read that all you need is a contractible space to make it true. I don't usually have a longing to see a proof, but in that case I did want some kind of motivation for believing it — if only because it is so much easier to remember things that make sense. Ralph Dratman | August 12, 2012 | Reply 17. Here is the exact rewrite I am asking for, Peter.
Before: … therefore $E=\nabla\phi$ for some potential function $\phi$ because the curl of a gradient is zero.
After: … therefore choosing $E$ of the form $\nabla\phi$ for some potential function $\phi$ will always satisfy this condition because the curl of a gradient is zero. We will assume that $E$ is of this form from now on.
Is that really so objectionable? John Armstrong | August 12, 2012 | Reply □ Yes. Because it detracts from the subject of the class: physics, not mathematical nitpicking. Peter Erge | August 12, 2012 | Reply ☆ I seriously disagree. In fact, I would personally prefer a little bit more explanation than is to be found in the edit John provides — even though it is adequate mathematically. I fail to see how a footnote (of any length) could disrupt flow of an argument or the comprehension of same by students. If you really worry about that, Peter, just tell them something like, "You are not responsible for any of the material in footnotes." Then the ones who would be distracted won't even look at them. The lack of consistency in logic and notation bothered me very much as an undergraduate physics student, and certainly not because I was a nitpicker. I just sometimes had trouble following the flow of ideas in certain areas, and I think, speaking very generally, that more mathematical explanation, when done properly, can translate into more sense. Ralph Dratman | August 12, 2012 | Reply 18. While I find that the majority of my engineering classmates don't seem to care about the lack of rigor in our electromagnetics classes (or any class for that matter), it has always bothered me. I find myself agreeing with the point of view of Mr. Armstrong. In sweeping details under the rug, teachers frequently leave me feeling as though I am thinking about the subject matter completely wrong- because, as far as I know, nobody else had ever run into the same problem as myself. I have seen Griffiths' Intro to Electrodynamics mentioned by a few others on this site. This was not the text used in my classes, but I am reading it now (about 150 pages in). I find this text EXTREMELY helpful, because although it does not seem rigorous, it is filled with footnotes where Dr. Griffiths at least acknowledges when he is not mentioning all the details. This has already provided me with a lot of insight. Another example- using a delta distribution for the divergence of 1/r^2 fields seemed a little suspect to me. Griffiths just sort of throws it in as though "oh, the divergence theorem doesn't work here for mystery reasons.
But it does if we put this thing called a delta function in." I never truly understood Gauss' Law until, while reading Vector Calculus by Marsden & Tromba, I saw a proof of Gauss' Law using the divergence theorem for unions of simple elementary regions (without any delta distributions). (On a side note, can anyone explain why the integration I mentioned in my previous post is valid? Or at least point me to a resource that does? I apologize if this is too off topic) Matthew Kvalheim | August 12, 2012 | Reply 19. I am a physicist and I tend to agree with John – the presentation of relevant mathematics in physics books is not always sufficient. That gap should be filled mainly by dedicated class lead by teacher who understands the subtleties of mathematics for physicists, and also there should be at least footnotes setting things right like Mr. Griffiths makes. However, Peter has also a good point in that physicist should know and usually knows what is potential much sooner than he understands curl and div operators, so there is no real harm done to physics, as we always deal with simply-connected space R^3. then the integrand is undefined at said when r' coincides with r! Division by zero! What the hell! Matthew, the integral you worry is free of problem if the charge density is finite function in some neighbourhood of the field point in question(where you seek the field). The singularity is indeed in the expression, but is often integrable and the result is finite and non-problematic. Jan Lalinsky | December 4, 2012 | Reply □ Jan, thanks for responding to my question. Do you mean that the integral is then defined in an "improper" sense? I.e., would you simply integrate over the charge distribution, but excluding a ball which encloses the field point, and let the radius of this ball approach zero? Or am I thinking about this incorrectly? I guess I'm just curious how one actually proves what you said. Matthew Kvalheim | December 5, 2012 | Reply
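One standard way to make Jan's remark precise, assuming only that the charge density is bounded, say $|\rho|\le M$, on some ball around the field point $r$: the integrand in Coulomb's law obeys

$\displaystyle\left|\rho(r')\,\frac{r-r'}{|r-r'|^3}\right|\le\frac{M}{|r-r'|^2},$

and in spherical coordinates centered at $r$ the volume element is $s^2\,ds\,d\Omega$ with $s=|r-r'|$, so the contribution from a ball of radius $\varepsilon$ around $r$ is bounded (up to the overall Coulomb constant) by $4\pi M\varepsilon$, which vanishes as $\varepsilon\to 0$. Cutting out a small ball and letting its radius shrink (exactly the improper-integral procedure described above) therefore gives a finite, well-defined limit.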
{"url":"http://unapologetic.wordpress.com/2012/02/18/a-short-rant-about-electromagnetism-texts/?like=1&source=post_flair&_wpnonce=c327cf8c6a","timestamp":"2014-04-18T10:41:47Z","content_type":null,"content_length":"114326","record_id":"<urn:uuid:5f0f60f3-debb-4ab1-ab86-9216bf4e0be6>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00106-ip-10-147-4-33.ec2.internal.warc.gz"}
Swap two nodes in linked list

09-12-2007 #1 Registered User Join Date Sep 2007

Swap two nodes in linked list

I want to swap two nodes of a linked list. Like if its 12 23 34 45 then i want result like 21 32 43 54..... if input is 12345 then output should be 21435. something like that... I have written following .... but its not producing the desired output ..... somebody plz help me out

Code:
struct list{
    int roll_no;        /* Storing roll number of a node */
    // char name[N];    /* Storing name of node */
    float marks;        /* Storing marks of a node */
    struct list *next;  /* Storing next address */
};

/***** Redefining struct list as node *****/
typedef struct list node;

node* swap(node *current)
    int rno;            /* Roll number for swaping node*/
    int t;              /* Total number of nodes */
    node *temp;         /* Temporary copy of current */
    node *tmp;          /* Temporary variable */
    printf("\nYou cannot swap the only node\n");
    printf("\nEnter roll number whose node you want to swap with the next\n");

int main()
    head=swap(head);    // head is the first node of the linked list

Last edited by shounakboss; 09-12-2007 at 10:17 AM.

09-12-2007 #2 Registered User Join Date Oct 2006

so you want to reverse the integers? give a search here or elsewhere as im sure its been covered before. if you dont want to search and want specific help, then please post an attempt at reversing a number. once you have done that im sure your help will come. edit: after rereading, it looks like your first example just involved reversing each integer in a list. the second example is a different case in which one number is entered and the digits are switched. so it looks like two problems. the first one i talked about above, the second one, a starting point would be to determine if the length of the integer is odd or even, as since 12345 has an odd number of numbers, only the first 4 are switched, however if it were 1234 it should switch to 2143 in which case all the numbers are switched. i would grab some paper, draw out 12345 and think of the steps to get it to become 21435, writing out every single step (as you know a computer would have to be told). once you have that, do it again for a number 1234, as this is the second case. this doesnt really have anything to do with a linked list, so im sure im misunderstanding.
Last edited by nadroj; 09-12-2007 at 10:15 AM.

09-12-2007 #3

And investigate the [code] tags!
"No-one else has reported this problem, you're either crazy or a liar" - Dogbert Technical Support
"Have you tried turning it off and on again?" - The IT Crowd

09-12-2007 #4 Registered User Join Date Oct 2006

yes code tags would be wonderful. please see the edit of my previous post shounakboss.
    I want to swap two nodes of a linked list. Like if its 12 23 34 45 then i want result like 21 32 43 54
again this doesnt seem to involve swaping any nodes. it looks like traversing the linked list, and for each node the integer value of the data is reversed, independent of any other node.
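Following the suggestion in the replies (reverse the decimal digits of each node's value independently while walking the list), here is a minimal, self-contained sketch. The node type and names are simplified stand-ins for illustration, not the original poster's struct, and negative values are not handled.

Code:
#include <stdio.h>
#include <stdlib.h>

/* Simplified node type used only for this sketch. */
struct node {
    int value;
    struct node *next;
};

/* Reverse the decimal digits of one non-negative integer: 12 -> 21, 45 -> 54. */
static int reverse_digits(int n)
{
    int r = 0;
    while (n > 0) {
        r = r * 10 + n % 10;   /* peel the last digit off n and append it to r */
        n /= 10;
    }
    return r;
}

/* Walk the list and reverse the digits of every node's value in place. */
static void reverse_each(struct node *head)
{
    for (struct node *p = head; p != NULL; p = p->next)
        p->value = reverse_digits(p->value);
}

/* Prepend a new node holding 'value' and return the new head. */
static struct node *push(struct node *head, int value)
{
    struct node *n = malloc(sizeof *n);
    if (n == NULL)
        return head;           /* out of memory: leave the list unchanged */
    n->value = value;
    n->next = head;
    return n;
}

int main(void)
{
    /* Build the list 12 -> 23 -> 34 -> 45 by pushing in reverse order. */
    struct node *head = NULL;
    int data[] = {45, 34, 23, 12};
    for (int i = 0; i < 4; i++)
        head = push(head, data[i]);

    reverse_each(head);

    for (struct node *p = head; p != NULL; p = p->next)
        printf("%d ", p->value);   /* prints: 21 32 43 54 */
    printf("\n");

    /* Free the list. */
    while (head != NULL) {
        struct node *next = head->next;
        free(head);
        head = next;
    }
    return 0;
}

Note that the other example in the original post (12345 becoming 21435) is the separate pairwise digit-swap problem nadroj points out; it is not covered by this sketch.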
{"url":"http://cboard.cprogramming.com/c-programming/93500-swap-two-nodes-linked-list.html","timestamp":"2014-04-25T04:21:00Z","content_type":null,"content_length":"53689","record_id":"<urn:uuid:943a7d2d-e6ff-4c55-a423-007ff2277660>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
Matrix Partitioning on a Virtual Shared Memory Parallel Machine
April 1996 (vol. 7 no. 4) pp. 343-355
Benjamin Charny, "Matrix Partitioning on a Virtual Shared Memory Parallel Machine," IEEE Transactions on Parallel and Distributed Systems, vol. 7, no. 4, pp. 343-355, April, 1996.

Abstract—The general problem considered in the paper is partitioning of a matrix operation between processors of a parallel system in an optimum load-balanced way without potential memory contention. The considered parallel system is defined by several features the main of which is availability of a virtual shared memory divided into segments. If partitioning of a matrix operation causes parallel access to the same memory segment with writing data to the segment by at least one processor, then contention between processors arises which implies performance degradation. To eliminate such situation, a restriction is imposed on a class of possible partitionings, so that no two processors would write data to the same segment. On the resulting class of contention-free partitionings, a load-balanced optimum partitioning is defined as satisfying independent minimax criteria. The main result of the paper is an algorithm for finding the optimum partitioning by means of analytical solution of respective minimax problems. The paper also discusses implementation and performance issues related to the algorithm, on the basis of experience at Kendall Square Research Corporation, where the partitioning algorithm was used for creating high-performance parallel matrix libraries.

[1] R.W. Numrich, "Memory Contention for Shared Memory Vector Multiprocessors," Proc. Supercomputing '92, pp. 316-324. IEEE CS Press, 1992. [2] D.H. Bailey, "Vector computer memory bank contention," IEEE Trans. Computers, vol. 36, pp. 293-298, 1987. [3] I.Y. Bucher and Simmons, "Measurement of Memory Access Contentions in Multiple Vector Processors," Proc. Supercomputing '91, pp. 806-817, 1991. [4] C.H. Hoogendoorn, "A General Model for Memory Interference in Multiprocessors," IEEE Trans. Computers, vol. 26, pp. 998-1,005, 1977. [5] P. Tang and R.H. Mendez, "Memory Conflicts and Machine Performance," Proc. Supercomputing '89, pp. 826-831, 1989. [6] K. Li, "IVY: A Shared Virtual Memory System for Parallel Computing," Proc. Parallel Processing, Int'l Conf., vol. II, pp. 94-101, 1988. [7] K. Li, "Shared Virtual Memory on Loosely-Coupled Multiprocessors," Tech Report VALEU-RR-492, Yale Univ., 1986. [8] K. Li and P. Hudak, "Memory Coherence in Shared Virtual Memory Systems," Proc.
Fifth Ann. ACM Symp. Principles of Distributed Computing, pp. 229-239, 1986. [9] L.M. Censier and P. Featrier, "A New Solution to Coherence Problems in Malticache Systems," IEEE Trans. Computers, vol. 27, no. 12, pp. 1,112-1,118, 1978. [10] A.J. Smith, "Cache Memories," ACM Computing Surveys, Vol. 14, 1982, pp. 473-540. [11] B.M. Lampson and D.D. Redell, "Experience with Processes and Monitors in Mesa," Comm. ACM, vol. 27, no. 6, pp. 594-602, 1984. [12] P.J. Leach, P.H. Levine, B.P. Douros, J.A. Hamilton, D.L. Nelson, and B.L. Stumpf, "The Architecture of an Integrated Local Network," IEEE J. Selected Areas in Comm., 1983. [13] J. Archibald and J.L. Baer, "Cache Coherence Protocols: Evaluation Using a Multiprocessor Simulation Model," ACM Trans. Computer Systems, vol. 4, no. 4, Nov. 1986. [14] J.J. Dongarra et al., LINPACK : Users' Guide.Philadelphia: SIAM, 1979. [15] E. Anderson, Z. Bai, C. Bischof, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, S. Ostrouchov, and D. Sorensen, LAPACK Users' Guide. Philadelphia, Penn.: SIAM, [16] S. Frank, H. Burkhardt, and J. Rothnie, "The KSR1: Bridging the Gap between Shared Memory and MPPs," Compcon '93 Proc., pp. 285-294, 1993. [17] E. Burke, "An Overview of System Software for the KSR1," Compcon '93 Proc., pp. 295-299, 1993. [18] S. Breit, C. Pangali, and D. Zirl, "Technical Applications on the KSR1: High Performance and Ease of Use," Compcon '93 Proc., pp. 303-311, 1993. [19] T. Shavit, L. Lee, and S. Breit, "A Practical Parallel Runtime Environment on a Multiprocessor with Global Address Space," Proc. Fifth ECMW Workshop Use of Parallel Processors in Meteorology, pp. 1-20, 1992. [20] E.L. Boyd, J-D. Wellman, S. G. Abraham, and E.S. Davidson, "Evaluating the Communication Performance of MPPs Using Iterative Sparse Matrix Multiplications," Advanced Computer Architecture Laboratory, Dept. of Electrical Eng. and Computer Science, Univ. of Michigan, Ann Arbor, pp. 1-22, 1993. [21] U. Ramachandran, G. Shah, S. Ravikumar, and J. Muthukumarasamy, "Scalability Study of the KSR1," College of Computing, Georgia Inst. of Technology, Atlanta, 1993. [22] J.J. Modi, Parallel Algorithms and Matrix Computation.New York: Oxford Univ. Press, 1988. [23] K.A. Gallivan, R.J., Plemmons, and A.H. Sameh, "Parallel Algorithms for Dense Linear Algebra Computations," Parallel Algorithms for Matrix Computations, K.A. Gallivan et al. SIAM, 1989. [24] G.H. Golub and C.F. Van Loan, Matrix Computations. Johns Hopkins Univ. Press, 1989. Index Terms: Basic matrix operations, matrix partitioning, minimax criteria, numerical algorithm, optimum load balance, parallel multiprocessor, performance, virtual shared memory. Benjamin Charny, "Matrix Partitioning on a Virtual Shared Memory Parallel Machine," IEEE Transactions on Parallel and Distributed Systems, vol. 7, no. 4, pp. 343-355, April 1996, doi:10.1109/ Usage of this product signifies your acceptance of the Terms of Use
{"url":"http://www.computer.org/csdl/trans/td/1996/04/l0343-abs.html","timestamp":"2014-04-20T03:46:35Z","content_type":null,"content_length":"56464","record_id":"<urn:uuid:1b5d6958-1c6e-4192-990e-b76885ff8cea>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00278-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Re: "Relativistic" mathematics? Charles Silver csilver at sophia.smith.edu Mon Oct 12 20:00:19 EDT 1998 On Tue, 13 Oct 1998, Vladimir Sazonov wrote: > If you will tell this story on Adam and Eve to somebody from the > street who really have NO mathematical experience and did not > learn at school some examples of mathematical proofs (and actually > of proof rules) he will be unable to understand even what are prime > numbers and any informal proof you will present. You are probably right. > Our teachers > actually (usually implicitly, by examples) said us which are > "correct" rules of inference. What if a teacher taught us the following rule: P --> Q Therefore: P Would that make it a correct rule because the teacher taught it to us? No. The teacher would be just plain wrong, which shows that it's not the mere inculcation of *some* formal system, but getting things right (based, I think, on prior principles that are intuitively acceptable). > So, if somebody have any > reasonable mathematical education and training, then he actually > knows something like first-order logic. (But, most probably, he > does not know that he knows this. But this does not matter.) > Then he will implicitly, without even knowing this, formalize > (some essential features of) your story. Anyway, any proof > which will present or understand that person will be formal in > some essential respect. Mathematical proof is something which > can be *checked* on correctness mostly relative to its form, > rather than to its content. You think formal systems come first. I think intuitions are epistemically prior. > Let me also recall what M. Randall Holmes" > <M.R.Holmes at dpmms.cam.ac.uk> wrote on Fri, 2 Oct 1998 11:16:41: > > One cannot be more or less rigorous if there is no standard > > of perfect rigor to approximate. > > > > We may _not_ doubt that the conclusion of a valid argument follows > > from the premises. We do have explicit standards, which we can spell > > out, as to what constitutes a valid argument. This is the precise > > sense in which mathematics is indubitable. No natural science is > > indubitable in this sense. Was mathematical induction correct prior to its being formalized? Or, did it become correct once it was formalized. In my viewd, it has been enshrined as a rule only because it was previously intuitively correct. (If you don't believe this, try gaining acceptance for an incorrect rule of inference, like the one presented earlier.) > I think that having *explicit* standards means having known > some rules of inference presented in any reasonable form. > Say, children learn at school how to use in geometry the rule > reductio ad absurdum. Incidentally (I admit this is not relevant to the point), I don't think we accepted Reductio proofs in elementary geometry class, not because the proofs didn't establish what they purported to, but because they seemed like cheating. > To sum, I mean by a formal system such a system of axioms and > proof rules to which the term 'formal' may be applicable in any > reasonable sense. Thus, even any semiformal proof is > mathematically rigorous. Maybe you are right, but then a proof can be rigorous but wrong, in the sense of it following rules that later turn out to be mistaken. 
I'll let you have the last word: > And finally, I think it is unnecessary to discuss very much that > mathematics deals only with *meaningful* formalisms based on > some *intuition* and that even formal proofs are mostly > understood by us intuitively, may be with the help of some > graphical and other images and that there is a very informal > process of discovering proofs so that on the intermediate steps > we have only some drafts of future formal proofs (and sometimes > of axioms and proof rules, as well). Anyway, in each historical > period (except the time of a scientific revolution) we usually > *know* what is the ideal of mathematical rigour to which we > try to approach in each concrete proof. Sorry, but I can't help asking one more question: Is the above an application of Kuhnian philosophy? Are you interpreting formal rules as examples of his "paradigms"? Charlie Silver
{"url":"http://www.cs.nyu.edu/pipermail/fom/1998-October/002336.html","timestamp":"2014-04-16T10:15:14Z","content_type":null,"content_length":"7174","record_id":"<urn:uuid:66c80138-8c70-4f2f-ab1b-ec550616a9c0>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00282-ip-10-147-4-33.ec2.internal.warc.gz"}
Aurel Bejancu and Hani Reda Farran
Department of Mathematics and Computer Science, Kuwait University, Kuwait and Institute of Mathematics, Iasi Branch of the Romanian Academy, Romania; and Department of Mathematics and Computer Science, Kuwait University, Kuwait

Abstract: Let $\mathbb{F}^m=(M,F)$ be a Finsler manifold and $G$ be the Sasaki–Finsler metric on the slit tangent bundle $TM^0=TM\setminus\{0\}$ of $M$. We express the scalar curvature $\widetilde\rho$ of the Riemannian manifold $(TM^0,G)$ in terms of some geometrical objects of the Finsler manifold $\mathbb{F}^m$. Then, we find necessary and sufficient conditions for $\widetilde\rho$ to be a positively homogeneous function of degree zero with respect to the fiber coordinates of $TM^0$. Finally, we obtain characterizations of Landsberg manifolds, Berwald manifolds and Riemannian manifolds whose $\widetilde\rho$ satisfies the above condition.

Keywords: Berwald manifold, Finsler manifold, Landsberg manifold, Riemannian manifold, scalar curvature, tangent bundle
Classification (MSC2000): 53C60; 53C15
{"url":"http://www.emis.de/journals/PIMB/103/6.html","timestamp":"2014-04-21T07:41:30Z","content_type":null,"content_length":"5107","record_id":"<urn:uuid:683073bb-8ba9-4e06-a454-bcaad3c54e69>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00606-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there an easy way of understanding the concept of naming compounds (binary, polyatomic, etc)? Any help would be greatly appreciated

There is really no way of "understanding" the concept of naming compounds. It's a pretty arbitrary system, really. However, there are a few steps you can take to simplify the process, and make it more easy to understand. Try... 1.) Identifying what type of compound the question is asking you to name. 2.) Performing step #1 should allow you to identify some algorithm by which you need to name this compound. Over time, you will develop different algorithms for naming specific types of compounds. 3.) Next, actually perform the algorithm you've developed from practice to the specific example you are given. The hardest part in any sort of problem solving is first developing your own algorithm that makes sense to YOU. Once you've done this, solving the actual problems is a piece of cake.

If you need help on developing these algorithms, go ahead and ask. Are there any types of compounds with which you have particular trouble naming?

Well, I know that there is a difference in the names (adding ide versus adding ate) between different elements; so I was wondering if there was a sort of easy method of remembering this/more steps Also, I'm still in algebra 1, so algorithms are sort of out there for me.....

The word algorithm just means "a specific method for solving a problem". Although the word is usually associated with mathematics, it is certainly used outside of the field. The best way to remember or memorize something, in my opinion, is to practice. As you practice, try to use your notes less and less until you are completely independent of them. Once you can do this, you've both memorized the material that you need to memorize and you've increased your aptitude at solving that type of problem. Then, once you've memorized the things that you need to, you can begin working solely on the problem solving aspects of the problems.

So here is my strategy. When you have been given a formula: 1. IF it is binary check if it has a metal. If it does it is an ionic compound. To name an ionic compound, you write the name of the metal first, then the next element, but ending in a -ide. So FeS is ionic, binary and should be Iron Sulf+ide = Iron Sulfide. 2. If it is binary and does not have a metal, start by naming the less electronegative atom first, and the second one has to end in an -ide. But, you need to indicate how many atoms of each. So CO2 is Carbon dioxide. N2O5 is Dinitrogen Pentaoxide. 3. If it is not a binary compound (more than 2 different elements) and it has a metal, you need to know the names of the common ionic groups like carbonate, phosphate etc. Then Fe3(PO4)2 is Iron Phosphate.

The ide versus ate: ide : is always when you have an ion formed of a single element. sulfide, phosphide, chloride. When you have a polyatomic ion, usually with oxygen in it, the naming can end in a ate, ite, ous etc.
Then the polyatomic ion with the most oxygens is the -ate, the next is the ite and so on. Best way is to just memorize them. Carbonate, phosphate, perchlorate, sulfate.

Thanks everyone! It really helped me :D
{"url":"http://openstudy.com/updates/4f27360ae4b0a2a9c267bc6d","timestamp":"2014-04-16T16:17:50Z","content_type":null,"content_length":"44696","record_id":"<urn:uuid:e918c964-0483-4097-a5f8-4480cbab9740>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
Quickly, help please!

January 14th 2006, 11:06 AM
Quickly, help please!
An easy problem, PLEASE answer quickly. Andrew, Boris, Cedric and Dan are collecting old paper. Together they collected 288 kg of it. How much paper did each of them collect if we know that Andrew collected 36 kg more than Boris, which equals 3/4 the amount which Boris and Cedric collected together, and if we know that Dan collected twice as much paper as Cedric. A complete explanation including all the algebra stuff. Thanks to all of you :)

January 14th 2006, 11:29 AM
I need to hand it in tomorrow and I have to go to sleep in 1 hour. Hurry please.

January 14th 2006, 11:47 AM
Originally Posted by diamondfox
An easy problem, PLEASE answer quickly. Andrew, Boris, Cedric and Dan are collecting old paper. Together they collected 288 kg of it. How much paper did each of them collect if we know that Andrew collected 36 kg more than Boris, which equals 3/4 the amount which Boris and Cedric collected together, and if we know that Dan collected twice as much paper as Cedric. A complete explanation including all the algebra stuff. Thanks to all of you :)

I can do this now because its Sunday morning here and no work for me, but I am slow at typing (one-finger typing), so it will be s-l-o-w-l-y.
A = number of kg of old papers collected by Andrew
B = ....by Boris
C = ....by Cedric
D = ....by Dan
A +B +C +D = 288 --------(1)
A = B +36 -----------------(2)
A = (3/4)(B +C) ----------(3)
D = 2C ------------------(4)
4 equations, 4 unknowns, solvable.
One way to continue is this: The C is shown on 3 of the 4 equations, so we use that C. We eliminate A, B, D.
A from (2) = A from (3),
B +36 = (3/4)(B +C)
Clear the fraction, multiply both sides by 4,
4B +144 = 3B +3C
4B -3B = 3C -144
B = 3C -144 -------------(5)
Substitute that into (2),
A = (3C -144) +36
A = 3C -108 --------------(6)
Substitute those, and D=2C, into (1),
(3C -108) +(3C -144) +C +2C = 288
3C +3C +C +2C = 288 +108 +144
9C = 540
C = 540/9 = 60 kg -----------answer.
A = 3C -108 = 180 -108 = 72 kg ----------answer.
B = 3C -144 = 180 -144 = 36 kg ------answer.
D = 2C = 120 kg ---------------------answer.

January 14th 2006, 11:53 AM
Originally Posted by diamondfox
Andrew, Boris, Cedric and Dan are collecting old paper. Together they collected 288 kg of it. How much paper did each of them collect if we know that Andrew collected 36 kg more than Boris, which equals 3/4 the amount which Boris and Cedric collected together, and if we know that Dan collected twice as much paper as Cedric.

The hardest part is translating the information into mathematics. I'll call: Andrew = A, Boris = B, Cedric = C, Dan = D.
Together they collected 288 kg of it => A + B + C + D = 288
Andrew collected 36 kg more than Boris => A = B + 36
which equals 3/4 the amount which B&C collected together => A = 3/4(B+C)
Dan collected twice as much paper as Cedric => D = 2C
This gives the following system of four linear equations in four unknowns:
$\left\{ \begin{gathered} A + B + C + D = 288 \hfill \\ A = B + 36 \hfill \\ A = 3/4\left( {B + C} \right) \hfill \\ D = 2C \hfill \\ \end{gathered} \right.$

January 14th 2006, 12:05 PM
Thank you! It's wonderful how people help each other on these forums. Thanks again :) ticbol and TD! and thanks to all forum users.
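As a quick check, the values can be substituted back into the four equations: $72+36+60+120=288$, $72=36+36$, $\tfrac{3}{4}(36+60)=\tfrac{3}{4}\cdot 96=72$, and $120=2\cdot 60$, so $(A,B,C,D)=(72,36,60,120)$ satisfies the whole system.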
{"url":"http://mathhelpforum.com/algebra/1624-quickly-help-please-print.html","timestamp":"2014-04-20T12:32:23Z","content_type":null,"content_length":"8339","record_id":"<urn:uuid:0e7b3787-bf50-4c20-b10b-4174eb247660>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00264-ip-10-147-4-33.ec2.internal.warc.gz"}
Creation of entanglement simultaneously gives rise to a wormhole
A diagram of a wormhole, a hypothetical "shortcut" through the universe, where its two ends are each in separate points in spacetime. Credit: Wikipedia
Quantum entanglement is one of the more bizarre theories to come out of the study of quantum mechanics—so strange, in fact, that Albert Einstein famously referred to it as "spooky action at a distance."
Essentially, entanglement involves two particles, each occupying multiple states at once—a condition referred to as superposition. For example, both particles may simultaneously spin clockwise and counterclockwise. But neither has a definite state until one is measured, causing the other particle to instantly assume a corresponding state. The resulting correlations between the particles are preserved, even if they reside on opposite ends of the universe.
But what enables particles to communicate instantaneously—and seemingly faster than the speed of light—over such vast distances? Earlier this year, physicists proposed an answer in the form of "wormholes," or gravitational tunnels. The group showed that by creating two entangled black holes, then pulling them apart, they formed a wormhole—essentially a "shortcut" through the universe—connecting the distant black holes.
Now an MIT physicist has found that, looked at through the lens of string theory, the creation of two entangled quarks—the building blocks of matter—simultaneously gives rise to a wormhole connecting the pair.
The theoretical results bolster the relatively new and exciting idea that the laws of gravity holding together the universe may not be fundamental, but arise from something else: quantum entanglement.
Julian Sonner, a senior postdoc in MIT's Laboratory for Nuclear Science and Center for Theoretical Physics, has published his results in the journal Physical Review Letters, where it appears together with a related paper by Kristan Jensen of the University of Victoria and Andreas Karch of the University of Washington.
The tangled web that is gravity
Ever since quantum mechanics was first proposed more than a century ago, the main challenge for physicists in the field has been to explain gravity in quantum-mechanical terms. While quantum mechanics works extremely well in describing interactions at a microscopic level, it fails to explain gravity—a fundamental concept of relativity, a theory proposed by Einstein to describe the macroscopic world. Thus, there appears to be a major barrier to reconciling quantum mechanics and general relativity; for years, physicists have tried to come up with a theory of quantum gravity to marry the two fields.
"There are some hard questions of quantum gravity we still don't understand, and we've been banging our heads against these problems for a long time," Sonner says. "We need to find the right inroads to understanding these questions."
A theory of quantum gravity would suggest that classical gravity is not a fundamental concept, as Einstein first proposed, but rather emerges from a more basic, quantum-based phenomenon. In a macroscopic context, this would mean that the universe is shaped by something more fundamental than the forces of gravity. This is where quantum entanglement could play a role.
It might appear that the concept of entanglement—one of the most fundamental in quantum mechanics—is in direct conflict with general relativity: Two entangled particles, "communicating" across vast distances, would have to do so at speeds faster than that of light—a violation of the laws of physics, according to Einstein. It may therefore come as a surprise that using the concept of entanglement in order to build up space-time may be a major step toward reconciling the laws of quantum mechanics and general relativity. Tunneling to the fifth dimension In July, physicists Juan Maldacena of the Institute for Advanced Study and Leonard Susskind of Stanford University proposed a theoretical solution in the form of two entangled black holes. When the black holes were entangled, then pulled apart, the theorists found that what emerged was a wormhole—a tunnel through space-time that is thought to be held together by gravity. The idea seemed to suggest that, in the case of wormholes, gravity emerges from the more fundamental phenomenon of entangled black holes. Following up on work by Jensen and Karch, Sonner has sought to tackle this idea at the level of quarks—subatomic building blocks of matter. To see what emerges from two entangled quarks, he first generated quarks using the Schwinger effect—a concept in quantum theory that enables one to create particles out of nothing. More precisely, the effect, also called "pair creation," allows two particles to emerge from a vacuum, or soup of transient particles. Under an electric field, one can, as Sonner puts it, "catch a pair of particles" before they disappear back into the vacuum. Once extracted, these particles are considered entangled. Sonner mapped the entangled quarks onto a four-dimensional space, considered a representation of space-time. In contrast, gravity is thought to exist in the next dimension as, according to Einstein's laws, it acts to "bend" and shape space-time, thereby existing in the fifth dimension. To see what geometry may emerge in the fifth dimension from entangled quarks in the fourth, Sonner employed holographic duality, a concept in string theory. While a hologram is a two-dimensional object, it contains all the information necessary to represent a three-dimensional view. Essentially, holographic duality is a way to derive a more complex dimension from the next lowest dimension. Using holographic duality, Sonner derived the entangled quarks, and found that what emerged was a wormhole connecting the two, implying that the creation of quarks simultaneously creates a wormhole. More fundamentally, the results suggest that gravity may, in fact, emerge from entanglement. What's more, the geometry, or bending, of the universe as described by classical gravity, may be a consequence of entanglement, such as that between pairs of particles strung together by tunneling wormholes. "It's the most basic representation yet that we have where entanglement gives rise to some sort of geometry," Sonner says. "What happens if some of this entanglement is lost, and what happens to the geometry? There are many roads that can be pursued, and in that sense, this work can turn out to be very helpful." More information: Holographic Schwinger Effect and the Geometry of Entanglement, prl.aps.org/abstract/PRL/v111/i21/e211603 2.2 / 5 (10) Dec 05, 2013 entangled black holes ... I love those Justin Wilson 4.2 / 5 (5) Dec 05, 2013 I love theoretical physics, but I'll wait until someone designs an experiment to test it. 
1.7 / 5 (9) Dec 05, 2013 Ok, this may sound a bit, well, trekkie. But is it possible, with the discovery of the wormhole, that a SINGLE particle is occupying both spaces at once? 1.2 / 5 (9) Dec 05, 2013 Read Lee Smolin for his background independent physics, an interpretation of which could be that there is no distance difference place in this Universe, that separation is due to our limited perceptions of dimensions beyond our three, and that 'time' is an entirely human construct to help us think about a no-state Universe process. 1.8 / 5 (10) Dec 05, 2013 So I'm wondering if what they are saying is all Higgs Bosons were quantum entangled at creation. And are basically all connected by strings (gravity) which stretch over distance. Which is why they impart mass and gravity to atoms. So basically this is the quantum gravity/string theory? I am no theoretical physicist, is this what they are saying? So basically all mass and therefor all gravity was established at the big bang, and this is why there is a gravitational constant. Because there is a set amount of mass and a set amount of gravity in the universe. It is merely being stretched to different lengths but the total quantity of the force doesn't change. So this also means anti-gravity is impossible, unless there exists an anti Higgs particle? 1.4 / 5 (10) Dec 05, 2013 I think they are wrong. The 'shortcut' already exists for photons without the need of wormholes. At the speed of light time and distance cease to exist so that the point of emission and absorption of a photon effectively occur at adjacent points in space regardless of the distance measured by observers not travelling with the photon. Time, relative to a space observer is also frozen for black holes and thus the distance between two black holes is zero as measured by the observer at the event horizon despite the separation distance as measured by any other observer. No wormholes in space are necessary...leave them in the garden... 1.4 / 5 (10) Dec 05, 2013 This is all chicken feather voodoo physics. Just more cr@p to add to the mountain of cr@p that modern theoretical physics has been sitting on for some time now. It's pathetic. Sorry. The actual truth is that particles do not communicate over vast distances. There is simply no distance, period. Google "Why Space (Distance) Is an Illusion" if you're interested. met a more fishes 2 / 5 (10) Dec 05, 2013 I like John comment although I think its more along the lines of two points in space can be the same point in space under certain circumstances. Ichisan I believe is alluding to this. Not only does space not exist as we conceive of it, neither does time, or mass for that matter. met a more fishes 1.3 / 5 (8) Dec 05, 2013 The question of why these phenomena seem to exist at the time/size scale we term macroscopic is also a big question and I'm thinking intricately linked to "perception" 3 / 5 (8) Dec 05, 2013 Seems like creating many, many entanglements in the lab might produce an increase in local gravity. Should be very testable. 2.6 / 5 (9) Dec 06, 2013 It certainly is interesting that a photon, which is 'produced' by an electron's change in energy, does not experience time, which suggests for the photon there is no distance and all phenomena are implicitly entangled but, perhaps at a very subtle level and this is the issue of gravity - as alluded to by some here. 
The entanglement observed in the lab could well be a far less subtle form but, still does not let us get past the issue of relativity in terms of the time of information transfer from our reference It is a great mind-bender to place oneself upon the photon (and other particles) from the context of special relativity and especially so after consuming a couple of glasses of red wine in good female company, expending biochemical energy ;-) then have the fortunate experience of a powerful vivid dream leading to surrealist lucid dreams ! 1.3 / 5 (7) Dec 06, 2013 Phothon is the one who is actually making distances and time. The distane is measured by time it takes for photon to travel and time is measured by distance photon travels in time. 1.3 / 5 (7) Dec 06, 2013 Not only entangled particles form wormhole, also single particle can occupy big volume in space and instantly collapse (collect all his energy into tiny volume) and that happens also instantly, outside of space. This is what create the famous particle -wave duality and explain all related experiments (double slit!). 1.4 / 5 (10) Dec 06, 2013 You may downvote me above to your heart's content but it's still a bunch of cr@p. Do you people even know that nothing can move in spacetime, by definition? Apparently not. And do you know what a changeless spacetime means to wormholes? Never mind. You all just love this pathetic Star Trek voodoo physics. It's a religion to you people. A religion of cr@p, that is. Fire away. 1.9 / 5 (9) Dec 06, 2013 Going back to what I said earlier, and if it is correct that gravity is simply the quantum entanglement of all higgs bosons due to the big bang. All we have to do is find a way to break the entanglement of all the higgs in a given amount of matter at the same time and that would be anti gravity. This also makes unified field theory impossible as gravity is just entanglement of the Higgs it is not a force per se. If you did manage to undo the entanglement on one of them. Would all the others instantly become untangled as well? Causing the universe to fly apart? Maybe this is what dark matter is. Matter made up of Higgs that are for some reason not entangled with the rest and therefore outside the laws of gravity. Or possibly even anti-higgs/anti-gravity bosons explaining at least where some of the anti matter that should be in the universe is. 5 / 5 (5) Dec 06, 2013 You may downvote me above to your heart's content but it's still a bunch of cr@p. Do you people even know that nothing can move in spacetime, by definition? Apparently not. And do you know what a changeless spacetime means to wormholes? Never mind. You all just love this pathetic Star Trek voodoo physics. It's a religion to you people. A religion of cr@p, that is. Fire away. If you know so much about this subject, why are you wasting time posting here? Write a book, get famous and rich and make us all look stupid. Go on now, get going. Hurry up, I'm waiting. 1 / 5 (6) Dec 06, 2013 Looks to me like stars and black holes are the creators of everything. 1 / 5 (9) Dec 06, 2013 If you know so much about this subject, why are you wasting time posting here? Write a book, get famous and rich and make us all look stupid. Go on now, get going. Hurry up, I'm waiting. There's no need for me to write a book and I have no desire to get rich or famous. The undeniable fact is that nothing can move in Einstein's spacetime which makes claims of communication via wormholes look stupid indeed. Look it up. I did not make this stuff up. 
It's the dirty little secret of spacetime physics that nobody wants to talk about because it would make many big names in physics look bad. 2.5 / 5 (8) Dec 06, 2013 ichisan raised a potentially contentious issue with The undeniable fact is that nothing can move in Einstein's spacetime... Given Einstein developed special relativity (SR) which describes change of rate of time whilst in motion (and therefore Einstein confirms relative motion) then can you ichisan, reconcile the points raised , ie. In summary:- 1. There is no motion in Einstein's spacetime, per your claim 2. Einstein's predictions re SR & motion are experimentally proven to a high degree. *and* of course 3. We observe motion and in Minkowski space too which also confirms Einstein's SR. Can you articulate please why you so sternly believe there is no such motion in Einstein's spacetime to resolve the above seeming paradox of item 1, 2 & 3 ? 1 / 5 (9) Dec 06, 2013 Mike_Massen, time cannot change by definition. Why? Because a change in time necessarily implies a rate of change which would have to be given as v = dt/dt, which is nonsensical. So the idea that time is relative or can dilate is nonsense. What is one to make of SR's time dilation? In my opinion, time dilation is the biggest misnomer in science. It is not time that dilates but the clocks that slow down for whatever reason. GR and SR are pure mathematical theories, just like Newtonian physics before them. As such, they explain nothing. Math can only describe observed phenomena; it does not explain them. For that, one needs a causal theory, which is sorely lacking in physics. It is easy to show with simple logic that neither space nor time exists. By requiring the existence of a time dimension, Einstein's physics has retarded progress in the field by at least a century. All it gave us is a little bit more accuracy at the expense of being mired in deep 1.4 / 5 (9) Dec 06, 2013 Mike, you say that Einstein's theories have been confirmed to a high degree but this is not completely true. Einstein's physics squarely contradicts the reality of quantum entanglement (action at a distance). Take the electric field around an electron. Einstein's physics predicts the electric field changes at the speed of light. But this has never been shown. The more likely truth is that the electric field is a non-local phenomenon and that the entire field, regardless of how far away it is from the electron, will move precisely in lock step with the electron. And when all is said is done, it will be shown that gravity, too, is non-local and instantaneous, just as Newton's gravity equation assumed. 1 / 5 (6) Dec 07, 2013 guys, einstein helped build the very foundation of quantum mechanics even though he did not like what it suggested. "spooky action at a distance" ... he also showed the implausability of a wormhole due that even if it instantaneousely happened it would immediately collapse. Look up einstein rosen bridge. 1 / 5 (4) Dec 08, 2013 It's all wormholes, all the way down. 2.1 / 5 (7) Dec 08, 2013 - Can you maintain focus, I referred only Einstein re SR re Item 2 my comment. - Observations confirm distance & time & reproducible relational time aspect. - You claim 'definition' re time hmmm but, bear in mind best fit for SR observations is change in time rate, easy to conceptualise, no need to get stuck on 'definition'. - You may not like existing maths but, concept is plausible & consistent with radioactive decay (RD) rates. 
- Maths re SR & with RD work; Keating experiment, GPS etc, you have alternate maths fitting observations ? Please present instead of making weird unscientific comments. - Your attempt to vilify current maths re time does not invalidate myriad SR & GR confirmations. Eg. Re GR, If I move an instrument/reaction/isotope etc to Inertial Ref Frame (IRF) of high gravitational field (GF) for long enough to register differential & move it back to IRF of low GF it acts *exactly* as if rate of time has changed! If it quacks like a duck, then ? Alternative ? 1 / 5 (6) Dec 08, 2013 Can be photons entangled by wormhole, if they're massless? 1 / 5 (9) Dec 08, 2013 Massen, you're wasting both my time and your time. See you around. 2.8 / 5 (9) Dec 08, 2013 ichisan blurted a sense of weird misplaced anger with Massen, you're wasting both my time and your time. See you around. - Please focus on the Science & for those others observing your odd response? - Please articulate the mathematics which correlate well with observation but, are different to Einstein's SR which you claim are so incorrect ? - Perhaps you need a good day to cool down, you do seem angered severely - why ? - Looking forward to YOU following the highest ethic & integrity of Science to not only describe your theory well but lay the foundation for an Adult & mature dialectic ? - Are you up for the straightforward challenge then ichisan & without odd anger ? - Looking forward to your well thought out & considered theory citing, where possible mathematics, along with as many peer reviewed references you can muster ? - In the meantime, this might be of interest to you & all those watching your responses. vlaaing peerd not rated yet Dec 09, 2013 OK oK ok ok, not read the article yet, just the header. ... This is going to be tasty, if not the article itself, the discussion will. vlaaing peerd not rated yet Dec 09, 2013 ok, halfway through the article: " gravity is thought to exist in the next dimension as, according to Einstein's laws, it acts to "bend" and shape space-time, thereby existing in the fifth Wth is that kind of conclusion? Who made this up? Certainly not Einstein anyhow... vlaaing peerd not rated yet Dec 09, 2013 Finished it. Nice to hear something from the string theory corner, which at the least usually have an interesting theory to look at matter. And it doesn't even sound very impossible to comprehend as a layman (not that that brings any validity to the theory). I hope this won't be buried in 10 years of silence and scientists are able to put some of these things to the test, it would be really interesting to know more about this stuff. vlaaing peerd 4 / 5 (1) Dec 09, 2013 my opinion, As such, is deep crackpottery there, fixed it for ya. Sorry mate, but I already had my fair share of Einstein-deniers today And I actually thought at your first post you might have something sensible to say about shortcomings of theoretical science. Doesn't the very idea of knowing it better than Einstein without having any tangible form of knowledge to explain a reasonable alternative make you wonder about yourself? 1 / 5 (6) Dec 09, 2013 You want to know why China killed their intellectuals in the shake out of the revolution? Because they knew that thanks to our magnetar sun during its delayed magnetic reversal, there would be more. I'm a lot more interested in the magnetic reversal hanging for another decade or two and reading what our brilliant babies have to say. 
1 / 5 (4) Dec 10, 2013 Does anyone else have some site called science x that is asking to make an account just to see your physorg profile ? 2 / 5 (4) Dec 10, 2013 Does anyone else have some site called science x that is asking to make an account just to see your physorg profile ? 1 / 5 (4) Dec 10, 2013 Does anyone else have some site called science x that is asking to make an account just to see your physorg profile ? I disabled it and it still redirects to science x. Every link under logged in as: goes to that stupid site :( 1 / 5 (4) Dec 10, 2013 Features & Advantages of ScienceX Your member profile page: access your favorite features and change your details and your password. LOL, screw physorg!!!! 3.7 / 5 (3) Dec 10, 2013 Features & Advantages of ScienceX Your member profile page: access your favorite features and change your details and your password. LOL, screw physorg!!!! Even if logged in, user profiles are 404 and your activity likewise. Work-in-progress? 2.3 / 5 (3) Dec 11, 2013 Our universe is just one of an infinite number of others in the Grand Multiverse. Time is infinite, both forward and backwards and possibly in other directions we can't imagine. There are multiple dimensions. Our three dimensional Euclidean existence is only a partial view. The energy that created our universe had to come from somewhere. It has been proven that energy never ceases to exist, it only changes form. In a multiverse, energy is infinite and unlimited. It moves/changes form from one 'verse to another as needed. We are also energy and the same rules apply. I don't accept the current viewpoint of most physicists about dark matter. When energy is outside our 3d existence and resides in another 'plane', it still has a connection. One of these is gravity. 2.3 / 5 (3) Dec 11, 2013 The fact that quantum entanglement creates a tunnel or wormhole makes perfect sense. This tunnel has to exist in some other dimension. So these extra dimensions are actually the glue that holds everything else together. 3 / 5 (2) Dec 11, 2013 Falaco solitons illustrate well the entanglement mediated by worm hole. My problem with it is, such a worm hole can be created just around massive bodies, which the photons aren't. Therefore, by this model the entanglement between photons shouldn't be possible - but we are still observing it. Whydening Gyre 2.3 / 5 (3) Dec 16, 2013 http://www.ussdiscovery.com/FalacoSystem.gif illustrate well the entanglement mediated by worm hole. My problem with it is, such a worm hole can be created just around massive bodies, which the photons aren't. Therefore, by this model the entanglement between photons shouldn't be possible - but we are still observing it. We've all heard of nano black holes by now. Are photons really just nano WHITE holes? not rated yet Dec 20, 2013 Those wormhole paths are 10,000 times shorter than normal space: http://www.extrem...an-light Captain Stumpy not rated yet Dec 22, 2013 Whydening Gyre says We've all heard of nano black holes by now. Are photons really just nano WHITE holes? I don't think that is possible. A white hole ejects mass and light, and a photon has zero rest mass…. If the best example of a white hole is the Big Bang, then a nano white hole, it seems, would have to eject at least SOME mass. At least, IMHO anyway. If this is wrong, I would love to hear why… especially by someone like Q-Star or someone in the field! 
5 / 5 (1) Dec 22, 2013 @Captain Stumpy, Not that I am interested or in any way suggest your logic is congruent but any mass *or* energy exiting a system, star, planetary body will reduce its mass as they are bound by E=mc^2 As you say 'photons have zero rest mass' but, they aint at rest. They have energy, this may have arisen any number of ways prior to leaving a hole or sun or planet etc. So knowing the mass equivalent of a photon (@ particular frequency) then from E=mc^2 & one or more other formulae that escape me for moment (nice chicken massala with much beer), you can work out how much mass is leaving a body by knowing its energy etc. However, from special relativity a photon doesn't actually exist as it experiences no time as it travels on what is called a 'null geodesic', from our frame of reference the photon is; generated, travels & is absorbed. In the photons reference frame the cause & the effect are coincident & therefore as far as the time issue is concerned the two happen simultaneously. Whydening Gyre 5 / 5 (3) Dec 22, 2013 Technically, I'm in way over my head, here. But, as an artist I can attempt to visualize... Light=energy. Speed of light is the point where energy has been affected/slowed enough to become visible in our Universe as - light. Which then, by a series of complicated interactions between energy and the absence of energy that I'm not smart enough to detail, eventually becomes matter. It's a matter of "friction", if you will, between light and space that does the "work". I must therefore draw the conclusion that "space" is not "empty" - it just LOOKS that way... Damn - did I just sort of agree to Zeph's AWT? Captain Stumpy not rated yet Dec 22, 2013 thanks for the feedback… I am trying to learn. @Whydening Gyre According to what I have learned SO far… Space is not technically empty, as far as QM is concerned. It continually has a charge because virtual particles are constantly popping in and out of existence. This is according to Quantum theory, which has SCADS of proof, experimental and empirical data, unlike AWT. kThis link talks about empty space not being empty as well as the Casimir force which is due to space not being empty. Whydening Gyre 5 / 5 (2) Dec 22, 2013 thanks for the feedback… I am trying to learn. @Whydening Gyre According to what I have learned SO far… Space is not technically empty, as far as QM is concerned. It continually has a charge because virtual particles are constantly popping in and out of existence. This is according to Quantum theory, which has SCADS of proof, experimental and empirical data, unlike AWT. The "charge" being a wave generated by all those virtual particles interacting with light (when they happen to appear in the vicinity of it)...? I, also, am trying to learn... But it is LOOKING like QM is a possible proof of AWT.... Captain Stumpy not rated yet Dec 22, 2013 This link talks about empty space not being empty as well as the Casimir force which is due to space not being empty. Captain Stumpy 5 / 5 (2) Dec 24, 2013 Whydening Gyre But it is LOOKING like QM is a possible proof of AWT.... Sean Carroll actually comments about how some people have said that the existence of the Higgs field makes it look like the AWT thoery has credence, BUT also tells a bit about how it is totally different. when i find the link, i will post it. You can get his book on the Higgs and it is VERY detailed and explanatory. i loved it! 
also, you can watch some of his you-tube video's (there may be some where he explains the Higgs field, and relates it to AWT). the AWT theory was disproved, and until i see EMPIRICAL data, with experiments that can be done, then i will firmly side with the standard model... HOWEVER, i also keep an open mind. i go where the evidence points, not where i want the evidence to take me. part of being a good investigator. 5 / 5 (3) Dec 26, 2013 Captain Stumpy offered ..keep an open mind. i go where the evidence points, not where i want the evidence to take me... Admirable but, evidence by itself is dry, free of expectation & passion & hypothesis. I do appreciate your approach but, in order to make sense of evidence in some framework then imagination is essential & the more support of foundation/scaffolding then the more such physics has cohesiveness regardless of whether it appears to make sense. Eg. Probabilistic issues, which is what I infer from the so called Casimer effect. To me, any particle type is a "unit" which 'excludes' a certain range of probabilities. ie. It is REAL because it has excluded those virtually infinite probabilities which appear present in so called empty space & this is why the Casimir effect works ! In my universe, the degree to which a particle excludes more probabilities gives it more mass - as mass is a measure of "fixing" virtually infinite probabilities of empty space in the 'present'. Whydening Gyre 5 / 5 (4) Dec 26, 2013 Mike and Cap'n. Both way cool responses! I am not a scientist. However, as an artist my imagination is on fire! A couple of questions on Casimir effect. Virtual units appear between 2 closely aligned metal plates. Does that imply - a. the virtual particles are always there, but the metal (mass) causes them to slow down enough to (electro-magnetically) interact with matter thus creating energy? b. Or are the virtual particles products of the two plate's mass and proximity? Either way it sounds like an attraction process similar to magnetism... Also... We mathematically measure (particalize) space in order to measure it (ie-Planck space). If we can do that, then doesn't that really mean any space is a "virtual particle"? And ALL space is a wave of those "particles" constantly in motion? From what I surmise from the Casimir effect, any empty space has a certain amount of charge potential. That would mean it has something in it to carry/hold that potential, right? Captain Stumpy 5 / 5 (1) Dec 26, 2013 @Mike_Massen writes: Admirable but, evidence by itself is dry, free of expectation & passion & hypothesis. sorry, I make the approach I do because of my training. I am a professional investigator, retired. Old habits die hard. @Whydening Gyre Here is a link that may help explain it better than I can... Captain Stumpy 5 / 5 (1) Dec 26, 2013 @Whydening Gyre i wish i could answer you better, but i dont have the training, really. most of what i can give is my perspective, opinion....like: about space and virtual particles? according to what i have read and my interpretation of it, EMPTY space is just not as empty as we think, because empty space has virtual particles popping into and out of existence continuously. Therefore we cannot directly measure these virtual particles popping in and out, but we CAN measure the energy that they leave behind when they do. can anyone else help with this? Whydening Gyre 5 / 5 (2) Dec 27, 2013 @Whydening Gyre i wish i could answer you better, but i dont have the training, really. 
most of what i can give is my perspective, opinion....like: about space and virtual particles? according to what i have read and my interpretation of it, EMPTY space is just not as empty as we think, because empty space has virtual particles popping into and out of existence continuously. Therefore we cannot directly measure these virtual particles popping in and out, but we CAN measure the energy that they leave behind when they do. can anyone else help with this? If I accept the idea of VP popping in and out, my next question is - do they do it randomly or in a synchronized pattern? From both a geometric and timing perspective? Captain Stumpy 5 / 5 (2) Dec 27, 2013 If I accept the idea of VP popping in and out, my next question is - do they do it randomly or in a synchronized pattern? From both a geometric and timing perspective? well, not being a physicist I can only tell you what I think I understand thus far... and I would have to state that it is a random pattern, based upon the random-ness of quantum mechanics. here is a link that may help you understand more: take a look ... I am going through the pages now, whenever I get time to sit and read. hope that helps
{"url":"http://phys.org/news/2013-12-creation-entanglement-simultaneously-wormhole.html","timestamp":"2014-04-18T00:54:51Z","content_type":null,"content_length":"145673","record_id":"<urn:uuid:764d781f-e48a-4eda-9343-1a65d0e4252a>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00367-ip-10-147-4-33.ec2.internal.warc.gz"}
Silver Spring, MD Geometry Tutor Find a Silver Spring, MD Geometry Tutor ...More specifically, I have successfully completed algebra 1 & 2 classes, calculus 1, 2 & 3, general chemistry, physical chemistry, analytical chemistry, organic chemistry and college physics 1 & 2. I spend some of my free time tutoring classmates to help them understand concepts they are having p... 18 Subjects: including geometry, chemistry, physics, calculus ...I have availability on most days, as well as Saturday and Sunday afternoons.Algebra, even maths in general, is all about practice. Once you master it, it is not like other subjects the knowledge and skill will stay with you forever. So I will help you to master it. 15 Subjects: including geometry, physics, calculus, algebra 1 ...I received my BA in Political Science and am well-educated regarding Constitutional Law, the history of American Government, the contemporary political process, and many of the major issues debated in the political sphere today. I also have a strong background in political philosophy. I have al... 21 Subjects: including geometry, reading, writing, English ...As a Mechanical Engineer, I am extremely familiar with all types of Mechanical Physics problems. I am also familiar with the basics of EM Physics. I frequently use calculus in my profession, and I am thus quite adept with precalculus math. 32 Subjects: including geometry, reading, algebra 2, calculus ...While pre-algebra teaches students many different fundamental algebra topics, precalculus does not involve calculus per se; rather, it explores topics that will be applied in calculus. Examples of topics covered include: composite functions, polynomial functions, rational functions, trigonometr... 17 Subjects: including geometry, chemistry, ASVAB, calculus
{"url":"http://www.purplemath.com/silver_spring_md_geometry_tutors.php","timestamp":"2014-04-16T07:22:35Z","content_type":null,"content_length":"24401","record_id":"<urn:uuid:39f7bddb-c44f-4aeb-9dc0-9a8f1f4279b2>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00356-ip-10-147-4-33.ec2.internal.warc.gz"}
Injective, surjective, inclusion and exclusion question
February 14th 2011, 03:17 AM #1
Injective, surjective, inclusion and exclusion question
Hi, I have the following question from an old exam paper I'm stuck on; it's a three-part question and goes as follows:
(b) Which of the following functions are surjective, injective or bijective? If it is bijective, write down the inverse function. (Justify your answers.)
(i) f : Z12 → Z12 : f([a]12) = [a]12 · [5]12;
(ii) f : Z12 → Z12 : f([a]12) = [a]12 · [3]12. [ 10 marks ]
(c) Let A be the set of 3-digit decimal integers {000, 001, 002, . . . , 999}. Use the Inclusion-Exclusion Principle to find how many elements of A are not divisible by 2, nor by 3, nor by 5. (It may be useful to denote by A2 the set of 3-digit decimal integers that are divisible by 2.)
For the first question I have tried to find examples of this in text books but with no luck. I realise that a one-one function is injective and is described as a function for which different inputs give different outputs, and I am fine in the case of f: N->N where f(x)=x^2, but with the format in which the question is shown above I haven't got a clue.
For the inclusion and exclusion I am aware it will be something to do with the |A∪B∪C| equation, letting A = divisible by 2, B = divisible by 3, etc., and then finding the relevant |A|, |B|, etc. to plug into the equation?
Any help would be most appreciated, thanks in advance.
We have $f([x])=f([x'])\Rightarrow [5][x]=[5][x']$ As $[5][5]=[1]$ , multiplying both sides by $[5]$ we deduce $[x]=[x']$ and $f$ is injective. On the other hand, $f([x])=5[x]=[y]$ implies $[x]=5[y]$, so $f$ is also surjective and $f^{-1}=f$ .
f : Z12 → Z12 : f([a]12) = [a]12 · [3]12. Now, this one is different because there is no $[a]\in\mathbb{Z}_{12}$ such that $[a][3]=[1]$ . Try yourself.
Fernando Revilla
We have $f([x])=f([x'])\Rightarrow [5][x]=[5][x']$ As $[5][5]=[1]$ , multiplying both sides by $[5]$ we deduce $[x]=[x']$ and $f$ is injective. On the other hand, $f([x])=5[x]=[y]$ implies $[x]=5[y]$, so $f$ is also surjective and $f^{-1}=f$ .
Now, this one is different because there is no $[a]\in\mathbb{Z}_{12}$ such that $[a][3]=[1]$ . Try yourself.
I had a side question that follows this topic ... is what you wrote due to multiplicative inverses and zero divisors?
Fernando Revilla
We have $f([x])=f([x'])\Rightarrow [5][x]=[5][x']$ As $[5][5]=[1]$ , multiplying both sides by $[5]$ we deduce $[x]=[x']$ and $f$ is injective. On the other hand, $f([x])=5[x]=[y]$ implies $[x]=5[y]$, so $f$ is also surjective and $f^{-1}=f$ .
Now, this one is different because there is no $[a]\in\mathbb{Z}_{12}$ such that $[a][3]=[1]$ . Try yourself.
Fernando Revilla
Oh I understand, yes, many thanks, so the inverse is [5]... we haven't gone over anything along these lines so I'm a bit like a duck out of water. For part two would you use [4], since you can't find the [a] that gives [1], and instead do [4], which gives [0]? Again I apologise as I haven't seen a question like this before.
For the second question I went back over it again and found P(divisible by 2), P(divisible by 3), P(divisible by 5) to obtain the inclusion-exclusion formula: P(A∪B∪C) = 1/2+1/3+1/5-1/6-1/10-1/15+1/30 = 11/15. Then 1 - P(A∪B∪C) to get P(not divisible by 2 or 3 or 5).
We have $f([x])=f([x'])\Rightarrow [5][x]=[5][x']$ As $[5][5]=[1]$ , multiplying both sides by $[5]$ we deduce $[x]=[x']$ and $f$ is injective. On the other hand, $f([x])=5[x]=[y]$ implies $[x]=5[y]$, so $f$ is also surjective and $f^{-1}=f$ .
Now, this one is different because there is no $[a]\in\mathbb{Z}_{12}$ such that $[a][3]=[1]$ . Try yourself.
Fernando Revilla
I have managed the injective part of the second question: since injectivity means f(a)=f(b) --> a=b, f([a]12) = [a]12 · [3]12 is not injective, since we can let a=0 and a=4, so not every element in the domain gives a distinct image? However I'm struggling to show whether it is surjective or not; I have to show that for every value in the codomain there exists a value in the domain mapping to it?
However I'm struggling to show whether it is surjective or not; I have to show that for every value in the codomain there exists a value in the domain mapping to it?
A function from a finite set to itself (or to another set with the same number of elements) is an injection iff it is a surjection. You can multiply each number 0, ..., 11 by 3 and see which numbers are not produced.
The original problem asks not about the fraction of elements, but about the exact number of elements. One has to be careful: for example, the number of elements divisible by 3 is 334. In general, I believe, the number of elements divisible by n is $\lceil1000/n\rceil$. My answer to question 2 is 266.
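A quick brute-force check of both answers is easy to script. The following sketch is my addition rather than part of the thread; it confirms that multiplication by [5] permutes Z12 while multiplication by [3] does not, and that exactly 266 of the integers 000..999 avoid divisibility by 2, 3 and 5.

# Sketch: brute-force checks for both parts of the exam question (not from the thread itself).

# Part (b): multiplication maps on Z_12.
def image(multiplier):
    return {(a * multiplier) % 12 for a in range(12)}

print(sorted(image(5)))  # all 12 residues appear, so x -> [5][x] is a bijection on Z_12
print(sorted(image(3)))  # only {0, 3, 6, 9} appear, so x -> [3][x] is neither injective nor surjective

# Part (c): count 000..999 not divisible by 2, 3 or 5.
count = sum(1 for n in range(1000) if n % 2 and n % 3 and n % 5)
print(count)  # 266, matching the inclusion-exclusion answer in the last reply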
{"url":"http://mathhelpforum.com/discrete-math/171234-injective-surjective-inclusion-exclusion-question.html","timestamp":"2014-04-17T21:36:20Z","content_type":null,"content_length":"79126","record_id":"<urn:uuid:863ec20d-fa78-40dc-9111-bfff62f49947>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00269-ip-10-147-4-33.ec2.internal.warc.gz"}
Idledale Geometry Tutor ...Perhaps it's not possible for your teacher/instructor to provide the extra help that you need for certain reasons, I happened to be here to provide what they couldn't. Although I'm new to Wyzant, I'm not new to teaching/tutoring! I have tutored Mathematics for the last fifteen years. 15 Subjects: including geometry, calculus, algebra 1, French ...I have tutored Math and Statistics, professionally and privately, for 15 years. I am proficient in all levels of math from Algebra and Geometry through Calculus, Differential Equations, and Linear Algebra. I can also teach Intro Statistics and Logic. 11 Subjects: including geometry, calculus, statistics, algebra 1 ...I am looking forward to meeting you and getting the chance to tutor you. LinHaving a strong function on Algebra 1 is strategic important to every student. I usually focus on the way to make Algebra 1 study easier than it looks. 27 Subjects: including geometry, calculus, Chinese, physics ...I have taught this subject at Mountain Vista High School. It is one of my most tutored subjects. I have taught this subject at Mountain Vista High School. 24 Subjects: including geometry, calculus, elementary math, ACT Math ...I have a Bachelor's Degree in Biology with a minor in Chemistry and have successfully passed all levels of math up to college Calculus. I have tutored throughout my high school and college career from Pre-Algebra to Organic Chemistry. Come August, I will be starting my Master's in Education at University of Denver. 14 Subjects: including geometry, chemistry, biology, algebra 1
{"url":"http://www.purplemath.com/idledale_geometry_tutors.php","timestamp":"2014-04-20T04:38:21Z","content_type":null,"content_length":"23689","record_id":"<urn:uuid:00e86f14-05be-468c-b39f-1d449e589108>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Proposal for matrix_rank function in numpy
Bruce Southey bsouthey@gmail.... Tue Dec 15 13:24:43 CST 2009
On 12/15/2009 12:47 PM, Alan G Isaac wrote:
> On 12/15/2009 1:39 PM, Bruce Southey wrote:
>> +1 for the function but we can not shorten the name because of existing
>> numpy.rank() function.
> 1. Is it a rule that there cannot be a name duplication
> in this different namespace?
In my view this is still the same numpy namespace. An example of the potential problems is just using an incorrect import statement somewhere:
from numpy import rank
instead of
from numpy.linalg import rank
For a package you control, you should really prevent this type of user error.
> 2. Is there a commitment to keeping both np.rank and np.ndim?
> (I.e., can np.rank never be deprecated?)
I do not see that as practical because of the number of releases it takes to actually remove a function. Also the current rank function has existed for a very long time in Numerical Python (as it is present in Numeric). So it could be confusing for a user to think that the function has just been moved rather than being a different function.
> If the answers are both 'yes',
> then perhaps linalg.rank2d is a possible shorter name.
> Alan Isaac
> _______________________________________________
Actually I do interpret rank in terms of the linear algebra definition, but obviously other people have other meanings. I
More information about the NumPy-Discussion mailing list
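To make the naming collision concrete, here is a short illustrative snippet of my own (not from the email thread). It contrasts the two meanings of "rank": in current NumPy releases the old numpy.rank has been removed in favour of numpy.ndim, while the linear-algebra rank that this proposal concerned lives in numpy.linalg.matrix_rank.

# Sketch: the two meanings of "rank" discussed above (my addition, not from the thread).
import numpy as np

a = np.array([[1.0, 2.0],
              [2.0, 4.0]])   # second row is a multiple of the first

# "Rank" as number of dimensions (what the old numpy.rank() returned; use ndim today).
print(np.ndim(a))                  # 2

# "Rank" in the linear-algebra sense, the function proposed for numpy.linalg.
print(np.linalg.matrix_rank(a))    # 1, since the rows are linearly dependent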
{"url":"http://mail.scipy.org/pipermail/numpy-discussion/2009-December/047497.html","timestamp":"2014-04-18T20:52:52Z","content_type":null,"content_length":"4187","record_id":"<urn:uuid:d483f0a4-a675-48c4-adf0-7087b46748e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00582-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Help
April 3rd 2010, 11:25 AM #1
Hiya all, there's this question which I can't seem to get the right answer to: (3 + square root of 24) / (2 + square root of 6). You need to put it in the form (a + b*square root of c) / d. The answer is (6 - square root of 6) / 2, but I can't get it!! Thank you.
Multiply by its conjugate, which is $2-\sqrt6$ :
$\frac{3+2\sqrt6}{2+\sqrt6} \times \frac{2-\sqrt6}{2-\sqrt6} = \frac{(3+2\sqrt6)(2-\sqrt6)}{(2+\sqrt6)(2-\sqrt6)} = \frac{6-3\sqrt6+4\sqrt6-12}{4-6}$
I will leave you to simplify that.
Thank you
Hi katiethegreat,
$\frac{3+\sqrt{24}}{2+\sqrt{6}}=\frac{3+\sqrt{6(4)}}{2+\sqrt{6}}=\frac{3+\sqrt{6}\sqrt{4}}{2+\sqrt{6}}=\frac{3+2\sqrt{6}}{2+\sqrt{6}}$
To get 2 under the line instead of a surd, multiply the entire fraction by 1, in such a way that you get an integer under the line. This means getting rid of the surd part of the denominator. To do this, change the sign of one of the 2 values in the denominator to form this fraction.
$\frac{3+2\sqrt{6}}{2+\sqrt{6}}=\frac{3+2\sqrt{6}}{2+\sqrt{6}}\ \frac{2-\sqrt{6}}{2-\sqrt{6}}=\frac{3+2\sqrt{6}}{2+\sqrt{6}}\ \frac{\sqrt{6}-2}{\sqrt{6}-2}$
Either way leads to the answer.
$3+2\sqrt{6}=(2+\sqrt{6})(p+q\sqrt{6})=2(p+q\sqrt{6})+\sqrt{6}(p+q\sqrt{6})$
$=2p+2q\sqrt{6}+p\sqrt{6}+q(6)=(2p+6q)+(p+2q)\sqrt{6}$
Matching coefficients gives $2p+6q=3$ and $p+2q=2\ \Rightarrow\ 2p+4q=4$
Solving the simultaneous equations gives $2q=-1\ \Rightarrow\ q=-\frac{1}{2}$
$\Rightarrow\ \frac{3+2\sqrt{6}}{2+\sqrt{6}}=3-\frac{1}{2}\sqrt{6}=\frac{6-\sqrt{6}}{2}$
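For readers who want to verify the simplification by machine, here is a small SymPy sketch of mine (not part of the thread); radsimp performs the same denominator-rationalising step as multiplying by the conjugate.

# Sketch: checking the forum answer with SymPy (my addition, not from the thread).
from sympy import sqrt, radsimp, simplify

expr = (3 + sqrt(24)) / (2 + sqrt(6))

# radsimp rationalises the denominator, the "multiply by the conjugate" step above.
print(radsimp(expr))

# Confirm the expression equals the claimed answer (6 - sqrt(6))/2.
print(simplify(expr - (6 - sqrt(6)) / 2) == 0)   # True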
{"url":"http://mathhelpforum.com/algebra/137125-surds.html","timestamp":"2014-04-16T04:45:52Z","content_type":null,"content_length":"42539","record_id":"<urn:uuid:92d50351-f123-4c2c-ad61-2929c4b692cf>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: New Powerful Online Calculator. Replies: 0 New Powerful Online Calculator. Posted: Feb 15, 2005 7:46 PM Ultimate Solution is a powerful new online calculator capable of solveing equations. Type in any equation as simple or as complex as you want and Ultimate Solution will solve it for you, and best of all it will show you the steps! Ulitmate solution can be found at http://www.UltimateSolution.tk or at its longer but slighy faster loading adress Below are some features of Ultimate Solution: * Solves any equation in any form * Shows the steps to solve the equation * Equations can have as many variables as there are letters in the * Upper and lower case variables are treated seperatly so multiply above by 2. * Support Numerical Exponents as well as roots (write roots in the form ^(1/n) ) * Has support for both quadratic and cubic equations. * Supports imaginary numbers (i). * Use as many parenthesis as you want in any equation. * Have as many fractions with in fractions with in fractions as you want. * Type in just an expression and it will be simplifyed e.g. 2(3x+5) -> 6x+10 * Supports inequalities e.g. >= <= equations * Run at a remote sever so there is littel strain on your machine * Use with any machine that can connect to the internet * Solve as many equations as you want each day. * Great tool for learning math! submissions: post to k12.ed.math or e-mail to k12math@k12groups.org private e-mail to the k12.ed.math moderator: kem-moderator@k12groups.org newsgroup website: http://www.thinkspot.net/k12math/ newsgroup charter: http://www.thinkspot.net/k12math/charter.html
{"url":"http://mathforum.org/kb/thread.jspa?threadID=1119433","timestamp":"2014-04-17T10:48:02Z","content_type":null,"content_length":"15440","record_id":"<urn:uuid:7e49951d-8526-4ac9-85cf-5645a8f64083>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00164-ip-10-147-4-33.ec2.internal.warc.gz"}
uniform distribution of lab measurement question.
March 15th 2012, 01:21 AM
uniform distribution of lab measurement question.
A student measured the length and width of a table several times and got the following results. He measured the table with a ruler for which the distance between two of its scale marks is 1 millimeter.
Length: 122.1, 121.9, 122.4, 122.3, 122.1
Width: 23.7, 26.1, 22.3, 23.7, 25.5, 24
A) What is the error for measuring the length? (in cm units)
B) What is the error for measuring the width?
C) Write the formula for the error of the table area.
D) What is the table area?
How I tried to solve each one:
Regarding A, B: I was told by my prof that if we measure with an instrument that has scales on it then the error is 0.3*(distance between two of its scale marks), which is the error for a uniform distribution, so the error is 0.03. So the errors in the length and width are the same and don't depend on the measurements themselves; they are both 0.03. Correct?
Regarding C, D: I have a problem with the formula for the uniform distribution, because in that formula the a and b are just the endpoints of one scale division of the ruler, so the data doesn't come into account. What I did is take the smallest and biggest measurement as a1 and b1 for the (a+b)/2, and 0.01 for the error, but the problem is it's wrong because a and b should be the same in the error and in the (a+b)/2.
March 17th 2012, 06:12 AM
Re: uniform distribution of lab measurement question.
Hi Transgalactic,
For A and B, I think you are supposed to compute the standard deviations of the measurements; that is your estimate of the error. For C and D, do you know a formula for the uncertainty of a product? I don't think the uniform distribution enters in at all.
[edit] The formula for the uncertainty in a product goes like this: If $p = xy$ then $\frac{\delta p}{|p|} \approx \frac{\delta x}{|x|} + \frac{\delta y}{|y|}$
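Here is a rough sketch of the calculation the reply suggests: the mean and sample standard deviation of each side, then the relative-error rule for the product. It is my own illustration rather than part of the thread, and it leaves open the same question the thread does, namely whether the error should come from the scatter of the data or from the 0.3*(1 mm) instrument-resolution rule.

# Sketch of the calculation suggested in the reply (my addition, not from the thread).
from statistics import mean, stdev

length = [122.1, 121.9, 122.4, 122.3, 122.1]   # cm
width  = [23.7, 26.1, 22.3, 23.7, 25.5, 24.0]  # cm

L, W = mean(length), mean(width)
dL, dW = stdev(length), stdev(width)   # sample standard deviations as error estimates

area = L * W
# Uncertainty of a product: d(area)/area is roughly dL/L + dW/W
d_area = area * (dL / L + dW / W)

print(f"L = {L:.2f} +/- {dL:.2f} cm, W = {W:.2f} +/- {dW:.2f} cm")
print(f"area = {area:.1f} +/- {d_area:.1f} cm^2")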
{"url":"http://mathhelpforum.com/advanced-statistics/195987-uniform-distrubition-lab-measurement-question-print.html","timestamp":"2014-04-19T00:21:30Z","content_type":null,"content_length":"6253","record_id":"<urn:uuid:d3cc8d4b-ce4f-4a65-9b84-9faf5c9c8871>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00046-ip-10-147-4-33.ec2.internal.warc.gz"}
Writing linear equations using the slope-intercept form
An equation in the slope-intercept form is written as
$y=mx+b$
where m is the slope of the line and b is the y-intercept. You can use this equation to write an equation if you know the slope and the y-intercept.
Find the equation of the line.
Choose two points that are on the line. Calculate the slope between the two points:
$m=\frac{y_{2}-y_{1}}{x_{2}-x_{1}}=\frac{(-1)-3}{3-(-3)}=\frac{-4}{6}=-\frac{2}{3}$
We can find the b-value, the y-intercept, by looking at the graph: b = 1.
We've got a value for m and a value for b. This gives us the linear function
$y=-\frac{2}{3}x+1$
In many cases the value of b is not as easily read. In those cases, or if you're uncertain whether the line actually crosses the y-axis at this particular point, you can calculate b by solving the equation for b and then substituting x and y with one of your two points. We can use the example above to illustrate this. We've got the two points (-3, 3) and (3, -1). From these two points we calculated the slope
$m=-\frac{2}{3}$
This gives us the equation
$y=-\frac{2}{3}x+b$
From this we can solve the equation for b
$b=y+\frac{2}{3}x$
And if we put in the values from our first point (-3, 3) we get
$b=3+\frac{2}{3}\cdot \left( -3 \right)=3+\left( -2 \right)=1$
If we put in this value for b in the equation we get
$y=-\frac{2}{3}x+1$
which is the same equation as we got when we read the y-intercept from the graph.
To summarize how to write a linear equation using the slope-intercept form, you
1. Identify the slope, m. This can be done by calculating the slope between two known points of the line using the slope formula.
2. Find the y-intercept. This can be done by substituting the slope and the coordinates of a point (x, y) on the line in the slope-intercept formula and then solving for b.
Once you've got both m and b you can just put them in the equation at their respective positions.
Video lesson: Find the equation of the graph
Next Class: Formulating linear equations, Writing linear equations using the point-slope form and the standard form
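The two-step summary above translates directly into a few lines of code. The following sketch is an illustration of mine, not part of the lesson; the function name slope_intercept is just a convenient label.

# Sketch: the two-step recipe above as a small function (my addition, not from the lesson).
def slope_intercept(p1, p2):
    """Return (m, b) for the line through points p1 and p2, assuming x1 != x2."""
    (x1, y1), (x2, y2) = p1, p2
    m = (y2 - y1) / (x2 - x1)      # step 1: slope between the two known points
    b = y1 - m * x1                # step 2: solve y = mx + b for b using one point
    return m, b

m, b = slope_intercept((-3, 3), (3, -1))
print(m, b)   # about -0.667 and 1.0, i.e. y = -(2/3)x + 1 as in the example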
{"url":"http://www.mathplanet.com/education/algebra-1/formulating-linear-equations/writing-linear-equations-using-the-slope-intercept-form","timestamp":"2014-04-19T19:43:43Z","content_type":null,"content_length":"29412","record_id":"<urn:uuid:4e1a0713-01bf-423c-a382-0e7ac2459f87>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00370-ip-10-147-4-33.ec2.internal.warc.gz"}
Probability and Political Predictions Nov 02 2012 Probability and Political Predictions Watching the right wing rage at Nate Silver is kind of amusing to watch. Andrew Sullivan points out that Silver’s predictions on the results of the election are quite in line with all the other major poll analysts and betting markets, while Jonathan Last tries to explain how probability works: If Romney wins should that discredit Silver’s models? Only so far as anybody ever used them as oracular constructs instead of analytical tools. One final word: People seem to think that it would reflect badly on Silver if Romney were to win while Silver’s model shows only a 25 percent chance of victory. But isn’t 25 percent kind of a lot? If I told you there was a 1-in-4 chance of you getting hit by a bus tomorrow, would you think that 25 percent seemed like a big number or a little number? Or, to put it another way, a .250 hitter gets on base once a game, so you’d never look at him in any given at bat and think there was no chance he’d get a hit. This is something that any decent poker player knows, of course. Take the hand I wrote about the other day from the World Series of Poker, where one player had a pair of kings and the other had an ace and a king. They got all the money in before the flop and at that point the player with the kings was approximately a 70/30 favorite to win the hand. That means three times out of ten, the underdog is going to win. If you bat .300 in baseball, you’re probably in the hall of fame. And in that case, the underdog did win. And even prior to the very last card, he still had about a 15% chance of winning (he could hit any one of seven cards to win — the three remaining aces or the four remaining 4s; each “out” in poker is worth about 2.1% per card to come). So no, if Romney wins that won’t prove Silver, or any of the other statistical analysts handicapping the election, wrong. But I’d be willing to bet that if you look at his national and state by state predictions, he’ll be pretty close. Why would I be willing to bet that? Because I understand probability. 44 comments 1. 1 Reginald Selkirk ‘Psychic’ fails to beat random probability, jumps immediately to the anecdote However, in a result they maybe should have seen coming, neither scored more than one hit in five from the readings. Subsequent to the this article’s initial publication Ms Whitton contacted MailOnline to complain that various outlets covering the story had failed to mention that one of her sitters was very surprised to receive an accurate reading during the experiment. ‘Why do the skeptics and non-believers omit the very significant details I am trying to prove it seems one sided to me if they are only going to comment on their own disbeliefs without telling the true facts,’ she said in an email. 2. 2 Bronze Dog We really need to find a way to drill it into people’s heads that probability is a real thing. One extreme example in my personal life: I was playing the new X-COM and targeted an alien just a couple squares away. The chance of hitting it was 99%, and yet my soldier missed. Sure, it was annoying, but I didn’t start whining that the game lied to me or that it miscalculated the chance. One big factor I kept in mind was that I was going to take a lot of shots over the course of the game, so such unlikely events were bound to happen sooner or later. 3. 3 Ben P So no, if Romney wins that won’t prove Silver, or any of the other statistical analysts handicapping the election, wrong. Here’s the interesting thing. 
This is based on arguments I've had elsewhere. The problem isn't necessarily that republicans misunderstand statistics (although they might), it's that a lot of them simply seem to have no idea what Silver is actually saying, but just keep hearing from their own sources that he's a liberal and thinks Obama will win. I was in an argument with some other people on another forum, and after a fairly lengthy argument one of the conservatives retorted with basically "well, you can believe Silver's prediction that Obama has a 99% chance of winning the election if you want, we'll see what the truth is on election day!" Someone then responds, "what are you even talking about, he's said that Obama has about 70% chance of winning the election." To which the response was just a "well I knew that, i was just exxagerating." 4. 4 at that point the player with the kings was approximately a 70/30 favorite to win the hand. That means three times out of ten, the underdog is going to win. This is why large tournaments are so brutal. Over the course of several hundred (or thousand) hands, a good player is going to be the 70/30 favorite in many hands. If that good player is getting all their money in on each of those hands, the overall probability is that he or she is going to lose all their money on one of them, and be knocked out. It's not P(lose) = 0.3 that kills you. It's P(win every time) = 0.7^N, where N is large, that kills you. 5. 5 I wonder if there's a misunderstanding (inadvertent or deliberate, who knows & who cares?) about how poll aggregate projections, which use probability estimates, differ from binary predictions of electoral outcomes? Hard to say Nate Silver's "wrong" when all he's saying is 'Obama has a higher likelihood of winning the election than Romney, but Romney could still win it anyway'. 6. 6 "well, you can believe Silver's prediction that Obama has a 99% chance of winning the election if you want," The 99% figure comes out of Princeton. 7. 7 Michael Heath I get probability. However I struggle with understanding why odds are placed on pre-election polls relative to actual election results. That's because I see a miss as a failure of methodology if a disproportionate number of polling result misses are outside of the margin of error. I really don't understand how you can put odds on pre-election survey results beyond your result and margin of error. Here's a non-realistic hypothetical to illustrate. If I have Obama/Romney at 49%/48% with a 3.5% margin of error as people walk into a particular voting precinct and I do this at 50 voting precincts, I get how the actual results would be something at the result +/- the margin of error about 95% of the time (assuming the last value is the pollster's confidence). If 40 of the 50 polls are outside the margin of error, I'm skeptical of the methodology, even if people lied to pollsters*. I don't understand where the probability of being right or wrong comes into play at all; if the poll results are outside the margin of error at a frequency greater than one's confidence level, then I think we've got a methodology problem. Perhaps the odds are merely expressing the predicted outcomes within the margin of error that would swing the results from the expected result. E.g., my example above allows a Romney win while the result is still within the margin of error for my hypothetical poll. Is this what the odds are measuring? *Even if people lie, pollsters should and usually do take responsibility for the integrity of their method.
A failure due to people lying merely points to the need for pollsters to become more sophisticated in massaging the data to find people’s actual positions after the poll but prior to publishing the poll, and/or improve their questioning methodology to make it easier to discern people’s actual positions. FWIW, I had Obama at 317 prior to the 1st debate. I now have him at 277 where Ohio is his path to victory. Every state where Romney is 2% or less behind in the latest state polls as aggregated at RealClear Politics I give to Romney, I give Romney all the states where he’s ahead. 2% is my racist factor. That’s the number of non-partisans that pollsters think will vote for Obama but will instead vote for Romney. Keys in Ohio: a) Romney’s argument that would have effectively taken down the North American auto industry and b) early voting, which will make it far more difficult for Republicans to deny Democrats their right to vote in Democratic-heavy voting precincts as they successfully did in 2004. 8. 8 Probability is one of my favorite subjects. Just yesterday I was telling my students in class about the Pennsylvania Lottery Rigging. After manipulating the machines the only possible numbers were 4′s and 6′s, and wonderfully enough the number drawn was 666. My favorite part if the story is that the head of the PA Lottery, who was not involved but was trying to quash the rumors, went on TV and said: “Look, since the lottery started every triple digit, 111, 222 etc has come up except 666, so people should not be surprised because” — and then those famous words in probability — “it was due!” 9. 9 The public at large (and actually most professionals I deal with) don’t understand probability well at all. Probabilities apply to groups (especially large groups) of people and events. Probabilities don’t mean much when you look at a single person or event (one of the biggest problems with “sciences” the rely almost entirely on statistics such as dietary studies). One approach I’ve used with some people is to use a deck of cards and pull out just black cards except for one red card. Give each person a card (face down) and ask them who has the red card. No one’s hand should go up (since the probability is 1 in 6 or 1 in 8 or however many people I’m dealing with). And when they do flip the cards, the realization starts to hit that one of them actually did have a red card. Just because there is a low probability doesn’t mean that you aren’t the one. My real life tales of low probabilities coming true include my own battle with Conn’s Syndrome (which was statistically likely to be the cause of my hypertension by far less than 1% especially due to the type of tumor they found) and my son getting a brain abscess (quite literally on the magnitude of 1 in a million). Doesn’t mean that the statistics are wrong, just means that you happened to be the one. 10. 10 Johathon Last: “Or, to put it another way, a .250 hitter gets on base once a game ….” Ed Brayton: “This is something that any decent poker player knows, of course.” The statement by Jonathon Last is flatly false. And Ed Brayton appears to agree with this Last fellow. If Ed truly does agree, then neither of them understands what an average is or how an average is calculated. Both of them need to repeat 6th grade. Moreover, neither of them understands baseball. Good grief! Love and lollipops from, (who is blown away by such rampant innumeracy, and who will, if necessary, explain in 50 examples and no less than 500 words) 11. 
11 Cathy W If you find yourself needing to explain this to the non-mathy: pull out two coins; flip them, explaining, “Two heads is a Romney win.” …at least since most people don’t carry tetrahedral dice in their pockets; tabletop RPG players are another group of people you won’t have to explain this concept to. 12. 12 Michael Heath RandomFactor writes: The 99% figure comes out of Princeton. Thanks for the link RaondomFactor. My conclusion of Princeton’s prediction: No racism or voter suppression in the U.S., no siree. (I hope I’m wrong.) 13. 13 The problem isn’t necessarily that republicans misunderstand statistics (although they might), its that a lot of them simply seem to have no idea what Silver is actually saying Thats not just a Republican problem: I have a staunchly democratic friend who recently stated that according to Nate Silver, Obama was all but a sure thing, up a field goal with 3 minutes to go. To which quite a few jumped on him saying being up a field goal with three minutes to go is pretty damn far from a sure thing. I never bothered to check if the “up a field goal . . .” bit was a quote of Silver’s or my friends own interpolation. Given Silver’s penchant for analogizing the election to sports, I assumed that part came from one of his columns. 14. 14 Michael Heath unbound writes: Probabilities apply to groups (especially large groups) of people and events. Probabilities don’t mean much when you look at a single person or event . . . Doesn’t mean that the statistics are wrong, just means that you happened to be the one. I’m the extended family missionary on this point (one side of the family). Of course the counter argument is the dopamine-inducing exclamation a miracle occurred. My success rate probably approaches zero except with the smarter kids whose parents aren’t totally off the deep-end. 15. 15 unbound have you tried this one on your students? a. Take three cards, two black and one red, show the student that this is so. b. Taking note of where the red card is, lay the three cards face down in front of the student. c, Ask them to point to the card they think is the red card. d. Turn over one of the black cards. e. Ask if they want to swap, or stay with the card they think is red. Then ask them why they want to stay with their initial guess (by far the most likely outcome). f. Turn over the card they picked, most likely black. (Ask the students to note if it’s black or red). Rinse & Repeat until some inkling to how to calculate probabilities seeps in. :) Dingo 16. 16 I never bothered to check if the “up a field goal . . .” bit was a quote of Silver’s or my friends own interpolation. Given Silver’s penchant for analogizing the election to sports, I assumed that part came from one of his columns. Yep, it’s from this one from a few days ago. He was saying that being up a field goal with 3 minutes to go gives any given football team on average a 70% chance of winning the game. He was using it as an analogy to show that Obama was the favorite but not by any means the certain winner. 17. 17 DJ #15, In the civilized world, that’s known as the Monty Hall Problem. 18. 18 We really need to find a way to drill it into people’s heads that probability is a real thing. table top RPGs would work. 19. 19 As anyone who ever read a Diskworld book can tell you, million to one chances crop up nine times out of ten. 20. 
20 I found that nice, simple but illuminating probability question is this: “Suppose China modified its one-child policy so that every family could keep having children until they had a son, then they had to stop having children. What would it do to the distribution of boys and girls?” 21. 21 Heddle – indeed, but still people fall for the ‘sucker bet’ (even in the real civilised world*). :( Dingo * you know, places that don’t have the death penalty… ;) 22. 22 Scott Simmons It’s very confusing to most people to deal with probabilities in predictions of singular events. An example that I like to trot out comes from a gag bit that a Dallas sports radio program did a few years ago. One of their hosts was ‘interviewing’ another, who was doing an impersonation of Cowboys’ owner Jerry Jones. They ran down the list of games in the upcoming NFL season, and the interviewer asked ‘Jerry’ who he favored in each game. The response was always the Cowboys. At the end of the list, the interviewer then asked for clarification: “So, you’re saying the Cowboys will go 16-0?” “I didn’t say that,” responds fake Jerry, “you’re putting words in my mouth.” Cue audience laughter. Fact is, fake Jerry was actually right. Suppose, for simplicity, that he thinks the ‘boys have a 70% to win the game against any opponent, home or road. Then, if you ask him to pick any one game, he’ll say the favorite to win is Dallas. But if you ask him what he expects their record to be for the season, the answer (based on that same assumption) is 11-5. (Most likely outcome in a sample size of 16.) It sounds inconsistent, but it’s really not … 23. 23 So, there’s this thing that I’ve noticed… It seems like a lot of the most credible people who do (ideally) non-partisan fact-related work regarding politics — e.g. fact checkers, poll aggregators, etc. — happen to be liberals themselves. I have my own hypotheses why this might be the case (reality having a well-known liberal bias and all), but whatever the cause, the result is somewhat problematic: It’s very easy for conservatives to read bias when there is none. And I don’t even entirely blame them. I found myself in the uncomfortable position yesterday of linking to The Blaze, of all things, to debunk a Facebook meme that was going around. In this particular case, since it was an anti-Romney meme, obviously a conservative website had the most interest in doing the legwork to debunk it. And the information in this particular article in The Blaze was easily verifiable, and it checked out. In short: They were right, regardless of motivations. And they were the best debunking of the meme I could find in a quick google search. So I took a big gulp and linked to The Blaze. But it wasn’t easy. It made me feel… dirty. I imagine it must be difficult for conservatives, even those rare conservatives who actually care about the truth, when they have to defer to a source that they know damn well is run by a pinko leftie such as myself. 24. 24 Area Man I found that nice, simple but illuminating probability question is this: “Suppose China modified its one-child policy so that every family could keep having children until they had a son, then they had to stop having children. What would it do to the distribution of boys and girls?” Assuming people keep having children until a son is born, the gender ratio would be 1:1. Do I win? 25. 25 Area Man, Assuming people keep having children until a son is born, the gender ratio would be 1:1. Do I win? Sort of. 
Your answer is correct, but your assumption is unnecessary. 26. 26 Area Man Sort of. Your answer is correct, but your assumption is unnecessary. On further reflection, I see that the assumption is indeed unnecessary. The simplest way to reason it is that every birth has a 50/50 chance of being a boy or girl, period. It doesn’t matter how many children people have. 27. 27 Ed Brayton Nate Silver actually did use the analogy of Obama being up by 3 in a football game with only a few minutes to play, but he also explained that, historically, teams in that position win about 79% of the time. 28. 28 per your comment about lottery examples in class: my students find this story amusing. 29. 29 @10 OK I’ll bite if no one else will, I think I have a passable understanding of probability and averages (or at least the commonly used measures) and I can’t see what is wrong with Ed’s post. It could be because I know nothing at all about baseball (or poker) and haven’t the fogiest what a .250 is, indeed I don’t even know why the initial 0 is left off let alone what it is a measure of. Please could you explain, what he got wrong and in particular how a Romney win would discredit the idea the Obama had a 70% probability of winning based on polls? 30. 30 @dingojack #15, in your problem, the probability from the student’s point of view whether he had pointed to a red card depends on whether the student knows that (1) you knew where the red card is and (2) you intended to always turn over a black card. If he knows (1) and (2), then he knows that a black card would have been turned over no matter what, so no new information was gained when he saw the black card turned over, and so the odds that his original card is red remains 1/3 just as it was before seeing the black card. On the other hand, if the student does not know (1) and (2) and assumes you turned over a card at random and that there was actually a 1/3 chance that you would have turned over a red card, then he did gain information when he saw the black card and not a red card, and so the odds that his original card is red increases to 1/2. The game totally depends on what the student knows about your motivation. 31. 31 imback #30, @dingojack #15, in your problem, the probability from the student’s point of view whether he had pointed to a red card depends on whether the student knows that (1) you knew where the red card is and (2) you intended to always turn over a black card. No. First of all there is no probability “from the student’s point of view,” there is just a probability (1/3) period. The easiest way to see this is to imagine that the same deal is presented on tv to two different viewers. One knows the dealer’s strategy and one doesn’t. The probability has to be the same for both viewers regardless of whether or not they know the dealer’s strategy. 32. 32 The extra information from the turned over black card isn’t relevant (it’s part of the ‘suckering’). Because the player knows that one of the cards not picked is black they’ll think they have a 50-50 chance of picking the red card from the two face down cards. Psychologically they will be unwilling to swap to the card they didn’t choose. However – The card the player chooses has a 1/3 chance of being red and 2/3 chance of being black, both when first chosen and after the one of the black cards is turned over. Because the two hidden cards have a probability of 1 that one of them is red, the card the player didn’t choose has a probability of being red equal to (1-(1/3)) = 2/3. 
Always swap, you’ll double your odds. 33. 33 OK, I’m one of those people who gets easily tripped up with probability. I have a question about 538′s “chance of winning” number. I’m not sure this question even makes sense, but here goes: Right now 538 says that Obama has an 83.7% chance of winning the election. Is the assertion that if we had accurate enough polling data one could theoretically assign 100% probability to the outcome, but that existing polling data carries enough noise that the best we can do is assign 83.7% probability? Or is the assertion that if we held the election five different times, Obama would be the victor four times and Romney once? Does what I’m asking make sense? 34. 34 Right now 538 says that Obama has an 83.7% chance of winning the election. Is the assertion that if we had accurate enough polling data one could theoretically assign 100% probability to the outcome, but that existing polling data carries enough noise that the best we can do is assign 83.7% probability? There’s the “noise”, as you call it There’s the possibility that every polling compagny made some big unforseen mistake in their models (say, underestimating the number of latino voters or overestimating the 18-30 years old turnout) which would “skew” every polls in the same direction And there’s the possibility of a last minute change of heart from the voters themselves (that’s how Truman won against Dewey). 35. 35 Re Laurentweppe @ #34 Actually, the problem with the polls in 1948 was that Gallup and the others stopped polling more then a month before the election, under the delusion that Truman couldn’t possibly catch up. It should be noted that they didn’t make the same mistake 20 years later when polls in September showed Nixon 15 points ahead and they kept polling until the day before the election, correctly predicting a close one. 36. 36 dean #28, That’s fantastic! 37. 37 Re matty1 @ #29 A .250 hitter in American baseball gets a safe hit once every 4 times at bat (.250 equals 25%) Since, on average, such a hitter gets to bat 4 times in a nine inning game (not counting bases on balls or games where he ends up batting more then 4 times), that means that, on average, he hits safely once per game. It should be pointed out that a .250 hitter is considered a weak sister, unless a high percentage of his hits are home runs. Generally, a good hitter bats .300 or better. 38. 38 @heddle #31 and @dingojack #32: Here are three possible motivation models for the dealer to follow: A. The dealer knows where the red card is, and will always turn over a black card that the student didn’t point to. B. The dealer always turns over a card at random from the two the student didn’t point to. Whether the dealer knows where the red card is irrelevant then. C. The dealer knows where the red card is, and only turns over a black card and offers a trade if the student pointed to the red card. Otherwise the dealer shows the student he lost by turning over the black card he pointed to. This is the Evil Monty Hall model. Under model A, there was no chance the turned over card would be red, so when it turned up black, the student had no new information, and so the odds the student pointed to the red card do not change from 1/3. The student will double his chances by trading. Under model B, there was a 1/3 chance the turned over card would be red, so when it turned out black, the student did have new information, and the odds the student pointed to the red card increase to 1/2. There is no loss or gain of chances by trading. 
Under model C, the dealer only turns over a card if the pointed to card is red, so when a black card is turned over, the odds the student pointed to the red card is now 1. He would lose a sure win by trading. So when the student sits for the very first deal, if the dealer gives no clue about his motivation, the student can only guess at what model to use. Of course, after a few deals, the dealer’s behavior would give some clues. This logic puzzle works best if it is clear that the dealer is up front with the student about his motivation from the beginning. Probabilities are not immutable. They change with different information. For instance, if the dealer knows where the red card is, when the student pointed to a card, the probability from the dealer’s point of view that the pointed to card is red is not 1/3 but either 0 or 1. Another example is when watching poker on TV, a viewer getting to see everyone’s hole cards computes a different set of win probabilities than each of the players do who only know their own hand. 39. 39 If knowing the dealer’s model affects the probability then you need to explain the scenario I described: Monty deals the cards and flips over a black. We are both watching. You know his model (that he only flips over a black) and I do not. We are each given the option to stay or switch. Are you saying that the probabilities of winning (by switching or not) is different for the two of us playing exactly the same hand? 40. 40 @heddle #39, the answer is yes. For a starker contrast, let’s say the dealer privately told me he was playing under model A, always flipping a black card, and he privately told you he was playing under model C, flipping a black card only if a red card is first chosen. Let’s say we both believe him even though he must have been lying to at least one of us. He deals out the cards, as a team we choose a card, and then he flips a black card (not ours). As a team, we must choose whether to trade (without discussion). I would say trade and you would say keep, given what we think we It’s like poker on TV, where we see the guy fold who was a lock to win. He could very likely have made the best calculated decision given the information available to him. 41. 41 Player behaviour ≠ probability. If I play aggressively at poker that doesn’t make me suddenly more likely to draw an inside straight or royal flush, does it? 42. 42 I see. My point is: If we replay that scenario a thousand times and we both switch every time, then you, knowing the strategy, will win about 2/3 of the time and I, not knowing the strategy, will win the exact same ~2/3 of the time. The probability is the same (2/3 if we switch) regardless of what we know. That is, there is a 2/3 chancing of winning by switching regardless of whether I know the strategy. What you are saying is, knowing the strategy you will switch every time while I, not knowing, will switch about half the time (actually less than half for psychological reasons.) So you will win more. True. I am talking about a priori probabilities. You are talking about strategies to improve winning in which extra knowledge is not changing the a priori probabilities, but rather using them 43. 43 @heddle, well you’re not exactly talking about true prior probabilities in the Bayesian sense, since the probability you’re talking about is determined after learning the motivation of the dealer (to always flip a black card). That key piece of information is notably missing from the student’s point of view in the set up of the problem in #15. 
Again, probabilities are not immutable. Say in Texas hold-em, the flop is A-K-Q. What are the odds Ed will get an A-high straight in the end? His opponent Fred can calculate the odds given the number of jacks and tens out and that there are 4 of Ed's cards to learn in the final hand. However, Ed knows he has a Q-J and can recalculate the odds given there are 4 tens out and 2 cards to come. He calculates the odds are worthwhile to go all in. But on the TV broadcast, say they have an X-ray camera that can look through the already prepared deck and show viewers the last two community cards. Alas neither is a ten. The TV viewers know Ed's a goner. All three parties at the same moment have a different view of the odds of Ed getting a straight. It's all dependent on the amount of information known. 44. 44 @dingo, you're missing my point. Of course, betting or bluffing has no effect on what the next cards are. But behavior does matter when the dealer looks at the hidden cards and then reacts to that, and then that information is pertinent to the student's choice.
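One way to settle the back-and-forth in the comments about whether the dealer's strategy matters in the three-card game is simply to play it out many times. The sketch below is only an illustration (the function and the model labels are mine, following the A/B naming used in comment #38): in model A the dealer knowingly reveals a black card; in model B he flips one of the two unchosen cards at random, and we keep only the deals where that card happens to be black.

```python
# Simulation sketch of the three-card game from comments 15, 30-32 and 38-44
# (illustrative only; not code from the original thread).
import random

def play(model, trials=200_000):
    stay_wins = switch_wins = counted = 0
    for _ in range(trials):
        red = random.randrange(3)            # where the red card is
        pick = random.randrange(3)           # the player's first choice
        others = [i for i in range(3) if i != pick]
        if model == "A":                     # dealer knows, always shows a black card
            reveal = others[0] if others[0] != red else others[1]
        else:                                # model B: dealer flips one of the others at random
            reveal = random.choice(others)
            if reveal == red:                # red card exposed: this deal doesn't count
                continue
        counted += 1
        stay_wins += (pick == red)
        switch_wins += (pick != red)         # the one remaining face-down card is red
    return stay_wins / counted, switch_wins / counted

for model in ("A", "B"):
    stay, switch = play(model)
    print(model, round(stay, 3), round(switch, 3))
# Model A: stay ~ 0.333, switch ~ 0.667.  Model B: both ~ 0.5 among the counted deals.
```

Run this way, the simulation lands on exactly the split the commenters argue over: with an informed dealer, switching wins about two times in three, while conditioning on a random flip that merely happened to be black brings both options to a coin toss. That is the sense in which the dealer's knowledge really does move the probability.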
{"url":"http://freethoughtblogs.com/dispatches/2012/11/02/probability-and-political-predictions/","timestamp":"2014-04-17T08:42:00Z","content_type":null,"content_length":"129068","record_id":"<urn:uuid:f647da24-4593-4a32-b356-bd3d2de18c1c>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00375-ip-10-147-4-33.ec2.internal.warc.gz"}
PSY K300 3769 Statistical Techniques Psychology | Statistical Techniques K300 | 3769 | O. Sporns Introduction to statistics and statistical methods. This course will cover: nature of statistical data; ordering and manipulation of data; measures of central tendency and dispersion; elementary probability; concepts of statistical inference and decision; estimation and hypothesis testing; regression; correlation. Emphasis on applications to problems within the behavioral sciences. Instructor: Olaf Sporns, Assistant Professor Office: TBA Psychology, phone TBA Office Hours: TBA Email: TBA (current: sporns@nsi.edu) Course Web Page: to be added. Required Text: Gravetter, F.J., and Wallnau, L.B. Statistics for the Behavioral Sciences, West Publishing Company, 5th edition, 1999. Format: Lecture, discussion is encouraged. Exams: 3 in-class exams and one final. Homework assignments at least once per week. Grades will be based on a combination of exams, final exam and homework.
{"url":"http://www.indiana.edu/~deanfac/blfal00/psy/psy_k300_3769.html","timestamp":"2014-04-20T23:48:38Z","content_type":null,"content_length":"1449","record_id":"<urn:uuid:a33e560a-05e5-4900-be8e-f94cd9ced06a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Trig Derivative
October 11th 2009, 06:04 PM
Find the derivative $r = \left(\frac{1+Sin \Theta}{1-Cos \Theta}\right)^2$
I believe the chain rule and quotient rule are used here. So here's my first step; if someone can confirm I did this correctly, then I could probably go on from there.
$r' = 2 \left(\frac{1+Sin \Theta}{1-Cos \Theta}\right) \left(\frac{(1-Cos \Theta)(Cos \Theta) - (1 + Sin \Theta)(Sin \Theta)}{(1-Cos \Theta)^2}\right)$
October 11th 2009, 06:26 PM
Looks good to me; from there on it is pure trig.
October 11th 2009, 06:34 PM
$r' = 2 \left(\frac{1+Sin \Theta}{1-Cos \Theta}\right) \left(\frac{Cos \Theta - Cos^2 \Theta - Sin \Theta - Sin^2 \Theta}{(1-Cos \Theta)^2}\right)$
That would be the final answer, right?
October 11th 2009, 06:46 PM
If you want to complicate it
October 11th 2009, 06:52 PM
If you want to complicate it
Nah, I'll stick with what I posted xD
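For the record, the expression can be simplified a little further. The step below is not from the thread; it just starts from the first step (which both posters agree on) and applies the identity $\sin^2\Theta + \cos^2\Theta = 1$ to the quotient-rule numerator:

$(1-\cos\Theta)\cos\Theta - (1+\sin\Theta)\sin\Theta = \cos\Theta - \sin\Theta - (\cos^2\Theta + \sin^2\Theta) = \cos\Theta - \sin\Theta - 1$

so that

$r' = 2 \left(\frac{1+\sin\Theta}{1-\cos\Theta}\right) \cdot \frac{\cos\Theta - \sin\Theta - 1}{(1-\cos\Theta)^2} = \frac{2\,(1+\sin\Theta)(\cos\Theta - \sin\Theta - 1)}{(1-\cos\Theta)^{3}}.$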
{"url":"http://mathhelpforum.com/calculus/107459-trig-derivative-print.html","timestamp":"2014-04-17T10:46:35Z","content_type":null,"content_length":"6383","record_id":"<urn:uuid:6691361a-2500-4767-94bb-b716fd825866>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00078-ip-10-147-4-33.ec2.internal.warc.gz"}
02-19-2002, 10:21 PM I need to know how to keep variables values the same from Ex. Form1 to Form2. Say in Form1 I need hp(hitpoints) to equal 20 and when Form2 loads up I still need hp to equal 20 from the previous form1. How do I do this? I know this is probably a really simple thing but i haven't figured it out. Any help is appreciated.
{"url":"http://www.xtremevbtalk.com/archive/index.php/t-19244.html","timestamp":"2014-04-17T18:47:27Z","content_type":null,"content_length":"5327","record_id":"<urn:uuid:44f33534-6147-496d-b9e7-1d5c5f8c44cd>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00245-ip-10-147-4-33.ec2.internal.warc.gz"}
Oakwood, CA Science Tutor Find an Oakwood, CA Science Tutor ...For example, to remember the relationship of the sides of a right triangle and the functions sine, cosine, tangent: Oscar Had A Headache Over Algebra (sine=Opposite/Hypotenuse, cosine=Adjacent /Hypotenuse, tangent=Opposite/Adjacent). I make flashcards for the students. I have found that this hel... 31 Subjects: including physics, ADD/ADHD, Aspergers, autism ...Though it's a cliche, I really do make learning fun. Before entering the world of education, I worked in the film industry as a professional script reader, in high-tech as a sales writer, and in journalism at The Washington Post. In high school, I was awarded a four-year full-tuition scholarshi... 25 Subjects: including ACT Science, English, writing, geometry ...Quick Keys, Hot corners, etc.) Breaking down a word to learn it better. May be mixed in with grammar to learn the difference between similar sounding words. I played basketball from 5th grade to 10th grade and was a student assistant coach for the sophomore team and team manager for JV/varsity my junior year. 13 Subjects: including astronomy, English, writing, basketball ...My social science background has allowed me to refine my excellent reading and writing skills. Besides having a thorough understanding of my subject matter, tutoring is something that I love to do and I hope to cultivate my love for learning in the students I teach. If you think you or your chi... 23 Subjects: including chemistry, biology, reading, calculus ...I have worked professionally on the stage as an actor and a musician, and also have experience working an eclectic mix of jobs including but not limited to the following: stone masonry & carpentry; guiding spelunking tours; volunteering as a wheelchair operator in rural France; hoop breaking and ... 23 Subjects: including philosophy, English, reading, geometry Related Oakwood, CA Tutors Oakwood, CA Accounting Tutors Oakwood, CA ACT Tutors Oakwood, CA Algebra Tutors Oakwood, CA Algebra 2 Tutors Oakwood, CA Calculus Tutors Oakwood, CA Geometry Tutors Oakwood, CA Math Tutors Oakwood, CA Prealgebra Tutors Oakwood, CA Precalculus Tutors Oakwood, CA SAT Tutors Oakwood, CA SAT Math Tutors Oakwood, CA Science Tutors Oakwood, CA Statistics Tutors Oakwood, CA Trigonometry Tutors Nearby Cities With Science Tutor Bicentennial, CA Science Tutors Cimarron, CA Science Tutors Dockweiler, CA Science Tutors Farmer Market, CA Science Tutors Foy, CA Science Tutors Glassell, CA Science Tutors Lafayette Square, LA Science Tutors Miracle Mile, CA Science Tutors Pico Heights, CA Science Tutors Rancho Park, CA Science Tutors Rimpau, CA Science Tutors Sanford, CA Science Tutors Santa Western, CA Science Tutors Vermont, CA Science Tutors Wilcox, CA Science Tutors
{"url":"http://www.purplemath.com/Oakwood_CA_Science_tutors.php","timestamp":"2014-04-18T06:00:58Z","content_type":null,"content_length":"24069","record_id":"<urn:uuid:a9f255c4-25d6-4df5-b09e-fea1de347331>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00306-ip-10-147-4-33.ec2.internal.warc.gz"}
How do i work out an average?
Re: How do i work out an average?
I'm having the hardest time figuring out this average.. I'm not bad at math, but my answers don't match up with the 4 they give me. Here's the question:
You work Monday through Friday from 7:00 am to 4:00 pm. On Monday, you sold $1,231.11 in gas and store product, on Tuesday you sold $843.09, on both Wednesday and Thursday you sold $1,425.25 and on Friday you sold $1,633.56. What is your sales average per day for this week?
Here are the answers
$1,286.25
$1,314.25
$1,311.65
$1,311.66
Re: How do i work out an average?
Hi SomeCrazyAverageness;
Welcome to the forum!
(1,231.11 + 843.09 + 1,425.25 + 1,425.25 + 1,633.56) / 5 = 6,558.26 / 5 = 1,311.652
So you would round down to $1311.65
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: How do i work out an average?
Thank you, I solved it shortly after posting it. I must have been typing it in wrong.
Re: How do i work out an average?
Hi SomeCrazyAverageness;
No problem, I do that all the time.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
sidney hargreaves
Re: How do i work out an average?
Charlotte wrote: hey can someone please help me out a little. gotta find the average of these numbers: 80, 78, 60 and then to find out the average of these numbers too: if anyone can help would be much appreciated:D
Add them all together and then divide by the number of numbers. In this case 3.
80 + 78 + 60 = 218, divided by 3 = 72.67
Re: How do i work out an average?
im still really stuck on my averages please help me!!
Eye colour (girls): blue/gray 1, blue/green 2, brown 3, hazel 0, grey 1, green 2, blue 6.
i have to work out the average for girls eye colour and...
Eye colour (boys): blue/gray 2, blue/green 0, brown 5, hazel 2, grey 0, green 0, blue 6.
the average boys eye colour im really stuck please help me!!
Re: How do i work out an average?
hi nonnyxxx
Welcome to the forum.
Average for eye colour? Not the 'mean' then; it wouldn't make sense. I think you need to tell us more about this. If it's a question (from a book or homework), then please state the question exactly as written. If it's part of a project or coursework, then what are the requirements? eg. Pick a subject to investigate; collect some data; use statistics to describe what you have found out.
Post back and I'll try to help you.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: How do i work out an average?
I love this forum!
Re: How do i work out an average?
hi GOVINDH
Welcome to the forum!
This thread is one of MIF's most popular. That is because if you 'google' "How do I work out an average" it is the number one hit. Freelancer_Man asked this in 2005 and members have been helping people with this topic ever since. Amazing isn't it ?
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Re: How do i work out an average?
Hi GOVINDH;
Welcome to the forum!
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Re: How do i work out an average?
I think you are all doing a fine job.
Re: How do i work out an average?
Thank you, Brena. Welcome to the forum!
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Real Member
Re: How do i work out an average?
At the risk of sounding disrespectful to some, it seems bots have become more believable.
The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: How do i work out an average?
hi Stefy,
Are you saying I've been talking to a bot? eeekkk! And she sounded so sweet.
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
Real Member
Re: How do i work out an average?
She is sweet. And I haven't claimed she is a bot. I leave all conclusions to you..
The limit operator is just an excuse for doing something you know you can't. “It's the subject that nobody knows anything about that we can all talk about!” ― Richard Feynman “Taking a new step, uttering a new word, is what people fear most.” ― Fyodor Dostoyevsky, Crime and Punishment
Re: How do i work out an average?
Hi Brena;
Thanks for those kind words. You have really touched my heart.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Full Member
Re: How do i work out an average?
Here's an interesting approach to averaging. Averages work similarly to game scores. Let the Home team be on top and the Visitors on the bottom, supposing that we are dealing with football. By quarters suppose we have:

             1st Quarter   2nd Quarter   3rd Quarter   4th Quarter   Totals
Home Team         7             3            14             6          30
Visitors          3             6             7             7          23

We can write this more concisely as "scores" as follows:

7 + 3 + 14 + 6    30
~~~~~~~~~~~~~~ =  ~~     so the Home team won by 7 points.
3 + 6 +  7 + 7    23

Note that we are interested in the difference, not the quotient, of the two numbers, hence we use the "~" to distinguish these from fractions which use the "-" or "/".
Now averages work similarly. To average test scores of 80, 70 and 90 we obtain

80 + 70 + 90    240                                 80
~~~~~~~~~~~~ =  ~~~    which reduced by 3 gives     ~~ .
 1 +  1 +  1      3                                  1

The reducing is like asking what same score should be made on all three tests to get 240 points. Obviously it is 80, 80 and 80 (which is the 240 divided by 3). Now for a more difficult question we could ask what must one average on the next two tests to raise the overall 5 test average to 85?

80 + 70 + 90 + x + x    240 + 2x    85
~~~~~~~~~~~~~~~~~~~~ =  ~~~~~~~~ =  ~~
 1 +  1 +  1 + 1 + 1        5        1

We set the sum equal to the desired five-test average. Cross multiplication of the last two expressions yields 240 + 2x = 425, so 2x = 185 and x = 92.5. Of course it is not likely that one would make 92.5 on each of the two tests. More likely it would occur as 92 and 93, or 91 and 94, or 90 and 95, etc.
The idea and notation of "scores" also works well with some other types of algebra word problems, for example mixture problems and some rate/time/distance problems. (Scores are to signed numbers as fractions are to rational numbers. Fractions can be viewed as "pre-rational numbers." So scores can be viewed as "pre-signed numbers." .5 corresponds to {1/2, 2/4, 3/6, ...} whereas -2 corresponds to the scores {0~2, 1~3, 2~4, 3~5, ...}; 2. corresponds to {2/1, 4/2, 6/3, ...} whereas +2 corresponds to the scores {2~0, 3~1, 4~2, 5~3, ...}. -2 is like the Home team being 2 behind and +2 is like the Home team being 2 ahead.)
Example of a rate/time/distance problem: A man made a trip in two legs. The first leg he went two miles in 3 hours and the second leg he went 3 miles in 4 hours (pushing his car perhaps!). What was his average speed on the trip? If we treat these ratios as fractions we have

2mi        3mi
---   and  --- .
3hr        4hr

If we add, subtract, multiply or divide these fractions, none of the results are what we need to solve the problem. On the other hand if we treat them as scores we get

2 + 3    5mi                                       5/7 mi
~~~~~ =  ~~~    and reducing this by 7 we obtain   ~~~~~~ ;   that is, 5/7 mph.
3 + 4    7hr                                        1 hr

So this problem which is difficult for many people to figure out is only TWO steps. 1) Add the scores and 2) reduce by the bottom number to get 1hr in the bottom.
An additional question: How long should a 3rd leg of 5mi take to raise the 3-leg average to 1mph?

2 + 3 + 5    1               10      1
~~~~~~~~~ =  ~    becomes   ~~~~~ =  ~    which on cross multiplying becomes 7 + x = 10, so x = 3hrs.
3 + 4 + x    1              7 + x    1

The language of fractions is the wrong language to work this kind of problem as well as averages. We probably compare two numbers by differences as much as or more than we do as quotients. The language of fractions is available for the quotient comparisons. The language of scores for difference comparisons is analogous.
Here I am getting a bit "long winded" but one more observation might be helpful. The idea of WEIGHTED averages works well with scores also. For example if we have three 90's and two 80's and a 70 to average we can write out all six scores individually and add them or we can write

    90        80       70    270 + 160 + 70    500                      500/6    83.3
3 x ~~  + 2 x ~~   +   ~~  = ~~~~~~~~~~~~~~  =  ~~~    which reduces to  ~~~~~ =  ~~~~
     1         1        1      3 +  2 +  1       6                        6/6       1

Then we might ask how low an average can we get on the next 4 tests to keep at least an 80 average on the ten tests altogether. Just add in

    z                 4z                                  80                 500 + 4z    80
4 x ~    that is,     ~~    and set the total equal to    ~~   and solve:    ~~~~~~~~ =  ~~
    1                  4                                   1                    10        1

Then 500 + 4z = 800, so 4z = 300, so z = 300/4 = 75.
Grade point averages used so much at colleges are basically weighted averages also. Scores are a "whole different ball game" so to speak.
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional). LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
Re: How do i work out an average?
Mark.S wrote: Thankyou very much Ganesh! I don't know much maths, but I needed to work out an average and it took me 10 seconds to find this on google. Thankyou!
yeh same thanks
Re: How do i work out an average?
bob bundy wrote: hi Stefy, Are you saying I've been talking to a bot? eeekkk! And she sounded so sweet.
what is a moderator :lol:
Re: How do i work out an average?
Hi ralph;
A moderator is someone who monitors the content of the posting that the forum receives.
In mathematics, you don't understand things. You just get used to them. I have the result, but I do not yet know how to get it. All physicists, and a good many quite respectable mathematicians are contemptuous about proof.
Full Member
Re: How do i work out an average?
Hi MIF! I was checking out "The Mean Machine" and I couldn't get it to clear out the previous set of data. It kept bringing in values from the previous set of data. Am I missing something here about how to clear the data before entering a new set of data?
I was hitting "Enter" on the keyboard instead of clicking "go". When I hit "enter" it just cleared all the values in the data window, but didn't do the calculation. Might that have something to do with it?
Last edited by noelevans (2012-11-05 13:36:28)
Writing "pretty" math (two dimensional) is easier to read and grasp than LaTex (one dimensional). LaTex is like painting on many strips of paper and then stacking them to see what picture they make.
Re: How do i work out an average?
I've got my mock exams tomorrow and I've managed to find the paper online on emaths. I get how to work out the average and everything but it's in a table so I'm not entirely sure how to work it out. Here it is:

Place   Season   Mean rainfall     Number of months   Months
A       Dry      10cm per month    8                  Jan to Aug
        Wet      20cm per month    4                  Sept to Dec
B       Dry      5cm per month     10                 July to Apr
        Wet      50cm per month    2                  May to June

Re: How do i work out an average?
do you just add the 10, 20, 5 and 50 together? ahahah i spent 5 minutes working out the mean again.. its actually unbelievable how dumb i am.. how am i even in 2/8 sets in maths?
Re: How do i work out an average?
stay online I'm just looking at this
You cannot teach a man anything; you can only help him find it within himself..........Galileo Galilei
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=243205","timestamp":"2014-04-25T01:12:21Z","content_type":null,"content_length":"51753","record_id":"<urn:uuid:01e5e091-cd94-4e6d-8750-7666e6387cc3>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00147-ip-10-147-4-33.ec2.internal.warc.gz"}
How many gallons is equal to 1,000 liters? • 1 liter equals how many millimeters CHELMSFORD 1 Liter Equals How Many Millimeters best outboard fuel http ... Honda Civic Methane fuel http://FuelSavingTab.com : Clean Burning 30 MPG at only $1.99 per gallon. Price:$26,9994. Nissan Leaf : 100 miles on electric. Zero Pollution. • Half gallon, liters: confusion at gas pump Confusion in inevitable at gasoline pumps this year as oil companies wrestle with new ways to price their $1-plus-a ... things being equal, Americans doubtless would prefer to keep buying their gasoline in gallons -- but few know what a liter is. • Petrol stations given April 1 deadline to sell oil in litres Petrol stations in the UAE are given until April 1 to comply with the government directive to sell ... The new system will not change the prices of petrol or diesel. Imperial gallon is equal to 4.546 litres. "The change is in accordance with the ... • 2014 Ford Fiesta hands-on review: 45 mpg from a 1-liter gasoline engine? Based on a day-long drive, I believe most people won’t know, or care, how many cylinders ... it will burn 300 gallons — $1,050 of regular gasoline, at $3.50 a gallon. In comparison, the Fiesta with a decent 1.6-liter, 120-hp normally aspirated (no ... • How a school was built by social media, bayanihan Many netizens who saw her post were moved. Some donated hollow blocks and others gave cash. Noemi and the school netted more than P70,000. The school had its second ... Still a few problems Students have to bring 1.5 liters of water in Coke bottles every ...
{"url":"http://answerparty.com/question/answer/how-many-gallons-is-equal-to-1-000-liters","timestamp":"2014-04-17T18:56:30Z","content_type":null,"content_length":"18515","record_id":"<urn:uuid:ca8b15f3-ca7f-4939-bbbb-159d1debae50>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00597-ip-10-147-4-33.ec2.internal.warc.gz"}
Find a Precalculus Tutor ...Both students were up to grade level by the end of the academic year. My skill is in assessing exactly where a student's strengths and weaknesses lie, then making fast progress. I communicate well with children and have never met a student who could not learn. 32 Subjects: including precalculus, reading, English, chemistry ...Change includes change of date, change of time, change of duration and cancellation.** If you change a scheduled session without 36 hours notice, I reserve the right to charge a one hour fee. If you fail to show up, the cost of the whole scheduled session will be charged.I tutor AP calculus. I ... 10 Subjects: including precalculus, calculus, geometry, algebra 1 ...If you are looking for someone who can challenge you without making you feel overwhelmed, then I am the tutor for you.I am qualified to tutor discrete mathematics because it was required course in attaining my degree. I have knowledge of concepts such as truth tables, including conjunctive and d... 19 Subjects: including precalculus, calculus, algebra 2, ASVAB I have been working as a personal tutor since November 2007 for the George Washington University (GWU) Athletic Department. I have hundreds of hours of experience and am well-versed in explaining complicated concepts in a way that beginners can easily understand. I specialize in tutoring math (from pre-algebra to differential equations!) and statistics. 16 Subjects: including precalculus, calculus, geometry, statistics ...I am highly qualified and hold teaching licenses in both VA and PA. I charge $60/hour and am willing to arrange a time and location of mutual convenience. I serve in Reston, Herndon, Vienna, Great Falls, Sterling, Fairfax, and McLean. 7 Subjects: including precalculus, geometry, algebra 1, SAT math
{"url":"http://www.purplemath.com/southern_md_facility_md_precalculus_tutors.php","timestamp":"2014-04-24T15:51:36Z","content_type":null,"content_length":"24033","record_id":"<urn:uuid:51d00b0b-2d8c-4d3b-9700-c8275eecbd09>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00450-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Posts by bleh Total # Posts: 7 Geometry PLEASE HELP ME pick a vertex. find the slope of the perpendicular to the opposite side. find the equation of the line with that slope, going through the vertex. pick another vertex and repeat find the intersection of the two lines. That's the orthocenter. Ur not Even good Ur not Even good 50 mL of a 2.50 M solution of ammonium phosphate is added to 100mL of a 1.50 M solution of calcium nitrate. At the end of the reaction 11.2 grams of precipitate produced. What is the percent yield a ton of calc basically, my teacher gave us a bunch of optimization problems and i've been working on them for hours and can't get them. if i could have help with maybe the first four, that would be AWESOME. thanks. 1) find the point on the graph of the function y = x^2 that is clos... word unscramble in french
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=bleh","timestamp":"2014-04-21T14:12:13Z","content_type":null,"content_length":"7032","record_id":"<urn:uuid:c7f7401f-43b0-46df-a9bf-aeba19c1541d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00154-ip-10-147-4-33.ec2.internal.warc.gz"}
The conjectural relation between mixed motivic sheaves and the perverse t-structure. up vote 5 down vote favorite As far as I remember, there 'should exist' an exact etale realization functor from the category of mixed motivic sheaves (over a base scheme $S$) to the category of perverse $l$-adic sheaves over $S$. However, I was not able to find this conjecture in the literature. I would be deeply grateful for any references or details and comments here! Upd. I would like to understand here the interplay between the residue fields of a scheme and their separable closures. It seems that I know most of the main reference on the subject; yet I cannot find a very precise answer to my question (possibly it's still there but is difficult to see). motives perverse-sheaves Is this asserted in Huber's book, or does she restrict $S$ to be the spectrum of a field? – S. Carnahan♦ Dec 18 '10 at 13:14 Most of the book is dedicated to motives over a field; possibly, in some small part of it 'relative' motives are considered. – Mikhail Bondarko Dec 18 '10 at 18:18 A (slightly informal) version of such a conjecture is discussed in Jannsen's article in Motives 1 (section 4.8), and he refers to Beilinson's height pairings paper for a reference. – Bhargav Dec 18 '10 at 18:35 I looked at the Beilinson's height pairing paper; yet I was not able to find a precise formulation. – Mikhail Bondarko Dec 18 '10 at 18:52 add comment 1 Answer active oldest votes For triangulated category of geometric motives over a regular scheme $S$, the $\ell$-adic realisation has been constructed by Florian Ivorra in his thesis. I think the functor is expected to be t-exact for the motivic and perverse t-structure but don't know if it has been explictly written as a conjecture. There is also a chapter about the perverse up vote 0 down t-structures in the motivic setting in Ayoub's thesis. vote accepted I am affraid that there could be some complicated details here.:) – Mikhail Bondarko Dec 18 '10 at 18:17 add comment Not the answer you're looking for? Browse other questions tagged motives perverse-sheaves or ask your own question.
{"url":"http://mathoverflow.net/questions/49797/the-conjectural-relation-between-mixed-motivic-sheaves-and-the-perverse-t-struct","timestamp":"2014-04-21T01:33:57Z","content_type":null,"content_length":"56252","record_id":"<urn:uuid:e2147ce2-58d9-4622-9d8f-283bffe80d2a>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00076-ip-10-147-4-33.ec2.internal.warc.gz"}
Need some help.. Re: Need some help.. I do not know the concept of nested list, can you please explain a little ? Johny22 wrote:I do not know the concept of nested list, can you please explain a little ? I hope this is mostly a language barrier, because this is rather fundamental to programming. A list is an ordered set of objects. Lists are nested when one of the object contained by a list is a list itself. Lisp uses a syntax for lists where they are delimited by parentheses and contained objects are separated by whitespace. So in the Code: Select all (+ (* a b)(/ c a )) example there is a list of three objects, two of which are lists themselves, each of which contain three objects. In the prefix syntax for expressions, the first element of a list, sometimes called `head`, is an operator, and the remaining elements are its arguments. Hence, there are are no operators 'in the middle' of the expression, because they all are in front of a list, even if their particular list is inside, or nested, in another list. Example recursive function This example shows one way of traversing nested lists, dispatching on what you find, and collecting results. Code: Select all (defun recur-sum (list) "sum the numbers in a tree of lists, return (values sum count)" (let ((sum 0) (count 0)) (dolist (x list) ((symbolp x) (error "Can't add symbol ~A" x)) ((null x)) ;; optimization: ignore empty lists ((listp x) (multiple-value-bind (subsum subcount) (recur-sum x) (incf sum subsum) (incf count subcount))) ((numberp x) (incf sum x) (incf count 1)) (error "Can't add ~A ~A" (type-of x) x)))) (values sum count))) ;; test case (trace recur-sum) (recur-sum '(1 2 (3 4) 5)) (untrace recur-sum) If recur-sum took an "&optional (count 0)" parameter and recurred using "(recur-sum x count)", then the recursive calls would know how many items were before them. Re: Need some help.. For the second problem with nth var, how can i use delete-if ? i tried writing some code but it didn't work Code: Select all (defun nth_var (el l) (if (null l) '() (and (delete-if-not #' (lambda (x) alpha-char-p (x)) (l))(delete-if-not #' (lambda (x) alphanumericp (x)) (l)) (nth el (l)) I tried to modify the list so the math operators will be removed and than to get the nth element from the list. I also need it to be a macro, how can i do that ? PS: I tried that flattening method but nothing, so i tried this method but this one also doesn't work. Re: Need some help.. for the problem where i need to get the nth variable from an expresion, I found how to flatten the list Code: Select all (defun flatten (list) (loop for i in list if (listp i) append (flatten i) else collect i) Now i want to use the alpha-char-p and alphanumericp functions with the remove-if function so i can remove all math operators from my flattened list and than to use the nth function so i can get the nth element from my final list. Can someone pls help me with this ? PS: It needs to be a macro. Re: Need some help.. I've wrote these 3 functions : Code: Select all (defun flatten (list) "Flatten the list" (if (null list) '() (loop for i in list if (listp i) append (flatten i) else collect i) (defun rem_mo (list) "Remove the math operators" (if (null list) '() (delete-if-not #'alphanumericp list) (defun nth_var (el list) "Get the nth variable from the list" (if (null list) '() (nth el list) Can someone show me how to make them work together ? 
I want to flatten the list gotten as an argument, tham to remove the math operators and finaly get the nth element that also was given as argument. All this needs to be a macro. Anyone knows how to do this ? Re: Need some help.. Hint: you want the nth_var of the rem_mo of the flattened list. Why do you need this to be a macro? Other than defmacro instead of defun, macros use the same syntax as functions; they just get their arguments unevaluated and can't easily be used at runtime. So (fun '(1 2 3)) becomes (mac (1 2 3)). Re: Need some help.. I made them work :d, thanks to all that tried to help me
{"url":"http://www.lispforum.com/viewtopic.php?f=2&t=492&start=10","timestamp":"2014-04-19T19:35:11Z","content_type":null,"content_length":"28388","record_id":"<urn:uuid:11704655-54a7-4cad-933e-8a8d3b0fb492>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00495-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Solve V = π r2h for h. • one year ago • one year ago Best Response You've already chosen the best response. V = 2(pi)rh Divide by 2(pi)r on both sides V/2(pi)r = 2(pi)rh/2(pi)r V/2(pi)r = h h = V/2(pi)r Best Response You've already chosen the best response. does that help you out Best Response You've already chosen the best response. I'm confused Best Response You've already chosen the best response. your trying to find H correct? Best Response You've already chosen the best response. Best Response You've already chosen the best response. what it is telling you to do is divide both sides of the equation by 2(pi)r to get the value of H Best Response You've already chosen the best response. Look how the equation is written and it explains how it is done Best Response You've already chosen the best response. okay I get what you did now Best Response You've already chosen the best response. that is how for these equations if you are trying to find the value the correct way should be how it is done! Best Response You've already chosen the best response. Thank you Best Response You've already chosen the best response. Your welcome Best Response You've already chosen the best response. I have another question. Solve P = 2(l + w) for l. What are the missing values in the table? P w l 14 2 ? 22 8 ? Best Response You've already chosen the best response. okay here goes and follow these is best I can help you Best Response You've already chosen the best response. Basically distribute the 2 with the l and w, so you end up with an equation of P = 2l+2w. Since you have the values for P and W, you can just plug them in, so the only term left would be "l". Does that make it more clear? Best Response You've already chosen the best response. all you have to do is move the constants to one side and make "l" by itself, that would be your solution to "l". Best Response You've already chosen the best response. you had me up until you said plug in Best Response You've already chosen the best response. place the number that is already given Best Response You've already chosen the best response. Best Response You've already chosen the best response. got it Best Response You've already chosen the best response. yeah I got p-2w/2=l but how do I plug them in Best Response You've already chosen the best response. type them in Best Response You've already chosen the best response. So 8 and 14 is my answer for l Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50af9461e4b09749ccac0b5e","timestamp":"2014-04-18T03:50:26Z","content_type":null,"content_length":"77465","record_id":"<urn:uuid:24be76cc-13a3-4792-9cfd-05c476363ddf>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
Normally distributed and uncorrelated does not imply independent 34,117pages on this wiki Assessment | Biopsychology | Comparative | Cognitive | Developmental | Language | Individual differences | Personality | Philosophy | Social | Methods | Statistics | Clinical | Educational | Industrial | Professional items | World psychology | Statistics: Scientific method · Research methods · Experimental design · Undergraduate statistics courses · Statistical tests · Game theory · Decision theory In probability theory, it is almost a cliche to say that uncorrelatedness of two random variables does not entail independence. In some contexts, uncorrelatedness implies at least pairwise independence (as when the random variables involved have Bernoulli distributions). It is sometimes mistakenly thought that one context in which uncorrelatedness implies independence is when the random variables involved are normally distributed. Here are the facts: • Suppose two random variables X and Y are jointly normally distributed. That is the same as saying that the random vector (X, Y) has a multivariate normal distribution. It means that the joint probability distribution of X and Y is such that for any two constant (i.e., non-random) scalars a and b, the random variable aX + bY is normally distributed. In that case if X and Y are uncorrelated, i.e., their covariance cov(X, Y) is zero, then they are independent. • But it is possible for two random variables X and Y to be so distributed jointly that each one alone is normally distributed, and they are uncorrelated, but they are not independent. Examples appear below. • Suppose X has a normal distribution with expected value 0 and variance 1. Let W = 1 or −1, each with probability 1/2, and assume W is independent of X. Let Y = WX. Then X and Y are uncorrelated, both have the same normal distribution, and X and Y are not independent. Again, the distribution of X + Y concentrates positive probability at 0, since Pr(X + Y = 0) = 1/2. • Suppose X has a normal distribution with expected value 0 and variance 1. Let $Y=\left\{\begin{matrix} -X & \mbox{if}\ \left|X\right|<c \\ X & \mbox{if}\ \left|X\right|>c \end{matrix}\right.$ where c is a positive number to be specified below. If c is very small, then the correlation corr(X, Y) is near 1; if c is very large, then corr(X, Y) is near −1. Since the correlation is a continuous function of c, the intermediate value theorem implies there is some particular value of c that makes the correlation 0. That value is approximately 1.54. In that case, X and Y are uncorrelated, but they are clearly not independent, since X completely determines Y. To see that Y is normally distributed—indeed, that its distribution is the same as that of X—let us find its cumulative distribution function: $\Pr(Y \leq x) = \Pr(\{|X|<c\mbox{ and }-X<x\}\mbox{ or }\{|X|>c\mbox{ and }X<x\})\,$ $= \Pr(|X|<c\mbox{ and }-X<x) + \Pr(|X|>c\mbox{ and }X<x)\,$ $= \Pr(|X|<c\mbox{ and }X<x) + \Pr(|X|>c\mbox{ and }X<x)\,$ (This follows from the symmetry of the distribution of X and the symmetry of the condition that |X| < c.) $= \Pr(X<x).\,$ Observe that the sum X + Y is nowhere near being normally distributed, since it has a substantial probability (about 0.88) of it being equal to 0, whereas the normal distribution, being a continuous distribution, has no discrete part, i.e., does not concentrate more than zero probability at any single point. Consequently X and Y are not jointly normally distributed, even though they are separately normally distributed.
{"url":"http://psychology.wikia.com/wiki/Normally_distributed_and_uncorrelated_does_not_imply_independent?oldid=42197","timestamp":"2014-04-19T00:02:23Z","content_type":null,"content_length":"61785","record_id":"<urn:uuid:1b3c321f-568d-4845-bed4-cc25a3345d36>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
Elements of Programming

Author: Alexander Stepanov & Paul McJones
Publisher: Addison-Wesley
ISBN: 978-0321635372
Aimed at: C++ academics
Rating: 2
Pros: Rigorous mathematical approach
Cons: A difficult read that fails to motivate the reader
Reviewed by: Mike James

What can a practicing programmer say about a book such as this one? I have a lot of time for Bjarne Stroustrup (the inventor of C++) and his opinion of the book is glowing - "There are many good books, but few great ones. 'Elements' is a great book ..." He then says "Reading "Elements" requires maturity both with mathematics and with software development..." So basically if I don't like the book it's down to me - I'm not mature enough either as a mathematician or as a software developer.

As you might guess, I'm leading up to saying that I don't like this book at all. I'm not unfamiliar with the idea of giving a mathematical basis to programming, and on balance I think it's a very good idea, but at the present time nothing much has presented itself to the world that rises above the concerns of mathematics. That is, if you need to construct algorithms that perform permutations, factor numbers or compute nested exponential powers, then there are theories that will provide organisations that help with correctness and generalisation. This is more or less what this book does, and within the confines of its little box this is fine. The problem is that I can't see any way of taking the ideas and breaking out of the mathematical applications into the sort of thing real-world programming is all about. I also don't see the "bigger picture" where the mathematical organisation proposed is strong enough to provide a programming methodology to rival object-oriented programming. The key idea seems to be the notion that iterators can be used to provide an algebra-like basis for defining sequential algorithms and their properties.

The bottom line is that despite the glowing reviews by people you need to take seriously, the book is boring. It fails to inspire or impress on the reader the importance of the topic being discussed. It is suggested that it should be on the syllabus of every computer science course - if so, we can add another abstract mathematical unit that students will in the main try to forget. I think that the principles expressed in this book could probably be explained without the rigorous mathematics in a few pages, and then the math could be added by any reasonable mathematician quite easily.

So what is there to say about this book... It is mathematical and you will need to be happy with advanced algebra, not quite at the level of group theory but heading in that direction - sets, order relations, partial ordering... Despite the occasional informal comment, there is very little by way of insight into the methods conveyed by the authors. Like so many poor maths books, you are left to guess at the meaning and importance of dense symbolic expressions. As a result it is a difficult read even if you are happy with mathematics, and if it has any gem hidden within it then it will take a great deal of effort to extract.
{"url":"http://www.i-programmer.info/bookreviews/20-theory/393-elements-of-programming-.html","timestamp":"2014-04-19T17:21:01Z","content_type":null,"content_length":"33702","record_id":"<urn:uuid:4f7dc586-71c2-48af-a153-9edea4937ed9>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00384-ip-10-147-4-33.ec2.internal.warc.gz"}
Weighing Bolts Date: 2/14/96 at 8:43:58 From: Anonymous Subject: math puzzle My math teacher gave me this problem to solve: Jim works in a hardware plant and makes rivets 6 inches long and 6 ounces in weight. One day Jim didn't come to work, and Harry, his replacement, made a drum of rivets 6 inches long, but which only weighed 5 ounces each. Harry's drum was stored along with 9 other proper drums. When Harry returned, he was given an ultimatum: Find the rivets or lose your job. Harry could only use a scale one time, and could take only one reading from that scale. So if he is guessing, it had better be a good guess. How could poor Harry be sure of picking the right drum out of the ten? I really don't know how to approach this problem. I have solved it in three weighings, but how can I do it in just one? Date: 2/14/96 at 9:33:19 From: Doctor Byron Subject: Re: math puzzle As with a lot of problems like this, there's a bit of a trick to this one that you have to catch on to - namely, that you don't have to weigh the full contents of each drum. Consider the possibilities if you can take individual bolts or groups of bolts from the drums.... Want to give it another shot? Now would be a good time, because if you read on, I'm going to give away the solution... Okay, here's the idea: You take a different number of bolts from each of the ten drums, one from the first, two from the second, etc. This will give you a total of 55 bolts. If each weighs 6 oz., the total weight should be 330 oz. When you weigh this group of bolts, however, the weight will be off by a number equal to the number of bolts taken from the 5 oz. batch. For example, if the fifth drum (from which 5 bolts were taken) is the odd batch, the weight will read 325 oz or 5 less than would be expected otherwise. Pretty sneaky, huh? -Doctor Byron, The Math Forum
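The one-weighing trick generalizes and is easy to simulate. Here is a short Python sketch (the function and variable names are mine, not from the original exchange) that picks a light drum at random and recovers it from a single reading:

import random

def find_light_drum(num_drums=10, good_oz=6, light_oz=5):
    light = random.randrange(num_drums)          # unknown to the solver
    # One weighing: take k+1 rivets from drum k (1 from the first, 2 from the second, ...).
    counts = [k + 1 for k in range(num_drums)]
    weight = sum(c * (light_oz if k == light else good_oz)
                 for k, c in enumerate(counts))
    expected = good_oz * sum(counts)             # 330 oz for ten drums of 6 oz rivets
    shortfall = expected - weight                # 1 oz missing per light rivet on the scale
    guess = shortfall - 1                        # 0-based index of the light drum
    return light, guess

for _ in range(5):
    light, guess = find_light_drum()
    print(light, guess, light == guess)          # the guess is always correct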
{"url":"http://mathforum.org/library/drmath/view/57937.html","timestamp":"2014-04-21T15:06:49Z","content_type":null,"content_length":"6956","record_id":"<urn:uuid:93a85aa7-5473-42cc-b638-5308159f4a79>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00468-ip-10-147-4-33.ec2.internal.warc.gz"}
A theorem of Gauss

This is the Gauss-Lucas Theorem: every root of the derivative P' lies in the convex hull of the roots of P. Highlights of the proof: write P(z) = a(z − z_1)(z − z_2)⋯(z − z_n). If w is a root of P' that is also a root of P, then w trivially is a convex combination of the roots of P; otherwise, take the logarithmic derivative of this product:

P'(z)/P(z) = 1/(z − z_1) + 1/(z − z_2) + ⋯ + 1/(z − z_n),

which is valid whenever P(z) ≠ 0. So if w is a root of P' but not of P we get:

1/(w − z_1) + 1/(w − z_2) + ⋯ + 1/(w − z_n) = 0,

and taking conjugates and dividing we finally get:

w = Σ_k [ (1/|w − z_k|²) / Σ_j (1/|w − z_j|²) ] · z_k,

and it's easy now to see these coefficients are barycentric coordinates of w (please do pay attention closely to the last expression's indexes).
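The final identity is easy to illustrate numerically. A minimal Python/NumPy sketch follows (the example polynomial is arbitrary, and it assumes no critical point coincides with a root of P, so no division by zero occurs):

import numpy as np

# Coefficients of an example polynomial P, highest degree first (any generic choice will do).
p = np.array([1, -2 + 1j, 3, -1, 0.5j])
roots = np.roots(p)                  # zeros z_k of P
crit = np.roots(np.polyder(p))       # zeros w of P'

for w in crit:
    lam = 1.0 / np.abs(w - roots) ** 2   # weights 1/|w - z_k|^2
    lam /= lam.sum()                     # normalize: nonnegative, summing to 1
    # The proof says w = sum_k lam_k z_k, i.e. w lies in the convex hull of the z_k.
    print(w, np.allclose(w, np.dot(lam, roots)))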
{"url":"http://mathhelpforum.com/math-challenge-problems/148169-theorem-gauss.html","timestamp":"2014-04-17T02:55:28Z","content_type":null,"content_length":"39886","record_id":"<urn:uuid:7a78e039-8099-474e-9718-ad5602190299>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
Box "trigonometric integrals" We start by looking at trigonometric integrals. If you came here for hyperbolic functions, see the note at the end. Integrals from expressions involving trigonometric functions appear quite often. For many of them there are standard procedures, many can be also found in lists of integrals. In most cases it is enough to know how to integrate various combinations of sines and cosines, as all other trig functions can be written using these two. However, this sometimes leads to complicated expressions, so we will also look at some popular combinations of tangents and cotangents. And finally, since secant and cosecant are sometimes used, we will pay some attention to these, too. As a last resort we cover the universal substitution. Because general methods usually lead to long calculations, we will start with types of integrals for which there are more convenient procedures. • Type f is some trigonometric function and n is a positive integer. For this type of integrals we have reduction formulas that hold for integers n greater than or equal to 2: These formulas can be deduced using trig identities and integration by parts, as you can see here. This is a good opportunity for a note concerning the usefulness of secants and cosecants: Notice that the reduction formulas only work for positive n! This means that if you have a cosine or sine in the denominator in a positive power, then you have to use the secant, respectively cosecant Since every application of a reduction formula decreases the power by two, there are two possibilities: 1. If n is even, the corresponding trig function eventually disappears and we end up integrating a constant. Example: 2. If n is odd, we end up integrating some trigonometric function. Sine and cosine are elementary integrals and thus we should have no trouble. Example: Now we look at the remaining four trig functions, we start with tangent: This result can be also written as follows: Cotangent is integrated in a similar way, as we already saw in Theory - Methods of integration - Substitution. Secant and cosecant rarely appear as such, more often we encounter them in the form of reciprocal cosine and sine. These integrals are a bit tougher: We have to use the add-subtract trick (or, more precisely, multiply-divide), substitution and partial fractions: An alternative solution suitable for those really familiar with secants and cosecants is here. Since the integral of cosecant is done similarly, we just quote the answer: Note: When integrating reciprocal sine or cosine, it is good to remember that the second power in the denominator is actually an elementary integral: Important remark: If we integrate an odd power of sine or cosine, we can avoid the repetitive use of reduction formulas. Instead, we can separate one sine or cosine which we use for substitution, and the remaining sines (or cosines) can be easily changed to the complementary function (the power is even now). An example explains it best: Note that if n is even, then this substitution trick does not help. It is the presence of an "extra sine" or "extra cosine" that makes substitution possible - and it is also a clear indication that we should use it. After all, we already saw this happening when we integrated tangent and secant. Note that in secant, the "extra cosine" is in the denominator; however, we did not mind, because we were able to move it into the numerator easily. 
Another alternative - sometimes very effective - is to reduce the power using various trigonometric identities. This brings us to the next type, where we look closer at this method.

• Type ∫ f^m(Ax) g^n(Bx) dx, where f and g are sines and/or cosines.

These integrals are best solved using trig identities. It is usually easier if you first reduce powers using

sin(Ax)cos(Ax) = (1/2) sin(2Ax),
sin^2(Ax) = (1 − cos(2Ax))/2,
cos^2(Ax) = (1 + cos(2Ax))/2.

Typically one would start using the first formula to eliminate all possible products sin(Ax)cos(Ax) and then use the other two on all powers that are left. Roughly speaking, these identities allow us to halve powers in exchange for doubling arguments. In the end we are left with products of the form f(Ax)g(Bx) with A ≠ B, where the above identities do not help. But we have another set of identities handling exactly this case:

sin(Ax)cos(Bx) = (1/2)[ sin((A+B)x) + sin((A−B)x) ],
sin(Ax)sin(Bx) = (1/2)[ cos((A−B)x) − cos((A+B)x) ],
cos(Ax)cos(Bx) = (1/2)[ cos((A−B)x) + cos((A+B)x) ].

However, there is an opposite strategy, where we try to use identities to remove multiples at the variable, usually in exchange for powers; in general we thus arrive at a linear combination of integrals with expressions sin^m(x)cos^n(x), which sounds tempting. The disadvantage of this procedure is that one never knows beforehand whether it ends up well or not. While the previous procedure always leads to integrals that can be easily integrated using linear substitutions, here it is a lottery, because one has to use approaches from the next type below. If one of the numbers m, n is even and the other odd, then integrating such an expression can be relatively simple via a substitution, see the first possibility below. The other possibilities (two odd, two even) are the other extreme; they are so complicated that the first recommended procedure tends to be better. If we try this alternative procedure on our last example above, it turns out that we are very lucky and integrate easily using an "extra cosine". The moral of the story is that we cover some standard procedures here, but a good knowledge of trig identities often offers better possibilities, and trig integrals tend to be one-of-a-kind. For another example we return to the fourth power of sine, which we solved using reduction formulas above, and here you can see how it can be done using identities. However, the method of using trig identities can sometimes be quite taxing, for instance this problem is cruel.

• Type ∫ R(sin(x), cos(x)) dx, where R is a rational function of two variables.

This type of integral is transformed via a suitable substitution to an integral of a rational function; then a standard approach is to apply the partial fractions decomposition, and we know how this goes on. This category does not allow directly for multiples of arguments in sine/cosine. However, trig identities can be used to manipulate expressions sin(Ax) and cos(Ax) with A an integer to get rid of this A, so perhaps all reasonable trig integrals (including those above) fit this type and it is worth knowing how to deal with them. There is a standard universal substitution for this type of trig integrals, but the calculations are usually rather complicated, so we first look at two cases for which we have easier substitutions:

1. The easiest case is when there is an "extra sine" or "extra cosine" in the numerator (or denominator!) and at all other places, sines and cosines have even powers. Then we apply the strategy that we already saw for instance when integrating secant and tangent; here is one more example, for instance

∫ cos(x)/(1 + sin^2(x)) dx = [y = sin(x)] = ∫ dy/(1 + y^2) = arctan(sin(x)) + C.

Occasionally we can get away with a similar trick using a tangent substitution - if we can change the integrated function into an expression consisting only of tangents and with an extra squared cosine in the denominator.
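The identities above are easy to double-check symbolically. Here is a small SymPy sketch (one convenient way, rewriting everything through complex exponentials so the differences cancel exactly; A and B are treated as free symbols):

import sympy as sp

x, A, B = sp.symbols('x A B')

identities = [
    sp.sin(A*x)*sp.cos(A*x) - sp.sin(2*A*x)/2,
    sp.sin(A*x)**2 - (1 - sp.cos(2*A*x))/2,
    sp.cos(A*x)**2 - (1 + sp.cos(2*A*x))/2,
    sp.sin(A*x)*sp.cos(B*x) - (sp.sin(A*x + B*x) + sp.sin(A*x - B*x))/2,
    sp.sin(A*x)*sp.sin(B*x) - (sp.cos(A*x - B*x) - sp.cos(A*x + B*x))/2,
    sp.cos(A*x)*sp.cos(B*x) - (sp.cos(A*x - B*x) + sp.cos(A*x + B*x))/2,
]

# Each difference, rewritten in terms of exponentials and expanded, should be exactly 0.
for expr in identities:
    print(sp.simplify(sp.expand(expr.rewrite(sp.exp))) == 0)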
Then the substitution y = tan(x) works. Similarly, we can use the cotangent substitution if we can get the function into a form with cotangents and an extra squared sine in the denominator.

2. If all sines and cosines in the integral have even powers, we cannot "borrow" one of them for substitution. Then we use the substitution y = tan(x). Other trig functions can be deduced from this equation. We outline how it is done for sine, cosine is similar:

sin^2(x) = sin^2(x)/(sin^2(x) + cos^2(x)) = tan^2(x)/(tan^2(x) + 1) = y^2/(1 + y^2),

and similarly cos^2(x) = 1/(1 + y^2), while dx = dy/(1 + y^2). Now we see why this substitution would not work if odd powers appeared. We will show this substitution on a suitable example, for instance

∫ dx/(sin^4(x) + cos^4(x)) = [y = tan(x)] = ∫ (1 + y^2)/(y^4 + 1) dy.

The problem can be finished using partial fractions, a brief sketch is here. Here we have to make an important remark. Note that sine and cosine are defined on the whole real line. Since in our example, in the denominator we add even powers of these, it follows that the denominator cannot be equal to zero (it could only happen if sine and cosine were simultaneously zero, which is not possible). Therefore the integral exists on the whole real line. However, the moment we applied our substitution, we had to restrict ourselves to some interval where the tangent exists, say (−π/2, π/2). This is not just a formal problem mathematicians love to worry about; there are even simple integrals where this comes up, and in general the procedure for handling it is not exactly simple, we show one such situation in this example in Solved Problems - Integrals. However, we often get lucky and the answer comes up reasonable, so a good strategy is simply to check on what set the given function and the answer are continuous; the integral should work there.

3. If we have a combination of sines and cosines that does not fit the first two situations, then we have to use the dreaded universal trig substitution. Since it is universal, it could have been used to solve all problems that we saw in the trig integrals section; but as we will now see, it is a substitution that one tries to avoid, just look at the transformation equations:

y = tan(x/2),   sin(x) = 2y/(1 + y^2),   cos(x) = (1 − y^2)/(1 + y^2),   dx = 2 dy/(1 + y^2).

Here you find how to deduce these equations. As you can see, all trig functions can be expressed using powers of y. Example:

∫ dx/sin(x) = ∫ [(1 + y^2)/(2y)] · [2/(1 + y^2)] dy = ∫ dy/y = ln|tan(x/2)| + C.

This was actually quite simple (although trying to show that this answer is equal to the one we had above is rather tough). Unfortunately, usually it is a different story. For instance, we will now try to apply the universal substitution to the problem above that we originally solved using the substitution y = tan(x):

∫ dx/(sin^4(x) + cos^4(x)) = [y = tan(x/2)] = ∫ 2(1 + y^2)^3 / (16y^4 + (1 − y^2)^4) dy.

Theoretically we could finish this problem using partial fractions. But note that the denominator is a polynomial of the eighth degree, so it is highly unlikely that we could factor it. As you can see, the universal substitution often leads to rational functions involving polynomials of high degrees, which leads to practical problems. This is a good motivation for always looking for some way to avoid this substitution.

We will now try to apply all these methods to the integral of tan^2(x). As a power it can be done by the appropriate reduction formula, ∫ tan^n(x) dx = tan^(n−1)(x)/(n−1) − ∫ tan^(n−2)(x) dx, so

∫ tan^2(x) dx = tan(x) − ∫ dx = tan(x) − x + C.

If we write the tangent using sine and cosine and square, we get a reasonable function featuring only even powers of sine and cosine. This means that there is no extra function that would allow us to use a nice substitution, but we can still do better than the universal substitution. Namely, we try the tangent substitution:

∫ tan^2(x) dx = [y = tan(x)] = ∫ y^2/(1 + y^2) dy = ∫ [1 − 1/(1 + y^2)] dy = y − arctan(y) + C = tan(x) − x + C.

Actually, an experienced integrator would still do something different: since tan^2(x) = 1/cos^2(x) − 1, the antiderivative tan(x) − x + C follows immediately. Simply put, knowing identities is the best advice. Other trig integrals can be found in Solved Problems - Integrals, namely here, here, and here.
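For readers who like to double-check substitutions numerically, here is a small Python sketch (SciPy assumed; the integrand and the limits are just an illustration) that evaluates such an integral directly and again after the universal substitution y = tan(x/2); the two results agree:

import math
from scipy.integrate import quad

# Integrand with only even powers of sin and cos, as discussed above.
f = lambda x: 1.0 / (math.sin(x)**4 + math.cos(x)**4)

a, b = 0.2, 1.2                           # limits chosen inside (0, pi), away from any trouble
direct, _ = quad(f, a, b)

# Same integral after the universal substitution:
#   sin x = 2y/(1+y^2),  cos x = (1-y^2)/(1+y^2),  dx = 2 dy/(1+y^2)
def g(y):
    s = 2*y / (1 + y*y)
    c = (1 - y*y) / (1 + y*y)
    return 1.0 / (s**4 + c**4) * 2 / (1 + y*y)

substituted, _ = quad(g, math.tan(a/2), math.tan(b/2))
print(direct, substituted)                # both evaluations give the same number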
Note that in the exposition of the last substitutions (sine, tangent and the universal one) we restricted our attention to simpler cases, namely when R is a rational function. This is not really a problem, since in most cases we use these substitutions in this way, but they also work in more general cases, as we explain in this note. It might come in handy.

Remark concerning hyperbolic functions: Since all the trig identities that we used here have their hyperbolic counterparts (which are rather treacherous, being almost but not quite the same), we can solve corresponding types of hyperbolic integrals using similar methods; in particular we appreciate substitutions corresponding to an "extra sinh" or "extra cosh" in integrals. Example:

∫ sinh^3(x) dx = ∫ (cosh^2(x) − 1) sinh(x) dx = [y = cosh(x)] = ∫ (y^2 − 1) dy = (1/3) cosh^3(x) − cosh(x) + C.

In Solved Problems - Integrals there is a problem whose solution involves a hyperbolic integral. Some examples of hyperbolic integrals are also at the end of the section Methods Survey - Integration - roots of quadratics. Note that unlike trig functions, hyperbolic functions can also be expressed using exponentials, which opens another possibility for solving hyperbolic integrals. The solution via exponentials is sometimes easier, but often has a serious drawback. The answer would feature exponentials, but since the question is given in the language of hyperbolic functions, we should also express the answer using hyperbolic functions, which may not be easy. But sometimes it is not necessary (for instance with definite integrals), and then this trick can often be very efficient.

Next box: integrals with roots of quadratics
Back to Methods Survey - Methods of integration
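The antiderivative in the hyperbolic example is easy to verify with a one-check SymPy sketch (a quick confirmation, nothing more): differentiating the candidate should return sinh^3(x).

import sympy as sp

x = sp.symbols('x')
candidate = sp.cosh(x)**3 / 3 - sp.cosh(x)     # result of the "extra sinh" substitution
print(sp.simplify(sp.diff(candidate, x) - sp.sinh(x)**3) == 0)   # True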
{"url":"http://math.feld.cvut.cz/mt/txtd/3/txe3db3g.htm","timestamp":"2014-04-18T11:52:49Z","content_type":null,"content_length":"17760","record_id":"<urn:uuid:7fcc545f-a423-4671-adae-3e579c6fccc9>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
System description: The MathWeb software bus for distributed mathmatical reasoning Results 1 - 10 of 14 - Number 4180 in LNAI , 2006 "... This Document is an online version of the OMDoc 1.2 Specification published by ..." - of Lecture Notes in Computer Science , 2007 "... Abstract. Heterogeneous specification becomes more and more important because complex systems are often specified using multiple viewpoints, involving multiple formalisms. Moreover, a formal software development process may lead to a change of formalism during the development. However, current resea ..." Cited by 30 (21 self) Add to MetaCart Abstract. Heterogeneous specification becomes more and more important because complex systems are often specified using multiple viewpoints, involving multiple formalisms. Moreover, a formal software development process may lead to a change of formalism during the development. However, current research in integrated formal methods only deals with ad-hoc integrations of different formalisms. The heterogeneous tool set (Hets) is a parsing, static analysis and proof management tool combining various such tools for individual specification languages, thus providing a tool for heterogeneous multi-logic specification. Hets is based on a graph of logics and languages (formalized as so-called institutions), their tools, and their translations. This provides a clean semantics of heterogeneous specification, as well as a corresponding proof calculus. For proof management, the calculus of development graphs (known from other large-scale proof management systems) has been adapted to heterogeneous specification. Development graphs provide an overview of the (heterogeneous) specification module hierarchy and the current proof state, and thus may be used for monitoring the overall correctness of a heterogeneous development. 1 - PROCEEDINGS OF THE 18TH CONFERENCE ON AUTOMATED DEDUCTION (CADE–18), VOLUME 2392 OF LNAI , 2002 "... ..." - In 20th International FLAIRS Conference , 2007 "... In the domain of qualitative constraint reasoning, a subfield of AI which has evolved in the past 25 years, a large number of calculi for efficient reasoning about spatial and temporal entities has been developed. Reasoning techniques developed for these constraint calculi typically rely on so-calle ..." Cited by 5 (2 self) Add to MetaCart In the domain of qualitative constraint reasoning, a subfield of AI which has evolved in the past 25 years, a large number of calculi for efficient reasoning about spatial and temporal entities has been developed. Reasoning techniques developed for these constraint calculi typically rely on so-called composition tables of the calculus at hand, which allow for replacing semantic reasoning by symbolic operations. Often these composition tables are developed in a quite informal, pictorial manner and hence composition tables are prone to errors. In view of possible safety critical applications of qualitative calculi, however, it is desirable to formally verify these composition tables. In general, the verification of composition tables is a tedious task, in particular in cases where the semantics of the calculus depends on higher-order constructs such as sets. In this paper we address this problem by presenting a heterogeneous proof method that allows for combining a higherorder proof assistance system (such as Isabelle) with an automatic (first order) reasoner (such as SPASS or VAMPIRE). 
The benefit of this method is that the number of proof obligations that is to be proven interactively with a semi-automatic reasoner can be minimized to an acceptable level. - In Lecture Notes in Computer Science , 2004 "... Abstract. As the amount of online formal mathematical content grows, for example through active efforts such as the Mathweb [21], MOWGLI [4], Formal Digital Library, or FDL [1], and others, it becomes increasingly valuable to find automated means to manage this data and capture semantics such as rel ..." Cited by 3 (1 self) Add to MetaCart Abstract. As the amount of online formal mathematical content grows, for example through active efforts such as the Mathweb [21], MOWGLI [4], Formal Digital Library, or FDL [1], and others, it becomes increasingly valuable to find automated means to manage this data and capture semantics such as relatedness and significance. We apply graph-based approaches, such as HITS, or Hyperlink-Induced Topic Search, [11] used for World Wide Web document search and analysis, to formal mathematical data collections. The nodes of the graphs we analyze are theorems and definitions, and the links are logical dependencies. By exploiting this link structure, we show how one may extract organizational and relatedness information from a collection of digital formal math. We discuss the value of the information we can extract, yielding potential applications in math search tools, theorem proving, and education. - IN: PROCEEDINGS OF THE 27TH GERMAN CONFERENCE ON ARTIFICIAL INTELLIGENCE (KI 2004) , 2004 "... The year 2004 marks the fiftieth birthday of the first computer generated proof of a mathematical theorem: “the sum of two even numbers is again an even number” (with Martin Davis’ implementation of Presburger Arithmetic in 1954). While Martin Davis and later the research community of automated dedu ..." Cited by 3 (3 self) Add to MetaCart The year 2004 marks the fiftieth birthday of the first computer generated proof of a mathematical theorem: “the sum of two even numbers is again an even number” (with Martin Davis’ implementation of Presburger Arithmetic in 1954). While Martin Davis and later the research community of automated deduction used machine oriented calculi to find the proof for a theorem by automatic means, the Automath project of N.G. de Bruijn – more modest in its aims with respect to automation – showed in the late 1960s and early 70s that a complete mathematical textbook could be coded and proof-checked by a computer. Classical theorem proving procedures of today are based on ingenious search techniques to find a proof for a given theorem in very large search spaces – often in the range of several billion clauses. But in spite of many successful attempts to prove even open mathematical problems automatically, their use in everyday mathematical practice is still limited. The shift - Electronic Notes in Theoretical Computer Science "... This position paper discusses various issues concerning requirements and design of proof assistant user interfaces (UIs). After a review of some of the difficulties faced by UI projects in academia, it presents a high-level description of proof assistant interaction. This is followed by an expositio ..." Cited by 2 (0 self) Add to MetaCart This position paper discusses various issues concerning requirements and design of proof assistant user interfaces (UIs). After a review of some of the difficulties faced by UI projects in academia, it presents a high-level description of proof assistant interaction. 
This is followed by an exposition of use cases and object identification. Several examples demonstrate the usefulness of these requirement elicitation techniques in the theorem proving domain. The second half of the paper begins with a consideration of the “principle of least effort ” for the design of theorem prover user interfaces. This is followed by a brief review of the “GUI versus text mode ” debate, proposals for better use of GUI facilities and a plea for better support of customisation. The paper ends with a discussion of architecture and system design issues. In particular, it argues for a platform architecture with an extensible set of components and the use of XML protocols for communication between UIs and proof assistant backends. - Towards Mechanized Mathematical Assistants, Lecture Notes in Computer Science , 2007 "... Over the last decade several environments and formalisms for the combination and integration of mathematical software systems have been proposed. Many of these systems aim at a traditional automated theorem proving approach, in which a given conjecture is to be proved or refuted by the cooperation o ..." Cited by 2 (2 self) Add to MetaCart Over the last decade several environments and formalisms for the combination and integration of mathematical software systems have been proposed. Many of these systems aim at a traditional automated theorem proving approach, in which a given conjecture is to be proved or refuted by the cooperation of different reasoning engines. However, they offer little support for experimental mathematics in which new conjectures are constructed by an interleaved process of model computation, model inspection, property conjecture and verification. In particular, despite some previous research in that direction, there are currently no systems available that provide, in an easy to use environment, the flexible combination of diverse reasoning system in a plug-and-play fashion via a high level specification of experiments. [2, 3] presents an integration of more than a dozen different reasoning systems — first order theorem provers, SAT solvers, SMT solvers, model generators, computer algebra, and machine learning systems — in a general bootstrapping algorithm to generate novel theorems in the specialised algebraic domain of , 2003 "... FoC is a computer algebra library with a strong emphasis on formal certification of its algorithms. We present in this article our work on the link between the FoC language and OMDoc, an emerging XML standard to represent and share mathematical contents. On the one hand, we focus on the elaborat ..." Cited by 2 (0 self) Add to MetaCart FoC is a computer algebra library with a strong emphasis on formal certification of its algorithms. We present in this article our work on the link between the FoC language and OMDoc, an emerging XML standard to represent and share mathematical contents. On the one hand, we focus on the elaboration of the documentation system FoCDoc. After an analysis of an OMDoc approach of the documentation we present our own XML implementation (FoCDoc) and how we generate, from a FoC program, documentation files in HTML (MathML), LaTEX and OMDoc. On the other
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=349502","timestamp":"2014-04-19T23:49:24Z","content_type":null,"content_length":"38299","record_id":"<urn:uuid:0ab0e399-37b0-4cc5-b113-4b6c243f794e>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00358-ip-10-147-4-33.ec2.internal.warc.gz"}
Posts by Total # Posts: 251 If 0.08 mol of LiCl is added to a one liter solution that has dissolved 0.01 mol of PbCl2, will a precipitate occur? The rack of weight plated at the fitness center has a total of 148 plates weighing 2.5,5,and 10 pounds each. The total weight is 500 pounds. There are ten times as many 5 pound plates as 10 pound plates. Set up a matrix equations and then solve it to determine the number of ea... in other terms of units, what is a joule equal to? a)N*m b)N/m c)W/s d)s/W e)N*s I need an acrostic poem for the element Sodium?? puhleez?! THANX!!! how does mark twain show racism and slavery in huckleberry fin? thanks! World History For my research project, I need to choose an absolute monarch. I want to do one on a monarch/emperor/king from Vietnam because I have Vietnamese origins and am interested in learning more about its history. Is there any well known Vietnamese absolute monarchs? Thank you Explain what acids and bases are in terms of hydrogen ions. Which has more hydrogen ions and which has less? the moon has a period of 27.3 days and a mean distance of 3.90x10^8 m from the center of the earth. find the period of a satellite in orbit 6.7x10^6 m from the center of earth. thank you! i have to make a prayer video and i dont know what to make it about, please help! thank you Given that Mg(ClO3)2 decomposes into magnesium chloride and oxygen: A) Write a balanced equation for this reaction. I got Mg(ClO3)2 yields MgCl2 + 3 O2 Is that right? B) If 0.302 grams of O2 is lost from 1.890 grams of a mixture of Mg(ClO3)2 and an inert material, what is the ... Given that Mg(ClO3)2 decomposes into magnesium chloride and oxygen: A) Write a balanced equation for this reaction. I got Mg(ClO3)2 yields MgCl2 + 3 O2 Is that right? B) If 0.302 grams of O2 is lost from 1.890 grams of a mixture of Mg(ClO3)2 and an inert material, what is the ... in the scarlet letter what influence did the witches have on pearl thanks! yea i had to read the book over the summer but my teacher wants us to find a celebrity that kinda represents them. like how lindsey lohan is like a pearl im doing a character analysis and i need help on what celebrity is like hester and one like pearl Can you proofread my essay? Life in the Fast Lane Have you ever changed so much in life that you couldn t possibly bare it? Well, that person is called a dynamic character, someone who changes throughout a drama. In the drama, The Crucible, by Arthur Miller th... why is the columbian exchange and pilgrims important to the U.S history? thank you! WHO IS saint. cecilia? thank you so much!! It's a difference of two squares. It'll be (x+sqrt(4)) and (x-(sqrt(4)), for example---replacing 4 with whatever number is being subtracted from x squared. That one is (x-2)(x+2). Another example: x^ 2-16 = (x-4)(x+4) You can even do this: x^4-16 = (x^2-4)(x^2+4) Can you proofread my essay and I also need a good title? Christopher Columbus was a brave, strong, hard-worker who took sea to America in 1470. What is biased? Biased is your opinion or idea on someone. In Christopher Columbus journal excerpt, there were many examples of how h... In a class of 80 seniors, there are 3 boys for every 5 girls. In the junior class, there are 3 boys for every 2 girls. If the two classes combined have an equal number of boys and girls, how many students are in the junior class? there are no variations within the catholic liturgy. all liturgy is identical worldwide. true or false? Part 1: Write two conditional statements for which the converse is true. 
1. Statement: 2. Converse: 3. Statement: 4. Converse: Part 2: Write two conditional statements for which the converse is false. 5. Statement: 6. Converse: 7. Statement: 8. Converse: simplify sin(540-2x) this is to do with trig Verify the identity. (secx + tanx)/(secx - tanx) = (1 + 2sinx + sin(^2)x)/cos(^2)x find the general solution for the equation: sin2x+ square root 3cosx =0 a dolls house questions: what religious themes do you see in the play? What is the significance of Nora s words, to hell and be damned in Act I? Was Nora wrong to have acquired the money for the trip to save Torvald s health? What elements of Nora and Tor... The stem-and-leaf plot represents the average monthly high temperatures in Tampa, Florida. What is the first quartile (Q1) of the temperatures? Round to the nearest tenth. Stem leaf 6 9 7 1267 8 Does this set of data have any outliers? Data: 1, 2, 2, 4, 6, 6, 7, 8, 9, 10, 12, 13, 16, 16, 18. no or yes The data below represents the average score per game of the top 40 NBA players, rounded to the nearest whole number. Find the minimum, Q1, median, Q3, maximum for the data. Stem l leaf 1 l 55667777888888899 2 l 00000011123445566679 3 l 135 AP Chemistry yea you're right! College Statistics A random variable X has a Gamma distribution with (alpha = a, beta = b). Show that P(X > 2ab) < (2/e)^a. hint: X > c is equivalent to e^Xt > e^ct How many liters of hydrogen gas can be produced at 300.0K and 1.55atm if 20.0g of sodium metal is reacted with water? How many liters of hydrogen gas can be produced at 300.0K and 1.55atm if 20.0g of sodium metal is reacted with water? I just wish that people could post a helpful answer, Translational motion is movement of an object without a change in its orientation relative to a fixed point, as opposed to rotational motion, in which the object is turning about an axis. With that in mind a leaf blowing acr... Is a package of 8 hot dog buns proportional to a package of 10 hot dog franks? A ball of moist clay falls 10.4 m to the ground. It is in contact with the ground for 20.5 ms before stopping. (a) What is the average acceleration of the clay during the time it is in contact with the ground? (Treat the ball as a particle.) m/s2 (b) Is the average acceleratio... math 7th grade hint: use pemdas math 7th grade what is 234(2-1+1092-924+18) ? A 16000 kg sailboat experiences an eastward force of 12400 N due to tide pushing its hull while the wind pushes the sails with a force of 51900 N directed toward the northwest (45 westward of North or 45 northward ofWest). What is the magnitude of the resultant ac- c... 3rd grade the answer is precisley 9 hands:) An invoice of $12,000 is dated April 5. The terms 5/10,n/30 are offered. Find the amount due if the discount is earned. Show your work. If i have .0030M BaSO4 how do I calculate the moles of sulfate? a plane needs to fly at 40 degrees n of e when flying through a wind that blows 80 mph @ 70 degrees S of E. If the plane goes 320 mph in still air in what direction should the plane aim? A stone is dropped from the top of a well. If the well is 23.1 meters deep, how long with the stone take to reach the bottom? An object with a mass of 3kg is dropped from 56.3 meters. How far above the ground is it's kinetic energy 1073J? A metal sphere of radius 5.00 cm is initially uncharged. How many electrons would have to be placed on the sphere to produce an electric field of magnitude 1.59 105 N/C at a point 7.98 cm from the center of the sphere? 
AP Chem how many milliliters of 0.10 M HCl must be added to 50.0 mL of 0.40 HCl to give a final solution that has a molarity of 0.25 M? 4% x 60month x 2000= 4800 I am really struggling with this problem. What would the rate of return be on an Individual Retirement Account with a contribution of $2000.00, at 4% Interest rate, for 5 years? math algebra what are 4 consecutive integers whos sum is -2 English (Mrs,Sue!!) Mrs.Sue, I wrote a paper on research proposal. Now, my teacher is asking us to write the steps on how we wrtoe the paper. Can you reword these questions for me? thank you so much!! "What techniques did you use as you thought through the issue? " "What prewriting... Ms.Susy Please Help With Time!!!!!! the answer is 5:30 hours +y| / F | / | / | / /a_ _ _ _ _ +x 40N This is pretty much what the picture looks like. Your answer was incorrect! "Only two forces act on an object (mass = 3.90 kg), as in the drawing. (F = 76.0 N.) Find the magnitude and direction (relative to the x axis) of the acceleration of the object. " (The angle is 45 degrees and the horizontal force is 40N) "Only two forces act on an object (mass = 3.90 kg), as in the drawing. (F = 76.0 N.) Find the magnitude and direction (relative to the x axis) of the acceleration of the object. " (The angle is 45 degrees and the horizontal force is 40N) Is my answer correct then?? Thank you!! R varies jointly as S and the square of T. If R is 32.4 when S=0.2 and T=9, find R when S=0.5 and T=5. I believe the answer is R=25, could someone check this for me? for his new room Daren will need 154 feet sq of wall to wall to cover of his new square room. what is the length of each wall of his room? How do you find standard deviation? a 25 kg box is released from rest on a rough inclined plane tilted at an angle of 33.5 degrees to the horizontal. The coefficient of kinetic friction between the box and the inclined plane is 0.200. a. determine the force of kinetic friction acting on the box b. determine the ... Yes it does. Thanks. my work: 243.228= 18.0153 * 4.184* (Tfinal-Tinitial) 1825.228=75.376Tfinal Tfinal= 24.2149 I got this answer and it is incorrect. can u please help me? and confirm the right answer. thanks :) I have this same question on my hw and it is marking me wrong. Can someone please say what the right answer is or confirm it? I still don't get how to do it. I did what you said and it keeps marking me incorrect :( No this is a different person. Thanks I got it. At constant pressure, which of these systems do work on the surroundings? Check all that apply. 2A (g) + B(s) --> 3C(g) 2A (g) +2B (g) -->5C(g) A(g)+ B(g) --> C(g) A (s) +2B(g)--> C(g) For a 5.0 g sample of CoCl3·9H2O, what weight of water (in grams) will be lost if the sample is heated to constant weight (all water driven off)? Film Literature What was the first comedy movie ever made? at a local community college, it costs 150 to enroll in a morning section of an algebra course. Supopose that the varible n stands for the number of students who enroll. then 150n stands for the total amount of money collected for this course. how much is collected if 41 stude... 9th gradae physical science teh density of copper is 8.92 g/cm3. If you plotted the mass of copper in grams versus the volume in cubic centimeters, what would the slope of the line be? 
Intro to Psychology 1 A behaviorist is asked to evaluate five students in the second grade who exhibt impluse control problems.The behaviorist decides it is best initially to study them in the classroom without interferring with the behavior.This behaviorist is conducting _____ research. A.) survey... ok thank you! yes i had read this previously, i guess im just not sure how to apply this to the question. any suggestions? can anyone tell me a little about MacArthur and Wilson's theory of island biogeography? I need to answer the question Describe a series of small habitats that exist within 10 to 20 kilometers of your house and which might be subject to MacArthur and Wilson's theory of ... If im thinking correctly Aristotle paired things with similar characteristics. I would do some quick research on him before answering but I would go with D. number of people who hear a rumor after t hours is N(t) = 2000/ 1 + 499e^-0.3t How long will it take for 200 people to hear the rumor? Suppose you spend 30.0 minutes on a stair-climbing machine, climbing at a rate of 75 steps per minute, with each step 8.00 inches high. If you weigh 150 lb and the machine reports that 690 kcal have been burned at the end of the workout, what efficiency is the machine using in... Sweating is one of the main mechanisms with which the body dissipates heat. Sweat evaporates with a latent heat of 2430 kJ/kg at body temperature, and the body can produce as much as 1.3 kg of sweat per hour. If sweating were the only heat dissipation mechanism, what would be ... An ideal monatomic gas expands isothermally from 0.590 m3 to 1.25 m3 at a constant temperature of 670 K. If the initial pressure is 8.00 104 Pa. (a) Find the work done on the gas. Answer is in J (b) Find the thermal energy transfer Q. Answer is in J The answer is no solution, unless ur teacher says it has one. I had ethe same problem and its no solution. Hope this helps For the first time in a very long time (perhaps ever!), the concept of financial risk and risk management has become a topic of concern at Presidential press conferences. Such concern has centered on esoteric financial products such as derivatives that are used to manage risk.... what change of state happens when a gas loses energy? what is the cisternal maturation hypothesis for Golgi apparatus function? how does the explosive polymerization of G-actin propels L. monocytogenes through the cytoplasm? what is a mechanism for how EB1 can mediate the extension of ER cisternae along microtubules? Spanish- SraJMcGin please recheck Thank you Spanish- SraJMcGin please recheck I think it does-so I was okay when I used mejor que in my sentence? I wasn't suppose to change anything?? Spanish- SraJMcGin please recheck I'm not sure what you meant by correcting this sentence--Thank you for any further explanation you can give-- 3. Tocar el piano es______tocar la guitarra. (+bueno) Tocar el piano es mejor que tocar la guitarra. You said in an earlier post: mejor que(irregular - or comparat... For number ten and eleven above I think I put the wrong answer. To place verb in present progressive it would be: 10.Yo escucho it would be Yo estoy escuchando, correct? 11. nosotros hablamos would be Nosotros estamos hablando, correct? Thank you Please check these- Fill in comarative form for the first three: 1.Surfear es ____ interesante (cazar.) Surfear es más divertido que cazar. 2.El fútbol es ____ el golf.(interesante) El fútbol es más interesante que el golf. 3. Tocar el piano es_____... 
Algebra-Please check For the ellipse (x-2)^2/49 + (y+1)^2/25=1 List center: (2,-1) the foci: -(sqrt 74) AND +(SQRT 74,0) Major axis: x=2 Minor axis: y= -1 Vertices: (7,0) (-7,0) (0,5) (0,-5) Is this correct? A ball has a diameter of 3.70 cm and average density of 0.0841 g/cm3. What force is required to hold it completely submerged under water? I need to find the magnitude in N Thank you! (a) Calculate the absolute pressure at the bottom of a fresh-water lake at a depth of 23.2 m. Assume the density of the water is 1.00 103 kg/m3 and the air above is at a pressure of 101.3 kPa. Pa (b) What force is exerted by the water on the window of an underwater vehicle at ... Four objects are held in position at the corners of a rectangle by light rods . m1 2.90(kg) m2 1.90(kg) m3 4.50(kg) m41.90 (kg) (a) Find the moment of inertia of the system about the x-axis. Answer in kg · m2 (b) Find the moment of inertia of the system about the y-axis... can you help me brain storm? I want some points on why Per Capita GDP might be a bad way to compare wellbeing across countries. I don't understand what is meant when a question asks you for the "term" Historical significance. for example:john maynard keynes. I know everything about him and his economic ideas but, if i was asked to write a short paragraph of his historical significance i w... Pages: 1 | 2 | 3 | Next>>
{"url":"http://www.jiskha.com/members/profile/posts.cgi?name=Allie","timestamp":"2014-04-17T16:19:24Z","content_type":null,"content_length":"27714","record_id":"<urn:uuid:7a7c7b4c-e971-498a-8485-3de3be1d8620>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Mathematical Models and Dynamics of Functional Equations

The above mentioned RIMS workshop will take place. Anyone interested is welcome!

Room 115, Research Institute for Mathematical Sciences (RIMS), Kyoto University
10-14 November, 2003

Monday, November 10
14:00-14:30 Takeshi Nishikawa (The Univ. of Electro-Communications), Nguyen Van Minh (Hanoi Univ. of Science), Toshiki Naito (The Univ. of Electro-Communications): Necessary and sufficient conditions for all mild solutions of Functional Differential Equations to be asymptotic periodic
14:30-15:00 Yasuhisa Saito (Shizuoka Univ.): A Time-Delay Model for Inverse Trophic Relationship
15:10-15:35 Naoki Yoshida (Osaka Prefecture Univ.): Permanence of a Delayed SIR Epidemic Model with Logistic Process
15:35-16:05 Yasuhiro Takeuchi (Shizuoka Univ.): Global Stability of an SIR Epidemic Model with Time Delay
16:05-16:35 Makoto Iima, Keita Suzuki (Hokkaido Univ.): Numerical analysis of a Tag model in circle

Tuesday, November 11
9:30-10:10 Kazufumi Shimano (Tokyo Metropolitan Univ.): Homogenization and penalization of Hamilton-Jacobi equations with integral terms
10:10-10:50 Hiroyuki Usami (Hiroshima Univ.), Tomomitsu Teramoto (Onomichi Univ.): Oscillation theorems of quasilinear elliptic equations with arbitrary nonlinearities
11:00-11:50 Miyashita Toshiya, Takashi Suzuki (Osaka Univ.): On a semilinear elliptic eigenvalue problem with non-local term
13:30-14:00 Shinji Nakaoka (Osaka Prefecture Univ.): Stability analysis for a physiological clock model with delayed negative feedback loop
14:05-14:55 Jingan Cui (Nanjing Normal Univ.): The effect of dispersal on population dynamics
15:05-15:45 Jong Son Shin (Korea Univ.), Toshiki Naito (The Univ. of Electro-Communications): A representation of solutions of linear difference equations with constant coefficients
15:45-16:15 Hidekazu Asakawa (Gifu University): Existence of slowly decaying positive solutions for some superlinear 2nd order ODE
16:15-16:45 Manabu Naito (Ehime Univ.): Oscillation Theorems for 4-dimensional Emden-Fowler differential systems

Wednesday, November 12
9:30-9:50 Keita Ashizawa, Rinko Miyazaki (Shizuoka Univ.): An application of the monotone theory to a delay differential equation
10:00-11:30 [Special Session] Hiroshi Matano (Univ. of Tokyo): Order-preserving dynamical systems in the presence of symmetry: theory and applications I
13:30-15:00 [Special Session] Hiroshi Matano (Univ. of Tokyo): Order-preserving dynamical systems in the presence of symmetry: theory and applications II
15:10-15:30 Shigeru Haruki (Okayama Univ. of Science), Shin-ichi Nakagiri (Kobe Univ.): Partial Difference Functional Equations Arising from the Cauchy-Riemann Equations
15:30-16:00 Kiyoyuki Tchizawa (Musashi Institute of Tech.): Duck Solutions in Higher Dimension
16:10-16:40 Mami Suzuki (Aichi Gakusen Univ.): Analytic solutions of Nonlinear Difference Equation
16:40-17:00 Yoshiaki Muroya (Waseda Univ.), Ishiwata Emiko (Tokyo Univ. of Science): Global stability for nonlinear difference equations with variable delays
17:00-17:20 Ryusuke Kon (Shizuoka Univ.): Competitive exclusion in discrete dynamical systems

Thursday, November 13
9:05-9:25 Toru Sasaki (Okayama Univ.): Local prevention in an SIS model with diffusion
9:30-10:00 Katumi Kamioka (Univ. of Tokyo): Coexistence steady state of sessile metapopulation model: The case of two species and two habitats
10:00-10:30 Yukio Kan-on (Ehime Univ.): Bifurcation structure of stationary solutions of a Lotka-Volterra competition model with diffusion
10:35-11:05 Fumio Nakajima (Iwate Univ.): A predator-prey system model of singular equations
11:10-11:40 Jito Vanualailai (Univ. of the South Pacific), Shin-ichi Nakagiri (Kobe Univ.): Exponential and Non-exponential Convergence of Solutions in Some Class of Nonlinear Systems with Applications to Neural Networks
13:30-14:00 Naoto Yamaoka, Jitsuro Sugie (Shimane Univ.): An oscillation theorem for second-order nonlinear differential equations with p-Laplacian
14:05-14:35 Satoshi Tanaka (Hachinohe National College of Technology): Uniqueness of solutions with prescribed numbers of zeros for two-point boundary value problems
14:35-15:05 Tomomitsu Teramoto (Onomichi Univ.): Nonexistence of nonnegative entire solutions of second order semilinear elliptic systems
15:15-15:55 Shin-ichi Nakagiri (Kobe Univ.), Junhong Ha (Korea Univ. of Technology and Education): Identification Problems for Nonlinear Perturbed Sine-Gordon Equations
15:55-16:25 Junhong Ha (Korea Univ. of Technology and Education), Jito Vanualailai (Univ. of the South Pacific), Shin-ichi Nakagiri (Kobe Univ.): A Collision Avoidance and Attraction Problem of a Vehicle
16:25-16:55 Jin-Soo Hwang, Shin-ichi Nakagiri (Kobe Univ.): Boundary Control Problems for Viscoelastic Systems with Long Memory

Friday, November 14
9:05-9:25 Tsuyoshi Kajiwara, Takuma Iuchi (Okayama Univ.): Mathematical models of infectious disease with delay
9:30-10:00 Yanagiya Akira (Advanced Institute For Complex Systems, Waseda Univ.): Mathematical Models for Grain Grooving
10:00-10:30 Tomoyuki Tanigawa (Toyama National College of Technology): Existence of rapidly varying solutions of second order half-linear differential equations
10:40-11:10 Seiji Saito (Osaka Univ.): Fredholm Equations and Volterra Equations Arising from Fuzzy Boundary Differential Equations
11:10-11:40 Osamu Tagashira (Osaka Prefecture Univ.): Delayed feedback control chemostat models
11:40-12:10 Rinko Miyazaki (Shizuoka Univ.): Stabilization of unstable periodic orbits on planar systems
{"url":"http://www.sys.eng.shizuoka.ac.jp/~miyazaki/rims-e.html","timestamp":"2014-04-19T14:29:10Z","content_type":null,"content_length":"7181","record_id":"<urn:uuid:594f263f-243a-42dd-b5ea-98ff94eaf460>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00584-ip-10-147-4-33.ec2.internal.warc.gz"}
Efficient algorithms for computing the nearest polynomial with constrained roots , 2005 "... In his 1999 Sigsam Bulletin paper [7], H. J. Stetter gave an explicit formula for finding the nearest polynomial with a given zero. This present paper revisits the issue, correcting a minor omission from Stetter’s formula and explicitly extending the results to different polynomial bases. Experiment ..." Cited by 2 (2 self) Add to MetaCart In his 1999 Sigsam Bulletin paper [7], H. J. Stetter gave an explicit formula for finding the nearest polynomial with a given zero. This present paper revisits the issue, correcting a minor omission from Stetter’s formula and explicitly extending the results to different polynomial bases. Experiments with our implementation demonstrate that the formula may not after all, fully solve the problem, and we discuss some outstanding issues: first, that the nearest polynomial with the given zero may be identically zero (which might be surprising), and, second, that the problem of finding the nearest polynomial of the same degree with a given zero may not, in fact, have a solution. A third variant of the problem, namely to find the nearest monic polynomial (given a monic polynomial initially) with a given zero, a problem that makes sense in some polynomial bases but not others, can also be solved with Stetter’s formula, and this may be more satisfactory in some circumstances. This last can be generalized to the case where some coefficients are intrinsic and not to be changed, whereas others are empiric and may safely be changed. Of course, this minor generalization is implicit in [7]; This paper 1 simply makes it explicit. - In Proceedings of the Fifth Conference on Real Numbers and Computers , 2003 "... 52, avenue de Villeneuve ..." "... quantifier elimination and also aim at practicality. This is realized by utilizing the scheme for robust control design by [1]. The organization of the rest of the paper is as follows: The idea of robust control synthesis based on SDC and special QE algorithm is explained in 52. \S 3 is devoted to o ..." Add to MetaCart quantifier elimination and also aim at practicality. This is realized by utilizing the scheme for robust control design by [1]. The organization of the rest of the paper is as follows: The idea of robust control synthesis based on SDC and special QE algorithm is explained in 52. \S 3 is devoted to our QE approach to robust control analysis. \S 4 provides our QE approach to various synthesis problems. We show several concrete analysis and synthesis examples demonstrating the validity of our approach in \S 5. \S 6 addresses the concluding remarks. 2Parametric approach to robust control design via QE Consider afeedback control system shown in Fig 1. $\mathrm{p}=[p_{1},p_{2}, \cdots,p_{s}] $ is the vector of uncertain real parameters in the plant G. $\mathrm{x}=[x_{1}, x_{2}, \ cdots,x_{t}] $ is the vector of real parameters of controller $C $. Assume that the controller considered here is of fixed order. Figure 1: Astandard feedback system The performance of the control system can often be characterized by avector $\mathrm{a}=[a_{1}, \cdots,a\iota] $ which are functions of the plant and controller parameters $\mathrm{p}$ $H_{\infty}$-norm constraints, and "... When working with empirical polynomials, it is important not to introduce unnecessary changes of basis, because that can destabilize fundamental algorithms such as evaluation and rootfinding: for more details, see e.g. [3, 4]. Moreover, in ..." 
Add to MetaCart When working with empirical polynomials, it is important not to introduce unnecessary changes of basis, because that can destabilize fundamental algorithms such as evaluation and rootfinding: for more details, see e.g. [3, 4]. Moreover, in "... In this paper, we consider the problem of a nearest polynomial with a given root in the complex field (the coefficients of the polynomial and the root are complex numbers). We are interested in the existence and the uniqueness of such polynomials. Then we study the problem in the real case (the coef ..." Add to MetaCart In this paper, we consider the problem of a nearest polynomial with a given root in the complex field (the coefficients of the polynomial and the root are complex numbers). We are interested in the existence and the uniqueness of such polynomials. Then we study the problem in the real case (the coefficients of the polynomial and the root are real numbers), and in the real-complex case (the coefficients of the polynomial are real numbers and the root is a complex number). We derive new formulas for computing such polynomials. , 1998 "... for problems of continuous mathematics whilst in the introduction to [2], R. Loos defines Computer Algebra: that part of computer science which designs, analyzes, implements, and applies algebraic algorithms. Clearly these definitions are similar, and the major distinction is continuous versus algeb ..." Add to MetaCart for problems of continuous mathematics whilst in the introduction to [2], R. Loos defines Computer Algebra: that part of computer science which designs, analyzes, implements, and applies algebraic algorithms. Clearly these definitions are similar, and the major distinction is continuous versus algebraic. Until one introduces a notion of topology, the concepts of algebra and analysis (or continuous mathematics) are entirely independent and complementary. We can therefore expect to find much rich material in their interaction. In comparing these definitions, one really needs to appeal to examples, such as the following, to make the distinctions clearer. Computing the approximate value of a definite integral by Gauss-Chebyshev quadrature is a numerical computation:
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=168636&sort=cite&start=10","timestamp":"2014-04-18T10:55:23Z","content_type":null,"content_length":"27537","record_id":"<urn:uuid:10d13d1f-40ea-448e-8caf-e80c1afeb132>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00131-ip-10-147-4-33.ec2.internal.warc.gz"}
4K Resolution - The New Frontier in Home Theater & Media Rooms post #31 of 40 8/7/12 at 2:24am Randomoneh is right. Viewing distance isn't chosen like that. You don't move half as close to the screen when watching a DVD (instead of a BD). Rather than asking yourself "at what viewing distance does the size and resolution of my TV approach 'perfection'?", you ask yourself "at what viewing distance does the size of my TV offer the highest immersion, in spite of other limitations?". You watch from the distance that 'feels' best to you, or provides optimal 'immersion', as Randomoneh put it. Originally Posted by Randomoneh Warning: Spoiler! (Click to show) No, everything is fine. Somehow I missed "vertical" resolution. Now, there is an error with how Mr. Clark at ClarkVision calculates needed resolution and "resolution of the eye". He simply multiplies number of pixels per degree (200 for 0.3 arcmin-per-pixel) with number of degrees that print / display is occupying. Try doing that and you'll see how wrong it is. Displays and printed materials are straight and one degree of FOV might occupy 1 inch in the center of the display and 2 inches at the edge. I've made that same mistake before. So, his example is this: "Consider a 20 x 13.3-inch print viewed at 20 inches. The Print subtends an angle of 53 x 35.3 degrees, thus requiring 53*60/.3 = 10600 x 35*60/.3 = 7000 pixels, for a total of ~74 megapixels to show detail at the limits of human visual acuity." At 20 inches, apparent size of every pixel should be 0.3 arcminutes, that is 0.0017453292531 inches. That makes 572.957795 pixels per inch or 11459 x 7620 for whole print. Center of his image (10060 x 7000 doesn't match 200 pixels per degree / 0.3 arcminutes per pixel. His error would even higher if imaginary print occupies higher angle of viewer's field of view. Here's number of pixels needed for some viewing angles. Degrees of field of view / needed number of pixels: 1 = 200 5 = 1000.6097 10 = 2005.04157 15 = 3017.1764 20 = 4041.01414 25 = 5080.73843 30 = 6140.78725 35 = 7225.93252 40 = 8341.37157 45 = 9492.8346 50 = 10686.713 55 = 11930.2151 60 = 13231.5576 65 = 14600.2042 70 = 16047.1673 75 = 17585.3928 80 = 19230.2588 85 = 21000.2305 90 = 22917.73 95 = 25010.3136 100 = 27312.2871 105 = 29866.9673 110 = 32729.9105 115 = 35973.6303 120 = 39694.6728 125 = 44024.5498 130 = 49147.2306 135 = 55328.2946 140 = 62965.9458 145 = 72685.7534 150 = 85530.1329 155 = 103375.2 160 = 129972.906 165 = 174077.442 170 = 261950.853 Cotangent of 0.3 arcminutes can be used for easy calculation of needed viewing distance (in inches) or needed ppi value of display / print. Needed viewing distance = 11459.155895344 / PPI Needed PPI value = 11459.155895344 / distance in inches For 7680 x 4320 displays, viewing distance = 1.29923485 x diagonal measurement. Or 2.65258238 x image height. As for your graphs, I like them. I've made something similar before, based on 0.3 arcminute per pixel value, too. I like your attention to details. I think your chart looks very professional, much more advanced than mine! Yes, to calculate the number of dots needed on a piece of paper (I will use the words "pixels" and "screen"), you follow the same procedure as I did for 16:9 screens. Only this time you know the height and width of the screen beforehand, so you won't have to bother with the Pythagorean theorem and aspect ratios. First, you calculate the desired pixel pitch, of the pixel which is the very closest to your eyes, based on the viewing distance. 
Then, you go on to calculate the target PPI (pixels per inch) or target resolution from that. The only problem I can see is that, on the diagonal pixel pitch will be wider (due to the Pythagorean theorem). Again, it's an approximation. I only used Dr. Clark's sources, not his calculations, so I haven't really looked at them. But, I can make my own: To calculate the desired pixel pitch p when viewed from distance y: p = y * tan( 0.3 arcminutes ) Example: The desired pixel pitch when viewed from a distance of 20 inches: ( 20 in ) * tan( 0.3 arcminutes ) = 44.3313631 microns From this, the desired PPI can be calculated: ppi = ( 1 in ) / ( y * tan( 0.3 arcminutes ) ) Which is the same as: ((( y ) * tan( 0.3 arcminutes ))^(-1))*(1 in) Example: The desired PPI when viewed from a distance of 20 inches: ((( 20 in ) * tan( 0.3 arcminutes ))^(-1))*(1 in) = 572.958 That is very close to the standard 600 DPI resolution of printers at home. To calculate the desired resolution r for a specific length (dimension) w of screen, replace "1 in" with w: r = w / ( y * tan( 0.3 arcminutes ) ) Example: The 13.3 inches from a distance of 20 inches in Clark's example: ( 13.3 in ) / ( ( 20 in ) * tan( 0.3 arcminutes ) ) = 7620.34 Now, you asked, what if we only know the field of view the paper or screen occupies? Well, the FoV really only lets you calculate the size of the sceen, so you can replace the screen size with that calculation in the above formula. If the screen occupies an angle of q when centered, perpendicular to your line of vision, and viewed from distance y, its length is: 2 * tan( q / 2 ) * y Example: The length of a screen occupying 35.3 degrees when viewed from 20 inches: (2 * tan( 35.3 degrees / 2 ) * 20 in) in inches = 12.7271772 inches Now we can just merge the two formulae into one: ( 2 * tan( q / 2 ) * y ) / ( y * tan( 0.3 arcminutes ) ) And simplify: 2 * tan( q / 2 ) / tan( 0.3 arcminutes ) Example: Pixels needed to approach theoretical "perfection" on a screen occupying 35.3 degrees: 2 * tan( (35.3 degrees) / 2 ) / tan( 0.3 arcminutes ) = 7292.14 Hope that helps. Edited by MisterMuppet - 8/7/12 at 11:08am Really making this harder - Why 4K TVs are stupid (still). This is pretty funny as the vast majority of people can't even tell the diff between HD and SD when viewed on similar widescreen sets. I was at CES a few years ago and Samsung had two sets side by side with identical programming, one in 720P and the other in 1080P and none of the experts walking by could tell the diff., to me it wasn't close but it told me a lot about what people really saw if they didn't know what to look for on the screen. Originally Posted by HughScot This is pretty funny as the vast majority of people can't even tell the diff between HD and SD when viewed on similar widescreen sets. I was at CES a few years ago and Samsung had two sets side by side with identical programming, one in 720P and the other in 1080P and none of the experts walking by could tell the diff., to me it wasn't close but it told me a lot about what people really saw if they didn't know what to look for on the screen. Well, you'll understand if I tell you I have more belief in scientific study that deals with much wider range of [angular] resolutions than anecdotal evidence. You Folks are missing a point. With 8k, or 4k, or even 1080p, you can see exactly what you want to see. What if you knew someone on the TV screen, and it was in 8k or 4k? You could then zoom in on that face in the crowd. 
With lower resolutions, you will get poorer results. The display will then take on new markets, not just movies anymore. Professionals could use the features to show off their projects. Law enforcement could zoom in on a suspect. Outer space will be more visible. Whole new markets could develop that we can't yet imagine. I am looking forward to that day. Have a good day. 4096x2160 is the resolution for professional 4K, which is used in 4K Digital Cinema. 3840x2160 (Quad HD) will be the consumer version. It's the same with HD: Digital Cinema uses 2048x1080 while consumer HD is 1920x1080. Why the difference? Isn't there a benefit from having a horizontal resolution that is a power of 2 (2^12 in the case of 4096, 2^13 in the case of 8192)? I believe it has to do with the chips that digital cinema projectors use. Not 100% sure though. I am all for new technology like 4K, but if you don't have a huge screen to see the difference then what's the point? I believe that to really enjoy the 4K experience you are going to need a really big HDTV or a projection system. Full HD 1080p is more than good enough for me right now. Indeed! I believe the real advantage will be in 70-inch-plus screens, where people have to sit close and need much higher pixel density.
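For what it's worth, the two formulas MisterMuppet derives above (the PPI needed at a given viewing distance, and the pixel count needed across a given field of view, both at 0.3 arcminutes per pixel) are easy to sanity-check numerically. A small Python sketch, taking the 0.3 arcmin figure purely as the assumed acuity limit used in the thread:

    import math

    ACUITY_RAD = math.radians(0.3 / 60.0)   # 0.3 arcminutes, in radians

    def required_ppi(viewing_distance_in):
        # pixels per inch so that one pixel subtends 0.3 arcmin
        return 1.0 / (viewing_distance_in * math.tan(ACUITY_RAD))

    def pixels_for_fov(fov_degrees):
        # pixel count across a flat screen, centered and perpendicular to
        # the line of sight, subtending fov_degrees
        return 2.0 * math.tan(math.radians(fov_degrees) / 2.0) / math.tan(ACUITY_RAD)

    print(required_ppi(20.0))     # ~573 ppi at 20 inches, as computed above
    print(pixels_for_fov(35.3))   # ~7292 pixels, matching the worked example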
{"url":"http://www.avsforum.com/t/1411160/4k-resolution-the-new-frontier-in-home-theater-media-rooms/30","timestamp":"2014-04-18T19:21:14Z","content_type":null,"content_length":"149017","record_id":"<urn:uuid:44cc4f65-2cd7-438e-8dbf-ddd9073edc70>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00472-ip-10-147-4-33.ec2.internal.warc.gz"}
Fair Sampling Loophole now closed for Photons Did you already find the time to read that article? I hope to find time in the coming week - but that's unlikely... I found out that these two references are behind paywall: 14. P. H. Eberhard, Background Level and Counter Efficiencies Required for a Loophole-Free Einstein-Podolsky-Rosen Experiment, Physical Review A 47, 747–750 (1993). 18. J. F. Clauser, M. A. Horne, Experimental consequences of objective local theories, Physical Review D 10, 526–535 (1974). But it seems that the proof of inequality that is used in the paper for interpretation of results is rather simple. Inequality (3) is: [tex]J=S(\alpha_1)+S(\beta_1)+C(\alpha_2,\beta_2)-C(\alpha_1,\beta_1)-C(\alpha_1,\beta_2)-C(\alpha_2,\beta_1)\geqslant 0[/tex] I will check my proof and then I will post it here.
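For reference, the left-hand side of inequality (3) is trivial to evaluate once the singles rates S and coincidence rates C are normalised to probabilities; the proof of the bound is the hard part, not the bookkeeping. A toy Python sketch (the setting labels and the numbers are made up for illustration, not taken from the paper or its references):

    def J(S_a1, S_b1, C):
        # inequality (3): J = S(a1) + S(b1) + C(a2,b2)
        #                     - C(a1,b1) - C(a1,b2) - C(a2,b1) >= 0
        return (S_a1 + S_b1 + C[('a2', 'b2')]
                - C[('a1', 'b1')] - C[('a1', 'b2')] - C[('a2', 'b1')])

    C = {('a1', 'b1'): 0.40, ('a1', 'b2'): 0.35,
         ('a2', 'b1'): 0.35, ('a2', 'b2'): 0.15}
    print(J(0.50, 0.50, C))   # 0.05 here; a negative value would signal a violation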
{"url":"http://www.physicsforums.com/showthread.php?p=4187884","timestamp":"2014-04-17T21:26:39Z","content_type":null,"content_length":"89507","record_id":"<urn:uuid:c7309869-f997-46f8-a456-6cce40e31b23>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00616-ip-10-147-4-33.ec2.internal.warc.gz"}
Pure or Applied maths with stats Hi all, This post will be long, but I hope in giving more relevant information, the replies will be more helpful. Apologies for the length, and thanks for reading, given now. I'll state the question here though to avoid you having to search for it: Does it make any real difference whether one takes either applied maths or pure maths, with statistics at undergraduate level, if there is no interest in a PhD in either? I have a BSc and an MA in philosophy. I love it and wanted to do a PhD in it and get a uni job. This was my single-minded dream. But the state of uni employment has gotten worse, especially for lots of arts and humanities subjects, since my undergrad ended in 2008. I've decided it's not worth the sacrifice. Studying philosophy meant mathematical logic. I enjoyed it and did well. This led me to start a part-time degree in mathematics and statistics in 2009. I had to stop for my philosophy MA but am now studying it again. My goal is to become a professional statistician. Almost certainly in medical statistics or public health as, of stat jobs, this seems to fit best with my several motivations (increase store of human knowledge; expose charlatans who peddle fear and prey on people with health problems and surplus cash; help finding effective treatments/interventions for diseases; a good work-life balance (I would never, ever, ever want some 70+hour a week, every week, with long commute, weekend working, staying away, on-call at 3am job)). Of course I will take whatever stepping-stone job I can get right now that will help down the line when I am more qualified. (I know a master's is required for many stats jobs in this field and I have been encouraged by a top uni here to apply for a 1-year course in maths/stats for semi-numerate graduates to gain access to their MSc Statistics. But I feel I need more of my current undergrad courses to make sure I pass this access intensive crash course. Only 65% or above guarantees an MSc place. And it's all far more expensive than my current undergrad institution.) In this undergraduate I am currently on, either track covers the same maths content up to the second year. I am completing my first 100 of 360 credits (unless I transfer elsewhere for the MSc). Pure track then covers: formal proof, abstract structures, linear algebra, analysis, group theory. Applied track covers: differential equations, linear algebra, vector calculus. I can only pick one. If it is relevant, almost the entire final year is statistics. 1/4 is applied probability. 1/4 is linear modelling. 1/4 is mathematical statistics. The last quarter would be the same on either track - I would pick a course on applied discrete mathematics. Also covered in the second year are: multivariate statistics, time series, Bayesian inference, and medical statistics. I include this content as it is probably relevant regarding my question. You can see all but one stats module is applied. And being 28, in massive student debt, two degrees for an aborted career, living with parents, no car (can't afford driving lessons even), no savings and working in a supermarket at night, stacking shelves, I made a deal with myself: I must study something I enjoy, but it must be a powerful aid to my CV and lead to a fulfilling and financially comfortable career. Not necessarily 6 figures, but certainly halfway there after a decade at least. This all makes me think I should pick applied maths. Statistical jobs in industry are, after all, applied. 
And I need the step from study to work to be as small and swift as possible. But, for what it's worth, I do not like calculus (I know this is in both tracks but it appears there is less in the pure track), and did really enjoy mathematical logic during philosophy, so that makes me think pure. Sorry again for such a long post. I really have no time or money now to waste on fascinating but only useful-in-periphery subjects (I know, 'unreasonable effectiveness of mathematics', 'logical thinking is always useful etc. etc.) but by the same token I do not want to slog through a track I do not enjoy so much - which is how the applied track appears to me - if it is not much more beneficial to my goal of becoming a professional statistician. Thankyou so much in advance.
{"url":"http://www.physicsforums.com/showthread.php?p=4199825","timestamp":"2014-04-21T09:47:34Z","content_type":null,"content_length":"54494","record_id":"<urn:uuid:4bd76251-78c8-4d92-8ae9-3a97cde876c9>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Optimisation problem help September 9th 2012, 03:45 AM #1 Aug 2012 Optimisation problem help
The answer given in the textbook is 16 and 24; however, I'm finding two different values, 15 and 25. Any assistance would be greatly appreciated.
The sum of two positive integers is 40. Find the two integers such that the product of the square of one number and the cube of the other number is a maximum:
P = (x^2)(y^3)
As x + y = 40, x = 40 - y. Substituting this back into the product expression:
P = (40 - y)^2 (y^3)
P = (1600 - 80y + y^2) y^3
P = 1600y^3 - 80y^4 + y^5
dP/dy = 4800y^2 - 320y^3 + 5y^4
When this equals zero, 5y^2(960 - 64y) = 0. Obviously y = 0 is not a solution, so 64y = 960, y = 15.
Is there another method of finding the solution?
Last edited by DonGorgon; September 9th 2012 at 04:59 AM.
Re: Optimisation problem help
Look again at the line where you remove $5y^{2}$ as a factor.
Re: Optimisation problem help
As indicated, you have factored incorrectly, but to answer your question about whether there is another method, one can use Lagrange multipliers. We are given the function $f(x,y)=x^2y^3$ subject to the constraint $g(x,y)=x+y-40=0$, giving the system $2xy^3=\lambda$, $3x^2y^2=\lambda$, which implies $2xy^3=3x^2y^2$, i.e. $xy^2(2y-3x)=0$. The first factor gives us the possible solutions $(x,y)=(0,40),\,(40,0)$, both of which yield $P=0$. The second factor gives $y=\tfrac{3}{2}x$, and substituting into the constraint, we find $(x,y)=(16,24)$.
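A quick numerical check of the corrected working (the proper factorisation is dP/dy = 5y^2(y^2 - 64y + 960) = 0, whose admissible root is y = 24, i.e. x = 16). A throwaway Python sketch that simply scans the integer splits of 40, since the problem asks for positive integers:

    # maximise P = x^2 * y^3 subject to x + y = 40, x and y positive integers
    best = max(((x ** 2) * (40 - x) ** 3, x, 40 - x) for x in range(1, 40))
    print(best)   # (3538944, 16, 24): the maximum occurs at x = 16, y = 24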
{"url":"http://mathhelpforum.com/calculus/203144-optimisation-problem-help.html","timestamp":"2014-04-16T21:14:10Z","content_type":null,"content_length":"36813","record_id":"<urn:uuid:81e54df1-2df5-4ce1-ad1b-f36b305e587c>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
E. Ballico Analytic Subsets of Products of Infinite-Dimensional Projective Spaces Let $V_i$, $1\le i \le s$, be complex topological vector spaces with $V_1$ infinite-dimensional and $Y$ a closed analytic subset of finite codimension of ${\bf {P}}(V_1)\times \dots \times {\bf {P}}(V_s)$. Here we show that $Y$ is algebraic (at least if each $V_i$ is a Banach space) and that any two points of $Y$ may be connected by a chain of $s+3$ lines contained in $Y$.
{"url":"http://www.emis.de/journals/GMJ/vol10/10-4-1.htm","timestamp":"2014-04-16T07:14:28Z","content_type":null,"content_length":"908","record_id":"<urn:uuid:e6401d30-c77f-4f69-be3d-4b1de7d071f2>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00038-ip-10-147-4-33.ec2.internal.warc.gz"}
Im having a massive brain fart at the moment and Hicky and Rayne are on 09-06-2012, 01:08 PM Im having a massive brain fart at the moment and Hicky and Rayne are on A right triangle is formed in the first quadrant by the x- and y-axes and a line through the point (9,5). Write the length L of the hypotenuse as a function of x. I have no clue how to do this at all. Help. 09-06-2012, 01:08 PM hicky is an english major 09-06-2012, 01:09 PM I don't know he's a smart guy. 09-06-2012, 01:09 PM 09-06-2012, 01:10 PM Im imagining the end formula is going to be some sort of distance formula, how I get there and eliminate the Ys is beyond me. 09-06-2012, 01:13 PM I think I have to make some sort of imaginary line, have it connect with imaginary points, and find the distance of the imaginary points in a forumula for distance... 09-06-2012, 01:17 PM i seem to remember this is pretty early trigonometry but on account of being an english major i have forcefully forgot how to maths 09-06-2012, 01:25 PM L=the square root of ((l9xl +5)/x)^2)+(l9xl+5)^2 Am i right? x being the slope Edited! 2ce now so its clear what i ment, i knew what i ment in my head, but wrote it wrong. 09-06-2012, 01:26 PM the triangle is formed by the x axis and y axis. since it's a right triangle, the y intercept and x intercept are equal and the slope of the hypotenuse is -1. use the point slope form of the equation of a line: 09-06-2012, 01:28 PM 09-06-2012, 01:28 PM wait what? just because its a right traingle doesn't mean the middle piece is sloped in any specific way. The x axis leg could extend far longer than the y axis leg, meaning the slope of the line connecting the two(hypotnuse) would be really small making the line really big. 09-06-2012, 01:29 PM fuck im retarded 09-06-2012, 01:29 PM HOLD UP 09-06-2012, 01:29 PM Its not an equilateral right triangle. 09-06-2012, 01:31 PM there is no way to solve this without more information 09-06-2012, 01:32 PM i edited my formula had a brain fart and used the slope i derived the forumla with instead of x, now it should be right. 09-06-2012, 01:32 PM hm im not sure immediately. i thought about constructing a function for y based on its limits but i dont think that'll work. 09-06-2012, 01:32 PM the length can be any number above 392 09-06-2012, 01:34 PM 09-06-2012, 01:34 PM wait a second no it cant abifucwgofnqpegvfqwwd 09-06-2012, 01:34 PM i hate myself right now 09-06-2012, 01:35 PM im 99% sure im right. The legth of it is based on the slope(the only unknown factor( so x) and a point on the line. I simplified a bunch of formulas into one easy formula, i can't believe i'm not correct. Really someone explain how my formula doesn't work for every single slope value ever(slopes have to be negative btw) 09-06-2012, 01:35 PM not only is this wrong but it didnt even answer the question. WHAT THE FUCKING F 09-06-2012, 01:36 PM SHIT WHEN I SAY 9x +5 i mean the slope absolute valued + 5 09-06-2012, 01:37 PM
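For the record, the standard reading of this exercise is: the legs lie along the positive x- and y-axes, the hypotenuse is the segment of a line through (9, 5) cut off by the axes, and x is taken to be the x-intercept of that line (so x > 9). Similar triangles then give the y-intercept as 5x/(x - 9), so L(x) = sqrt(x^2 + (5x/(x - 9))^2). A small Python sketch of that reading (the interpretation is mine; the thread never settles it):

    import math

    def hypotenuse_length(x):
        # length of the hypotenuse formed by the axes and a line through
        # (9, 5) whose x-intercept is x; requires x > 9
        y_intercept = 5.0 * x / (x - 9.0)
        return math.hypot(x, y_intercept)

    print(hypotenuse_length(18.0))   # sqrt(18^2 + 10^2), about 20.6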
{"url":"http://pokedream.com/forums/printthread.php?t=21692&pp=25&page=1","timestamp":"2014-04-25T04:05:24Z","content_type":null,"content_length":"17428","record_id":"<urn:uuid:2131b916-33ed-401b-bb64-4f1680b198ae>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00088-ip-10-147-4-33.ec2.internal.warc.gz"}
Formula for lift: Question - PPRuNe Forums Originally Posted by It does not matter any way as it is incorrect theory, (and it was only a theory after all), before you start typing have a look at NASA's website and type "incorrect theory" in the search engine. Only 2% of lift is generated by the bernouli theory and the rest is Coanda effect. Don't argue with me argue with NASA, (you'll lose) Well, a quick search of the nasa site under incorrect theory doesn't show anything on the ongoing coanda/bernoulli/deflected airstream debate (well at least on the first page of results). Nevertheless, it amazes me that people continue with the misconception that the three explanations are different. They are all different ways of looking at the same effect. It goes as follows if you decide to start with the momentum change associated with deflecting the airstream. You can, of course, start from coanda or bernoulli and get the same conclusions. It is the same physical effect. 1) In order to produce lift we need to produce a change in momentum (F = dp/dt, where p is momentum). This is achieved by deflecting the airstream passing over the airfoil. 2) In order to deflect an airstream, you need to have a pressure gradient across the airstream. To deflect the airstream downwards the pressure above the airstream needs to be higher than the pressure below. 3) For the air going over the top of the airfoil, the pressure above it is just the free stream pressure, therefore in order for there to be lower pressure below this stream, the pressure at the top of the airfoil needs to be lower than the free stream pressure. 4) Conversely, for the air going underneath the airfoil, the pressure below it is just the free stream pressure, so the pressure just below the airfoil needs to be higher than the free stream 5) The high pressure below the airfoil and the low pressure above it results in the airfoil being pushed up. This is the mechanism by which the force due to the change in momentum of the airstream is conveyed to the airfoil. 6) Because the pressure above the airfoil is lower, the air velocity needs to be higher to comply with bernoulli's law (P + 1/2*rho*V^2 = constant). Conversely, because the pressure below the airfoil is higher, the velocity needs to be lower. 7) Circulation is defined as the integral of air velocity around a closed path encircling the airfoil. Basically, you can think of the faster upper surface velocity and the slower lower surface velocity as being comprised of an equal velocity above and below superimposed with a weaker circulating flow that goes from the back of airfoil to the front along the lower surface, and then comes back along the top surface. The flow circulating flow along the top adds to the base velocity leading to a higher velocity over the top of the airfoil. The circulating flow is subtracted from the flow underneath the airfoil, resulting in a lower velocity under the airfoil. Note very carefully that the circulation is really just a mathematical tool to describe the difference in speeds above and below the airfoil. 8) The coanda effect basically states that the lift of the airfoil is proportional to the circulation times the free stream velocity. But remember, we only have a circulation because there is a difference in speeds above and below the airfoil, and we only have the difference of speeds because bernoulli requires it to generate the difference in pressures, and we only have the difference in pressures because we are deflecting the airstream. 
So, you can't claim that only 2% of the lift is due to the "coanda effect", it is all due to the coanda effect, but then it is also all due to the deflection of the airstream, etc... They are all part of one and the same physical phenomenon. The theory of circulation (coanda effect) is important mainly from a theoretical point of view, it provides a mechanism for calculating the strengths of downwashes and trailing vorticies. All very useful when Nick Lappos and co want to work out how to make a more efficient, quieter rotor.
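Point 8 in the list above is essentially the Kutta-Joukowski relation, lift per unit span L' = ρVΓ, and point 6 is just Bernoulli applied along a streamline. Both are one-liners to evaluate; a small Python sketch with illustrative numbers (nothing here is specific to any particular airfoil or rotor):

    RHO = 1.225   # sea-level air density, kg/m^3

    def lift_per_span(v_freestream, circulation):
        # Kutta-Joukowski: lift per unit span = rho * V * Gamma
        return RHO * v_freestream * circulation

    def pressure_deficit(v_freestream, v_local):
        # Bernoulli along a streamline: how much lower the local static
        # pressure is where the flow moves faster than the free stream
        return 0.5 * RHO * (v_local ** 2 - v_freestream ** 2)

    print(lift_per_span(50.0, 30.0))     # ~1837 N/m for V = 50 m/s, Gamma = 30 m^2/s
    print(pressure_deficit(50.0, 60.0))  # ~674 Pa lower pressure over the faster side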
{"url":"http://www.pprune.org/rotorheads/256783-formula-lift-question.html","timestamp":"2014-04-21T15:39:59Z","content_type":null,"content_length":"103159","record_id":"<urn:uuid:0a96103f-d261-4516-b1f4-376d6def1bf1>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00303-ip-10-147-4-33.ec2.internal.warc.gz"}
st: Re: spaces between variables [Date Prev][Date Next][Thread Prev][Thread Next][Date index][Thread index] st: Re: spaces between variables From baum <baum@bc.edu> To statalist@hsphsun2.harvard.edu Subject st: Re: spaces between variables Date Thu, 15 Aug 2002 08:39:33 -0400 --On Thursday, August 15, 2002 2:33 -0400 Nick Winter wrote: With the file command, you can roll your own version of this fairly version 6 file open myfile using <filename>, text write replace local N=_N forval i=1/`N' { local line : di _col(1) varname1[`i'] _col(25) varname2[`i'] ... file write myfile `"`line'"' _n file close myfile version 7 I should note that the original request was for a single space between variables -- then the fifth line would look like this: local line : di varname1[`i'] " " varname2[`i'] " " varname3[`i'] ... And in that case it would be quite straightforward to build up the macro to be written with a forvalues loop over the variables to be written, allowing such a routine to accept a varlist of indeterminate length and generate the desired single-space-delimited output. However, when using space delimiting, one must always worry about string variables with embedded spaces. If there is no risk of that, this would work quite nicely. Here is a quick and dirty hack that nevertheless seems to work (well, watch out for string variables): . adotype ssconcat * ssconcat: write contents of varlist to an external file with space delimiters * cfb 2815 program define ssconcat,rclass version 7.0 syntax varlist using/, [replace] file open myfile using `using', text write `replace' local N=_N forval i=1/`N' { local out foreach v of local varlist { local elt = `v'[`i'] local out "`out'`elt' " file write myfile `"`out'"' _n file close myfile . ssconcat price mpg headroom using testss,replace . type testss 4099 22 2.5 4816 20 4.5 * For searches and help try: * http://www.stata.com/support/faqs/res/findit.html * http://www.stata.com/support/statalist/faq * http://www.ats.ucla.edu/stat/stata/
{"url":"http://www.stata.com/statalist/archive/2002-08/msg00266.html","timestamp":"2014-04-16T16:10:38Z","content_type":null,"content_length":"6825","record_id":"<urn:uuid:1b6acbf5-db4a-42fd-acf8-44e1521930f8>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00374-ip-10-147-4-33.ec2.internal.warc.gz"}
The University of Sydney ELEC5732: Foundations of Electricity Networks (2014 - Semester 1) Unit: ELEC5732: Foundations of Electricity Networks (6 CP) Mode: Normal-Day On Offer: Yes Level: Postgraduate Faculty/School: School of Electrical and Information Engineering Unit Coordinator/s: Dr Verbic, Gregor Session options: Semester 1 Versions for this Unit: Site(s) for this Unit: https://elearning.sydney.edu.au Campus: Camperdown/Darlington Pre-Requisites: None. Prohibitions: ELEC3203. This unit of study provides an introduction to electrical power engineering and lays the groundwork for more specialised units. It assumes a competence in first year mathematics (in particular, the ability to work with complex numbers), in elementary circuit theory and in elements of introductory physics. A revision will be carried out of the use of phasors in Brief Handbook steady state ac circuit analysis and of power factor and complex power. The unit comprises an overview of modern electric power system with particular emphasis on generation and Description: transmission. The following specific topics are covered. The use of three phase systems and their analysis under balanced conditions. Transmission lines: calculation of parameters, modelling, analysis. Transformers: construction, equivalent circuits. Generators: construction, modelling for steady state operation. The use of per unit system. The analysis of systems with a number of voltage levels. The load flow problem: bus and impedance matrices, solution methods. Power system transient stability. The control of active and reactive power. Electricity markets, market structures and economic dispatch. Types of electricity grids, radial, mesh, networks. Distribution systems and smart grids. Assumed This unit of study assumes a competence in first year mathematics (in particular, the ability to work with complex numbers), in elementary circuit theory and in basic Knowledge: electromagnetics. Lecturer/s: Dr Verbic, Gregor Timetable: ELEC5732 Timetable # Activity Name Hours per Week Sessions per Week Weeks per Semester 1 Lecture 2.00 1 13 Time Commitment: 2 Tutorial 2.00 1 13 3 Laboratory 3.00 1 13 4 Independent Study 3.00 13 T&L Activities: Independent Study: Independent Study Attributes listed here represent the key course goals (see Course Map tab) designated for this unit. The list below describes how these attributes are developed through practice in the unit. See Learning Outcomes and Assessment tabs for details of how these attributes are assessed. Attribute Development Method Attribute Developed The solution of specific technical problems is covered in the tutorials. Design (Level 2) Per unit systems and the use of laod flows are specific to the discipline of high voltage power engineering. Measurment techniques in a power laboratory Engineering/IT Specialisation (Level 4) Basic electromagnetism and analysis of steady state ac circuits. Maths/Science Methods and Tools (Level 4) Written communication in the nature of report. Communication (Level 3) Some understanding is acquired of the role of power engineers working in high voltage systems. Professional Conduct (Level 1) Required in carrying out experimental work and preparing final report. Project Management and Team Skills (Level For explanation of attributes and levels see Engineering & IT Graduate Outcomes Table. Learning outcomes are the key abilities and knowledge that will be assessed in this unit. They are listed according to the course goal supported by each. 
See Assessment Tab for details how each outcome is assessed. Design (Level 2) 1. Ability to solve problems specific to the operation of engineering power systems by undertaking information investigation and selection and adopting a system based approach. Engineering/IT Specialisation (Level 4) 2. Demonstrable understanding of per unit systems to the extent of the course content. 3. Ability to perform analysis using per unit systems. 4. Ability to demonstrate an understanding of specific tools such as load flow software and the information provided by such tools to the extent of exercises and projects set throughout the course. 5. Proficiency in examining the relationship between load flow software and other computer based software used in modern power systems, by looking into the concepts, principles and techniques Maths/Science Methods and Tools (Level 4) 6. Ability to demonstrate applicability of fundamental scientific concepts and procedures to the specific engineering models developed in the unit. Communication (Level 3) 7. Ability to write a report to communicate complex project specific information concisely and accurately and to the degree of specificity required by the engineering project at hand. Project Management and Team Skills (Level 3) 8. Ability to work in a group, manage or be managed by a leader in roles that optimise the contribution of all members, while showing initiative and receptiveness so as to jointly achieve engineering project goals in a laboratory environment. # Name Group Weight Due Week Outcomes 1 Lab Report No 20.00 Multiple Weeks 1, 2, 3, 4, 5, 6, 7, 8, Assessment 2 Mid‐semester exam #1 No 10.00 Week 5 1, 2, 3, 4, 6, Methods: 3 Mid‐semester exam #2 Yes 10.00 Week 9 1, 2, 3, 4, 6, 4 Design project No 20.00 Week 13 1, 2, 3, 4, 5, 6, 7, 5 Final Exam No 40.00 Exam Period 1, 2, 3, 4, 5, 6, Lab Report: Laboratory practice and report Assessment Mid-Sem Exam: One hour closed book Design project: Power system planning excersise using industry grade power flow software Final Exam: Two hour closed book Grade Type Description Standards Final grades in this unit are awarded at levels of HD (High Distinction), D (Distinction), CR (Credit), P (Pass) and F (Fail) as defined by University of Sydney Assessment Grading: Based Policy. Details of the Assessment Policy are available on the Policies website at http://sydney.edu.au/policies . Standards for grades in individual assessment tasks and the Assessment summative method for obtaining a final mark in the unit will be set out in a marking guide supplied by the unit coordinator. Policies & See the policies page of the faculty website at http://sydney.edu.au/engineering/student-policies/ for information regarding university policies and local provisions and procedures Procedures: within the Faculty of Engineering and Information Technologies. Prescribed Text/s: Note: Students are expected to have a personal copy of all books listed. Recommended Reference/s: Note: References are provided for guidance purposes only. Students are advised to consult these books in the university library. Purchase is not required. Online Course Content: https://elearning.sydney.edu.au Note that the "Weeks" referred to in this Schedule are those of the official university semester calendar https://web.timetable.usyd.edu.au/calendar.jsp Week Description Week 1 Overview of unit: syllabus, assessment, assumed knowledge, learning outcomes, relationship to other units of study. Relevance of text and web pages. Notation used. 
Brief history and overview of electric power systems. Generation, transmission and distribution. Week 2 Revision of ac circuit analysis and complex power. Analysis of three phase circuits under balanced conditions. Per phase equivalent circuits. Week 3 Construction of overhead lines and cables. Calculation of inductance and capacitance. Bundling of conductors. Geometric mean distance and geometric mean radius. Week 4 Modelling of transmission lines with distributed inductance and capacitance. Short, medium length and long line models. A,B,C,D parameters. Transmission capability of lines. Surge impedance loading. Line compensation. Review of transformers. Equivalent circuit of a single phase transformer. Three phase transformer connections. Per phase equivalent circuits for three phase transformers. Per unit systems Week 5 for single phase and three phase systems. Change of base. Assessment Due: Mid‐semester exam #1 Week 6 Generation. Construction of synchronous generators; turbo- and hydro-generators. Models of generators for steady state operation. Week 7 The formulation of the load flow problem. The bus admittance matrix. Solution of non-linear algebraic equations using Newton Raphson method. Setting up the load flow equations. Week 8 Modelling large systems containing generators, lines, transformers and loads. Calculations for simple networks. Week 9 Power system transient stability. The swing equation. The equal area criterion. Simplified synchronous machine model. A two-axis synchronous machine model. Assessment Due: Mid‐semester exam #2 Week 10 Power system control. Voltage and Reactive Power Control. Turbine governor control Load frequency control. Week 11 Electricity markets. Market Structures. Economic dispatch. Optimal Power Flow. Ancillary services. Week 12 Types of electricity grids, radial, mesh, networks. Distribution systems. Smart grids. Week 13 Invited industry lecture Assessment Due: Design project Exam Assessment Due: Final Exam Course Relations The following is a list of courses which have added this Unit to their structure. Course Year(s) Offered Master of Professional Engineering (Power) 2010, 2011, 2012, 2013, 2014 Graduate Certificate in Engineering 2011 Graduate Diploma in Engineering 2011 Master of Professional Engineering (Electrical) 2010, 2011, 2012, 2013, 2014 Course Goals This unit contributes to the achievement of the following course goals: Attribute Practiced Assessed Design (Level 2) Yes 16.67% Engineering/IT Specialisation (Level 4) Yes 56.67% Maths/Science Methods and Tools (Level 4) Yes 16.67% Communication (Level 3) Yes 7% Professional Conduct (Level 1) Yes 0% Project Management and Team Skills (Level 3) Yes 3% These goals are selected from Engineering & IT Graduate Outcomes Table which defines overall goals for courses where this unit is primarily offered. See Engineering & IT Graduate Outcomes Table for details of the attributes and levels to be developed in the course as a whole. Percentage figures alongside each course goal provide a rough indication of their relative weighting in assessment for this unit. Note that not all goals are necessarily part of assessment. Some may be more about practice activity. See Learning outcomes for details of what is assessed in relation to each goal and Assessment for details of how the outcome is assessed. See Attributes for details of practice provided for each goal.
{"url":"http://cusp.sydney.edu.au/students/view-unit-page/alpha/ELEC5732","timestamp":"2014-04-20T00:37:42Z","content_type":null,"content_length":"94796","record_id":"<urn:uuid:4551ef0c-140f-45cb-88e0-92c6d0708b10>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00574-ip-10-147-4-33.ec2.internal.warc.gz"}
domain of composite function November 21st 2009, 08:28 PM #1 Oct 2006 domain of composite function If the function fg (f composed with g) exists, then domain of fg = domain of g. Is the result above true? Not necessarily, right? The domain of fg will just be a subset of the domain of g, am I correct? No, it is not necessarily true. The domain of the composite will be those elements x in the domain of g for which g(x) is in the domain of f. Good luck! November 22nd 2009, 08:35 AM #2
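A concrete instance of the rule in the reply, dom(f o g) = { x in dom(g) : g(x) in dom(f) }: take f(x) = sqrt(x) and g(x) = x - 5. Then dom(g) is all of R, but dom(f o g) = [5, infinity), a proper subset, which is exactly the situation the original poster suspected. A throwaway Python check (the particular f and g are my own choice):

    import math

    def g(x):
        return x - 5.0

    def f_of_g(x):
        # defined only where g(x) lands inside dom(f) = [0, infinity)
        return math.sqrt(g(x))

    print(f_of_g(9.0))    # fine: g(9) = 4 is in dom(f)
    # f_of_g(1.0) would raise ValueError, since g(1) = -4 is outside dom(f)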
{"url":"http://mathhelpforum.com/pre-calculus/116017-domain-composite-function.html","timestamp":"2014-04-18T00:27:44Z","content_type":null,"content_length":"32269","record_id":"<urn:uuid:07749c48-306b-41a3-b499-eae458ffd16a>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00129-ip-10-147-4-33.ec2.internal.warc.gz"}
Min, Max under negation and an AABB trick

The two obvious identities:

    min(a,b) = -max(-a, -b)
    max(a,b) = -min(-a, -b)

can be used to rewrite algorithms using mixed min/max expressions in terms of just min (or just max). This can sometimes be useful when working with data that is intended to be processed with SIMD instructions, because it can be used to make the dataflow more regular. Let me give you a simple example to show what I mean: computing the axis-aligned bounding box (or AABB for short) of the union of several 2D AABBs.

AABB of the union of N 2D AABBs

A common representation for a 2D AABB just stores the extrema in both X and Y:

    union AlignedBox2 {
        struct {
            float min_x, min_y;
            float max_x, max_y;
        };
        Vec4 simd;
    };

The AABB for the union of N such AABBs can then be computed by computing the min/max over all bounds in the array, as follows:

    AlignedBox2 union_bounds(const AlignedBox2 *boxes, int N) // N >= 1
    {
        AlignedBox2 r = boxes[0];
        for (int i=1; i < N; i++) {
            r.min_x = min(r.min_x, boxes[i].min_x);
            r.min_y = min(r.min_y, boxes[i].min_y);
            r.max_x = max(r.max_x, boxes[i].max_x);
            r.max_y = max(r.max_y, boxes[i].max_y);
        }
        return r;
    }

A typical 4-wide SIMD implementation can apply the operations to multiple fields at the same time, but ends up wasting half the SIMD lanes on fields it doesn't care about, and does some extra work at the end to merge the results back together:

    AlignedBox2 union_bounds_simd(const AlignedBox2 *boxes, int N)
    {
        Vec4 mins = boxes[0].simd;
        Vec4 maxs = boxes[0].simd;
        for (int i=1; i < N; i++) {
            mins = min(mins, boxes[i].simd);
            maxs = max(maxs, boxes[i].simd);
        }
        AlignedBox2 r;
        r.min_x = mins[0]; // or equivalent shuffle...
        r.min_y = mins[1];
        r.max_x = maxs[2];
        r.max_y = maxs[3];
        return r;
    }

But the identities above suggest that it might help to use a different (and admittedly somewhat weird) representation for 2D boxes instead, where we store the negative of max_x and max_y:

    union AlignedBox2b {
        struct {
            float min_x, min_y;
            float neg_max_x, neg_max_y;
        };
        Vec4 simd;
    };

If we write the computation of the union bounding box of two AABBs A and B in this form, we get (the interesting part only):

    r.min_x = min(a.min_x, b.min_x);
    r.min_y = min(a.min_y, b.min_y);
    r.neg_max_x = min(a.neg_max_x, b.neg_max_x);
    r.neg_max_y = min(a.neg_max_y, b.neg_max_y);

where the last two lines are just the result of applying the identity above to the original computation of max_x / max_y (with all the sign flips thrown in). Which means the SIMD version in turn becomes much easier (and doesn't waste any work anymore):

    AlignedBox2b union_bounds_simd(const AlignedBox2b *boxes, int N)
    {
        AlignedBox2b r = boxes[0];
        for (int i=1; i < N; i++)
            r.simd = min(r.simd, boxes[i].simd);
        return r;
    }

And the same approach works for intersection too – in fact, all you need to do to get a box intersection function is to turn the min into a max. Now, this is just a toy example, but it shows the point nicely – sometimes a little sign flip can go a long way. In particular, this trick can come in handy when dealing with 3D AABBs and the like, because groups of 3 don't fit nicely in typical SIMD vector sizes, and you don't always have another float-sized value to sandwich in between; even if you don't store the negative of the max, it's usually much easier to sign-flip individual lanes than it is to rearrange them.

1. What is the representation of Vec4? Just four ints, or can it be any primitive type?
□ Any primitive type will do, as long as there's corresponding min and max operations.
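If you want to play with the representation without touching SIMD intrinsics, the trick is easy to mimic with plain 4-wide arrays. The sketch below (Python/NumPy, my own toy rather than anything from the post) stores a box as [min_x, min_y, -max_x, -max_y], so union is a single elementwise min and intersection a single elementwise max:

    import numpy as np

    def make_box(min_x, min_y, max_x, max_y):
        # store the negated maxima, as described above
        return np.array([min_x, min_y, -max_x, -max_y], dtype=np.float32)

    def union(a, b):
        return np.minimum(a, b)     # one elementwise min covers all four fields

    def intersection(a, b):
        return np.maximum(a, b)     # flip min to max and you get intersection

    def as_min_max(box):
        return float(box[0]), float(box[1]), float(-box[2]), float(-box[3])

    a = make_box(0.0, 0.0, 2.0, 2.0)
    b = make_box(1.0, -1.0, 3.0, 1.0)
    print(as_min_max(union(a, b)))          # (0.0, -1.0, 3.0, 2.0)
    print(as_min_max(intersection(a, b)))   # (1.0, 0.0, 2.0, 1.0)

Note that the intersection result is only meaningful when the boxes actually overlap; an empty overlap shows up as a "box" whose min exceeds its max, which is easy to test for.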
{"url":"http://fgiesen.wordpress.com/2013/01/14/min-max-under-negation-and-an-aabb-trick/","timestamp":"2014-04-21T02:25:34Z","content_type":null,"content_length":"56185","record_id":"<urn:uuid:3468d369-2f74-4d6b-9113-29cc612f462d>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00204-ip-10-147-4-33.ec2.internal.warc.gz"}
Divisibility of central binomial coefficient July 10th 2010, 01:34 PM Divisibility of central binomial coefficient I have a question! Is it true that the central binomial coefficient $\binom{2n}{n}$ is divisible by a $p$ prime number if $\left\{ \frac{n}{p}\right\} \geq\frac{1}{2}$, where $\left\{ \frac{n}{p} \right\}$ means fractional part? July 10th 2010, 02:03 PM The number of times $p$ appears in the numerator is $\left\lfloor\frac{2n}{p}\right\rfloor+\left\lfloor \frac{2n}{p^2}\right\rfloor+\left\lfloor\frac{2n}{ p^3}\right\rfloor+\ldots$ The number of times $p$ appears in the denominator is $2\left(\left\lfloor\frac{n}{p}\right\rfloor+\left\ lfloor\frac{n}{p^2}\right\rfloor+\left\lfloor\frac {n}{p^3}\right\rfloor+\ldots\right)$ See if you can use this to get your answer. July 10th 2010, 02:11 PM The number of times $p$ appears in the numerator is $\left\lfloor\frac{2n}{p}\right\rfloor+\left\lfloor \frac{2n}{p^2}\right\rfloor+\left\lfloor\frac{2n}{ p^3}\right\rfloor+\ldots$ The number of times $p$ appears in the denominator is $\left(\left\lfloor\frac{n}{p}\right\rfloor+\left\l floor\frac{n}{p^2}\right\rfloor+\left\lfloor\frac{ n}{p^3}\right\rfloor+\ldots\right)^2$ See if you can use this to get your answer. July 10th 2010, 02:58 PM Thanks! :) July 10th 2010, 03:03 PM Also sprach Zarathustra I think that OP question can also be proved by using: Bertrand's postulate - Wikipedia, the free encyclopedia July 10th 2010, 04:09 PM Here is what I've tried. $n=pb+r$, with $p/2\leq r<p$ (becuase $r/p$ is the fractional part). Then $\left \lfloor \frac{n}{p} \right \rfloor=b$ and $\left \lfloor \frac{2n}{p} \right \rfloor=\ left \lfloor 2b+\frac{2r}{p} \right \rfloor=2b+\left \lfloor \frac{2r}{p} \right \rfloor\geq 2b+1$ (since $2r\geq p$). So, $2\left \lfloor \frac{n}{p} \right \rfloor<\left \lfloor \frac{2n}{p} \ right \rfloor$. (Remember this). In general, $\left \lfloor a+b \right \rfloor\geq\left \lfloor a \right \rfloor+\left \lfloor b \right \rfloor$, so $\left \lfloor \frac{2n}{p^k} \right \rfloor=\left \lfloor \frac{n}{p^k}+\frac {n}{p^k} \right \rfloor\geq\left \lfloor \frac{n}{p^k} \right \rfloor+\left \lfloor \frac{n}{p^k} \right \rfloor=2\left \lfloor \frac{n}{p^k} \right \rfloor$. Therefore, $\sum_{k=1}^{\infty}\left \lfloor \tfrac{2n}{p^k} \right \rfloor>\sum_{k=1}^{\infty}2\left \lfloor \tfrac{n}{p^k} \right \rfloor$, since when $k=1$ we have a strict inequality. Now, due to Legendre, the exponent of the largest power of $p$ that divides $\frac{(2n)!}{n!n!}$ is $\sum_{k=1}^{\infty}\left \lfloor \tfrac{2n}{p^k} \right \rfloor-\sum_{k=1}^{\infty}2\left \ lfloor \tfrac{n}{p^k} \right \rfloor<br />$, and this expresstion is positive. This implies that $p$ divides $\binom{2n}{n}$. I hope this helps... July 10th 2010, 04:22 PM Here is what I've tried. $n=pb+r$, with $p/2\leq r<p$ (becuase $r/p$ is the fractional part). Then $\left \lfloor \frac{n}{p} \right \rfloor=b$ and $\left \lfloor \frac{2n}{p} \right \rfloor=\ left \lfloor 2b+\frac{2r}{p} \right \rfloor=2b+\left \lfloor \frac{2r}{p} \right \rfloor\geq 2b+1$ (since $2r\geq p$). So, $\left \lfloor \frac{n}{p} \right \rfloor<\left \lfloor \frac{2n}{p} \ right \rfloor$. (Remember this). In general, $\left \lfloor a+b \right \rfloor\geq\left \lfloor a \right \rfloor+\left \lfloor b \right \rfloor$, so $\left \lfloor \frac{2n}{p^k} \right \rfloor=\left \lfloor \frac{n}{p^k}+\frac {n}{p^k} \right \rfloor\geq\left \lfloor \frac{n}{p^k} \right \rfloor+\left \lfloor \frac{n}{p^k} \right \rfloor=2\left \lfloor \frac{n}{p^k} \right \rfloor$. 
Therefore, $\sum_{k=1}^{\infty}\left \lfloor \tfrac{2n}{p^k} \right \rfloor>\sum_{k=1}^{\infty}2\left \lfloor \tfrac{n}{p^k} \right \rfloor$, since when $k=1$ we have a strict inequality. Two comments: You've shown $\left \lfloor \frac{n}{p} \right \rfloor<\left \lfloor \frac{2n}{p} \right \rfloor$, but this doesn't imply $2\left \lfloor \frac{n}{p} \right \rfloor<\left \lfloor \frac{2n}{p} \ right \rfloor$. Also it's possible to have $\left\{\frac np\right\}=0$ and still have $p\mid \binom{2n}{n}$. Ex: $2\mid\binom{28}{14}$. July 10th 2010, 05:00 PM Reply to comments. Two comments: You've shown $\left \lfloor \frac{n}{p} \right \rfloor<\left \lfloor \frac{2n}{p} \right \rfloor$, but this doesn't imply $2\left \lfloor \frac{n}{p} \right \rfloor<\left \lfloor \frac{2n}{p} \ right \rfloor$. Also it's possible to have $\left\{\frac np\right\}=0$ and still have $p\mid \binom{2n}{n}$. Ex: $2\mid\binom{28}{14}$. chiph588@, thanks for your comments! Your first comment: I meant $2\left \lfloor \frac{n}{p} \right \rfloor<\left \lfloor \frac{2n}{p} \right \rfloor$ (I've already edited). This is because $2\left \lfloor \frac{n}{p} \right \rfloor =2b$ and $\left \lfloor \frac{2n}{p} \right \rfloor \geq 2b+1$. Your second comment: I agree with your example, however, the OP says that $\left\{ \frac{n}{p}\right\} \geq\frac{1}{2}<br />$ so I went from there.... July 10th 2010, 05:28 PM I was hoping with the addition of $\left\{\frac np\right\}=0$ and a constraint on the size of $p$, this was an if and only if statement, i.e. "if $0<\left\{\frac np\right\}<\frac12$, then $pot|\ This turns out to be false as $3\mid\binom{14}{7}$ and $\left\{\frac73\right\}=\frac13$. July 10th 2010, 06:39 PM $\lfloor{ a + b \rfloor} = \lfloor{ a \rfloor} + \lfloor{ b \rfloor} + \lfloor{ \{ a \} + \{ b \} \rfloor}$ Then $\lfloor{ \frac{2n}{p^k} \rfloor} = 2 \lfloor{ \frac{n}{p^k} \rfloor} + \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor}$ or $\lfloor{ \frac{2n}{p^k} \rfloor} - 2 \lfloor{ \frac{n}{p^k} \rfloor} = \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor}$ so the value of the power of the prime still be contained in the central coefficient is : $\displaystyle{ \sum_{k\in \mathb{N} }} \lfloor{ \frac{2n}{p^k} \rfloor} - 2 \lfloor{ \frac{n}{p^k} \rfloor} = \displaystyle{ \sum_{k\in \mathb{N} }} \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor$ . We need to ensure that there exists $k$ such that $2 \{ \frac{n}{p^k} \} \geq 1$ or equivalently $\{ \frac{n}{p^k} \} \geq \frac{1}{2}$ . I believe $k$ is not necessarily to be $1$ . For example , $n = 16 ~~ ,~~ p = 5$ we have $\frac{16}{5} = 3.2$ and $\frac{16}{25} = 0.64$ but we still have $5 | \binom{32}{16}$ July 10th 2010, 06:56 PM $\lfloor{ a + b \rfloor} = \lfloor{ a \rfloor} + \lfloor{ b \rfloor} + \lfloor{ \{ a \} + \{ b \} \rfloor}$ Then $\lfloor{ \frac{2n}{p^k} \rfloor} = 2 \lfloor{ \frac{n}{p^k} \rfloor} + \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor}$ or $\lfloor{ \frac{2n}{p^k} \rfloor} - 2 \lfloor{ \frac{n}{p^k} \rfloor} = \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor}$ so the value of the power of the prime still be contained in the central coefficient is : $\displaystyle{ \sum_{k\in \mathb{N} }} \lfloor{ \frac{2n}{p^k} \rfloor} - 2 \lfloor{ \frac{n}{p^k} \rfloor} = \displaystyle{ \sum_{k\in \mathb{N} }} \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor$ . We need to ensure that there exists $k$ such that $2 \{ \frac{n}{p^k} \} \geq 1$ or equivalently $\{ \frac{n}{p^k} \} \geq \frac{1}{2}$ . I believe $k$ is not necessarily to be $1$ . 
For example , $n = 16 ~~ ,~~ p = 5$ we have $\frac{16}{5} = 3.2$ and $\frac{16}{25} = 0.64$ but we still have $5 | \binom{32}{16}$ Nice argument...! July 10th 2010, 07:01 PM $\lfloor{ a + b \rfloor} = \lfloor{ a \rfloor} + \lfloor{ b \rfloor} + \lfloor{ \{ a \} + \{ b \} \rfloor}$ Then $\lfloor{ \frac{2n}{p^k} \rfloor} = 2 \lfloor{ \frac{n}{p^k} \rfloor} + \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor}$ or $\lfloor{ \frac{2n}{p^k} \rfloor} - 2 \lfloor{ \frac{n}{p^k} \rfloor} = \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor}$ so the value of the power of the prime still be contained in the central coefficient is : $\displaystyle{ \sum_{k\in \mathb{N} }} \lfloor{ \frac{2n}{p^k} \rfloor} - 2 \lfloor{ \frac{n}{p^k} \rfloor} = \displaystyle{ \sum_{k\in \mathb{N} }} \lfloor{ 2 \{ \frac{n}{p^k} \} \rfloor$ . We need to ensure that there exists $k$ such that $2 \{ \frac{n}{p^k} \} \geq 1$ or equivalently $\{ \frac{n}{p^k} \} \geq \frac{1}{2}$ . I believe $k$ is not necessarily to be $1$ . For example , $n = 16 ~~ ,~~ p = 5$ we have $\frac{16}{5} = 3.2$ and $\frac{16}{25} = 0.64$ but we still have $5 | \binom{32}{16}$ We can assume $k=1$ because it's in the hypothesis (I assume you're talking about the OP's original question). July 10th 2010, 07:13 PM Yes , so the if-then statement is correct , by the way i am not sure if this can help in this situation : The max. power of a given prime $p$ denoted by $a$ such that $p^a | \binom{2n}{n}$ can be obtained by this formula ( without a formal proof , could be wrong ) : $\frac{ 2S(n) - S(2n) }{p-1}$ where $S(n)$ is the sum of the digits of $n$ in the representation of base $p$ . For example , $n = 15 ~,~ p = 2$ we have $15 = 1111_{(2)} ~,~ 30 = 11110_{(2)}$ so $2S(15) - S(30) = 2(1+1+1+1) - (1+1+1+1) = 4$ so $a = 4/1 = 4$ which is true because $\binom{30}{15} = 2^4 \times 9694845$ . July 11th 2010, 04:31 AM My proof: Let $p$ a prime number, so the highest power of $p$ which divides $n!$ is $\sum_{k=1}^{\infty}\left\lfloor \frac{n}{p^{k}}\right\rfloor$, so the highest power of $p$ which divides the central binomial coefficient is $\sum_{k=1}^{\infty}\left(\left\lfloor \frac{2n}{p^{k}}\right\rfloor -2\left\lfloor \frac{n}{p^{k}}\right\rfloor \right)$. And that is completely trivial $\left\lfloor 2x\ right\rfloor -2\left\lfloor x\right\rfloor$ will be 1 for all real number if $\left\{ x\right\} \geq\frac{1}{2}$, and we know that there are only two values possible: 0 or 1, so the summation is nonnegative. Thus, it is anough if $\left\lfloor \frac{2n}{p^{k}}\right\rfloor -2\left\lfloor \frac{n}{p^{k}}\right\rfloor =1$ happens at least once in the summation, and the initial condition shows that, because $\left\{ \frac{n}{p}\right\} \geq\frac{1}{2}$. July 11th 2010, 01:54 PM My proof: Let $p$ a prime number, so the highest power of $p$ which divides $n!$ is $\sum_{k=1}^{\infty}\left\lfloor \frac{n}{p^{k}}\right\rfloor$, so the highest power of $p$ which divides the central binomial coefficient is $\sum_{k=1}^{\infty}\left(\left\lfloor \frac{2n}{p^{k}}\right\rfloor -2\left\lfloor \frac{n}{p^{k}}\right\rfloor \right)$. And that is completely trivial $\left\lfloor 2x\ right\rfloor -2\left\lfloor x\right\rfloor$ will be 1 for all real number if $\left\{ x\right\} \geq\frac{1}{2}$, and we know that there are only two values possible: 0 or 1, so the summation is nonnegative. 
Thus, it is anough if $\left\lfloor \frac{2n}{p^{k}}\right\rfloor -2\left\lfloor \frac{n}{p^{k}}\right\rfloor =1$ happens at least once in the summation, and the initial condition shows that, because $\left\{ \frac{n}{p}\right\} \geq\frac{1}{2}$. Just a minor remark. $\sum_{k=1}^{\infty}\left\lfloor \frac{n}{p^{k}}\right\rfloor$ is the exponent of the highest power of $p$ that divides $n!$.
{"url":"http://mathhelpforum.com/number-theory/150574-divisibility-central-binomial-coefficient-print.html","timestamp":"2014-04-19T07:55:54Z","content_type":null,"content_length":"55573","record_id":"<urn:uuid:bd509e13-f157-4141-8698-94b9f2cee7d0>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00360-ip-10-147-4-33.ec2.internal.warc.gz"}
Kalman filtering application February 20th 2009, 01:21 AM #1 Junior Member Feb 2009 Kalman filtering application Dear all I am doing a project which is about energy comsumption estimation. First, I need to read all parameters available in different time instant, i.e. x1(ti), x2(ti), x3(ti),... x6(ti) etc. ti means time at the ith interval. Suppose x6 represents power consumption at that instant while i believes that x6 is the dependent variable while x1, x2, .... x5 are all independent variables. Then, i can formulate a linear equation, like x6 = a x1 + b x2 +c x3 + d x4 + e x5. Since i can get data for a series of time, say i = 1 to 100. Then, i can have 100 linear equations. Kalam filter is used to estimate a, b, c, d and e. Once their values are know, i can preduct x6(t101) based on x1(t101), x2(t101), ....x5(t101). Then, i can compare such x6 at t101 with the real x6 from the simulator. Since it is a linear approximation, a, b, c, d and e will keep on changing. So, i need to write a program to keep track of these five constants and do predict. Then plot the graph of predicted x6 with the real x6 to finish my work. Since i am new to Kalman filter, i have no idea how can i start my job with the filter. Can you guys give me some guidance how to start? I have read many materials about kalman filter, but i don't know which aspects of kalman filter is applicable to my project..... Thank you Dear all I am doing a project which is about energy comsumption estimation. First, I need to read all parameters available in different time instant, i.e. x1(ti), x2(ti), x3(ti),... x6(ti) etc. ti means time at the ith interval. Suppose x6 represents power consumption at that instant while i believes that x6 is the dependent variable while x1, x2, .... x5 are all independent variables. Then, i can formulate a linear equation, like x6 = a x1 + b x2 +c x3 + d x4 + e x5. Since i can get data for a series of time, say i = 1 to 100. Then, i can have 100 linear equations. Kalam filter is used to estimate a, b, c, d and e. Once their values are know, i can preduct x6(t101) based on x1(t101), x2(t101), ....x5(t101). Then, i can compare such x6 at t101 with the real x6 from the simulator. Since it is a linear approximation, a, b, c, d and e will keep on changing. So, i need to write a program to keep track of these five constants and do predict. Then plot the graph of predicted x6 with the real x6 to finish my work. Since i am new to Kalman filter, i have no idea how can i start my job with the filter. Can you guys give me some guidance how to start? I have read many materials about kalman filter, but i don't know which aspects of kalman filter is applicable to my project..... Thank you Identify the state vector, the measurement matrix, the measurement (co)variance and if possible any prior knowlege of the state (so we can initialise the state and its covarianve matrix). Also you may have to identify the state kinematics (looks like your have a stationary state so we already have that). Then you just use the equations from the book (assuming we can treat the plant noise as non-existant). Sorry for disturbance and thank you for your reply. Do you mind to give me some information about how to determine state vector, the measurement matrix, the measurement (co)variance and state kinematics? I have found some of them but don't really know what that means.... 
Your state vector is a column vector containing all the information needed to describe the system, here that is ${\bf{X}} = [a,b,c,d,e]^T$. The measurement matrix is the matrix $\bf{H}$ such that $z = {\bf{HX}} + w$, where $z$ is what is measured (in this case $x_6$) and $w$ is the measurement error/noise. So for this problem it is the row vector ${\bf{H}}=[x_1,x_2,x_3,x_4,x_5]$. In your case, as the state is just a vector of constants, there is no kinematics, and so if you use the equations from a text the state propagation matrix is ${\bf{\Phi}}={\bf{I}}_{5,5}$.

How do I work out $w$? Is $z={\bf{HX}}+w$ the process equation? Is it necessary to find the measurement equation $Y(K)=H(K) X(K) + V(K)$, where K is the time variable, Y is the noisy measurement of X, H is the measurement matrix and V is the noise? If it is, how do I determine it? Thank you.

There will be a lot of confusion as there are a large number of conventions on the notation. Your Y(K) is my z. I also drop the K from my notation, as this is a sequential process, and the update always refers to the current measurement and the prior estimate of the state, and generates a posterior estimate of the state. The measurement error/noise (my $w$ and your $V$) should be a known process and so its variance should be known.

It seems that the idea of Kalman filtering you proposed has some differences from mine. Do you mind reading the following website to see what my idea is? KALMAN FILTERS This website showed one equation (the dynamic equation) to estimate the posterior state from the prior estimate of the state, and another equation ($y(k)=Mx(t_1)+w(1)$) for the noisy measurement of X. Is the idea proposed in this website applicable to my project? Can you give me an example of measurement noise? Thank you.

You have no dynamic; that site's F is my $\bf{\Phi}$, and as there is no dynamic in this case, if we insist on using it, it is the identity transform. Also, as far as I am aware, you have no plant noise (u(k) in that site's notation, which represents the (unknown and modelled as random) divergence of the true state from the state that the dynamic predicts). That site's M is my $\bf{H}$, which I believe is the more common notation, and we agree on w. Note that the site you quote is only presenting a 1-D KF; you need the multi-dimensional case.

Can you give me an example of measurement noise? I want to measure the height of one of my children, so we mark their height on the door post and measure that height with an extending steel rule. So if their true height is $h_t$ and I measure height $h_m$, we have $h_m = h_t + w$, where $w$ represents the error in this measurement process. This error is composed of multiple terms, let's say the accuracy of making the mark and that of reading the height on the rule.
What is important is the distribution of this error: we would hope it has zero mean, and has a standard deviation we can estimate. After thinking about how the mark is made, we may conclude that the standard deviation of that is $\approx 1 \text{cm}$ and that the standard deviation of reading the rule is $\approx 0.25 \text{cm}$, so the SD of $w$ is $\approx 1.03 \text{cm}$. For convenience we also assume that $w$ is normally distributed, so we have assumed: $w \sim N(0,1.03^2)$. I would suggest that you try to find a copy of Applied Optimal Estimation by Gelb et al., which in my opinion contains the very best presentation of the practical (as well as theoretical) aspects of Kalman filtering.

Thanks for the book; fortunately I found it in my campus library and I will read it first. Do you have some example of an application of Kalman filtering which is somewhat similar to my project? I have read many materials, and I want some example to further my understanding. If you don't have one, just leave it, it doesn't matter. Thank you very much.

No, all the applications I have used are for target tracking and bearings-only analysis. I have implemented a 2-D version of your problem in Matlab; the code follows:

function rv=DataGen
rv=rand(1,2);                  % the two regressor values
rv=[rv, rv(1)+rv(2)+randn];    % the measurement: d3 = d1 + d2 + w

This generates a data set such that the third element is the sum of the first two plus a normally distributed random number of unit variance. (The model is $d_3=d_1+d_2+w$; I use d here as x is usually reserved for the state.)

function [X,P]=KalmanInov(Data,R,X,P)
% Kalman Filter Inovation Processing
H=[Data(1),Data(2)];  %extract the H matrix or vector from the data
z=Data(3);            %extract the measurement from the data
% Below are the standard Kalman inovation equations in matrix form
S=H*P*H'+R;           %note this is a scalar
K=P*H'/S;             %Kalman gain
X=X+K*(z-H*X);        %update the state estimate with the innovation
P=P-K*H*P;            %update the state covariance

The above is the Kalman innovation process straight from the book. X on input is the old state estimate, and on exit the new state estimate; P on input is the prior state covariance matrix and on exit is the posterior state covariance.

function rv=KalmanSim
P=[100,0;0,100];   %initial covariance of state estimate
X=[0;0];           %initial state estimate
R=1;               %measurement variance
rv=[];
for idx=1:10
    D=DataGen;                  % generate some data
    [X,P]=KalmanInov(D,R,X,P);  % process the new measurement
    rv=[rv,X];                  %accumulate the state estimate
end

The above loops over 10 sets of data; plotting the rows of the returned matrix will show the evolution of the state estimate, which tends to [1,1]'. Note there is no need for a dynamic with this model. The attached figure shows the evolution of the components of the state estimate.

CaptainBlack, thanks for spending time on making such a simulation. I need to read some Matlab material to check what the code is actually doing. Does your code use many built-in functions from Matlab? Since I didn't see any calculation within the code.

No, but it does use the built-in matrix arithmetic operators and random number generators. What language were you intending to program this in?
(By the way, I now have a version of the filter in Matlab and the simulation which operates with the full model of your original post.)
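For readers following along in another language, here is a minimal sketch of the same sequential estimator for the full five-coefficient model x6 = a*x1 + ... + e*x5 from the original post. It is written as an illustration only, not CaptainBlack's actual code; the simulated "true" coefficients and the noise level below are made up.

import numpy as np

def kalman_update(X, P, H, z, R):
    # Standard Kalman innovation step for a static state (Phi = I, no plant noise).
    S = H @ P @ H.T + R            # innovation variance (1x1)
    K = P @ H.T / S                # Kalman gain (5x1)
    X = X + K @ (z - H @ X)        # posterior state estimate
    P = P - K @ H @ P              # posterior state covariance
    return X, P

rng = np.random.default_rng(0)
true_coeffs = np.array([2.0, -1.0, 0.5, 3.0, 1.5])   # assumed for the simulation
X = np.zeros((5, 1))               # initial estimate of [a, b, c, d, e]
P = np.eye(5) * 100.0              # large initial uncertainty
R = np.array([[1.0]])              # measurement noise variance

for t in range(100):
    x = rng.normal(size=5)                 # regressors x1..x5 at time t
    z = true_coeffs @ x + rng.normal()     # observed x6
    H = x.reshape(1, 5)                    # measurement matrix for this step
    X, P = kalman_update(X, P, H, np.array([[z]]), R)

print(X.ravel())   # settles near the true coefficients

Each pass through the loop plays the role of one row of data from the simulator; after enough samples the estimate settles near the true coefficients, mirroring the 2-D Matlab example above.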
What college-level math do you (as a sysadmin) use?

I'm planning on writing a piece about math so I thought I'd survey my readers. I assume that most of my readers took a lot of math in high school and college. Of all the math you learned, what parts of it do you use today as you do your system administration? For example, of all the statistics I learned, pretty much all I actually use now is standard deviation (and I just use it in analogies I make). What part of your math education do you use today?

I tend to think this depends on how much programming you're doing - I find that CS and programming specific stuff, like knowing the program you wrote will run in O(n) time, can be useful to keep in the back of your head. Similarly, some parts of discrete mathematics, such as inductive proofs, set theory, and FSMs can come in handy at times. But honestly much of what is taught is aimed at the physical and life sciences - things like geometry and calculus I haven't touched in years. More than any of the specific mathematical techniques, I use the mathematical and scientific rigor that I learned in University.

I actually went to school for a BSc in computer network and information systems. We took basic college math (pre-algebra and algebra with some trig), basic physics, financial accounting, finite math, and statistics; I've actually never taken full-blown calc. This may not sit well with some in the CS world (not taking calc), but I don't feel limited by it. It certainly hasn't affected my day to day life as a sysadmin.

+1 on DTK's comment. One bit of maths that I keep re-visiting is statistics. Beyond the usual 95th %ile for network traffic, coming up with decent models for traffic growth (linear regression, ARIMA, Holt-Winters, etc.), determining if a change in system performance is statistically significant or not, and other such things have all proven useful. Most of it makes me wish I'd concentrated more on that, rather than the more matrix-related skills I'd focused on for my main interest at the time, which was computer graphics. A book on "Statistics for System Administration" would be a wonderful thing.

I got my degree in meteorology, so I had a lot of advanced math to take (2 semesters of differential equations -- yuck!). As a sysadmin, I think it's fair to say that I use absolutely nothing I learned in college math classes. What I use somewhat regularly is my high school statistics class. I've used basic statistical analysis to show that HDFS placed a minimal load on a host system, that one server is more responsive than another, etc.

The math that I still use the most is statistics (and only the most basic) and some analytical geometry to think about performance, capacity and other "curves". No calc, no differential equations, and entirely too much "commercial math" for budgets and ROI.

I'm going to semi-echo most people's comments and say that the only "higher" math that comes up is statistics, and I wish I had taken more of it. When it comes to programming, I write mostly functional scripts which require zero math beyond basic binary logic. I use so little math that I'm really considering taking a college course to refresh myself. It's been forever since I've done any, and I kind of like it, although I honestly wasn't much good at it (aside from Geometry, which just kind of clicked with me).

I use SPC (Statistical Process Control), specifically the EWMA algorithm, to trigger alarms without setting hardcoded limits (often very difficult to do).
I also use boxplots to summarize daily measurements (response times, utilizations, etc). Recently I have been thinking about monitoring daily boxplot data (median or Q3) with SPC so that I don't have to look at a lot of data unless a statistically significant deviation occurs. For comparing WAN circuits, I have plotted RTT boxplots for typical days as a function of the great circle distance between endpoints. These plots include a "speed of light in fiber" line as the "Physics limit". An "unreasonable" circuit will appear as an outlier.

As with DTK et al, higher mathematics helped mostly by making me think hard thoughts. I've used bits of probability, statistics, discrete math, combinatorics, and formal logic. Graph theory too. CS doesn't count as math, right? I use everything from automata and complexity theory to programming concepts. The thing is, many of the problems that I solve with math I could just as easily approximate an answer to or ignore. Example: I once used simple probability to determine that file transfers from a particular vendor were failing because they were trying to send encrypted binary data over an ASCII FTP connection. Encrypted binary data appears to be uniformly random, so the chance of the byte sequence \r\n appearing at any given two-byte position is 1 in (2^8)^2. Files sized as these were would have that sequence once every three days or so, which was how frequently the jobs were failing! od -c | grep further verified that the corrupted files we'd received had no \r\n sequences. Now I could've called the vendor and asked, but I decided to use math to firm up my understanding beforehand. So Tom, if you're trying to gauge the usefulness of mathematical techniques, please be mindful of this built-in bias: your survey subjects cannot tell you about those times that they do not apply a technique which might be useful and lost out for it. The Rumsfeldian unknown unknowns.

My degree is Mechanical Engineering. I had Calculus, Differential Equations. I don't remember statistics but I wish I did. In my sysadmin work, I do shell scripting mostly. Algebra and basic spreadsheet-type maths. I've done admin for Fluid Dynamics groups. My Engineering training was useful for understanding what my engineers were doing, but I never use my math directly.

+1 DTK. I don't even have need to use statistics. Basic algebra and maths is all that it takes.
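As an illustration of the EWMA-based alarming idea mentioned in the comments above, here is a minimal sketch. It is not from the original post; the smoothing factor and the control-limit width are assumed, typical SPC choices.

def ewma_alarms(samples, alpha=0.2, k=3.0):
    """Yield (value, alarm) pairs; alarm is True when a sample falls
    outside k standard deviations of the EWMA-smoothed baseline."""
    mean = None
    var = 0.0
    for x in samples:
        if mean is None:
            mean = x
            yield x, False
            continue
        resid = x - mean
        alarm = var > 0 and abs(resid) > k * var ** 0.5
        # update the smoothed mean and a smoothed estimate of the variance
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * (var + alpha * resid * resid)
        yield x, alarm

# usage: flag response-time samples that deviate from recent behaviour
for value, alarm in ewma_alarms([12, 13, 11, 12, 45, 12]):
    if alarm:
        print("alarm:", value)

The point of the approach is that the threshold adapts to whatever "normal" currently looks like, so no hardcoded limit is needed.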
Posts by sonney32 (Total # Posts: 5)

Statistics (binomial random variable): Let x be a binomial random variable with n = 10 and p = .4. Find the values P(x 4) and P(x ≤ 4). I could really use some examples to help me get started and understand it. Thanks

Assuming a standard deck of 52 playing cards, calculate the probability. Draw one card that is a queen and a heart. Is it 4/52 + 13/52 - 1/52 = 16/52, or 4/13? Is this right? Draw one card that is a queen or a heart: 4/52 + 13/52 - 2/52 = 15/52. Not sure if I have the right or w...

Roll dice: Event A, you roll the die and it's even; Event B, you roll the die and it's less than 5. Roll the die once. What is the probability that Event A will occur given that Event B has already occurred? Would it be 3/4 = .75 or 75%? Not sure, need help. And, roll the die once, what i...

The first one is to draw one card that is either a queen and a heart. Thanks

On a standard 52 card deck, calculate the probability that one card is either a queen or a heart. Draw one card: what is the probability that it is a king, given that it is a club? Thanks
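For the card questions above, a quick brute-force check over the 52-card deck clarifies the "and" versus "or" cases; this is an illustrative sketch, not part of the original posts.

ranks = ["A", "2", "3", "4", "5", "6", "7", "8", "9", "10", "J", "Q", "K"]
suits = ["hearts", "diamonds", "clubs", "spades"]
deck = [(r, s) for r in ranks for s in suits]

queen_and_heart = [c for c in deck if c[0] == "Q" and c[1] == "hearts"]
queen_or_heart = [c for c in deck if c[0] == "Q" or c[1] == "hearts"]

print(len(queen_and_heart), "/ 52")   # 1 / 52  (queen AND heart: the single card)
print(len(queen_or_heart), "/ 52")    # 16 / 52 (queen OR heart), i.e. 4/52 + 13/52 - 1/52

So 4/52 + 13/52 - 1/52 = 16/52 is the probability of "queen or heart"; "a queen and a heart" is the single card, 1/52.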
Posts about books on My Brain is Open

Following is my review of Boosting: Foundations and Algorithms (by Robert E. Schapire and Yoav Freund), to appear in the SIGACT book review column soon.

Book: Boosting: Foundations and Algorithms (by Robert E. Schapire and Yoav Freund)
Reviewer: Shiva Kintali

You have k friends, each one earning a small amount of money (say 100 dollars) every month by buying and selling stocks. One fine evening, at a dinner conversation, they told you their individual "strategies" (after all, they are your friends). Is it possible to "combine" these individual strategies and make a million dollars in a year, assuming your initial capital is the same as your average friend's? You are managing a group of k "diverse" software engineers, each with only "above-average" intelligence. Is it possible to build a world-class product using their skills? The above scenarios give rise to fundamental theoretical questions in machine learning and form the basis of Boosting. As you may know, the goal of machine learning is to build systems that can adapt to their environments and learn from their experience. In the last five decades, machine learning has impacted almost every aspect of our life, for example, computer vision, speech processing, web search, information retrieval, biology and so on. In fact, it is very hard to name an area that cannot benefit from the theoretical and practical insights of machine learning. The answer to the above-mentioned questions is Boosting, an elegant method for driving down the error of a combined classifier by combining a number of weak classifiers. In the last two decades, several variants of Boosting have been discovered. All these algorithms come with a set of theoretical guarantees and have made a deep practical impact on the advances of machine learning, often providing new explanations for existing prediction algorithms. Boosting: Foundations and Algorithms, written by the inventors of Boosting, deals with variants of AdaBoost, an adaptive boosting method. Here is a quick explanation of the basic version. AdaBoost makes iterative calls to the base learner. It maintains a distribution over training examples to choose the training sets provided to the base learner on each round. Each training example is assigned a weight, a measure of the importance of correctly classifying that example on the current round. Initially, all weights are set equally. On each round, the weights of incorrectly classified examples are increased so that "hard" examples get successively higher weight. This forces the base learner to focus its attention on the hard examples and drives down the generalization error. (A short sketch of this loop appears at the end of this review.) AdaBoost is fast and easy to implement, and the only parameter to tune is the number of rounds. The actual performance of boosting is dependent on the data. Chapter 1 provides a quick introduction and overview of Boosting algorithms with practical examples. The rest of the book is divided into four major parts. Each part is divided into 3 to 4 chapters. Part I studies the properties and effectiveness of AdaBoost and theoretical aspects of minimizing its training and generalization errors. It is proved that AdaBoost drives the training error down very fast (as a function of the error rates of the weak classifiers) and the generalization error arbitrarily close to zero. Basic theoretical bounds on the generalization error suggest that AdaBoost should overfit; however, empirical studies show that AdaBoost does not overfit.
To explain this paradox, a margin-based analysis is presented to explain the absence of overfitting. Part II explains several properties of AdaBoost using game-theoretic interpretations. It is shown that the principles of Boosting are very intimately related to the classic min-max theorem of von Neumann. A two-player (the boosting algorithm and the weak learning algorithm) game is considered, and it is shown that AdaBoost is a special case of a more general algorithm for playing a repeated game. By reversing the roles of the players, a solution is obtained for the online prediction model, thus establishing a connection between Boosting and online learning. Loss minimization is studied and AdaBoost is interpreted as an abstract geometric framework for optimizing a particular objective function. More interestingly, AdaBoost is viewed as a special case of more general methods for optimization of an objective function, such as coordinate descent and functional gradient descent. Part III explains several methods of extending AdaBoost to handle classifiers with more than two output classes. AdaBoost.M1, AdaBoost.MH and AdaBoost.MO are presented along with their theoretical analysis and practical applications. RankBoost, an extension of AdaBoost to ranking problems, is studied. Such an algorithm is very useful, for example, to rank webpages based on their relevance to a given query. Part IV is dedicated to advanced theoretical topics. Under certain assumptions, it is proved that AdaBoost can handle noisy data and converge to the best possible classifier. An optimal boost-by-majority algorithm is presented. This algorithm is then modified to be adaptive, leading to an algorithm called BrownBoost. Many examples are given throughout the book to illustrate the empirical performance of the algorithms presented. Every chapter ends with Summary and Bibliography sections mentioning the related publications. There are well-designed exercises at the end of every chapter. An appendix briefly outlines some required mathematical background. The Boosting book is definitely a very good reference text for researchers in the area of machine learning. If you are new to machine learning, I encourage you to read an introductory machine learning book (for example, Machine Learning by Tom M. Mitchell) to better understand and appreciate the concepts. In terms of being used in a course, a graduate-level machine learning course can be designed from the topics covered in this book. The exercises in the book can be readily used for such a course. Overall this book is a stimulating learning experience. It has provided me new perspectives on the theory and practice of several variants of Boosting algorithms. Most of the algorithms in this book are new to me and I had no difficulties following the algorithms and the corresponding theorems. The exercises at the end of every chapter made these topics much more fun to learn. The authors did a very good job compiling different variants of Boosting algorithms and achieved a nice balance between theoretical analysis and practical examples. I highly recommend this book for anyone interested in machine learning.
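To make the reweighting loop described at the start of this review concrete, here is a minimal sketch of basic (binary) AdaBoost. It is an illustration only: it is not code from the book, and the choice of decision stumps as the weak learner is an assumption.

import numpy as np

def adaboost(X, y, build_stump, rounds=50):
    """X: (n, d) features, y: labels in {-1, +1}.
    build_stump(X, y, w) must return a weak classifier h with h.predict(X) in {-1, +1}."""
    y = np.asarray(y)
    n = len(y)
    w = np.full(n, 1.0 / n)          # start with equal weights
    hypotheses, alphas = [], []
    for _ in range(rounds):
        h = build_stump(X, y, w)     # weak learner trained on the weighted sample
        pred = h.predict(X)
        err = np.sum(w[pred != y])   # weighted training error
        if err >= 0.5 or err == 0:   # no better than chance, or already perfect: stop
            break
        alpha = 0.5 * np.log((1 - err) / err)
        # misclassified examples get exponentially more weight
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        hypotheses.append(h)
        alphas.append(alpha)
    def predict(Xnew):
        votes = sum(a * h.predict(Xnew) for a, h in zip(alphas, hypotheses))
        return np.sign(votes)
    return predict

The combined classifier is a weighted majority vote of the weak hypotheses; the weights on the training examples are exactly the distribution the review mentions, concentrating on the "hard" examples as the rounds proceed.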
Recreational Math Books – Part II

In my previous post (see here) I mentioned some interesting puzzle books. In today's post I will mention a different type of recreational math book, i.e., biographical books. Here are my top three books in this category. They are must-read books for anybody even remotely interested in mathematics. There are three mathematicians whose stories stand out: (1) Andrew Wiles, whose determination to solve Fermat's Last Theorem inspires future generations and gives a strong message that patience and focus are two of the most important assets that every mathematician should possess; (2) Paul Erdos, whose love for mathematics is deep and prolific; and (3) Srinivasa Ramanujan, whose story is different from that of any other mathematician ever.

1) Fermat's Enigma: The Epic Quest to Solve the World's Greatest Mathematical Problem. This is one of the first "recreational" books I read. It starts with the history of Fermat's Last Theorem (FLT), discusses the lifestyle of early mathematicians and moves on to talk about Andrew Wiles's 8-year-long journey proving FLT. Watch this BBC documentary for a quick overview of Andrew Wiles's story.

2) The Man Who Loved Only Numbers: The Story of Paul Erdos and the Search for Mathematical Truth. Paul Erdos is one of the greatest and most prolific mathematicians ever. The title of my blog is inspired by one of his famous sayings, "My Brain is Open". I don't want to reveal any details of this book. You will enjoy this book more if you read it without knowing anything about Paul Erdos. I should warn you that there are some really tempting open problems in this book. When I first read this book (during my PhD days) I spent almost one full semester reading papers related to the Twin Prime Conjecture and other number-theoretic problems. I also wrote a paper titled "A generalization of Erdos's proof of Bertrand-Chebyshev's theorem". Watch this documentary "N is a Number" for a quick overview of Paul Erdos's story.

3) The Man Who Knew Infinity: A Life of the Genius Ramanujan. This is a very dense book. I bought it five years back and only recently finished reading it. This book covers lots of "topics": South Indian lifestyle, Hardy's life, Ramanujan's proofs and his flawed proofs, his journey to work with Hardy, his health struggles, etc. It is definitely worth reading to know the details of Ramanujan's passion for mathematics.

Recreational Math Books – Part I

Most of us encounter math puzzles during high school. If you are really obsessed with puzzles, actively searching for and solving them, you will very soon run out of puzzles! One day you will simply realize that you are not encountering any new puzzles. No more new puzzles. Poof. They are all gone. You feel like screaming "Give me a new puzzle". This happened to me around the end of my undergrad days. During this phase of searching for puzzles, I encountered the Graceful Tree Conjecture and realized that there are lots of long-standing open "puzzles". I don't scream anymore. Well… sometimes I do scream when my proofs collapse. But that's a different kind of screaming. Sometimes, I do try to create new puzzles. Most of the puzzles I create are either very trivial to solve or very hard and related to long-standing conjectures. Often it takes lots of effort and ingenuity to create a puzzle with the right level of difficulty. In today's post, I want to point you to some of the basic puzzle books that everybody should read. So, the next time you see a kid screaming "Give me a new puzzle", simply point him/her to these books. Hopefully they will stop screaming for some time.
If they come back to you soon, point them to the Graceful Tree Conjecture :)

1) Mathematical Puzzles: A Connoisseur's Collection by Peter Winkler
2) Mathematical Mind-Benders by Peter Winkler
3) The Art of Mathematics: Coffee Time in Memphis by Bela Bollobás
4) Combinatorial Problems and Exercises by Laszlo Lovasz
5) Algorithmic Puzzles by Anany Levitin and Maria Levitin

I will mention more recreational math books in part 2 of this blog post.

Open Problems from Lovasz and Plummer's Matching Theory Book

I always have exactly one bed-time mathematical book to read (for an hour) before going to sleep. It helps me learn new concepts and hopefully stumble upon interesting open problems. Matching Theory: If you are interested in learning the algorithmic and combinatorial foundations of Matching Theory (with a historic perspective), then this book is a must read. Today's post is about the open problems mentioned in the Matching Theory book. If you know the status (or progress) of these problems, please leave a comment.

1. Consistent Labeling and Maximum Flow. Conjecture (Fulkerson): Any consistent labelling procedure results in a maximum flow in a polynomial number of steps.

2. Toughness and Hamiltonicity. The toughness of a graph $G$, $t(G)$, is defined to be $+\infty$ if $G = K_n$, and to be $\min(|S|/c(G-S))$ if $G \neq K_n$, where the minimum is taken over vertex subsets $S$ whose removal disconnects the graph. Here $c(G-S)$ is the number of components of $G-S$. Conjecture (Chvatal 1973): There exists a positive real number $t_0$ such that for every graph $G$, $t(G) \geq t_0$ implies $G$ is Hamiltonian.

3. Perfect Matchings and Bipartite Graphs. Theorem: Let $X$ be a set, $X_1, \dots, X_t \subseteq X$ and suppose that $|X_i| \leq r$ for $i = 1, \dots, t$. Let $G$ be a bipartite graph such that a) $X \subseteq V(G)$, b) $G - X_i$ has a perfect matching, and c) if any edge of $G$ is deleted, property (b) fails to hold in the resulting graph. Then, the number of vertices in $G$ with degree $\geq 3$ is at most $r^3 {t \choose 3}$. Conjecture: The conclusion of the above theorem holds for non-bipartite graphs as well.

4. Number of Perfect Matchings. Conjecture (Schrijver and W. G. Valiant 1980): Let $\Phi(n,k)$ denote the minimum number of perfect matchings a k-regular bipartite graph on 2n points can have. Then, $\lim_{n \to \infty} (\Phi(n,k))^{\frac{1}{n}} = \frac{(k-1)^{k-1}}{k^{k-2}}$.

5. Elementary Graphs. Conjecture: For $k \geq 3$ there exist constants $c_1(k) > 1$ and $c_2(k) > 0$ such that every k-regular elementary graph on 2n vertices, without forbidden edges, contains at least $c_2(k) \cdot c_1(k)^n$ perfect matchings. Furthermore $c_1(k) \to \infty$ as $k \to \infty$.

6. Number of Colorations. Conjecture (Schrijver '83): Let G be a k-regular bipartite graph on 2n vertices. Then the number of colorings of the edges of G with k given colors is at least $(\frac{(k!)^2}{k^k})^n$.

7. The Strong Perfect Graph Conjecture (resolved). Theorem: A graph is perfect if and only if it does not contain, as an induced subgraph, an odd hole or an odd antihole.

Book Review of "Elements of Automata Theory"

During summer 2010 I started reading a book titled Elements of Automata Theory by Jacques Sakarovitch.

Book: Elements of Automata Theory by Jacques Sakarovitch
Reviewer: Shiva Kintali

During my undergrad I often found myself captivated by the beauty and depth of automata theory. I wanted to read one book on automata theory and say that I "know" automata theory. A couple of years later I realized that it is silly to expect such a book.
The depth and breadth of automata theory cannot be covered by a single book. My PhD thesis is heavily inspired by automata theory. I had to read several (fifty-year-old) papers and books related to automata theory to understand several fundamental theorems. Unfortunately, the concepts I wanted to learn are scattered across multiple books and old research papers, most of which are hard to find. When I noticed that Prof. Bill Gasarch was looking for a review of Elements of Automata Theory, I was very excited and volunteered to review it, mainly because I wanted to increase my knowledge of automata theory. Given my background in parsing technologies and research interests in space-bounded computation, I wanted to read this book carefully. This book is around 750 pages long and it took me around one year to (approximately) read it. It is very close to my expectations of the one book on automata theory.

First impressions: Most of the books on automata theory start with the properties of regular languages, finite automata, pushdown automata, context-free languages, pumping lemmas, the Chomsky hierarchy, decidability and conclude with NP-completeness and the P vs NP problem. This book is about "elements" of automata theory. It focuses only on finite automata over different mathematical structures. It studies pushdown automata only in the context of rational subsets in the free group. Yes, there are 750 pages worth of literature studying only finite automata. This book is aimed at people enthusiastic about knowing the subject rigorously and is not intended as a textbook for an automata theory course. It can also be used by advanced researchers as a desk reference. There is no prerequisite to follow this book, except for a reasonable mathematical maturity. It can be used as a self-study text. This book is a direct translation of its French original. The book is divided into five major chapters. The first three chapters deal with the notions of rationality and recognizability. A family of languages is rationally closed if it is closed under the rational operations (union, product and star). A language is recognizable if there exists a finite automaton that recognizes it. The fourth and fifth chapters discuss rationality in relations. Chapter 0 acts as an appendix of several definitions of structures such as relations, monoids, semirings, matrices and graphs, and concepts such as decidability. Following is a short summary of the five major chapters. There are several deep theorems (for example, Higman's Theorem) studied in this book. I cannot list all of them here. The chapter summaries in the book have more details.

Chapter 1 essentially deals with the basic definitions and theorems required for any study of automata theory. It starts with the definitions of states, transitions, (deterministic and nondeterministic) automata, transpose, ambiguity and basic operations such as union, cartesian product, star, and quotient of a language. Rational operators, rational languages and rational expressions are defined, and the relation between rationality and recognizability is established, leading to the proof of Kleene's theorem. String matching (i.e., finding a word in a text) is studied in detail as an illustrative example. Several theorems related to the star height of languages are proved. A fundamental theorem stating that "the language accepted by a two-way automaton is rational" is proved. The distinction between Moore and Mealy machines is introduced.
Chapter 2 deals with automata over the elements of an arbitrary monoid and the distinction between rational sets and recognizable sets in this context. This leads to a better understanding of Kleene's theorem. The notion of a morphism of automata is introduced and several properties of morphisms and factorisations are presented. Conway's construction of the universally minimal automaton is explained and the importance of well quasi-orderings is explained in detail. Based on these established concepts, McNaughton's theorem (which states that the star height of a rational group language is computable) is studied with a new perspective.

Chapter 3 formalizes the notion of "weighted" automata that count the number of computations that make an element be accepted by an automaton, thus generalizing the previously introduced concepts in a new dimension. Languages are generalized to formal series and actions are generalized to representations. The concepts and theorems in this chapter make the reader appreciate the deep connections of automata theory with several branches of mathematics. I personally enjoyed reading this chapter more than any other chapter in this book.

Chapter 4 builds an understanding of the relations realized by different finite automata in the order they are presented in chapters 1, 2 and 3. The Evaluation Theorem and the Composition Theorem play a central role in this study. The decidability of the equivalence of transducers (with and without weights) is studied. This chapter concludes with the study of deterministic and synchronous relations.

Chapter 5 studies the functions realized by finite automata. Deciding functionality, sequential functions, uniformisation of rational relations by rational functions, semi-monomial matrix representation, translations of a function and uniformly bounded functions are studied.

There are exercises (with solutions) at the end of every section of every chapter. These exercises are very carefully designed and aid a better understanding of the corresponding concepts. First-time readers are highly encouraged to solve (or at least glance through) these exercises. Every section ends with Notes and References mentioning the corresponding references and a brief historical summary of the chapter. Overall I found the book very enlightening. It has provided me new perspectives on several theorems that I assumed I understood completely. Most of the concepts in this book are new to me and I had no problems following the concepts and the corresponding theorems. The related exercises made these topics even more fun to learn. It was a joy for me to read this book and I recommend it for anyone who is interested in automata theory (or more generally complexity theory) and wants to know the fundamental theorems of the theory of computing. If you are a complexity theorist, it is worthwhile to look back at the foundations of the theory of computing to better appreciate its beauty and history. This book is definitely unique in its approach and the topics chosen. Most of the topics covered are either available only in very old papers or not accessible at all. I applaud the author for compiling these topics into a wonderful free-flowing text. This book is nicely balanced between discussions of concepts and formal proofs. The writing is clear and the topics are organized very well from the most specific to the most general, making it a free-flowing text.
On the other hand, it is very dense and requires lots of motivation and patience to read and understand the theorems. The author chose a rigorous way of explaining rationality and recognizability. Sometimes you might end up spending a couple of hours to read just two pages. Such is the depth of the topics covered. Beginners might find this book too much to handle. I encourage beginners to read this book after taking an introductory automata theory course. This is definitely a very good reference text for researchers in the field of automata theory. In terms of being used in a course, I can say that a graduate-level course can be designed from a carefully chosen subset of the topics covered in this book. The exercises in the book can be readily used for such a course. This is an expensive book, which is understandable based on the author's efforts to cover several fundamental topics (along with exercises) in such depth. If you think it is expensive, I would definitely suggest that you get one for your library.
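Since the book's central objects are finite automata, a tiny concrete example may help readers who have not met them before. The following sketch is an illustration added alongside this review, not material from the book; it shows a deterministic finite automaton recognizing the rational language of binary strings with an even number of 1s.

def make_dfa(states, start, accepting, delta):
    def accepts(word):
        q = start
        for symbol in word:
            q = delta[(q, symbol)]
        return q in accepting
    return accepts

# states "even" and "odd" track the parity of the number of 1s seen so far
even_ones = make_dfa(
    states={"even", "odd"},
    start="even",
    accepting={"even"},
    delta={
        ("even", "0"): "even", ("even", "1"): "odd",
        ("odd", "0"): "odd",   ("odd", "1"): "even",
    },
)

print(even_ones("1010"))   # True: two 1s
print(even_ones("111"))    # False: three 1s

In the book's terminology, this language is recognizable (a finite automaton accepts it) and, by Kleene's theorem, it is also rational, i.e., denoted by a rational (regular) expression such as (0*10*1)*0*.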
Trouble rationalising

A snippet from a calculus question; however, I am finding the simplifying of a rationalisation hard. This

$\frac{\frac{1}{\sqrt{x+h}}-\frac{1}{\sqrt{x}}}{h}$

needs to be rationalised. Now I know I multiply both top and bottom by $\frac{1}{\sqrt{x+h}}+\frac{1}{\sqrt{x}}$, but I'm having mega difficulty actually doing it and getting an equation that actually looks half decent at the end. If someone could just show me, step by step, how to really simplify the equation I would be extremely grateful. Thanks.

You could simplify the top line to be... $\frac{1}{\sqrt{x+h}} - \frac{1}{\sqrt{x}} = \frac{\sqrt{x} - \sqrt{x+h}}{\sqrt{x+h}\sqrt{x}}$.

But if I do that I can't rationalise it.

First understand that you can move the $h$ right into the denominator to simplify things a bit. So first do as deadstar showed you. Then whenever you want to rationalize either a denominator or a numerator, simply multiply both top and bottom by the conjugate of the one you want to rationalize. I assume here you want to rationalize the numerator so that you can remove that nasty $h$ from the problem:

$\frac{\sqrt{x} - \sqrt{x+h}}{h\sqrt{x}\sqrt{x+h}} \cdot \frac{\sqrt{x}+\sqrt{x+h}}{\sqrt{x}+\sqrt{x+h}} = \frac{x-(x+h)}{h\sqrt{x}\sqrt{x+h}(\sqrt{x}+\sqrt{x+h})} = \frac{-1}{x\sqrt{x+h}+\sqrt{x}(x+h)}$

I'll take it one more step for you: as $h\to 0$ we get $-\frac{1}{2x^{3/2}}$.
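As a quick sanity check on the algebra above, the limit can be confirmed with a computer algebra system; this snippet is an added illustration, not part of the original thread.

import sympy as sp

x, h = sp.symbols("x h", positive=True)
expr = (1/sp.sqrt(x + h) - 1/sp.sqrt(x)) / h
print(sp.limit(expr, h, 0))        # -1/(2*x**(3/2))
print(sp.diff(1/sp.sqrt(x), x))    # the same thing: the derivative of x**(-1/2)

Both lines print $-\frac{1}{2x^{3/2}}$, matching the hand computation.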
Copyright © University of Cambridge. All rights reserved.

Jannis from Northcross Intermediate said: "I think Kaia was right. The ties come out of the closet randomly. We know each tie has one eighth of a chance to be pulled out, so it's quite likely that the father will not wear one tie twice in the week. But it's random, so there is a chance of him wearing a tie twice in the week." This makes the point that these are probabilities, so you can't be sure.

Matthew and James from Stradbroke Primary did an experiment, and found that they picked the same tie twice in 9 out of 10 weeks. Ege and Onur from FMV Ozel Erenkoy Isik Primary School, Turkey, also did experiments, and you can find their results here and here. Some people have tried to calculate the probability of picking the same tie twice, but none correctly yet.

Here is what happened when we tried the experiment. We found that there were: 6 weeks out of 20 when he wore a different tie every day (green), so the probability of this is 6/20; 12 weeks out of 20 when he wore the same tie twice in a week (blue), so the probability is 12/20; and 2 weeks out of 20 when he wore the same tie more than twice, so the probability of this is 2/20. These are only approximate probabilities, and each time you do the experiment you will get a different answer. You can improve the approximation by increasing the number of times you do the experiment and combining all the results, as this will be more accurate.

Here are some photos of children working on this problem: preparing materials, drawing ties, completing the table, and collating results.
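For anyone who wants to run many more "weeks" than is practical by hand, the experiment is easy to simulate. This sketch is an added illustration; it assumes 8 ties, as in the solutions above, and a 5-day working week, which is an assumption.

import random

def week_has_repeat(num_ties=8, days=5):
    picks = [random.randrange(num_ties) for _ in range(days)]
    return len(set(picks)) < days      # True if some tie was worn more than once

trials = 100_000
repeats = sum(week_has_repeat() for _ in range(trials))
print(repeats / trials)   # roughly 0.79: most weeks contain at least one repeat

With many trials the estimated probability settles near 0.79, which is consistent with the small classroom experiments above, where 14 of the 20 weeks had a repeat.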
Topic: I'm new to matlab; why is this function wrong?

"Sargondjani" wrote:
> you can only define functions inside a function-file and not in a script (don't ask me why, because it bothered me too, but matlab developers usually have good reasons for doing such things)
> so either:
> 1) transform your script into a function:
> start with: "function my_script()" and end with "end"
> 2) define the function in an m-file that you call in your script
> i hope this helps...

Thanks Sargondjani. Now when I run this script:

hold on
for n = 1:1:730
[R V] = orbits(Ro, Vo, del_t);
hold on;

calling this function:

function [R V] = orbits(Ro, Vo, del_t)
Ro=[1.5e11, 0];
Vo=[0, 2*pi*Ro(1)/(365*24*60*60)];
G = 1.67e-11;
Ms = 1.97e30;
d = (Ro(1)^2 + Ro(2)^2)^(3/2);
a = -(G*Ms/d)* Ro;
V = Vo + a * del_t;
R = Ro + Vo * del_t + .5 * a * del_t^2;

I get

Error in ==> two_dimensional_ac at 5
[R V] = orbits(Ro, Vo, del_t);
Discrete Cosine Transformations

The topic of this post is the Discrete Cosine Transformation, abbreviated pretty universally as DCT. DCTs are used to convert data into the summation of a series of cosine waves oscillating at different frequencies (more on this later). They are widely used in image and audio compression. They are very similar to Fourier Transforms, but DCT involves the use of just cosine functions and real coefficients, whereas Fourier Transformations make use of both sines and cosines and require the use of complex numbers. DCTs are simpler to calculate. Both Fourier and DCT convert data from a spatial domain into a frequency domain, and their respective inverse functions convert things back the other way.

Why are DCTs so useful? As mentioned above, they are used extensively in image and audio compression. To compress analog signals, we often discard information (called lossy compression) to enable efficient compaction. We have to be careful about what information in a signal we should discard (or smooth out) when removing bits to compress a signal. DCT helps with this process. Thankfully, our eyes, ears and brain are analog devices and we are less sensitive to distortion around edges, and we are less likely to notice subtle differences in fine textures. Also, for many audio signals and graphical images, the amplitudes and pixels are often similar to their near neighbors. These factors provide a solution: if we are careful about removing the higher-frequency elements of an analog signal (things that change over short 'distances' in the data), there is a good chance that, if we don't take this too far, our brains might not perceive a difference.

The JPEG (Joint Photographic Experts Group) format uses DCT to compress images (we'll describe how later). Below is a test image of some fern leaves. In raw format this image is 67,854 bytes in size. To the right of it are a series of images made with increasing levels of compression. At each stage, the image storage size gets smaller, but frequency information in the image is lost as increasingly higher compression is applied. With a small amount of compression, it's practically impossible for the brain to notice the difference. As we move further to the right, defects become more obvious. (The six versions shown weigh in at 67,854; 7,078; 5,072; 3,518; 2,048; and 971 bytes.)

How is this compression achieved? Using a DCT, the image is shifted into the frequency domain. Then, depending on how much compression is required, the higher-frequency coefficients of the signal are masked off and removed (the digital equivalent of applying a low-pass analog filter). When the image is recreated using the truncated coefficients, the higher-frequency components are not present. Notice the 'blocky' appearance of the image on the far right? This is an artifact of how the DCT compression is calculated. Again, we'll look at this later.

Example: One Dimension

Let's start with analysis in one dimension. Imagine we have 16 points as shown below. Time is nominally on the x-axis, with a variety of values on the y-axis. We're going to apply DCT to these data and see how these individual cosine components add together to approximate the source. The math required to perform DCT is not very complicated, but it's also not rated PG-13. I'm opening with the concepts, then I'm going to brush straight past the implementation steps and move on to the results.
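For readers who do want to see what the transform looks like in code, here is a minimal, unoptimized sketch of the 1-D DCT (the common DCT-II variant) and its use to truncate high-frequency terms. It is an illustration added here, not the code used to produce the figures in this post, and the sample values are made up.

import math

def dct(x):
    """Naive O(N^2) DCT-II of a list of samples."""
    N = len(x)
    return [sum(x[n] * math.cos(math.pi / N * (n + 0.5) * k) for n in range(N))
            for k in range(N)]

def idct(X):
    """Matching inverse (a scaled DCT-III), reconstructing the samples."""
    N = len(X)
    return [(X[0] / 2 + sum(X[k] * math.cos(math.pi / N * (n + 0.5) * k)
                            for k in range(1, N))) * 2 / N
            for n in range(N)]

samples = [8, 16, 24, 32, 40, 48, 40, 32, 24, 16, 8, 0, 8, 16, 24, 32]  # made-up data
coeffs = dct(samples)
coeffs_lowpass = coeffs[:8] + [0.0] * 8      # keep only the 8 lowest frequencies
smoothed = idct(coeffs_lowpass)              # a low-pass approximation of the input

Zeroing the upper half of the coefficient list and inverting is exactly the 'mask off the high frequencies' step described above; the reconstruction is a smoothed version of the original 16 points.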
For those that like to code, and want to experiment with this, let me give the standard advice: "Google is your friend!" – you can find plenty of implementations of DCT in the language of your choice on the web.

Using DCT, we can break this curve into a series of cosine waves of various frequencies. It is by the superposition (adding together) of these fundamental waves that we recreate the original curve. For the above sixteen points, I've broken down the data using DCT. The graphs on the left below show the cosine function of each frequency component, and the image on the right shows the superposition of this with the running total (this component added to all those above it). The lowest frequency components are shown first, and as we move down the page, the higher frequency components are added. Above each graph is a number showing the coefficient 'weight' of each frequency component. Typically (though not always), these numbers get smaller as the contribution to the overall shape from these higher frequencies gets smaller. As we move down the charts we can see the shape getting closer and closer in approximation to the original data (the errors between the actual data and the curve approximation getting smaller – the differences being the higher frequency 'wiggles' in the raw data). Depending on the level of compression we need, we can truncate the higher frequency components and decide where to draw the line at a 'good-enough' approximation. The sixteen coefficients for this example are: Χ[0] = 7.867, Χ[1] = 6.742, Χ[2] = 2.886, Χ[3] = -2.001, Χ[4] = -3.991, Χ[5] = 1.546, Χ[6] = -0.084, Χ[7] = -0.703, Χ[8] = -1.149, Χ[9] = 0.526, Χ[10] = 0.439, Χ[11] = -0.202, Χ[12] = -0.165, Χ[13] = 0.382, Χ[14] = -0.026, Χ[15] = 0.368 (each chart was labelled with the range 0–k of the components superimposed so far).

Example: Two Dimensions (and the basics of JPEG compression)

We can apply just the same technique in two dimensions, this time breaking the image into blocks of pixels and looking at the harmonics in each block. The result of this analysis is a matrix of coefficients. Moving down and to the right are the coefficients with increasingly higher frequency components. As before, we can compress an image by masking off and truncating the coefficients at frequencies higher than we care about. In JPEG compression, the block size used is 8 x 8, resulting in a similar-size matrix of coefficients. However, for my examples, I'm going to select a larger block size of 16 x 16 (I think it is easier to visualize things at this size).

As you probably know, colors in computer images are often described by the relative mixing of their Red, Green and Blue components. This is, internally, how computers store the image data, but this is not the only way to describe colors. Another method is called YUV. It's outside the scope of this article (follow the link for more background), but this format describes colors by their luminance and chrominance (sort of like the brightness of the color and the shade of the color). Our eyes are more sensitive to changes in brightness than to changes in shade, and this can be exploited for more compression: by converting RGB colors into YUV space, the U and V channels can then be sub-sampled (quantized) and reduced in dynamic range. Because of the lower sensitivity, you need fewer distinct levels of shade of a color than of brightness to still maintain a smooth image. Because we are working on image processing, we have to roll out Lenna, "The First Lady of the Internet".
This image is probably the most widely used test image in computer history (click on the link for background details). We're going to look at a selection of 16 x 16 pixel blocks around the hat. Below are sixteen versions of this region rendered using coefficients truncated at various levels. Initially the individual blocks are clearly visible. As the coefficients are truncated higher, the edges of the blocks become less discernible but the images still look a little blurry. As the coefficient cut-off increases, the images get sharper.

JPEG Advanced

The above paragraphs explain the concept of how JPEG compression works (reduction of the higher frequency components); in reality, it is a little more complicated. The DCT is applied over an 8 x 8 block, to produce an 8 x 8 matrix of frequency coefficients. Each block is calculated distinctly, and the color of the top-left pixel of each block determines a fixed reference; the other pixels in the same block are described relative to this pixel (this helps reduce the dynamic range needed for the DCT function, and typically benefits the quality of the image, as often there is not a massive variation in colors between pixels in the same block). Rather than simply truncating and masking off the higher frequency components of the DCT matrix as we did above, however, in JPEG the frequency coefficients are scaled (individually) by a quantization matrix. This matrix scales (divides) each coefficient by a numeric term. The quantization matrix is pre-calculated and defined by the JPEG standard, and it naturally favors the items in the top left corner of the matrix (the more significant, lower-frequency terms) over those in the lower right. Each coefficient has a different weighting. The results of this division are then converted to integers (from floating point), causing further quantization. A typical output matrix of this process is shown above on the right. Notice that this, as we expect, biases the top left corner, where we know the most 'important' coefficients reside. The final step in JPEG compression is called 'entropy encoding', and this is the mechanism for how the coefficients of the final matrix are encapsulated. The technique used is a run-length encoding algorithm which losslessly compresses the matrix, because there is often redundancy in the form of multiple occurrences of repeated values. This works especially well because, rather than simply reading the values in a traditional grid format, the input stream to the compression zig-zags through the matrix, starting at the top left corner and ending in the lower right corner. As you can appreciate, this increases the chances that adjacent values will be of similar value (and often, as you can see in our example, it's typical for there to be many zeros at the end of the string, which compact very well indeed!)

Finally, let's end with a little bit of fun … It is possible to merge two images with different spatial frequencies. When this hybrid image is viewed from a close distance, the higher-frequency elements stand out, revealing these components of the image in high contrast. When the hybrid image is viewed from a far distance, these higher frequency subtleties are not discernible and the eye/brain smoothes and interpolates the lower frequency components. To make a hybrid image containing two images we use the digital equivalent of a high-pass and a low-pass filter. In the example below, I've taken the Lena image and passed it through DCT to convert it to the frequency domain.
(To make this a little clearer, for this example I've also converted the image to gray scale and used a larger block size.) Then I removed the high frequency coefficients from the matrix (a low-pass filter). Next I took another image (this time a self-portrait), converted this to gray scale and passed it through DCT, but this time only preserved the higher frequency coefficients of the frequency matrix (a high-pass filter). These two sparse matrices were then combined and passed into the inverse DCT function. Here is the hybrid result.

If you look at the hybrid image whilst sitting directly in front of your monitor/tablet you should clearly see the ghostly outline of my face and shirt (lucky you!). Now get up and walk to the other side of the room and look at the image again. This time you should see the Lena image, and only the Lena image – my face has gone. Spooky! You can also experience the same effect by squinting at the hybrid image. Squinting artificially reduces the size of your pupil, creating a pinhole camera effect and increasing the depth of field for your eyes.

Hidden Letters

This hybridization is not just academic. Have you ever been paranoid that someone is looking over your shoulder as you use the ATM, or watching you from the opposite side of the aisle as you type something sensitive on your laptop during a flight? Imagine if these devices used hybrid image technology! They could display a hybrid image created from a superposition of different sources. Not only would the eavesdropper not see what you are seeing, but you could make the "fake" image (the low-frequency domain image) display bogus information. Only someone viewing from the appropriate spatial distance would see the correct image (with care, more than two images could be combined by selecting the appropriate band-pass filters and combining the resultant matrices). To show an example of this, look at the hybrid image below of the Mona Lisa. Viewed close up, you should be able to make out the password. Now look at the screen from a few feet away. See it now? Need more convincing? The image below is the same image as the one to the left. All I've done, if you View Source for the page, is override the width of the image from the default of 512 pixels to 128 pixels. Your browser has scaled down the image using an appropriate algorithm, and has averaged out the pixels, reducing the higher frequency components. This is similar to what happens when you stand further away from the screen. Can't see it? Here's a version where the "Hidden Text" is even higher contrast. If you squint at this one, you should still be able to make the hidden text go away.
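The hybrid-image recipe just described is easy to express with a 2-D DCT. The sketch below is an added illustration of the idea, not the code used for the images in this post; it assumes two equal-sized grayscale arrays whose dimensions are multiples of the block size, and the block size and low/high cutoff are assumptions.

import numpy as np
from scipy.fft import dctn, idctn

def blockwise(img, block, fn):
    out = np.empty_like(img, dtype=float)
    for r in range(0, img.shape[0], block):
        for c in range(0, img.shape[1], block):
            out[r:r+block, c:c+block] = fn(img[r:r+block, c:c+block])
    return out

def hybrid(low_img, high_img, block=16, cutoff=4):
    mask = np.zeros((block, block))
    mask[:cutoff, :cutoff] = 1.0          # keep only the low-frequency corner
    def low_pass(b):
        return idctn(dctn(b, norm="ortho") * mask, norm="ortho")
    def high_pass(b):
        return idctn(dctn(b, norm="ortho") * (1.0 - mask), norm="ortho")
    return blockwise(low_img, block, low_pass) + blockwise(high_img, block, high_pass)

# usage (assuming two grayscale images of the same shape, values 0..255):
# combined = hybrid(lena_gray, portrait_gray)

Viewed up close, the result is dominated by the high-frequency image; from across the room (or scaled down), only the low-frequency image survives, which is exactly the effect described above.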
Rounding to three decimal places

Would 0.70999 be 0.7099 rounded? And would 0.10066 be 0.106, and 0.67463 be 0.676?

0.70999 will become 0.710 (or 0.71). In this case you look at the 4th digit after the decimal place; as it is greater than 4, the 3rd place increases. Also, as the 3rd place was a 9, it goes to 0 and the second place then needs to increase. Make sense?
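For the other two numbers in the question, the same rule gives 0.101 and 0.675 rather than the guesses above; a short loop makes this easy to check (an added illustration, not part of the original thread).

for value in (0.70999, 0.10066, 0.67463):
    print(value, "->", round(value, 3))   # 0.71, 0.101, 0.675

In each case the deciding digit is the 4th one after the decimal point: 9, 6 and 6 respectively, all of which round the third decimal place up.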
Inverse of a function with natural log

I am asked to find the inverse of $f(x)=\ln(x+\sqrt{x^2 +1})$. I understand the basics of how to find the inverse but I get stuck on solving this one. Here is what I have so far:

$y=\ln(x+\sqrt{x^2 +1})$
$e^y =x+ \sqrt{x^2+1}$

I'm not sure how to proceed from here. I thought maybe squaring both sides would be the next step but I can never come up with the correct answer. How do I solve this?

Re: Inverse of a function with natural log

\displaystyle \begin{align*} e^y &= x + \sqrt{x^2 + 1} \\ e^y - x &= \sqrt{x^2 + 1} \\ \left(e^y - x \right)^2 &= x^2 + 1 \\ e^{2y} - 2x\,e^y + x^2 &= x^2 + 1 \\ e^{2y} - 2x\,e^y &= 1 \\ e^{2y} - 1 &= 2x\,e^y \\ e^y - e^{-y} &= 2x \\ \frac{1}{2}\left( e^y - e^{-y} \right) &= x \end{align*}

Re: Inverse of a function with natural log

What is happening when you go from
$e^{2y} -1=2xe^y$
$e^y - e^{-y} =2x$ ?

Re: Inverse of a function with natural log

Divide both sides by $e^y$.

Re: Inverse of a function with natural log

Ahh right, that's what I thought but I just got confused for a second. Thanks, this helps a ton!

Re: Inverse of a function with natural log

I have another inverse function question that I am having trouble with: finding the inverse of $f(x)=\arctan\left(\frac{x-1}{x+1}\right)$. I don't even really know how to start solving this one.

Re: Inverse of a function with natural log

$x = \arctan\left(\frac{y-1}{y+1}\right)$
$\tan{x} = \frac{y-1}{y+1}$

solve for $y$ ... in future, start a new problem with a new thread.
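The last line of the worked solution says the inverse of $\ln(x+\sqrt{x^2+1})$ is $\frac{1}{2}(e^x - e^{-x})$, i.e. the hyperbolic sine. A quick numerical spot-check, added here as an illustration rather than part of the thread:

import math

f = lambda x: math.log(x + math.sqrt(x*x + 1))
f_inv = lambda y: 0.5 * (math.exp(y) - math.exp(-y))   # = sinh(y)

for x in (-2.0, 0.3, 5.0):
    print(x, f_inv(f(x)))   # each pair agrees, confirming the algebra

Equivalently, $\ln(x+\sqrt{x^2+1}) = \operatorname{arcsinh}(x)$, so its inverse is $\sinh$.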