Miami, OK ACT Tutor
Find a Miami, OK ACT Tutor

...After a few months in a 4th grade classroom, I'd found my niche. For the past 20 years I've been teaching 4th and 5th grades and tutoring/mentoring/coaching students through middle and high school in Providence, San Francisco, Miami and in an American school in Sao Paulo, Brazil (I speak fluent ...
61 Subjects: including ACT Math, English, reading, Spanish

...I have experience in tech support, virus removal, knowledge management, networking, and computer hardware repair. I have worked with Microsoft Windows at an Administrator level for over 8 years. I understand troubleshooting, OS install and removal, drive partitioning, multiple OS boot, Windows Services, Command Prompt, and Networking.
23 Subjects: including ACT Math, reading, English, biology

I began working as a tutor in high school as part of the Math Club, and then continued in college in a part-time position, where I helped students in College Algebra, Statistics, Calculus and Programming. After college I moved to Spain, where I gave private test prep lessons to high school students ...
11 Subjects: including ACT Math, physics, calculus, geometry

...In the past I have tutored students ranging from elementary school to college in a variety of topics including FCAT preparation, Biology, Anatomy, Math and Spanish. I enjoy teaching and helping others and always do my best to make sure the information is enjoyable and being presented effectively...
30 Subjects: including ACT Math, reading, calculus, biology

I have been teaching for 15 years. I have had experience with a wide range of ages from kindergarten to sixth grade. I am certified in Elementary Education, grades K-6 (all subjects) and am currently working on certification for Middle Grades Math, Grades 5-9.
12 Subjects: including ACT Math, reading, algebra 1, SAT math
{"url":"http://www.purplemath.com/Maimi_OK_ACT_tutors.php","timestamp":"2014-04-18T13:42:20Z","content_type":null,"content_length":"23682","record_id":"<urn:uuid:964029ce-0f12-4d84-a068-023cb75da63d>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00017-ip-10-147-4-33.ec2.internal.warc.gz"}
Billboards + Ortho Projection [Archive] - OpenGL Discussion and Help Forums

04-09-2003, 12:01 PM
I'm trying to billboard some quads in a viewer that uses an orthographic projection. I've been through the tutorial by Antonio at Lighthouse and have tried the Cylindrical Cheat as well as the True Cylindrical method, and I can't make either of them work. The cheat does keep the quads facing the camera, but their position is static and won't change as you rotate the scene. The true method rotates the quads, but their orientation doesn't change. Is my problem the ortho projection, or does it have to do with the way my ModelView matrix is being formed?
{"url":"http://www.opengl.org/discussion_boards/archive/index.php/t-135626.html","timestamp":"2014-04-20T11:05:11Z","content_type":null,"content_length":"6061","record_id":"<urn:uuid:65e82de8-f699-42c8-bc98-68a73e479fa1>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00140-ip-10-147-4-33.ec2.internal.warc.gz"}
Machine Learning in R: Clustering

Clustering is a very common technique in unsupervised machine learning for discovering groups of data points that are "close by" to each other. It is broadly used in customer segmentation and outlier detection. It is based on some notion of "distance" (the inverse of similarity) between data points, which is used to identify points that lie close to each other. In the following, we discuss some very basic clustering algorithms, using R for the examples.

K-Means

This is the most basic algorithm:
1. Pick an initial set of K centroids (randomly or by any other means).
2. Assign each data point to the closest centroid according to the given distance function.
3. Adjust each centroid's position to be the mean of all its assigned member data points. Go back to (2) until the membership no longer changes and the centroid positions are stable.
4. Output the centroids.

Notice that K-Means requires not only the distance function to be defined but also a mean function to be specified, and of course K (the number of centroids) must be given. K-Means is highly scalable, with complexity O(n * k * r) where r is the number of rounds, a constant that depends on the initial pick of centroids. Also notice that the result of each run is nondeterministic. The usual practice is to run K-Means multiple times and pick the result of the best run, i.e. the one that minimizes the average distance of each point to its assigned centroid.

Here is an example of doing K-Means in R:

> km <- kmeans(iris[,1:4], 3)
> plot(iris[,1], iris[,2], col=km$cluster)
> points(km$centers[,c(1,2)], col=1:3, pch=8, cex=2)
> table(km$cluster, iris$Species)
     setosa versicolor virginica

with the following visual output.

Hierarchical Clustering

In this approach, we compare all pairs of points/clusters and merge the pair with the closest distance.
1. Compute the distance between every pair of points/clusters.
The distance between pointA and pointB is just the distance function. The distance between pointA and clusterB can be defined in several ways (such as the min/max/avg distance between pointA and the points in clusterB). The distance between clusterA and clusterB can be computed by first computing the distances of all point pairs (one from clusterA, the other from clusterB) and then taking the min/max/avg of these pairs.
2. Combine the two closest points/clusters into one cluster. Go back to (1) until only one big cluster remains.

In hierarchical clustering the complexity is O(n^2), and the output is a tree of merge steps. It doesn't require us to specify K or a mean function. Because of its high complexity, hierarchical clustering is typically used when the number of points is not too high. Here is an example of doing hierarchical clustering in R:

> sampleiris <- iris[sample(1:150, 40),]
> distance <- dist(sampleiris[,-5], method="euclidean")
> cluster <- hclust(distance, method="average")
> plot(cluster, hang=-1, label=sampleiris$Species)

with the following visual output.

Fuzzy C-Means

Unlike K-Means, where each data point belongs to exactly one cluster, in fuzzy c-means each data point has a fractional membership in every cluster. The goal is to figure out the membership fractions that minimize the expected distance to each centroid. The algorithm is very similar to K-Means, except that a matrix (one row per data point, one column per centroid, each cell holding the degree of membership) is maintained:
1. Initialize the membership matrix U.
2. Repeat steps (3) and (4) until convergence.
3. Compute the location of each centroid as the weighted average of its member data points' locations.
4. Update each cell of the membership matrix from the distances between its data point and the new centroids.

Notice that the parameter m is the degree of fuzziness. The output is the matrix, with each data point assigned a degree of membership to each centroid.
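The fuzzy c-means loop just described can also be sketched in plain Python. This is a minimal illustration with assumed details (2-D points, Euclidean distance, random initialization of U, and a fixed round count instead of a convergence test); the function name is hypothetical and not from this post:

```python
def fuzzy_cmeans(points, k, m=2.0, rounds=200):
    """points: list of (x, y) tuples; k: number of centroids; m: fuzziness."""
    import random
    random.seed(0)
    # Step 1: initialize the membership matrix U with random rows summing to 1.
    u = []
    for _ in points:
        row = [random.random() for _ in range(k)]
        s = sum(row)
        u.append([v / s for v in row])
    centers = [(0.0, 0.0)] * k
    for _ in range(rounds):
        # Step 3: each centroid is the mean of all points weighted by membership^m.
        for j in range(k):
            w = [u[i][j] ** m for i in range(len(points))]
            total = sum(w)
            cx = sum(wi * p[0] for wi, p in zip(w, points)) / total
            cy = sum(wi * p[1] for wi, p in zip(w, points)) / total
            centers[j] = (cx, cy)
        # Step 4: update memberships from the inverse distance ratios.
        for i, p in enumerate(points):
            d = [((p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2) ** 0.5 for c in centers]
            if min(d) == 0.0:  # point sits exactly on a centroid
                u[i] = [1.0 if dj == 0.0 else 0.0 for dj in d]
                continue
            for j in range(k):
                u[i][j] = 1.0 / sum((d[j] / dl) ** (2.0 / (m - 1.0)) for dl in d)
    return centers, u
```

For two well-separated blobs of points, the two returned centers settle near the blob means, and every membership row sums to 1 by construction.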
Here is an example of doing fuzzy c-means in R:

> library(e1071)
> result <- cmeans(iris[,-5], 3, 100, m=2, method="cmeans")
> plot(iris[,1], iris[,2], col=result$cluster)
> points(result$centers[,c(1,2)], col=1:3, pch=8, cex=2)
> result$membership[1:3,]
[1,] 0.001072018 0.002304389 0.9966236
[2,] 0.007498458 0.016651044 0.9758505
[3,] 0.006414909 0.013760502 0.9798246
> table(iris$Species, result$cluster)
setosa          0  0 50
versicolor      3 47  0
virginica      37 13  0

with the following visual output (very similar to K-Means).

Multi-Gaussian with Expectation-Maximization

Generally in machine learning, we want to learn a set of parameters that maximize the likelihood of observing our training data. But what if there are hidden variables in our data that we haven't observed? Expectation-Maximization is a very common technique that uses the current parameters to estimate the probability distribution of those hidden variables, computes the expected likelihood, and then finds the parameters that maximize this expected likelihood.

Here, we assume the underlying data distribution is based on K centroids, each a multivariate Gaussian distribution. The order of complexity is similar to K-Means, with a larger constant. It also requires K to be specified. Unlike K-Means, whose clusters are always circular in shape, multi-Gaussian can discover clusters with elliptical shapes in different orientations, and hence it is more general than K-Means. Here is an example of doing multi-Gaussian with EM in R:

> library(mclust)
> mc <- Mclust(iris[,1:4], 3)
> plot(mc, data=iris[,1:4], what=c('classification'), dimens=c(3,4))
> table(iris$Species, mc$classification)
setosa         50  0  0
versicolor      0 45  5
virginica       0  0 50

with the following visual output.

Density-based Clustering

In density-based clustering, a cluster extends along the density distribution.
Two parameters are important: "eps" defines the radius of the neighborhood of each point, and "minpts" is the minimum number of neighbors within the "eps" radius. The basic algorithm, called DBSCAN, proceeds as follows:
1. First scan: for each point, compute the distance to all other points. Increment a neighbor count for every distance smaller than "eps".
2. Second scan: for each point, mark it as a core point if its neighbor count is greater than "minpts".
3. Third scan: for each core point, if it is not already assigned to a cluster, create a new cluster and assign it to this core point as well as to all of its neighbors within the "eps" radius.

Unlike the other methods, density-based clustering can leave some points as outliers (data points that don't belong to any cluster). On the other hand, it can detect clusters of arbitrary shape (they don't have to be circular at all). Here is an example of doing DBSCAN in R:

> library(fpc)
> # eps is radius of neighborhood, MinPts is no of neighbors within eps
> cluster <- dbscan(sampleiris[,-5], eps=0.6, MinPts=4)
> plot(cluster, sampleiris)
> plot(cluster, sampleiris[,c(1,4)])
> # Notice points in cluster 0 are unassigned outliers
> table(cluster$cluster, sampleiris$Species)
     setosa versicolor virginica

with the following visual output ... (notice that the black points are outliers, the triangles are core points, and the circles are boundary points).

Although this has covered a couple of ways of finding clusters, it is not an exhaustive list; here I have tried to illustrate the basic ideas, using R as an example. For really large data sets, we may need to run the clustering algorithm in parallel. See my earlier blog about how to do K-Means using Map/Reduce, as well as Canopy clustering.

2 comments:

Satpreet said...
Excellent article!

Matthew Orlinski said...
Thanks for this. I've followed it and found it really interesting. Just a quick comment.
In the Multi-Gaussian with Expectation-Maximization example, the plot causes the error "formal argument 'data' matched by multiple actual arguments". To solve it you can change the plot command to plot(mc, what=c('classification'), dimens=c(3,4)).
{"url":"http://horicky.blogspot.com/2012/04/machine-learning-in-r-clustering.html","timestamp":"2014-04-18T13:33:36Z","content_type":null,"content_length":"107120","record_id":"<urn:uuid:1d85c021-cad2-469f-af1a-d053d41f91ef>","cc-path":"CC-MAIN-2014-15/segments/1397609533689.29/warc/CC-MAIN-20140416005213-00592-ip-10-147-4-33.ec2.internal.warc.gz"}
Division Theorem [proof by induction]

First let me say that this is not technically the Division Theorem that I will be proving. Our book calls it the Euclidean Algorithm, but this is clearly not true; it is closer to the Division Theorem, imo. Anyway, my book wants us to prove the following proposition. Note: here the natural numbers will include 0. Also, a positive number is a natural number that is not equal to 0.

Proposition. Let n be a natural number, and let q be a positive number. Then there exist natural numbers m, r such that [itex]0 \leq r < q[/itex] and [itex]n = mq + r[/itex].

Proof: (Holding q fixed, and using induction on n)

Base Case (n = 0)
If n = 0, then we can take m = 0 and r = 0, and we will have [itex]0 \leq r < q[/itex] since [itex]0 \leq 0 < q[/itex], and we will also have [itex]n = 0 = 0q + 0[/itex]. So the proposition is true when n = 0.

Inductive Step
Assume that for some natural number n and fixed positive number q, there exist natural numbers m, r such that [itex]0 \leq r < q[/itex] and [itex]n = mq + r[/itex]. Then we want to show that the same properties hold for n + 1. We have [itex]0 \leq r < q[/itex], and I will break this up into two cases.

Case 1: ([itex]0 \leq r < q - 1[/itex])
n = mq + r => n + 1 = mq + r + 1, but [itex]0 \leq r < q - 1[/itex] so [itex]0 + 1 \leq r + 1 < q[/itex]; thus we can say that [itex]0 \leq r + 1 < q[/itex], and we have n + 1 = mq + (r + 1) as required.

Case 2: ([itex]r = q - 1[/itex])
Again n = mq + r by our inductive hypothesis, so n + 1 = mq + r + 1 = mq + (q - 1) + 1 = mq + q = (m + 1)q = (m + 1)q + 0. Now m + 1 is a natural number, and our remainder is 0, with [itex]0 \leq 0 < q[/itex] as required.

Thus by induction we have proved this proposition. QED.

Is this proof sufficient? I think it is, but there are more pieces involved here than in most of the other induction proofs I have done, so I just want to make sure I did not screw anything up.
Also, any ideas about other ways to prove this proposition using induction?

edit... To be a little more specific about what I am worried about in the proof: in my proof I am using [itex]0 \leq r < q[/itex] because it is true by our inductive hypothesis. This is correct and OK to do, right?
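For what it's worth, the two cases of the inductive step translate directly into a recursive computation, which makes a nice sanity check of the proof. This is just an illustration; the function name is mine, not from the book:

```python
def divide(n, q):
    """Return (m, r) with n == m*q + r and 0 <= r < q, for naturals n and q > 0."""
    if n == 0:                 # base case: 0 = 0*q + 0
        return 0, 0
    m, r = divide(n - 1, q)    # inductive hypothesis for n - 1
    if r < q - 1:              # Case 1: bump the remainder
        return m, r + 1
    return m + 1, 0            # Case 2: r = q - 1, so roll over to the quotient
```

It agrees with Python's built-in divmod on every pair of small naturals you try.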
{"url":"http://www.physicsforums.com/showthread.php?t=130250","timestamp":"2014-04-17T07:27:13Z","content_type":null,"content_length":"27874","record_id":"<urn:uuid:bbd9da0b-c69e-4270-b702-5fa66fc47d1a>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00579-ip-10-147-4-33.ec2.internal.warc.gz"}
Iron bar bending force

1. The problem statement, all variables and given/known data
Given an iron bar of round cross-section, fixed at its extremities (the bar measures 152 mm in length and 4.8 mm in diameter), a weight of 88.5 kg is hung from its middle, bending it at least 90 degrees. How much weight/force would be needed to bend another bar of the same material and size, but of hexagonal or square cross-section?

2. Relevant equations
Torque = Force * R Sin(90°) * R

3. The attempt at a solution
I haven't got any idea. There may be something lacking in the text, or more probably some calculus skills assumed as known and hence omitted. I hope you can clear up this question; thank you in advance.
{"url":"http://www.physicsforums.com/showthread.php?t=240215","timestamp":"2014-04-20T14:12:47Z","content_type":null,"content_length":"22905","record_id":"<urn:uuid:eab979b0-9136-4f42-8fda-403f3597a6c3>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00484-ip-10-147-4-33.ec2.internal.warc.gz"}
129 coins
October 30th 2009, 11:24 AM #1
Apr 2009

129 coins

There is a pile of 129 coins on a table, all unbiased except for one which has heads on both sides. David chooses a coin at random and tosses it eight times. The coin comes up heads every time. What is the probability that it will come up heads the ninth time as well?

The first step in this problem is going to be figuring out the probability that David has the rigged coin, given that he has flipped 8 consecutive heads. As a general rule, if you are dealing with conditional probabilities that look difficult, you should ask yourself "would this be easier if I could flip the conditional around?" If the answer to that question is "yes", then the most straightforward way of tackling the problem is usually Bayes' Rule. Let "A" mean the coin is rigged, and "B" that all 8 flips are heads. Then, using Bayes' Rule:

$P(A|B) = \frac{P(B|A)*P(A)}{P(B|A)*P(A) + P(B|\mbox{Not }A) * P(\mbox{Not }A)}$

Obviously P(B|A) = 1, since if the coin is rigged, we always flip 8 heads. P(A) = 1/129, since there is only 1 rigged coin among the 129. P(B|Not A) = 1/(2^8), since this is the fair probability of flipping a coin 8 times and getting a head each time. And P(Not A) = 128/129, since there are 128 fair coins out of 129 coins in total. If you grind out that probability you should get 2/3.

Now that we have the probability that David has the unfair coin, it is straightforward to calculate the chances of another head. 2/3rds of the time, he is for sure flipping a head. 1/3rd of the time, he has a 1/2 chance of flipping a head. So, the probability of another head is (1)(2/3) + (1/2)(1/3) = 5/6. [Assuming I didn't make any stupid mistakes]
Last edited by theodds; October 31st 2009 at 07:25 AM.

Hi, what I did was:
P(Biased and 9 heads) = 1/129
P(Fair coin and 9 heads) = 128/129 * (1/2)^9 = 1/516
So adding the two, it is 5/516.
Any comments on why my method is wrong?
Also, I haven't done Bayes' Theorem before, but I just tried it with the normal conditional probability rule P(A|B) = P(A n B) / P(B) and I got the same answer as you.
Last edited by Aquafina; October 30th 2009 at 11:10 PM.

The fact that 8 heads have already occurred influences your opinion about which coin David has. Given that he has flipped 8 consecutive heads, it turns out that he has the unfair coin 2/3rds of the time. Another issue is that your method finds the probability of flipping 9 consecutive heads, not the probability of flipping a 9th head given that 8 have already been flipped.
You clearly can't have an answer of less than 1/2 for this problem. You can use the definition of conditional probability (which is what you did) and get the right answer. Bayes' Rule is essentially the same thing, and I have a feeling that if you used the definition you probably ended up using Bayes' Rule without knowing it.

EDIT: There was a typo in my Bayes formula above (now fixed), but the answer should still be right.
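As a quick numerical cross-check of the answers in this thread, exact rational arithmetic with Python's fractions module reproduces both the 2/3 posterior and the 5/6 final answer:

```python
from fractions import Fraction

p_rigged = Fraction(1, 129)        # P(A): one two-headed coin out of 129
p_fair = 1 - p_rigged              # P(Not A)
p_8h_fair = Fraction(1, 2) ** 8    # P(B | Not A): 8 heads with a fair coin

# Bayes' Rule: P(rigged | 8 heads); P(B | A) = 1 for the two-headed coin
posterior = (1 * p_rigged) / (1 * p_rigged + p_8h_fair * p_fair)

# P(9th head | 8 heads): a certain head if rigged, 1/2 otherwise
p_next_head = posterior * 1 + (1 - posterior) * Fraction(1, 2)

print(posterior, p_next_head)  # prints: 2/3 5/6
```

Exact fractions avoid any rounding worries in the "grind out that probability" step.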
{"url":"http://mathhelpforum.com/statistics/111388-129-coins.html","timestamp":"2014-04-18T22:15:33Z","content_type":null,"content_length":"40383","record_id":"<urn:uuid:a81429e3-d329-4364-9a41-14754ed705b0>","cc-path":"CC-MAIN-2014-15/segments/1397609539230.18/warc/CC-MAIN-20140416005219-00400-ip-10-147-4-33.ec2.internal.warc.gz"}
Princeton Alumni Weekly: Features
Web Exclusives: Features
a PAW web exclusive column

January 24, 2001: The Sound of Math
Turning a mathematical theorem and proof into a musical

How do you make a musical about a bunch of dead mathematicians and one very alive, very famous, Princeton math professor? Andrew Wiles, the Eugene Higgins professor of math, gained worldwide fame for his 1993 solution to Fermat's last theorem, which dates to 1637. The theorem states that for the equation x^n + y^n = z^n there are no positive whole numbers that solve this when n is greater than 2. Jump three hundred years to the 1980s and '90s: Wiles worked on proving the theorem exclusively and secretly for seven years, and startled the world of mathematics by publishing his proof, which was accepted, after a modification, in 1995.

Which leads us back to the original question: How do you make a musical about math? In the musical The Sound of Music, the nuns grappled with the troublesome novice Maria - who talks and sings too much - and in exasperation sang a song whose first line was "How do you solve a problem like Maria?" Would a mathematical musical follow the same idea?

"How do you solve a problem like Fermat's? Why do you know a proof and not jot it down? How do you sing about the dead guy Fermat? A mathematician! An elusive Frenchman. A clown?"

"How do you write a musical about math? When do you sing and when do you dance around? What can you say about the men of math, Who're bright, but very dead and not around?"

Those are the kinds of silly lyrics that came to mind when news of a musical based on Andrew Wiles and Pierre de Fermat was published last year. Happily, composer Joshua Rosenblum and his lyricist, Joanne Sydney Lessner (who is also his wife), grappled with the problem and solved it magnificently, with a story and songs much more inspired than the verses penned above (which they did not write).
The show, Fermat's Last Tango, now on stage at the York Theatre Company in New York City brings to vivid life not only Pierre de Fermat (outfitted in courtly clothes and gold-and-red shoes), Andrew Wiles (portrayed as Daniel Keane in corduroy pants, tweedy jacket, and horn-rimmed glasses), but also the mathematicians Pythagoras, Euclid (who throughout the show measures the angles her own body makes), Sir Isaac Newton, and Carl Friedrich Gauss. Many of the songs are funny, and one of the most brilliant bits of invention is the place the mathematicians go after death: the Aftermath. Andrew Wiles and his wife, Nada Canaan Wiles '83 *88, took their children to see the show in December, and PAW asked him how he liked it. Below is his response. I went with my whole family, and yes, we really liked the show. My six-year-old was quite captivated by Fermat (or Theorem as she called him) and kept trying to tell me something during the performance. I couldn't hear what she was saying but afterwards she told me that she had wanted to tell me that he was lying and that he didn't have a proof! I had not communicated with the composers so the first I knew about it being on was when some local publications (including the Prince) started calling to ask me about it. I thought that a musical on such a theme would be impossible but we all thought it was very cleverly done. I think that putting Fermat in the role of tormentor was very inspired and really was the key to the success of the show - both as he personified the struggle that research in mathematics involves and also as he gave vent to some of the all too human characteristics of real life personalities in math. I think that it did especially capture the feeling that one sometimes has when one is doing mathematics that obstacles have been put there deliberately to taunt you, but also the feeling of wonder at the beauty and simplicity of it all when one finally sees the light. 
I thought the Aftermath very clever, as well as the use of the mathematicians to carry the storyline. But we also really liked the portrayal of the personal part of the story - the whole idea of the threesome at the tango was beautifully done. We came away feeling that it had been very intelligently written (not something I could say for every musical!) and that the cast really seemed to have caught the spirit of the story.

York Theatre, 619 Lexington Ave., N.Y., N.Y. 10022; 212-239-6200 (Tele-charge).
{"url":"http://www.princeton.edu/~paw/web_exclusives/features/features_14.html","timestamp":"2014-04-20T17:07:17Z","content_type":null,"content_length":"11819","record_id":"<urn:uuid:85006447-53f1-4b93-9209-0d0d9ff4a489>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00177-ip-10-147-4-33.ec2.internal.warc.gz"}
Journal of the South African Institution of Civil Engineering
Print version ISSN 1021-2019
J. S. Afr. Inst. Civ. Eng. vol.52 no.2 Midrand Oct. 2010

TECHNICAL PAPER

Numerical modelling for the evaluation of progressive damage to plain concrete structures
S A Sadrnejad

Among the various numerical simulation models of plain concrete, the micro-plane models perform exceptionally well. They are not as complicated as microscopic models, such as discrete particle models, and do not have the shortcomings of macroscopic models based on stress or strain invariants. The constitutive equations for the mechanical behaviour of concrete, capable of predicting damage effects or crack growth under loading, unloading or reloading, were developed in a micro-plane framework. The proposed damage formulation is based on combinations of five fundamental types of stress/strain, which may occur on any of the micro-planes. Model verification was done using different loading, unloading and reloading stress/strain paths. The proposed model is capable of yielding a pre-failure/post-failure history of stress/strain on different predefined sampling planes through the material. This micro-plane damage model of plain concrete was implemented in a 3D finite element code to show its abilities in crack/damage analysis and prediction of failure mechanisms as compared with plain concrete tests. The proposed code is able not only to predict the crack path, but also to determine which combination of loading conditions occurs on damaged micro-planes. The validity of the proposed model is investigated through a few test cases and a double curvature arch concrete dam.

Keywords: crack type, micro-planes, progressive failure, plain concrete structures

Surveys of the continuum macroscopic constitutive models of concrete show that micro-plane-based models perform exceptionally well.
In these models, instead of presenting the constitutive relations in terms of stress and strain tensors, the stress and strain vectors, which are the projections of the stress or strain tensors, are used. This method not only provides a more physical conceptual basis, but also leads to a simpler mathematical formulation. On the other hand, invariant-based continuum macroscopic models lose some important features of material behaviour because they are basically not able to capture and store the material properties in the different directions around a material point, whereas micro-plane models inherently include the directional characteristics of a material point. Many of the available 3D commercial analysis codes are not equipped with a proper damage material model for concrete, and so are not able to perform precise crack/damage analysis, analysis of crack orientations, or analysis of the progressive failure of concrete structures. The basic idea, namely that the constitutive material behaviour as a relation between the strain and stress tensors can be "assembled" from the behaviour of the material in planes with different orientations within the material (such as slip planes, micro-planes, particle contacts, etc), can be traced back to the failure envelopes of Mohr (1900) and the "slip theory of plasticity" of Taylor (1938), who was the first to implement this theory for modelling the behaviour of polycrystalline metals. Taylor's idea was formulated in detail by Batdorf and Budiansky (1949). This theory was soon recognised as a realistic constitutive model for plastic-hardening metals. The idea was used in arguments about the physical origin of strain hardening, and was shown to allow easy modelling of anisotropy, as well as of the vertex effects for loading increments to the side of a radial path in stress space.
All the formulations considered that only inelastic shear strains ("slips"), with no inelastic normal strains, took place on what are now called the "micro-planes". The theory was also adapted to anisotropic rocks and soils under the name "multi-laminate model" (Zienkiewicz & Pande 1977; Pande & Sharma 1981; Sadrnejad & Pande 1989; Sadrnejad 1992). The static constraint formulation was used extensively and referred to as "slip theory" for metals or "multi-laminate theory" for anisotropic rocks until the first application of this theory by Bažant and Gambarova in 1984 to continuum damage mechanics and cohesive-frictional materials; they referred to it as "micro-plane" theory. The object of this study is the application of a proposed micro-plane, damage-based model of plain concrete (Labibzadeh & Sadrnejad 2008) in a 3D finite element code to show its abilities in crack/damage analysis and in the analysis of the progressive failure of concrete structures such as concrete double curvature arch dams. The proposed code is not only able to predict the crack line, but also allows the determination of which combination of loading conditions occurs on damaged micro-planes. The validity of the proposed code is investigated by means of a few tests and examples of concrete structures.

The orientation of a micro-plane is characterised by the unit normal n of components n[i] (indices i and j refer to the components in Cartesian co-ordinates x[i]). In the formulation with a kinematic constraint, which makes it possible to describe the softening behaviour of plain concrete in a stable manner, the strain vector (Figure 1) is the projection of the macroscopic strain tensor ε[ij]. So the components of this vector are ε[Ni] = ε[ij]n[j], and the normal strain on the micro-plane is ε[N] = n[i]n[j]ε[ij], where repeated indices imply summation over i = 1, 2, 3.
The mean normal strain, called the volumetric strain ε[V], and the deviatoric strain ε[D] on the micro-plane can also be introduced, defined as ε[V] = ε[kk]/3 and ε[D] = ε[N] - ε[V]. This separation of ε[V] and ε[D] is useful when the effect of the hydrostatic pressure on a number of cohesive-frictional materials, such as concrete, needs to be captured. To characterise the shear strains on the micro-plane (Figure 1), we need to define two co-ordinate directions, M and L, given by two orthogonal unit co-ordinate vectors m and l of components m[i] and l[i] lying in the micro-plane. To minimise the directional bias of m and l among the micro-planes, one of the unit vectors m and l tangential to the plane is taken to be horizontal (parallel to the x-y plane). The magnitudes of the shear strain components on the micro-plane in the directions of m and l are ε[M] = m[i](ε[ij]n[j]) and ε[L] = l[i](ε[ij]n[j]). Because of the symmetry of the tensor ε[ij], the shear strain components may be written as ε[M] = M[ij]ε[ij] and ε[L] = L[ij]ε[ij] (e.g. Bažant et al 1996), in which the following symmetry tensors were introduced: M[ij] = (m[i]n[j] + m[j]n[i])/2 and L[ij] = (l[i]n[j] + l[j]n[i])/2. Once the strain components on each micro-plane have been obtained, the stress components are updated through the micro-plane constitutive laws, which can be expressed in algebraic or differential form. In the kinematic constraint micro-plane models, the stress components on the micro-planes are equal to the projections of the macroscopic stress tensor σ[ij] only in some particular cases, when the micro-plane constitutive laws are specifically prescribed in a manner such that this condition can be satisfied. This happens, for example, in the case of elastic laws at the micro-plane level, defined with elastic constants chosen so that the overall macroscopic behaviour is the usual elastic behaviour (Carol & Bažant 1997; see also Carol et al 1992, 2001).
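To make the projection formulas above concrete, here is a small numerical sketch (plain Python with a hypothetical function name, not the paper's code): given a macroscopic strain tensor ε[ij] and an orthonormal triad n, m, l for one micro-plane, it evaluates ε[N] = n[i]n[j]ε[ij], ε[M] = m[i]ε[ij]n[j] and ε[L] = l[i]ε[ij]n[j].

```python
def microplane_strains(eps, n, m, l):
    """eps: 3x3 strain tensor; n: unit normal; m, l: in-plane unit vectors."""
    # strain vector on the plane: (eps . n)_i = eps_ij n_j
    ev = [sum(eps[i][j] * n[j] for j in range(3)) for i in range(3)]
    eps_N = sum(n[i] * ev[i] for i in range(3))  # eps_N = n_i eps_ij n_j
    eps_M = sum(m[i] * ev[i] for i in range(3))  # eps_M = m_i eps_ij n_j
    eps_L = sum(l[i] * ev[i] for i in range(3))  # eps_L = l_i eps_ij n_j
    return eps_N, eps_M, eps_L
```

For a uniaxial strain ε[11] with n along x1, this returns ε[N] equal to that strain and zero shear components, as expected under the kinematic constraint.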
In general, the stress components determined independently on the various micro-planes will not be related to one another in such a manner that they can be considered as projections of a macroscopic stress tensor. Thus the static equivalence, or equilibrium, between the micro-level stress components and the macro-level stress tensor must be enforced by other means. This can be accomplished by application of the principle of virtual work, yielding Eq (5), where Ω is the surface of a unit hemisphere, σ[V] and σ[D] are the volumetric and deviatoric parts of the normal stress component, and σ[L] and σ[M] are the shear stress components on the micro-planes. Eq (5) is based on the equality of the virtual work inside a unit sphere and on its surface, rigorously justified by Bažant et al (1996). The integration in Eq (5) is performed numerically by Gaussian integration, using a finite number of integration points on the surface of the hemisphere. Such an integration technique corresponds to considering a finite number of micro-planes, one for each integration point. An approximate formula consisting of 26 integration points is proposed in this study. The direction cosines and weights of the integration points are listed in Table 1, and their positions on the surface of the unit sphere are shown in Figure 2. This numerical integration technique for the evaluation of the integral statement in Eq (5) yields a weighted sum over the integration points, in which N[m] is the number of integration points on the hemisphere. Based on this formulation, the macroscopic constitutive matrix of the proposed model is obtained, in which E and ν denote the elastic modulus and Poisson's ratio. The deviatoric part of the constitutive matrix is computed from the superposition of its counterparts on the micro-planes. These counterparts are in turn calculated according to the types of damage that occur on each plane, depending on its specific loading conditions. This damage is evaluated on the basis of five separate damage functions.
These five loading conditions are: (I) hydrostatic compression, (II) hydrostatic extension, (III) pure shear, (IV) shear + compression, and (V) shear + extension. A specific damage function is assigned to each condition according to the authoritative laboratory test results available in the literature. Then, for each state of plane loading, one of the five introduced damage functions is computed with respect to the history of the micro-stress/strain components.

The five damage functions are given in Eqs (9) to (13). Parameters a to k in these relations are computed according to the laboratory test results obtained for each specific concrete. In Eq (9), ε[eq] is an average strain, and in the other relations it is the magnitude of the projected deviatoric strain vector on each micro-plane. To obtain the parameters a to k, uniaxial compression tests, in which the planes experience shear-compression, are used; parameters d, e, f, g, h, i, j and k are therefore found by numerical trial and error. Adjusting the stress path in the triaxial test to produce a plane under pure tension or pure shear leads to the determination of a, b and c.

In this formulation, for simplicity, we consider just two basic material parameters, namely the elastic modulus and Poisson's ratio. Accordingly, a simple isotropic linear elasticity governs the intact material, and all non-linearity, including damage effects, is accounted for through the damage functions on the sampling planes. Figure 3 shows the computational sequence used in the proposed model.

To demonstrate the validity of the proposed model, correlation studies were done between the model's analytical results and experimental evidence from tests on the stress-strain response of concrete specimens under different loading conditions. The tests used were the uniaxial compression (UC) test (Figures 4 and 5), the conventional (cyclic) triaxial compression (CTC) test and the uniaxial tension (UT) test. The proposed model should be able to show the activity of each predefined micro-plane up to and even after failure.
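The branch logic that assigns a plane to one of the five conditions can be sketched as below. The tolerance and the use of the shear magnitude sqrt(ε[M]² + ε[L]²) are my assumptions for illustration; the paper selects among its fitted damage functions in the same spirit:

```python
def load_condition(eps_N, gamma, tol=1e-12):
    """Classify the strain state on one micro-plane into the model's
    five loading conditions (numbering as in the paper):
      I   normal compression, no shear    IV  shear + compression
      II  normal extension,   no shear    V   shear + extension
      III pure shear
    eps_N is the normal strain on the plane; gamma is the magnitude of
    the projected shear-strain vector, sqrt(eps_M**2 + eps_L**2)."""
    if gamma < tol:
        return "I" if eps_N < 0.0 else "II"
    if abs(eps_N) < tol:
        return "III"
    return "IV" if eps_N < 0.0 else "V"
```

With this rule, a plane carrying shear on top of normal compression falls into condition IV, while the same shear on top of normal extension falls into condition V.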
This specific novel aspect has been evaluated for each test. The material parameters used in the analyses below are E = 25 000 MPa and v = 0,20.

Uniaxial compression (UC) testing

In Figure 5, the volumetric changes of the concrete specimen under uniaxial compressive loading are shown to compare well with the experimental results of Kupfer and co-workers (1969). Figure 6 compares the variation in the micro-stress components on the different micro-planes. As the figure shows, during the application of the uniaxial compressive load, micro-plane number 11 (see Figure 2) is in compressive stress, whereas micro-planes numbers 9, 10, 12 and 13, which are orientated normal to the load direction on the unit sphere, experience only tensile stress. Compressive stress accompanied by shear strains affects the remaining planes. It is noted that during increments of the uniaxial compressive load, the compressive and shear stress components acting on micro-planes numbers 1 to 8 increase simultaneously, with a greater rise of shear stress at first; this relation changes, however, near the peak stress.

Figure 7 shows the growth of the damage function values on the different micro-planes during the uniaxial compression test of concrete, obtained with the proposed model. As can be observed from this figure, damage evolves faster on micro-planes numbers 9, 10, 12 and 13 of the unit sphere than on the other planes. This is because different modes of loading act on those planes. In the uniaxial compression test done with the proposed model, the axial compressive load is applied along the x-axis, which is normal to micro-plane number 11 (Figure 2). On this plane there is therefore only a normal compressive load (mode I), which means that no damage can occur on it. On micro-planes numbers 5, 6, 7 and 8 there is a shear stress combined with the normal compressive load (mode IV); this causes less damage than on micro-planes numbers 1, 2, 3 and 4, which carry a similar combination of loading (mode III).
This is because on micro-planes numbers 1, 2, 3 and 4 the magnitude of the compressive stress component is less than that on micro-planes numbers 5 to 8 (see Figure 6), so damage grows faster. Finally, on micro-planes numbers 9, 10, 12 and 13 there is only normal tension (mode II), which causes damage to grow faster than on all the other planes. If the points on the top and bottom loading plates are assumed not to be horizontally constrained, the priority of micro-plane activity in the above test is reversed. This means that damage on micro-planes numbers 1, 2, 3 and 4 will be greater and cracks will be initiated first on these micro-planes. This phenomenon is in agreement with the model's predictions and is depicted in Figure 8.

Conventional triaxial compression (CTC) testing

In this test, the hydrostatic pressure is first applied to the specimen up to a certain level, and then the axial compression is increased while the lateral or confining pressure is held constant. Therefore, up to a certain level of hydrostatic pressure, there should be no shear forces on the micro-planes. This can be seen in Figure 9, which shows the evolution of the micro-stress components on the different micro-planes during the CTC test. Finally, the effect of lateral confining pressures on the compressive cylindrical strength of concrete specimens simulated by the proposed model is compared with the experimental data of Ansari and Li (1998) in Figure 10.

Uniaxial tension (UT) testing

The stress-strain response of the concrete cylindrical specimen under axial tension load is depicted in Figure 11.

Cyclic triaxial compression (CTC) testing

Generally, most damage models fail to reproduce the irreversible strains and the slopes of the curves in the unloading and reloading regions. To overcome this problem, plasticity and damage models are often combined. In the proposed model, in the case of unloading or reloading, any gap opened on a plane starts closing, and a new history of damage on the plane starts with regard to the existing stress/strain on the plane.
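The unloading/reloading rule just described can be sketched with a scalar damage update on one plane. The linear ramp below is only a placeholder for the paper's fitted damage functions (parameters a to k); the clamping to [0, 1] and the monotonic max() are the features that reflect the text:

```python
def update_damage(omega_prev, strain_measure, eps_0, eps_crit):
    """One-plane damage update: omega ramps from 0 (no crack initiation)
    to 1 (crack opening beyond the critical value) and, crucially, never
    decreases, so unloading/reloading retraces a degraded stiffness.
    The linear ramp between eps_0 and eps_crit is illustrative only."""
    if strain_measure <= eps_0:
        omega = 0.0
    elif strain_measure >= eps_crit:
        omega = 1.0
    else:
        omega = (strain_measure - eps_0) / (eps_crit - eps_0)
    return max(omega_prev, omega)    # irreversibility of damage

def plane_stress(E, eps, omega):
    """Stress on a plane through the degraded secant stiffness."""
    return (1.0 - omega) * E * eps
```

Because omega never decreases, unloading and reloading follow the degraded secant stiffness (1 − ω)E rather than the virgin elastic slope.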
This is similar to elastoplastic behaviour, although the intact material remains elastic and the damage functions account for and adjust the residual plane strength. In Figure 12, the predicted response of the model under the cyclic compression test is compared with the experimental results of Sinha et al (1964). In addition, Figure 13 shows the behaviour of concrete simulated by the proposed model under complete cyclic triaxial loading.

Finite element code

Three-dimensional (3D) finite element software was developed, with the proposed damage model for concrete serving as the non-linear material model. This software can perform 3D crack/damage analysis of concrete structures under static/dynamic and monotonic/cyclic loadings. The operating sequence of the proposed code for non-linear static analysis is illustrated in Figure 14.

Three-point bending test

Figure 15 shows the geometry and finite element meshes for this test. The assumed material parameters include the tensile strength of the concrete and an elastic modulus of E = 27 500 MPa. For the modelling, 428 3D eight-noded brick elements and 958 nodes are used. The path of crack growth simulated with the proposed model is illustrated in Figure 16. As can be seen from this figure, the damaged region of the specimen coincides with experimental observations. Figure 17 compares the load-displacement curve obtained with the proposed model with that found by Bažant and Luzio (2004). Clearly, there is reasonable agreement between the obtained and previously reported results. This example shows the mode I fracture of concrete under 3D conditions. Furthermore, because of the specific formulation of damage on micro-planes, the suggested model is able to predict the direction of the crack line at each integration point. This can be observed in Figure 18, in which the damaged micro-planes at integration point number 681 are illustrated (orange planes).
On these planes there is only tensile strain (load condition II), and mode I fracture has occurred on them. A comparison of the damage values obtained with the proposed code on the various micro-planes at integration point 681 is shown in Figure 19. Clearly, in the case of non-symmetrical conditions, different stress/strain paths appear on the sampling planes, and the arrangement of the active planes leads to a failure configuration different from that presented in the next section.

Anti-symmetrical four-point shear test

To show that the model is also able to predict a crack with a curved path, a notched specimen subjected to anti-symmetrical four-point shear loading is considered. The thickness of the beam is 38 mm. A mesh of 428 3D brick elements is used (Figure 20). The material parameters are E = 27 500 MPa and v = 0,20. When the anti-symmetrical load is applied, a crack starts from the left-hand corner of the notch and grows upwards to the left-hand side of the loading plate. Figure 21 shows the crack path in the specimen obtained from the proposed model together with the actual crack path monitored in the experiment. The experimentally obtained crack ends in the left-hand corner of the loading test plate, while the predicted one ends in the right-hand corner of the load plate. There are two possible reasons for this difference. The first is that point loads are applied in the model, whereas plate loading is applied in the test. The second is that there is a small mismatch between the sampling plane orientations and those of the real cracked planes. Increasing the number of sampling planes can reduce this mismatch to some extent. Furthermore, if other element types are chosen, or a denser finite element mesh is used, the predicted crack pattern is expected to remain essentially unchanged from the prediction with the coarser mesh of linear brick elements. As Figure 21 shows, the proposed model is able to simulate the crack line fairly well.
Figure 22 shows the modelled crack evolution in the direct tensile test. Mode II fracture has occurred on micro-planes numbers 5, 6, 7 and 8 (blue planes in Figure 18) due to a combination of tensile and shear stresses (load condition V) at the damaged integration points.

To demonstrate the ability of the proposed model further, a double-curvature arch dam was selected. This dam is at the feasibility stage of consulting studies at the Dez-Ab consulting engineering company and is located in the south-west of Iran, near the city of Aligoodarz.

Dam geometry

The geometry of the dam, including the plan and the profiles of the crown cantilever and centre lines, is shown in Figures 23 and 24. The dam has a height of 220 m and its crown length is about 430 m.

Material library

The elastic modulus and Poisson's ratio of the plain concrete of the dam are taken as E[c] = 30 000 MPa and v[c] = 0,20. The mass density of the concrete is ρ = 2 400 kg/m^3. The elastic modulus of the rock mass is selected as E[r] = 20 000 MPa.

Finite element mesh, results and failure mechanism

The model discretisation of the arch dam used in this analysis is shown in Figure 24. A total of 652 3D brick elements and 1 146 nodes are used for the finite element calculations. The arch dam is analysed with the code of Figure 14, in the first stage under its own weight and the hydrostatic pressure of the reservoir. Damaged portions of the dam are shown in Figure 25 in terms of the damage parameters (damage value ω). The damage in Figure 25 occurs on micro-planes numbers 7 and 8, where tensile and shear stress components (mode II fracture) exist. This indicates that the combination of tensile and shear stresses (load condition V) has the greatest damaging effect on the arch dam under its own weight and the pressure of the reservoir at that stage. The dam's deformations and stresses under its own weight and normal water pressure are shown in Figure 26.
The stress paths of the tensile and shear components on micro-planes numbers 7 and 8, from the initial loading up to the point of failure, are shown in Figure 27. The six stages of progressive failure are shown in Figure 28. Damage starts in the bottom right-hand location under the dam's own weight as the water pressure increases. With the compliance matrices kept constant for the damaged planes, failure extends to the left and upwards to form a failure mechanism that finally divides the dam body into five pieces (Figure 29).

Conclusions

A constitutive damage model for concrete, predicting the effects of any arbitrary loading conditions, was developed within a theoretical framework of micro-planes and damage functions. This model has some characteristics that distinguish it from similar recent work in this area of research. Some of these differences are listed below.

Any loading applied to a dx dy dz element leads to one of the five stress cases introduced on the sampling planes, and hence to the use of the corresponding damage functions. Therefore, the proposed model is capable of predicting the concrete behaviour under any arbitrary strain/stress path. These five force conditions are: (I) hydrostatic compression, (II) hydrostatic extension, (III) pure shear, (IV) shear + compression, and (V) shear + extension.

The damage evolution functions were constructed with reference to the experimental test results for concrete specimens under compressive and tensile loading conditions reported by various researchers.

In our formulation, according to damage theory, the value of the total damage function on each micro-plane varies between zero and one, the first value corresponding to the undamaged state (no crack initiation on the micro-plane) and the second to the totally damaged state (where the crack opening on the micro-plane is greater than the specified critical value).

It was found that shear + extension (load condition V) could result in much more damage than the other situations.
Thereafter, hydrostatic extension (II), pure shear (III), shear + compression (IV) and hydrostatic compression (I) respectively have a smaller effect on damage evolution.

References

Ansari, F & Li, Q B 1998. Fiber Optic Sensors for Construction Materials and Bridges. Lancaster, PA: Technomic Publishing Co., p 368.
Batdorf, S B & Budiansky, B 1949. A mathematical theory of plasticity based on the concept of slip. Technical Note 1871, National Advisory Committee for Aeronautics.
Bažant, Z P & Luzio, G D 2004. Nonlocal microplane model with strain-softening yield limits. International Journal of Solids and Structures, 41: 7209-7240.
Bažant, Z P, Xiang, Y & Prat, P C 1996. Microplane model for concrete. I: Stress-strain boundaries and finite strain. Journal of Engineering Mechanics, ASCE, 122(3): 245-254.
Bažant, Z P & Gambarova, P G 1984. Crack shear in concrete: crack band microplane model. Journal of Structural Engineering, ASCE, 110: 2015-2036.
Carol, I, Jirasek, M & Bažant, Z P 2001. A thermodynamically consistent approach to microplane theory. Part I: Free energy and consistent microplane stresses. International Journal of Solids and Structures, 38: 2921-2931.
Carol, I & Bažant, Z P 1997. Damage and plasticity in microplane theory. International Journal of Solids and Structures, 34: 3807-3835.
Carol, I, Bažant, Z P & Prat, P 1992. New explicit microplane model for concrete: theoretical aspects and numerical implementation. International Journal of Solids and Structures, 29: 1173-1191.
Kupfer, H B, Hilsdorf, H K & Rusch, H 1969. Behaviour of concrete under biaxial stresses. Journal of the American Concrete Institute, 66(8): 656-666.
Labibzadeh, M & Sadrnejad, S A 2008. Mesoscopic damage-based model for plane concrete under static and dynamic loadings. American Journal of Applied Science, 3(9): 2011-2019.
Pande, G N & Sharma, K G 1983. Multilaminate model of clays - A numerical evaluation of the influence of rotation of principal stress axes. International Journal of Numerical and Analytical Methods in Geomechanics, 7: 397-418.
Sadrnejad, S A & Pande, G N 1989. A multi-laminate model for sand. Proceedings, 3rd International Symposium on Numerical Models in Geomechanics, NUMOG-III, Niagara Falls, Canada, 8-11 May 1989.
Sadrnezhad, S A 1992. Multi-laminate elasto-plastic model for granular media. Journal of Engineering (Iran), 5(1 & 2): 11.
Sinha, B P, Gerstle, K H & Tulin, L G 1964. Stress-strain relations for concrete under cyclic loading. Journal of the American Concrete Institute, 61(2): 195-211.
Taylor, G I 1938. Plastic strain in metals. Journal of the Institute of Metals, 62: 307-324.
Zienkiewicz, O C & Pande, G N 1977. Time-dependent multi-laminate model of rocks. International Journal of Numerical and Analytical Methods in Geomechanics, 1: 219-247.

Contact details:
Department of Civil Engineering
K N Toosi University of Technology
Tehran, Iran
Tel: 98 21 8888 1128
Fax: 98 21 8877 9385
e-mail: sadrnejad@kntu.ac.ir

SEYED AMIRODIN SADRNEJAD received his BSc from Sharif University of Technology, Tehran, his MSc from University College Cardiff, UK, and his PhD from University College Swansea, UK. He is Professor in Civil Engineering at the K N Toosi University of Technology and vice-principal of the university. He is a consultant to Tehran Municipality on design and construction activities for residential and industrial construction. His fields of interest include earth and concrete dams, numerical methods in geomechanics and structural engineering, finite element methods, hydromechanics and hydraulic structures.
{"url":"http://www.scielo.org.za/scielo.php?script=sci_arttext&pid=S1021-20192010000200003&lng=en&nrm=iso&tlng=en","timestamp":"2014-04-23T21:52:08Z","content_type":null,"content_length":"63252","record_id":"<urn:uuid:e90dd037-30c0-44e8-a64d-d9c8dece12f3>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00040-ip-10-147-4-33.ec2.internal.warc.gz"}
German mathematical terms like "Nullstellensatz"

There are quite a few German mathematical theorems or notions which usually are not translated into other languages. For example, Nullstellensatz, Hauptvermutung, Freiheitssatz, Eigenvector (the "Eigen" part), Verschiebung. For me, as a German, this is quite entertaining. Do you know other examples? Please one per answer, please give a reference for the term or a short explanation of what it means. It would be great to see an explanation why there is no translation.

EDIT: Some more examples can be found at Wikipedia: Ansatz, Entscheidungsproblem, Grossencharakter, Hauptmodul, Möbius band, quadratfrei, Stützgerade, Vierergruppe, Nebentype.

terminology big-list

Does Eigenvalue count as an answer...? – Abel Stolz Apr 19 '11 at 9:05
Hauptidealsatz (sometimes) – KConrad Apr 19 '11 at 9:07
The notation $\mathbb Z$ comes from "Zahlen". – Roland Bacher Apr 19 '11 at 9:09
By the way, there are also non-mathematical words in English that are simply taken over from German, e.g. kindergarten, gesundheit, doppelgänger, ... – Tom De Medts Apr 19 '11 at 11:57
@Michael Lugo: I assume the second German should be English. That is, "band" is a perfectly reasonable English word. – quid Apr 19 '11 at 17:35

53 Answers

And what about the Wiedersehen metric?

Isn't it auf Wiedersehen metric, with auf and all? – Mariano Suárez-Alvarez Apr 19 '11 at 17:05

Spiegelungssatz. The meaning of this theorem is briefly discussed in the article Iwasawa theory and $p$-adic deformations of motives [MR1265554 (95i:11053)] by Ralph Greenberg. Let $p$ be an odd prime, and $K_\infty=\mathbf{Q}(\mu_{p^\infty})$. Let $L_\infty$ denote the maximal unramified abelian pro-$p$ extension of $K_\infty$, and $M_\infty$ the maximal abelian pro-$p$-extension of $K_\infty$ that is unramified outside the primes above $p$.
Let $Y_\infty={\rm Gal}(L_\infty/K_\infty)$ and $X_\infty={\rm Gal}(M_\infty/K_\infty)$. We can decompose ${\rm Gal}(K_\infty/\mathbf{Q})\cong\Delta\times\Gamma$, where $\Delta={\rm Gal}(\mathbf{Q}(\mu_p)/\mathbf{Q})$ and $\Gamma\cong\mathbf{Z}_p$. Both $Y_\infty$ and $X_\infty$ have a natural structure of $\Lambda$-modules ($\Lambda=\mathbf{Z}_p[[\Gamma]]$) coming from the action of ${\rm Gal}(K_\infty/\mathbf{Q})$ by inner automorphisms. The latter action gives in particular an action of $\Delta$, and hence we can decompose $Y_\infty=\bigoplus_{i=0}^{p-2}Y_\infty^{\omega^i}$ and $X_\infty=\bigoplus_{j=0}^{p-2}X_\infty^{\omega^j}$ as $\Lambda$-modules, where the superscript denotes the isotypical component under the action of $\Delta$, and $\omega:\Delta\rightarrow\mu_{p-1}$ denotes the mod $p$ cyclotomic character. The Spiegelungssatz is then described by Greenberg in loc. cit. as an argument using Kummer theory and class field theory that allows one to relate the structures of $X_\infty^{\omega^j}$ and $Y_\infty^{\omega^i}$ for $i+j\equiv 1\pmod{p-1}$ as $\Lambda$-modules.

The practice of using Gothic letters sometimes for ideals ($\mathfrak{a}$, $\mathfrak{b}$, ...) and often for Lie algebras ($\mathfrak{g}$, $\mathfrak{h}$, ...) seems to be of German origin. Also, the use of the lesser-known "kernel" instead of the better-known "core" seems to stem from the German "Kern".

The choice of "core" for Kern would have led to "cocore" for Cokern. – Chandan Singh Dalawat Apr 22 '11 at 6:12

I would like to mention a handful of examples that may be considered passé nowadays, but were prominent at some point in time.

• schlicht: I dare to address this one again because I consider that the feedback in the comments below Gottfried's entry is kind of misleading. About this one, Boas says that (see [1, page 97]): «...
When I was an undergraduate, there was no regular colloquium at Harvard, but there was a Mathematical Club, whose meetings were regularly attended by faculty. Once somebody gave a talk on schlicht functions. After the talk, Julian Lowell Coolidge asked plaintively whether there was an English word for 'schlicht'. Osgood replied, "Well, you could call them univalent functions, and everybody would know that you meant 'schlicht'". You need to know that Osgood had been trained in Germany, wrote his treatise on complex analysis in German, and was apt to tell German jokes to his classes. »

• limes: That's right... It was not a typo in Ahlfors's text on Complex Analysis. I recently came across this one in another book, but I just can't recall which one it was.

• eine Drehstreckung: Tristan Needham recalls this one when he apologizes for the coinage of the term 'amplitwist'. More specifically, he writes: «... To the expert reader I would like to apologize for having invented the word 'amplitwist' ... as a synonym (more or less) for 'derivative', as well as the component terms 'amplification' and 'twist'. I can only say that the need for some such terminology was forced on me in the classroom: if you try teaching the ideas in this book without using such language, I think you will quickly discover what I mean! Incidentally, a precedence argument in defence (sic) of 'amplitwist' might be that a similar term was coined by the older German school of Klein, Bieberbach, et al. They spoke of 'eine Drehstreckung', from 'drehen' (to twist) and 'strecken' (to stretch).
»

Last but not least, in several works of old (e.g., Perron's Die Lehre von den Kettenbrüchen, Knopp's Theory and Application of Infinite Series, Khinchin's Continued Fractions), there appears the following notation for general continued fractions: $\underset{j=1}{\overset{\infty}{\LARGE\mathrm K}}\frac{a_j}{b_j}=\cfrac{a_1}{b_1+\cfrac{a_2}{b_2+\cfrac{a_3}{b_3+\ddots}}}.$ Guess what the $\mathrm{K}$ stands for...

[1] Lion Hunting & Other Mathematical Pursuits: A Collection of Mathematics, Verse and Stories by Ralph P. Boas Jr.

I thought "limes" is from Latin. – Gerald Edgar Jun 6 '12 at 14:38

The following theorem is known as the Kugelsatz: Let $X$ be an open set in $\mathbb{C}^n, \quad n \geq 2$, and $K \subset X$ a compact subset such that $X\setminus K$ is connected. Then the restriction map $\rho: \mathcal{O}(X) \to \mathcal{O}(X \setminus K)$ is an isomorphism of $\mathbb{C}$-algebras (this version after Volker Scheidemann, Introduction to Complex Analysis in Several Variables, Birkhäuser 2005). The first result of this kind is due to Hartogs, with $X$ and $K$ being concentric Euclidean balls, hence the name (Kugel = ball). Many texts in several complex variables have been written by German-speaking authors (Grauert and Fritzsche, and the Kaup brothers, are other examples), so the German name stuck even in the English versions. The theorem is also referred to as the "tomato can" theorem.

Zusammenstellung. Means "compilation" or "survey". Can be used in the first section of a paper, as one starts compiling "preliminary facts" to refer to later in the paper. That's the way I've seen it used in a paper by Raoul Bott.

There's a kind of combinatorial design called a gerechte design - essentially it's a Latin square with additional block constraints. (I gather there's been a fad in recent years for newspapers to print partial gerechte designs of a certain kind for readers to complete.)
As a technical term, the word comes from the following paper:

W. U. Behrens (1956). Feldversuchsanordnungen mit verbessertem Ausgleich der Bodenunterschiede. Zeitschrift für Landwirtschaftliches Versuchs- und Untersuchungswesen, 2, 176-193.

Behrens' gerechte designs were 'fair' in how they apportioned plots of land to different treatments in an agricultural trial.

I believe Albrecht Fröhlich uses the German term Beweis, instead of the English proof, in his chapter of the classic "Algebraic number theory". (EDIT: In my original version, I translated Beweis to example. I shouldn't trust my poor knowledge of German...)

And to think, the number of times we tell students 'an example is not a proof'... – Colin Reid Oct 3 '11 at 23:49

If you think of the symbols, you can also see Gothic, alternatively called German, letters. Also, in algebraic topology, it is common to denote the cycles by $Z$, which is the first letter of the German word for cycles. Also, many words that are Latin or Greek in terms of their ingredients were first coined and used in German, like Topologie, which used to be called Analyse Situs. It was common to denote curvature by $K$, which stands for Krümmung, and a domain by $B$, for Bereiche. In Riemannian geometry, the metric tensor is represented by $g$, which stands for Gravität. Also, Faltung used to be common in English before the word convolution took over. I can also add Umlaufsatz in the differential geometry of surfaces. There are so many more...

There is also the inverse tendency: the German terms tend to be forgotten, now that English has become so prevalent. Many German students will happily use "Konvolution" when they read it in a paper before I teach them to use "Faltung".
Similarly, "bottleneck", as in "bottleneck objective function", tends to be sometimes literally translated into "Flaschenhals" instead of "Engpass" (meaning narrow pass, which is (or used to be) the usual term in this situation). A case which I particularly deplore is the thoughtless translation of "line segment" into "Liniensegment" instead of "Strecke". – Günter Rote Mar 1 '13 at 11:27

Einheit = the word for unit in algebra. Hence, some use the notation $e\in G$ to denote the element of a group such that $ex = xe = x, \forall x \in G$. Unit is the appropriate translation, yet some algebraists still use the letter $e$ to denote the identity element in a group.

Well, sometimes it's also accidents of language that force this: e.g. "identity" is a good word to describe the unit, but the letter i was not really available any more, was it? – Thierry Zell Apr 19 '11 at 16:31
According to Cajori, $i$ was first used by Euler in 1777 in a memoir which was not printed until 1794, after his death. It apparently did not appear anywhere else until 1801, when Gauss started to use it systematically. – Mariano Suárez-Alvarez Apr 19 '11 at 19:15

Schubfachprinzip ("drawer principle" or "shelf principle" or "Dirichlet's box principle"). It is now easy to guess we are talking about P-H P.

In topology, the separation axioms $T_0$, $T_1$, etc., where the $T$ stands for Trennungsaxiom.

Bew (short for beweisbar, introduced in Gödel's incompleteness paper) is still used as a provability predicate in some mathematical logic papers. In physics and other subjects (not so much in math) we hear about plenty of Gedankenexperiments. Don't forget Hilbert's Satz 90, anomalous because of the "90" and not just the "Satz". There are also French words like étale cohomology.
A distinguished mathematician once referred to an assertion he was making in a conference talk as a "Theorem 90". He went on to explain that he was 50% sure of the proof, and that he had explained it to a colleague, who was 40% sure. – Tom Goodwillie Apr 19 '11 at 21:58
@Thierry Zell, easy as child's play to find one more :) ... dessin d'enfant – quid Apr 20 '11 at 0:41

There is Ahlfors's Scheibensatz in complex function theory, which is a generalization of Ahlfors's five islands theorem.

In "Functional Analysis" by Kosaku Yosida, he denotes the closure of a set $M$ by $M^a$. He explains that it is a shortcut for the German abgeschlossene Hülle.

There is also the Quermassintegral (mixed volumes of the form $V(K,K,\ldots,B,B)$ where $B$ is the unit ball, see Wikipedia), which I'm not even sure is German (not a lot of Qs in German usually).

One that is similar in spirit to "eigenvalue" in that it mixes the two languages is the $$ \text{umkehr map} $$

Thanks John! I couldn't really believe this, but in fact there are even papers titled "Umkehr maps" ;-) – Martin Brandenburg Jun 6 '12 at 21:24

It's early in the morning, so maybe I missed it in the answers above, but, if we're including symbols, then the obvious example is $\mathbb{Z}$, the integers, or Zahlen! Oops! It is early in the morning... I see that Roland noted the symbol for the integers (which I also can't seem to get to process properly) just a few comments above.

In Swedish, a field is called a 'kropp', a body. This of course comes from the German word Körper.

Fields (in this algebraic sense) are called bodies in most languages. – Tom Goodwillie Mar 13 '12 at 22:01

"schlichtartig" refers to a surface on which every simple closed curve which separates locally also separates globally.
Hence it means roughly "planar". This is used in the conformal mapping theory of Riemann surfaces (Introduction to Riemann Surfaces, Springer, p. 91). I know only a little German, but it seems to translate to something like "simply behaved"?

Anzahl-theorems is one I have recently read in Wan's book on classical groups.

The $\int$ symbol is a German S introduced by Leibniz and stands for Summe (sum).

There is nothing German about the glyph $\int$ for the letter S. It can be found in almost all French and English books of the time. – Chandan Singh Dalawat Apr 19 '11 at 11:50
Even because Leibniz wrote in Latin. – Pietro Majer Apr 19 '11 at 12:07
Several aspects of the typography stand out to modern eyes. Most noticeable of these is the use of the 'long s', visually resembling (but not pronounced as) a modern 'f' (as in the word 'Goſpel'). The modern form of the letter 's' was only used at the end of words, and in a few other specific circumstances. The 'long s' persisted in English print until the late 1700s, and survives in mathematics today as the symbol to denote an integral ('s' to denote a sum of infinitesimals). bl.uk/onlinegallery/sacredtexts/kingjames.html – Chandan Singh Dalawat Apr 22 '11 at 7:07
Bisecting an Angle

Geometry construction using a compass and straightedge

How to bisect an angle with compass and straightedge or ruler. To bisect an angle means to divide the angle into two equal (congruent) parts without actually measuring the angle. This Euclidean construction works by creating two congruent triangles; see the proof below for more on this.

Printable step-by-step instructions

The above animation is available as a printable step-by-step instruction sheet, which can be used for making handouts or when a computer is not available.

Proof

This construction works by effectively building two congruent triangles. The image below is the final drawing above with the red lines added and points A, B, C labelled.

1. QA is congruent to QB. Reason: they were both drawn with the same compass width.
2. AC is congruent to BC. Reason: they were both drawn with the same compass width.
3. ∆QAC and ∆QBC are congruent. Reason: three sides congruent (SSS); QC is common to both.
4. Angles AQC and BQC are congruent. Reason: CPCTC -- corresponding parts of congruent triangles are congruent.
5. The line QC bisects the angle PQR. Reason: angles AQC and BQC are adjacent and congruent. Q.E.D.

Try it yourself

A printable worksheet containing three angle bisection problems is available; use the browser print command to print as many copies as you wish.
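The compass construction has a simple vector analogue: marking off equal distances QA = QB corresponds to taking unit vectors from Q toward P and R, and their sum points along the bisector. A small sketch of that idea (illustrative only, not part of the original page):

```python
import math

def angle_bisector_direction(q, p, r):
    """Unit vector from Q along the bisector of angle PQR.

    Assumes P and R are not positioned so that the two unit vectors
    cancel (i.e. the rays QP and QR are not exactly opposite).
    """
    ux, uy = p[0] - q[0], p[1] - q[1]
    vx, vy = r[0] - q[0], r[1] - q[1]
    nu = math.hypot(ux, uy)          # |QP|
    nv = math.hypot(vx, vy)          # |QR|
    # Sum of the two unit vectors lies along the bisector.
    bx, by = ux / nu + vx / nv, uy / nu + vy / nv
    nb = math.hypot(bx, by)
    return (bx / nb, by / nb)
```

For a right angle at the origin between points on the positive x- and y-axes, the bisector comes out along (1, 1)/√2, i.e. at 45°, regardless of how far P and R are from Q -- mirroring the fact that the compass width used for A and B does not matter.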
MathGroup Archive: July 2001

RE: Triangular Probability Distributions

• To: mathgroup at smc.vnet.net
• Subject: [mg30020] RE: [mg30009] Triangular Probability Distributions
• From: "tgarza01 at prodigy.net.mx" <tgarza01 at prodigy.net.mx>
• Date: Sat, 21 Jul 2001 16:16:45 -0400 (EDT)
• Sender: owner-wri-mathgroup at wolfram.com

Hello Michael,

I'm not clear as to what you mean by "methodology". Still, what you may do is define the triangular probability density function and work from it -- for example, a density defined on the interval [a, b]. From this you obtain the graph of the triangular density in [0, 1]. The distribution function is then defined as the integral of the density, which you plot (it takes a little while, since the numerical integration is slow because of the peak at x = 0.5; you may integrate one part after the other and then it runs very quickly). I turned off the messages to avoid looking at them. The k-th moment can then be computed for any k.

Tomas Garza

Original Message:
From: loopm at yahoo.com (Michael Loop)
To: mathgroup at smc.vnet.net
Subject: [mg30020] [mg30009] Triangular Probability Distributions

I have been looking for a methodology for using the triangular probability distribution in Mathematica. I have not found anything that allows me to do this. Has anyone found a way to use the triangular distribution? Are there any add-on packages that would include this distribution?

Thank you,
Michael Loop
Minneapolis MN
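The Mathematica expressions were lost from the archived message, but the quantities it describes -- the density, the distribution function, and the k-th moment of a symmetric triangular distribution on [a, b] -- can be sketched in any language. The following Python reconstruction is for illustration only and is not the original poster's code:

```python
def triangular_pdf(x, a=0.0, b=1.0):
    # Symmetric triangular density on [a, b], peaking at the midpoint c.
    c = (a + b) / 2
    if a <= x <= c:
        return 2 * (x - a) / ((b - a) * (c - a))
    if c < x <= b:
        return 2 * (b - x) / ((b - a) * (b - c))
    return 0.0

def triangular_cdf(x, a=0.0, b=1.0):
    # Closed-form integral of the density; avoids the slow numerical
    # quadrature near the peak mentioned in the message.
    c = (a + b) / 2
    if x < a:
        return 0.0
    if x <= c:
        return (x - a) ** 2 / ((b - a) * (c - a))
    if x <= b:
        return 1.0 - (b - x) ** 2 / ((b - a) * (b - c))
    return 1.0

def raw_moment(k, a=0.0, b=1.0, n=20000):
    # k-th raw moment by a simple midpoint rule, standing in for NIntegrate.
    h = (b - a) / n
    return sum((a + (i + 0.5) * h) ** k * triangular_pdf(a + (i + 0.5) * h, a, b)
               for i in range(n)) * h
```

On [0, 1] the mean is 1/2 and the second raw moment is 7/24 (so the variance is 1/24), which the numerical moments reproduce.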
Super Bowl Squares Contest

January 23, 2012 at 2:02 pm  2 comments

Lawrence Tynes, the hero; Billy Cundiff, the goat. And so we head to Super Bowl XLVI with a rematch of the game four years ago. One can only hope that this game will be half as exciting as that one.

Your math/football trivia for the day? Super Bowl XLVI is the second to require each of the first four Roman numerals (I, V, X, L); the first was Super Bowl XLIV two years ago. [Thanks to Eric Langen for pointing out my previous error.] Personally, I'm looking forward to Super Bowl LXVI, when the first four Roman numerals will occur in decreasing order. A real treat will occur in 3532, when Super Bowl MDLXVI will be played, wherein six of the seven Roman numerals (all but C) will appear in decreasing order. While I'm fairly certain I won't be around to see that one, I hold out hope that I am reincarnated as a star football player who earns that game's MVP honors; though it's far more likely that I will return as a football to be used by adolescents in a backyard game.

Buoyed by the success of the online version of my favorite game, I've decided to run another online contest. This one relates to Super Bowl XLVI, and you're asked to predict the units digit of each team's score at the end of each quarter when the Patriots and Giants square off on Sunday, February 5.

Probably the most common type of office betting pool is a square football pool, which is often referred to as just The Squares. The pool is played on a 10 × 10 grid, and contestants can buy squares within the grid for a certain amount of money. After all 100 squares have been purchased, the numbers 0‑9 are randomly assigned to each row and column. The numbers for each row represent the units digit of the score for one team, and the numbers for each column represent the units digit of the score for the other team.
The winners are the four people whose squares correspond to the units digit of the actual score of the game at the end of the 1st, 2nd, 3rd, and 4th quarters. Feel free to use this Excel spreadsheet if you'd like to run your own version of this game. (Though be sure to check all applicable laws, to ensure that you're not in violation of local or state gaming laws.)

The difference between the typical version of this game and the version I'm running here is that you get to pick which pairs of numbers you want. Consequently, winning isn't solely a matter of random luck. But there's a catch — you can pick the most likely number pairs, but chances are other folks will pick those numbers, too, and the winnings are divided among everyone who picked that pair. So, should you pick 0‑0 and divide the pot with a thousand others; or should you pick the highly unlikely 5‑2 and have the winnings all to yourself?

Please note that the game I'm running is for entertainment only. No money is required to play, and there will be no pay-out to the winners. If all goes well this year, perhaps next year there will be a real version that allows you to wager your hard-earned money in such a silly manner — assuming, of course, that I can find a way to skirt the myriad state gaming laws that would prevent me from running such a contest.

In case you're wondering, "Why are you doing this?" remember that I'm the author of a math joke blog. Why do I do any of the things I do? For fun, mainly, and because I'm a certified math geek. I like the math psychology of this game, and I'm just interested in the numbers that people will pick.

Here are the official rules:

• Imagine that you have $5, and each square costs $1, so you can buy up to five squares. It's your money, spend it how you like — if you want to choose the same pair of numbers for all five bets, go ahead, knock yourself out. And what the hell do I care?
Enter as often as you like; if you've got nothing better to do with your time than repeatedly submit entries for this contest, well, that's your problem.
• All money bet will be divided equally among the four quarters, so the total amount will be equal to $5n, where n is the number of contestants. (Should a contestant enter fewer than five choices, the last entered choice will be repeated multiple times to get the total to five.)
• If you pick a winning square, you will share the winnings with everyone else who picked the same square. (For example, if 200 people play this game, there will be $1,000 in the pot, so the winning amount for each quarter will be $250. If ten people choose 7-3 and it hits for one quarter, each person will receive $25.)
• Enter your five choices as two-digit numbers, where the tens digit represents the Patriots' score and the units digit represents the Giants' score. (For instance, if you want Patriots 7, Giants 3, enter 73; but if you want Patriots 0, Giants 7, enter 07.)

That's it. Access the form via the link below:

My friends Andy and Casey Frushour have been keeping data about which pairs of numbers occur most often. Before making your picks, you might want to check out their analysis of data from six years of NFL games as well as from all 45 Super Bowls.

Bets will be accepted until 11:59 p.m. ET on Saturday, February 4, and an image showing the number of times each square was chosen will be posted at:

Super Bowl Squares Contest – Summary of All Bets

The complete results for this contest will be posted on Monday, February 6, at the URL below. (But note that this link will return a "404 Error - File not Found" message prior to February 6.)

Super Bowl Squares Contest – Results

Good luck!

Entry filed under: Uncategorized. Tags: betting, Giants, New England, New York, Patriots, pool, squares, Super Bowl, XLVI.

• “Your math/football trivia for the day?
Super Bowl XLVI is the first one that requires each of the first four Roman numerals (I, V, X, L).” Um… what about Super Bowl XLIV two years ago?
• Yeah, lol. XLIV used all four two years ago.
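The 0-0 versus 5-2 tradeoff from the post is easy to quantify. Under the stated payout rule -- each quarter is worth a quarter of the pot, split among everyone holding the winning square -- the expected return of one pick is roughly pot × P(square hits a quarter) / (number of people on it). The probabilities and player counts below are made up purely for illustration:

```python
def expected_return(pot, p_hit_per_quarter, n_pickers):
    # Four quarters, each worth pot/4; a winning square splits that
    # quarter's prize among its n_pickers. Assuming (as a simplification)
    # the same hit probability each quarter, the expected total return of
    # one person's pick is 4 * (pot / 4) * p / n = pot * p / n.
    return pot * p_hit_per_quarter / n_pickers

# Hypothetical contest: 1000 players x $5 each = $5000 pot.
pot = 5000.0
popular = expected_return(pot, p_hit_per_quarter=0.10, n_pickers=1000)  # e.g. 0-0
obscure = expected_return(pot, p_hit_per_quarter=0.002, n_pickers=1)    # e.g. 5-2
```

In this made-up scenario the crowded, likely square returns $0.50 per pick while the lonely, unlikely one returns $10 -- being alone on a square can outweigh a much lower hit probability, which is exactly the "math psychology" the post is after.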
Uncertainty Analysis in Software Reliability Modeling by Bayesian Analysis with Maximum-Entropy Principle

Yuan-Shun Dai, Min Xie, Quan Long, Szu-Hui Ng

IEEE Transactions on Software Engineering, vol. 33, no. 11, pp. 781-795, November 2007, doi:10.1109/TSE.2007.70739

Abstract. In software reliability modeling, the parameters of the model are typically estimated from the test data of the corresponding component. However, the widely used point estimators are subject to random variations in the data, resulting in uncertainties in these estimated parameters. For large complex systems made up of many components, the uncertainty of each individual parameter amplifies the uncertainty of the total system reliability.
Ignoring the parameter uncertainty can result in grossly underestimating the uncertainty in the total system reliability. This paper attempts to study and quantify the uncertainties in the software reliability modeling of a single component with correlated parameters and in a large system with numerous components. Previous works on quantifying uncertainties have assumed a sufficient amount of available data. However, a characteristic challenge in software testing and reliability is the lack of available failure data from a single test, which often makes modeling difficult. This lack of data poses a bigger challenge in the uncertainty analysis of the software reliability modeling. To overcome this challenge, this paper proposes to utilize experts' opinions and historical data from previous projects to complement the small number of observations to quantify the uncertainties. This is done by combining the Maximum-Entropy Principle (MEP) into the Bayesian approach. This paper further considers the uncertainty analysis at the system level, which contains multiple components, each with its respective model/parameter/uncertainty, using a Monte Carlo approach. Some examples with different modeling approaches (NHPP, Markov, Graph theory) are illustrated to show the generality and effectiveness of the proposed approach. Furthermore, we illustrate how the proposed approach for considering the uncertainties in various components improves a large-scale system reliability model proposed in Dai & Levitin (2006) by relaxing a critical assumption.

References

[1] A.E. Abbas, “Entropy Methods for Joint Distributions in Decision Analysis,” IEEE Trans. Eng. Management, vol. 53, no. 1, pp. 146-159, 2006.
[2] T. Adams, “Total Variance Approach to Software Reliability Estimation,” IEEE Trans. Software Eng., vol. 22, no. 9, pp. 687-688, Sept. 1996.
[3] J. Berger, “The Case for Objective Bayesian Analysis,” Bayesian Analysis, vol. 1, no. 3, pp. 385-402, 2006.
[4] J.
Berger, Statistical Decision Theory and Bayesian Analysis. Springer-Verlag, 1985.
[5] A.L. Berger, S.A. Della, and V.J. Della, “A Maximum Entropy Approach to Natural Language Processing,” Computational Linguistics, vol. 22, no. 1, pp. 39-71, 1996.
[6] J. Bernardo and A. Smith, Bayesian Theory. Wiley, 1994.
[7] Y.S. Dai and G. Levitin, “Reliability and Performance of Tree-Structured Grid Services,” IEEE Trans. Reliability, vol. 55, no. 2, pp. 337-349, 2006.
[8] Y.S. Dai, M. Xie, K.L. Poh, and G.Q. Liu, “A Study of Service Reliability and Availability for Distributed Systems,” Reliability Eng. and System Safety, vol. 79, pp. 103-112, 2003.
[9] Y.S. Dai, M. Xie, K.L. Poh, and B. Yang, “Optimal Testing-Resource Allocation with Genetic Algorithm for Modular Software Systems,” J. Systems and Software, vol. 66, pp. 47-55, 2003.
[10] M.H. DeGroot and M.J. Schervish, Probability and Statistics. Addison-Wesley, 2002.
[11] G.L. Eyink and S. Kim, “A Maximum Entropy Method for Particle Filtering,” J. Statistical Physics, vol. 123, no. 5, pp. 1071-1128, 2005.
[12] A.L. Goel and K. Okumoto, “Time Dependent Error-Detection Rate Model for Software Reliability and Other Performance Measures,” IEEE Trans. Reliability, vol. 28, pp. 206-211, 1979.
[13] M. Goldstein, “Subjective Bayesian Analysis: Principles and Practice,” Bayesian Analysis, vol. 1, no. 3, pp. 403-420, 2006.
[14] B.R. Haverkort and A.M.H. Meeuwissen, “Sensitivity and Uncertainty Analysis of Markov-Reward Models,” IEEE Trans. Reliability, vol. 44, no. 1, pp. 147-154, 1995.
[15] D.E. Holmes, “Toward a Generalized Bayesian Network,” Proc. Am. Inst. Physics Conf.—Bayesian Inference and Maximum Entropy Methods in Science and Eng., vol. 872, pp. 195-202, 2006.
[16] E.T. Jaynes, “Information Theory and Statistical Mechanics,” Statistical Physics, pp. 181-218, 1963.
[17] Z. Jelinski and P.B. Moranda, “Software Reliability Research,” Statistical Computer Performance Evaluation, W.
Freiberger, ed., pp. 465-497, Academic Press, 1972.
[18] W.S. Jewell, “Bayesian Extensions to a Basic Model of Software Reliability,” IEEE Trans. Software Eng., vol. 11, no. 12, pp. 1465-1471, Dec. 1985.
[19] J. Kapur, Maximum-Entropy Models in Science and Engineering. John Wiley & Sons, 1989.
[20] S. Kim, F.B. Bastani, I.L. Yen, and I.R. Chen, “Systematic Reliability Analysis of a Class of Application-Specific Embedded Software Framework,” IEEE Trans. Software Eng., vol. 30, no. 4, pp. 218-230, 2004.
[21] D. Kurowicka and R. Cooke, Uncertainty Analysis with High Dimensional Dependence Modeling. Wiley, 2006.
[22] M. Masera, “Uncertainty Propagation in Fault Tree Analyses Using Lognormal Distributions,” IEEE Trans. Reliability, vol. 36, no. 1, pp. 145-149, 1987.
[23] K.W. Miller, L.J. Morell, R.E. Noonan, S.K. Park, D.M. Nicol, B.W. Murrill, and J.M. Voas, “Estimating the Probability of Failure When Testing Reveals No Failures,” IEEE Trans. Software Eng., vol. 18, no. 1, pp. 33-43, Jan. 1992.
[24] I. Myrtveit, E. Stensrud, and M. Shepperd, “Reliability and Validity in Comparative Studies of Software Prediction Models,” IEEE Trans. Software Eng., vol. 31, no. 5, pp. 380-391, May 2005.
[25] P.D.T. O'Connor, “Quantifying Uncertainty in Reliability and Safety Studies,” Microelectronics and Reliability, vol. 35, nos. 9-10, pp. 1347-1356, 1995.
[26] H. Pham, Software Reliability. Springer-Verlag, 2000.
[27] C. Robert, The Bayesian Choice: A Decision Theoretic Motivation. Springer-Verlag, 1994.
[28] R.W. Selby, “Enabling Reuse-Based Software Development of Large-Scale Systems,” IEEE Trans. Software Eng., vol. 31, no. 6, pp. 495-510, June 2005.
[29] C.E. Shannon, “A Mathematical Theory of Communication,” The Bell System Technical J., vol. 27, pp. 379-423, 1948.
[30] P. Soundappan, E. Nikolaidis, R.T. Haftka, R. Grandhi, and R. Canfield, “Comparison of Evidence Theory and Bayesian Theory for Uncertainty Modeling,” Reliability Eng. and System Safety, vol. 85, nos. 1-3, pp.
295-311, 2004.
[31] C.Y. Tseng, “Entropic Criterion for Model Selection,” Physica A: Statistical and Theoretical Physics, vol. 370, no. 2, pp. 530-538, 2005.
[32] K.S. Trivedi, Probability and Statistics with Reliability, Queuing, and Computer Applications. Prentice-Hall, 1982.
[33] D.A. Wooff, M. Goldstein, and F.P.A. Coolen, “Bayesian Graphical Models for Software Testing,” IEEE Trans. Software Eng., vol. 28, no. 5, pp. 510-525, May 2002.
[34] M. Xie, Y.S. Dai, and K.L. Poh, Computing System Reliability: Models and Analysis. Kluwer Academic, 2004.
[35] M. Xie, G.Y. Hong, and C. Wohlin, “A Study of the Exponential Smoothing Technique in Software Reliability Growth Prediction,” Quality and Reliability Eng. Int'l, vol. 13, no. 6, pp. 347-353,
[36] B. Yang and M. Xie, “A Study of Operational and Testing Reliability in Software Reliability Analysis,” Reliability Eng. and System Safety, vol. 70, pp. 323-329, 2000.
[37] L. Yin, M.A.J. Smith, and K.S. Trivedi, “Uncertainty Analysis in Reliability Modeling,” Proc. Ann. Reliability and Maintainability Symp., pp. 229-234, 2001.
[38] L. Yin and K.S. Trivedi, “Confidence Interval Estimation of NHPP-Based Software Reliability Models,” Proc. 10th Int'l Symp. Software Reliability Eng., pp. 6-11, 1999.

Index Terms: Software Reliability, Uncertainty analysis, Bayesian method, Monte Carlo, Markov model, Graph theory
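The abstract's core point -- that per-component parameter uncertainty compounds at the system level -- can be illustrated with a toy Monte Carlo run. Everything below (a three-component series system, Beta posteriors for the component reliabilities, the numbers themselves) is a hypothetical sketch, not the paper's actual models:

```python
import random

# Toy illustration (NOT the paper's models): a 3-component series system
# whose component reliabilities carry Beta posteriors. Sampling parameters
# rather than plugging in point estimates yields a distribution over the
# system reliability, exposing the uncertainty a point estimate hides.
random.seed(42)

posteriors = [(50, 2), (40, 3), (60, 1)]  # hypothetical Beta(alpha, beta) per component

def sample_system_reliability():
    # Series system: it works only if every component works, so the
    # (independent) component reliabilities multiply.
    r = 1.0
    for a, b in posteriors:
        r *= random.betavariate(a, b)
    return r

samples = [sample_system_reliability() for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)

# Point estimate obtained by plugging in the posterior means a / (a + b):
point = 1.0
for a, b in posteriors:
    point *= a / (a + b)
```

By independence the Monte Carlo mean agrees with the plug-in point estimate, but the spread of the samples -- the quantity the paper argues should not be ignored -- is far from zero even though each component looks highly reliable on its own.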
Commission on Mathematical and Theoretical Crystallography
International Union of Crystallography

Summer Schools on Mathematical Crystallography
Nancy, France, 21 June - 2 July 2010

On the occasion of the fifth anniversary of its foundation, the Commission on Mathematical and Theoretical Crystallography organised two summer schools devoted to the topology of crystal structures and to the irreducible representations of space groups.

Topological Crystal Chemistry: Theory and Practice

The explosive growth in inorganic and organic materials chemistry has seen a great upsurge in the synthesis of crystalline materials with extended framework structures (zeolites, coordination polymers/coordination networks, metal-organic frameworks (MOFs), supramolecular architectures formed by hydrogen bonds and/or halogen bonds, etc.). There is a concomitant interest in simulating such materials and in designing new ones. However, it is a truism that before one can embark on the systematic design of materials, one must know what the possibilities are. Indeed, in the last two decades there have been many parallel advances in the theoretical description and analysis of periodic structures (nets, tilings, surfaces, etc.), in the elaboration of databases, and in the development of software for analyzing and describing (illustrating) topological aspects of both real crystal structures and theoretical extended architectures. With these achievements, materials science and crystal chemistry reach a new level of development, characterized by a deeper integration of mathematical methods, computer algorithms and programs into the modeling and interpretation of periodic systems of chemical bonds in crystals. The goal of this school is to give an introduction to this whole new area, which we call Topological Crystal Chemistry.
Much time will be dedicated to hands-on sessions on the use of these novel and still not widespread computer methods, software and databases, so that by the end of the course students should be able to analyze any kind of extended structure through the eye of topology and describe it in terms of nets, entanglements, catenation, etc. The target audience is young scientists (graduate students and postdoctoral associates) actively engaged in materials research (experimental and/or theoretical), but also crystallographers who want to look at familiar structure types with a different eye. Some basic knowledge in chemistry and crystallography will be assumed; it will be provided during the pre-school day for those needing a basic introduction.

Irreducible representations of space groups

Group Theory is an indispensable mathematical tool in many branches of chemistry and physics. The school aims at giving the necessary background and practical skills for an efficient use of group-theoretical methods in specific problems of solid-state physics, structural chemistry and materials science. After a revision of the basic concepts of spatial symmetry and its description by crystallographic point and space groups according to the International Tables for Crystallography, the principal results of the theory of group representations will be introduced with an emphasis on the practical aspects of the subject. Irreducible representations of crystallographic point and space groups and their derivation will be discussed in detail. The abstract theory is limited to a reduced set of fundamental facts and statements. More attention is paid to the tools and techniques necessary for practical applications of symmetry methods in solid-state problems such as molecular dynamics, spectroscopy, electronic bands, phonon spectra, and the Landau theory of phase transitions.
The application of group-theoretical methods to molecular vibrations, including the concept of normal modes of vibration, will be discussed in detail. The students will learn how, starting from symmetry requirements, to determine the spectral-transition selection rules, with special attention to infrared and Raman spectra. The important role of representations of crystallographic groups in the classification, labeling and analysis of the degeneracies of the lattice vibrations and electronic energy bands of crystalline solids will be reviewed. The applications related to phase-transition studies will include the introduction of efficient techniques allowing the determination of the principal characteristics of a system undergoing a phase transition: for example, the determination of the order parameter from the knowledge of the initial and final phases, or the enumeration of all symmetry-allowed phases that can result from a continuous phase transition. The symmetry-mode analysis of structural phase transitions results in the decomposition of the symmetry-breaking distortion, present in the distorted structure, into contributions from different symmetry modes. The exposition of the general theory and methods will be illustrated with a number of examples of typical phase transitions of different nature, so that participants can learn to apply the group-theoretical procedures in practice for the analysis of phase-transition mechanisms and in the search for new functional materials. A tutorial and practical guide to the Bilbao Crystallographic Server (www.cryst.ehu.es) forms an essential part of the course. The server provides an excellent on-line tool for the study of crystallographic symmetry and its applications. It gives access to databases with symmetry information on crystallographic groups, their group-subgroup relations and irreducible representations.
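The orthogonality theorem for characters mentioned in the programme can be made concrete with the smallest non-Abelian example. The sketch below checks the row orthogonality relations for the standard character table of the point group C3v (isomorphic to the symmetric group S3); it is an illustration only, not course material:

```python
# Character table of C3v: conjugacy classes {E}, {2 C3}, {3 sigma_v}
# of sizes 1, 2, 3, so the group order is |G| = 6.
class_sizes = [1, 2, 3]
order = sum(class_sizes)

characters = {
    "A1": [1, 1, 1],    # trivial representation
    "A2": [1, 1, -1],   # sign-like one-dimensional representation
    "E":  [2, -1, 0],   # two-dimensional representation
}

def inner(chi1, chi2):
    # <chi1, chi2> = (1/|G|) * sum over classes of |class| * chi1 * chi2.
    # The characters here are real, so no complex conjugation is needed.
    return sum(n * a * b for n, a, b in zip(class_sizes, chi1, chi2)) / order

# Row orthogonality: <chi_i, chi_j> = 1 if i == j, else 0,
# for irreducible characters.
```

For example, inner(A1, A1) = 1 while inner(A1, E) = 0; and the squared dimensions 1² + 1² + 2² sum to |G| = 6, as the general theory requires.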
The school aims at giving the necessary background and practical skills for an efficient use of the computer databases and programs on the Bilbao Crystallographic Server focused on solid-state physics and chemistry applications. The participants of the school will benefit from practical training in the application of advanced symmetry methods to solid-state physics and crystal chemistry problems. The minimal mathematical prerequisites for the school widen the audience to students and researchers from chemistry, physics, geological sciences and engineering.

The two schools ran one after the other, with an optional pre-school day where the basic concepts necessary to attend the schools were presented. Participants in the pre-school day were required to do some concrete exercises allowing them to become familiar with the notions assumed to be understood during the schools. The weekend between the two schools was devoted to presenting additional concepts that are prerequisites for the second school.

Pre-school day, 21 June: Introduction to crystal symmetry; space groups, Hermann-Mauguin symbols, exercises on the International Tables for Crystallography.

Topological Crystal Chemistry: Theory and Practice

The first school ran over four days, from 22 to 25 June.

Periodic Structures and Crystal Chemistry... aka the Topological Approach to Crystal Chemistry
Graph, Nets & Tilings (Quotient Graphs & Natural Tilings)
Topological Analysis of Entanglement: interpenetration, polycatenation & more
Computer crystallochemical analysis: an overview
Applied computer crystallochemical analysis

PRACTICE WITH PROGRAMS TOPOS, Systre, 3dt

Module 1.
Standard topological analysis and classification of nets in MOFs (Metal-Organic Frameworks), organic and inorganic crystals
- Creating a database from CIF, SHELX or Systre formats
- Computing the adjacency matrix (complete set of interatomic bonds) for chemical compounds with different chemical bonding (valence, H bonding, specific interactions, intermetallic compounds)
- Visualizing 0D, 1D, 2D and 3D structures
- Standard simplified representations of MOFs or hydrogen-bonded organic crystals
- Computing topological indices (coordination sequences, point, Schläfli and vertex symbols)
- Topological identification of nets; working with the TTD collection and Systre
- Taxonomy of nets; working with the TTO collection

Module 2. Special topological methods of searching for building units in crystal structures
- Special methods of simplification; edge nets and ring nets; analysis of synthons
- Standard cluster representation of MOFs
- Nanocluster representation of intermetallic compounds

Module 3. Analysis of entanglements in MOFs and molecular crystals
- Visualization, topological analysis and classification of interpenetrating MOFs
- Detection and description of other types of entanglement in MOFs: polycatenation, self-catenation and polythreading

Module 4. Analysis of microporous materials and fast-ion conductors with natural tilings
- Computing natural tilings and their parameters; visualizing tiles and tilings (TOPOS & 3dt); simple and isohedral tilings; constructing dual nets
- Analysis of zeolites and other microporous materials; constructing migration paths in fast-ion conductors

Module 5. Crystal design and topological relations between crystal structures
- Group-subgroup relations in periodic nets.
Subnets and supernets
- Maximum-symmetry embedding of the periodic net; working with the Systre program
- Mappings between space-group symmetry and topology of the periodic net
- Searching for topological relations between nets; working with the net relation graph
- Applications of net relations to crystal design, reconstructive phase transitions, taxonomy of crystal structures

Participants are invited to bring their own data/structures to be analyzed, as well as personal computers on which to install the software.

Weekend intermission, 26-27 June: preparation for the second school

1. Basic facts on crystallographic groups
  1. Point groups. Elements of point symmetry. Groups, subgroups and the theorem of Lagrange. Generators. Conjugacy classes. Abelian groups and cyclic groups. Crystallographic point groups and abstract groups. Generation of point groups by composition series. Classification of crystallographic point groups.
  2. Crystallographic symmetry operations and their presentation by matrices. Space groups. Translation groups and coset decompositions of space groups. Symmorphic and non-symmorphic space groups. Generation of space groups by composition series.
  3. Group-subgroup relations of point and space groups.

Irreducible representations of space groups

The second school ran over five days, from 28 June to 2 July.

2. Representations of crystallographic groups (3 days)
  1. General remarks on representations. Representations of discrete groups. Equivalence of representations. Unitary representations. Invariant subspaces and reducibility. Theorem of orthogonality. Characters of representations and character tables.
  2. Representations of point groups. Representations of Abelian groups: cyclic groups and direct products of cyclic groups. Character tables of representations of point groups. Online databases for point-group representations.
  3. Induction procedure for the derivation of the representations of crystallographic groups. Subduced and induced representations.
Conjugate representations and orbits. Little groups, allowed representations and the induction theorem. Induction procedure for indices 2 and 3. Representations of some point groups by the induction procedure.
  4. Representations of space groups. Representation of the translation group. Star of a representation. Little groups and small representations. Representations of symmorphic and non-symmorphic groups. Online tools for the derivation of space-group representations.

3. Applications of representation theory in solid-state physics and chemistry (2 days)
  1. Vibrations in molecules and solids
    1. Molecular dynamics. Small oscillations and normal modes. Zero modes and vibrational modes. Mechanical and vibrational representations. Dynamical matrix in symmetry-adapted coordinates.
    2. Electronic energy bands and phonon spectra. Assignment of small representations. Compatibility relations. Symmetry-adapted bases. Partial diagonalization of the dynamical matrix.
    3. Direct products of irreducible representations and selection rules - general formulation. Selection rules in molecular spectroscopy: rotational and vibrational absorption, infrared and Raman effect. Direct products of space-group representations and selection rules. Online tools for infrared and Raman selection rules.
  2. Structural phase transitions
    1. Representation theory tools in the analysis of phase transitions. Primary and secondary order parameters; couplings and faintness index. Order parameter direction and isotropy subgroups. Group-theoretical formulation of the necessary conditions for second-order phase transitions.
    2. Symmetry-mode analysis of structural phase transitions. Hierarchy of modes. Symmetry-mode applications in structure refinement. Online tools for symmetry-mode analysis.
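Among the topics in the first school's Module 1, coordination sequences are particularly easy to illustrate: the k-th term counts the vertices of a net at graph distance k from a chosen vertex, and such sequences are among the indices used for topological identification of nets (cf. "working with the TTD collection" above). A breadth-first sketch for the simple square net -- an illustration only, unrelated to the TOPOS/Systre software used in the course:

```python
def coordination_sequence(neighbors, origin, kmax):
    # BFS shells: counts[k-1] is the number of vertices at graph
    # distance exactly k from the origin vertex.
    seen = {origin}
    shell = [origin]
    counts = []
    for _ in range(kmax):
        nxt = []
        for v in shell:
            for w in neighbors(v):
                if w not in seen:
                    seen.add(w)
                    nxt.append(w)
        counts.append(len(nxt))
        shell = nxt
    return counts

def square_net(v):
    # 4-coordinated square lattice net on Z^2.
    x, y = v
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
```

For the square net the sequence is 4, 8, 12, 16, ... (i.e. 4k); different nets generally have different sequences, which is what makes them useful as topological fingerprints.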
• 9:00-10:30 - morning session I
• 10:30-11:00 - coffee break
• 11:00-12:30 - morning session II
• 12:30-14:00 - lunch
• 14:00-16:00 - afternoon session I
• 16:00-16:30 - coffee break
• 16:30-19:00 - afternoon session II

The official language of the schools was English. No simultaneous interpretation was provided.

• Prof. Vladislav Blatov, Samara State University (Russia)
• Prof. Davide Proserpio, Department CSSI - University of Milan (Italy)
• Prof. Mois Aroyo, Universidad del Pays Vasco (Spain)
• Prof. Juan Manuel Perez-Mato, Universidad del Pays Vasco (Spain)
• Prof. Boriana Mihailova, University of Hamburg (Germany)
• Dr. Bernd Souvignier, Radboud University Nijmegen (The Netherlands)
• Prof. Massimo Nespolo, Nancy-Université (France)

Local organizing committee
• Prof. Massimo Nespolo, CRM2, Institut Jean Barriol, Nancy-Université
• Ms Anne Clausse, CRM2, Institut Jean Barriol, Nancy-Université

Online documents

The Schools were held at Amphitheatre No. 8 of the Faculty of Sciences and Technologies of the Université Henri Poincaré Nancy I (GPS coordinates: latitude 48.6653088, longitude 6.1589755). The Faculty campus is located at Vandoeuvre-les-Nancy, in the immediate suburbs of Nancy, and can be reached from Nancy railway station in about 15-20 minutes. Google Map of the campus.
List of participants
(columns: Name, Country, email, Topological school, Irreps school; X marks attendance)
1. Erik Arroyabe, Austria, X
2. Volker Kahlenberg, Austria, X
3. Liliana Dobrzanska, Belgium, X
4. Jian Lu, China, X X
5. Yang Tao, China, X X
6. Guo-Ping Yang, China, X X
7. Neven Krajina, Croatia, X
8. Frantisek Laufek, Czech Republic, X
9. Abdellatif Bensegueni, France, X
10. Mariya Brezgunova, France, X X
11. Slawomir Domagala, France, X X
12. Charlotte Martineau, France, X
13. Narjes Beigom Mortazavi, France, X
14. Agnieszka Paul, France, X X
15. Isabella Pignatelli, France, X X
16. Pascalita Prosper, France, X X
17. Romain Sibille, France, X X
18. Michael Bodensteiner, Germany, X X
19. Tatiana Gorelik, Germany, X
20. Daniel Lassig, Germany, X
21. Jörg Lincke, Germany, X
22. Axel Pelka, Germany, X X
23. Guntram Schmidt, Germany, X X
24. Barbara Szafranowska, Germany, X X
25. Jagan Rajamony, India, X X
26. Mattia Allietta, Italy, X
27. Giulio Giulio, Italy, X
28. Pavlo Solokha, Italy, X X
29. Adrian Mermer, Poland, X X
30. Agnieszka Plutecka, Poland, X
31. Magdalena Wilk, Poland, X X
32. Eugenia Peresypkina, Russia, X X
33. Alexander Virovets, Russia, X X
34. Anjana Chanthapally, Singapore, X X
35. Goutam Kumar Kole, Singapore, X X
36. Maria Celeste Bernini, Spain, X
37. Ainhoa Calderon Casado, Spain, X
38. Richard Dvries, Spain, X
39. Roberto Fernandez de Luis, Spain, X
40. Manuela Eloïsa Medina Munoz, Spain, X
41. Josefina Perles, Spain, X
42. Ana Platero, Spain, X
43. Jimmy Retrepo Guisao, Spain, X
44. Edurne Serrano Larrea, Spain, X
45. Emre Tasci, Spain, X
46. Julia Dshemuchadse, Switzerland, X
47. Arkadiy Simonov, Switzerland, X
48. Asli Ozturk, Turkey, X X
49. Thirumurugan Alagarsamy, UK, X
50. Vladimir Bon, Ukraine, X X
51. Amartya Sankar Banerjee, USA, X
52. Maw Lin Foo, USA, X
53. Maciej Haranczyk, USA, X
54. Vincent Jusuf, USA, X
55. Lusann Wreng Yang, USA, X X
56. Elliott S. Ryan, USA, X

Participants to the school on Topological Crystal Chemistry

Inquiries should be sent to .
The Organizers of the Nancy 2010 MaThCryst schools have observed the basic policy of non-discrimination and affirm the right and freedom of scientists to associate in international scientific activity without regard to such factors as citizenship, religion, creed, political stance, ethnic origin, race, colour, language, age or sex, in accordance with the Statutes of the International Council for Science. At these schools no barriers existed which would have prevented the participation of bona fide scientists.
Topic: Do We Learn All the Math We Need For Ordinary Life Before 5th
Replies: 1. Last Post: Jan 15, 2013 1:50 PM

Richard Hake (Posts: 1,205; From: Woodland Hills, CA 91367; Registered: 12/4/04)
Posted: Jan 14, 2013 12:04 PM

Some subscribers to MathEdCC might be interested in a recent post "Do We Learn All the Math We Need For Ordinary Life Before 5th Grade?" [Hake (2013)]. The abstract reads:

ABSTRACT: In response to my post "Einstein on Testing" [Hake (2013)] at <http://bit.ly/UHjqET>, the following lively exchange was recorded on the archives <http://yhoo.it/iNTxrH> of EDDRA2 [non-subscribers may have to set up a "Yahoo account" as instructed at

a. Literature major and Standardista-basher Susan Ohanian <http://www.susanohanian.org/> stated that she (paraphrasing) "never seemed to gain any insight from solving the calculus problems in Courant's text, which struck her then as plodding and now as without

b. Susan Harman then opined (my CAPS): "WE LEARN ALL THE MATH WE NEED FOR ORDINARY LIFE BEFORE 5TH GRADE."

c. Guy Brandenberg countered by calling attention to David Berlinski's "Tour of the Calculus" <http://amzn.to/11sZIUv>, whose publisher states: "Were it not for the calculus, mathematicians would have no way to describe the acceleration of a motorcycle or the effect of gravity on thrown balls and distant planets, or to prove that a man could cross a room and eventually touch the opposite wall."

d. And Susan Ramlo made the point that students in her algebra-based physics class "almost always make a comment about how suddenly . . . [[after exposure to the *real-world* of physics]] . . . much more of calculus makes sense."
With regard to Harman's opinion that "We Learn All the Math We Need For Ordinary Life Before 5th Grade," basic to "ordinary life" is motion and change, requiring the rudiments of calculus for proper understanding (see the Bartlett signature quote). And I agree with Ramlo's point about students better understanding calculus after exposure to the *real world* of physics. In "Interactive-engagement methods in introductory mechanics courses" at <http://bit.ly/aH2JQN> I wrote: "the term 'substantive non-calculus-based mechanics course' is an oxymoron."

To access the complete 13 kB post please click on <http://bit.ly/10sYmKl>.

Richard Hake, Emeritus Professor of Physics, Indiana University
Links to Articles: <http://bit.ly/a6M5y0>
Links to Socratic Dialogue Inducing (SDI) Labs: <http://bit.ly/9nGd3M>
Academia: <http://bit.ly/a8ixxm>
Blog: <http://bit.ly/9yGsXh>
GooglePlus: <http://bit.ly/KwZ6mE>

"The greatest shortcoming of the human race is our inability to understand exponential change." - Albert Bartlett <http://bit.ly/VpN2pm> [I have taken the liberty of substituting "exponential change" for Bartlett's more esoteric "the exponential function."]

REFERENCES [URL shortened by <http://bit.ly/> and accessed on 13 Jan 2013.]
Hake, R.R. 2013. "Do We Learn All the Math We Need For Ordinary Life Before 5th Grade?" online on the OPEN! AERA-L archives at <http://bit.ly/10sYmKl>. Post of 13 Jan 2013 16:52:01-0800 to AERA-L and Net-Gold. The abstract and link to the complete post are being transmitted to several discussion lists and are also on my blog "Hake'sEdStuff" at <http://bit.ly/RQkucu> with a provision for

Date Subject Author
1/14/13 - Do We Learn All the Math We Need For Ordinary Life Before 5th - Richard Hake
1/15/13 - Re: Do We Learn All the Math We Need For Ordinary Life Before 5th - Richard Hake
Limit on two variables using the polar form

September 12th 2010, 08:26 AM #1

Well, I've made a double limit using the polar forms. The thing is the limit is wrong: I've made a plot, and then I saw that the limit doesn't exist, and what I wanna know is what I'm reasoning wrong, and some tips to get a deeper comprehension of these limits and of what I am doing. For the last one I wanna know the limit value; I think it doesn't exist either. Is it because the sine and cosine oscillate?

$\displaystyle\lim_{(x,y)\to(0,0)}\frac{xy}{xy+(x-y)^2}$

$x=r\cos\theta,\qquad y=r\sin\theta$

$\displaystyle\lim_{r\to 0^+}\frac{r^2\cos\theta\sin\theta}{r^2\cos\theta\sin\theta+(r\cos\theta-r\sin\theta)^2}=\lim_{r\to 0^+}\frac{r^2\cos\theta\sin\theta}{r^2[\cos\theta\sin\theta+(\cos\theta-\sin\theta)^2]}=$

$=\displaystyle\lim_{r\to 0^+}\frac{\cos\theta\sin\theta}{\cos\theta\sin\theta+\cos^2\theta-2\cos\theta\sin\theta+\sin^2\theta}=\lim_{r\to 0^+}\frac{\cos\theta\sin\theta}{r^2-\cos\theta\sin\theta}=-1$

$r$ is always positive, as we defined it.

$\displaystyle\lim_{(x,y)\to(-1,3)}\frac{\sqrt{x+y-2}}{(x+1)^2+(y-3)^2}$

$\displaystyle\lim_{r\to 0^+}\frac{\sqrt{-1+r\cos\theta+3+r\sin\theta-2}}{r^2}=\lim_{r\to 0^+}\frac{\sqrt{r}\sqrt{\cos\theta+\sin\theta}}{r^2}=\lim_{r\to 0^+}\frac{\sqrt{\cos\theta+\sin\theta}}{r^{3/2}}$

Bye there, thanks for posting.
well already at this point you see that limit doesn't exist

$\displaystyle\lim_{r\to 0^+}\frac{\cos\theta\sin\theta}{\cos\theta\sin\theta+\cos^2\theta-2\cos\theta\sin\theta+\sin^2\theta}=\frac{\cos\theta\sin\theta}{1+\cos\theta\sin\theta-2\cos\theta\sin\theta}=\frac{\cos\theta\sin\theta}{1-\cos\theta\sin\theta}$

because for different values of the $\theta$ you have different values of limit...

Last edited by yeKciM; September 12th 2010 at 09:19 AM.

Thanks. I've lost a sine on the way; I've already corrected it, but I think your answer holds. Anyway, as I got the same in the numerator and in the denominator for the first limit, except that in the denominator I got the square of the radius and the expression is negative, I thought it tended to $-1$ when the radius $\to 0$. What's wrong with that?

$\sin^2\theta+\cos^2\theta=1$, that's something you missed there. You wrote $r^2$ there, and $\displaystyle\lim_{r\to\text{anything}}$ of something without $r$ is just that something.

P.S. I edited a little that #2 post.

Last edited by yeKciM; September 12th 2010 at 09:22 AM.
$\displaystyle\lim_{(x,y)\to(0,0)}\frac{xy}{xy+(x-y)^2}$

$x=r\cos\theta,\qquad y=r\sin\theta$

$\displaystyle\lim_{r\to 0^+}\frac{r^2\cos\theta\sin\theta}{r^2\cos\theta\sin\theta+(r\cos\theta-r\sin\theta)^2}=\lim_{r\to 0^+}\frac{r^2\cos\theta\sin\theta}{r^2[\cos\theta\sin\theta+(\cos\theta-\sin\theta)^2]}=$

$=\displaystyle\lim_{r\to 0^+}\frac{\cos\theta\sin\theta}{\cos\theta\sin\theta+\cos^2\theta-2\cos\theta\sin\theta+\sin^2\theta}$

Okay, I am with you to here.

$=\displaystyle\lim_{r\to 0^+}\frac{\cos\theta\sin\theta}{r^2-\cos\theta\sin\theta}=-1$

How did the "r" get back into this fraction? (Added: $\sin^2(\theta)+\cos^2(\theta)=1$, not $r^2$!) From the line above this you can cancel both the $r^2$ and the $\cos(\theta)\sin(\theta)$ in the numerator with the same in the denominator, getting $\displaystyle\lim_{r\to 0^+}\frac{1}{(\cos(\theta)-\sin(\theta))^2}$, which clearly depends on $\theta$. For example, if $\theta=0$, that is 1. If $\theta=\pi/4$ it is not defined. That alone is enough to tell you that the limit does not exist.

$r$ is always positive, as we defined it.

$\displaystyle\lim_{(x,y)\to(-1,3)}\frac{\sqrt{x+y-2}}{(x+1)^2+(y-3)^2}$

$\displaystyle\lim_{r\to 0^+}\frac{\sqrt{-1+r\cos\theta+3+r\sin\theta-2}}{r^2}=\lim_{r\to 0^+}\frac{\sqrt{r}\sqrt{\cos\theta+\sin\theta}}{r^2}=\lim_{r\to 0^+}\frac{\sqrt{\cos\theta+\sin\theta}}{r^{3/2}}$

Bye there, thanks for posting.

You're right, HallsofIvy. What happened with that $r^2$ is that I thought of the sine and the cosine as being $x$ and $y$ :P I see my mistake now; your help was clarifying. Thank you both. Bye there!
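The path dependence pointed out in the replies can also be seen numerically, by evaluating the first function along different rays through the origin (the radius 1e-8 and the particular angles are arbitrary choices for illustration):

```python
import math

def f(x, y):
    # f(x, y) = xy / (xy + (x - y)^2), the first limit in the thread
    return x * y / (x * y + (x - y) ** 2)

r = 1e-8
for theta in (0.0, math.pi / 6, math.pi / 4):
    x, y = r * math.cos(theta), r * math.sin(theta)
    print(theta, f(x, y))   # different values along different rays
```

Along theta = 0 the value is 0, along theta = pi/4 it is 1, so no single limit exists at the origin.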
Derivative and Tangent Line Help

October 3rd 2007, 07:19 PM #1

I need help with these two problems:

1. Find the derivative of $f(x)=\left(\frac{2x}{x+1}\right)^2$

2. Find the points on the graph of the function $f(x)=\frac{x+1}{x-1}$ where the tangent line(s) to the graph are parallel to the line $x+2y=6$.

Can someone walk me through the second one more carefully? I can't find an example of something like it in my notebook. Thanks for any help in advance.

Last edited by ebonyscythe; October 4th 2007 at 05:17 AM. Reason: Questions Solved

Hello, ebonyscythe!

2. Find the points on the graph of the function $f(x)=\frac{x+1}{x-1}$ where the tangent line(s) to the graph are parallel to the line $x+2y=6$.

The line is: $y=-\frac{1}{2}x+3$. It has slope $m=-\frac{1}{2}$.

The slope of the tangent to $f(x)$ is: $f'(x)=\frac{(x-1)\cdot 1-(x+1)\cdot 1}{(x-1)^2}=\frac{-2}{(x-1)^2}$

Since they are parallel, their slopes are equal: $-\frac{1}{2}=\frac{-2}{(x-1)^2}$

Multiply by $-2(x-1)^2$: $(x-1)^2=4\quad\Rightarrow\quad x-1=\pm 2\quad\Rightarrow\quad x=1\pm 2$

Hence $x=3,\,-1$. The corresponding y-values are $y=2,\,0$.

Therefore, the points are $(3,\,2)$ and $(-1,\,0)$.

Perfect! Thank you... I was thinking that the concept was harder than that, but you made it crystal clear. Thanks again!

When I worked on question 1, I keep getting a different answer than what my TI-89 gives me. I seem to have misplaced a negative or something. Here's my work:

f(x) = (2x/(x+1))^2
f '(x) = 2(2x/(x+1))((2x*1 - 2(x+1))/(x+1)^2)  -->  (chain rule and quotient rule)
f '(x) = 2((2x/(x+1))*(2x-2(x+1)/(x+1)^2))
f '(x) = 2((4x^2-4x(x+1))/(x+1)^3)
f '(x) = 2((4x^2-4x^2-4x)/(x+1)^3)
f '(x) = -8x/(x+1)^3

And my calculator gives me 8x/(x+1)^3. What did I do wrong?

Never mind! I realized I was using the quotient rule wrong...
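The two tangent points found in the thread are easy to sanity-check numerically with a central-difference derivative (the step h and the tolerance are arbitrary choices):

```python
def f(x):
    return (x + 1) / (x - 1)

def fprime(x, h=1e-6):
    # central-difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

# The line x + 2y = 6 has slope -1/2; both tangent points should match it.
for x0 in (3.0, -1.0):
    print(x0, fprime(x0))   # both slopes come out close to -0.5
```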
Math, CyberKids and the Internet Third grade students visited Mr. Bulaevsky's website - Integer Bars - and had fun using his colored bars to make sums - only one of many possible math lessons using this applet. In the first activity the students were told to find as many ways to make the sum of 8 using exactly 3 integer bars. At first this was a challenge for some but they eventually arranged their integer bars on the screen to total the number 8. After the students arranged "number trains" using 3 bars to make a total of 8, they were challenged to make as many "number trains" as they could using only 4 bars to make the number 10. Math class flew by in a flash and the students did not have time to write the number sentences for their "trains." Mrs. Weeg printed their work (using a print screen capture) and the next day the students wrote number sentences for each train that equals the desired sum.
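For the curious, the counting behind these "number train" tasks can be checked with a short script. This sketch assumes bar lengths 1 through 10 and treats different orderings of the same bars as different trains:

```python
from itertools import product

def count_trains(total, bars, max_len=10):
    """Count ordered 'number trains' of `bars` integer bars summing to `total`."""
    return sum(1 for t in product(range(1, max_len + 1), repeat=bars)
               if sum(t) == total)

print(count_trains(8, 3))    # 21 ordered trains of 3 bars totaling 8
print(count_trains(10, 4))   # 84 ordered trains of 4 bars totaling 10
```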
Category:Statistical indicator
From Statistics Explained
The category Statistical indicator of the Statistics Explained glossary contains definitions of variables for which statistical data are collected or calculated.
Pages in category "Statistical indicator"
The following 200 pages are in this category, out of 469 total.
[SciPy-User] scipy.sparse.csr_matrix.matmat deprecation question
Nathan Bell wnbell@gmail....
Sun Jun 27 14:05:34 CDT 2010

On Fri, Jun 25, 2010 at 1:09 PM, Andrew Schein <andrew@andrewschein.com> wrote:
> I would like to perform a matrix multiplication of the form
> A * B
> where A is dense and B is sparse CSR or COO. Does scipy.sparse have this
> capability and will it in the future? How fast is the scipy implementation
> in comparison to INTEL MKL?
> It appears that there is a .matmat function that has been deprecated. Does
> this reflect a retreat, or is the functionality found in some other place?
> Thanks,

Hi Andrew,

All sparse matrix multiplication functionality is exposed via __mul__() now, so the matmat function is unnecessary. Simply using A*B should do the appropriate thing.

I don't know how the speed compares to MKL, but the code is implemented in C++ so it should be reasonably fast.

Nathan Bell wnbell@gmail.com
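A minimal sketch of the behavior Nathan describes (dense A times sparse B through the overloaded multiplication operator); the small matrices here are made-up examples:

```python
import numpy as np
from scipy.sparse import csr_matrix

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])          # dense
B = csr_matrix([[0.0, 1.0],
                [2.0, 0.0]])        # sparse CSR

C = A * B                           # matrix product via sparse __mul__/__rmul__
print(np.asarray(C))                # same values as A @ B.toarray()
```

Note that with the spmatrix classes (csr_matrix, coo_matrix, ...) the `*` operator means matrix multiplication, not elementwise multiplication.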
Lagrange Multipliers (Question)

March 6th 2010, 04:58 PM

Hello, I am trying to find the absolute maximum and/or minimum of $f(x,y)=xy$ on the circle $x^2+y^2=1$. I am stuck and would like some help if anyone can help me. This is what I have so far:

$\nabla f(x,y)=\lambda(\nabla g(x,y))$

$\nabla f(x,y)=y\,\mathbf{i}+x\,\mathbf{j}$

$\lambda\nabla g(x,y)=\lambda(2x\,\mathbf{i}+2y\,\mathbf{j})$

Setting the components equal to each other: $y=2\lambda x,\qquad x=2\lambda y$

Constraint: $x^2+y^2=1$

I am having trouble solving for $x, y, \lambda$. I have 3 equations (including the constraint) and 3 unknowns; it should work out. Can anyone help me? This is an algebra question, I feel like a moron.

March 6th 2010, 05:34 PM

Hello, VkL!

Sorry, I don't understand the method you were taught.

Find extreme values of: $f(x,y)=xy$ on the circle $x^2+y^2=1$.

We have: $F(x,y,\lambda)=xy+\lambda(x^2+y^2-1)$

Set the partial derivatives equal to zero and solve:

$F_x = y+2x\lambda = 0$  [1]
$F_y = x+2y\lambda = 0$  [2]
$F_\lambda = x^2+y^2-1 = 0$  [3]

Solve [1] for $\lambda$: $\lambda=-\frac{y}{2x}$  [4]

Solve [2] for $\lambda$: $\lambda=-\frac{x}{2y}$  [5]

Equate [4] and [5]: $-\frac{y}{2x}=-\frac{x}{2y}\quad\Rightarrow\quad y^2=x^2\quad\Rightarrow\quad y=\pm x$  [6]

Substitute into [3]: $x^2+x^2-1=0\quad\Rightarrow\quad 2x^2=1\quad\Rightarrow\quad x=\pm\frac{1}{\sqrt{2}}$

Substitute into [6]: $y=\pm\frac{1}{\sqrt{2}}$

Therefore:

Maximum: $\left(\frac{1}{\sqrt{2}},\,\frac{1}{\sqrt{2}}\right)$ and $\left(-\frac{1}{\sqrt{2}},\,-\frac{1}{\sqrt{2}}\right)$
Minimum: $\left(\frac{1}{\sqrt{2}},\,-\frac{1}{\sqrt{2}}\right)$ and $\left(-\frac{1}{\sqrt{2}},\,\frac{1}{\sqrt{2}}\right)$

March 6th 2010, 10:48 PM

Both of these are the same.
Creating this $F$ is the same as making the two gradients a multiple of each other. The third equation is our constraint, $g(x,y)$.

March 7th 2010, 03:24 AM

Soroban, thank you very much for clearly explaining step by step! It was solving for lambda I got stuck on. I don't know why. Once again, thank you very much!

March 7th 2010, 03:57 AM

Since the Lagrange multiplier method typically results in equations like $f_1(x,y,z)=\lambda g_1(x,y,z)$, $f_2(x,y,z)=\lambda g_2(x,y,z)$, etc., and the value of $\lambda$ itself is not relevant to the solution, I find it useful to divide one equation by another, immediately eliminating $\lambda$. In this case, setting $\nabla f(x,y,z)=\lambda\nabla g(x,y,z)$, which is the method I tend to use and is, of course, equivalent to Soroban's, we have $y=2\lambda x$ and $x=2\lambda y$. Dividing the first by the second, $\frac{y}{x}=\frac{2\lambda x}{2\lambda y}=\frac{x}{y}$, which immediately gives $x^2=y^2$ and so $x=\pm y$. Putting those into the constraint $x^2+y^2=1$ gives Soroban's solutions.
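The extrema of $f(x,y)=xy$ on the unit circle can also be sanity-checked numerically, by parameterizing the constraint as $(\cos t, \sin t)$ and scanning (the grid size is an arbitrary choice):

```python
import math

# Sample f(x, y) = x*y on the unit circle x = cos(t), y = sin(t)
n = 100_000
vals = [math.cos(2 * math.pi * k / n) * math.sin(2 * math.pi * k / n)
        for k in range(n)]

print(max(vals), min(vals))   # close to 0.5 and -0.5
```

These agree with the points found above: at $(\pm 1/\sqrt{2}, \pm 1/\sqrt{2})$ (same signs) the product is $1/2$, and with opposite signs it is $-1/2$.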
Z-test

The Z-test compares a sample mean with the population mean to determine whether there is a significant difference. It requires a simple random sample from a population with a Normal distribution whose mean is known.

The z measure is calculated as:

z = (x - m) / SE

where x is the sample mean to be standardized, m (mu) is the population mean, and SE is the standard error of the mean:

SE = s / SQRT(n)

where s is the population standard deviation and n is the sample size.

The z value is then looked up in a z-table. A negative z value means the sample mean is below the population mean (the sign is ignored in the table lookup).

The Z-test is typically used with standardized tests, checking whether the scores from a particular sample are within or outside the standard test performance. The z value indicates the number of standard-deviation units between the sample mean and the population mean.

Note that the z-test is not the same as the z-score, although they are closely related.
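The two formulas above translate directly into code; the numbers in the example are invented for illustration:

```python
import math

def z_statistic(sample_mean, pop_mean, pop_sd, n):
    """z = (x - m) / SE, with SE = s / sqrt(n)."""
    se = pop_sd / math.sqrt(n)       # standard error of the mean
    return (sample_mean - pop_mean) / se

# Sample of 25 test scores averaging 105, population mean 100, sd 15:
# SE = 15 / 5 = 3, so z = (105 - 100) / 3, about 1.67
print(z_statistic(105, 100, 15, 25))
```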
[uf-new] an equation/MathML/TeX microformat?
Jeff McNeill jeff at jeffmcneill.com
Thu Oct 25 20:52:23 PDT 2007

Aloha Paul, et al,

Microformats won't fix any display problems that folks have, but could help with, say, the problem of searching for a given formula across semantically marked-up content. However, since the problem to be solved seems to be rendering the math visually, with a variety of ways of doing that, would a microformat require all the formulations (aka mathml, texvc, mimetex) to provide support for yet another markup? (Paul, I recall your comments on my blog entry trying to implement just such rendering on a mediawiki install.) You are right, it is a mess. Not sure how microformats could help, though.

Jeff McNeill

On 10/25/07, Paul Topping <pault at dessci.com> wrote:
> Hi,
> I'm trying to determine whether microformats is the right venue for
> developing a standard math representation within HTML.
> Back in '98, many of us involved with the W3C's MathML standard had
> hopes of it being widely supported within most browsers in a few years.
> That has sort of happened. MathML is supported natively within Firefox
> but users experience font problems and it only works if pages are XHTML,
> rather than HTML. My company's free MathPlayer plugin makes MathML work
> in Internet Explorer. MathML support is still missing from Safari,
> Opera, and other browsers. People interested in publishing math on the
> web still find serving up pages as XHTML challenging (getting the MIME
> type right, etc.). Some websites, blogs, and wikis convert TeX or LaTeX
> to images on the server to handle equations in content. Quite frankly,
> the space is a mess.
> Regardless of whether the math is represented using MathML, TeX, LaTeX,
> or some other notation, it is important to expose the mathematical
> structure behind the equation to the client in order to support
> accessibility (i.e., allow screen readers to speak the math) and
> interoperability (e.g., allow users to copy equations from pages into
> Mathematica, MS Word docs, MathType, or new pages). What is needed is a
> consistent way to associate an underlying math representation with its
> visual representation, regardless of whether it is a GIF or PNG image or
> MathML formatted by the browser (or a browser plugin).
> This seems like a job for a microformat but I must admit that I have
> limited knowledge of the microformat philosophy. On one hand,
> microformats embed semantic representations in HTML in a practical but
> rigorous way. On the other hand, in most (all?) microformats the
> representation is visible in the browser. In the kind of representation
> I'm imagining, the user won't actually see the actual MathML or TeX code
> in the browser window.
> Thoughts? Is microformats the right place for this kind of thing?
> Paul Topping
> Design Science, Inc.
> www.dessci.com
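For concreteness, the kind of association Paul describes could be sketched like this. This is purely hypothetical markup, not an existing microformat; the class names "math" and "math-tex" and the file name are invented for illustration:

```html
<!-- Hypothetical sketch: pair a rendered equation image with a hidden,
     machine-readable TeX source that tools could extract. -->
<span class="math">
  <img src="quadratic.png"
       alt="x = (-b +/- sqrt(b^2 - 4ac)) / (2a)" />
  <span class="math-tex" style="display:none">
    x = \frac{-b \pm \sqrt{b^2 - 4ac}}{2a}
  </span>
</span>
```

The visible part stays an image (or browser-rendered MathML), while the hidden span carries the structure needed for accessibility and copy/paste interoperability.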
Hellertown Science Tutor Find a Hellertown Science Tutor ...Applied Economics 4. Managerial Economics 5. Economics, Theory of Organizations My knowledge of soccer is derived from 10 years of league play, 10 years of youth coaching and countless hours of coaching clinics through out the years. 27 Subjects: including physical science, psychology, ACT Science, calculus ...I have worked with students from all backgrounds; I have also worked extensively with children with disabilities. I hold a BA in Anthropology from Florida Atlantic University and am currently a graduate student in the Anthropology Department. As a lifelong student and educator, excellent study skills are essential to my daily responsibilities. 55 Subjects: including astronomy, biology, ecology, physical science ...Each problem, diagram, or animation would be explained step by step to ensure understanding and help secure the students confidence on the problem material. By using a step by step method of explanation, I have found that the student has more of the ability to pinpoint the exact issue that needs... 3 Subjects: including biology, chemistry, anatomy ...For instance, many of my organic chemistry tutees at Lehigh signed up after a poor performance on the notoriously difficult first exam. Despite this, every student who consistently came to my sessions completed the course with a B or better. I'm a born educator, and I dream about lecturing and conducting research as a university professor. 37 Subjects: including physics, LSAT, proofreading, public speaking ...I am currently a health teacher in a middle school. I have an additional 45 credits in grad work on methodologies in the class room, administration, and instruction. My passion to work with kids to help them with their studies runs deep. 4 Subjects: including nutrition, study skills, Microsoft Word, Microsoft PowerPoint
PDEtools[IntegratingFactors] - computes generalized integrating factors for a system of differential equations (DE)

PDEtools[IntegratingFactorTest] - tests whether a given list of expressions is a list of generalized integrating factors for the given system of differential equations

Calling Sequence
IntegratingFactors(PDESYS, DepVars, _mu = ..., displayfunctionality = ..., jetnotation = ..., simplifier = ..., split = ...)
IntegratingFactorTest(Mu, PDESYS, DepVars)

Parameters
PDESYS - a system consisting of an equation or a list of equations involving partial and/or ordinary (possibly not differential) equations
Mu - a generalized integrating factor returned by IntegratingFactors
DepVars - optional - a specification of the unknown(s) in PDESYS
_mu = ... - optional - indicates the functional form of the generalized integrating factors
displayfunctionality = ... - optional - can be true (default) or false, to display the functionality on the left-hand side of the mu[k] (generalized integrating factors) functions
jetnotation = ... - optional - can be false (default), jetvariables, jetvariableswithbrackets, jetnumbers or jetODE; to respectively return or not using the different jet notations
order = ... - optional - indicates the maximum differential order of the derivatives entering the dependence of the integrating factors
simplifier = ... - optional - indicates the simplifier to be used instead of the default simplify/size
split = ... - optional - can be true (default) or false, to split the DE system to be solved in order to compute the J[k] functions
typeofintegratingfactor = ... - optional - can be polynomial or functionfield

Description
• Given a system PDESYS consisting of N equations PDE[n] = 0, n = 1 .. N, in the unknowns DepVars, the generalized integrating factors are expressions mu[n] such that Sum(mu[n]*PDE[n], n = 1 .. N) = Divergence(J) = 0, so J is a conserved current.
These generalized integrating factors, also called characteristic functions of conserved currents (see reference [1]), coincide with the traditional integrating factors when there is only one independent variable, so that PDESYS is a system of ODEs.
• The command IntegratingFactors computes these generalized integrating factors. The command IntegratingFactorTest verifies the result for correctness.
• Given the system PDESYS, the output of IntegratingFactors is a sequence of lists, each one containing N integrating factors mu[alpha, n], n = 1 .. N, satisfying Sum(mu[alpha, n]*PDE[n], n = 1 .. N) = Divergence(J[alpha]) = 0.
• The mu[alpha, n] are computed by constructing the PDE system they satisfy, obtained by applying Euler's operator to Sum(mu[alpha, n]*PDE[n], n = 1 .. N), then solving this system for the mu[alpha, n] using pdsolve.
• By default, the integrating factors are searched as functions depending on the derivatives of the unknowns of the system (specified as DepVars or automatically detected) up to the order d-1, where d is the highest order of derivatives entering PDESYS. This default can be changed by optionally passing the argument order = m, where m is a nonnegative integer.
• By default, the conserved currents are searched as functions with no pre-specified form, just with the dependency explained in the previous paragraph. This default can be changed with the option typeofconservedcurrent = ..., where the right-hand side can be polynomial or functionfield, respectively indicating a conserved current of polynomial type, or of a functionfield type with the meaning explained in FunctionField.
• By default, the functionality of the mu[alpha, n], entering the left-hand sides of each element in the returned lists, is displayed, the output is presented in functional notation instead of jet notation, and is simplified with respect to its size. The PDE system solved to compute the mu[alpha, n] is also split, when that is possible, before being tackled. All these defaults can be changed by passing the optional arguments displayfunctionality = ..., jetnotation = ..., simplifier = ..., split = false.
• It is also possible to directly specify the functionality expected for the mu[alpha, n] using _mu = .... See the examples for a demonstration of the use of this parameter.
• To avoid having to remember the optional keywords, if you type the keyword misspelled, or just a portion of it, a matching against the correct keywords is performed, and when there is only one match, the input is automatically corrected.

Examples
Consider the following PDE "system" consisting of a single pde
Two generalized integrating factors are
Note that is already the divergence of a function, so that a constant (the number 1 in the result above) is an integrating factor.
To verify these integrating factors for correctness use
The conserved currents are related to the generalized integrating factors via Sum(mu[alpha, n]*pde[n], n = 1 .. N) = Divergence(J[alpha]) = 0.
These are the J[alpha] corresponding to the mu[alpha] computed above; they depend on arbitrary functions.
To verify these results use
An example where the integrating factor depends on an arbitrary function:
For this example, integrating factors up to order 1, that is, depending at most on first order derivatives, are
which is in agreement with the general result obtained first. This is a related conserved current of order 1
Specifying directly the functionality expected also confirms that there is no non-trivial integrating factor depending only on and , but there is one depending on an arbitrary function of , and
In various cases it is simpler, or of more use, to compute integrating factors of polynomial type, or with a mathematical function dependency on the field of functions of the input system. For these purposes use the option typeofintegratingfactor = ..., where the right-hand side can be polynomial or functionfield.
For example, for , a polynomial integrating factor, presented without specializing the arbitrary constants (option split = false), is
The following application of Euler's operator to shows that is already a divergence of a function
This is a conserved current with the same functionality of the last integrating factor computed
and a verification of the result

See Also
ConservedCurrents, ConservedCurrentTest, Euler, PDEtools

References
[1] Olver, P.J. Applications of Lie Groups to Differential Equations. Graduate Texts in Mathematics. Springer-Verlag, 1993.
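The worked Maple expressions above were lost in extraction. As a language-neutral illustration of the classical single-independent-variable (ODE) case that these generalized integrating factors reduce to, the following Python sketch (not Maple code; the ODE y dx - x dy = 0 and the factor mu = 1/x^2 are chosen purely for illustration) checks numerically that multiplying by an integrating factor makes a non-exact differential form exact:

```python
# A form M dx + N dy = 0 is exact when dM/dy == dN/dx; an integrating
# factor mu is a function making mu*M dx + mu*N dy exact.
def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

M = lambda x, y: y        # y dx - x dy = 0 is not exact: dM/dy = 1, dN/dx = -1
N = lambda x, y: -x
mu = lambda x, y: 1.0 / x**2   # candidate integrating factor

muM = lambda x, y: mu(x, y) * M(x, y)
muN = lambda x, y: mu(x, y) * N(x, y)

for x, y in [(1.0, 2.0), (3.0, -1.5), (0.5, 0.25)]:
    gap = d_dy(muM, x, y) - d_dx(muN, x, y)
    print(abs(gap) < 1e-6)   # exactness holds at every test point
```

The same check is what a verification command performs symbolically: the exactness condition is the one-variable analogue of the divergence condition quoted in the description.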
{"url":"http://www.maplesoft.com/support/help/Maple/view.aspx?path=PDEtools/IntegratingFactors","timestamp":"2014-04-18T23:28:56Z","content_type":null,"content_length":"228711","record_id":"<urn:uuid:d5bdf630-5edd-4a0a-b985-7c98e2a83964>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00602-ip-10-147-4-33.ec2.internal.warc.gz"}
Understanding spatial relations

A primary function of a geographic information system is determining the spatial relationships between features. The distance separating a hazardous waste disposal site and a hospital, school, or housing development is an example of a spatial relationship.

Predicates are Boolean functions that return TRUE if a test passes and FALSE otherwise, to determine if a specific relationship exists between a pair of geometries. Other functions return a value as a result of a spatial relationship. The result returned by the distance function, the space separating two geometries, is a double precision number. Alternatively, functions like intersection return a geometry as the result of combining two geometries.

Predicates return t (TRUE) if a comparison meets the function's criteria; otherwise, they return f (FALSE). Predicates that test for a spatial relationship compare pairs of geometry that can be of different type or dimension. Predicates compare the X and Y coordinates of the submitted geometries. The Z coordinates and measure values, if they exist, are ignored. Geometries that have Z coordinates or measures can be compared with those that don't.

The Dimensionally Extended 9 Intersection Model (DE-9IM), developed by Clementini et al., dimensionally extends the 9 Intersection Model of Egenhofer and Herring. DE-9IM is a mathematical approach that defines the pair-wise spatial relationship between geometries of different types and dimensions. This model expresses spatial relationships among all types of geometry as pair-wise intersections of their interior, boundary, and exterior, with consideration for the dimension of the resulting intersections.

Given geometries a and b, I(a), B(a), and E(a) represent the interior, boundary, and exterior of a, and I(b), B(b), and E(b) represent the interior, boundary, and exterior of b.
The intersections of I(a), B(a), and E(a) with I(b), B(b), and E(b) produce a 3-by-3 matrix. Each intersection can result in geometries of different dimensions. For example, the intersection of the boundaries of two polygons could consist of a point and a linestring, in which case the dim function would return the maximum dimension of 1. The dim function returns a value of -1, 0, 1, or 2. The -1 corresponds to the null set that is returned when no intersection was found, or dim(∅).

                 b
           Interior          Boundary          Exterior
  Interior dim(I(a)∩I(b))    dim(I(a)∩B(b))    dim(I(a)∩E(b))
a Boundary dim(B(a)∩I(b))    dim(B(a)∩B(b))    dim(B(a)∩E(b))
  Exterior dim(E(a)∩I(b))    dim(E(a)∩B(b))    dim(E(a)∩E(b))

The results of the spatial relationship predicates can be understood or verified by comparing the results of the predicate with a pattern matrix that represents the acceptable values for the DE-9IM. The pattern matrix contains the acceptable values for each of the intersection matrix cells. The possible pattern values are:

T - An intersection must exist; dim = 0, 1, or 2.
F - An intersection must not exist; dim = -1.
* - It does not matter if an intersection exists or not; dim = -1, 0, 1, or 2.
0 - An intersection must exist and its maximum dimension must be 0; dim = 0.
1 - An intersection must exist and its maximum dimension must be 1; dim = 1.
2 - An intersection must exist and its maximum dimension must be 2; dim = 2.

Each predicate has at least one pattern matrix, but some require more than one to describe the relationships of various geometry type combinations. The pattern matrix of the Within predicate for geometry combinations has the following form:

                 b
           Interior  Boundary  Exterior
  Interior     T         *         F
a Boundary     *         *         F
  Exterior     *         *         *

Simply put, the Within predicate returns true when the interiors of both geometries intersect, and the interior and boundary of a do not intersect the exterior of b. All other conditions return false.

Equal returns t (TRUE) if two geometries of the same type have identical X,Y coordinate values.
Geometries are equal if they have matching X,Y coordinates. The DE-9IM pattern matrix for equality ensures that the interiors intersect and that no part of the interior or boundary of either geometry intersects the exterior of the other.

                 b
           Interior  Boundary  Exterior
  Interior     T         *         F
a Boundary     *         *         F
  Exterior     F         F         *

Disjoint returns t (TRUE) if the intersection of the two geometries is an empty set. Geometries are disjoint if they do not intersect one another in any way. The disjoint predicate's pattern matrix simply states that neither the interiors nor the boundaries of either geometry intersect.

                 b
           Interior  Boundary  Exterior
  Interior     F         F         *
a Boundary     F         F         *
  Exterior     *         *         *

Intersects returns t (TRUE) if the intersection does not result in an empty set. Intersects returns the exact opposite result of disjoint. The intersects predicate will return TRUE if the conditions of any of the following pattern matrices are met.

The intersects predicate returns TRUE if the interiors of both geometries intersect.

                 b
           Interior  Boundary  Exterior
  Interior     T         *         *
a Boundary     *         *         *
  Exterior     *         *         *

The intersects predicate returns TRUE if the interior of the first geometry intersects the boundary of the second geometry.

                 b
           Interior  Boundary  Exterior
  Interior     *         T         *
a Boundary     *         *         *
  Exterior     *         *         *

The intersects predicate returns TRUE if the boundary of the first geometry intersects the interior of the second.

                 b
           Interior  Boundary  Exterior
  Interior     *         *         *
a Boundary     T         *         *
  Exterior     *         *         *

The intersects predicate returns TRUE if the boundaries of both geometries intersect.

                 b
           Interior  Boundary  Exterior
  Interior     *         *         *
a Boundary     *         T         *
  Exterior     *         *         *

Touch returns t (TRUE) if none of the points common to both geometries intersect the interiors of both geometries. At least one geometry must be a linestring, polygon, multilinestring, or multipolygon.

Touch returns TRUE if either of the geometries' boundaries intersect or if only one of the geometry's interiors intersects the other's boundary.
The pattern matrices show us that the touch predicate returns TRUE when the interiors of the geometries don't intersect and the boundary of either geometry intersects the other's interior or boundary.

The touch predicate returns TRUE if the boundary of one geometry intersects the interior of the other but the interiors do not intersect.

                 b
           Interior  Boundary  Exterior
  Interior     F         T         *
a Boundary     *         *         *
  Exterior     *         *         *

The touch predicate returns TRUE if the boundary of one geometry intersects the interior of the other but the interiors do not intersect.

                 b
           Interior  Boundary  Exterior
  Interior     F         *         *
a Boundary     T         *         *
  Exterior     *         *         *

The touch predicate returns TRUE if the boundaries of both geometries intersect but the interiors do not.

                 b
           Interior  Boundary  Exterior
  Interior     F         *         *
a Boundary     *         T         *
  Exterior     *         *         *

Overlap compares two geometries of the same dimension and returns t (TRUE) if their intersection set results in a geometry different from both but of the same dimension. Overlap returns t (TRUE) only for geometries of the same dimension and only when their intersection set results in a geometry of the same dimension. In other words, if the intersection of two polygons results in a polygon, then overlap returns t (TRUE).

This pattern matrix applies to polygon/polygon, multipoint/multipoint, and multipolygon/multipolygon overlays. For these combinations the overlap predicate returns TRUE if the interior of both geometries intersects the other's interior and exterior.

                 b
           Interior  Boundary  Exterior
  Interior     T         *         T
a Boundary     *         *         *
  Exterior     T         *         *

This pattern matrix applies to linestring/linestring and multilinestring/multilinestring overlaps. In this case the intersection of the geometries must result in a geometry that has a dimension of 1 (another linestring).
If the dimension of the intersection of the interiors had resulted in 0 (a point), the overlap predicate would return FALSE; however, the cross predicate would have returned TRUE.

                 b
           Interior  Boundary  Exterior
  Interior     1         *         T
a Boundary     *         *         *
  Exterior     T         *         *

Cross returns t (TRUE) if the intersection results in a geometry whose dimension is one less than the maximum dimension of the two source geometries and the intersection set is interior to both source geometries. Cross returns t (TRUE) for only multipoint/polygon, multipoint/linestring, linestring/linestring, linestring/polygon, and linestring/multipolygon comparisons. Cross returns t (TRUE) if the dimension of the intersection is one less than the maximum dimension of the source geometries and the interiors of both geometries are intersected.

This cross predicate pattern matrix applies to multipoint/linestring, multipoint/multilinestring, multipoint/polygon, multipoint/multipolygon, linestring/polygon, and linestring/multipolygon. The matrix states that the interiors must intersect and that at least the interior of the primary (geometry a) must intersect the exterior of the secondary (geometry b).

                 b
           Interior  Boundary  Exterior
  Interior     T         *         T
a Boundary     *         *         *
  Exterior     *         *         *

This cross predicate matrix applies to linestring/linestring, linestring/multilinestring, and multilinestring/multilinestring. The matrix states that the dimension of the intersection of the interiors must be 0 (intersect at a point). If the dimension of this intersection were 1 (intersect at a linestring), the cross predicate would return FALSE but the overlap predicate would return TRUE.

                 b
           Interior  Boundary  Exterior
  Interior     0         *         *
a Boundary     *         *         *
  Exterior     *         *         *

Within returns t (TRUE) if the first geometry is completely within the second geometry. Within tests for the exact opposite result of contains. Within returns t (TRUE) if the first geometry is completely inside the second geometry.
The boundary and interior of the first geometry are not allowed to intersect the exterior of the second geometry, and the first geometry may not equal the second geometry. The within predicate pattern matrix states that the interiors of both geometries must intersect and that the interior and boundary of the primary geometry (geometry a) must not intersect the exterior of the secondary (geometry b).

                 b
           Interior  Boundary  Exterior
  Interior     T         *         F
a Boundary     *         *         F
  Exterior     *         *         *

Contains returns t (TRUE) if the second geometry is completely contained by the first geometry. The contains predicate returns the exact opposite result of the within predicate. Contains returns t (TRUE) if the second geometry is completely inside the first. The boundary and interior of the second geometry are not allowed to intersect the exterior of the first geometry, and the geometries may not be equal. The pattern matrix of the contains predicate states that the interiors of both geometries must intersect and that the interior and boundary of the secondary (geometry b) must not intersect the exterior of the primary (geometry a).

                 b
           Interior  Boundary  Exterior
  Interior     T         *         *
a Boundary     *         *         *
  Exterior     F         F         *

Minimum distance

The minimum distance separating disjoint features could represent the shortest distance an aircraft must travel between two locations. The distance function reports the minimum distance separating two disjoint features. If the features aren't disjoint the function will report a zero minimum distance.

Intersection of geometries

The intersection function returns the intersection set of two geometries. The intersection set is always returned as a collection that is the minimum dimension of the source geometries. For example, for a linestring that intersects a polygon, the intersection function returns that portion of the linestring common to the interior and boundary of the polygon as a multilinestring.
The multilinestring contains more than one linestring if the source linestring intersected the polygon with two or more discontinuous segments. If the geometries do not intersect, or if the intersection results in a dimension less than both source geometries, an empty geometry is returned. The following figure illustrates some examples of the intersection function. The intersection function returns the intersection set as the geometry that is the minimum dimension of the source geometries.

Difference of geometries

The difference function returns the portion of the primary geometry that isn't intersected by the secondary geometry, the logical AND NOT of space. The difference function only operates on geometries of like dimension and returns a collection that has the same dimension as the source geometries. In the event that the source geometries are equal, an empty geometry is returned. Difference returns that portion of the first geometry that is not intersected by the second.

Union of geometries

The union function returns the union set of two geometries, the Boolean logical OR of space. The source geometries must have the same dimension. Union always returns the result as a collection. Union returns the union set of two geometries.

Symmetric difference of geometries

The symmetricdiff function returns the symmetric difference of two geometries, the logical XOR of space. The source geometries must have the same dimension. If the geometries are equal, the symmetricdiff function returns an empty geometry; otherwise, the function returns the result as a collection. Symmetricdiff returns the portions of the source geometries that are not part of the intersection set. The source geometries must be of the same dimension.
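The pattern-matrix tests described in this section reduce to comparing a 9-character DE-9IM intersection matrix against a pattern string. The sketch below is a minimal, illustrative Python matcher (the function name and the sample matrices, such as the one for a polygon strictly inside another, are my own; this is not ESRI code):

```python
def de9im_matches(matrix: str, pattern: str) -> bool:
    """Compare a 9-character DE-9IM matrix (row-major: II, IB, IE,
    BI, BB, BE, EI, EB, EE; each cell 'F', '0', '1', or '2') against
    a pattern of T/F/*/0/1/2 with the meanings given in the text."""
    assert len(matrix) == 9 and len(pattern) == 9
    for m, p in zip(matrix, pattern):
        if p == '*':
            continue                      # any dimension accepted
        if p == 'T' and m in '012':
            continue                      # intersection must exist
        if p == 'F' and m == 'F':
            continue                      # intersection must not exist
        if p in '012' and m == p:
            continue                      # exact dimension required
        return False
    return True

WITHIN = 'T*F**F***'    # within pattern from the text
DISJOINT = 'FF*FF****'  # disjoint pattern from the text

# Matrix for a small polygon strictly inside a larger one, and for
# two separated polygons (illustrative values):
print(de9im_matches('2FF1FF212', WITHIN))    # True
print(de9im_matches('FF2FF1212', DISJOINT))  # True
print(de9im_matches('FF2FF1212', WITHIN))    # False
```

Libraries such as Shapely expose the same idea through `relate` and `relate_pattern`; the matcher above just makes the cell-by-cell rule explicit.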
{"url":"http://edndoc.esri.com/arcsde/9.1/general_topics/understand_spatial_relations.htm","timestamp":"2014-04-18T05:31:41Z","content_type":null,"content_length":"155886","record_id":"<urn:uuid:7b863610-c327-4940-b486-87e011fb9c15>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00389-ip-10-147-4-33.ec2.internal.warc.gz"}
Pivot quantity-question
October 14th 2008, 08:16 AM

Hi! I'd be thankful if anyone could check my work to see if I've done this correctly:

Let Y have a pdf: $f_Y(y) = \frac{2(\theta -y)}{\theta ^2}, 0<y<\theta$

Show that $Y/\theta$ is a pivotal quantity.

I start out with a new variable: $U = Y/\theta$

$P(U\leq u) = P(Y/\theta \leq u) = P(Y\leq u\theta)$

$\int _0^{u\theta} f(y)dy = \int _0^{u\theta} \frac{2(\theta -y)}{\theta ^2} dy$

$1/\theta ^2 \int _0^{u\theta} 2(\theta -y) dy = 2/\theta ^2 \int _0^{u\theta} \theta -y dy$

$2/\theta ^2 [\theta y -\frac{y^2}{2}]_0^{u\theta}$

$2/\theta ^2 [\theta u\theta -\frac{(u\theta) ^2}{2}]$

$2/\theta ^2 * 1/2[\theta u\theta -(u\theta) ^2] = 2/2\theta ^2[\theta^2 u -(u\theta) ^2]$

$\frac{2\theta ^2u - 2(u\theta )^2}{2\theta ^2} = \frac{2\theta ^2u -2u^2\theta ^2}{2\theta ^2}$

$= u-u^2$

So this is the distribution function of U. I take the derivative of that to find the pdf of U, which equals $1 - 2u.$ Then $Y/\theta$ fulfils the conditions for a pivotal quantity. Which are:

1. a function of the sample measurement, Y, and the unknown parameter $\theta$
2. its probability distribution does not depend on the parameter $\theta$.

Is this correct? Thanks in advance.

October 14th 2008, 03:35 PM
mr fantastic

Hi! I'd be thankful if anyone could check my work to see if I've done this correctly:

Let Y have a pdf: $f_Y(y) = \frac{2(\theta -y)}{\theta ^2}, 0<y<\theta$

Show that $Y/\theta$ is a pivotal quantity.
I start out with a new variable: $U = Y/\theta$

$P(U\leq u) = P(Y/\theta \leq u) = P(Y\leq u\theta)$

$\int _0^{u\theta} f(y)dy = \int _0^{u\theta} \frac{2(\theta -y)}{\theta ^2} dy$

$1/\theta ^2 \int _0^{u\theta} 2(\theta -y) dy = 2/\theta ^2 \int _0^{u\theta} \theta -y dy$

$2/\theta ^2 [\theta y -\frac{y^2}{2}]_0^{u\theta}$

$2/\theta ^2 [\theta u\theta -\frac{(u\theta) ^2}{2}]$

$2/\theta ^2 * 1/2[{\color{red}2} \theta u\theta -(u\theta) ^2] = 2/2\theta ^2[\theta^2 u -(u\theta) ^2]$ Mr F says: Small mistake here. The red 2 needs to be included.

$\frac{2\theta ^2u - 2(u\theta )^2}{2\theta ^2} = \frac{2\theta ^2u -2u^2\theta ^2}{2\theta ^2}$

$= u-u^2$

So this is the distribution function of U. I take the derivative of that to find the pdf of U, which equals $1 - 2u.$ Then $Y/\theta$ fulfils the conditions for a pivotal quantity. Which are:

1. a function of the sample measurement, Y, and the unknown parameter $\theta$
2. its probability distribution does not depend on the parameter $\theta$.

Is this correct? Thanks in advance.

Fix the small mistake and you get $F(u) = 2 u - u^2$. By the way, you realise that the pdf is non-zero for 0 < u < 1, right? So you require F(1) = 1, right? That was the first thing I checked. Your answer had F(1) = 0, so I knew there was a mistake. Nevertheless, nice work, chum.
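The corrected CDF $F(u) = 2u - u^2$ (so the pdf of $U$ is $2 - 2u$ on $(0,1)$) can also be checked by simulation. The sketch below is my own illustration, not from the thread: it draws Y by inverse-transform sampling and confirms that the mean of $U = Y/\theta$, which should be $E[U] = \int_0^1 u(2-2u)\,du = 1/3$, does not depend on $\theta$:

```python
import math
import random

def sample_y(theta, rng):
    # Inverse-CDF sampling: F_Y(y) = 2(y/theta) - (y/theta)**2 on (0, theta),
    # so y = theta * (1 - sqrt(1 - r)) for r ~ Uniform(0, 1).
    r = rng.random()
    return theta * (1.0 - math.sqrt(1.0 - r))

rng = random.Random(0)
n = 100_000
for theta in (1.0, 5.0, 40.0):
    u_mean = sum(sample_y(theta, rng) / theta for _ in range(n)) / n
    # U = Y/theta has pdf 2 - 2u on (0, 1), hence E[U] = 1/3 for every theta
    print(theta, round(u_mean, 2))
```

The printed mean sits near 1/3 for each value of theta, which is exactly the pivotal-quantity property: the distribution of U carries no information about theta.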
{"url":"http://mathhelpforum.com/advanced-statistics/53646-pivot-quantity-question-print.html","timestamp":"2014-04-19T21:28:23Z","content_type":null,"content_length":"13127","record_id":"<urn:uuid:a50bf8e0-359e-4fe0-b2c8-71172b556e3a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00221-ip-10-147-4-33.ec2.internal.warc.gz"}
inference: in logic, derivation of conclusions from given information or premises by any acceptable form of reasoning. Inferences are commonly drawn (1) by deduction, which, by analyzing valid argument forms, draws out the conclusions implicit in their premises, (2) by induction, which argues from many instances to a general statement, (3) by probability, which passes from frequencies within a known domain to conclusions of stated likelihood, and (4) by statistical reasoning, which concludes that, on the average, a certain percentage of a set of entities will satisfy the stated conditions. See also deduction; implication.
{"url":"http://dictionary.reference.com/browse/inference","timestamp":"2014-04-19T10:18:02Z","content_type":null,"content_length":"105642","record_id":"<urn:uuid:73ac3bff-85e5-410c-94e3-57c846ecb8e5>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
CRM Proceedings & Lecture Notes
2002; 218 pp; softcover
Volume: 31
ISBN-10: 0-8218-2804-5
ISBN-13: 978-0-8218-2804-5
List Price: US$74
Member Price: US$59.20
Order Code: CRMP/31

The area of inverse scattering transform method or soliton theory has evolved over the past two decades in a vast variety of exciting new algebraic and analytic directions and has found numerous new applications. Methods and applications range from quantum group theory and exactly solvable statistical models to random matrices, random permutations, and number theory. The theory of isomonodromic deformations of systems of differential equations with rational coefficients, and most notably, the related apparatus of the Riemann-Hilbert problem, underlie the analytic side of this striking development.

The contributions in this volume are based on lectures given by leading experts at the CRM workshop (Montreal, Canada). Included are both survey articles and more detailed expositions relating to the theory of isomonodromic deformations, the Riemann-Hilbert problem, and modern applications. The first part of the book represents the mathematical aspects of isomonodromic deformations; the second part deals mostly with the various appearances of isomonodromic deformations and Riemann-Hilbert methods in the theory of exactly solvable quantum field theory and statistical mechanical models, and related issues. The book elucidates for the first time in the current literature the important role that isomonodromic deformations play in the theory of integrable systems and their applications to physics.

Titles in this series are co-published with the Centre de Recherches Mathématiques.

Readership: Graduate students, research mathematicians, and physicists.

Isomonodromic Deformations
• A. Bolibruch -- Inverse problems for linear differential equations with meromorphic coefficients
• J. Harnad -- Virasoro generators and bilinear equations for isomonodromic tau functions
• A. A.
Kapaev -- Lax pairs for Painlevé equations • D. A. Korotkin -- Isomonodromic deformations and Hurwitz spaces • Y. Ohyama -- Classical solutions of Schlesinger equations and twistor theory • M. A. Olshanetsky -- \(W\)-geometry and isomonodromic deformations • C. A. Tracy and H. Widom -- Airy kernel and Painlevé II Applications in Physics and Related Topics • M. Bertola -- Jacobi groups, Jacobi forms and their applications • P. A. Clarkson and C. M. Cosgrove -- Symmetry, the Chazy equation and Chazy hierarchies • F. Göhmann -- Universal correlations of one-dimensional electrons at low density • F. Göhmann and V. E. Korepin -- A quantum version of the inverse scattering transformation • Y. Nakamura -- Continued fractions and integrable systems • A. Yu. Orlov and D. M. Scherbin -- Hypergeometric functions related to Schur functions and integrable systems • J. Palmer -- Ising model scaling functions at short distance • N. A. Slavnov -- The partition function of the six-vertex model as a Fredholm determinant
{"url":"http://ams.org/bookstore?fn=20&arg1=crmpseries&ikey=CRMP-31","timestamp":"2014-04-19T20:11:30Z","content_type":null,"content_length":"17126","record_id":"<urn:uuid:af4ca880-8534-4f44-b7ce-ba7b6e74acf3>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00426-ip-10-147-4-33.ec2.internal.warc.gz"}
The Work of Creation And God saw every thing that he had made, and, behold, it was very good. And the evening and the morning were the sixth day. Thus the heavens and the earth were finished, and all the host of them. Genesis 1.31 God's great work of Creation is founded upon the Number Six and its associated symbolic meanings and geometric structures. This manifests in many ways on multiple independent, yet integrated, levels. It begins with the plain text of Scripture which states that God created the Cosmos in Six Days. This then became the basis of the Fourth Commandment (Exodus 20.9f): Six days shalt thou labour, and do all thy work: But the seventh day is the sabbath of the LORD thy God: in it thou shalt not do any work, thou, nor thy son, nor thy daughter, thy manservant, nor thy maidservant, nor thy cattle, nor thy stranger that is within thy gates: For in six days the LORD made heaven and earth, the sea, and all that in them is, and rested the seventh day: wherefore the LORD blessed the sabbath day, and hallowed it. Given its centrality in both Creation and the Ten Commandments, the relation between the Number Six and Work - of both Man and God - is without doubt one of the most well established numerical teachings of Scripture. The Biblical integration of the God's Work, the Number Six, and the Creation of the Cosmos manifests in this pair of fundamental identities: These identities reiterate the fundamental "sixness" of the Cosmos exhibited in the six directions that characterize three-dimensional space: This image follows the pattern of the Chi Rho, the Monogram of Christ, the Creator, so it would appear that the Lord God Almighty has stamped His Creation with His Initials or designed His Name upon the same template as His Creation. Either way, it glorifies the Lord and gives new meaning to Psalm 19.1: The heavens declare the glory of God; and the firmament sheweth his handywork. 
The divinely ordained relation between the Number Six and Creation forms the context needed to properly understand the miracle that is the text of Genesis 1.1, the Creation Holograph. I begin with the Hebrew text consisting of 7 words and 28 letters. At the center of these seven words we see the Aleph-Tav - the Sign and Seal of the Lord God Almighty - upon which His entire Word is built. The number of letters - 28 - relates to the number of words - 7 - by being the seventh Triangular Number:

28 = 1 + 2 + 3 + 4 + 5 + 6 + 7 = Sum(7)

The n^th Triangular Number is represented by Sum(n) because it is a general property of Triangular Numbers that they are equal to the sum of all numbers from 1 to n. The sum of the entire verse is also a Triangular Number - specifically, the 73^rd:

Sum of Genesis 1.1 = 2701 = 37 x 73 = Sum(73)

The Numbers 37 and 73 are a palindromic pair of geometrically related primes. In general, the n^th Centered Hexagonal Number is the core of the n^th Star Number. This means that the integration of the Work of Creation with the Number Six extends all the way down to the numeric values of the letters of the text. I am greatly indebted to Vernon Jenkins' profound research and exposition of the relation between these geometric numbers, the text of Genesis 1.1, and the Name of the Creator. A good place to start is his article called Judging By Appearances. I recommend reading all his work.

The Numbers 37-as-Hexagram and 73-as-Star result from the self-intersection of the Tenth Triangular Number, the Number 55. The Number 55 is the value of Foundation of Creation and the HoloDec, which divides into two halves that are both multiples of 55 = Sum(10), hence the sum of the Whole is also a multiple of 55. In other words, the Ten Commandments are built on the Tenth Triangle, which also is deeply integrated with the Creation Holograph! The Numbers 37 and 73 are an extremely significant pair of primes.
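The figurate-number arithmetic above is easy to verify. This short sketch (function names are mine) uses the standard formulas T(n) = n(n+1)/2 for triangular numbers, H(n) = 3n(n-1) + 1 for centered hexagonal numbers, and S(n) = 6n(n-1) + 1 for star numbers:

```python
def tri(n):
    # n-th triangular number: 1 + 2 + ... + n
    return n * (n + 1) // 2

def hex_centered(n):
    # n-th centered hexagonal number
    return 3 * n * (n - 1) + 1

def star(n):
    # n-th star (hexagram) number, whose core is hex_centered(n)
    return 6 * n * (n - 1) + 1

print(tri(7))                    # 28, the letter count of Genesis 1.1
print(tri(73) == 37 * 73)        # True: 2701 is the 73rd triangular number
print(hex_centered(4), star(4))  # 37 73, the hexagon-and-star pair
```

In particular tri(73) = 73·74/2 = 73·37 = 2701, which is the factorization cited for the verse sum, and 37 and 73 appear as the fourth centered hexagonal and fourth star numbers respectively.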
The essential significance (singular!) of these two prime numbers is found in this pair of identities generated by one word:

37 = 8 + 11 + 13 + 5 (Ordinal Value)
73 = 8 + 20 + 40 + 5 (Standard Value)

These identities reveal the supernatural integration of the two fundamental Gematriot: the Ordinal Value, calculated using the place value of the letters, and the Standard Value, calculated using the Base Ten numerical weights in use since ancient times, as listed in the Alphabet Table. Vernon Jenkins also notes this pair of identities in his article A Numerical View of Beginnings, where he discusses its relation to the Wisdom presented in association with the numerical analysis of words (Revelation 13.18). The direct association of Wisdom with both prime factors of Genesis 1.1 amplifies the teaching of Jeremiah 10.12:

He hath made the earth by his power, he hath established the world by his wisdom, and hath stretched out the heavens by his discretion.
Divine perfection! Yet we have just begun.
Abstract. A purely geometric definition of Gaussian curvature is used for the extraction of the sign of Gaussian curvature from photometric data. Consider a point p on a smooth surface S and a closed curve γ on S which encloses p. The image of γ on the unit normal Gaussian sphere is a new curve Γ. The sign of Gaussian curvature at p is determined by the relative orientations of the closed curves γ and Γ. The relative orientation of two such curves is directly computed from intensity data. We employ three unknown illumination conditions to create a photometric scatter plot. This plot is in one-to-one correspondence with the subset of the unit Gaussian sphere containing the mutually illuminated surface normals. This permits direct computation of the sign of Gaussian curvature without the recovery of surface normals. Our method is albedo invariant. We assume diffuse reflectance, but the nature of the diffuse reflectance can be general and unknown.
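The orientation criterion can be illustrated on a toy case. This is not the paper's photometric method — just a sketch under the assumption of a graph surface z = f(x, y) with a known gradient, comparing the orientation of a small loop with the orientation of its Gauss-map image (all names are hypothetical):

```python
import numpy as np

def signed_area(xy):
    """Shoelace signed area of a closed polygon given as an (N, 2) array."""
    x, y = xy[:, 0], xy[:, 1]
    return 0.5 * np.sum(x * np.roll(y, -1) - y * np.roll(x, -1))

def gauss_curvature_sign(grad, r=1e-3, n=200):
    """Sign of Gaussian curvature at the origin of the graph z = f(x, y).

    grad(x, y) -> (fx, fy).  A small counter-clockwise loop around the
    origin is pushed through the Gauss map n = (-fx, -fy, 1)/norm; if the
    image loop keeps the same orientation, K > 0; if it reverses, K < 0.
    """
    t = np.linspace(0, 2 * np.pi, n, endpoint=False)
    loop = np.stack([r * np.cos(t), r * np.sin(t)], axis=1)
    fx, fy = grad(loop[:, 0], loop[:, 1])
    normals = np.stack([-fx, -fy], axis=1)   # projection of the normals
    return int(np.sign(signed_area(normals) * signed_area(loop)))

# Elliptic point (paraboloid z = x^2 + y^2): K > 0
print(gauss_curvature_sign(lambda x, y: (2 * x, 2 * y)))    # 1
# Hyperbolic point (saddle z = x^2 - y^2): K < 0
print(gauss_curvature_sign(lambda x, y: (2 * x, -2 * y)))   # -1
```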
Long-term remote single-dish observations of blazar radio variability

Carter, SJB (2008). Long-term remote single-dish observations of blazar radio variability. PhD thesis, University of Tasmania.

This research has demonstrated that a small, remotely operated radio telescope can perform well enough to monitor blazar radio variability over periods of months to years. Such observations are not possible using premier telescope facilities, given observing time demands, and they enable scintillation effects intrinsic to the source to be disentangled from scintillation due to scattering of radio waves by the interstellar medium. This exercise provides insight into the nature of the source, and also provides a probe of turbulence in the interstellar medium. The University of Tasmania's 30 m antenna near Ceduna in South Australia was converted to a radio telescope facility in 1997 from its former use as an Earth station. The COntinuous Single dish Monitoring of Intraday variables at Ceduna (COSMIC) campaign started in March 2003, and extended to early 2005. It observed a number of blazars, with the telescope remotely operated from Tasmania. The blazars were divided into groups lying south and north of the zenith at Ceduna, with each group served by a calibrator source and observed in turn for periods of 10-15 days. A source scanning strategy was developed, and semi-automatic software procedures were written to process raw data into calibrated flux density data sets, corrected for gain-elevation and pointing, and subject to quality control tests.
The consistency in calibrator observations over the ~2 year period shows that a 30 m antenna can carry out long-term monitoring of blazars with strengths of 1 Jy to the accuracy needed to identify variability on time scales of days, and better performance is expected in future campaigns. The antenna's 1/f noise is ~1% of the total flux density, and is likely due to electronic gain fluctuations. It is about 2½ times greater than thermal noise at the integration times relevant to the Ceduna flux density measurements. COSMIC campaign data contain 0.15 Jy systematic flux density fluctuations that have a thermal origin. These fluctuations were initially believed to be genuine variability, and are most evident on diurnal time scales. The raw data processing exercise cannot be adjusted to remove the fluctuations for the blazars of interest to this research, PKS B1622-253 and PKS B1519-273, but the genuine variability in these two blazars occurs on time scales of ~1-10 days. A method of filtering and correcting the flux density data was developed, the strategy being to smooth through the diurnal systematic effects, remove longer term flux density trends, correct for systematic effects on weekly and seasonal time scales, and hence isolate the genuine variability. A suite of variability analysis tools appropriate for Ceduna data was developed, using the scintle peak-to-peak period, Tperiod, to define the characteristic variability time scale. Values of T0.5 or T1/e can also be estimated, enabling examination of decorrelation timescales, but with caveats due to the peculiarities of the Ceduna data sets, whose data gaps and other characteristics provide challenges to an analysis of variability on a time scale of days. Tperiod values are determined for each 10-15 day observing period by spectral analysis, using a power spectral density function obtained as the Fourier transform of a discrete autocorrelation function.
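The spectral-analysis route just described — a power spectral density obtained as the Fourier transform of a discrete autocorrelation function — can be sketched on synthetic data. This is not the thesis pipeline, only an illustration with an evenly sampled, noise-added sinusoid of known period:

```python
import numpy as np

rng = np.random.default_rng(0)
dt, t_period = 0.05, 3.0                 # sampling step and true period, in days
t = np.arange(0, 15, dt)                 # one 15-day observing block
flux = (1.0 + 0.1 * np.sin(2 * np.pi * t / t_period)
        + 0.01 * rng.standard_normal(t.size))

x = flux - flux.mean()
acf = np.correlate(x, x, mode="full")[x.size - 1:]   # discrete ACF, lags >= 0
psd = np.abs(np.fft.rfft(acf))                        # PSD via Wiener-Khinchin
freqs = np.fft.rfftfreq(acf.size, d=dt)

estimate = 1.0 / freqs[1 + np.argmax(psd[1:])]        # skip the zero-frequency bin
print(round(estimate, 2))                             # close to 3.0 days
```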
Empirical scintle counting and data folding exercises cross-check the Tperiod values. Scintle periods are well modelled as Gaussian distributions that are similar for the two blazars, since both sources are large enough to band-limit the scintillation process in similar ways. The statistical properties of the scintle periods provide empirical error bar estimates for the Tperiod values. Also, the 95% confidence interval error bars for Tperiod values calculated from a typical set of scintles are comparable to the ~10% upper limit of the stochasticity in Tperiod values that Monte Carlo modelling predicted would enable Tperiod to be computed with fair accuracy. For both PKS B1622-253 and PKS B1519-273, the Tperiod values computed for each observing period over the COSMIC campaign exhibit clear annual cycles, which unequivocally proves that in both cases the observed scintillation is primarily due to scattering by the interstellar medium. Multi-frequency observations of PKS B1519-273 have shown that its scintillation is associated with the weak scattering régime at the 6.7 GHz Ceduna observing frequency, and this is also believed to be the case for PKS B1622-253. The annual cycles in the variability time scales (i.e. Tperiod values) of the two blazars are well fitted by the standard model of interstellar scintillation. Tperiod values for PKS B1622-253 and PKS B1519-273 range from about 2-10 days and about 1-5 days respectively. The strength of PKS B1519-273 fell below ~½ Jy in mid-2004, precluding accurate determination of Tperiod values in the final months of the COSMIC project. For both sources, the best annual cycle model fit is for highly anisotropic scintles and large velocity offsets of the scattering screen with respect to the Local Standard of Rest. This is unsurprising, since scintillation on time scales of days is associated with distant scattering screens, typically hundreds of parsecs from Earth, which are often in motion with respect to the LSR.
The annual cycle model fits to the PKS B1622-253 and PKS B1519-273 Tperiod values have reduced chi-square values of 2.12 and 0.83 respectively, confirming that the empirically determined error bar estimates are appropriate, and that the annual cycle model credibly describes the variation in the variability time scales of the two blazars. The variability characteristics of PKS B1519-273, and the annual cycle in its variability time scale, agree well with previous analyses of this source based on more limited data, albeit data recorded by much better telescopes. This agreement confirms the success of the COSMIC project. An annual cycle in the variability time scale of PKS B1622-253 has not previously been observed. The main follow-on research tasks are to study the implications of the variability characteristics of both PKS B1622-253 and PKS B1519-273, with consideration of anisotropy; eliminate the problem of systematic fluctuations; and examine the other blazars monitored in the COSMIC campaign.
geometric interpretation of complex numbers

August 3rd 2009, 09:56 AM #1

Hello, I would very much appreciate your looking through my answer to the following question and letting me know any hiccups it may have...

Question: The points A, B, C in an Argand diagram represent the complex numbers a, b, c, and a = (1-λ)b + λc. Prove that if λ is real then A lies on BC and divides BC in the ratio λ : 1-λ, but if λ is complex then, in triangle ABC, AB:BC = (modulus of λ) : 1 and angle ABC = arg λ.

My attempt to answer: λ = x + jy, so a = [(1-x) + j(-y)]b + (x + jy)c. If λ is real then y = 0 and a is simply the convex combination of b and c and lies on BC, dividing it in the ratio λ : 1-λ.

August 3rd 2009, 12:49 PM #2

I think the only way you could simplify your answer a bit would be to write a = (1-λ)b + λc as a = b + λ(c-b). You can see from that formulation that multiplication by λ stretches the length of BC by a factor of |λ|, and rotates BC about B through an angle arg(λ), in order to get from B to A.

August 3rd 2009, 08:54 PM #3

yeah... thanks indeed, that is much better! i hope you can find some time to help with the following question also...
question: if the points a and b are two vertices of an equilateral triangle, prove that the third vertex is either (1) b + w(b-a) or (2) b + w^2(b-a), where w is cos(2pi/3) + j sin(2pi/3).

What I have done is to equate (1) with c and (2) with c', and then divide the two sides of (1) and (2) to arrive at (c-b)/(c'-b) = 1/w, which tells us that vector BC' is vector BC rotated about B through the angle 2pi/3, which is true. However, (c-b) = w(b-a) and (c'-b) = w^2(b-a) seem to me (having drawn a diagram) to be respectively angle pi/3 too far and angle pi/3 too short? This is probably obvious, so sorry/thanks in advance.
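Both parts of the thread can be checked numerically with Python's complex type (points chosen arbitrarily for illustration):

```python
import cmath

# Part 1: a = (1-λ)b + λc = b + λ(c-b); λ scales BC by |λ| and rotates it
# about B through arg(λ), as the reply says.
b, c = 1 + 1j, 4 + 2j
lam = 0.5 + 0.5j
a = (1 - lam) * b + lam * c
assert abs(a - (b + lam * (c - b))) < 1e-12
assert abs(abs(a - b) / abs(c - b) - abs(lam)) < 1e-12            # AB : BC = |λ| : 1
assert abs(cmath.phase((a - b) / (c - b)) - cmath.phase(lam)) < 1e-12  # angle ABC = arg λ

# Part 2: both candidate third vertices give an equilateral triangle on p, q.
p, q = 0 + 0j, 2 + 1j                      # the two given vertices
w = cmath.exp(2j * cmath.pi / 3)           # cos(2pi/3) + j sin(2pi/3)
for r in (q + w * (q - p), q + w**2 * (q - p)):
    sides = (abs(q - p), abs(r - q), abs(r - p))
    assert max(sides) - min(sides) < 1e-12  # all three sides equal
print("both checks pass")
```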
Angular velocity

From Wikipedia, the free encyclopedia

In physics, the angular velocity is defined as the rate of change of angular displacement and is a vector quantity (more precisely, a pseudovector) which specifies the angular speed (rotational speed) of an object and the axis about which the object is rotating. The SI unit of angular velocity is radians per second, although it may be measured in other units such as degrees per second, degrees per hour, etc. Angular velocity is usually represented by the symbol omega (ω, rarely Ω). The direction of the angular velocity vector is perpendicular to the plane of rotation, in a direction which is usually specified by the right-hand rule.^1

Angular velocity of a particle

Particle in two dimensions

The angular velocity of a particle is measured around or relative to a point, called the origin. As shown in the diagram (with angles ɸ and θ in radians), if a line is drawn from the origin (O) to the particle (P), then the velocity (v) of the particle has a component along the radius (radial component, v[‖]) and a component perpendicular to the radius (cross-radial component, v[⊥]). If there is no radial component, then the particle moves in a circle. On the other hand, if there is no cross-radial component, then the particle moves along a straight line from the origin. A radial motion produces no change in the direction of the particle relative to the origin, so for purposes of finding the angular velocity the radial component can be ignored. Therefore, the rotation is completely produced by the perpendicular motion around the origin, and the angular velocity is completely determined by this component.
In two dimensions the angular velocity ω is given by

$\omega = \frac{d\phi}{dt}$

This is related to the cross-radial (tangential) velocity by:^1

$\mathrm{v}_\perp = r\,\frac{d\phi}{dt}$

An explicit formula for v[⊥] in terms of v and θ is:

$\mathrm{v}_\perp = |\mathrm{\mathbf{v}}|\sin(\theta)$

Combining the above equations gives a formula for ω:

$\omega = \frac{|\mathrm{\mathbf{v}}|\sin(\theta)}{|\mathrm{\mathbf{r}}|}$

In two dimensions the angular velocity is a single number that has no direction, but it does have a sense or orientation: it is a pseudoscalar, a quantity that changes its sign under a parity inversion (for example if one of the axes is inverted or if they are swapped). The positive direction of rotation is taken, by convention, to be in the direction towards the y axis from the x axis. If parity is inverted but the sense of rotation is not, then the sign of the angular velocity changes. There are three types of angular velocity involved in movement along an ellipse, corresponding to the three anomalies (true, eccentric and mean).

Particle in three dimensions

In three dimensions, the angular velocity becomes a bit more complicated. The angular velocity in this case is generally thought of as a vector, or more precisely, a pseudovector. It now has not only a magnitude, but a direction as well. The magnitude is the angular speed, and the direction describes the axis of rotation. The right-hand rule indicates the positive direction of the angular velocity pseudovector. Let $\vec u$ be a unit vector along the instantaneous rotation axis, oriented so that, seen from the tip of the vector, the rotation is counter-clockwise. Then the angular velocity vector $\vec \omega$ can be defined as

$\vec\omega = \frac{d\phi}{dt}\vec u$

Just as in the two dimensional case, a particle will have a component of its velocity along the radius from the origin to the particle, and another component perpendicular to that radius.
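The magnitude formula ω = |v| sin(θ)/|r| can be checked numerically; a small sketch (values arbitrary) confirming that a radial component added to the velocity does not change ω:

```python
import numpy as np

omega_true = 2.0                          # rad/s about the z axis
r = np.array([1.0, 0.5, 0.0])             # particle position, perpendicular to the axis
v = np.cross([0.0, 0.0, omega_true], r) + 0.7 * r   # rotation plus radial motion

speed = np.linalg.norm(v)
# sin of the angle between r and v, via |r x v| = |r| |v| sin(theta)
sin_theta = np.linalg.norm(np.cross(r, v)) / (np.linalg.norm(r) * speed)
omega_est = speed * sin_theta / np.linalg.norm(r)
print(round(omega_est, 6))                # 2.0 — the radial part drops out
```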
The combination of the origin point and the perpendicular component of the velocity defines a plane of rotation in which the behavior of the particle (for that instant) appears just as it does in the two dimensional case. The axis of rotation is then a line normal to this plane, and this axis defines the direction of the angular velocity pseudovector, while the magnitude is the same as the pseudoscalar value found in the 2-dimensional case. Using the unit vector $\vec u$ defined before, the angular velocity vector may be written in a manner similar to that for two dimensions:

$\vec\omega=\frac{|\mathrm{\mathbf{v}}|\sin(\theta)}{|\mathrm{\mathbf{r}}|}\,\vec u$

which, by the definition of the cross product, can be written:

$\vec\omega=\frac{\mathrm{\mathbf{r}}\times\mathrm{\mathbf{v}}}{|\mathrm{\mathbf{r}}|^2}$

Addition of angular velocity vectors

If a point rotates with $\omega_2$ in a frame $F_2$ which rotates itself with angular speed $\omega_1$ with respect to an external frame $F_1$, we can define the addition of $\omega_1 + \omega_2$ as the angular velocity vector of the point with respect to $F_1$. With this operation defined like this, angular velocity, which is a pseudovector, becomes also a real vector because it has two operations:

• An internal operation (addition) which is associative, commutative, distributive and with zero and unity elements
• An external operation (external product), with the normal properties for an external product.

This is the definition of a vector space. The only property that presents difficulties to prove is the commutativity of the addition. This can be proven from the fact that the velocity tensor W (see below) is skew-symmetric. Therefore $R=e^{Wt}$ is a rotation matrix and in a time dt is an infinitesimal rotation matrix.
Therefore it can be expanded as

$R = I + W\cdot dt + {1 \over 2} (W \cdot dt) ^2 + ...$

The composition of rotations is not commutative, but when they are infinitesimal rotations the first order approximation of the previous series can be taken, so that

$(I+W_1\cdot dt)(I+W_2 \cdot dt)= (I+W_2 \cdot dt)(I+W_1\cdot dt)$

and therefore

$\omega_1 + \omega_2 = \omega_2 + \omega_1$

Rotating frames

Given a rotating frame composed of three unit vectors, all three must have the same angular speed at any instant. In such a frame each vector is a particular case of the previous case (moving particle), in which the modulus of the vector is constant. Though it is just a particular case of the previous one, it is a very important one because of its relationship with the study of rigid bodies, and special tools have been developed for this case. There are two possible ways to describe the angular velocity of a rotating frame: the angular velocity vector and the angular velocity tensor. Both entities are related and they can be calculated from each other.

Angular velocity vector for a frame

It is defined as the angular velocity of each of the vectors of the frame, in a way consistent with the general definition. It is known by Euler's rotation theorem that for a rotating frame there exists an instantaneous axis of rotation at any instant. In the case of a frame, the angular velocity vector is along the instantaneous axis of rotation. Any transversal section of a plane perpendicular to this axis has to behave as a two dimensional rotation. Thus, the magnitude of the angular velocity vector at a given time t is consistent with the two dimensions case. Angular velocity is a vector defining an addition operation.
Components can be calculated from the derivatives of the parameters defining the moving frame (Euler angles or rotation matrices).

Addition of angular velocity vectors in frames

As in the general case, the addition operation for angular velocity vectors can be defined using movement composition. In the case of rotating frames, the movement composition is simpler than the general case because the final matrix is always a product of rotation matrices. As in the general case, addition is commutative:

$\omega_1 + \omega_2 = \omega_2 + \omega_1$

Components from the vectors of the frame

Substituting in the expression any vector e of the frame, we obtain $\vec \omega=\frac{\vec {e}\times \dot{\vec{e}}}{|{\vec{e}}|^2}$, and therefore

$\vec \omega = \vec {e}_1\times \dot{\vec{e}}_1 = \vec {e}_2\times \dot{\vec{e}}_2 = \vec {e}_3\times \dot{\vec{e}}_3.$

As the columns of the matrix of the frame are the components of its vectors, this also makes it possible to calculate $\omega$ from the matrix of the frame and its derivative.

Components from Euler angles

The components of the angular velocity pseudovector were first calculated by Leonhard Euler using his Euler angles and an intermediate frame made out of the intermediate frames of the construction:

• One axis of the reference frame (the precession axis)
• The line of nodes of the moving frame with respect to the reference frame (nutation axis)
• One axis of the moving frame (the intrinsic rotation axis)

Euler proved that the projections of the angular velocity pseudovector over these three axes are the derivatives of the associated angles (which is equivalent to decomposing the instantaneous rotation into three instantaneous Euler rotations). Therefore:^2

$\vec \omega = \dot\alpha \bold u_1 +\dot\beta \bold u_2 +\dot\gamma \bold u_3$

This basis is not orthonormal and it is difficult to use, but now the velocity vector can be changed to the fixed frame or to the moving frame with just a change of bases.
For example, changing to the mobile frame:

$\vec \omega = (\dot\alpha\sin\beta\sin\gamma+\dot\beta\cos\gamma){\bold I} +(\dot\alpha\sin\beta\cos\gamma-\dot\beta\sin\gamma){\bold J} +(\dot\alpha\cos\beta+\dot\gamma){\bold K}$

where IJK are unit vectors for the frame fixed in the moving body. This example has been made using the Z-X-Z convention for Euler angles.^3

Components from infinitesimal rotation matrices

The components of the angular velocity vector can be calculated from infinitesimal rotations (if available) as follows:

• As any rotation matrix has a single real eigenvalue, which is +1, the corresponding eigenvector shows the rotation axis.
• Its modulus can be deduced from the value of the infinitesimal rotation.

Angular velocity tensor

It can be introduced from rotation matrices. Any vector $\vec r$ that rotates around an axis with an angular speed vector $\vec \omega$ (as defined before) satisfies:

$\frac {d \vec r(t)} {dt} = \vec{\omega} \times\vec{r}$

We can introduce here the angular velocity tensor associated with the angular speed $\omega$:

$W(t) = \begin{pmatrix} 0 & -\omega_z(t) & \omega_y(t) \\ \omega_z(t) & 0 & -\omega_x(t) \\ -\omega_y(t) & \omega_x(t) & 0 \\ \end{pmatrix}$

This tensor W(t) acts as if it were a $(\vec \omega \times)$ operator:

$\vec \omega(t) \times \vec{r}(t) = W(t) \vec{r}(t)$

Given the orientation matrix A(t) of a frame, we can obtain its instantaneous angular velocity tensor W as follows.
We know that:

$\frac {d \vec r(t)} {dt} = W \cdot \vec{r}$

As the angular speed must be the same for the three vectors of a rotating frame, if we have a matrix A(t) whose columns are the vectors of the frame, we can write for the three vectors as a whole:

$\frac {dA(t)} {dt} = W \cdot A (t)$

And therefore the angular velocity tensor we are looking for is:

$W = \frac {dA(t)} {dt} \cdot A^{-1}(t)$

Properties of angular velocity tensors

In general, the angular velocity in an n-dimensional space is the time derivative of the angular displacement tensor, which is a second rank skew-symmetric tensor. This tensor W has n(n-1)/2 independent components, and this number is the dimension of the Lie algebra of the Lie group of rotations of an n-dimensional inner product space.^4

Exponential of W

In three dimensions angular velocity can be represented by a pseudovector because second rank tensors are dual to pseudovectors in three dimensions. Since $\frac {dA(t)} {dt} = W\cdot A(t)$, this can be read as a differential equation that defines A(t) knowing W(t):

$\frac {dA(t)} {A} = W \cdot {dt}$

And if the angular speed is constant then W is also constant and the equation can be integrated. The result is:

$A(t) = e^{W \cdot t}$

which shows a connection with the Lie group of rotations.

W is skew-symmetric

It is possible to prove that angular velocity tensors are skew-symmetric matrices, which means that $W = \frac {dR(t)}{dt}\cdot {R^t}$ satisfies $W^t= -W$. To prove it, we start by taking the time derivative of $\mathcal{R}\mathcal{R}^t$, where R(t) is a rotation matrix, so that:

$\mathcal{I}=\mathcal{R}\mathcal{R}^t$

Differentiating both sides and applying the formula (AB)^t = B^tA^t:

$0 = \frac{d\mathcal{R}}{dt}\mathcal{R}^t+\left(\frac{d\mathcal{R}}{dt}\mathcal{R}^t\right)^t = W + W^t$

Thus, W is the negative of its transpose, which implies it is a skew-symmetric matrix.
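The relation W = (dR/dt)Rᵀ and its skew-symmetry can be verified numerically; a sketch using a finite-difference derivative of a rotation about the z axis at a known rate:

```python
import numpy as np

def Rz(angle):
    """Rotation matrix about the z axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

w_true, t, h = 2.0, 0.3, 1e-6            # rate (rad/s), time, finite-difference step
dRdt = (Rz(w_true * (t + h)) - Rz(w_true * (t - h))) / (2 * h)
W = dRdt @ Rz(w_true * t).T              # angular velocity tensor

assert np.allclose(W, -W.T, atol=1e-6)   # skew-symmetric, as proved above
omega = np.array([W[2, 1], W[0, 2], W[1, 0]])   # Hodge dual vector
print(np.round(omega, 4))                # ≈ [0, 0, 2]
```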
Duality with respect to the velocity vector

The tensor is a matrix with this structure:

$W(t) = \begin{pmatrix} 0 & -\omega_z(t) & \omega_y(t) \\ \omega_z(t) & 0 & -\omega_x(t) \\ -\omega_y(t) & \omega_x(t) & 0 \\ \end{pmatrix}$

As it is a skew-symmetric matrix, it has a Hodge dual vector, which is precisely the previous angular velocity vector $\vec \omega = (\omega_x, \omega_y, \omega_z)$.

Coordinate-free description

At any instant $t$, the angular velocity tensor represents a linear map between the position vectors $\mathbf{r}(t)$ and the velocity vectors $\mathbf{v}(t)$ of a rigid body rotating around the origin:

$\mathbf{v} = W\mathbf{r}$

where we omitted the $t$ parameter, and regard $\mathbf{v}$ and $\mathbf{r}$ as elements of the same 3-dimensional Euclidean vector space $V$. The relation between this linear map and the angular velocity pseudovector $\omega$ is the following. Because W is the derivative of an orthogonal transformation, the bilinear form

$B(\mathbf{r},\mathbf{s}) = (W\mathbf{r}) \cdot \mathbf{s}$

is skew-symmetric. (Here $\cdot$ stands for the scalar product.) So we can apply the fact of exterior algebra that there is a unique linear form $L$ on $\Lambda^2 V$ such that

$L(\mathbf{r}\wedge \mathbf{s}) = B(\mathbf{r},\mathbf{s})$

where $\mathbf{r}\wedge \mathbf{s} \in \Lambda^2 V$ is the wedge product of $\mathbf{r}$ and $\mathbf{s}$. Taking the dual vector L* of L we get

$(W\mathbf{r})\cdot \mathbf{s} = L^* \cdot (\mathbf{r}\wedge \mathbf{s})$

Introducing $\omega := *L^*$ as the Hodge dual of L*, and applying further Hodge dual identities, we arrive at

$(W\mathbf{r}) \cdot \mathbf{s} = * ( *L^* \wedge \mathbf{r} \wedge \mathbf{s}) = * (\omega \wedge \mathbf{r} \wedge \mathbf{s}) = *(\omega \wedge \mathbf{r}) \cdot \mathbf{s} = (\omega \times \mathbf{r}) \cdot \mathbf{s}$

where

$\omega \times \mathbf{r} := *(\omega \wedge \mathbf{r})$

by definition.
Because $\mathbf{s}$ is an arbitrary vector, from nondegeneracy of the scalar product it follows that

$W\mathbf{r} = \omega \times \mathbf{r}$

Angular velocity as a vector field

Since the angular velocity tensor maps positions to velocities, the assignment $\mathbf{r} \mapsto W\mathbf{r}$ is a vector field. In particular, this vector field is a Killing vector field belonging to an element of the Lie algebra so(3) of the 3-dimensional rotation group SO(3). This element of so(3) can also be regarded as the angular velocity vector.

Rigid body considerations

The same equations for the angular speed can be obtained by reasoning over a rotating rigid body. Here it is not assumed that the rigid body rotates around the origin. Instead, it can be supposed to rotate around an arbitrary point which is moving with a linear velocity V(t) at each instant. To obtain the equations, it is convenient to imagine a rigid body attached to the frames and consider a coordinate system that is fixed with respect to the rigid body. Then we will study the coordinate transformations between this coordinate system and the fixed "laboratory" system. As shown in the figure on the right, the lab system's origin is at point O, the rigid body system origin is at O' and the vector from O to O' is R. A particle (i) in the rigid body is located at point P, and the vector position of this particle is R[i] in the lab frame and r[i] in the body frame. It is seen that the position of the particle can be written:

$\mathbf{R}_i = \mathbf{R} + \mathbf{r}_i$

The defining characteristic of a rigid body is that the distance between any two points in a rigid body is unchanging in time. This means that the length of the vector $\mathbf{r}_i$ is unchanging. By Euler's rotation theorem, we may replace the vector $\mathbf{r}_i$ with $\mathcal{R}\mathbf{r}_{io}$, where $\mathcal{R}$ is a 3x3 rotation matrix and $\mathbf{r}_{io}$ is the position of the particle at some fixed point in time, say t=0.
This replacement is useful, because now it is only the rotation matrix $\mathcal{R}$ which is changing in time and not the reference vector $\mathbf{r}_{io}$, as the rigid body rotates about point O'. Also, since the three columns of the rotation matrix represent the three versors of a reference frame rotating together with the rigid body, any rotation about any axis becomes now visible, while the vector $\mathbf{r}_i$ would not rotate if the rotation axis were parallel to it, and hence it would only describe a rotation about an axis perpendicular to it (i.e., it would not see the component of the angular velocity pseudovector parallel to it, and would only allow the computation of the component perpendicular to it). The position of the particle is now written as:

$\mathbf{R}_i = \mathbf{R} + \mathcal{R}\mathbf{r}_{io}$

Taking the time derivative yields the velocity of the particle:

$\mathbf{V}_i = \mathbf{V} + \frac{d\mathcal{R}}{dt}\mathbf{r}_{io}$

where V[i] is the velocity of the particle (in the lab frame) and V is the velocity of O' (the origin of the rigid body frame). Since $\mathcal{R}$ is a rotation matrix its inverse is its transpose. So we substitute $\mathcal{I}=\mathcal{R}^T\mathcal{R}$:

$\mathbf{V}_i = \mathbf{V}+\frac{d\mathcal{R}}{dt}\mathcal{I}\mathbf{r}_{io}$

$\mathbf{V}_i = \mathbf{V}+\frac{d\mathcal{R}}{dt}\mathcal{R}^T\mathcal{R}\mathbf{r}_{io}$

$\mathbf{V}_i = \mathbf{V}+\frac{d\mathcal{R}}{dt}\mathcal{R}^T\mathbf{r}_{i}$

$\mathbf{V}_i = \mathbf{V}+W\mathbf{r}_{i}$

where $W = \frac{d\mathcal{R}}{dt}\mathcal{R}^T$ is the previous angular velocity tensor.
It can be proved that this is a skew-symmetric matrix, so we can take its dual to get a 3-dimensional pseudovector, which is precisely the previous angular velocity vector $\vec \omega$. Substituting ω for W into the above velocity expression, and replacing matrix multiplication by an equivalent cross product:

$\mathbf{V}_i = \mathbf{V} + \boldsymbol{\omega}\times\mathbf{r}_i$

It can be seen that the velocity of a point in a rigid body can be divided into two terms – the velocity of a reference point fixed in the rigid body plus the cross product term involving the angular velocity of the particle with respect to the reference point. This angular velocity is the "spin" angular velocity of the rigid body as opposed to the angular velocity of the reference point O' about the origin O.

We have supposed that the rigid body rotates around an arbitrary point. We should prove that the angular velocity previously defined is independent of the choice of origin, which means that the angular velocity is an intrinsic property of the spinning rigid body. See the graph to the right: the origin of the lab frame is O, while O[1] and O[2] are two fixed points on the rigid body, whose velocities are $\mathbf{v}_1$ and $\mathbf{v}_2$ respectively. Suppose the angular velocity with respect to O[1] and O[2] is $\boldsymbol{\omega}_1$ and $\boldsymbol{\omega}_2$ respectively. Since point P and O[2] each have only one velocity,

$\mathbf{v}_1 + \boldsymbol{\omega}_1\times\mathbf{r}_1 = \mathbf{v}_2 + \boldsymbol{\omega}_2\times\mathbf{r}_2$

$\mathbf{v}_2 = \mathbf{v}_1 + \boldsymbol{\omega}_1\times\mathbf{r} = \mathbf{v}_1 + \boldsymbol{\omega}_1\times (\mathbf{r}_1 - \mathbf{r}_2)$

The above two equations yield

$(\boldsymbol{\omega}_1-\boldsymbol{\omega}_2) \times \mathbf{r}_2=0$

Since the point P (and thus $\mathbf{r}_2$) is arbitrary, it follows that

$\boldsymbol{\omega}_1 = \boldsymbol{\omega}_2$

If the reference point is the instantaneous axis of rotation, the expression for the velocity of a point in the rigid body will have just the angular velocity term.
This is because the velocity of the instantaneous axis of rotation is zero. An example of an instantaneous axis of rotation is the hinge of a door. Another example is the point of contact of a purely rolling spherical rigid body.
exponential decay

July 22nd 2010, 10:54 PM #1

A dosage of Q units of a certain drug is administered daily to a patient. If the amount of the drug in the bloodstream after n days is given by Qe^-kn, where k is a constant, find:

(i) The value of k if the amount of drug found in the bloodstream has halved after 1 day, prior to administering the second dosage. Express your answer to three decimal places.

(ii) Hence find the amount of drug found in the bloodstream after 15 days, prior to administering the next dosage.

Teacher said something about this question following a process similar to superannuation but I don't understand. I need help with both parts.

July 22nd 2010, 11:03 PM #2

From the information given,

$\frac{Q_0}{2}=Q_0e^{-k\times 1}$

$k = -\ln\frac{1}{2}$

Sub in k as found above into $Q=Q_0e^{-kn}$ and solve for Q when n=15.
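For reference, a quick numeric sketch of both readings of part (ii) — the single-dose substitution in the reply, and the repeated-dose geometric series that the "superannuation" hint seems to point at (the second reading is my assumption, not the thread's answer):

```python
import math

k = -math.log(0.5)                 # from Q/2 = Q·e^(−k·1)
print(round(k, 3))                 # 0.693

Q = 1.0                            # work per unit dose

# Single-dose reading (the reply): one dose decaying for 15 days.
single = Q * math.exp(-k * 15)     # = (1/2)^15, about 3.05e-5 of a dose

# Repeated-dose reading (assumed from the superannuation hint): one dose
# per day, summed just before the 16th dose — a geometric series.
repeated = sum(Q * math.exp(-k * n) for n in range(1, 16))   # = 1 − (1/2)^15
print(single, repeated)
```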
Here is the answer to your query.
Unitary method is the method of first finding the value of a single unit (object) and then finding the value of the required number of units. We can use the unitary method for decimals in the same way as we use it for whole numbers.
Example: The cost of 3 shirts is Rs 630.75. How many shirts can be purchased with Rs 1471.75?
In the above question, first of all we have to find the number of shirts that can be purchased for Re 1 (a single unit of money); then we can calculate the number of shirts that can be purchased for the given amount.
Number of shirts purchased with Rs 630.75 = 3
⇒ Number of shirts purchased with Re 1 = 3/630.75
⇒ Number of shirts purchased with Rs 1471.75 = (3/630.75) × 1471.75 = 7
Thus 7 shirts can be purchased with Rs 1471.75.

Unitary method is the method of carrying out a calculation to find the value of the required number of units by first finding the value of one unit. For example: If you are given the cost of 5 kg of apples and you are asked to calculate the cost of 10 kg of apples, then what will you do? For solving such types of problems, we use the unitary method. For this problem, we will first calculate the cost of 1 kg of apples (using division) and then find the cost of 10 kg of apples (using multiplication). Hope you will get the concept now.

Unitary method is the method of finding the value of a single unit (object) at first and then finding the value of the required number of units (objects). For example: The cost of 3 shirts is Rs 675. How many shirts can be purchased with Rs 2250?
In the above question, first of all we have to find the number of shirts that can be purchased for Re 1 (a single unit of money); then we can calculate the number of shirts that can be purchased for the given amount.
Number of shirts purchased with Rs 675 = 3
⇒ Number of shirts purchased with Re 1 = 3/675
⇒ Number of shirts purchased with Rs 2250 = (3/675) × 2250 = 10
Thus, 10 shirts can be purchased with Rs 2250.
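The two worked examples above follow the same divide-then-multiply pattern, which can be written as a small helper (a minimal sketch; the function name is mine, not from the answer):

```python
def shirts_for_amount(sample_count, sample_cost, amount):
    """Unitary method: find the value per one rupee, then scale to the amount."""
    per_rupee = sample_count / sample_cost  # shirts that Re 1 buys
    return per_rupee * amount

# 3 shirts cost Rs 630.75; how many shirts does Rs 1471.75 buy?
shirts = shirts_for_amount(3, 630.75, 1471.75)
print(round(shirts))  # 7
```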
Order of Operations
What is Order of Operations?
When you see an expression that looks like this: 6 × 9 − 28
• How are you going to tackle this problem?
• Will you begin at the beginning of the expression or at the end?
• Does it matter where you begin the problem from?
P.E.M.D.A.S. tells you where to begin solving your equation and each subsequent step thereafter. Some people like the acronym B.E.M.D.A.S. (Brackets, Exponents, Multiplication, Division, Addition, Subtraction).
Multiplication comes before subtraction: 6 × 9 = 54, then 54 − 28 = 26.
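Python applies the same precedence rules, so the worked example can be checked directly (a minimal sketch):

```python
# Multiplication binds tighter than subtraction, so 6 * 9 - 28
# is evaluated as (6 * 9) - 28, not 6 * (9 - 28).
result = 6 * 9 - 28
print(result)  # 54 - 28 = 26

# Brackets/parentheses override the default order:
bracketed = 6 * (9 - 28)  # forces the subtraction first: 6 * (-19) = -114
print(bracketed)
```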
Why the chromatic scale has twelve tones

There is a way to derive the number of tones in one octave using only basic physics and mathematics. This derivation uses the circle of fifths. By picking an initial tone and progressing through the tones, going up a perfect fifth each time, we will eventually arrive back at the same tone. The sequence of tones we cycle through is called the circle of fifths, and its number of tones can be calculated to be twelve. Out of all the intervals we use, why use the perfect fifth? Pythagoras discovered that notes sounded together whose frequencies form a simple ratio will sound harmonious (because the partials overlap). The perfect fifth has the ratio of 3/2, which is the simplest possible after 2/1 (octave) and 1/1 (unison). The perfect fifth is therefore a very consonant interval, so it is a natural choice to build the scale with. Suppose we choose an arbitrary tone to start with. To transpose a tone upwards by a fifth, we multiply its frequency by 3/2. Transposing up one fifth gives a tone with a frequency that makes a ratio of 3/2 with the original frequency. Transposing up two fifths gives us the ratio 9/4. However, this note is more than one octave above the original note - the tone one octave above the original tone has the ratio 2, which is smaller than 9/4. To transpose it down to the same octave, we multiply the frequency by 1/2 to get 9/8. Three fifths up gives 27/16. Four fifths up and a second octave down gives 81/64 and so on. The table below shows what happens after we do this twelve times. As an example, the table also shows what tones we would get if the starting tone were A.
(The frequency of middle A is defined as 440 Hz.)

Fifths  Octaves  Ratio          Tone
0       0        1/1            A
1       0        3/2            E
2       1        9/8            B
3       1        27/16          F#/Gb
4       2        81/64          C#/Db
5       2        243/128        G#/Ab
6       3        729/512        D#/Eb
7       4        2187/2048      A#/Bb
8       4        6561/4096      F
9       5        19683/16384    C
10      5        59049/32768    G
11      6        177147/131072  D
12      7        531441/524288  A

After 12 fifths up and 7 octaves down, we obtain the ratio 531441/524288 = 1.0136, which is really close to 1. The difference between those two tones is hardly noticeable by the untrained ear. The twelve tones produced above are the twelve tones of the scale. Tada! Notice that there is one problem -- the ratio isn't exactly 1. It is obvious by basic number theory that it is impossible to make it 1: the ratio will always be a fraction of the form 3^a/2^b, with numerator and denominator relatively prime, and a power of 3 can never equal a power of 2. However, it is possible for the ratio to approximate 1 arbitrarily closely by transposing up more fifths. 53 fifths up and 31 octaves down gives us 1.0021, a better approximation than 12 tones, disregarding the fact that a 53-tone scale is very complicated. Scales used in microtonal music (i.e. scales with intervals smaller than semitones) also tend to use numbers that give good approximations. Examples include the 24-tone scale (used in Arab music) and the aforementioned 53-tone scale. The Pythagorean tuning system is based on this derivation. It has a few advantages -- it is a type of well-tempered tuning (the ratios between each interval remain unchanged when transposing between keys) and all the fifths sound nice (all of them being 3/2 ratios). However there are huge drawbacks that make it impractical for actual tuning. Octaves sound terrible (tuning by fifths means that the octaves will drift away from the 2/1 ratio as we progress up the scale). Other intervals like the perfect fourth sound terrible too, compared to other tunings like just temperament. (Simple ratios such as 5/4 sound nice, but Pythagorean tuning has ugly ratios such as 177147/131072.)
Although Pythagorean tuning has been largely supplanted by equal temperament, it's still good to keep in mind that this is where the twelve-tone scale comes from.
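The derivation above is easy to reproduce programmatically. This sketch walks the circle of fifths with exact integer arithmetic (Fraction avoids any floating-point drift):

```python
from fractions import Fraction

ratio = Fraction(1, 1)
scale = []
for _ in range(12):
    scale.append(ratio)
    ratio *= Fraction(3, 2)   # up a perfect fifth
    while ratio >= 2:         # fold back into the octave [1, 2)
        ratio /= 2

# After 12 fifths we are back near (but not exactly at) the starting tone:
comma = ratio  # the leftover gap, known as the Pythagorean comma
print(comma)           # 531441/524288
print(float(comma))    # ~1.0136
```

The twelve distinct ratios collected in `scale` are exactly the Ratio column of the table, and `comma` is the 531441/524288 discrepancy discussed in the text.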
On the Convergence of Iterative Processes for Generalized Strongly Asymptotically $\Phi$-Pseudocontractive Mappings in Banach Spaces
Journal of Applied Mathematics, Volume 2012 (2012), Article ID 563438, 18 pages
Research Article
Dipartimento di Matematica, Università della Calabria, 87036 Arcavacata di Rende (CS), Italy
Received 5 October 2011; Accepted 11 October 2011
Academic Editor: Yonghong Yao
Copyright © 2012 Vittorio Colao. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
We prove the equivalence and the strong convergence of iterative processes involving generalized strongly asymptotically $\Phi$-pseudocontractive mappings in uniformly smooth Banach spaces.
1. Introduction
Throughout this paper, we assume that $E$ is a uniformly convex Banach space and $E^*$ is the dual space of $E$. Let $J$ denote the normalized duality mapping from $E$ into $2^{E^*}$ given by $J(x) = \{f \in E^* : \langle x, f \rangle = \|x\|^2,\ \|f\| = \|x\|\}$ for all $x \in E$, where $\langle \cdot, \cdot \rangle$ denotes the generalized duality pairing. It is well known that if $E$ is uniformly smooth, then $J$ is single valued and is norm to norm uniformly continuous on any bounded subset of $E$. In the sequel, we will denote the single valued duality mapping by $j$.
In 1967, Browder [1] and Kato [2], independently, introduced accretive operators (see, for details, Chidume [3]). Their interest is connected with the existence results in the theory of nonlinear equations of evolution in Banach spaces. In 1972, Goebel and Kirk [4] introduced the class of asymptotically nonexpansive mappings as follows.
Definition 1.1. Let $C$ be a subset of a Banach space $X$. A mapping $T: C \to C$ is said to be asymptotically nonexpansive if $\|T^n x - T^n y\| \le k_n \|x - y\|$ for each $x, y \in C$ and each $n \ge 1$, where $(k_n)$ is a sequence of real numbers converging to 1.
This class is more general than the class of nonexpansive mappings as the following example clearly shows.
Example 1.2 (see [4]).
If $B$ is the unit ball of $l^2$ and $T: B \to B$ is defined as $T(x_1, x_2, x_3, \ldots) = (0, x_1^2, a_2 x_2, a_3 x_3, \ldots)$, where $(a_i)$ is a sequence of numbers in $(0, 1)$ such that $\prod_{i=2}^{\infty} a_i = \frac{1}{2}$, it satisfies $\|T^n x - T^n y\| \le 2 \prod_{i=2}^{n} a_i \|x - y\|$.
In 1974, Deimling [5], studying the zeros of accretive operators, introduced the class of $\varphi$-strongly accretive operators.
Definition 1.3. An operator $A$ defined on a subset $C$ of a Banach space $X$ is said to be $\varphi$-strongly accretive if $\langle Ax - Ay, j(x - y) \rangle \ge \varphi(\|x - y\|)\|x - y\|$, where $\varphi: [0, \infty) \to [0, \infty)$ is a strictly increasing function such that $\varphi(0) = 0$.
Note that in the special case in which $\varphi(s) = ks$, $k \in (0, 1)$, we obtain a strongly accretive operator. Osilike [6], among the others, gave an example of an operator which is $\varphi$-strongly accretive but not strongly accretive. Since an operator $A$ is a strongly accretive operator if and only if $I - A$ is a strongly pseudocontractive mapping, taking into account Definition 1.3, it is natural to study the class of $\varphi$-pseudocontractive mappings, that is, the maps $T$ such that $\langle Tx - Ty, j(x - y) \rangle \le \|x - y\|^2 - \varphi(\|x - y\|)\|x - y\|$, where $\varphi$ is a strictly increasing function such that $\varphi(0) = 0$. Of course, the set of fixed points for these mappings contains, at most, only one point.
Recently, the following class of maps has also been studied.
Definition 1.4. A mapping $T$ is a generalized $\Phi$-strongly pseudocontractive mapping if $\langle Tx - Ty, j(x - y) \rangle \le \|x - y\|^2 - \Phi(\|x - y\|)$, where $\Phi: [0, \infty) \to [0, \infty)$ is a strictly increasing function such that $\Phi(0) = 0$.
Choosing $\Phi(s) = \varphi(s)s$, we obtain Definition 1.3. In [7], Xiang remarked that it is an open problem whether every generalized $\Phi$-strongly pseudocontractive mapping is a $\varphi$-pseudocontractive mapping. In the same paper, Xiang obtained a fixed-point theorem for continuous and generalized $\Phi$-strongly pseudocontractive mappings in the setting of Banach spaces. In 1991, Schu [8] introduced the class of asymptotically pseudocontractive mappings.
Definition 1.5 (see [8]). Let $X$ be a normed space, $C \subseteq X$, and $(k_n) \subset [1, \infty)$. A mapping $T: C \to C$ is said to be asymptotically pseudocontractive with the sequence $(k_n)$ if and only if $\lim_{n \to \infty} k_n = 1$, and for all $n \in \mathbb{N}$ and all $x, y \in C$, there exists $j(x - y) \in J(x - y)$ such that $\langle T^n x - T^n y, j(x - y) \rangle \le k_n \|x - y\|^2$, where $J$ is the normalized duality mapping.
Obviously every asymptotically nonexpansive mapping is asymptotically pseudocontractive, but the converse is not valid; it is well known that there are mappings, such as the one considered in [9], which are not Lipschitz but are asymptotically pseudocontractive.
In [8], Schu proved the following.
Theorem 1.6 (see [8]). Let $H$ be a Hilbert space and $C \subseteq H$ closed and convex; let $T: C \to C$ be completely continuous, uniformly $L$-Lipschitzian, and asymptotically pseudocontractive with sequence $(k_n)$; $q_n = 2k_n - 1$ for all $n$; $\sum_n (q_n^2 - 1) < \infty$; $(\alpha_n), (\beta_n) \subset [0, 1]$; $\epsilon \le \alpha_n \le \beta_n \le b$ for all $n$, some $\epsilon > 0$ and some $b \in \big(0, L^{-2}[(1 + L^2)^{1/2} - 1]\big)$; $x_1 \in C$; for all $n \ge 1$, define $x_{n+1} = (1 - \alpha_n)x_n + \alpha_n T^n y_n$ and $y_n = (1 - \beta_n)x_n + \beta_n T^n x_n$; then $(x_n)$ converges strongly to some fixed point of $T$.
Until 2009, no fixed-point theorems for asymptotically pseudocontractive mappings had been proved. Zhou in [10] filled this gap in the setting of Hilbert spaces, proving a fixed-point theorem for an asymptotically pseudocontractive mapping $T$ that is also uniformly $L$-Lipschitzian and uniformly asymptotically regular, and proving that the set of fixed points of $T$ is closed and convex. Moreover, Zhou proved the strong convergence of a CQ-iterative method involving this kind of mapping.
In this paper, our attention is on the class of generalized strongly asymptotically $\Phi$-pseudocontractions, defined as follows.
Definition 1.7. If $E$ is a Banach space and $C$ is a subset of $E$, a mapping $T: C \to C$ is said to be a generalized asymptotically $\Phi$-strongly pseudocontraction if $\langle T^n x - T^n y, j(x - y) \rangle \le k_n \|x - y\|^2 - \Phi(\|x - y\|)$, where $(k_n)$ is converging to one and $\Phi: [0, \infty) \to [0, \infty)$ is strictly increasing and such that $\Phi(0) = 0$.
One can note that:
(i) if $T$ has a fixed point, then it is unique. In fact, if $x^*$ and $q$ are fixed points for $T$, then for every $n$, $\|x^* - q\|^2 = \langle T^n x^* - T^n q, j(x^* - q) \rangle \le k_n \|x^* - q\|^2 - \Phi(\|x^* - q\|)$, so passing to the limit, it results that $\|x^* - q\|^2 \le \|x^* - q\|^2 - \Phi(\|x^* - q\|)$. Since $\Phi$ is strictly increasing and $\Phi(0) = 0$, then $x^* = q$.
(ii) there exist generalized asymptotically strongly $\Phi$-pseudocontractions (with $k_n = 1$ for all $n$) which are not strongly pseudocontractive; see [6].
We study the equivalence between three kinds of iterative methods involving the generalized asymptotically strongly $\Phi$-pseudocontractions. Moreover, we prove that these methods are equivalent and strongly convergent to the unique fixed point of the generalized strongly asymptotically $\Phi$-pseudocontraction $T$, under suitable hypotheses. We will briefly introduce some of the results along the same line as ours.
In 2001, Chidume and Osilike [11] proved the strong convergence of an iterative method of Mann type to a solution of an equation involving a $\varphi$-strongly accretive operator. In 2003, Chidume and Zegeye [12] studied an iterative method for a Lipschitzian pseudocontractive map $T$ with fixed points. The authors proved the strong convergence of the method to a fixed point of $T$ under suitable hypotheses on the control sequences.
Taking into account Chidume and Zegeye [12] and Chang [13], we introduce the modified Mann and Ishikawa iterative processes as follows: for any given $x_0 \in C$, the sequence $(x_n)$ is defined by
$x_{n+1} = (1 - a_n - c_n)x_n + a_n T^n y_n + c_n u_n$,
$y_n = (1 - b_n - d_n)x_n + b_n T^n x_n + d_n v_n$, (1.14)
where $(a_n)$, $(b_n)$, $(c_n)$, and $(d_n)$ are four sequences in $[0, 1]$ satisfying the conditions $a_n + c_n \le 1$ and $b_n + d_n \le 1$ for all $n$. In particular, if $b_n = d_n = 0$ for all $n$, we can define a sequence $(x_n)$ by
$x_{n+1} = (1 - a_n - c_n)x_n + a_n T^n x_n + c_n u_n$, (1.15)
which is called the modified Mann iteration sequence.
We also introduce an implicit iterative process as follows:
$x_n = (1 - a_n - c_n)x_{n-1} + a_n T^n x_n + c_n u_n$, (1.16)
where $(a_n)$, $(c_n)$ are two real sequences in $[0, 1]$ satisfying $a_n + c_n \le 1$ for all $n$, $(u_n)$ is a sequence in $C$, and $x_0 \in C$ is an initial point.
The algorithm is well defined. Indeed, if $T$ is an asymptotically strongly $\Phi$-pseudocontraction, one can observe that, for every fixed $n$, the mapping $S_n$ defined by $S_n x := (1 - a_n - c_n)x_{n-1} + a_n T^n x + c_n u_n$ is such that $\langle S_n x - S_n y, j(x - y) \rangle \le a_n k_n \|x - y\|^2$, that is, $S_n$ is a strongly pseudocontraction for every fixed $n$; then (see Theorem 13.1 in [14]) there exists a unique fixed point $x_n$ of $S_n$ for each $n$.
These kinds of iterative processes (also called by Chang iterative processes with errors) have been developed in [15–18], while equivalence theorems for Mann and Ishikawa methods have been studied in [19, 20], among the others. In [21], Huang established equivalences between convergence of the modified Mann iteration process with errors (1.15) and convergence of the modified Ishikawa iteration process with errors (1.14) for strongly successively $\Phi$-pseudocontractive mappings in uniformly smooth Banach spaces.
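A Mann-type scheme of the shape $x_{n+1} = (1 - a_n - c_n)x_n + a_n T^n x_n + c_n u_n$ is easy to experiment with numerically. The sketch below is only an illustration and everything in it is my own choice, not taken from the paper: the mapping $Tx = \cos x$ on the reals (whose unique fixed point is the Dottie number, about 0.739), the parameters $a_n = 1/(n+2)$, and error terms switched off ($c_n = 0$):

```python
import math

# Mann-type iteration x_{n+1} = (1 - a_n) x_n + a_n T^n x_n, with T^n the
# n-fold composition of T. T here is a toy map, not one from the paper.

def T(x):
    return math.cos(x)

def T_power(x, n):
    """n-fold composition T^n x."""
    for _ in range(n):
        x = T(x)
    return x

x = 2.0
for n in range(1, 1000):
    a_n = 1.0 / (n + 2)   # a_n -> 0 while the series sum of a_n diverges
    x = (1 - a_n) * x + a_n * T_power(x, n)

print(x)  # close to the fixed point, ~0.739
```

The choice $a_n \to 0$ with divergent sum mirrors the flavour of the control conditions (H1)-(H2) used in the theorems below, though the precise hypotheses of the paper are stronger and more delicate.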
In the next section, we prove that, in the setting of uniformly smooth Banach spaces, if $T$ is an asymptotically strongly $\Phi$-pseudocontraction, then not only are (1.14) and (1.15) equivalent, but also (1.16) is equivalent to the others. Moreover, we also prove that (1.14), (1.15), and (1.16) strongly converge to the unique fixed point of $T$, if it exists.
2. Preliminaries
We recall some definitions and conclusions.
Definition 2.1. $E$ is said to be a uniformly smooth Banach space if the modulus of smoothness $\rho_E(t)$ of $E$ satisfies $\lim_{t \to 0} \rho_E(t)/t = 0$.
Lemma 2.2 (see [22]). Let $E$ be a Banach space, and let $J$ be the normalized duality mapping; then for any $x, y \in E$, one has $\|x + y\|^2 \le \|x\|^2 + 2\langle y, j(x + y) \rangle$ for all $j(x + y) \in J(x + y)$.
The next lemma is one of the main tools for our proofs.
Lemma 2.3 (see [21]). Let $\Phi$ be a strictly increasing function with $\Phi(0) = 0$, and let , and be nonnegative real sequences such that Suppose that there exists an integer such that then .
Proof. The proof is the same as in [21], but we substitute with , in (2.4).
Lemma 2.4 (see [23]). Let , , , and be sequences such that for all . Assume that ; then the following results hold: (1) if (where ), then is a bounded sequence; (2) if one has and , then as .
Remark 2.5. If in Lemma 2.3 we choose , for all , (), then the inequality (2.4) becomes Setting and , and by the hypotheses of Lemma 2.3, we get as , , and . That is, we reobtain Lemma 2.4 in the case of .
3. Main Results
The ideas of the proofs of our main theorems take into account the papers of Chang and Chidume et al. [11, 13, 24].
Theorem 3.1. Let $E$ be a uniformly smooth Banach space, and let $T$ be a generalized strongly asymptotically $\Phi$-pseudocontractive mapping with fixed point and bounded range. Let the sequences be defined by (1.14) and (1.15), respectively, where $(a_n)$, $(b_n)$, $(c_n)$, $(d_n)$ satisfy (H1) and (H2), and the error sequences are bounded; then for any initial point, the following two assertions are equivalent: (i) the modified Ishikawa iteration sequence with errors (1.14) converges to the fixed point; (ii) the modified Mann iteration sequence with errors (1.15) converges to the fixed point.
Proof.
First of all, we note that by the boundedness of the range of , of the sequences , and by Lemma 2.4, it results that and are bounded sequences. So, we can set By Lemma 2.2, we have where . Using (1.14) and (1.15), we have In view of the uniform continuity of , we obtain that as . Furthermore, it follows from the definition of that for all , so Therefore, we have where . By (H1), we have that as . If for an infinite number of indices, we can extract a subsequence such that . For this subsequence, , as . In this case, we can prove that , that is, the thesis. Firstly, we note that substituting (3.4) into (3.2), we have where . Moreover, we observe that Thus, for every fixed , there exists such that for all Since , , , , and are null sequences (and in particular ), for the previously fixed , there exists an index such that, for all , for all . Take such that for a certain . We prove, by induction, that for every . Let . Suppose that . By (3.6), we have Thus, . Since is strictly increasing, . From (3.8), we obtain that One can note that hence In the same manner, Thus, So we have , which contradicts . By the same idea, we can prove that and then, by the inductive step, , for all . This is enough to ensure that . If there are only finitely many indices for which , then definitively . Since the function is strictly increasing, we have definitively Again substituting (3.4) and (3.18) into (3.2) and simplifying, we have Suppose that . It follows from , , , and the hypothesis that and , as . By virtue of Lemma 2.3, we obtain that . Hence, .
Theorem 3.2. Let $E$ be a uniformly smooth Banach space, and let $T$ be a generalized strongly asymptotically $\Phi$-pseudocontractive mapping with fixed point and bounded range. Let the sequences be defined by (1.15) and (1.16), respectively, where , are null sequences satisfying (H1) and (H2) of Theorem 3.1 and such that , for every .
Suppose moreover that the sequences , are bounded; then for any initial point, the following two assertions are equivalent: (i) the modified Mann iteration sequence with errors (1.15) converges to the fixed point; (ii) the implicit iteration sequence with errors (1.16) converges to the fixed point.
Proof. As in Theorem 3.1, by the boundedness of the range of and by Lemma 2.4, one obtains that our schemes are bounded. We define By the iteration schemes (1.15) and (1.16), we have where . By (1.15), we get It follows from (H1) that as , which implies that as . Moreover, for all , Again by the boundedness of all components, we have that and so Hence, we have that , where . Note that as . As in the proof of Theorem 3.1, we distinguish two cases: (i) the set of indices for which contains infinitely many terms; (ii) the set of indices for which contains finitely many terms. In the first case (i), we can extract a subsequence such that , as . Substituting (3.23) in (3.21), we have that where . Again by (3.23), for every , there exists an index such that if , By the hypotheses on the control sequences, with the same , there exists an index such that definitively So take with for a certain . We can prove that by proving that, for every , the result is . Let . If we suppose that , it results that , so . As a consequence, . In (3.26), we note that , so , hence (3.26) remains as in Theorem 3.1. This is a contradiction. By the same idea, and using the inductive hypothesis, we obtain that for every . This ensures that . In the second case (ii), definitively , and then, since the function is strictly increasing, we have Substituting (3.33) and (3.23) into (3.21) and simplifying, we have By virtue of Lemma 2.3, we obtain that .
Theorem 3.3. Let $E$ be a uniformly smooth Banach space, and let $T$ be a generalized strongly asymptotically $\Phi$-pseudocontractive mapping with fixed point and bounded range.
Let the sequence be defined by (1.15), where the control sequences satisfy (i) and (ii) , and the error sequence is bounded; then for any initial point, the sequence strongly converges to the fixed point.
Proof. Firstly, we observe that, by the boundedness of the range of , of the sequence , and by Lemma 2.4, we have that is bounded. By Lemma 2.2, we observe that where . Let We have so we can observe that: (1) as . Indeed from the inequality and since is norm to norm uniformly continuous, then , as ; (2) . Indeed, if we suppose that , by the monotonicity of , Thus, by (1) and by the hypotheses on and , the value is definitively negative. In this case, we conclude that there exists such that for every , and so In the same way we obtain that By the hypotheses and , the previous is a contradiction, and it follows that . Then, there exists a subsequence of that strongly converges to . This implies that for every , there exists an index such that, for all , . Now, we will prove that the sequence converges to . Since the sequences in (3.37) are null sequences and , but , then, for every , there exists an index such that for all , it results that So, fixing , let with for a certain . We will prove, by induction, that for every . Let . If not, it results that . Thus, , that is, . By the strict monotonicity of , . By (3.37), it results that We can note that , so Moreover, , so it results that This is a contradiction. Thus, . In the same manner, by induction, one obtains that, for every , . So .
Corollary 3.4. Let $E$ be a uniformly smooth Banach space, and let $T$ be a generalized strongly asymptotically $\Phi$-pseudocontractive mapping with bounded range and fixed point. The sequences are defined by (1.14), (1.15), and (1.16), respectively, where the control sequences satisfy (i) and (ii) , and the error sequences are bounded.
Then for any initial point, the following three assertions are equivalent and true: (i) the modified Ishikawa iteration sequence with errors (1.14) converges to the fixed point; (ii) the modified Mann iteration sequence with errors (1.15) converges to the fixed point; (iii) the implicit iteration sequence with errors (1.16) converges to the fixed point.
References
1. F. E. Browder, "Nonlinear mappings of nonexpansive and accretive type in Banach spaces," Bulletin of the American Mathematical Society, vol. 73, pp. 875–882, 1967.
2. T. Kato, "Nonlinear semigroups and evolution equations," Journal of the Mathematical Society of Japan, vol. 19, pp. 508–520, 1967.
3. C. Chidume, Geometric Properties of Banach Spaces and Nonlinear Iterations, vol. 1965 of Lecture Notes in Mathematics, Springer, London, UK, 2009.
4. K. Goebel and W. A. Kirk, "A fixed point theorem for asymptotically nonexpansive mappings," Proceedings of the American Mathematical Society, vol. 35, pp. 171–174, 1972.
5. K. Deimling, "Zeros of accretive operators," Manuscripta Mathematica, vol. 13, pp. 365–374, 1974.
6. M. O. Osilike, "Iterative solution of nonlinear equations of the $\varphi$-strongly accretive type," Journal of Mathematical Analysis and Applications, vol. 200, no. 2, pp. 259–271, 1996.
7. C. H. Xiang, "Fixed point theorem for generalized $\varphi$-pseudocontractive mappings," Nonlinear Analysis, vol. 70, no. 6, pp. 2277–2279, 2009.
8. J.
Schu, "Iterative construction of fixed points of asymptotically nonexpansive mappings," Journal of Mathematical Analysis and Applications, vol. 158, no. 2, pp. 407–413, 1991.
9. E. U. Ofoedu, "Strong convergence theorem for uniformly L-Lipschitzian asymptotically pseudocontractive mapping in real Banach space," Journal of Mathematical Analysis and Applications, vol. 321, no. 2, pp. 722–728, 2006.
10. H. Zhou, "Demiclosedness principle with applications for asymptotically pseudo-contractions in Hilbert spaces," Nonlinear Analysis, vol. 70, no. 9, pp. 3140–3145, 2009.
11. C. E. Chidume and M. O. Osilike, "Equilibrium points for a system involving m-accretive operators," Proceedings of the Edinburgh Mathematical Society, Series II, vol. 44, no. 1, pp. 187–199, 2001.
12. C. E. Chidume and H. Zegeye, "Approximate fixed point sequences and convergence theorems for Lipschitz pseudocontractive maps," Proceedings of the American Mathematical Society, vol. 132, no. 3, pp. 831–840, 2004.
13. S. S. Chang, "Some results for asymptotically pseudo-contractive mappings and asymptotically nonexpansive mappings," Proceedings of the American Mathematical Society, vol. 129, no. 3, pp. 845–853, 2001.
14. K. Deimling, Nonlinear Functional Analysis, Springer, Berlin, Germany, 1985.
15. S. S. Chang, K. K. Tan, H. W. J. Lee, and C. K. Chan, "On the convergence of implicit iteration process with error for a finite family of asymptotically nonexpansive mappings," Journal of Mathematical Analysis and Applications, vol. 313, no. 1, pp.
273–283, 2006.
16. F. Gu, "The new composite implicit iterative process with errors for common fixed points of a finite family of strictly pseudocontractive mappings," Journal of Mathematical Analysis and Applications, vol. 329, no. 2, pp. 766–776, 2007.
17. Z. Huang and F. Bu, "The equivalence between the convergence of Ishikawa and Mann iterations with errors for strongly successively pseudocontractive mappings without Lipschitzian assumption," Journal of Mathematical Analysis and Applications, vol. 325, no. 1, pp. 586–594, 2007.
18. L. S. Liu, "Ishikawa and Mann iterative process with errors for nonlinear strongly accretive mappings in Banach spaces," Journal of Mathematical Analysis and Applications, vol. 194, no. 1, pp. 114–125, 1995.
19. B. E. Rhoades and S. M. Soltuz, "The equivalence between the convergences of Ishikawa and Mann iterations for an asymptotically nonexpansive in the intermediate sense and strongly successively pseudocontractive maps," Journal of Mathematical Analysis and Applications, vol. 289, no. 1, pp. 266–278, 2004.
20. B. E. Rhoades and S. M. Soltuz, "The equivalence between Mann-Ishikawa iterations and multistep iteration," Nonlinear Analysis, vol. 58, no. 1-2, pp. 219–228, 2004.
21. Z.
Huang, "Equivalence theorems of the convergence between Ishikawa and Mann iterations with errors for generalized strongly successively $\varphi$-pseudocontractive mappings without Lipschitzian assumptions," Journal of Mathematical Analysis and Applications, vol. 329, no. 2, pp. 935–947, 2007.
22. Y. Xu, "Ishikawa and Mann iterative processes with errors
College of Natural Sciences and Mathematics
In today's highly technological society, the study of Mathematics takes on an increasingly important role. The Sacramento State Mathematics Department designs its courses with the goal of providing students with the mathematical concepts appropriate to the student's field. The program consists of sequences of courses that lead to a Bachelor of Arts with a major in Mathematics, with emphasis in Pure Mathematics, Applied Mathematics and Statistics, or a Teacher Preparation Program. A minor in Mathematics or Statistics and a Master of Arts in Mathematics are also offered.
Special Features
• The study of mathematics at Sacramento State has several strong advantages. The flexibility of the major gives students enough freedom to mold their degree along their particular interests.
• An excellent computer facility gives mathematics students easy access to the campus computer resources.
• Currently there is a demand for majors in mathematics with training in applied mathematics and statistics. Program graduates have had much success in finding employment in the public and private sectors.
• Since there is presently a need for high school mathematics teachers, some majors pursue a secondary teaching career. Graduate students in mathematics are finding opportunities for public and private employment in jobs requiring more advanced training in mathematics and statistics. Sacramento State Master's Degree graduates now teach at community colleges throughout the state. Upper division majors may check with the mathematics administrative support coordinator on the possibility of applying for paid positions as student assistants.
• Student assistants work 10-20 hours per week in math-related duties on campus.
Note: Students interested in a major or minor in mathematics should contact the Department secretary for an advising appointment with a mathematics advisor. Prerequisites must be completed with grade "C-" or better.
Grade "C-" or better is required in all courses applied to the Mathematics major or to the Mathematics or Statistics minors. PHYS 11A and PHYS 11C are recommended for all Mathematics majors.
Requirements - Placement - Mathematics Courses
Students who have not completed four years of high school mathematics consisting of
a. Beginning Algebra (one year)
b. Geometry (one year)
c. Intermediate Algebra-Trigonometry (one year)
d. Analytic Geometry-Mathematical Analysis (one year)
may need to complete part of this preparation at the University. The following diagram, which is based upon course prerequisites and major objectives, may be of assistance in selecting the necessary courses.
Satisfactory completion of the Entry Level Mathematics (ELM) requirement is a prerequisite to enrollment in any mathematics or statistics course in Area B-4 (Quantitative Reasoning) of General Education. The mathematics and statistics courses listed in Area B-4 are: MATH 1, MATH 17, MATH 24, MATH 26A, MATH 26B, MATH 29, MATH 30, MATH 31, MATH 35, STAT 1, and STAT 50.
Students planning to take any of the following courses: MATH 9, MATH 11, MATH 17, MATH 24, MATH 26A, MATH 29, MATH 30, MATH 107A, or STAT 1 must pass a diagnostic test. A brochure describing the diagnostic tests and containing sample questions is available in the campus bookstore. The following table gives the course and appropriate diagnostic test. Those students who want to prepare for the ELM may purchase the Entry Level Mathematics workbook at the Hornet Bookstore (see the Placement Tests section of this catalog). All students planning to take MATH 30, Calculus I, must take the Calculus Readiness test prior to the semester of enrollment in MATH 30.
Requirements - Bachelor of Arts Degree
Units required for Major: 48-51
Minimum total units required for the BA: 120
Courses in parentheses are prerequisites. Prerequisites must be completed with grade "C-" or better.
A.
Lower Division Core Courses (21 units)
(4) MATH 30 Calculus I (MATH 29 or four years of high school mathematics which includes two years of algebra, one year of geometry, and one year of mathematical analysis; completion of ELM requirement and Pre-Calculus Diagnostic Test)
(4) MATH 31 Calculus II (MATH 30 or appropriate high school based AP credit)
(4) MATH 32 Calculus III (MATH 31)
(3) MATH 35 Introduction to Linear Algebra (MATH 30 or appropriate high school based AP credit)
(3) MATH 45 Differential Equations for Science and Engineering (MATH 31)
(3) Select one of the following:
CSC 10 Introduction to Programming Logic (MATH 11 or equivalent)
CSC 15 Programming Concepts and Methodology I (CSC 10 or programming experience in a high-level programming language)
CSC 22 Visual Programming in BASIC (Intermediate Algebra)
CSC 25 Introduction to C Programming

B. Upper Division Core Courses (15 units)
(3) MATH 108 Introduction to Formal Mathematics (MATH 31, MATH 35)
(3) MATH 110A Modern Algebra (MATH 108)
(3) MATH 110B Modern Algebra (MATH 110A)
(3) MATH 130A Functions of a Real Variable (MATH 32 and MATH 108)
(3) MATH 130B Functions of a Real Variable (MATH 130A)

C. Additional Requirements for Specialized Study (12-15 units)
Select one of the following three options:

Pure Mathematics (12 units)
(3) MATH 117 Linear Algebra (MATH 110A)
(3) MATH 134 Functions of a Complex Variable and Applications (MATH 32)
(6) Select 6 units of upper division Mathematics or Statistics relating to the student's academic and professional objectives; consult advisor.

Applied Mathematics and Statistics (12 units)
(3) STAT 115A Introduction to Probability Theory (STAT 50 or instructor consent)
(3) STAT 115B Introduction to Mathematical Statistics (STAT 115A)
(6) Select two of the following:
MATH 104 Vector Analysis (MATH 32)
MATH 105A Advanced Mathematics for Science and Engineering I (MATH 32, MATH 45)
MATH 105B Advanced Mathematics for Science and Engineering II (MATH 105A)
MATH 117 Linear Algebra (MATH 110A)
MATH 134 Functions of a Complex Variable and Applications (MATH 32)
MATH 150 Introduction to Numerical Analysis (MATH 31)
MATH 170 Linear Programming (MATH 31; MATH 35 or MATH 100)
STAT 155 Introduction to Techniques of Operations Research (MATH 31; STAT 50, STAT 103, or STAT 115A; MATH 31 may be taken concurrently)

Teacher Preparation Program (15 units)
(3) MATH 102 Number Theory (MATH 31)
(3) MATH 121 College Geometry (MATH 31; MATH 32 or MATH 35)
(3) MATH 190 History of Mathematics (MATH 31 and upper division status in mathematics)
(3) MATH 193 Capstone Course for the Teaching Credential Candidate (Successful completion of at least five of the following: MATH 102, MATH 110A, MATH 110B, MATH 121, MATH 130A, MATH 130B, or MATH 190; MATH 110A or MATH 130A may be taken concurrently)
(3) STAT 1 Introduction to Statistics (MATH 9 or three years of high school mathematics which includes two years of algebra and one year of geometry; completion of ELM requirement and the Intermediate Algebra Diagnostic Test)

• Prerequisites must be completed with grade "C-" or better.
• Grade "C-" or better is required in all courses applied to a Mathematics major, or the Mathematics or Statistics minors.
• PHYS 11A and PHYS 11C are recommended for all Mathematics majors.
Requirements - Subject Matter Program (Pre-Credential Preparation)
Students interested in a Secondary Teaching Credential should select the Teacher Preparation Program in Section C of the BA requirements outlined above. Teaching credential candidates must also complete the Professional Education Program in addition to other requirements for a teaching credential. Consult the Department credential advisor for details. You may also obtain information about the Professional Education Program from the Teacher Preparation and Credentials Office, Eureka Hall 216, (916) 278-6403. Note: Due to continuing policy changes, it is important to consult a credential advisor for current details.

Requirements - Bachelor of Arts Degree - Integrated Mathematics Major/Single Subject Credential Program
Students in the Integrated Mathematics Major/Single Subject Credential Program (also called the Blended Program in Mathematics) begin their pedagogical studies while they are completing the mathematics courses required for the Bachelor's degree in Mathematics. The mathematics requirements include all of the courses required for the subject matter program in mathematics (see above), and MATH 198. Students who are interested in being admitted to the Blended Program in Mathematics must plan ahead, and must see their advisor as soon as possible. Admission requirements for the Blended Program include junior class standing with a minimum overall GPA of 2.67, a grade of "C-" or better in MATH 108, passing the Writing Placement for Juniors Exam (WPJ), spending and documenting at least 45 hours observing classes, tutoring, or teaching in a variety of settings in grades 7-12, taking all three sections of the California Basic Educational Skills Test (CBEST), and submitting an application packet to the Department of Mathematics and Statistics.
A completed application packet includes: • an application form; • an essay outlining reasons for entering a career in teaching; • two letters of recommendation; • two sets of transcripts from each college or university attended, other than Sacramento State; and • one complete Sacramento State transcript. The application packet may be submitted during the semester in which the requirements for admission are being completed, so the application may be submitted during the semester in which enrollment in MATH 108 occurs. There are three courses which are prerequisites or corequisites to the Blended Program and students are encouraged to take these courses prior to formal admission: (3) EDUC 170 Bilingual Education: Introduction to Educating English Learners (3) EDUC 100A/B Educating Students with Disabilities in Inclusive Settings & Lab (EDUC 100A and EDUC 100B must be taken concurrently) (2) HLSC 136 School Health Education (CPR training may be taken concurrently) In addition, students in the Blended Program take all the courses required for the Subject Matter Program in Mathematics (see above), as well as MATH 198 and the following education classes: (3) EDTE 372 Anthropology of Education (Acceptance into the Single Subject Teaching Credential Program; Enrollment in semester one) (1) EDTE 373A Assessment Center Laboratory I (Corequisite: Enrollment in semester one of the Single Subject Credential Program) (2) EDTE 373B Assessment Center Laboratory II (Admission to the Single Subject Credential Program; Enrollment in semester two) (3) EDTE 384 Instruction and Assessment of Academic Literacy (Admission to Single Subject Credential Program) (3) EDTE 386 Secondary School Mathematics (6) EDTE 470A Student Teaching I: Secondary Schools (Acceptance into the Single Subject Teaching Credential Program; Corequisite: EDTE 371A or EDTE 371D) (12) EDTE 470B Student Teaching II: Secondary Schools (EDTE 470A, Corequisite: EDTE 371B or EDTE 371E) (2) MATH 316 Psychology of 
Mathematics Instruction (Admission to the Mathematics Blended Program)
(2) MATH 371A Schools and Community A (Corequisite: EDTE 470A)
(2) MATH 371B Schools and Community B (Corequisite: EDTE 470B)

Requirements - Minor - Mathematics
Units required for the Minor: 20-21, all of which must be taken in Mathematics or Statistics. A minimum of 8 upper division units is required. At least 6 upper division units must be taken at Sacramento State. Courses in parentheses are prerequisites. Prerequisites must be completed with grade "C-" or better. Select one of the two following options.

Option I (20-21 units)
(4) MATH 30 Calculus I (MATH 29 or four years of high school mathematics which includes two years of algebra, one year of geometry, and one year of mathematical analysis; completion of ELM requirement and Pre-Calculus Diagnostic Test)
(4) MATH 31 Calculus II (MATH 30 or appropriate high school based AP credit)
(3-4) Select one of the following:
MATH 32 Calculus III (MATH 31)
MATH 35 Introduction to Linear Algebra (MATH 30 or appropriate high school based AP credit)
STAT 50 Introduction to Probability and Statistics (MATH 26A, MATH 30, or appropriate high school based AP credit)
(9) Select 9 units of upper division Mathematics and/or Statistics courses with approval of a Mathematics advisor.

Option II (20 units)
(4) MATH 30 Calculus I (MATH 29 or four years of high school mathematics which includes two years of algebra, one year of geometry, and one year of mathematical analysis; completion of ELM requirement and Pre-Calculus Diagnostic Test)
(4) MATH 31 Calculus II (MATH 30 or appropriate high school based AP credit)
(4) MATH 32 Calculus III (MATH 31)
(4) MATH 105A Advanced Mathematics for Science and Engineering I (MATH 32, MATH 45)
(4) MATH 105B Advanced Mathematics for Science and Engineering II (MATH 105A)

Requirements - Minor - Statistics
Units required for the Minor: 21, all of which must be taken in Mathematics or Statistics.
A minimum of 6 upper division units is required. At least 6 upper division units must be taken at Sacramento State. Courses in parentheses are prerequisites. Prerequisites must be completed with grade "C-" or better. Specific requirements are:
(4) MATH 30 Calculus I (MATH 29 or four years of high school mathematics which includes two years of algebra, one year of geometry, and one year of mathematical analysis; completion of ELM requirement and Pre-Calculus Diagnostic Test)
(4) MATH 31 Calculus II (MATH 30 or appropriate high school based AP credit)
(4) MATH 32 Calculus III (MATH 31) OR STAT 50 Introduction to Probability and Statistics (MATH 26A, MATH 30, or appropriate high school based AP credit)
(3) STAT 103 Intermediate Statistics (STAT 50 or instructor consent)
(3) STAT 115A Introduction to Probability Theory (STAT 50 or instructor consent)
(3) STAT 115B Introduction to Mathematical Statistics (STAT 115A)

The Department of Mathematics and Statistics offers a Master of Arts degree in Mathematics. The MA program is designed to provide qualified students with an opportunity to increase the breadth and depth of their mathematical knowledge and understanding. Beyond assuring that successful candidates are proficient in the basic areas of mathematics, the program is sufficiently flexible to permit graduates to pursue individual professional and mathematical interests, ranging from teaching at the secondary or community college level, to a career in the private sector, to preparation for graduate study beyond the master's degree. Graduate courses are usually offered in the late afternoon to accommodate students who work full-time.
Admission Requirements Admission as a classified graduate student in Mathematics requires: • an undergraduate major in Mathematics which includes one year each of Modern Algebra and Advanced Calculus or an undergraduate major in a related field together with one year each of Modern Algebra and Advanced Calculus; • a minimum 2.5 GPA; and • a minimum 2.5 GPA in the last 60 units attempted and a 3.0 GPA in Mathematics coursework. Students who have deficiencies in admission requirements that can be removed by specified additional preparation may be admitted with conditionally classified graduate status. Any such deficiencies will be noted on a written response to the admission application. No credit will be given towards the MA for MATH 110A, MATH 110B, MATH 130A, or MATH 130B. Admission Procedures Applications are accepted as long as room for new students exists. However, students are strongly urged to apply by the posted university application deadline for the fall or spring terms, in order to allow time for admission before registration. All prospective graduate students, including Sacramento State graduates, must file the following with the Office of Graduate Studies, River Front Center 206, (916) 278-6470: • an online application for admission; and • two sets of official transcripts from all colleges and universities attended, other than Sacramento State. For more admissions information and application deadlines please visit http://www.csus.edu/gradstudies/. Admission decisions are made approximately six to eight weeks after the application deadline date. Applicants will be notified of an admission decision via e-mail. Advancement to Candidacy Each student must file an application for Advancement to Candidacy, indicating a proposed program of graduate study. 
This procedure should begin as soon as the classified graduate student has:
• removed any deficiencies in admission requirements;
• completed at least 18 units in the graduate program with a minimum 3.0 GPA, including at least 12 units at the 200 level; and
• taken the Writing Placement for Graduate Students (WPG) or a Graduate Writing Intensive (GWI) course in their discipline within the first two semesters of coursework at California State University, Sacramento, or secured approval for a WPG waiver.
Advancement to Candidacy forms are available in the Office of Graduate Studies. The student fills out the form after planning a degree program in consultation with a Mathematics advisor. The completed form is then returned to the Office of Graduate Studies for approval.

Requirements - Master of Arts Degree
Units required for the MA: 30, including at least 24 units of approved 200-level courses
Minimum required GPA: 3.0.
Courses in parentheses are prerequisites.

A. Required Courses (27 units)
(3) MATH 210A* Algebraic Structures (MATH 110B)
(3) MATH 210B* Algebraic Structures (MATH 210A)
(3) MATH 230A* Real Analysis (MATH 130B)
(3) MATH 230B* Real Analysis (MATH 230A)
(12) Select four of the following:
MATH 220A Topology (MATH 130B)
MATH 220B Topics in Topology (MATH 220A)
MATH 234A Complex Analysis (MATH 130B; MATH 105B or MATH 134 recommended)
MATH 234B Topics in Complex Analysis (MATH 234A)
MATH 241A Methods of Applied Mathematics (MATH 134 recommended)
MATH 241B Topics in Applied Mathematics (MATH 241A)
STAT 215A Introduction to Mathematical Statistics (STAT 115A, STAT 115B; MATH 134 recommended)
STAT 215B Topics in Introduction to Mathematical Statistics (STAT 215A)
(3) Select one of the following with advisor approval:
(1-6) MATH 299 Special Problems
Electives in mathematics and related disciplines
* Courses must be completed with grade "B-" or better.
B.
Culminating Requirement (3 units)
Written Comprehensive Examination
Note: A foreign language is not required for the MA degree. However, students who plan further graduate study are encouraged to take coursework in French, German, or Russian, since proficiency in two of these languages is usually required in doctoral programs.

Career Possibilities
Mathematics Teacher · Numerical Analyst · Engineering Analyst · Systems Analyst · Operations Analyst · Actuary · Casualty Rater · Technical Writer · Types of Statisticians: Survey/Polling, Biological/Agricultural, Business/Economics, Physical Sciences/Engineering

Faculty
Edward Bradley, Coskun Cetin, Rafael Diaz-Escamilla, Andras Domokos, Elizabeth Ebrahimzadeh, Kimberly Elce, Roland Esquerra, Scott Farrand, Tracy Hamilton, John Ingram, Elaine Kasimatis, Bin Lu, Marcus Marsh, K. C. Ng, Michelle Norris, Janusz Prajs, Doraiswamy Ramachandran, Geetha Ramachandran, Thomas Schulte, Corey Shanbrom, Gary Shannon, Ed Shea, Lisa Taylor, David Zeigler, Kathy Zhong, Kecheng Zhou

Contact Information
Edward Bradley, Department Chair
Dawn Giovannoni, Administrative Support Coordinator
Brighton Hall 141
(916) 278-6534
Find the surface area of the figure below. [figure not available]
A. 94.25 m²  B. 94.52 m²  C. 36.13 m²  D. 148.04 m²
scalar object bug or feature?
Charles R Harris charlesr.harris at gmail.com
Wed Oct 18 16:03:32 CDT 2006

On 10/18/06, Alan G Isaac <aisaac at american.edu> wrote:
> On Wed, 18 Oct 2006, Keith Goodman apparently wrote:
> Here's a simpler (?) example:
>
> >>> x=numpy.random.rand(300,1)>0
> >>> x.sum()
> 300
> >>> sum(x)
> array([44], dtype=int8)
> >>> x=numpy.random.rand(300)>0
> >>> sum(x)
> 300
>
> Alan Isaac

Hmmm, I think sum(x) and x.sum() should behave the same. Note that

In [12]: sum(x, dtype=int)

I think sum should stick to the modular arithmetic unless specified otherwise. But in any case sum(x) and x.sum() should do the same thing.
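The `array([44], dtype=int8)` result is int8 modular arithmetic: the builtin `sum` iterated over the 300 rows of the (300, 1) boolean array, the accumulator stayed in `int8`, and 300 wrapped around to 300 mod 256 = 44. A minimal sketch reproducing the wraparound (my own illustration, not from the thread): the `int8` accumulator is forced explicitly here, since builtin-`sum` promotion behavior has changed in NumPy releases since 2006.

```python
import numpy as np

# 300 True values in a (300, 1) array, as in Alan Isaac's example.
x = np.ones((300, 1), dtype=bool)

# ndarray.sum() accumulates booleans in a wide integer type: correct count.
wide = x.sum()                 # 300

# Forcing an int8 accumulator reproduces the 2006 builtin-sum result:
# 300 overflows the signed byte and wraps to 300 - 256 = 44.
narrow = x.sum(dtype=np.int8)  # 44

print(wide, narrow)
```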
Fort Belvoir Math Tutor ...I have also worked with high school students on Math and Science. While working on my Molecular Biology BS from Johns Hopkins University, I tutored college students on Math (including Calculus) and Science (including Chemistry). I have worked with individual students and small groups. I like to... 40 Subjects: including calculus, chemistry, elementary (k-6th), vocabulary ...This includes: Prewriting Organization Essay writing Mastery of various writing forms Sentence structure This is not an exhaustive list. I welcome students of all ages and skill levels. I am well versed in many different aspects of English. 22 Subjects: including prealgebra, ASVAB, ESL/ESOL, English ...I studied abroad for 4 months in Madrid, Spain. While there I volunteered with Helenski Espana, a human rights group. We taught lessons to school-aged children (in Spanish) on human rights and their basic rights as a citizen of Spain. 17 Subjects: including algebra 2, calculus, geometry, physics ...My broad background in math, science, and engineering combined with my extensive research experience provides me with the unique tools to teach effectively to students at all levels.As an undergraduate student in physics and electrical engineering and as a doctoral student in physics, I took adva... 16 Subjects: including algebra 1, algebra 2, calculus, geometry ...I look forward to hearing from you and helping you reach your goal of academic success! I am the former Owner and Director of Cross Cultures Learning Center, Inc. (CCLC), a MSDE registered church-exempt school, where I served as an Elementary and Middle school grade-level teacher for more than 1... 5 Subjects: including prealgebra, reading, elementary (k-6th), elementary math
partial fractions and complex roots in 50g

10-16-2008, 03:46 PM
Doing partial fractions with this in the denominator: s^2+10s+169 gives me a crazy result. When I simply factor this, I know why: I get something involving e^(i*ATAN(12/5)) etc. All I really want is (s+5-12i)(s+5+12j).
I played with most of the simplification functions in the ALG and EXP&LN menus, but none really makes a difference. I know it's not too hard to do by hand, but that's not why I have this calculator.
Ultimately I want to get the partial fractions with an expression like that in the denominator, which I then take the inverse Laplace transform of. It works great with real roots, just not with complex roots.
Any ideas? Thanks!
Christoph

10-16-2008, 04:08 PM
Re: partial fractions and complex roots in 50g
On 16 Oct., 17:46, Christoph Koehler <christoph.koeh...@gmail.com> wrote:
> Doing partial fractions with this in the denominator: s^2+10s+169
> gives me a crazy result. When I simply factor this, I know why.
On my 50g it works like this: choose MODE, then CAS, and check the flag for Complex, OK. Then enter x^2+10*x+169, choose EDIT and FACTO, and you get the result you want.

10-16-2008, 05:10 PM
Re: partial fractions and complex roots in 50g
If you try it with RECT (rectangular coordinates) mode and Complex mode both turned on, I think it will do what you want. Having the coordinates in Polar or Spherical modes causes some complex expressions to be displayed in polar form, r*exp(i*theta), instead of rectangular form, a+bi. I've noticed this mode setting also affects certain integrals.

10-17-2008, 03:21 AM
Re: partial fractions and complex roots in 50g
On Oct 16, 12:10 pm, Wes <wjltemp...@yahoo.com> wrote:
> If you try it with RECT (rectangular coordinates) mode and Complex
> mode both turned on, I think it will do what you want.
I am so glad you figured that out. That was exactly it, and it makes a lot of sense. Thanks again!

10-18-2008, 01:50 PM
Re: partial fractions and complex roots in 50g
On Thu, 16 Oct 2008 10:10:43 -0700 (PDT), Wes <wjltemp-gg@yahoo.com> wrote:
> If you try it with RECT (rectangular coordinates) mode and Complex
> mode both turned on, I think it will do what you want.
Could you please point me to ANY HP50 manual where the above procedure is described?

06-08-2009, 12:25 PM
Re: partial fractions and complex roots in 50g
No problems on the 50g: turn the calculator on, clear all entries, and enter the equation in the frequency domain. Then press the white left-arrow key, then key (1), select 2 (Polynomial) and then 15 (PARTFRAC). If you just want to find the inverse Laplace transform, follow the steps below.
Clear all entries on screen, go into the CALC menu by pressing the white left-arrow key and then key (4), select 3 (Differential equations), then 2 (ILAP), and enter the equation in the frequency domain f(s) to get f(t) as the answer.
To do it manually, e.g. to find the inverse Laplace transform of 1/(s^2+s+1), it needs the form (As+B)/(s^2+s+1). Then 1/(s^2+s+1) = (As+B)/(s^2+s+1), therefore 1 = As+B, so A = 0 and B = 1:
0*(s+0.5)/((s+0.5)^2+0.75) + 1/((s+0.5)^2+0.75)
with the denominators written as completed squares. From the inverse Laplace transform tables you may then derive the solution. I believe this to be correct.
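Both calculations in this thread can be checked off-calculator. A sketch with sympy (my own illustration, not part of the original posts): the first part recovers the rectangular-form roots -5 ± 12i of s^2+10s+169; the second verifies the table entry L{e^(-at) sin(bt)} = b/((s+a)^2+b^2) with a = 1/2, b = √3/2, which gives f(t) = (2/√3) e^(-t/2) sin(√3 t/2) for 1/(s^2+s+1).

```python
import sympy as sp

s, t = sp.symbols("s t")

# Complex roots of s^2 + 10s + 169: the factors (s + 5 - 12i)(s + 5 + 12i).
roots = sp.solve(s**2 + 10*s + 169, s)
print(roots)

# Candidate inverse transform of 1/(s^2 + s + 1) from the standard tables,
# after completing the square: 1/((s + 1/2)^2 + 3/4).
f = (2 / sp.sqrt(3)) * sp.exp(-t / 2) * sp.sin(sp.sqrt(3) * t / 2)

# Verify by transforming forward: the Laplace transform of f should be
# exactly 1/(s^2 + s + 1).
F = sp.laplace_transform(f, t, s, noconds=True)
print(sp.simplify(F - 1 / (s**2 + s + 1)))  # 0
```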
Surd problem

March 12th 2009, 02:06 AM
hi all, this is a problem that requires the answer expressed as a surd (calculator not to be used).
The sides of a rectangle are in the ratio 2:3. The diagonal is of length 26cm. Find the perimeter.
The answer given in the answer section is $20\sqrt{13}$. My attempt:
$26^2 = 626$
sides are therefore $\sqrt{\frac{2}{5} \cdot 626}$ and $\sqrt{\frac{3}{5} \cdot 626}$
and perimeter $2\sqrt{\frac{1352}{5}} + 2\sqrt{\frac{2028}{5}}$
using prime factors of 1352 and 2028: $26\sqrt{\frac{8}{5}} + 52\sqrt{\frac{3}{5}}$
using a calculator the answer i calculated is not equal to the textbook's answer.

March 12th 2009, 02:17 AM
First of all, $26^2 = 676$. I don't know what you are trying to do. Let's say the sides are $2x$ and $3x$:
$\text{Diagonal} = \sqrt{4x^2+9x^2} = 26$
$x\sqrt{13} = 26$
Divide both sides by $\sqrt{13}$: $x = 2\sqrt{13}$
Perimeter $= 2(2x + 3x) = 10x = 20\sqrt{13}$

March 12th 2009, 02:45 AM
excellent, thanks adarsh. (Clapping) i didnt think about using algebra, i was just taking the ratio as a total 676=5/5 and expected the result to be the same. I need to figure out where i went wrong (Giggle)

March 12th 2009, 06:25 AM
i played with this a bit more, im still confused as to why the above did not work, but using adarsh's algebra:
$26 = \sqrt{(\frac{2}{5}x)^2 + (\frac{3}{5}x)^2}$
$26 = \sqrt{\frac{4}{25}x^2 + \frac{9}{25}x^2}$
$26 = \sqrt{\frac{13}{25}x^2} \equiv x\sqrt{\frac{13}{25}}$
$x = 26 \div \sqrt{\frac{13}{25}} \equiv \frac{26 \cdot 5}{\sqrt{13}} \equiv \frac{13 \cdot 10}{13^{1/2}} \equiv 10\sqrt{13}$
perimeter is 2 x sides: $2 \cdot 10\sqrt{13} \equiv 20\sqrt{13}$

March 12th 2009, 11:32 AM
I think you got it. But just as an assurance I will do it your way. Let's consider the sides to be $\frac{2}{5}x$ and $\frac{3}{5}x$, and follow the same steps as you did, starting from the Pythagorean theorem:
$26 = \sqrt{(\frac{2}{5}x)^2 + (\frac{3}{5}x)^2}$
which again gives $x = 10\sqrt{13}$. Perimeter is the sum of the lengths of all sides, so what we basically did was
$(\frac{3x}{5} + \frac{2x}{5}) + (\frac{3x}{5} + \frac{2x}{5}) = x + x = 2x$ = Answer

March 13th 2009, 01:16 AM
thank you for explaining it so thoroughly, my problem solving skill is not as good as it should be. I need to think before jumping into questions (Hi)
Matrix rotations

May 12th 2008, 12:41 PM #1
I'm storing a matrix to represent the rotational position of a ball. I initialize it to the identity matrix
1 0 0
0 1 0
0 0 1
whose rows represent the x-y-z axes of my coord space. When the ball receives an impulse, it starts traveling in said direction, but surface friction makes it start to roll. I've calculated the axis of rotation, and multiplied that original vector by a matrix formed from a quaternion. This works perfectly, until the ball bounces off of something at other than a 180 degree turnaround. It works in all directions of impulse, until it gets off of its initial x-y-z axes.
I'm new here, so I don't exactly know what to provide to help you help me, so please take that into consideration. I can tell the rotation is working because I am raytracing the ball, using the matrix to rotate a vector from the center of the ball to the outside, then using that to look up UV coords in a texture map. I believe the problem is that I need to rotate my rotation matrix before I multiply it into the existing matrix...
Apparently, if you want to rotate the ball about its own (relative) axis you do
current_matrix * what_to_rotate_by
But if you want to rotate about a world axis, you do
current_matrix = what_to_rotate_by * current_matrix
So simple.

May 12th 2008, 02:31 PM #2
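The poster's conclusion is the standard pre- versus post-multiplication rule for composing rotations. A small numpy sketch (my own illustration, not from the thread) showing that the two update orders genuinely differ, using rotation matrices about the z and x axes:

```python
import numpy as np

def rot_x(a):
    """Rotation matrix about the x axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    """Rotation matrix about the z axis by angle a (radians)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

current = rot_z(np.pi / 2)   # the ball's accumulated orientation
delta = rot_x(np.pi / 2)     # the new roll increment

local = current @ delta      # rotate about the ball's own (body) axis
world = delta @ current      # rotate about a fixed world axis

# Rotation matrices don't commute, so the two update rules disagree.
print(np.allclose(local, world))  # False
```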
Eigenvalues of semi-positone Hammerstein integral equations and applications to boundary value problems. (English) Zbl 1186.45002

Nonlinear eigenvalue problems with positive eigenfunctions for the Hammerstein equation
$$\lambda y(t) = \int_0^1 k(t,s) f(s, y(s)) \, ds$$
are studied, where $f$ can be negative but is bounded from below ("semi-positone"), $k$ is nonnegative, and $C(t)\Phi(s) \le k(t,s) \le \Phi(s)$ with $0 \le C(t) \le 1$. (For some reason, the kernel is actually written in the split form $k(t,s)g(s)$ in the paper.) Using estimates for the spectral radius (and thus the largest eigenvalue) of the positive linear integral operators
$$Lu(t) = \int_\alpha^\beta k(t,s) u(s) \, ds,$$
conditions for intervals of eigenvalues (of the nonlinear problem) and, in the case of appropriately oscillating $f$, also lower bounds on their multiplicities are given in terms of the spectral radius of $L$ and in terms of its estimates. The proof consists in considering an auxiliary positive nonlinear operator on a special cone and proving, using the mentioned quantities, that its fixed point index is 0 or 1 on the intersection of certain balls with the cone. The result can be applied to the third-order three-point boundary value problem
$$\lambda y'''(t) - g(t) f(t, y(t)) = 0, \qquad y(0) = y'(\beta) = y''(1) = 0.$$
For the corresponding Green's function, estimates for the required quantities are calculated.
45C05 Eigenvalue problems (integral equations) 47H30 Particular nonlinear operators 34B18 Positive solutions of nonlinear boundary value problems for ODE 34B16 Singular nonlinear boundary value problems for ODE 45G10 Nonsingular nonlinear integral equations 34B15 Nonlinear boundary value problems for ODE
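The bounds in the review turn on the spectral radius of the linear operator $L$. As a minimal numerical sketch (not the paper's method: the kernel min(t,s) and the interval [0,1] are illustrative assumptions, since the actual Green's function is not reproduced here), one can discretize $L$ with the midpoint rule and estimate its spectral radius by power iteration:

```python
import math

def spectral_radius(kernel, n=160, iters=80):
    """Estimate the spectral radius of (Lu)(t) = integral_0^1 kernel(t,s) u(s) ds
    via midpoint-rule discretization and power iteration."""
    h = 1.0 / n
    pts = [(i + 0.5) * h for i in range(n)]              # midpoint nodes
    A = [[kernel(t, s) * h for s in pts] for t in pts]   # discretized operator
    u = [1.0] * n
    rho = 0.0
    for _ in range(iters):
        v = [sum(row[j] * u[j] for j in range(n)) for row in A]
        rho = max(abs(x) for x in v)                     # dominant eigenvalue estimate
        u = [x / rho for x in v]
    return rho

# Hypothetical kernel: min(t, s), the Green's function of -u'' = f with
# u(0) = 0, u'(1) = 0; its largest eigenvalue is known to be 4/pi^2.
print(spectral_radius(lambda t, s: min(t, s)))   # close to 0.405285
```

For a kernel with a known spectrum, as here, the power iterate settles on the largest eigenvalue quickly because the spectral gap is wide.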
{"url":"http://zbmath.org/?q=an:1186.45002","timestamp":"2014-04-19T14:59:32Z","content_type":null,"content_length":"25351","record_id":"<urn:uuid:bc11a5a8-ab18-491d-908e-d7e596e66099>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00555-ip-10-147-4-33.ec2.internal.warc.gz"}
A188737 - OEIS %S 2,7,0,3,2,5,7,4,0,9,5,4,8,8,1,4,5,5,1,6,6,7,0,4,5,7,1,3,6,2,7,1,3,2, %T 1,9,2,8,7,4,4,6,7,5,0,8,1,2,0,4,1,0,6,6,8,0,0,1,2,9,2,0,3,4,2,4,0,4, %U 4,5,1,7,1,1,3,3,6,4,5,9,1,0,1,2,7,9,8,2,3,4,8,4,6,5,5,4,6,7,6,0,8,2,3,3,8,9,9,6,8,1,4,6,4,7,8,6,1,4,0,2,5,3,5,4,1,1,0,5,5,7 %N Decimal expansion of (7+sqrt(85))/6. %C Decimal expansion of the length/width ratio of a (7/3)-extension rectangle. See A188640 for definitions of shape and r-extension rectangle. %C A (7/3)-extension rectangle matches the continued fraction [2,1,2,2,1,2,2,1,2,2,1,...] for the shape L/W=(7+sqrt(85))/6. This is analogous to the matching of a golden rectangle to the continued fraction [1,1,1,1,1,1,1,1,...]. Specifically, for the (7/3)-extension rectangle, 2 squares are removed first, then 1 square, then 2 squares, then 2 squares,..., so that the original rectangle of shape (7+sqrt(85))/6 is partitioned into an infinite collection of squares. %e 2.703257409548814551667045713627132192874467508120... %t r = 7/3; t = (r + (4 + r^2)^(1/2))/2; FullSimplify[t] %t N[t, 130] %t RealDigits[N[t, 130]][[1]] %t ContinuedFraction[t, 120] %Y Cf. A188640. %K nonn,cons %O 1,1 %A _Clark Kimberling_, Apr 12 2011
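The Mathematica lines in the entry can be mirrored in a few lines of Python (a sketch; double precision limits the continued fraction to its first dozen or so terms):

```python
import math

# r = 7/3; t = (r + sqrt(4 + r^2))/2 simplifies to (7 + sqrt(85))/6.
r = 7.0 / 3.0
t = (r + math.sqrt(4.0 + r * r)) / 2.0

def continued_fraction(x, terms):
    """Leading terms of the simple continued fraction of x."""
    cf = []
    for _ in range(terms):
        a = math.floor(x)
        cf.append(int(a))
        x = 1.0 / (x - a)
    return cf

print(t)                          # 2.70325740954881...
print(continued_fraction(t, 9))   # the [2,1,2,2,1,2,2,1,2,...] pattern
```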
{"url":"http://oeis.org/A188737/internal","timestamp":"2014-04-19T02:56:34Z","content_type":null,"content_length":"8492","record_id":"<urn:uuid:37baaaaa-fd9c-44be-a76e-c4fe4ba329cc>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00633-ip-10-147-4-33.ec2.internal.warc.gz"}
The algebra of timed processes, ATP: Theory and application Results 1 - 10 of 83 - Formal Aspects of Computing, 2000 "... We investigate different forms of termination in timed process algebras. The integrated framework of discrete and dense time, relative and absolute time process algebras is extended with forms of successful and unsuccessful termination. The different algebras are interrelated by embeddings and conse ..." Cited by 155 (25 self) Add to MetaCart We investigate different forms of termination in timed process algebras. The integrated framework of discrete and dense time, relative and absolute time process algebras is extended with forms of successful and unsuccessful termination. The different algebras are interrelated by embeddings and conservative extensions. , 1991 "... We present an overview and synthesis of existing results about process algebras for the specification and analysis of timed systems. The motivation is double: present an overview of some relevant and representative approaches and suggest a unifying framework for them. time, we propose a general model f ..." Cited by 140 (4 self) Add to MetaCart We present an overview and synthesis of existing results about process algebras for the specification and analysis of timed systems. The motivation is double: present an overview of some relevant and representative approaches and suggest a unifying framework for them. time, we propose a general model for them: transition systems whose labels are either elements of a vocabulary of actions or elements of a time domain. Many properties of this model are studied concerning their impact on description capabilities and on realisability issues. An overview of the language features of the process algebras considered is presented, by focusing on constructs used to express time constraints. The presentation is organised as an exercise of building a timed process algebra from a standard process algebra for untimed systems.
The overview is completed by a discussion about description capabilities according to semantic and pragmatic criteria. , 1999 "... This chapter surveys the semantic ramifications of extending traditional process algebras with notions of priority that allow for some transitions to be given precedence over others. The need for these enriched formalisms arises when one wishes to model system features such as interrupts, prioritized ..." Cited by 103 (12 self) Add to MetaCart This chapter surveys the semantic ramifications of extending traditional process algebras with notions of priority that allow for some transitions to be given precedence over others. The need for these enriched formalisms arises when one wishes to model system features such as interrupts, prioritized choice, or real-time behavior. Approaches to priority in process algebras can be classified according to whether the induced notion of pre-emption on transitions is global or local and whether priorities are static or dynamic. Early work in the area concentrated on global preemption and static priorities and led to formalisms for modeling interrupts and aspects of real-time, such as maximal progress, in centralized computing environments. More recent research has investigated localized notions of pre-emption in which the distribution of systems is taken into account, as well as dynamic priority approaches, i.e., those where priority values may change as systems evolve. The latter allows one to model behavioral phenomena such as scheduling algorithms and also enables the efficient encoding of real-time semantics. Technically, this chapter studies the different models of priorities by presenting extensions of Milner's Calculus of Communicating Systems (CCS) with static and dynamic priority as well as with notions of global and local pre-emption.
In each case the operational semantics of CCS is modified appropriately, behavioral theories based on strong and weak bisimulation are given, and related approaches for different process-algebraic settings are - Theoretical Computer Science , 1998 "... In this tutorial we give an overview of the process algebra EMPA, a calculus devised in order to model and analyze features of real-world concurrent systems such as nondeterminism, priorities, probabilities and time, with a particular emphasis on performance evaluation. The purpose of this tutorial ..." Cited by 95 (9 self) Add to MetaCart In this tutorial we give an overview of the process algebra EMPA, a calculus devised in order to model and analyze features of real-world concurrent systems such as nondeterminism, priorities, probabilities and time, with a particular emphasis on performance evaluation. The purpose of this tutorial is to explain the design choices behind the development of EMPA and how the four features above interact, and to show that a reasonable trade-off between the expressive power of the calculus and the complexity of its underlying theory has been achieved. , 1993 "... The paper presents results of ongoing work aiming at the unification of some behavioral description formalisms for timed systems. We propose for the algebra of timed processes ATP a very general semantics in terms of a time domain. It is then shown how ATP can be translated into a variant of timed g ..." Cited by 80 (9 self) Add to MetaCart The paper presents results of ongoing work aiming at the unification of some behavioral description formalisms for timed systems. We propose for the algebra of timed processes ATP a very general semantics in terms of a time domain. It is then shown how ATP can be translated into a variant of timed graphs. This result allows the application of existing model-checking techniques to ATP. Finally, we propose a notion of hybrid systems as a generalization of timed graphs.
Such systems can evolve, either by executing a discrete transition, or by performing some "continuous " transformation. The formalisms studied admit the same class of models: time deterministic and time continuous, possibly infinitely branching transition systems labeled by actions or durations. - IEEE Transactions on Software Engineering , 1992 "... We propose a method for the implementation and analysis of real-time systems, based on the compilation of specifications into extended automata. Such a method has been already adopted for the so called "synchronous" real-time programming languages. ..." Cited by 75 (8 self) Add to MetaCart We propose a method for the implementation and analysis of real-time systems, based on the compilation of specifications into extended automata. Such a method has been already adopted for the so called "synchronous" real-time programming languages. - PROCEEDINGS OF THE IEEE , 1994 "... Recently, significant progress has been made in the development of timed process algebras for the specification and analysis of real-time systems. This paper describes a timed process algebra called ACSR, which supports synchronous timed actions and asynchronous instantaneous events. Timed actions a ..." Cited by 58 (40 self) Add to MetaCart Recently, significant progress has been made in the development of timed process algebras for the specification and analysis of real-time systems. This paper describes a timed process algebra called ACSR, which supports synchronous timed actions and asynchronous instantaneous events. Timed actions are used to represent the usage of resources and to model the passage of time. Events are used to capture synchronization between processes. To be able to specify real systems accurately, ACSR supports a notion of priority that can be used to arbitrate among timed actions competing for the use of resources and among events that are ready for synchronization. 
The paper also includes a brief overview of other timed process algebras and discusses similarities and differences between them and - Theor. Comput. Sci , 2004 "... Abstract. This note addresses the history of process algebra as an area of research in concurrency theory, the theory of parallel and distributed systems in computer science. Origins are traced back to the early seventies of the twentieth century, and developments since that time are sketched. The a ..." Cited by 56 (1 self) Add to MetaCart Abstract. This note addresses the history of process algebra as an area of research in concurrency theory, the theory of parallel and distributed systems in computer science. Origins are traced back to the early seventies of the twentieth century, and developments since that time are sketched. The author gives his personal views on these matters. He also considers the present situation, and states some challenges for the future. - THEORETICAL COMPUTER SCIENCE , 1997 "... ..." - Formal Aspects of Computing , 1996 "... The timed automaton model of [LV92, LV93] is a general model for timing-based systems. A notion of timed action transducer is here defined as an automata-theoretic way of representing operations on timed automata. It is shown that two timed trace inclusion relations are substitutive with respect to ..." Cited by 40 (13 self) Add to MetaCart The timed automaton model of [LV92, LV93] is a general model for timing-based systems. A notion of timed action transducer is here defined as an automata-theoretic way of representing operations on timed automata. It is shown that two timed trace inclusion relations are substitutive with respect to operations that can be described by timed action transducers. Examples are given of operations that can be described in this way, and a preliminary proposal is given for an appropriate language of operators for describing timing-based systems.
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=58115","timestamp":"2014-04-19T00:11:12Z","content_type":null,"content_length":"36609","record_id":"<urn:uuid:1088035d-91cc-4f1b-91fe-986b2998f40f>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00507-ip-10-147-4-33.ec2.internal.warc.gz"}
Won't return to menu() function 09-04-2006 #1 Registered User Join Date Aug 2006 Won't return to menu() function So I've got another problem with a program. It's not really that big of a deal, but I can't seem to figure out why it's not working. I searched, but I couldn't find anything related to this. I've got a geometry calculation program, you decide which shape you want to calculate for, and then the actual calculation. Then you input the values necessary for the calculation. The program calculates, and outputs the answer. Finally it returns to the menu, in case you need to do more calculations. Everything works but returning to the menu() function, and I can't figure out why. Any help is greatly appreciated. Thanks! #include <iostream> #include <string> #include <math.h> //Function prototypes. void menu(); int which_calc(); void calc_circle(); void calc_rectangle(); void calc_triangle(); using namespace std; int main() return (0); void menu() int choice = 0; cout << "Which formula would you like to use?\n\n"; cout << " 1. Calculate for a circle\n"; cout << " 2. Calculate for a rectangle\n"; cout << " 3. Calculate for a triangle\n"; cout << " 4. Quit\n"; cin >> choice; if (choice < 1 || choice > 4) cout << "Please enter a correct menu number\n"; }while (choice < 1 || choice > 4); case 1: cout << "You chose calculate for a circle\n\n"; case 2: cout << "You chose calculate for a rectangle\n\n"; case 3: cout << "You chose calculate for a triangle\n\n"; case 4: }//!End of switch/case Function whichCalc() gets rid of several lines of repeating code by asking which calculation they want and returning int choice. int which_calc() int choice = 0; cout << "Which calculation would you like?\n\n"; cout << " 1. Circumference\n"; cout << " 2. 
Area\n"; cin >> choice; if (choice < 1 || choice > 2) cout << "Please enter a correct menu number\n"; }while(choice < 1 || choice > 2); void calc_circle() int choice = which_calc(); double radius, area, circumference; const double PI = 3.141592654; cout << "Please enter radius: "; cin >> radius; if (choice == 1) cout << "The circumference of a circle with a radius of " << radius << " is " << (2 * PI * radius) << endl; cout << "The area of a circle with a radius of " << radius << " is " << (PI * PI * radius) << endl; menu(); //This is where I want it to return to the original menu. Maybe callee can't call its caller. Make a loop in menu(). Run your program in debug mode. In your code functions actually don't return to menu(), they call it. Last edited by siavoshkc; 09-04-2006 at 04:47 PM. Learn C++ (C++ Books, C Books, FAQ, Forum Search) Code painter latest version on sourceforge DOWNLOAD NOW! Download FSB Data Integrity Tester. Siavosh K C What's the compiler you are using? Your code works as you expected with MinGW 3.4.5. The only change I had to make was to comment the calls to calc_triangle() and calc_rectangle() since these functions weren't defined. But this has no effect on your problem. The programmer’s wife tells him: “Run to the store and pick up a loaf of bread. If they have eggs, get a dozen.” The programmer comes home with 12 loaves of bread. Originally Posted by brewbuck: Reimplementing a large system in another language to get a 25% performance boost is nonsense. It would be cheaper to just get a computer which is 25% faster. The OP wants to call menu infinitely, I think. Try something like this: // in main() and make menu() return 0 to continue looping, 1 to quit. Seek and ye shall find. quaere et invenies. "Simplicity does not precede complexity, but follows it." -- Alan Perlis "Testing can only prove the presence of bugs, not their absence." -- Edsger Dijkstra "The only real mistake is the one from which we learn nothing." 
-- John Powell Other boards: DaniWeb, TPS Unofficial Wiki FAQ: cpwiki.sf.net My website: http://dwks.theprogrammingsite.com/ Projects: codeform, xuni, atlantis, nort, etc. menu(); //This is where I want it to return to the original menu. It actually calls menu() and as such will only leave when the option chosen is 4. So i'm a little baffled. Well what if you wanted to calculate multiple triangles, then where would you be? I think the OP's looking for a sort of loop around the existing code, which could be implemented as a loop in menu() or as a loop in main() based on the return value of menu(). Well, true. I've been assuming he has the other functions defined. As is, the code will give a link-time error. Read the original post. 
I've got a geometry calculation program, you decide which shape you want to calculate for, and then the actual calculation. Then you input the values necessary for the calculation. The program calculates, and outputs the answer. Finally it returns to the menu, in case you need to do more calculations. Everything works but returning to the menu() function, and I can't figure out why. It definitely sounds like it compiled. It sounds a lot like they just need to call menu() again. If you continue to run that code it will result in a stack overflow since the functions being called never return and menu never actually returns, thus the old stack frames are still in effect. dwks is right. I just want to have the menu reappear until the user inputs "4" to quit. I tried using return(menu()); at the end of the circle function. I also changed the actual menu function to return the corresponding function instead of just calling it. Then I tried clearing the input at the beginning of menu(). None of that worked though. Try something like this: // in main() and make menu() return 0 to continue looping, 1 to quit. I'm not sure what you mean by this, dwks. Thanks for all the ideas so far, though. void menu() int choice = 0; cout << "Which formula would you like to use?\n\n"; cout << " 1. Calculate for a circle\n"; cout << " 2. Calculate for a rectangle\n"; cout << " 3. Calculate for a triangle\n"; cout << " 4. 
Quit\n"; cin >> choice; if (choice < 1 || choice > 4) cout << "Please enter a correct menu number\n"; }while (choice < 1 || choice > 4); case 1: cout << "You chose calculate for a circle\n\n"; case 2: cout << "You chose calculate for a rectangle\n\n"; case 3: cout << "You chose calculate for a triangle\n\n"; case 4: }//!End of switch/case And erase menu() at the end of calculation functions. Last edited by siavoshkc; 09-05-2006 at 09:14 AM. And erase menu() at the end of calculation functions. This is extremely important. My fix was the following: #include <iostream> #include <string> #include <math.h> //Function prototypes. int menu(); int which_calc(); void calc_circle(); void calc_rectangle(); void calc_triangle(); using namespace std; int main() // while(menu()) {} // edited while(!menu()) {} return (0); int menu() int choice = 0; cout << "Which formula would you like to use?\n\n"; cout << " 1. Calculate for a circle\n"; cout << " 2. Calculate for a rectangle\n"; cout << " 3. Calculate for a triangle\n"; cout << " 4. Quit\n"; cin >> choice; if (choice < 1 || choice > 4) cout << "Please enter a correct menu number\n"; }while (choice < 1 || choice > 4); case 1: cout << "You chose calculate for a circle\n\n"; case 2: cout << "You chose calculate for a rectangle\n\n"; case 3: cout << "You chose calculate for a triangle\n\n"; case 4: return 1; }//!End of switch/case return 0; /* . . . same code as above . . . */ Blue lines are changed, the actual changes are in bold. Last edited by dwks; 09-06-2006 at 09:16 AM.
I think your while loop does the opposite of what you want, dwks. Returning 0 will break the loop, returning 1 will continue it. Yes, indeed. I meant to put while(!menu()) as I originally indicated.
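For readers skimming the thread, the accepted pattern — main() loops on menu()'s return value instead of menu() re-calling itself — can be sketched outside C++ as well. A minimal Python sketch (names and prompts are made up for illustration; it also uses pi*r*r for the area, where the OP's posted code has a PI*PI*radius slip the thread never touches):

```python
from math import pi

def calc_circle(read):
    """One calculation, then a plain return -- no call back into menu()."""
    r = float(read("radius: "))
    area = pi * r * r          # note: pi*r*r, not pi*pi*r as in the OP's post
    print("area:", area)
    return area

def menu(read):
    """Show the menu once; return 0 to keep looping, 1 to quit."""
    choice = read("1) circle  2) quit: ")
    if choice == "1":
        calc_circle(read)
        return 0
    return 1

def main(read=input):
    while not menu(read):      # the loop lives in main(), as dwks suggested
        pass

# Demo with scripted input instead of the keyboard:
scripted = iter(["1", "2.0", "2"])
main(lambda prompt="": next(scripted))
```

Because each calculation function simply returns, no stale stack frames accumulate, which is exactly the stack-overflow risk the thread points out.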
{"url":"http://cboard.cprogramming.com/cplusplus-programming/82602-wont-return-menu-function.html","timestamp":"2014-04-18T08:43:44Z","content_type":null,"content_length":"114679","record_id":"<urn:uuid:b5a6a822-793c-4ee0-b4c3-f80d916b848c>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00224-ip-10-147-4-33.ec2.internal.warc.gz"}
Annual Interest & Capital Equation February 19th 2010, 12:38 PM #1 Feb 2010 Originally from Indianapolis, Indiana - but currently live in Haarlem, The Netherlands Annual Interest & Capital Equation I have been studying for a mathematics entrance test and the following problem came up on the practice exam: Capital K is deposited at an annual interest rate of 4%. Interest is compounded once a year. In fifteen years time the capital has grown to an amount of 108056.62 euro. Compute the principal capital K. Could anyone suggest a formula/formulas for solving this problem. Also, could anyone suggest an area of mathematics to concentrate my efforts toward problems such as the above. Thank you, Deposit: K 1 year later: K*(1.04) 2 years later: K*(1.04)*(1.04) = K*(1.04)^2 3 years later: K*(1.04)^3 15 years later: K*(1.04)^15 = 108,056.62 For me, the concept of "a formula" is very, very weak. Figure it out! Make your own formulae. The area of concentration you should pursue is in solving problems, rather than in memorizing things. Take a deep breath and think it through. TK - I understand the formula you wrote. However, I guess I went blank when you wrote (1.04) in the context of the formula. I thought it would be (0.04) which equals 4%. Is the (1.04) because the "1" is equal to 100% of K? Thank you, You have it. .04 * K is the interest earned in the first year. You must then add that interest to the original amount, K, to manage the accumulated value one year after the deposit. K + 0.04*K = 1.00*K+ 0.04*K = K*(1.00 + 0.04) = K*1.04 So given the formula K*(1.04)^15 = 108,056.62, K should = 60,000...?
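The closing step can be checked directly — a sketch of the accumulation formula, with the division that recovers the principal:

```python
# After n years at annual rate i, compounded yearly: FV = K * (1 + i)^n,
# so the principal is K = FV / (1 + i)^n.
rate, years, future_value = 0.04, 15, 108056.62
growth = (1 + rate) ** years         # 1.04^15, about 1.8009435
K = future_value / growth
print(K)                             # about 60000 euro, confirming the guess
```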
{"url":"http://mathhelpforum.com/math-topics/129651-annual-interest-capital-equation.html","timestamp":"2014-04-18T04:10:51Z","content_type":null,"content_length":"41471","record_id":"<urn:uuid:80c555d5-237d-4cec-9ac3-7c5daa16df4d>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00107-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Events at Siena Mathematics Math Events at Siena The Math Speaker Series The Math Colloquium is a series of talks for undergraduates given each semester by faculty from Siena and other colleges and universities in the area. The talks introduce students to new ideas and tools not otherwise seen in the curriculum. Titles and Abstracts from recent talks available on the Math Speaker Series page. Pi Mu Epsilon Math Honor Society Siena is home to the Alpha Epsilon chapter of the National Math Honor Society Pi Mu Epsilon. Members are chosen based on academic performance and inducted each spring. The induction includes a banquet and a keynote address from a local mathematical luminary. On March 15, 2013, Emily Case, Corinne delaGorgendiere, Anderson Da Silva Duraes, Yomary Rodriguez, and Francesca Romano were inducted. The keynote address was given by Siena professor, Dr. Sue Hurley. From left to right: Emily Casey, Yomary Rodriguez, Dr. Sue Hurley, Francesca Romano, and Anderson Da Silva Duraes. Fourth Grade Math Conference Nine Siena students (led by Christina Andromidas and Dr. Matthews) worked with local fourth graders between Jan 24 and March 21 to prepare them for their own math conference held at their school on March 21, 2013. Yomary Rodriguez (2014) guides local fourth graders. From left to right: Yomary Rodriguez, Kristen Nersesian, Renia Yoanidis, Christina Andromidas, Jessica Terrana, and Mr. Jim Matthews. Siena Hosts the American Mathematics Competition More than 70 middle school and high school students converged on campus on February 20, 2013 to participate in the AMC 10/12. Local students complete the 2013 AMC 10/12 exam. The Monthly Problem Challenge Each month Professor Javaheri posts a new, challenging math problem on the notice board across from the Math Library. The questions cover topics from the curriculum and typically require a unique or clever observation to solve. 
Game Night At least once per term, students and faculty gather for an evening of strategy games (Set, Hex, Blockus, Chess, etc.) and brain-teaser puzzles. Powered by pizza, the group solves challenging puzzles and puts their strategic thinking to the test. Approximately 25 students attended the recent Game Night on April 16, 2013. Ricochet Robots grandmasters! The Annual Siena Integration Bee Each spring the Math Department hosts the Siena Integration Bee. Over three rounds (and many, many integrals), students and faculty put their skills to the test. The photo below shows the triumphant winners from 2012. The 2013 winners were Catherine Kober, Gili Rusak, and Dr. Darren Lim. 2012 Integration Bee Winners: (from left to right) Patrick Bunk (3rd), Cory delaGorgendiere (3rd), Chan Tran (2nd), Jingya Gao (1st)
{"url":"http://www.siena.edu/pages/6645.asp","timestamp":"2014-04-16T13:18:24Z","content_type":null,"content_length":"20624","record_id":"<urn:uuid:fc886acf-7c1d-44c0-bfcd-83eede7df2de>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Please Check My Work on Quadratic Equations & Help with Word Problem I'm back again and still not understanding quadratic equations. My assignment has 4 questions and I answered the first 3 but I am stuck on number 4, which is a multi-question (a, b, c, & d) word problem. I would like someone to check my work on numbers 1 - 3, 4a, 4b, and help me to understand 4c and 4d of the word problem. http://i3.photobucket.com/albums/y63.../Algebra/1.jpg http://i3.photobucket.com/albums/y63.../Algebra/2.jpg http://i3.photobucket.com/albums/y63.../Algebra/3.jpg http://i3.photobucket.com/albums/y63.../Algebra/4.jpg http://i3.photobucket.com/albums/y63.../Algebra/5.jpg http://i3.photobucket.com/albums/y63.../Algebra/6.jpg http://i3.photobucket.com/albums/y63.../Algebra/7.jpg http://i3.photobucket.com/albums/y63.../Algebra/8.jpg http://i3.photobucket.com/albums/y63.../Algebra/9.jpg http://i3.photobucket.com/albums/y63...Algebra/10.jpg http://i3.photobucket.com/albums/y63...Algebra/11.jpg Thank you in advance for your help 1, 2 and 3 look correct to me. However in 4a you have a mistake: it's supposed to be x^2+y^2 = 58^2 4b it is y = x+4 and 4c it is x^2+(x+4)^2=58^2 You haven't done part d? Thank you for the clarification on 1, 2, 3 and 4a, 4b, and 4c. I didn't do part D because I wasn't sure how to do it; in my original posting, I said that I needed someone to show me how to do it. I don't want just the answer because I need to know how to do it. I was never very good at word problems :-( It's not so bad. First expand 4c and then solve either like 1 or like 3. Once you find the values for x, substitute them in the equation y = x+4 and see what numbers you get. You will have to choose one solution. In this case, both x and y need to be positive! Thank you for your time and the information. I think I have a better understanding now, though I still think Algebra in general is just confusing :-) Thank you again and you have a great weekend.
Hey, whenever something is confusing, ask lots and lots of questions no matter how silly they are. I found that sometimes the silly questions lead to the biggest understanding! Hello Vlasev, Thank you very much. I do feel silly sometimes considering I am 40 years old and don't know all of this new stuff but with people like you, I am beginning to understand :-) I thanked the OP for his/her question because s/he made a big effort to show all the working and said specifically where the trouble was. Well done. A refreshing change.
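Part (d) as described — expand 4c and solve with the quadratic formula, keeping the positive root — can be sketched like this (the right-triangle reading with hypotenuse 58 is inferred from parts (a)–(c)):

```python
import math

# x^2 + (x + 4)^2 = 58^2  expands to  2x^2 + 8x - 3348 = 0,
# i.e. x^2 + 4x - 1674 = 0.  Apply the quadratic formula and keep x > 0.
a, b, c = 1, 4, -1674
disc = b * b - 4 * a * c
x = (-b + math.sqrt(disc)) / (2 * a)   # positive root
y = x + 4
print(x, y)                            # both positive, as the thread requires
```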
{"url":"http://mathhelpforum.com/algebra/153632-please-check-my-work-quadratic-equations-help-word-problem-print.html","timestamp":"2014-04-25T00:10:16Z","content_type":null,"content_length":"8839","record_id":"<urn:uuid:c2b48724-6dbe-47b3-b27f-bfe234664281>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00092-ip-10-147-4-33.ec2.internal.warc.gz"}
Analysis of Boolean Functions Theorem 14 says that if $f$ is an unbiased linear threshold function $f(x) = \mathrm{sgn}(a_1 x_1 + \cdots + a_n x_n)$ in which all $a_i$’s are “small” then the noise stability $\mathbf{Stab}_\rho[f] $ is at least (roughly) $\frac{2}{\pi} \arcsin \rho$. Rephrasing in terms of noise sensitivity, this means $\mathbf{NS}_\delta[f]$ is at most (roughly) $\tfrac{2}{\pi} \sqrt{\delta} [...]
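A Monte-Carlo sketch (not from the text) makes the statement concrete: estimate Stab_rho[Maj_n] by sampling rho-correlated inputs and compare it with (2/pi)·arcsin(rho), the large-n value for majority.

```python
import math, random

def maj(x):
    # Majority of a +/-1 vector (n is odd, so no ties).
    return 1 if sum(x) > 0 else -1

def stability(n=51, rho=0.5, trials=20000, seed=0):
    """Estimate Stab_rho[Maj_n] = E[f(x) f(y)] for rho-correlated x, y."""
    rng = random.Random(seed)
    flip = (1 - rho) / 2            # per-coordinate flip probability
    total = 0
    for _ in range(trials):
        x = [rng.choice((-1, 1)) for _ in range(n)]
        y = [-v if rng.random() < flip else v for v in x]
        total += maj(x) * maj(y)
    return total / trials

est = stability()
print(est, 2 / math.pi * math.asin(0.5))   # both near 1/3
```

For finite n the estimate sits slightly above the limiting value, consistent with the hedged "at least (roughly)" phrasing above.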
{"url":"http://www.contrib.andrew.cmu.edu/~ryanod/?tag=majority-is-least-stable-conjecture","timestamp":"2014-04-16T13:08:49Z","content_type":null,"content_length":"59188","record_id":"<urn:uuid:e32641c6-5afe-4f2c-aa51-116b919ee163>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions

Views expressed in these public forums are not endorsed by Drexel University or The Math Forum.

Topic: Some help with problems Replies: 4 Last Post: Jun 22, 2013 3:52 AM

Re: Some help with problems
Posted: Jun 14, 2013 11:23 PM

The fifth problem is wrong, because (3/2) < ((2^571 + 3)/(3^247 + 2)) rather than ">". It is the same as 2^572 > 3^248, and, taking fourth roots, 2^143 > 3^62.

There are two ways for you.

The first: 143 ln2 > 62 ln3, so we want to prove (ln2)/(ln3) > 62/143 ≈ 0.434. As is known to all, ln2 > 0.69 and ln3 < 1.1, so (ln2)/(ln3) > 0.69/1.1 ≈ 0.627, so it is correct.

The second way: 3^62 < 4^62 = 2^124 < 2^143, so it is correct.

The seventh problem: I am so sorry that I cannot understand your meaning. A rhombus is also a kind of parallelogram.

Date Subject Author
5/29/13 justlooking for someone else
5/29/13 Re: Some help with problems magidin@math.berkeley.edu
5/29/13 Re: Some help with problems Stan Brown
6/14/13 Re: Some help with problems Nikola_lion
6/22/13 Re: Some help with problems olive stemforn
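Since the reductions in the post are exact integer inequalities, they can be verified directly with arbitrary-precision arithmetic; a quick check (my own sketch):

```python
# Exact big-integer check of the inequality discussed above:
# (2^571 + 3)/(3^247 + 2) > 3/2  is equivalent to  2*(2^571 + 3) > 3*(3^247 + 2)
lhs_num = 2**571 + 3
lhs_den = 3**247 + 2
print(2 * lhs_num > 3 * lhs_den)      # True

# The post's chain of reductions:
print(2**572 > 3**248)                # True (clear denominators, drop the +6 on both sides)
print(2**143 > 3**62)                 # True (take fourth roots: 572 = 4*143, 248 = 4*62)
print(3**62 < 4**62 < 2**143)         # True (the elementary "second way")
```

Python integers are unbounded, so these comparisons are exact, not floating-point approximations.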
{"url":"http://mathforum.org/kb/message.jspa?messageID=9136601","timestamp":"2014-04-17T19:06:43Z","content_type":null,"content_length":"21257","record_id":"<urn:uuid:876acc23-26d4-4d77-b757-9aee1c5ea8f2>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00578-ip-10-147-4-33.ec2.internal.warc.gz"}
Rosemead Algebra 2 Tutor ...I love tutoring because it gives me a chance to focus on one person at a time and most people just need that extra attention to excel. I have had great results with all of my clients and I have a good sense of humor, so the time we spend together will not be boring. I have often been told that ... 11 Subjects: including algebra 2, physics, geometry, algebra 1 ...Even very difficult problems can be broken down into basic components, and applying the relevant formula becomes, well, formulaic. Being able to recognize the right patterns in math and physics can turn a mind-bending problem into a step-by-step procedure.I have earned a Bachelor's degree in Eng... 11 Subjects: including algebra 2, calculus, SAT math, physics ...And about myself: Even though I am a Business-Economics student at the University of California-Irvine, my strongest subject is mathematics and I recently got my AA in mathematics. Also I awarded from mathematics department in Chaffey College and no wonder my mathematics cumulative GPA including... 11 Subjects: including algebra 2, calculus, statistics, geometry ...As both a tutor and certified teacher, I have helped many students successfully prepare for the PSAT. It's a standardized test that provides firsthand practice for the SAT®. It also gives you a chance to enter NMSC scholarship programs and gain access to college and career planning tools. The P... 24 Subjects: including algebra 2, chemistry, writing, geometry ...I am a certified Montessori Teacher PreK-8 from UCSD. I taught high school for over 25 years so I know how to prepare students. I know what's coming so I know what to emphasize. 72 Subjects: including algebra 2, reading, English, geometry
{"url":"http://www.purplemath.com/rosemead_ca_algebra_2_tutors.php","timestamp":"2014-04-19T23:14:56Z","content_type":null,"content_length":"23901","record_id":"<urn:uuid:e107a7e0-d944-4733-b19f-ef61f76ddb7f>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00135-ip-10-147-4-33.ec2.internal.warc.gz"}
Mplus Discussion >> Obtaining Additional Fit Indices David Kosson posted on Thursday, August 19, 2010 - 10:29 am I am running a CFA with ordered categorical data using Mplus 5.1 and trying to test three-factor and four-factor solutions. I am finding generally acceptable (but not great) fit for the CFI, TLI, and RMSEA but poor fit for the WRMR -- one of my colleagues tells me I can also obtain the SRMR -- but I cannot figure out how to request the SRMR. It does not show up in my output -- here is a sample: CFI 0.900 TLI 0.939 Number of Free Parameters 42 RMSEA (Root Mean Square Error Of Approximation)Estimate 0.084 WRMR (Weighted Root Mean Square Residual) Value 1.571 Is there any way to request that the SRMR also be calculated? Dave Kosson Linda K. Muthen posted on Thursday, August 19, 2010 - 1:03 pm When SRMR is available it will be given automatically. David Kosson posted on Friday, August 20, 2010 - 2:46 pm What I do not understand is why SRMR was available in the older version of Mplus but is not available in the current version -- that is, i am using Mplus 5.1 and I do not get the SRMR -- One of my colleagues has Mplus 4.21, and he gets it: here is his output for the 3-factor model: CFI 0.922 TLI 0.955 Number of Free Parameters 16 RMSEA (Root Mean Square Error Of Approximation) Estimate .078 SRMR (Standardized Root Mean Square Residual) Value 0.060 WRMR (Weighted Root Mean Square Residual) Value 1.553 Here is my output with version 5.1: CFI 0.922 TLI 0.955 Number of Free Parameters 42 RMSEA (Root Mean Square Error Of Approximation) Estimate 0.078 WRMR (Weighted Root Mean Square Residual) Value 1.345 Has the program changed in some way so that the SRMR is no longer appropriate but used to be considered appropriate? I also noticed that the number of free parameters seems to be different -- it appears that now Mplus counts the number of items as part of the number of parameters? Dave Kosson Linda K. 
Muthen posted on Friday, August 20, 2010 - 3:05 pm Between the two versions you mention, TYPE=MEANSTRUCTURE became the default. SRMR is not available in this situation. Add MODEL=NOMEANSTRUCTURE to the ANALYSIS command to run the model without means. David Kosson posted on Monday, August 23, 2010 - 10:02 am Thanks. That may be what I need to do. However, it is not currently working -- here is the syntax i tried to use: NAMES ARE r1pcl1 r1pcl2 r1pcl3 r1pcl4 r1pcl5 r1pcl6 r1pcl7 r1pcl8 r1pcl9 r1pcl10 r1pcl12 r1pcl13 r1pcl14 r1pcl15 r1pcl16 r1pcl18 r1pcl19 r1pcl20 misgrp; MISSING = *; USEOBSERVATIONS ARE (misgrp eq 0); MODEL: f1 BY r1pcl1 r1pcl2 r1pcl4 r1pcl5; f2 BY r1pcl6 r1pcl7 r1pcl8 r1pcl16; f3 BY r1pcl3 r1pcl9 r1pcl13 r1pcl14 r1pcl15; ANALYSIS: TYPE = GENERAL; I was using this so that i could use the same dataset for my primary analyses that do not include cases with missing data and for supplementary analyses that include estimation of missing data. However, it did not work -- here is the message I got: *** WARNING in ANALYSIS command MODEL=NOMEANSTRUCTURE is not allowed in conjuction with TYPE=MISSING. Request for MODEL=NOMEANSTRUCTURE will be ignored. 1 WARNING(S) FOUND IN THE INPUT INSTRUCTIONS Is there a way to do this without MEANSTRUCTURE? Linda K. Muthen posted on Monday, August 23, 2010 - 10:13 am I think you need to also add LISTWISE=ON; to the DATA command. David Kosson posted on Friday, August 27, 2010 - 2:29 pm That was a big help. I was able to complete all my analyses that did not include missing values using this approach. Now, I am wondering if there is a way to run an analysis without using mean structures that does include missing values -- that is, my colleague has been able to run such an analysis in Mplus 4.21, but the following warning makes me think it may not be possible in Mplus 5.1 -- *** WARNING in ANALYSIS command MODEL=NOMEANSTRUCTURE is not allowed in conjuction with TYPE=MISSING. Request for MODEL=NOMEANSTRUCTURE will be ignored. 
Linda K. Muthen posted on Friday, August 27, 2010 - 3:27 pm I don't think your colleague was able to do this in Version 4.21. TYPE=MISSING has always included meanstructure. Having unstructured means is the same as having no means.

David Kosson posted on Monday, August 30, 2010 - 3:12 pm I think you are correct. My colleague now realizes that he was mistaken about this issue. I am guessing that this means that there is no way in Mplus to estimate missing values without using mean structures. I have one additional question. The final step of my project is to conduct multi-group CFAs to compare the fit of the models across groups -- e.g., for one of these, we are comparing equivalence of the models in North American versus European samples. Once again, I would prefer to do these analyses without using mean structures, if possible. My reading suggests that it is possible to conduct multi-group CFAs that do not use mean structures, but I am having trouble getting Mplus to allow this -- even though I am limiting my multi-group analyses to cases with full data. Is there a way to do this in Mplus?

Linda K. Muthen posted on Tuesday, August 31, 2010 - 9:06 am You cannot exclude means from the analysis with TYPE=MISSING. However, if the means are unstructured, the fit will be the same as for a model without means. With multiple group analysis, to obtain an unstructured mean model, relax the default equality of the intercepts across groups and fix the factor means to zero in all groups. This is the same as not having means in the model.

David Kosson posted on Tuesday, August 31, 2010 - 9:25 am With respect to the multiple group analysis, my impression is that with ordered categorical data, thresholds are modeled rather than intercepts -- so instead of relaxing the equality of the intercepts across groups, should I allow the thresholds to vary freely across groups?
(For my dataset, each indicator has three possible scores (0, 1, and 2), so I think I do this by mentioning the thresholds in a 2nd MODEL paragraph that is specific to the 2nd group -- e.g., if Europe is my 2nd group, I have a 2nd paragraph that says: MODEL Europe (after my general model paragraph) and then specify: for each indicator for which i want to let the thresholds vary -- Does that sound right? Linda K. Muthen posted on Tuesday, August 31, 2010 - 9:55 am Correct. If you have ordered categorical variables, you need to free all of the thresholds as you show. You should check the results or TECH1 to be sure you have successfully achieved your goal. David Kosson posted on Tuesday, August 31, 2010 - 10:40 am And if I fix the factor means to 0 in each group (as I did), does that mean that the multigroup CFA will not examine whether there are differences between the two groups in the average levels of the latent factors? If so, then I think that means that the only difference between the model in which I allow the groups to differ on parameters (i.e., loadings and thresholds but not factor means) and the model in which i constrain the parameters to be the same is that, in the more constrained model, I require that the loadings be the same in the two groups -- but i still allow the thresholds to differ. Does that sound right? Linda K. Muthen posted on Tuesday, August 31, 2010 - 4:23 pm David Kosson posted on Wednesday, September 01, 2010 - 8:21 am Thanks again. I have now run all the multiple group CFAs and the results suggest a lack of invariance. I am sorry to say that I still have two questions for you. 1) Is there any evidence that the software is overly sensitive to failures of invariance so that people should not worry too much about a small degree of variance -- e.g., in one case, the result of the chi-square difference test is fairly close to .05 -- Chi-Square Test for Difference Testing Value 19. 
Degrees of Freedom 9** P-Value 0.0249

2) Given that I expect the mean levels of the latent factors to differ between the two groups, I am now wondering if it is possible to run the analysis without setting the means of the latent factors to 0 in both groups but allowing the program to estimate the means? And if I do that, would any differences between the groups in the levels of the factor means lead to a lack of invariance?

Linda K. Muthen posted on Wednesday, September 01, 2010 - 8:33 am 1. I don't know of any formal study of this. You might do a literature search to see if you can find anything or ask on SEMNET. 2. To estimate factor means, thresholds must be held equal across classes. This is the Mplus default for multiple group modeling. Comparing means across groups requires threshold invariance.

Leslie Rutkowski posted on Tuesday, October 26, 2010 - 1:55 pm Hello Linda and Bengt, Can you point me to the technical details behind why there is an incompatibility between missing data methods and no mean structure (e.g. NOMEANSTRUCTURE is not possible with TYPE=MISSING).

Bengt O. Muthen posted on Tuesday, October 26, 2010 - 5:42 pm See Muthén, B., Kaplan, D. & Hollis, M. (1987). On structural equation modeling with data that are not missing completely at random. Psychometrika, 52, 431-462, which you can find on my UCLA web site. This article shows that ML estimation under MAR involves a mean structure which cannot be ignored if you want ML estimates. Back to top
{"url":"http://www.statmodel.com/cgi-bin/discus/discus.cgi?pg=next&topic=9&page=5785","timestamp":"2014-04-19T06:58:36Z","content_type":null,"content_length":"43595","record_id":"<urn:uuid:b7f16d91-26a9-417f-a0d6-b68fe2080a51>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00533-ip-10-147-4-33.ec2.internal.warc.gz"}
an interesting function

We are given the function f: R -> R defined as follows:

f(x) = 1/n, if x = m/n is a rational number written in lowest terms (with n > 0)
f(x) = 0, if x is irrational

(a) First I need to find a sequence f_n of continuous functions such that f_n(x) -> f(x) as n -> infinity.

(b) Second I need to prove that f is continuous at each irrational point and discontinuous at each rational point.
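For intuition on part (b), assuming the standard "Thomae function" reading of the definition (f(m/n) = 1/n with m/n in lowest terms), a small numerical experiment shows why f cannot be continuous at a rational point such as 1/2: rationals approaching 1/2 have growing denominators, so their f-values tend to 0, not to f(1/2) = 1/2. This is my own illustration, not part of the original problem.

```python
from fractions import Fraction

def thomae(x):
    """f(m/n) = 1/n for a rational given in lowest terms.
    Fraction normalizes to lowest terms automatically."""
    x = Fraction(x)
    return Fraction(1, x.denominator)

print(thomae(Fraction(1, 2)))            # 1/2

# Rationals approaching 1/2 with growing denominators: f -> 0 != f(1/2)
for k in (1, 2, 3):
    r = Fraction(1, 2) - Fraction(1, 10**k)
    print(r, thomae(r))                  # f(2/5)=1/5, f(49/100)=1/100, f(499/1000)=1/1000
```

The same picture underlies part (a): any reasonable approximating sequence of continuous functions must flatten out near irrationals while spiking only at low-denominator rationals.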
{"url":"http://mathhelpforum.com/differential-geometry/138314-interesting-function.html","timestamp":"2014-04-20T17:03:56Z","content_type":null,"content_length":"36283","record_id":"<urn:uuid:0357da9e-7d1f-4d8d-9767-b2033f1da6ce>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00311-ip-10-147-4-33.ec2.internal.warc.gz"}
Suppose you have a deck of n cards, labelled with numbers from 1 to n. You start counting the cards 1,2,3... If the count does not correspond to the number on the card on top of the deck, you move the card at the end of the deck. Otherwise, you put the top card aside and start counting again from 1. If your count reaches n+1, then you have lost, your deck is a dead-end and the game stops (this would happen by continuing the above example game). Otherwise, if you were able to remove all the cards with this method, then it is a success, and you restart the whole procedure on a new deck of cards, in the order in which you removed the cards. For example, for n=4, if you start from the deck 1 4 3 2, you start by removing the 1, then you count 1,2,3,4, you remove the 4, then 1,2, and finally 1,2,3 on the last card. This gives you the new deck 1 4 2 3. Starting again, you remove in order 1 2 3 4. Starting again, you remove the 1, but cannot remove any other card, so the game stops here. Two cases can occur: either you reach a dead-end at some point, or you fall into a cycle, meaning that you get a previously encountered deck. In this variant, you count modulo n. This means that you do not stop at n+1, but continue counting again with 1. Some decks are still dead-ends, like for example 4 3 2 1 (you would be counting infinitely without ever removing any card). Again, you continue playing until you reach either a dead-end or a cycle. For example, starting from 1 3 4 2, you would remove the 1, then count 1,2,3,4,1,2, remove the 2, and finally remove the 3 and 4. You would get the new deck 1 2 3 4. On this one, you would remove the 1, then count 1,2,3,4,1,2,3,4,1,2, and remove the 2, and finally remove the 3 and 4. You would get again 1 2 3 4, and thus stop because you ran into a cycle (this cycle is trivial, but there may be non-trivial cycles). WHAT YOU HAVE TO DO You have to provide 30 initial decks that produce long sequences and that preferably run into a cycle. 
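The non-modular rules above can be turned into a short simulator; this sketch is my own (not part of the contest materials), and it reproduces the worked n = 4 example from the text:

```python
def play(deck):
    """Play one non-modular game. Return the removal order, or None on a dead-end."""
    deck = list(deck)
    n = len(deck)
    removed = []
    while deck:
        count = 0
        while True:
            count += 1
            if count == n + 1:            # the count reached n+1: dead-end
                return None
            if deck[0] == count:          # match: set the top card aside...
                removed.append(deck.pop(0))
                break                     # ...and start counting again from 1
            deck.append(deck.pop(0))      # no match: move the top card to the end
    return removed

def sequence_length(deck):
    """Length of the sequence started by `deck`, and whether it ends in a cycle."""
    seen = []
    while deck not in seen:
        seen.append(deck)
        deck = play(deck)
        if deck is None:
            return len(seen), False       # dead-end
    return len(seen), True                # previously encountered deck: cycle

print(play([1, 4, 3, 2]))                 # [1, 4, 2, 3], as in the example
print(sequence_length([1, 4, 3, 2]))      # (3, False): 1 4 3 2 -> 1 4 2 3 -> 1 2 3 4 -> dead-end
```

The modular variant differs only in that the count wraps around after n instead of triggering a dead-end at n+1 (with a dead-end instead declared when counting goes on forever without a removal).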
More precisely, you have to provide decks for both the non-modular and modular versions for values of n from 10 to 25, except for 23 (guess why, designers of programming contests have strange requirements sometimes...). The length of a sequence is the number of distinct decks (including the starting one) you encounter before reaching either a dead-end or a previously encountered deck. This means that for cycles, this might be the length of the initial steps, plus the length of the cycle itself. For example, the length of the sequence starting from 1 4 3 2 in the non-modular version is 3, and the length of the sequence starting from 1 3 4 2 in the modular version is 2. More generally, the sequence A B C D E E E E... has length 5 (4+1), and the sequence A B C D E C D E C D E... has length 5 (2+3) (where different letters denote different decks). For each category, only your "best" entry is considered for the scoring. "Best" means your longest entry with cycle if you have entries with cycles, otherwise your longest entry if all your entries are without cycles. For each category, the reference length is the longest length among the best entries of all participants (the cyclic aspect is not taken into account here). Then, for each category, you get an individual subscore which is the length of your best entry divided by the reference length. Moreover, since cycles are good, you get 1 extra bonus point if your best entry ends on a cycle. As a consequence, a subscore is a number between 0 and 2. Note also that a cyclic sequence is always considered better than a non-cyclic one, even if it is much shorter. So if your best entry is a cycle of length 3, while another participant has a non-cyclic sequence of length 10 as his best entry, you get 1+3/10 = 1.30 points for that category, while he gets only 0+10/10 = 1 point. Your global score is the sum of the 30 subscores. It is a number between 0 and 60. First of all, you have to register in order to get your account created. 
Then, you can go to the submission page (username and password required), and fill the form with your entry. For convenience of use, your entry will be processed both in the non-modular and in the modular categories (but there is little chance that a same entry gives a good scoring in both categories). After submission, the lengths of the sequences starting from your permutation are computed and any cycles are detected. In the modular category, for values of n of 17 and 19, the entries will not be processed immediately by the web server, but will be processed manually from time to time (I hope every couple of days). This is necessary because those entries might take too much time and I don't want to overload the server. In your My entries page, you will be able to see if your entries have been processed. So, please don't write me e-mails to tell me that your score wasn't updated after you submitted a modular entry for n=17 or 19, and just be patient. On the general page, you will also see the date of the last manual processing. Very important: In order to avoid overloading the server, you must use the submission page to compute the length of your sequences. You should only submit what you assume to be your best solutions. I will disqualify any person who submits an "unreasonable" number of entries.

Here are a few other rules:
• You cannot have multiple accounts or enter the contest more than once.
• Teams are allowed. However, once you are in a team, you cannot re-enter the contest as an individual, and conversely once you are registered as an individual, you are not allowed to join a new team. Moreover, the composition of teams cannot be changed after their creation. Please indicate the names of all participants when registering.
• People from Aarhus University are not allowed to join the contest.
• How ties are resolved: when two entrants get exactly the same score, the tie is resolved by looking at the time of their last entry (the older one gets first).
If there is still a tie, the first will be the one with the lower number of entries.
• I am in no way responsible for collateral damages induced by this contest. This includes, but is not limited to: soulmate quitting the house because you spent your nights on the contest, boss getting angry because you are using all the CPUs of all your co-workers, etc...

If you want to discuss this contest, please join the Mousetrap Programming Contest mailing list. Any kind of discussion is allowed, provided it has a direct link with this contest. You are free to post your results, entries or code (but of course you might not want to reveal your secrets until the end of the contest). If you have questions about this contest or a technical problem, please post a message to this list instead of sending an e-mail directly to me, as some other participants might help you faster than I can. Official announcements about this contest will also be made on this list (and also on Al Zimmermann's discussion group).

The game of mousetrap was invented by Cayley (there is a reference to it in MathWorld). For those of you who have access to it, this problem is inspired by the section E37 of Richard Guy's book "Unsolved problems in number theory".

Back to the main page
{"url":"http://www.recmath.com/contest/MouseTrap/mousetrap.html","timestamp":"2014-04-21T13:42:44Z","content_type":null,"content_length":"8907","record_id":"<urn:uuid:e2750fcf-75c7-4518-b434-5485cf29defe>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00275-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - How does GR handle metric transition for a spherical mass shell?

Was trying to convey the analogy re local unobservability of 1st order metric effects. Basically that 'elastic being' deforms with its surroundings, and must use a kind of 'K' factor to 'navigate' but with a limited perspective.

I think you're twisting the analogy around here. A "local" being does *not* deform with the surroundings, in the sense you are using the term "deformation". That's the point. The K factor is *not* observable locally; it's only observable by taking measurements over an extended region. Locally, space looks Euclidean; there is no "deformation". Just as locally on Earth, its surface looks flat; we only see the non-Euclideanness of the surface by making measurements over an extended region. Furthermore, the non-Euclideanness never shows up as any kind of "strain" on individual objects. It's just a fact about the space, that it doesn't satisfy the theorems of Euclidean geometry. That's all. I really think it's a mistake to look for a "real" physical meaning to the non-Euclideanness of space, over and above the basic facts that I described using the K factor--i.e., that there is "more distance" between two spheres of area A and A + dA, or between two circles of circumference C and C + dC, than Euclidean geometry would lead us to expect. If I start from my house at the North Pole and walk in a particular direction, I encounter circles of gradually increasing circumference. Between two such circles, of circumference C and C + dC, I walk a distance K * (dC / 2 pi), where K is the "non-Euclideanness" factor and is a function of (C / 2 pi). If space were Euclidean, I would find K = 1; but I find K > 1. So what? If I insist on ascribing the fact that K > 1 to some actual physical "strain" in the space, or anything of that sort, what is my reason for insisting on this?
The only possible reason would be that I ascribe some special status to K = 1, so that when I see K > 1, I think something must have "changed" from the "natural" state of things. But why should Euclidean geometry, K = 1, be considered the "natural" state of things? What makes it special? The answer is, as far as physics is concerned, nothing does. Euclidean geometry is not special, physically. It's only special in our minds; *we* ascribe a special status to K = 1 because that's the geometry our minds evolved to comprehend. But that's a fact about our minds, not about physics.
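The Earth-surface analogy in the post can be made quantitative. On a sphere of radius R (my own illustrative computation, not from the original discussion), a circle at geodesic distance s from the pole has circumference C = 2*pi*R*sin(s/R), so the "K factor" relating a step in geodesic distance to a step in C/(2*pi) is K = ds / d(C/(2*pi)) = 1/cos(s/R). It exceeds 1 everywhere away from the pole, yet nothing on the surface is "strained":

```python
import math

# K factor on a unit sphere: C/(2*pi) = R*sin(s/R), so d(C/(2*pi))/ds = cos(s/R)
# and K = ds / d(C/(2*pi)) = 1/cos(s/R) > 1 for 0 < s < pi*R/2.
R = 1.0
ks = {s: 1 / math.cos(s / R) for s in (0.1, 0.5, 1.0)}
for s, K in ks.items():
    print(s, K)    # K > 1, approaching 1 as s -> 0 (locally the surface looks flat)
```

The s -> 0 limit is exactly the "locally Euclidean" point made above: near the pole the deficit is second order in s and unobservable in any sufficiently small region.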
{"url":"http://www.physicsforums.com/showpost.php?p=3575106&postcount=99","timestamp":"2014-04-19T04:40:04Z","content_type":null,"content_length":"10155","record_id":"<urn:uuid:015a2155-9402-4c05-8828-f1908698ddab>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00415-ip-10-147-4-33.ec2.internal.warc.gz"}
You Have Taken Delivery Of A Sinusoidal Power Supply ... | Chegg.com

Please help. I am struggling with this.

You have taken delivery of a sinusoidal power supply whose nominal supply voltage is specified as Vs = 120 volts RMS at 60 Hz. However, the supply has the additional feature that its frequency is variable from 0 Hz up to 5 kHz. You are involved in the design of part of a communications system and you need to determine the Thevenin equivalent voltage and impedance of this power supply. You have a frequency response of the amplitude of the output voltage as shown in Figure X below. In connecting a resistive load RL to the power supply you find that 100 Amps is drawn at 0 Hz if the load resistor is RL = 1.44 Ohms. Determine component values of the simplest circuit that would provide a circuit model for the power supply. The amplitude of the output voltage versus frequency is shown below with the load resistor disconnected; the phase versus frequency characteristic is also shown with RL disconnected.

Electrical Engineering
{"url":"http://www.chegg.com/homework-help/questions-and-answers/taken-delivery-sinusoidal-power-supply-whose-nominal-supply-voltage-specified-vs-120-volts-q1184688","timestamp":"2014-04-19T00:44:37Z","content_type":null,"content_length":"21926","record_id":"<urn:uuid:aa40909b-3905-45cf-b004-2971186aebd2>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00156-ip-10-147-4-33.ec2.internal.warc.gz"}
word problem for math

July 25th 2005, 08:28 PM #1

The math club offers a prize to any person who writes a problem that can be solved using a tree diagram and the answer is 48 choices. what problem would you write? what is the prize???? that is the math question in the book.
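For reference, one problem that fits the requirement (my own example, not the book's intended answer): "A sandwich shop offers 4 breads, 4 fillings, and 3 cheeses. How many different sandwiches can you build?" A tree diagram branches 4, then 4, then 3 ways, giving 4 * 4 * 3 = 48 leaves:

```python
from itertools import product

# Enumerate every leaf of the tree: one branch per (bread, filling, cheese) choice.
breads, fillings, cheeses = range(4), range(4), range(3)
choices = list(product(breads, fillings, cheeses))
print(len(choices))   # 48
```

Any three-stage choice with branch counts multiplying to 48 (e.g. 6 * 4 * 2 or 8 * 3 * 2) would work just as well; the prize, alas, is not specified in the book.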
{"url":"http://mathhelpforum.com/algebra/639-word-problem-math.html","timestamp":"2014-04-18T07:21:47Z","content_type":null,"content_length":"32877","record_id":"<urn:uuid:7f197693-e163-472b-ad19-03e4508477b3>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00141-ip-10-147-4-33.ec2.internal.warc.gz"}
Here's the question you clicked on:

An escalator lifts people to the second floor of a building, 25 ft above the first floor. The escalator rises at a 30 degree angle. To the nearest foot, how far does a person travel from the bottom to the top of the escalator?
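The question goes unanswered on the page; the standard approach (my own sketch) treats the escalator as the hypotenuse of a right triangle whose opposite side is the 25 ft rise, so the travel distance is 25 / sin(30 deg):

```python
import math

rise = 25.0                           # ft, height of the second floor
angle = math.radians(30)              # escalator's angle of elevation
distance = rise / math.sin(angle)     # hypotenuse: sin(angle) = rise / distance
print(round(distance))                # 50
```

Since sin 30 deg = 1/2 exactly, the person travels 50 ft to the nearest foot.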
{"url":"http://openstudy.com/updates/5201948fe4b05ca7842082f3","timestamp":"2014-04-16T08:02:20Z","content_type":null,"content_length":"189652","record_id":"<urn:uuid:19c8fc4e-51f5-4feb-8857-37058c2fe935>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00564-ip-10-147-4-33.ec2.internal.warc.gz"}
Next: Symmetric Eigenproblems Up: Generalized Orthogonal Factorizations and Previous: Generalized QR Factorization | Contents | Index

The generalized RQ (GRQ) factorization of an m-by-n matrix A and a p-by-n matrix B is given by the pair of factorizations

A = R Q,   B = Z T Q,

where Q and Z are respectively n-by-n and p-by-p orthogonal matrices (or unitary matrices if A and B are complex). R has the form

R = ( 0  R[12] ) if m <= n, or R = ( R[11] ; R[21] ) if m > n,

where R[12] or R[21] is upper triangular. T has the form

T = ( T[11] ; 0 ) if p >= n, or T = ( T[11]  T[12] ) if p < n,

where T[11] is upper triangular. Note that if B is square and nonsingular, the GRQ factorization of A and B implicitly gives the RQ factorization of the matrix AB^-1:

A B^-1 = ( R T^-1 ) Z^T

without explicitly computing the matrix inverse B^-1 or the product AB^-1. The routine xGGRQF computes the GRQ factorization by first computing the RQ factorization of A and then the QR factorization of BQ^T. The orthogonal (or unitary) matrices Q and Z can either be formed explicitly or just used to multiply another given matrix in the same way as the orthogonal (or unitary) matrix in the RQ factorization (see section 2.4.2).

The GRQ factorization can be used to solve the linear equality-constrained least squares problem (LSE) (see (2.2) and [55, page 567]). We use the GRQ factorization of B and A (note that B and A have swapped roles), written as

B = T Q,   A = Z R Q.

We write the linear equality constraints Bx = d as:

T Q x = d,

which we partition as:

( 0  T[12] ) ( x[1] ; x[2] ) = d.

Therefore x[2] is the solution of the upper triangular system

T[12] x[2] = d.

We partition the residual R Q x - Z^T c as:

( R[11]  R[12] ; 0  R[22] ) ( x[1] ; x[2] ) - ( c[1] ; c[2] ).

To solve the LSE problem, we set

R[11] x[1] + R[12] x[2] - c[1] = 0,

which gives x[1] as the solution of the upper triangular system

R[11] x[1] = c[1] - R[12] x[2].

Finally, the desired solution is given by

x = Q^T ( x[1] ; x[2] ),

which can be computed by xORMRQ (or xUNMRQ).

Susan Blackford
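The routine's recipe (RQ factorization of A, then QR factorization of B Q^T) can be sketched with SciPy building blocks. This is my own illustration assuming SciPy is available, not LAPACK's xGGRQF interface itself:

```python
import numpy as np
from scipy.linalg import qr, rq

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 5))   # m-by-n
B = rng.standard_normal((5, 5))   # p-by-n (square here, so B is generically invertible)

R, Q = rq(A)                      # step 1: A = R Q, with Q orthogonal n-by-n
Z, T = qr(B @ Q.T)                # step 2: B Q^T = Z T, hence B = Z T Q

print(np.allclose(A, R @ Q))      # True: A is recovered
print(np.allclose(B, Z @ T @ Q))  # True: B is recovered
print(np.allclose(T, np.triu(T))) # True: T is upper triangular

# With B nonsingular, A B^{-1} = R T^{-1} Z^T, without ever forming B^{-1} from scratch
print(np.allclose(A @ np.linalg.inv(B), R @ np.linalg.solve(T, Z.T)))   # True
```

The last line is the "implicit RQ factorization of A B^-1" noted in the text: the triangular solve against T replaces the explicit inversion of B.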
{"url":"http://www.netlib.org/lapack/lug/node47.html","timestamp":"2014-04-20T08:36:46Z","content_type":null,"content_length":"12399","record_id":"<urn:uuid:e750976f-7de1-4115-ba7e-8c1b5e18311a>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00539-ip-10-147-4-33.ec2.internal.warc.gz"}
Rocket in the Air

April 5th 2010, 05:51 PM #1 Apr 2010

Rocket in the Air

A rocket is fired straight up from the ground with an initial velocity of 900 feet per second. (a) How long does it take the rocket to reach 4200 feet? (b) When will the rocket hit the ground? No Gravity Math Question Not Physics

You need to use the kinematic equations for constant acceleration...

$(v_2)^2 = (v_1)^2 + 2as$

$s =$ max height the rocket ascends
$a =$ acceleration due to gravity (about 32.2 ft/s^2; it is negative as it is acting downward)
$v_1 =$ initial velocity (900 ft/s)
$v_2 =$ final velocity (0 ft/s; at the top the rocket momentarily has no velocity)

$v_2 = v_1 + at$

where $t =$ time in the air.

Last edited by Anonymous1; April 5th 2010 at 09:49 PM.

This is, in my opinion, a really bad problem. The equation Anonymous1 gives, as well as $s = -\tfrac{1}{2}gt^2 + v_0 t + s_0$, will work for something thrown upward so that the only force is that of gravity. But the whole point of a rocket is that it continues firing as it goes up, so that is NOT true.
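Under the pure-gravity textbook model the thread settles on (h(t) = -16t^2 + 900t, in feet and seconds, ignoring continued thrust as HallsofIvy cautions), both parts reduce to quadratics; a sketch of the arithmetic:

```python
import math

def h(t):
    """Height in feet after t seconds under the algebra-textbook model."""
    return -16 * t**2 + 900 * t

# (a) h(t) = 4200  ->  16 t^2 - 900 t + 4200 = 0
a, b, c = 16, -900, 4200
disc = math.sqrt(b * b - 4 * a * c)
t_up = (-b - disc) / (2 * a)      # about 5.1 s: first reaches 4200 ft on the way up
t_down = (-b + disc) / (2 * a)    # about 51.1 s: passes 4200 ft again coming down
print(t_up, t_down)

# (b) h(t) = 0 for t > 0  ->  t = 900/16
t_ground = 900 / 16
print(t_ground)                   # 56.25 s
```

Part (a) genuinely has two answers in this model, which is worth stating explicitly when reporting the solution.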
{"url":"http://mathhelpforum.com/pre-calculus/137493-rocket-air.html","timestamp":"2014-04-19T21:43:35Z","content_type":null,"content_length":"49699","record_id":"<urn:uuid:d11f4db3-58b7-45b4-8b40-36a95cc4c96a>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00146-ip-10-147-4-33.ec2.internal.warc.gz"}
Penn Valley, PA Algebra 2 Tutor Find a Penn Valley, PA Algebra 2 Tutor ...I served as an elementary school tutor during my first two years of college. Before that, I tutored students in elementary math and science. I also participated in a program where older teens help elementary-aged students with reading and writing. 10 Subjects: including algebra 2, algebra 1, Latin, SAT math Hello! I am a patient and knowledgeable tutor for high-school and college level students in math and science. Each student starts out in a different place and has unique needs and strengths. 10 Subjects: including algebra 2, chemistry, calculus, physics ...If you understand the foundations of a subject very well, you can keep your head in a reasonable place when things get complicated. -Work on the same level as my students. As you work on a problem set, I will work through the same thing. I will never quote a solution that I can't explain thoroughly. 25 Subjects: including algebra 2, chemistry, writing, geometry ...I have high expectations and do expect to see a lot of hard work, productivity, progress and a regular evaluation of student skills in desired areas of focus, helping students be unashamed to work on weakness in the direction of change and improvement. Cheers to building strong foundations for c... 12 Subjects: including algebra 2, chemistry, geometry, biology ...I obtained my International Baccalaureate Diploma in July 2012 at Central High School of Philadelphia. I am well-versed in IB (as well as AP) Biology, Theory of Knowledge, English Literature and Composition, Writing craft, and 20th Century History. I have also obtained 'A' grades in Spanish lan... 
18 Subjects: including algebra 2, reading, Spanish, English
{"url":"http://www.purplemath.com/Penn_Valley_PA_algebra_2_tutors.php","timestamp":"2014-04-21T11:09:01Z","content_type":null,"content_length":"24294","record_id":"<urn:uuid:a77adc5e-672d-4bea-9d0e-1c3d6751743a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
How much water is in that river? The Changing Hudson Project curriculum was developed by scientists and educators at Cary to help students understand how the Hudson River changes over time. By collaborating with teachers, scientists, and management agencies, the curriculum has grown to include a wide range of topics that engage students with visualizations, readings, investigations, and actual scientific data. This exercise requires a trip to a stream or tributary. Students should be familiar with using Excel and calculating averages. Ask students how much water would fit into the classroom. How would they determine this? Allow students to work in pairs to come up with an answer. After discussing their methodologies, show students pictures of various types of rivers and streams. Ask them to imagine standing on the edge of a stream or river and to think about how much water is going by each second. Then ask them to imagine watching the water flow by in a local stream and the Hudson River. How do these compare? At the stream, take students to the water’s edge and have them observe the movement of the water. Ask students to think about whether they think different kinds of organisms live in low or slow-flow areas versus high or fast-flow areas and why. Then ask students to brainstorm what information they would need to measure the volume of water flowing by. (This is a good chance to refresh their memories about volume.) Introduce the measurement unit ‘cubic feet per second’ (cfs) and ask them what they must measure to find the cfs of water flowing past them in the stream. Eventually you should get them to the idea that you can think of taking a rectangular slice (width x depth) of the stream and calculating how many slices go past your location in a second. You can get the width and average depth of the stream at your location from student data collection. In groups, students should complete the attached stream flow worksheet. 
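The "slice" idea above translates directly into a short calculation; the function name and sample field data below are illustrative, not part of the lesson materials.

```python
def discharge_cfs(width_ft, avg_depth_ft, velocity_ft_per_s):
    """Streamflow as the cross-sectional 'slice' area times velocity.

    width * depth gives the area of the rectangular slice in square feet;
    multiplying by how far the water travels each second gives cubic feet
    per second (cfs)."""
    return width_ft * avg_depth_ft * velocity_ft_per_s

# Hypothetical field data: several depth readings taken across the stream
# are averaged, as the worksheet asks.
depths = [0.8, 1.2, 1.5, 1.1, 0.6]
avg_depth = sum(depths) / len(depths)   # 1.04 ft
flow = discharge_cfs(width_ft=12.0, avg_depth_ft=avg_depth,
                     velocity_ft_per_s=1.5)
print(round(flow, 2))  # 18.72 cfs
```

Students could fill in their own measured width, depths, and velocity (e.g. timed float runs) in place of the sample numbers.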
Make sure students understand the importance of taking several readings and calculating an average. Collect students’ worksheets and discuss the implications of changing flow on the Hudson River ecosystem (the last few questions on the worksheet). Palmer, M.A., C.A.R. Liermann, C. Nilsson, M. Floerke, J. Alcamo, P.S. Lake, and N. Bond. 2008. Climate change and the world’s river basins: anticipating management options. Frontiers in Ecology and the Environment, vol. 6. MST 1 - Mathematical analysis, scientific inquiry, and engineering design MST 4- Physical setting, living environment and nature of science MST 6- Interconnectedness of mathematics, science, and technology (modeling, systems, scale, change, equilibrium, optimization) MST 7- Problem solving using mathematics, science, and technology (working effectively, process and analyze information, presenting results) Cary Institute of Ecosystem Studies | Millbrook, New York 12545 | Tel (845) 677-5343
{"url":"http://www.caryinstitute.org/educators/teaching-materials/changing-hudson-project/weather-climate/day-9-how-much-water-river","timestamp":"2014-04-21T14:47:47Z","content_type":null,"content_length":"35748","record_id":"<urn:uuid:faa4a092-4d9c-4226-9a59-3c736141d469>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00238-ip-10-147-4-33.ec2.internal.warc.gz"}
Copyright © University of Cambridge. All rights reserved. 'After Thought' printed from http://nrich.maths.org/ Sue Liu, S6, Madras College St Andrews sent one of her super solutions in which she wrote $\sin(\cos x)$ as $\cos(\cos x - \pi/2)$ and then used the formula which gives the difference of two cosines as minus the product of two sines. Another triumph for Sue! After a lot more work with trig formulae Sue proves that $\cos(\sin x)$ is greater than $\sin(\cos x)$ for all $x$, and you can try this for yourselves. There is another way of looking at this. You may like to sketch some graphs. First, for $x$ between $0$ and $\pi/2$ the cosine function is decreasing and \[ 0 \leq \sin x \leq x \leq \pi/2 \] so it follows that \[1 = \cos 0 \geq \cos(\sin x) \geq \cos x \geq \cos(\pi/2) = 0. \ \ \ [1]\] Also, as $ \cos x \geq 0$ for all $x$ in this interval, and we also know that, for $y \geq 0$, $\sin y \leq y$, we can put $y = \cos x$, which gives \[ \sin (\cos x) \leq \cos x. \ \ \ [2]\] From [1] and [2] we see that, for $x$ between $0$ and $\pi/2$, \[ \cos(\sin x) \geq \cos x \geq \sin (\cos x). \] For $x$ between $ \pi/2 \mbox{ and } \pi$ it is even easier because in this interval $ \cos(\sin x) > 0\ \mbox{ and }\ \sin (\cos x) < 0.$ So far we have $\cos (\sin x) \geq \sin (\cos x)$ for $x$ between $0$ and $ \pi $. For the interval $[-\pi, 0]$ put $y = - x$; then $y$ is in the interval $[-\pi, 0]$ and $x$ is in the interval $[0, \pi]$, so, using what we have already proved and the fact that sine is an odd function and cosine is an even function, we have $$\begin{eqnarray} \cos (\sin y) = \cos (\sin (- x)) = \cos (- \sin x) \\ = \cos (\sin x) \\ \geq\sin (\cos x) \\ = \sin (\cos y). \end{eqnarray}$$ We have proved that $\cos (\sin x) \geq \sin (\cos x)$ for all $x$ between $-\pi \mbox{ and } \pi$ and hence everywhere by periodicity.
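The inequality can also be checked numerically. This quick sketch samples both functions over one full period, which by periodicity covers all real $x$:

```python
import math

# Check cos(sin x) > sin(cos x) on a fine grid over [-pi, pi].
xs = [-math.pi + 2 * math.pi * k / 10000 for k in range(10001)]
gap = min(math.cos(math.sin(x)) - math.sin(math.cos(x)) for x in xs)
print(gap > 0)  # True: cos(sin x) stays strictly above sin(cos x)
```

A grid check is of course no substitute for the proof above, but it is a useful sanity test when exploring inequalities like this one.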
{"url":"http://nrich.maths.org/348/solution?nomenu=1","timestamp":"2014-04-17T01:25:46Z","content_type":null,"content_length":"4653","record_id":"<urn:uuid:1711289a-16c9-4068-bf37-b18d67399791>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00295-ip-10-147-4-33.ec2.internal.warc.gz"}
Initialization of ice-sheet forecasts viewed as an inverse Robin problem Arthern, Robert J.; Gudmundsson, Hilmar. 2010 Initialization of ice-sheet forecasts viewed as an inverse Robin problem. Journal of Glaciology, 56 (197). 527-533. As simulations of 21st-century climate start to include components with longer timescales, such as ice sheets, the initial conditions for those components will become critical to the forecast. This paper describes an algorithm for specifying the initial state of an ice-sheet model, given spatially continuous observations of the surface elevation, the velocity at the surface and the thickness of the ice. The algorithm can be viewed as an inverse procedure to solve for the viscosity or the basal drag coefficient. It applies to incompressible Stokes flow over an impenetrable boundary, and is based upon techniques used in electric impedance tomography; in particular, the minimization of a type of cost function proposed by Kohn and Vogelius. The algorithm can be implemented numerically using only the forward solution of the Stokes equations, with no need to develop a separate adjoint model. The only requirement placed upon the numerical Stokes solver is that boundary conditions of Dirichlet, Neumann and Robin types can be implemented. As an illustrative example, the algorithm is applied to shear flow down an impenetrable inclined plane. A fully three-dimensional test case using a commercially available solver for the Stokes equations is also presented.
{"url":"http://nora.nerc.ac.uk/10838/","timestamp":"2014-04-19T09:38:24Z","content_type":null,"content_length":"20710","record_id":"<urn:uuid:36f3bb85-179a-463e-b389-510a36de9171>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Videos for base formula - Homework Help Videos - Brightstorm 16 Videos for "base formula" Examples of logarithmic notation, including a variety of bases, and the change of base formula. How to use the change of base formula for logs to evaluate a logarithmic statement. Overview of evaluating logarithmic expressions of any base using a calculator and the change of base formula. How to use the change of base formula for logs to evaluate a logarithmic statement. Examples of logarithmic notation, including a variety of bases, and the change of base formula. Overview of evaluating logarithmic expressions of any base using a calculator and the change of base formula. How to use the change of base formula to condense two logs into one. How to use the change of base formula to condense two logs into one. How to strategize about solving formulas for variables; how to solve the area of a rectangle formula for base and height; how to solve the area of a triangle formula for base and for height. How to solve for one of the bases of a trapezoid in the trapezoid area formula. How to use the change of base formula to compute the derivative of log functions of any base. Determine after how many years a product will be worth a certain amount. How to find the area of any parallelogram using rectangle area formulas. How to derive the area of a trapezoid formula using the area of a rectangle. How to find the volume of any prism, right or oblique using a general formula.
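The change of base formula the videos cover, $\log_b x = \ln x / \ln b$, is exactly how a calculator with only natural or base-10 logs evaluates a log of any base. A minimal sketch (the example values are illustrative):

```python
import math

def log_base(x, b):
    # Change of base formula: log_b(x) = ln(x) / ln(b)
    return math.log(x) / math.log(b)

print(round(log_base(8, 2), 10))   # 3.0, since 2**3 = 8
print(round(log_base(50, 5), 4))   # log base 5 of 50, via natural logs only
```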
{"url":"http://www.brightstorm.com/tag/base-formula/","timestamp":"2014-04-18T03:20:10Z","content_type":null,"content_length":"52861","record_id":"<urn:uuid:c58020fa-fcc2-491d-9317-02cd25324b97>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00051-ip-10-147-4-33.ec2.internal.warc.gz"}
Kingwood, TX SAT Math Tutor Find a Kingwood, TX SAT Math Tutor ...As a violin and viola player who sight-reads music all the time, sight-singing comes very easily to me, and I enjoy teaching it. I have been playing violin since I was four years old and viola since I was 10. Even though I turned my focus to viola when I was 12, I continued to play violin. 35 Subjects: including SAT math, reading, English, writing ...To organize academic subjects, a student must have a system of folders or binders. These need to be organized according to subject. It is also necessary to keep the work and notes organized 49 Subjects: including SAT math, reading, geometry, English I have successfully tutored children from 4th grade through High School. I prefer one on one education to large classes and my students have always shown marked improvement. I am an excellent communicator with a desire to help student achieve their educational goals. 9 Subjects: including SAT math, geometry, algebra 2, algebra 1 ...The mastery of reading depends on the understanding and practice of these vital reading concepts. I am trained in creating systems for organizing, processing, and comprehending what students read, hear, or prepare in class; planning homework and long-term assignments; studying for tests; and det... 15 Subjects: including SAT math, reading, English, grammar ...One of the most difficult skills to master is the ability to effectively study and comprehend what is being read. For too many students, studying is a dreaded chore with countless preconceived notions about "cramming" in a locked room with a book and a light. Using techniques such as Cornell or... 
27 Subjects: including SAT math, English, algebra 2, algebra 1
{"url":"http://www.purplemath.com/Kingwood_TX_SAT_Math_tutors.php","timestamp":"2014-04-19T17:31:07Z","content_type":null,"content_length":"23875","record_id":"<urn:uuid:eb7f62d4-841e-4ce7-9907-49d116cdc740>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00461-ip-10-147-4-33.ec2.internal.warc.gz"}
Forest fires and coagulation-deletion processes Seminar Room 1, Newton Institute Consider the following extension of the Erdos-Renyi random graph process; in a graph on $n$ vertices, each edge arrives at rate 1, but also each vertex is struck by lightning at rate $\lambda$, in which case all the edges in its connected component are removed. Such a "mean-field forest fire" model was introduced by Rath and Toth. For appropriate ranges of $\lambda$, the model exhibits self-organised criticality. We investigate scaling limits, involving a multiplicative coalescent with an added "deletion" mechanism. I'll mention a few other related models, including epidemic models and "frozen percolation" processes.
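A toy discrete-time caricature of the model (not Rath and Toth's exact construction, and with one convenient rate normalization chosen for illustration) can convey the dynamics: edges accumulate until a lightning strike wipes out an entire component's edges.

```python
import random
from collections import deque

def step(adj, lam, rng):
    """One event: either add a uniformly random edge, or (with probability
    proportional to the lightning rate) strike a random vertex and delete
    every edge in its connected component."""
    n = len(adj)
    # Illustrative rates: edge arrivals at total rate n/2, lightning at lam*n.
    if rng.random() < (n / 2) / (n / 2 + lam * n):
        u, v = rng.sample(range(n), 2)
        adj[u].add(v)
        adj[v].add(u)
    else:
        s = rng.randrange(n)
        seen, q = {s}, deque([s])
        while q:                      # BFS the component containing s
            x = q.popleft()
            for y in adj[x]:
                if y not in seen:
                    seen.add(y)
                    q.append(y)
        for x in seen:                # the "fire" removes all its edges
            adj[x].clear()

def largest_component(adj):
    seen, best = set(), 0
    for s in adj:
        if s not in seen:
            comp, q = {s}, deque([s])
            while q:
                x = q.popleft()
                for y in adj[x]:
                    if y not in comp:
                        comp.add(y)
                        q.append(y)
            seen |= comp
            best = max(best, len(comp))
    return best

rng = random.Random(1)
adj = {v: set() for v in range(200)}
for _ in range(4000):
    step(adj, lam=0.05, rng=rng)
print(largest_component(adj))  # size of the largest surviving cluster
```

Varying `lam` shows the qualitative trade-off the abstract describes: with no lightning a giant component forms, while frequent strikes keep components small.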
{"url":"http://www.newton.ac.uk/programmes/SCS/seminars/2013081514301.html","timestamp":"2014-04-20T05:50:25Z","content_type":null,"content_length":"6124","record_id":"<urn:uuid:80dfe800-1a07-448e-a691-0cffabeeda42>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Center of gravity and Center of mass different?!?! I thought about it a bit longer and if we use the definition "The centre of gravity is the point around which there is zero torque due to gravity", then simply due to what torque is, we will never get a unique 'centre of gravity'. Even if the gravitational field is uniform, the centre of gravity is a line of solutions, going through the centre of mass. That's not quite the right definition. You're missing a key part. Take a point that is on your line but is not the center of mass. Now rotate the object a bit. Now you'll get a different line, and your point won't be on it. You'll instead get a torque about that chosen point. You forgot to add the qualifier "for any orientation of the object". In a uniform gravity field, that qualifier does make the center of gravity unique, and it is the center of mass. This definition doesn't work in a non-uniform gravitational field. What is wrong with using the point at which all the object's mass, if concentrated there, would give the same net gravitational force as the extended object experiences? That definition doesn't work in a uniform gravity field. The center of gravity is indeterminate per this definition in a uniform gravity field because every point qualifies as the center of gravity. This definition is not unique in a non-uniform gravity field; the location of the center of gravity changes as an object changes orientation. This definition is used occasionally for space-based applications. For example, a space elevator would need its center of gravity rather than its center of mass at geosynchronous altitude. Newton's shell theorem proved that an object's gravity is the same as if the whole mass were concentrated at its centre of mass. So in that respect there is no difference between COM and COG. That's only true for objects with a spherical mass distribution. It's not true in general. A couple of examples: The Earth and the Moon.
The Earth has a non-spherical gravitational field thanks largely to its equatorial bulge. That non-spherical gravitational field is essential for how our sun synchronous satellites work. Place a satellite in such an orbit and the orbital plane will rotate by just the right amount over the course of a year so as to maintain near-ideal lighting conditions underneath the satellite. My other example is the Moon. The Moon's gravity field is rather lumpy thanks to a number of mass concentrations (mascons) on the near side of the Moon. This lumpy gravity field can make for some rather bizarre orbits.
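The orientation-dependence under the "equivalent force" definition is easy to see numerically. This sketch uses illustrative numbers, with $1/(1+h)^2$ standing in for an inverse-square field; the function names are my own, not from the thread.

```python
def center_of_mass(ms, xs):
    """Mass-weighted average position."""
    return sum(m * x for m, x in zip(ms, xs)) / sum(ms)

def center_of_gravity(ms, xs, hs, g):
    """Weight each position by the local gravitational force m_i * g(h_i),
    per the 'same net force if all mass sat at one point' definition."""
    w = [m * g(h) for m, h in zip(ms, hs)]
    return sum(wi * x for wi, x in zip(w, xs)) / sum(w)

# Toy non-uniform field, weaker with height h (an inverse-square stand-in):
g = lambda h: 1.0 / (1.0 + h) ** 2

ms = [1.0, 1.0]     # two equal point masses joined by a rod
xs = [0.0, 1.0]     # positions measured along the rod

# Rod held vertical: heights coincide with positions along the rod.
print(center_of_mass(ms, xs))             # 0.5
print(center_of_gravity(ms, xs, xs, g))   # 0.2 -- pulled toward the stronger field

# Same rod laid flat (both masses at height 0): the along-rod centre of
# gravity slides back to the centre of mass, so it depends on orientation.
print(center_of_gravity(ms, xs, [0.0, 0.0], g))  # 0.5
```

The centre of mass stays at 0.5 regardless of orientation, while this "equivalent force" centre of gravity moves, which is exactly the non-uniqueness discussed above.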
{"url":"http://www.physicsforums.com/showthread.php?p=4289514","timestamp":"2014-04-20T08:31:44Z","content_type":null,"content_length":"82679","record_id":"<urn:uuid:633f86c4-de8a-4f18-8644-8ec7ff9b9895>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
Elusive Heat, Chaotic Patterns For decades scientists have used experiments replicating a classic form of heat transfer known as Rayleigh-Benard convection to study the effects of a warmer substance displacing a cooler one. These tightly controlled investigations of fluid dynamics allow scientists to scrutinize the fundamentals of behavior that is ubiquitous but otherwise elusive for study. A simmering pot of water, home heating and cooling, a weather system, or the fantastic furnace that is the sun -- all possess convection attributes. For the past several years, James D. Gunton, Haowen Xi and Jorge Vinals have used Pittsburgh Supercomputing Center's CRAY Y-MP and C90 to simulate Rayleigh-Benard convection. The Gunton team's simulations have revealed unpredicted chaotic patterns in R-B convection, unveiled new R-B convection structures and paved the way for mathematically accurate, three-dimensional renditions of chaotic patterns. "We're trying to understand pattern formation and chaotic behavior in nature," says Gunton. "As a result of using supercomputing to simulate R-B convection, we're discovering previously unknown chaos in fluids." Dependable Rayleigh-Benard [Figure: topography of a parallel-roll convection pattern (top) and a second pattern (bottom); the color scale increases from violet to red.] Rayleigh-Benard convection is significant because for a comparatively easy-to-replicate phenomenon, it continues to provide insight into understanding how heat energy moves through a flow system. Producing R-B convection involves isolating a liquid in a tiny enclosed cylindrical or rectangular cell and creating a temperature difference (gradient) between the bottom and top layers. It's the rough equivalent of heating a covered pan of water, though the experimental arrangement allows for precise control of fluid properties and the temperature gradient between the cylinder top and bottom, in addition to offering a safe view of the heated fluid's surface.
Under long-used experimental parameters, R-B experiments have consistently exhibited the same evolving structures, among them parallel rolls, squares and hexagons (discernible in the convection topography). The initial parameters and the type of fluid govern which kind of pattern will emerge first, but all the patterns are stable and non-chaotic. Given a constant temperature, they maintain their structure over time. Changing Patterns In 1992, Gunton's team replicated a laboratory experiment in which the expected transition from a hexagonal to a parallel-roll state instead produced an unpredicted global spiral. The Gunton effort confirmed the experimental findings, revealing in a two-dimensional image a mesmerizing, stable rotating global spiral. "Convection often takes place in a very organized fluid motion," says Gunton, "and this has been studied and observed for decades. But supercomputing has helped reveal these novel, unpredicted states." More recently, the Gunton team used the CRAY C90 to monitor the onset of Rayleigh-Benard convection from a non-convective state. The resulting two-dimensional images reveal a kaleidoscopic pattern of local spirals. Their findings matched those of an independent though simultaneous experimental effort, thus bolstering the veracity of the unpredicted find. The spirals not only rotate, says Gunton, they move around in the fluid, annihilate each other, grow at the expense of one another and fluctuate in size and number. Gone is the regularity exhibited in the parallel rolls, with the fluid moving in continuous circular waves. Velocities and temperatures vary throughout the fluid. Some portions of it rise and fall faster than others. Everything about the system changes from one moment to the next. It's a crock pot of symmetry turned bubbly cauldron. [Figure: the bull's-eye symmetry of the global spiral (top) compared with a collection of adjacent local spirals (bottom), as simulated on the CRAY C90 by James Gunton and coworkers; the color scale increases from violet to red.]
"Since these local spirals are erratic in time -- changing position and size -- there's an irregular behavior in space as well," says Gunton. Thus, he says, a very simple experiment can now be used to study one of nature's most complex occurrences -- chaos. Chaos is a realm beyond order, whose disorder has a recognizable pattern over time. Chaotic patterns are seen in everything from weather systems to cardiac activity at the cellular level. "We're now using supercomputing to create three-dimensional models of the chaotic patterns," says Gunton, "which will provide further understanding of both convective behavior and chaos." The new patterns revealed themselves when experimentalists, and later theorists, devised a means of enlarging the R-B cell, the enclosed setting in which the convection occurs and is observed. Previously, they could examine only a small piece of the convection system, which limited the extent to which pattern activity could evolve. "In order for us to mathematically model bigger cells, we had to solve some very complex equations," says Xi. "It couldn't have been done without supercomputing." The code ran between 300-350 Mflops on the C90 and the researchers have logged approximately 1200 hours on the supercomputers, with 900 more slated for investigations that will produce three-dimensional images of chaos in R-B convection. "One of the quantities we try to calculate is the upward velocity of the fluid at different points in the system, at different times," says Gunton. "Solving the equations means determining the velocity of the fluid at any given point in the system at any given time. It can't be done without supercomputing." Researchers: James Gunton, Lehigh University. Hardware: CRAY C90 Software: User-developed Code. Keywords: Rayleigh-Benard, convection, fluid dynamics, chaos, fluids, heat, energy, temperature gradient.
Related Material on the Web: Physics Department Home Page at Lehigh University Projects in Scientific Computing, PSC's annual research report.
{"url":"http://www.psc.edu/science/Gunton/gunton.html","timestamp":"2014-04-18T23:16:58Z","content_type":null,"content_length":"8760","record_id":"<urn:uuid:eac7fee7-1e7b-40fc-ad35-275647f7cb72>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00340-ip-10-147-4-33.ec2.internal.warc.gz"}
Changing limits of integration. July 20th 2010, 08:41 AM #1 May 2010 Changing limits of integration. This problem involves a change in variable of the limit of integration. Consider the area bounded by $y=\frac{8x^2}{\sqrt{1-2x^2}}$, y=0, x=0, and x=0.5. A definite integral of this area, using x as the variable of integration, is $\int_0^{0.5} \frac{8x^2}{\sqrt{1-2x^2}}\,dx$. The next part of the question says: make a substitution of the form $x = A\sin\theta$ to transform the integral so that $\theta$ is the variable of integration. I'm not sure how to start this one. I thought of simply solving for $\theta$, but I don't know how to transform the integrand. *edit* forgot my dx. x.x Last edited by momopeaches; July 20th 2010 at 08:59 AM. Hmm... my English isn't great, but if I understood correctly, you need to change the limits of integration... Isn't that for $\int\int$? Just wondering if there's $dx\,dy$ at the end. Oops! Forgot my dx at the end of the integral. Yes, I also need to change the limit of integration, in addition to the variable. I'm not sure how to do this, though. This problem involves a change in variable of the limit of integration. Consider the area bounded by $y=\frac{8x^2}{\sqrt{1-2x^2}}$, y=0, x=0, and x=0.5. A definite integral of this area, using x as the variable of integration, is $\int_0^{0.5} \frac{8x^2}{\sqrt{1-2x^2}}\,dx$. The next part of the question says: make a substitution of the form $x = A\sin\theta$ to transform the integral so that $\theta$ is the variable of integration. I'm not sure how to start this one. I thought of simply solving for $\theta$, but I don't know how to transform the integrand. First of all, you should always include $dx$ when you integrate. Secondly, here is a plot of the function.
I would assume you want to evaluate $\int_0^{0.5} \dfrac{8x^2}{\sqrt{1-2x^2}}\, dx$ The key is to think of it like a triangle (check triangle.pdf) Now $\dfrac{1}{\sqrt{1-2x^2}}=\dfrac{1}{\cos{\theta}}$ $8x^2=4\times 2x^2=4\sin^2 \theta$ And lastly you have $\dfrac{d}{dx}(\sin \theta)=\sqrt{2}\dfrac{dx}{dx}$ $\sqrt{2}\, dx=\cos(\theta)\, d\theta$ Now the last part is to change the limits of integration. You have $\sin \theta=\sqrt{2}x$ so $x=0,\theta=0$ and $x=0.5,\theta=\sin^{-1}\left(\sqrt{2}\times 0.5\right)$ Can you finish from here? You don't need to change the limits of integration if you don't want to. You can write it like this $\int_{x=0}^{x=0.5} f(\theta)\, d\theta$ and then change back to x when you've solved the integral before plugging in the numerical values and calculating the difference. I'm not sure that I follow. I understand about thinking of it as a triangle, but where did the $\frac{d}{dx}(\sin\theta)=\sqrt{2}\frac{dx}{dx}$ come from? You have $\sin \theta=\sqrt{2}x$ from the triangle. Then you differentiate both sides with respect to x $\dfrac{d}{dx}(\sin \theta)=\sqrt{2}\dfrac{dx}{dx}$ Now you apply the chain rule $\dfrac{d}{dx}(\sin \theta)=\dfrac{d\theta}{dx}\cos\theta=\sqrt{2}\dfrac{dx}{dx}$ And finally you multiply both sides by dx $\cos\theta\, d\theta=\sqrt{2}\,dx$ Okay, I get that, and was able to write out my definite integral. The next part of the question asks to evaluate each of the integrals. Should I be getting the same or different answers for the two integrals? I got .40361 for the first answer, and .697067 for the second answer. Yes you should be getting the same answers. Here I'll show you a simpler example. Consider the integral $\int_0^1 \dfrac{1}{\sqrt{1-x^2}}\,dx$ In your triangle put the hypotenuse equal to 1.
The leg that is across the angle should be x so that $\sin \theta=x$ Now using Pythagoras theorem the other leg is $\sqrt{1-x^2}=\cos \theta$ Now you can write the integral like this $\int_0^1 \dfrac{1}{\sqrt{1-x^2}}=\int_{x=0}^{x=1}\dfrac{1}{\cos \theta}\,dx$ You have to express $dx$ in terms of $d\theta$ We have $x=\sin \theta$ and we differentiate both sides to get $dx=\cos \theta \,d\theta$ Now we substitute dx in the integral to get $\int_{x=0}^{x=1}1\,d\theta=\left[\theta\right]_{x=0}^{x=1}$ What you do next is change the limits of integration. The limits are x=0 and x=1 but we are integrating with respect to $\theta$ You use $x=\sin\theta$ and you need to solve the two equations. $\sin \theta=0 \therefore \theta=0$ $\sin \theta=1 \therefore \theta=\sin^{-1} 1=\dfrac{\pi}{2}$ You have to work in radians. The reason for that is because radians have a unit of length which is the same unit for x and y. Now your integral becomes $\int_0^{\pi/2} 1\,d\theta=\left[\theta\right]_0^{\pi/2}=\dfrac{\pi}{2}$ Yes you should be getting the same answers. Here I'll show you a simpler example. Consider the integral $\int_0^1 \dfrac{1}{\sqrt{1-x^2}}\,dx$ In your triangle put the hypotenuse equal to 1. The leg that is across the angle should be x so that $\sin \theta=x$ Now using Pythagoras theorem the other leg is $\sqrt{1-x^2}=\cos \theta$ Now you can write the integral like this $\int_0^1 \dfrac{1}{\sqrt{1-x^2}}=\int_{x=0}^{x=1}\dfrac{1}{\cos \theta}\,dx$ You have to express $dx$ in terms of $d\theta$ We have $x=\sin \theta$ and we differentiate both sides to get $dx=\cos \theta \,d\theta$ Now we substitute dx in the integral to get $\int_{x=0}^{x=1}1\,d\theta=\left[\theta\right]_{x=0}^{x=1}$ What you do next is change the limits of integration. The limits are x=0 and x=1 but we are integrating with respect to $\theta$ You use $x=\sin\theta$ and you need to solve the two equations. 
$\sin \theta=0 \therefore \theta=0$ $\sin \theta=1 \therefore \theta=\sin^{-1} 1=\dfrac{\pi}{2}$ You have to work in radians. The reason for that is because radians have a unit of length which is the same unit for x and y. Now your integral becomes $\int_0^{\pi/2} 1\,d\theta=\left[\theta\right]_0^{\pi/2}=\dfrac{\pi}{2}$ This is an improper integral. It should be calculated in terms of limits. It is true it is an improper integral, but after the trig substitution that is no longer the case. Besides, the answer is correct; check http://www.wolframalpha.com/input/?i=integrate[1%2Fsqrt[1-x^2] Okay, I understand how the problem works, but I still am getting the wrong answer. I don't know what I'm doing wrong. The integral that I'm trying to evaluate is $\int\frac{4\sin^2\theta}{\cos\theta}\,d\theta$ and the limits of integration are from 0 to $\arcsin(\sqrt{2}\times 0.5)$. What step did I mess up on? Your integral seems wrong. Did you replace $dx$ with $d\theta$ straight away? You can't do that. From my first post $\sqrt{2}\, dx=\cos(\theta)\, d\theta$ Now you substitute that and you get $\dfrac{4}{\sqrt{2}}\int_0^{\sin^{-1}(0.5\times \sqrt{2})}\dfrac{\sin^2 \theta}{\cos\theta}\cos\theta \,d\theta=2\sqrt{2}\int_0^{\sin^{-1}(1/\sqrt{2})}\sin^2\theta\,d\theta$
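The thread's answer of ≈0.40361 can be confirmed numerically. This sketch compares the original integral, the substituted one over $[0, \pi/4]$ (since $\sin^{-1}(1/\sqrt{2}) = \pi/4$), and the closed form $2\sqrt{2}(\pi/8 - 1/4)$ obtained from $\int \sin^2\theta\,d\theta = \theta/2 - \sin 2\theta/4$:

```python
import math

def simpson(f, a, b, n=1000):
    """Composite Simpson's rule (n must be even)."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

direct = simpson(lambda x: 8 * x * x / math.sqrt(1 - 2 * x * x), 0.0, 0.5)
substituted = simpson(lambda t: 2 * math.sqrt(2) * math.sin(t) ** 2,
                      0.0, math.pi / 4)
closed_form = 2 * math.sqrt(2) * (math.pi / 8 - 0.25)

print(round(direct, 5), round(substituted, 5), round(closed_form, 5))
# all three agree: 0.40361
```

This matches the first answer reported in the thread; the mismatched second answer came from forgetting the $\cos\theta\,d\theta/\sqrt{2}$ factor when replacing $dx$.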
{"url":"http://mathhelpforum.com/calculus/151481-changing-limits-integration.html","timestamp":"2014-04-20T20:23:59Z","content_type":null,"content_length":"75639","record_id":"<urn:uuid:50944a7b-0d59-47f1-ae2f-c3deb778e0b4>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00223-ip-10-147-4-33.ec2.internal.warc.gz"}
Congruent Triangles Triangles in the World By Sierra Anna Johnson is going on a trip with her family traveling all around the world! What triangles will she see in different countries and cities of the world? 4.1 Triangles and Angles: Hong Kong The first place Anna stops is Hong Kong, where she sees the Bank of China, the most famous Hong Kong skyscraper in the world! The first thing she notices are all the different triangles! Immediately she sees that all the triangles are isosceles triangles, triangles that have at least two congruent sides. And she quickly notices two right triangles, triangles with one right angle, in the pattern as well. And what other types of triangles are there that are not in the building? Acute Triangle Obtuse Triangle Scalene Triangle Equilateral Triangle Equiangular Triangle Theorems 4.1 and 4.2 4.1- Triangle Sum Theorem- The sum of the measures of the interior angles of a triangle is 180 degrees. m<A + m<B + m<C = 180° 4.2- Exterior Angle Theorem- The measure of an exterior angle of a triangle is equal to the sum of the measures of the two nonadjacent interior angles. m<D = m<A + m<B - Corollary to the Triangle Sum Theorem- The acute angles of a right triangle are complementary. m<A + m<B = 90° 4.2 Congruence and Triangles: the Netherlands Anna is on the plane to the Netherlands and she flies over Bourtange and notices a triangular pattern in the center of the city. She knows from her studies of vertical angles that <1 is congruent to <2, and she knows that triangle A is congruent to triangle B. She also knows that m<4 = m<3. Using the Third Angles Theorem, Theorem 4.3, she concludes that the third angles of the triangles are congruent. What’s the other theorem in lesson 4.2? You guessed it! It’s the Properties of Congruent Triangles, Theorem 4.4. Reflexive Property of Congruent Triangles- Every triangle is congruent to itself.
Symmetric Property of Congruent Triangles: If triangle ABC is congruent to triangle DEF, then triangle DEF is congruent to triangle ABC.
Transitive Property of Congruent Triangles: If triangle ABC is congruent to triangle DEF and triangle DEF is congruent to triangle GHI, then triangle ABC is congruent to triangle GHI.

4.3 Proving Triangles are Congruent: SSS and SAS: Turkey
Anna stays in a hotel in Aksu, Turkey, near Antalya, and notices two triangles made by a building behind the hotel and its reflection, sharing the edge of the lake as another side. She sees that the angles opposite the shared side are congruent, and the bases are congruent. Using the reflexive property, she knows the shared side between them is congruent to itself. Recently she learned the Side-Angle-Side (SAS) postulate in geometry, stating that if two sides and the included angle of one triangle are congruent to two sides and the included angle of another triangle, then the two triangles are congruent, and so she can conclude that the two triangles are congruent. And what's the Side-Side-Side (SSS) postulate? It states that if three sides of one triangle are congruent to three sides of a second triangle, then the two triangles are congruent.

4.4 Proving Triangles are Congruent: ASA and AAS: Egypt
Anna's next stop was Egypt, the home of the famous ancient Pyramids. As Anna stood looking at the Pyramids, she noticed two congruent angles between two triangular faces at the top of the pyramid and two congruent angles between the same faces at the bottom of the pyramid. Using the reflexive property, Anna knew the included side shared by both triangles was congruent to itself. Thinking back on her geometry lesson before she left, she realized the two triangles were congruent by the Angle-Side-Angle (ASA) postulate.
She'd also learned the Angle-Angle-Side Theorem (Theorem 4.5) that day, which stated that if two angles and a nonincluded side of one triangle are congruent to two angles and the corresponding nonincluded side of a second triangle, then the two triangles are congruent.

4.5 Using Congruent Triangles: Denmark
Anna's next stop was Denmark. As she was walking through the streets, she saw a little shop and became curious about the patterns on its side. If triangles 1 and 2 were congruent and their sides measured 5 feet, 4 feet, and 3 feet, and ABCD was a perfect rectangle, how could she find out what the perimeter was for triangle ABD? She decided to use geometry to figure it out. If triangles 1 and 2 were congruent, then by CPCTC (Corresponding Parts of Congruent Triangles are Congruent), DB would be congruent to DA. She already knew the lengths of the hypotenuses were 5 feet (she always had her ever-ready ruler), and she knew the lengths of CD and DE were 3 feet. Since ABCD was a rectangle, AB had to be congruent to CE by definition of a rectangle, and by the Segment Addition Postulate, CE was 6 feet, making AB 6 feet. Therefore the perimeter of triangle ABD is 16 feet (5 + 5 + 6). "Hey! I used congruent triangles for that!"

4.6 Isosceles, Equilateral and Right Triangles: Greece
Anna's next stop is Greece, her favorite country in her travels. As soon as the plane lands in Athens, Anna begs her mother to take a bus to the Acropolis to see the Parthenon, the most famous ancient temple in Greece. While studying the Parthenon, Anna notices that the triangle at the top of the temple is an isosceles triangle. Using the Base Angles Theorem (Theorem 4.6), she could conclude that the two angles opposite the congruent sides are also congruent. And if she only knew the two angles were congruent, she could use the Base Angles Converse (Theorem 4.7) to conclude the two sides opposite the angles were congruent. Anna sees she could also create two right triangles with congruent hypotenuses.
Since they share the same base, or leg, she knows from the reflexive property that the leg is congruent to itself. Using the Hypotenuse-Leg Congruence Theorem (Theorem 4.8), Anna knows that those two triangles are congruent.

4.7 Triangles and Coordinate Proof: I'm Going Home
As Anna was on the plane ride home, she pulled out her map and began looking at three of the countries she had traveled to earlier, picking out Turkey (A), Egypt (B) and China (C). She noticed <ABC was indeed a right angle, and pulling out her ruler again, measured the lengths of the legs. AB was 0.5 inches, BC was 3 inches, and just as she was about to measure CA, Anna dropped her ruler. "No worries!" she said, unable to find it. "I'll just graph this!" Anna decided to make every four units measure 1 inch on her map. Using the distance formula, she calculated the measurement of the hypotenuse, CA: CA = 3.04 inches by her map.

Careers Using Congruent Triangles
•Architects have to use congruent triangles in order to keep their design measurements the same.
•Designers use many congruent triangles in modern art and decorating rooms. They like to keep unity around the room, and congruent figures, triangles especially, pull the room together.
•Airplane Pilots use triangles in coordinate grids to figure out the distance of their routes.
•Artists use congruent triangles in their compositions.
•Construction Workers use triangles in coordinate grids to figure out where everything should be built and placed.
•The people who map constellations use triangles in coordinate grids to lay out where the stars are and the distances between the stars in constellations.
•Carpenters have to make sure the triangles they use are congruent so that the pieces of wood will fit the way they are meant to.
•Painters have to make sure they paint congruent triangles in their patterns.
•Sailors have to map their routes, sometimes triangles, in coordinate planes and find the distances between each point to make sure they have enough supplies to last. •Engineers have to make sure they design products with congruent triangles so their machines work the way they are invented to work.
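Anna's two calculations, the 16-foot perimeter from section 4.5 and the distance-formula hypotenuse from section 4.7, can be checked with a few lines of code. This is just an illustrative check, not part of the original presentation:

```python
import math

# Section 4.5: congruent triangles with sides 5, 4, and 3 feet.
# By CPCTC the hypotenuses DB = DA = 5 ft, and AB = CE = 3 + 3 = 6 ft
# by the Segment Addition Postulate, so the perimeter of triangle ABD is:
perimeter_ABD = 5 + 5 + (3 + 3)
print(perimeter_ABD)  # 16

# Section 4.7: legs AB = 0.5 in and BC = 3 in of a right triangle.
# The distance formula gives the hypotenuse CA = sqrt(AB^2 + BC^2).
CA = math.sqrt(0.5 ** 2 + 3 ** 2)
print(round(CA, 2))  # 3.04, matching Anna's answer
```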
{"url":"http://www.docstoc.com/docs/143420176/Congruent-Triangles","timestamp":"2014-04-20T23:47:58Z","content_type":null,"content_length":"59365","record_id":"<urn:uuid:67a4f8f8-f769-49ef-85a1-1745b0c681b6>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00395-ip-10-147-4-33.ec2.internal.warc.gz"}
A Cartesian Puzzle
Copyright © University of Cambridge. All rights reserved. 'A Cartesian Puzzle' printed from http://nrich.maths.org/

Why do this problem?
This problem is one that requires some understanding of coordinates in the first quadrant. It will also call on knowledge of both rotational and line symmetry, and the properties of various quadrilaterals.

Possible approach
You could play a game of 'twenty questions' to begin with so that pupils get a chance to familiarise themselves with properties of shapes. Choose a quadrilateral and write the name of it on a piece of paper. Invite the class to ask questions to guess what your quadrilateral is, but you can only answer yes or no. Keep a tally of the number of questions asked - if they get it in less than twenty, they win, otherwise you win. You could repeat this a few times with pupils choosing shapes.
You could start on the problem itself by showing it to the group on an interactive whiteboard or data projector. Alternatively, if they are already very familiar with coordinates in the first quadrant, you could get them to work in pairs from a printed sheet of the problem from the beginning. It is important that they are able to talk through their ideas with a partner while doing the problem.
This sheet of the first quadrant could be used for both rough working and the final results. Otherwise supply plenty of squared paper! It might help learners to know that the coordinates of each quadrilateral are given going round in an anti-clockwise direction. You could choose not to give this information to some pupils and then ask them to select the coordinates which give the most symmetry for the plotted shape.
One of the nice things about this problem is that learners will know that they have solved it correctly. In the plenary, therefore, you can concentrate on asking some pairs to explain the way they tackled the problem, rather than focusing on the answer. Were some ways more efficient than others?
Key questions What kind of quadrilateral do you think this one is? Where is its fourth vertex? What kind of symmetry do you think this quadrilateral has? Possible extension Learners could plot their own quadrilaterals with one vertex of each forming a hexagon and so make a similar problem for a friend to try. Possible support You might want to tell some children that the shapes include one parallelogram, one trapezium and one rhombus, and are otherwise squares and rectangles.
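One of the key questions above asks where a quadrilateral's fourth vertex lies. For a parallelogram this can be computed directly: because the diagonals bisect each other, D = A + C - B when B is adjacent to both A and C. A small sketch using made-up coordinates (not the ones in the actual puzzle):

```python
# Fourth vertex of a parallelogram ABCD, given the other three vertices.
# The diagonals bisect each other, so D = A + C - B.
def fourth_vertex(a, b, c):
    return (a[0] + c[0] - b[0], a[1] + c[1] - b[1])

# Hypothetical first-quadrant coordinates, purely for illustration:
A, B, C = (1, 2), (4, 2), (6, 5)
D = fourth_vertex(A, B, C)
print(D)  # (3, 5): AB is parallel to DC and AD is parallel to BC
```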
{"url":"http://nrich.maths.org/1110/note?nomenu=1","timestamp":"2014-04-20T15:58:47Z","content_type":null,"content_length":"6682","record_id":"<urn:uuid:8cbea5c1-06e6-489b-95b7-1e66eb919c70>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00660-ip-10-147-4-33.ec2.internal.warc.gz"}
the definition of chi-square
chi-square: a quantity equal to the summation, over all variables, of the square of the difference between the observed and expected values divided by the expected value of the variable.

chi-square n. A test statistic that is calculated as the sum of the squared differences between observed and expected values, each divided by the expected value.
{"url":"http://dictionary.reference.com/browse/chi-square","timestamp":"2014-04-18T21:42:33Z","content_type":null,"content_length":"94009","record_id":"<urn:uuid:17a4ba2c-4ada-4ecb-9e9d-92845f7ea14d>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
A Uniform Rectangular Plate Is Suspended From A... | Chegg.com A uniform rectangular plate is suspended from a pin located at the midpoint of one edge as shown. Considering the dimension b constant, determine (a) the ratio c/b for which the period of oscillation of the plate is minimum, (b) the ratio c/b for which the period of oscillation of the plate is the same as the period of a simple pendulum of length c.
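As a hedged numerical check of this problem (not a posted solution), one can model the plate as a physical pendulum with period T = 2π√(I/(mgd)). With the pin at the midpoint of the edge of length b, the parallel-axis theorem gives I = m(b²/12 + c²/3) and the pivot-to-centroid distance is d = c/2, so T² is proportional to (b² + 4c²)/(6gc). These modeling steps are assumptions of this sketch:

```python
import math

g, b = 9.81, 1.0  # gravity and the fixed edge length (b is arbitrary here)

def period(c):
    # Physical pendulum: T = 2*pi*sqrt(I / (m*g*d)), with
    # I/m = b**2/12 + c**2/3 (parallel-axis theorem) and d = c/2.
    return 2 * math.pi * math.sqrt((b ** 2 / 12 + c ** 2 / 3) / (g * c / 2))

ratios = [i / 10000 for i in range(1, 30001)]  # candidate c/b values

# (a) ratio minimizing the period; analytically it is 1/2.
best = min(ratios, key=lambda r: period(r * b))
print(best)  # ~0.5

# (b) ratio where T equals that of a simple pendulum of length c,
# i.e. 2*pi*sqrt(c/g); analytically c/b = 1/sqrt(2) ≈ 0.707.
match = min(ratios, key=lambda r: abs(period(r * b) - 2 * math.pi * math.sqrt(r * b / g)))
print(round(match, 3))  # ~0.707
```

The scan agrees with the closed-form answers: setting d/dc[(b² + 4c²)/c] = 0 gives c/b = 1/2 for part (a), and equating T² with 4π²c/g gives 2c² = b², i.e. c/b = 1/√2, for part (b).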
{"url":"http://www.chegg.com/homework-help/uniform-rectangular-plate-suspended-pin-located-midpoint-one-chapter-19-problem-49p-solution-9780072976939-exc","timestamp":"2014-04-18T16:15:49Z","content_type":null,"content_length":"47523","record_id":"<urn:uuid:ef372e12-471d-4fee-9c28-49ae572e9054>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00605-ip-10-147-4-33.ec2.internal.warc.gz"}
Glossary of statistical terms

Chi-Square Test: A statistical test for testing the null hypothesis that the distribution of a discrete random variable coincides with a given distribution. It is one of the most popular goodness-of-fit tests. For example, in a supermarket, the relative frequencies of purchases of 4 brands of tea have been 0.1, 0.4, 0.2, and 0.3 during the last year; during the last week the numbers of packets sold have been 31, 41, 22, and 18 for the 4 brands, respectively. Has the preference changed - i.e. do the probabilities of purchasing now differ from last year's average preferences, or are the deviations in the observed relative frequencies caused by chance alone?
The chi-square test, besides discrete variables, is often applied to problems involving continuous random variables. In this case, the values of a continuous variable are transformed to a discrete variable with a finite number of values - e.g. the whole range of possible values is split into a finite number of intervals, and every such interval is considered as a discrete value (e.g. age groups "20...29", "30...39", etc).  Then the chi-square test is applied to the new discrete variable.
For small samples, the classical chi-square test is not very accurate - because the sampling distribution of the statistic of the test differs from the chi-square distribution. In such cases, Monte Carlo simulation is a more reasonable approach. In many cases such simulation can be carried out by creating an artificial sample with the given proportion of values and applying a resampling procedure to this sample.
Besides the one-sample chi-square test, there are variants of the test for comparison of the distributions of two or several samples. For these variants, a permutation version of the test is more accurate when at least one sample is small.
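The tea example above can be worked through directly. With n = 112 packets sold in total and last year's proportions as the null hypothesis, the expected counts are n times each proportion, and the statistic is χ² = Σ (O - E)²/E. A small sketch in plain Python (no statistics library assumed):

```python
observed = [31, 41, 22, 18]        # last week's packet counts per brand
last_year = [0.1, 0.4, 0.2, 0.3]   # last year's relative frequencies
n = sum(observed)                  # 112 packets in total

expected = [p * n for p in last_year]  # [11.2, 44.8, 22.4, 33.6]
chi2 = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
print(round(chi2, 2))  # 42.58

# With 4 - 1 = 3 degrees of freedom, the 5% critical value of the
# chi-square distribution is about 7.81, so a statistic this large
# suggests the preferences really have changed.
```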
See more on the use of resampling and permutation in the short online courses Resampling, and in the online book Resampling: The New Statistics.
The chi-square test is typically used in categorical data analysis, e.g. to check if two such variables are independent random variables. The chi-square test is based on the chi-square statistic.

Want to learn more about this topic? Statistics.com offers over 100 courses in statistics from introductory to advanced level. Most are 4 weeks long and take place online in a series of weekly lessons and assignments, requiring about 15 hours/week. Participate at your convenience; there are no set times when you must be online. Ask questions and exchange comments with the instructor and other students on a private discussion board throughout the course.

Survey of Statistics for Beginners: This course provides an easy introduction to statistics and statistical terminology through a series of practical applications. Once you've completed this course you'll be able to summarize data and interpret reports and newspaper accounts that use statistics and probability. You'll use simulation and resampling to fully grasp the difficult concept of "statistical significance."
Statistics 1 - Probability and Study Design: This course, the first of a 3-course sequence, provides an introduction to statistics for those with little or no prior exposure to basic probability and statistics. It runs every eight weeks.
This course covers the principal statistical concepts used in medical and health sciences. Basic concepts common to all statistical analysis are reviewed, and those concepts with specific importance in medicine and health are covered in detail.
This course covers the analysis of data gathered in surveys.
This course will cover the analysis of contingency table data (tabular data in which the cell entries represent counts of subjects or items falling into certain categories). Topics include tests for independence (comparing proportions as well as chi-square), exact methods, and treatment of ordered data. Both 2-way and 3-way tables are covered.
{"url":"http://www.statistics.com/index.php?page=glossary&term_id=727","timestamp":"2014-04-19T09:52:54Z","content_type":null,"content_length":"19098","record_id":"<urn:uuid:96b7b073-6c3e-427c-940f-fa95265f410b>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00351-ip-10-147-4-33.ec2.internal.warc.gz"}
Grid of points within a polygon - File Exchange - MATLAB Central
inPoints = polygrid(xv,yv,ppa) generates points that are within a polygon, using help from the inpolygon function.
xv and yv are columns representing the vertices of the polygon, as used in the MATLAB function inpolygon. ppa refers to the points per unit area you would like inside the polygon. Here unit area refers to a 1.0 x 1.0 square in the axes.
Example:
L = linspace(0, 2.*pi, 6);
xv = cos(L)'; yv = sin(L)';  % from the inpolygon documentation
inPoints = polygrid(xv, yv, 10^5)
plot(inPoints(:, 1), inPoints(:, 2), '.k');
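The same idea can be sketched outside MATLAB. Below is a rough pure-Python equivalent (the names and implementation details are mine, not from the File Exchange submission): it lays a regular grid over the polygon's bounding box at the requested density and keeps the points that fall inside, using a standard ray-casting point-in-polygon test in place of inpolygon.

```python
import math

def point_in_polygon(x, y, xv, yv):
    """Ray-casting test: toggle on each polygon edge crossed by a
    horizontal ray from (x, y) heading right."""
    inside = False
    n = len(xv)
    for i in range(n):
        x1, y1 = xv[i], yv[i]
        x2, y2 = xv[(i + 1) % n], yv[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            if x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
                inside = not inside
    return inside

def polygrid(xv, yv, ppa):
    """Regular grid of points inside the polygon, about ppa points
    per 1.0 x 1.0 square of axis area (mirroring the MATLAB interface)."""
    step = 1 / ppa ** 0.5  # spacing that yields ppa points per unit area
    x0, y0 = min(xv), min(yv)
    nx = int(math.floor((max(xv) - x0) / step))
    ny = int(math.floor((max(yv) - y0) / step))
    points = []
    for i in range(nx + 1):
        for j in range(ny + 1):
            x, y = x0 + i * step, y0 + j * step
            if point_in_polygon(x, y, xv, yv):
                points.append((x, y))
    return points

# Unit square at 100 points per unit area: spacing 0.1, and the
# half-open boundary handling of the ray-casting test keeps 10 x 10 points.
pts = polygrid([0, 1, 1, 0], [0, 0, 1, 1], 100)
print(len(pts))  # 100
```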
{"url":"http://www.mathworks.com/matlabcentral/fileexchange/41454-grid-of-points-within-a-polygon","timestamp":"2014-04-17T15:41:19Z","content_type":null,"content_length":"32173","record_id":"<urn:uuid:ecb6e593-327a-42ea-b6d7-85228f037521>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00521-ip-10-147-4-33.ec2.internal.warc.gz"}
The six types of data are:
1. Integer
2. Real
3. Double precision
4. Complex
5. Logical
6. Character
Each type is different and may have a different internal representation. The type may affect the interpretation of the operations involving the datum. The name employed to identify a datum or a function also identifies its data type. A symbolic name representing a constant, variable, array, or function (except a generic function) must have only one type for each program unit. Once a particular name is identified with a particular type in a program unit, that type is implied for any usage of the name in the program unit that requires a type.
A symbolic name that identifies a constant, variable, array, external function, or statement function may have its type specified in a type-statement (8.4) as integer, real, double precision, complex, logical, or character. In the absence of an explicit declaration in a type-statement, the type is implied by the first letter of the name. A first letter of I, J, K, L, M, or N implies type integer and any other letter implies type real, unless an IMPLICIT statement (8.5) is used to change the default implied type. The data type of an array element name is the same as the type of its array name.
The data type of a function name specifies the type of the datum supplied by the function reference in an expression. A symbolic name that identifies a specific intrinsic function in a program unit has a type as specified in 15.10. An explicit type-statement is not required; however, it is permitted. A generic function name does not have a predetermined type; the result of a generic function reference assumes a type that depends on the type of the argument, as specified in 15.10. If a generic function name appears in a type-statement, such an appearance is not sufficient by itself to remove the generic properties from that function.
In a program unit that contains an external function reference, the type of the function is determined in the same manner as for variables and arrays. The type of an external function is specified implicitly by its name, explicitly in a FUNCTION statement, or explicitly in a type-statement. Note that an IMPLICIT statement within a function subprogram may affect the type of the external function specified in the subprogram. A symbolic name that identifies a main program, subroutine, common block, or block data subprogram has no data type. The mathematical and representation properties for each of the data types are specified in the following sections. For real, double precision, and integer data, the value zero is considered neither positive nor negative. The value of a signed zero is the same as the value of an unsigned zero. A constant is an arithmetic constant, logical constant, or character constant. The value of a constant does not change. Within an executable program, all constants that have the same form have the same value. The form of the string representing a constant specifies both its value and data type. A PARAMETER statement ( 8.6) allows a constant to be given a symbolic name. The symbolic name of a constant must not be used to form part of another constant. Blank characters occurring in a constant, except in a character constant, have no effect on the value of the constant. Integer, real, double precision, and complex constants are arithmetic constants. An unsigned constant is a constant without a leading sign. A signed constant is a constant with a leading plus or minus sign. An optionally signed constant is a constant that may be either signed or unsigned. Integer, real, and double precision constants may be optionally signed constants, except where specified otherwise. An integer datum is always an exact representation of an integer value. It may assume a positive, negative, or zero value. It may assume only an integral value. 
An integer datum has one numeric storage unit in a storage sequence. The form of an integer constant is an optional sign followed by a nonempty string of digits. The digit string is interpreted as a decimal number. A real datum is a processor approximation to the value of a real number. It may assume a positive, negative, or zero value. A real datum has one numeric storage unit in a storage sequence. The form of a basic real constant is an optional sign, an integer part, a decimal point, and a fractional part, in that order. Both the integer part and the fractional part are strings of digits; either of these parts may be omitted but not both. A basic real constant may be written with more digits than a processor will use to approximate the value of the constant. A basic real constant is interpreted as a decimal number. The form of a real exponent is the letter E followed by an optionally signed integer constant. A real exponent denotes a power of ten. The forms of a real constant are: 1. Basic real constant 2. Basic real constant followed by a real exponent 3. Integer constant followed by a real exponent The value of a real constant that contains a real exponent is the product of the constant that precedes the E and the power of ten indicated by the integer following the E. The integer constant part of form (3) may be written with more digits than a processor will use to approximate the value of the constant. A double precision datum is a processor approximation to the value of a real number. The precision, although not specified, must be greater than that of type real. A double precision datum may assume a positive, negative, or zero value. A double precision datum has two consecutive numeric storage units in a storage sequence. The form of a double precision exponent is the letter D followed by an optionally signed integer constant. A double precision exponent denotes a power of ten. 
Note that the form and interpretation of a double precision exponent are identical to those of a real exponent, except that the letter D is used instead of the letter E. The forms of a double precision constant are:
1. Basic real constant followed by a double precision exponent
2. Integer constant followed by a double precision exponent
The value of a double precision constant is the product of the constant that precedes the D and the power of ten indicated by the integer following the D. The integer constant part of form (2) may be written with more digits than a processor will use to approximate the value of the constant.
A complex datum is a processor approximation to the value of a complex number. The representation of a complex datum is in the form of an ordered pair of real data. The first of the pair represents the real part of the complex datum and the second represents the imaginary part. Each part has the same degree of approximation as for a real datum. A complex datum has two consecutive numeric storage units in a storage sequence; the first storage unit is the real part and the second storage unit is the imaginary part. The form of a complex constant is a left parenthesis followed by an ordered pair of real or integer constants separated by a comma, and followed by a right parenthesis. The first constant of the pair is the real part of the complex constant and the second is the imaginary part.
A logical datum may assume only the values true or false. A logical datum has one numeric storage unit in a storage sequence. The forms and values of a logical constant are:

    Form        Value
    .TRUE.      true
    .FALSE.     false

A character datum is a string of characters. The string may consist of any characters capable of representation in the processor. The blank character is valid and significant in a character datum. The length of a character datum is the number of characters in the string.
A character datum has one character storage unit in a storage sequence for each character in the string. Each character in the string has a character position that is numbered consecutively 1, 2, 3, etc. The number indicates the sequential position of a character in the string, beginning at the left and proceeding to the right. The form of a character constant is an apostrophe followed by a nonempty string of characters followed by an apostrophe. The string may consist of any characters capable of representation in the processor. Note that the delimiting apostrophes are not part of the datum represented by the constant. An apostrophe within the datum string is represented by two consecutive apostrophes with no intervening blanks. In a character constant, blanks embedded between the delimiting apostrophes are significant. The length of a character constant is the number of characters between the delimiting apostrophes, except that each pair of consecutive apostrophes counts as a single character. The delimiting apostrophes are not counted. The length of a character constant must be greater than zero.
This document was translated by troff2html v0.21 on August 16, 1995.
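The apostrophe-doubling rule for character constants is easy to get wrong. A tiny illustrative helper (mine, not part of the standard) that computes the length of a FORTRAN 77 character constant from its source form, applying the rule that each pair of consecutive apostrophes counts as a single character:

```python
def f77_char_length(source):
    """Length of a FORTRAN 77 character constant given its source form,
    e.g. "'DON''T'" has length 5, since '' counts as one apostrophe."""
    assert source[0] == "'" and source[-1] == "'"
    body = source[1:-1]
    # Each pair of consecutive apostrophes represents one character.
    return len(body.replace("''", "'"))

print(f77_char_length("'DON''T'"))  # 5: the datum is DON'T
print(f77_char_length("'ABC'"))     # 3
```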
{"url":"http://www.fortran.com/fortran/F77_std/rjcnf0001-sh-4.html","timestamp":"2014-04-18T15:39:47Z","content_type":null,"content_length":"11692","record_id":"<urn:uuid:1d5efc6a-53db-46a2-a75a-eb7a9c13936b>","cc-path":"CC-MAIN-2014-15/segments/1397609533957.14/warc/CC-MAIN-20140416005213-00028-ip-10-147-4-33.ec2.internal.warc.gz"}
0 And 1 Are Not Probabilities - Less Wrong
Comments (84)

hmm... I feel even more confident about the existence of probability-zero statements than I feel about the existence of probability-1 statements. Because not only do we have logical contradictions, but we also have incoherent statements (like Husserl's "the green is either"). Can one form subjective probabilities over the truth of "the green is either" at all? I don't think so, but I remember a some-months-ago suggestion of Robin's about "impossible possible worlds," which might also imply the ability to form probability estimates over incoherencies. (Why not incoherent worlds? One might ask.) So the idea is at least potentially on the table. And then it seems obvious that we will forever, across all space and time, have no evidence to support an incoherent proposition. That's as good an approximation of infinite lack of evidence as I can come up with. P("the green is either")=0?

If you assign 0 to logical contradictions, you should assign 1 to the negations of logical contradictions. (Particularly since your confidence in bivalence and the power of negation is what allowed you to doubt the truth of the contradiction in the first place.) So it's strange to say that you feel safer appealing to 0s than to 1s. For my part, I have a hard time convincing myself that there's simply no (epistemic) chance that Graham Priest is right. On the other hand, assigning any value but 1 to the sentence "All bachelors are bachelors" just seems perverse. It seems as though I could only get that sentence wrong if I misunderstand it. But what am I assigning a probability to, if not the truth of the sentence as I understand it?
Another way of saying this is that I feel queasy assigning a nonzero probability to "Not all bachelors are bachelors," (i.e., ¬(p → p)) even though I think it probably makes some sense to entertain as a vanishingly small possibility "All bachelors are non-bachelors" (i.e., p → ¬p, all bachelors are contradictory objects).

One answer would be that an incoherent proposition is not a proposition, and so doesn't have any probability (not even zero, if zero is a probability.) Another answer would be that there is some probability that you are wrong that the proposition is incoherent (you might be forgetting your knowledge of English), and therefore also some probability that "the green is either" is both coherent and true.

It's difficult to assign probability to incoherent statements, because since we can't mean anything by them, we can't assert a referent to the statement -- in that sense, the probability is indeterminate (additionally, one could easily imagine a language in which a statement such as "the green is either" has a perfectly coherent meaning -- and we can't say that's not what we meant, since we didn't mean anything). Recall also that each probability-zero statement implies a probability-one statement by its denial and vice versa, so one is equally capable of imagining them, if in a contrived way.

Putting this in a slightly more coherent way (I was having some trouble understanding the explanation, so I broke it down into layman's terms; it might make it more easily understandable): If I assign P(0) to "Green is either", then I assign P(1) to the statement "Green is not either". If you assign absolute certainty to any one statement you are, by definition, assigning absolute impossibility to all other possibilities.

j.edwards, I think your last sentence convinced me to withdraw the objection -- I can't very well assign a probability of 1 to ~"the green is either", can I? Good point, thanks.

that anecdote wasn't amusing at all. and it wasn't an anecdote. and it doesn't prove the point. all it shows is that a single person didn't know their 17 times tables off the top of their head. there's no reason to expect someone to be as confident that 51 is or is not prime as that 7 is or is not prime - and anyway, the point of the story should have been that, eventually, 7 might NOT be prime. which it's always going to be. i didn't get it.

Probabilities of 0 and 1 are perhaps more like the perfectly massless, perfectly inelastic rods we learn about in high school physics - they are useful as part of an idealized model which is often sufficient to accurately predict real-world events, but we know that they are idealizations that will never be seen in real life. However, I think we can assign the primeness of 7 a value of "so close to 1 that there's no point in worrying about it".

In stark contrast to this time last week, I now internally believe the title of this post. I did enjoy "something, somewhere, is having this thought," Paul, despite all its inherent messiness.

'Green is either' doesn't tell us much. As far as we know it's a nonsensical statement, but I think that makes it more believable than 'green is purple', which makes sense, but seems extremely wrong. You might as well try to assign a probability to 'flarg is nardle'.
I can demonstrate that green isn't purple, but not that green isn't either, nor that flarg isn't nardle.

Is there anything truer than '7 is prime'? What's the truest statement anyone can come up with? Can we definitely get no closer to 0 than 1, based on J Edwards & Paul, above?

I think you can still have probabilities sum to 1: probability 1 would be the theoretical limit of probability reaching infinite certitude. Just like you can integrate over the entire real line, i.e. -∞ to ∞, even though those numbers don't actually exist.

"i didn't get it." Easy: it's a demonstration of how you can never be certain that you haven't made an error even on the things you're really sure about. It's a cheap, dirty demonstration, but one nevertheless.

You seem to think probabilities of 0 and 1 are mysterious or contradictory when discussing randomness; they aren't. When you're talking about randomness, you need to define your support. That mere action gives you places where the probability is zero. For example: Can the time to run 100m ever be negative? No? Then P(t<0) = 0. And by extension, P(t>=0) = 1. No puzzle there. But your transformation to log-odds has some regularity conditions you're violating in those cases: the transform is only defined for probabilities in (0,1). But that doesn't mean log-odds or probabilities are flawed. Probabilities of 0 and 1 -- like log-odds of plus-and-minus infinity -- are just filling in the boundaries on the system you've created. Mathematically, you want to be able to handle limits; that means handling limits as a probability approaches 0 or 1. That's it. This shouldn't be some huge philosophical puzzle; it's merely the need to have any mathematical system you use be complete. Sir David Cox would be the first to tell you that.
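The regularity condition mentioned in the comment above is easy to see numerically: the log-odds transform log(p/(1-p)) is finite exactly on the open interval (0,1) and diverges at the endpoints. A small sketch:

```python
import math

def log_odds(p):
    """Log-odds (logit) of a probability p in the open interval (0, 1)."""
    return math.log(p / (1 - p))

for p in (0.5, 0.9, 0.99, 0.999999):
    print(round(log_odds(p), 2))  # 0.0, 2.2, 4.6, 13.82

# At the boundaries the transform is undefined: p = 0 gives log(0),
# and p = 1 divides by zero, matching the infinities discussed in the post.
```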
But that doesn't mean that 1 is a measure of probability. Infinity is valid as the limit of a function yielding real numbers, but infinity is not a real number. As for your example with the amount of time it takes to run a particular distance, I can't be certain that we won't find a region of space with strange temporal effects that allow you to take a walk and arrive at your starting point before you left. This would allow you to run a hundred meters in negative time, in at least one sense of the word. Getting that sort of speed from the runner's point of view would be stranger, but the Dark Lords of the Matrix could probably make it happen. Cumulant - can you state, with infinite certainty, that no-one will ever run faster than light? Well, it does seem like someone who travels back in time to reach the finish before he got there has... not actually followed the rules of the 100-meter dash. Another way to think about probabilities of 0 and 1 is in terms of code length. Shannon told us that if we know the probability distribution of a stream of symbols, then the optimal code length for a symbol X is: l(X) = -log p(X) If you consider that an event has zero probability, then there's no point in assigning a code to it (codespace is a conserved quantity, so if you want to get short codes you can't waste space on events that never happen). But if you think the event has zero probability, and then it happens, you've got a problem - system crash or something. Likewise, if you think an event has probability of one, there's no point in sending ANY bits. The receiver will also know that the event is certain, so he can just insert the symbol into the stream without being told anything (this could happen in a symbol stream where three As are always followed by a fourth). But again, if you think the event is certain and then it turns out not to be, you've got a problem: the receiver doesn't get the code you want to send. 
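The code-length identity quoted above can be made concrete. A minimal sketch, assuming base-2 logs so that lengths come out in bits:

```python
import math

def code_length_bits(p):
    """Shannon's optimal code length, in bits, for a symbol of probability p."""
    if p <= 0.0:
        return math.inf  # a "never happens" symbol gets no finite code at all
    if p >= 1.0:
        return 0.0       # a certain symbol needs no bits sent at all
    return -math.log2(p)

print(code_length_bits(0.5))   # 1.0 bit
print(code_length_bits(0.25))  # 2.0 bits
print(code_length_bits(1.0))   # 0.0 bits
```

The two boundary cases are precisely the failure modes described above: probability 0 means an infinitely long (i.e. nonexistent) code, and probability 1 means sending nothing, which crashes the moment the "impossible" or "certain" prediction turns out wrong.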
If you refuse to assign zero or unity probabilities to events, then you have a strong guarantee that you will always be able to encode the symbols that actually appear. You might not get good code lengths, but you'll be able to send your message. So Eliezer's stance can be interpreted as an insistence on making sure there is a code for every symbol sequence, regardless of whether that sequence appears to be impossible. But then, do you really want to build a binary transmitter that is prepared to handle not only sequences of 0 and 1, but also the occasional "zebrafish" and "Thursday" (imagine somehow fitting these into an electrical signal, or don't, because the whole point is that it can't be done)? Such a transmitter has enormously increased complexity to handle signals that, well... won't ever happen. I guess you could say the probability is low enough that the expected utility of dealing with it is not worth it. But what about the chance that a "zebrafish" in the launch codes will wipe out humanity? Surely that expected utility cannot be ignored? (Except it can!) From what I understood on reading the Wikipedia article on Bayesian probability and inferring from how he writes (and correct me if I'm wrong), Eliezer is talking about your "subjective probability." You are a being, have consciousness, and interpret input as information. Given a lot of this information, you've formed an idea that 7 is prime. You've also formed an idea that other people exist, and that the sky is blue, which also have a high subjective probability in your mind because you have a lot of direct information to sustain that belief. Moreover, if you've ever been wrong before, hopefully you've noticed that you have been wrong before. That's a little information that "you are sometimes wrong about things that you are very sure of". So, you might apply this information to your formula of your probability of the idea that "7 is prime", so you still end up with a high probability, but not 1. 
Now, you might not think that "you are sometimes wrong about things that you are sure of" about every single subject, such as primeness. But, what if you had the information that other humans, smart people, have at some point in the past, incorrectly understood the primeness of a number (the anecdote). You might state that "human beings are sometimes wrong about the primeness of a number," and "I am a human being." Again, if you include that information in your calculation of the probability that the idea that "7 is prime" is true, then you end up with a high probability, but not 1. (Oh, but what if you didn't make the statement "human beings are sometimes wrong about the primeness of a number", but instead, "this idiot is sometimes wrong about the primeness of a number, but I am never"? Well, you can. That's one big problem with Bayesian subjective probabilities. How do we generalize? How can we formalize it so that two people with the same information deterministically get the same probability? Logical (or objective epistemic) probability attempts to answer these questions.) So, you're right that it is just "a single person" getting it wrong, that his certainty was incorrect. But that's Eliezer's point. We are not supreme beings lording over all reality; we are humans who have memorized some information from the past and made some generalizations, including generalizations that sometimes our generalizations are wrong. I agree with cumulant. The mathematical subject of probability is based on measure theory, which loses a ton of convergence theorems if we exclude 0 and 1. We can agree that things that are not known a priori can't have probability 0 or 1, but I think we must also agree that "an impossible thing will happen soon" has probability 0, because it's a contradiction. An alternate universe in which the number 7 (in the same kind of number system as ours, etc.)
is prime is damn-near inconceivable, but an alternate universe in which impossible things are possible is purely absurd. If our mathematical reasoning is coherent enough for it to be meaningful to make probability assignments then certainly we are not so fundamentally flawed that what we consider tautologies could be false. If you are willing to accept that maybe 0 is 1, then you can't do any of your probability adjustments, or use Bayes' Theorem, or anything of the sort without having a (possibly unstated) caveat that probability theory might be complete nonsense. But what's the probability that probability theory is nonsense (i.e. false or inconsistent)? What does that even mean? We can only assign a probability if that makes sense, so conditioned on the sentence making sense, probability theory must be nonsense with probability 0, no? So averaged over all possible universes (those where probability theory makes sense, and those where it doesn't) the sentence "probability makes sense with probability 1" better approximates the truth value of probability making sense than "probability makes sense with probability p" for p<1, assuming the probability of probability making sense is >0. If it's not, it's still not worse, but what the hell are we even saying? Speaking of measure theory, what probability should we assign to a uniformly distributed random real number on the interval [0, 1] being rational? Something bigger than 0? Maybe in practice we would never hold a uniform distribution over [0, 1] but would assign greater probability to "special" numbers (like, say, 1/2). But regardless of our probability distribution, there will exist subsets of [0, 1] to which we must assign probability 0. The only way I can see around this is to refuse to talk about infinite (or at least uncountable) sets. Are there others? I suspect Eliezer would object to my post claiming that I'm confusing map and territory, but I don't think that's fair. 
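The measure-theory claim above — that a uniform draw from [0, 1] is rational with probability 0 — is exactly countable additivity at work: enumerate the rationals in [0, 1] as q_1, q_2, … and sum their individual (zero) probabilities:

```latex
P\bigl(\mathbb{Q}\cap[0,1]\bigr)
  = P\Bigl(\bigcup_{i=1}^{\infty}\{q_i\}\Bigr)
  = \sum_{i=1}^{\infty} P\bigl(\{q_i\}\bigr)
  = \sum_{i=1}^{\infty} 0
  = 0.
```

The union here is countable, so the additivity axiom applies; no uncountable analogue holds, which is why the whole interval can still carry probability 1 even though every single point carries probability 0.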
If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled "maybe this map doesn't make any sense at all". If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway. So is it really reasonable to claim that "the probability that probability makes sense is <1"? Measure theory gives a clear answer to this: it's 0. Which is fine. For all x, the probability that your rv will take the value x is 0. Actually the probability that your rv is computable is also 0. (Computable numbers are the largest countable class I know of.) What's false is the tempting statement that probability 0 events are impossible. It's only the converse that's true: impossible events have probability 0. There's another tempting statement that's false, namely the statement that if S is an arbitrary collection of disjoint events, the probability of one of them happening is the sum of the probabilities of each one happening. Instead, this only holds for countable sets S. This is part of the definition of a measure. If there's a map you're trying to use all over the place (and you do seem to), then I claim it makes no sense to put a little region on the map labelled "maybe this map doesn't make any sense at all". If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway. Janos, are you saying that it is in fact impossible that your map in fact doesn't make any sense? Because I do, indeed, have a little section of my map labelled "maybe this map doesn't make any sense at all", and every now and then, I think about it a little, because there are so many fundamental premises of which I am unsure even in their definitions. 
(E.g: "the universe exists", and "but why?") Just because this area of my map drops out of my everyday decision theory due to failure to generate coherent advice on preferences, does not mean it is absent from my map. "You must ignore" or rather "You should usually ignore" is decision theory, and probability theory should usually be firewalled off from preferences. Computable numbers are the largest countable class I know of. Either all countable sets are the same size anyway, or you can generate a larger set by saying "all computable reals plus the halting probability". How about computable with various oracles? What's false is the tempting statement that probability 0 events are impossible. It's only the converse that's true: impossible events have probability 0. If you cannot repose probability 1 in the statement "all events to which I assign probability 0 are impossible" you should apply a correction and stop reposing probability 0 to those events. Do you mean to say that all impossible events have probability 0, plus some more possible events also have probability 0? This makes no sense, especially as a justification for using "probability 0" in a meaningfully calibrated sense. To use "probability 0" without a finite expectation of being infinitely surprised, you must repose probability 1 in the belief that you use "probability 0" only for actually impossible events; but not necessarily believe that you assign probability 0 to every impossible event (satisfying both conditions implies logical omniscience). I should mention that I'm also an infinite set atheist. I can admit the possibility that probability doesn't work, but not have to do anything about it. If probability doesn't work and I can't make rational decisions, I can expect to be equally screwed no matter what I do, so it cancels out of the equation. The definable real numbers are a countable superset of the computable ones, I think. (I haven't studied this formally or extensively.) 
If you don't want to assume the existence of certain propositions, you're asking for a probability theory corresponding to a co-intuitionistic variant of minimal logic. (Co-intuitionistic logic is the logic of affirmatively false propositions, and is sometimes called Popperian logic.) This is a logic with false, or, and (but not truth), and an operation called co-implication, which I will write a <-- b. Take your event space L to be a distributive lattice (with ordering <), which does not necessarily have a top element, but does have dual relative pseudo-complements. The co-implication is characterized by: for all x in L, b < (a or x) if and only if (a <-- b) < x. Now, we take a probability function to be a function from elements of L to the reals, satisfying the following axioms:

1. P(false) = 0
2. if A < B then P(A) <= P(B)
3. P(A or B) + P(A and B) = P(A) + P(B)

There you go. Probability theory without certainty. This is not terribly satisfying, though, since Bayes's theorem stops working. It fails because conditional probabilities stop working -- they arise from a forced normalization that occurs when you try to construct a lattice homomorphism between an event space and a conditionalized event space. That is, in ordinary probability theory (where L is a Boolean algebra, and P(true) = 1), you can define a conditionalization space L|A as follows:

L|A = { X in L | X < A }
true' = A
false' = false
and' = and
or' = or
not'(X) = not(X) and A
P'(X) = P(X)/P(A)

with a lattice homomorphism X|A = X and A. Then, the probability of a conditionalized event P'(X|A) = P(X and A)/P(A), which is just what we're used to. Note that the definition of P' is forced by the fact that L|A must be a probability space. In the non-certain variant, there's no unique definition of P', so conditional probabilities are not well-defined.
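On an ordinary finite event space (with "false" as the empty event and < as set inclusion), the three axioms above can be checked mechanically. A small sketch with a made-up three-outcome distribution:

```python
from itertools import combinations

masses = {"x": 0.2, "y": 0.3, "z": 0.5}  # toy probability mass function

def P(event):
    """Probability of an event, i.e. a set of outcomes."""
    return sum(masses[o] for o in event)

# All events: the powerset of the outcome space, as frozensets
outcomes = list(masses)
events = [frozenset(c) for r in range(len(outcomes) + 1)
          for c in combinations(outcomes, r)]

# Axiom 1: P(false) = 0, with "false" the empty event
assert P(frozenset()) == 0
# Axiom 2: if A < B (inclusion) then P(A) <= P(B)
assert all(P(A) <= P(B) for A in events for B in events if A <= B)
# Axiom 3: modularity, P(A or B) + P(A and B) = P(A) + P(B)
assert all(abs(P(A | B) + P(A & B) - (P(A) + P(B))) < 1e-12
           for A in events for B in events)
print("axioms 1-3 hold on this toy event lattice")
```

The point of the construction above is that these axioms survive even when the lattice has no top element, i.e. no event of probability 1.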
To regain something like this for co-intuitionistic logic, we can switch to tracking degrees of disbelief, rather than degrees of belief. Say that:

1. D(false) = 1
2. for all A, D(A) > 0
3. if A < B then D(A) >= D(B)
4. D(A or B) + D(A and B) = D(A) + D(B)

This will give you the bounds you need to nail down a conditional disbelief function. I'll leave that as an exercise for the reader. Hi guys, you don't know me and I prefer to stay anonymous. I look at it backwards and get the very same result as Eliezer Y. What is total degeneracy? In practice, it is being totally impervious to updating, regardless of the magnitude of the information seen (even infinity). That can only be achieved by unit or null probabilities as priors. Bayesian updating never takes you there (posteriors). And no updating can take place from that situation. Anonymous If the map seems to make sense and you're still following it for everything, you'll have to ignore that region anyway. Just cos it's not a very nice place to visit, doesn't mean it ain't on the map. ;) "1, 2, and 3 are all integers, and so is -4. If you keep counting up, or keep counting down, you're bound to encounter a whole lot more integers. You will not, however, encounter anything called "positive infinity" or "negative infinity", so these are not integers." This bothered me; more to the point, it hit on some stuff I've been thinking about. I realize I don't have a very good way to precisely state what I mean by "finite" or "eventually". The above, for instance, basically says "if infinity is not an integer, then if I start at an integer and move an integer number of steps away from it, I will still be at an integer that's not infinity, therefore infinity isn't an integer". But if we allowed infinity to be considered an integer, then we allow an infinite number of steps... How about this: if N is a non-infinite integer, SN is N's successor, PN is N's predecessor, neither SN nor PN will be infinite.
Great, no matter where we start from, we can't reach an infinity in one step, so that seems to make this notion more solid. But... if N is an infinity, then neither SN nor PN (thinking about ordinals now, btw, instead of cardinals) will be finite. Doh. So the situation seems a bit symmetric here. This is really annoying to me. I have as of late been getting the notion that the notions of "finite" and "eventually" are so tied to the idea of mathematical induction that it's probably best to define the former in terms of the latter... i.e., the number of steps from A to B is finite if and only if induction arguments starting from A and going in the direction toward B actually validly prove the relevant proposition for B. This is a vague notion, but near as I can tell, it comes closest to what I actually think I mean when I say something like "finite" or "eventually reach in a finite number of steps"; i.e., finite values are exactly those critters on which mathematical induction arguments can be used. (Maybe this is a bad definition. I'm more stating it as a "here's my suspicion of what may be the best basis to really represent the concept".) Anyways, as far as 0,1 not being probabilities... While I agree that one shouldn't believe a proposition with probability 0 or 1, I'm not sure I'd consider them nonprobabilities. Perhaps "unreachable" probabilities instead. Disallowing stuff like sum-to-1 normalizations and so on would seem to require "unnatural" hoops to jump through to get around that. Unless, of course, someone has come up with a clean model without that. (If so, well, I'm curious too.) I'm not sure what an "infinite set atheist" is, but it seems from your post that you use different notions of probability than what I think of as standard modern measure theory, which surprises me. Utilitarian's example of a uniform r.v. on [0, 1] is perfect: it must take some value in [0, 1], but for all x it takes value x with probability 0.
Clearly you can't say that for all x it's impossible for the r.v. to take value x, because it must in fact take one of those values. But the probabilities are still 0. Pragmatically the way this comes out is that "probability 0" doesn't imply impossible. If you perform an experiment countably-infinitely many times with the probability of a certain outcome being 0 each time, the probability of ever getting that outcome is 0; in this sense you can say the outcome is almost impossible. However it's possible that each outcome individually is almost impossible, even though of course the experiment will have an outcome. You can object that such experiments are physically impossible e.g. because you can only actually measure/observe countably many outcomes. That's fine; that just means you can get by with only discrete measures. But such assumptions about the real world are not known a priori; I like usual measure theory better, and it seems to do quite a good job of encompassing what I would want to mean by "probability", certainly including the discrete probability spaces in which "probability 0" can safely be interpreted to mean "impossible". You're right, it's not that hard to come up with larger countable classes of reals than the computables; I just meant that all of the usual, "rolls-off-the-tip-of-your-tongue" classes seem to be subsets of the computables. But maybe Nick is right, and the definables are broader. I haven't studied this either. And yes, I also sometimes think about how assumptions I make about life and the perceptible universe could be wrong, but I do not do this much for mathematics that I've studied deeply enough, because I'm almost as convinced of its "truth" as I am of my own ability to reason, and I don't see the use in reasoning about what to do if I can't reason. This is doubly true if the statements I'm contemplating are nonsense unless the math works. I am curious as to why you asked Peter not to repeat his stunt. 
Also, I would really like to know how confident you are in your infinite set atheism and for that matter in your non-standard philosophy of mathematics attitudes in general. Regarding infinite set atheism: Is the set of "possible landing sites of a struck golf ball" finite or infinite? In other words, can you finitely parameterize locations in space? Physicists normally model "position" as n-tuples of real numbers in a coordinate system; if they were forced to model position discretely, what would happen? I can claim to see an infinite set each time I use a ruler... Doug S., I believe according to quantum mechanics the smallest unit of length is the Planck length and all distances must be finite multiples of it. I should mention that I'm also an infinite set atheist. You've mentioned this before, and I have always wondered: what does this mean? Does it mean that you don't believe there are any infinite sets? If so, then you have to believe that a mathematician who claims the contrary (and gives the standard proof) is making a mistake somewhere. What is it? Frankly, even if you actually are a finitist (which I find hard to imagine), it doesn't seem relevant to this discussion: every argument you have presented could equally well have been given by someone who accepts standard mathematics, including the existence of infinite sets. The nature of 0 & 1 as limit cases seems to be fascinating for the theorists. However, in terms of 'Overcoming Bias', shouldn't we be looking at more mundane conceptions of probability? EY's posts have drawn attention to the idea that the amount of information needed to add additional certainty to a proposition increases exponentially while the probability increases linearly. This says that in utilitarian terms, not many situations will warrant chasing the additional information above 99.9% certainty (outside technical implementations in nuclear physics, rocket science or whatever). 99.9% as a number is taken out of a hat.
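The exponential-information point can be illustrated numerically. Measuring log-odds in bits, each extra "nine" of certainty costs a roughly constant ~3.3 further bits of evidence, even though the probability itself barely moves:

```python
import math

def evidence_bits(p):
    """Log-odds in bits: the net evidence needed to reach probability p."""
    return math.log2(p / (1.0 - p))

for p in (0.9, 0.99, 0.999, 0.9999):
    print(p, round(evidence_bits(p), 2))
# roughly 3.17, 6.63, 9.96, 13.29 bits: constant extra cost per extra "nine"
```

This is the utilitarian tradeoff in miniature: the evidence bill grows without bound as p approaches 1, so at some point chasing further certainty stops paying.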
In human terms, when we say 'I'm 99.9% sure that 2+2 always =4', we're not talking about 1000 equivalent statements. We're talking about one statement, with a spatial representation of what '100% sure' means with respect to that statement, and 0.1% of that spatial representation allowed for 'niggling doubts', of the sort: what have I forgotten? What don't I know? What is inconceivable for me? The interesting question for 'overcoming bias' is: how do we make that tradeoff between seeking additional information on the one hand and accepting a limited degree of certainty on the other? As an example (cf. the Evil Lords of the Matrix), considering whether our minds are being controlled by magic mushrooms from Alpha Pictoris may someday increase the 'niggling doubt' range from 0.1% to 5%, but the evidence would have to be shoved in our faces pretty hard first. Doug S., I believe according to quantum mechanics the smallest unit of length is the Planck length and all distances must be finite multiples of it. Not in standard quantum mechanics. Certain of the many ~~theories~~ unsupported hypotheses of quantum gravity (such as Loop Quantum Gravity) might say something similar to this, but that doesn't abolish every infinite set in the framework. The total number of "places where infinity can happen" in modern models has tended to increase, rather than decrease, over the centuries, as models have gotten more complex. One can never prove that nature isn't "allergic to infinities" (the skeptic can always claim, "wait, but if we looked even closer or farther, maybe we would see a heretofore unobserved brick wall"), but this allergy is not something that has been empirically observed. I think Eliezer's "infinite set atheism" is a belief that infinite sets, although well-defined mathematically, do not exist in the "real world"; in other words, that any physical phenomenon that actually occurs can be described using a finite number of bits.
(This can include numbers with infinite decimal expansions, as long as they can be generated by a finitely long computer program. Therefore, using pi in equations is not prohibited, because you're using the symbol "pi" to represent the program, which is finite.) A consequence of "infinite set atheism" seems to be that the universe is a finite state machine (although one that is not necessarily deterministic). Am I understanding this properly? What do you mean by "infinite set atheism"? You are essentially stating that you don't believe in mathematical limits -- because that is one of the major consequences of infinite sets (or sequences). If you don't believe in those... well, you lose calculus, you lose the density of real numbers, you lose the need for or understanding of many events with probability 0 or 1, and you lose the point of Zeno's Paradox. -- Janos is spot on about measure zero not implying impossibility. What is the probability of a golf ball landing at any exact point? Zero. But it has to land somewhere, so no one point is impossible. Impossibility would mean absence from your sigma algebra. What's that you ask? Without making this painful, you need three things for probability: an idea of what constitutes "the space of everything", an idea of what constitutes possible events out of that space which we can confirm or deny, and an assignment of numbers to those events. (This is often LaTeX'ed as (\Omega, \mathcal{F}, P).) The conversation here seems to be confusing the filtration/sigma-algebra F with the numbers assigned to those events by P. Can we choose which we're talking about: events or numbers? What is the probability of a golf ball landing at any exact point? Zero. I don't know which is more painful: Eliezer's errors, or those of his detractors. Perhaps you could clarify what exactly is an infinite set atheist in a full post...or maybe it's only worth a comment.
Cumulant, I think the idea behind "infinite set atheism" is not that limits don't exist, but that infinities are acceptable only as limits approached in a specified way. On this view, limits are not a consequence of infinite sets, as you contend; rather, only the limit exists, and the infinite set or sequence is merely a sloppy way of thinking about the limit. Eliezer, I'll second Matthew's suggestion above that you write a post on infinite set atheism; it looks as if we don't understand you. I think I understand the motive for rejecting infinite sets (viz., that whenever you deal with infinities you get all sorts of ridiculously counterintuitive results--sums coming out different when you rearrange the terms, the Banach-Tarski paradox, &c., &c.), but I'm not sure you can give up infinite sets without also giving up the real numbers (as others have touched on above), which seems very…

Caledonian: Not wrong. Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals. Now say there is some probability density of landing spots; and, let's say no one spot is special in that it attracts golf balls more than points immediately nearby (i.e. our pdf is continuous and non-atomic). Right there, you need every point (as a singleton) to have measure 0. Go pick up Billingsley: measure 0 is not the same as impossible nor does it cause any problems. Take the field you're swinging at to be a plane. There are infinitely many points in that plane; that's just the density of the reals. And the location that the ball lands on will also be composed of infinitely many reals. Shall we compare the size of two infinite sets? I'd say that the ball is a sphere and consider the first point of impact (i.e. the tangency point of the plane to the sphere). Otherwise, you need to know a lot about the ball and the field where it lands. You can compare infinite sets. Take the sets A and B, A={1,2,3,...} and B={2,3,4,...}.
B is, by construction, a subset of A. There's your comparison; yet, both are infinite sets. What assumptions would you make for the golf ball and the field? (To keep things clear, can we define events and probabilities separately?) Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero. Probabilities in continuous space are measured on intervals. Basic calculus... I believe according to quantum mechanics the smallest unit of length is Planck length and all distances must be finite multiples of it. This is what I'm given to understand as well. Doesn't this take the teeth out of Zeno's paradox? Pragmatically the way this comes out is that "probability 0" doesn't imply impossible. Janos, would you agree that P=0 is a probability to the same degree that infinity is a number? Apologies for double post. Caledonian, every undergraduate who has ever taken a statistics class knows that the probability of any single point in a continuous distribution is zero. Gowder, everyone who's ever given the issue more than three-seconds'-thought knows that no statistical result ever involves a single point. Usually, if a die lands on edge we say it was a spoiled throw and do it over. Similarly if a Dark Lord writes 37 on the face that lands on top, we complain that the Dark Lord is spoiling our game and we don't count it. We count 6 possibilities for a 6-sided die, 5 possibilities for a 5-sided die, 2 possibilities for a 2-sided die, and if you have a die with just one face -- a spherical die -- what's the chance that face will come up? I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too. Well, that depends on your number system. For some purposes +infinity is a very useful value to have. 
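The point above — that continuous probabilities live on regions, not points — can be illustrated with a toy model. Assume, purely for illustration, a uniform landing density over a 100 m x 100 m field; then a disc around any point has positive probability that shrinks to 0 with its radius, while each single point has probability exactly 0:

```python
import math

FIELD_AREA = 100.0 * 100.0  # assumed 100 m x 100 m field, uniform density

def p_within(radius):
    """Probability of landing within `radius` metres of a fixed point."""
    return math.pi * radius ** 2 / FIELD_AREA

for r in (10.0, 1.0, 0.1, 0.01):
    print(r, p_within(r))
# The limit as r -> 0 is 0: each single point has measure zero,
# yet the ball certainly lands at some point.
```

This is exactly the "measure 0 is not the same as impossible" distinction: intervals and discs carry the probability; singletons do not.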
For instance if you consider the extended nonnegative reals (i.e. including +infinity) then every measurable nonnegative extended-real-valued function on a measure space actually has a well-defined extended-nonnegative-real-values integral. There are all kinds of mathematical structures where an infinity element (or many) is indispensable. It's a matter of context. The question of what is a "number" is I think very vague given how many interesting number-like notions mathematicians have come up with. But unquestionably "infinity" is not a natural number, or a real number, or a complex number. Probability theory, on the other hand, would have to change shape if we comfortably wanted to exclude 0 probabilities. What we now call measures would be wrong for the job. I don't know how it would look, but I find the standard description intuitively appealing enough that I don't think it should be changed. It's probably true that for a Bayesian inference engine of some sort, whose purpose is to find likelihoods of propositions given evidence, the "probabilities" it keeps track of shouldn't become 0 or 1. If there's a rich theory there focussing on how to practically do this stuff (and I bet there is, although I know nothing of it beyond Bayes' Theorem, which is a simple result) then ignoring the possibility of 0s and 1s makes sense there: for example you can use the log odds. But in general probability theory? No. I think it would be interesting to develop probability theory with no boundaries, with no 0 and 1. It works fine to do it the way it's done now, and the alternative might turn up something interesting too. You might want to check out Kosko's Fuzzy Thinking. I haven't gone any further into fuzzy logic, yet, but that sounds like something he discussed. Also, he claimed probability was a subset of fuzzy logic. I intend to follow that up, but there is only one of me, and I found out a long time ago that they can write it faster than I can read it. 
"On some golf courses, the fairway is readily accessible, and the sand traps are not. The green is either." Haha, very nice CGD. Shows how much those philosophers of language know about golf. :-) Although... hmm... interesting. I think that gives us a way to think about another probability 1 statement: statements that occupy the entire logical space. Example: "either there are probability 1 statements, or there are not probability 1 statements." That statement seems to be true with probability 1... Disallowing a symbol for "all events" breaks the definition of a probability space. It's probably easier to allow extended reals and break some field axioms than figure out how to do rigorous probability without a sigma-algebra. When re-working this into a book, you need to double check your conversions of log odds into decibels. By definition, decibels are calculated using log base 10, but some of your odds are natural logarithms, which confused the heck out of me when reading those paragraphs.

Probability 0.0001 = -40 decibels (This is the only correct one in this post; all "decibel" figures afterwards are listed as 10 * the natural logarithm of the odds.)
Probability 0.502 = 0.035 decibels
Probability 0.503 = 0.052 decibels
Probability 0.9999 = 40 decibels
Probability 0.99999 = 50 decibels

P.S. It'd be nice if you provided an RSS feed for the comments on a post, in addition to the RSS feed for the posts... I cannot begin to imagine where those numbers came from. Dangers of "Posted at 1:58 am", I guess. Fixed. Could you respond to Neel Krishnaswami's post above, and this one as well? Isn't the "1" above a probability? My intuition as a mathematician declares that nobody will ever develop an elegant mathematical formulation of probability theory that does not allow for statements that are logically impossible or certain, such as statements of the form p AND NOT p.
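The corrected decibel figures above can be verified with a short script ("decibels" here meaning 10 times the base-10 log of the odds):

```python
import math

def decibels(p):
    """Log-odds in decibels: 10 * log10(p / (1 - p))."""
    return 10.0 * math.log10(p / (1.0 - p))

for p in (0.0001, 0.502, 0.503, 0.9999, 0.99999):
    print(p, round(decibels(p), 3))
# -> -40.0, 0.035, 0.052, 40.0, 50.0, matching the corrected list
```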
And it is necessary, if the theory is to be isomorphic to the usual one, that these statements have probability 0 (if impossible) or 1 (if certain). However, I believe that it is quite reasonable to declare, as a condition demanded of any prior deemed rational, that only truly impossible or certain statements have those probabilities. I think that this gives you what you want. It's obvious that you can make this very demand when working with discrete probability distributions. It may not be obvious that you can make this demand when working with continuous probability distributions. Certainly the usual theory of these, based on so-called ‘measure spaces’ and ‘σ-algebras’ (I mention those in case they jog the reader's memory), cannot tolerate this requirement, at least not if anything at all similar to the usual examples of continuous distributions are allowed. One answer is that only discrete probability distributions apply to the real world, in which one can never make measurements with infinite precision or observe an infinite sequence of events. Even if the world has infinite size or is continuous to infinitesimal scales, you will never observe that, so you don't need to predict anything about that. However, even if you don't buy this argument, never fear! There is a mathematical theory of probability based on ‘pointless measure spaces’ and ‘abstract σ-algebras’. In this theory, it again makes perfect sense to demand that any prior must assign probability 0 or 1 only to impossible or certain events. The idea is that if something can never be observed, even in principle, then it is effectively impossible, and the abstract pointless theory allows one to treat it as such. Then I agree that one should require, as a condition on considering a prior to be rational, that it should assign probability 0 only to these impossible events and assign probability 1 only to their certain complements. 
PS: cumulant-nimbus above gives a brief summary of the usual approach to measure theory. The pointless approach that I advocate can be suggested from that as follows: taboo \Omega. Neel Krishnaswami's comment is implicitly using the pointless approach; his event space is cumulant-nimbus's \mathcal{F}, and he works entirely in terms of events.

As Perplexed points out, this is usually known as Cromwell's rule.

I'm kinda surprised that it's only been mentioned once in the comments (I only just discovered this site, really really great, by the way), and one from 2010 at that, but it seems to me that "a magical symbol to stand for "all possibilities I haven't considered"" does exist: the symbol "~" (i.e. not). Even the commenter who does mention it makes things complicated for himself: P(Q or ~Q)=1 is the simplest example of a proposition with probability 1. The proposition is of course a tautology. I do think (but I'm not sure) that that is the only sort of statement that receives probability 1. This is in sync with Eliezer's "amount of evidence" interpretation. A Bayesian update can only generate 1 if the initial proposition was of probability 1 or if the evidence was tautological (i.e. if Q then Q or, slightly less lame, if "Q or R" and "~R" then Q, where "Q or R" and "~R" are the evidence).

Skimming the comments, I saw two other proposals for "sure bets": the runner who clocked a negative time and the golf ball landing in a particular spot. That last one degenerated pretty quickly into a discussion about how many points there are in a field and on a ball. I think that's typical of such arguments: it depends on your model. Once you have your model specified the probability becomes 1 (or not) if the statement is (or isn't) tautological in the model. If the model isn't specified, then neither is the statement (what is a precise point?) and hence neither is the probability.
Ask the next man what the probability is of a runner clocking a negative time and he'll rightly respond: "Huh?" (unless he is a particularly obfuscatory know-it-all, in which case he might start blabbering about the speed of light. But then he too makes a claim because he can ascribe meaning to the question; that is, he picks his model). So these are also tautological examples. I think Eliezer's claims hold up pretty well for propositions that aren't tautological and hence are empirical in nature: they require evidence, and only tautological evidence will suffice for certainty.

About the problem of inserting 0's in certain standard theorems: I don't see a problem with Bayes' theorem (I'm curious about other examples). Dividing by 0 is not defined, so the probability of it raining when hell freezes over is not defined. That seems like a satisfactory arrangement.

Thanks for the analysis, MathijsJ! It made perfect sense and resolved most of my objections to the article. I was willing to accept that we cannot reach absolute certainty by accumulating evidence, but I also came up with multiple logical statements that undeniably seemed to have probability 1. Reading your post, I realized that my examples were all tautologies, and that your suggestion to allow certainty only for tautologies resolved the discrepancy.

The Wikipedia article timtyler linked to seems to support this: "Cromwell's rule [...] states that one should avoid using prior probabilities of 0 or 1, except when applied to statements that are logically true or false." This matches your analysis - you can only be certain of tautologies. Also, your discussion of models neatly resolves the distinction between, say, a mathematically-defined die (which can be certain to end up showing an integer between 1 and 6) and a real-world die (which cannot quite be known for sure to have exactly six stable states).
Eliezer makes his position pretty clear: "So I propose that it makes sense to say that 1 and 0 are not in the probabilities; just as negative and positive infinity, which do not obey the field axioms, are not in the real numbers." It's true - you cannot ever reach a probability of 1 if you start at 0.5 and accumulate evidence, just as you cannot reach infinity if you start at 0 and add integer values. And the inverse is true, too - you cannot accumulate evidence against a tautology and bring its probability down to anything less than 1. But this doesn't mean a probability of 1 is an incoherent concept or anything. Eliezer: if you're going to say that 0 and 1 are not probabilities, you need to come up with a new term for them. They haven't gone away completely just because we can't reach them. Edit a year and a half later: I agree with the article as written, partially as a result of reading How to Convince Me That 2 + 2 = 3, and partially as a result of concluding that "tautologies that have probability 1 but no bearing on reality" is a useless concept, and that therefore, "probability 1" is a useless concept. Jaynes avoids P(A|B) for "probability of A given evidence B" and P(B) for "probability of B", preferring P(A|BX) and P(B|X) where X is one's background knowledge. This and the above leads naturally to the question of ~X: the situation in which one's "background knowledge" is false. Assume that background knowledge X is the conjunction of a finite number of propositions. ~X is true if any of these propositions is false. If we can factor X into YZ where Y is the portion we suspect of being false — that is, if we can isolate for testing a portion of those beliefs we previously treated as "background knowledge" — then we can ask about P(A|BYZ) and P(A|B·~Y·Z). For any state of information X, we have P(A or not A | X) = 1 and P(A and not A | X) = 0. We have to have 0 and 1 as probabilities for probability theory even to work. 
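The point exchanged above, that accumulating evidence can push a probability arbitrarily close to 1 without ever reaching it, while a prior of exactly 0 or 1 is immovable, is easy to check directly. A bare-bones sketch of a Bayesian update (illustrative code, not from the thread):

```python
def bayes_update(prior, likelihood_ratio):
    """Posterior probability of H after evidence E, given the prior P(H)
    and the likelihood ratio P(E|H) / P(E|~H)."""
    numerator = prior * likelihood_ratio
    return numerator / (numerator + (1.0 - prior))

# Starting from 0.5, repeated 100:1 evidence approaches 1 without reaching it:
p = 0.5
for _ in range(3):
    p = bayes_update(p, 100.0)
print(p)  # approximately 0.999999, still strictly less than 1

# But priors of exactly 0 or 1 are absorbing: no evidence moves them.
print(bayes_update(1.0, 0.001))   # 1.0
print(bayes_update(0.0, 1000.0))  # 0.0
```

This is exactly the asymmetry the article is pointing at: 0 and 1 act like fixed points of the update rule, the way positive and negative infinity act under addition.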
I think you're taking a reasonable idea -- that P(A | X) should be neither 0 nor 1 when A is a statement about the concrete physical world -- and trying to apply it beyond its applicable domain.

Consider the set of all possible hypotheses. This is a countable set, assuming I express hypotheses in natural language. It is potentially infinite as well, though in practice a finite mind cannot accommodate infinitely long hypotheses. To each hypothesis, I can try to assign a probability, on the basis of available evidence. These probabilities will be between zero and one. What is the probability that a rational mind will assign at least one hypothesis the status of absolute certainty? Either this is one (there is definitely such a hypothesis), or zero (there is definitely not such a hypothesis, which cannot be, because the hypothesis "there is definitely not such a hypothesis" is then a counterexample), or somewhere in between (there may be, somewhere, a hypothesis that a rational mind would regard as being absolutely certain). So I cannot accept your hypothesis that there does not exist, anywhere, ever, a hypothesis that I should regard as being absolutely certain.

Self-referential hypotheses do not always map to truth values, and "a rational mind will assign at least one hypothesis the status of absolute certainty" is self-referential. The contradiction you've encountered arises from using a statement isomorphic to "this statement is false" and requiring it to have a truth value, not from a problem with excluding 0 and 1 as probabilities.

Yes, 0 and 1 are not probabilities. They're truth or falseness values. It's necessary to make a third 'truth value' for things that are unprovable, and possibly a fourth for things that are

Digging up an old thread here, but an interesting point I want to bring up: a friend of mine claims that he internally assigns probability 1 (i.e. an undisprovable belief) only to one statement: that the universe is coherent.
Because if not, then mnergarblewtf. Is it reasonable to say that even though no statement can actually have probability 1 if you're a true Bayesian, it's reasonable to internally establish an axiom which, if negated, would just make the universe completely stupid and not worth living in any more?

No, it's not. It's the same fundamental mistake that a lot of religious rhetoric about "faith" and "meaning" is founded on: that wanting something to be true counts as evidence that it is true. There's no reason to think that the universe depends for any of its properties on whether someone finds it stupid or not, or worth living in. I'd also suggest you try to draw your friend out a bit on what it means exactly for the universe to be "coherent." Can that notion be expressed formally? What would we expect to see if we lived in an incoherent universe? Obviously, I'm dubious that the "coherence" of the universe is in any proper sense a philosophical or scientific idea -- it sounds a lot more like an aesthetic one.

I think he just means "coherent" as "one which we can actually model based on our observations", i.e. one in which this whole exercise (rationality) makes any sense. He expects the universe to be incoherent with probability zero, and doesn't think there would be any sensible observations if this were the case (or any observation being possible if this were the case).

ETA: Merriam-Webster definition of COHERENT: 1 a : logically or aesthetically ordered or integrated : consistent <coherent style> <a coherent argument> b : having clarity or intelligibility : understandable <a coherent person> <a coherent

So, understandable and consistent: a universe which philosophy, mathematics and science can apply to in any meaningful way.

A charitable paraphrase of "The universe is coherent" could be a statement of the universal validity of non-contradiction: For every p, not (p and not p).
However, given the existence of paraconsistent logic and philosophers who take dialetheism seriously, I cannot assign probability 1 to the claim that no aspect of the universe requires a contradiction in its description. I would go even further to say that I am quite more certain of many other claims (such as "1+1=2" and "2+2=4") than of such general and abstract propositions as "the universe is coherent" or even "there are no true contradictions".

I don't think he goes quite that far - he assigns no statements probability 0 or 1 within our own logic system, even (P and ¬P), because he believes it to be possible (though not very likely) that some other logic system might supersede our own. His belief is that it is not possible for ALL systems of logic to be incorrect, i.e. that (it is impossible to reason correctly about the universe) is necessarily false.

There's a lot of logic to that. For extremely unlikely possibilities you can often get away with setting their probability to 0 to make the calculations a lot simpler. For possibilities where predicted utility is independent of your actions (like "reality is just completely random") it can also be worthwhile setting their probability to 0 (i.e. ignoring them), since they're approximately a constant term in expected utility. These are good ways of approximating actual expected utility so you can still mostly make the right decisions, which bounded rationality requires.

What is P(A|A)?

What do you mean by "|A"? It's well-defined in mathematics, sure, but in real life, surely the furthest you can go is "|experience/perception of evidence for A". Also, there's the probability that the particular version of logic you're using is wrong.

What do you mean by "|A"? It's well-defined in mathematics, sure, but in real life, surely the furthest you can go is "|experience/perception of evidence for A".

How far you can go depends on what you mean by "go".
It's perfectly possible to calculate, say, P(I see the coin come up heads | the coin is flipped once, it is fair, and I see the outcome), and actually much more difficult to calculate P(I see the coin come up heads | I have experience/perception of evidence for the facts that the coin is flipped once, it is fair, and I see the outcome).

"I see" is what I meant by perception/experience of evidence. Whenever I "see" something, there's always a non-zero chance of my brain deceiving me. The only thing you can really have to base your decisions on is P(I see the coin come up heads | I see/know the coin is flipped once, I know it is fair, and I see the outcome). P(the coin comes up heads | the coin is flipped once, it is fair and I know the outcome) is possible and easy to calculate, but not completely accurate to the world we live in.

"The ("Bayesian") framework explored in these essays replaces the two Cartesian options, affirmation and denial, by a continuum of judgmental probabilities in the interval from 0 to 1, endpoints included, or -- what comes to the same thing -- a continuum of judgmental odds in the interval from 0 to infinity, endpoints included. Zero and 1 are probabilities no less than 1/2 and 99/100 are. Probability 1 corresponds to infinite odds, 1:0. That's a reason for thinking in terms of odds: to remember how momentous it may be to assign probability 1 to a hypothesis." Richard Jeffrey, "Probability and the art of judgement".

I leave it as an exercise to correctly state the relationships between Eliezer's article, the Jeffrey quote, and the value of P(A|A). (Note: Jeffrey is not to be confused with Jeffreys, although both were Bayesian probability theorists.)

Interesting Log-Odds paper by Brian Lee and Jacob Sanders, November 2011. "When you work in log odds, the distance between any two degrees of uncertainty equals the amount of evidence you would need to go from one to the other.
That is, the log odds gives us a natural measure of spacing among degrees of confidence." That observation is so useful and intuition-friendly it probably deserves its own blog post, and a prominent place in your book.

Forgive me if this sounds condescending, but isn't saying "0 and 1 are not probabilities because they won't let you update your knowledge" basically the same as saying "you can't know something because knowing makes you unable to learn"?

If we assign tautologies as having probability 1, then anything reducible to a tautology should have probability 1 (and similarly, all contradictions and things reducible to contradictions should have probability 0). For any arbitrarily large N, if you put 2 apples next to 2 apples and repeat the test N times, you'll get 4 apples N out of N times, no less (discounting molecular breakdowns in the apples or other possible interferences).

You shouldn't assign tautologies probability 1 either, because your notion of what a tautology is might be a hallucination.

This confuses object level and meta level. In probability theory, P(-A|A) = 0 and P(A|A) = 1, however uncertain you may be about Cox's theorem, or about whether you are actually thinking about the same A each time it appears in those formulas. No-one, as far as I know, has ever constructed a theory of probability in which these are assigned anything else but 0 and 1. That is not to say that it cannot be done, only that it has not been done. Until that is done, 0 and 1 are probabilities.
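The decibel convention used earlier in the thread (10 times the base-10 logarithm of the odds) is easy to verify; a small sketch:

```python
import math

def prob_to_decibels(p):
    """Log odds in decibels: 10 * log10(p / (1 - p)).
    Exact 0 and 1 have no finite log odds, which is the point of the post."""
    if not 0.0 < p < 1.0:
        raise ValueError("0 and 1 correspond to infinite log odds")
    return 10.0 * math.log10(p / (1.0 - p))

# Reproducing the corrected figures from the thread:
print(round(prob_to_decibels(0.0001), 3))  # -40.0
print(round(prob_to_decibels(0.502), 3))   # 0.035
print(round(prob_to_decibels(0.503), 3))   # 0.052
print(round(prob_to_decibels(0.9999), 3))  # 40.0
```

The symmetry around 0 dB at p = 0.5 is what makes log odds a natural evidence scale: each independent piece of evidence adds a fixed number of decibels, and certainty sits infinitely far away in either direction.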
The title of the article is a rhetorical flourish to convey the idea elaborated in its body: that to assert a probability of 0 or 1, as a measure of belief, is to assert that no possible evidence could update that belief; that 0 and 1 are probabilities you should not find yourself assigning to matters about which there could be any real dispute; and that odds ratios or their logarithms are a better concept when dealing with practical matters associated with very low or very high probabilities.

There is a very large difference between saying that the probability of winning a lottery is tiny and saying that it cannot happen at all; with enough participants it is almost certain to happen to someone. That difference is made clear by the log-odds scale, which puts the chance of a lottery ticket at 60 or more decibels below zero, not infinitely far below. In a world with 7 billion people, billion-to-1 chances happen every day.

As an example of even tinier probabilities which are still detectably different from zero, consider a typical computer. A billion transistors in its CPU, clocked a billion times a second, running for a conveniently round length of time, a million seconds, which is about 12 days. Computers these days can easily do that without a single hardware error, which means that for every one of a million billion billion switching events, a transistor opened or closed exactly as designed. A million billion billion is about 1.5 times Avogadro's number. The corresponding log-odds is -240 decibels. And yet hardware glitches can still happen. And P(A|A) is still 1, not any finite number of decibels.

So you are saying that the statement "0 and 1 are not probabilities" has a probability of 1?
Visual Linear Algebra: with Maple and Mathematica Tutorials

From the back cover: Featuring a unique blend of interactive computer tutorials and student-centered text, Herman and Pepe’s Visual Linear Algebra offers an innovative new way to learn linear algebra. This text and accompanying CD work together to help students achieve a thorough understanding of core concepts and the skills they need to apply them. With Visual Linear Algebra, students can:

* Become actively engaged in the material. The exercises, demonstrations, explorations, visualizations, and animations in the tutorials stimulate student interest, encourage students to think about mathematics, and help them check their comprehension.
* Build a strong geometric understanding. The authors use geometry extensively to help students develop an intuitive understanding of the concepts of linear algebra.
* Learn the language of linear algebra. The authors’ innovative “Can You Speak Linear Algebra?” exercises help students use mathematical terminology correctly in their written work.
* Work through interesting applications. Ten fascinating applications, each thoroughly developed in its own tutorial, allow readers to engage in substantial activities that yield worthwhile results.
* Develop an intuitive grasp of the concepts. Visual Linear Algebra progresses from the concrete and experiential to the abstract and theoretical, to help students develop confidence in their understanding and make the transition to higher mathematics.

http://www.amazon.com/Visual-Linear-Algebra-Eugene-Herman/dp/0471682993/wolframscienceco
CFD Modeling of Corrugated Flexible Pipe

The flexible metal pipe has been used in smaller diameters for more than 30 years for all kinds of cryogenic Liquefied Natural Gas (LNG) transfer applications (Refs. [1,3]). Today these LNG loading systems have evolved into complex systems, which have to respect increasingly stringent rules and standards while continuing to maintain high levels of safety and availability. One of the main problems in these systems is to predict the internal turbulent flow behavior, and hence the associated pressure drop, in the corrugated configuration of flexible pipes. Metallic corrugated pipes are well-known structures that can withstand tensile and internal pressure loads and perform well from fatigue and heat transfer standpoints. However, series of corrugations can induce complex and undesirable flow behavior in the pipes. The wavy configuration of the corrugations promotes turbulence and therefore improves heat transfer. From both design and operational standpoints, LNG transfer from ship to ship is a relatively new application of this well-known technology (Figure 1(a)). The basic design of the LNG transfer pipe is illustrated in Figure 1(b).

Figure 1: LNG transfer applications. (a) Offshore LNG transfer system. (b) Common design for LNG flexible pipe (Ref. [2]).

The objective of this case study is to present CFD modeling of fully developed turbulent flow through a flexible corrugated pipe and to investigate the pressure drop reduction obtained by introducing liner materials. The reduction in cost and complexity of developing a robust cryogenic liner or corrugation filler, plus eventual certifications, would be significant and needs to be worth the improvement (decrease) in pressure drop. To estimate the variation of the pressure in the corrugations, we do not model the phase change and bubble cavitation but accurately evaluate the pressure drop along the pipe.
The pressure drop estimate can be used to deduce the upstream pressure that must be imposed so that the pressure everywhere downstream stays above the phase-change pressure. This work also aims to establish a framework to be used in large-scale numerical simulations of the offshore transfer of cryogenic fluids.

A 3-D CFD approach is considered more appropriate than a 2-D axisymmetric one, since the wavy corrugation profiles generate a great deal of internal turbulent structure at high Reynolds numbers, Re > 10 million. Three geometries of the bellows' (corrugation) depth are considered to determine the potential value of a cryogenic liner, corrugation filler or geometric variations for the 16" pipe. We consider a 3D flow domain of length L = 6D, matching earlier work on direct numerical simulations of fully developed pipe flow. For the parametric design study, we select three configurations with varying depths A* (A/ID): A*=0.06047 (base), A*=0.01583 (liner1), and A*=0.00798 (liner2), where A denotes the corrugation depth and ID is the inner diameter of the pipe.

The turbulence level is typically high due to the corrugations, and turbulence modeling is critical for accurate predictions. To model the steady effects of the turbulence on the mean flow field, we employ the Spalart-Allmaras Reynolds-Averaged Navier-Stokes (RANS) model. For unsteady simulations, we employ Delayed Detached Eddy Simulation (DDES), a hybrid of RANS with Large Eddy Simulation (LES). In the LES, based on dynamic subgrid-scale estimation, an attempt is made to capture the large-scale unsteady motions which carry the bulk of the mass and momentum in a flow, while the near-wall turbulence behavior is treated with a wall function. In the DDES model, we resolve the large eddies that have the biggest effect on the wall shear stress and use the RANS equations to describe the flow near the wall. This was done not only to economize on mesh size, but also because most pipes have relatively rough walls.
Wall functions reduce mesh size by providing an integrated relationship between the wall and the logarithmic region of the boundary layer. To simulate the large length of corrugated pipe with fully developed flow, periodic conditions are applied between the outlet (exit) and inlet (entrance) of the domain.

Figure 2: Streamwise variation of velocity magnitude contours in the corrugated pipe at flow rate Q=3333 m3/h: (a) RANS model (b) Delayed-DES model.

Figure 2(a) shows the contours of velocity magnitude using the RANS model at a Reynolds number of Re=9.38E6 for the base model of the corrugated pipe. The fully developed, time-averaged steady flow behavior can be observed in the figure. As expected from the RANS model, there are no physical unsteady motions in the velocity field. Figure 2(b) shows the contours of streamwise velocity at the cross section of the corrugated pipe with the DDES model. The 3D turbulence structures and unsteadiness in the flow can clearly be inferred from the image.

Figure 3: (a) Instantaneous velocity magnitude contours at the cross-sectional planes for flow rate Q=3333 m3/h (Re = 9.38E6) (b) Iso-surface of vorticity variable (Q-criterion) colored by velocity.

Figure 3(a) shows the contours of cross-stream velocity magnitude at three cross-section planes of the corrugated pipe. Significant circumferential variations in the velocity magnitude can be seen in the figure. These local variations are coupled with vorticity, the curl (local rotation) of the velocity field. Figure 3(b) shows complex 3D turbulent structures of low-speed streaks and in-plane streamwise vortices.

Figure 4(a) shows the variation of the friction coefficient over the range of Reynolds numbers for the three configurations of varying depths and for the smooth pipe. The friction factor was determined by evaluating the pressure gradient along the pipe from the integrated pressure values.
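The quoted Reynolds number can be sanity-checked from the flow rate and pipe diameter. The fluid property values below are assumptions chosen for illustration (roughly representative of LNG near -160 °C); the actual values used in the study are not stated in this summary:

```python
import math

# Assumed illustrative LNG properties, not taken from the original study.
rho = 450.0          # density, kg/m^3 (assumption)
mu = 1.4e-4          # dynamic viscosity, Pa*s (assumption)

ID = 16 * 0.0254     # 16-inch inner diameter, m
Q = 3333.0 / 3600.0  # flow rate of 3333 m^3/h expressed in m^3/s

area = math.pi * ID ** 2 / 4.0
V = Q / area              # bulk velocity, m/s (about 7.1 m/s)
Re = rho * V * ID / mu    # Reynolds number

print(f"V = {V:.2f} m/s, Re = {Re:.3g}")
```

With these assumed properties the estimate lands on the order of 1e7, close to the quoted Re = 9.38E6, which is a useful consistency check before trusting the CFD setup.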
For the baseline case, the friction coefficient is consistently larger than for the liner1 (1/4 the depth of the base) and liner2 (1/8 the depth of the base) geometries. Notably, the wall shear stress of the liner2 model converges towards the values corresponding to the smooth pipe. This implies that, by introducing liner materials, the coefficient of friction can be reduced by 80% with respect to the deeper metallic hose configuration. Due to complex flow behavior and recirculation in the base and liner1 models, the friction factor changes significantly with Reynolds number. Figure 4(a) also presents the roughness-theory predictions, shown as lines. For the smooth pipe, the CFD results and the theory match excellently. However, for the corrugated shapes the roughness theory differs by up to 24%.

Figure 4(b) shows a summary of the friction factor computed from the pressure drop for the steady RANS and the DDES on the same meshes. A reasonable consistency in the predictions of the integrated pressure drop can be seen in the figure. By tuning the grid distributions, an improved match between RANS and DDES may be obtained.

For the base and liner1 geometries at Re ~ 10M, an inflectional behavior in the pressure drop and wall shear stress has been observed in the RANS and DDES results. This dip in the frictional drag may be attributed to a sudden shift in the point of separation for the base and liner1 geometries. In this range, the laminar viscous sublayer portion of the boundary layer may become unstable and undergo transition to turbulence. For values of Re > 10M, the separation point slowly moves upstream as the Reynolds number is increased, resulting in an increase of the friction factor. For the liner2 and smooth-pipe geometries, the shape is streamlined and the point of separation and the transition of the boundary layer remain largely unchanged.
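The "roughness theory" lines in Figure 4(a) are presumably a Colebrook-type friction correlation; the exact correlation used is not stated here, so as an illustration this sketch uses the Haaland explicit approximation to Colebrook-White, crudely treating the nondimensional corrugation depth A* as an equivalent relative roughness:

```python
import math

def haaland_friction_factor(Re, rel_roughness=0.0):
    """Darcy friction factor from the Haaland explicit approximation to the
    Colebrook-White equation:
        1/sqrt(f) = -1.8 * log10((eps/D / 3.7)**1.11 + 6.9/Re)."""
    inv_sqrt_f = -1.8 * math.log10((rel_roughness / 3.7) ** 1.11 + 6.9 / Re)
    return 1.0 / inv_sqrt_f ** 2

Re = 9.38e6
print(haaland_friction_factor(Re))           # smooth pipe: roughly 0.008
print(haaland_friction_factor(Re, 0.00798))  # liner2 depth treated as roughness
print(haaland_friction_factor(Re, 0.06047))  # base depth treated as roughness
```

Treating the corrugation depth as sand-grain roughness is a rough assumption, consistent with the article's observation that roughness theory deviates from the CFD results by up to 24% for the corrugated shapes while matching the smooth pipe well.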
Figure 4: (a) Variation of friction coefficients with Reynolds number and comparison with the theory (b) Variation of friction coefficients for the range of Reynolds number for the RANS and DDES models (c) Comparison of the CFD results for A*=0.0604 of 16" ID pipe with the water test (Ref. [1]).

Figure 4(c) shows the comparison of the CFD values with the experimental test done with water in a 10.5" ID pipe (Ref. [3]). The friction factors are compared with respect to the non-dimensional dynamic-similarity parameter, the Reynolds number. The depth and shape of the corrugation profiles differ marginally between the 16" ID pipe and the 10.5" pipe. A reasonable agreement between the CFD and experimental values can be seen.

In corrugated pipe applications, flow physics (e.g., recirculation, separation, mean-flow three-dimensionality, streamline curvature, flow acceleration) and geometry play an important role. In this study, we showed that CFD modeling using AcuSolve can offer an accurate and powerful predictive tool for estimating the macroscopic pressure drop and the complex flow phenomena in the corrugations. The 3D steady RANS and DDES models available in AcuSolve provided consistent estimates of the pressure drop and friction factor for varying flow rates. Significant 3D turbulence effects are found for the pipe geometry with circular corrugations, suggested by both qualitative features and quantitative information. A cryogenic-flexible-pipe-based LNG transfer system seems to be a good candidate for CFD modeling, and for qualifying the pipe system against LNG industry requirements. The reader may wish to consult Ref. [4] for further details.

[1] Framo Engineering AS Report, "CFD Calculations of Corrugated Flexible Pipe," 4577-0313-D, 2006.
[2] http://www.technip.com/pdf/OffshoreLNG.pdf
[3] Frohne, C., Harten, F., Schippl, K., Steen, K.E., Haakonsen, R., Jorgen, E. and Høvik, J., "Innovative Pipe System for Offshore LNG Transfer," OTC 19239, 2008.
[4] Jaiman, R., Oakley, O. Jr., and Adkins, D., "CFD Modeling of Corrugated Flexible Pipe," OMAE2010-20509 (submitted).
How Flies Fly: Kappatau Space Curves

by Rudy Rucker
Department of Mathematics and Computer Science, San Jose State University, San Jose CA 95192
Copyright (C) Rudy Rucker 1999
Appeared in David Wolfe & Tom Rodgers, eds., Puzzlers' Tribute: A Feast for the Mind, A.K. Peters, Natick, MA, 2002

[Note that this paper is also available online in Romanian, translated by Delia Nastase with the support of Azoft, Inc. In addition the paper is online in German by Alexey Gnatuk, in Polish by Olga Babenko, and in Ukrainian by Agnessa Petrova. Thanks to all!]

It's interesting to watch flies buzz around. They trace out curves in space that are marvelously three-dimensional. Birds fly along space curves too, but their airy swoops are not nearly so bent and twisted as are the paths of flies. Is there a mathematical language for talking about the shapes of curves in space? Sure there is. Math is the science of form, and mathematicians are always studying nature for new forms to talk about.

Historically, space curves were first discussed by the mathematician Alexis-Claude Clairaut in a paper called "Recherche sur les Courbes a Double Courbure," published in 1731 when Clairaut was eighteen [1]. Clairaut is said to have been an attractive, engaging man; he was a popular figure in eighteenth-century Paris society. In speaking of "double curvature," Clairaut meant that a path through three-dimensional space can warp itself in two independent ways; he thought of a curve in terms of its shadow projections onto, say, the floor and a wall. In discussing the bending of the planar, "shadow" curves, Clairaut drew on recent work by the incomparable Isaac Newton. Newton's mathematical curvature measures a curve's tendency to bend away from being a straight line. The more the curve bends, the greater is the absolute value of its curvature.
From the viewpoint of a point moving along the curve, the curvature is said to be positive when the curve bends to the left, and negative when the curve bends to the right. The size of the curvature is determined by the principle that a circle of radius R should have curvature of 1/R. The smaller the radius, the greater the curvature. Figure 1 shows some examples of circular arcs.

Figure 1: Curvature along circular arcs in the plane.

We often represent a curve in the plane by an equation involving x and y coordinates. Most calculus students remember a brief, nasty encounter with Newton's formula for the curvature of a curve; the formula uses fractional powers and the first and second derivatives of y with respect to x. Fortunately, there is no necessity for us to trundle out this cruel, ancient idol. Instead we think of curvature as a primitive notion and express the curve in a more natural way.

The idea is that instead of talking about positions relative to an arbitrary x axis and y axis, we think of a curve as being a bent number-line by itself. The curve is marked off in units of "arclength", where arclength is the distance measured along the curve, just as if the curve were a piece of rope that you could stretch out next to a ruler. In this context, the most natural way to describe a plane curve is by an equation that gives the curvature directly as a function of arclength, an equation of the form kappa = f(s), where s stands for arclength and kappa is the commonly used symbol for curvature.

Figure 2 shows two famous plane curves which happen to have simple expressions for curvature as a function of arclength. The catenary curve is the shape assumed by a chain (or bridge cable) suspended from two points, while the logarithmic spiral is a form very popular among our friends the molluscs.

Figure 2: The catenary and the logarithmic spiral expressed by natural equations, with curvature kappa a function of arclength s.
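One way to see that a natural equation kappa = f(s) really does pin down a plane curve is to integrate it numerically: the tangent angle theta satisfies d(theta)/ds = kappa(s), and the position follows (dx/ds, dy/ds) = (cos theta, sin theta). Here is a minimal Python sketch of that idea (my own illustration; the function name is invented):

```python
import math

def plane_curve(kappa, ds=0.001, steps=6283):
    """Trace the plane curve with natural equation kappa = f(s) by Euler
    integration: d(theta)/ds = kappa(s), (dx/ds, dy/ds) = (cos theta, sin theta)."""
    x, y, theta, s = 0.0, 0.0, 0.0, 0.0
    pts = [(x, y)]
    for _ in range(steps):
        x += ds * math.cos(theta)   # step along the current tangent direction
        y += ds * math.sin(theta)
        s += ds
        theta += ds * kappa(s)      # the curvature turns the tangent
        pts.append((x, y))
    return pts

# Constant curvature kappa = 1 traces out (approximately) a unit circle;
# a curvature like kappa(s) = a/(s*s + a*a) traces a catenary-style arc instead.
circle = plane_curve(lambda s: 1.0)
```

With steps*ds close to 2*pi, the constant-curvature run nearly closes on itself, which is the discrete version of "curvature 1/R means a circle of radius R."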
Note that for the spiral, the center is where s approaches -1; if you jump over the anomalous central point and push down into larger negative values of s, you produce a mirror-image of the spiral.

It would be nice to also think of space curves in a natural, coordinate-free way --- surely this is the way a fly buzzing around in the center of an empty room must think. Profound mathematical insights come hard, and it was a hundred and twenty years after Clairaut before the correct way to represent a space curve by intrinsic natural equations was finally discovered --- by the French mathematicians Joseph Alfred Serret and Frederic-Jean Frenet.

The idea is that at each point of a space curve one can define two numerical quantities called curvature and torsion. The curvature of a space curve is essentially the same as the curvature of a plane curve: it measures how rapidly the curve is bending to one side. The torsion measures a curve's tendency to twist out of a plane. But what exactly is meant by "bend to one side," and "twist out of a plane"? Which plane?

The idea is that at each point P of a space curve you can define three mutually perpendicular unit-length vectors: the tangent T, the normal N, and the binormal B. T shows the direction the curve is moving in, N lies along the direction which the curve is currently bending in, and B is a vector perpendicular to T and N. (In terms of the vector cross product, T cross N is B, N cross B is T, and B cross T is N.) For space curves we ordinarily work only with positive values of curvature, and have N point in the direction in which the curve is actually bending. (In certain of the analytical curves we'll look at later we relax this condition and allow negative curvature of space curves.)

Taken together, T, N and B make up the so-called "moving trihedron of a space curve". In Figure 3 we show part of a space curve (actually a helix) with several instances of the moving trihedron.
So that it's easier to see the three-dimensionality of the image, we draw the curve as a ribbon like a twisted ladder. The curve runs along one edge of the ladder, and the rungs of the ladder correspond to the directions of successive normals to the curve.

Figure 3: The moving trihedron of a space curve: T the tangent, N the normal, and B the binormal.

To understand exactly how the normal is defined, it helps to think of the notion of the "osculating" (kissing) plane. At each point of a space curve there is some plane that best fits the curve at that point. The tangent vector T lies in this plane, and the direction perpendicular to T in this plane holds the normal N. The binormal is a vector perpendicular to the osculating plane. With the idea of the moving trihedron in mind, we can now say that the curvature measures the rate at which the tangent turns, and the torsion measures the rate at which the binormal turns.

Note that T, N and B are always selected so as to form a right-handed coordinate system. This means that if you hold out the thumb, index finger and middle finger of your right hand, these directions correspond to the tangent, the normal, and the binormal.

Figure 4: A right hand as a trihedron.

Just as the circle is the plane curve characterized by having constant curvature, the helix is the space curve characterized by having constant curvature and constant torsion. Figure 5 shows how the signs of the curvature and torsion affect the shapes of plane and space curves.

Figure 5: How the signs of the curvature and torsion affect the motion of a curve.

Now let's look for some space formulae analogous to the plane formula stating that the curvature of a circle of radius R is 1/R. Think of a helix as wrapping around a cylinder --- like a vine growing up a post.
Let R be the radius of the cylinder, and let H represent the turn-height: the vertical distance it takes the helix to make one complete turn (and to make the formulae nicer, we measure turn-height in units 2*pi as large as the units we measure R in). The sizes of the curvature and torsion on a helix with radius R and turn-height H are given by two nice equations. We write "tau" for torsion and, as before, "kappa" for curvature:

kappa = R / (R^2 + H^2)
tau = H / (R^2 + H^2)

It's an interesting exercise in algebra to try and turn these two equations around and solve for R and H in terms of kappa and tau. (Hint: Start by computing kappa^2 + tau^2. When you're done, your new equations will look a lot like the original equations.)

Some initial things to notice are that if H is much smaller than R, you get a curvature roughly equal to 1/R, just like for a circle, and a tau very close to 0. If, on the other hand, R is very close to zero, then the torsion is roughly 1/H while the curvature is close to 0. A fly which does a barrel-roll while moving through a nearly straight distance of H has a torsion of 1/H. The faster it can roll, the greater is its torsion.

A less obvious fact is that if we look down on a plane showing all possible positive combinations of R and H, the lines of constant curvature lie on horizontal semi-circles, while the points representing constant torsion lie on vertical semi-circles. The curvature and torsion combinations gotten by stretching a given Slinky lie along a quarter circle centered on the origin. Apparently the two families of semi-circles are perpendicular to each other.

Figure 6: Lines of constant curvature and torsion for combinations of R and H.

Suppose I have a helix like a steel Slinky spring. What happens to the curvature and the torsion as I stretch a single turn of it without untwisting? Suppose that the initial radius of the helix is A.
Given the physical fact that one twist of the Slinky keeps the same length as you stretch it, you can show that R^2 + H^2 will stay constant at a value of A^2, which corresponds to a circle of radius A around the origin of the R-H plane. As you stretch a Slinky loop with the particular starting radius of 2, its R and H values will move along the dotted blue line shown in Figure 6. Figure 7 shows what a few of the intermediate positions will look like. Curvature is being traded off for torsion.

Figure 7: Stretching a Slinky turns curvature into torsion.

Here's another algebra problem: If you know that R^2 + H^2 = A^2, what can you say about the sum kappa^2 + tau^2?

One fact that seems odd at first is that the curvature and torsion of a helix are dependent on the size of the helix. If you make both R and H five times as big, you make the torsion and curvature 1/5 as big. If you make R and H N times as big, you make the curvature and torsion 1/N as big. But this makes sense if you think of a fly that switches from a small helix to a big helix; it is indeed changing the way that it's flying, so it makes sense that the kappa and the tau should change.

Figure 8: Changing Curvature and Torsion.

This observation suggests a simple way to express the difference between flies and birds --- flies fly with much higher curvature and torsion than do the birds. Gnats, for that matter, fly even more tightly knotted paths, and have very large values of curvature and torsion.

Just as in the plane, a space curve can be specified in terms of natural equations that give the curvature and torsion as functions of the arclength. These equations have the form kappa = f(s) and tau = g(s). The shape and size of the space curve is uniquely determined by the curvature and the torsion functions. Figures 9 and 10 show two intriguing space curves given by simple curvature and torsion functions.
Figure 9: The rocker, with natural equations kappa = 1 and tau = sine(arclength).

Figure 10: The phone-cord, with natural equations kappa = sine(arclength) and tau = 1. Well, actually I used kappa = 10*sine(arclength) and tau = 3 to make the picture look better. Note that this is a space curve where we do allow ourselves to put in negative values for the curvature.

There is not a large literature on these "kappatau" curves, so I've given my own names to these two: the rocker, and the phone-cord. At one time I thought that the rocker was a correct way to represent the seam on a tennis-ball or the stitching on a baseball, but helpful email from the great mathematician John Horton Conway convinced me I was wrong. Conway makes the anthropological conjecture that every time a mathematician discovers a curve that he or she thinks might be the true baseball curve, the curve is a different one. An analysis of the real-world baseball stitch curve can be found in the web-published paper "Designing a Baseball Cover" by Richard Thompson of the Department of Mathematics, University of Arizona [2]. It turns out the baseball stitch curve is based on something so prosaic as a patented 1860s pen and ink drawing of a plane shape used to cut out the leather for a half of a baseball, a shape arrived at by trial and error. Thompson finds a fairly gnarly closed-form approximation of this shape.

Not only does my rocker fail to match the baseball stitch curve, it can be proved that the rocker curve does not in fact lie on the surface of a sphere (even though it kind of looks like it does). It fails to satisfy the following necessary condition for lying on the surface of a sphere, where s stands for arclength (see [3]):

d/ds[(1/tau)*d/ds(1/kappa)] + tau*(1/kappa) = 0

(For kappa = 1 and tau = sin(s), the left-hand side of this is sin(s), which isn't 0.)

Numerical estimates indicate that the arclength of the rocker is exactly twice the circumference of a circle of the same radius.
This suggests an easy way to make a rocker. Cut out two identical annuli (thick circles) from some fairly stiff paper (manila file folders are good), cut radial slits in the annuli, tape two of the slit-edges together, bend the annuli in two different ways (one like a clockwise helix and one like a counterclockwise helix) and tape the other two slit-edges together, forming a continuous band of double length. Because an annulus cannot bend along its osculating plane, the curvature of the shape is fixed along the arclength. Because half the band is like a clockwise helix and half is like a counterclockwise helix, when the shape relaxes, the torsion presumably varies with the arclength like a sine wave function that goes between plus one and minus one. The torsion seems to be zero at the two places where the slits are taped together. Note that I have not proved that my empirical paper rocker is the same as my mathematical rocker; this is simply my conjecture.

Figure 11: Make your own rocker.

• To make the rocker, make a (larger) copy of Figure 11 on stiff paper.
• Cut along all solid lines.
• Tape edge A to edge B* with the letters on the same side.
• Bend the two rings in the opposite sense.
• Tape edge A* to edge B with the letters on the same side.

How were the images in Figures 9 and 10 generated? They use an algorithm based on the 1851 formulae of Serret and Frenet. (See, for instance, Struik's classic work [4] for details; note that this book is now available as an inexpensive Dover paperback reprint.) Let's state the formulae in "differential" form. The question the formulae address is this: when we do a small displacement ds along a space curve, what is the displacement dT, dN, and dB of the vectors in the moving trihedron?

dT = (kappa*N) * ds
dN = (-kappa*T + tau*B) * ds
dB = (-tau*N) * ds

The first and third equations correspond, respectively, to the definitions of curvature and torsion.
The second equation describes the "back-reaction" of the T and B motions on N. Since we are lucky enough to live in three-dimensional space, it is possible for us to experiment with our bodies and to perceive directly why the Serret-Frenet formulae are true. To experience the equations, you should, if possible, stick out your right hand's thumb, index finger, and middle finger as shown in Figure 4. Now start trying to "fly" your trihedron around according to these rules:

(1) The index finger always points in the direction your hand is moving.
(2) You are allowed to turn the index finger towards or away from the direction of the middle finger by a motion corresponding to rotating around the axis of your thumb.
(3) You are allowed to turn the thumb towards or away from the middle finger by a motion corresponding to rotating around the axis of your forefinger.

To get clear on what's meant by motion (2), grab your thumb with your left hand and make as if you were trying to unscrew it from your hand. This is a kind of "yawing" motion, and it corresponds to the first of the three Serret-Frenet formulae: the change in the tangent is equal to the curvature times the normal. Motion (3) corresponds to grabbing your index finger with your left hand and trying to unscrew that finger. This is a kind of "rolling" motion, and it corresponds to the third of the Serret-Frenet formulae: the change in the binormal is the negative of the torsion times the normal.

In thinking of flying along a space curve you should explicitly resist thinking about boats and airplanes, which have a built-in visual trihedron which generally does not correspond to the moving trihedron of the space curve. If you do want to think about a machine, imagine a rocket which never slows down and never speeds up, which can turn left or right --- relative to you the passenger --- and which can roll. Or better yet, think about being a cybernetic house-fly.
An exciting thing about the Frenet-Serret formulae is that they lend themselves quite directly to creating a numerical computer simulation to create kappatau space curves with arbitrary curvature and torsion. To write the code in readable form, we create a Vector3 class with a few handy methods and overloaded operators. The heart of the algorithm's main loop looks about like this:

P = P + ds * T;              // operator*(Real, Vector3) is overloaded to mean scalar product.
s = s + ds;
T = T + (kappa(s) * ds) * N; // Bend. + is overloaded to mean vector addition.
B = B + (-tau(s) * ds) * N;  // Twist.
T.Normalize();               // The Vector3::Normalize() method makes T have unit length.
B.Normalize();               // Makes B have unit length.
N = (B * T);                 // operator*(Vector3, Vector3) is overloaded to mean cross product.

As far as I know, very little mathematical work has been done with kappatau curves because in the past nobody could visualize them. I first implemented the algorithm as a Mathematica notebook for the Macintosh and for Windows machines, and then I wrote a stand-alone Windows program called Kaptau. You can download either of the Mathematica notebooks or the stand-alone Windows program from a page on my web-site [5].

Coming back to this paper's first two paragraphs, what can a mathematician say about the way flies fly? I think that flies generally move along at a constant speed, as if tracing a space curve parameterized by its arclength, and that they manage to loiter here and speed away from there by varying their curvature and torsion between low and high values.

Figure 12: A kappatau curve with curvature varying as a random walk.

1. Morris Kline, Mathematical Thought From Ancient To Modern Times, Oxford U. Press, New York, 1972, p. 557.
2. Richard Thompson, "Designing a Baseball Cover," at http://www.mathsoft.com/asolve/baseball/baseball.html, 1996.
3. Yung-Chow Wong, "On An Explicit Characterization of Spherical Curves," Proceedings of the American Mathematical Society 34 (July, 1972), pp. 239-242.
4. Dirk J. Struik, Lectures on Classical Differential Geometry, Addison-Wesley, Reading, Mass., 1961.
5. Rudy Rucker, "Kappa Tau Curves Download Page," at http://www.cs.sjsu.edu/faculty/rucker/kappatau.htm, first posted 1997.
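For readers who want to experiment without Mathematica or the Kaptau program, the Vector3 loop described above translates directly into a short stand-alone Python sketch (my own translation, added for illustration; plain lists stand in for the Vector3 class):

```python
import math

def cross(a, b):
    """Vector cross product a x b."""
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    """Rescale v to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def kappatau_curve(kappa, tau, ds=0.001, steps=6283):
    """Trace the space curve with curvature kappa(s) and torsion tau(s)
    by Euler-stepping the Frenet-Serret formulae."""
    P = [0.0, 0.0, 0.0]
    T = [1.0, 0.0, 0.0]          # tangent
    N = [0.0, 1.0, 0.0]          # normal
    B = cross(T, N)              # binormal: (T, N, B) is right-handed
    s = 0.0
    pts = [P[:]]
    for _ in range(steps):
        P = [p + ds * t for p, t in zip(P, T)]
        s += ds
        T = [t + kappa(s) * ds * n for t, n in zip(T, N)]   # bend
        B = [b - tau(s) * ds * n for b, n in zip(B, N)]     # twist
        T = normalize(T)
        B = normalize(B)
        N = cross(B, T)          # N = B x T restores the trihedron
        pts.append(P[:])
    return pts

# Constant kappa = tau = 0.5 gives the helix with R = H = 1 (from the
# formulae kappa = R/(R^2 + H^2), tau = H/(R^2 + H^2)); tau = 0 gives a
# plane circle of radius 1/kappa.
helix = kappatau_curve(lambda s: 0.5, lambda s: 0.5)
```

Feeding in kappa = 1, tau = sin(s) or kappa = sin(s), tau = 1 reproduces rocker-like and phone-cord-like curves in the same way.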
{"url":"http://www.cs.sjsu.edu/faculty/rucker/kaptaudoc/ktpaper.htm","timestamp":"2014-04-21T07:09:04Z","content_type":null,"content_length":"31119","record_id":"<urn:uuid:b3ff9b2b-2e9a-44ec-9876-19dde5bb6b6f>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00373-ip-10-147-4-33.ec2.internal.warc.gz"}
The correlator of two vector and one axial current in QCD

It is known that the correlator of one axial and two vector currents, which receives leading contributions through one-loop fermion triangle diagrams, is not modified by QCD radiative corrections at two loops. It was suggested that this non-renormalization of the VVA correlator persists in higher orders in perturbative QCD as well. To check this assertion, I compute the three-loop QCD corrections to the VVA correlator using the technique of asymptotic expansions. I find that these corrections do not vanish and that they are proportional to the QCD beta-function. I will also review some properties of the VVA correlator that were discovered in recent years.
{"url":"http://www.perimeterinstitute.ca/seminar/correlator-two-vector-and-one-axial-current-qcd","timestamp":"2014-04-24T13:05:33Z","content_type":null,"content_length":"26083","record_id":"<urn:uuid:7194f253-ea48-45c3-8748-7b63d698f225>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00371-ip-10-147-4-33.ec2.internal.warc.gz"}
Confidence intervals in psychological tests

The confidence interval would refer to the sample population and NOT the individual. The test score is what the individual received, with a margin of error.

It describes the score one receives on his/her IQ test as being part of a sample, and how reflective it is of the whole population. In other words, it compares a sample population, of which the testee is a part, to the whole population, and then asks whether the testee belongs to a sample, among all the samples that can be taken of the population, that reflects the parameters of the population, or not.
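For what it's worth, in psychometrics the interval reported around an individual's observed score is usually built from the standard error of measurement, SEM = SD * sqrt(1 - reliability). A small Python sketch (all numbers here are made up purely for illustration):

```python
import math

# Hypothetical numbers: an observed IQ of 112 on a test with SD 15 and
# reliability 0.90. Both values are illustrative assumptions.
score, sd, reliability = 112.0, 15.0, 0.90

sem = sd * math.sqrt(1 - reliability)        # standard error of measurement
low, high = score - 1.96 * sem, score + 1.96 * sem
print(round(low, 1), round(high, 1))         # 102.7 121.3
```

So the 95% interval reflects measurement error around the individual's score, which is a different object from the sampling-based confidence interval for a population parameter discussed above.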
{"url":"http://www.physicsforums.com/showthread.php?s=9a39de7f6b6a994340dc2ddfb2201a69&p=4633335","timestamp":"2014-04-20T08:41:10Z","content_type":null,"content_length":"28631","record_id":"<urn:uuid:5b03ed84-b424-4ea2-9690-b8b86b7a46b9>","cc-path":"CC-MAIN-2014-15/segments/1397609538110.1/warc/CC-MAIN-20140416005218-00641-ip-10-147-4-33.ec2.internal.warc.gz"}
Sum of random number of random variables

Hi, Guys, I'm new to this forum, and don't have a strong background in probability theory, so please bear with me if the question is too naive. Here's the question:

In a problem I'm trying to model, I have a random variable (say, R), which is a sum of a random number (say, N) of random variables (say, Hi), in which all Hi are i.i.d.. I have the distributions of both N and Hi, and I am interested in the expected value and variance of R. Any suggestions how I can get it?

My initial thought is E(R) = E(N)*E(Hi), but I feel it's not quite right.. and it's even harder to get the variance of R. I did some googling, and found out ways to sum rvs, but not so much about how to find random sums.. Any suggestions? or hints about where I can find related information?
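For what it's worth, when N is independent of the i.i.d. Hi, the conjecture E(R) = E(N)*E(H) is actually correct (it's known as Wald's identity), and the law of total variance gives Var(R) = E(N)*Var(H) + Var(N)*E(H)^2. A quick Monte Carlo sketch checks both numerically; I've picked N ~ Poisson and Hi ~ exponential purely for illustration, since the actual distributions aren't specified:

```python
import math
import random

random.seed(1)

lam, mu = 4.0, 2.0     # E[N] = Var[N] = lam (Poisson); E[H] = mu, Var[H] = mu**2
trials = 100_000

def sample_poisson(lam):
    # Knuth's multiplication method; fine for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

totals = []
for _ in range(trials):
    n = sample_poisson(lam)
    totals.append(sum(random.expovariate(1.0 / mu) for _ in range(n)))

mean = sum(totals) / trials
var = sum((t - mean) ** 2 for t in totals) / (trials - 1)
print(mean, var)   # near E[N]*E[H] = 8 and E[N]*Var[H] + Var[N]*E[H]**2 = 32
```

With these parameters the sample mean lands near 8 and the sample variance near 32, matching the two identities.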
{"url":"http://www.physicsforums.com/showthread.php?p=1822616","timestamp":"2014-04-17T18:29:02Z","content_type":null,"content_length":"42064","record_id":"<urn:uuid:11486dff-6642-4794-b015-3a58da40ee86>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00068-ip-10-147-4-33.ec2.internal.warc.gz"}
Julia, I Love You

March 31, 2012 By John Myles White

Julia is a new language for scientific computing that is winning praise from a slew of very smart people, including Harlan Harris, Chris Fonnesbeck, Douglas Bates, Vince Buffalo and Shane Conway. As a language, it has lofty design goals, which, if attained, will make it noticeably superior to Matlab, R and Python for scientific programming. In the core development team's own words:

We want a language that's open source, with a liberal license. We want the speed of C with the dynamism of Ruby. We want a language that's homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab. We want something as usable for general programming as Python, as easy for statistics as R, as natural for string processing as Perl, as powerful for linear algebra as Matlab, as good at gluing programs together as the shell. Something that is dirt simple to learn, yet keeps the most serious hackers happy. We want it interactive and we want it compiled. (Did we mention it should be as fast as C?)

Remarkably, Julia seems to be on its way to meeting those goals. Last night, I decided to see for myself whether Julia would live up to the hype. So I taught myself just enough of the language to write an implementation of the slowest R code I've ever written: the Metropolis algorithm-style sampler Drew and I use in Chapter 7 of Machine Learning for Hackers to show off randomized, iterative optimization algorithms.

You can find both the original R code and my new Julia code on GitHub in two files named cipher.R and cipher.jl, respectively. In my opinion, the new code in Julia is easier to read than the R code because Julia has fewer syntactic quirks than R. More importantly, the Julia code runs much faster than the R code without any real effort put into speed optimization.
For the sample text I tried to decipher, the Julia code completes 50,000 iterations of the sampler in 51 seconds, while the R code completes the same 50,000 iterations in 67 minutes --- making the R code more than 75 times slower than the Julia code. Having seen that example alone, I would be convinced Julia is a real contender for the future of scientific computing. But this iterative sampling algorithm is not close to being the harshest comparison between Julia and R on my machine. For a more powerful example (lifted straight from the Julia docs), we can compare Julia and R code for computing the 25th Fibonacci number recursively.

First, the Julia code:

fib(n) = n < 2 ? n : fib(n - 1) + fib(n - 2)
@elapsed fib(25)

Second, the R code:

fib <- function(n)
{
    ifelse(n < 2, n, fib(n - 1) + fib(n - 2))
}

start <- Sys.time()
fib(25)
end <- Sys.time()
end - start

The Julia code takes around 8 milliseconds to complete, whereas the R code takes around 4000 milliseconds. In this case, R is 500 times slower than Julia. To me, that's sufficient reason to want to start focusing my time on implementing the algorithms I care about in Julia. I hope others will consider doing the same.
{"url":"http://www.r-bloggers.com/julia-i-love-you/","timestamp":"2014-04-19T19:42:57Z","content_type":null,"content_length":"39873","record_id":"<urn:uuid:3d4e32b8-78bc-4737-8f71-e99b38407693>","cc-path":"CC-MAIN-2014-15/segments/1397609537376.43/warc/CC-MAIN-20140416005217-00619-ip-10-147-4-33.ec2.internal.warc.gz"}
Factoring a quadratic equation

I'm stuck on this one....

6r^2 - r - 2 = 0

Any assistance appreciated. Thanks, Thomas

Hi tbradnc,

Here's a technique you can use. Multiply the leading coefficient (6) by the constant (-2). This gives you -12. The coefficient of the middle term is -1. You need a pair of numbers whose product is -12 and whose sum is -1. After thinking a bit on this, you should come up with the only combination that works, namely -4 and 3. Now restate your quadratic replacing the middle coefficient with these two.

$6r^2{\color{red}-4r +3r}-2$

Now group the first two terms, and then the last two terms.

$(6r^2-4r)+(3r-2)$

Factor each group.

$2r(3r-2)+1(3r-2)$

$(3r-2)$ is a common factor, so

$(3r-2)(2r+1)$

All done.

Thank you so much....I'm so glad I found this site. ;-) I'm an older adult going back to school and it's been 30 years since I've played with algebra. I leave every class barely understanding what we're doing and I have to drill, drill, drill to get the hang of it.
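As an aside, the pair-hunting step of this technique (find two integers whose product is a*c and whose sum is b) is mechanical enough to automate. A little Python sketch of the search (the function name is my own invention, just for illustration):

```python
def ac_pair(a, b, c):
    """For a*x^2 + b*x + c with integer coefficients, find integers p, q
    with p*q == a*c and p + q == b (the pair used to split the middle term).
    Returns None when no such integer pair exists."""
    ac = a * c
    for p in range(-abs(ac), abs(ac) + 1):
        if p != 0 and ac % p == 0:
            q = ac // p
            if p + q == b:
                return p, q
    return None

print(ac_pair(6, -1, -2))   # -> (-4, 3), the same split used above
```

When the function returns None (as it does for something like x^2 + 1), the quadratic has no factorization over the integers and the technique doesn't apply.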
{"url":"http://mathhelpforum.com/algebra/83870-factoring-quadratic-equation.html","timestamp":"2014-04-16T17:59:59Z","content_type":null,"content_length":"37630","record_id":"<urn:uuid:f1cdc9ab-9e21-4c46-85ba-b33e40d7e535>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00195-ip-10-147-4-33.ec2.internal.warc.gz"}
The Ph125abc sequence covers quantum mechanics at a level of sophistication beyond the introductory Ph 2/12 sequence. You will see much material that is familiar to you from these courses; but, in Ph125abc, you will truly learn to attack basic quantum problems from scratch and arrive at full solutions that can be tested by experiment. We will also explore some of the interesting and unusual implications of quantum mechanics. It is impossible to overemphasize how important the core physics courses Ph106 and Ph125 are: these teach you the basic frameworks and techniques that you must know to do any physics. Ph125ab will cover the basic techniques and results of quantum mechanics along with a small selection of special topics. Ph125c will cover additional techniques and applications and will be taught by Prof. Wise.

Vital Information

107 Downs, MWF 10:00 am - 11:00 am

Prof. Sunil Golwala, 311 Downs, Mail Code 59-33, golwala at caltech.edu

Teaching Assistants:
Denis Bashkirov, denisb at caltech.edu
Kevin Engel, kte at caltech.edu
Marcus Teague, mlteague at caltech.edu
Please contact the TAs directly if you would like to make appointments outside of normal office hours.
If you would like to preserve your anonymity, campus mail will usually work. I have mailboxes on the 3rd floor of Downs near my office and in 61 W. Bridge. You will also be able to provide feedback via the new Moodle page being used for this course (details ). Unfortunately, this feedback is not anonymous, so please use one of the above means if you desire anonymity. • Required: Principles of Quantum Mechanics, Shankar, available at the bookstore. • Optional (on 3-hr reserve at Fairchild Library): Use these optional texts for alternate explanations or for additional problems or examples. The basic material is always the same, but different authors have different approaches. Find a text you like; different students learn in different ways -- internalizing your own understanding of the material is key to becoming expert in it, so you should follow the approach that best gets you □ Comparable to this course ☆ Griffiths, Introduction to Quantum Mechanics, not quite as advanced as this course. ☆ Cohen-Tannjoudji et al., Quantum Mechanics, similar to this course, but very axiomatic and long -- I prefer Shankar. ☆ Gasioriowicz, Quantum Physics, a good book -- I used this as an undergrad and was fairly happy with it. A bit less rigorous than I like, which is why I am using Shankar. ☆ Liboff, Introductory Quantum Mechanics, a good book at the right level, but the typesetting is so similar to the most recent edition of Goldstein as to cause unnecessary mental trauma. ☆ Merzbacher, Quantum Mechanics, a classic, writing and text style is also "classic" (dense text, not very many exercises) ☆ Messiah, Quantum Mechanics, c.f. Merzbacher ☆ Schiff, Quantum Mechanics, c.f. Merzbacher □ More advanced than this course ☆ Landau and Lifshitz, Quantum Mechanics, similar material to this course, very terse. ☆ Sakurai, Modern Quantum Mechanics: largely the same material as this course, but probably too terse for the first time through. 
☆ Sakurai, Advanced Quantum Mechanics: covers second quantization and relativisitic QM. Only for certified quantum mechanics. □ Special topics (self-explanatory) ☆ Weissbluth, Atoms and Molecules • Lecture Notes: 2007/2008 lecture notes: pdf 2008/2009 lecture notes (updated following each lecture, see course Moodle page for details): pdf My lecture notes in general follow Shankar and are primarily intended as a distillation for my personal use. It will appear in class that I am working directly from them because I am -- that's why they're called lecture notes! My goals in making them available to you are: □ To provide clarification of points in Shankar that I thought deserved more or alternate explanation. □ To present additional explanation or material derived from other texts; the references will be provided in the notes. □ To get all the algebra down on the page, correctly, so that I don't get bogged down on the board and so that you don't have to transcribe everything that I write. I only provide the notes in electronic pdf form, available above and on the course Moodle page. Corrected versions will be posted there, too. The lectures are also broken out separately in the syllabus on the course Moodle page, with individual lecture update dates. I provide last year's notes above and there also. I do not consider myself responsible for providing updated copies of the lecture notes well ahead of class time -- they are being revised as the course is being given. They will be posted promptly after class. You are welcome to review last year's notes ahead of time, though there will be changes and improvements. I suggest that you spend your time in class following the lecture at a conceptual level and noting down for yourself points or derivations that were not clear to you; when you review the posted notes, you may find your questions answered. If not, you are welcome to ask for clarification. 
Of course, it is true that the lecture notes may relieve you of the obligation of coming to lecture. I won't claim that there is much said in class that is not in the notes. It's your choice. Some students benefit from being able to receive information aurally and to interact during that process; others prefer to read it off the page. Whatever works for you. Grades are based only on the written work you hand back. But please do not delude yourself into thinking that, because the lecture notes are available, you can just skim through it all on the day before a problem set or exam is due and expect to immediately become expert. Learning requires time to mull over concepts in your mind, for your subconscious to work on ideas and problems. If you choose not to come to class, please be disciplined about keeping up with the material in your own study time. Lecture Strategy I will not cover in lecture every bit of material you will be responsible for. There are some topics that are really better covered by reading than by lecture, and some topics that are simple enough that they are a waste of lecture time. I can use the leftover time to do more examples. Problem Set Policies The best way to learn physics is by doing problems. In addition to the regular problem sets, I list some links to other sources of problems, some with solutions -- doing problems is the best way to learn. All these policies are subject to change when Prof. Wise takes over for Ph125c. • Problem sets will be posted on the course Moodle page, linked to the syllabus, usually 1 week before they are due. • Due date: Tuesday 4 pm at the box outside my office. No mercy will be granted on the due date and time. Remember, we give partial credit, so the last 10 minutes of work will not make much • Electronic Submission: Electronic submission of problem sets (email or fax) is only allowed if you obtain prior approval from the instructor. 
Electronic submission only creates work for the instructor and TAs because the problem set still needs to be printed out for grading. Any reasonable justification will be accepted, reasonable meaning that you will be, for some well-defined reason, off-campus and unable to turn in the set physically.
• Late policy: Problem sets will be accepted up to 1 week late, at the due date for the following week's set, for 50% credit, and after that not at all. You may turn in part on-time and part late. Please note on the problem set if it is being split this way. You do not need to contact me or the TAs to turn in a problem set late at 50% credit, or to turn in part on-time and part late.
• Extensions:
□ You may take one full-credit one-week extension per term. No need to contact us, just write it on your problem set.
□ Otherwise, extensions will be granted for good reasons -- physical or mental health issues, family emergency, etc. You must contact me or one of the TAs before the homework is due and you must provide some sort of proof (e.g., note from resident head, health center, counseling center, or Barbara Green). A heavy amount of other coursework is not sufficient reason for an extension (though you may use your free extension in such circumstances -- so save it until you really need it!).
• Solution sets will also be posted on the course Moodle page when the homework sets are due (usually late the same night or the following morning). If you turn in the problem set late, you may not look at the solutions until you have turned in your problem set.
• Graded problem sets will be available roughly 10 days after they are due, outside my office. You should keep a copy of your homework sets so you can review them with the solutions promptly after the set is due.

In spite of my best efforts, sometimes I make mistakes in assigning problems; perhaps not providing enough information, or giving a problem that results in an algebra nightmare.
I will post corrections via the Moodle page and will also send broadcast emails to the class. If you are having trouble with a problem, be sure to check to see if a correction has been posted, and feel free to contact me if you think a problem has errors in it or seems overly difficult.

The course grade will be one-third homework sets, one-third midterm, and one-third final. Collaboration is permitted on homework sets, but each student's solution must be the result of his or her own understanding of the material. No manual xeroxing is allowed. See below for some comments on working in groups. Use of mathematical software like Mathematica is allowed, but will not be available for exams. Prof. Mabuchi made a very good point when he taught Ph125: It is absolutely essential that you develop a strong intuition for basic calculations involving linear algebra, differential equations, and the like. The only way to develop this intuition is by working lots of problems by hand; skipping this phase of your education is a really bad idea. Be careful how you use such packages.

The midterm and final are not collaborative, though you are welcome to consult your own notes (both in-class and any additional notes you take), Shankar, and my lecture notes (including typo corrections). You may not use other textbooks, the web, any other resources, or any software of any kind.

Grade Distributions and Anonymously Listed Grades

Histograms of grades for the problem sets to date can be found (updated 2009/03/25, through final exam). You can check that we have the correct grades recorded for you (updated 2009/03/25, through final exam). Grades can still be corrected even after they have been turned in. Please let me know if you find any errors.

Using Moodle Page

This year we are trying out Moodle, course software used by many institutions. You can log on to Moodle at A password for the Ph125a page will be provided in class; you can also obtain it from your classmates, the TAs, or myself.
All course logistics and assignments will be announced via the Moodle page. You will find a listing of the course syllabus, problem sets, and solutions there. There is also a weekly homework survey. It would be very beneficial (to me and you) if you could fill out the survey regularly. Especially important is the "News Forum", via which I will make course announcements that I believe you will receive automatically via email once you have logged in to the course page. This is the first time Moodle is in widespread use at Caltech, and the first time I am using it, so please bear with me as I figure it out.

Practice Problems

Comment on Working in Groups:

It is in general a good thing to work with other students while reading and doing problem sets. You get to hear different perspectives on the material and frequently your peers can help you get past obstacles to understanding. However, you must use group work carefully. If you rely on your colleagues too much, or take a very long time to do the homework sets, you will do poorly in the fixed-time, independent exam environment.

Empirically, we observe that students with good exam scores tend to also have done well on homework, but that good homework scores do not predict good exam scores. Exam scores correlate from exam to exam, even on largely independent material. For example, scores from 2004-2005 Ph106ab: Notice, in particular, the midterm-final correlation for Ph106b, which is remarkable because the exams covered totally disjoint material (mechanics vs. E&M) and were written by two different instructors.

To avoid suffering from this problem, I have two suggestions:
• Talk to your peers, in particular peers outside of your usual workgroup, to find out how long they are spending on problem sets. If you find you are spending much more time, figure out why! Do you need to spend more time understanding the material before diving in to problem sets? Do you jump to an incorrect solution method too quickly?
Are you getting bogged down in algebra? Consult me or the TAs too.
• While working in groups can be helpful, you have to be careful to remain sufficiently independent that you can solve problems on your own! My suggestion is to go over the material and examples in groups, but try to work the problems by yourself, using help from others as a last resort. If you find yourself helping one of your peers, don't just explain how to do the problem; try to help him find his way to the solution himself.

This is not just an arbitrary classroom exercise. In research, one is always under schedule pressure -- because one only has a fixed number of nights at an observatory, because there are funding deadlines, because there are competing groups doing similar work. It is critical to learn how to cut through irrelevant or unimportant information and get to results in a timely fashion.
8-star-choosability of a graph with maximum average degree less than 3

Min Chen, André Raspaud, Weifan Wang

A proper vertex coloring of a graph G is called a star-coloring if there is no path on four vertices assigned to two colors. The graph G is L-star-colorable if for a given list assignment L there is a star-coloring c such that c(v) ∈ L(v). If G is L-star-colorable for any list assignment L with |L(v)| ≥ k for all v ∈ V(G), then G is called k-star-choosable. The star list chromatic number of G, denoted by χ_s^l(G), is the smallest integer k such that G is k-star-choosable. In this article, we prove that every graph G with maximum average degree less than 3 is 8-star-choosable. This extends a result that planar graphs of girth at least 6 are 8-star-choosable [A. Kündgen, C. Timmons, Star coloring planar graphs from small lists, J. Graph Theory, 63(4): 324-337, 2010].
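The defining condition of a star-coloring (a proper coloring with no path on four vertices using only two colors) can be checked by brute force on small graphs. A quick sketch, my own illustration rather than anything from the paper; `adj` is an adjacency-list representation of the graph:

```python
def is_star_coloring(adj, color):
    """Check that `color` is a proper coloring of the graph given by
    adjacency lists `adj`, with no path on four vertices bicolored."""
    n = len(adj)
    # Proper coloring: adjacent vertices get different colors.
    for u in range(n):
        for v in adj[u]:
            if color[u] == color[v]:
                return False
    # No bicolored P4: enumerate simple paths a-b-c-d and count colors.
    for a in range(n):
        for b in adj[a]:
            for c in adj[b]:
                if c == a:
                    continue
                for d in adj[c]:
                    if d in (a, b):
                        continue
                    if len({color[a], color[b], color[c], color[d]}) == 2:
                        return False
    return True
```

For example, on the 4-cycle, the proper 2-coloring 0,1,0,1 is not a star-coloring (the path around the cycle uses only two colors), while 0,1,2,1 is.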
General Departmental Seminar Series

A Group Sequential, Response-Adaptive Design for Randomized Clinical Trials
Theodore Karrison, Department of Health Studies, University of Chicago
CO-AUTHORS: Dezheng Huo and Rick Chappell
Friday, November 8, 2002, 12:00 pm
G5/136-142 Clinical Sciences Center - 600 Highland Avenue

There has been considerable methodological research on response-adaptive designs for clinical trials, but they have seldom been used in practice. The many reasons for this are summarized in a paper by Rosenberger and Lachin (1993), but the two main reasons generally cited are the logistical difficulties of implementing the adaptive assignment scheme and the potential for bias due to selection effects, "drift" in patient characteristics or risk factors over time, and other sources. Jennison and Turnbull (2000) consider a group sequential, response-adaptive design for continuous outcome variables that partially addresses these concerns while at the same time allowing for early stopping. The key advantage of constant randomization probabilities within sequential groups is that a stratified analysis will eliminate bias due to drift.

In this paper we consider binary outcomes and an algorithm for altering the allocation ratio that depends on the strength of the accumulated evidence. Specifically, patients are enrolled in groups of size n_{Ak}, n_{Bk}, k = 1, 2, ..., K, where n_{Ak}, n_{Bk} are the sample sizes in treatment arms A and B in sequential group k. Patients are initially allocated in a 1:1 ratio. After the k-th interim analysis, if the z-value comparing outcomes in the two treatment groups is less than one in absolute value, the ratio remains 1:1; if the z-value exceeds 1.0, the next sequential group is allocated in the ratio R1 favoring the currently better-performing treatment; if the z-statistic exceeds 1.5, the allocation ratio is R2; and if the z-value exceeds 2.0, the allocation ratio is R3.
If the O'Brien-Fleming monitoring boundary is exceeded the trial is terminated. Group sample-sizes are adjusted upwards to maintain equal increments of information when allocation ratios exceed one. The z-statistic is derived from a Mantel-Haenszel test stratified by sequential group. Simulation studies and theoretical calculations were performed under a variety of scenarios and allocation rules (for example, [R1, R2, R3] = [1.5, 2, 2.5]). Results indicate that the method maintains the nominal type I error rate even when there is substantial drift in the patient population. When a true treatment difference exists, a modest reduction in the proportion of patients assigned to the inferior treatment arm and in the overall proportion of failures can be achieved at the expense of smaller increases in the total sample size relative to a non-adaptive design. Comparisons in terms of the total number of failures are less favorable. Limitations, such as the impact of delays in observing outcomes, are discussed, as well as areas for further research.
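The z-threshold allocation rule described in the abstract can be sketched as follows. This is my own illustration, not the authors' code; the function name is mine, the default ratios follow the example [R1, R2, R3] = [1.5, 2, 2.5], and the behavior at the exact thresholds (|z| = 1.0, 1.5, 2.0) is a convention the abstract does not specify:

```python
def allocation_ratio(z, ratios=(1.5, 2.0, 2.5)):
    """Allocation ratio favoring the currently better-performing arm
    for the next sequential group, given the interim z-statistic.
    `ratios` = (R1, R2, R3); boundary cases are resolved upward here."""
    z = abs(z)  # the rule depends only on the strength of the evidence
    if z < 1.0:
        return 1.0        # keep 1:1 randomization
    elif z < 1.5:
        return ratios[0]  # R1
    elif z < 2.0:
        return ratios[1]  # R2
    else:
        return ratios[2]  # R3 (unless the O'Brien-Fleming boundary stops the trial)
```

Which arm the ratio favors is determined separately by the sign of z, i.e., by which treatment is currently performing better.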
Prime or Composite?

Date: 09/30/97 at 14:01:08
From: ~MUSKRAT~
Subject: Prime or composite?

Wouldn't every number be composite? The reason I think this is because the definition of PRIME is that it only has 2 factors...itself and 1... but what about the decimals? Those are numbers, aren't they? I think the definition of factor or prime and composite needs to be defined better, don't you? Because doesn't every number have another number that can go into it?

I have another question: in one of your answers you said 0 was a number, but I don't think it is. Isn't zero considered something else? And about factors: every number has a factor, so zero isn't a number. I rest my case.... even 1! see...there is... .5! or .25!

Date: 09/30/97 at 16:31:10
From: Doctor Rob
Subject: Re: Prime or composite?

The definitions of prime and composite numbers are fine the way they are. Probably you haven't seen them written out with precision and in detail. I will make an attempt to clarify this for you below.

First of all, we shall speak only of the Natural Numbers, that is, the counting numbers. They are all integers, or whole numbers, and they are all positive. They begin 1, 2, 3, 4, .... A divisor of a natural number N is a natural number D such that N = D*Q for some unique other natural number Q. A prime number in this set is a number with exactly two divisors. Since the number itself and 1 are always divisors, in order to have just two divisors, the number must be bigger than 1, and it must not have any divisors other than 1 and itself.

The natural number 1 is very special. It is called a unit, and it is the only natural number that has a natural number reciprocal, that is, a natural number I such that 1*I = 1. A composite number is a natural number which is neither a prime number nor a unit.

The big deal about prime numbers is the Fundamental Theorem of Arithmetic. It says that every natural number can be written uniquely as a product of powers of prime numbers.
This is a very important fact, as you might be able to tell by its name! That disposes of most of your objections above. 1 is not a prime because it has only one divisor, itself. Zero, negatives, and decimal fractions are neither prime nor composite because they are not natural numbers. They belong to a larger set, either the Integers, the Rational Numbers, or the Real Numbers.

The next question is whether we can extend the notion of a prime number to one of these larger sets. In the case of the Integers, this works pretty well, but we have to be careful! Now there are two units, 1 and -1. To every prime number P in the natural numbers there correspond two integers that are "prime" in the integers: P and -P. These now have exactly FOUR integer divisors: 1, -1, P, and -P divide each of the numbers P and -P, and no other integers do. Notice, however, that there are only two of these divisors that are natural numbers.

Likewise, to every composite number C in the natural numbers there correspond two integers that are composite in the integers: C and -C.

Now we have to worry about zero. Zero is a special case, because it has infinitely many divisors, since every integer except zero divides it. Zero is relegated to a new class, neither unit, nor prime, nor composite. The class is called the zero-divisors. Zero is the only zero-divisor in the integers.

What happens to the Fundamental Theorem of Arithmetic in this setting? Now it says that every non-zero integer can be written uniquely as a unit times a product of powers of prime natural numbers (or positive prime numbers).

When we try to extend to the Rational Numbers, we are in big trouble: every non-zero rational number is a unit! The same happens in the real numbers. There are no "prime" numbers and no "composite" numbers in those sets, just units and zero-divisors (zero is the only one).

-Doctor Rob, The Math Forum
Check out our web site! http://mathforum.org/dr.math/
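The divisor-counting definitions in the answer above can be sketched in a few lines of code; this is a quick illustration (the function name is mine, and it covers only the natural numbers, as the answer does):

```python
def classify(n):
    """Classify a natural number as 'unit', 'prime', or 'composite'
    by counting its divisors, following the definitions above."""
    if n < 1:
        raise ValueError("only natural numbers (1, 2, 3, ...) are classified")
    divisors = sum(1 for d in range(1, n + 1) if n % d == 0)
    if divisors == 1:
        return "unit"        # 1 is the only number with a single divisor
    elif divisors == 2:
        return "prime"       # exactly two divisors: 1 and n
    else:
        return "composite"
```

As the answer notes, zero, negatives, and fractions fall outside this classification entirely, which is why the function refuses them.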
Woodhaven ACT Tutor Find a Woodhaven ACT Tutor ...Imagine the doors you can unlock with a higher test score. My students have seen their scores rise, on average, just over 400 points, and they've been admitted to prestigious schools--including Harvard, Columbia, Georgetown, and NYU. Get in touch with me through WyzAnt and I'll provide you with... 10 Subjects: including ACT Math, SAT math, GMAT, LSAT ...MY BIO I majored in Psychology at UC Berkeley, where I completed my degree in three years. After graduating I moved to New York and worked in finance for about 2 years. An illness in my family caused me to move back to California in early 2011, where I ended up getting a job teaching the SAT and ACT for Kaplan. 18 Subjects: including ACT Math, geometry, GRE, algebra 1 ...Depending on the subject (and especially for computer subjects) before the first session, I will ask that we speak on the phone and will request the student send me via email information regarding materials we'll be going over (syllabus, homework assignments, past exams). Feel free to reach ou... 9 Subjects: including ACT Math, algebra 1, algebra 2, precalculus Hi parents and students, My name is Natalie and I am a forthcoming high school mathematics teacher. I graduated from a NYC specialized high school and I am currently studying at New York University, majoring in Mathematics Secondary Education. I have been a volunteer math tutor for the last 5 years, and have grown to work quickly and effectively on any mathematics subject. 19 Subjects: including ACT Math, calculus, geometry, biology I have been a certified teacher for the last 10 years. I love teaching, especially to students who love learning. I'm flexible, patient, and my main interest is in helping the students. 40 Subjects: including ACT Math, English, writing, grammar
I can't for the life of me pick a Math curriculum....help!

I can't say enough about MathUSee. For us, it has been our best purchase every year. We have used it since the beginning, and I can't imagine trying to find something else. Life of Fred has been suggested to me, but I would only use it as a supplement (like to fill space for the remainder of this year since ds finished Beta). The MUS website is very helpful too, as I can print extra worksheets for more practice and use their online drills to generate addition/sub/multiplication/division probs for practice. Those are free, whether you use their curriculum or not.

We struggled with all sorts of math programs until we found Rod and Staff.

We LOVE Teaching Textbooks! teachingtextbooks.com I have an 11yo, 4yo, and 5 month old. It has an intimidating price but I do not let them write in the actual text so that they can be passed down from one child to the next.

I just reviewed a bunch of math programs so I could figure out what I wanted to go with next year. Maybe you'll find it helpful: http://wiseowlhomeschool.wordpress.com/2012/04/01/the-search-for-the-perfect-math-program/ There's so much out there, and math just seems to be harder to pick than other subjects. Best of luck!

We also struggle in this area; math is obviously not mom's strong subject. We started out with MUS and as much as I wanted to love it, I didn't get it and the kids hated it so we switched to Singapore. If I couldn't figure it out, I wasn't able to reinforce the lessons so it wasn't going to work. I liked Singapore and so did the kids, there were only one or two concepts that they didn't get and we worked around it. I also tried Abeka and hated it with a passion. My oldest (10) is doing Teaching Textbooks and likes it as do I. I do it with him as much as possible because he likes me being there, but if I can't he can do it himself.
My dd is a struggling learner and for her and my soon to be kindergartener we are going to try the Math Lessons for a Living Education, but that is for younger years.

Thanks everyone for sharing their math recommendations. I'm still a little torn but leaning toward CLE's Sunrise Math. Have any of you used this?

I use CLE's math with my three youngest children, who each have very different learning styles and abilities, and it works well for each of them. One 11-year-old "gets" math quickly and is a bit advanced in math, but has convergence insufficiency - an eye issue that makes focusing difficult. When we tried Saxon, he had a great deal of trouble copying the problems from the book into a notebook, and his math scores plummeted even though he knew the concepts and had been doing them in another program. CLE has been great for him.

Our other 11-year-old really struggled with math concepts and was not quick to understand, and forgot the concepts unless they were reviewed regularly. We tried several programs before trying CLE, and the set up of the lessons and the regular review works well for him too - no more complaints and he is learning.

Our littlest, just 9, really really really struggled with math, and has trouble with nonverbal reasoning skills (which = math). We tried MUS, wanting to love it, but it was not a success here. We started him in CLE this year, and he is progressing well.

One thing you should know is that the program, according to the gals on the Well Trained Mind forums, runs 12-18 months advanced, and we found this to be the case as well. Do use the placement test and make sure you place your child correctly if you choose the program. They have the scope and sequence of each of the books on their website, so you can start midway through one of the sets if you feel that's the right placement. And the program is so reasonably priced, if you decide to try a unit, you haven't invested a ton of money. I think CLE is a solid math program.
CLE looks very interesting. Thanks for the input, even though this isn't my thread, lol.

Thanks everyone! I am trying to find a new math for next year as well. I was looking at Math Mammoth but have just heard about CLE math in this post and it looks good. Can anyone tell me what exactly is covered in grade 3 and 4? Or give me the page to look it up? I can't seem to find it on the site.

I have heard great things about CLE Sunrise Math, and I have a friend who uses it, and loves it. It's more of a spiral method, though, and I know after Right Start that neither DD as the student, nor me as the teacher, do well with spiral methods. I am looking at Mastering Mathematics, which is a mastery program, Christian in nature in re: to story problems, written by a homeschooling parent (Les Farmer) and covers K-8 math. I'm also intrigued by Math Lessons for a Living Education.

We started using Math on the Level this year, and I love it. I think it fits in perfectly with the CM method of teaching. It is very teacher-intensive, but it provides plenty of teaching ideas. My daughter, who insists she hates math, is doing well because it is not workbook-based. I write her math problems in a notebook based on what I have taught her. I especially like that it gives me the freedom to choose the concepts we will work on, though the authors do give suggestions. There are no grade levels assigned to the concepts so you can put different aged students together without any stigma about what grade level they are working at. It won't be right for everyone, but I think it is worth looking into.
Our oldest is ready for something more as MOTL goes from PreK to Pre-Algebra, but he has a solid base after us using this program and its maturational method for 3 years. I do highly recommend it. It seems pricey but if you have several children and break it down by years and per child, it is very cost-friendly. The MOTL yahoo group is a great place to ask questions and anyone can join who has the product or has an interest in learning about it. Oldest son btw is using Saxon now and while he doesn't love it, he thinks it works and I do too. He is just beginning the Algebra 1/2 book right now and plans to work on it through the summer. A neighbor gave us the Saxon and DIVE CDs/DVDs to use along with the book. Here is the site for CLE Math. They have samples of each light unit. Just view details for the grade you want.
Posts about Calculus on Math Jokes 4 Mathy Folks

Posts tagged ‘Calculus’

Gottfried Wilhelm Leibniz was born on July 1, 1646, and his first paper about integral calculus was published 329 years ago. Whether he discovered calculus before or after Newton is an issue that mathematical historians have debated for centuries. Honestly, who cares? Both were great mathematicians. Still, it’s fun to think about how this issue might play out if they were both alive today… Check out the iPhone Text Generator to create your own fake text conversations.

My wife forwarded an email with a link to a CNN article and subject line, “Your husband will love this.” Uh-oh. Even my closest friends cannot correctly predict what I will and will not love, so how would a colleague of my wife — who only knows me from an introduction at a professional reception — be able to make such a prediction? But the article did not disappoint. The author wrote about the mathematically satisfying shape of Pringles®, and she quoted her husband thus:

They [Lays Stax] set themselves up as a Pringles competitor, but it’s an entirely different curvature!

I have never met the author, but her last name was familiar. As luck would have it, her math professor husband and I taught together at a gifted camp for several summers. Small world, eh? My favorite line of the article was from the last paragraph.

Flavor is subjective. Math is irrefutable.

What I enjoyed most about this occurrence was the intersection of several math topics. The article discusses parabolic cylinders and hyperbolic paraboloids, which are topics in multivariable calculus; a colleague of my wife forwarded a link about an article written by the wife of a former colleague, which demonstrates social network theory; and, a colleague of my wife is not equivalent to the wife of my colleague, which shows non‑commutativity.

My two cents? Pringles® rule.

Ask a silly question, get a silly answer.
Teacher: If you have $4, and you ask your father for another dollar, how much would you have?
Johnny: Four dollars.
Teacher: Young man, you don’t know your addition facts!
Johnny: Ma’am, you don’t know my father!

Johnny’s father and my dad seem to have a lot in common. But my dad would have been proud of me yesterday. While walking home from the local coffee shop, I noticed a corner of a dollar bill on the ground. Not the whole bill, mind you, just a corner that had been ripped off. I thought not much of it, until two feet later I saw another scrap of the dollar bill… then another… and another… I know and understand Calculus, and I realized that a lot of little things can add up to a lot, so I spent 15 minutes scouring the area for as many pieces of the dollar bill as I could find. I took them home and asked my sons, “Wanna do a puzzle?” We spent a half-hour reconstructing the bill and taping it together. The pictures below show the before and after:

The bill was not in good enough shape to be accepted by a vending machine (too much tape, I suspect, and the missing piece on the right side surely didn’t help, either), but it was in good enough shape for my bank to give me four shiny quarters in exchange for it.

I know that a penny saved is a penny earned. But what is a dollar found? And the bigger question: What should I do with my new-found wealth? I decided to buy a lottery ticket. The state gambling commission organized a raffle that boasted an infinite amount of money as the prize. To my great surprise, I won! When I showed up to claim the prize, they told me it would be disbursed as 1 dollar now, 1/2 dollar next week, 1/3 dollar the third week, 1/4 dollar the week after that, and so on. But the joke’s on them. My winnings for the third week will include a one-third cent piece, and that’s gotta be worth something, right?

(Note: Almost everything above is true. I really did find the pieces of a dollar bill on the ground yesterday.
As best I can tell, the bill had been on the lawn when it was cut by the blades of a power mower. And my bank really did give me four quarters in exchange for the taped-up, reconstructed version.)

My friend Pat Flynn, a teacher at Olathe East High School, recently told me about his childhood experience with math education.

Sister Mary Constance only used her ruler to measure pain, not distance.

That’s one of the funniest lines I’ve heard in a long time! Along similar lines…

What do you get if you cross a zero and a pigeon?
A flying none!

Pat is a calculus teacher, and I once heard some students discuss his humor.

When our calculus teacher would tell us a joke, my friend would laugh twice: once when he first heard it, then again when he got it.

Here are some jokes that Pat would surely like his calculus students to suffer through.

What did the calculus teacher ask the dazed and confused student?
“Young man, have you been taking derivatives?”

What’s the difference between a mathematician and a physicist?
A physicist will take the average of the first three terms of a divergent series.

But it’s not just calculus… Pat enjoys making students groan at every level, so here are some all-purpose jokes.

Why did the variable break up with the constant?
The constant was incapable of change.

Did you hear about the bodybuilding mathematician who was always positive?
He had nice abs().

The day before mid-term exams, the calculus professor allowed 10 minutes at the end of class for questions. When one student asked the professor how many problems would be on the exam, the professor replied, “I think you will have a lot of problems on the exam.”

“Well, sir,” the student continued, “do you have any suggestions for what I can do to prepare?”

“Yes,” he said. “Just study the old exams. The mid-term exam will have the same types of problems, just the numbers will be different. But not all of the numbers will be different.
Both π and e will be the same, of course, and there’s a reason it’s called Planck’s constant…”

Before dismissing the class, the professor warned that there would be no acceptable excuses for missing the exam. Upon hearing this, the class clown said, “What about sexual exhaustion?”

“I’m sorry, Jason,” said the professor. “You’ll just have to write with your other hand.”

Some things I’ve noticed…
• Algebra is x sighting.
• Rational people are partial to fractions.
• Geometricians like angles… to a degree.
• Vectors can be ‘arrowing.
• Calculus teachers can go on and on about sequences.
• Translations are shifty.
• Complex numbers are unreal.
• Most people’s feelings about integers are positive.
• On average, people are mean.

Inquiring minds want to know, so here are answers to questions that you’ve surely been pondering.

Q: If one man can wash one stack of dishes in one hour, how many stacks of dishes can four men wash in four hours?
A: None. They’ll all sit down together to watch football.

Q: Why don’t members of the Ku Klux Klan study Calculus?
A: Because they don’t like to integrate.

Q: What did the circle say to the tangent line?
A: “Stop touching me!”

Q: Why did the statistician cross the interstate?
A: To analyze data on the other side of the median.
Skokie ACT Tutor

Find a Skokie ACT Tutor

...During college, I tutored children in math and English in Milwaukee. I enjoy working with children, so I make a good tutor. I love seeing the growth of a child during a tutoring relationship.
15 Subjects: including ACT Math, English, writing, grammar

...The pure enjoyment of the music it plays and the instrument itself has prompted me to perform in several orchestras and to start two performing woodwind quintets. I find my increasing knowledge of the literature for clarinet to be fascinating, and my personal library of solos, ensembles, clarine...
15 Subjects: including ACT Math, reading, grammar, writing

...I teach a discrete math course at a university entitled Quantitative Reasoning. The text is "For All Practical Purposes". I also teach Intermediate Algebra, College Algebra, Trigonometry and Calculus at a university.
11 Subjects: including ACT Math, calculus, geometry, GRE

...I look forward to helping you or your child be successful. During my eight years of teaching high school, I have taught Algebra 1 for the equivalent of 5 years (including credit recovery summer school). Enhanced by my teaching in the upper-level math courses, I help students focus on mastering ...
11 Subjects: including ACT Math, calculus, statistics, geometry

...The way I coach/tutor is heart-based - warm, non-judgmental, sensitive, encouraging, while also keeping the student accountable and on track. I use my creativity to explain concepts and to make sense of them at the level of the student. I also use humor to lighten the "load" and make math fun.
10 Subjects: including ACT Math, geometry, algebra 1, algebra 2
Gravitation Ppt Presentation

Presented by Anhadh Singh, Class: 9th 'A', Roll No: 24

1) Gravitation

The force with which the earth pulls objects towards itself is called the gravitational force.

i) Gravitation may be the attraction of objects by the earth. Eg:- If a body is dropped from a certain height, it falls downwards due to the earth's gravity.

ii) Gravitation may be the attraction between objects in outer space. Eg:- Attraction between the earth and the moon. Attraction between the sun and the planets.

The gravitational force is responsible for many phenomena, such as:

• Holding the atmosphere around the Earth.
• Rain falling on the Earth.
• Keeping us firmly on the ground.

NOTE: The force of gravitation is always a force of attraction. It is never repulsive.

Isaac Newton

Isaac Newton was born in Woolsthorpe near Grantham, England. He is generally regarded as the most original and influential theorist in the history of science. He was born into a poor farming family, but he was not good at farming, so he was sent to study at Cambridge University in 1661. In 1665 a plague broke out in Cambridge, and Newton took a year off. It was during this year that the incident of the apple falling on him is said to have occurred. This incident prompted Newton to explore the possibility of connecting gravity with the force that kept the moon in its orbit, and it led him to the universal law of gravitation.

2) Universal law of gravitation

The universal law of gravitation states that, 'Every object in the universe attracts every other object with a force which is directly proportional to the product of their masses and inversely proportional to the square of the distance between them.'

Let two objects A and B of masses M and m lie at a distance d from each other, and let F be the force of attraction between them.
According to the universal law of gravitation, the force between the objects is directly proportional to the product of their masses and inversely proportional to the square of the distance between them:

F ∝ M × m          ...(Eq. 1)
F ∝ 1 / d^2        ...(Eq. 2)

Combining Eqs. (1) and (2), we get

F ∝ M × m / d^2

Or, F = G × M × m / d^2          ...(Eq. 3)

where G is the constant of proportionality and is called the universal gravitational constant. Multiplying crosswise, Eq. 3 gives

F × d^2 = G × M × m

Or, G = F × d^2 / (M × m)          ...(Eq. 4)

The SI unit of G can be obtained by substituting the units of force, distance and mass in Eq. (4): it is N m^2 kg^-2. The value of G was found by Henry Cavendish (1731–1810) using a sensitive balance; it is 6.673 x 10^-11 N m^2 kg^-2.

Importance of the Universal Law of Gravitation

The universal law of gravitation successfully explained several phenomena which were believed to be unconnected:

(i) the force that binds us to the earth;
(ii) the motion of the moon around the earth;
(iii) the motion of the planets around the Sun;
(iv) the tides due to the moon and the Sun.

4) Free fall

The falling of a body from a height towards the Earth under the influence of the gravitational pull of the Earth is called free fall. Galileo observed that the acceleration produced in a freely falling body is the same for all bodies and does not depend upon the mass of the falling body. The uniform acceleration produced in a freely falling body due to the gravitational pull of the Earth is called the acceleration due to gravity, and it is denoted by 'g'.

Acceleration due to gravity

The acceleration due to gravity is denoted by g. The unit of g is the same as the unit of acceleration, i.e. m s^-2. Its value changes slightly from place to place, but for practical purposes it is taken as 9.8 m/s^2.

Calculation of the value of 'g'

Suppose a stone of mass 'm' is dropped from a distance 'R' from the centre of the Earth, whose mass is 'M'.
Then, according to Newton's law of gravitation:

F = G × M × m / R^2          ...(Eq. 1)

Also, F = m × a (from Newton's second law of motion)

Or, a = F / m          ...(Eq. 2)

Substituting the value of F from Eq. 1 in Eq. 2 and cancelling out 'm':

a = G × M / R^2

Since this acceleration is due to gravity,

g = G × M / R^2

To calculate the value of 'g', put in the values of G, M and R:

g = 9.8 m/s^2 (approx.)

Value of 'g' (maximum and minimum)

The value of 'g' is actually not constant, because the earth is not a perfect sphere, so the value of its radius is not the same at all places. Since the radius of the earth is minimum at the poles, the value of 'g' is maximum at the poles. The radius of the earth is maximum at the equator, therefore the value of 'g' is minimum at the equator. The value of 'g' decreases as we go inside the earth and becomes zero at the centre. It also decreases on going above the surface of the earth.

5) Mass and Weight

a) Mass

The mass of a body is the quantity of matter contained in it. It is the measure of inertia, and hence mass is also called inertial mass. The mass of a body can never be ZERO. The SI unit of mass is 'kg'.

b) Weight

The weight of a body is the force with which the earth attracts the body. The weight of a body can be ZERO. Its SI unit is 'N'. The force with which a body is attracted by the earth depends on its mass m and the acceleration due to gravity g:

F = m × g

Since the weight of a body is the force with which the earth attracts the body,

W = F, or W = m × g

NOTE: The weight of a 1 kg mass is 9.8 N.

c) Difference between Mass and Weight

Mass:
1. It is the quantity of matter contained in a body.
2. It is a constant quantity and does not change from place to place.
3. Its SI unit is 'kg'.
4. It is a scalar quantity.
5. It is measured by a pan balance.
6. It cannot be zero.

Weight:
1. It is the force with which a body is attracted towards the earth.
2. It varies from place to place.
3. Its SI unit is 'N'.
4. It is a vector quantity.
5. It is measured by a spring balance.
6. It is zero at the centre of the earth and somewhere in interplanetary space.

d) Weight of an object on the moon

The weight of an object on the earth is the force with which the earth attracts the object, and the weight of an object on the moon is the force with which the moon attracts the object. The mass of the moon is less than the mass of the earth, so the moon exerts a lesser force on objects than the earth does. The weight of an object on the moon is one sixth (1/6th) of its weight on the earth.

Celestial body    Mass (kg)       Radius (m)
Earth             5.98 x 10^24    6.37 x 10^6
Moon              7.36 x 10^22    1.74 x 10^6

6) Thrust and pressure

a) Thrust

Thrust is the force acting on an object perpendicular to the surface. Its SI unit is the newton (N). Eg:- When you stand on loose sand, the force (weight) of your body acts on an area equal to the area of your feet. When you lie down, the same force acts on an area equal to the contact area of the whole body. In both cases the force acting on the sand (the thrust) is the same.

b) Pressure

Pressure is the force acting on unit area of a surface. Eg:- The effect of thrust on loose sand is larger while standing than while lying down, because the same thrust acts on a smaller area. The SI unit of pressure is N/m^2 or N m^-2, which is called the pascal (Pa).

7) Pressure in fluids (liquids and gases)

a) Fluids exert pressure on the base and walls of the container. Fluids exert pressure in all directions. Pressure exerted on a fluid is transmitted equally in all directions.

b) Buoyancy (Upthrust)

When an object is immersed in a fluid, it experiences an upward force called the buoyant force. This property is called buoyancy or upthrust. The force of gravity pulls the object downward and the buoyant force pushes it upward. The magnitude of the buoyant force depends upon the density of the fluid.

c) Why do objects float or sink in water?
If the density of an object is less than the density of a liquid, it will float on the liquid, and if the density of an object is more than the density of the liquid, it will sink in it.

Activity: Take some water in a beaker, and take a piece of cork and an iron nail of the same mass. Place them on the water. The cork floats and the nail sinks. The cork floats because the density of cork is less than the density of water, so the upthrust of the water is more than the weight of the cork. The nail sinks because the density of iron is more than the density of water, so the upthrust of the water is less than the weight of the nail.

8) Archimedes' principle

Archimedes' principle states that, 'When a body is partially or fully immersed in a fluid, it experiences an upward force that is equal to the weight of the fluid displaced by it.' Archimedes' principle has many uses: it is used in designing ships and submarines, in hydrometers used to determine the density of liquids, in lactometers used to determine the purity of milk, and so on.

9) Density and relative density

i) Density: The density of a substance is the mass of a unit volume of the substance. The unit of density is the kilogram per cubic metre (kg m^-3).

ii) Relative density: The relative density of a substance is the ratio of the density of the substance to the density of water. Since relative density is a ratio of similar quantities, it has no unit.
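The formulas above can be checked numerically: g = G × M / R^2 from the free-fall section, and W = m × g from the mass-and-weight section, using the value of G and the Earth/Moon mass and radius table quoted in the presentation. The short Python sketch below (not part of the original presentation) verifies that g comes out near 9.8 m/s^2 on Earth and about one sixth of that on the moon.

```python
# Sketch of g = G*M/R^2 and W = m*g, using the constants the
# presentation quotes: G = 6.673e-11 N m^2 kg^-2 and the
# mass/radius table for the Earth and Moon.

G = 6.673e-11  # universal gravitational constant, N m^2 kg^-2

bodies = {
    # name: (mass in kg, radius in m) -- values from the table above
    "Earth": (5.98e24, 6.37e6),
    "Moon":  (7.36e22, 1.74e6),
}

def surface_gravity(mass, radius):
    """Acceleration due to gravity at the surface: g = G*M / R^2."""
    return G * mass / radius**2

def weight(m, g):
    """Weight is the force of gravity on a mass: W = m * g."""
    return m * g

g_earth = surface_gravity(*bodies["Earth"])
g_moon = surface_gravity(*bodies["Moon"])

print(f"g on Earth = {g_earth:.2f} m/s^2")            # about 9.8
print(f"g on Moon  = {g_moon:.2f} m/s^2")             # about 1.6
print(f"Moon/Earth ratio = {g_moon / g_earth:.3f}")   # about 1/6
print(f"weight of 1 kg on Earth = {weight(1, g_earth):.1f} N")
```

Running this reproduces the presentation's claims: g ≈ 9.8 m/s^2 on Earth, the moon's surface gravity is roughly one sixth of the Earth's, and a 1 kg mass weighs about 9.8 N.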
{"url":"http://www.authorstream.com/Presentation/anhadharora-1948513-gravitation/","timestamp":"2014-04-20T03:37:36Z","content_type":null,"content_length":"136263","record_id":"<urn:uuid:6ffdf5d5-313f-4a8e-80ff-be208cc3b49b>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00557-ip-10-147-4-33.ec2.internal.warc.gz"}
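The float-or-sink rule and the definition of relative density can likewise be sketched in a few lines of Python. This is an illustration, not part of the presentation; the density figures for cork and iron are typical textbook values chosen to match the cork-and-nail activity, not numbers given in the slides.

```python
# Sketch of the float/sink rule ("less dense than water floats") and
# relative density (density of substance / density of water, unitless).
# The cork and iron densities below are illustrative assumptions.

WATER_DENSITY = 1000.0  # density of water, kg/m^3

def relative_density(density):
    """Relative density = density of substance / density of water."""
    return density / WATER_DENSITY

def floats_in_water(density):
    """An object floats if its density is less than that of water."""
    return density < WATER_DENSITY

# Reproduce the cork-and-iron-nail activity from the presentation.
for name, rho in [("cork", 240.0), ("iron nail", 7870.0)]:
    verdict = "floats" if floats_in_water(rho) else "sinks"
    print(f"{name}: relative density = {relative_density(rho):.2f}, {verdict}")
```

As in the beaker activity, the cork (relative density below 1) floats while the iron nail (relative density well above 1) sinks.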