The asymptotic growth of global sections of powers of a complex line bundle
$L$ is a holomorphic line bundle on a compact complex manifold $X$. The Kodaira dimension of $L$ is defined as the maximal dimension of the image of the maps associated to the powers $mL$ ($m \in \mathbb{N}$). I want to prove the asymptotic estimate
$$h^0(X, mL) \leq O(m^{k(L)}).$$
I heard that it is an easy consequence of the Schwarz lemma. Maybe it is similar to the argument used by Siegel to prove the theorem that the transcendence degree of the field of meromorphic functions on a compact complex manifold is not bigger than the dimension of the manifold. But I'm afraid of dealing with meromorphic mappings and singularities in the image, so I cannot complete the argument myself.
Can somebody tell me how the argument goes?
Tags: ag.algebraic-geometry, complex-geometry
3 Answers
Accepted answer: An enlightening and very elementary proof of this fact can be found in the very complete book of X. Ma and G. Marinescu, "Holomorphic Morse Inequalities and Bergman Kernels". You will find it in Chapter 2. Their approach is exactly what you are looking for (only elementary complex analysis in several variables and a slightly modified Schwarz lemma).
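For orientation, here is the rough shape of that Schwarz-lemma argument (a sketch added here, not part of the original answer; the details are carried out in Ma-Marinescu, Chapter 2). Let $\kappa = k(L)$. Near a suitable point $x \in X$, quotients of sections give $\kappa$ local coordinates transverse to the fibers of the Kodaira map. Using the compactness of $X$ and the Schwarz lemma, one finds a constant $C$ such that any $s \in H^0(X, mL)$ vanishing to order greater than $Cm$ at $x$ in those directions vanishes identically. Evaluation of jets of order $Cm$ at $x$ is therefore injective on $H^0(X, mL)$, so
$$h^0(X, mL) \leq \binom{\lfloor Cm \rfloor + \kappa}{\kappa} = O(m^{\kappa}).$$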
Yes, that is the proof I'm looking for. Thank you. – Jun Li Jun 29 '11 at 7:43
Ok! I am glad I gave you the answer you wanted... – diverietti Jun 29 '11 at 14:53
Actually, it is possible to prove the following statement. Set
$$\mathbb{N}(L, X):=\{a > 0 \ | \ h^0(X, aL) \geq 1 \}$$
and let $d$ be the greatest common divisor of the elements of $\mathbb{N}(L, X)$.
Then there exist two positive integers $\alpha$, $\beta$ such that for $m$ large enough one has
$$\alpha m^{k(L)}\leq h^0(X, mdL) \leq \beta m^{k(L)}.$$
For a proof, see [Ueno, Classification theory of algebraic varieties and compact complex manifolds, Lecture Notes in Mathematics 439, Theorem 8.1 p. 86].
This does not seem to work if $L$ is a non-trivial line bundle of finite order. Perhaps the statement needs to be changed slightly? – ulrich Jun 19 '11 at 10:18
Now it should be ok (this is actually the statement that one finds in Ueno's book). Thank you. – Francesco Polizzi Jun 19 '11 at 11:29
Thank you for the reference. I find many topics in this book interesting. – Jun Li Jun 20 '11 at 3:35
For an answer in terms of Bergman kernels associated with line bundles (a little bit of functional analysis is involved), see Theorem 4.2.3 in:
Berndtsson, Bo, An introduction to things $\overline\partial$. Analytic and algebraic geometry, 7–76, IAS/Park City Math. Ser., 17, Amer. Math. Soc., Providence, RI, 2010.
Unfortunately I can't find this book in our library. Thanks for the reference. – Jun Li Jun 29 '11 at 7:37
Portability portable
Stability experimental
Maintainer libraries@haskell.org
Sequential strategies provide ways to compositionally specify the degree of evaluation of a data type between the extremes of no evaluation and full evaluation. Sequential strategies may be viewed as complementary to the parallel ones (see module Control.Parallel.Strategies).
The sequential strategy type
type Strategy a = a -> ()
The type Strategy a is a -> (). Thus, a strategy is a function whose sole purpose is to evaluate its argument (either in full or in part).
Application of sequential strategies
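A minimal sketch of applying a strategy (this assumes the using combinator exported by this module, together with rseq and seqList documented below):

import Control.Seq

-- Evaluate the spine of the list and each element to weak head normal
-- form before returning the list itself.
forceListWHNF :: [Int] -> [Int]
forceListWHNF xs = xs `using` seqList rseq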
Basic sequential strategies
Sequential strategies for lists
seqListNth :: Int -> Strategy a -> Strategy [a]
Evaluate the nth element of a list (if there is such) according to the given strategy. The spine of the list up to the nth element is evaluated as a side effect.
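For example (a sketch; rdeepseq needs an NFData instance for the element type):

import Control.Seq

-- Fully evaluate a single element partway down the list; the spine up
-- to that element is forced as a side effect, and later elements stay
-- unevaluated.
thirdDeep :: [[Int]] -> [[Int]]
thirdDeep xss = xss `using` seqListNth 2 rdeepseq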
Sequential strategies for foldable data types
seqArray :: Ix i => Strategy a -> Strategy (Array i a)
Evaluate the elements of an array according to the given strategy. Evaluation of the array bounds may be triggered as a side effect.
Sequential strategies for tuples
Evaluate the components of a tuple according to the given strategies. No guarantee is given as to the order of evaluation.
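A sketch with pairs, assuming seqTuple2 from this module together with r0 (the strategy that evaluates nothing):

import Control.Seq

-- Force the first component to weak head normal form and leave the
-- second component untouched.
firstOnly :: (Int, String) -> (Int, String)
firstOnly p = p `using` seqTuple2 rseq r0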
High Expectations
A Position of the National Council of Teachers of Mathematics
Question: What does it mean to hold high expectations for students in mathematics education?
NCTM Position
To hold high expectations means to engage all students in cognitively challenging tasks that are simultaneously within reach and rich enough to stretch students as far as they can go. Holding high expectations does not necessarily mean accelerating coursework or presenting material that is more difficult or should be done faster. Teaching with high expectations for all students ensures greater understanding for every student.
Holding high expectations begins with the fundamental assumption of equity—the belief that all students can learn and should be given rich and challenging opportunities to do so. Holding high
expectations means assuming that all students, from prekindergarten through college, are able to handle complexity and engage in mathematical reasoning and problem solving. It is through tasks that
challenge students to stretch and develop their reasoning and problem-solving skills that they learn more. Furthermore, holding high expectations involves recognizing that different students emerge
as talented on different types of mathematical problems and in different topics in mathematics.
Challenging tasks are at the core of mathematical reasoning and sense making, and they provide an introduction to content that students need to learn. The appropriate introduction of this content
helps motivate students to learn more (Stein, Remillard, & Smith, 2007; Silver & Stein, 1996; Stein, Grover, & Henningsen, 1996). Challenging tasks should not be postponed until the end of an
instructional unit; rather they should be used to launch and sustain learning throughout the unit. Meeting high expectations requires effort from students (Willingham, 2009; Bransford, Brown, &
Cocking, 2000). Teachers should challenge students to persevere to experience the rewards of meeting high expectations.
Teachers should assume that students bring to the classroom a diversity of mathematical understanding and backgrounds that can be tapped to enhance learning for all students (Donovan & Bransford,
2005). Therefore, classroom experiences that build mathematical communities to solve problems, communicate reasoning, and make sense of mathematics are key to high expectations for all.
Holding high expectations means giving all students access not only to challenging tasks but also to challenging courses and curricula (Stiff, Johnson, & Akos, 2011; Tate, 2005). This does not
necessarily mean that courses are difficult or accelerated but does mean that they consistently make problem solving the focus for all students. High expectations for all students in mathematical
reasoning, sense making, and communication enable students to learn to identify assumptions, develop arguments, and make connections within mathematical topics and to other contexts and disciplines.
Bransford, J. D., Brown, A. L., & Cocking, R. R. (Eds). (2000). How people learn: Brain, mind, experience, and school [Expanded ed.]. Washington, DC: National Academy of Sciences.
Donovan, S., & Bransford, J. (2005). How students learn: History, mathematics, and science in the classroom. Washington, DC: National Academies Press.
Silver, E. A., & Stein, M. K. (1996). The QUASAR project: The “revolution of the possible” in mathematics instructional reform in urban middle schools. Urban Education, 30(4), 476–521.
Stein, M. K., Grover, B. W., & Henningsen, M. (1996). Building student capacity for mathematical thinking and reasoning: An analysis of mathematical tasks used in reform classrooms. American
Educational Research Journal, 33(2), 455–488.
Stein, M. K., Remillard, J. T., & Smith, M. S. (2007). How curriculum influences student learning. In F. Lester Jr. (Ed.), Second handbook of research on mathematics teaching and learning (2nd ed.,
Vol. 1, pp. 319–369). Charlotte, NC: Information Age Publishing.
Stiff, L. V., Johnson, J. L., & Akos, P. (2011). Examining what we know for sure: Tracking in middle grades mathematics. In W. F. Tate, K. King, & C. Rousseau Anderson (Eds.), Disrupting tradition: Research and practice pathways in mathematics education (pp. 63–75). Reston, VA: National Council of Teachers of Mathematics.
Tate, W. F. (2005). Access and opportunities to learn are not accidents: Engineering mathematical progress in your school. Greensboro, NC: Southeast Eisenhower Regional Consortium for Mathematics and
Science Education.
Willingham, D. T. (2009). Why don’t students like school? A cognitive scientist answers questions about how the mind works and what it means for your classroom. San Francisco: Jossey-Bass.
(October 2011)
NCTM position statements define a particular problem, issue, or need and describe its relevance to mathematics education. Each statement defines the Council's position or answers a question central
to the issue. The NCTM Board of Directors approves position statements.
At 10:00 a.m. two planes leave an airport. If the northbound plane flies at 280 mph and the southbound at 320 mph, at what time will they be 1,000 miles apart? (Answer in a.m.)
Distance = speed × time.
There are two distances and two speeds, but the two distances add up to 1000mi. and the time for both is the same.
how does 320+280=1000
320×time + 280×time = 1000.
You have to find the time.
"320+280=1000" would be 'speed + speed = distance' which doesn't make sense.
so speed + speed = distance means that it equals 600?
Speed + speed = distance is nonsensical and cannot be used. There are two distances that add up to 1000, so you need distance1 + distance2 = total distance. distance1=speed1×time, distance2=
speed2×time. The only unknown here is the amount of time.
so confusing
It does reduce to an equation that looks like a single speed of 600mph, and a single time of ? for a single total distance of 1000mi.
Go back through everything I wrote a little at a time and think about it. I'm sure you'll figure it out.
is the time 4:00 a.m.?
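A worked check of the setup discussed above: the two distances add up, so
320t + 280t = 1000, that is 600t = 1000, giving t = 5/3 hours = 1 hour 40 minutes.
Starting from 10:00 a.m., the planes are 1,000 miles apart at 11:40 a.m.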
Edison, NJ Statistics Tutor
Find an Edison, NJ Statistics Tutor
...For students whose goal is to learn particular subjects, I make sure that they understand the concepts prior to dealing with details. I do believe that ALL students can achieve their academic
goals with a little help in the RIGHT direction. My goal is to make a difference through building self ...
39 Subjects: including statistics, reading, writing, calculus
...Over the course of my career thus far, I estimate that I have written approximately 20,000 lines of C code, including two large data analysis packages. I have experience training graduate
students in C programming, and can certainly help anybody learning C at the beginning to intermediate level....
15 Subjects: including statistics, chemistry, calculus, biology
...My first creation was a cotton dress that consisted of 5 pattern pieces. It was not something I could wear in public, but it sparked my interest. I made another dress using the same pattern,
and learning from my initial mistakes, it was a little better.
58 Subjects: including statistics, English, physics, writing
I am an experienced tutor working with elementary through high school students. I have over 6 years of experience tutoring and 4 years of experience working with middle-school students from
minority backgrounds who are struggling with reading, writing and math. I love working with students from all grade levels and helping them get motivated to set learning goals and to succeed in
25 Subjects: including statistics, reading, English, writing
...I graduated in 2008 from an Ivy League university with a double-major in Biology and Economics. As a former scientist, I can break down a difficult subject area into the component parts, teach
each component, and then help my students put it all together again to master the subject. Though I ma...
14 Subjects: including statistics, geometry, algebra 1, trigonometry
Chapter 6. Further Techniques in Decision Analysis

Example 6.7 As in Example 6.4, suppose Joe has an investment opportunity that entails a .4 probability of gaining $4000 and a .6 probability of losing $2500. Again let d1 be the decision alternative to take the investment opportunity and d2 be the decision alternative to reject it. Suppose Joe's risk preferences can be modeled using ln(x).

First, let's model the problem instance when Joe has a total wealth of $10,000. We then have that

$$EU(d_1) = .4\,\ln(10{,}000 + 4000) + .6\,\ln(10{,}000 - 2500) = 9.1723$$
$$EU(d_2) = \ln 10{,}000 = 9.2103$$

So the decision is to reject the investment opportunity and do nothing.

Next let's model the problem instance when Joe has a total wealth of $100,000. We then have that

$$EU(d_1) = .4\,\ln(100{,}000 + 4000) + .6\,\ln(100{,}000 - 2500) = 11.5134$$
$$EU(d_2) = \ln 100{,}000 = 11.5129$$

So now the decision is to take the investment opportunity. Modeling risk attitudes is discussed much more in [Clemen, 1996].

6.2 Analyzing Risk Directly

Some decision makers may not be comfortable assessing personal utility functions and making decisions based on such functions. Rather, they may want to directly analyze the risk inherent in a decision alternative. One way to do this is to use the variance as a measure of spread from the expected value. Another way is to develop risk profiles. We discuss each technique in turn.

6.2.1 Using the Variance to Measure Risk

We start with an example.

Example 6.8 Suppose Patricia is going to make the decision modeled by the decision tree in Figure 6.4. If Patricia simply maximizes expected value, it is left as an exercise to show

$$E(d_1) = \$1220 \qquad E(d_2) = \$1200$$

So d1 is the decision alternative that maximizes expected value. However, the expected values by themselves tell us nothing of the risk involved in the alternatives. Let's also compute the variance of each decision alternative. If we choose alternative d1, then

$$P(2000) = .8 \times .7 = .56 \qquad P(1000) = .1 \qquad P(0) = .8 \times .3 + .1 = .34$$
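A quick numerical check of Example 6.7 above (a sketch of ours, not from the book; the function names are hypothetical):

-- Expected log-utilities: take the gamble vs. do nothing.
euTake, euReject :: Double -> Double
euTake w   = 0.4 * log (w + 4000) + 0.6 * log (w - 2500)
euReject w = log w

-- euTake 10000  ~ 9.1723  < euReject 10000  ~ 9.2103  (reject)
-- euTake 100000 ~ 11.5134 > euReject 100000 ~ 11.5129 (take)
main :: IO ()
main = mapM_ (\w -> print (euTake w, euReject w)) [10000, 100000]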
Consider a range of frequencies passing through the filter, say 7000 kHz to 7300 kHz, whose arithmetic mean is 7150 kHz. Suppose that at 6750 kHz and 7550 kHz the attenuation has to be 25 dB. In summary: the center frequency is 7150 kHz, the pass bandwidth is 300 kHz, the stop bandwidth is 1300 kHz, and the attenuation is 25 dB. Make the input and output termination resistances 50 Ohms.
Type 7150 kHz into the top box, labeled "Center Frequency (Hz)", in any of a number of fashions, using only the allowed characters 0-9, E (upper case is automatic), and . (a decimal point). All of the following are legal: 7150000, 7.15E6, 7150E3. This is not allowed: 7150 kHz. Following the same formatting, enter 300 kHz in the second box, labeled "Pass Bandwidth (Hz)." Then enter 1300 kHz in the third box, the one labeled "Stop Bandwidth (Hz)."
Enter 25 in the fourth box, labeled "Attenuation (dB)." In the sixth box, labeled "In/Out Resistance (Ohms)," enter the resistance in ohms of the source and load, which is 50.
Click the "Computer Calculation" button in the green "Order" box. The computer will calculate the order and display it in the fifth box labeled "Order." Make an evaluation of the order number. It is
possible that 8 (this example) was more than the number of capacitors and coils (eight of each; for band pass filters, each order requires two reactances) desired. Examine the criterion a second
time. Perhaps less attenuation or a wider stop bandwidth is possible without suffering too much additional interference.
For the moment, assume an order of 8 is acceptable. Click either the button labeled "Tee - Series" for a series-resonant input filter, or "Pi - Shunt" for a parallel-resonant input filter. See "Help - Geometry" for more information.
After that button is clicked, the calculations are made. Depending on how the configuration is set up, the results may be shown immediately. Alternatively, use the "File - Retrieve" menu item to launch the text editor to see the results, or simply compute other filters. At a later time, after exiting the Butterworth Calculator, open the Butterworth filter folder and double-click the files to view the data in a text editor.
Étale fundamental group
Recall that in topology, the fundamental group $\pi_{1}(T)$ of a connected topological space $T$ is defined to be the group of loops based at a point modulo homotopy. When one wants to obtain
something similar in the algebraic category, this definition encounters problems. One cannot simply attempt to use the same definition, since the result will be wrong if one is working in positive
characteristic. More to the point, the topology on a scheme fails to capture much of the structure of the scheme. Simply choosing the “loop” to be an algebraic curve is not appropriate either, since
in the most familiar case (over the complex numbers) such a “loop” has two real dimensions rather than one.
Definition 1.
Let $X$ be a scheme, and let $x$ be a geometric point of $X$. Let $C$ be the category of pairs $(Y,\pi)$ such that $\pi\colon Y\to X$ is a finite étale morphism. Morphisms $(Y,\pi)\to(Y',\pi')$ in this category are morphisms $Y\to Y'$ of schemes over $X$. If $Y'$ factors through $Y$ as $Y'\to Y\to X$, then we obtain a morphism $\operatorname{Aut}_C(Y)\to\operatorname{Aut}_C(Y')$. This allows us to construct the étale fundamental group
$$\pi_1(X,x)=\varprojlim_{Y\in C}\operatorname{Aut}_C(Y).$$
This explanation follows [1].
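Two standard examples (added for illustration; they are not part of the original entry): if $X=\operatorname{Spec}k$ for a field $k$, with $x$ given by a separable closure $k^{\mathrm{sep}}$ of $k$, then $\pi_{1}(X,x)\cong\operatorname{Gal}(k^{\mathrm{sep}}/k)$. And if $X$ is a connected variety over $\mathbb{C}$, then $\pi_{1}(X,x)$ is the profinite completion of the topological fundamental group of $X(\mathbb{C})$.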
Related entries: EtaleMorphism, TopologicalSpace, FundamentalGroup, ClassificationOfCoveringSpaces
Countif On Multiple Sheets
Is it possible to do a countif formula using a range of sheets.
i.e. I have info on 6 sheets (same layout on each) and I want to do a countif "Internet" that's in column G in all the sheets.
Is it possible to count them all together?
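One widely used pattern for this (a sketch: COUNTIF cannot take a 3-D sheet range directly, so the sheet names go in a helper list; the name SheetList and the column range are assumptions, adjust them to your layout). Put the six sheet names in a named range SheetList, then use:
=SUMPRODUCT(COUNTIF(INDIRECT("'" & SheetList & "'!G:G"), "Internet"))
INDIRECT builds one reference per sheet name, COUNTIF returns an array of per-sheet counts, and SUMPRODUCT adds them up.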
Related Forum Messages:
COUNTIF On Multiple Sheets..
I have a workbook with a lot of sheets; each sheet is identical in format. What I am trying to achieve is a way of counting all the occurrences of a name in a range of cells on all sheets.
To try and explain: each sheet has a drop-down list in cells C5:V5, and I need a way for a summary sheet to show how many times "J.Bloggs" appears across all sheets in the range C5:V5, but am finding it impossible.
Using Countif To Look At Multiple Sheets???
I am trying to create a formula that will tell me how often our sales reps are going to specific accounts. I have a master list of each persons accounts and next to each account name there will be a
cell with a formula in it. This formula will look at the cell next to it, "the account name", and then count how many times that name comes up over each months sheet. I was also wondering if I can
then get the average over 12 months in the same formula. If not thats ok.
The problem I am having is that I keep getting an error #VALUE!. I had no problem with it looking at only one sheet, but when I do more than one it gives me an error. I think it is the was I am
breaking up each sheet. I have tried to do it every way I can think of... ,;) Nothing seems to work.
Countif Over Multiple Sheets
I'm trying to count the number of sheets that have the word, "Active" in the same cell on each sheet. Here's the formula I'm trying to use:
When I do this I get the #VALUE! error.
I will have any number of sheets that say Inactive or Active and I want to have a "master sheet" that will tell me how many sheets say active and how many say inactive.
Countif Function With Multiple Sheets
Can I use the "countif" function using a group of sheets as the range and the criteria on a separate sheet or manually typed in?
I have tried and continue to get the #VALUE error.
Can I do the same thing and perform the "countif" function by using a specific value as the way to count?
Countif Across Sheets
I have one sheet with data and want to have the data transferred automatically into another sheet.
Let's say, column A of Sheet1 contains information like
A1 - F
A2 - M
A3 - F
A4 - F
A5 - M
In Sheet2 I want to have a cell in which, e.g., the count of all Fs is kept up to date whenever I alter the information in Sheet1.
I've tried a 3-D countif and also sumproduct(countif(indirect...)),
Countif Function Between Sheets
I'm trying to simply count a range of cells using the COUNTIF function. The range is on a different sheet within the same workbook than the one where the formula is. The formula is
=COUNTIF('Aggregated Results'!L3:L22,"yes"). It returns 0 (zero) for the count, which is incorrect, as three Yes's appear in the range of cells.
Copy From Multiple Sheets (26), PASTE To 1 Sheet From 26 Sheets
I have a workbook with 26 sheets, labelled A to Z. Column A in all the sheets have names from rows A6:A35.
I need a macro or a code to extract all the names from each of the 26 sheets and paste it to a new sheet 'Names' under column A, such that names starting with 'B' paste under all the names 'A' and so
forth till 'Z'.
Using Multiple COUNTIF + AND Formulas
Is it possible to use multiple COUNTIF combined with AND formulas in a single cell?
The current cell equation is =COUNTIF(C14:C83,"Alpha Full")+COUNTIF(C14:C83,"Beta Full")+COUNTIF(C14:C83,"Final Full")
But I need it to add those cells only if another condition also holds, namely if another cell contains a certain month.
For example something like this =COUNTIF(C14:C83,"Alpha Full")AND(b14,"November")
COUNTIF On Multiple Columns
When using the following:
I get what I want.
What I can't figure out is how to get the counts from columns G, J, N, and R which match the criterion I'm looking for, added together for a total.
And in reality, the rows do not go down that far, but they might, so I put 200 in there to be safe.
COUNTIF For Multiple Ranges
How do I use COUNTIF when there are multiple criteria? E.g., I have 3 columns, and I want to count the # of employees in each row if all 3 criteria in columns C, D, and E hold true.
Countif With Multiple Column
looking to use =countif for an address file and want to filter out duplicates of last then first name.
col F has last name
col D has first name
I've tried using the Conditional Formatting funtion, but am not understanding how to incorporate multiple columns,
CountIF Multiple Parameters
I have a long list of past jobs, around 4000+. I have multiple fields, but I really wish to concentrate on these:
1) Job Type - an example would be Medical or Imprint
2) Job # - correlates to when it was done - for example, 91059 would be a job within the 2008-2009 fiscal year.
All the jobs are listed in the first spreadsheet. The second spreadsheet will hold generalized data broken down by the Type of Job and the fiscal year it was done.
I wish to first count how many jobs fit a specific job type. This was easily done:
A7 = Medical
A1:A4121 = the range of the names.
Count comes up as 346, which is correct.
I then want to add another parameter to break up the 346 by fiscal year. It ranges from 05-06 to 08-09. I came up with this:
Since the job # correlates to the fiscal year it was created, anything starting with 9 is a job done in 08-09. Anything starting with an 8 would of course be the 07-08 range. I would have multiple fields, each for a different fiscal year.
However, when I put the 2nd formula in, the function didn't work. It keeps the count at 346, which I know is wrong. I am not sure if I did something wrong here. Been looking at this for an hour and
can't figure out what is wrong. The jobs that are medical within 08-09 fiscal year should be 120, but it keeps at 346.
Countif Multiple Conditions....
I have a table like this:
Name (A).....Date-in (B)....Date-out (C)
I want to count the rows (in the entire table) whose B and C dates intersect with a pair of reference dates (say J1 and J2).
It is a booking table, so I want to know if the apartment is available for the reference dates (i.e., no bookings on those days).
I first tried a simple double conditional to check whether a date lies between two dates, but it didn't work:
Countif With Multiple Conditions?
In my Sheet "List" I have list of persons working on different projects.
I prepared graph after putting conditions on Project Type, Project Size, Project Year & Position (PM Project Manger). Every thing was done a in a nice manner with the help of below formula.
PJ TYPE , PJSIZE, PJYEAR, POSITION are ranges names.
But the problem was occured that in a year if a person work on small project more than once then he will be counted only once. But if he has worked in same year on Medium or Larage project then they
will be counted separately. I tried to oversome the problem with the help of Pivot Table and put manually some legend P1, P2 & P3 against the person name if he is working on same type of project in
same year. then count only P1 in my formula to count how many Project Manager worked on Project. like
Now i am trying that in a separate columm of # of PM there must be a formula which only put P or 1 for a person if he is working on same project in a year but i want that p or 1 only appear against
his first entry i duplicate. for other persons it automatically enter 1 or p if they are appearing only once. i have tried a lot while using countif with multiple conditions but all in vain.
COUNTIF For Multiple Columns
I am trying to tally answers for a survey. Column A specifies one of 3 locations (Boulder, Larimer, Westminster) and column C specifies a grade for services between locations (Not Uniform, Slightly
Uniform, Very Uniform). I was able to tally each separately using 'COUNTIF' and the conditions, but now I would like a total for each of the grades by location, i.e. a count of people answering both
Boulder and Uniform, etc.
Multiple CountIf Criteria
In Excel 2003, I need a countif to check for 2 criteria: (1) the left function looking for the value "Territory" in column A and (2) value > 0 in column G. I only want to count the rows where both
the criteria are met. I have tried different combinations of countif including "and" in the formula, but I cannot get it to work. What is the proper syntax?
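One common workaround in Excel 2003 (a sketch; the 500-row limit is an assumption, extend as needed):
=SUMPRODUCT((LEFT($A$2:$A$500,9)="Territory")*($G$2:$G$500>0))
Each comparison yields an array of TRUE/FALSE values; multiplying coerces the booleans to 1s and 0s, so SUMPRODUCT totals exactly the rows where both conditions hold.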
Countif With Multiple Ranges
Countif: Is there a way to have a single criterion (a person's name) and multiple ranges, e.g. A6:A10 and C6:C10, and receive the count of that criterion across the ranges? I know there is, I just can't get it.
I need to count a person's name entered in multiple ranges (cells or areas) on the same worksheet. I cannot make one big range because I will need to do the same for B6:B10 but for a different "need": the A column and C column being "completed", the B column being "not completed". I have tried =countif(a6:a10 + c6:c10, cell_with_persons_name) and for obvious reasons that won't work,
Countif Multiple Criteria.
I'm trying to count multiple criteria from a second page in a workbook; all the formulas I've looked up and tried do not seem to work... here are the formulas I've tried. DKOBULAR is the name of the 2nd page, and D is the column used for the different resolves.
=COUNTIF(DKOBULAR!D:D="resolveA")+COUNTIF(DKOBULAR!D:D="resolveB")+COUNTIF(DKOBULAR!D:D="resolveC")+ COUNTIF(DKOBULAR!D:D="resolveD")
=COUNT(IF(DKOBULAR!D:D="resolveA",IF(DKOBULAR!D:D="resolveB",IF(DKOBULAR!D:D="resolveC",IF(DKOBULAR! D:D="resolveD")))))
Using Countif With Multiple Corresponding Criteria
What I'm trying to do is make a logbook for a machining center. Each part has an op10 and an op20, essentially front and back. And each part number falls into the category of OS or FS. I've used AND logic to make tables in hidden columns to be used by a COUNTIF statement to determine my totals, i.e. to determine if a scroll is completed, op20 has a value of 1 AND column C is "OS".
I use
=IF(AND(A9=1;C9="OS");1;" ")
Then I COUNTIF where the criteria is 1 in the column I created with that statement.
That works just fine. Now what I want to do is be able to create daily totals of OS and FS by simply modifying a variable date in a formula. So I'd like to essentially say: count if column C = OS, the corresponding column D = 1, and the corresponding shift date = 10.02.12 (date to be variable). I'm at a wall here. Is there any way to do this somewhat simply?
Using COUNTIF In Multiple Worksheets
I have 50 worksheets with shirt sizes. I am trying to count the number of instances that we have "XL".
So I use the formula:
=COUNTIF(worksheet1!A1:worksheet50!A1, "XL")
I also tried:
=COUNTIF(worksheet1!A1, "XL")+COUNTIF(Worksheet2!A1, "XL")+COUNTIF(worksheet3!A1, "XL")...etc
Neither works; in fact, Excel decides just to leave the function as written above in the cell when I enter out of it or go to another cell. Sometimes it works if I COUNTIF multiple cells in one worksheet, but it will leave the formula as-is if I try to manipulate it.
COUNTIF Across Multiple Worksheets
I'm after a formula (or a piece of quick code) to count how many times a value occurs in a range of sheets. I've tried COUNTIF, but it only seems to work for one sheet, not a range.
Countif Multiple Criteria
I need to create a formula which counts the number of times a username appears in column X based on a given value in Column Y. This data will not be static - will need to be refreshed regularly.
Countif does not support multiple criteria - what is the best way to create this formula?
Multiple Criteria Countif Or Sumproduct
I haven't been this deep into excel before. The deeper I look, the more potential I recognize, the more amazed I get. That being said, I have come to a tough count issue. Let me attempt to explain as
precisely as possible.
My current worksheet is large but I am only particularly concerned with two columns of information (Regions) and (Days). The logic I am attempting is something along the lines of Count If Region =
East, or West, and Days is greater than 0, less than 60.
I am open to any and all suggestions on how to tackle this situation. I have been able to achieve similar counts by using pivot tables but the dynamic nature of these two columns presents some
difficulties that my “new user” mind has been unable to work through.
Multiple Criteria For Countif Functions
I need to create a formula that counts the number of times that an age range appears within a column. In column G, there is a list of ages based on a demographic collection. The ages range from 13 to
50+. I want to designate another cell to count the number of times the characters between 13 and 18 occur within that column. I have =COUNTIF(G8:G20,"13") How do I add "14", "15", "16", "17", and
COUNTIF Multiple Ranges Dates
I want Excel to count the number of items in a range that I have named "Name", and I have another range that I have named "Date" which (obviously) contains dates. The dates are in order. I want to count the number of items in "Name" that are associated with the date in the "Date" range.
The problem is I want to count the names in a date range, which is today's date through to 30 days after. I already have today's date posted automatically in K1 [by the formula =TODAY()].
COUNTIF Function With Multiple Criteria
I need to count rows that meet 2 criteria.
I have seen this help page
but that counts rows with "criteria 1" OR "criteria 2"...
I need to count rows that fulfill "criteria 1" AND "criteria 2"
i.e. - count the rows that have today's date AND a cell that says "COMPLETE".
Ideally it would be as easy as =countif(A:F,"today()","COMPLETE"), but that doesn't work... any way around this???
Multiple COUNTIF Statements With Different Criteria
I'm a marketer trying to find all calls within a week that are longer than 300 seconds. I'd prefer not to use a pivot table because of client issues.
Working Out Multiple Countif's?
Following on from: http://www.excelforum.com/excel-misc...s-formula.html
I used the above formula to work out how many days a call went out of our SLA, that works great (puts the values into K)
Using Countif Function For Multiple Worksheets
I am trying to use COUNTIF to count the number of times a unique item occurs in multiple worksheets.
For example, I want to count the number of times "ITEM1" occurs in row 1 of sheet1, sheet2, sheet3, sheet4, etc. It may look like this:
Sheet1 = 4 entries
Sheet2 = 22 entries
Sheet3 = 5 entries
Sheet4 = 10 entries
So the entire count would be 41 total.
Countif. Multiple Data Ranges
I have cells containing data within C15:C22
I also have cells containing data within E25:E32
Some of these cells have the value '5' in them.
I want to have a running total of all the cells in these two ranges that have a value of '5' in them.
I did this formula:
This works okay but unfortunately this only covers the first data range. How do I specify the other data range in this formula?
Countif Function Work On Multiple Criteria
Can countif function work on multiple criteria to look on?
this are the criteria.
SA SL 1.0SA SL 0.5SA VL 1.0SA VL 0.5SA SLWOP 1.0SA SLWOP 0.5SA VLWOP 1.0SA VLWOP 0.5SUSPRD
Countif(Range, Multiple Referenced Values?)
I was wondering if it's possible to use a countif so that the condition can be a range of values also? For example, A1 = 14.12.08, A2 = 15.12.08 and A3 = 16.12.08
So I can have a countif that looks like: Countif(B1:B300, A1:A3). It doesn't work when I try it, but was wondering if there's a way to achieve the same result? So if B1:B300 contained any of the
values in A1:A3 it would count the amount of times they appear in the B range.
#DIV/0! Dividing Multiple Countif Functions Into Value
I have this formula
= COUNTIF(AT6:AY6,"F")+COUNTIF(AT6:AY6,"P")+COUNTIF(AT6:AY6,"M")+COUNTIF(AT6:AY6,"D")
that returns the number 2 (which is correct). However, if I precede it with
it returns a DIV/0 error, even though AZ6 has a value of 24.
Surely 24/2 would return a value of 12? NB AZ6 cell value is derived from the result of a formula.
2003: COUNTIF/SUMPRODUCT, Multiple Criteria W/Wildcard
I'm trying to write this but it returns a 0 when I know there are 3 records that match this criteria: =SUMPRODUCT(('Invoice-Detail'!J2:J50="NewJob_Post.NET")*('Invoice-Detail'!H2:H50="KY_*")). I
think the problem is in the wildcard character. I don't know if I should be using COUNTIF or SUMPRODUCT or something else?
Combine Workbooks With Multiple Sheets Into 1 Multiple Sheet Workbook
I have about 20 workbooks with different file names for different projects all saved in the same folder. Each workbook has about 10 worksheets and each worksheet is named in a similar fashion in each
of the 20 workbooks (e.g. revenue, cost, variance, etc.). I want to pull out a worksheet named 'forecast' from each workbook into a master workbook, so that the master workbook would contain the 20 forecast worksheets.
Count Unique Logs With Multiple Conditions Of Multiple Sheets
I've got no clue about all this, but I've had to get specific formula examples and fill in the blanks in order for my timesheet to work. There's just one final problem if somebody could please help.
This is a timesheet for a 5 day work week. I need to count the number of unique log numbers for a specific activity. The log numbers counted must be unique across the entire week, not just for each
day, which means I want the formula to count the unique log numbers across multiple sheets.
The formula also has multiple conditions. I got 2 columns. The first part of the formula needs to verify a word, say, "split" and if it does it checks the adjacent cell for a unique log number. If
both arguments are true, it counts the log as 1 unit.
Here is a working formula for only one page.
Here are 2 problems with this formula:
1. It will count if it encounters a blank cell in the log numbers the first time (which will happen, as not every activity we do has a log #), but it will stop counting if it encounters a second blank cell.
2. I don't know how to make it work across several sheets.
This is an alternate formula which works and skips the blank cells, but I don't know how to add the multiple condition of "split" and have it work across multiple sheets. I just copied it from Microsoft. As I said, I don't understand it, I just fill in the blanks.
SUM(IF(FREQUENCY(IF(LEN(C4:C29)>0,MATCH(C4:C29,C4:C29,0),""), IF(LEN(C4:C29)>0,MATCH(C4:C29,C4:C29,0),""))>0,1))
Adding Multiple Cells From Multiple Sheets With Sumif Function
I'm trying to put together a spreadsheet that tracks disc capacity increases, affected by any incoming projects. I've managed to do so for one project, but would like to do so for up to 10. The way I've designed the solution (I'm sure there are far more elegant ways, but hey) is thus:
A forecast worksheet keeps track of a grand total, taking information from sheets P1 -> P10 (being projects 1 to 10). I am unable to figure out a way to add up all the increases from all 10 project worksheets with one succinct formula. What I use so far is: ='P1'!C83+SUMIF('P1'!E82,"=2009 - Q1",'P1'!D82)
Using Multiple Sum Ranges In Sumproduct() & Countif() In Array
My problem is:
1. In column G I put logic for Fail and Obtained Marks.
2. Now in column H I want to use this formula, which I obtained from this forum, to get the position of students.
But the text value "Fail" in G2:G7 is getting position No. 1, and I've confirmed the reason by using Evaluate Formula as well.
3. I got a solution by replacing "Fail" with 0, creating column I, and then putting this formula in column H ........
Countif Multiple Conditions: Look At Two Columns And Set A Criteria To Count
I want to be able to look at two columns and set a criterion to count: look at column A, and if it's blank, then look at column B, and if it has a value of more than 0, then count.
    A     B
1         1.00
2   Yes   4.78
4         5.00
5   Yes   4.89
6         11.99
So this example would count 3
Using 2 Variables To Return Multiple Items From Multiple Sheets
I have a need to populate a summary worksheet using two variables to find data in two or more other worksheets.
I find writing out what I want helps sometimes, so let me try it here.
So my variables are:
Product (there are 22 products)
Supply Less than (insert number)
These are the two criteria I want to use to produce a result.
The next issue is I have 300 stores that carry said 22 products. Each store has a unique number 0001, 0002, 0003 etc. So in a separate worksheet I have a list of the store numbers, and then the
products. So each product has the store's number to the left in Column A, Column B has the product name, Column C has the quantity on hand.
What I would like to do on the summary page is select the product, then select the supply less than or equal to 'x', and then have the stores with the selected product less than or equal to x display below.
The last part of this is then to display (data from another sheet) on the summary page the quantity of the product selected that is available at the warehouse for that store.
Track Changes On Multiple Selected Ranges On Multiple Sheets
I need to be able to track changes on selected ranges on multiple sheets, but Excel does not appear to be able to do this. It only appears to allow me to select multiple ranges on the same sheet.
Is there a way to track changes on multiple selected ranges on multiple sheets?
Copying VB To Multiple Sheets In Multiple Work Books
I've put some sheet code together that I need copied to 12 sheets (Jan to December) in 24 workbooks (each workbook has the same sheet names). I don't want to alter the actual content of the Excel sheets; I just need to copy VB code from a template (in the VB editor) to the 12 sheets in each of the workbooks. Is this possible to do with VB, or do I need some other utility, since I'm using the VB
Print Multiple Ranges From Multiple Sheets Userform
I inherited a spreadsheet that had a userform where the user checked off which 'pages' he wanted to print. The OK button routine used if statements to run a routine for each 'page.' Here's an
example of the original code for one page:
Sub Button2_Click()
Run "HorizontalPrintStuff" 'generic landscape pagesetup
With ActiveSheet.PageSetup 'specific pageset settings
.RightFooter = " Construction Assumptions"
.PrintArea = "CONSTRUCTION" 'the named range to print
.Zoom = False
.FitToPagesTall = 1
.FitToPagesWide = 1 'this changes depending upon the page selected
End With
End Sub
The problem was it printed each page as a separate print job; and if you print to Adobe, you get several files, not one file. That, and it took a long time to run.
So I tried a different tack. If a checkbox is true, then the print area is set to that named range. If there were more than one named range on a sheet to be printed, I consolidated them. I did this with a bunch of if statements - very cumbersome.
'Sheet3.ResetAllPageBreaks 'disabled due to errors
Run "HorizontalPrintStuff" 'generic landscape pagesetup
With ActiveSheet.PageSetup 'specific pageset settings
.PrintArea = "DEVBGTALL" 'the named range to print
.FitToPagesWide = 4 'this changes depending upon the
.FitToPagesTall = 1
End With
I haven't shown all the code cause it goes on for 12 sheets containing 16 different printareas.
My current muck-ups are:
1) it prints every printarea/named range on a given sheet (I took out all the if statements trying to debug everything.) Is there another conditional argument that allows for multiple 'trues'?
2) the pagebreaks in printarea/named ranges that are multiple pages (like a 48 month schedule) won't stay set. I've tried both VPageBreaks(3).Location:= and .VPageBreaks.Add Before:=
3) the Sheet1.select false argument is always adding a random sheet to the end of the print job. Don't know why.
I can do all this in a recorded macro, just not the selection userform. I've thought about copying to another sheet or hiding columns and rows then printing, but that seems just as cumbersome.
To recap, I want to print out, as one print job, multiple print areas from multiple sheets, based upon checkbox selection on a userform.
Combine Multiple Sheets From Multiple Books Into One
I have 6 spreadsheets all within the same folder; these are pretty much identical (rows, columns, sheets within them) apart from the names of the files.
I then have a master spreadsheet within the same folder where I want to combine all the data from all the sheets within each book (if that makes sense!), apart from the data on the last sheet within each book as this is the reference data, onto one sheet within this master file. If possible, I only want to copy across rows which have complete data too.
So: (names not correct)
From book1.xls copy all data on sheets (sheet1, sheet2 etc) except last sheet
From book2.xls copy all data on sheets (sheet1, sheet2 etc) except last sheet
combine onto masterfile.xls on sheet1.
I have searched on here and can only find how to do it with the first sheet in each workbook, not looping through all the sheets in each book. Please see below.
Multiple Column Lookup Across Multiple Sheets
After going through multiple threads in the forums, I got this code to do a multiple-VLOOKUP method.
It works perfectly on a sample sheet, but when I'm trying to implement it in a sheet with too much data, it always fails.
I have attached the sheet I am trying the formula on. I have grayed out the columns which need formulas. The data is picked out from the second sheet.
This is how I have modified the formula to suit me:
=INDEX(Data!$G$2:$G$663,MATCH(1,(Data!$A$2:$A$663=$C2)*(LEFT(TEXT(Data!$E$2:$E$663,"mm/dd/yy hh:mm:ss"),8)=$I2),0))
<<Please note that all the dates and numbers in the sheet are in "text" format for ease of use>>
Comparing Multiple Columns In Multiple Sheets
What I'm trying to do is develop a formula to:
- look at one column in Sheet1 and, if a value in that column equals a value in another column in Sheet2, then display an X
Sum On Multiple Critera Across Multiple Sheets
I’m relatively new at using the INDIRECT function, and am having a hard time setting up the syntax for ranges, and even knowing if those ranges will work. I have a workbook with multiple sheets
(let’s call them Program sheets) created from a template that contains variable numeric data that I need to sum by creating a formula on a Summary sheet within the same workbook. The criteria for
IDing and summing the data from the Program sheets is spread over 3 cells in adjacent columns (let’s call them $E7, $F7 and $J7) on the Program sheets. A string concatenation of these cells will not
create a unique string value on any one sheet as there are potentially multiple rows of data on each sheet and across sheets that could have the same value string. The Summary sheet is a report that
contains hard-coded values in adjacent cells ($C4, $D4 and $E4) that will match values found in columns E, F and J from the Program sheets.
I’d like to have the formula sum all values within the range P7:AA70 across all the Program sheets when the entries into E, F and J cells (from Program sheets) match $C4, $D4, and $E4 cells on the
Summary sheet, keeping in mind that there could be multiple instances of the same values over several rows within the Program sheets (that’s OK, because I want each instance to be part of the sum)
Here’s a formula I created for summing values found in a range based on a single matching criteria across sheets. Can this be adapted to the new sum formula I need? =SUMPRODUCT(SUMIF(INDIRECT("'"&
Sheet_List&"'!G7:G70"),'By-Month Summary'!$G7,INDIRECT("'"&Sheet_List&"'!P7:P70")))
Sheet_List is a named range on a separate tab that lists the names of the Program sheets that I need to sum from. P7:P70 is the range that the summable data lies in.
G7:G70 is the range that contains values that need to match the criteria on the By-Month Summary sheet cell G7. For the new formula, I no longer want to sum based on criteria in the G column, but
rather on criteria in the multiple columns I outlined in my diatribe above.
Run-time Error '1004' Method 'Add' Of Object ' Sheets' Failed Adding Multiple Sheets
I have been running a simulation for about 18 hours now and just received:
Run-time error '1004':
Method 'Add' of object 'Sheets' failed
I have been creating new sheets, importing data, pulling some values from the data then deleting the respective sheet. I am using:
ActiveWorkbook.Sheets.Add after:=Sheets(Sheets.Count)
The sheet is actually being added to the workbook, seemingly before the error. I resume the code, and a new sheet is placed in the workbook and it errors again. The debugger stops and highlights the code above. The sheet count number was 10895 at the error, just as an indicator of how many times the simulation has performed successfully. I am hoping this is something I can fix without having
to start over...
Gas Compression
Gas compression is done to increase the pressure of a gas; it is accompanied by a change of state of the gas, meaning a change in the temperature and volume of a quantity of gas undergoing compression. If the pressure of the gas is raised from $P_1$ to $P_2$, the compression ratio for the process is defined as $P_2/P_1$. How the physical state of the gas changes depends on the process, whether it is isothermal, adiabatic or, typically, polytropic.
Adiabatic compression: For gas compressors without any cooling, the gas temperature rises with the rise in pressure. The temperature rise is given by the equation

$$\frac{T_2}{T_1} = \left(\frac{P_2}{P_1}\right)^{(\gamma-1)/\gamma}$$

Here the subscripts 1 and 2 correspond to the initial and final states of the gas respectively, and $\gamma$ is the ratio of specific heat at constant pressure to specific heat at constant volume for the gas ($C_p/C_v$). For such an adiabatic process, the head developed by the compressor is

$$H_{ad} = \frac{\gamma}{\gamma-1}\, R\, T_1 \left[\left(\frac{P_2}{P_1}\right)^{(\gamma-1)/\gamma} - 1\right]$$

Multiplying this developed head by the inlet mass flow of the gas (equivalently, using the inlet volumetric flow rate $V$, since $\dot{m} R T_1 = P_1 V$) gives the shaft power required to drive an ideal adiabatic compressor:

$$W_{ad} = \frac{\gamma}{\gamma-1}\, P_1 V \left[\left(\frac{P_2}{P_1}\right)^{(\gamma-1)/\gamma} - 1\right]$$
In these equations R is the 'Gas constant', obtained by dividing the 'Universal gas constant' by the molecular weight of the particular gas.
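A quick worked example (our numbers, not from the original article): compressing air ($\gamma = 1.4$) adiabatically from 1 bar and 300 K to 5 bar gives

$$T_2 = 300 \times 5^{0.4/1.4} \approx 300 \times 1.58 \approx 475\ \mathrm{K}$$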
Isothermal compression: When the gas being compressed in a compressor is cooled with a jacketed flow of coolant, the process is an isothermal process. The work done in this case, and hence the power required to run this type of cooled compressor, is the theoretical minimum. This minimum possible power to compress a gas from $P_1$ to $P_2$ is given by

$$W_{iso} = P_1 V \ln\left(\frac{P_2}{P_1}\right)$$

Here subscripts 1 and 2 correspond to the inlet and outlet states of the gas, and $V$ is the inlet volumetric flow of gas.
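For comparison (again with assumed numbers): for the same 1 bar to 5 bar compression with $P_1 = 10^5\ \mathrm{Pa}$, the isothermal work is $P_1 \ln 5 \approx 1.6 \times 10^5\ \mathrm{J}$ per cubic metre of inlet gas, the theoretical minimum for this pressure ratio.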
Polytropic compression: Adiabatic (constant entropy) and isothermal (constant temperature) compression processes are ideal and theoretical in nature. Actual compression processes are polytropic, typically described with the equation

$$P V^{\,n} = \mathrm{constant}$$

Note that when $n$ is replaced by $\gamma$, this process becomes adiabatic. If $n > \gamma$, the work done for polytropic compression, and the power required to run such a compressor, is more than that required for an ideal frictionless adiabatic compressor. We get the head developed and power required for polytropic compression by replacing $\gamma$ with $n$ in the equations for adiabatic compression.
ideal frictionless adiabatic compressor. We get the head developed and power required for polytropic compression by replacing by n in the equations for adiabatic compression.
In these equations R is the ‘Gas constant’ which is obtained by dividing the ‘Universal gas constant’ by molecular weight of a particular gas. V is inlet volumetric flow of gas. | {"url":"http://www.enggcyclopedia.com/2011/05/gas-compression/","timestamp":"2014-04-20T01:18:42Z","content_type":null,"content_length":"56554","record_id":"<urn:uuid:ddbde85d-4553-41c5-af2b-985aeea72d84>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00634-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tujunga Math Tutor
Find a Tujunga Math Tutor
...I've simulated synchronization properties of nano-cantilevers, created quasicrystals, and simulated ion trap dynamics. I spend approximately 5-7 hours a day coding in MATLAB. I have taken a
class on numerical methods at Caltech that was done half in Mathematica, half in MATLAB.
26 Subjects: including SAT math, finite math, algebra 1, algebra 2
...Unlike other aspects of music, there is no surefire way to write a song. In my 12 years as a songwriter, I have written hundreds of songs in the genres of pop, rock, folk and country. I have
been a writer under BMI since 2006.
42 Subjects: including algebra 1, grammar, piano, elementary (k-6th)
...I will also help you find your personal voice as a writer, if that is something you are seeking guidance with. Elementary School is the the time when study and learning habits begin to form,
and it is important to be taught the right skills and strategies that work for that particular type of le...
16 Subjects: including prealgebra, algebra 1, reading, writing
...Styles include: lindy hop, charleston, balboa, east coast swing. I have written a number of very successful entrance and application essays, for my undergrad and graduate program, as well as
study abroad programs. I have experience assisting students with essay writing and applications, includi...
56 Subjects: including algebra 2, American history, French, geometry
I've been teaching different levels of math from basic to calculus for about 28 years: 15 years in Armenia and 13 in the U.S. It's been a great pleasure to be a part of my students' success--to
see their growth, to help them face challenges and overcome fears. I've developed a lot of instructional materials that make math easy to deal with.
14 Subjects: including calculus, Russian, CBEST, ISEE | {"url":"http://www.purplemath.com/tujunga_ca_math_tutors.php","timestamp":"2014-04-16T07:54:30Z","content_type":null,"content_length":"23633","record_id":"<urn:uuid:6b63e09b-a7c3-41bd-911b-267fecbe12bf>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00512-ip-10-147-4-33.ec2.internal.warc.gz"} |
Minimum number to remove from a complete graph
October 14th 2010, 10:30 AM #1
At least how many edges should we remove from a complete graph G with n vertices so that G contains no Hamiltonian cycle?
So I want the minimum number to remove so that there is a graph on n vertices with that many edges removed that has no Hamiltonian cycle.
I think we should see two things if we find this minimum number:
- First, we should prove that if we remove that many edges then the graph has no Hamiltonian cycle.
- Secondly, we should prove that if we remove less edges from the graph, then it still has a Hamiltonian cycle.
Unfortunately I can't see them, but I would be glad if you helped me!
Thank you!
The answer is n-2. Can you show why removing n-2 edges is sufficient to prevent G from containing a Hamiltonian cycle ?
Now we will prove why n-2 edges are necessary. The base case is easy to see. Assume that the claim holds true for up to n=m. For n = m+1, suppose removing m-2 edges is sufficient. Then choose such
an edge which is removed and remove a vertex v that was incident to it, to form graph G'. Thus n decreases to m and the number of edges removed from G' is at most m-3, fewer than m-2. So, by our induction hypothesis, G' must
contain a Hamiltonian cycle. If v has sufficient number of edges to G', then we can include it to expand the Hamiltonian cycle. If not, then in G, a large number of edges that we have removed
must have been incident to v. Note that we can conclude the same for every vertex which had an incident edge removed. Can you complete the proof ?
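If you want to experiment, a brute-force check for small n illustrates the sufficiency construction in the hint (removing a "star" of edges at one vertex; a Python sketch, not a proof):

from itertools import combinations, permutations

def has_hamiltonian_cycle(n, removed):
    edges = {frozenset(e) for e in combinations(range(n), 2)}
    edges -= {frozenset(e) for e in removed}
    for perm in permutations(range(1, n)):        # fix vertex 0 to avoid counting rotations
        cycle = (0,) + perm + (0,)
        if all(frozenset({cycle[i], cycle[i + 1]}) in edges for i in range(n)):
            return True
    return False

n = 6
star = [(0, i) for i in range(2, n)]        # n-2 edges, all incident to vertex 0
print(has_hamiltonian_cycle(n, star[:-1]))  # True: only n-3 edges removed
print(has_hamiltonian_cycle(n, star))       # False: vertex 0 now has degree 1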
October 14th 2010, 03:11 PM #2 | {"url":"http://mathhelpforum.com/discrete-math/159625-minimum-number-remove-complete-graph.html","timestamp":"2014-04-18T23:17:44Z","content_type":null,"content_length":"32631","record_id":"<urn:uuid:6ce865f3-bd53-405d-a62f-a06bd0f29336>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00034-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear algebra
Linear algebra is the branch of mathematics concerned with the study of vector spaces (or linear spaces), linear transformations, and systems of linear equations. Vector spaces are a central theme
in modern mathematics; thus, linear algebra is widely used in both abstract algebra and functional analysis. Linear algebra also has a concrete representation in analytic geometry. It has extensive
applications in the natural sciences and the social sciences.
See also list of linear algebra topics.
The history of modern linear algebra dates back to the years 1843 and 1844. In 1843, William Rowan Hamilton (from whom the term vector stems) discovered the quaternions. In 1844, Hermann Grassmann
published his book Die lineare Ausdehnungslehre (see References).
Elementary introduction
Linear algebra had its beginnings in the study of vectors in Cartesian 2-space and 3-space. A vector, here, is a directed line segment, characterized by both length or magnitude and direction.
Vectors can be used then to represent certain physical entities such as forces, and they can be added and multiplied with scalars, thus forming the first example of a real vector space.
Modern Linear algebra has been extended to consider spaces of arbitrary or infinite dimension. A vector space of dimension n is called an n-space. Most of the useful results from 2 and 3-space can be
extended to these higher dimensional spaces. Although many people cannot easily visualize vectors in n-space, such vectors or n-tuples are useful in representing data. Since vectors, as n-tuples, are
ordered lists of n components, most people can summarize and manipulate data efficiently in this framework. For example, in economics, one can create and use, say, 8-dimensional vectors or 8-tuples
to represent the Gross National Product of 8 countries. One can decide to display the GNP of 8 countries for a particular year, where the countries' order is specified, for example, (United States,
United Kingdom, France, Germany, Spain, India, Japan, Australia), by using a vector (v[1], v[2], v[3], v[4], v[5], v[6], v[7], v[8]) where each country's GNP is in its respective position.
A vector space (or linear space), as a purely abstract concept about which we prove theorems, is part of abstract algebra, and well integrated into this field. Some striking examples of this are the
group of invertible linear maps or matrices, and the ring of linear maps of a vector space. Linear algebra also plays an important part in analysis, notably, in the description of higher order
derivatives in vector analysis and the study of tensor products and alternating maps.
A vector space is defined over a field, such as the field of real numbers or the field of complex numbers. Linear operators take elements from a linear space to another (or to itself), in a manner
that is compatible with the addition and scalar multiplication given on the vector space(s). The set of all such transformations is itself a vector space. If a basis for a vector space is fixed,
every linear transform can be represented by a table of numbers called a matrix. The detailed study of the properties of and algorithms acting on matrices, including determinants and eigenvectors, is
considered to be part of linear algebra.
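As a concrete illustration of that last point, here is a minimal numpy sketch: fixing the standard basis of the plane turns a rotation (a linear map) into a 2x2 matrix, and applying the map becomes a matrix-vector product.

import numpy as np

theta = np.pi / 2  # rotate the plane by 90 degrees
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])  # matrix of the rotation in the standard basis
v = np.array([1.0, 0.0])
print(R @ v)                       # ~[0, 1]: e1 is carried to e2
print(R @ (2 * v) - 2 * (R @ v))   # [0, 0]: linearity, T(cv) = cT(v)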
One can say quite simply that the linear problems of mathematics - those that exhibit linearity in their behaviour - are those most likely to be solved. For example differential calculus does a great
deal with linear approximation to functions. The difference from non-linear problems is very important in practice.
The general method of finding a linear way to look at a problem, expressing this in terms of linear algebra, and solving it, if need be by matrix calculations, is one of the most generally applicable
in mathematics.
Some useful theorems
Generalisation and related topics
Since linear algebra is a successful theory, its methods have been developed in other parts of mathematics. In module theory one replaces the field of scalars by a ring. In multilinear algebra one
deals with the 'several variables' problem of mappings linear in each of a number of different variables, inevitably leading to the tensor concept. In the spectral theory of operators control of
infinite-dimensional matrices is gained, by applying mathematical analysis in a theory that isn't purely algebraic. In all these cases the technical difficulties are much greater.
See also
external link
Linear Algebra Toolkit
• Grassmann, Hermann, Die lineare Ausdehnungslehre dargestellt und durch Anwendungen auf die übrigen Zweige der Mathematik, wie auch auf die Statik, Mechanik, die Lehre vom Magnetismus und die
Krystallonomie, 1844. | {"url":"http://july.fixedreference.org/en/20040724/wikipedia/Linear_algebra","timestamp":"2014-04-20T03:59:11Z","content_type":null,"content_length":"13759","record_id":"<urn:uuid:2328fae2-8f4b-439a-933b-aba24f8ef4d8>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00201-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cremona transformation
From Encyclopedia of Mathematics
A birational transformation of a projective space $\def\P{\mathbb{P}} \P_k^n$, $n\ge 2$, over a field $k$. Birational transformations of the plane and of three-dimensional space were systematically
studied (from 1863 on) by L. Cremona. The group of Cremona transformations is also named after him — the Cremona group, and is denoted by $\def\Cr{\rm{Cr}}\Cr(\P_k^n)$.
The simplest examples of Cremona transformations which are not projective transformations are quadratic birational transformations of the plane. In non-homogeneous coordinates $(x,y)$ they may be
expressed as linear-fractional transformations $$x\mapsto \frac{a_1x+b_1y+c_1}{a_2x+b_2y+c_2}, \quad y\mapsto \frac{a_3x+b_3y+c_3}{a_4x+b_4y+c_4}.$$ Among these transformations, special consideration
is given to the standard quadratic transformation $\tau$: $$(x,y)\mapsto (\frac{1}{x},\frac{1}{y}),$$ or, in homogeneous coordinates, $$(x_0,x_1,x_2) \mapsto(x_1x_2,x_0x_2,x_0x_1). $$ This
transformation is an isomorphism off the coordinate axes: $$\tau:\P_k^2\setminus \{x_0x_1x_2 = 0\} \tilde\to \P_k^2 \setminus \{x_0x_1x_2 = 0 \}, $$ it has three fundamental points (points at which
it is undefined) $(0,0,1)$, $(0,1,0)$ and $(1,0,0)$, and maps each coordinate axis onto the unique fundamental point not contained in that axis.
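One can verify symbolically (e.g. with sympy) that $\tau$ is an involution: applying it twice multiplies every coordinate by the common factor $x_0x_1x_2$, which is invisible in homogeneous coordinates, so $\tau\circ\tau$ is the identity wherever it is defined.

from sympy import symbols, simplify

x0, x1, x2 = symbols('x0 x1 x2')

def tau(p):
    a, b, c = p
    return (b * c, a * c, a * b)  # the standard quadratic transformation

twice = tau(tau((x0, x1, x2)))
print(twice)                                          # (x0**2*x1*x2, x0*x1**2*x2, x0*x1*x2**2)
print([simplify(t / (x0 * x1 * x2)) for t in twice])  # [x0, x1, x2]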
By Noether's theorem (see Cremona group), if $k$ is an algebraically closed field, each Cremona transformation of the plane $\P_k^2$ can be expressed as a composition of quadratic transformations.
An important place in the theory of Cremona transformations is occupied by certain special classes of transformations, in particular — Geiser involutions and Bertini involutions (see [1]). A Geiser
involution $\alpha : \P_k^2 \to \P_k^2$ is defined by a linear system of curves of degree 8 on $\P_k^2$, which pass with multiplicity 3 through 7 points in general position. A Bertini involution $\
beta : \P_k^2 \to \P_k^2$ is defined by a linear system of curves of degree 17 on $\P_k^2$, which pass with multiplicity 6 through 8 points in general position.
A Cremona transformation of the form $$x\mapsto x,$$
$$y\mapsto \frac{P(x)y+Q(x)}{R(x)y+S(x)},\quad P,Q,R,S\in k[x],$$ is called a de Jonquières transformation. De Jonquières transformations are most naturally interpreted as birational transformations
of the quadric $\P_k^1\times \P_k^1$ which preserve projection onto one of the factors. One can then restate Noether's theorem as follows: The group ${\rm Bir}(P^1\times P^1)$ of birational
automorphisms of the quadric is generated by an involution $\sigma$ and by the de Jonquières transformations, where $\sigma\in {\rm Aut}(P^1\times P^1)$ is the automorphism defined by permutation of
Any biregular automorphism of the affine space $\def\A{\mathbb{A}}\A_k^n$ in $\P_k^n$ may be extended to a Cremona transformation of $\P_k^n$, so that ${\rm Aut}(\A_k^n) \subset {\rm Cr}(\
P_k^n)$. When $n=2$ the group ${\rm Aut}(\A_k^2)$ is generated by the subgroup of affine transformations and the subgroup of transformations of the form $$x\mapsto ax+b,\quad y\mapsto cy+Q(x),$$
$$a\ne 0,\quad c\ne 0,\quad a,b\in k,\; Q(x)\in k[x],$$ moreover, it is the amalgamated product of these subgroups [5]. The structure of the group ${\rm Aut}(\A_k^n)$, $n\ge 3$, is not known. In
general, up to the present time (1987) no significant results have been obtained concerning Cremona transformations for dimensions $n\ge 3$.
[1] H.P. Hudson, "Cremona transformations in plane and space" , Cambridge Univ. Press (1927) MR1521296 Zbl 53.0595.01
[2] L. Godeaux, "Les transformations birationelles du plan" , Gauthier-Villars (1927)
[3] A.B. Coble, "Algebraic geometry and theta functions" ,Amer. Math. Soc. (1929) MR0733252 MR0123958 Zbl 55.0808.02
[4] M. Nagata, "On rational surfaces II" Mem. Coll. Sci. Univ. Kyoto , 33 (1960) pp. 271–393 MR0126444 Zbl 0100.16801
[5] I.R. Shafarevich, "On some infinitedimensional groups" Rend. di Math , 25 (1966) pp. 208–212 MR485898
The fact that ${\rm Aut}(\A_k^2)$ is the amalgamated product of the subgroup of affine transformations (cf. Affine transformation) with that of the transformations (*) was first proved (for ${\rm
char}\; k = 0$) by H.W.E. Jung [a1]; the case of arbitrary ground field was proved by W. van der Kulk [a2].
[a1] H.W.E. Jung, "Ueber ganze birationale Transformationen der Ebene" J. Reine Angew. Math. , 184 (1942) pp. 161–174 Zbl 0027.08503 Zbl 68.0382.01
[a2] W. van der Kulk, "On polynomial rings in two variables" Nieuw Arch. Wiskunde , 1 (1953) pp. 33–41 Zbl 0050.26002
How to Cite This Entry:
Cremona transformation. Encyclopedia of Mathematics. URL: http://www.encyclopediaofmath.org/index.php?title=Cremona_transformation&oldid=23801
This article was adapted from an original article by V.A. Iskovskikh (originator), which appeared in Encyclopedia of Mathematics - ISBN 1402006098.
See original article | {"url":"http://www.encyclopediaofmath.org/index.php?title=Cremona_transformation","timestamp":"2014-04-19T14:30:52Z","content_type":null,"content_length":"25332","record_id":"<urn:uuid:556dcdd3-0865-4732-8176-6fe16811e543>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00119-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nonagon Problem
September 23rd 2010, 05:05 AM #1
I have a question about polygons. Here's how it goes:
Attachment 19017
I know that the sum of interior angles of the regular nonagon is 180º(n-2) = 180n - 360 = 180(9) - 360 = 1260º
So each interior angle is 1260º/9 = 140º
So each of the interior angle is 1260º/9 = 140º
angle PCD + angle BCD = 180º (adjacent angles on a straight line)
angle PCD = 180º - 140º
angle PCD = 40º
So that explains part a).
Can you please give me tips on part b)?
Here are some hints :
i) Triangle DCE is isosceles.
ii) You will get this from the previous part.
iii) Triangle BEP is isosceles.
Hello Traveller.
Thank you for your reply, but what are the proofs that triangle DCE and triangle BEP are isosceles triangles?
This is a regular nonagon, which means that all side lengths are the same.
Triangle DCE uses 2 side lengths of the regular nonagon, therefore it must be an isosceles triangle.
Triangle BEP is isosceles because lengths BP and EP both use extended lines of the regular nonagon's sides. These lines that meet have to be equal, because the exterior angles are the same.
And also all regular polygons have symmetry, so if you drew a line from one corner to the other, the angles would be the same, thus making it an isosceles triangle.
Thank you so much Educated!
So the answer for b)i) will be:
$CD = DE$ (sides of the regular nonagon, so triangle DCE is isosceles)
$\angle CDE = 140°$ (one of the interior angles of the nonagon)
$\angle CDE + \angle DCE + \angle DEC = 180°$ (angle sum of a triangle)
$\angle DCE + \angle DEC = 40°$
$\angle DCE = \angle DEC = 20°$ (base angles of an isosceles triangle)
As for part ii):
$\angle BCD = 140°$
$\angle DCE = 20°$
$\angle BCE = 140° - 20° = 120°$
As for part iii):
$\angle BEF + \angle DEB = \angle DEF$
$\angle BEF = 140° - 2(20°) = 100°$
Problem solved.
September 24th 2010, 02:14 AM #5 | {"url":"http://mathhelpforum.com/geometry/157161-nonagon-problem.html","timestamp":"2014-04-21T05:07:19Z","content_type":null,"content_length":"39182","record_id":"<urn:uuid:78d3a77a-7596-4ead-995b-daa6b6defddb>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00593-ip-10-147-4-33.ec2.internal.warc.gz"} |
Somerville, MA Algebra Tutor
Find a Somerville, MA Algebra Tutor
I am offering tutoring services in reading comprehension, writing, history, literature, theatre and logic. My qualifications include a master's degree in philosophy and over a decade of classroom
experience teaching K-12. My teaching experience includes several years of developing and teaching per...
18 Subjects: including algebra 1, reading, English, geometry
...I am mathematically minded and enjoy Geometry. I was the co-president of my high school’s Mu Alpha Theta Math Honors Society. I have tutored students in high school math, including Geometry.
17 Subjects: including algebra 1, algebra 2, Spanish, calculus
...I took honors Algebra I and II in high school and also took various math classes while in college. I also graduated with a minor in Spanish. I lived and studied in Spain for 6 months, where I
developed fluency in Spanish.
11 Subjects: including algebra 1, algebra 2, Spanish, ESL/ESOL
...I hope to assist you or your children in achieving your goals in studies. Finally, Chinese (Mandarin and Cantonese) is my mother language. I have learned Chinese systematically, in the
language per se, and in the traditional Chinese culture.
16 Subjects: including algebra 1, algebra 2, calculus, Chinese
...I have been a member of one soccer team or another since I was 7, from youth leagues to high school varsity to club in college to adult-level. I can help athletes master the techniques
necessary to have fun on the field! I can also help students in basketball!
10 Subjects: including algebra 1, algebra 2, chemistry, prealgebra
Related Somerville, MA Tutors
Somerville, MA Accounting Tutors
Somerville, MA ACT Tutors
Somerville, MA Algebra Tutors
Somerville, MA Algebra 2 Tutors
Somerville, MA Calculus Tutors
Somerville, MA Geometry Tutors
Somerville, MA Math Tutors
Somerville, MA Prealgebra Tutors
Somerville, MA Precalculus Tutors
Somerville, MA SAT Tutors
Somerville, MA SAT Math Tutors
Somerville, MA Science Tutors
Somerville, MA Statistics Tutors
Somerville, MA Trigonometry Tutors
Nearby Cities With algebra Tutor
Allston algebra Tutors
Arlington, MA algebra Tutors
Boston algebra Tutors
Brighton, MA algebra Tutors
Brookline, MA algebra Tutors
Cambridge, MA algebra Tutors
Charlestown, MA algebra Tutors
East Somerville, MA algebra Tutors
Everett, MA algebra Tutors
Malden, MA algebra Tutors
Medford, MA algebra Tutors
Newton, MA algebra Tutors
Revere, MA algebra Tutors
Roxbury, MA algebra Tutors
Winter Hill, MA algebra Tutors | {"url":"http://www.purplemath.com/somerville_ma_algebra_tutors.php","timestamp":"2014-04-21T02:42:14Z","content_type":null,"content_length":"23845","record_id":"<urn:uuid:bc302002-645a-4887-ae5f-803408081f53>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00038-ip-10-147-4-33.ec2.internal.warc.gz"} |
ASA 125th Meeting Ottawa 1993 May
3aSA8. An analytic model for the vibrations of rectangular shells of variable curvature and thickness.
Masahiko Okajima
Courtney B. Burroughs
Grad. Prog. in Acoust., Penn State Univ., P.O. Box 30, University Park, PA 16804
A flexible and powerful model is developed for analyzing the vibration of a rectangular-planform shell whose curvature and thickness are arbitrary. The shape of the shell can have almost any
conceivable representation, since the curvature and thickness are represented by bicubic polynomials of the centerline arc length. The vibration model includes the effects of shear deformation,
rotary inertia, and centerline extension. The equations of motion are solved by an alternative form of the Rayleigh--Ritz method. The resulting integral formulas for the stiffness and mass matrix
elements are evaluated by a set of simple computer routines that do symbolic manipulations of algebra and calculus. Predictions of the natural frequency for the several shell geometries are compared
to published data, with good agreement. A parameter study shows the dependence of the resonance frequencies and mode shapes on the curvature and the thickness of the shell. | {"url":"http://www.auditory.org/asamtgs/asa93ott/3aSA/3aSA8.html","timestamp":"2014-04-16T16:28:56Z","content_type":null,"content_length":"1658","record_id":"<urn:uuid:0743d3e7-a3aa-472a-9132-9e2abb2ec012>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00578-ip-10-147-4-33.ec2.internal.warc.gz"} |
Brad Richards shooting percentage shows progression to mean likely
Brad Richards is struggling this season. Everyone has seen it, everyone has complained about it, and everyone is waiting for him to rebound. What generally goes unnoticed is that Richards' shooting
percentage is an astonishingly low 4.5%. That's less than most defensemen. Richards has a career average of about 9%, so his shooting percentage right now is half of what it should be.
We spoke about shooting percentages at the end of January and highlighted Carl Hagelin (at that point goalless) and Taylor Pyatt (at that point shooting at 43%). Both have since progressed and
regressed to their career averages, and the same theory is going to apply to Richards here. His career-worst shooting percentage was in 2002-2003, when he shot at 6%, which is still above his current mark.
Richards has had a career long enough that we can assume progression back to his mean of 9%. Since we are roughly 42% through the season, that means Richards is due for a hot streak of shooting
around 13%, which would get him back up to 9% over time. It’s not an exact science, as there’s some flexibility between his career average and what he could finish with this season.
Looking at his numbers for this season, Richards has 2 goals and 44 SOG. To progress back to the 9% career average (assuming the same ratio of shots per game), Richards will take about 60 more SOG
this season. If he finishes at the 13% clip mentioned above, he will score about eight more goals this season. That puts Richards at 10 goals on the season, which is 17 for a full 82 game season.
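For anyone who wants to replay the arithmetic, the post's figures can be reproduced in a few lines (all numbers taken from the post itself; the small differences come from rounding, and the 48-game figure is the 2013 lockout schedule):

goals, shots, season_frac = 2, 44, 0.42
career_pct = 0.09

shots_left = shots / season_frac - shots            # ~61 more shots at the same shot rate
season_goals = career_pct * (shots + shots_left)    # ~9.4 goals if he finishes at 9% overall
catch_up_pct = (season_goals - goals) / shots_left  # ~12-13% needed on the remaining shots
print(round(shots_left), round(season_goals, 1), round(catch_up_pct, 3))
print(round(10 / 48 * 82))   # 10 goals in a 48-game season is a 17-goal pace over 82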
Richards certainly is struggling, but he won’t continue shooting at 4.5%, which seems to be the bigger issue with his offensive numbers. Regression and progression to the mean is something we
discussed with Hagelin and Pyatt, and we have seen Hagelin rebound and Pyatt cool down. There’s no reason to believe the same won’t happen to Richards. He’s too good to sit with two goals.
10 Responses to “Brad Richards shooting percentage shows progression to mean likely”
1. “Since we are roughly 42% through the season, that means Richards is due for a hot streak of shooting around 13%, which would get him back up to 9% over time.”
Technically not true. If his true talent is 9%, we would expect him to shoot about 9% for the rest of the season, not 13% to end at 9%. Regression means that they return to their true talent
level, not play above it for a while to get to the same numbers at the end of the season.
□ What you're saying is technically true Mike, but I think Dave was trying to illustrate the peaks and valleys of the statistical curve. Especially with the marathon of an NHL season, deviation
from the mean will be fairly substantial. In this shortened season he may stay well short of his career average, but over an extended period, there would need to be peaks of 13% or more to
progress back to that mean.
2. doesn't this sometimes happen to a player when he is given a star player to play with?
I see some of this coming from him, or players like him, trying too hard to pass the puck to players like Nash. When in fact players like him should become even more dangerous if they just shot the
puck more. And at the same time they would make that star player even more dangerous, because he wouldn't be drawing all of the coverage.
□ I agree. Nash’s presence on the ice shouldn’t always lead players to try to get the puck to him. Sometimes, his mere presence should open up ice for other players to get quality shots as
oppposing players cheat over to Nash’s side. Richards and Gaborik are both guilty of giving up shooting opportunities in an attempt to feed Nash the puck. Doesn’t always work.
□ This could be a factor, but right now I’m focusing on Richards shots only. He is taking 2 shots per game and converting at 4.5%, which is well below his average. That won’t last.
□ I agree with Dave on this one. This post didn’t seem to be about the amount of shots he’s taking, but more about the low shooting percentage and an expected progression to the mean.
☆ Absolutely, giving up shooting opportunities doesn’t relate necessarily to lower shot percentage. This was more of a corollary observation about Richards not shooting enough, in addition
to his lower shooting percentage numbers.
3. Surely one has a better chance of scoring when he shoots the puck tho NO???
□ Ricky,
One could surmise that there would be more goals with more shots, but there is no relation between the number of shots and shooting percentage. We'd like Richards to shoot more, and with that will
come more goals. But, regression/progression to the mean isn’t really related to number of shots, it’s related to a player coming back to his natural numbers. | {"url":"http://blueseatblogs.com/2013/03/06/brad-richards-shooting-percentage-shows-progression-to-mean-likely/","timestamp":"2014-04-20T00:40:17Z","content_type":null,"content_length":"51933","record_id":"<urn:uuid:699db6f2-5c00-4014-a7e1-9bdbc3b4b1ba>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00339-ip-10-147-4-33.ec2.internal.warc.gz"} |
Schererville Algebra Tutor
Find a Schererville Algebra Tutor
...The math is the same, and I have learned the "bio- terminology". I have taught linear algebra at the college level, both at Lewis University and at Morraine Valley Community College. I am
currently teaching Decision Science, which is a applied Linear Algebra class in the Business Department.
11 Subjects: including algebra 2, algebra 1, physics, statistics
I've taught Algebra 1, Algebra 2, Geometry, and Pre-Calculus at the high school level for 6 years. In addition, I've completed a BS in Electrical Engineering and I am quite knowledgeable of
advance mathematical concepts. (Linear Algebra, Calculus, Differential Equations) I create an individualized...
12 Subjects: including algebra 1, algebra 2, calculus, trigonometry
Being a tutor allows me to fulfill a dual purpose; close the academic gap among our children, as well as leaving a legacy in a child’s life. Over the past nine years, I’ve committed to
volunteering and working closely with youth. This commitment is not just for enjoyment, but a dedication to make a change.
4 Subjects: including algebra 2, algebra 1, prealgebra, probability
...I took AP Statistics in high school, and received a 5 on the AP Exam. I also have experience programming in SAS for larger data analysis problems. Besides stats, I have a lot of knowledge about
subjects from algebra through calculus.
5 Subjects: including algebra 1, statistics, prealgebra, probability
...I have a degree in Mathematics from Augustana College. I am currently pursuing my Teaching Certification from North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus
7 Subjects: including algebra 1, algebra 2, geometry, trigonometry | {"url":"http://www.purplemath.com/Schererville_Algebra_tutors.php","timestamp":"2014-04-21T04:57:50Z","content_type":null,"content_length":"23854","record_id":"<urn:uuid:11b2e238-2801-4d26-84b9-b99f424390e9>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00445-ip-10-147-4-33.ec2.internal.warc.gz"} |
Stability for multispecies population models in random environments.
(English) Zbl 0598.92017
In this paper, the stability consequences of taking into account random environmental fluctuations by stochastic differential equation models for multispecies populations are discussed. The starting
point is a deterministic multispecies model with a globally stable feasible equilibrium. When noise terms are added, the stochastic model that arises exhibits a degree of stability directly related
to how these noise terms respect the equilibrium.
In particular, section 3 reviews the situation when the noise terms do not vanish at the equilibrium (the nondegenerate case); although stochastic equilibrium stability is impossible in this case,
weaker types of stability may hold. In particular, there may exist a stable invariant distribution with positive density.
It is mentioned in section 4 that global asymptotic stochastic stability may hold if noise intensities vanish at the equilibrium (the degenerate case). However, there is some question concerning the
biological relevance of this assumption.
Finally, the discussion and results in sections 3 and 4 indicate that random noise fluctuations need not have a destabilizing effect in multispecies models.
92D25 Population dynamics (general)
60H10 Stochastic ordinary differential equations
92D40 Ecology
93E15 Stochastic stability | {"url":"http://zbmath.org/?q=an:0598.92017","timestamp":"2014-04-19T04:41:06Z","content_type":null,"content_length":"22024","record_id":"<urn:uuid:e4bce70d-9f3f-48e7-bbaa-1784fc0cc20f>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00101-ip-10-147-4-33.ec2.internal.warc.gz"} |
Posts about arithmetic problems on A Hundred Years Ago
Arithmetic Problems in 1911 High School Text Book
Posted on January 30, 2011 by Sheryl
15-year-old Helena wrote a hundred years ago today:
Monday, January 30, 1911. My! How the wind did blow today, smashed some panes in the school house windows with a deafening crash and alarmed us all, fortunately we escaped uninjured. Boo hoo I
haven’t got all my arithmetic problems for tomorrow. Boo hoo. I’m getting stupid.
Her middle-aged granddaughter’s comments 100 years later:
It was either a damn strong wind–or the windows weren’t very strong. I wonder if the panes blew out when the class was working on the dreaded arithmetic. To get a sense of what the problems were
like I found a high school arithmetic book that was published in 1911. Here are the written exercises in the chapter titled The Equation:
1. To four times a certain number I add 16, and obtain as a result 188. What is the number?
2. A man having $100 spent a part of it; he afterwards received five times as much as he had spent, and then his money was double what it was at first. How much did he spend?
3. A farmer had two flocks of sheep, each containing the same number. He sold 21 sheep from one flock and 70 from the other, and then found that he had left in one flock twice as many as in the
other. How many had he in each?
4. Divide 100 into two such parts that a fourth of one part diminished by a third of the other part may be equal to 11.
5. Find the area of a square field whose diagonal is 50 rods.
Kimball’s Commercial Arithmetic: Prepared for Use in Normal, Commercial and High Schools and the Higher Grades of the Common School (1911)
Filed under: Education | Tagged: 1911, arithmetic problems | Leave a comment » | {"url":"http://ahundredyearsago.com/tag/arithmetic-problems/page/2/","timestamp":"2014-04-18T10:36:21Z","content_type":null,"content_length":"25429","record_id":"<urn:uuid:73a92102-3f21-4ca3-ae4f-be06251c24d9>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00557-ip-10-147-4-33.ec2.internal.warc.gz"} |
Hickory Creek, TX Math Tutor
Find a Hickory Creek, TX Math Tutor
...Tutoring is a passion of mine; I enjoy helping students grab onto a better understanding of the needed math and chemistry skills as it relates to their course work. Not all students learn in
the same way, and that is why I make sure that I am able to adapt to the student's learning needs. After all, I am a visual learner and auditory learner as well as a hands on learner.
17 Subjects: including algebra 1, algebra 2, biology, SAT math
...In my career I have used and taught many mathematical concepts. I have gone to the military's ASVAB web site and have no problems with any of the AFQT subjects tested, including word knowledge
and paragraph comprehension. So I will be very happy to assist with your preparation for the exam.
15 Subjects: including algebra 1, algebra 2, calculus, chemistry
...I have been a teacher and coach for 13 years and I also hold certifications in physical education, health education, and special education. I taught special education for 4 years, and 2 of
those have have been specifically in Math. I believe this experience helps me reach all types of students from advanced learners to learners with disabilities.
3 Subjects: including algebra 1, geometry, prealgebra
...MAPS analyzes a passage quickly yielding its organization and logic as well as the function of each of its parts. Next, teaching math for standardized tests can easily become bogged down by a
mass of tricks students find difficult to remember on test day. I don't do that!
5 Subjects: including SAT math, GRE, GMAT, SAT reading
...My military requirement is one weekend a month thus allowing the others to be open. Thank you, and I look forward to hearing from you.I have taken and received high marks in Algebra 1 and 2,
Linear Algebra, and Calculus 1, 2, and 3. I was raised playing chess, my father being an ex-state (Texas) champion.
6 Subjects: including algebra 1, algebra 2, prealgebra, economics
Related Hickory Creek, TX Tutors
Hickory Creek, TX Accounting Tutors
Hickory Creek, TX ACT Tutors
Hickory Creek, TX Algebra Tutors
Hickory Creek, TX Algebra 2 Tutors
Hickory Creek, TX Calculus Tutors
Hickory Creek, TX Geometry Tutors
Hickory Creek, TX Math Tutors
Hickory Creek, TX Prealgebra Tutors
Hickory Creek, TX Precalculus Tutors
Hickory Creek, TX SAT Tutors
Hickory Creek, TX SAT Math Tutors
Hickory Creek, TX Science Tutors
Hickory Creek, TX Statistics Tutors
Hickory Creek, TX Trigonometry Tutors
Nearby Cities With Math Tutor
Argyle, TX Math Tutors
Bartonville, TX Math Tutors
Copper Canyon, TX Math Tutors
Corinth, TX Math Tutors
Corral City, TX Math Tutors
Cross Roads, TX Math Tutors
Double Oak, TX Math Tutors
Highland Village, TX Math Tutors
Lake Dallas Math Tutors
Lakewood Village, TX Math Tutors
Little Elm Math Tutors
Oak Point, TX Math Tutors
Roanoke, TX Math Tutors
Shady Shores, TX Math Tutors
Westlake, TX Math Tutors | {"url":"http://www.purplemath.com/hickory_creek_tx_math_tutors.php","timestamp":"2014-04-17T15:43:32Z","content_type":null,"content_length":"24220","record_id":"<urn:uuid:433d513d-05a0-498e-b59a-acd18e4d44fc>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00528-ip-10-147-4-33.ec2.internal.warc.gz"} |
Here's the question you clicked on:
what this equation represents in R²? | {"url":"http://openstudy.com/updates/5055da2de4b0a91cdf449b4f","timestamp":"2014-04-21T08:01:36Z","content_type":null,"content_length":"145610","record_id":"<urn:uuid:7537021c-d072-419d-9199-c6c1c1802340>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00617-ip-10-147-4-33.ec2.internal.warc.gz"}
How do we find the very best clock?
The question is frequently asked how one can decide which of the best clocks
available to us is really the very best. Because we assume that we have selected the best clocks available, there is nothing better to compare our clocks with. We seem to have uncovered a principal
difficulty. However, there is an answer to this perplexing question. We only have to consider our basic principles:
Time is not only the most basic but also the most abstract measure which we use to bring order into nature. Because of its abstractness, and because we all think we know what it is we become easily
confused. Therefore one has to develop clear and distinct ideas.
Time is a measure which we bring into nature according to our definitions. We postulate that the same interval of time elapsed if we can demonstrate that the same process has taken place. Now how do
you establish identical processes and how do you decide your specific question of which of the best clocks is the very best? The answer is given in the literature concerning time keeping, especially
in the documentation of our time computation algorithms. This literature is available on request.
The essence is this. The best clock is the one which shows the smallest residuals in its errors in reference to a time scale which is computed on the basis of a clock set.
And the second best clock is the one which shows the second smallest residuals in respect to the computed time scale which is statistically better than any contributing clock. In practice, however,
you use only clocks for the set which have otherwise been shown to be acceptable. We do not use Mickey Mouse watches because they would not be cost effective. One would have to use zillions of them
and it would not be practical! Everybody knows that Mickey Mouse watches agree very poorly among themselves whereas atomic clocks agree within nanoseconds (ns) from day to day. Of all the Mickey
Mouse watches at most one of them can be right and most likely none is. The situation and reasoning is identical with the reasoning which we apply when we judge anything else. If we find substantial
disagreement then probably nobody is right or at most one can be correct. Which one it is we can only find out gradually as more information becomes available. This is the scientific way which is not
different from common sense, only more systematic. We do not arbitrarily assign credence to one clock on the basis of intuition.
Basically the answer is, therefore, that clocks can only be judged by the degree of conformity with other clocks. The best time measure is that which agrees with the consensus of a set of other
processes. As you probably will agree, nothing else could be expected to be useful in science.
This is the basis of our time keeping: We have a group of nominally 24 standard clocks and we compute from hourly measures a best time scale which is then used to produce a table of corrections for
each of the clocks. As you see, time keeping is intrinsically dependent on statistics and probability. If 10 clocks agree to within, let us say 10 ns, and one differs by 100 ns, then it is
overwhelmingly unlikely that this one clock has been right and the other 10 wrong. It is the same procedure as it is used in a court of law. One interrogates and cross examines the witnesses who each
will tell you a slightly different story. One or two may tell you a completely different story. The proceedings are designed to establish a core of facts which correspond with the consensus of the
witnesses and that is accepted as "truth". Of course, what really happened may still be different but we have no other way to arrive at an acceptable definition of truth. And this consensus can be
evaluated in terms of probability theory.
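A toy numerical sketch of the principle (only an illustration; the operational algorithms, documented in the literature mentioned in this article, use weighting and more careful statistics):

import numpy as np

rng = np.random.default_rng(0)
hours = 24 * 30                         # a month of hourly intercomparisons
noise_ns = [5, 8, 12, 20, 100]          # each clock's true (unknown) noise level, in ns
readings = np.array([rng.normal(0, s, hours) for s in noise_ns])  # errors vs. ideal time

scale = readings.mean(axis=0)           # ensemble time scale: the consensus
residuals = readings - scale            # each clock judged against the consensus
for i, r in enumerate(residuals):
    print(f"clock {i}: RMS residual {r.std():6.1f} ns")
# Clocks with smaller true noise show smaller residuals against the ensemble, and the
# noisy clock is exposed, even though no clock was ever compared against "true" time,
# only against the consensus of the set.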
In science the procedure is exactly the same. We try to establish a consistent measure and consistent theories so that they are applicable in the largest possible domain. With clocks the ideal
reference cannot be realized but only be approximated by finding the consensus of a large number of different standard processes. That is the reason why scientists are so interested in comparing
clocks based on different principles. Up to now all such tests have revealed no surprises. It turns out that a time scale constructed on the basis of undisturbed standard processes in the form of our
atomic clocks is superbly applicable in the description of other natural processes such as in astronomy (pulsars), or in modern technology, where one has to do the same thing independently at the
same time but at different places. On the basis of these principles it has also been discovered in 1934 that our earth does not rotate uniformly but shows small seasonal variations in the length of
the day.
But for more details I again refer you to the papers mentioned above.
-Gernot M. R. Winkler (emphasis added)
Time Service Deptartment, U.S. Naval Observatory, Washington, DC | {"url":"http://everything2.com/title/How+do+we+find+the+very+best+clock%253F","timestamp":"2014-04-17T18:31:58Z","content_type":null,"content_length":"26209","record_id":"<urn:uuid:158db57f-0d8f-4a2f-94da-f97587b82a1b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00326-ip-10-147-4-33.ec2.internal.warc.gz"} |
Linear and Nonlinear Inverse Problems with Practical Applications
Although many problems in applied mathematics proceed directly from a description of a physical system to predictions of observable properties of the system, in inverse problems we begin with
observations of a system and attempt to infer properties. These inverse problems are typically ill-posed, in the sense that the solution to the inverse problem can fail to exist or be extremely
sensitive to small errors in the observations. Since in practice all measurements incorporate noise at some level, a naive solution of an inverse problem with noisy data is typically a meaningless
solution. Such ill-posed inverse problems arise in many areas of science and engineering, including geophysical inversion, medical imaging, and inverse problems in heat transfer.
Inverse problems have been addressed in a variety of ways. One important line of research considers inverse problems from the point of view of regularization procedures that turn an ill-posed inverse
problem into a sequence of well posed problems whose solutions converge to the solution of the original inverse problem in the limit as the noise is reduced to 0. Another line of research focuses on
the recovery of coefficients in PDE boundary value problems and considers conditions under which these coefficients can be stably recovered.
The first half of this book discusses inverse problems in which the forward operator is a linear mapping between suitable spaces of functions. The authors approach these linear inverse problems by
discretizing them to produce linear systems of equations and then regularizing the solution. Image deblurring, X-ray tomography, and backward heat propagation are used as guiding examples throughout
these chapters.
The authors discuss strategies for regularizing these discretized problems including Tikhonov regularization and sparsity regularization. The strategy of discretizing an inverse problem to produce a
linear system of equations and then regularizing the solution of the resulting ill-conditioned system of equations is a very general strategy that can be applied to any linear inverse problem. The
authors have presented this material with a minimum of required mathematical background. It should be accessible to students majoring in mathematics, science, or engineering at the advanced
undergraduate level.
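To make the discretize-and-regularize strategy concrete, here is a minimal numpy sketch (an illustration in the spirit of the book's examples, not taken from it): a smoothing forward operator A, noisy data b, and the Tikhonov solution that minimizes ||Ax - b||^2 + lambda ||x||^2.

import numpy as np

rng = np.random.default_rng(0)
n = 50
i = np.arange(n)
A = np.exp(-0.1 * (i[:, None] - i[None, :]) ** 2)   # blurring-type operator: ill-conditioned
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + rng.normal(0, 1e-3, n)             # observations with a little noise

lam = 1e-2                                          # regularization parameter
x_naive = np.linalg.solve(A, b)                     # naive inverse: noise is amplified
x_tik = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)  # Tikhonov-regularized

print(f"cond(A) = {np.linalg.cond(A):.1e}")
print(f"naive error    = {np.linalg.norm(x_naive - x_true):.2f}")
print(f"Tikhonov error = {np.linalg.norm(x_tik - x_true):.2f}")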
The same general strategy can be applied to nonlinear inverse problems, with iterative methods used to optimize the solution of the discretized and regularized problem. However, the resulting
nonlinear optimization problems may be nonconvex and suffer from local minima. Furthermore, there are significant theoretical difficulties in proving that this kind of iterative nonlinear
regularization scheme converges in the limit as noise goes to 0.
An alternative approach to nonlinear inverse problems is to directly construct a nonlinear map from the data to the unknown parameters. The direct approach typically requires careful analysis
followed by the numerical solution of a partial differential equation to obtain the inverse solution. This solution methodology is highly dependent on the particular inverse problem. The second half
of the book focuses on one example nonlinear inverse problem, electrical impedance tomography. The authors present the D-bar method for the solution of the EIT problem. The material in this second
half of the book is at a considerably more advanced mathematical level, though it should generally be accessible to graduate students in applied mathematics.
This book has clearly been written for use as a textbook. The authors have provided appendices that discuss important background in mathematical analysis and iterative methods for the solution of
linear systems of equations. The authors have also included numerous exercises and suggestions for student projects. There is a companion web site that includes MATLAB code for many of the examples.
The first half of this textbook will be of interest to instructors who are teaching an introductory course in inverse problems that focuses on the regularization approach to linear inverse problems.
The second half of the book, perhaps with some additional readings, would be suitable for a more advanced graduate course on direct methods for the solution of nonlinear inverse problems.
Brian Borchers is a professor of Mathematics at the New Mexico Institute of Mining and Technology. His interests are in optimization and applications of optimization in parameter estimation and
inverse problems. | {"url":"http://www.maa.org/publications/maa-reviews/linear-and-nonlinear-inverse-problems-with-practical-applications","timestamp":"2014-04-21T04:11:29Z","content_type":null,"content_length":"99037","record_id":"<urn:uuid:e80289e5-ca3c-4360-9fa5-a9f62857b617>","cc-path":"CC-MAIN-2014-15/segments/1397609539447.23/warc/CC-MAIN-20140416005219-00527-ip-10-147-4-33.ec2.internal.warc.gz"} |
Nine-pin Triangles
Copyright © University of Cambridge. All rights reserved.
Why do this problem?
This problem
will help learners extend their knowledge of properties of triangles. It requires visualisation, a systematic approach and is a good context for generalisation and symbolic representation of
Possible approach
To start with, you could pose the problem orally, asking children to imagine a circle with nine equally spaced dots placed on its circumference. How many triangles do they think it might be possible
to draw by joining three of the dots? Take a few suggestions and then ask how they think they could go about finding out.
Show the interactivity, or draw a nine-point circle on the board. Invite them each to imagine a triangle on this circle. How would they describe their triangle to someone else? Let the class offer
some suggestions e.g. by numbering the dots and describing a triangle by the numbers at its vertices, and then return to the problem of the number of different triangles. Discuss ways in which they
will be able to keep track of the triangles and how they will know they have them all. Some children may wish to draw triangles in a particular order, for example those with a side of 1 first (i.e.
adjacent pegs joined), then 2 etc. Others may feel happy just to list the triangles as numbers.
This sheet of blank nine-point circles may be useful. Encourage children to work in small groups to find the total number.
Bring them together to share findings and systems, using the interactivity to aid visualisation.
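If it helps to have the totals to hand, a short brute-force count (a Python sketch) covers both readings of the question: all choices of three dots, and triangle shapes counted up to rotation and reflection, classified by the gaps between the chosen dots.

from itertools import combinations

n = 9
print(sum(1 for _ in combinations(range(n), 3)))   # 84 labelled triangles in all

shapes = set()
for a, b, c in combinations(range(n), 3):          # a < b < c around the circle
    gaps = tuple(sorted(((b - a) % n, (c - b) % n, (a - c) % n)))
    shapes.add(gaps)                               # a triangle's shape = its multiset of gaps
print(len(shapes), sorted(shapes))                 # 7 shapes: (1,1,7), (1,2,6), ..., (3,3,3)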
Key questions
How do you know your triangles are all different?
How do you know you have got all the different triangles?
Possible extension
Triangles All Around
is a good follow-up activity to this one. You could challenge pupils to think about whether they could predict the number of different triangles which are possible for different point circles. How
would they go about finding out? It may be useful to have sheets of other point circles available:
four-point, five-point, six-point, eight-point. Are there any similarities between all the circles with an odd numbers of points? How about those with an even number?
Possible support | {"url":"http://nrich.maths.org/2852/note?nomenu=1","timestamp":"2014-04-18T21:19:03Z","content_type":null,"content_length":"8066","record_id":"<urn:uuid:bc426681-f0eb-4e33-9b97-60b74f5b0c33>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00152-ip-10-147-4-33.ec2.internal.warc.gz"} |
Exercise asks me to use a subarray. What is a subarray?
I'm working on the pointer chapter in "C How to Program" and I'm having trouble interpreting an exercise.
There is talk of developing a recursive function and the use of "subarrays." I'm not precisely sure what they mean when they say subarray. I'll write
out the whole exercise to give the whole context, and bolden the issue. I realize this is a little long but bear with me please.
(Quicksort) In the examples and exercises of Chapter 6, we discussed the sorting techniques of bubble sort, bucket sort, and selection sort. We now
present the recursive sorting technique called Quicksort. The basic algorithm for a single-subscripted array of values is as follows:
a) Partitioning Step: Take the first element of the unsorted array and determine its final location in the sorted array (i.e., all values to the left of the
element in the array are less than the element, and all values to the right of the element in the array are greater than the element). We now have
one element in its proper location and two unsorted subarrays.
b) Recursive Step: Perform step 1 on each unsorted subarray.
Each time step 1 is performed on a subarray, another element is placed in its final location of the sorted array, and two unsorted subarrays are
created. When a subarray consists of one element, it must be sorted; therefore that element is in its final location.
The basic algorithm seems simple enough, but how do we determine the final position of the first element of each subarray? As an example, consider
the following set of values (the element in bold is the partitioning element--it will be placed in its final location in the sorted array):
37 2 6 4 89 8 10 12 68 45
a) Starting from the rightmost element of the array, compare each element with 37 until an element less than 37 is found. Then swap 37 and that
element. The first element less than 37 is 12, so 37 and 12 are swapped. The new array is
12 2 6 4 89 8 10 37 68 45
b) Starting from the left of the array, but beginning with the element after 12, compare each element with 37 until an element greater than 37 is
found. Then swap 37 and that element. The first element greater than 37 is 89, so 37 and 89 are swapped. The new array is
12 2 6 4 37 8 10 89 68 45
c) Starting from the right, but beginning with the element before 89, compare each element with 37 until an element less than 37 is found. Then
swap 37 and that element. The first element less than 37 is 10, so 37 and 10 are swapped. The new array is
12 2 6 4 10 8 37 89 68 45
d) Starting from the left, but beginning with the element after 10, compare each element with 37 until an element greater than 37 is found. Then
swap 37 and that element. There are no more elements greater than 37, so when we compare 37 with itself, we know that 37 has been placed in its
final location of the sorted array.
Once the partition has been applied on the array, there are two unsorted subarrays. The subarray with values less than 37 contains 12, 2, 6, 4,
10, and 8. The subarray with values greater than 37 contains 89, 68, and 45. The sort continues with both subarrays being partitioned in the same
manner as the original array.
Based on the preceding discussion, write recursive function quicksort to sort a single-subscripted integer array. The function should receive as
arguments an integer array, a starting subscript and an ending subscript. Function partition should be called by quicksort to perform the partitioning step.
So what is a subarray exactly?
A subarray is a part of the whole array or a section of the array.
Well if you have
int arr[10];
Then you might call a function with
func( arr, 10 );
Or you might call it with
func( &arr[4], 3 );
The second example is a sub-array.
Think of a subarray as a substring, where a substring is a portion of a string.
"Todd" is a string
"odd" is a substring of Todd.
So, with an array of 10 elements, a subarray of that might be 5 elements, starting with element #2. Or, 2 elements, starting with the first, etc.
Ahh, so that's where pointers come in. You input the address of the split-off point in the array, and set it into a pointer.
Thanks guys. | {"url":"http://cboard.cprogramming.com/c-programming/97532-exercise-asks-me-use-subarray-what-subarray-printable-thread.html","timestamp":"2014-04-17T18:43:19Z","content_type":null,"content_length":"11895","record_id":"<urn:uuid:553fc66a-20b4-4c39-828f-24b102017112>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00580-ip-10-147-4-33.ec2.internal.warc.gz"} |
Free Science and Video Lectures Online!
Math Video Lectures
Hello everyone! This month I have two full mathematics courses from Harvard, on Abstract Algebra and on Sets, Counting, and Probability. Then I have a lecture on Kurt Gödel, a lecture on John Nash,
and visualizations of hypercomplex iterations.
Abstract Algebra (Harvard)
Course description:
Algebra is the language of modern mathematics. This course introduces students to that language through a study of groups, group actions, vector spaces, linear algebra, and the theory of fields.
Course topics:
Review of linear algebra. Permutations. Quotient groups, first isomorphism theorem. Abstract linear operators and how to calculate with them. Orthogonal groups. Isometrics of plane figures. Group
actions. A5 and the symmetries of an icosahedron. Rings. Extensions of rings. Euclidean domains, PIDs, UFDs. Structure of ring of integers in a quadratic field.
Sets, Counting, and Probability (Harvard)
Course description:
This online math course develops the mathematics needed to formulate and analyze probability models for idealized situations drawn from everyday life. Topics include elementary set theory, techniques
for systematic counting, axioms for probability, conditional probability, discrete random variables, infinite geometric series, and random walks. Applications to card games like bridge and poker, to
gambling, to sports, to election results, and to inference in fields like history and genealogy, national security, and theology. The emphasis is on careful application of basic principles rather
than on memorizing and using formulas.
Course topics:
Probability, Intuition, and Axioms. Probability by Counting and Inclusion-Exclusion. Principles of Counting. Conditional Probability. Conditional Craps. Lying Witnesses and Simpson's Paradox. Random
Variables. Expectation. Tartan Dice; Terminated Geometric; World Series Pitchers; Negative Hypergeometric; Coin Tossing. Gambling. Expected Lead Time; Bijections between Paths. Variables. Inequality.
A Beautiful Mind: Genius, Madness, Reawakening: John Nash
Lecture description:
Dr. Sylvia Nasar, the author of "A Beautiful Mind" tells the extraordinary story of mathematician John Nash a drama about the mystery of the human mind and shares some of her experiences in writing
her prize-winning biography. "A Beautiful Mind" won the 1998 National Book Critics Circle Award for Biography. It was also a finalist for the Pulitzer Prize, the Helen Bernstein Journalism Award, and
the Rhone Poulenc Prize for Science Writing and, most recently, was honored by the three leading American mathematics societies. Translated into more than a half dozen foreign languages, "A Beautiful
Mind" was adapted for the screen and released as feature film in 2001, directed by Ron Howard and starring Russell Crowe.
Kurt Gödel and the Limits of Mathematics
Lecture description:
Kurt Gödel was one of the foremost mathematicians and logicians of the 20th century, best known for his famous incompleteness theorem, which tells us that there are mathematical 'blind spots': parts
of mathematics that traditional methods of proof cannot access. The theorem has far-reaching consequences for computing and even for our understanding of the nature of the human mind. Mark Colyvan
from the University of Sydney introduces us to this strange and paradoxical result.
Hypercomplex Iterations
Visualization topics:
Quaternion Julia Sets. Diamond of Quaternionic Julia Sets. Air on the Dirac Strings. A Volume of Julia Sets. Dynamics of the Quaternion. Interactive Visualization of Quaternion Julia Sets. Quaternion
Have fun with these!
Process Capability
“If you want to retain those who are present, be loyal to those who are absent.”
-- Dr. Stephen Covey, The Seven Habits of Highly Effective People
“Learning cannot be disassociated from action.”
-- Peter Senge, The Fifth Discipline
“The most important measures are both unknown and unknowable.”
-- W. Edwards Deming, Out of the Crisis
What are the Odds? by Kevin McManus
First published in Industrial Engineer magazine October 2002
It usually happens at least twice a year. Whenever the weekly lottery begins to approach the $50 million mark, workplace discussions (and possibly activities) begin to focus on what it will take to
become rich this time. People begin to band together to sweeten the pot, and I invariably end up getting asked to participate as well. Each time this occurs, I have to bite my tongue and curb my enthusiasm.
The statistician in me wants to tell them about how the odds of winning really don't increase significantly when you buy more than one ticket, and that their odds of 'winning big' in general (1 in 7
million) are much less than the possibility that they will be killed by lightning on the way home from work today (1 in 2 million). Since I don't want to spoil their fun or shoot holes in their
dreams, I keep these thoughts to myself. At the same time, I also wonder about the degree to which we really do dismiss statistical probability as we live our day-to-day lives at and away from work.
Industrial Engineers spend significant curricular time focusing on statistics concepts and applications. While the odds of using a high percentage of these skills in the future are less than the
chance of getting killed by a dog (1 in 700,000), their study does provide us with a way of thinking that helps us both understand systems performance and predict the potential output levels of such
systems. One of my recent lottery experiences jogged some interesting thoughts in my mind about the non-use of statistics at work, and how a shift in perspective really might help ease some longer-term problems.
My insights centered around using statistics to project systems output, or more specifically, the probabilities of attaining certain levels of systems output given the current design of the
associated processes and procedures. This train of thought progressed to one where I envisioned using statistics to gauge the probability of attaining those lofty performance goals that many of us
are just starting to project (or have projected for us) for the coming fiscal year.
In the past, I have more often than not seen managers simply take the current year average and bump it up by a percentage to set next year's goal. This is usually done before (or without ever)
considering the degree of systems change that would be required to consistently attain that level of performance. In turn, unrealistic expectations are raised and skepticism about management's
understanding of the business is ignited. Imagine what might happen when the desired result is not consistently met?
When I first became an Industrial Engineer, using statistics in such a manner was a time consuming process. Spreadsheet software however now makes conducting such an analysis quite easy. For example,
if you have a twelve-month sales history, you can easily obtain a mean and standard deviation for this data, and then use a special formula to determine the probability of hitting certain performance
levels if the system is not changed. In other words, you can use statistics to compare process capability to the goals that you are considering for monitoring performance.
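As a minimal C sketch of that spreadsheet calculation (the sales figures and the goal below are invented, and treating monthly results as roughly normal and independent is an assumption):

#include <stdio.h>
#include <math.h>

/* P(X >= goal) for a normal process: 0.5 * erfc(z / sqrt(2)),
   where z is the z-score of the goal. */
double prob_meeting_goal(double mean, double sd, double goal)
{
    double z = (goal - mean) / sd;
    return 0.5 * erfc(z / sqrt(2.0));
}

int main(void)
{
    /* Hypothetical twelve-month sales history, arbitrary units. */
    double sales[12] = { 98, 103, 95, 110, 99, 101, 97, 105, 102, 96, 104, 100 };
    double mean = 0.0, var = 0.0, sd, goal = 110.0;
    int i, n = 12;

    for (i = 0; i < n; i++) mean += sales[i];
    mean /= n;
    for (i = 0; i < n; i++) var += (sales[i] - mean) * (sales[i] - mean);
    sd = sqrt(var / (n - 1));            /* sample standard deviation */

    printf("mean %.1f, sd %.1f, P(month >= %.0f) = %.3f\n",
           mean, sd, goal, prob_meeting_goal(mean, sd, goal));
    return 0;
}

If the probability that an unchanged system clears the proposed goal comes out tiny, the goal is measuring wishful thinking rather than process capability.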
The more effective approach involves using statistics to identify the capability of the system as it is currently designed and to measure performance to this capability over time to search for
unexpected systems changes. Once we know where our current baseline is, we can analyze the data further to determine which factors are hindering our performance to the greatest degree and possible
countermeasures for minimizing or eliminating these factors. Only after doing this should we even begin to think about what levels our performance might improve to if these changes are implemented.
Another point to consider involves system variation. If you have done little to stabilize your processes, the probability of hitting your process average, or close to it, will be notably less. It
will be even more difficult to project future system performance. On the other hand, if you do what Dr. Deming (and others) suggest, you will seek to reduce process variation before even attempting
to shift the process average up or down.
Because the lottery is a relatively stable process, projecting your potential success against that system is pretty easy to do. Work systems however are inherently less predictable, primarily because
the potential for external influences (those outside of your immediate control) is so much greater. This only heightens the need to better understand our systems and processes in a fact-based manner.
More than anything else, you need to be clear on the operational definitions that are being used and the source of your data itself. I have referenced a few mortality odds in this article, but I
simply got them off of the Internet. The odds of my using them to make a meaningful decision are about as great as those of being killed by falling out of bed (1 in 2 million). If you don't know
where the data comes from, you should not use it to make assumptions or decisions.
If you are projecting performance goals for next year by simply bumping up this year's average, the odds of hitting those goals are probably about as good as being killed by poisoning (1 in 86,000).
Of course, I am making some pretty big assumptions in making such a statement, but so are you in terms of setting your goals in the manner that you are. It's like telling me to get to work two
minutes faster each day on average, without giving me a car that can't be detected by radar, fewer stop lights, or relief from the train that impedes my progress at least twice a month.
If we have a basic understanding of probability, we will think our people are foolish for buying multiple lottery tickets in a given week. If we have a basic understanding of probability and fail to
use those skills as we set performance goals, we are the ones that are being foolish. If we attempt to sell those higher goals to our people without also improving the system involved, they will see
us as being a lot worse than foolish.
Would You Like to Learn More?
Click on one of the following links to learn even more about Great Systems! and the types of systems improvements I can help you make:
Great Systems! home page
More articles on performance improvement
Systems Change: The Key to Getting Better Results
Do You Need Great Systems!
Types of Systems I Can Help You Improve
“The only thing I know is that I do not know it all.” -- Socrates | {"url":"http://greatsystems.com/odds.htm","timestamp":"2014-04-21T12:07:30Z","content_type":null,"content_length":"16724","record_id":"<urn:uuid:29773b1f-5236-4635-82e0-7a7dda9c2428>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00155-ip-10-147-4-33.ec2.internal.warc.gz"} |
FOM: Re: Explaining Simpson and McLarty to each other
Colin Mclarty cxm7 at po.cwru.edu
Sat Jan 17 14:19:46 EST 1998
Reply to message from JSHIPMAN at bloomberg.net of Fri, 16 Jan
> Colin, I think Steve really wants to know the following:
> If Set Theory did not exist, would it be necessary to invent it? That is,
>can you come up with an alternative Analysis 101 curriculum, in which you prove
>useful theorems like Fubini's theorem, Stokes's theorem (bridge to topology),
>some useful formulation of Fourier Analysis (bridge to engineering), basic
>properties of the Riemann zeta function (bridge to Number Theory), Chebyshev's
>inequalities (bridge to probability and statistics), and so on, starting from
>categories or toposes rather than set theory? If all you need are natural
>number objects and right inverses those are not hard to motivate, the question
>is can this really be done in a technically manageable fashion?
That's a nice opening. Of course we would need to invent set
theory in some sense. We would not need the cumulative hierarchy. Think
of some of Peter Aczel's points or notice Cantor did not use iterated
membership (let alone a cumulative hierarchy) in his own work.
To get the results you name you would most naturally use natural
numbers and right inverses, I think. (Steve Awodey has pointed out that
something significantly weaker than right inverses will do, called the
"internal axiom of choice", and he is the expert on this. But I find
right inverses more obvious.)
I will mention that the parts of this math not requiring choice
could as well be done in a topos assuming natural numbers and
Booleanness, i.e. assuming the law of excluded middle. That is a very
much weaker assumption than right inverses. And I suppose Steve would
consider it well motivated.
As to how it works: Whether you start with ZF or with a Boolean
topos with natural numbers, you need a lot of notational conventions
(even abuses of notation) before your math will look anything like
a textbook on real analysis. This is the "mess" Vaughan Pratt referred
to. I find that mess a bit simpler in a topos than in ZF--but we are
not talking about orders of magnitude difference, and this kind
of "simplicity" is somewhat vague and even subjective/psychological.
Anyway, once that mess is behind you, the current textbooks
on differential equations, Fourier analysis, number theory and so
on can be used unchanged. The foundational interpretation of the
notations will be different but the notations, statements, and proofs
can stand exactly as stated today.
> (Harvey and
>Steve would maintain, correctly, that you'd still have to invent set theory for
>certain natural theorems whose logical strength corresponds to big ordinals).
More often I hear about large cardinals, which are defined by
combinatoric or topological properties which are just as easily stated
in a topos as in ZF (indeed stated in virtually the same way). When it
comes to ordinals there is some difference of method, though no
difference in expressive power. In ZF every set has a partial order
on its elements, the membership relation. And some sets are well-ordered
this way, "the ordinals". In the topos approach no set, or object,
has any intrinsic ordering. But you can still define an ordered set as
a set with a specified order relation, and you can ask which well
orderings provably exist.
The difference is that when you talk about ordinals in
ZF you use a lot of the apparatus you needed from the start to deal
with sets. In a topos when you talk about well orderings you are
raising an issue you did not need for a lot of other purposes.
Watauga, TX Trigonometry Tutor
Find a Watauga, TX Trigonometry Tutor
...I taught Pre-Cal for four years here in Texas. The challenging part starts after the review of Algebra 2. This covers Trigonometric functions and continues into new areas.
15 Subjects: including trigonometry, chemistry, calculus, geometry
I have had a career in astronomy which included Hubble Space Telescope operations, where I became an expert in Excel and SQL, and teaching college-level astronomy and physics. This also involved
teaching and using geometry, algebra, trigonometry, and calculus. Recently I have developed considerable skill in chemistry tutoring.
15 Subjects: including trigonometry, chemistry, physics, calculus
...During my years as a graduate student at Duke University, I tutored undergraduates and students in the MBA program. I am certified in Secondary Math in the State of Texas and have taught
Algebra I in the Irving ISD. Topics usually covered in Algebra I include number systems, properties of real ...
82 Subjects: including trigonometry, chemistry, reading, English
...Unlike in a classroom setting I can pay undivided attention to my student. I came to the US when I was thirteen and did not speak much English. I was enrolled as an ESL student and graduated
from high school.
37 Subjects: including trigonometry, Spanish, calculus, reading
...My available times vary from week to week, please contact me to see what specific times I have available. I can meet anywhere (typically a local library, Starbucks, or in the comfort of your
home). About me: I am a senior at Texas Wesleyan, completing my bachelor's degree in Mathematics and Sec...
13 Subjects: including trigonometry, physics, calculus, geometry
Instructor Class Description
Time Schedule:
Rene H Levy
PCEUT 405
Seattle Campus
Clinical Pharmacokinetics
Basic principles of pharmacokinetics and their application to the clinical setting, including: single-dose intravenous and oral kinetics, multiple dosing, nonlinear pharmacokinetics, metabolite
kinetics, pharmacogenetics, and the role of disease in drug clearance and dose requirements, and kinetics of drug-drug interactions. Prerequisite: PCEUT 331. Offered: W.
Class description
The goals of this course are to provide the student with: (i) an understanding of the fundamental concepts of pharmacokinetics in humans; (ii) skills in applications of pharmacokinetics in
therapeutic monitoring.
1. To define the basic pharmacokinetic parameters of a drug: volume of distribution, clearance terms, extraction ratio, elimination half life, unbound fraction and to understand how these parameters
are related.
2. To differentiate between rate and rate constant.
3. To understand the role of compartment models in describing the time course of blood levels of drug in the body.
4. To differentiate between organ clearance, total body clearance, formation clearance of a given metabolite and the fraction of clearance associated with a given metabolic enzyme.
5. To describe the effect of protein binding on pharmacokinetic parameters (volume of distribution, clearance and half life) and on steady state plasma concentration.
6. To describe and define the pharmacokinetic parameters of a drug which can be determined using urinary excretion data.
7. To demonstrate how the first pass effect (upon oral administration) can be predicted.
8. To understand the time course of drug accumulation in the body during a constant rate infusion and the notion of steady state (or plateau) level.
9. To relate the time course of drug accumulation to the time course of drug elimination and to determine which pharmacokinetic parameters can be calculated.
10. To understand the concept of additivity of plasma levels upon intravenous or oral multiple dosing.
11. To understand the parameters governing the formation and elimination of metabolites.
12. To discuss the various sources of interpatient pharmacokinetic variability: metabolic isozymes, pharmacogenetics and drug interactions.
13. To understand the concept of Michaelis Menten pharmacokinetics and its consequences on the behavior of plasma levels; to define Michaelis Menten parameters.
14. To use relevant clinical pharmacokinetic data to demonstrate the ability to determine maintenance and loading doses of drugs in different patient populations.
15. To understand the effects of both intersubject and intrasubject variability in physiological and pathological parameters on using pharmacokinetic data to formulate individualized dosage regimens.
Class assignments and grading
1. To calculate pharmacokinetic parameters given sets of timed plasma concentrations after various routes of administration.
2. Given pharmacokinetic equations, to construct plots which demonstrate (i) the dependence of various pharmacokinetic parameters on dose of drug and (ii) the interdependence between parameters;
(iii) the dependence of plasma concentration on changes in protein binding.
3. Given a table of pharmacokinetic parameters: (i) to calculate a dosing regimen; (ii) to predict extent of first pass effect (bioavailability).
4. Given a knowledge of enzymes involved in the metabolism of a drug, to predict the potential for pharmacogenetic differences in the population and the potential for metabolic drug interactions.
There will be three (3) midterm examinations and one (1) final examination.
The information above is intended to be helpful in choosing courses. Because the instructor may further develop his/her plans for this course, its characteristics are subject to change without
notice. In most cases, the official course syllabus will be distributed on the first day of class. Last Update by Alain L Negretot
Date: 05/05/1998 | {"url":"http://www.washington.edu/students/icd/S/pharmceu/405rhlevy.html","timestamp":"2014-04-17T16:36:55Z","content_type":null,"content_length":"6640","record_id":"<urn:uuid:e773fd2a-048a-42e3-a401-15a189c8cb9f>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00333-ip-10-147-4-33.ec2.internal.warc.gz"} |
Oscillations and Simple Harmonic Motion
We begin our study of oscillations by examining the general definition of an oscillating system. From this definition we can examine the special case of harmonic oscillation, and derive the motion of
a harmonic system.
Definition of an Oscillating System
So what exactly is an oscillating system? In short, it is a system in which a particle or set of particles moves back and forth. Whether it be a ball bouncing on a floor, a pendulum swinging back and
forth, or a spring compressing and stretching, the basic principle of oscillation maintains that an oscillating particle returns to its initial state after a certain period of time. This kind of
motion, characteristic of oscillations, is called periodic motion, and is encountered in all areas of physics.
We can also define an oscillating system a little more precisely, in terms of the forces acting on a particle in the system. In every oscillating system there is an equilibrium point at which no net
force acts on the particle. A pendulum, for example, has its equilibrium position when it is hanging vertical, and the gravitational force is counteracted by the tension. If displaced from this
point, however, the pendulum will experience a gravitational force that causes it to return to the equilibrium position. No matter which way the pendulum is displaced from equilibrium, it will
experience a force returning it to the equilibrium point. If we denote our equilibrium point as x = 0 , we can generalize this principle for any oscillating system:
In an oscillating system, the force always acts in a direction opposite to the displacement of the particle from the equilibrium point.
This force can be constant, or it can vary with time or position, and is called a restoring force. As long as the force obeys the above principle, the resulting motion is oscillatory. Many
oscillating systems can be quite complex to describe. We shall focus on a special kind of oscillation, harmonic motion, which yields a simple physical description. Before we do so, however, we must
establish the variables that accompany oscillation.
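The simplest example of a restoring force is that of an ideal spring: displaced a distance x from equilibrium, the spring pushes back with force F = -kx, which points toward x = 0 for either sign of the displacement.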
Variables of Oscillation
In an oscillating system, the traditional variables x , v , t , and a still apply to motion. But we must introduce some new variables that describe the periodic nature of the motion: amplitude,
period, and frequency.
A simple oscillator generally goes back and forth between two extreme points; the points of maximum displacement from the equilibrium point. We shall denote this point by x_m and define it as the
amplitude of the oscillation. If a pendulum is displaced 1 cm from equilibrium and then allowed to oscillate we can say that the amplitude of oscillation is 1 cm.
Period and Frequency
In simple oscillations, a particle completes a round trip in a certain period of time. This time, T , which denotes the time it takes for an oscillating particle to return to its initial position, is
called the period of oscillation. We also define another concept related to time, frequency. Frequency, denoted by ν, is defined as the number of cycles per unit time and is related to period as ν = 1/T.
Period, of course, is measured in seconds, while frequency is measured in Hertz (or Hz), where 1 Hz = 1 cycle/second. Angular frequency defines the number of radians per second in an oscillating system, and is denoted by ω. This may seem confusing: most oscillations don't engage in circular motion, and can't sweep out radians like in rotational motion. However, oscillating systems do complete cycles, and if we think of each cycle as containing 2π radians, then we can define angular frequency. Again, angular frequency for oscillations may seem a bit odd for now, but it will make more sense when we compare oscillations and circular motion. For now, we can relate our three variables dealing with the cycle of oscillation: ω = 2π/T = 2πν.
Equipped with these variables, we can now look at the special case of the simple harmonic oscillator. | {"url":"http://www.sparknotes.com/physics/oscillations/oscillationsandsimpleharmonicmotion/section1.html","timestamp":"2014-04-17T09:59:59Z","content_type":null,"content_length":"56971","record_id":"<urn:uuid:51f60c83-2e8e-466c-a039-3ea94623b85e>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00163-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cantor Dust
Also known as the Cantor set, possibly the first pure fractal ever found – by Georg Cantor around 1872. To produce Cantor Dust, start with a line segment, divide it in to three equal smaller
segments, take out the middle one, and repeat this process indefinitely.
Although Cantor Dust is riddled with infinitely many gaps, it still contains uncountably many points. It has a fractal dimension of log 2/log 3, or approximately 0.631.
Physics Forums - View Single Post - Can the electromagnetic vector potential be written in terms of a complex field?
The Faraday tensor does describe the fields in a single object: it is an antisymmetric tensor with six independent components. However, one can write the Maxwell equations in a form similar to the
Dirac equation, in which E and B appear in a combination like E + iB. Although the potentials can be combined into the so-called four-potential, that is just a device for simplification and covariance.
Gaussian Noise
09-23-2012 #1
Hi guys,
I'll start by saying that this is not strictly a "technical" question, but it requires a little knowledge of communications concepts like Gaussian noise and SNR.
I'm writing a piece of scientific software, and it has a problem: it's not giving the result I'm expecting. I triple-checked everything, and the part I'm posting here seems to me to be the weakest
part of the software.
Ok, I created an array of complex elements, but the img part is always 0.
I need to create a Gaussian noise vector to add to this array to simulate noise conditions.
I haven't found source files to generate Gaussian noise, so I decided to try it myself: I thought I could generate 1000 arrays of pseudorandom elements between -1 and 1 and sum them one by one.
At the end I divide each element by 1000, so I have a "random" array of values in the range [-1,1].
This is the code.
Main part:
for (j = 0; j < NVARS; j++) {
    init_genrand(rand());
    for (i = 0; i < LENGTH; i++) {
        uniformVar[i].real = (genrand_real1()*2 - 1);
        uniformVar[i].img = 0.0;
    }
    cpxvecsum(noisevector, uniformVar, LENGTH);
}
Now that I have a random array of noise, I need to make it gaussian noise, obviously depending on my SNR value.
First of all I find the power of this array, so I can divide each element for a certain number to make its power=1.
float DigitalSignalPower(complex* vettore, int size)
{
    int i;
    float absol;
    float temp = 0.0;
    for (i = 0; i < size; i++) {
        absol = fabs(vettore[i].real);
        temp += pow(absol, 2);
    }
    return temp / size;
}
What I did is sum |x_i|^2 over the array, and at the end divide by size to normalize the result on the length of the array.
And in the main I divide each element by the square of this number. Now the power of the vector is 1, but I don't know if the "algorithm" is right.
By the way, I know that SNR = Psignal / Pnoise, so I need to find the new noise power, but I need to get it in a linear scale, so Pnoise = Psignal / 10^(SNRdB / 10).
I have a fixed SNRdB, and I compute Signal power as I computed noise power at the start of the topic:
float DigitalSignalPower(complex* vettore, int size)
{
    int i;
    float absol;
    float temp = 0.0;
    for (i = 0; i < size; i++) {
        absol = fabs(vettore[i].real);
        temp += pow(absol, 2);
    }
    return temp / size;
}
and at the end I multiply my noise vector element by element with the square of this number.
Does that seem right to you?
Maybe you know a faster way to implement Gaussian noise?
It would probably be easier to use an approach that is often used for this purpose, such as the Box-Muller transform.
Right 98% of the time, and don't care about the other 3%.
I didn't check the math.
Just noticed something very wrong at the top of the "Main part"
for (j = 0; j < NVARS; j++) {
    init_genrand(rand());
You initialize the random number generator inside a loop. Don't do that! Initialize it once only.
Oh ok, so I just need to fill my noise vector with numbers generated by this transform? I should always normalize it by the SNR value as the function does not get SNR as input... I was looking
for something similar to the awgn function in Matlab, but I often find it implemented in C++ or other languages.
Ok thanks, it already seems much faster!
But the results are still wrong. The proposed Box-Muller solution seems nice, but it doesn't take SNR as input, and I really don't know how to normalize it... Should I use it instead of
generating 1000 vars?
Last edited by DeliriumCordia; 09-23-2012 at 03:48 AM.
Box-Muller generates pairs of normally distributed numbers with mean zero and unit variance (assuming a statistically significant number of samples taken, which is sort of the point). It is
trivial to do a transformation on such values to get any specified mean and variance.
You already gave formulae relating mean noise level to SNR and signal level .....
Right 98% of the time, and don't care about the other 3%.
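Putting the two suggestions above together, here is a minimal C sketch (gauss01 and add_awgn are names invented for this sketch, the complex struct is assumed to match the poster's, and rand() merely stands in for a better uniform source such as genrand_real1):

#include <math.h>
#include <stdlib.h>

typedef struct { float real; float img; } complex;   /* assumed layout */

/* One standard-normal sample (mean 0, variance 1) via basic Box-Muller. */
static double gauss01(void)
{
    double u1, u2;
    do {
        u1 = rand() / (RAND_MAX + 1.0);
    } while (u1 <= 0.0);                 /* avoid log(0) */
    u2 = rand() / (RAND_MAX + 1.0);
    return sqrt(-2.0 * log(u1)) * cos(6.283185307179586 * u2);
}

/* Add white Gaussian noise so its power is signal_power / 10^(snr_db/10).
   The amplitude scale is the square ROOT of the noise power, because
   power goes as amplitude squared. */
static void add_awgn(complex *v, int size, double signal_power, double snr_db)
{
    double noise_power = signal_power / pow(10.0, snr_db / 10.0);
    double amp = sqrt(noise_power);
    int i;
    for (i = 0; i < size; i++)
        v[i].real += (float)(amp * gauss01());
}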
Ok, I think I got it, but if I simulate a vector of 1000 elements generated with that algorithm, I get a mean in the range [-0.05, 0.05]; isn't that a little big?
You are aware of the statistical notion of confidence intervals?
To answer your question, however, for a thousand elements that variability of the mean is not particularly significant.
Right 98% of the time, and don't care about the other 3%.
Ramanujan: Magician or Transcendental Professional
SIAM News, January 1992
Review of The Man Who Knew Infinity
By Robert Kanigel
Charles Scribner's Sons, New York 1991, 438 pages, $27.95
The famous playwright Moss Hart, in the flush of his first grand success on Broadway, bought a yacht and invited his family and friends to a party on board. His mother saw him strutting on the deck
wearing a captain's hat. This upset her. And she said to her son, "Moss, you think you're a captain. And me? Well, for me you're a captain. But when it comes to the captains, do they think you're a
The story of Ramanujan is fairly well known, but a brief outline may be in order. Late in 1912, Cambridge mathematicians H.F. Baker (algebraic geometry) and E.W. Hobson (theory of functions of a real
variable, spherical harmonics) received letters from India that included samples of newly worked out mathematics. They ignored the letters.
In January 1913, Geoffrey Harold Hardy, who at that time was considered to be among the most brilliant of English mathematicians, received a similar letter. Hardy, who was closer to the material of
the samples than either of the first two men, did not ignore it. He realized that the samples were not only inspired but devilishly difficult. Through Hardy, the author of the letter, a young Indian
by the name of Srinivasa Ramanujan, received a fellowship to work in Cambridge. He remained there off and on, working with Hardy, suffering from tuberculosis. Finally, he returned to Madras, where he
died in 1920 at the age of 32, a fellow of the Royal Society.
The legend of Ramanujan then began in earnest. In certain quarters, studies in Ramanujanian mathematics are popular today, perhaps even more so than they were a few years ago.
What are some of the raw materials that contribute to this legend? Ramanujan was born a high-caste Brahmin of modest economic status. In his early years, he lived the traditional life of a Brahmin.
He wore the topknot; his forehead was shaved. He was a strict vegetarian. He was spiritually and ceremonially religious. Every year he went to a local temple at the full moon of Sravanam to renew his
investiture with the Sacred Thread of the Brahmin caste. When he went to England, it was only after much soul searching and in defiance of Brahmin tradition. Throughout his life he recited the Vedas;
he believed in palmistry and in dreams, and he interpreted what he saw. He talked till the wee hours about the intimate relationship among God, zero, and infinity (as do some of our contemporary
semioticians of mathematics!) As to his purely mathematical life, however, Ramanujan's formal education in mathematics seems hardly to have gone beyond Sidney Loney's Trigonometry (CU Press, 1893),
which he mastered by the age of 13. He seems to have had little conception of the theory of complex analytic functions, in which some of his identities would later be embedded. He seems to have had
little conception of what a "rigorous" proof was, how it functioned, what its purposes where. And yet, this mystic mathematical autodidact, if we disregard a few errors and some trash, came up with
some of the most remarkable of mathematical theorems and identities. The problem for human cognition: How did he do it? The problem for Hardy: How to place this wonderful stuff firmly within the
canon; how also to move beyond the material that could be grasped with Ramanujan as a guide to where there would be yet more wonderful things.
As part of his effort to educate him, Hardy tried to teach Ramanujan what a proof was, what the theory of functions of a complex variable was --- in short, to teach him how to behave and think and
conceptualize in the manner of a proper Western mathematician of the period, to recreate him in his own image.
In the social sphere, Hardy tried to make Ramanujan "clubbable", to bring him into the Cambridge milieu somewhat in the manner in which Professor Higgins worked with Eliza Doolittle. He wanted to
bring this eater of rice, yogurt, and sambar to the roast beef and mutton of the Trinity High Table, and after dinner, to the port, apples, and little Conversaziones in the Combination room. Where
the fictitious Higgins succeeded, Hardy largely failed. Despite this failure, the younger man was not only to inspire the older man mathematically, but to provide him with a decided dash of romantic
color in a gray bachelor life whose only elements seemed to be mathematics and cricket.
This, in brief, is the story, and it is also the background for a remark made to me recently by a Cambridge don: "Of course we get a lot of fat, unsolicited envelopes. Most of the contents are
rubbish. But when the envelope bears a postmark from India, we tend to look at the rubbish rather more carefully."
Robert Kanigel, a science writer and a teacher of literary journalism at Johns Hopkins University, has written a splendid biography of Ramanujan. Carefully researched and documented, written lightly
with hardly a tedious page in it, the book is very much in the spirit of today's biographical craft: detailed, and with little respect for the "privacy" of the biographer. Although today's
biographies have a marked tendency toward simultaneous hero-worship and hero-bashing, Kanigel wisely maintains respect for his subject.
To round out the picture, Kanigel has done much more. As minor themes, he has given us a parallel biography of G.H. Hardy and travelogues into the minds and customs of Madras, India and of Cambridge,
England. And there is even more: He has tried to tell the general reader (I'm not sure how successfully) what Ramanujanian mathematics is all about.
The literary world has been deluged by material on the Cambridge scene. There are now dozens (it seems) of books on the Cambridge spies, moles, and double agents. Even this reviewer has contributed
to the Cambridge industry with a satire on the Cambridge Common Room (Thomas Gray, Philosopher Cat). Kanigel's work fills some of the remaining gaps, and the world of Cambridge mathematics now stands
quite adequately fleshed out. Madras, on the other hand, is unfamiliar, exotic, and more than a bit incomprehensible.
The basic as yet unanswered question is: How did Ramanujan do it? Sheer formal computational strength, as some have conjectured? Or does there exist an intuitive grasp of mathematical matters, a
grasp of such super penetrability that it is possessed by only one or two geniuses in a millennium?
The late Mark Kac gave as good an answer as can be found. Kanigel quotes it:
An ordinary genius is a fellow that you or I would be just as good as, if we were only many times better. There is no mystery as to how his mind works. Once we understand what he has done, we
feel certain that we, too, could have done it. It is different with the magicians. They are, to use mathematical jargon, in the orthogonal complement of where we are, and the working of their
minds is, for all intents and purposes, incomprehensible. Even after we understand what they have done, the process by which they have done it is completely dark.
Swimming along with today's psychologizing, social constructivist mode of biography, we may ask which, if any, of the pieces of local color that Kanigel provides were critical. Did it matter that
Ramanujan ate pickles and chilies while he was tubercular? Did his communion with the family goddess Namagiri of Namakkal help his mathematics? Were his firm beliefs in palmistry and astrology part of
a unified picture that included mathematics? In the Hardy-Ramanujan collaboration, what, if anything, was the role of the suppressed homosexuality sensed in Hardy by his close collaborator J.E. Littlewood?
The Ramanujan story raises many kinds of questions. What is mathematical intuition, insight? Can they be cultivated? What is genius? Can education foster it? Or is genius, by definition, that which
is beyond fostering? What is proof? Why should one care? Philosophically, does the reality of mathematical results depend on their being accommodated to an establishment view? Is there such a thing
as mathematical luck? What is interesting, dull, significant in mathematics? (Most mathematicians couldn't care less about Ramanujan's discoveries.)
The succès fou of this particular unorthodoxy raises another question. How would you answer if I asked whether Ramanujan was a professional? I ask because several years ago, a prominent number
theorist in the U.K. told me that in his opinion Louis Mordell was not a professional. Mordell? Of the Mordell-Weil theorem? Sadlerian Professor of Mathematics at Cambridge and an FRS? Come off it!
I am not sure what was implied by the denial, but it got me thinking about the word "professional". I would have said, following the practice in athletics, that professionals are simply people who
make money out of whatever it is that they do. But there is a counter definition: "A hired man is not a professional man. The essence of a professional man is that he is answerable for his
professional conduct only to his professional peers." Thus saith H.L. Mencken (Prejudices: Sixth Series, 1927), and I believe that Lily Hart of my lead-in anecdote would have agreed.
With this in mind, I would hope that today's mathematical establishment could exhibit some of the flexibility as regards mathematical conduct that Hardy did --- and that wasn't too much. Flexibility
seems often today to be a rare commodity. After all, in Hardy's time the name of the game was Proof; but that, really, is only part of the game. I know a number of brilliant people in several
countries whose work falls "between the cracks" --- and who have suffered greatly from cross-disciplinary intolerance. Their interests are viewed as peculiar, their methodologies unorthodox, their
results strange, their personalities cranky. They have been filtered out of university mathematics teaching as misfits, and they make their livings as they can. Think of Charles Sanders Peirce and
you will have an example of the sort of person I am talking about.
The Established Church always has great trouble with its Saints --- until, of course, they are canonized. Saints are a difficult, nonstandard lot, and anyone who claims to have a private hot line to
God is prima facie suspect. Such people are very lucky indeed when they have a Hardy to put forward their names in nomination.
Philip J. Davis is a professor in the Division of Applied Mathematics at Brown University. | {"url":"http://cs.nyu.edu/faculty/davise/personal/PJDBib/Ramanujan.html","timestamp":"2014-04-16T19:55:41Z","content_type":null,"content_length":"10955","record_id":"<urn:uuid:1c4f49a9-3b91-4f57-a26a-7036a73b0df1>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00411-ip-10-147-4-33.ec2.internal.warc.gz"} |
Maywood, IL Trigonometry Tutor
Find a Maywood, IL Trigonometry Tutor
...I hold a bachelor of science in electrical engineering with emphasis in mathematics. I teach with solid math acumen, coupled with encouragement and positive reinforcement. This has proved
positive for my own kids and many students I have tutored.
18 Subjects: including trigonometry, geometry, algebra 2, study skills
...One of the main areas I work with students on as part of their study skills is with respect to the process of how they go about memorizing concepts or words. For example when memorizing
vocabulary words to successfully develop a process which allows for the memorization of 30 words in 30 minutes...
38 Subjects: including trigonometry, Spanish, reading, statistics
...I am currently pursuing my Teaching Certification from North Central College. I have assisted in Pre-Algebra, Algebra, and Pre-Calculus classes. I have also tutored Geometry and Calculus
7 Subjects: including trigonometry, geometry, algebra 1, algebra 2
...I attended Dominican University and am majoring in Mathematics with a concentration of Secondary Education. I plan to get my certificate in Special Education as well. I have been exposed to
people with special needs throughout most of my life.
19 Subjects: including trigonometry, reading, calculus, geometry
...The main focus however, is generally in word problems. A good foundation in understanding and solving word problems not only creates a basis in Mathematics, but also prepares the student for
real-life situations. Topics include: working with fractions, decimals, percents, positive/negative integers, rational numbers, ratios and proportions, and algebraic equations.
11 Subjects: including trigonometry, calculus, algebra 2, geometry
Newton's Third Law of Motion
This page is intended for college, high school, or middle school students. For younger students, a simpler explanation of the information on this page is available on the Kids Page.
Sir Isaac Newton first presented his three laws of motion in the "Philosophiae Naturalis Principia Mathematica" in 1686. His third law states that for every action (force) in nature there is an equal
and opposite reaction. In other words, if object A exerts a force on object B, then object B also exerts an equal and opposite force on object A. Notice that the forces are exerted on different
For aircraft, the principal of action and reaction is very important. It helps to explain the generation of lift from an airfoil. In this problem, the air is deflected downward by the action of the
airfoil, and in reaction the wing is pushed upward. Similarly, for a spinning ball, the air is deflected to one side, and the ball reacts by moving in the opposite direction. A jet engine also
produces thrust through action and reaction. The engine produces hot exhaust gases which flow out the back of the engine. In reaction, a thrusting force is produced in the opposite direction.
New definitions are in bold and key topics covered are in a bulleted list.
This schedule is approximate and subject to change!
Introduction (3 classes)
2/1: graph, vertex, edge, finite graph, multiple edges, loop, simple graph, adjacent, neighbors, incident, endpoint, degree, degree sum, isolated vertex, leaf, end vertex, degree sequence, graphic,
Havel-Hakimi algorithm (a C sketch follows this list)
• Notes from Section 1.1 (Notes pages 0–14)
• Syllabus discussion.
• What is a graph?
• How to describe a graph.
• Degree sequence of a graph.
• Theorem 1.1.2.
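A quick C sketch of the Havel-Hakimi test for graphic sequences (an illustration, not part of the course notes; it assumes nonnegative entries and overwrites its input):

#include <stdlib.h>

static int desc(const void *a, const void *b)
{
    return *(const int *)b - *(const int *)a;   /* largest degree first */
}

/* Returns 1 if d[0..n-1] is graphic (realizable by a simple graph), else 0. */
int is_graphic(int d[], int n)
{
    int i;
    for (;;) {
        qsort(d, n, sizeof d[0], desc);
        if (d[0] == 0) return 1;        /* all degrees satisfied */
        if (d[0] >= n) return 0;        /* too few other vertices */
        for (i = 1; i <= d[0]; i++)
            if (--d[i] < 0) return 0;   /* some degree went negative */
        d[0] = 0;                       /* largest-degree vertex is placed */
    }
}

For example, {2, 2, 1, 1} is graphic (a path on four vertices), while {3, 1, 1} is not.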
2/6: path graph P_n, cycle C_n, complete graph K_n, bipartite graph, complete bipartite graph K_{m,n}, wheel graph W_n, star graph St_n, cube graph
• Proof of Theorem 1.1.2.
• Notes from Section 1.2 (Notes pages 15–23)
• A dictionary of graphs.
2/8: Petersen graph, Grotzsch graph, Platonic solid, Schlegel diagram, Tetrahedron, Cube, Octahedron, Dodecahedron, Icosahedron, equal graphs, isomorphic graphs, disjoint union, union, graph
complement, self-complementary graph, subgraph, induced subgraph, proper subgraph
• A dictionary of graphs.
• Schlegel diagrams of Platonic solids.
• When are two graphs the same?
• Larger graphs from smaller graphs.
• Smaller graphs from larger graphs.
• Groupwork on definitions.
Graph Statistics (3 classes)
2/15: path in G from a to b, connected graph, disconnected graph, connected component, cut vertex, cut set
2/21: bridge, disconnecting set, connectivity (κ(G)), edge connectivity (κ'(G)), minimum, minimal, tree, forest
• Vertex and edge connectivity (No new notes today)
• Um versus Al
• Lemma A. If there is a path from a to b in G and a path from b to c in G, then there is a path from a to c in G.
• Lemma B. Let G be a connected graph. Suppose that G contains a cycle C and e is an edge of C. The graph H=G \ e is connected.
• Theorem 1.3.1.
• Discussion of Homework 2
2/27: tree, forest, girth g(G), distance between vertices, diameter diam(G), clique, clique number ω(G), independent set, independence number α(G)
• Trees and forests.
• Theorems 1.3.2, 1.3.3, and 1.3.5.
• Theorems 2.4.1 and 3.2.1
• Graph statistics (Notes pages 35–36)
Coloring (2 classes)
2/29: (vertex) coloring, proper coloring,
3/5: critical graph, bipartite graph, edge coloring, edge chromatic number
• Critical graphs
• Bipartite graphs
• Edge coloring
• Vizing's Theorem
3/7: snark, turning trick
• Snarks
• Edge chromatic number of complete graphs
• Question and Answer Day
• Exam 2
Planarity (4 classes)
3/19: drawing, simple curve, plane drawing, plane graph, planar graph, region, face, outside face, maximal planar, dual graph
• Notes from Sections 8.1 and 8.2 (Notes pages 50–61)
• Planar graphs.
• Euler's Formula.
• Maximal planar graphs.
3/21: dual graph, map, normal map, Kempe chain, deletion, contraction, minor, subdivision
• dual graph, self-dual graph
• Maps, normal maps.
• Four Color Theorem (not proved).
• History of the four color theorem.
• Notes from Sections 8.2, 8.3, and 9.1 (Notes pages 62–71)
• Six Color Theorem (proved).
• Five Color Theorem (proved).
3/26: Kempe chain, deletion, contraction, minor, subdivision
• Five Color Theorem (proved).
• Kempe Chains argument
• Modifications of graphs.
• Kuratowski's Theorem.
3/28: crossing number, thickness, and genus of a graph, torus,
• Notes from Sections 9.1, 9.2, and 10.3 (Notes pages 72–78)
• Statistics of nonplanarity.
• Crossing number of a graph
• Thickness of a graph
• Genus of a graph
• The Peterson graph is non-planar.
Algorithms (5 classes)
4/2: algorithm, correctness, matching, perfect matching, Hungarian algorithm, M-alternating path, M-augmenting path
• The Peterson graph is non-planar.
• Notes from Section 7.2 and more (Notes pages 79–86)
• Algorithms.
• Maximal, maximum, perfect matchings.
• Hungarian algorithm.
• Discussion of Homework 4
4/16: stable matching
• Correctness of the Hungarian algorithm.
• Notes about stable matchings (Notes pages 87–95)
• Stable matchings
• The play
• Proof of correctness
• Proof of male optimality
4/18: directed edges, network, flow, cut, max flow, min cut, augment a flow
• Notes about network flow (Notes pages 96–108)
• Network
• Flow in a network, MAX FLOW
• Edge cuts, MIN CUT
• Peer Review Day
4/25: Ford-Fulkerson algorithm, companion graph, transshipment, dynamic network
• Ford-Fulkerson algorithm and examples
• Notes about transshipment (Notes pages 109–115)
• Transshipment
• Dynamic Network
4/30: weighted graph, spanning tree, Kruskal's Algorithm, Hamiltonian Cycle, Traveling Salesman Tour
• Notes from Section 7.1 and TSP (Notes pages 116–123)
• Minimum Weight Spanning Trees
• Traveling Salesman Problem
• Question and Answer Day
• Exam 2
• Presentations
5/23, 4-6 pm (Final Exam Day)
• Presentations | {"url":"http://people.qc.cuny.edu/faculty/christopher.hanusa/courses/634sp12/Pages/notes.aspx","timestamp":"2014-04-16T07:32:55Z","content_type":null,"content_length":"42458","record_id":"<urn:uuid:af137ad1-83d7-448c-ae27-04650dd5b84c>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00214-ip-10-147-4-33.ec2.internal.warc.gz"} |
Polarized category theory, modules, and game semantics
J.R.B. Cockett and R.A.G. Seely
Motivated by an analysis of Abramsky-Jagadeesan games, the paper considers a categorical semantics for a polarized notion of two-player games, a semantics which has close connections with the logic
of (finite cartesian) sums and products, as well as with the multiplicative structure of linear logic. In each case, the structure is polarized, in the sense that it will be modelled by two
categories, one for each of two polarities, with a module structure connecting them. These are studied in considerable detail, and a comparison is made with a different notion of polarization due to
Olivier Laurent: there is an adjoint connection between the two notions.
Keywords: polarized categories, polarized linear logic, game semantics, theory of communication
2000 MSC: 18D10,18C50,03F52,68Q55,91A05,94A05
Theory and Applications of Categories, Vol. 18, 2007, No. 2, pp 4-101.
Math Forum - Ask Dr. Math Archives: College Number Theory
Browse College Number Theory
Stars indicate particularly interesting answers or good places to begin browsing.
Selected answers to common questions:
Testing for primality.
In the equation a^3 + b^3 = c^3, how is it possible to prove that there are no integers that satisfy the equation?
I have determined that there are no prime numbers in the interval [n! + 2, n! + n], but I am trying to make sense of why. Is there a theorem that explains why this works?
Can you prove that 1 + 1 = 2?
How can I prove that if p is a prime number, then the equation x^5 - px^4 + (p^2-p)x^3 + px^2 - (p^3+p^2)x - p^2 = 0 has no integer roots?
How can I prove that the product of a non-zero rational number and an irrational number is irrational without using specific examples?
How can I show that the cube root of 3 is irrational?
Claim: Let m and n be positive integers. Then abs|2^(n+1/2) - 3^m|<1 if and only if n = m = 1. I would like to know how to prove that claim or, at least, obtain some hints as to how to proceed.
Let p be a prime number. Prove that 2^p - 1 is either a prime number or a pseudoprime number (2^n is congruent to 2 modulo n, where n is composite).
If a and b are positive integers, prove that (a^5)*(b) - (a)*(b^5) is a multiple of 10.
Prove that if p and q are twin primes, each greater than 3, then p+q is divisible by 12.
Let a and b be odd integers such that a^2 - b^2 + 1 divides b^2 - 1. Prove that a^2 - b^2 + 1 is a perfect square.
Let p be a prime number. Show that the polynomial x^p + px + (p-1) is irreducible over Q if and only if p >= 3.
How can I prove that a^x = a^y iff y = x for all real numbers x and y?
Prove that (n^2 - n) is divisible by 2 for every integer n; that (n^3 - n) is divisible by 6; and that n^5 - n is divisible by 30.
How can you prove Fermat's Last Theorem for the specific case n = 4?
What is a proof that there are infinitely many primes of the form 4n + 1?
Explain why phi(m) is always even for m greater than 2...
If m and n are relatively prime, show that Zmn is isomorphic to Zm X Zn.
How do you prove that the sequence of convergents 3 + 1/(7 + 1/(15 + 1/(1 + 1/(292 + 1/...)))) actually converges to pi?
False statements of Euler's Theorem and Fermat's Little Theorem.
How can you prove or derive the commutative, associative, and distributive properties of numbers?
Resources for learning about public key cryptography (RSA system).
I am looking for a triple of 3 natural numbers (a,b,c)...
Why are (3,4,5), (20,21,29), (119,120,169), and (696,697,985) considered Pythagorean triples?
Do all right triangles with integer side lengths have a side with a length divisible by 5?
Is there a shortcut to finding the integer solutions to equations of the type x^2 + c = y^2, where c is a constant of known value?
Find all positive integers N such that 2*N^2 - 2*N + 1 is the square of an odd integer.
Prove that the equation 34*y^2 - x^2 = 1 in Z (integer number set) has no solution.
I have the polynomial P(x) = 2*x^2 + 3*x + 4, and I'm trying to find all values of x for which P is a perfect square. Are there infinite values of x that generate perfect squares for P? Is there
a formula to generate those x values? From there, is there a general formula for P(x) = a*x^2 + b*x + c?
Devise a method for solving the congruence x^2 == a (mod p) if the prime p == 1 (mod 8) and one quadratic non-residue of p is known.
Are quadratic residues used only to prove other algorithms, or is there actually a useful application in solving, for example, numerical problems?
If p is prime, and if a^((p-1)/2) is congruent to 1 modulo p, then show that a is a quadratic modulo p.
Given x^2==a(mod p), let p be an odd prime. There are exactly (p - 1)/2 incongruent quadratic residues of p and exactly (p - 1)/2 quadratic nonresidues of p. Can you provide an example that helps
explain this concept?
In one of the lemmas in number theory, if p is an odd prime number, then there exist x, y such that x^2+y^2+1=kp...
Find all the rational solutions to x^2 + y^2 = 2.
What is the exact relationship between the gcf or gcd and the lcm of two numbers?
Questions about Pythagorean triples.
Given 17 integers, prove that it is always possible to select 5 of the 17 whose sum is divisible by 5.
How can I find the remainder when (12371^56 + 34)^28 is divided by 111?
If the length of the repeating sequence in a decimal of a converted fraction is less than the denominator of the fraction, is it always an integer factor of the denominator minus one?
Kronecker operational matrices for fractional calculus and some applications.
(English) Zbl 1123.65063
The authors study several operational matrices for integration and differentiation. For some applications, it is often not necessary to compute exact solutions, approximate solutions are sufficient.
The given method is extended to find exact and numerical solutions of general systems of matrix convolution differential equations.
Several systems are solved by the new and other approaches, and illustrative examples are considered.
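As a flavor of what such an operational matrix looks like, here is a minimal sketch in a block-pulse basis (my illustration; the paper's constructions and choice of basis may differ):

```python
import numpy as np

def bp_integration_matrix(m, T=1.0):
    # Block-pulse operational matrix of integration P: integrating a
    # block-pulse expansion reduces to the matrix product P @ coefficients.
    h = T / m
    return h * (0.5 * np.eye(m) + np.tril(np.ones((m, m)), k=-1))

P = bp_integration_matrix(4)
print(P @ np.ones(4))  # ~[0.125 0.375 0.625 0.875]: the integral of 1 is t
```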
65K10 Optimization techniques (numerical methods)
49J15 Optimal control problems with ODE (existence)
26A33 Fractional derivatives and integrals (real functions) | {"url":"http://zbmath.org/?q=an:1123.65063","timestamp":"2014-04-18T15:48:38Z","content_type":null,"content_length":"21248","record_id":"<urn:uuid:720d68e7-85be-4326-a43e-173c9b00fc11>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00467-ip-10-147-4-33.ec2.internal.warc.gz"} |
Unimodular Row Reduction: Systems of Linear Diff. Eq. with Constant Coefficients
The letter D is used to denote differentiation of a function of t.
x and y are both functions of t.
Using unimodular row reduction, I want to solve the system:
(D² – 1)x + (D² – D)y = –2 sin(t)
(D² + D)x + D²y = 0
I have already reduced the system to:
(D+1)x + Dy = 2sin(t)
0x + 0y = cos(t)
I notice that from the second equation, the system is consistent only if cos(t) = 0, in which case y will be a free variable, but how do I proceed from there to determine the solution to the system?
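One quick way to see why the reduction bottoms out like this (a minimal check, assuming SymPy is available): treat D as a polynomial variable and look at the operator matrix of the original system. Its determinant vanishes, so a zero row is unavoidable under unimodular row reduction.

```python
import sympy as sp

s = sp.symbols('s')                   # stands in for the operator D
A = sp.Matrix([[s**2 - 1, s**2 - s],
               [s**2 + s, s**2]])
print(A.det())                        # 0: the operator matrix is singular
```

And since cos(t) is not identically zero, the equation 0x + 0y = cos(t) cannot hold on any interval; if the reduction above is right, the conclusion is simply that the system has no solution.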
OpEdNews Article: Here's another reason the 9/11 fire-mediated collapse theory has to be wrong.
The notion that the WTC towers collapsed because fire weakened the steel is laughable.
The fact that other steel-framed, steel-cored buildings have suffered much longer burning, much larger in extent and, demonstrably, hotter
fires, and yet never collapsed, shows how difficult it is in practice to bring down one of these buildings from fire.
Apparently, these buildings are robust structures, highly over-built to handle heavy wind loads; and it seems you would need to heat a large volume of steel, uniformly, over a wide cross-sectional
area of the structure, to even have a chance of making one collapse in the neat, symmetrical manner witnessed (to the extent it is even, theoretically, possible to do this without resorting to
explosives in the first place).
The easiest way to see that these buildings were rigged for demolition is to start by considering the fact that, between the time Flt. 175 hit WTC2 and the time the building collapsed, only 56
minutes had elapsed. And 56 minutes, simply, isn't enough time to develop a fire hot enough, nor large enough in extent, to even have a remote chance of getting enough steel hot enough to be a
The best way to see the absurdity of the fire-mediated collapse theory is to make some simplifying assumptions...and apply some simple math and physics to the problem.
Say, for the sake of argument, that you're concerned with one floor of the building. Assume that you have an unlimited supply of readily combustible fuel available (which is, obviously, not true, but let's be generous), and that there is no heat loss by convection, conduction, or radiation (another ridiculous assumption, but let's give the shills every advantage).
Now, the rate at which the temperature rises on that floor will be determined by the composite thermal mass of the building materials associated with that floor, and the rate at which you can bring
in oxygen to burn the fuel. Assuming, say, about 5E5 kg of steel, and about 1.4E6 kg of concrete, per floor (taking internet based numbers at face value), with specific heats of about 450 and 3300 J/
kg*C, respectively, simple algebra shows that you would have to release about 3.27E12 Joules of energy to uniformly bring the temperature from ambient up to, say, 700 degrees C (starting to get into
the interesting range, but probably still not high enough to cause a collapse).
The problem is that for WTC2, you have to release this huge amount of energy in only 56 minutes. That means you would have to burn somewhere on the order of 30,000 gallons of jet fuel in 56 minutes.
That means you would have to supply air to the fire inside the building at a rate somewhere in the neighborhood of 6E5 cubic feet per minute.
That's right, in order to bring the temperature of one floor of a WTC tower from 25 to 700 degrees centigrade, uniformly, in a short 56-minute time frame, you would have to supply about 600,000 cubic
feet of air per minute...for each of those 56 minutes. And that’s a ridiculously high number. And even if you did find a way to create such blast furnace like conditions, the fact of the matter is
that you would convect a significant portion of the heat away, just like what happens in a fireplace; in order to let fresh air in, you have to let the heated, oxygen-depleted air escape.
If you were lucky, and the process was, say, 50% efficient (meaning the airflow only carried away half your heat), you would need to double everything, which would mean burning 60,000 gallons of jet
fuel in 56 minutes, while feeding the fire with over one million cubic feet of air per minute.
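For readers who want to check the chain of figures, here it is in one place. The masses, specific heats, and temperatures are the article's own assumptions; the jet-fuel energy density, fuel density, air-to-fuel ratio, and air density are round reference values supplied for this sketch.

```python
m_steel, c_steel = 5e5, 450            # kg, J/(kg*C) -- article's values
m_concrete, c_concrete = 1.4e6, 3300   # kg, J/(kg*C) -- article's values
dT = 700 - 25                          # degrees C

Q = (m_steel * c_steel + m_concrete * c_concrete) * dT   # ~3.27e12 J
gallons = Q / 1.3e8                    # ~1.3e8 J per gallon of jet fuel
air_m3 = gallons * 3.0 * 15 / 1.2      # ~3 kg/gal fuel, 15:1 air, 1.2 kg/m^3
cfm = air_m3 * 35.3 / 56               # cubic feet per minute over 56 min

print(f"{Q:.2e} J, {gallons:,.0f} gal, {cfm:,.0f} cfm")
# -> ~3.3e12 J, ~25,000 gal, ~6e5 cfm, in line with the text
```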
By way of the above numbers, the absurdity of the "official" version of events is laid bare for all to see.
Math 241, Calculus III, Fall 2011
• Course web page: http://dunfield.info/241
• Instructor: Nathan Dunfield
□ Email: ; Office: 378 Altgeld; Phone: (217) 244–3892.
□ Office hours: Monday 1:30–2:30, Tuesday 9:30–10:30, Thursday 2:00–3:30, and by appointment.
• Discussion sections: Meet each Tuesday and Thursday for 50 minutes.
• Tutoring room: see Sources of help below.
• Detailed course diary: Topics, lecture notes, HW assignments, worksheets, etc.
The focus of this course is vector calculus, which concerns functions of several variables and functions whose values are vectors rather than just numbers. In this broader context, we will revisit
notions like continuity, derivatives, and integrals, as well as their applications (like finding minima and maxima). We’ll explore new geometric objects such as vector fields, curves, and surfaces in
3-space and study how these relate to differentiation and integration. The highlight of the course will be theorems of Green, Stokes, and Gauss, which relate seemingly disparate types of integrals in
surprising ways.
For most people, vector calculus is the most challenging term in the calculus sequence. There are a larger number of interrelated concepts than before, and solving a single problem can require
thinking about one concept or object in several different ways. Because of this, conceptual understanding is more important than ever, and it is not possible to learn a short list of “problem
templates” in lecture that will allow you to do all the HW and exam problems. Thus, while lecture and section will include many worked examples, you will still often be asked to solve a HW problem
that doesn’t match up with one that you’ve already seen. The goal here is to get a solid understanding of calculus so you can solve any such problem you encounter in mathematics, the sciences, or
engineering, and that requires trying to solve new problems from first principles, if only because the real world is sadly complicated.
We will cover Chapters 12–16 of
• James Stewart, Calculus: Early Transcendentals, 6th edition, with Enhanced WebAssign.
Please note there are two versions of this text currently available, and this course uses the 6th edition rather than the 7th. You will also need WebAssign access to do the online homework. The U of
I bookstore is selling a bundle of the text and WebAssign for $159, and the same is available directly from the publisher for $140.00 (including shipping). If you buy a used copy, or a new copy
without WebAssign, you can purchase single-semester access to WebAssign for $47.
Course Policies
Overall grading: Your course grade will be based on the online HW (10%), section worksheets and quizzes (6%), three in-class exams (18% each), and a comprehensive final exam (30%). Grade cutoffs on
any component will never be stricter than 90% for an A- grade, 80% for a B-, and so on. Individual exams may be curved more generously depending on their difficulty.
Exams: There will be three evening midterm exams, which will be held from 7:00–8:30pm on September 20, October 18, and November 14 (all Tuesdays).
There will be a combined final exam for these sections of Math 241. The final exam will take place on Tuesday, December 13 from 1:30pm–4:30pm, with the conflict exam on Wednesday, December 14 from
All exams will be closed book and notes, and no calculators or other electronic devices (e.g. cell phones, iPods) will be permitted.
Homework: Homework will be assigned for each lecture and posted on the course webpage. Most of this will be done online via WebAssign, and will generally be due two lectures later, just before the
8am class starts. That is, HW for Monday’s lecture is due Friday at 8am, and Wednesday’s is due on the following Monday, etc. You will be responsible for the remaining HW problems on exams and quizzes, but they will not be collected. Late HW will not be accepted, but the lowest 5 scores will be dropped. The first assignment is due Friday, August 26.
Homework Solutions: WebAssign lets you see a complete solution to any assigned problem once the due date has passed. For the other HW problem, you can check your answers and see the solutions here:
Part 1, Part 2, Part 3, Part 4.
WebAssign: To sign up for the online HW, go to WebAssign and enroll using the Class Key:
Important: Be sure to provide your NetID@illinois.edu address when WebAssign asks for your email. This is crucial for your HW scores to be credited toward your grade. Technical issues with WebAssign
should be addressed here. If you have not yet received your WebAssign access code, sign up for the 14 day free trial so you can start doing the HW immediately.
Worksheets and Quizzes: Most section meetings will include either a worksheet or a quiz. The former will be graded for effort and latter for accuracy. Missing either results in a score of zero, but
the lowest 4 scores in this category will be dropped.
Conflict exams: If you have a conflict with one of the exam times, please consult the university policy on evening midterm exams and final exam conflicts. Based on that, if you think your situation
qualifies you to take the conflict exam, please contact me as soon as possible, but no later than a week before the exam in question. I reserve final judgment as to which exam you will take.
Missed exams: There will be no make-up exams. Rather, in the event of a valid illness, accident, or family crisis you can be excused from an exam so that it does not count toward your overall
average. Such situations must be documented by an absence letter from the Emergency Dean located in Room 300 of the Turner Student Services Building, though I reserve final judgment as to whether an
exam will be excused. All requests for an exam to be excused must be made within a week of the exam date.
Missed HW/worksheets/quizzes: Generally, these are taken care of with the policy of dropping the lowest scores. For extended absences, these are handled in same way as missed exams.
Exam Regrading: The section leaders and myself try hard to accurately grade all exams and quizzes, but please contact one of us if you think there was an error. All such requests for regrading must
be made within one week of the test being returned.
Viewing grades online: You can always find the details of your worksheet, quiz, and exam scores here. Due to a limitation of the system, both worksheets and quizzes are recorded as "qu". Details of
your HW scores can be viewed on WebAssign, and are only periodically input into the above system as an overall average (hw).
Large-lecture Etiquette: Since there are more than 200 people in the room, it’s particularly important to arrive on time, remember to turn off your cell phone, refrain from talking, not pack up your stuff until the bell has rung, etc. Otherwise it will quickly become hard for the other students to pay attention.
Cheating: Cheating is taken very seriously as it takes unfair advantage of the other students in the class. Penalties for cheating on exams, in particular, are very high, typically resulting in a 0
on the exam or an F in the class.
Disabilities: Students with disabilities who require reasonable accommodations to should see me as soon as possible. In particular, any accommodation on exams must be requested at least a week in
advance and will require a letter from DRES.
James Scholar/Honors Learning Agreements: These are not offered for this section of Math 241. Those interested in such credit should enroll in one of the honors sections of this course.
Sources of help
Ask questions in class: This applies to both the main lecture and the sections. The lecture may be large, but I still strongly encourage you to ask questions there. If you’re confused about
something, then several dozen other people are as well.
Come to office hours: I have office hours from Monday 1:30-3:30, Tuesday 9:30-10:30, and Thursday 2:00-3:30 in 378 Altgeld. If those times don’t work for you, you can make an appointment by sending
me email or talking to me after class.
Math 241 tutoring room: Come and work with the TAs and your classmates on homework, test preparation, and any general questions about Math 241:
• Monday, Tuesday, Wednesday, and Thursday from 3:30–6:30pm in 1 Illini Hall.
• Monday, Tuesday, Wednesday, and Thursday from 6:00–9:00pm in 345 Altgeld.
• Tuesday and Thursday from 11:30–12:50 in 142 Henry Admin Building.
The tutoring room will be staffed starting Wednesday, August 24.
Other sources: A change of perspective is sometimes helpful to clear up confusion. Here are two other vector calculus sources you might find helpful. They are both on reserve at the Math Library in
Altgeld Hall:
• H. M. Schey, Div, Grad, Curl, and All That, W. W. Norton. A classic informal account of vector calculus from a physics point of view. Library: Reserve copy.
• Adams, Thompson, and Hass, How to ace the rest of calculus, the streetwise guide, Freeman. A snarky and lighthearted source. Library: Reserve copy, other copies.
Lecture notes, HW assignments, section worksheets, etc.
These are all posted online on the course diary. | {"url":"http://www.math.uiuc.edu/~nmd/classes/2011/241/","timestamp":"2014-04-16T16:43:01Z","content_type":null,"content_length":"14281","record_id":"<urn:uuid:20fc1acb-1421-4edc-a14a-29c81151152f>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00599-ip-10-147-4-33.ec2.internal.warc.gz"} |
Calculating the electric field line at a point with a circular field charge
All I've been able to do up to this point is calculate the direction the field would point. The +2Q would cancel out the +Q, leaving a +Q at the top, and a -Q on the left. The forces from
these two charges would cause the field to point south-west.
I wouldn't rely on any symmetry for this problem. Let the math do the work for you. Besides, the problem statement instructed that you had to use direct integration. (Meaning if you use properties of
symmetry, your instructor might ding you a few points. Just work it out the long way.)
For now, the only thing you really need to get right is the direction of the electric field from the infinitesimal charge slice dq to the test point. This infinitesimal electric field is called [tex] \vec{dE} [/tex]. But again, don't worry about symmetry! Treat dq as though it's a point charge. (And thus the direction of [tex] \vec{dE} [/tex] is the direction from dq to the test point.)
[tex] dE=\frac{k\lambda d\theta}{R}[/tex]
'Looks good to me ...
The only thing missing is to recognize that [tex] \vec{dE} [/tex] is a vector, and it has direction. First off, let's define the direction of the vector [tex] \vec r [/tex], as you've defined it in your figure. It has a magnitude of R and a direction of [tex] \hat r [/tex].
The direction of the electric field is from the charge to the test point. So in this case it's in the opposite direction of [tex] \hat r [/tex].
So modifying your equation ever so slightly,
[tex] \vec{dE} = -k \frac{\lambda \, d\theta}{R} \hat r [/tex]
I then, completely shooting in the dark, added a cos(theta),
You don't need to take a shot in the dark; you just need to convert the unit vector (direction vector) from polar to Cartesian coordinates.
[tex] \hat r = \cos \theta \hat x + \sin \theta \hat y [/tex]
and integrated from 135 degrees to 315 degrees.
Here is where your method needs a lot of work. Break things up into 4 different integrations. (And in a sense this is really 8 total integrations, since you can integrate the x- and y- components
separately for each section.)
Integrate over the four different sections separately, then add everything together (vector summation, of course -- keep your x- and y- components separate).
• Integrate from -45^o to +45^o with q = 0.
• Integrate from 45^o to +135^o with q = 2Q.
• Integrate from 135^o to +225^o with q = -Q.
• Integrate from 225^o to +315^o with q = Q.
Then sum everything together (vector sum -- keeping the x- and y- components separate).
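As a sanity check on the final direction, here is a minimal numerical sketch of those four integrations (NumPy/SciPy; setting k, Q, and R to 1 is my simplification, not the assigned values):

```python
import numpy as np
from scipy.integrate import quad

k = Q = R = 1.0   # illustrative units only
# (start_deg, end_deg, total charge on that quarter arc)
arcs = [(-45, 45, 0.0), (45, 135, 2*Q), (135, 225, -Q), (225, 315, Q)]

Ex = Ey = 0.0
for a, b, q in arcs:
    lam = q / (np.pi * R / 2)   # uniform linear density on a quarter arc
    Ex += quad(lambda t: -k * lam / R * np.cos(t),
               np.radians(a), np.radians(b))[0]
    Ey += quad(lambda t: -k * lam / R * np.sin(t),
               np.radians(a), np.radians(b))[0]

print(Ex, Ey)   # both negative: the net field points south-west, as argued
```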
You'll also need to create a relationship between [tex] \lambda [/tex] and q before you're finished (or before you integrate), so you can express [tex] \lambda [/tex] in terms of Q (since the relationship is different for each section).
Good luck! | {"url":"http://www.physicsforums.com/showthread.php?t=465054","timestamp":"2014-04-19T19:39:34Z","content_type":null,"content_length":"43205","record_id":"<urn:uuid:e2c38721-4ecb-4e4a-ba2f-0c84293eed46>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00280-ip-10-147-4-33.ec2.internal.warc.gz"} |
Mathematical Induction - Problem 2
We’re now going to use mathematical induction to prove that the sum of the first n odd integers is n². Using induction, the first thing we want to do is show that it works for n equals 1.
First thing we do: the sum of the first 1 term is 1, and that's got to be the same thing as 1²; okay, that works.
Second thing is assume it works for an arbitrary value of k. So then we assume that 1 plus 3 plus 5 plus so on and so forth plus 2k minus 1 is equal to k². And then lastly we want to use this assumption to prove that it works for k plus 1. One term more. What we get in that case is 1 plus 3 plus 5 plus 2k minus 1 plus, and then we have to plug in k plus 1 into our summation, so this turns into 2(k plus 1) minus 1, and in theory this should equal the quantity (k plus 1)².
Let’s see what we actually have done here. This is just 2k plus 2 minus 1, which is just 2k plus 1. What we have on the left side is, let’s use a different color, 1 plus 3 plus 5 plus dot, dot, dot, plus 2k minus 1, plus 2k plus 1. Using my assumption up here, I already know that this first part is equal to k². So what I really have on the left side is simply k² plus 2k plus 1. Checking what I have on the right side, if I were to FOIL out (k plus 1)², I end up getting k² plus 2k plus 1.
We were able to prove that these two sides are equal; therefore, using our assumption that it works for k, we have proven that it works for k plus 1 as well.
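In symbols, the step just carried out is:

$$\underbrace{1 + 3 + 5 + \cdots + (2k-1)}_{=\,k^2\ \text{by the assumption}} + (2k+1) = k^2 + 2k + 1 = (k+1)^2.$$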
Using mathematical induction, you show it works for the first thing, you assume it works for an arbitrary value k and then using that arbitrary value k you prove that it works for k plus 1.
Connected Experience: Polygon Poetry
Grades 3rd - 8th Grade
Subjects English-Language Arts, Life Sciences, Mathematics
Topics At-Academy Activity, Collecting Data, Measurement & Graphing, Poetry & Prose, Simple Shapes
Settings Classroom, Academy (African Hall)
Duration 30 min Prep + 20 min Activity + 30 min Post
In this interdisciplinary problem-solving activity, students use themed poems to hunt for exhibit elements, connecting these points to form a simple shape. After recording distances between
corners, they will convert measurements to solve a design problem.
In this interdisciplinary problem-solving activity conducted both in the classroom and the museum gallery, students will:
1. review the attributes of simple shapes.
2. use themed poems to hunt for exhibit elements, connecting these points to form a particular polygon.
3. estimate and record distances between their secret locations using non-standard units.
4. learn to convert their measurements to standard units to solve a design problem.
• ball of yarn
• dictionary
• Polygon Poetry Worksheets (1 riddle per group)
• pencils
• metric rulers
• calculators
• polygon: a closed figure of three or more sides, lying on one flat plane
• perimeter: the length of the border of a closed figure
• convert: to find an equal value using a different unit of measurement
Before your Visit
1. Print copies of polygon riddle worksheets as convenient for your class (many versions are available). Students will work cooperatively in polygon groups to solve their riddle, but each is
required to record their own data in the gallery (by repeating measurements in science, you can recognize mistakes!).
1. Select two students to stand up, each representing points. What forms when two points are connected? How about three? Have students toss a ball of yarn between points to create a visual
triangle in the horizontal plane of the classroom. Continue using student ‘points’ to form a variety of closed polygons, discussing the number of corners, sides, and angles, and perhaps
introducing terms such as parallel and perpendicular.
2. Define perimeter and ask students to think of ways the class could measure the distance around the border of, for example, a triangle constructed by three student vertices. What if the class
lacked rulers? Model how to measure distance in the units of a textbook, pencil, or hand, by having students trace the perimeter using these ‘tools’. Draw the simple shape on the board, and
record the length of each side as the students go around. Calculate the final perimeter as a class.
3. Tell students that the class will visit the African Hall while at the museum. The walls of this exhibit are made of life-size displays representing scenes from habitats in Africa, but the
center of the hall is a wide, empty area.
4. Explain that students will work in groups to solve a riddle. Riddle poems will contain clues directing them to specific displays, locations that form the ‘points’ of a mystery polygon. Each
student will then use their own feet as a measuring device to record the perimeter of the groups’ shape.
5. Distribute the worksheets to student groups. Have students read the text aloud to each other to develop an idea of what they might discover in the exhibit. Instruct them to look up any unknown
words. Feign ignorance of diorama content.
6. Have students label their ‘data sheet’ with their name, and record a guess as to what kind of shape they will be forming. Collect the worksheets.
At the Academy
1. Distribute worksheets to pre-arranged student groups.
2. Instruct students to follow the clues to find the exhibit elements that represent the points of their polygon, marking these secret locations with an X. Students should then ‘connect the dots’
on their map to form the shape’s perimeter (no need to walk along diagonals!).
3. Have students measure the length of each side of their polygon by counting their own footsteps, walking from one point to the next by placing one foot directly in front of the other. Students
should record the distance along the appropriate side of the shape.
4. Once data has been collected, students are free to explore the hall.
Riddle Answers
• Isosceles Triangle: Oryx, Roan, Sable
• Right Triangle: Cheetah, Tortoise, Zebra
• Pentagon: Penguin, Lechwe, Bushbuck, Bongo, Hartebeest
• Parallelogram: Steinbok, Klipspringer, Duiker, Dikdik
• Equilateral Triangle: Klipspringer, Baboon, Leopard
• Scalene Triangle*: Chameleon, Tortoise, Monitor
*As live displays, subject to change
Back at School
1. Have each student calculate the perimeter of their own polygon.
2. Have students compare values among members of their own group. Why do values for the same shape differ? Explain how society has created standard units for convenient, comparable measurement.
Cite everyday examples. (The metric system is the universal standard for measuring distance. The United States is exceptionally stubborn. In 1999, NASA lost a $125 million Mars orbiter because
the ground crews forgot to convert from English to metric!).
3. Using a metric ruler, introduce students to the relationship between meter, centimeter, and millimeter. What unit is most appropriate to measure one’s footstep? Have students measure and
record the length of their shoe from toe to heel in centimeters.
4. Lead students through the conversion of their estimated polygon perimeter from units of ‘my footsteps’ to ‘centimeters’ and then from ‘centimeters’ to ‘meters’.
5. How do students’ results compare within their group now? Explain the importance of repetitive trials when conducting scientific experiments to validate the reproducibility of data and reduce
6. Presume that the museum is building a new giraffe display in this space, but that the builders are limited by the amount of railing left from construction. Assuming the builders have 20 m of
railing, where could they construct the giraffe diorama? In groups or as a class, discuss how to best include a display that is large enough for giraffes, but will leave room for guests to
walk and stand around the scene. Students should use their recorded data to help estimate an appropriate shape or area. Have students present ideas and give reasons in support of their
How to convert measurements into a unit of length that others will understand:
‘My footsteps’ (varies with student) → centimeters (too small) → meters (just right!)
• Step 1:
My polygon’s perimeter = ________ of my footsteps.
If each footstep = ________ centimeters,
Then the perimeter = ________ × ________ = _______ cm. Wow! Seems like a lot…
• Step 2:
I calculated my polygon’s perimeter to be ________ cm.
If there are 100 centimeters on one meter stick,
Then, my polygon’s perimeter = _______ cm ÷ 100 = ________ meters.
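A worked instance of the two steps, using made-up numbers (30 footsteps and a 24 cm shoe):

```python
footsteps = 30
shoe_cm = 24                         # length of the student's shoe

perimeter_cm = footsteps * shoe_cm   # Step 1: footsteps -> centimeters
perimeter_m = perimeter_cm / 100     # Step 2: centimeters -> meters
print(perimeter_m)                   # 7.2 meters
```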
For older grades, consider:
• calculating the area enclosed by the polygons.
• discussing more complex geometry.
• computing modes and medians for data of student groups.
California Content Standards
Grade Three
Mathematics: Measurement and Geometry
• 1.1 Choose the appropriate tools and units (metric and U.S.) and estimate and measure the length, liquid volume, and weight/mass of given objects.
• 1.3 Find the perimeter of a polygon with integer sides.
• 1.4 Carry out simple unit conversions within a system of measurement (e.g., centimeters and meters, hours and minutes).
• 2.1 Identify, describe, and classify polygons (including pentagons, hexagons, and octagons).
• 2.2 Identify attributes of triangles (e.g., two equal sides for the isosceles triangle, three equal sides for the equilateral triangle, right angle for the right triangle).
• 2.3 Identify attributes of quadrilaterals (e.g., parallel sides for the parallelogram, right angles for the rectangle, equal sides and right angles for the square).
Mathematics: Mathematical Reasoning
• 2.1 Use estimation to verify the reasonableness of calculated results.
• 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems.
Investigation and Experimentation
• 5a. Repeat observations to improve accuracy and know that the results of similar scientific investigations seldom turn out exactly the same because of differences in the things being
investigated, methods being used, or uncertainty in the observation.
• 5c. Use numerical data in describing and comparing objects, events, and measurements.
English-language Arts: Reading
• 1.6 Use sentence and word context to find the meaning of unknown words.
• 1.7 Use a dictionary to learn the meaning and other features of unknown words.
Grade Four
Mathematics: Mathematical Reasoning
• 2.1 Use estimation to verify the reasonableness of calculated results.
• 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems.
Investigation and Experimentation
• 6b. Measure and estimate the weight, length, or volume of objects.
Grade Five
Mathematics: Mathematical Reasoning
• 2.1 Use estimation to verify the reasonableness of calculated results.
• 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems.
Investigation and Experimentation
• 6f. Select appropriate tools (e.g., thermometers, meter sticks, balances, and graduated cylinders) and make quantitative observations.
English-language Arts: Reading
• 2.3 Discern main ideas and concepts presented in texts, identifying and assessing evidence that supports those ideas.
• 2.4 Draw inferences, conclusions, or generalizations about text and support them with textual evidence and prior knowledge.
Grade Six
Mathematics: Algebra and Functions
• 2.1 Convert one unit of measurement to another (e.g., from feet to miles, from centimeters to inches).
Mathematics: Mathematical Reasoning
• 2.1 Use estimation to verify the reasonableness of calculated results.
• 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems.
English-language Arts: Reading
• 1.4 Monitor expository text for unknown words or words with novel meanings by using word, sentence, and paragraph clues to determine meaning.
Grade Seven
Mathematics: Algebra and Functions
• 2.1 Convert one unit of measurement to another (e.g., from feet to miles, from centimeters to inches).
Mathematics: Mathematical Reasoning
• 2.1 Use estimation to verify the reasonableness of calculated results.
• 3.2 Note the method of deriving the solution and demonstrate a conceptual understanding of the derivation by solving similar problems.
Another kind of nothing
Missing values are the bane of the applied statistician. They haunt our data sets, invisible specters lurking malevolently.
The missing value is like the evil twin of zero. The introduction of a symbol representing "nothing" was a revolutionary development in mathematics. The arithmetic properties of zero are
straightforward and universally understood (except perhaps when it comes to division by zero, a rather upsetting idea). In comparison, the missing value has no membership in the club of numbers, and
its properties are shrouded in mystery. The missing value was a pariah until statisticians reluctantly took it in—they had to. And it's an ill-behaved tenant, popping in and out unexpectedly, sometimes masquerading as zero, sometimes carrying important messages—always a source of mischief.
... symbolizing nothing
A variety of different symbols are used to represent missing values. The statistical software packages SAS and SPSS, for example, use a dot. The fact that it's almost invisible is oddly fitting.
Other software uses NA or N/A—but does that mean "not available" or "not applicable"? These are, after all, two very different situations. The IEEE floating point standard has NaN, meaning "not a number" (for example 0/0 is not a number). In spreadsheets, a missing value is simply an empty cell (but in Excel, at least, invalid calculations result in special types of missing
values—for example 0/0 results in "#DIV/0!"). Dates and times can also be missing, as can character string variables.
Logical expressions, such as "X > 0" (which can either be TRUE or FALSE), are an interesting special case. If X is missing, then the value of the expression itself is missing. Suppose Y=5. If X is
missing, what is the value of the expression "(X > 0) AND (Y > 0)"? Well, we can't say, because we need to know the values of both X and of Y to determine the result. So the value of "(X > 0) AND (Y
> 0)" is missing. How about "(X > 0) OR (Y > 0)"? In this case, the answer is TRUE. It is enough to know that Y is positive to answer the question, regardless of the value of X. (There's also a
logical operation called exclusive-OR, denoted XOR, which means "one or the other is true, but not both". You'd need to know both values in that case.)
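These rules are exactly the three-valued logic that modern software implements. A quick sketch, assuming pandas (1.0 or later) is available:

```python
import pandas as pd

X = pd.NA   # X is missing
Y = True    # stands in for "Y > 0" with Y = 5

print(X & Y)   # <NA>: (X > 0) AND (Y > 0) is missing
print(X | Y)   # True: (X > 0) OR (Y > 0) is TRUE regardless of X
print(X ^ Y)   # <NA>: exclusive-OR needs both values
```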
Even though the rules above seem straightforward, great care must still be taken in ensuring that calculations are handling missing values appropriately. That's because in reality there are any
number of different kinds of missing values. Suppose, for example, that as part of a clinical study of neurotoxic effects of chemotherapy agents, IQ is measured. What does a missing value in the data
set mean? Perhaps the patient wasn't available on the day the measurement took place. Or perhaps they died. Or perhaps their cognitive disability was so severe that the test couldn't be administered.
In the last two cases, the missingness might well be related to the neurotoxic effect of interest. This is known as informative missingness. Statisticians also distinguish the case where values are "missing at random" versus "missing completely at random". The latter is the best we can hope for—but it's often wishful thinking.
Something for nothing
One potential solution to the problem of missing values is imputation, that is filling in values ... somehow. One approach is mean imputation, in which the mean of the values that are not missing is substituted for any missing values. Seems reasonable, except that it effectively reduces variability, which can seriously distort inferences. A variety of other imputation methods have been proposed, the most sophisticated of which, multiple imputation, allows for valid variance estimates if a number of assumptions hold. The bottom line is there are no easy solutions: you can't get something for nothing ... for nothing.
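To see the variance shrinkage concretely, here is a small sketch with made-up numbers rather than real data:

```python
import numpy as np

x = np.array([1.0, 2.0, np.nan, 4.0, np.nan, 9.0])
filled = np.where(np.isnan(x), np.nanmean(x), x)   # mean imputation

print(np.nanvar(x, ddof=1))    # ~12.67 from the observed values alone
print(np.var(filled, ddof=1))  # ~7.60 after imputation: variability shrinks
```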
Too much of nothing
The really unfortunate thing is that missing values are often the result of bad design. Perhaps the most common instance of this is surveys. Most surveys are too long! This creates at least three problems. The first is non-response (which is missing values writ large). While I might be willing to spend 30 seconds answering a questionnaire, I'd be much less
interested in spending 10 minutes. The second problem is that even if I do answer the questionnaire, I may get tired and skip some questions (or perhaps only get part way through), or take less care
in answering. The third problem is that long surveys also tend to be badly designed. There's a simple explanation for this: when there are a small number of questions, great care can be taken to get
them right; typically when there are a large number of questions, less effort is put into designing each individual question. "Surveybloat" is ubiquitous and wasteful, often the product of
design-by-committee. The desire to add "just one more question" is just too strong and the consequences are apparently too intangible (despite being utterly predictable). I would say that most
surveys are at the very least twice as long as they ought to be.
In medical research, the randomized controlled trial is considered to be the "gold standard" of evidence. By randomly assigning an experimental intervention to some patients and a control (for example, standard care) to other patients, the effect of the experimental intervention can be reliably assessed. Because of the random allocation, the two groups of patients are unlikely to be very different beforehand. This is a tremendous advantage because it permits fair comparisons. But everything hinges on being able to assess the outcomes for all patients, and this is surprisingly difficult to do. Patients die or drop out of studies (due to side effects of the intervention?); forms are sometimes lost or not completed properly; it's not always possible to obtain follow-up measurements—the sources of missing values are almost endless. But each missing value weakens the study.
If this is a problem with prospective studies, in which patients are followed forward in time and pre-planned measurements are conducted, imagine the difficulties with retrospective studies, for
example reviews of medical charts. Missing values are sometimes so prevalent that data sets resemble Swiss cheese. In such cases, how confident can we really be in the study findings?
Learning nothing
Most children learn about zero even before they start school. But who learns about missing values? University-level statistics courses cover t-tests, chi-square tests, analysis of variance, linear regression ... (How much of any of this is retained by most students is another question.) It's only in advanced courses that any mention is made of missing values. So graduate students in statistics (and perhaps students in a few other disciplines) learn about missing values; but even then, it's usually from a rather theoretical perspective. In day-to-day data analysis, missing values are rife. I would hazard a guess that of all the p-values reported in scientific publications, at least half the time there were missing values in the corresponding data, and they were simply ignored. In scientific publications, missing values are
routinely swept under the carpet.
Missing values are a bit of a dirty secret in science. Because they are rarely mentioned in science education, it's not surprising that they are often overlooked in practice. This is terribly
damaging—regardless of whether it's due to ignorance, dishonesty, or wishful thinking.
Nihil obstat
In some cases, missing values may just be an irritation with little consequence other than a reduction in sample size. It would be lovely if that were always the case, but it simply isn't. We ignore
missing values at our peril.
Addendum (22June2006):
In my post I discussed how logical expressions are affected by missing values. The difference between a value that is not available and one that is not applicable has an interesting effect here. Suppose that following an initial assessment of a patient, a clinic administers a single-sheet questionnaire each time the patient returns. One of the questions is:
Since your last visit, have you experienced such-and-such symptom?
It might be of interest to know what proportion of patients have ever answered yes. Suppose that patients returned to the clinic up to three times. A logical expression to represent whether each
patient had ever experienced the symptom would be:
symptom = v1 OR v2 OR v3
where v1 is TRUE if the patient reported the symptom on the first return visit, and likewise for v2 and v3. Suppose that a particular patient returned to the clinic three times, and answered "no" the
first two times, but the questionnaire sheet for that patient's third visit was misplaced. Then v1=FALSE, v2=FALSE, and v3 is missing (
not available
). Following the rules that I discussed earlier for logic with missing values (these are rules used in SPSS and R, and I suspect in most other statistical packages), the value of the logical
expression would be missing, which makes sense: we unfortunately don't know if the patient ever reported experiencing the symptom.
Suppose that another patient only returned to the clinic twice, also answering "no" on both visits. Then again v1=FALSE, v2=FALSE, and v3 is missing (
not applicable
, since there
no third visit). Blindly applying the rules, the value of the logical expression would again be missing. But this time, it's incorrect: we know that this patient
reported experiencing the symptom.
This is one of the justifications for my statement that "Even though the rules above seem straightforward, great care must still be taken in ensuring that calculations are handling missing values
2 Comments:
Nick, thank you for highlighting and showing the importance and repercussions of this pitfall in the science of analysis.
For one like me its so much mind boggling that I can't even seem to come up with questions :)
All I can say is this: Deal with missing values as it pleases, but be transparent about it.
Nick Barrowman said...
It's hard to overstate the importance of transparency in scientific reporting. Unfortunately, in most reports, the description of how missing values were handled is ... missing. | {"url":"http://logbase2.blogspot.com/2006/06/another-kind-of-nothing.html","timestamp":"2014-04-18T13:37:35Z","content_type":null,"content_length":"48135","record_id":"<urn:uuid:e159df14-7d43-4d4a-b0cc-221a71f08835>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
[FOM] 267:Pi01/digraphs/progress
Harvey Friedman friedman at math.ohio-state.edu
Wed Feb 8 03:03:48 EST 2006
This version relies on paths rather than embeddings or induced subgraphs. I
much prefer it.
I am also optimistic that this progress will yield good Pi00 sentences. Say
Pi01 INCOMPLETENESS: digraph paths
Harvey M. Friedman
February 7, 2006
In this abstract, a digraph is a directed graph with no loops and no
multiple edges. Thus all digraphs will be simple. The results will be the
same if we allow loops.
A dag is a directed graph with no cycles.
Let G be a digraph. We write V(G) for the set of all vertices in G, and E(G)
for the set of all edges in G.
Let A containedin V(G). We write GA for the set of all destinations of edges
in G whose origins lie in A. I.e., GA = {y: (therexists x in A)((x,y) in E(G))}.
We begin by quoting a well known theorem about directed acyclic graphs, or
so called dags. We call it the complementation theorem, but we have been
told that it is rather ordinary fundamental fare in dag theory.
COMPLEMENTATION THEOREM (finite dags). Let G be a finite dag. There is a
unique set A containedin V(G) such that GA = V(G)\A.
We can look at the Complementation Theorem in terms of a large independent
set. We say that A containedin V(G) is independent in G if and only if there
is no edge connecting any two elements of A.
COMPLEMENTATION THEOREM (finite dags). Every finite dag has a unique
independent set A such that V(G)\A containedin GA.
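For concreteness, here is a small computational sketch of the theorem (my
illustration, not part of the posting): process the dag in topological order
and admit a vertex into A exactly when none of its in-neighbors is already
in A.

```python
from graphlib import TopologicalSorter  # Python 3.9+

def complement_set(vertices, edges):
    preds = {v: set() for v in vertices}
    for x, y in edges:
        preds[y].add(x)
    A = set()
    # static_order lists every vertex after all of its predecessors
    for v in TopologicalSorter(preds).static_order():
        if not (preds[v] & A):
            A.add(v)
    return A

V, E = {1, 2, 3, 4}, {(1, 2), (2, 3), (3, 4)}
A = complement_set(V, E)
print(A, {y for (x, y) in E if x in A} == V - A)   # {1, 3} True
```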
A digraph on a set E is a digraph G where V(G) = E.
We will focus on digraphs whose vertex set is of the form [1,n]^k. Here
k,n >= 1 and [1,n] = {1,2,...,n}.
An upgraph on [1,n]^k is a digraph on [1,n]^k such that for all (x,y) in
E(G), max(x) < max(y).
The following is an immediate consequence of the Complementation Theorem
(finite dags) since upgraphs are obviously dags.
COMPLEMENTATION THEOREM (finite upgraphs). For all k,n >= 1, every upgraph
on [1,n]^k has a unique independent set A such that V(G)\A containedin GA.
Our development relies on what we call order invariant digraphs on [1,n]^k.
These are the digraphs G on [1,n]^k where only the relative order of
coordinates of pairs of vertices determine if they are connected by an edge.
More formally, let u,v lie in {1,2,3,...}^p. We say that u,v are order
equivalent if and only if for all 1 <= i,j <= p,
u_i < u_j iff v_i < v_j.
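In code, the definition is a direct double loop (a hypothetical helper, just
to pin the notion down):

```python
def order_equivalent(u, v):
    p = len(u)
    return len(v) == p and all(
        (u[i] < u[j]) == (v[i] < v[j])
        for i in range(p) for j in range(p)
    )

print(order_equivalent((1, 5, 3), (2, 9, 4)))   # True: same relative order
print(order_equivalent((1, 5, 3), (2, 2, 4)))   # False
```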
Let G be a digraph on [1,n]^k. We say that G is order invariant if and only
if the following holds. For all x,y,z,w in [1,n]^k, if (x,y) and (z,w) are
order equivalent (as 2k tuples), then
(x,y) in E(G) iff (z,w) in E(G).
Note that an order invariant digraph on [1,n]^k is completely determined,
among digraphs on [1,n]^k, by the subdigraph induced by [1,2k]^k -
regardless of how large n is. Thus the number of order invariant digraphs on
[1,n]^k is bounded by (2k)^k.
We write x! when x is a tuple of nonnegative integers. Here x! =
(x1!,...,xk!), where x has length k.
We say that
x starts a length r path in G continuing through S
if and only if there exists x = x0,x1,...,xr such that each (xi,xi+1)
is an edge in G, and x1,...,xr lie in S.
PROPOSITION A. For all n,k,r >= 1, every order invariant upgraph G on
[1,n]^k has an independent set A such that every vertex x! that starts a
length r path in G continuing through V(G)\A, starts a length r path in G
continuing through GA in which the integer (8kr)!-1 does not appear.
Note that if we remove 'in which ...', then the statement immediately
follows from the Complementation Theorem (finite upgraphs), since we can use
the identity embeddings.
Proposition A can be proved with large cardinals but not without. Note that
Proposition A is explicitly Pi01.
Here is more detailed information.
Let MAH = ZFC + {there exists a strongly n-Mahlo cardinal}n.
Let MAH+ = ZFC + "for all n there exists a strongly n-Mahlo cardinal".
THEOREM 1. MAH+ proves Proposition A. However, Proposition A is not
provable in any consistent fragment of MAH that derives Z = Zermelo set
theory. In particular, Proposition A is not provable in ZFC, provided
ZFC is consistent. These facts are provable in RCA_0.
THEOREM 2. EFA + Con(MAH) proves Proposition A.
THEOREM 3. It is provable in ACA that Proposition A is equivalent to Con(MAH).
I use http://www.math.ohio-state.edu/%7Efriedman/ for downloadable
manuscripts. This is the 267th in a series of self contained numbered
postings to FOM covering a wide range of topics in f.o.m. The list of
previous numbered postings #1-249 can be found at
http://www.cs.nyu.edu/pipermail/fom/2005-June/008999.html in the FOM
archives, 6/15/05, 9:18PM.
250. Extreme Cardinals/Pi01 7/31/05 8:34PM
251. Embedding Axioms 8/1/05 10:40AM
252. Pi01 Revisited 10/25/05 10:35PM
253. Pi01 Progress 10/26/05 6:32AM
254. Pi01 Progress/more 11/10/05 4:37AM
255. Controlling Pi01 11/12 5:10PM
256. NAME:finite inclusion theory 11/21/05 2:34AM
257. FIT/more 11/22/05 5:34AM
258. Pi01/Simplification/Restatement 11/27/05 2:12AM
259. Pi01 pointer 11/30/05 10:36AM
260. Pi01/simplification 12/3/05 3:11PM
261. Pi01/nicer 12/5/05 2:26AM
262. Correction/Restatement 12/9/05 10:13AM
263. Pi01/digraphs 1 1/13/06 1:11AM
264. Pi01/digraphs 2 1/27/06 11:34AM
265. Pi01/digraphs 2/more 1/28/06 2:46PM
266. Pi01/digraphs/unifying 2/4/06 5:27AM
Harvey Friedman
More information about the FOM mailing list | {"url":"http://www.cs.nyu.edu/pipermail/fom/2006-February/009719.html","timestamp":"2014-04-20T08:16:42Z","content_type":null,"content_length":"7931","record_id":"<urn:uuid:fa03750c-ae20-487b-b1df-13bea2a4d1b8>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00421-ip-10-147-4-33.ec2.internal.warc.gz"} |
infinitesimal calculus
the combined methods of mathematical analysis of differential calculus and integral calculus
1. (calculus) Differential calculus and integral calculus considered together as a single subject.
Usage notes
1. Although calculus (in the sense of analysis) is usually synonymous with infinitesimal calculus, not all historical formulations have relied on infinitesimals (infinitely small numbers that are nevertheless not zero). The original infinitesimal calculus of Newton and Leibniz did use them, but not in a demonstrably rigorous way, and many philosophers found the notion of an
infinitesimal objectionable. Early attempts to prove the rigour of the approach were unsuccessful. A rigorous formulation (known also as standard calculus) was developed by Cauchy and
Weierstrass, who avoided infinitesimals and made use of the concept of limit. In the 1960s, Robinson was able to develop a rigorous formulation (known as non-standard calculus) that makes use of
infinitesimals. (See infinitesimal calculus.) | {"url":"http://www.yourdictionary.com/infinitesimal-calculus","timestamp":"2014-04-21T08:21:42Z","content_type":null,"content_length":"43275","record_id":"<urn:uuid:278de61c-b236-428d-95a1-75aafc69636f>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00532-ip-10-147-4-33.ec2.internal.warc.gz"} |
Books released during the week of April 7, 2014
Count Like an Egyptian:
A Hands-on Introduction to Ancient
David Reimer
"Reimer gives us a detailed
introduction to the mathematics of
the ancient Egyptians--from their
arithmetic operations to their
truncated pyramids--in a beautifully
designed volume that is so much
easier to read than a papyrus
scroll."--William Dunham, author of
The Calculus Gallery: Masterpieces
from Newton to Lebesgue
Cloth | $29.95 / £19.95
eBook edition available
Everyday Calculus:
Discovering the Hidden Math All
around Us
Oscar E. Fernandez
"For every befuddled math student
who's ever sat in class and thought,
'When am I ever going to use this?'
Fernandez, assistant professor of
mathematics at Wellesley College,
gleefully reveals the truth: the
world really does run on math. . . .
Whether describing how biology uses
math to design more efficient organs
and body structures or the best way
to figure out when to overhaul a
subway car, Fernandez keeps the tone
light, as entertaining as it is
informative. The book will speak most
strongly to readers with some
experience in trigonometry and basic
calculus, but it's also accessible to
those willing to put in a little
extra effort. Either way, Fernandez's
witty, delightful approach makes for
a winning introduction to the
wonderland of math behind the scenes
of everyday life."--Publishers Weekly
(starred review)
Cloth | $24.95 / £16.95
eBook edition available
Books released during the week of March 17, 2014
Math Bytes:
Google Bombs, Chocolate-Covered Pi,
and Other Cool Bits in Computing
Tim Chartier
"How can you tell, by just looking,
that 1782^12 + 1841^12 = 1922^12 is
not a true statement? When you search
for something on the web, how does
Google know how to respond with the
most relevant hits first? Chartier
tells you how in this book. Each new
discussion illustrates the almost
supernatural explanatory nature of
mathematics, promising many hours of
enjoyment."--Paul J. Nahin, author of
Will You Be Alive 10 Years from Now?:
And Numerous Other Curious Questions
in Probability
Cloth | $24.95 / £16.95
eBook edition available
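A parity check settles the blurb's opening teaser at a glance: an even number to any power is even and an odd number to any power is odd, so the left side is odd while the right side is even. (Sketched here for fun, not quoted from the book.)

```python
print((1782**12 + 1841**12) % 2)         # 1: the left side is odd
print(1922**12 % 2)                      # 0: the right side is even
print(1782**12 + 1841**12 == 1922**12)   # False
```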
Books released during the week of March 10, 2014
Falling Behind?
Boom, Bust, and the Global Race for
Scientific Talent
Michael S. Teitelbaum
"Detailing the varied interests
driving science and engineering
workforce policy, Falling Behind?
demonstrates that unfortunately,
scores of high-skilled workers have
been on the losing end of failed
education and immigration agendas.
This book provides critical analysis
and an opportunity to change the
dialogue for these issues."--Paul E.
Almeida, DPE AFL-CIO
Cloth | $29.95 / £19.95
eBook edition available
Introduction to Computational
Modeling and Simulation for the
(Second Edition)
Angela B. Shiflet & George W. Shiflet
Praise for the previous edition: "The
heart of Introduction to
Computational Science is a collection
of modules. Each module is either a
discussion of a general computational
issue or an investigation of an
application. . . . [This book] has
been carefully and thoughtfully
written with students clearly in
mind."--William J. Satzer, MAA
Cloth | $99.50 / £69.95
eBook edition available
Books released during the week of March 3, 2014
Enlightening Symbols:
A Short History of Mathematical
Notation and Its Hidden Powers
Joseph Mazur
"Mazur (Euclid in the Rainforest)
gives readers the fascinating history
behind the mathematical symbols we
use, and completely take for granted,
every day. Mathematical notation
turns numbers into sentences--or, to
the uninitiated, a mysterious and
impenetrable code. Mazur says the
story of math symbols begins some
3,700 years ago, in ancient Babylon,
where merchants incised tallies of
goods on cuneiform tablets, along
with the first place holder--a blank
space. Many early cultures used
letters for both numbers and an
alphabet, but convenient objects like
rods, fingers, and abacus beads, also
proved popular. Mazur shows how our
'modern' system began in India,
picking up the numeral 'zero' on its
way to Europe, where it came into
common use in the 16th century,
thanks to travelers and merchants as
well as mathematicians like
Fibonacci. Signs for addition,
subtraction, roots, and equivalence
followed, but only became
standardized through the influence of
scientists and mathematicians like
René Descartes and Gottfried Leibniz.
Mazur's lively and accessible writing
makes what could otherwise be a dry,
arcane history as entertaining as it
is informative."--Publishers Weekly
Cloth | $29.95 / £19.95
eBook edition available
Books released during the week of
February 24, 2014
Frontiers in Complex Dynamics:
In Celebration of John Milnor's 80th Birthday
Edited by Araceli Bonifant, Misha
Lyubich, & Scott Sutherland
John Milnor, best known for his work
in differential topology, K-theory,
and dynamical systems, is one of only
three mathematicians to have won the
Fields medal, the Abel prize, and the
Wolf prize, and is the only one to
have received all three of the Leroy
P. Steele prizes. In honor of his
eightieth birthday, this book gathers
together surveys and papers inspired
by Milnor's work, from distinguished
experts examining not only
holomorphic dynamics in one and
several variables, but also
differential geometry, entropy
theory, and combinatorial group
Cloth | $99.00 / £69.95
eBook edition available
Books released during the week of
February 18, 2014
Hangzhou Lectures on Eigenfunctions
of the Laplacian
Christopher D. Sogge
Based on lectures given at Zhejiang
University in Hangzhou, China, and
Johns Hopkins University, this book
introduces eigenfunctions on
Riemannian manifolds. Christopher
Sogge gives a proof of the sharp Weyl
formula for the distribution of
eigenvalues of Laplace-Beltrami
operators, as well as an improved
version of the Weyl formula, the
Duistermaat-Guillemin theorem under
natural assumptions on the geodesic
Paper | $75.00 / £52.00
Cloth | $165.00 / £115.00
eBook edition available
Books released during the week of
February 3, 2014
Agent_Zero:
Toward Neurocognitive Foundations for Generative Social Science
Joshua M. Epstein
"Agent Zero offers a solution to some
of social science's great puzzles.
Its behavioral basis is the interplay
of emotion, cognition, and network
contagion effects. It elegantly
explains why so many human actions
are so manifestly dysfunctional, and
why some are downright evil."--George
Akerlof, Nobel Laureate in Economics
Cloth | $49.50 / £34.95
eBook edition available
Chow Rings, Decomposition of the Diagonal, and the Topology of Families
Claire Voisin
In this book, Claire Voisin provides
an introduction to algebraic cycles
on complex algebraic varieties, to
the major conjectures relating them
to cohomology, and even more
precisely to Hodge structures on
cohomology. The volume is intended
for both students and researchers,
and not only presents a survey of the
geometric methods developed in the
last thirty years to understand the
famous Bloch-Beilinson conjectures,
but also examines recent work by
Paper | $75.00 / £52.00
Cloth | $165.00 / £115.00
eBook edition available
Books released during the week of
January 13, 2014
Wizards, Aliens, and Starships:
Physics and Math in Fantasy and
Science Fiction
Charles L. Adler
"To only call Wizards, Aliens, and
Starships engaging would be a real
understatement--it is a delightful,
funny, and immensely interesting romp
through science and fiction. From
candlepower to teleportation, all the
way to the fate of the cosmos in the
span of a googol years, this is a
cornucopia of teachable material. It
is also a reminder of the simple
thrill of applying science to the
world around us, real or imagined. A
new classic."--Caleb Scharf, author
of Gravity's Engines and The
Copernicus Complex
Cloth | $29.95 / £19.95
eBook edition available
Books released during the week of
January 6, 2014
Three Views of Logic:
Mathematics, Philosophy, and Computer Science
Donald W. Loveland, Richard E. Hodel
& S. G. Sterrett
"Formal logic should no longer be
taught as a course within a single
subject area, but should be taught
from an interdisciplinary
perspective. Three Views of Logic has
many fine features and combines
materials not found together
elsewhere. We have needed an
accessible textbook like this one for
quite some time."--Hans Halvorson,
Princeton University
Cloth | $49.50 / £34.95
eBook edition available
Books released during the week of
December 30, 2013
Beautiful Geometry
Eli Maor & Eugen Jost
"Mathematicians sometimes compare
well-constructed equations to works
of art. To them, patterns in numbers
hold a beauty at least equal to that
found in any sonnet or sculpture. In
this book, Maor, a math historian,
teams with Jost, an artist, to reveal
some of that mathematical majesty
using jewel-like visualizations of
classic geometric theorems. . . . The
result is a book that stimulates the
mind as well as the eye."--Lee
Billings, Scientific American
Cloth | $27.95 / £19.95
eBook edition available
The Best Writing on Mathematics 2013
Edited by Mircea Pitici
Foreword by Roger Penrose
Praise for Princeton's previous
editions: "[A] volume of unexpectedly
fascinating mathematical research,
musings, and studies that explore
subjects from art to medicine. . . .
[R]eaders from many disciplines will
find much to pique their interest."--
Publishers Weekly
Paper | $21.95 / £14.95
eBook edition available
Books released during the week of
December 16, 2013
Advances in Analysis:
The Legacy of Elias M. Stein
Edited by Charles Fefferman, Alexandru D. Ionescu, D. H. Phong & Stephen Wainger
Princeton University's Elias Stein was the first mathematician to see the profound interconnections that tie classical Fourier analysis to several complex variables and representation theory. His fundamental contributions include the Kunze-Stein phenomenon, the construction of new representations, the Stein interpolation theorem, the idea of a restriction theorem for the Fourier transform, and the theory of H^p spaces in several variables. . . . Drawing inspiration from Stein's contributions to harmonic analysis and related topics, this volume gathers papers from internationally renowned mathematicians, many of whom have been Stein's students.
Cloth | $99.50 / £69.95
eBook edition available
Negative Math:
How Mathematical Rules Can Be Positively Bent
Alberto A. Martinez
"Alberto A. Martínez . . . shows that the concept of negative numbers has perplexed not just young students but also quite a few notable mathematicians. . . . The rule that minus times minus makes plus is not in fact grounded in some deep and immutable law of nature. Martínez shows that it's possible to construct a fully consistent system of arithmetic in which minus times minus makes minus. It's a wonderful vindication for the obstinate smart-aleck kid in the back of the class."--Greg Ross, American Scientist
Paper | $19.95 / £16.00
Cloth | 2005 | $35.00 /
Books released during the week of
December 9, 2013
Number Theory:
A Historical Approach
John J. Watkins
"I know of no other book at this
easily accessible level that combines
extensive coverage of the mathematics
with so many interesting biographical
facts and anecdotes."--Thomas W.
Cusick, University at Buffalo, State
University of New York
Cloth | $75.00 / £52.00
eBook edition available
Books released during the week of
November 18, 2013
Reinventing Discovery:
The New Era of Networked Science
Michael Nielsen
"[A] thought-provoking call to arms.
. . . Reinventing Discovery will
frame serious discussion and inspire
wild, disruptive ideas for the next
decade."--Chris Lintott, Nature
Paper | $19.95 / £13.95
Cloth | 2011 | $24.95 / £16.95
eBook edition available
Books released during the week of
November 11, 2013
X and the City:
Modeling Aspects of Urban Life
John A. Adam
"[Adam's] writing is fun and accessible. . . . College or even advanced high school mathematics instructors will find plenty of great examples here to supplement the standard calculus problem sets."--Library Journal
Paper | $19.95 / £13.95
Cloth | 2012 | $27.95 / £19.95
eBook edition available
Who's #1?
The Science of Rating and Ranking
Amy N. Langville & Carl D. Meyer
"[A] thorough exploration of the methods and applications of ranking for an audience ranging from computer scientists and engineers to high-school teachers to 'people interested in wagering on just about . . .'"
Paper | $22.95 / £14.95
Cloth | 2012 | $29.95 /
eBook edition available
Books released during the week of
November 4, 2013
Will You Be Alive 10 Years from Now?
And Numerous Other Curious Questions
in Probability
Paul J. Nahin
"A wonderful book for trained math
lovers who enjoy the mental
stimulation provided by a good
mathematics puzzle."--Harold D.
Shane, Library Journal
Cloth | $27.95 / £19.95
eBook edition available
Books released during the week of
October 21, 2013
Four Colors Suffice:
How the Map Problem Was Solved
(Revised Color Edition)
Robin Wilson
With a new foreword by Ian Stewart
"The simplicity of the four-color
conjecture is deceptive. Just how
deceptive is made clear by Robin
Wilson's delightful history of the
quest to resolve it. . . . Four
Colors Suffice is strewn with good
anecdotes, and the author . . .
proves himself skillful at making the
mathematics accessible."--Jim Holt,
New York Review of Books
Cloth | $24.95 / £16.95
eBook edition available
Books released during the week of
September 16, 2013
Einstein and the Quantum:
The Quest of the Valiant Swabian
A. Douglas Stone
"Albert Einstein (1879-1955) is as
famous for his paradigm-shifting
theories of relativity as he is for
his grudge against quantum mechanics,
but Stone's (Physics/Yale Univ.)
engaging history of Einstein's ardent
search for a unifying theory tells a
different story. Einstein's creative
mind was behind almost every single
major development in quantum
mechanics. . . . The author adeptly
weaves his subject's personal life
and scientific fame through the
tumult of world war and, in
accessible and bright language,
brings readers deep into Einstein's
struggle with both the macroscopic
reality around him and the quantum
reality he was trying to unlock. . .
. A wonderful reminder that
Einstein's monumental role in the
development of contemporary science
is even more profound than history
has allowed."--Kirkus Reviews
Cloth | $29.95 / £19.95
eBook edition available
Books released during the week of
September 9, 2013
Undiluted Hocus-Pocus:
The Autobiography of Martin Gardner
Martin Gardner
With a foreword by Persi Diaconis and
an afterword by James Randi
"Readers who only know Gardner for
his math and science writing will be
surprised at his focus on religion,
and this autobiography demonstrates
his passion to explain and understand
the world around him."--Publishers
Cloth | $24.95 / £16.95
Books released during the week of
September 3, 2013
The Ultimate Quotable Einstein
(New in Paperback)
Collected and edited by Alice Calaprice
With a foreword by Freeman Dyson
"[The Ultimate Quotable Einstein] is
a compelling selection. . . .
Students of Einstein's work and life,
who are familiar with these contexts,
can find many embellishments to their
research, and often puzzling contrary
notes to customary portrayals of his
stance on issues ranging from Zionism
to domestic life."--Choice
Paper | $16.95 / £11.95
Books released during the week of August
5, 2013
The Universe in Zero Words: The Story
of Mathematics as Told through Equations
Dana Mackenzie
"Quietly learned and beautifully
illustrated, Mackenzie's book is a
celebration of the succinct and the
singular in human expression."--
Paper | $19.95 / £13.95
Cloth | 2012 | $27.95 / £19.95
eBook edition available
Books released during the week of July
22, 2013
A Mathematics Course for Political
and Social Research
Will H. Moore & David A. Siegel
"Moore and Siegel provide an
exceptionally clear exposition for
political scientists with little
formal training in mathematics. They
do this by emphasizing intuition and
providing reasons for why the topic
is important. Anyone who has taught a
first-year graduate course in
political methodology has heard
students ask why they need to know
mathematics. It is refreshing to have
the answers in this book."--Jan
Box-Steffensmeier, Ohio State
Paper | $39.95 / £27.95
Cloth | $95.00 / £65.00
eBook edition available
Books released during the week of April
22, 2013
Tesla: Inventor of the Electrical Age
W. Bernard Carlson
"A scholarly, critical, mostly
illuminating study of the life and
work of the great Serbian
inventor."--Kirkus Reviews
Cloth | 2012 | $29.95 / £19.95
eBook edition available
Books released during the week of April
15, 2013
Nine Algorithms That Changed the Future:
The Ingenious Ideas That Drive
Today's Computers
John MacCormick
With a foreword by Chris Bishop
"Nine Algorithms That Changed the
Future offers a great way to find out
what computer science is really
about. In this very readable book,
MacCormick (a computer scientist at
Dickinson College) shows how a
collection of sets of intangible
instructions invented since the 1940s
has led to monumental changes in all
our lives. . . . MacCormick provides
a taste of why we computer scientists
get so excited about algorithms--for
their utility, of course, but also
for their beauty and elegance."--Paul
Curzon, Science
Paper | $16.95 / £11.95
Cloth | 2012 | $27.95 / £19.95
eBook edition available
Books released during the week of April
8, 2013
Spaces of PL Manifolds and Categories
of Simple Maps
Friedhelm Waldhausen, Bjørn Jahren &
John Rognes
Since its introduction by Friedhelm
Waldhausen in the 1970s, the
algebraic K-theory of spaces has been
recognized as the main tool for
studying parametrized phenomena in
the theory of manifolds. However, a
full proof of the equivalence
relating the two areas has not
appeared until now. This book
presents such a proof, essentially
completing Waldhausen's program from
more than thirty years ago.
Paper | $75.00 / £52.00
Cloth | $165.00 / £115.00
eBook edition available
Books released during the week of March
18, 2013
Degenerate Diffusion Operators
Arising in Population Biology
Charles L. Epstein & Rafe Mazzeo
This book provides the mathematical
foundations for the analysis of a
class of degenerate elliptic
operators defined on manifolds with
corners, which arise in a variety of
applications such as population
genetics, mathematical finance, and
Paper | $75.00 / £52.00
Cloth | $165.00 / £115.00
eBook edition available
Books released during the week of March
11, 2013
The Golden Ticket:
P, NP, and the Search for the Impossible
Lance Fortnow
"Fortnow effectively initiates
readers into the seductive mystery
and importance of P and NP
problems."--Publishers Weekly
Cloth | $26.95 / £18.95
eBook edition available
Books released during the week of March
4, 2013
Arithmetic Compactifications of PEL-Type Shimura Varieties
Kai-Wen Lan
By studying the degeneration of abelian varieties with PEL structures, this book explains the compactifications of smooth integral models of all PEL-type Shimura varieties, providing the logical foundation for several exciting recent developments. The book is designed to be accessible to graduate students who have an understanding of schemes and abelian varieties.
Cloth | $150.00 / £103.00
eBook edition available
Digital Dice:
Computational Solutions to Practical Probability Problems (New in Paperback)
Paul J. Nahin
"The problems are accessible but still realistic enough to be engaging, and the solutions in the back of the book will get you through any sticky spots. Writing your own versions of a few of these programs will acquaint you with a useful approach to problem solving and a novel style of thinking."--Brian Hayes, American Scientist
Paper | $18.95 / £12.95
Cloth | 2008 | $27.95 /
eBook edition available
Books released during the week of
February 4, 2013
Invisible in the Storm:
The Role of Mathematics in Understanding Weather
Ian Roulstone & John Norbury
"[O]ne of the great strengths of the book is the way it picks apart the challenge of making predictions about a chaotic system, showing what improvements we might yet hope for and what factors confound them."--Philip Ball, Prospect
Cloth | $35.00 / £24.95
eBook edition available
Towing Icebergs, Falling Dominoes, and Other Adventures in Applied Mathematics (New in Paperback)
Robert B. Banks
"Robert Banks's study of everyday phenomena is infused with infectious enthusiasm."--Publishers Weekly
Cloth | $16.95 / £11.95
eBook edition available
Trigonometric Delights (New in Paperback)
Eli Maor
"Maor's presentation of the historical development of the concepts and results deepens one's appreciation of them, and his discussion of the personalities involved and their politics and religions puts a human face on the subject. His exposition of mathematical arguments is thorough and remarkably easy to understand. There is a lot of material here that teachers can use to keep their students awake and interested. In short, Trigonometric Delights should be required reading for everyone who teaches trigonometry and can be highly recommended for anyone who uses it."--George H. Swift, American Mathematics Monthly
Paper | $18.95 / £12.95
eBook edition available
Books released during the week of
December 24, 2012
Spin Glasses and Complexity
Daniel L. Stein & Charles M. Newman
"This excellent book fills a unique
and valuable niche. It is a great
introduction to some fascinating
physics, emphasizing the fundamental
concepts and the connections to other
complex systems. There are lots of
technical volumes on spin glasses,
but no other book works at this
nonmathematical level, certainly not
while still being so accurate and
insightful."--Cosma Shalizi, Carnegie
Mellon University
Paper | $39.95 / £27.95
eBook edition available
Books released during the week of
December 3, 2012
Heavenly Mathematics:
The Forgotten Art of Spherical Trigonometry
Glen Van Brummelen
"Heavenly Mathematics is heavenly, is
mathematics, and is so much more:
history, astronomy, geography, and
navigation replete with historical
illustrations, elegant diagrams, and
charming anecdotes. I haven't
followed mathematical proofs with
such delight in decades. If, as the
author laments, spherical
trigonometry was in danger of
extinction, this book will give it a
long-lasting reprieve."--David J.
Helfand, president of the American
Astronomical Society
Cloth | $35.00 / £24.95
Books released during the week of
November 12, 2012
The Gross-Zagier Formula on Shimura Curves
Xinyi Yuan, Shou-wu Zhang & Wei Zhang
This comprehensive account of the Gross-Zagier formula on Shimura curves over totally real fields relates the heights of Heegner points on abelian varieties to the derivatives of L-series. The formula will have new applications for the Birch and Swinnerton-Dyer conjecture and Diophantine equations.
Paper | $75.00 / £52.00
Cloth | $150.00 / £103.00
eBook edition available
Wind Wizard:
Alan G. Davenport and the Art of Wind Engineering
Siobhan Roberts
"Richly drawn. . . . A winning, enlightening investigation into wind engineering and the man who made the airwaves speak."--Kirkus Reviews (starred review)
Cloth | $29.95 / £19.95
Books released during the week of
November 5, 2012
Henri Poincaré:
A Scientific Biography
Jeremy Gray
"Poincaré was much more than a mathematician: he was a public intellectual, and a rare scientist who enthusiastically rose to the challenge of explaining and interpreting science for the public. With amazingly lucid explanations of Poincaré's ideas, this book is one that any reader who wants to understand the context and content of Poincaré's work will want to have on hand."--Dana Mackenzie, author of The Universe in Zero Words
Cloth | $35.00 / £24.95
eBook edition available
Mathematical Tools for Understanding Infectious Disease Dynamics
Odo Diekmann, Hans Heesterbeek & Tom Britton
"This landmark volume describes for readers how one should view the theoretical side of mathematical epidemiology as a whole. A particularly important need is for a book that integrates deterministic and stochastic epidemiological models, and this is the first one that does this. I know of no better overview of the subject. It belongs on the shelf of everyone working in mathematical epidemiology."--Fred Brauer, University of British Columbia
Cloth | $90.00 / £62.00
eBook edition available
Books released during the week of
October 22, 2012
The Best Writing on Mathematics 2012
Edited by Mircea Pitici
Foreword by David Mumford
Praise for The Best Writing on Mathematics 2011: "Pitici turns out a second volume of unexpectedly fascinating mathematical research, musings, and studies that explore subjects from art to medicine. . . . Readers from many disciplines will find much to pique their interest."--Publishers Weekly
Paper | $19.95 / £13.95
eBook edition available
Newton and the Origin of Civilization
Jed Z. Buchwald & Mordechai Feingold
"The reader of Buchwald and Feingold's long awaited book will learn not only about Newton the historian, but also about his theological, alchemical, mathematical, and astronomical work. The authors have something new to say about every facet of Newton's intellectual endeavor: about his peculiar way of working with numbers and data, his anxieties concerning evidence and testimony, his polemics with the English and the French . . ."--Niccolò Guicciardini, author of Isaac Newton on Mathematical Certainty and Method
Paper | $49.50 / £34.95
Books released during the week of
October 8, 2012
The Logician and the Engineer:
How George Boole and Claude Shannon
Created the Information Age
Paul J. Nahin
"In this book, Nahin brings to life
the immense practical outcomes of
deep theoretical ideas. Too often,
technological advances are seen as
isolated inventions and the
underlying mathematical and
scientific infrastructure goes
unappreciated. By following the story
of George Boole and Claude Shannon
with a lively historical style, and a
futuristic extension to quantum
computing, Nahin makes the connection
of theory and practice into something
vivid and compelling."--Andrew
Hodges, author of Alan Turing: The Enigma
Cloth | $24.95 / £16.95
eBook edition available
Books released during the week of
October 1, 2012
The Collected Papers of Albert Einstein, Volume 13:
The Berlin Years: Writings & Correspondence, January 1922 - March 1923 (Documentary Edition)
Edited by Diana Kormos Buchwald, József Illy, Ze'ev Rosenkranz, & Tilman Sauer
Einstein's work and intense scientific exchanges--with N. Bohr, P. Ehrenfest, A. Sommerfeld, M. Born, and others--during these fifteen months result in remarkable publications and intellectual developments. A paper written with Ehrenfest shows with uncompromising clarity that the outcome of the recent Stern-Gerlach experiment could not be explained by either classical or quantum theory. In a similar vein, he analyzes the phenomenon of superconductivity. Clearly among the leading quantum theorists, he focuses on its conceptual bases, tirelessly proposing crucial experiments that could decide between classical and quantum physics. We also see foundational interests develop in his concerns with a unified field theory of electromagnetism and gravitation.
Cloth | $125.00 / £85.00
The Collected Papers of Albert Einstein, Volume 13:
The Berlin Years: Writings & Correspondence, January 1922 - March 1923 (English Translation Supplement)
Edited by Diana Kormos Buchwald, József Illy, Ze'ev Rosenkranz, & Tilman Sauer
Translated by Ann M. Hentschel & Osik Moses
Klaus Hentschel, consultant
Every document in Collected Papers of Albert Einstein appears in the language in which it was written, and this supplementary paperback volume presents the English translations of select portions of non-English materials in Volume 13. This translation does not include notes or annotation of the documentary volume and is not intended for use without the original language documentary edition which provides the extensive editorial commentary necessary for a full historical and scientific understanding of the documents.
Paper | $45.00 / £30.95
Books released during the week of
September 10, 2012
Guesstimation 2.0: Solving Today's
Problems on the Back of a Napkin
Lawrence Weinstein
"Guesstimation 2.0 is an entertaining
read, with the added attraction that
it can be consumed in small portions
by opening it on almost any page. I
can easily see having this book close
by and returning to it again and
again."--Mark Levi, author of Why
Cats Land on Their Feet: And 76 Other
Physical Paradoxes and Puzzles
Paper $19.95 / £13.95
eBook edition available
Books released during the week of July
30, 2012
Science on Stage: From Doctor Faustus
to Copenhagen
Kirsten Shepherd-Barr
"Science on Stage is the best
available companion to modern science
plays."--Times Higher Education
Paper $24.95 / £16.95
Cloth $42.00 / £28.95
Books released during the week of July
2, 2012
Across the Board: The Mathematics of Chessboard Problems
John J. Watkins
"Watkins offers an excellent invitation to serious mathematics."--Choice
Paper $18.95 / £12.95
eBook edition available
Chases and Escapes: The Mathematics of Pursuit and Evasion (New in Paper)
Paul J. Nahin
"I am sure that this book will appeal to everyone who is interested in mathematics and game theory. Excellent work."--Prabhat Kumar Mahanti, Zentralblatt Math
Paper $18.95 / £12.95
eBook edition available
Duelling Idiots and Other Probability Puzzlers
Paul J. Nahin
"Nahin's sophisticated puzzles, and their accompanying explanations, have a far better than even chance of fascinating and preoccupying the mathematically literate readership they seek."--Publisher's Weekly
Paper $18.95 / £12.95
eBook edition available
The Irrationals: A Mathematical Story of the Numbers You Can't Count On
Julian Havil
"The Irrationals is a true mathematician's and historian's delight."--Robert Schaefer, New York Journal of Books
Cloth $29.95 / £19.95
eBook edition available
Mathematical Excursions to the World's Great Buildings
Alexander J. Hahn
"[Hahn] conducts an opulent historical and geographical tour."--Jascha Hoffman, New York Times Book Review
Cloth $49.95 / £34.95
eBook edition available
The Mathematical Mechanic: Using Physical Reasoning to Solve Problems
Mark Levi
"A most interesting book. . . . Many of the ideas in it could be used as motivational or illustrative examples to support the teaching of non-specialists, especially physicists and engineers. In conclusion--a thoroughly enjoyable and thought-provoking read."--Nigel Steele, London Mathematical Society Newsletter
Paper $14.95 /
Cloth $19.95 /
eBook edition available
Slicing Pizzas, Racing Turtles, and Further Adventures in Applied Mathematics
Robert B. Banks
"[Banks displays] a playful imagination and love of the fantastic that one would not ordinarily associate with a mathematical engineer. . . . Banks's style is entertaining but never condescending."--The Christian Science Monitor
Paper $18.95 / £12.95
eBook edition available
Books released during the week of May 7, 2012
Alan Turing: The Enigma The Centenary Edition
Andrew Hodges
"One of the finest scientific biographies ever written."--Jim Holt, New Yorker
Paper $24.95
eBook edition available
Alan Turing's Systems of Logic: The Princeton Thesis
Edited and introduced by Andrew W. Appel
"For me, this is the most interesting of Alan Turing's writings, and it is a real delight to see a facsimile of the original typescript here. The work is packed with ideas that have turned out to be significant for all sorts of current research areas in computer science and mathematics."--S. Barry Cooper, University of Leeds
Cloth $24.95 / £16.95
Why Cats Land on Their Feet: And 76 Other Physical Paradoxes and Puzzles
Mark Levi
"A collection of physical puzzlers, often with counter intuitive manifestations, which, for all that, admit rigorous explanation supported by physical intuition. . . . [H]ugely entertaining and provide hours of brainy activities."--Alexander Bogomolny, CTK Insights
Paper $19.95 / £13.95
eBook edition available
X and the City:
Modeling Aspects of Urban Life
John A. Adam
"[Adam's] writing is fun and accessible. . . . College or even advanced high school mathematics instructors will find plenty of great examples here to supplement the standard calculus problem sets."--Library Journal
Cloth $29.95 / £19.95
eBook edition available
Books released during the week of April
23, 2012
The Ultimate Book of Saturday Science:
The Very Best Backyard Science
Experiments You Can Do Yourself
Neil A. Downie
"This is the most extensive
collection of project ideas at this
level that I know of. Downie gives
better 'how to' explanations and
takes the ideas further than most
other books of this kind. The
Ultimate Book of Saturday Science is
a true omnibus."--David Willey,
University of Pittsburgh at Johnstown
Paper $29.95 / £19.95
eBook edition available
Books released during the week of April
16, 2012
The Decomposition of Global Conformal
Invariants (AM-182)
Spyros Alexakis
This book addresses a basic question
in differential geometry that was
first considered by physicists
Stanley Deser and Adam Schwimmer in
1993 in their study of conformal anomalies.
Paper $75.00 / £52.00
Cloth $165.00 / £115.00
eBook edition available
Books released during the week of April
9, 2012
The Universe in Zero Words:
The Story of Mathematics as Told through Equations
Dana Mackenzie
The Universe in Zero Words tells the history of twenty-four great and beautiful equations that have shaped mathematics, science, and society--from the elementary (1+1=2) to the sophisticated (the Black-Scholes formula for financial derivatives), and from the famous (E=mc2) to the arcane (Hamilton's quaternion equations). Mackenzie, who has been called "a popular-science ace" by Booklist magazine, lucidly explains what each equation means, who discovered it (and how), and how it has affected our lives.
Cloth $27.95 / £19.95
eBook edition available
A Wealth of Numbers:
An Anthology of 500 Years of Popular Mathematics Writing
Edited by Benjamin Wardhaugh
This entertaining and enlightening anthology--the first of its kind--gathers nearly one hundred fascinating selections from the past 500 years of popular math writing, bringing to life a little-known side of math history.
Cloth $45.00 / £30.95
Books released during the week of April
2, 2012
Mumford-Tate Groups and Domains:
Their Geometry and Arithmetic
Mark Green, Phillip A. Griffiths &
Matt Kerr
Mumford-Tate groups are the
fundamental symmetry groups of Hodge
theory, a subject which rests at the
center of contemporary complex
algebraic geometry. This book is the
first comprehensive exploration of
Mumford-Tate groups and domains.
Containing basic theory and a wealth
of new views and results, it will
become an essential resource for
graduate students and researchers.
Paper $75.00 / £52.00
Cloth $165.00 / £115.00
eBook edition available
Books released during the week of March
26, 2012
New in Paperback: Euler's Gem:
The Polyhedron Formula and the Birth
of Topology
David S. Richeson
"The author has achieved a remarkable
feat, introducing a naïve reader to a
rich history without compromising the
insights and without leaving out a
delicious detail. Furthermore, he
describes the development of topology
from a suggestion by Gottfried
Leibniz to its algebraic formulation
by Emmy Noether, relating all to
Euler's formula. This book will be
valuable to every library with
patrons looking for an awe-inspiring
Paper $16.95 / £11.95
Cloth $27.95 / £19.95
eBook edition available
Books released during the week of
February 27, 2012
Circles Disturbed:
The Interplay of Mathematics and Narrative
Edited by Apostolos Doxiadis & Barry Mazur
"Circles Disturbed offers a range of possibilities for how narrative can function in mathematics and how narratives themselves show signs of a mathematical structure. An intelligent, exploratory collection of writings by a distinguished group of contributors."--Theodore Porter, University of California, Los Angeles
Cloth $49.50 / £34.95
eBook edition available
Hybrid Dynamical Systems:
Modeling, Stability, and Robustness
Rafal Goebel, Ricardo G. Sanfelice & Andrew R. Teel
"This superb book unifies some of the key developments in hybrid dynamical systems from the last decade and, through elegant and clear technical content, introduces the necessary tools for understanding the stability of these systems. It will be a great resource for graduate students and researchers in the field."--Magnus Egerstedt, Georgia Institute of Technology
Cloth $79.50 / £55.00
eBook edition available
Mathletics:
How Gamblers, Managers, and Sports Enthusiasts Use Mathematics in Baseball, Basketball, and Football (New in Paper)
Wayne L. Winston
"Sports fans will learn much from probability theory and statistical models as they abandon empty clichés (time to throw momentum out of the informed fan's lexicon) and confront institutionalized injustices (such as those built into the protocols for selecting a national champion in college football and for seeding the NCAA's basketball tournament). A rare fusion of sports enthusiasm and numerical acumen."--Booklist
Paper $19.95
eBook edition available
Books released during the week of
February 13, 2012
Mathematical Analysis of
Deterministic and Stochastic Problems
in Complex Media Electromagnetics
G. F. Roach, I. G. Stratis & A. N. Yannacopoulos
"This is an outstanding book that has
the potential to become a real
classic. It is the first to
systematically address the
mathematics of electromagnetic wave
propagation in complex media. It will
be useful not only to mathematicians
but also graduate students,
physicists, and engineers who want to
get a state-of-the-art picture of
scattering by complex
media."--Gerhard Kristensson, Lund
University, Sweden
Cloth $99.50 / £69.95
eBook edition available
Books released during the week of
February 6, 2012
Fréchet Differentiability of Lipschitz Functions and Porous Sets in Banach Spaces (AM-179)
Joram Lindenstrauss, David Preiss & Jaroslav Tišer
This book makes a significant inroad into the unexpectedly difficult question of existence of Fréchet derivatives of Lipschitz maps of Banach spaces into higher dimensional spaces. Because the question turns out to be closely related to porous sets in Banach spaces, it provides a bridge between descriptive set theory and the classical topic of existence of derivatives of vector-valued Lipschitz functions. The topic is relevant to classical analysis and descriptive set theory on Banach spaces. The book opens several new research directions in this area of geometric nonlinear functional analysis.
Paper $75.00 / £52.00
Cloth $165.00 / £115.00
eBook edition available
Google's PageRank and Beyond:
The Science of Search Engine Rankings
Amy N. Langville & Carl D. Meyer
"[F]or anyone who wants to delve deeply into just how Google's PageRank works, I recommend Google's PageRank and Beyond."--Stephen H. Wildstrom, BusinessWeek
Paper $24.95 / £16.95
Cloth $42.00 / £28.95
eBook edition available
Small Unmanned Aircraft: Theory and Practice
Randal W. Beard & Timothy W. McLain
"This book presents a unique and broad introduction to the necessary background, tools, and methods to design guidance, navigation, and control systems for unmanned air vehicles. Written with confidence and authority by leading researchers in the field, this effectively organized book provides an excellent reference for all those interested in this subject."--Emilio Frazzoli, Massachusetts Institute of Technology
Cloth $99.50 / £69.95
eBook edition available
Who's #1?
The Science of Rating and Ranking
Amy N. Langville & Carl D. Meyer
"Who's #1? is an excellent survey of the fundamental ideas behind mathematical rating systems. Once a realm of sports enthusiasts, ranking things is becoming a vital tool in many information-age applications. Langville and Meyer compare and contrast a variety of models, explaining the mathematical foundations and motivation. Readers of this book will be inspired to further explore this exciting field."--Kenneth Massey, Massey Ratings
Cloth $29.95 / £19.95
eBook edition available | {"url":"http://press.princeton.edu/math/newpubs6.html","timestamp":"2014-04-19T01:50:25Z","content_type":null,"content_length":"77928","record_id":"<urn:uuid:dc3695c6-5657-4119-8a80-6b1106a5e4d0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00483-ip-10-147-4-33.ec2.internal.warc.gz"} |
Physics Forums - Finite element method versus integrated finite difference for complex geometries?
Hello all:
For modeling flow (or whatever) in a non-rectangular geometry, can anyone comment on whether the finite element method would be better or worse or the same as the integrated finite difference method?
I'm reading some papers by competing groups (so I can decide which code to start using), and the finite element group maintains that the flow field can be distorted when using the integrated finite
difference method.
My questions: first of all, is this true? If so, is the problem significant? And are there any other potential advantages/disadvantages of either method over the other?
I have basic knowledge of these methods, but not enough to evaluate their advantages/disadvantages in a meaningful way! Thanks. | {"url":"http://www.physicsforums.com/showpost.php?p=3608897&postcount=1","timestamp":"2014-04-19T09:42:37Z","content_type":null,"content_length":"9176","record_id":"<urn:uuid:19c20168-e537-4132-b03e-7e2ece2d66a7>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00374-ip-10-147-4-33.ec2.internal.warc.gz"} |
Robert Morris University - Course Catalog
ENGR2090 - Appl Engineering Statistics II
Spring 2014
This course emphasizes the application of statistical techniques in this continuation of QS208 Applied Engineering Statistics I. Simple and multiple regression is studied along with analysis of variance.
Chi-square goodness of fit tests are studied including the test for independence in a contingency table. Control charts are presented and studied. All techniques are examined from the data collection
and analysis stages through the presentation of final results. Throughout the process, computer analysis is employed. 3 credits
Prerequisite: QS208
3 Credits
MATH0900 - Intro To College Mathematics
Spring 2014
QS099 Modern Elementary College Algebra. This course is an introductory course presenting the principles of elementary algebra. Topics covered include the real number system, linear equations and
inequalities, factoring, operations with polynomials, exponents and radicals, and an introduction to functions and the Cartesian coordinate system. Placement into this course is done through the
placement testing program. (3 credits)
3 Credits
MATH1010 - College Mathematics
Spring 2014
QS101 College Mathematics I. This course is a comprehensive study of college algebra topics utilizing the TI-83 calculator wherever possible. Emphasis is placed on the solution and graphing of
linear, quadratic, exponential, and logarithmic functions. Systems of equations and inequalities are presented as they relate to linear programming. The TI-83 graphing calculator is required.
Prerequisite: MATH0900 or content evidence by placement examination
3 Credits
MATH1020 - Pre-Calculus
Spring 2014
QS102 College Mathematics II. This is a course for students wishing further study in algebra and trigonometry. This course emphasizes the function concept and includes topics of circular and
trigonometric functions, theory of equations, matrices and determinants, vectors, complex numbers, and sequences and series. The TI-83 (or higher) graphing calculator is required.
Prerequisite: MATH1010 or content evidence by placement examination
3 Credits
MATH1050 - Math Reasoning/Applications
Spring 2014
QS105 Mathematical Reasoning and Applications. This course aids the student to be cognizant of the vocabulary and mathematical skills necessary to develop quantitative reasoning skills for the
general liberal arts major. It takes the view that modern mathematics has become an art of posing and solving problems by logical reasoning: understanding the problem, devising a plan, and carrying
out the plan. The course accomplishes these tasks by dealing with the following topics: algebra, geometry, probability and statistics, set theory, finite groups, graph theory, and basic logic.
Computer applications throughout the course may be included.
Prerequisite: MATH0900 or content evidence by placement examination.
3 Credits
MATH2010 - Fundamentals Of Mathematics
Spring 2014
This course is designed to give elementary education majors the mathematical foundation for early mathematics. Fundamental topics in geometry, measurement, estimation, numeration, number systems,
number relations, fractions, decimals, statistics, and probability will be covered. The National Council of Teachers of Mathematics' Curriculum and Evaluation Standards for Grades K-8 and the
integration of technology will also be a focus.
Prerequisite: MATH1010 or MATH1050
3 Credits
MATH2040 - Finite Math & Applied Calculus
Spring 2014
MATH2040 Finite Mathematics and Applied Calculus is directed toward students in the bachelor of science in business administration and other disciplines outside of the School of Engineering Math and
Science. Topics covered include systems of linear equations, matrices, linear programming, differential calculus, exponential functions and the mathematics of finance. The primary focus is the
application of each of these topics. The graphing calculator (TI83) is used throughout the course to discover and to gain insights into the fundamental concepts. 3 credits
Prerequisite: MATH1010 or MATH1020 or content evidence by placement examination.
3 Credits
MATH2050 - Applied Calculus I
Spring 2014
This course introduces students to the basic ideas of calculus. Topics covered include functions and graphs; differentiation, integration, and optimization of algebraic, exponential, and logarithmic
functions of one independent variable; cost revenue and profit functions along with elasticity of demand and consumer surplus; and exponential growth and decay. The graphing calculator (TI83) is used
throughout the course to discover and to gain insights into the fundamental concepts of calculus.
Prerequisite -- MATH1010 or Advanced Placement
3 Credits
MATH2070 - Calculus W/Analytic Geom I
Spring 2014
This is the first in a three-course calculus sequence. Topics covered include limits, continuity, derivatives, rules for derivation, applications, and related rates; optimization techniques for
extrema including Rolle's and mean value theorems; first and second derivative tests; curve sketching; differentials and indefinite integrals; Riemann Sums; integration techniques, and the
Fundamental Theorem of Calculus. The TI-83 (or higher) graphing calculator is required.
Prerequisite: MATH1020 or content evidence by placement examination
4 Credits
MATH2170 - Calculus W/Analytic Geom II
Spring 2014
This is the second course in a three-course calculus sequence. Topics covered include applications of the integral; area between curves; solids of revolution; moments and centroids; logarithmic and
exponential functions; indeterminate forms; derivatives and integrals of trigonometric and inverse trigonometric functions; integration techniques and improper integrals; infinite series and
sequences including Taylor and Maclaurin series.
Prerequisite: MATH2070
4 Credits
MATH3030 - Operations Research I
Spring 2014
This course introduces students to quantitative methods and applications in business decision-making. The quantitative models studied in this course include matrix models, the Leontief input/output
model, Markov chains, linear programming with shadow pricing and sensitivity analysis, transportation and assignment algorithms, and network models. Computer software is used as a practical
implementation of these models. This course is usually offered only in the winter term.
Prerequisites: STAT2110 or STAT3140 and MATH2070
3 Credits
MATH3040 - Operations Research II
Spring 2014
This course presents quantitative methods and business applications, most of which require a basic knowledge of probability and statistics. The topics of study include PERT/CPM, inventory control
models, queuing systems, introduction to time series and forecasting, and introduction to simulation and Monte Carlo methods. Computer software is used as a practical implementation of the various
models. This course is usually offered in the fall term.
Prerequisite: MATH3030 (QS303)
3 Credits
MATH3060 - Applied Calculus II
Spring 2014
This course uses the basic ideas of QS305 for business and economic applications. Topics covered include techniques of integration, partial differentiation, double integration, and optimization of
functions of two or more independent variables; Lagrangian multipliers for constrained optima; areas and volumes of revolution; economic order quantity and production lot size methods; and regression
and probability using calculus. This course is usually offered only in the winter term.
Prerequisites: MATH2050, MATH2070 (QS205 or 207 (3 credits))
3 Credits
MATH3090 - Calculus W/Analytic Geom III
Spring 2014
This course is the third in a three-course calculus sequence. Topics covered include conic sections, plane curves, parametric equations, vectors and curves in the plane, dot and cross products,
applications, tangent and normal vectors, functions of several variables, partial derivatives, gradients, extrema of functions of two variables, Lagrange multipliers, multiple integrals, polar,
cylindrical, and spherical coordinates. Software proficiency in word processing is required.
Prerequisite: MATH2170 or equivalent.
4 Credits
MATH3200 - Geometry
Spring 2014
This is a course designed primarily for students majoring in applied mathematics with teacher certification. Topics covered include classical Euclidean geometry, theorems of Ceva and Menelaus, varied
sets of axioms, analytic and transformational geometry, non-Euclidean, and projective geometry. Finite geometries including nine point geometry of a circle and 25 point geometry are investigated.
Software proficiency in word processing is required.
Prerequisites: MATH2170 and COSK2220
3 Credits
MATH3250 - Practicum Tchng Math With Tech
Spring 2014
This course is designed to train students in the use of computer software, CD-ROM technology, and the graphing calculator for the teaching of mathematics and to allow these students to act as
technology assistants in computer laboratories and as trainers of math tutors. Emphasis is placed on the use of the TI-83 graphing calculator in algebra, calculus, and finance, on the use of Excel
software in statistics, and on CD-ROM technology used in elementary algebra. Training in the use of the equation editor in Microsoft Word will also be provided. Students are required to complete a
minimum number of hours of tutor training and/or computer lab assisting and to review current mathematics software packages for secondary and post-secondary settings.
Prerequisites: STAT2110 and MATH2070 with a grade of C or better.
3 Credits
MATH3400 - Linear Algebra W/Applications
Spring 2014
This is a course designed for students with majors in several mathematics related areas. Topics covered include matrices, operations with matrices, inverses of matrices, singular and nonsingular
matrices, determinants, cofactors, Cramer's rule, vectors and vector spaces, independence, basis and dimension, orthogonality, the Gram-Schmidt process, linear transformations, eigenvalues, and
eigenvectors. Applications include linear programming, Markov chains, quadratic forms, theory of games, least squares, and linear economic models.
Prerequisites: MATH2070 and COSK2220 or COSK2225
3 Credits
MATH3420 - Differential Equations
Spring 2014
This course is designed primarily for students majoring in engineering, mathematics, and other physical sciences. Topics covered include first and second order differential equations, boundary
value problems, and methods of solution involving calculus, infinite series, Laplace transforms, and numeric procedures.
Prerequisite: MATH2170
3 Credits
MATH3440 - Intro To Real Analysis
Spring 2014
The rigorous development of calculus in a single variable is the content of the course. The theory is developed entirely from a small number of axioms for the real number system and intuitive
set-theoretic concepts. Lectures emphasize the construction of rigorous arguments in analysis and the communication of mathematical proofs. This course is recommended for students in mathematics,
mathematics education, actuarial science, engineering, and the physical sciences. Topics covered include the real number system, limits, continuity, differentiation, the Riemann integral, and
sequences and series of functions. 3 credits
Prerequisite: MATH2170
3 Credits
MATH4000 - Discrete Math
Spring 2014
This course is an introductory course for anyone interested in mathematical structures with emphasis on computer implementation. The course includes topics such as propositional calculus, set
theoretic concepts, relations and functions, mathematical induction, recursion, combinatorics, matrices, graphs, trees, their branching, leaves, and how to climb them (i.e., tree traversals). (3 credits)
Prerequisites: MATH2070 and COSK2220 or COSK2225
3 Credits
MATH4050 - Abstract Algebra/Number Theory
Spring 2014
This course introduces students to the basic ideas of abstract algebra and number theory. Topics covered in number theory include mathematical induction, divisibility algorithms, factorization
methods, primes, congruences, and Diophantine equations. Topics covered in abstract algebra include binary and equivalence relations, groups and subgroups, isomorphisms and homomorphisms, rings, and fields.
Prerequisite: MATH2170
3 Credits
MATH4200 - Intro To Stochastic Processes
Spring 2014
This course introduces various techniques of modeling a variety of real world problems. The techniques cover a spectrum of discrete and continuous, linear and non-linear models and illustrate the use
of mathematical software as an aid to simulating and testing models. Applications will come from such diverse areas as production planning, finance, transportation, and environmental and health-related fields.
Prerequisites: MATH2070, STAT3140 and MATH3400
3 Credits
MATH4250 - Multivariable Systems Analysis
Spring 2014
This course is directed to the application major in the areas of applied mathematics, quantitative sciences, marketing, economics, and others who need a firm foundation in multivariable statistics
with applications. The course includes a brief introduction to hypothesis testing and interval estimation, simple and multiple regression analysis techniques, model building using classical stepwise
regression, forward selections, backward elimination, auto correlation, and Durbin-Watson tests. Multiple regression and factor analysis techniques applied to analysis of variance and experimental
design are emphasized. Applications to sample survey, sampling techniques, and sample size calculation via power analysis in econometrics and market research cases are developed throughout the course.
Prerequisite: MATH3140
3 Credits
MATH4903 - Internship/Co-Op
Spring 2014
Course description unavailable, please contact Academic Services.
3 Credits
MATH4909 - Internship/Co-Op
Spring 2014
Course description unavailable, please contact Academic Services.
9 Credits
STAT4400 - Statistical Software Applicatn
Spring 2014
This course emphasizes the use of computer technology and quantitative methods in statistical analysis and in the managerial decision making process. Most of the course concentrates on SAS
(Statistical Analysis System), including basic terminology and logic, simple tasks and statistics, reading and analyzing data, refining the program, data management and reporting, and procedures for
univariate, bivariate, and multivariate analysis. Other computer packages such as EXECUSTAT for time series analysis, and CRITERIUM for decision making analysis are also presented. Software
proficiency in word processing, spreadsheet, and database is required.
Prerequisite: STAT3120 (QS312) (3 credits)
3 Credits | {"url":"http://rmu.edu/OnTheMove/wpcrsehist.list_courses?icalledby=WPCRSEHIST&ipage=701&it=&iattr=&idiscipline=MATH&ischool=U&ilong_descr=1&ishowreg=No","timestamp":"2014-04-20T06:22:58Z","content_type":null,"content_length":"45144","record_id":"<urn:uuid:9c5f0297-22cf-4bad-af18-460106f35ae5>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00196-ip-10-147-4-33.ec2.internal.warc.gz"} |
Okun law on GDP components
The ECB released an extensive report on the Euro area labour market recently. It’s worth a read and contains a detailed statistical analysis of labour market developments since the start of the Euro
crisis. One interesting part of that analysis is a calculation of Okun’s law while decomposing GDP into its components and computing the impact of each one on total unemployment change. The exercise
is quite elegant and explains a large part of the change in unemployment (R² = 0.57). Coefficients for each component are computed, as well as the elasticity by utilizing the weights of each
component on aggregate GDP:
What is very striking is that the computed elasticities deal a significant blow to the viability of the ‘internal devaluation’ policy, since the (private) consumption elasticity is almost an order of magnitude larger than the export and import elasticities (which are equal and, up to a point, cancel each other out, since a part of any exported product is imported). That is probably attributable to the fact that domestic, consumption-oriented sectors such as services are much more labour-intensive than the mostly capital-intensive export sectors. As a result, an export-led recovery, at least in terms of employment, is not a viable solution for the Euro imbalances: the austerity measures will lead to sustained long-term unemployment which will not be absorbed easily by the tradable sector, creating serious social problems as well as lowering the potential growth rate.
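
In equation form, the decomposition is roughly Δu_t = α + Σ_i β_i·%Δx_i,t + error, and the elasticity of unemployment with respect to component i is η_i = β_i × w_i, where w_i is the component's weight in aggregate GDP (my notation, a sketch of the specification rather than the ECB's exact one). A minimal Python illustration of the elasticity step, using purely hypothetical coefficients and weights rather than the report's estimates:

# Okun-type decomposition: du = alpha + sum_i beta_i * pct_change(x_i) + error.
# Elasticity of unemployment w.r.t. component i: eta_i = beta_i * w_i.
# All numbers below are hypothetical placeholders, NOT the ECB's estimates;
# they are only chosen to mimic the ordering discussed above.
coefficients = {"consumption": -0.25, "investment": -0.30,
                "exports": -0.08, "imports": 0.07}   # regression betas
gdp_weights = {"consumption": 0.70, "investment": 0.20,
               "exports": 0.25, "imports": 0.30}     # shares of GDP

elasticities = {k: coefficients[k] * gdp_weights[k] for k in coefficients}
for name, eta in sorted(elasticities.items()):
    print(f"{name:12s} eta = {eta:+.3f}")   # consumption dwarfs exports/imports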
One can use the computed coefficients and the Greek GDP component weights (for the 2000 – 2012Q2 period) to examine what the calculated change in unemployment would be. I’ve done the exercise for
the 2008Q2 – 2012Q2 period in the following table:
The computed change is almost 2/3 of the actual change, so the regression is able to explain a very large part of the increase in Greek unemployment. Private consumption and investment account for 5.9% and 5.2% of the change, while the import squeeze lowers the unemployment rate by only 1.8% and exports (especially compared to 2008) do not have any meaningful effect. Even if Greece were to experience a large export-led recovery, the very low export elasticity of no more than 0.02 (which is equal to the import elasticity) would not produce any significant effect on the unemployment rate, especially since at least a part of exports must be imported. Any meaningful employment recovery must be domestically demand-led.
One can also use the above elasticity numbers to calculate the projected unemployment increase from the 2013 austerity measures, using the Greek budget projections for GDP component changes:
The government projection seems reasonable. Since the regression can only explain about 2/3 of the change, the actual increase could be up to 3%. What is optimistic is the projection of only a -3.7% change in investment, since the corresponding change in 2012 was -15%. A more realistic change of -10% leads to an unemployment increase of 2.5%.
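
As a rough, self-contained sketch of that scenario arithmetic (again in Python): the elasticities below carry over the hypothetical values from the earlier snippet, the non-investment growth figures are likewise placeholders, and only the -3.7% versus -10% investment scenarios come from the text, so the printed numbers merely illustrate the mechanics rather than reproduce the budget's calculation:

# Projected change in the unemployment rate, in percentage points:
# du ≈ (1 / explained_share) * sum_i eta_i * pct_change_i,
# rescaled because the regression explains only ~2/3 of the actual change.
ELASTICITY = {"consumption": -0.175, "investment": -0.060,
              "exports": -0.020, "imports": 0.021}   # hypothetical, as above
EXPLAINED_SHARE = 2 / 3

def projected_du(pct_changes):
    raw = sum(ELASTICITY[k] * pct_changes[k] for k in pct_changes)
    return raw / EXPLAINED_SHARE

budget_2013 = {"consumption": -6.9, "investment": -3.7,
               "exports": 3.0, "imports": -5.0}      # placeholder projections
pessimistic = dict(budget_2013, investment=-10.0)    # -10% investment scenario

print(f"budget scenario:      {projected_du(budget_2013):+.1f} pp")
print(f"pessimistic scenario: {projected_du(pessimistic):+.1f} pp")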
| {"url":"http://kkalev4economy.wordpress.com/2012/11/03/okun-law-on-gdp-components/","timestamp":"2014-04-19T23:11:45Z","content_type":null,"content_length":"62419","record_id":"<urn:uuid:974db5d2-c575-4845-953b-b22ff0a7a08e>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00457-ip-10-147-4-33.ec2.internal.warc.gz"}
A meta-proof
inspired by . . .: a meta-proof of P=/!=NP, coming to a newsgroup near you
P: I would like to announce my proof of P=/!=NP. The proof is very short and demonstrates how to solve/not solve SAT in polynomial time. You may find a write up of the proof here.
|-- V: I started reading your proof and when you claim 'foobar' do you mean 'foobar' or 'phishbang' ?
|----P: I meant 'phishbang'. Thanks for pointing that out. An updated version is
|------V: Well if you meant 'phishbang' then statement "in this step we assume the feefum" is incorrect.
|--------P: No no, you don't understand. I can assume feefum because my algorithm has a glemish.
|-----------V: It has a glemish ? !! But having a glemish doesn't imply anything. All algorithms have glemishes !!
|----V': Yes, and in fact in the 3rd step of argument 4, your glemish contradicts the first propum.
|--V'': I think you need to understand some basic facts about complicity theory before you can go further. Here is a book to read.
|----P: My proof is quite clear, and I don't see why I have to explain it to you if you don't understand. I have spent a long time on this.
|------V': Um, this is a famous problem, and there are many false proofs, and so you do have to convince us that the argument using glemishes can actually work.
|--------P: But what is wrong in my proof ? I don't see any problems with it, and if you can't point one out, how can you say it is wrong.
|----------V'''': I don't have to read the entire proof: glemished algorithms are well known not to work.
|------V'''''': Check out this reference to see why.
P: <silence>
|--P: <answering earlier post>. This is what I mean by a glemish. it is really a flemish, not a glemish, which answers your objection.
|----P': Keep up the good work P. I tried publishing my result, and these people savaged my proof without even trying to identify a problem. All great mathematical progress has come from amateurs
like us. See this link of all the theorems proved by non-experts.
|------V': Oh jeez, not P' again. I thought we had established that your proof was wrong.
|--------P': no you didn't: in fact I have a new version that explains the proof in such simple language even dumb&%&%s like you can get it.
|------P: Thanks P', I understand that there will be resistance from the community since I have proved what they thought to be so hard.
|--V': P, I'm trying to understand your proof, with the flemishes, and it seems that maybe there is a problem in step 517 with the brouhaha technique.
P: <silence>
|----P: V', thanks for pointing out that mistake. you are right. Instead of a brouhaha technique I need a slushpit. The details are complicated, so I will fix it and post a corrected version of the
proof shortly. Thanks to all those who gave me constructive advice. I am glad that at least some of you have an open mind to accept new ideas.
This was prompted by the latest P=NP fiasco on comp.theory. I can only express my amazement and wonder at the few tireless souls (and they seem to be the same ones) who patiently try to address each
new crackpot proof that comes down the pipe. | {"url":"http://geomblog.blogspot.com/2004/04/meta-proof.html","timestamp":"2014-04-21T13:10:13Z","content_type":null,"content_length":"141324","record_id":"<urn:uuid:6ea9e1b4-6343-439b-bedb-5977484f5102>","cc-path":"CC-MAIN-2014-15/segments/1397609539776.45/warc/CC-MAIN-20140416005219-00590-ip-10-147-4-33.ec2.internal.warc.gz"} |
Ted Bunn’s Blog
NPR should be ashamed of themselves for generating this report and running it on their flagship news program Morning Edition. For that matter, so should John Grotzinger, the NASA scientist interviewed in the segment.
Grotzinger says they recently put a soil sample in SAM, and the analysis shows something remarkable. “This data is gonna be one for the history books. It’s looking really good,” he says.
Grotzinger can see the pained look on my face as I wait, hoping he’ll tell me what the heck he’s found, but he’s not providing any more information.
So why doesn’t Grotzinger want to share his exciting news? The main reason is caution. Grotzinger and his team were almost stung once before. When SAM analyzed an air sample, it looked like there
was methane in it, and at least here on Earth, some methane comes from living organisms.
Either they’ve got something amazing, in which case it’ll still be amazing once they’ve released it, or they don’t, in which case this is just a bit of hype that does a bit more to erode the
credibility of scientists everywhere. There’s no scenario in which this report has any positive effect. | {"url":"http://blog.richmond.edu/physicsbunn/2012/11/","timestamp":"2014-04-20T11:20:25Z","content_type":null,"content_length":"42812","record_id":"<urn:uuid:c0397fe6-158a-410a-b8da-93f85fb1ee06>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00116-ip-10-147-4-33.ec2.internal.warc.gz"} |
Logic and Scientific Methods
Logic and Scientific Methods: Volume One of the Tenth International Congress of Logic, Methodology and Philosophy of Science, Florence, August 1995
Maria Luisa Dalla Chiara
This is the first of two volumes comprising the papers submitted for publication by the invited participants to the Tenth International Congress of Logic, Methodology and Philosophy of Science, held
in Florence, August 1995. The Congress was held under the auspices of the International Union of History and Philosophy of Science, Division of Logic, Methodology and Philosophy of Science.
The invited lectures published in the two volumes demonstrate much of what goes on in the fields of the Congress and give the state of the art of current research. The two volumes cover the
traditional subdisciplines of mathematical logic and philosophical logic, as well as their interfaces with computer science, linguistics and philosophy. Philosophy of science is broadly represented,
too, including general issues of natural sciences, social sciences and humanities. The papers in Volume One are concerned with logic, mathematical logic, the philosophy of logic and mathematics, and
computer science.
PROOF-THEORETICAL ASPECTS OF SELF-REFERENTIAL TRUTH 7
FREE LATTICES COMMUNICATION AND MONEY GAMES 29
ON METHODS FOR PROVING LOWER BOUNDS IN PROPOSITIONAL LOGIC 69
ON BOUNDED SET THEORY 85
Model Theory Set Theory and Formal Systems 105
INFINITARY LOGIC IN FINITE MODEL THEORY 107
DECISION PROBLEMS FOR SECOND-ORDER LINEAR LOGIC 127
Recursion Theory and Constructivism 157
CHURCH'S THESIS AND HUME'S PROBLEM 159
THE LOGIC OF FUNCTIONAL RECURSION 179
FROM HIGHER ORDER TERMS TO CIRCUITS 209
COMPUTABILITY AND ENUMERABILITY 221
THE IMPORT OF TURING'S THESIS 239
Philosophical Logic 259
CONJOINING AND DISJOINING ON DIFFERENT LEVELS 261
A TURN IN STYLE 289
APPLYING NORMATIVE RULES WITH RESTRAINT 313
Foundations of Logic Mathematics and Computer Science 333
WHAT CAN WE DO IN PRINCIPLE? 335
CAUSATION ACTION AND COUNTERFACTUALS 355
Logic and Philosophy of Science Current Interfaces 377
CURRENT INTERFACES 379
RELIABLE BELIEF REVISION 383
Beyond Functionalism and Reductionism 399
LOGIC VISUAL THINKING AND COHERENCE 413
CAN THE LAWS OF NATURE PHYSICS BE COMPLETE? 429
Logic in Central and Eastern Europe 447
LOGIC IN CENTRAL AND EASTERN EUROPE 449
LOGIC IN CZECHOSLOVAKIA AND HUNGARY 451
BRIEF HISTORY AND CURRENT TRENDS 457
BALKAN REGION 485
THE POSTWAR PANORAMA OF LOGIC IN POLAND 497
Closing Address 509
PHILOSOPHICAL PERPLEXITY AND PARADOX 511
TABLE OF CONTENTS VOLUME II 531
| {"url":"http://books.google.co.uk/books?id=e9-Q0qy0HaoC&dq=related:ISBN0444705201&lr=","timestamp":"2014-04-19T01:57:46Z","content_type":null,"content_length":"131792","record_id":"<urn:uuid:18f5db0d-1b3e-4677-bb39-c9f4926ffd7a>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00150-ip-10-147-4-33.ec2.internal.warc.gz"}
Astronomical Cycles
The Earth's orbit around the Sun isn't perfectly circular. And, like a spinning top, the Earth has wobbles. The overall result is that there are a number of cycles, and these cycles should affect the
amount of sunlight we get.
Astronomers long ago worked out that the most detectable cycles would take 23,000 years, 41,000 years, 100,000 years, and 404,000 years. So, if we find a layered rock, it would be interesting to
measure the layer thicknesses, and study the numbers to see if they're cyclic. (That's called a Fourier analysis.)
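As an aside, here is a minimal sketch of what such a Fourier analysis looks like in practice. The layer-thickness series below is synthetic (made up for illustration), not real core data:

```python
import numpy as np

# Synthetic series: layer thickness vs. layer number, with hidden
# 23- and 41-layer cycles buried in noise.
rng = np.random.default_rng(0)
n = 5000
layer = np.arange(n)
thickness = (1.0
             + 0.3 * np.sin(2 * np.pi * layer / 23)
             + 0.2 * np.sin(2 * np.pi * layer / 41)
             + 0.1 * rng.standard_normal(n))

# Periodogram: power at each frequency, skipping the zero-frequency bin.
power = np.abs(np.fft.rfft(thickness - thickness.mean())) ** 2
freq = np.fft.rfftfreq(n, d=1.0)
strongest = np.argsort(power[1:])[::-1][:2] + 1
for k in strongest:
    print(f"cycle of about {1 / freq[k]:.0f} layers (power {power[k]:.0f})")
```

Run on real layer data, peaks near the ratio 23:41:100 would be exactly the astronomical fingerprint described below.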
Specifically, it would be very significant if one found a cycle of 23,000 layers. And it would be just 'way beyond coincidence if there were three cycles, and they were in the ratio 23:41:100. It would be even better if the cycles repeated many times. Exactly such evidence is reported in work by Clemens and by Broecker, except that they didn't measure thicknesses. In fact, they drilled a long sample - a core - out of the ocean floor. At a great many places in the core, they measured an isotope ratio. The ratio
changed in just the cyclic way we've been talking about. So, just on the astronomical evidence alone, we can say that Clemens' sea-floor sediments took 4,000,000 years to form, and Broecker's
sea-floor sediments took 600,000 years to form. The samples were also dated, and the shortest cycle was dated as taking 23,000 years. This is a very nice confirmation that radioactive dating works.
Those two ocean sediments aren't the only cases:
In recent months a 25 million year long record from the triassic (about 200 million years ago, for those of us who believe such things) has been obtained. The rock is banded, and the bands form
quite regular groupings. The smallest bands contain about 20,000 varves (annual layers) - and the precession cycle at that time was about 20,000 years long. Coincidence? Well, the precession
cycle is modulated by the 100,000 year eccentricity cycle so the bands should occur in groups of five, with slightly different characteristics within the group. They do. Not enough? There is also
a 400,000 year eccentricity cycle, so the large bands should be bunched in groups of four. And they are.
Astronomy, not radiometric dating, tells us that this sample of rock was laid down over 25,000,000 years. So the earth is at least that old. Furthermore, since K-Ar dating gives the same length
to this record we have no reason for not trusting within a few percent the K-Ar absolute age for this stratum, which is about 200 million years.
The geological evidence was presented by Paul Olsen of Lamont-Doherty at a recent workshop at Johns Hopkins. The theoretical paper is
Short, D. A., J. G. Mengel, T. J. Crowley, W. T. Hyde and G. R. North (1991): Filtering of Milankovitch Cycles by Earth's Geography. Quaternary Research 35, 157-173.
-- Bill Hyde, Department of Oceanography, Dalhousie University, Halifax, Nova Scotia, 19 Mar 1994
The Scientific American article mentions Dr. Hyde's work on exactly how the astronomical cycles affect ice sheets. | {"url":"http://www.don-lindsay-archive.org/creation/astro_cycles.html","timestamp":"2014-04-16T07:55:18Z","content_type":null,"content_length":"4470","record_id":"<urn:uuid:c03340a1-430d-4b2c-8aaf-c7bfd64dd641>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
Stochastic oscillator
In technical analysis of securities trading, the stochastic oscillator is a momentum indicator that uses support and resistance levels. Dr. George Lane promoted this indicator in the 1950s. The term
stochastic refers to the location of a current price in relation to its price range over a period of time.^[1] This method attempts to predict price turning points by comparing the closing price of a
security to its price range.
The indicator is defined as follows:
$\%K = 100 \times (Price - L5)/(H5 - L5)$
$\%D = 100 \times H3/L3$
where $L5$ and $H5$ are the lowest and highest prices of the last five periods, and $H3$ and $L3$ are 3-period sums of the numerator and denominator of $\%K$, respectively (so $\%D$ is a smoothed version of $\%K$).
There is only one valid signal in working with %D alone — a divergence between %D and the analyzed security.^[2]
The calculation above finds the range between an asset’s high and low price during a given period of time. The current security's price is then expressed as a percentage of this range with 0%
indicating the bottom of the range and 100% indicating the upper limits of the range over the time period covered. The idea behind this indicator is that prices tend to close near the extremes of the
recent range before turning points. The Stochastic oscillator is calculated as
$\%K = 100 \times \dfrac{Price - LOW_N(Price)}{HIGH_N(Price) - LOW_N(Price)}$
where:
$Price$ is the last closing price
$LOW_N(Price)$ is the lowest price over the last N periods
$HIGH_N(Price)$ is the highest price over the last N periods
$%D$ is a 3-period exponential moving average of %K, $EMA_3(%K)$.
$%D-Slow$ is a 3-period exponential moving average of %D, $EMA_3(%D)$.
A 3-line Stochastics will give an anticipatory signal in %K, a signal in the turnaround of %D at or before a bottom, and a confirmation of the turnaround in %D-Slow.^[3] Typical values for N are 5,
9, or 14 periods. Smoothing the indicator over 3 periods is standard.
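A minimal sketch of the N-period calculation in Python (for illustration only, not from any particular charting package; the %D here is a plain 3-period average rather than the EMA defined above):

```python
import numpy as np

def stochastic_oscillator(close, high, low, n=14, d_periods=3):
    """Return (%K, %D) arrays; entries are NaN until a full window exists."""
    close, high, low = (np.asarray(a, dtype=float) for a in (close, high, low))
    k = np.full(len(close), np.nan)
    for i in range(n - 1, len(close)):
        lo = low[i - n + 1 : i + 1].min()    # lowest low of the last n periods
        hi = high[i - n + 1 : i + 1].max()   # highest high of the last n periods
        if hi > lo:                          # guard against a flat window
            k[i] = 100.0 * (close[i] - lo) / (hi - lo)
    d = np.full(len(close), np.nan)          # %D: moving average of %K
    for i in range(n + d_periods - 2, len(close)):
        d[i] = k[i - d_periods + 1 : i + 1].mean()
    return k, d
```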
Dr. George Lane, a financial analyst, was one of the first to publish on the use of stochastic oscillators to forecast prices. According to Lane, the Stochastics indicator is to be used with cycles, Elliott Wave Theory and Fibonacci retracement for timing. In low-margin, calendar futures spreads, one might use Wilder's parabolic as a trailing stop after a stochastics entry. A centerpiece of his
teaching is the divergence and convergence of trendlines drawn on stochastics, as diverging/converging to trendlines drawn on price cycles. Stochastics predicts tops and bottoms.
The signal to act is when there is a divergence-convergence, in an extreme area, with a crossover on the right hand side, of a cycle bottom.^[2] As plain crossovers can occur frequently, one
typically waits for crossovers occurring together with an extreme pullback, after a peak or trough in the %D line. If price volatility is high, an exponential moving average of the %D indicator may
be taken, which tends to smooth out rapid fluctuations in price.
Stochastics attempts to predict turning points by comparing the closing price of a security to its price range. Prices tend to close near the extremes of the recent range just before turning points.
In the case of an uptrend, prices tend to make higher highs, and the settlement price usually tends to be in the upper end of that time period's trading range. When the momentum starts to slow, the
settlement prices will start to retreat from the upper boundaries of the range, causing the stochastic indicator to turn down at or before the final price high.^[4]
An alert or set-up is present when the %D line is in an extreme area and diverging from the price action. The actual signal takes place when the faster %K line crosses the %D line.^[5]
Divergence-convergence is an indication that the momentum in the market is waning and a reversal may be in the making. The chart below illustrates an example of where a divergence in stochastics,
relative to price, forecasts a reversal in the price's direction.
An event known as "stochastic pop" occurs when prices break out and keep going. This is interpreted as a signal to increase the current position, or liquidate if the direction is against the current position.
See also
• Williams %R – Equivalent of %K, mirrored around the 0%-axis
| {"url":"http://blekko.com/wiki/Stochastic_oscillator?source=672620ff","timestamp":"2014-04-21T15:16:13Z","content_type":null,"content_length":"31822","record_id":"<urn:uuid:04a30156-2b26-41f6-9e48-f9b067f2f9bb>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00292-ip-10-147-4-33.ec2.internal.warc.gz"}
Interference, Diffraction & the Principle of Superposition
Interference takes place when waves interact with each other, while diffraction takes place when a wave passes through an aperture. These interactions are governed by the principle of superposition.
Interference, diffraction, and the principle of superposition are important concepts to understanding several applications of waves.
Interference & the Principle of Superposition
When two waves interact, the principle of superposition says that the resulting wave function is the sum of the two individual wave functions. This phenomenon is generally described as interference.
Consider a case where water is dripping into a tub of water. If there's a single drop hitting the water, it will create a circular wave of ripples across the water. If, however, you were to begin
dripping water at another point, it would also begin making similar waves. At the points where those waves overlap, the resulting wave would be the sum of the two earlier waves.
This holds only for situations where the wave equation is linear, that is, where it depends on the wave function only to the first power. Some situations, such as nonlinear elastic behavior that doesn't obey Hooke's Law, are excluded because they have a nonlinear wave equation. But for almost all waves dealt with in physics, this situation holds true.
It might be obvious, but it's probably good to be clear that this principle involves waves of the same type. Obviously, waves of water will not interfere with electromagnetic waves. Even among similar types of waves, the effect is generally confined to waves of virtually (or exactly) the same wavelength. Most experiments involving interference ensure that the waves are identical in these respects.
Constructive & Destructive Interference
The picture to the right shows two waves and, beneath them, how those two waves are combined to show interference.
When the crests overlap, the superposition wave reaches a maximum height. This height is the sum of their amplitudes (or twice their amplitude, in the case where the initial waves have equal
amplitude). The same happens when the troughs overlap, creating a resultant trough that is the sum of the negative amplitudes. This sort of interference is called constructive interference, because
it increases the overall amplitude.
Alternately, when the crest of a wave overlaps with the trough of another wave, the waves cancel each other out to some degree. If the waves are symmetrical (i.e. the same wave function, but shifted
by a phase or half-wavelength), they will cancel each other completely. This sort of interference is called destructive interference.
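Both cases follow from one line of algebra: for two identical sinusoids of amplitude $A$ differing only by a phase $\varphi$, the sum-to-product identity gives
$$y_1 + y_2 = A\sin(kx - \omega t) + A\sin(kx - \omega t + \varphi) = 2A\cos\left(\frac{\varphi}{2}\right)\sin\left(kx - \omega t + \frac{\varphi}{2}\right),$$
so the resultant amplitude is $2A\,|\cos(\varphi/2)|$: equal to $2A$ when $\varphi = 0$ (constructive) and $0$ when $\varphi = \pi$, i.e. a half-wavelength shift (destructive).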
In the earlier case of ripples in a tub of water, you would therefore see some points where the interference waves are larger than each of the individual waves, and some points where the waves cancel
each other out.
A special case of interference is known as diffraction, and takes place when a wave strikes the barrier of an aperture or edge. At the edge of the obstacle, the wave is cut off, and it creates interference effects with the remaining portion of the wave
fronts. Since nearly all optical phenomena involve light passing through an aperture of some kind - be it an eye, a sensor, a telescope, or whatever - diffraction is taking place in almost all of
them, although in most cases the effect is negligible. Diffraction typically creates a "fuzzy" edge, although in some cases (such as Young's double-slit experiment, described below) diffraction can
cause phenomena of interest in their own right.
Consequences & Applications
Interference is an intriguing concept and has some consequences that are worth note, specifically in the area of light where such interference is relatively easy to observe.
In Thomas Young's double-slit experiment, for example, the interference patterns resulting from diffraction of the light "wave" make it so that you can shine a uniform light and break it into a
series of light and dark bands just by sending it through two slits, which is certainly not what one would expect. Even more surprising is that performing this experiment with particles, such as
electrons, results in similar wave-like properties. Any sort of wave exhibits this behavior, with the proper set-up.
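Quantitatively (the standard textbook result): with slit separation $d$ and wavelength $\lambda$, the bright bands appear at viewing angles $\theta$ where the two paths differ by a whole number of wavelengths,
$$d\sin\theta = m\lambda, \qquad m = 0, \pm 1, \pm 2, \ldots,$$
and the dark bands fall halfway between, where $d\sin\theta = \left(m + \tfrac{1}{2}\right)\lambda$.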
Perhaps the most fascinating application of interference is to create holograms. This is done by reflecting a coherent light source, such as a laser, off of an object onto a special film. The
interference patterns created by the reflected light are what result in the holographic image, which can be viewed when it is again placed in the right sort of lighting. | {"url":"http://physics.about.com/od/mathematicsofwaves/a/interference.htm","timestamp":"2014-04-17T18:32:49Z","content_type":null,"content_length":"45542","record_id":"<urn:uuid:9103eb51-0815-4b5e-a527-07297cc3505b>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00106-ip-10-147-4-33.ec2.internal.warc.gz"} |
Notes by Jurčo on generalized Bundle Gerbes
Posted by Urs Schreiber
Branislav Jurčo asks me to draw attention here to the following notes which he provides on his website:
a) Associated 2-vector bundle construction using Baez-Crans 2-vector spaces
b) 2-crossed module bundle 2-gerbes
c) Differential geometry of 2-crossed module bundle 2-gerbes
These are generalizations building on his work on nonabelian bundle gerbes as presented in
P. Aschieri, L. Cantini, B. Jurčo
Nonabelian Bundle Gerbes, their Differential Geometry and Gauge Theory
Branislav Jurčo
Crossed Module Bundle Gerbes; Classification, String Group and Differential Geometry
The first one discusses the bundle gerbe analog of how a vector bundle may be associated to a principal bundle. The second and third generalize bundle (1-)gerbes to bundle 2-gerbes. These are then gadgets that are to 3-bundles with some structure 3-group what transition functions are to ordinary principal bundles.
(A crossed module is the realization of a strict 2-group in the world of complexes of groups. A “crossed module bundle gerbe” is to an abelian bundle gerbe like a 2-bundle for a general strict
2-group is to a 2-bundle for the 2-group “shifted $U(1)$”. A 2-crossed module is a realization of a suitably well behaved 3-group in the world of complexes of groups.)
See also the entry
Jurčo on Gerbes and Stringy Applications
Unfortunately I am quite busy at the moment. But if I find the time, I might try to write a summary of Brano's notes. Unless somebody else beats me to it…
Posted at September 10, 2007 7:04 PM UTC | {"url":"http://golem.ph.utexas.edu/category/2007/09/notes_by_jur_on_gerbes.html","timestamp":"2014-04-21T14:52:11Z","content_type":null,"content_length":"12731","record_id":"<urn:uuid:7edc743d-6b49-4ad7-b6fd-e1097f9b3743>","cc-path":"CC-MAIN-2014-15/segments/1397609540626.47/warc/CC-MAIN-20140416005220-00250-ip-10-147-4-33.ec2.internal.warc.gz"} |
Injective maps on cohomology and Kahler manifolds
Compact Kahler manifolds have the property that surjective maps induce injections on cohomology with coefficients in $\mathbb{Q}$ (that is, if $X,Y$ are compact Kahler, then a surjective map $\phi: X \rightarrow Y$ induces injections $\phi^*: H^i(Y, \mathbb{Q}) \rightarrow H^i(X, \mathbb{Q})$ for all $i$; [Voisin, Hodge Theory I, p. 177]).
Question: I'm wondering if I should think of this as a property of compact Kahler manifolds, or as an instance of something more general. For example, can the Kahler condition be replaced with a more
general class of manifolds (not dropping the compactness hypothesis). Perhaps one that includes not just complex manifolds but maybe a few manifolds of odd (topological) dimension? I know that if we
require that $\dim X = \dim Y$ then the fact above is true more generally just for compact oriented manifolds, for formal reasons.
2 Answers
active oldest votes
In order to have this property it is sufficient to require that $\phi^{-1}(y)$ is a non-zero cycle in $H_*(X,\mathbb Q)$, where $y$ is a generic point in $Y$. This holds indeed when $X$ and $Y$ are Kahler.
Let me give an example showing that this does not work when $X$ and $Y$ are just complex.
Example. Let $X$ be a Hopf surface $X=(\mathbb C^2\setminus 0)/\mathbb Z$ where $\mathbb Z$ acts on $\mathbb C^2$ by multiplication by (say) $2$. Then there is a fibration $\phi\colon X\to \mathbb CP^1=Y$. This fibration comes from the standard $\mathbb C^*$-action on $\mathbb C^2$. Now $\phi^{-1}(y)$ is null-homologous.
Nice criteria! Is it easy to see that $\phi^{-1}(y)$ is a non-zero cycle in $H_*(X,\mathbb{Q})$ in the Kahler case? – LMN Oct 1 '12 at 2:57
Sure, the preimage of a point is a complex submanifold and the appropriate power of Kahler form of $X$ is a volume form on it. – Dmitri Oct 1 '12 at 3:00
Great! Can you give me a hint (or reference) of how you would prove this criteria implies that the maps on cohomology with $\mathbb{Q}$ coefficents are injective? – LMN Oct 1 '12 at
2 This follows from Poincare duality. So let $Z\subset Y$ be a non-trivial cycle (i.e. $[Z]\ne 0 \in H_*(Y,\mathbb Q)$). Let us prove that $\phi^{-1}(Z)$ is a non-trivial cycle in $X$. Choose $Z'$ in $Y$ Poincare dual to $Z$, arranged so that $Z$ intersects $Z'$ transversally. Then $\phi^{-1}(Z)\cap \phi^{-1}(Z')$ is a collection of fibers. So the intersection is non-zero in homology of $X$, so both preimages are non-zero in homology of $X$. – Dmitri Oct 1 '12 at 3:13
2 Remark: your condition is necessary and sufficient, since it is equivalent to $\phi^\ast:H^n(Y)\hookrightarrow H^n(X)$, where $n=dim Y$. – Ian Agol Oct 1 '12 at 14:13
Suppose one has compact symplectic manifolds $(X,\omega), (Y,\sigma)$, and a map $f:X\to Y$ such that $f^\ast\sigma=\omega$ (if $f$ is also a diffeomorphism, then this map is a symplectomorphism, but I'm not sure of the terminology in this case). Then $f^\ast: H^*(Y;\mathbb{Q})\to H^\ast(X;\mathbb{Q})$ will be injective. If $\dim Y=2k$, then $\sigma^k\neq 0\in H^{2k}(Y)$ is a fundamental class, and if $\dim X=2n \geq 2k$, then $\omega^n\neq 0\in H^{2n}(X)$. So $f^\ast(\sigma^k)= \omega^k \neq 0 \in H^{2k}(X)$.
Now, consider $\alpha \in H^j(Y)$. By Poincare duality, there exists $\beta\in H^{2k-j}(Y)$ such that $\alpha\cup \beta = \sigma^k$. Then $f^\ast(\alpha\cup \beta)=\omega^k\neq 0$, so $f^\ast \alpha\neq 0$.
1 In fact $Y$ is not required to be symplectic. It suffices that $f$ is surjective and $X$ has a closed $2$-form which is symplectic on a generic fiber (so that dimensions have equal parity). When $f$ is a submersion, it is called a symplectic fibration, but there are important examples with critical points, the so-called (symplectic) Lefschetz fibrations. – BS. Oct 1 '12 at 11:34
| {"url":"http://mathoverflow.net/questions/108508/injective-maps-on-cohomology-and-kahler-manifolds?sort=oldest","timestamp":"2014-04-20T11:16:17Z","content_type":null,"content_length":"63648","record_id":"<urn:uuid:f79fdf42-48e0-4a17-9246-fad1012a346f>","cc-path":"CC-MAIN-2014-15/segments/1397609538423.10/warc/CC-MAIN-20140416005218-00083-ip-10-147-4-33.ec2.internal.warc.gz"}
Cryptology ePrint Archive: Report 2007/174
Counting hyperelliptic curves that admit a Koblitz model
Cevahir Demirkiran and Enric Nart
Abstract: Let $k=\mathbb{F}_q$ be a finite field of odd characteristic. We find a closed formula for the number of $k$-isomorphism classes of pointed, and non-pointed, hyperelliptic curves of genus $g$ over $k$, admitting a Koblitz model. These numbers are expressed as a polynomial in $q$ with integer coefficients (for pointed curves) and rational coefficients (for non-pointed curves). The coefficients depend on $g$ and the set of divisors of $q-1$ and $q+1$. These formulas show that the number of hyperelliptic curves of genus $g$ suitable (in principle) for cryptographic applications is asymptotically $(1-e^{-1})2q^{2g-1}$, and not $2q^{2g-1}$ as was believed. The curves of genus $g=2$ and $g=3$ are more resistant to the attacks on the DLP; for these values of $g$ the number of curves is respectively $(91/72)q^3+O(q^2)$ and $(3641/2880)q^5+O(q^4)$.
Category / Keywords: public-key cryptography / hyperelliptic cryptosystems
Date: received 10 May 2007
Contact author: nart at mat uab cat
Available format(s): Postscript (PS) | Compressed Postscript (PS.GZ) | PDF | BibTeX Citation
Version: 20070520:125411 (All versions of this report)
[ Cryptology ePrint archive ] | {"url":"http://eprint.iacr.org/2007/174/20070520:125411","timestamp":"2014-04-20T13:26:14Z","content_type":null,"content_length":"2813","record_id":"<urn:uuid:769cd369-e97d-47d5-86d6-68b356ff24ad>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00636-ip-10-147-4-33.ec2.internal.warc.gz"}
Geodesic metrics that admit dilatation at each point
Consider the class of geodesic metrics $g$ on manifolds, that have the following property: for each point $x$ there exists a neighbourhood $U_x$ and a smooth vector field $v_x$ in $U_x$ that vanishes
at $x$ and whose flow (for small time) dilatates $g$ by a constant factor. Let us call such metrics dilatatable.
An obvious example is provided by the Euclidean $\mathbb R^n$: the flow of the field $\sum_i x_i \frac{\partial}{\partial x_i}$ dilatates the Euclidean metric by a constant factor. More generally one
can take any Banach space. I would like to make a guess about the structure of such metrics in general.
Guess. Suppose $g$ on $M^n$ is dilatatable. Then there exists a triangulation of $M^n$
such that the restriction of the metric $g$ to each simplex is flat with respect to the flat structure on the simplex, and $g$ is flat on the complement of the union of all co-dimension $2$ simplexes.
The first question is the following: was such a class of metrics considered somewhere, and is this guess correct? Are there obvious counterexamples?
Second part of the question is about examples. It is not hard to construct an example of such a metric, if we don't require $M^n$ to be a smooth manifold. Namely, we can take any polyhedral metric on
$M^n$, i.e. glue $M^n$ from a union of Euclidean simplexes (glue the boundaries by isometries). Then for each point there is a conical neighbourhood, and obviously we can always scale this
neighbourhood by the radial field emanating from $x$. So now comes the
Second question. Take a topological manifold $M^n$ of dimension $n<7$ with such a polyhedral metric. It is known then that such a manifold has a smooth structure (because a PL structure in dimension
up to $6$ always defines a unique smooth structure). Is it possible to choose this smooth structure in such a way that the polyhedral metric is dilatatable for the smooth structure?
The answer to this question is positive for $n=2$, but I already don't know what happens for $n=3$. At the same time, there are non-trivial examples in higher dimensions, coming from complex geometry.
For example, one can quotient some complex tori $\mathbb T^n$ by a finite group of isometries to get $\mathbb CP^n$; the obtained polyhedral metric on $\mathbb CP^n$ is dilatatable with respect to the
canonical complex (and hence smooth) structure on $\mathbb CP^n$.
mg.metric-geometry geometry dg.differential-geometry
1 I can't figure out your first sentence. As $v_x$ is said to fix $x,$ this suggests an action or infinitesimal action on $U_x,$ but then $v_x$ acts on the set of metrics. Could you please expand on
this a bit, maybe say more about how $\mathbf R^n$ is an example? – Will Jagy Sep 25 '10 at 18:24
@Will, thanks for your remark, that was sloppy indeed, I meant that the flow generated by $v_x$ should dilatate the metric constantly. – Dmitri Sep 25 '10 at 18:55
1 Full marks for using the word "dilatation" but I think "dilatatable" is one syllable too far. – gowers Sep 26 '10 at 8:17
3 Berestovskii, V. N., "Similarity homogeneous locally complete spaces with intrinsic metric," Izvestiya VUZov, Matematika, 2004, no. 11(510), pp. 3-22. – Anton Petrunin Sep 26 '10 at 16:19
2 Answers
active oldest votes
Concerning the first question: your description is incomplete, even in the homogeneous case.
There are homogeneous geodesic metrics that admit smooth families of dilatations but are not made of flat Banach metrics. In particular, some Carnot-Caratheodory metrics are.
For example, consider the Heisenberg group $H$, which can be thought of as $\mathbb R^3$ equipped with the following group law: $$ (x,y,z)\cdot(x',y',z') = (x+x',y+y',z+z'+x'y) . $$
Observe that for every $t\in\mathbb R$, the map $\phi_t:(x,y,z)\mapsto (e^tx,e^ty,e^{2t}z)$ is a group homomorphism, and these maps form a smooth 1-parameter group of diffeomorphisms
(and hence a flow generated by a smooth vector field).
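Indeed, writing out both sides with the group law above,
$$\phi_t\big((x,y,z)\cdot(x',y',z')\big) = \big(e^t(x+x'),\ e^t(y+y'),\ e^{2t}(z+z'+x'y)\big) = \phi_t(x,y,z)\cdot\phi_t(x',y',z'),$$
since the cross term scales as $e^{2t}\,x'y = (e^t x')(e^t y)$.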
Consider a left-invariant two-dimensional distribution $V\subset TH$ spanned by left-invariant vector fields $X$ and $Y$ whose values at $(0,0,0)$ equal $\partial/\partial x$ and $\
partial/\partial y$, respectively. Equip this distribution with a left-invariant Euclidean metric. The distribution is completely non-integrable, so we get a Carnot-Caratheodory metric
on $H$. Observe that $\phi_t$ maps $X$ to $e^tX$ and $Y$ to $e^tY$, hence it is a $e^t$-dilatation of the Carnot-Caratheodory metric.
The Carnot-Caratheodory metric is very different from Banach metrics. For example, its Hausdorff dimension equals 4.
Sergei, thanks a lot for the answer! It makes clear that Banach metrics are not enough, and also that the existence of a polyhedral stratification is too optimistic. Still I wonder if some kind of decent stratified structure always appears for these dilatatable metrics... – Dmitri Sep 25 '10 at 21:01
2 I believe there is one. Let's say that two points are connected if one is an image of the other under one of these local dilatations. Two points are equivalent if one can get from one to the other via a chain of such connections. It seems that equivalence classes form a stratification into smooth submanifolds, but I did not try to check the details. – Sergei Ivanov Sep 25 '10 at 21:24
Yes, this sounds plausible. I wonder: if you impose the condition that the metric is complete, $M^n$ is simply connected, and the size of $U_x$ for all $x$ is at least $\varepsilon$ -- can there be a classification of all dilatatable metrics in this case? (This should include the flat ones and Carnot-Caratheodory.) Do you think this question was studied? Finally, I am not sure that I interpret your second phrase correctly -- that "the description is not complete even in the homogeneous case". What does "homogeneous" mean here? The transitivity of the isometry group? – Dmitri Sep 25 '10 at 21:49
3 Yes I meant transitivity of the isometry group. If sizes of neighborhoods are bounded away from zero, then all points are equivalent in the sense of my previous comment, hence all
these neighborhoods are isometric, so the space is locally homogeneous. I vaguely remember that Berestovskij proved that every homogeneous geodesic metric is a Finsler
Carnot-Caratheodory metric. I don't know whether anyone studied which of those admit dilatations but this should not be hard. – Sergei Ivanov Sep 25 '10 at 22:15
1 I think homogeneous examples come from nilpotent Lie groups. The low-dimensional ones tend to have expanding automorphisms, but most high-dimensional ones do not. I think the list is complicated. The possible structure near lower-dimensional strata, where the size of neighborhoods goes to 0, seems interesting and intricate. But maybe it's worth first settling it assuming the Hausdorff dimension is normal. – Bill Thurston Sep 26 '10 at 7:16
Relative to comments by Sergei Ivanov and Bill Thurston, maybe this line of research concerning "metric spaces with dilations" or "dilation structures" provides a precise answer, more
general than Berestovskii's result. See this introduction and dig into the biblio.
Concerning examples related to Carnot-Caratheodory geometry and nilpotent groups (precisely: "Carnot groups"), they appear naturally as models of the (metric) tangent space to a point
up vote 2 down in a space with dilations.
If you stand to read a more algebraic account, see emergent algebras, where it is proven that this is not really a metric induced phenomenon.
Marius, thanks a lot! I will have a look. – Dmitri Nov 9 '10 at 23:15
| {"url":"http://mathoverflow.net/questions/39960/geodesic-metrics-that-admit-dilatation-at-each-point?sort=oldest","timestamp":"2014-04-18T14:05:48Z","content_type":null,"content_length":"71810","record_id":"<urn:uuid:b8a4fe9d-ad42-45e2-b21a-d29844c29b3f>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00622-ip-10-147-4-33.ec2.internal.warc.gz"}
Bug 13631 – quantile and IQR do not check for numeric input
Description Jitterbug compatibility account 2009-03-30 15:13:37 UTC
From: Simone Giannerini <sgiannerini@gmail.com>
This report follows the post
http://tolstoy.newcastle.edu.au/R/e6/devel/09/03/0760.html
where it is shown that quantile() and IQR() do not work as documented.
In fact they do not check for numeric input even if the documentation says:
?quantile
x numeric vectors whose sample quantiles are wanted. Missing values are ignored.
?IQR
x a numeric vector.
> quantile(factor(1:9))
0% 25% 50% 75% 100%
1 3 5 7 9
Levels: 1 2 3 4 5 6 7 8 9
> IQR(factor(1:9))
[1] 4
> R.version
platform i386-pc-mingw32
arch i386
os mingw32
system i386, mingw32
status alpha
major 2
minor 9.0
year 2009
month 03
day 26
svn rev 48224
language R
version.string R version 2.9.0 alpha (2009-03-26 r48224)
Simone Giannerini
Dipartimento di Scienze Statistiche "Paolo Fortunati"
Universita' di Bologna
Via delle belle arti 41 - 40126 Bologna, ITALY
Tel: +39 051 2098262 Fax: +39 051 232153
Comment 1 Jitterbug compatibility account 2009-03-30 16:50:56 UTC
From: Thomas Lumley <tlumley@u.washington.edu>
On Mon, 30 Mar 2009 sgiannerini@gmail.com wrote:
> This report follows the post
> http://tolstoy.newcastle.edu.au/R/e6/devel/09/03/0760.html
> where it is shown that quantile() and IQR() do not work as documented.
Nothing of the sort is shown! The thread argued that methods for these functions for ordered factors would be useful.
> In fact they do not check for numeric input even if the documentation says:
> ?quantile
> x numeric vectors whose sample quantiles are wanted. Missing
> values are ignored.
> ?IQR
> x a numeric vector.
The documentation says that you are not allowed to pass anything except a numeric vector to quantile() and IQR(). It doesn't, for example, say you can pass an arbitrary vector that will be checked to see if it is numeric. If you have code that passes a factor to IQR(), the bug is in that code.
On the other hand, as someone else has since reported, the 'missing values are ignored' statement in ?quantile is wrong (or at least incomplete).
Thomas Lumley Assoc. Professor, Biostatistics
tlumley@u.washington.edu University of Washington, Seattle
Comment 2 Jitterbug compatibility account 2009-03-30 17:16:25 UTC
From: Duncan Murdoch <murdoch@stats.uwo.ca>
On 3/30/2009 7:50 AM, Thomas Lumley wrote:
> On Mon, 30 Mar 2009 sgiannerini@gmail.com wrote:
>> This report follows the post
>> http://tolstoy.newcastle.edu.au/R/e6/devel/09/03/0760.html
>> where it is shown that quantile() and IQR() do not work as documented.
> Nothing of the sort is shown! The thread argued that methods for these functions for ordered factors would be useful.
>> In fact they do not check for numeric input even if the documentation says:
>> ?quantile
>> x numeric vectors whose sample quantiles are wanted. Missing
>> values are ignored.
>> ?IQR
>> x a numeric vector.
> The documentation says that you are not allowed to pass anything except a numeric vector to quantile() and IQR(). It doesn't, for example, say you can pass an arbitrary vector that will be checked to see if it is numeric. If you have code that passes a factor to IQR(), the bug is in that code.
> On the other hand, as someone else has since reported, the 'missing values are ignored' statement in ?quantile is wrong (or at least incomplete).
I think that statement was wrong, and I fixed it last night, but then
didn't get it committed. The commit will make it into 2.9 and R-devel
Duncan Murdoch
Comment 3 Jitterbug compatibility account 2009-03-30 17:57:36 UTC
From: Simone Giannerini <sgiannerini@gmail.com>
Dear Thomas,
On Mon, Mar 30, 2009 at 1:50 PM, Thomas Lumley <tlumley@u.washington.edu> wrote:
> On Mon, 30 Mar 2009 sgiannerini@gmail.com wrote:
>> This report follows the post
>> http://tolstoy.newcastle.edu.au/R/e6/devel/09/03/0760.html
>> where it is shown that quantile() and IQR() do not work as documented.
> Nothing of the sort is shown! The thread argued that methods for these
> functions for ordered factors would be useful.
in the original thread I initiated, the matters were two (at least in
my intention):
1. quantile() and IQR() do not check for numeric input whereas
median() has a check (for a factor input). This has nothing to do
with ordered factors
2. the opportunity of having methods for ordered factors for
quantile() and the like.
If, for some reason, you think that it is ok to have such a check for
median but not for quantile and IQR, then take this as a wishlist. BTW
also var() and the like do not check for factor input while mean() has
the check.
> x <- factor(letters[1:9])
> x
[1] a b c d e f g h i
Levels: a b c d e f g h i
> mean(x)
[1] NA
Warning message:
In mean.default(x) : argument is not numeric or logical: returning NA
> var(x)
[1] 7.5
>> In fact they do not check for numeric input even if the documentation says
>> =
>> :
>> ?quantile
>> x    numeric vectors whose sample quantiles are wanted. Missing
>> values are ignored.
>> ?IQR
>> x    a numeric vector.
> The documentation says that you are not allowed to pass anything except a
> numeric vector to quantile() and IQR(). It doesn't, for example, say you can
> pass an arbitrary vector that will be checked to see if it is numeric. If
> you have code that passes a factor to IQR(), the bug is in that code.
> On the other hand, as someone else has since reported, the 'missing values
> are ignored' statement in ?quantile is wrong (or at least incomplete).
> -thomas
> Thomas Lumley          Assoc. Professor, Biostatistics
> tlumley@u.washington.edu     University of Washington, Seattle
Simone Giannerini
Dipartimento di Scienze Statistiche "Paolo Fortunati"
Universita' di Bologna
Via delle belle arti 41 - 40126 Bologna, ITALY
Tel: +39 051 2098262 Fax: +39 051 232153
Comment 4 Jitterbug compatibility account 2009-05-27 02:28:00 UTC
factors are disallowed in 2.10.0
Note that 'numeric' can be far more general than an atomic numeric vector, and this still works.
Comment 5 Jitterbug compatibility account 2009-05-27 02:28:11 UTC
Audit (from Jitterbug):
Tue May 26 21:28:11 2009 ripley changed notes
Tue May 26 21:28:11 2009 ripley moved from incoming to Analyses-fixed | {"url":"https://bugs.r-project.org/bugzilla3/show_bug.cgi?id=13631","timestamp":"2014-04-17T04:49:59Z","content_type":null,"content_length":"38239","record_id":"<urn:uuid:2057e548-39dc-4e0c-bb02-686f6a987601>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00154-ip-10-147-4-33.ec2.internal.warc.gz"} |
General functions applicable to all vector types.
class BasicVector v where
All vector types belong to this class. Aside from vpack and vunpack, these methods aren't especially useful to end-users; they're used internally by the vector arithmetic implementations.
vmap :: (Scalar -> Scalar) -> v -> v
Apply a function to all vector fields.
vzip :: (Scalar -> Scalar -> Scalar) -> v -> v -> v
Zip two vectors together field-by-field using the supplied function (in the style of Data.List.zipWith).
vfold :: (Scalar -> Scalar -> Scalar) -> v -> Scalar
Reduce a vector down to a single value using the supplied binary operator. The ordering in which this happens isn't guaranteed, so the operator should probably be associative and commutative.
vpack :: [Scalar] -> Maybe v
Pack a list of values into a vector. Extra values are ignored, too few values yields Nothing.
vunpack :: v -> [Scalar]
Unpack a vector into a list of values. (Always succeeds.)
vpromote :: Scalar -> v
Convert a Scalar to a vector (with all components the same).
BasicVector Vector1
BasicVector Vector2
BasicVector Vector3
BasicVector Vector4
class (BasicVector v, Num v, Fractional v) => Vector v
Dummy class that enables you to request a vector in a type signature without needing to explicitly list Num or Fractional as well.
Vector Vector1
Vector Vector2
Vector Vector3
Vector Vector4
(*|) :: Vector v => Scalar -> v -> v
Scale a vector (i.e., change its length but not its direction). This operator has the same precedence as the usual (*) operator.
The (*|) and (|*) operators are identical, but with their arguments flipped. Just remember that the '|' sits on the vector side.
(|*) :: Vector v => v -> Scalar -> v
Scale a vector (i.e., change its length but not its direction). This operator has the same precedence as the usual (*) operator.
The (*|) and (|*) operators are identical, but with their arguments flipped. Just remember that the '|' sits on the vector side.
(|/) :: Vector v => v -> Scalar -> v
Scale a vector (i.e., change its length but not its direction). This operator has the same precedence as the usual (/) operator.
The (|/) and (/|) operators are identical, but with their arguments flipped. Just remember that the '|' sits on the vector side.
(/|) :: Vector v => Scalar -> v -> v
Scale a vector (i.e., change its length but not its direction). This operator has the same precedence as the usual (/) operator.
The (|/) and (/|) operators are identical, but with their arguments flipped. Just remember that the '|' sits on the vector side.
vdot :: Vector v => v -> v -> Scalar
Take the dot product of two vectors. This is a scalar equal to the cosine of the angle between the two vectors multiplied by the length of each vectors.
vmag :: Vector v => v -> Scalar
Return the length or magnitude of a vector. (Note that this involves a slow square root operation.)
vnormalise :: Vector v => v -> v
Normalise a vector. In order words, return a new vector with the same direction, but a length of exactly one. (If the vector's length is zero or very near to zero, the vector is returned unchanged.)
vlinear :: Vector v => Scalar -> v -> v -> v
Linearly interpolate between two points in space.
• vlinear 0 a b = a
• vlinear 1 a b = b
• vlinear 0.5 a b would give a point exactly half way between a and b in a straight line. | {"url":"http://hackage.haskell.org/package/AC-Vector-2.3.0/docs/Data-Vector-Class.html","timestamp":"2014-04-19T02:45:52Z","content_type":null,"content_length":"15365","record_id":"<urn:uuid:230d5864-4420-4683-9741-096b0c2bd7e0>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00184-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Forum Discussions - Re: Linear independence of eigenvectors
Date: Dec 6, 2012 2:44 AM
Author: Jose Carlos Santos
Subject: Re: Linear independence of eigenvectors
On 05-12-2012 21:44, Shmuel (Seymour J.) Metz wrote:
>> A classical Linear Algebra exercise says: prove that _n_
>> eigenvectors of an endomorphism of some linear space with _n_
>> distinct eigenvalues are linearly independent.
> I hope that you mean that the eigenvalues associated with the n
> eigenvectors are distinct, not just that there are n distinct
> eigenvalues of the endomorphism.
Yes, of course.
Best regards,
Jose Carlos Santos | {"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7933030","timestamp":"2014-04-19T08:11:25Z","content_type":null,"content_length":"1560","record_id":"<urn:uuid:56fea14b-7951-481c-ad4f-c36e9ba58872>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00297-ip-10-147-4-33.ec2.internal.warc.gz"} |
Teaching your child about money reinforces the basic number facts of the base 10 number system: 0, 1, 2, 3, 4, 5, 6, 7, 8, 9. At first, for younger children, you can count the number of each
denomination you have on hand. And, as you move up the currency, you can show how each is related to the others. Another important concept to teach here is grouping similar coins and bills. One way to reinforce this grouping is to count more than $2 worth of pennies, nickels and/or dimes. Let your child count until half of the coins have been counted, then distract him/her to read a few pages of his/her favorite book, ideally one that relates to counting. After that task has been completed, return your child to continue counting the coins. He/she will probably start over, and here is where you
step in to show your child how to group the coins in separate piles; doing so makes it easy to return to the task and later to actually find the value of all coins.
So, a penny represents 1,
a nickel represents 5 (pennies)
a dime represents 10 (pennies)
a quarter represents 25 (pennies)
a fifty cent piece (half dollar) represents 50 (pennies)
and a dollar piece represents 100 (pennies).
Notice how I related each denomination to pennies. As your child masters these equivalences he/she will make relationships between the others, like 2 nickels makes a dime.
10 groups of 10
Here we go.... start with a bag of 100 pennies. Use real currency here, you are not doing your child any favors with fake money. Have your child start counting the pennies, as I mentioned earlier.
Now interrupt and return. Suggest to your child to place these pennies into separate piles with the same number of pennies in each. 10 is a number familiar to probably every child and is the number to use. Your child already counts 1, 2, 3, ..., 9, 10 and starts over, 11, 12, 13, ..., 19, 20, etc. Interrupt your child again, then return. Point out it is much easier to continue than to start all
over again.
10 groups of 10 = 100 = 1 dollar
Your child has now separated the pennies into 10 piles of 10. Count these, 10, 20, 30, 40, 50, 60 , 70, 80, 90, 100. Point out if all the pennies are placed in a pile, you wouldn't know there were
100 pennies. We've grouped the pennies into equal piles of 10. Point this out, it is most important. Move the piles next to each other and place a dollar piece next to it (or a dollar bill if you
have no dollar piece.) Make the equivalence between the pennies and the dollar. 100 pennies is 1 dollar. Ask your child which would be easier to carry around, a bag of 100 pennies or a single dollar
coin (or bill).
What's a penny for?
This is still too abstract! What's a penny? What's a dollar? Ok, a "cent" is a "penny." So, in words, if a toy costs 1 dollar and 35 cents, then ask your child how many pennies it would take to buy
that toy? Help your child make the connection between the pennies already counted and the price of this toy. Not enough pennies. Now place the dollar piece there and ask your child if there is
enough. How many of the pennies would be required with the dollar? How many pennies are left?
10 dimes = 1 dollar
Once again, have your child divide the pennies into 10 piles of 10. Ask your child to verify he/she has 100 pennies. Repetition is most important throughout. Now, explain with 10 dimes handy that
each dime is the same as each pile of pennies by placing a dime by each pile. Ask your child if the pennies added up to a dollar. Hopefully with the answer yes, then, ask how many dimes add up to a
dollar. Help your child arrive to the answer 10. So, help your child make the connection that 10 dimes represents 100 pennies which represents a dollar. Now with the pennies, dimes and dollar, ask
your child to pay for the toy mentioned above. Any answer as long as it's correct is fine. For example, use the dollar and 35 pennies. But help your child connect the dimes, the dollar and the
pennies to pay for the toy with the dollar, 3 dimes and 5 pennies.
50 cent piece (half dollar)
Here we go again, have your child divide the pennies into 10 columns of 10 pennies apiece. Reinforce that 100 pennies in 10 columns of 10 pennies apiece is the same as 1 dollar. Now separate the
first 5 columns from the last five columns. Place a 50 cent piece above each group of 5 columns. Explain to your child that each 50 cent piece is 50 pennies. So 2 half dollars is 100 pennies is 1
Now, ask your child to place the dimes back into the picture, one above each column. then ask, how many dimes are in the half dollar.
the nickel
This time have your child separate the pennies into piles of 5 each. When done, ask how many piles there are. Then ask if this makes sense: 10 piles of 10 is 20 piles of 5. This can be hard to grasp;
if so, have your child separate into columns of ten, then carefully pull the bottom five pennies from each column a bit below the top five. Now count the groups of 5. Have your child place a nickel
by each 5 pennies. And say with your child "5 pennies is a nickel" for each nickel. So 20 nickels represents 100 pennies which is 1 dollar. Recall the half dollar exercise. How many nickels are in
the each of the two groups? 10 nickels is a half dollar.
the quarter
Have your child group the pennies in columns of 10 each. Now count the pennies starting from 1 down one column then the next. When your child reaches 25, group those pennies together, then start
counting again, 1 to 25. You should have 4 groups of 25 pennies, 2 and 1/2 columns each. Now have your child place a quarter next to each group. So, each quarter is 25 pennies, and 4 quarters are 100
Now is a good time to have your child relate dimes and nickels to quarters, quarters to half dollars, etc. Ask your child with all of these choices how he/she would pay for that 1 dollar and 35 cent
toy. Explore all possibilities.
Finally, have your child play bank teller. Ask your child to convert one denomination to the other. More advanced: ask your child to make change for a purchase of some pretend item.
Oh, one final note: it's fine to tell your child that $1.35 means 1 dollar and 35 cents. That is, the number to the left of the decimal point (dot) is the number of dollars and the number to the right
of the dot is the number of pennies.
Have fun! And don't try to do this all in one sitting! This takes time!
Kindergarten Money Lesson Plan | {"url":"http://www.k12math.com/math-concepts/Money.htm","timestamp":"2014-04-18T08:04:41Z","content_type":null,"content_length":"25154","record_id":"<urn:uuid:45ac8a3e-ff4a-4fc4-9308-171897cc3594>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00460-ip-10-147-4-33.ec2.internal.warc.gz"} |
WordCluster: detecting clusters of DNA words and genomic elements
Many k-mers (or DNA words) and genomic elements are known to be spatially clustered in the genome. Well established examples are the genes, TFBSs, CpG dinucleotides, microRNA genes and
ultra-conserved non-coding regions. Currently, no algorithm exists to find these clusters in a statistically comprehensible way. The detection of clustering often relies on densities and
sliding-window approaches or arbitrarily chosen distance thresholds.
We introduce here an algorithm to detect clusters of DNA words (k-mers), or any other genomic element, based on the distance between consecutive copies and an assigned statistical significance. We
implemented the method into a web server connected to a MySQL backend, which also determines the co-localization with gene annotations. We demonstrate the usefulness of this approach by detecting the
clusters of CAG/CTG (cytosine contexts that can be methylated in undifferentiated cells), showing that the degree of methylation varies drastically between inside and outside of the clusters. As
another example, we used WordCluster to search for statistically significant clusters of olfactory receptor (OR) genes in the human genome.
WordCluster seems to predict biologically meaningful clusters of DNA words (k-mers) and genomic entities. The implementation of the method into a web server is available at http://bioinfo2.ugr.es/wordCluster/wordCluster.php, including additional features like the detection of co-localization with gene regions or the annotation enrichment tool for functional analysis of overlapped genes.
Genome entities as diverse as genes [1], CpG dinucleotides [2], transcription factor binding sites (TFBSs [3]) or ultra-conserved non-coding regions [4] usually form clusters along the chromosome
sequence. Such spatial clustering often translates into genome structures with a clear functional and/or evolutionary meaning: gene clusters encoding the same or similar products and originated
through gene duplication events, CpG islands, cis-regulatory modules, etc. Thus, the spatial clustering of functional genome elements (in general, words or k-mers) would somewhat resemble the
situation in literary texts, where keywords show a strong clustering, whereas common words are randomly distributed [5].
Despite its potential importance, no algorithm exists to detect the clustering of DNA words in a rigorous way. Most current methods are based on densities and sliding-window approaches or arbitrary
distances. For example, the Galaxy work suite ([6], http://main.g2.bx.psu.edu/) implements an algorithm which lets the user decide to fix the maximum distance between two entities and the
minimum number of entities in the cluster. Recently, we developed an algorithm to detect clusters of CpG dinucleotides in DNA sequences based on the distance between neighboring CpGs, then assigning
a statistical significance [7]. Now, we generalize the method to any k-mer or any arbitrary combination of them, as well as to any other genome entity defined by its chromosome coordinates.
The WordCluster algorithm allows the detection of clusters for DNA words (k-mers) and genomic elements (genes, transposons, SINEs, TFBSs, etc.). The algorithm is based on the distances between the
entities and an assigned p-value.
The algorithm
The algorithm is basically the same for k-mers and genomic elements except for the detection of the coordinates and the way the success probabilities are calculated. Briefly the algorithm performs
the following steps:
1. Detection of all k-mer copies in the chromosomes, storing their coordinates (this step is unique to the detection of k-mer clusters, as the genomic elements already come defined by their coordinates).
The copies are detected in a non-overlapping way, i.e. once a copy is found the search is resumed at the end of the word, thus preventing the detection of overlapping copies.
2. Calculation of the distances between consecutive copies. The distance is defined as: "start coordinate of the downstream copy" minus "end coordinate of the upstream copy". This implies that the
minimum distance is 1 when the two entities are located directly next to each other.
3. Detection of the clusters, defined as those chromosomal regions where all distances are equal or below a given maximum distance. A cluster is defined by its start and end coordinates and the
number of k-mers or genomic elements it contains.
4. Calculation of the statistical significance for each cluster by means of the negative binomial distribution. A p-value threshold is then used to filter out those clusters which are not
statistically significant.
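As an illustration of steps 2-4, here is a minimal Python sketch of the distance-based detection (the function and parameter names are illustrative; the published implementation is written in Java):

    def find_clusters(positions, lengths, max_dist, min_copies=2):
        # positions/lengths: start coordinates and lengths of the copies found
        # in step 1, sorted by start coordinate; end coordinates are taken as
        # 1-based inclusive, so two directly adjacent copies are at distance 1.
        clusters, start = [], positions[0]
        end, n = positions[0] + lengths[0] - 1, 1
        for pos, ln in zip(positions[1:], lengths[1:]):
            if pos - end <= max_dist:          # intra-cluster distance
                end, n = pos + ln - 1, n + 1
            else:
                if n >= min_copies:            # a single copy is a trivial cluster
                    clusters.append((start, end, n))
                start, end, n = pos, pos + ln - 1, 1
        if n >= min_copies:
            clusters.append((start, end, n))
        return clusters

The p-value of step 4 is then computed for each (start, end, n) triple as described in the next section.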
A main difference to the originally described algorithm is the way N-runs in the DNA sequence (ambiguous sequence sites occupied by any nucleotide) are treated. While the original CpGcluster method
allows up to 10 Ns between two consecutive CpGs, WordCluster detects the DNA words and the distances strictly within the contigs, i.e. not a single N is allowed to lie between two copies.
Statistical significance
From now on, we will have to use the word k-mer in different contexts. Therefore, to avoid confusion we define as "target k-mer(s)" the k-mer(s) which are being analysed, i.e. those for which the
clusters are going to be detected. On the contrary, "no-target k-mer(s)" are all the remaining k-mer(s). We use k-mer in a generic way, referring to all DNA words of length k.
The statistical significance is calculated as the cumulative distribution function of the negative binomial distribution:

    p-value = P(X <= n_f) = sum over i = 0..n_f of C(n-2+i, i) * p^(n-1) * (1-p)^i,    X ~ NegBinomial(n-1, p)

where n is the number of target k-mers within the cluster and n_f is the number of "failures", i.e. the number of no-target k-mers. For example, if we are detecting clusters of AGCT, all k-mers other than AGCT would be considered as failures. Finally, p is the success probability, i.e. the probability to find a target k-mer or genomic element within the DNA sequence. Note that in the above equation we use (n-1) instead of n, as the first appearance of a target k-mer within the cluster is trivial (i.e. all the clusters start with a target k-mer). While the negative binomial distribution can be defined in the same way for k-mers and genomic elements, differences exist in the way the number of "failures" and the success probability are calculated.
For k-mers, the number of failures n_f is simply given by

    n_f = L_c - n*k

where L_c is the length of the cluster, k the length of the target k-mer and n the number of non-overlapping target k-mers in the cluster. The number of failures is the number of no-target k-mers within the cluster. For example, given the target k-mer ATGC, the cluster ATGCATGC would give n_f = 0 while ATGCAATGC would give n_f = 1. Each k-mer can overlap with itself and other k-mers, but here we consider just non-overlapping occurrences. In such a case, the success probability for k-mers is given by the following equation:

    p = N / (L_s - k + 1 - N*(k-1))

where N is the number of non-overlapping occurrences of the target k-mers in the sequence, k the length of the k-mer and L_s the sequence length. The formula is simply the number of target k-mers in the sequence divided by the total number of k-mers in the sequence. As we do not consider overlapping instances, N*(k-1) is subtracted from the total number of k-mers (L_s - k + 1), since those sequence positions are not considered.
For genomic elements, it is less clear how to define the number of failures. For example, suppose a cluster has 5 elements with a mean length of 300 bp and 250 bp of distance on average between each other. The question is how many "no-elements" this cluster contains, i.e. how many failures. We define the number of failures as

    n_f = L_no / L_mean

where L_no is the number of bases in the cluster not belonging to the genomic element and L_mean the mean length of the genomic element. Thus, this number is an approximation to the number of "no-elements" within the cluster. Finally, the success probability is then given as

    p = N / (L_s / L_mean) = N*L_mean / L_s

where L_s is the length of the sequence, L_mean the mean length of the genomic elements and N the number of genomic elements.
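For concreteness, here is a minimal sketch of the k-mer p-value computation using SciPy (scipy.stats.nbinom is parametrized by the number of successes and the success probability; the helper names and toy numbers below are illustrative, not part of the published implementation):

    from scipy.stats import nbinom

    def kmer_success_probability(N, k, L_s):
        # target k-mer count over the number of non-overlapping k-mer slots
        return N / (L_s - k + 1 - N * (k - 1))

    def cluster_pvalue(n, n_f, p):
        # probability of at most n_f failures before (n - 1) successes
        return nbinom.cdf(n_f, n - 1, p)

    # toy example: a 40 bp cluster with 8 copies of a 4-mer that occurs
    # 5000 times (non-overlapping) in a 1 Mb sequence
    p = kmer_success_probability(N=5000, k=4, L_s=1000000)
    n, L_c, k = 8, 40, 4
    print(cluster_pvalue(n, L_c - n * k, p))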
Distance models
The maximum distance is the main parameter of the algorithm determining the copies belonging to each cluster. We have shown previously [7] that, for most human chromosomes, the median of the observed
distance distribution of CpGs lies near the intersection between the observed and the expected distance distribution. The intersection can be interpreted as the point separating the intra-cluster
from the inter-cluster distances. In this new tool, we added two more distance models based on the direct detection of the mentioned intersection (one genome wide and the other for each chromosome
separately). In this way, WordCluster implements a total of 4 different distance models:
1. Percentile distance: The distance corresponding to a given percentile of the observed distance distribution is calculated and used as the maximum distance threshold.
2. Chromosomal intersection: The distance corresponding to the intersection between the observed and the expected distributions is used as the maximum distance (see Figure 1).
Figure 1. Distance distributions. Expected and observed distance distributions for human chromosomes 16 (above) and 5 (below). It can be seen that for chr16 the median, the chromosome intersection
and the genome intersection are very close (within 1 bp), while for chromosome 5 notable differences exist (from 33 bp to 49 bp).
3. Genome intersection: The distance distributions for all chromosomes are merged, then calculating the distance corresponding to the "genome intersection point". If this distance model is chosen,
the success probabilities (i.e. the probability to find the target k-mers in the chromosome) are not calculated for each chromosome separately (like in the two models above), but a genome wide
success probability (probability to find the target k-mers) is calculated.
4. Fixed distance: the user can set the distance threshold.
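For the two intersection-based models, one concrete way to locate the intersection point is sketched below. The geometric null distribution for the expected distances is an assumption of this sketch (it is the natural model for randomly placed copies), and the function name is illustrative:

    import numpy as np

    def intersection_distance(distances, p):
        # First distance at which the observed frequency drops below the
        # geometric (random placement) expectation.
        d_vals, counts = np.unique(distances, return_counts=True)
        observed = counts / counts.sum()
        expected = p * (1 - p) ** (d_vals - 1)   # geometric pmf, support d >= 1
        for d, obs, exp in zip(d_vals, observed, expected):
            if obs < exp:
                return int(d)
        return int(d_vals[-1])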
We implemented the described algorithm into a web server. The tool uses PHP for the interaction with the user, to access the core program (written in Java) and the MySQL database. Two types of input
data can be supplied: 1) a group of k-mers and a genomic sequence to be scanned by the program (the user can upload his own sequence or choose one of the 24 genome assemblies stored in our database -
see below); and 2) a file in BED format [8,9] with the coordinates of the genomic elements whose clustering properties should be analyzed. No mandatory input parameters exist, but the user can select
between different distance models (the default is the chromosome intersection) and set the cut-off for the statistical significance (the default here is p-value ≤ 1E-5).
The output generated by the web server depends on whether the user chooses a genome assembly from our database or supplies an anonymous sequence. The minimum output consists of the basic statistics
of the clusters (base composition, entity composition and statistical significance) and the statistics by chromosome. Furthermore, for all species in the database, the co-localization of detected
clusters with different gene regions (promoters, introns, etc.) is reported.
Finally, for some species (human, mouse, rat, cow, C. elegans, zebrafish and chicken) an enrichment/depletion analysis for the genes overlapped by the clusters is carried out using the Gene Ontology
[10] and the Annotation-Modules database [11,12].
Currently, 24 genome assemblies are stored in our database. The following sequences were downloaded from the UCSC genome browser or the corresponding project homepages (plant
genomes): Human (hg18, hg19), Mouse (mm8, mm9), Rat (rn4), Fruit fly (dm3), Anopheles gambiae (anogam1), Honey bee (apimel2), Cow (bosTau4), Dog (canFam2), C. briggsae (cb3), C. elegans (ce6), Sea
squirt (ci2), Zebrafish (danrer5), Chicken (galgal3), Stickleback (gasacu1), Medaka (orylat2), Chimp (pantro2), Rhesus macaque (rhemac2), S. cerevisiae (saccer1), Tetraodon (tetnig1), Arabidopsis
thaliana (tair8, tair9), and Zea mays (zm1). To determine the co-localization with genes, we used RefSeq genes whenever they were available [13], Ensembl genes otherwise [14].
Results and Discussion
To demonstrate the ability of our algorithm to find biologically significant and relevant clusters in the genome, while at the same time illustrating the different distance models, we carried out three
analyses: 1) detection of clusters of CpGs (CpG islands) using different distance models, 2) detection of clusters of the word CWG (where W = A, T) and 3) detection of clusters of olfactory receptor
genes in the human chromosome 11.
Detection of CpG islands with different distance models
We chose this example because the detection of CpG islands was the reason to develop the algorithm from which WordCluster [7] was derived. In the original CpGcluster algorithm, we used the percentile of
the observed distance distribution as distance model (apart from the fixed distance), suggesting the median as the default parameter. We did this since we observed that the intersection between the
observed and expected distance distributions is often very close to the median of the observed distance distribution (see Figure 1). This intersection can be interpreted in the following way. When
the observed curve lies above the expected, theoretical curve, it means that more CpGs exist at this distance than expected by chance. We can observe in Figure 1 that this is generally the case for
short distances, thus indicating the clustering (overrepresentation of short distances) of CpG dinucleotides. The intersection defines the "reversal point", i.e. at larger distances than this point,
the CpG dinucleotides are not clustered any more. Therefore, it might be that the strict use of the intersection defines better clusters than the use of the median, which is a mere approximation to
the intersection point. Furthermore, we observed that for some chromosomes the intersection and the median differ slightly. To clarify the impact of this change in the maximum distance, we predict
CpG islands by means of the median (cpg50), the chromosome intersection (cpgISc) and the genome intersection (cpgISg), then assessing the prediction quality by some of the criteria previously
described [7,15]. Table 1 shows that the mean lengths of both intersection models are clearly below the mean length of the original cpg50 islands. This can be explained as the intersection models
produce on average shorter distance thresholds, which leads to fragmentation, shortening and disappearance of some cpg50 CpG islands. Consequently, the chromosome intersection model (cpgISc) predicts
fewer islands than the original cpg50 algorithm (3979). Nevertheless, the genome intersection (cpgISg) yields more predictions compared to cpg50 (5535). The latter observation can be explained as the
predictions are done with a single, genome wide probability. The p-value assigned to each cluster depends on the success probability, and in G+C rich chromosomes the genome wide probability is much
lower than the chromosome probability. This leads to smaller p-values in G+C rich chromosomes, so that more islands can pass the p-value threshold. For example, cpg50 predicts 2434 islands in
chromosome 22 while cpgISg predicts 5197. Of course, in AT-rich chromosomes this effect is reversed but less pronounced (the difference between genome-wide and chromosome probabilities is smaller in
AT-rich compared to GC-rich chromosomes), and therefore a higher total number of islands is predicted.
Table 1. WordCluster predictions of CpG clusters*
Next, we analyzed the predictions under functional aspects. Table 2 shows the overlap of the predictions with RefSeq genes [13], Alu elements and phylogenetically conserved PhastCons elements [16].
The cpgISg predictions show the highest overlap with the promoter region (R13), and conserved PhastCons elements, simultaneously showing the lowest overlap with spurious Alu elements. This might
indicate that cpgISg predictions are slightly better than the other two, the original cpg50 and cpgISc. However, 1) the differences seem to be rather small and 2) a more detailed analysis would be
needed to resolve this question.
Table 2. Biological meaning of WordCluster predictions*
Independently of this open question, we can summarize: 1) the chromosome intersection seems to be a good replacement for the median and furthermore removes one input parameter from the method, as the
intersection is a fixed statistical property of the chromosome; 2) the genome intersection may be used when the expected clusters are known to be not dependent on the chromosome. The CpG islands are
probably not dependent on the chromosome, as the biological mechanisms forming and maintaining them are probably the same for all chromosomes. This may suggest the use of the genome intersection,
which is confirmed by producing slightly better results than the other two tested distance models.
Detection of CWG clusters
Besides the conventional CpG context, the CWG context has recently been shown to be a potential target for methylation [17]. WordCluster detects 84996 CAG/CTG clusters in the human genome (NCBI 36,
hg18) significant at the 1E-5 level using the chromosome intersection (Table 1). We found a high number of statistically significant CWG clusters scattered along all human chromosomes, many of which
are overlapping gene regions (Table 3). To check if the detected clusters might be biologically meaningful, we compared the percentage of methylated words (CAG and CTG) inside and outside of the
clusters. We observed that 26.7% of all CAG/CTG trinucleotides are methylated inside the clusters while 45.3% of them are methylated when located outside a cluster. It seems therefore, as occurs in
CpG islands, that CAG/CTG clusters remain unmethylated with a much higher probability than the bulk DNA.
Table 3. Clusters of CWG trinucleotides*
Detection of olfactory gene clusters
As a third example, we used WordCluster to search for significant clusters of olfactory receptor (OR) genes, the largest multigene family in multicellular organisms whose members are known to be
clustered within vertebrate genomes [18,19]. Table 4 shows the basic statistics for the 13 clusters of OR genes detected by our algorithm in human chromosome 11. Figure 2 shows a comparative analysis
of the clusters predicted by WordCluster with the clusters currently annotated in the CLIC/HORDE database [19] in a selected region of chromosome 11. Our algorithm predicts a higher number of clusters, all of them statistically significant.
Table 4. Clusters of OR genes in human chromosome 11*
Figure 2. Clusters of OR genes. A region of human chromosome 11 showing OR genes (green), the clusters annotated in the CLIC/HORDE database (blue) and the statistically significant clusters predicted
by WordCluster (red). Our algorithm predicts more compact clusters compared to the CLIC/HORDE annotation. For example, in the first and third HORDE clusters pronounced gaps exist between the genes,
which is detected by WordCluster but ignored by the CLIC/HORDE annotation. The figure was generated using the UCSC Genome Browser [8].
Conclusions
WordCluster generalizes the previous CpGcluster algorithm [7] to any word or genomic element in the genome, at the same time associating a statistical significance to the clusters found. It
outperforms current methods relying on densities and sliding-window approaches or arbitrarily chosen distance thresholds. The implementation as a web server connected to a MySQL backend allows for
co-localization studies with different gene regions, as well as for genome wide enrichment/depletion analysis of functional terms (GO).
List of abbreviations used
k-mer: DNA word (oligonucleotide) with length k; SINEs: Short interspersed nuclear elements; TSS: Transcription Start Site; TFBS: Transcription Factor Binding Site; R13: promoter region [TSS-1500 bp;
TSS+500 bp].
Authors' contributions
MH developed and implemented the algorithm and wrote the manuscript (with JLO), PC and PB carried out the theoretical analysis of word clustering and helped with the interpretation of statistical results, GB and AMA retrieved and organized the genome and methylation databases, and JLO developed the algorithm and wrote the manuscript (with MH). All the authors critically read and approved the final version.
Acknowledgements
The Spanish Government grants BIO2008-01353 to JLO, mobility PR2009-0285 to PC, Spanish Junta de Andalucía grants P07-FQM3163 to PC and P06-FQM1858 to PB are acknowledged. The Spanish 'Juan de la
Cierva' grant to MH and Basque Country 'Programa de formación de investigadores del Departamento de Educación, Universidades e Investigación' grant to GB are also acknowledged.
Symmetrization of the sinc-Galerkin method for boundary value problems.
(English) Zbl 0629.65085
The general Galerkin scheme applied to the boundary value problem
$$(1)\qquad Lf(x) = f''(x) + \mu(x)f'(x) + \nu(x)f(x) = \sigma(x);$$
$a<x<b$, $f(a)=f(b)$, has the discrete form (2) $(Lf_m-\sigma, S_k)=0$, $-M\le k\le N$, where the $S_k$ are sinc functions composed with conformal maps.
In the selfadjoint case of (1) the sinc-Galerkin coefficient matrix generated by (2) is nonsymmetric. The discrete Galerkin system is very much dependent on the choice of the weight function for (2).
The author shows that by changing the weight function from what was used by F. Stenger [ibid. 33, 85-109 (1979; Zbl 0402.65053)], the symmetry of the discrete system is preserved. It is shown that
for the appropriate selection of the mesh size, the error for the symmetrized sinc-Galerkin method and the error for the method by Stenger are asymptotically equal. The presented method still handles
a wide class of singular problems.
65L10 Boundary value problems for ODE (numerical methods)
65L60 Finite elements, Rayleigh-Ritz, Galerkin and collocation methods for ODE
34B05 Linear boundary value problems for ODE | {"url":"http://zbmath.org/?q=an:0629.65085","timestamp":"2014-04-17T15:34:54Z","content_type":null,"content_length":"22754","record_id":"<urn:uuid:78dd7230-310e-4883-94de-ff6e9168e61e>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00463-ip-10-147-4-33.ec2.internal.warc.gz"} |
[SOLVED] Palindromes (Tricky)
How many binary strings of length n are palindromes?
Now, I have come to the conclusion that it depends on whether n is even or odd, because I just can't find a single formula for this otherwise.
Even length = 2^(n/2).
Odd length I'm not sure about. Can someone give me a little hint or some help on this?
I've been thinking about it for too long.
Quote (kurac's question): "How many binary strings of length n are palindromes? ... Even length = 2^(n/2); odd length I'm not sure."
You are correct for when n is even.
Suppose n is odd. Then n=2k+1.
A palindrome is entirely determined by the first k positions, as the last k must be the same in reverse. The middle position can be anything.
So we have 2 choices for the middle position, and $2^k$ choices for the first k positions. So there are $2\cdot 2^k=2^{k+1}=2^{(n-1)/2+1}=2^{(n+1)/2}$ palindromes.
Hello, kurac!
How many binary strings of length $n$ are palindromes?
Both you and amitface are correct:
$\begin{array}{cc}\text{Even }n: & 2^{n/2} \\ \text{Odd }n: & 2^{(n+1)/2} \end{array}$
They can be merged into one function like this:
$f(n) \;=\; \frac{1+(-1)^n}{2}\cdot 2^{\frac{n}{2}} + \frac{1-(-1)^n}{2}\cdot 2^{\frac{n+1}{2}}$
which is just $2^{\lceil n/2 \rceil}$.
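A quick brute-force check of the merged formula for small n (the enumeration is exponential in n, so small n only):

    from itertools import product

    def count_palindromes(n):
        return sum(s == s[::-1] for s in product("01", repeat=n))

    for n in range(1, 13):
        assert count_palindromes(n) == 2 ** ((n + 1) // 2)   # 2^ceil(n/2)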
Does the Soul Exist??
Anyway, if I were to explain the soul, I would go with Plato and say it is the charioteer that drives and controls the horses and the chariot. It is the aspect of man that is not at
the mercy of physical concerns or impulses, it is the very ability to make a decision one way as opposed to the other.
This idea has been out of style for some time. It is similar enough to the "homunculus theory". Unfortunately it leads to an infinite regress: if a little man in my mind is deciding
everything for me, then what is deciding everything for the little man? Another little man? Who decides for him?
The idea has pragmatic value if you apply it to human cognition.
Alright. I think I can see it either way. After all, we do seem to be living in a universe where infinity is not impossible. Maybe an infinite regress is actually not that absurd of an idea after
all, when considering cognition.
Is that what you're saying?
The way I understand it, a good portion of our mathematical models only work with infinity.
Though to answer your question, infinite regress is just a glitch in our thinking pertaining to causality. | {"url":"http://www.deathmetal.org/forum/index.php/topic,17972.msg86822.html","timestamp":"2014-04-17T13:47:02Z","content_type":null,"content_length":"68183","record_id":"<urn:uuid:c0b1011a-a8dc-4567-96bf-91f039d0099c>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00140-ip-10-147-4-33.ec2.internal.warc.gz"} |
3251 -- Big Square
Time Limit: 2000MS Memory Limit: 65536K
Farmer John's cows have entered into a competition with Farmer Bob's cows. They have drawn lines on the field so it is a square grid with N × N points (2 ≤ N ≤ 100), and each cow of the two herds has
positioned herself exactly on a gridpoint. Of course, no two cows can stand in the same gridpoint. The goal of each herd is to form the largest square (not necessarily parallel to the gridlines)
whose corners are formed by cows from that herd.
All the cows have been placed except for Farmer John's cow Bessie. Determine the area of the largest square that Farmer John's cows can form once Bessie is placed on the field (the largest square
might not necessarily contain Bessie).
Input
Line 1: A single integer, N
Lines 2..N+1: Line i+1 describes line i of the field with N characters. The characters are: 'J' for a Farmer John cow, 'B' for a Farmer Bob cow, and '*' for an unoccupied square. There will always be
at least one unoccupied gridpoint.
Output
Line 1: The area of the largest square that Farmer John's cows can form, or 0 if they cannot form any square.
Sample Input
Sample Output
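A standard approach (a sketch only, not a reference solution; a contest submission would normally be ported to C/C++ for speed): for every ordered pair of John's cows, treat the pair as one side of a square, obtain the other two corners by rotating the side vector 90 degrees, and accept the square if those corners are J's, with at most one '*' left for Bessie. For a tilted square with side vector (dr, dc), the area is dr^2 + dc^2.

    def biggest_square(grid):
        n = len(grid)
        js = [(r, c) for r in range(n) for c in range(n) if grid[r][c] == 'J']
        best = 0
        for r1, c1 in js:
            for r2, c2 in js:
                dr, dc = r2 - r1, c2 - c1
                if (dr, dc) == (0, 0):
                    continue
                r3, c3 = r1 + dc, c1 - dr    # side vector rotated 90 degrees
                r4, c4 = r2 + dc, c2 - dr
                if not (0 <= r3 < n and 0 <= c3 < n and 0 <= r4 < n and 0 <= c4 < n):
                    continue
                cells = (grid[r3][c3], grid[r4][c4])
                # at most one '*' may be filled by Bessie; 'B' corners are forbidden
                if 'B' not in cells and cells.count('*') <= 1:
                    best = max(best, dr * dr + dc * dc)
        return best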
Relevant examples and relevant features: Thoughts from computational learning theory
Results 1 - 10 of 19
- In Proc. 35th Ann. ACM Symp. on the Theory of Computing , 2003
"... We consider a fundamental problem in computational learning theory: learning an arbitrary Boolean function which depends on an unknown set of k out of n Boolean variables. We give an algorithm
for learning such functions from uniform random exam-ples which runs in time roughly (n k) ω ω+1, where ω < ..."
Cited by 28 (2 self)
We consider a fundamental problem in computational learning theory: learning an arbitrary Boolean function which depends on an unknown set of k out of n Boolean variables. We give an algorithm for learning such functions from uniform random examples which runs in time roughly n^(kω/(ω+1)), where ω < 2.376 is the matrix multiplication exponent. We thus obtain the first polynomial factor improvement on the naive n^k time bound which can be achieved via exhaustive search. Our algorithm and analysis exploit new structural properties of Boolean functions.
- Proceedings of the Thirteenth International Conference on Machine Learning (ICML96), 1996
"... Most classification algorithms are "passive", in that they assign a class-label to each instance based only on the description given, even if that description is incomplete. By contrast, an
active classifier can -- at some cost -- obtain the values of missing attributes, before deciding upon a class ..."
Cited by 18 (5 self)
Add to MetaCart
Most classification algorithms are "passive", in that they assign a class-label to each instance based only on the description given, even if that description is incomplete. By contrast, an active
classifier can -- at some cost -- obtain the values of missing attributes, before deciding upon a class label. This can be useful when considering, for example, whether to extract some information
from the web for a critical decision or whether to gather information for a medical test or experiment. The expected utility of using an active classifier depends on both the cost required to obtain
the additional attribute values and the penalty incurred if the classifier outputs the wrong classification. This paper analyzes the problem of learning optimal active classifiers, using a variant of
the probably-approximately-correct (PAC) model. After defining the framework, we show that this task can be achieved efficiently when the active classifier is allowed to perform only (at most) a
constant number of tests. We then show that, in more general environments, the task is often intractable.
- In Proceedings of 20th IEEE Conference on Computational Complexity , 2005
"... We study the following question: What is the smallest t such that every symmetric boolean function on k variables (which is not a constant or a parity function), has a non-zero Fourier
coefficient of order at least 1 and at most t? We exclude the constant functions for which there is no such t and t ..."
Cited by 14 (3 self)
We study the following question: What is the smallest t such that every symmetric boolean function on k variables (which is not a constant or a parity function) has a non-zero Fourier coefficient of order at least 1 and at most t? We exclude the constant functions, for which there is no such t, and the parity functions, for which t has to be k. Let τ(k) be the smallest such t. The main contribution of this paper is a proof of the following self-similar nature of this question: If τ(l) ≤ s, then for any ε > 0 and for k ≥ k0(l, ε), τ(k) ≤ k·(s+1)/(l+1) + ε. Coupling this result with a computer-based search which establishes τ(30) = 2, one obtains that for large enough k, τ(k) ≤ 3k/31. The motivation for our work is to understand the complexity of learning symmetric juntas. A k-junta is a boolean function of n variables that depends only on an unknown subset of k variables. If f is symmetric in the variables it depends on, it is called a symmetric k-junta. Our results imply an algorithm to learn the class of symmetric k-juntas, in the uniform PAC learning model, in time approximately n^(3k/31). This improves on a result of Mossel, O'Donnell and Servedio in [11], who show that symmetric k-juntas can be ... (Research supported by NSF grants CCR-0002299 and CCF-0431023.)
, 2008
"... We construct a new public key encryption based on two assumptions: 1. One can obtain a pseudorandom generator with small locality by connecting the outputs to the inputs using any sufficiently
good unbalanced expander. 2. It is hard to distinguish between a random graph that is such an expander and ..."
Cited by 10 (2 self)
We construct a new public key encryption based on two assumptions: 1. One can obtain a pseudorandom generator with small locality by connecting the outputs to the inputs using any sufficiently good
unbalanced expander. 2. It is hard to distinguish between a random graph that is such an expander and a random graph where a (planted) random logarithmic-sized subset S of the outputs is connected to
fewer than |S | inputs. The validity and strength of the assumptions raise interesting new algorithmic and pseudorandomness questions, and we explore their relation to the current state-of-art. 1
- In Proc. 12th Workshop RANDOM , 2008
"... Abstract. We consider the problem of testing functions for the property of being a k-junta (i.e., of depending on at most k variables). Fischer, Kindler, Ron, Safra, and Samorodnitsky (J.
Comput. Sys. Sci., 2004) showed that Õ(k2)/ɛ queries are sufficient to test k-juntas, and conjectured that this ..."
Cited by 8 (4 self)
Abstract. We consider the problem of testing functions for the property of being a k-junta (i.e., of depending on at most k variables). Fischer, Kindler, Ron, Safra, and Samorodnitsky (J. Comput. Sys. Sci., 2004) showed that Õ(k²)/ε queries are sufficient to test k-juntas, and conjectured that this bound is optimal for non-adaptive testing algorithms. Our main result is a non-adaptive algorithm for testing k-juntas with Õ(k^(3/2))/ε queries. This algorithm disproves the conjecture of Fischer et al. We also show that the query complexity of non-adaptive algorithms for testing juntas has a lower bound of min(Ω̃(k/ε), 2 ...
, 2006
"... We study the learnability of several fundamental concept classes in the agnostic learning framework of Haussler [Hau92] and Kearns et al. [KSS94]. We show that under the uniform distribution,
agnostically learning parities reduces to learning parities with random classification noise, commonly refer ..."
Cited by 6 (1 self)
We study the learnability of several fundamental concept classes in the agnostic learning framework of Haussler [Hau92] and Kearns et al. [KSS94]. We show that under the uniform distribution,
agnostically learning parities reduces to learning parities with random classification noise, commonly referred to as the noisy parity problem. Together with the parity learning algorithm of Blum et
al. [BKW03], this gives the first nontrivial algorithm for agnostic learning of parities. We use similar techniques to reduce learning of two other fundamental concept classes under the uniform
distribution to learning of noisy parities. Namely, we show that learning of DNF expressions reduces to learning noisy parities of just logarithmic number of variables and learning of k-juntas
reduces to learning noisy parities of k variables. We give essentially optimal hardness results for agnostic learning of monomials over {0, 1}^n and halfspaces over Q^n. We show that for any constant ε, finding a monomial (halfspace) that agrees with an unknown function on a 1/2 + ε fraction of examples is NP-hard even when there exists a monomial (halfspace) that agrees with the unknown function on a 1 − ε fraction of examples. This resolves an open question due to Blum and significantly improves on a number of previous hardness results for these problems. We extend these results to ε = 2^(−log^(1−λ) n) (ε = 2^(−√(log n)) in the case of halfspaces) for any constant λ > 0 under stronger complexity assumptions.
- PROC. OF THE 11TH INTERNATIONAL CONFERENCE ON ALGORITHMIC LEARNING THEORY, IN: LECTURE NOTES IN ARTIFICIAL INTELLIGENCE , 2000
"... As pointed out by Blum [Blu94], ”nearly all results in Machine Learning [...] deal with problems of separating relevant from irrelevant information in some way”. This paper is concerned with
structural complexity issues regarding the selection of relevant Prototypes or Features. We give the first re ..."
Cited by 4 (2 self)
As pointed out by Blum [Blu94], ”nearly all results in Machine Learning [...] deal with problems of separating relevant from irrelevant information in some way”. This paper is concerned with
structural complexity issues regarding the selection of relevant Prototypes or Features. We give the first results proving that both problems can be much harder than expected in the literature for
various notions of relevance. In particular, the worst-case bounds achievable by any efficient algorithm are proven to be very large, most of the time not so far from trivial bounds. We think these
results give a theoretical justification for the numerous heuristic approaches found in the literature to cope with these problems.
"... Abstract—We present a new algorithm for learning a convex set in n-dimensional space given labeled examples drawn from any Gaussian distribution. The complexity of the algorithm is bounded by a
fixed polynomial in n times a function of k and ɛ where k is the dimension of the normal subspace (the spa ..."
Cited by 4 (2 self)
Abstract—We present a new algorithm for learning a convex set in n-dimensional space given labeled examples drawn from any Gaussian distribution. The complexity of the algorithm is bounded by a fixed polynomial in n times a function of k and ε, where k is the dimension of the normal subspace (the span of normal vectors to supporting hyperplanes of the convex set) and the output is a hypothesis that correctly classifies at least 1 − ε of the unknown Gaussian distribution. For the important case when the convex set is the intersection of k halfspaces, the complexity is poly(n, k, 1/ε) + n · min(k^(O(log k/ε⁴)), ...
"... We consider a fundamental problem in computational learning theory: learning an arbitrary Boolean function which depends on an unknown set of k out of n Boolean variables. We give an algorithm
for learning such functions under the uniform distribution which runs in time roughly (nk)!!+1; where! ! 2: ..."
Cited by 2 (0 self)
We consider a fundamental problem in computational learning theory: learning an arbitrary Boolean function which depends on an unknown set of k out of n Boolean variables. We give an algorithm for learning such functions under the uniform distribution which runs in time roughly n^(kω/(ω+1)), where ω < 2.376 is the matrix multiplication exponent. We thus obtain the first polynomial factor improvement on a naive n^k time bound which can be achieved via exhaustive search. Our algorithm and analysis exploit new structural properties of Boolean functions.
Motivic puzzle: the moduli space of squarefree polynomials
As I’ve mentioned before, the number of squarefree monic polynomials of degree n in F_q[t] is exactly q^n – q^{n-1}.
I explained in the earlier post how to interpret this fact in terms of the cohomology of the braid group. But one can also ask whether this identity has a motivic interpretation. Namely: let U be
the variety over Q parametrizing monic squarefree polynomials of degree d. So U is a nice open subvariety of affine n-space. Now the identity of point-counts above suggests the question:
Question: Is there an identity [U] = [A^n] – [A^{n-1}] in the ring of motives K_0(Var/Q)?
I asked Loeser, who seemed to feel the answer was likely yes, and pointed out to me that one could also ask whether the two classes were identical in the localization K_0(Var/Q)[1/L], where L is the
class of A^1. Are these questions different? That is, is there any nontrivial kernel in the natural map K_0(Var/Q) -> K_0(Var/Q)[1/L]? This too is apparently unknown.
Here, I’ll start you off by giving a positive answer in the easy case n=2! Then the monic polynomials are parametrized by A^2, where (b,c) corresponds to the polynomial x^2 + bx + c. The
non-squarefree locus (i.e. the locus of vanishing of the discriminant) consists of solutions to b^2 – 4c = 0; the projection to the b coordinate is an isomorphism to A^1 over Q. So in this case the identity is
indeed correct.
Update: I totally forgot that Mike Zieve sent me a one-line argument a few months back for the identity |U(F_q)| = q^n – q^{n-1} which is in fact a proof of the motivic identity as well! Here it
is, in my paraphrase.
Write U_e for the subvariety of U consisting of degree-d polynomials of the form a(x)b(x)^2, with a,b monic, a squarefree, and b of degree e. Then U is the union of U_e as e ranges from 1 to d/2.
Note that the factorisation as ab^2 is unique; i.e, U_e is naturally identified with {monic squarefree polynomials of degree d-2e} x {monic polynomials of degree e.}
Now let V be the space of all polynomials (not necessarily monic) of degree d-2, so that [V] = [A^{n-1}] – [A^{n-2}]. Let V_e be the space of polynomials which factor as c(x)d(x)^2, with d(x) having
degree e-1. Then V is the union of V_e as e ranges from 1 to d/2.
Now there is a map from U_e to V_e which sends a(x)b(x)^2 to a(x)(b(x) – b(0))^2, and one checks that this induces an isomorphism between V_e x A^1 and U_e, done.
But actually, now that I think of it, Mike’s observation allows you to get the motivic identity even without writing down the map above: if we write $U^d_e$ for the space of monic squarefrees of
degree d in stratum e, then $U^d_e = U^{d-2e}_0 \times \mathbf{A}^e$, and then one can easily compute the class $U^d_0$ by induction.
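For what it's worth, the point-count identity behind all this is easy to confirm by brute force for small primes q; here is a sketch (the names are mine; squarefreeness is tested via gcd(f, f') over F_q):

    from itertools import product

    def poly_rem(a, b, q):
        # remainder of a mod b over F_q, q prime; coefficients low-to-high
        a = [c % q for c in a]
        while a and a[-1] == 0:
            a.pop()
        inv = pow(b[-1], q - 2, q)
        while len(a) >= len(b):
            coef, shift = a[-1] * inv % q, len(a) - len(b)
            for i, c in enumerate(b):
                a[shift + i] = (a[shift + i] - coef * c) % q
            while a and a[-1] == 0:
                a.pop()
        return a

    def is_squarefree(f, q):
        df = [(i * c) % q for i, c in enumerate(f)][1:]   # formal derivative
        while df and df[-1] == 0:
            df.pop()
        if not df:          # f' = 0 forces f to be a q-th power, not squarefree
            return False
        a, b = f, df
        while b:
            a, b = b, poly_rem(a, b, q)
        return len(a) == 1  # gcd(f, f') is a nonzero constant

    for q in (2, 3, 5):
        for n in (2, 3, 4):
            count = sum(is_squarefree(list(cs) + [1], q)     # monic of degree n
                        for cs in product(range(q), repeat=n))
            assert count == q**n - q**(n - 1)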
8 thoughts on “Motivic puzzle: the moduli space of squarefree polynomials”
1. I think the following gives the argument in K_0(Var_C). I’m not sure what it would take to soup it up to get an argument over Q.
The space of monic polynomials is given by C^n = Sym^n(C), where we think of an element on the right as the unordered n-tuple of the roots of the polynomial. The discriminant locus pulls back to a union
of hyperplanes under the quotient map C^n -> Sym^n(C). Now if I stratify those hyperplanes according to how many hyperplanes each point lies on, then the symmetric group acts transitively on the
components in the stratification of a given dimension. The upshot is that the discriminant locus in Sym^n(C) has a stratification whose strata are isomorphic to the strata of a single hyperplane
and hence in the ring of motives, the discriminant locus is equal to C^{n-1}.
2. I don’t know if I totally buy this, Jim — I feel like your strata on the discriminant locus are still going to be _quotients_ by some finite stablizer group of the strata in the hyperplane
3. You are right Jordan, the strata will be quotients by stabilizers. Hmm, serves me right for firing this off 2 minutes after thinking of it. Let me see if I can fix my argument and get back to you.
4. Perhaps the following works:
The discriminant locus D, i.e. the complement of U in A^n, has a natural stratification parametrized by partitions of n (except n = 1 + 1 + …+ 1 which corresponds to U). The normalisation of the
closure of each stratum is isomorphic to an affine space of dimension the number of terms in the corresponding partition. By induction on the dimension one sees that each stratum is in the
subring of K_0(Var/k) generated by A^1, where k is any field, and there is a universal formula for this, not depending on k. (To see this one should consider the quotients of A^{n_1+n_2+…+n_k} by
the natural action of S_{n_1} x … x S_{n_k} together with their natural stratifications). In particular, D is also in this subring and hence also U. The number of points of U over a finite field
with q elements is therefore given by a polynomial in q whose coefficients determine the representation of U as an element of the subring of K_0(Var/k) generated by A^1. Plugging in the formula
for this polynomial that you gave above then tells us that [U] = [A^n] – [A^{n-1}].
Of course this is a cheap proof and perhaps not what you wanted. There should be a purely combinatorial proof, but such a proof might require a lot more work.
5. This seems right to me! And indeed the pure thought “by induction it’s polynomial in [A^1]” argument is in the end the same as the explicit induction argument I just added to the post.
6. A natural generalisation of your question would be to ask whether the closure of any stratum is equal to an affine space in K_0(Var_k); this seems quite plausible.
7. Naf, that is a great question, which Ravi Vakil and I also wondered about. Definitely plausible, and the evidence looks good. But the answer is no! The closure of the 1,1,2,2,3 stratum in Sym^9
is L^5 - L^2 + L in the Grothendieck ring (that's the first counterexample). We investigate this phenomenon more thoroughly in an upcoming paper "Discriminants in the Grothendieck ring" (which also
includes a few different, and different from above, proofs of Jordan’s original puzzle).
8. In even further response to naf’s question, a recent paper of Almousa and myself http://arxiv.org/pdf/1210.0472v1.pdf addresses the question of for which strata the closure is equal to an affine
space in K_0(Var_k).
Tagged algebraic geometry, motives, polynomials, puzzles | {"url":"http://quomodocumque.wordpress.com/2010/05/13/motivic-puzzle-the-moduli-space-of-squarefree-polynomials/","timestamp":"2014-04-18T10:50:26Z","content_type":null,"content_length":"70063","record_id":"<urn:uuid:bf3690ab-2e57-4144-8239-70150d66f6d4>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00436-ip-10-147-4-33.ec2.internal.warc.gz"} |
Pre Search root
For a function, I want to calculate the ranges in which single roots exist. For example, f(x) has a root in the subrange [a1,b1] and another root in the subrange [a2,b2], within a total range of [X,Y]. Is there an algorithm available to calculate these ranges?
Thanks in advance.
Can you give us any more details about the function? For a completely general function, this sort of thing is not really possible. However if you have a specific class of functions you are interested
in (such as polynomials) then it may be possible.
Or how about not posting schoolwork assignments?
Thanks for your reply.
My desired function is 10 sin(x) - x = 0.
Well I’ll give you a hint.
You know (or should know) that sin(x) is always between -1 and 1. Given this, you should be able to calculate a range of x values where that function could possibly have roots.
Once you’ve done that, you can use what you know about where the roots of sin(x) are to estimate how many roots your function could have and ranges where they could be.
I found the method “Bracketing method” in the following link
It works fine for my purpose. Thanks to all. | {"url":"http://devmaster.net/posts/15642/pre-search-root","timestamp":"2014-04-19T09:43:45Z","content_type":null,"content_length":"18382","record_id":"<urn:uuid:b58d2eb5-16fb-4e9b-a443-3c57c478fea4>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00592-ip-10-147-4-33.ec2.internal.warc.gz"} |
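For reference, a minimal sketch of that bracketing idea applied to 10 sin(x) - x = 0 (the scan step of 0.5 is an arbitrary choice, and a coarse scan can miss closely spaced roots):

    import math

    def f(x):
        return 10 * math.sin(x) - x

    def brackets(lo, hi, step):
        # subranges [a, b] on which f changes sign, i.e. each contains a root
        out, a = [], lo
        while a < hi:
            b = min(a + step, hi)
            fa, fb = f(a), f(b)
            if fa == 0:
                out.append((a, a))        # an exact zero landed on a grid point
            elif fa * fb < 0:
                out.append((a, b))
            a = b
        return out

    def bisect(a, b, tol=1e-10):
        while b - a > tol:
            m = (a + b) / 2
            if f(a) * f(m) <= 0:
                b = m
            else:
                a = m
        return (a + b) / 2

    # since |10 sin(x)| <= 10, every root lies in [-10, 10]
    for a, b in brackets(-10.0, 10.0, 0.5):
        print(bisect(a, b))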
Closed connected additive subgroups of the Hilbert space
It is a classical result that a closed and connected additive subgroup of $\mathbb{R}^n$ is necessarily a linear subspace. However, this is no longer true in infinite dimension: a very easy example
is the subgroup $L^2(I,\mathbb{Z})$ of the real Hilbert space of all $L^2$ real valued functions on the unit interval $I:=[0,1].$
Indeed, any element $\phi$ of $L^2(I,\mathbb{Z})$ is connected to the origin by the path $\gamma:I\ni t\mapsto \phi\chi_{[0,t]}\in L^2(I,\mathbb{Z}),$ where $\chi_{[0,t]}$ is the characteristic
function of the interval $[0,t].$ Actually, up to a reparametrization, this path is also $1/2$-Hölder continuous. (Indeed, if $\sigma:[0,1]\to\big[0,\ 1+ \|\phi\| _2^2\, \big]$ is the strictly
increasing, surjective continuous map $t\mapsto t+\int_0^t \phi^2dx$, then $\|\gamma(t)-\gamma(t')\|_2\le|\sigma(t)-\sigma(t')|^{1/2}$, meaning that $\gamma\circ\sigma^{-1}$ is $1/2$-Hölder continuous.)
So we may say that $L^2(I,\mathbb{Z})$ is even $1/2$-Hölder-path-connected, though it is certainly not a linear subspace.
It is also not hard to see that the Hölder exponent $1/2$ is critical: any closed subgroup $G$ of a Hilbert space $H$, which is connected by $\alpha$-Hölder paths, with $\alpha > 1/2,$ is necessarily a
linear space. (Reason: as a consequence of the generalized parallelogram identity, it turns out that the lattice generated by $n$ vectors $g_1,\dots,g_n$ in $H$, with norms $\|g_k\|\leq r,$ is a $rn^
{1/2}$-net in their linear span. In particular, if $\gamma:[0,1]\to G$ is an $\alpha$-Hölder path, for any $n\in\mathbb{N},$ the $n$ elements $g_{k,n}:=\gamma(\frac{k+1}{n})-\gamma(\frac{k}{n})\in G,
\quad k=0,\dots,n-1$ are a $Cn^{1/2 - \alpha }$-net in their linear span. Since $G$ is closed this implies that it is a cone, hence a linear subspace).
I find this quite nice, but at this point some questions arise quite naturally. Let $H$ be the infinite dimensional real separable Hilbert space.
• Let $0 < \alpha < 1/2.$ Are there closed additive subgroups of $H$ which are connected by $\alpha$-Hölder paths, but not by $\beta$-Hölder paths for any $\beta >\alpha \, $?
• More generally: connected / non-connected w.r.t. paths with a given modulus of continuity? Are there closed, connected, not path-connected additive subgroups?
• Are these objects just pathologies/curiosities of the mathematical Zoo, or did anybody give an application of them to functional analysis?
hilbert-spaces topological-groups fa.functional-analysis
HCSSiM Workshop day 6
July 8, 2012
A continuation of this, where I take notes on my workshop at HCSSiM.
What is a group
We talked about sets with addition laws and what that really means. We noted that associativity seems to be a common condition and that some weird operations aren’t associative. Example: define $a*b$
for a pair of integers $(a,b)$ to be the sum $a + b^2.$ Then we have:
$(1*1)*1 = 2*1 = 3$ but $1*(1*1) = 1*2 = 5$.
We decided those things would make them crappy generalized addition operators. We ended up by defining what a group is, although we call it a “Karafiol” so that when our final senior staff member
P.J. Karafiol arrives in a couple of weeks he will already be famous.
We showed that $\mathbb{Z}/n \mathbb{Z}$ is a Karafiol and that, if you remove all of the congruence classes with numbers that aren’t relatively prime to $n,$ you can also turn $\mathbb{Z}/n \mathbb
{Z}$ into a group under multiplication. I was happy to hear them challenge us on whether that would be closed under multiplication. The kids proved everything, we were just mediating. They are
We had already talked about graphs (“Visual Representations”) as defined by vertices and edges. Today we talked about being able to put vertices in different groups depending on how the edges go
between groups, so we ended up talking about bipartite and tripartite graphs. We ended up being convinced that the complete bipartite graph on 6 vertices (so 3 on each side) is not planar. But we
haven’t proven it yet.
Special Lecture
Saturday morning we have only two hours of normal class, instead of 4, and we have a special event for the late morning. Yesterday Johan was visiting so he talked to them about the projective plane
over a finite field, and how every line has the same number of points. He talked to them a bit about his REU at Columbia and his Stacks Project and the graph of theorems t-shirt that he wore to the
talk. I think it’s cool to show the students this kind of thing because they are the next generation of mathematicians and it’s great to get them into online collaborative math as soon as possible.
They were impressed that the Stack Project is more than 3000 pages.
1. July 8, 2012 at 8:55 pm |
I have that shirt. The checker at my grocery store tried to guess what it was about. He guessed it was a CS project, which is pretty close.
2. July 8, 2012 at 9:30 pm |
Typo in your non-associativity example? ((a,b),c) and (a,(b,c)) both give the last digit of c.
□ July 8, 2012 at 10:07 pm |
Thanks, I was being lazy (we got it right in class). I’ve corrected it! | {"url":"http://mathbabe.org/2012/07/08/hcssim-workshop-day-6/","timestamp":"2014-04-20T03:57:58Z","content_type":null,"content_length":"50170","record_id":"<urn:uuid:d44332c6-5b7f-4908-a1bc-76e28cc32bf1>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00579-ip-10-147-4-33.ec2.internal.warc.gz"} |
Tennis Groundstroke Drills
This drill teaches the players the importance of momentum and concentration until the end.
a) Two players play and the game starts with a serve.
b) The score is a single number (not X:Y, just X) and it starts at 0.
c) Each point the server wins increases the score by 1, and each point the returner wins decreases it by 1. Example: if the server wins the first two points, the score is 2. If the returner then
wins one point, the score goes back to 1.
d) The first player to reach +3 or -3 wins the game. Then they change roles - the server now returns and vice versa.
e) The whole score is now 1:0 and they play to 3. So the winner wins by 3:0, 3:1 or 3:2.
• Players learn to fight and never give up, even when things don't look so well. They can get back in the game faster. Example: if the opponent leads 2:0 and you win 1 point, you've actually pulled
your opponent away from winning the game.
• In real tennis when your opponent is 40:15 up and you win a point, he can still win the game with the next point. But that is only on the score board. Psychologically the leading player feels as
if he is held back and can become impatient. And you know what that means.
• The player also learns to focus and fight for the last point, even if he leads 2:0. If he loses the point, he will now need 2 in a row to win the game. In reality most players relax too much when
they lead 40:0. This gives good players a chance to catch up. | {"url":"http://www.tennis4you.com/workshop/groundstrokes/groundstroke-drill-020.htm","timestamp":"2014-04-16T13:27:24Z","content_type":null,"content_length":"24246","record_id":"<urn:uuid:b7678b8a-0d1d-417e-a315-f28e0dc682b0>","cc-path":"CC-MAIN-2014-15/segments/1397609523429.20/warc/CC-MAIN-20140416005203-00182-ip-10-147-4-33.ec2.internal.warc.gz"} |
What is the Antoine equation?
Who is Antoine, and why did he think it important that the Clausius-Clapeyron equation be modified?
melonie lewis 7/27/99
Chemists often use the Clausius-Clapeyron equation to estimate the vapor pressures of pure liquids or solids. Several of the assumptions made in the derivation of the equation fail at high
pressure and near the critical point, and under those conditions the Clausius-Clapeyron equation will give inaccurate results. Chemists still like to use the equation because it's good enough in
most applications and because it's easy to derive and justify theoretically.
Chemical engineers sometimes need more reliable vapor pressure estimates. The Antoine equation is a simple 3-parameter fit to experimental vapor pressures measured over a restricted
temperature range:

    log10(P) = A - B / (C + T)
where A, B, and C are "Antoine coefficients" that vary from substance to substance. Sublimations and vaporizations of the same substance have separate sets of Antoine coefficients, as do
components in mixtures. The Antoine equation is accurate to a few percent for most volatile substances (with vapor pressures over 10 Torr).
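As a concrete illustration, here is a small Python helper (the water coefficients below, for T in °C and P in mmHg over roughly 1-100 °C, are a commonly tabulated set; always check the units and validity range of whatever source you take coefficients from):

    def antoine_pressure(T, A, B, C):
        # vapor pressure from log10(P) = A - B / (C + T); units follow
        # whatever system the coefficients were fitted in
        return 10 ** (A - B / (C + T))

    # commonly quoted coefficients for water (P in mmHg, T in deg C)
    A, B, C = 8.07131, 1730.63, 233.426
    print(antoine_pressure(100.0, A, B, C))   # ~760 mmHg at the normal boiling point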
Antoine coefficients for many substances are tabulated in Lange's Handbook of Chemistry (12th ed., McGraw-Hill, New York, 1979) and they are available online from NIST's Chemistry WebBook. The
following books, CD's, and Web sites give parameters and further information about the Antoine equation:
Antoine Coefficient Conversion Calculator (Process Data Control Corporation)
A Java applet that converts Antoine coefficients from one system of units to another.
http://www.pdccorp.com/java/conversion.html (7/27/99)
Antoine Equation (nextstep@athena.compulink.gr)
A JavaScript calculator that estimates vapor pressure for a few common materials using the Antoine equation.
http://www.compulink.gr/users/nextstep/antoine.html (4/11/99)
Properties of Gases and Liquids (R. C. Reid, J. M. Praunitz, B. E. Poling)
This handbook provides models and parameters for estimating thermodynamic properties of gases and liquids, both pure and mixtures, including enthalpies, entropies, fugacity coefficients,
heat capacities, and critical points; vapor-liquid and liquid-liquid equilibria. ISBN: 0070517991, published by McGraw-Hill.
http://www.amazon.com/exec/obidos/ISBN%3D0070517991/002-3780039-4315040 (4/11/99)
Vapor Pressures of Pure Substances (Bernhard Spang)
A brief introduction to the Antoine equation, used to estimate the vapor pressure of substances as a function of temperature. Includes Antoine equation parameters for selected substances.
http://chemengineer.about.com/library/weekly/aa071497.htm (4/11/99)
Author: Fred Senese senese@antoine.frostburg.edu | {"url":"http://antoine.frostburg.edu/chem/senese/101/liquids/faq/antoine-vapor-pressure.shtml","timestamp":"2014-04-16T18:58:46Z","content_type":null,"content_length":"17000","record_id":"<urn:uuid:ad3de34e-2636-4f92-bd0f-fd2bf117681c>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00569-ip-10-147-4-33.ec2.internal.warc.gz"} |
CFD Review | Quality and Control – Two Reasons Why Structured Grids Aren't Going Away
T-Rex and methods like it are filling a need. For example, the Klein bottle in Figure 1 is easily meshed using T-Rex. The method meets these combined requirements: rapid generation of a high quality
mesh that resolves boundary layers, wakes and other flow features for realistic (i.e. complex) geometry. One might, therefore, infer that structured grids might not meet some of those requirements –
otherwise, why bother with hybrid - and that would be correct. Structured grids have a reputation for taking a long time to generate.
Figure 1: Hybrid meshes like this one for a Klein bottle are great but structured grids can do just as well or better. larger image
On the other hand, structured grids give you two things that unstructured meshes may lack: quality and control. Because unstructured and hybrid meshing algorithms are highly automated you necessarily
give up some control. The control afforded to you by structured grids ensure that you generate precisely the grid you need. Also, structured grids (or more specifically, hexahedral cells) are widely
acknowledged to be superior to unstructured meshes (tet cells). A recent reminder comes from the Pointwise User Group Meeting 2013, at which a presenter made it abundantly clear that “choice of grid
is an integral part of getting an accurate solution” for aeroacoustics, an application for which a structured grid was chosen (Figure 2).
Figure 2: Structured grids are well suited for many applications and critical for others, like aeroacoustics, exemplified by this grid for a jet engine nozzle with chevrons.
Another example of the use of structured grids for aeroacoustic applications is the airfoil grid we generated for a test case from the 2nd workshop benchmark problems for airframe noise computations
(BANC-II), shown in Figure 3.
Figure 3: This structured grid for a multi-element airfoil was designed specifically for noise computations. larger image
There are several general advantages to using a structured grid:
• Time and memory. You can fill the same volume with fewer hexes than tets, thereby lowering the cell count and your CFD computation time and memory usage. Structured grids generally have a
different topology than unstructured grids, so it is difficult to make a direct cell count comparison. At its simplest, each hexahedron can be decomposed into 5 tetrahedra that share its edges,
giving a 5:1 reduction in cell count for the same flowfield resolution. The benefit to reducing cell count becomes very apparent when generating a mesh with a wide variation in resolved length
scales; you will use many more tets than you would hexes.
• Resolution. Flow of a fluid will often exhibit strong gradients in one direction with milder gradients in the transverse directions (e.g. boundary layers, shear layers, wakes). In these
instances, high quality cells are easily generated on a hex grid with high aspect ratio (on the order of one thousand or more). It is much more difficult to generate accurate CFD solutions on
highly stretched tetrahedra. (Plus, not all stretched tets are equal depending on the maximum included angles.)
• Alignment. CFD solvers converge better and can produce more accurate results when the grid is aligned with the predominant flow direction. Alignment in a structured grid is achieved almost
implicitly because grid lines follow the contours of the geometry (as does the flow), whereas there’s no such alignment in an unstructured mesh.
• Definable normals. Application of boundary conditions and turbulence models work well when there is a well-defined computational direction normal to a feature such as a wall or wake. Transverse
normals are easily defined in a structured grid.
These last two bullet points are why boundary layers are best modeled in an unstructured mesh with prism layers: they provide structure in the direction away from the wall.
For good comparisons of the performance of structured and unstructured grids, you should review the extensive body of results accumulated by AIAA’s Drag Prediction Workshop [Reference 1] and AIAA’s
High Lift Prediction Workshop [Reference 2], among others.
The good news is that the Pointwise meshing software has an extensive suite of structured grid technology dating back to 1984 and the birth of its predecessor, Gridgen. In fact, structured grids
should be viewed in equal light with Pointwise’s other meshing tools in terms of breadth of applicability and depth of capability.
Rather than delving into the issue of quantifying why and how structured grids may be better than unstructured (a topic for another article), this article will review and highlight Pointwise’s
capabilities in structured grid generation.
A structured grid (aka mapped mesh) is one in which the cells (quadrilaterals for a surface grid, hexahedra for a volume grid) are arranged in an IxJ (or IxJxK) array where I, J, and K are known as
the grid’s dimensions. The term “structured” refers to the structure provided to the cells’ organization within that array such that a cell’s neighbors are known implicitly. In other words, the point
at (i,j,k) has neighbors at (i+1,j,k), (i-1,j,k), etc. This structure contrasts with an unstructured mesh in which a connectivity table has to be maintained and queried to find any point’s neighbors.
Figure 4: This schematic illustrates the mapping of a structured grid to computational space. From Novel multi-block strategy for CAD tools for microfluidics type applications.
In fact, structure is the source of one of the benefits of structured grids in terms of computational performance. Finding neighbors directly (via the structure) is much faster and uses less memory
than having to look them up in a table (i.e. unstructured).
The best - and freely available - online reference for structured grid generation is the classic text by Joe Thompson et al, Numerical Grid Generation: Foundations and Applications [Reference 3].
There are many mathematical methods with which structured grids can be generated. Pointwise’s suite, though not comprehensive, provides a broad range of capability that can be applied to virtually
any type of geometry.
Note that the use of structured grids implies that the distribution of boundary points has been performed in a manner such that dimensions (number of grid points) of opposite boundaries are
identical. Domains (surface grids) will have four boundaries (edges) and blocks (volume grids) will have six boundaries (faces). These topological considerations won’t be discussed in what follows,
only the grid methods themselves.
Algebraic Methods
Algebraic methods in Pointwise serve two purposes. First, they initialize the grid in every domain and block. That grid will be sufficient (in terms of cell quality) for many geometries. In other
cases, this algebraic grid serves as the starting point for elliptic PDE methods (described below).
Transfinite interpolation (TFI) is a very simple set of algebraic equations (i.e. the solution is not iterative) that is based on the solution of a boundary value problem. In other words, the
location of each interior grid point is computed based on the locations of the grid’s boundary points.
TFI is applied automatically in Pointwise to initialize structured surface and volume grids, domains and blocks respectively. Because TFI is a relatively fast computation (an 88x88x88 block takes
only a second to TFI on a laptop), it isn’t actually applied to a block until that block’s grid points are needed (e.g. mesh quality examination, export), thereby minimizing memory usage.
Figure 5: A surface grid initialized using TFI with standard interpolants and spanning multiple CAD surfaces
Cell quality in a TFI’d grid is influenced by the choice of interpolants. You have three interpolant choices in Pointwise.
• Arc length. By default, Pointwise uses arc-length based interpolants that result in a grid that mimics the distribution of boundary points.
• Polar. For surface grids that need to be shaped like a surface of revolution (i.e. farfield boundaries), you can apply polar interpolants. Given an axis of revolution, the grid’s boundary
coordinates are mapped to a cylindrical coordinate system, TFI is applied, and the grid points’ (x,y,z) coordinates are obtained from the inverse mapping. This method means you won’t need CAD
surfaces to define outer boundaries.
• Linear. Primarily for historical reasons, linear interpolants are also available.
Conforming to CAD Geometry
When surface grids are generated, it’s vital that they conform to the CAD model where appropriate. When Pointwise detects that a domain’s boundaries all lie on the same surface in the CAD model, it
applies Parametric TFI in which the interpolation is performed in the CAD surface’s (u,v) parametric coordinates, which ensures that each of the domain’s grid points lies precisely on the CAD model.
(In other words, because (u,v) is known at each of the boundary points, the TFI method interpolates those (u,v) coordinates onto the grid’s interior. Evaluating the CAD surface at each interpolated
(u,v) location yields the grid’s (x,y,z) coordinates.
In more general cases, Pointwise can detect where the surface grid’s boundary points lie “close enough” to a CAD surface (usually via a boundary shared with another CAD surface), and will fit them to
that surface before applying Parametric TFI with the same result: grid points that all lie precisely on the CAD geometry.
Elliptic PDE Methods
It’s perhaps a bit ironic that the use of elliptic partial differential equations (PDEs) for meshing was originally developed for unstructured meshes by Winslow in the late 1960s [Reference 4]. These
methods involve iterative solution of an elliptic PDE with right hand side terms (control functions) that influence various aspects of the grid’s cell quality.
You will likely run the Solve command on at a least a few of the domains and blocks in most grids you generate for the simple reason that the improvements in smoothness, clustering and orthogonality
relative to the TFI grid are substantial and will improve your flow solver’s convergence and solution accuracy. Pointwise uses a unique implementation of the control functions whereby they are split
into those that influence the grid on the interior and those that influence the grid near the boundaries.
Interior Control Functions
• Laplace. The right hand side (RHS) terms of the elliptic PDE are set to zero, resulting in a maximization of smoothness but without any control over clustering.
• Thomas-Middlecoff. This formulation of the RHS terms provides a smooth variation of cell sizes on the grid’s interior that mimics the cell sizes on its boundaries. This is the default control
• Fixed Grid. This relatively subtle technique extracts control functions from the existing grid, smoothes them, and applies them to the grid. It helps remove kinks from grids that are otherwise
Figure 6: The grid in Figure 5 has been smoothed through application of the elliptic PDE solver while remaining constrained to the CAD model.
Boundary Control Functions
There is one very good reason for applying a boundary control function to your grid: controlling the cell spacing and orthogonality at the boundaries. It should be clear that having grid lines
emanate orthogonally from solid walls is a desirable property (for boundary conditions and turbulence models), as is the ability to control the height of the first cell (for Y+ clustering
requirements). Those attributes are the purpose of the boundary control functions which come in two basic formulations:
• von Lavante/Hilgenstock/White. This sequentially poly-authored formulation is the default method because it achieves the desired constraints precisely at the price of being potentially and
slightly unstable in the numerical solution of the PDEs.
• Steger-Sorenson. This classic formulation of the boundary control functions tends to be more stable at the price of not matching the constraints exactly. (In other words, in highly concave
regions the first grid point off the wall may be slightly farther away from the wall than specified.)
The descriptions above mention “constraints.” These constraints are definitions of exactly how you desire the wall spacing and orthogonality to be computed. Without going into too much detail,
constraint types include:
• Interpolated. The constraints on the boundary are interpolated from the ends of the boundary (default).
• Current Grid. The constraints are obtained from the current grid.
• Adjacent Grid. The constraints are obtained from an adjacent grid via a shared boundary. This is good for ensuring smoothness and continuity across a grid boundary.
• User Specified. Enter whatever values you desire. Each grid boundary in the grid can use a different control function formulation.
Figure 7: The improvement in this smoothed grid (right) over the algebraic version (left) is readily apparent. larger image
Boundary Conditions
For completeness’ sake, it’s worth mentioning that you also have a choice of boundary condition to be applied to each boundary of the grid.
• Fixed – Boundary points remain fixed in space (default).
• Float – The boundary points float with the solution of the PDE.
• Orthogonal – Whereas boundary control functions leave boundary points fixed and influence the near-boundary grid, you may choose to have boundary points slide along the boundary shape to achieve
Numerical Solution of the Elliptic PDEs
When applying the Solve command (or “smoothing” the grid as the operation is commonly known), the elliptic PDEs are solved iteratively using either a multigrid (default) or a pointwise, successive
over-relaxation (SOR) algorithm. Each algorithm has several tunable parameters that influence the rate of convergence of the solution. What you must keep in mind is that convergence of these PDEs
means nothing. There’s no guarantee that the “converged” grid will be better than one that’s partially converged. The point is to run the solver sufficiently to achieve in the grid the desirable
attributes of smoothness, clustering and orthogonality. Mesh quality can be quantified using the Examine command, but that’s a topic for another article.
CAD and the Elliptic Solver
As is the case for TFI, it’s vital that surface meshes remain constrained to the CAD model even while being smoothed. Fortunately, the elliptic PDE solver also uses parametric techniques like TFI
does in which the grid is smoothed in the CAD surface’s (u,v) parametric space ensuring grid adherence to the CAD model.
But as you know, grid topology is independent of CAD topology and one grid may span several CAD surfaces (see Figure 6), making the parametric solver unusable because a single consistent (u,v)
mapping can’t be obtained. Fortunately, Pointwise’s elliptic solver is coupled with grid point projection to the CAD model to provide coverage when the parametric technique isn’t available. For each
iteration of the elliptic PDEs, each grid point can be projected onto the CAD model.
One of the attributes for the elliptic solver for surfaces is Shape, which can be Free (float in 3D space), Fixed (maintain current shape), or Database (CAD model) constrained. For the latter, you
have the choice of which CAD surfaces the grid should be projected onto (Pointwise can figure that out for you) and the type of projection:
• Closest Point (default)
• Linear. A ray cast in a direction chosen by the user
• Cylindrical. Outward projection from an axis
• Spherical. Outward projection from a point
Extrusion Methods
Up to this point, we have been describing boundary value problems. Pointwise also contains a class of structured grid methods that are initial value problems: given a grid along a boundary (the
initial value), a grid is created by extruding from that boundary. Extrusion methods are very powerful and can create some great grids as illustrated in Figure 8, a single structured grid extruded
around a multi-element airfoil.
Figure 8: This grid for a multiple element airfoil was extruded from a single line that wraps all the elements.
Extrusion methods are limited in their application because you only have control over one boundary (the initial boundary, often called the extrusion front or just front). One common use for extrusion
is with grids that extend to the farfield where that outer boundary shape need not be stringently defined. Another very popular usage of extrusion methods is for overset grids. Extrusion in Pointwise
consists of two subclasses of methods, algebraic and hyperbolic PDE:
• Algebraic. Using a set of simple algebraic equations, a grid is created by sweeping a lower order grid along a line, along a vector, or around an axis. Figure 9 demonstrates how these methods can
be used to create complex shapes.
• Hyperbolic PDE. The grid generation equations are recast as hyperbolic PDE and a solution is obtained by marching outward from the initial grid.
Figure 9: A simple butterfly topology surface grid (orange and yellow) is extruded along a line (green), around an axis (red), and along a user-defined path (blue) to create a volume grid.
The hyperbolic PDE technique for structured grid extrusion is significantly more sophisticated than the algebraic methods. The PDEs are cast in terms of the marching direction (typically orthogonal
to the extrusion front) and the step size with various controls on each attribute. As listed below.
• Step Size. In addition to specifying the height of the first grid cell, you can control how rapidly it grows and place limits on its minimum and maximum size.
• Smoothing. Four different types of tunable smoothing parameters are available. (These are necessary because numerical solutions to hyperbolic PDEs can be a little finicky.)
• Quality criteria. You can specify limits on grid quality attributes like cell skewness and aspect ratio or even a total height of the extrusion layer. The extrusion stops when those criteria are
Figure 10: Hyperbolic PDE extrusion can be used to create near-wall layers of hex cells.
Boundary Conditions
Even though the hyperbolic PDEs only give you control over the initial front, you can apply boundary conditions to the extrusion to influence its behavior. For overset grid applications the main (and
default) boundary condition is splay with which you can influence how widely the sides of the extrusion splay outward as the extrusion proceeds. As you can imagine, this is good for overset grids as
it ensures sufficient overlap between adjacent grids for the interpolation to proceed. Other boundary conditions include:
• Database constrained. The boundary will follow the contours of the CAD model.
• Adjacent grid. The boundary will match point-to-point with an existing grid.
• Symmetry
• Constant X, Y, or Z
Conforming to CAD Geometry
Based on descriptions of the previous methods, it will not surprise you to learn that the hyperbolic PDE technique also has built-in capabilities to ensure that surface grids adhere to the CAD model
where appropriate. A unique feature of Pointwise’s implementation of hyperbolic PDE methods is the ability to extrude a surface while keeping it constrained to CAD surfaces. The grid in Figure 10 for
a notional space vehicle was extruded using the hyperbolic PDE method from a single curve that wraps around the cross section at the base of the wings. After each point is extruded via the PDE
solution it is projected onto the CAD model.
Figure 11: This surface grid was generated by extruding the grid from the aft cross-section forward while keeping it constrained to the CAD model.
Structured Grids Are Here to Stay
Hopefully, after reading this you will understand how Pointwise provides a great breadth of capability in its structured grid generation methods, enabling you to use structured grids in your
simulations. One point to remember is that by using grid generation methods that are sufficiently broad in their applicability, you don’t have to use grid topology as a crutch to achieve your desired
results. Structured grids will continue to be used in CFD for a long time to come and we plan to continue adding new capabilities to Pointwise to make them even more amenable to your work.
Figure 12: Turbomachinery is another example of an application area that is extremely well suited for structured grids. larger image
Give Pointwise a try and find out if a structured grid is a good fit for your application. A no-cost, no-obligation trial is available.
1. 5th AIAA CFD Drag Prediction Workshop
2. 2nd AIAA CFD High Lift Prediction Workshop
3. Numerical Grid Generation - Foundations and Applications, Joe E. Thompson, Z.U.A. Warsi and C. Wayne Mastin.
4. Winslow, A.M.: Numerical Solution of the Quasilinear Poisson Equations in a Nonuniform Triangle Mesh, Journal of Computational Physics 2, 149-172 (1967) | {"url":"http://www.cfdreview.com/gridgen/13/05/14/2235219.shtml","timestamp":"2014-04-19T07:18:50Z","content_type":null,"content_length":"55828","record_id":"<urn:uuid:107a38cb-e22e-40d3-910f-a24629109ecd>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00033-ip-10-147-4-33.ec2.internal.warc.gz"} |
Do-It-Yourself (DIY) Investor
One time, a few years ago, I was explaining to a couple the importance of diversification and why I prefer exchange traded funds to mutual funds and how different funds are used to satisfy an asset
allocation. Basically, an explanation I had given numerous times. As usual, I ended up by asking if there were any questions. The man said "Just one - what is a mutual fund?"
This taught me the basic lesson that most advisors (but not all!) learn at some point, which is that the jargon that we so glibly use is not familiar to a lot of people. Furthermore, I'm convinced
that it is the single most important point fund reps making 401(k) presentations could work on. I've attended presentations where reps have droned on and on, enamored with their power point graph,
about Sharpe ratios to groups comprised mostly of people who have no idea of the importance of the risk/return tradeoff in portfolio construction much less the ratio between a risk-free rate and a
standard deviation. These meetings remind me of a time when I wandered into a conference seminar on "Advances in Linux" - or something like that. As far as I was concerned, it could have been a
seminar on speaking Swahili - unlike the rest of the audience, I had no idea what the presenter was talking about.
One term that could confuse the layman is "basis point." We like to say, for example, that a single-A corporate bond yields 111 basis points more than the corresponding U.S. Treasury note. What the
heck does this mean?
A basis point is simply one one-hundredths of a percent. If one bond yields 5% and another bond yields 5.25%, then the difference is 25 basis points. You can easily see that talking in terms of basis
points is better and more convenient than saying the bond yields 25 one one-hundredths more. Sometimes basis point is shortened to "bips" as in "the bond yields 25 bips more."
Basis Points and Investment Costs
An important area where basis points comes up is in thinking about investment costs. Investment costs is an aspect of investing that an investor can control. As is often pointed out, there are parts
of the investment process that people obsess over that they, in fact, can't control. The prime example is the markets. No amount of gnashing of teeth and towel wringing will change what the market
will do. It is better to focus on what you can control.
Most investors, with a small bit of effort, can reduce investment costs by up to 50 basis points (or bips-- your call on the jargon) by paying attention to expense ratios and talking to an advisor to
see if 401(k)s can be rolled over to where less expensive funds are available, etc..
But is it worth it? Well, consider $100,000 over a 25-year period. 50 bips over this period compounds up to 13.3%, i.e. $13,279 in our example. Here's the kicker - that money will go into your nest
egg or the broker's pocket - your choice. Furthermore, most investors have a longer time frame - they just don't think it through. Suppose you are 50 years old. There is a real good chance that part
of your nest egg will be funding your retirement even 35 years from now!
Homework Questions
1. How many basis points difference is the yield between the 10-year
Treasury note and the 5-year Treasury note? Hint: Go to
2. What is the basis point difference in the expense ratio between FLCSX (Fidelity large cap stock fund) and SPY S&P 500 exchange traded fund? Hint: Go to
2 comments:
1. Very nicely explained Robert! Jargons are one of the main reasons people are afraid to explore this area of financial literacy.
There aren't many blogs that discuss raw basics. I'm glad you are filling that void.
2. Thank you. I think most everyone has been in a situation where jargon is thrown around and they want to ask a question but they feel like an idiot because everyone else seems to understand.
Sadly, the financial services industry uses that to their advantage in certain instances. | {"url":"http://rwinvesting.blogspot.com/2012/12/whats-basis-point.html","timestamp":"2014-04-16T16:57:46Z","content_type":null,"content_length":"119626","record_id":"<urn:uuid:0a546cce-728d-4ef0-b73c-ea0972d5c03d>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00037-ip-10-147-4-33.ec2.internal.warc.gz"} |
Think about what you have:
const int MAX = 5;
int arr[MAX];
So how many ELEMENTS will be IN that array???
Yes, 5.
But HOW are they NUMBERED???
Yes, zero through 4. Only.
So even your
display( )
function is wrong:
void array::display()
for (short i = 0; i <= MAX; i++)
You should be using < MAX, *NOT* <= MAX.
SO now look at your code to reverse the elements, *EVEN AFTER* we change the <= to just < :
void array::reverse()
for (short i = 0; i < MAX; i++)
arr[i] = arr[(MAX + 1)- i];
Let's "unroll" the loop and see what you are *ACTUALLY* doing:
arr[0] = arr[(5+1)-0] ==>> arr[0] = arr[6]
arr[1] = arr[(5+1)-1] ==>> arr[1] = arr[5]
arr[2] = arr[(5+1)-2] ==>> arr[2] = arr[4]
arr[3] = arr[(5+1)-3] ==>> arr[3] = arr[3]
arr[4] = arr[(5+1)-4] ==>> arr[4] = arr[2]
You *SHOULD* have been using
arr[i] = arr[(MAX - 1)- i]; // -1, *NOT* +1
But that's *STILL* not enough!
Let's make an example array:
arr[0] = 777
arr[1] = 222
arr[2] = 444
arr[3] = 888
arr[4] = 333
With YOUR code, even after my -1 adjustment, you would be doing:
arr[0] = arr[(5-1)-0] ==>> arr[0] = arr[4], so arr[0] is now 333
arr[1] = arr[(5-1)-1] ==>> arr[1] = arr[3], so arr[1] is now 888
arr[2] = arr[(5-1)-2] ==>> arr[2] = arr[2], so arr[2] is still 444
arr[3] = arr[(5-1)-3] ==>> arr[3] = arr[1], so arr[3] is now 888! got the NEW value of arr[1]!
arr[4] = arr[(5-1)-4] ==>> arr[4] = arr[0], so arr[4] is now 333! got the NEW value of arr[0]!
So you can see that this is complete mistake! You can *NOT* swap an array like that, *AT ALL*.
I'm sure this is homework for some class, and we don't do homework. (Or at least I won't.) We WILL help you when you run into trouble, as I have just done.
But now it is time for you to go put on your thinking cap and figure out the *RIGHT* way to swap an array.
p.s.: Weren't you suspicious when you couldn't even get your
populate( )
function to work??? Didn't it give you an error when you tried to put the 6th value into your 5 element array? Or are you possibly not running this with a debugger and you just got an undiagnosed | {"url":"http://p2p.wrox.com/c-programming/72517-how-reverse-array-here.html","timestamp":"2014-04-21T10:03:26Z","content_type":null,"content_length":"55389","record_id":"<urn:uuid:0f825473-9612-429c-8d1f-3cd210534ffd>","cc-path":"CC-MAIN-2014-15/segments/1398223203235.2/warc/CC-MAIN-20140423032003-00097-ip-10-147-4-33.ec2.internal.warc.gz"} |
Cryptology ePrint Archive: Report 2011/237
The block cipher NSABC (public domain)Alice Nguyenova-Stepanikova and Tran Ngoc DuongAbstract: We introduce NSABC/w – Nice-Structured Algebraic Block Cipher using w -bit word arithmetics, a 4w -bit
analogous of Skipjack [NSA98] with 5w -bit key. The Skipjack's internal 4-round Feistel structure is replaced with a w -bit, 2-round cascade of a binary operation (x,z)\mapsto(x\boxdot z)\lll(w/2)
that permutes a text word x under control of a key word z . The operation \boxdot , similarly to the multiplication in IDEA [LM91, LMM91], bases on an algebraic group over w -bit words, so it is also
capable of decrypting by means of the inverse element of z in the group. The cipher utilizes a secret 4w -bit tweak – an easily changeable parameter with unique value for each block encrypted under
the same key [LRW02] – that is derived from the block index and an additional 4w -bit key. A software implementation for w=64 takes circa 9 clock cycles per byte on x86-64 processors. Category /
Keywords: secret-key cryptography / block ciphers, tweakable, algebraic, modular multiplication, IDEA, SkipjackDate: received 12 May 2011, last revised 17 May 2011Contact author: tranngocduong at
gmail comAvailable format(s): PDF | BibTeX Citation Note: Terminology correction. Version: 20110518:021153 (All versions of this report) Discussion forum: Show discussion | Start new discussion[
Cryptology ePrint archive ] | {"url":"http://eprint.iacr.org/2011/237","timestamp":"2014-04-16T16:32:01Z","content_type":null,"content_length":"2675","record_id":"<urn:uuid:165ea411-c92c-41d0-96e5-8401d9048333>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
Convolution: Part 2 of 3
(Part Two of a Three part series - ed.)
Back to Part One.
On to Part Three.
Evidently we need to find a better solution. Several design choices present themselves:
1. Do we continue to represent the farms as point data with emissions values or do we represent the farm emissions as values within a grid, as suggested above?
2. Do we have to represent the kernel as a grid of cells or is there perhaps a better way?
In some circumstances, three hours to do the calculation is good enough. Maybe the calculation only has to be performed once, or rarely (perhaps once a year, or when lots of new data come in). Maybe
there is a workstation dedicated to doing the computation once a day, so completing it in anything less than a day is ok. If that were the case, we would pursue this grid + grid (data + kernel)
design and hope to accomplish it entirely within our GIS.
Unfortunately, the purpose our modeler has in mind is more demanding. He wants to evaluate different scenarios. What happens if emissions can be reduced at one set of farms? Where should they be
reduced, and by how much, to get total deposition below a threshold? This requires making frequent modifications to the data and running the model again and again. In such circumstances people need
computation to be immediate.
If we take immediate to mean within one minute, which is typical of even simple operations on large grids, then we have a long way to go.
What can be done to speed up the convolution?
The obvious thing is to keep the representation of the farms as 4,000 points, rather than converting them into an emissions grid. After all, the grid has three million cells, which are 750 times as
many as we really need. This suggests two solutions.
The first is output-oriented. To compute the value of each cell in the output grid, we find each farm within 3000 meters of the cell and add its contribution (as computed from the kernel function) to
the output cell. This solution suffers from the need for a farm-finding algorithm, which could slow down the process significantly.
The second is input-oriented. The kernel really is just a method that directly spreads one value (a farm s annual emissions) from a point (the farm) into its neighborhood. If this spreading operation
is fast enough, we might be able to do it 4,000 times over in a short period of time. It is superior to the first in not requiring any searches for nearby farms.
The second approach can be done with Spatial Analyst, using its gridIO library functions and some Avenue scripting. (Kenneth McVay, a prolific and generous programmer, has provided an Avenue
interface to gridIO for the intrepid or desperate analyst. See his CellIO script). The calculation shrinks to eight hours using this approach. On a faster workstation, it could be done in less than
three hours. We will revisit this direct method, but first let s cast about for a different solution.
How about a non-gridded representation of the kernel? Although attractive, it is not clear it could lead to a superior method. Any method to represent the kernel ultimately must lead to computations
on the kernel grid. (Remember those arrows in the figure: the value for each arrow must be computed somehow.) We are probably better off doing those computations once and for all, recording them on a
kernel grid, and letting the computer look up the values in that grid. Lookups are almost always faster than computations, when you have the RAM for it.
By the way, here is a hillshaded representation of our modeler s deposition kernel. It has a sharp peak in the center and tapers almost exponentially away. It is cut off at a 3,000 meter radius.
The deposition kernel (hillshaded relief). Blue = high, red = low. It takes discontinuous steps at 100 and 1000 meters; the radius is 3000 meters. See it spatial perspective.
The problem asks specifically for a Spatial Analyst solution. The first thing is to review the Spatial Analyst toolkit. The documentation refers to the things you can do with grids as requests .
They are all listed on one help page (under the Grid(class) topic). Quite a few requests look suspiciously like convolution. The most promising ones include:
• Filter - Performs a focal filter on a grid.
• FocalStats - Calculates a statistic (such as an average) for a user-defined neighborhood, for each cell. Each cell has a unique overlapping neighborhood.
• MakeByInterpolation - Includes the IDW method, which can be considered the result of two intermediate convolutions.
• MakeDensitySurface - Converts a point theme to a grid and then convolves with a constant circular kernel or an approximate Gaussian kernel.
However, many other requests are really specialized convolutions or can be made to simulate convolutions: Resample, Slope, and Aspect are among these. Spatial Analyst has so many powerful requests
that a little time spent mulling them over is usually time well spent.
Most of these requests have obvious built-in limitations. Filter, for example, is a convolution with a specific 3 x 3 grid. Two such convolutions are equivalent to a convolution with a 5 x 5 grid,
three convolutions with a 7 x 7 grid, and so on. (Each successive convolution spreads the grid values two cells further away.) However, it would take 60 such operations to spread out to 120 cells
away, and we don t get any choice of kernel. That looks like a dead end.
The MakeDensitySurface request could be used in some circumstances, but the two kernels it uses are very flat in the vicinity of the input cell. They just cannot be readily combined to reproduce the
sharp peak of our kernel.
What about the IDW technique? I mentioned earlier that it is really two convolutions. Its kernel tapers off according to some inverse power of distance. The power is under user control. Any inverse
power of distance is sharply peaked (infinite, actually) near the input cell, which is good for us. Unfortunately, the IDW output from Spatial Analyst does not allow one to recover the intermediate
convolutions that were performed. There seems to be no way around this.
That leaves us with the most obvious candidate, FocalStats, which we already know to be too slow.
4. Hitting the literature
One of the greatest benefits of recognizing what it is we are doing is that it connects us with centuries of scholarship and clever thinking. By knowing at least the name for this operation
convolution we have a key to this literature. A little research will reveal a wonderful related technique: Fourier transforms.
The Fourier transform was developed two centuries ago as a tool for writing (nearly) arbitrary functions as sums of trigonometric series. However, we do not really need to know much about it. The key
points are:
• The Fourier transform produces a new grid of the same size and shape as the original grid.
• Computing the Fourier transform takes a number of computer operations that is roughly proportional to the number of cells in a grid. (This is the Fast Fourier Transform, or FFT.)
• With minor modifications, the Fourier transform is its own inverse: applying it twice produces a grid that is related to the original grid in a very simple manner.
• To do the FFT really quickly requires holding the entire grid in RAM at once.
• You actually need to put a special collar of zero values around your grid to do the FFT correctly, so it takes more RAM than you think.
What makes the Fourier transform so valuable in this instance is that when you multiply the Fourier transforms of two grids (on a cell by cell basis, so this is quick and easy) you get the Fourier
transform of the convolution of the two grids! This means that the following algorithm solves our problem:
• Take the FFT of the emissions grid.
• Take the FFT of the kernel grid.
• Multiply the two FFTs.
• Take the inverse FFT of that product.
Each step takes a number of operations essentially proportional to the size of the grid involved. So, rather than performing 240 * 240 operations for each of 1600 * 2000 cells, we only need to
perform a hundred or so operations for each cell. That is an enormous speed up. Indeed, if you merely want a moving window average, this technique is faster than the Spatial Analyst FocalStats
request for windows 6 x 6 or larger.
The origin of this project.
On to Part Three.
Back to Part One. | {"url":"http://www.directionsmag.com/features/convolution-part-2-of-3/129751","timestamp":"2014-04-18T20:59:22Z","content_type":null,"content_length":"71349","record_id":"<urn:uuid:20fd89aa-d57b-487f-b6ea-059899be64fa>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00506-ip-10-147-4-33.ec2.internal.warc.gz"} |
Evolution and Game Theory
Larry Samuelson
The Journal of Economic Perspectives
Introduced by John von Neumann and Oskar Morgenstern (1944), energized
by the addition of John Nash's (1950) equilibrium concept, and popularized
by the strategic revolution of the 1980s, noncooperative game theory has become a standard tool in economics. In the process, attention has increasingly been focused on game theory's conceptual
foundations. Two questions have taken center stage: Should we expect Nash equilibrium play-that is, should we expect the choice of each player to be a best response to the choices of the other
players? If so, which of the multiple Nash equilibria that arise in many games should we expect?
In the 1980s, game theorists addressed these questions with models based on the assumptions that players are perfectly rational and have common knowledge of this rationality. In the 1990s, however,
emphasis has shifted away from rationality-based to evolutionary models. One reason for this shift was frustration with the limitations of rationality-based models. These models readily motivated one
of the requirements of Nash equilibrium, that players choose best responses to their beliefs about others' behavior, but less readily provided the second requirement, that these beliefs be correct.
Simultaneously,rationality-basedcriteria for choosing among Nash equilibria produced alternative "equilibrium refinements9'-strengthenings of the Nash equilibrium concept designed to exclude
implausible Nash equilibria-with sufficient abandon as to prompt despair at the thought of ever choosing one as the "right" concept. A second reason for the shift away from rationality-based game
theory was a change in the underlying view of what games represent. It was once typical to interpret a game as a literal description of an idealized interaction, in which an assumption of perfect
rationality appeared quite
Larry Samuelson is Professor of Economics, University of Wisconsin, Madison, Wisconsin. His e-mail address is.
natural. It is now more common to interpret a game, like other economic models, as an approximation of an actual interaction, in which perfect rationality seems less appropriate.
The term "evolutionary game theory" covers a wide variety of models. The common theme is a dynamic process describing how players adapt their behavior over the course of repeated plays of a game. The
interpretations of this dynamic stretch from biological processes acting over millions of years to cultural processes acting over generations to individual learning processes acting over the few
minutes that pass between rounds of an experiment. The dynamic process potentially provides the coordination device that brings beliefs into line with behavior, pro- viding the second requirement for
a Nash equilibrium. It also provides a context for play that may be useful in assessing multiple equilibria.' Overall, it brings game theory closer to economics by viewing equilibrium as the outcome
of an adjustment process rather than something that simply springs into being.
This essay first describes the basic techniques of evolutionary game theory. I then turn to the questions of whether evolutionary game theory provides support for equilibrium play and whether it
provides insight into which equilibrium we might expect to see. Finally, it is useful to recall a time when general equilibrium theory was the new technique sweeping the profession, spurred by elegant proofs of existence and optimality and prompting concern about questions as to why we should expect equilibrium outcomes. The result was an explosion of work on tâtonnement and other
adjustment processes. This work taught us much about competitive equilibrium, but had little impact on the practice of economics. Will evolutionary game theory have a similarly negligible effect on
what economists do? Perhaps not. Game theory has given rise to an equilibrium selection problem, potentially amenable to evolutionary techniques, that had no parallel in the case of general
equilibrium theory. The key to the success of evolutionary game theory will be delivering results in this area that will change the way economists practice their craft. The final section suggests
where we might look for such results.
Evolutionary game theory has been the subject of several surveys and texts, including Fudenberg and Levine (1998), Hofbauer and Sigmund (1988), Mailath (1998), Samuelson (1997), van Damme (1991,
chapter 9), Vega-Redondo (1996), Weibull (1995) and Young (1998). I will accordingly not attempt a survey of the literature in this paper and will also feel free to suppress technical details or
simplify models whenever helpful.
Dynamic models based on adaptive behavior have a long history in economics. A distinguishing feature of evolutionary game theory is an explicit model of the strategic considerations that give rise to
individual behavior and that convert this behavior into economic outcomes.
Evolutionary Models
Biological Antecedents
True to its name, evolutionary game theory appeared first in biology. The central concept of an evolutionarily stable strategy was introduced by Maynard Smith and Price (1973) and developed further
in Maynard Smith's (1982) influential Evolution and the Theory of Games. Dawkins (1989, p. 84) suggests that evolutionary stability is potentially "one of the most important advances in evolutionary
theory since Darwin."
The context used to interpret an evolutionarily stable strategy envisions a large population of agents who are repeatedly, randomly matched in pairs to play a game. This underlying game is assumed to
be symmetric, in the sense that i) the players choose their strategies from identical sets, and the payoff to a player choosing a particular strategy against an opponent choosing an alternative is
the same regardless of the identities or characteristics of the players; and ii) players cannot make their strategy choices conditional on any characteristics such as which is the larger or older, or
which is the row player. The payoffs in the game are assumed to represent "fitnesses," in the sense that a process of natural selection will favor those who earn higher payoffs.
Now suppose that everyone in the population plays a "common" strategy, except for a tiny toehold of "mutants" who play an alternative strategy. If the common strategy earns a higher expected payoff
than the mutant strategy, then we can expect selection to eliminate the latter. If this outcome holds for any possible mutant strategy, then the common strategy is said to be evolutionarily stable.
An evolutionarily stable strategy is thus a strategy that, once pervasive in the population, can repel any (sufficiently small) mutant advance.
Translating this higher-expected-payoff-than-any-mutant condition into payoffs, any strategy that is a strict best response to itself (that is, earns a strictly higher payoff against itself than
does any alternative) will be evolutionarily stable. Because such a strategy earns a higher payoff against itself than does any mutant, it will earn a higher average expected payoff than the mutant
in any population in which the mutant toehold is sufficiently small.
A strategy may also be evolutionarily stable if it is only a weak Nash equilibrium (that is, if there is some other strategy that fares as well against the candidate for evolutionary stability as
does the candidate itself), but only if the candidate strategy then satisfies the stability condition that it earns a higher payoff when facing any alternative best response than does the alternative
itself. In this case, the evolutionarily stable strategy secures a higher expected payoff in the population not through its unrivaled performance against itself, where it ties with the mutant, but
by performing better against the mutant.
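For reference, these two cases can be stated compactly. The display below is a standard formulation of the Maynard Smith-Price definition; the notation u(x, y), meaning the payoff to playing x against an opponent playing y, is introduced here for compactness and does not appear in the original text. A strategy s is evolutionarily stable if, for every mutant strategy m different from s,

\[
u(s,s) > u(m,s)
\quad\text{or}\quad
\bigl[\, u(s,s) = u(m,s) \ \text{and}\ u(s,m) > u(m,m) \,\bigr].
\]

The first alternative is the strict-best-response case; the second is the weak Nash equilibrium case together with the stability condition described above.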
The criteria for evolutionary stability are thus more demanding than those for a Nash equilibrium. Any evolutionarily stable strategy must be at least a weak best response to itself, and is hence a
Nash equilibrium, but a weak Nash equilibrium that fails the stability condition is not evolutionarily stable.
Figure 1
Hawk-Dove Game

              Hawk                    Dove
Hawk    (V - C)/2, (V - C)/2        V, 0
Dove    0, V                        V/2, V/2
Maynard Smith (1982) opened his book with the hawk-dove game, which has since become a standard setting for discussions of evolutionary stability in biology. This game, shown in Figure 1, involves
two players who contest a resource worth V. If one player is aggressive (Hawk) and the other acquiescent (Dove), then the former gets the resource and the latter nothing. Each has an equal chance at
the resource if both are aggressive or both passive, with mutual aggression causing each to incur an injury cost of C > V with probability ½. The hawk-dove game has a unique evolutionarily stable strategy, given by the mixed strategy in which Hawk is played with probability V/C.
Knowing that a strategy is evolutionarily stable tells us something about a population in which everyone chooses that strategy. But do we have any reason to expect such a state of affairs to arise?
In response to this question, biologists have studied the population dynamics that lie behind the evolutionary stability concept more explicitly. In keeping with the interpretation of payoffs as
fitnesses, let payoffs identify rates of reproduction. Then the composition of the population is described by the replicator dynamic in which the share of the agents playing a given strategy grows at
a rate equal to the difference between the average payoff of that strategy and the average payoff of the population as a whole. An evolutionarily stable strategy is asymptotically stable under the
replicator dynamics, meaning that the dynamics converge to the evolutionarily stable strategy from all nearby population configurations, providing a dynamic motivation for the concept of evolutionary
stability. In the hawk-dove game, for example, the replicator dynamic will converge to the state in which proportion V/C of the population plays Hawk. This reproduces the evolutionarily stable strategy, but in the form of a population in which V/C of the players in the population play Hawk and 1 - V/C play Dove, rather than a situation in which every player chooses a mixed strategy, which leads to Hawk with probability V/C.
leads to Hawk with probability V/ c.~
The symmetry assumption rules out strategies such as "play Hawk when row player; Dove when column player." The mixed equilibrium must have the property that Hawk and Dove give identical expected
payoffs against an opponent playing the mixed equilibrium. Hence, if pHis the probability attached to Hawk, it must be that pH(: -(V -C)) + (1 -pH)V = (1 -pH)V/ 2, which requires pH = V/C.
Let x_i be the share of the population choosing pure strategy i. The growth rate of the population share x_i is given by the difference between the average payoff of strategy i (denoted by π_i) and the average payoff of all strategies in the population (denoted by π̄): (dx_i/dt)(1/x_i) = π_i - π̄.
Mixed-strategy equilibria have long been a source of uneasiness in game theory. Why should players who are indifferent between strategies, as they must be in a mixed equilibrium, randomly choose between these strategies in precisely the proportion required for equilibrium? Harsanyi (1973) introduced a model in which players' payoffs are those specified in the game, plus small privately observed perturbations. Players choose pure strategies that are strict best responses given their payoff perturbation. Remarkably, Harsanyi showed that no matter what the nature of the perturbations, the resulting game has a pure-strategy equilibrium that approximates the original mixed equilibrium of the unperturbed game. The evolutionary model of a polymorphic population in which some players choose Hawk and some Dove similarly "purifies" mixed equilibria.
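The convergence claim is easy to make concrete. The following simulation sketch is ours, not the article's: it integrates the replicator equation from the note above with a simple Euler scheme for the hawk-dove game, using the illustrative values V = 2 and C = 3 (any values with C > V give a mixed ESS); the share of the population playing Hawk converges to V/C = 2/3 from any interior starting point.

# Replicator dynamic for a symmetric two-strategy game, here hawk-dove.
# The note above gives the law of motion: dx_i/dt = x_i * (pi_i - pi_bar).

V, C = 2.0, 3.0  # illustrative values with C > V

# Row player's payoff matrix; strategies are indexed [Hawk, Dove].
A = [[(V - C) / 2, V],
     [0.0, V / 2]]

def replicator_step(x, A, dt):
    # pi_i: average payoff of each pure strategy against the population mix.
    pi = [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(x))]
    pi_bar = sum(x[i] * pi[i] for i in range(len(x)))  # population average
    return [x[i] + dt * x[i] * (pi[i] - pi_bar) for i in range(len(x))]

x = [0.1, 0.9]  # start with only 10 percent Hawks
for _ in range(20000):
    x = replicator_step(x, A, dt=0.01)

print(f"share playing Hawk: {x[0]:.4f}  (the ESS predicts V/C = {V / C:.4f})")

Because the increments sum to zero whenever the shares sum to one, the Euler steps stay exactly on the simplex; the step size dt is only a numerical convenience.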
The evolutionarily stable strategy of the hawk-dove game is also its unique symmetric Nash equilibrium. In many biological applications, the Nash equilibrium condition alone suffices to yield the
desired outcome. As a result, perhaps the primary effect of the evolutionary stability concept in biology has been to popularize the ideas of a noncooperative game and Nash equilibrium.
The concepts of evolutionary stability and the replicator dynamics sweep away much that is of interest to biologists, most noticeably considerations arising out of the genetics of sexual
reproduction. This neglect has prompted a continuing effort to embed evolutionary game theory in more realistic biological models (for example, Eshel, 1991; Eshel, Feldman and Bergman, 1998).
Evolutionary Stability in Economic Models
Evolutionary ideas have a long history in economics, with origins that predate biological applications. Darwin (1887, p. 83) acknowledged the influence of Malthus and the classical economists in the formation of his theory of natural selection. Alchian (1950) and Friedman (1953) popularized evolutionary metaphors to motivate the "as if" approach to optimization.
Economic theory is now routinely described as assuming not that people are relentless maximizers, but rather that some process of selection-perhaps the tendency of unprofitable firms to fail or the
tendency of people to imitate their more successful counterparts-will cause us to observe people who act as if they are maximizing. This view allows optimization to be a tiny subset of the vast
repertoire of possible human behavior, but makes it quite likely that the behavior we observe will be drawn from this subset.
The paper by Nelson and Winter in this symposium discusses this "as if" approach in greater detail.
Evolutionary game theory brings the evolutionary portion of these arguments out of the background. This approach initially gained popularity on the strength of evolutionary stability's ability to
reject some seemingly implausible Nash equilibria. Consider the joint venture coordination game shown in Figure 2. Think of this as a case in which two players have the opportunity to form a joint
venture that will earn a profit of 2 for each of them if they both choose In. If at least one player chooses Out, the opportunity dissipates with no reward and no cost to either player.
This game has two Nash equilibria, given by (In, In) and (Out, Out), but the former appears to be overwhelmingly more compelling than the latter. The intuition directing our attention to (In, In) is reproduced in the argument that only In is an evolutionarily stable strategy in this game. Because In is a best response to Out, and a superior response to itself, a population in which the joint venture is assiduously ignored could be invaded by mutants who exploit the venture, losing nothing against those who ignore the opportunity and gaining when encountering one another, thus ensuring that they fare better on average than those who play Out. Evolutionary stability thus directs our attention to the more plausible Nash equilibrium.
Figure 2
Joint Venture Game

          In       Out
In       2, 2     0, 0
Out      0, 0     0, 0
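The invasion argument above can be verified directly. In this sketch (ours; the strategy labels mirror Figure 2), a share eps of mutants is mixed into a population otherwise playing a common strategy, and the expected payoffs of the two types under random matching in a large population are compared.

# Mutant-invasion check for the joint venture game: the payoff is 2 only
# when both players choose In, and 0 otherwise.

def payoff(s, t):
    return 2.0 if s == "In" and t == "In" else 0.0

def average_payoffs(common, mutant, eps):
    # Under random matching in a large population, any player meets the
    # common type with probability 1 - eps and a mutant with probability eps.
    def expected(own):
        return (1 - eps) * payoff(own, common) + eps * payoff(own, mutant)
    return expected(common), expected(mutant)

print(average_payoffs("Out", "In", 0.01))  # (0.0, 0.02): In-mutants invade Out
print(average_payoffs("In", "Out", 0.01))  # (1.98, 0.0): Out-mutants cannot invade In

Whatever the (small) mutant share, In-mutants strictly outperform a population of Outs, while Out-mutants strictly underperform a population of Ins, reproducing the conclusion that only In is evolutionarily stable.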
One could also justify equilibrium (In, In) by appealing to the cornerstone of the equilibrium refinements literature of the 1980s, that players should avoid weakly dominated strategies. Strategy In
is the only undominated strategy in this game. Selten (1975) introduced the concept of a perfect equilibrium to capture the sense in which dominance considerations make (In, In) more appealing than
(Out, Out) in Figure 2. The intuition behind perfect equilibria is that there is always some chance that any strategy might be played, perhaps by mistake or through some environmental tremble, and so
one should protect oneself against the unexpected by avoiding dominated strategies. The concept of a proper equilibrium (Myerson, 1978) strengthens this by assuming that trembles discriminate among
inferior strategies, attaching arbitrarily less probability to those that are more inferior. But why should rational players tremble, and why should mistakes or trembles have any relationship to
payoffs? Evolutionary stability provides an answer: In symmetric games, evolutionarily stable strategies induce equilibria that are proper (and hence perfect) (van Damme, 1991, Theorem 9.3.4). Any
strategy that is chosen without regard to trembles can thus be displaced by an evolutionarily superior mutant.
Unfortunately, complications loom close behind that prevent us from simply ending the argument with the assertion that evolutionary stability implies proper equilibrium. Some anomalies arise out of
the fact that not all proper equilibria are evolutionarily stable. More importantly, the convenient link between evolutionary stability and refinements of the Nash equilibrium concept does not extend
beyond symmetric games. Instead, Selten (1980) has shown that in asymmetric games, a strategy is evolutionarily stable only if it is a strict Nash equilibrium for all players to choose this strategy,
that is, only if each player's strategy is a unique best response to the strategies of the other players.
This result has striking implications. In an extensive-form game, for example, a strategy can be evolutionarily stable only if it is a pure strategy and causes every contingency in the extensive form
to be realized in equilibrium. Suppose instead that some contingency is missed. Then there must be an alternative best response whose behavior differs from that of the candidate strategy only in
circumstances that are never realized. Because the differences in the strategies are never realized in the course of playing the game, both strategies attain identical payoffs, leaving no opportunity
for the candidate strategy to gain a payoff advantage against the alternative and hence no prospect of expelling the latter from the population. The candidate strategy then is not evolutionarily stable.
Do we really care if a strategy fails to be evolutionarily stable because there are mutants whose differences do not appear in the course of play? Why not simply revise the definition of evolutionary
stability, still requiring that no mutant earn a higher expected payoff than the evolutionarily stable strategy, but allowing the possibility of a mutant whose play duplicates that of the candidate
strategy? Doing so yields what Maynard Smith (1982, p. 107) called a neutrally stable strategy. Let the mutants come, as long as they produce identical behavior.
Unfortunately, mutants who produce behavior identical to that of an existing strategy can be of tremendous importance. For example, the Tit-for-tat strategy, which cooperates on its first move and mimics the opponent's previous move thereafter, was initially suggested as an evolutionarily stable strategy for the repeated prisoners' dilemma, a result made all the more appealing by the fact that it yields perpetual cooperation. However, the strategy Cooperate, which simply cooperates all of the time regardless of what its opponent does, behaves identically to Tit-for-tat under any circumstances that can be reached when the two play the game, differing only in the event of the unrealized contingency of a defection.7 Unlike the Tit-for-tat strategy, the Cooperate strategy can be exploited by opponents who defect. The stability of cooperative behavior can thus depend critically upon whether a population consists primarily of Tit-for-tat or the seemingly identical Cooperate strategy.
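The behavioral equivalence is easy to see in a simulation (a minimal sketch; the strategy encoding is mine):

# Tit-for-tat and Cooperate are indistinguishable against each other,
# but fare very differently against a defector.
C, D = "C", "D"

def tit_for_tat(history):
    # Cooperate first; thereafter copy the opponent's last move.
    return C if not history else history[-1]

def cooperate(history):
    return C

def defect(history):
    return D

def play(s1, s2, rounds=5):
    h1, h2 = [], []  # each player's record of the opponent's moves
    moves = []
    for _ in range(rounds):
        a, b = s1(h1), s2(h2)
        h1.append(b); h2.append(a)
        moves.append((a, b))
    return moves

print(play(tit_for_tat, cooperate))  # identical streams of (C, C)
print(play(tit_for_tat, defect))     # punishes: (C, D), then (D, D), ...
print(play(cooperate, defect))       # exploited: (C, D) forever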
The potential instability of neutrally stable strategies is a general problem. For example, no strategy can gain a payoff advantage over the strategy of Tat-for-tit, one of the neutrally stable strategies in Binmore and Samuelson's (1992) repeated prisoners' dilemma with complexity costs.8 But there are alternative best responses that do just as well as Tat-for-tit and hence that can creep into the population without being expelled by Tat-for-tit. In addition, there are predatory strategies that can gain an advantage over some of these infiltrators, so that the appearance of the latter opens the door to mutants who may destroy the efficient outcome. There seems to be no hope for ascertaining whether Tat-for-tit or efficient play more generally can be expected to survive such onslaughts short of explicitly examining the evolutionary dynamics. As a result, attention in evolutionary game theory has increasingly turned to dynamic models.
6 The concept of neutral stability has proven useful. Binmore and Samuelson (1992), working with repeated games (including the prisoners' dilemma) in which players prefer simpler to more complex strategies (other things equal), show that neutrally stable strategies exist and give efficient outcomes. Fudenberg and Maskin (1990) obtain a similar result. Matsui (1991) and Kim and Sobel (1992) exploit analogous ideas to identify conditions under which evolution will select efficient outcomes in cheap-talk games, which are games in which play is preceded by opportunities for nonbinding communication, or "cheap talk." (For an introduction to cheap-talk games in this journal, see Farrell and Rabin, 1996.) These results are welcome in light of the frustrating abundance of equilibria in both repeated games and cheap-talk games.
7 Hence, Tit-for-tat is neither a strict Nash equilibrium nor evolutionarily stable (nor is any other strategy) in the repeated prisoners' dilemma.
8 The Tat-for-tit strategy defects in the first period and thereafter changes its action whenever the opponent defects. Two Tat-for-tit players produce an initial period of defection followed by perpetual cooperation, enforced by the fact that a defection prompts the opponent to switch back to defecting. Once complexity costs are incorporated, Tit-for-tat is not even neutrally stable in the repeated prisoners' dilemma, with the simpler Cooperate strategy being a better response.
Evolutionary Dynamics in Economic Models
Economists working with dynamic models envision a population of players who are repeatedly, randomly matched to play a game. Each player is equipped with a behavioral rule that chooses strategies in the game as a result of the player's experience, typically interpreted as modeling a learning or imitation process. A wide range of such behavioral rules have been examined, from simple stimulus-response rules that attribute no cognitive activity to the agent to models in which agents play best responses to expectations that are the result of complicated Bayesian inference problems, though it is common to build some degree of "bounded rationality" into the model. If nothing else, players are typically assumed to ignore any effect that their current actions might have on the future behavior of their compatriots, a formulation often motivated by an assumption that the population is quite large.
This setting immediately suggests some limitations for evolutionary game theory. First, if bounded rationality is an important constraint, then we would expect subjects simply to ignore those games
in which the payoff consequences of their choices are small. Scarce reasoning resources will not be expended without some reasonable prospect of gain. Secondly, if players must learn which strategies
are advantageous, then we cannot expect to say much about games that are played too infrequently or that are too complicated for such learning to occur. Combining these considerations, we might
expect the amount of experience required for players to hit upon suitable behavior to decrease when the stakes increase and the complexity of the problem decreases, as players juggle the bounded
rationality constraint by bringing more sophisticated learning processes to bear on games where they are more likely to make a difference. Very few people play bridge well the first time they play
it, no matter how much they have studied the game beforehand. No matter how often one allows them to try, and how much one pays for a correct answer, most people will not learn to prove Goldbach's
Conjecture (that every even integer larger than two is the sum of two primes). Most of us continually fall prey to optical illusions, simply because it does not pay to be constantly on guard against them.
We thus cannot expect the results of evolutionary game theory always to be applicable. I do not find this constraint particularly troubling, nor do I think it special to games. Instead, I suspect
that this is a general characteristic of human behavior: it becomes more deliberate, more measured and seemingly more rational as the consequences increase, the problem becomes more straightforward
and familiarity with the problem increases. The question of how important and how familiar a problem must be before our analysis is likely to be useful pervades all of economics.
One approach to dynamic evolutionary models is based on examining deterministic difference or differential equations that describe the proportion of the players in the population playing the various strategies. The typical motivation behind such studies is that individual behavior is likely to be both quite complex and stochastic, but that forces akin to the law of large numbers are likely to ensure that in large populations this behavior averages to something reasonably deterministic and, one hopes, reasonably simple. The dynamics themselves come in many varieties. Some studies have simply adopted the replicator dynamics. However, in response to concerns that the biologically motivated replicator dynamics may not be appropriate in economic settings, work is typically done with a more general dynamic satisfying a monotonicity condition. This latter condition imposes some version of the requirement that the population shares of high-payoff strategies grow more quickly than those of low-payoff strategies, without imposing the specific structure of the replicator dynamics, and is often interpreted as assuming that, on average, the players are able to switch from worse to better strategies.9
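As a concrete illustration, the replicator dynamic for a symmetric 2x2 coordination game takes only a few lines (a sketch; the payoffs are hypothetical, chosen only to reproduce the 80 percent threshold discussed below, not a calibration of the paper's Figure 3):

# Replicator dynamic for a 2x2 coordination game with hypothetical
# payoffs u(X,X)=1.0, u(X,Y)=0.0, u(Y,X)=u(Y,Y)=0.8, so the mixed
# equilibrium sits at a population share of x = 0.8 playing X.
def replicator_step(x, dt=0.01):
    # x = share of the population playing X
    u_x = x * 1.0 + (1 - x) * 0.0    # payoff to X against the population
    u_y = x * 0.8 + (1 - x) * 0.8    # payoff to Y
    u_bar = x * u_x + (1 - x) * u_y  # average payoff
    return x + dt * x * (u_x - u_bar)

for x0 in (0.75, 0.85):
    x = x0
    for _ in range(20000):
        x = replicator_step(x)
    print(f"start {x0:.2f} -> long-run share of X = {x:.3f}")
# Starting below 0.8 converges toward (Y, Y); above 0.8, toward (X, X).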
Alternatively, evolutionary models have been built up from explicit specifications of individual behavior, their authors preferring to tackle the vagaries of individual choice directly rather than smoothing them out in an aggregate dynamic. Kandori, Mailath and Rob (1993) consider a collection of players who are repeatedly matched to play a coordination game such as that shown in Figure 3. (Foster and Young, 1990, and Young, 1993, work with similar models.) Time is measured in discrete periods. In each period, the agents are matched to play a round-robin tournament with the other agents in the population. With high probability, each player chooses the strategy that is a best response to the previous period's distribution of play. With small probability, the player is a "mutant" who chooses strategy X or Y, each with probability ½. The result is a stochastic process, whose state space is the set of possible specifications of which players choose which strategies. Behavior in any single period will always be unpredictable, but over time, average behavior will converge to a stationary distribution. Kandori, Mailath and Rob (1993) and Young (1993) show that this type of model can yield particularly strong results when attention is directed to the "limiting distribution," which is obtained by examining the limit of the stationary distributions as the mutation rate becomes arbitrarily small.
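A stripped-down version of this process is easy to simulate (a sketch in the spirit of the model; the population size, mutation rate, horizon and payoff threshold below are illustrative choices of mine, not the paper's):

import random

# Best-response-with-mutations dynamic on the hypothetical coordination
# game above (X is a best response iff more than 80% play X).
N, EPS, T = 20, 0.05, 100_000
state = N  # number of agents playing X; start with everyone at (X, X)
periods_near_y = 0
for _ in range(T):
    br_is_x = state / N > 0.8
    new_state = 0
    for _ in range(N):
        if random.random() < EPS:           # mutant: picks X or Y at random
            new_state += random.random() < 0.5
        else:
            new_state += br_is_x
    state = new_state
    periods_near_y += state < N / 2
print(f"fraction of periods near (Y, Y): {periods_near_y / T:.2f}")
# Despite starting at all-X, for small EPS the process spends nearly all
# of its time near (Y, Y): flipping X -> Y needs only about 20 percent
# simultaneous mutants, while Y -> X needs about 80 percent.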
Figure 4 shows a phase diagram for a monotonic dynamic operating on the coordination game of Figure 3. The axis measures the proportion of the population playing strategy X, ranging from zero (corresponding to the equilibrium (Y, Y)) to one (corresponding to the equilibrium (X, X)). Whenever less than 80 percent of the population initially plays X, so that Y is a best response, the game converges to the equilibrium in which everyone plays Y. If more than 80 percent initially play X, then X is a best response and the equilibrium in which everyone plays X is approached. The system thus always approaches an equilibrium, but which one is selected depends upon the happenstance of how the population is originally distributed between the two strategies.
9 Considerable work has also been done with generalizations of fictitious play (for example, Fudenberg and Levine, 1998), in which players choose best responses to the average of their opponents' past play. This work reinterprets what was originally proposed as a technical tool for calculating Nash equilibria as a learning process.
Figure 3: Coordination Game
Figure 4: Phase Diagram for a Monotonic Dynamic, Applied to the Coordination Game of Figure 3
In contrast, Kandori, Mailath and Rob (1993) conclude that, no matter what the initial condition, the system spends virtually all of its time at the equilibrium (Y, Y), for a sufficiently small mutation probability. Because almost all agents choose a best response in each period, the stationary distribution concentrates virtually all of its probability mass on states where almost all agents play X or almost all play Y. Once near such a state, the system tends to remain there. Occasionally, sufficiently many players will just happen to be mutants, and to switch strategies, as to switch the system from one in which X (or Y) is a best response to one in which Y (or X) is a best response, flipping the system to the other end of the state space. The phase diagram in Figure 4 shows that more mutations (80 percent of the population) are required to make X a best response in a population where everyone plays Y than to accomplish the reverse transition (requiring only 20 percent mutants), making the latter transition more likely. Roughly, a transition requiring simultaneous mutations by a fraction k of the population occurs with probability on the order of the mutation probability raised to the power kN, so as the mutation probability gets small, the 20-percent transition becomes arbitrarily more likely than the 80-percent one, causing all of the probability in the stationary distribution to accumulate on the equilibrium (Y, Y).
The two types of model thus appear to give disconcertingly different results. If most of the population initially plays strategy X, for example, then the deterministic dynamics predicts
convergence to the equilibrium (X, X), while the stochastic process directs sole attention to the equilibrium (Y, Y). Upon closer examination, it is not the models that differ so much as the questions we ask of the models. Binmore, Samuelson and Vaughan (1995), building on the work of Boylan (1995), show that both models provide approximations of the underlying stochastic process. A deterministic dynamic, whose details depend upon the nature of the individual learning processes, (approximately) describes the behavior of the system over finite periods of time, being applicable to longer and longer periods of time in the case of larger and larger populations.10 At the same time, the limiting distribution describes the behavior in the limiting case of an infinite time horizon.
Results of this kind allow us to see that seemingly quite different evolutionary models are often compatible. The key question is not so much one of which model to select from a list of conflicting
contenders, but rather which information to seek about a single underlying evolutionary process, with the answer implying the relevant model. Evolutionary game theory is thus unlikely to provide
context-free answers, nor will it identify a single equilibrium concept as unquestionably "right." The relevant time horizon, population size, individual behavior specification and the interaction
rules will all depend upon the setting to which the evolutionary analysis is applied and, hence, so will the specification of an appropriate model.
I view this abundance of possible outcomes as an advantage. Much of the difficulty in interpreting the contending equilibrium refinements of the 1980s appeared because the models were divorced from
their context of application in an attempt to rely on nothing other than rationality. The resulting models contain insufficient information about the underlying strategic interaction to answer the
relevant questions, at least if game theory is intended to model real interactions rather than to ponder philosophical points. Certain behavior may be quite likely in some settings and quite absurd
in others. It is interesting that students seem to grasp this point instinctively: a common response when queried about how they would behave in a game is to ask, "Who is my opponent?" In general, the
process by which people find their way to an equilibrium may be littered with accidents of history, framing effects, bandwagon effects and endogenous conventions. A useful theory must incorporate
these considerations. Evolutionary game theory provides some tools for bringing them into the theory.
Why Equilibrium?
With this background in mind, we turn to our first question: Do evolutionary models give us any reason to choose Nash equilibrium as a solution concept? Given our previous discussion of evolutionary
stability, attention naturally turns to dynamic models.
Consider first a stationary state of the replicator dynamic, describing the proportions of the population playing the game's various strategies. We say that this state is stable if small perturbations away from the stationary state proportions cannot give rise to dynamics that take the system far from these proportions. Instead, an initial condition close to the stationary state proportions ensures that the system remains perpetually close to these proportions.
10 Surprisingly, in some cases, the replicator dynamic emerges as the relevant deterministic approximation from a model based on learning considerations that appears to have no biological connection, substituting imitation for reproduction as the driving force.
If a state is to be stationary, then all of the strategies played by various members of the population must give the same payoffs, since otherwise the population proportion attached to high-payoff
strategies would be growing at the expense of low-payoff strategies, vitiating stationarity. However, a stationary state may still not be a Nash equilibrium, because there may be superior replies
that are not played by any member of the population, and hence whose population proportion cannot grow (via reproduction or imitation) from an initial proportion of zero. Once a perturbation moves
the system to a nearby state in which such a superior strategy is played, the latter's population share will grow, leading the system away from the original stationary state and ensuring that the
latter is not stable. A stationary state can thus be stable only if there are no superior best responses, in which case the stationary state must correspond to a Nash equilibrium.
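In symbols (a standard textbook formulation, not quoted from this article): writing $x_i$ for the population share playing strategy $i$ and $u(i,x)$ for its payoff against population state $x$, the replicator dynamic is $\dot{x}_i = x_i\,[u(i,x) - \bar{u}(x)]$, where $\bar{u}(x) = \sum_j x_j u(j,x)$ is the average payoff. Stationarity requires $u(i,x) = \bar{u}(x)$ for every $i$ with $x_i > 0$; stability additionally rules out unused strategies with $u(i,x) > \bar{u}(x)$, which is precisely the Nash condition.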
Results of this form (that stability implies Nash equilibrium) also emerge from the various monotonic dynamics that generalize the replicator dynamic, as well as from a wide variety of stochastic models based on individual behavior (with suitably modified notions of stability). Such results are sufficiently common that a typical reaction to an evolutionary model featuring convergence to an outcome that is not a Nash equilibrium would be to argue that the model is misspecified or implausible.
We thus have a qualified positive answer to the question of whether evolutionary game theory provides a motivation for Nash equilibrium. I characterize this as a qualified positive answer because of the conditional nature of the result: an outcome is a Nash equilibrium if it is stable.
Some game theorists would have preferred a result of the form that an evolutionary process must produce convergence to a Nash equilibrium. However, there are ample examples of simple evolutionary
models that yield cyclic or even chaotic behavior instead of convergence to a Nash equilibrium. It remains an open question whether any plausible evolutionary process exists that invariably ensures
convergence." But even if such a process existed, we have little reason to believe that it would faithfully reflect actual behavior.
I view the "stability implies Nash" result as putting game theory on much the same footing as the rest of economics. We do not believe that markets are always in equilibrium, just as we do not
believe that people are always rational or that firms always maximize profits. But the bulk of our attention is devoted to equilibrium
"Hart and Mas-Collel (2000) present a process that always converges to a correlated equilibrium, but argue that the extra complexity of the set of Nash equilibria and the natural propensity for the
history of an adaptive process to induce correlation suggest that we cannot expect the process always to arrive at a Nash equilibrium.
models either because we hope that equilibrium behavior is sufficiently persistent and disequilibrium behavior sufficiently transient that behavior that is robust enough to be an object of study is
(approximately) equilibrium behavior, or because studying equilibrium behavior is our best hope for gaining insight into more ephemeral disequilibrium behavior. Evolutionary game theory thus provides
little reason to believe that equilibrium behavior should characterize all games in all circumstances. But it provides reason to hope that behavior that comes into our field of study is likely to be
equilibrium behavior. In this sense, we obtain a stronger motivation for Nash equilibrium than that provided by rationality-based models.
Which Equilibrium?
Does evolutionary game theory direct our attention to some Nash equilibria rather than others? Again, previous discussion directs our attention to dynamic models. The results are surprising in the extent to which they do both more and less than traditional equilibrium refinements.
Choosing Between Strict Nash Equilibria
To see how evolutionary game theory does more than traditional refinements, consider again the coordination game of Figure 3. This game has two pure-strategy equilibria, given by (X, X) and (Y, Y), each of which is strict, in the sense that each player has a unique best response. With the notable exception of Harsanyi and Selten (1988), the equilibrium refinements literature has concentrated on eliminating Nash equilibria in which there are alternative best responses. Strict Nash equilibria survive all of the conventional refinements, reflecting an intuition that a situation in which everyone has a strict incentive to maintain their current behavior is not easily destabilized.
In contrast, the evolutionary models of Young (1993) and Kandori, Mailath and Rob (1993) make a distinction between these two equilibria, with the limiting distribution allocating almost all of its probability to equilibrium (Y, Y) in Figure 3. More generally, these models select the equilibrium in 2×2 games with the larger basin of attraction under a monotonic dynamic.12
The finding of Kandori, Mailath and Rob (1993) and Young (1993) that the equilibrium with the larger basin of attraction is selected can be reversed with appropriate modifications to the model, as in Robson and Vega-Redondo (1996). The important result is not the selection of a particular equilibrium, but rather the departure from the bulk of the equilibrium refinements literature in distinguishing between contending strict Nash equilibria.
12 The basin of attraction under a monotonic dynamic is the set of (possibly mixed) strategy profiles to which the equilibrium strategy in question is a best response. In 2×2 games, an easy test of which equilibrium has the largest basin of attraction is provided by the fact that this equilibrium is the best response to a 50:50 mixture between the two strategies. This is the equilibrium that Harsanyi and Selten (1988, pp. 82-84) refer to as risk dominant.
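To illustrate footnote 12's test with hypothetical payoffs consistent with the 80 percent threshold of Figure 4 (the numbers are mine, not the paper's): suppose $u(X,X) = 1$, $u(X,Y) = 0$ and $u(Y,X) = u(Y,Y) = 0.8$. Against a 50:50 mixture, X earns $0.5 \cdot 1 + 0.5 \cdot 0 = 0.5$ while Y earns $0.8$, so $(Y,Y)$ is risk dominant, matching both the stochastic selection result and the basin boundary at 80 percent.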
This ability is not an unqualified success. The limiting stationary distribution, as the probability of a mutation goes to zero, may be a reasonable approximation only of what occurs after long
periods of time, with this waiting time becoming very long when the probability of a mutation is quite small. In some cases, the implied waiting times will be sufficiently long that our interest will
center on shorter horizons and the stationary distribution will be irrelevant. In other cases, the stationary distribution may be more useful. Beginning with the work of Ellison (1993), it has been
recognized that waiting times may be significantly reduced if there are spatial or "local" patterns to the interaction between agents. Young (1998) discusses the evolution of social structures that
may occur over sufficiently long periods of time for the theory to be applicable. Much remains to be done, but it is clear that evolutionary game theory has provided new tools to address a difficult problem.
Equilibrium Refinements
To see how evolutionary game theory does less than the equilibrium refinements literature, we return to the cornerstone of the refinements literature: the presumption that weakly dominated strategies should not be played. Consider the game whose normal and extensive forms are shown in Figure 5. Binmore, Gale and Samuelson (1995) interpret this as a simplified version of the ultimatum game. Player 1 must propose an amount of a surplus of size 4 to offer to player 2 and can choose either a high offer of 2 or a low offer of 1. A high offer is assumed to be accepted, while a low offer may be either accepted ("Yes") or rejected ("No").
An equilibrium of this game is subgame perfect if player 1's choice is a best response to player 2's choice and if player 2's Yes/No decision would be a best response in the event that player 1 chooses Low. Backward induction identifies the only subgame perfect equilibrium of this game: player 2 accepts low offers, and as a result, player 1 makes a low offer. There are other Nash equilibria in which player 1 makes a high offer and player 2 plays No with probability at least 1/3.
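The 1/3 cutoff can be checked directly from the payoffs implied by the description (my reading of Figure 5: High yields (2, 2), an accepted Low yields (3, 1), a rejected Low yields (0, 0)). If player 2 rejects with probability $q$, Low gives player 1 an expected $3(1-q)$ against High's certain 2, so High is a best response exactly when $3(1-q) \leq 2$, that is, when $q \geq 1/3$.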
No is a dominated strategy for player 2. It can never earn a higher payoff than Yes, and one would accordingly expect an evolutionary process to exert constant pressure against No. Suppose, however, that a large fraction of the player-2 population initially plays No. High will then produce a higher average payoff for player 1 than Low, and an evolutionary process will also exert pressure against Low. But as fewer and fewer player 1s choose Low, the payoff disadvantage of No dissipates, and hence, so does its evolutionary disadvantage. The result may be convergence to an outcome in which player 1 offers High and a significant fraction of player 2s would reject Low if offered. The dominated strategy No is thus not eliminated.
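The logic is easy to reproduce numerically (a sketch; the payoffs follow the description above, and the initial shares are arbitrary choices of mine):

# Two-population replicator dynamic for the simplified ultimatum game.
# p = share of player 1s offering Low; q = share of player 2s playing No.
# Payoffs: High -> (2, 2); Low accepted -> (3, 1); Low rejected -> (0, 0).
def step(p, q, dt=0.01):
    u_low, u_high = 3 * (1 - q), 2.0                 # player 1 payoffs
    u1 = p * u_low + (1 - p) * u_high
    u_yes = p * 1 + (1 - p) * 2                      # player 2 payoffs
    u_no = (1 - p) * 2
    u2 = (1 - q) * u_yes + q * u_no
    return p + dt * p * (u_low - u1), q + dt * q * (u_no - u2)

p, q = 0.5, 0.6  # many player 2s initially reject low offers
for _ in range(100000):
    p, q = step(p, q)
print(f"share offering Low: {p:.3f}, share playing No: {q:.3f}")
# Low dies out while a substantial fraction of No survives, because the
# evolutionary pressure against No vanishes as Low disappears.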
Binmore, Gale and Samuelson (1995) and Roth and Erev (1995) fill in the details of this argument. However, it seems as if this argument relies too heavily on the fact that player 2s' choice of No is never tested if player 1s make high offers. We expect the world to be a noisy place, certainly noisier than our simple models. It
Figure 5: Simplified Ultimatum Game
seems that this noise should ensure that Low never dies out of the population completely and, hence, that No is always inferior to Yes and should accordingly be eliminated from the population.
Binmore and Samuelson (1999) show that this need not be the case and that the outcome depends delicately on the specification of noise. By continually injecting strategy Low into population 1, noise
introduces a pressure against No, though this pressure may be quite weak if most of the population plays High. In addition, this same noise has the potential to introduce a counterpressure by
continually injecting No into population 2. Either force may win, raising the possibility that dominated strategies are not eliminated from the population.
The conventional interpretation behind these dynamic models is that they represent a learning process. One response to the previous paragraph is then to argue that the ultimatum game is transparently
simple, making it hard to imagine any learning at all being required. This is especially the case for the responders. Their task is to accept some money or decline it. Then what do they have to
learn? Whether money is good? Much depends, however, on our remembering that the games with which we work are meant to be not literal descriptions of reality, but rather models of interactions that
may appear to be much more complicated to the participants. We should not confuse our ability to find Yes transparently obvious in the model with a similar ability on the part of the player "on the ground." President John F. Kennedy is often characterized as having presented Premier Nikita Khrushchev with an ultimatum during the Cuban missile crisis of 1962. Is it likely that Khrushchev found the resulting choice as simple as that shown in Figure 5?
As the dominance relations become more complicated, the elimination of dominated strategies becomes all the more problematic. Whereas suitably tailored evolutionary models will eliminate dominated
strategies in certain contexts (for example, Hart, 2002), a reasonable characterization of the combined results of the literature is that one generally cannot rely on evolutionary processes to
eliminate weakly dominated strategies, much less to perform iterated eliminations. Given the close link between dominance and backward induction arguments, it is no surprise that evolutionary models
also provide very little motivation for backward induction. In terms of sorting between Nash equilibria, the lesson of evolutionary game theory is that we should be less anxious to apply dominance or
backward induction based refinements than the refinements literature would suggest.
This finding dovetails nicely with the recent experimental literature, which provides ample reason to believe that backward induction should not be taken for granted (Davis and Holt, 1993, chapter 5;
Roth, 1995). But in the experiments, the game literally is as transparent as that shown in Figure 5. Then why do we suppose that responders require learning or trial-and-error or experience to know
what to do? We must remember that while the game itself is transparent, the context in which it is played, including the absence of any chance of repeated play or breach of anonymity between the
players, is quite foreign. What do players do when faced with an unprecedented context? One possibility is that they strip away the context and analyze the game. Another is that they search their
experience for the closest analogies they can find, using the game's context as a clue in searching for a similar situation and choosing behavior they have found to be effective in the latter. It may
then take considerable experience to hit upon appropriate analogies, producing a dynamic process that, for reasons described above, need not produce the backward induction solution. The role of
analogies in reasoning is pursued further in Jehiel (2000) and Samuelson (2001).
What Do We Take Away?
Will evolutionary game theory have an impact on the way people practice game theory, or will it fade away, leaving economists to carry on as they have before? The latter will surely be its fate if it
does nothing other than ease our consciences a bit when doing what we've been doing all along, namely examining Nash equilibria. But I believe that evolutionary game theory has the potential to do more.
Evolutionary game theory will have done much if we simply take seriously the caution that dominance and backward induction arguments are not as compelling as they may first appear. It is common in
models of bargaining, contracting and exchange to assume that an agent can be pushed to the brink of indifference and still be relied upon to agree to the deal. Though these arguments appear in a
variety of guises, they are all variations on the assertion that the subgame perfect equilibrium will appear in the ultimatum game. The more we learn from evolutionary games, the less certain can we be that we have good reason for doing so.
For evolutionary game theory to realize its potential, however, it must go beyond warnings about what we should not do to provide results concerning what we should do. Here, evolutionary game theory
runs the risk that plagued the equilibrium refinements literature: so many equilibrium concepts and so little basis for choosing among them in the abstract. However, I regard three areas of research as particularly promising.
Figure 6: Coordination Games
The first is research that removes evolutionary game theory from its abstract setting and links the theory to observed behavior, either in the laboratory or in the field. For example, Battalio,
Samuelson and van Huyck (2001) examine the three games shown in Figure 6. In many respects, these games are strategically identical. They have identical equilibria and best-response correspondences,
and they have identical phase diagrams under best-response, replicator or monotonic dynamics with that phase diagram given by Figure 4. Many rationality-based models would accordingly treat these
three games as equivalent. They differ, however, in that no matter what strategy one expects one's opponent to play, the premium on playing a best response (and hence the penalty for a suboptimal
response) increases as one moves from right to left through the three games. If evolutionary models are on the mark in thinking of behavior as being shaped by trial-and-error learning, and in
thinking of these processes as being more effective when it is more important to make good choices, then we would expect behavior to adjust to an equilibrium more rapidly as one moves from right to
left. This is indeed the pattern in the data (Battalio, Samuelson and van Huyck, 2001, Figure 6 and Table 4).
This is only a single, small step, all the smaller because it examines a behavioral prediction that is particularly intuitive and that might also emerge from many other models. But more research is
appearing that melds evolutionary models with behavioral observation. In assessing this work, one must realize that explaining the dynamics of individual behavior is a formidable task. The early
steps will be modest, but will hold great promise.
Second, evolutionary models rely on settings in which games are played repeatedly. Literally speaking, we never face precisely the same decision twice. Instead, our hope must be that people face
successive games that are sufficiently similar as to be viewed as essentially the same game. Again, this is consistent with a view of games as models, perhaps constructed by the players themselves,
of more complicated strategic interactions.
This view of how people approach games not only potentially widens the purview of evolutionary game theory, but also provides some important insights into how people play games. Return to the
question of which equilibria we should expect in the repeated prisoners' dilemma. The intuition that the players are likely to achieve mutual cooperation conflicts with the intuition that players
should have a preference for simple strategies. In particular, the most common means by which cooperation is thought to be sustained is through strategies like Tit-for-tat, which begin play by
cooperating and continue to cooperate as long as the opponent does so. In equilibrium, the ability to punish defections is never used. But this allows the players to simplify their strategies, at no
sacrifice in payoffs, by eliminating the unused punishment capabilities. This leaves strategies that always cooperate, but whose vulnerability to defecting opponents precludes the existence of an
equilibrium, threatening the ability to sustain mutual cooperation.
The conventional response is to seek strategies that cooperate nearly all of the time while still using their punishment capability (Abreu and Rubinstein, 1988; Binmore and Samuelson, 1992). Suppose,
however, that we think of people as facing a variety of prisoners' dilemma situations. In some of these, the shadow of the future will be insufficiently important to induce cooperation, and mutual
defection will be the fare. In others, the future will be important, and cooperation can be supported by a grim strategy that cooperates initially, doing so as long as the opponent does, switching to
defection otherwise. Notice now that this strategy cannot be costlessly simplified by deleting the ability to defect, since this ability is needed to handle those situations in which defection is
optimal. Considering the games together thus ensures that it is always important to be able to punish defection, allowing cooperation to be sustained whenever it is feasible without running afoul of
complexity constraints.
This view of cooperation is reminiscent of work in evolutionary psychology, suggesting that people have a deep-seated ability to detect and to respond to cheating on norms of behavior (Cosmides and
Tooby, 1992). The common theme is that this punishment ability is a general purpose capability applied as part of one's behavioral mix in a wide variety of settings. In some games, it may never be
used, but the possibility that it might be eliminated in a quest for simplicity is squelched by its usefulness in other problems.
Finally, I think we have much to learn from pushing the evolutionary point of view beyond the simple question of how people behave in games. In particular, our evolutionary background has much to tell us about some of the idiosyncrasies of our preferences. Consider, for example, the question of why we have emotions. I suspect that emotions help us cope with our complex environment. An
appropriately chosen rule of thumb of the form "do the fair thing" or "retaliate when crossed" may be optimal not because it allows us to commit in a seemingly irrational manner to things we would
otherwise not do, but because it allows us to simplify the process by which we arrive at what we would otherwise want to do. This is especially the case if we think of people being faced with a vast
variety of games that differ in intricately complex ways, which must then be simplified into something that can be tractably handled. Lumping possibly dissimilar games together, exploiting analogies
between games and creating such analogies by labeling certain outcomes as fair may all be useful weapons in this process. In essence, we (or Nature, through the process of evolution) simplify our
lives by deciding that things are fair because they are the things we do, not by doing things because they are fair. There may be much to be learned from such an extension of evolutionary game theory.
I thank Ken Binmore, George Mailath, Brad De Long, Timothy Taylor and Michael Waldman for helpful comments and discussions and thank the National Science Foundation
for financial support.
Abreu, Dilip and Ariel Rubinstein. 1988. "The Structure of Nash Equilibrium in Repeated Games with Finite Automata." Econometrica. November, 56:6, pp. 1259-81.
Alchian, Armen. 1950. "Uncertainty, Evolution, and Economic Theory." Journal of Political Economy. 58, pp. 211-21.
Battalio, Raymond, Larry Samuelson and John van Huyck. 2001. "Optimization Incentives and Coordination Failure in Laboratory Stag Hunt Games." Econometrica. May, 69:3, pp. 749-64.
Binmore, Ken and Larry Samuelson. 1992. "Evolutionary Stability in Repeated Games Played by Finite Automata." Journal of Economic Theory. August, 57:2, pp. 278-305.
Binmore, Ken and Larry Samuelson. 1999. "Evolutionary Drift and Equilibrium Selection." Review of Economic Studies. April, 66:2, pp. 363-93.
Binmore, Ken, John Gale and Larry Samuelson. 1995. "Learning to be Imperfect: The Ultimatum Game." Games and Economic Behavior. January, 8:1, pp. 56-90.
Binmore, Ken, Larry Samuelson and Richard Vaughan. 1995. "Musical Chairs: Modeling Noisy Evolution." Games and Economic Behavior. October, 11:1, pp. 1-35.
Boylan, Richard T. 1995. "Continuous Ap- proximation of Dynamical Systems with Ran- domly Matched Individuals." Journal of Economic Theory. August, 66:2, pp. 615-25.
Cosmides, Leda and John Tooby. 1992. "Cognitive Adaptations for Social Exchange," in The Adapted Mind. Jerome H. Barkow, Leda Cosmides and John Tooby, eds. Oxford: Oxford University Press, pp. 163-228.
Darwin, Charles. 1887. The Life and Letters of Charles Darwin, Including an Autobiographical Chapter, Second Edition, Volume I. Francis Darwin, ed. London: John Murray.
Davis, Douglas D. and Charles A. Holt. 1993. Experimental Economics. Princeton: Princeton University Press.
Dawkins, R. 1989. The Selfish Gene. Oxford: Oxford University Press.
Ellison, Glenn. 1993. "Learning, Local Interaction, and Coordination." Econometrica. September, 61:5, pp. 1047-71.
Eshel, Ilan. 1991. "Game Theory and Population Dynamics in Complex Genetical Systems: The Role of Sex in Short Term and in Long Term Evolution," in Game Equilibrium Models. Reinhard Selten, ed. Berlin: Springer-Verlag, pp. 6-28.
Eshel, Ilan, Marcus W. Feldman and Aviv Bergman. 1998. "Long-Term Evolution, Short-Term Evolution, and Population Genetic Theory." Journal of Theoretical Biology. 191:4, pp. 391-96.
Farrell, Joseph and Matthew Rabin. 1996. "Cheap Talk." Journal of Economic Perspectives. 10:3, pp. 103-18.
Foster, Dean and Peyton Young. 1990. "Stochastic Evolutionary Game Dynamics." Theoretical Population Biology. October, 38:2, pp. 219-32.
Friedman, Milton. 1953. Essays in Positive Economics. Chicago: University of Chicago Press.
Fudenberg, Drew and David K. Levine. 1998. Theory of Learning in Games. Cambridge: MIT Press.
Fudenberg, Drew and Eric Maskin. 1990. "Evolution and Cooperation in Noisy Repeated Games." American Economic Review. May, 80, pp. 274-79.
Harsanyi, John C. 1973. "Games with Randomly Distributed Payoffs: A New Rationale for Mixed-Strategy Equilibrium Points." International Journal of Game Theory. 2, pp. 1-23.
Harsanyi, John C. and Reinhard Selten. 1988. A General Theory of Equilibrium Selection in Games. Cambridge: MIT Press.
Hart, Sergiu. 2002. "Evolutionary Dynamics and Backward Induction." Games and Economic Behavior. Forthcoming.
Hart, Sergiu and Andreu Mas-Colell. 2000. "A Simple Adaptive Procedure Leading to Correlated Equilibrium." Econometrica. September, 68:5, pp. 1127-50.
Hofbauer, J. and K. Sigmund. 1988. Evolutionary Games and Population Dynamics. Cambridge: Cambridge University Press.
Jehiel, Philippe. 2000. "Analogy-Based Expectation Equilibrium." Mimeo, University College London.
Kandori, Michihiro, George J. Mailath and Rafael Rob. 1993. "Learning, Mutation, and Long Run Equilibria in Games." Econometrica. January, 61:1, pp. 29-56.
Kim, Yong-Gwan and Joel Sobel. 1992. "An Evolutionary Approach to Pre-Play Communication." Econometrica. September, 63:5, pp. 1181-93.
Mailath, George J. 1998. "Do People Play Nash Equilibrium? Lessons from Evolutionary Game Theory." Journal of Economic Literature. September, 36:3, pp. 1347-74.
Matsui, Akihiko. 1991. "Cheap-Talk and Coop- eration in Society." Journal of Economic Theory. August, 54:2, pp. 245-58.
Maynard Smith, John. 1982. Evolution and the Theory of Games. Cambridge: Cambridge University Press.
Maynard Smith, John and G. R. Price. 1973. "The Logic of Animal Conflict." Nature. 246, pp. 15-18.
Myerson, Roger B. 1978. "Proper Equilibria." International Journal of Game Theory. 7, pp. 73-80.
Nash, John F. 1950. "Equilibrium Points in n-Person Games." Proceedings of the National Academy of Sciences. 36, pp. 48-49.
Robson, Arthur J. and Fernando Vega-Redondo. 1996. "Efficient Equilibrium Selection in Evolutionary Games with Random Matching." Journal of Economic Theory. July, 70:1, pp. 65-92.
Roth, Alvin E. 1995. "Bargaining Experiments," in Handbook of Experimental Economics. John Kagel and Alvin E. Roth, eds. Princeton: Princeton University Press, pp. 253-348.
Roth, Alvin E. and Ido Erev. 1995. "Learning in Extensive-Form Games: Experimental Data and Simple Dynamic Models in the Intermediate Term." Games and Economic Behavior. January, 8:1, pp. 164-212.
Samuelson, Larry. 1997. Evolutionary Games and Equilibrium Selection. Cambridge: MIT Press.
Samuelson, Larry. 2001. "Analogies, Adaptation, and Anomalies." Journal of Economic Theory. April, 97:2, pp. 320-66.
Selten, Reinhard. 1975. "Reexamination of the Perfectness Concept for Equilibrium Points in Extensive-Form Games." International Journal of Game Theory. 4, pp. 25-55.
Selten, Reinhard. 1980. "A Note on Evolutionarily Stable Strategies in Asymmetric Animal Contests." Journal of Theoretical Biology. 84, pp. 93-101.
van Damme, Eric. 1991. Stability and Perfection of Nash Equilibria. Berlin: Springer-Verlag.
Vega-Redondo, Fernando. 1996. Evolution, Games, and Economic Behavior. Oxford: Oxford University Press.
von Neumann, John and Oskar Morgenstern. 1944. Theory of Games and Economic Behavior. Princeton: Princeton University Press.
Weibull, Jurgen. 1995. Evolutionary Game Theory. Cambridge: MIT Press.
Young, Peyton. 1993. "The Evolution of Conventions." Econometrica. January, 61:1, pp. 57-84.
Young, Peyton. 1998. Individual Strategy and Social Structure. Princeton: Princeton University Press.
Example 6.2.8(c):
The function $f(x) = x^2$ is uniformly continuous on any interval $[0, N]$ ($N$ any number), but it is no longer uniformly continuous on the interval $[0, \infty)$. To prove this, take $\epsilon = 1$. Note that
$| f(s) - f(t) | = | s - t | \, | s + t |$
Can you see that if $s = t + \delta/2$ and if $t$ is sufficiently large (depending on the undetermined $\delta$), then $| f(s) - f(t) | > 1$? That would prove that the function is no longer uniformly continuous. The details are left as an exercise.
Note that this argument no longer works on a bounded interval [0, N]. Here we cannot make $t$ 'sufficiently large', since it can be no larger than $N$. And indeed, the function is uniformly continuous on those bounded intervals. Later we will show that any function that is continuous on a compact set is necessarily uniformly continuous.
A new metric-based approach to model selection
Results 11 - 20 of 30
- In ICML-97, 1997
Cited by 15 (4 self)
We investigate the structure of model selection problems via the bias/variance decomposition. In particular, we characterize the essential structure of a model selection task by the bias and
variance profiles it generates over the sequence of hypothesis classes. This leads to a new understanding of complexity-penalization methods: First, the penalty terms in effect postulate a particular
profile for the variances as a function of model complexity--- if the postulated and true profiles do not match, then systematic under-fitting or over-fitting results, depending on whether the
penalty terms are too large or too small. Second, it is usually best to penalize according to the true variances of the task, and therefore no fixed penalization strategy is optimal across all
problems. We then use this bias/variance characterization to identify the notion of easy and hard model selection problems. In particular, we show that if the variance profile grows too rapidly in
relation to the biases t...
, 2002
"... Model selection is an important ingredient of many machine learning algorithms, in particular when the sample size in small, in order to strike the right trade-off between overfitting and
underfitting. Previous classical results for linear regression are based on an asymptotic analysis. We present ..."
Cited by 15 (2 self)
Add to MetaCart
Model selection is an important ingredient of many machine learning algorithms, in particular when the sample size in small, in order to strike the right trade-off between overfitting and
underfitting. Previous classical results for linear regression are based on an asymptotic analysis. We present a new penalization method for performing model selection for regression that is
appropriate even for small samples. Our penalization is based on an accurate estimator of the ratio of the expected training error and the expected generalization error, in terms of the expected
eigenvalues of the input covariance matrix.
- Journal of Machine Learning Research, 2003
Cited by 15 (1 self)
Metric-based methods have recently been introduced for model selection and regularization, often yielding very significant improvements over the alternatives tried (including cross-validation). All
these methods require unlabeled data over which to compare functions and detect gross differences in behavior away from the training points. We introduce three new extensions of the metric model
selection methods and apply them to feature selection. The first extension takes advantage of the particular case of time-series data in which the task involves prediction with a horizon h. The idea
is to use at t the h unlabeled examples that precede t for model selection. The second extension takes advantage of the different error distributions of cross-validation and the metric methods:
cross-validation tends to have a larger variance and is unbiased. A hybrid combining the two model selection methods is rarely beaten by any of the two methods. The third extension deals with the case
when unlabeled data is not available at all, using an estimated input density. Experiments are described to study these extensions in the context of capacity control and feature subset selection.
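The core trick these metric-based methods share can be sketched in a few lines (a rough illustration of the general idea of comparing hypothesis distances on labeled versus unlabeled data, not the authors' exact procedure; all names below are mine):

import numpy as np

def disagreement(f, g, X):
    # Empirical L1 distance between two hypotheses on a sample X.
    return np.mean(np.abs(f(X) - g(X)))

def metric_penalty(f, g, X_train, X_unlabeled):
    # If f and g look similar on training points but very different on
    # unlabeled points, at least one of them is behaving wildly away
    # from the data -- a warning sign of overfitting.
    d_train = disagreement(f, g, X_train)
    d_unlab = disagreement(f, g, X_unlabeled)
    return d_unlab / max(d_train, 1e-12)

# Example: compare a degree-1 and a degree-9 polynomial fit.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, 20)
y = np.sin(3 * X_train) + 0.1 * rng.standard_normal(20)
X_unlab = rng.uniform(-1.5, 1.5, 1000)  # includes points beyond the data
f1 = np.poly1d(np.polyfit(X_train, y, 1))
f9 = np.poly1d(np.polyfit(X_train, y, 9))
print(metric_penalty(f1, f9, X_train, X_unlab))  # ratio >> 1 flags trouble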
- The 24th International Conference on Machine Learning, 2007
Cited by 11 (0 self)
Although semi-supervised learning has been an active area of research, its use in deployed applications is still relatively rare because the methods are often difficult to implement, fragile in
tuning, or lacking in scalability. This paper presents expectation regularization, a semi-supervised learning method for exponential family parametric models that augments the traditional conditional
label-likelihood objective function with an additional term that encourages model predictions on unlabeled data to match certain expectations—such as label priors. The method is extremely easy to
implement, scales as well as logistic regression, and can handle non-independent features. We present experiments on five different data sets, showing accuracy improvements over other semi-supervised
methods. 1.
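In essence, the method augments the labeled-data likelihood with a term penalizing the gap between the model's average prediction on unlabeled data and a known label prior. A schematic sketch in my own notation (a simplified squared-gap penalty, not the paper's exact objective):

import numpy as np

def expectation_reg_loss(w, X_lab, y_lab, X_unlab, prior=0.5, lam=1.0):
    # Logistic regression negative log-likelihood on labeled data...
    p_lab = 1 / (1 + np.exp(-X_lab @ w))
    nll = -np.mean(y_lab * np.log(p_lab + 1e-12)
                   + (1 - y_lab) * np.log(1 - p_lab + 1e-12))
    # ...plus a penalty pushing the *average* prediction on unlabeled
    # data toward the known label prior.
    p_unlab = 1 / (1 + np.exp(-X_unlab @ w))
    return nll + lam * (np.mean(p_unlab) - prior) ** 2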
- Machine Learning , 2000
"... Abstract. We present an approach to inductive concept learning using multiple models for time series. Our objective is to improve the efficiency and accuracy of concept learning by decomposing
learning tasks that admit multiple types of learning architectures and mixture estimation methods. The deco ..."
Cited by 9 (3 self)
Add to MetaCart
Abstract. We present an approach to inductive concept learning using multiple models for time series. Our objective is to improve the efficiency and accuracy of concept learning by decomposing
learning tasks that admit multiple types of learning architectures and mixture estimation methods. The decomposition method adapts attribute subset selection and constructive induction (cluster
definition) to define new subproblems. To these problem definitions, we can apply metricbased model selection to select from a database of learning components, thereby producing a specification for
supervised learning using a mixture model. We report positive learning results using temporal artificial neural networks (ANNs), on a synthetic, multiattribute learning problem and on a real-world
time series monitoring application.
- University of Illinois , 1998
"... The purpose of this research is to extend the theory of uncertain reasoning over time through integrated, multi-strategy learning. Its focus is on decomposable, concept learning problems for
classification of spatiotemporal sequences. Systematic methods of task decomposition using attribute-driven m ..."
Cited by 9 (9 self)
Add to MetaCart
The purpose of this research is to extend the theory of uncertain reasoning over time through integrated, multi-strategy learning. Its focus is on decomposable, concept learning problems for
classification of spatiotemporal sequences. Systematic methods of task decomposition using attribute-driven methods, especially attribute partitioning, are investigated. This leads to a novel and
important type of unsupervised learning in which the feature construction (or extraction) step is modified to account for multiple sources of data and to systematically search for embedded temporal
patterns. This modified technique is combined with traditional cluster definition methods to provide an effective mechanism for decomposition of time series learning problems. The decomposition
process interacts with model selection from a collection of probabilistic models such as temporal artificial neural networks and temporal Bayesian networks. Models are chosen using a new quantitative
(metric-based) approach that estimates expected performance of a learning architecture, algorithm, and mixture model on a newly defined subproblem. By mapping subproblems to customized configurations
of probabilistic networks for time series learning, a hierarchical, supervised learning system with enhanced generalization quality can be automatically built. The system can improve data fusion
- Proceedings of ICML'2000 , 2000
"... We introduce a new regularization criterion that exploits unlabeled data to adaptively control hypothesis-complexity in general supervised learning tasks. The technique is based on an abstract
metric-space view of supervised learning that has been successfully applied to model selection in pre ..."
Cited by 8 (2 self)
Add to MetaCart
We introduce a new regularization criterion that exploits unlabeled data to adaptively control hypothesis-complexity in general supervised learning tasks. The technique is based on an abstract metric-space view of supervised learning that has been successfully applied to model selection in previous research. The new regularization criterion we introduce involves no free parameters and yet performs well on a variety of regression and conditional density estimation tasks. The only proviso is that sufficient unlabeled training data be available. We demonstrate the effectiveness of our approach on learning radial basis functions and polynomials for regression, and learning logistic regression models for conditional density estimation. 1. Introduction In the canonical supervised learning task one is given a training set $(x_1, y_1), \ldots, (x_t, y_t)$ and attempts to infer a hypothesis function $h : X \to Y$ that achieves a small prediction error $err(h(x), y)$ on future test
, 1999
"... Machine learning algorithms search a space of possible hypotheses and estimate the error of each hypotheses using a sample. Most often, the goal of classification tasks is to find a hypothesis
with a low true (or generalization) misclassification probability (or error rate); however, only the sample ..."
Cited by 6 (1 self)
Add to MetaCart
Machine learning algorithms search a space of possible hypotheses and estimate the error of each hypotheses using a sample. Most often, the goal of classification tasks is to find a hypothesis with a
low true (or generalization) misclassification probability (or error rate); however, only the sample (or empirical) error rate can actually be measured and minimized. The true error rate of the
returned hypothesis is unknown but can, for instance, be estimated using cross validation, and very general worst-case bounds can be given. This doctoral dissertation addresses a compound of
questions on error assessment and the intimately related selection of a "good" hypothesis language, or learning algorithm, for a given problem. In the first
, 2002
"... A neural network with fixed topology can be regarded as a parametrization of functions, which decides on the correlations between functional variations when parameters are adapted. We propose an
analysis, based on a differential geometry point of view, that allows to calculate these correlations. In ..."
Cited by 5 (3 self)
Add to MetaCart
A neural network with fixed topology can be regarded as a parametrization of functions, which decides on the correlations between functional variations when parameters are adapted. We propose an
analysis, based on a differential geometry point of view, that allows to calculate these correlations. In practise, this describes how one response is unlearned while another is trained. Concerning
conventional feed-forward neural networks we find that they generically introduce strong correlations, are predisposed to forgetting, and inappropriate for task decomposition. Perspectives to solve
these problems are discussed.
- International Conference on Machine Learning (ICML , 1999
"... In order to select a good hypothesis language (or model) from a collection of possible models, one has to assess the generalization performance of the hypothesis which is returned by a learner
that is bound to use some particular model. This paper deals with a new and very efficient way of assessing ..."
Cited by 5 (1 self)
Add to MetaCart
In order to select a good hypothesis language (or model) from a collection of possible models, one has to assess the generalization performance of the hypothesis which is returned by a learner that
is bound to use some particular model. This paper deals with a new and very efficient way of assessing this generalization performance. We present a new analysis which characterizes the expected
generalization error of the hypothesis with least training error in terms of the distribution of error rates of the hypotheses in the model. This distribution can be estimated very efficiently from
the data which immediately leads to an efficient model selection algorithm. The analysis predicts learning curves with a very high precision and thus contributes to a better understanding of why and
when over-fitting occurs. We present empirical studies (controlled experiments on Boolean decision trees and a large-scale text categorization problem) which show that the model selection algorithm
leads to err... | {"url":"http://citeseerx.ist.psu.edu/showciting?cid=60072&sort=cite&start=10","timestamp":"2014-04-21T11:17:34Z","content_type":null,"content_length":"39654","record_id":"<urn:uuid:0fc8167c-ce3f-441a-b317-10ebd6a31527>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00358-ip-10-147-4-33.ec2.internal.warc.gz"} |
Math Help
August 23rd 2008, 04:07 PM #1
Aug 2008
My maths assignment is due tomorrow, and i dont understand it. i know how to do it using the secant method, but we are not allowed to use it.
I NEED TO DIFFERENTIATE THIS PROBLEM!!!
if anyone can tell me what the differential of this equation is without using a secant method, i will LOVE you forever!!!!
$\frac{d}{dx}[\tan{u}] = \sec^2{u} \cdot \frac{du}{dx}$
$tanu = \frac{sinu}{cosu}$
Let $f(u) = sin u$ and $g(u)=cosu$
Then $\frac{d}{du}\left( \frac{f(u)}{g(u)}\right) = \frac{f'(u)g(u) - f(u)g'(u)}{g(u)^2} = \frac{\cos u\cdot \cos u - \sin u \cdot (-\sin u)}{\cos^2 u} = \frac{\cos^2 u + \sin^2 u}{\cos^2 u} = \frac{1}{\cos^2 u}$
Now if $u=\frac{230}{x}$, then $\frac{du}{dx} = -\frac{230}{x^2}$
$f'(u(x)) = \cos u(x) \cdot \frac{du}{dx} = -\frac{230}{x^2}\cos\frac{230}{x}$ and $g'(u(x)) = -\sin u(x) \cdot \frac{du}{dx} = \frac{230}{x^2}\sin\frac{230}{x}$
You should be able to handle it from here.
I could be wrong, but I think what he meant by not using the secant method is that he doesn't want to use the definition of the derivative, not the secant trig function. If he/she could only
confirm it...
no, im not allowed to use anything to do with secant. we havent been taught it in class, so i cant answer the question using it.
$\frac{d}{dx}\tan u=\sec^2 u\cdot\frac{du}{dx}=\frac{1}{\cos^2 u}\cdot\frac{du}{dx}$
You get away without using a secant term...it's in terms of cosine...
But is this what you're looking for?
it looks like it. im trying to find the max and min of tan(230/x), which i need to differentiate for. do u think this will work?
yes, you must find the derivative and set it equal to zero to find the critical points. then you can test these points to find maximums or mins. in this case, there are none since the derivative
is never zero
so my equation doesnt have a maximum point?
hmmm okay, that means i got the first equation wrong and its not tan(230/x). ummm let me just tell you what im trying to answer.
"Local art gallery wants to know what distance patrons are to stand away from artworks for optimal viewing agle. consider at least 2 scenarios with painting size and hight placements."
so ive made the hieght of the painting 2m (200cm) and the bottome of the artwork 30cm above eye level.
that would make the opposite wall 230cm. and as the adjacent is unknown, i assumed that by using a tan function i could find the optimal angle, and then find the distance. is this the wrong way
to go about it?
You haven't defined what an optimal viewing angle is. Without knowing that, there's no way to tell what an optimal viewing distance is.
sorry, im not very good at defining. optimal angle is the largest possible angle.
And what is that for the human eye? And are you to assume 20/20 vision?
well the question doesnt say anything about the vision, so i would assume so
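For reference, a sketch of the usual setup for this problem (using the numbers above: bottom of the painting 30cm above eye level, top at 230cm): the viewing angle at horizontal distance $x$ is the difference of two arctangents, $\theta(x) = \arctan\frac{230}{x} - \arctan\frac{30}{x}$, so $\theta'(x) = -\frac{230}{x^2+230^2} + \frac{30}{x^2+30^2}$, and setting $\theta'(x)=0$ gives $x = \sqrt{30\cdot 230} = \sqrt{6900} \approx 83$ cm. Unlike $\tan(230/x)$ on its own, this function does have a maximum.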
{"url":"http://mathhelpforum.com/calculus/46592-differentiate.html","timestamp":"2014-04-17T01:21:01Z","content_type":null,"content_length":"76517","record_id":"<urn:uuid:6547e8f5-e948-4e89-89a5-1c8859f27030>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00248-ip-10-147-4-33.ec2.internal.warc.gz"}
[Numpy-discussion] Bug or surprising undocumented behaviour in irfft
Anne Archibald peridot.faceted@gmail....
Wed Aug 29 21:24:50 CDT 2007
On 29/08/2007, Charles R Harris <charlesr.harris@gmail.com> wrote:
> > Is this also appropriate for the other FFTs? (inverse real, complex,
> > hermitian, what have you) I have written a quick hack (attached) that
> > should do just that rescaling, but I don't know that it's a good idea,
> > as implemented. Really, for a complex IFFT it's extremely peculiar to
> > add the padding where we do (between frequency -1 and frequency zero);
> > it would make more sense to pad at the high frequencies (which are in
> > the middle of the array). Forward FFTs, though, can reasonably be
> > padded at the end, and it doesn't make much sense to rescale the last
> > data point.
> It all depends on the data and what you intend. Much of my experience is
> with Michaelson interferometers and in that case the interferogram is
> essentially an autocorrelation, so it is desirable to keep its center at
> sample zero and let the left side wrap around, so ideally you fill in the
> middle as you suggest. You can also pad at the end if you don't put the
> center at zero, but then you need to phase shift the spectrum in a way that
> corresponds to rotating the center to index zero and padding in the middle.
> I expect you would want to do the same thing for complex transforms if they
> are of real data and do the nyquist divided by two thingy. If the high
> frequencies in a complex transform are actually high frequencies and not
> aliases of negative frequencies, then you will want to just append zeros.
> That case also occurs, I have designed decimating complex filters that
> produce output like that, they were like single sideband in the radio
> world.
So is it a fair summary to say that for irfft, it is fairly clear that
one should adjust the Nyquist coefficient, but for the other varieties
of FFT, the padding done by numpy is just one of many possible
Should numpy be modified so that irfft adjusts the Nyquist
coefficient? Should this happen only for irfft?
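For what it's worth, here is a rough sketch of that rescaling idea for the
real-input case (my own illustration, not the patch attached earlier;
pad_irfft is a hypothetical helper name):

import numpy as np

def pad_irfft(x, n_new):
    # Band-limited upsampling of a real signal via a zero-padded irfft.
    # For even len(x) the last rfft bin is a true Nyquist bin that folds
    # the +Nyquist and -Nyquist contributions together; halving it before
    # padding keeps that tone from being doubled once the bin becomes an
    # ordinary (conjugate-paired) frequency in the longer transform.
    n_old = len(x)
    X = np.fft.rfft(x)
    if n_old % 2 == 0:
        X[-1] *= 0.5
    # irfft normalises by 1/n_new, so rescale to preserve amplitudes.
    return np.fft.irfft(X, n=n_new) * (n_new / float(n_old))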
> I usually multiply the forward transform by the sample interval, in secs or
> cm, and the unscaled inverse transform by the frequency sample interval, in
> Hz or cm^-1. That treats both the forward and inverse fft like
> approximations to the integral transforms and makes the units those of
> spectral density. If you think trapezoidal rule, then you will also see
> factors of .5 at the ends, but that is a sort of apodization that is
> consistent with how Fourier series converge at discontinuities. In the
> normal case where no interpolation is done the product of the sample
> intervals is 1/N, so it reduces to the usual convention. Note that in your
> example the sampling interval decreases when you do the interpolation, so if
> you did another forward transform it would be scaled down to account for the
> extra points in the data.
That's a convenient normalization.
Do you know if there's a current package to associate units with numpy
arrays? For my purposes it would usually be sufficient to have arrays
of quantities with uniform units. Conversions need only be
multiplicative (I don't care about Celsius-to-Fahrenheit style
conversions) and need not even be automatic, though of course that
would be convenient. Right now I use Frink for that sort of thing, but
it would have saved me from making a number of minor mistakes in
several pieces of python code I've written.
More information about the Numpy-discussion mailing list | {"url":"http://mail.scipy.org/pipermail/numpy-discussion/2007-August/029058.html","timestamp":"2014-04-19T02:21:49Z","content_type":null,"content_length":"6673","record_id":"<urn:uuid:a8abfcec-b866-41be-b217-72355a8b4a02>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00353-ip-10-147-4-33.ec2.internal.warc.gz"} |
STAAD.Pro Plates And Solid Elements [FAQ]
Applies To
Product(s): STAAD.Pro
Version(s): All
Environment: N/A
Area: Modeling & Postprocessing
Subarea: Plates and Solids
Original Author: Bentley Technical Support Group
1. Is there a way to obtain the average stress at a node where several plate elements meet?
Presently, STAAD only reports the nodal stresses of the individual elements meeting at the node. In the Plate page of the post-processing mode, the lower table on the right hand side of the screen
contains the element nodal stresses. However, the average value from all such plates connected to the node is not reported.
Averaging is straightforward at joints that are only connected to one plane of plate elements and the loading is a normal pressure. However transverse shear will jump at a line if a line load is
applied; so a single average would be inappropriate for that stress only. Similarly for stresses across a line of beams or walls or a line of bending moments; etc.
If two walls and a floor meet at a joint there are 3 planes that should be treated separately. Also averaging should be separate for the same surface on either side of a wall to account for the
stress discontinuity. At the common joint there would be 12 sets of stresses (4 plates on each of 3 surfaces).
So averaging can be interrupted due to certain loadings, plates in other planes, and other members.
Further complexity occurs for contours and corner stresses if a shallow curved surface is being averaged. Most likely the inplane stresses should be averaged separately from the bending stresses,
without coordinate transformations, since the flat plate faceted surfaces are trying to simulate a smooth surface.
The above considerations are not easily automated. We hope to implement at least some simple cases in future.
2. In the plate element stress results, what do the terms TRESCAT and TRESTAB stand for? How are they calculated?
TRESCA is 2.0 times TMAX. TMAX is the maximum inplane shear stress on a plate element. TMAX = 0.5 * max[abs((s1 - s2)) , abs((s2 - s3)) , abs((s3 - s1))] where s1 and s2 are the inplane principal
stresses and the 3rd principal stress, s3, is zero at the surface. TRESCAT is the value for the top surface of the element. TRESCAB is on the bottom. Top and bottom are in accordance with the
direction of the local Z axis.
Example problem 18 in the examples manual shows the calculation of TMAX
3. I am modelling a concrete slab using plate elements. I am looking for the moments in the slab at the center of each element. I noticed that the output gives the bending moments per unit width.
What is the per unit width? Would that be the thickness of the plate element?
For Mx, the unit width is a unit distance at the center of the element, parallel to the local Y axis.
For My, the unit width is a unit distance at the center of the element, parallel to the local X axis.
Diagrams explaining the various plates forces, moments and streses are available at section 1.6.1 of the Technical Reference manual.
4. I have modeled a 40" x 40" column base plate with (4) 12" dia. pipe columns on it (equally spaced in both directions). How do I tell STAAD that the base plate will be on a concrete pedestal (f'c =
4.0 ksi)? My first guess is to assign supports at the mesh intersections:
SUPPORTS 1 TO 529 ELASTIC MAT DIRECT Y SUBGRADE 4
Any suggestions?
Your guess is a good one. You can model the support as an elastic mat foundation. To do that, you first need to know the subgrade modulus of concrete. One of the methods by which the modulus can be
computed is using the following equation:
Ks = Es / ( B ( 1 - PoissonRatio * PoissonRatio ) )
( Reference: Foundation Analysis and Design, Fifth Edition, by Joseph E. Bowles, page 503, Equation 9-6a )
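As a rough worked illustration (assumed numbers, not from the original question): with f'c = 4000 psi, the ACI estimate E = 57000 x sqrt(f'c) gives E of about 3,605,000 psi; taking B = 40 in and a Poisson ratio of 0.17 for concrete, Ks = 3605000 / ( 40 x ( 1 - 0.17 x 0.17 ) ), which is roughly 93,000 lb/in3.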
In addition, if you want to make sure the concrete pedestal takes only compressive force, then specify the SPRING COMPRESSION command for those joints in the direction KFY.
An example of this is
SUPPORTS
1 TO 529 ELASTIC MAT YONLY SUBGRADE 987
SPRING COMPRESSION
1 TO 529 KFY
If you have any anchor bolts attached to the baseplate, they can be modeled as spring supports (tension only).
An example of this is
SUPPORTS
1000 TO 1004 FIXED BUT MX MY MZ KFY 5467
SPRING TENSION
1000 TO 1004 KFY
5. How do you assign properties to solids?
You do not have to assign any properties for solid elements. For solids, the only information required is their geometry (node numbers and their coordinates), and material constants (E, Poisson,
etc.). You may refer to example problem 24 in the examples manual if you want details.
6. I would like to change the direction of the local Z axis of an element so that it points in the opposite direction. How do I do it?
From the Select menu at the top, select the Plates Cursor. Then select the element for which you want the Z axis direction changed. From the Commands menu, select Geometric Constants followed by
Plate Reference Point and give the coordinates of this point. Choose the Local Axis direction to point towards or away from the Reference Point. The Assign option should be set to "To Selection".
Click on OK.
7. Is it possible to apply a concentrated force on the surface of an element? The point where the load acts is not one of the nodes of the element, as a result of which I cannot use the JOINT LOAD
Yes, it is possible to do this. In Section 5.32.3 of the STAAD.Pro Technical Reference Manual, if you look at the syntax of the element pressure loading, you will find the following :
element-list PRESSURE direction x1 y1 x2 y2
In this syntax, (x1,y1) and (x2,y2) represent the corners of the region (on the element) over which the PRESSURE load is applied. However, if you omit the terms (x2,y2), the load will be treated as a
concentrated force acting at the point (x1,y1), where x1 and y1 are measured as distances, from the centroid of the element, along the local X and Y axes, of the point of action of the load.
Thus, if you want to apply a 580 pound force along the negative global Z direction at a distance away from the centroid of (1.3,2.5)feet along the local X & Y axes of element 73, you can specify the
following commands
73 PR GZ -580.0 1.3 2.5
8. How can I find the maximum shear stress on my plate element model?
Since there are several types of shear stress results we can get from STAAD, the expression "maximum shear stress" needs to be clarified. So, let us first see what the choices are :
SXY - For any given element, this is the in-plane shear stress on the element and acts along the plate local X-Y axes directions.
TMAX - This is the maximum inplane shear stress on the element and is a composite of SXY and the stress resulting from torsion MXY.
SQX - This is the out-of-plane shear stress on the X face at the centroid of the element.
SQY - This is the out-of-plane shear stress on the Y face at the centroid of the element.
All of these results can be obtained in a report form, with additional options like sorting done in ascending or descending order for a user-defined set of elements and a user-defined set of load
cases. As an example, do the following for getting a report of TMAX sorted in the order from maximum to minimum for all plates for load cases 4 and 5.
Go to the post-processing mode. Select all plates. From the Report menu, select Plate Results - Principal stresses. Select TMAX, and set the sorting order from High to Low. Switch on "Absolute
values" also to perform sorting based on Absolute values. Click on the Loading tab, and select just cases 4 and 5. Click on OK. A report will be displayed. Click the right mouse button inside the
table, and select Print.
9. The plate element results contain a term called TMAX. Is TMAX the best representation of the total stresses resulting from the torsion on the element?
Among the various stresses resulting from the torsional moment MXY, the only stress which is considered in TMAX is the shear stress. There are other stresses such as warping normal stresses which do
not get represented in TMAX.
TMAX is the maximum inplane shear stress on an element for a given load case. It represents inplane shear stresses only. It contains contributions from the direct inplane shear stress SXY as well as
the shear stress caused by the torsional moment MXY. Example 18 in the examples manual shows the derivation of TMAX from SXY and MXY.
While on the subject of shear stresses, one must note that the plate is also subjected to out-of-plane shear stresses SQX and SQY, which do not have any representation in TMAX.
10. In the post processing mode - Results menu - Plate Stress Contour, there are two options called Max Top and Max Bottom. Are these direct stresses or flexural stresses?
These are the principal stresses SMAX and SMIN. Principal stresses are a blend of axial stresses (also known as membrane stresses SX and SY), bending stresses (caused by MX and MY) and inplane shear
stresses (SXY). Since the bending stresses have distinct signs for the top and bottom surfaces of the element, the principal stresses too are distinct for top and bottom. The derivation for principal
stresses is shown in example 18 of the STAAD Examples manual.
11. Can STAAD.Pro be used in designing a mat foundation?
The answer to the question is Yes. The following are the major steps involved in the modelling and design of mat foundations using STAAD.
1) The mat foundation has to be modelled using finite elements. If the length and width of the mat are at least 10 times larger than its thickness, plate elements can be used. If not, one may use 8
noded solid elements. The remainder of the structure involving the beams, columns and slabs also has to be modelled along with the mat. If beams share a common boundary with the mat and slabs, to
ensure the proper transfer of load between the beams and the mat & slabs, the mat & slabs have to be divided into several elements, the beams have to be divided into several members, and the elements
and members must share common nodes.
2) Generally, the supports for the mat are derived from the subgrade reaction of the soil. Using this attribute, and the influence area of each node of the mat, the spring constant for the supports
may be derived. STAAD contains an automatic spring support generation facility for mat foundations. One may refer to Section 5.27.3 of the STAAD.Pro Technical Reference Manual for details on
this type of support generation.
3) Soil spring supports generally tend to be effective against resisting compressive forces only. They are ineffective in resisting uplift. This type of a unidirectional support requires those
springs to be assigned an attribute call SPRING COMPRESSION.
4) The loads on the mat and the rest of the model have to be specified. Then, the structure has to be analyzed. This will generate the plate stresses and corner forces needed to design the mat.
5) You can then use the program's concrete design ability to design the individual elements which make up the mat. The only tedious aspect of this is that the program can presently design individual
elements only. The task of taking the reinforcement values from each element and assembling the reinforcement picture of the overall mat has to be done by you manually.
We suggest you take a look at example problem number 27 in the STAAD.Pro examples manual for guidance on analysing mat foundations. In that example, the aspects explained in steps 1,2, 3 and 4 above
are illustrated. Example problems 9 and 10 discuss concrete design of individual plate elements.
Note : A better option is to use STAAD.foundation software for design of mat. Mat modeled and analyzed in STAAD.Pro can be imported into STAAD.foundation will all results data and it can be
subseqently designed in STAAD.foundation. One can also model and design mats directly in STAAD.Foundation.
12. When modelling plate elements, should the individual elements satisfy any minimum requirements for the ratio of the length of their side to their thickness?
No, they do not have to. However, for the overall slab or wall, if the span in either direction is less than 10 times its thickness, then the slab or wall becomes more like a solid than like a plate;
and thick plate theory may not be adequate. In that case, 8-noded solid elements may be necessary.
13. I have a model where supports are defined at the nodes of some of the plate elements in the structure. If I divide the support reaction values by the thickness, length etc., of the side of the
elements adjacent to the support, shouldn't the values match the ELEMENT NODAL STRESSES? I am aware of the fact that element stresses are in the local axis system of the element, and support
reactions are in the global axis system, and am making the required transformations before making the comparison.
The element nodal stresses are obtained as the value of the stress polynomial at the coordinates of those joints. Stresses in an element are most accurately determined only at the center of the
element (in the middle of the joint displacement locations used in calculating that stress). The stress values calculated at the nodes will only be approximate (only the displacements of the joints
from this one element are used in calculating the stress). Stresses at a joint would be improved if the stresses from the other elements at the joint (on the same surface) were averaged.
Consequently, the comparison you suggest is not feasible.
A better alternative would be to compare the forces at the node rather than the stresses at the node.
The output for the command
consists of the 3 forces and 3 moments at each of the nodes of the elements, reported in the global axis system. Thus, the output will consist of FX,FY,FZ,MX,MY,MZ with the 3 forces having units of
force (not stress) and the 3 moments have units of moment (not moment per unit width). If you add up the values at the nodes of those elements which are connected to the support, those values must be
equal to the support reaction.
Another consideration is the way in which element loads are evaluated and used. Staad computes the equivalent forces at the corner joints (same total force, center of force, and direction). The
remainder of the analysis and results are as if you had applied the loads as joint loads rather than as element loads. Two exceptions, temperature loads are applied internally to the element and
plate releases will affect the load distribution to the joints.
Say you have a wall with uniform pressure. Half of the load on the elements along the base will be applied directly to the base, the other half is applied to the line of joints at the top of these
elements. So the internal transverse shears are too high at the top of the element. The transverse shears are OK at the center and too small at the base. The same will be true for the element force
output of transverse forces. However, the reactions will have the entire force. A finer mesh in general, and near the base in particular, will improve the element stress and load distribution.
14. Why shear stresses are not included in the Von Mises stresses for solid elements?
Have a look at the attached figure which is from the following link: http://en.wikipedia.org/wiki/Von_Mises_yield_criterion
There are 2 equations for calculating Von Mises stresses.
The first equation uses the STAAD.Pro terms SXX, SYY, SZZ, SXY, SYZ and SZX.
The second equation uses the terms S1, S2 and S3. These are the principal stresses. There are no shear terms in this equation. This is the one implemented in STAAD.Pro.
The Von Mises stress should be the same from both equations.
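For reference, the principal-stress form of the second equation is

SE = sqrt( [ ( S1 - S2 )^2 + ( S2 - S3 )^2 + ( S3 - S1 )^2 ] / 2 )

which is algebraically identical to the first form written in terms of SXX, SYY, SZZ and the shear terms SXY, SYZ, SZX, so the shear stresses are accounted for even though they do not appear explicitly.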
15. How to change the local axis orientation X and Y for plates?
Although it is very easy to flip the direction of the local Z axis for the plates by using the Commands > Geometric Constants > Plate Reference Point option, unfortunately there is no easy way to
change the local X and Y for plates as needed. For example, if one needs to re-orient the local x for the plate to be parallel to the global X direction, one has to redefine the incidence of the
plates by going to the STAAD editor and changing the ELEMENT INCIDENCE data manually. The local X would be a vector from the center of the plate, drawn in a direction from node1 to node2 and is hence
dependent on the element incidences data. In other words the element incidences need to be such that node1 to node2 should be the direction of the global X. The rule which governs how the plate local
axes are oriented, is explained under section 1.6.1 of the Technical Reference Manual..
16. What is the difference between Plate element, Shell element and Surface?
Plate element and shell element
Both terms represent the same thing in the STAAD context, which is, a 3-noded (triangular) or a 4-noded (quadrilateral) element to which a thickness has to be assigned as a property.
In STAAD, this element has both attributes - membrane (in-plane effect) and bending (out-of-plane effect). The bending effect can be shut off by declaring it as ELEMENT PLANE STRESS. The in-plane
effect can't be shut off.
Plate Element and Surface
If you want to model a structure which contains a wall, slab or panel type component, you have two choices in STAAD :
a) Model that panel using a collection of individual elements. This is called a finite element mesh. This is an assembly of the 2d triangular and/or quadrilateral elements described above.
b) Model that as a single physical object called a Surface.
Option (a) is achieved using the mesh generation facilities in STAAD.
In option (b), (surface object), what happens under the hood is that, during the analysis, STAAD transforms the surface into a finite element mesh. The type of mesh (number of elements, type of
elements, size of elements, etc.) that is generated from the surface is based on the parameters that you provide at the time of defining the surface.The details of the mesh thus generated are to a
large extent, masked from the user. Results are presented for that surface, not for the individual elements that it is made up of.
In other words, a surface is merely an object that represents a collection of elements. When the program goes through the analysis phase, it subdivides the surface into plate elements. From an
analysis point of view, both plate and surface are the same thing. The difference is in the interpretation of results. For plates, the stresses are reported, while for a surface, the forces are reported.
17. How to model an orthotropic plate so that it would have a smaller stiffness in one direction than in another?
You can create the orthotropic 2D material, which would have a different Young's Modulus in X and Y direction (hence different stiffness in both directions). To do so, please go to General ->
Materials vertical tab, then in the Material table on the right bottom corner select Orthotropic 2D tab and click Create button. In the opened window provide the required information and then assign
this material to the plates.
18. Is it possible to obtain the element stresses/moments at a certain location within a plate?
There is a print command that lets you print the element stress at any location within a plate. The syntax for the print command is
PRINT ELEMENT JOINT STRESSES AT f1 f2 LIST elements-list where f1 and f2 represent the distances along local X and local Y of the plate measured from the plate origin in current units.
For example if the current length units is set to “feet” and to one wants to print the moments/stresses at 0.5 ft from the center of a plate # 47, the print command would be
PRINT ELEMENT JOINT STRESSES AT 0.5 0.5 LIST 47
You may refer to the section 5.42 of the Technical Reference for more details.
19. I have modeled a T beam using quad plates. I am trying to find out the bending stress in the flange. How can I get that ?
STAAD.Pro reports the bending moments for plates and not bending stresses. However one can find the bending stress without much effort from the data available. First one has to pick a plate close to
where the stress is required. Mx (or My ) moments are reported as part of the Plate Center Stress table within the Plate page in the Postprocessing mode. The moment is reported per unit width. One
may calculate the section modulus S ( = w x t^2 / 6, where w = width of flange plate over which the moment is reported, t = thickness of flange plate ). The bending stress would then be M/S.
Alternately STAAD.Pro does report the combined stress for plates. This is available within the Combined Stress tab inside the Plate Center stress table. For the selected plate, note the Top combined
stress ( Comb Sx or Comb Sy as appropriate ). The combined stress is the P/A + M/S value. From within the Shear Membrane and Bending tab of the same table, note the SX or the SY which would be the P/
A component. One can then find the M/S component by taking out the P/A part from the combined stress.
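As a quick numeric illustration (assumed values): if Mx is reported as 2 kip-in per inch of width and the flange plate is 0.5 in thick, then S = ( 1 )( 0.5 )^2 / 6 = 0.0417 in3 per inch of width, so the bending component of stress is M/S = 48 ksi, to which any P/A membrane component would be added.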
See Also
Structural Product TechNotes And FAQs
External Links
Bentley Technical Support KnowledgeBase
Comments or Corrections?
Bentley's Technical Support Group requests that you please confine any comments you have on this Wiki entry to this "Comments or Corrections?" section. THANK YOU! | {"url":"http://communities.bentley.com/products/structural/structural_analysis___design/w/structural_analysis_and_design__wiki/2045.staad-pro-plates-and-solid-elements-faq.aspx","timestamp":"2014-04-19T17:02:13Z","content_type":null,"content_length":"116416","record_id":"<urn:uuid:3b2f7d94-0a96-4d2c-91cb-38c72b2fe393>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00661-ip-10-147-4-33.ec2.internal.warc.gz"} |
How to integrate a fraction of sums of exponentials?
Is it possible to have a solution to this sort of integral? And if not, why not?
[itex]\int_0^\infty \frac{e^{-ax}}{e^{-bx}+e^{-cx}}dx[/itex]
Is a Taylor expansion the only way forward?
Many thanks
Use [tex ] instead of inline tex if you're not writing a formula on the same line with words.
[tex]\int_0^\infty \frac{e^{-ax}}{e^{-bx}+e^{-cx}}dx[/tex]
looks better and is easier to read.
As for your question, before jumping to series expansions and substitutions, specify if the arbitrary constants are positive or negative. This makes a huge difference on the final result.
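For instance (a sketch assuming $a>b$ and $c>b$; swap the roles of $b$ and $c$ otherwise), factoring out $e^{-bx}$ and expanding a geometric series gives

[tex]\int_0^\infty \frac{e^{-ax}}{e^{-bx}+e^{-cx}}\,dx = \int_0^\infty \frac{e^{-(a-b)x}}{1+e^{-(c-b)x}}\,dx = \sum_{n=0}^{\infty}\frac{(-1)^n}{(a-b)+n(c-b)},[/tex]

an alternating series that can be expressed through the digamma function, so there is no elementary closed form in general.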
Then try to get rid of as many exponentials as possible. You can make the substitution (a,b,c >0) [itex] \displaystyle{e^{-ax}} = t [/itex] and see what you get. | {"url":"http://www.physicsforums.com/showthread.php?t=561889","timestamp":"2014-04-18T03:16:21Z","content_type":null,"content_length":"23691","record_id":"<urn:uuid:592d7bcc-5cb4-4ead-ae23-00907f83eda9>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00581-ip-10-147-4-33.ec2.internal.warc.gz"} |
Real Numbers: Problem Solving and Quizzes
Problem Solving
Often, when students hear the words problem solving in a mathematics class, they assume that we are talking about word problems. While word problems are included in problem solving, the concept is
much broader than that. In the Problem Solving Assignments you will find word problems, but you'll also find other types of problems.
For example, if you were asked to evaluate , you could solve that easily using the Order of Operations. This activity would be called an exercise, since you are practicing what you have already
learned. However, if you were given the expression and were asked to insert grouping symbols to make it equal to 13, then you have a problem. This type of question requires you to apply what you know
about the Order of Operations in a creative way. These types of problems allow you to demonstrate a thorough grasp of the material that you have been studying.
Go to the Lesson 1: Problem Solving Assignment.
Lesson Quiz
Lesson 1: Quiz with your mentor. Your mentor will check your work and provide feedback for this quiz.
Solve It! Quiz
The Solve It! Quiz will give you a chance to demonstrate your problem-solving skills. Take the Lesson 1: Solve It! Quiz with your mentor.
Digging Deeper
The Four 4s Activity (Wheeler) is a puzzle that has intrigued mathematicians for at least one hundred years. Warning: Be careful or you will spend all your free time thinking about the number 4!
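(In the classic puzzle you combine exactly four 4s with arithmetic operations to reach each target number; for example, 7 = 4 + 4 - 4/4.)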
Note: The Digging Deeper activity is purposely written to challenge you. Don't expect to solve these problems quickly. These problems are meant to be pondered. Andrew Wiles described his method of
pondering: "When I got stuck, and I didn't know what to do next, I would go out for a walk.... Walking has a very good effect in that you're in this state of relaxation, but at the same time you're
allowing the sub-conscious to work on you" (Wiles, NOVA).
Work on these problems for a while; when you get stuck, take a break. Go for a walk. Take a snack break. Put your pencil down, relax, and think. Ponder. Contemplate. Then pick up your pencil and try
Solutions to the
• Problem Solving Assignment,
• Lesson Quiz,
• Solve It! Quiz, and
• Digging Deeper Problem Set
are available on the Mentor Guidelines Web site. (Mentors, if you have not already done so, please register on the Duke TIP Mentor Registration page to access the answer keys.)
Please proceed to Lesson 2.
Back to Table of Contents | {"url":"http://tip.duke.edu/independent_learning/mathematics/algebra1_online_lesson/Lesson_13.html","timestamp":"2014-04-19T19:34:10Z","content_type":null,"content_length":"10196","record_id":"<urn:uuid:15805d7f-af79-4a37-a000-20b993645ab9>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00364-ip-10-147-4-33.ec2.internal.warc.gz"} |
The scipy.integrate sub-package provides several integration techniques including an ordinary differential equation integrator. An overview of the module is provided by the help command:
>>> help(integrate)
Methods for Integrating Functions given function object.
quad -- General purpose integration.
dblquad -- General purpose double integration.
tplquad -- General purpose triple integration.
fixed_quad -- Integrate func(x) using Gaussian quadrature of order n.
quadrature -- Integrate with given tolerance using Gaussian quadrature.
romberg -- Integrate func using Romberg integration.
Methods for Integrating Functions given fixed samples.
trapz -- Use trapezoidal rule to compute integral from samples.
cumtrapz -- Use trapezoidal rule to cumulatively compute integral.
simps -- Use Simpson's rule to compute integral from samples.
romb -- Use Romberg Integration to compute integral from
(2**k + 1) evenly-spaced samples.
See the special module's orthogonal polynomials (special) for Gaussian
quadrature roots and weights for other weighting factors and regions.
Interface to numerical integrators of ODE systems.
odeint -- General integration of ordinary differential equations.
ode -- Integrate ODE using VODE and ZVODE routines.
General integration (quad)
The function quad is provided to integrate a function of one variable between two points. The points can be $\pm\infty$ ($\pm$ inf) to indicate infinite limits. For example, suppose you wish to integrate a Bessel function jv(2.5, x) along the interval $[0, 4.5]$:

$I=\int_{0}^{4.5}J_{2.5}\left(x\right)\, dx.$

This could be computed using quad:
>>> result = integrate.quad(lambda x: special.jv(2.5,x), 0, 4.5)
>>> print result
(1.1178179380783249, 7.8663172481899801e-09)
>>> I = sqrt(2/pi)*(18.0/27*sqrt(2)*cos(4.5)-4.0/27*sqrt(2)*sin(4.5)+
...             sqrt(2.0*pi)*special.fresnel(3/sqrt(pi))[0])
>>> print I
1.117817938088701
>>> print abs(result[0]-I)
1.03761443881e-11
The first argument to quad is a "callable" Python object (i.e. a function, method, or class instance). Notice the use of a lambda-function in this case as the argument. The next two arguments are the limits of integration. The return value is a tuple, with the first element holding the estimated value of the integral and the second element holding an upper bound on the error. Notice, that in this case, the true value of this integral is

$I=\sqrt{\frac{2}{\pi}}\left(\frac{18}{27}\sqrt{2}\cos\left(4.5\right)-\frac{4}{27}\sqrt{2}\sin\left(4.5\right)+\sqrt{2\pi}\,\textrm{Si}\left(\frac{3}{\sqrt{\pi}}\right)\right),$

where

$\textrm{Si}\left(x\right)=\int_{0}^{x}\sin\left(\frac{\pi}{2}t^{2}\right)\, dt$

is the Fresnel sine integral. Note that the numerically-computed integral is within $1.04\times10^{-11}$ of the exact result, well below the reported error bound.
Infinite inputs are also allowed in quad by using $\pm$ inf as one of the arguments. For example, suppose that a numerical value for the exponential integral

$E_{n}\left(x\right)=\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt$

is desired (and the fact that this integral can be computed as special.expn(n,x) is forgotten). The functionality of the function special.expn can be replicated by defining a new function vec_expint
based on the routine quad:
>>> from scipy.integrate import quad
>>> def integrand(t,n,x):
... return exp(-x*t) / t**n
>>> def expint(n,x):
... return quad(integrand, 1, Inf, args=(n, x))[0]
>>> vec_expint = vectorize(expint)
>>> vec_expint(3,arange(1.0,4.0,0.5))
array([ 0.1097, 0.0567, 0.0301, 0.0163, 0.0089, 0.0049])
>>> special.expn(3,arange(1.0,4.0,0.5))
array([ 0.1097, 0.0567, 0.0301, 0.0163, 0.0089, 0.0049])
The function which is integrated can even use the quad argument (though the error bound may underestimate the error due to possible numerical error in the integrand from the use of quad). The integral in this case is

$I_{n}=\int_{0}^{\infty}\int_{1}^{\infty}\frac{e^{-xt}}{t^{n}}\, dt\, dx=\frac{1}{n}.$
>>> result = quad(lambda x: expint(3, x), 0, inf)
>>> print result
(0.33333333324560266, 2.8548934485373678e-09)
>>> I3 = 1.0/3.0
>>> print I3
>>> print I3 - result[0]
This last example shows that multiple integration can be handled using repeated calls to quad. The mechanics of this for double and triple integration have been wrapped up into the functions dblquad
and tplquad. The function, dblquad performs double integration. Use the help function to be sure that the arguments are defined in the correct order. In addition, the limits on all inner integrals
are actually functions which can be constant functions. An example of using double integration to compute several values of
>>> from scipy.integrate import quad, dblquad
>>> def I(n):
... return dblquad(lambda t, x: exp(-x*t)/t**n, 0, Inf, lambda x: 1, lambda x: Inf)
>>> print I(4)
(0.25000000000435768, 1.0518245707751597e-09)
>>> print I(3)
(0.33333333325010883, 2.8604069919261191e-09)
>>> print I(2)
(0.49999999999857514, 1.8855523253868967e-09)
Gaussian quadrature (integrate.gauss_quadtol)
A few functions are also provided in order to perform simple Gaussian quadrature over a fixed interval. The first is fixed_quad which performs fixed-order Gaussian quadrature. The second function is
quadrature which performs Gaussian quadrature of multiple orders until the difference in the integral estimate is beneath some tolerance supplied by the user. These functions both use the module
special.orthogonal which can calculate the roots and quadrature weights of a large variety of orthogonal polynomials (the polynomials themselves are available as special functions returning instances
of the polynomial class — e.g. special.legendre).
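As a quick illustration (an added example, not from the original tutorial text), an order-5 Gauss-Legendre rule is exact for polynomials up to degree 9, so a degree-8 integrand is integrated exactly:

>>> from scipy.integrate import fixed_quad
>>> result = fixed_quad(lambda x: x**8, 0.0, 1.0, n=5)
>>> print result[0]
0.111111111111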
Integrating using samples
There are three functions for computing integrals given only samples: trapz , simps, and romb . The first two functions use Newton-Coates formulas of order 1 and 2 respectively to perform
integration. These two functions can handle, non-equally-spaced samples. The trapezoidal rule approximates the function as a straight line between adjacent points, while Simpson’s rule approximates
the function between three adjacent points as a parabola.
If the samples are equally-spaced and the number of samples available is $2^{k}+1$ for some integer $k$, then Romberg integration (romb) can be used to obtain high-precision estimates of the integral using the available samples. (A different interface to Romberg integration, useful for functions, is available as romberg.)
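For example (again an added snippet rather than part of the original text), Simpson's rule recovers the integral of x**2 over [0, 2], which is 8/3, from nine equally-spaced samples:

>>> import numpy as np
>>> from scipy.integrate import simps
>>> x = np.linspace(0, 2, 9)
>>> y = x**2
>>> print simps(y, x)
2.66666666667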
Ordinary differential equations (odeint)
Integrating a set of ordinary differential equations (ODEs) given initial conditions is another useful example. The function odeint is available in SciPy for integrating a first-order vector differential equation:

$\frac{d\mathbf{y}}{dt}=\mathbf{f}\left(\mathbf{y},t\right),$

given initial conditions $\mathbf{y}\left(0\right)=y_{0}$, where $\mathbf{y}$ is a length-$N$ vector and $\mathbf{f}$ is a mapping from $\mathcal{R}^{N}$ to $\mathcal{R}^{N}$. A higher-order ordinary differential equation can always be reduced to a differential equation of this type by introducing intermediate derivatives into the $\mathbf{y}$ vector.
For example, suppose it is desired to find the solution to the following second-order differential equation:

$\frac{d^{2}w}{dz^{2}}-zw(z)=0$

with initial conditions $w\left(0\right)=\frac{1}{\sqrt[3]{3^{2}}\Gamma\left(\frac{2}{3}\right)}$ and $\left.\frac{dw}{dz}\right|_{z=0}=-\frac{1}{\sqrt[3]{3}\Gamma\left(\frac{1}{3}\right)}$. It is known that the solution to this differential equation with these boundary conditions is the Airy function

$w=\textrm{Ai}\left(z\right),$

which gives a means to check the integrator using special.airy.
First, convert this ODE into standard form by setting $\mathbf{y}=\left[\frac{dw}{dz},w\right]$ and $t=z$. Thus, the differential equation becomes

$\frac{d\mathbf{y}}{dt}=\left[\begin{array}{c} ty_{1}\\ y_{0}\end{array}\right]=\left[\begin{array}{cc} 0 & t\\ 1 & 0\end{array}\right]\mathbf{y}.$

In other words, $\mathbf{f}\left(\mathbf{y},t\right)=\mathbf{A}\left(t\right)\mathbf{y}.$

As an interesting reminder, if $\mathbf{A}\left(t\right)$ commutes with $\int_{0}^{t}\mathbf{A}\left(\tau\right)\, d\tau$ under matrix multiplication, then this linear differential equation has an exact solution using the matrix exponential:

$\mathbf{y}\left(t\right)=\exp\left(\int_{0}^{t}\mathbf{A}\left(\tau\right)\, d\tau\right)\mathbf{y}\left(0\right).$

However, in this case, $\mathbf{A}\left(t\right)$ and its integral do not commute.
There are many optional inputs and outputs available when using odeint which can help tune the solver. These additional inputs and outputs are not needed much of the time, however, and the three
required input arguments and the output solution suffice. The required inputs are the function defining the derivative, fprime, the initial conditions vector, y0, and the time points to obtain a
solution, t, (with the initial value point as the first element of this sequence). The output to odeint is a matrix where each row contains the solution vector at each requested time point (thus, the
initial conditions are given in the first output row).
The following example illustrates the use of odeint including the usage of the Dfun option which allows the user to specify a gradient (with respect to $\mathbf{y}$) of the differential equation.
>>> from scipy.integrate import odeint
>>> from scipy.special import gamma, airy
>>> y1_0 = 1.0/3**(2.0/3.0)/gamma(2.0/3.0)
>>> y0_0 = -1.0/3**(1.0/3.0)/gamma(1.0/3.0)
>>> y0 = [y0_0, y1_0]
>>> def func(y, t):
... return [t*y[1],y[0]]
>>> def gradient(y,t):
... return [[0,t],[1,0]]
>>> x = arange(0,4.0, 0.01)
>>> t = x
>>> ychk = airy(x)[0]
>>> y = odeint(func, y0, t)
>>> y2 = odeint(func, y0, t, Dfun=gradient)
>>> print ychk[:36:6]
[ 0.355028 0.339511 0.324068 0.308763 0.293658 0.278806]
>>> print y[:36:6,1]
[ 0.355028 0.339511 0.324067 0.308763 0.293658 0.278806]
>>> print y2[:36:6,1]
[ 0.355028 0.339511 0.324067 0.308763 0.293658 0.278806] | {"url":"http://docs.scipy.org/doc/scipy-0.11.0/reference/tutorial/integrate.html","timestamp":"2014-04-17T03:49:35Z","content_type":null,"content_length":"40380","record_id":"<urn:uuid:cdcc054e-a435-471e-b141-b8d6d52bb55b>","cc-path":"CC-MAIN-2014-15/segments/1398223207985.17/warc/CC-MAIN-20140423032007-00377-ip-10-147-4-33.ec2.internal.warc.gz"} |
Providence, RI Algebra 1 Tutor
Find a Providence, RI Algebra 1 Tutor
...Instilling Good Study Habits and Time Management Skills ------------------------------------------ I am a versatile scheduler who will work almost anywhere in Rhode Island or the Boston area.
Please contact me to make arrangements. -SidWorking with the College Guidance Project at Brown Univer...
36 Subjects: including algebra 1, chemistry, reading, English
...In addition to my understanding of general chemistry, I have also completed Intro to Biochemistry & Biochemistry research, a full year of Chemistry and Organic Chemistry, Ecology, and my
aquaculture major focuses on the real-life application of many chemistry concepts in the aquaculture industry ...
22 Subjects: including algebra 1, chemistry, physics, biology
...My teaching license is pending receipt. I have three school aged children of my own and I enjoy working with children to help them reach their full potential through a variety of teaching
techniques, since each child learns differently. I would love to hear from you regarding the tutoring of your child.
13 Subjects: including algebra 1, English, writing, grammar
...Then spent after school volunteering at an elementary school to help out children that were falling behind class. Though I did not tutor much during college, I started up with tutoring again
with a company as soon as I had my degree (and more free time). I enjoy working with children of all ages...
17 Subjects: including algebra 1, calculus, statistics, geometry
...My expertise is in mathematics, statistics, and computers, particularly with programs such as LaTeX, SPSS, SAS, R, and Excel. My bachelor's degree is in psychology with a focus in mathematics
from the University of Wisconsin-Milwaukee along with some JAVA programming experience. Another passion of mine is the English language, so I am an avid reader and writer as well.
36 Subjects: including algebra 1, reading, English, ACT Math
Related Providence, RI Tutors
Providence, RI Accounting Tutors
Providence, RI ACT Tutors
Providence, RI Algebra Tutors
Providence, RI Algebra 2 Tutors
Providence, RI Calculus Tutors
Providence, RI Geometry Tutors
Providence, RI Math Tutors
Providence, RI Prealgebra Tutors
Providence, RI Precalculus Tutors
Providence, RI SAT Tutors
Providence, RI SAT Math Tutors
Providence, RI Science Tutors
Providence, RI Statistics Tutors
Providence, RI Trigonometry Tutors | {"url":"http://www.purplemath.com/providence_ri_algebra_1_tutors.php","timestamp":"2014-04-17T13:41:24Z","content_type":null,"content_length":"24358","record_id":"<urn:uuid:0dba1849-5d56-4dcd-a15e-d0963ceb796a>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00020-ip-10-147-4-33.ec2.internal.warc.gz"} |
Estimating the workpiece-backingplate heat transfer coefficient in friction stirwelding
Publication: Research - peer-review › Journal article – Annual report year: 2012
title = "Estimating the workpiece-backingplate heat transfer coefficient in friction stirwelding",
publisher = "Emerald Group Publishing Ltd.",
author = "Anders Larsen and Mathias Stolpe and Hattel, {Jesper Henri}",
year = "2012",
doi = "10.1108/02644401211190573",
volume = "29",
number = "1",
pages = "65--82",
journal = "Engineering Computations",
issn = "0264-4401",
TY - JOUR
T1 - Estimating the workpiece-backingplate heat transfer coefficient in friction stirwelding
A1 - Larsen,Anders
A1 - Stolpe,Mathias
A1 - Hattel,Jesper Henri
AU - Larsen,Anders
AU - Stolpe,Mathias
AU - Hattel,Jesper Henri
PB - Emerald Group Publishing Ltd.
PY - 2012
Y1 - 2012
N2 - Purpose - The purpose of this paper is to determine the magnitude and spatial distribution of the heat transfer coefficient between the workpiece and the backingplate in a friction stir welding
process using inverse modelling. Design/methodology/approach - The magnitude and distribution of the heat transfer coefficient are the variables in an optimisation problem. The objective is to
minimise the difference between experimentally measured temperatures and temperatures obtained using a 3D finite element model. The optimisation problem is solved using a gradient based optimisation
method. This approach yields optimal values for the magnitude and distribution of the heat transfer coefficient. Findings - It is found that the heat transfer coefficient between the workpiece and
the backingplate is non-uniform and takes its maximum value in a region below the welding tool. Four different parameterisations of the spatial distribution of the heat transfer coefficient are
analysed and a simple, two parameter distribution is found to give good results. Originality/value - The heat transfer from workpiece to backingplate is important for the temperature field in the
workpiece, and in turn the mechanical properties of the welded plate. Accurate modelling of the magnitude and distribution of the heat transfer coefficient is therefore an essential step towards
improved models of the process. This is the first study using a gradient based optimisation method and a non-uniform parameterisation of the heat transfer coefficient in an inverse modeling approach
to determine the heat transfer coefficient in friction stir welding. © Emerald Group Publishing Limited.
AB - Purpose - The purpose of this paper is to determine the magnitude and spatial distribution of the heat transfer coefficient between the workpiece and the backingplate in a friction stir welding
process using inverse modelling. Design/methodology/approach - The magnitude and distribution of the heat transfer coefficient are the variables in an optimisation problem. The objective is to
minimise the difference between experimentally measured temperatures and temperatures obtained using a 3D finite element model. The optimisation problem is solved using a gradient based optimisation
method. This approach yields optimal values for the magnitude and distribution of the heat transfer coefficient. Findings - It is found that the heat transfer coefficient between the workpiece and
the backingplate is non-uniform and takes its maximum value in a region below the welding tool. Four different parameterisations of the spatial distribution of the heat transfer coefficient are
analysed and a simple, two parameter distribution is found to give good results. Originality/value - The heat transfer from workpiece to backingplate is important for the temperature field in the
workpiece, and in turn the mechanical properties of the welded plate. Accurate modelling of the magnitude and distribution of the heat transfer coefficient is therefore an essential step towards
improved models of the process. This is the first study using a gradient based optimisation method and a non-uniform parameterisation of the heat transfer coefficient in an inverse modeling approach
to determine the heat transfer coefficient in friction stir welding. © Emerald Group Publishing Limited.
KW - Friction stir welding
KW - Inverse modelling
KW - Friction welding
KW - Heat transfer coefficient
KW - Optimisation
U2 - 10.1108/02644401211190573
DO - 10.1108/02644401211190573
JO - Engineering Computations
JF - Engineering Computations
SN - 0264-4401
IS - 1
VL - 29
SP - 65
EP - 82
ER - | {"url":"http://orbit.dtu.dk/en/publications/estimating-the-workpiecebackingplate-heat-transfer-coefficient-in-friction-stirwelding(cda70846-dde9-4bbb-a035-25f0efe29f5c)/export.html","timestamp":"2014-04-19T05:35:39Z","content_type":null,"content_length":"23435","record_id":"<urn:uuid:9f62fd70-7253-4add-9d5c-d5a1d57015d9>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00420-ip-10-147-4-33.ec2.internal.warc.gz"} |
What are the results to date and the future of the work?
A conductivity model for Australia has been obtained by inverting the data. In the inverted model, the known regional conductive anomalies, which were investigated by previous studies, are well
imaged. New information from this conductivity model is (1) this model gives the position, extension and shape of the anomalies, so that the relation between the anomalies can be seen; (2). the model
gives conductance values for the anomalies.
In future work, the inversion program will be applied to interpret the Queensland electromagnetic induction data. The conductivity model derived from Queensland will give a detailed picture of the
Carpentaria anomaly in a three-dimension sense.
What computational techniques are used?
The inversion is achieved by minimising both misfit function and model roughness function. The misfit function is least squares misfit and model roughness is the first derivative roughness.
The model changes at each step of the inversion procedures represent a compromise between fitting the data and reducing the model roughness. The compromise is adjusted by a weight factor lambda. A
conjugate gradient relaxation method is used to solve the final equation.
Wang, L.J., Lilley F.E.M and Chamalaun F.H, The large scale electric conductivity structure of Australia from magnetometer arrays., Exploration Geophysics, 28, 150-155. 1997. | {"url":"http://anusf.anu.edu.au/annual_reports/annual_report96/AR3_html_96/Lilley.html","timestamp":"2014-04-21T15:08:38Z","content_type":null,"content_length":"5686","record_id":"<urn:uuid:07daf939-ff38-454e-86f7-63b6c14d8b1b>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00136-ip-10-147-4-33.ec2.internal.warc.gz"} |