content
stringlengths
86
994k
meta
stringlengths
288
619
An open letter to Steve LevittAn open letter to Steve Levitt An open letter to Steve Levitt Now, that’s the total electric energy consumed during the year, and you can turn that into the rate of energy consumption (measured in Watts, just like the world was one big light bulb) by dividing kilowatt hours by the number of hours in a year, and multiplying by 1000 to convert kilowatts into watts. The answer is two trillion Watts, in round numbers. How much area of solar cells do you need to generate this? On average, about 200 Watts falls on each square meter of Earth’s surface, but you might preferentially put your cells in sunnier, clearer places, so let’s call it 250 Watts per square meter. With a 15% efficiency, which is middling for present technology the area you need is 2 trillion Watts/(.15 X 250. Watts per square meter) or 53,333 square kilometers. That’s a square 231 kilometers on a side, or about the size of a single cell of a typical general circulation model grid box. If we put it on the globe, it looks like So already you should be beginning to suspect that this is a pretty trivial part of the Earth’s surface, and maybe unlikely to have much of an effect on the overall absorbed sunlight. In fact, it’s only 0.01% of the Earth’s surface. The numbers I used to do this calculation can all be found in Wikipedia, or even in a good paperbound World Almanac. But we should go further, and look at the actual amount of extra solar energy absorbed. As many reviewers of Superfreakonomics have noted, solar cells aren’t actually black, but that’s not the main issue. For the sake of argument, let’s just assume they absorb all the sunlight that falls on them. In my business, we call that “zero albedo” (i.e. zero reflectivity). As many commentators also noted, the albedo of real solar cells is no lower than materials like roofs that they are often placed on, so that solar cells don’t necessarily increase absorbed solar energy at all. Let’s ignore that, though. After all, you might want to put your solar cells in the desert, and you might try to cool the planet by painting your roof white. The albedo of desert sand can also be found easily by doing a Google search on “Albedo Sahara Desert,” for example. Here’s what you get: So, let’s say that sand has a 50% albedo. That means that each square meter of black solar cell absorbs an extra 125 Watts that otherwise would have been reflected by the sand (i.e. 50% of the 250 Watts per square meter of sunlight). Multiplying by the area of solar cell, we get 6.66 trillion Watts. That 6.66 trillion Watts is the “waste heat” that is a byproduct of generating electricity by using solar cells. All means of generating electricity involve waste heat, and fossil fuels are not an exception. A typical coal-fired power plant only is around 33% efficient, so you would need to release 6 trillion Watts of heat to burn the coal to make our 2 trillion Watts of electricity. That makes the waste heat of solar cells vs. coal basically a wash, and we could stop right there, but let’s continue our exercise in thinking with numbers anyway. | Previous page | Next page
{"url":"http://www.realclimate.org/index.php/archives/2009/10/an-open-letter-to-steve-levitt?wpmp_switcher=mobile&wpmp_tp=1","timestamp":"2014-04-16T22:02:04Z","content_type":null,"content_length":"14745","record_id":"<urn:uuid:4fbcf7b9-af9f-4a49-bea4-8db754efaabb>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00098-ip-10-147-4-33.ec2.internal.warc.gz"}
Integral of e^(-as^2)cos(Bs)ds from 0-infinity Thanks Jameson, the problem is if you apply integration by parts across that definite interval, you end up with an only marginally different integral, and applying it again gets you back where you started(which was kinda neat)that's the first thing I tried, also with the cos in exponential form(which I should've known just gets me the same effect) and nrged, I think you got it, lemme run with it after I'm done eating and see if I get it. I got the whole thing down into two different integrals and tried to think of a useful change of variable I could do, but I was looking at the whooooole thing, not just a single integral after breaking it into two. Good eyes laddy ^_^
{"url":"http://www.physicsforums.com/showthread.php?t=117948","timestamp":"2014-04-19T22:42:56Z","content_type":null,"content_length":"34019","record_id":"<urn:uuid:30eb1a23-bb76-486f-8d06-5fbfc29410b0>","cc-path":"CC-MAIN-2014-15/segments/1397609537754.12/warc/CC-MAIN-20140416005217-00226-ip-10-147-4-33.ec2.internal.warc.gz"}
A tough limit This problem came up in another thread. Let's see if it can be done using a CAS and some experimental techniques. First we attempt to approximate the sum as accurately as we can: with confidence in all the digits. Next a PSLQ was done to try to identify the number in terms of simple constants. This turned out to be fruitless. So the Euler Mclaurin formula was used next. Basically the EMS is a formula that relates sums, integrals and derivatives together. It has many forms but the one we are interested in looks like this. Often this form is of use for tough sums because it is often easier to integrate and differentiate. Plugging the above sum into that equation and asking mathematica to evaluate it produces a big mess. With some work you can get this out of it. We can take the limit as n approaches infinity term by term and we are left with and we are done.
{"url":"http://www.mathisfunforum.com/viewtopic.php?pid=210644","timestamp":"2014-04-19T06:58:33Z","content_type":null,"content_length":"20042","record_id":"<urn:uuid:a5e0f57d-d2d6-4dac-b4ba-560df82b4911>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00042-ip-10-147-4-33.ec2.internal.warc.gz"}
homology with compact supports up vote 8 down vote favorite In one of the exercises in McDuff and Salamon, they mention homology with compact supports. I know how to define *co*homology with compact supports, but I can't picture the homology version. How do I say that a chain has compact support? If I use singular chains, don't they all have compact support anyway? Google isn't a big help here, so any references would really help me out. Also, I've added some basic sub-questions that would also help me out tremendously. This must all be pretty simple, but my background in algebraic topology is weak and it completely baffles me! 1. McDuff-Salamon go on to state that for the open annulus $(1/2 < r < 1)$ in the plane, the first compact homology group is generated by the arc $\theta = 0$, $1/2 < r < 1$, which I can understand with hindsight: this is just the generator of the homology rel. the boundary and most likely there will be an isomorphism between the compact and relative homologies, just like there is one between the compact and relative co-homologies. 2. They also implicitly use an isomorphism between the compact homology and compact cohomology in certain dimensions. Should I just use this as the definition for the compact homology? I.e. $H_{k, c} = (H^k_c)^\ast$? homology at.algebraic-topology reference-request 2 My guess is that they are referring to what is usually called Borel-Moore homology (or homology with closed support). That fits at least with their given example. (A quick glance at Wikipedia indicates that the entry on Borel-Moore homology gives a correct description of it.) – Torsten Ekedahl Feb 24 '10 at 16:15 Great!! You're right, this fits the example perfectly, but it also helped me to understand some constructions that I was thinking about. It turns out that BM homology is just the tool that I need. – jvkersch Feb 24 '10 at 20:20 add comment 4 Answers active oldest votes To elaborate on Ekedahl's comment: It will be easiest to describe things for a triangulated space, so I can work with simplicial chains and cochains. (But my space could be infinitely triangulated; e.g. think of Escher's famous picture of the infinitely triangulated hyperbolic plane. I will also assume that my triangulation is locally finite.) As you observed, usual chains have compact support. This is why you can pair them with cochains (which have arbitrary support). Borel--Moore chains can have unbounded support. These can be paired with not with arbitrary cochains, but only with up vote 8 compactly supported ones. down vote accepted So Borel--Moore homology is the "homology analogue" of compactly supported cohomology. (But the support conditions are reversed, since homology is dual to cohomology.) One can often interpret Borel--Moore homology as relative homology. E.g. if $M$ is a compact manifold with boundary $\partial M$, then the Borel--Moore homology of $M\setminus \partial M$ (the interior of $M$) is the same as the homology of the pair $(M,\partial M)$. Thanks for the elaboration -- as I mentioned in my answer to Torsten Ekedahl's comment, Borel-Moore homology turns out to be precisely what I need. Your explanation made the remaining pieces fall into place. – jvkersch Feb 24 '10 at 20:27 add comment I think what McDuff-Salamon call homology with compact support is more commonly known as homology of infinite chains. The chains are formal infinite sums of singular simplices that are locally finite in the sense that each compact subset meets only finitely many singular simplices. The boundary is defined in the usual way. up vote 4 Note that the usual singular homology are with compact support: the cycles have compact images. By contrast, the usual singular cohomology do not have compact support as a cocycle may take down vote nonzero value on a sequence of cycles that run off to infinity. There is a book by Massey, "Homology and cohomology theory. An approach based on Alexander-Spanier cochains", where these matters are discussed in a very general setting. 1 This is the same as Borel--Moore homology, as far as I understand. – Emerton Feb 24 '10 at 19:51 add comment See http://eom.springer.de/H/h047870.htm up vote 1 down vote It seems they define homology with compact support as the direct limit of homology of compact subspaces. But this is just the ordinary homology (if we are considering singular homology), since every singular chain is contained in some compact subspace anyway (so the natural map between the direct limit and the ordinary homology is an isomorphism). – Andrea Ferretti Feb 24 '10 at 15:39 but I think there are also homology theories, wehre this map is not an isomorphism. In this situation homology with compact support could make sense/be useful. – HenrikRüping Feb 24 '10 at 16:44 Yes, but I don't see how this is relevant to the original poster question. – Andrea Ferretti Feb 24 '10 at 17:30 add comment For any manifold (compact or not), compactly supported cohomology is Poincare dual to (ordinary) homology, via capping with the fundamental class, which is an infinite chain (i.e the sum of up vote all the top simplices in a triangulation). Likewise, (ordinary) cohomology is Poincare dual to homology with locally finite infinite chains. In notation, $ H^{n-k}_{comp}(X)\cong H_{k}(X)$ 1 down and $H^{n-k}(X)\cong H_{k, inf}(X) $. I'm not sure that I understand the claims here. For example, one cannot pair an ordinary cocycle with a locally finite chain (since the resulting sum may still be infinite), but only with a finite chain. Also, it is true that $H^k_c$ and Borel--Moore homology in degree $k$ (which is what I think you mean by $H_{k,inf}(X)$) are canonically dual (with field coefficients). (This is not a statement of Poincare duality, but is easier, and is analogous to the canonical duality of cohomology and usual homology with field coefficients given by one of the universal coefficient-type formulas.) – Emerton Feb 25 '10 at 2:16 $ H^{n−k}_{comp}(X)=H_k(X)$ and $H^{n−k}(X)=H_{k,BM}(X)$ seems OK to me: If you think of a regular cell structure and its dual cell structure, the Poincare dual to a finite chain is a compactly supported cochain and the dual to a locally finite infinite chain is a regular cochain. But I agree your assertion contradicts what I said in the second paragraph. What if $X$ is an infinite genus 2-manifold? Then (say rational coeffs) $H^1_{comp}(X)=H_1(X)$ is a direct sum of infinitely many $Q$s, so $(H_1(X))^*$ is a infinite product of $Q$s. I'm confused. – Paul Feb 25 '10 at 3:31 OK, I see my mistake. $H^1_c(X)$' is not dual to $H_1(X)$', but it is dual to `$H_{1,BM}(X)$. I'll get rid of the second paragraph of my answer, but then the first paragraph becomes irrelevant to the question. – Paul Feb 25 '10 at 3:39 add comment Not the answer you're looking for? Browse other questions tagged homology at.algebraic-topology reference-request or ask your own question.
{"url":"http://mathoverflow.net/questions/16265/homology-with-compact-supports","timestamp":"2014-04-19T00:01:46Z","content_type":null,"content_length":"76498","record_id":"<urn:uuid:5aa1287d-e0ae-4af4-837c-cb899e01572b>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Could someone explain how the attenuation of sound or light through an absorber is an example of exponential decay? • one year ago • one year ago Best Response You've already chosen the best response. Okay, so say you have something that looks like this|dw:1360376093954:dw| As light, or sound doesn't matter, bounces around in there a little of it will be absorbed, attenuated, each time it hits one of the wall of the cube. So, as it bounces around the box it will lose some amplitude, say half, each time it hits one of the walls. So if its original amplitude is A then after the first hit it will be A/2, then A/4, then A/6, so its decay is given by A(1/2)^n and thus, it decays exponentially as it bounces around the box. Best Response You've already chosen the best response. Thank you! Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/5115a2e2e4b0e554778b9570","timestamp":"2014-04-21T10:15:01Z","content_type":null,"content_length":"116553","record_id":"<urn:uuid:d3278d34-e1b3-4480-bbf7-1e148799421d>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00612-ip-10-147-4-33.ec2.internal.warc.gz"}
Mole question Hello, how would i solve the following?: "How many litres of air (78% N2, 22% O2 by volume) are needed for the combustion of 1 litre of octane, C8H18, a typical gasoline component, of density 0.70g/mL?" Note - 1 mole of oxygen gas is equal to 24.5 litres. I have done the chemical equation of the combustion of octane which is C8H18 + 25/2O2 ---> 8CO2 + 9H20 but I am unsure about what to do next. Thank you Assuming a complete combustion: [tex] C_8 H_{18} + a(O_2+3.714N_2)\rightarrow b CO_2 + c H_2O[/tex] Conservation of atoms: b=8, c=9, a=25/2 (you were right) So that, the Fuel to Air ratio at stochiometric conditions is: W=molecular weight, N=moles, m=mass, f=fuel, a=air: [tex] FAR_s=\frac{m_f}{m_a}=\frac{W_f N_f}{W_a N_a}=\frac{W_f}{4.714W_a a}=\frac{114 g/mol}{4.714\cdot 28.9 \cdot 25/2}=\frac{0.0669 g_{fuel}}{g_{air}}[/tex] 1 litre of octane is 0.7 Kg. So that you will need 0.7/0.066=10.4 Kg of air, which at normal conditions has a density of 0.0012 Kg per litre, therefore 8.6 cubic meters of air will be needed.
{"url":"http://www.physicsforums.com/showthread.php?t=53275","timestamp":"2014-04-20T16:09:05Z","content_type":null,"content_length":"23539","record_id":"<urn:uuid:89ac8dd5-d489-4373-ba4c-a744db1b1c72>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00182-ip-10-147-4-33.ec2.internal.warc.gz"}
search results Expand all Collapse all Results 1 - 2 of 2 1. CJM 2005 (vol 57 pp. 400) Generalized $k$-Configurations In this paper, we find configurations of points in $n$-dimensional projective space ($\proj ^n$) which simultaneously generalize both $k$-configurations and reduced 0-dimensional complete intersections. Recall that $k$-configurations in $\proj ^2$ are disjoint unions of distinct points on lines and in $\proj ^n$ are inductively disjoint unions of $k$-configurations on hyperplanes, subject to certain conditions. Furthermore, the Hilbert function of a $k$-configuration is determined from those of the smaller $k$-configurations. We call our generalized constructions $k_D$-configurations, where $D=\{ d_1, \ldots ,d_r\}$ (a set of $r$ positive integers with repetition allowed) is the type of a given complete intersection in $\proj ^n$. We show that the Hilbert function of any $k_D$-configuration can be obtained from those of smaller $k_D$-configurations. We then provide applications of this result in two different directions, both of which are motivated by corresponding results about $k$-configurations. Categories:13D40, 14M10 2. CJM 2001 (vol 53 pp. 923) Decompositions of the Hilbert Function of a Set of Points in $\P^n$ Let $\H$ be the Hilbert function of some set of distinct points in $\P^n$ and let $\alpha = \alpha (\H)$ be the least degree of a hypersurface of $\P^n$ containing these points. Write $\alpha = d_s + d_{s-1} + \cdots + d_1$ (where $d_i > 0$). We canonically decompose $\H$ into $s$ other Hilbert functions $\H \leftrightarrow (\H_s^\prime, \dots, \H_1^\prime)$ and show how to find sets of distinct points $\Y_s, \dots, \Y_1$, lying on reduced hypersurfaces of degrees $d_s, \dots, d_1$ (respectively) such that the Hilbert function of $\Y_i$ is $\H_i^\prime$ and the Hilbert function of $\Y = \bigcup_{i=1}^s \Y_i$ is $\H$. Some extremal properties of this canonical decomposition are also explored. Categories:13D40, 14M10
{"url":"http://cms.math.ca/cjm/msc/14M10?fromjnl=cjm&jnl=CJM","timestamp":"2014-04-21T04:36:00Z","content_type":null,"content_length":"28252","record_id":"<urn:uuid:26c58b06-40ea-4c28-980c-298945c9ecad>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00568-ip-10-147-4-33.ec2.internal.warc.gz"}
Teacher Materials: • Overhead projector • Six paper clips Student Materials: • Chalk or white board for each student with accompanying writing utensil, or piece of paper and pencil. • Dividing Shapes Worksheet Lesson Part I: 1. Draw a large rectangle on the board. Divide it into 5 equal parts. Draw another rectangle and divide it into 5 unequal parts. 2. Ask the students which rectangle has been divided into parts that are the same size and shape. Explain that this means the rectangle has been divided into even parts. 3. Draw a circle and divide it into 4 unequal parts. Draw another circle and divide it into 4 equal parts. 4. Ask the students which circle has been divided evenly. 5. Repeat as many times as the teacher deems necessary. Lesson Part II: 1. Write the word "set" on the board. Explain that a set is a group of objects that are all the same. 2. Place six paperclips on the overhead projector. Switch it on. Show the students that this is a set of paperclips. Take two of the paperclips and set them apart. 3. Ask the students if the sets are equal. 4. Divide the paperclips into two equal sets. 5. Ask the students if the sets are equal. 6. Repeat as many times as the teacher deems necessary. Use the worksheet to evaluate whether students understood the concept. More Fraction Worksheets More Math Lesson Plans, Worksheets, and Activities For more teaching material, lesson plans, lessons, and worksheets please go back to the InstructorWeb home page.
{"url":"http://www.instructorweb.com/lesson/dividingshapes.asp","timestamp":"2014-04-17T00:59:49Z","content_type":null,"content_length":"22699","record_id":"<urn:uuid:71a82c2d-6dd5-42ac-abb3-25d1fede1a43>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00506-ip-10-147-4-33.ec2.internal.warc.gz"}
Post a reply y' = 0.11y, where y' is the amount you get per year, y is the amount you have, and t is time (in years) dy/y = 0.11dt ln(|y|) = 0.11t + C y = C*e^(0.11t) y(0) = 3,415,569 y(0) = C*e^0 C = 3,415,569 y = 3,415,569*e^(0.11t) --> This is the equation you're looking for y = 5,000,000 5,000,000 = 3,415,569*e^(0.11t) e^(0.11t) = 1.464 0.11t = 0.381 t = 3.464 If you don't know calculus, don't worry about the above. If you do, just ask and I'll explain all the steps. Or are you saying that the interest rate is compounded daily? I'm having a little trouble understanding your wording.
{"url":"http://www.mathisfunforum.com/post.php?tid=2389&qid=23634","timestamp":"2014-04-20T18:38:26Z","content_type":null,"content_length":"19639","record_id":"<urn:uuid:aceb3bfa-3262-4cbc-ba78-fed79f550677>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00084-ip-10-147-4-33.ec2.internal.warc.gz"}
Principles of Dynamics Why Rent from Knetbooks? Because Knetbooks knows college students. Our rental program is designed to save you time and money. Whether you need a textbook for a semester, quarter or even a summer session, we have an option for you. Simply select a rental period, enter your information and your book will be on its way! Top 5 reasons to order all your textbooks from Knetbooks: • We have the lowest prices on thousands of popular textbooks • Free shipping both ways on ALL orders • Most orders ship within 48 hours • Need your book longer than expected? Extending your rental is simple • Our customer support team is always here to help
{"url":"http://www.knetbooks.com/principles-dynamics-10th-hibbeler-russell/bk/9780131866812","timestamp":"2014-04-17T03:58:20Z","content_type":null,"content_length":"29690","record_id":"<urn:uuid:155aa20c-da02-40ca-b4a5-dddde389130f>","cc-path":"CC-MAIN-2014-15/segments/1397609526252.40/warc/CC-MAIN-20140416005206-00006-ip-10-147-4-33.ec2.internal.warc.gz"}
Coronado SAT Math Tutor Find a Coronado SAT Math Tutor A little about me: My name is Doug H. and I am a twenty-three-year-old Liberal Studies and English Major currently attending California State University Dominguez Hills. I spent the previous five years in the military where I worked as a apart of the Educational Services Provider on my ship where ... 43 Subjects: including SAT math, reading, English, geometry ...My name is Jennifer, and I'm currently a college student attending National University, majoring in math. I can tutor a variety of subjects from basic elementary math to calculus, basic natural sciences to upper division chemistry, as well as up to Semester 4 of university Japanese. I started o... 13 Subjects: including SAT math, chemistry, statistics, calculus ...For the past 6 years, I have been tutoring students of all ages in math, science, and computer science. I have a passion for the sciences and love to help others have a better understanding and get better grades with what they originally struggled with. I specialize in tutoring mathematics. 37 Subjects: including SAT math, calculus, algebra 2, algebra 1 ...Learning doesn't come from memorization; it comes from critical thinking and the ability to make conclusions given a certain amount of information. Ultimately, this is what I try to teach students. If you would like to know some more about me or my teaching qualifications, feel free to contact me. 16 Subjects: including SAT math, calculus, physics, geometry ...I also teach SAT prep, critical thinking and analysis strategies. As a math tutor of several years I have taught pre-calculus and calculus to a variety of students from diverse backgrounds. I help my students to succeed by teaching them critical learning strategies such as how to think logically about the concepts and scenarios being presented. 24 Subjects: including SAT math, reading, English, calculus
{"url":"http://www.purplemath.com/coronado_sat_math_tutors.php","timestamp":"2014-04-16T16:28:59Z","content_type":null,"content_length":"23994","record_id":"<urn:uuid:bf71ec8d-0dfc-4b52-adfc-a264f2f0054e>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00381-ip-10-147-4-33.ec2.internal.warc.gz"}
FOM: Effective Bounds in Core Mathematics Martin Davis martin at eipye.com Thu Jun 29 11:37:33 EDT 2000 At 03:55 PM 6/27/00 -0400, Fred Richman wrote: >An analysis of a classical proof of P >might reveal a constructive proof of a classically equivalent theorem >Q. The constructivist would maintain that Q is what had been proved, >not P. ... the constructivist ... is simply interested in finer >than the classical mathematician wishes to make. As stated this is surely not true. The non-constructivist makes the very same distinction, but expresses it in different words. Instead of saying: P has not been proved, the proof only shows Q the non-constructivist would say P has been proved, but non-constructively Where is there greater fineness of distinction? If the constructivist is of the Bishop variety, the tendency will be to replace P by P* that can be proved constructively because a stronger hypothesis has been built in, e.g., classical continuity replaced by uniform effective continuity. With the redefinition, P* written in English may even use exactly the same words as P. Is this making finer distinctions or a form of obfuscation? Martin Davis Visiting Scholar UC Berkeley Professor Emeritus, NYU martin at eipye.com (Add 1 and get 0) More information about the FOM mailing list
{"url":"http://www.cs.nyu.edu/pipermail/fom/2000-June/004159.html","timestamp":"2014-04-24T09:13:11Z","content_type":null,"content_length":"3891","record_id":"<urn:uuid:2e04a1e1-0928-43d9-9128-48822f827296>","cc-path":"CC-MAIN-2014-15/segments/1398223206118.10/warc/CC-MAIN-20140423032006-00344-ip-10-147-4-33.ec2.internal.warc.gz"}
Convergence of the summation 1/p^(1+iy) (over all primes p with y a nonzero real number) up vote 4 down vote favorite For $z\in\mathbb{C}$ with real part greater than $1$ the sum $$\sum_{p}{\frac{1}{p^z}},$$ where the sum is taken over all primes $p$, converges absolutely. It is also well known that the same sum with $z=1$ does not converge. Now my question is if there are $y\in\mathbb{R}$ such that $$\sum_{p}{\frac{1}{p^{1+iy}}}$$ converges? Many thanks in advance! zeta-functions prime-numbers complex-analysis add comment 1 Answer active oldest votes This is always convergent for any real $y \neq 0$. This follows from the fact that the related integral $$\int_2^\infty \frac{x^{iy-1}}{\log x} dx $$ is convergent (to see this use the substitution $t=\log x$ ), and say the prime number theorem with some weak error term, in fact $$ \pi(x)=\frac x {\log x} \left( 1+O \left( \frac 1 {\log x} \right) \right) $$ is up vote 4 sufficient. In a similar spirit the similar sum $$ \sum_{n=1}^\infty n^{-1+iy} $$ is not convergent for any $y$ (while bounded for $y \neq 0$ the partial sums will oscillate). This can down vote be seen from the related integral $$ \int_2^\infty x^{iy-1} dx $$ and the same substitution. add comment Not the answer you're looking for? Browse other questions tagged zeta-functions prime-numbers complex-analysis or ask your own question.
{"url":"http://mathoverflow.net/questions/86828/convergence-of-the-summation-1-p1iy-over-all-primes-p-with-y-a-nonzero-re","timestamp":"2014-04-20T14:05:37Z","content_type":null,"content_length":"49837","record_id":"<urn:uuid:061dc0bf-cb04-4d12-b128-6debee418f8f>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00571-ip-10-147-4-33.ec2.internal.warc.gz"}
The converse of smoothed analysis A year ago, Timothy Gowers posted the following beautiful question to MathOverflow: Are there any interesting examples of random NP-complete problems? Here’s an example of the kind of thing I mean. Let’s consider a random instance of 3-SAT, where you choose enough clauses for the formula to be almost certainly unsatisfiable, but not too many more than that. So now you have a smallish random formula that is unsatisfiable. Given that formula, you can then ask, for any subset of its clauses, whether that subset gives you a satisfiable formula. That is a random (because it depends on the original random collection of clauses) problem in NP. It also looks as though it ought to be pretty hard. But proving that it is usually NP-complete also seems to be hard, because you don’t have the usual freedom to simulate. So my question is whether there are any results known that say that some randomized problem is NP-complete. (One can invent silly artificial examples like having a randomized part that has no effect on the problem — hence the word “interesting” in the question.) On skimming this question, my first thought was: “aha, he’s obviously groping toward the well-studied notion of average-case complexity! Let me generously enlighten him.” But no, it turns out he wasn’t asking about average-case complexity, but about something different and novel. Namely, the random generation of computational problems consisting of exponentially many instances, for which we’re then interested in the worst-case instance. When I explained to Gil Kalai what Gowers wanted, Gil amusingly described it as the “converse of smoothed analysis.” In smoothed analysis—one of many contributions for which Dan Spielman recently won the Nevanlinna Prize—we start with a worst-case instance of a problem (such as linear programming), then perturb the instance by adding some random noise. Gowers wants to do the opposite: start from a random instance and then perform a “worst-case perturbation” of it. (The closest existing notions I could think of were trapdoor one-way functions and other primitives in cryptography, which involve the random generation of a computational problem that’s then supposed to be hard on average.) Anyway, I tossed the question onto my stack of “questions that could develop into whole new branches of theoretical computer science, if someone felt like developing them,” and pretty much forgot about it. Then, at dinner last night, I posed the question to Allan Sly, who’s visiting MIT to talk about his exciting new FOCS paper Computational transition at the uniqueness threshold. Within an hour, Allan had emailed me a sketch of an NP-hardness proof for the “random 3SAT” problem that Gowers asked about. I repost Allan’s solution here with his kind permission. Group the n variables into N=n^ε groups of size n^1-ε, M[1],…M[N] arbitrarily. For each group M[i] take all the clauses with all 3 variables in M[i] such that it satisfies both the all 1 and the all 0 assignments i.e. clauses that have either 1 or 2 variables negated. I think that just a first moment estimate should show that with high probability the only assignments on M[i] that satisfies all of these clauses should be the all 1 assignment or the all 0 assignment – other assignments are just too unlikely. So in taking these clauses we reduce to the case where we have constant values on each of the groups. Once you have these clauses you can then treat each group as a new variable and can construct any SAT assignment on these new variables. Because now you only need to find a clause with 1 variable in each M[i], M[j], M[k] for each (i,j,k) ∈ [N]^3 that has the right negations. With high probability all of them should exist so you should be able to make whatever SAT assignment on the N variables you want. My back of the envelope calculation then suggests that as long as you have n^1+ε random clauses to begin with then this should be enough. It’s not hard to see that Allan’s solution generalizes to 3-COLORING and other constraint satisfaction problems (maybe even all NP-complete CSPs?). In retrospect, of course, the solution is embarrassingly simple, but one could easily generate other problems in the same vein for which proving NP-hardness was as nontrivial as you wanted it to be. Further development of this new branch of theoretical computer science, as well as coming up with a catchy name for it, are left as exercises for the reader. gowers Says: Comment #1 October 6th, 2010 at 11:01 am That’s very nice! I’ll try to remember why I asked the question in the first place, and see whether I this answer is satisfactory or whether I need to come up with a better question. Madhu Sudan Says: Comment #2 October 6th, 2010 at 12:22 pm Hi Scott The concept of complexity of a random input perturbed in the worst-case has been explored before, I think, in a paper by Feige and Kilian (FOCS 1998). But I don’t believe they showed any hardness Btw, I guess you/Timothy want to fix the “random” initial part to something small (or else I could sample n^4 random clauses of length 3 … and then the random part includes all possible clauses asterix Says: Comment #3 October 6th, 2010 at 1:37 pm What I am confused about is that if you have a random (unsatisfiable) instance of SAT, and then you find a subset of the clauses that comprise a satisfiable instance, then why is this final satisfiable instance random? It could be that the original random instance always contains a subset of clauses with a special structure, and therefore this special subset is not really “random”. Paul Carpenter Says: Comment #4 October 6th, 2010 at 2:01 pm “distinctly unsmoothed [or] crinkled up analysis” Scott Says: Comment #5 October 6th, 2010 at 3:40 pm What I am confused about is that if you have a random (unsatisfiable) instance of SAT, and then you find a subset of the clauses that comprise a satisfiable instance, then why is this final satisfiable instance random? It isn’t. It could be that the original random instance always contains a subset of clauses with a special structure, and therefore this special subset is not really “random”. Yes, that’s completely right! Indeed, in Allan’s NP-hardness argument, he does construct a very special subset of clauses. The issue is just that wasn’t clear a priori that such a subset exists w.h.p.—and if the original set of clauses has (say) linear size, then it still isn’t clear. Scott Says: Comment #6 October 6th, 2010 at 3:47 pm Madhu: Thanks for the Feige-Kilian reference! (or else I could sample n^4 random clauses of length 3 … and then the random part includes all possible clauses anyway). Yes, that’s right! That’s exactly why it’s important that Allan’s solution only requires n^1+ε clauses. asterix Says: Comment #7 October 6th, 2010 at 4:38 pm “Given that formula, you can then ask, for any subset of its clauses, whether that subset gives you a satisfiable formula. That is a random (because it depends on the original random collection of clauses) problem in NP. ” I misread/understood “random problem” to be “random subset”. Now it makes sense. Thanks! Tracy Hall Says: Comment #8 October 6th, 2010 at 11:47 pm Interesting construction. I might not have been so cavalier about walking over and interrupting the dinner conversation if I had realized that I was also interrupting the genesis of the discipline of jagged catalysis. anon Says: Comment #9 October 7th, 2010 at 2:56 am Chunky analysis? Chunked? Chunkified? (I like peanut butter) Okn Says: Comment #10 October 7th, 2010 at 2:18 pm Local-worst case analysis compaed to the usual global worst-case. ACW Says: Comment #11 October 7th, 2010 at 5:10 pm I have a newbie question. I almost understand the foregoing, but I think some tacit standard assumptions of the field are escaping me. Gowers asks us to pick a random set of clauses just large enough to make unsatisfiability overwhelmingly likely. Until now I didn’t know that the overwhelming majority of large 3-SAT instances turns out to be unsatisfiable, but I take it that this is a well-known result. Then he says to pick a random subset of this preselected set of clauses, small enough (I guess) that it might well be satisfiable, and asks whether this is satisfiable, and then adds, confusingly for this beginner, “This is a random problem in NP.”. What is an instance of this problem? What is the universe of instances? What is the parameter n of the complexity function? (I didn’t ask that last question properly; I’m too much of a beginner to know how to phrase it, but I hope my meaning is clear.) What’s confusing for me, I guess, is: how can a problem, which is supposed to be a set of instances, nestle inside a single instance of a bigger problem? András Salamon Says: Comment #12 October 7th, 2010 at 10:05 pm The Gowers-Aaronson-Sly (GAS) model of random NP problems should provide fun for everyone on a visit to the Zoo. GAS-3SAT is just the first step, GAS-3COL already beckons… Using Tracy Hall’s suggested terminology, there are starting instances that never generate an NP-hard problem, and the choice of partition is also important. So this doesn’t seem to be an NP-hardness proof in the usual sense, via a deterministic logspace reduction; the NP-hardness proof is only for “sufficiently jagged” starting instances, and only when a “catalytic” partition is chosen. One question is then: which parameter characterizes the easy to hard transition, and can one prove a phase transition? For instance, does fraction of mixed clauses work for jaggedness? And when does jaggedness fail to provide a catalyst? Dave Says: Comment #13 October 8th, 2010 at 10:48 am What’s confusing for me, I guess, is: how can a problem, which is supposed to be a set of instances, nestle inside a single instance of a bigger problem? The original 3SAT problem can be viewed this way also. Creating a 3CNF formula over n variables (say, for the purpose of reducing some other NP problem to 3SAT) can be viewed as starting with the single biggest possible 3CNF formula over n variables (the one with all ≈ n3 possible clauses over n variables) and removing some of its clauses. In this new model, when constructing the 3CNF formula, I don’t have complete freedom to choose any clause I want. Instead of having complete freedom to choose any clause over n variables, only a randomly chosen subset of n1.1 of the≈ n3 possible clauses are available to me. The question is then, do I have enough freedom to get the behavior I want from the formula with only these clauses available to choose? There are still an exponential number of formulas I can create (2n1.1 versus 2n3 in the standard model), but it is not obvious whether those formulas can get me all the behaviors I might want (although it appears that the answer is yes). You can of course view other problems this way. Outputting a graph with n nodes can be viewed as starting with Kn, the complete graph over n nodes, and removing some of its edges. Dave Says: Comment #14 October 8th, 2010 at 11:46 am In my previous comment, 2n1.1 = 2^{n^{1.1}} 2n3 = 2^{n^3} n3 = n^3 Kn = K_n Foiled again by the difference in html rendering between the preview and the actual posted comment. ACW Says: Comment #15 October 8th, 2010 at 12:23 pm Dave, your response helps, but I’m still not completely out of the woods. Suppose we pick a big “master problem” with a set S of N different clauses. Now the new problem that we’re constructing has, as inputs, subsets of S. There are exactly 2^N such subsets. I thought that an echt Decision Problem had to have an infinite set of possible inputs. But 2^N, while possibly monstrously large, is not infinite. All the definitions of complexity that I know about (and remember, I’m a tyro) require unbounded instance sizes, in order to enable asymptotic reasoning. With a cap on instance size, I claim that the new problem is O(1). [Just take the very hardest instance (there must be one). Its required runtime is some (monstrously enormous) constant T. By definition, the runtime for no instance of the problem exceeds T. So the limit, as the problem size goes to infinity, of the runtime is this constant.] As I said to start with, I’m obviously missing some subtlety. Dave Says: Comment #16 October 8th, 2010 at 1:29 pm I thought that an echt Decision Problem had to have an infinite set of possible inputs. But 2^N, while possibly monstrously large, is not infinite. You can take the union over all N, right? Here might be a way to phrase the problem as a promise problem. Start with the infinite set C of all 3CNF clauses, whose variables are labeled x_i for some positive integer i. For each natural number n, identify the set of ≈ 2^{n^3} clauses that contain only variables x_i with i ≤ n. Randomly select 2^{n^1.1} of them to keep, and throw the rest away. Call the resulting (infinite, since we do this for all n) set of clauses C’. Now you can define a promise problem P: Given a 3CNF formula that is promised to contain only clauses from C’, is it satisfiable or not? The question is whether this promise problem is NP-hard or What I don’t understand about the argument that Scott outlines is this. Define P_Y to be the set of satisfiable 3CNF formulas with only clauses from C’, and P_N to be the set of unsatisfiable 3CNF formulas with only clauses from C’. Call S_Y the set of satisfiable 3CNF formulas and S_N the set of unsatisfiable 3CNF formulas. For this promise problem P=(P_Y,P_N) to be NP-hard, we would have to show that there is a polynomial-time computable function f such that f(φ) is in P_Y if φ is in S_Y, and f(φ) is in P_N if φ is in S_N. The argument above depends on proving the existence of sufficient clauses in C’ to create a formula f(φ) in P that is satisfiable if and only if φ is satisfiable. But I see no way to find such clauses in polynomial time, since they are defined by randomly deleting clauses from an exponential-size (2^{n^3}) of clauses to form another exponential-size (2^{n^1.1}) of clauses. Perhaps the argument is not intended to form a promise problem. In this case, change P_N to be the set of all 3CNF formulas not in P_Y, and leave the definitions of P_Y, S_Y, and S_N the same. Still I don’t know how to reduce 3SAT to P. Gil Says: Comment #17 October 8th, 2010 at 7:36 pm Hi Scott, That’s nice! Indeed when we discussed it over email it (and you kindly explained to me Gowers’s question) it looked that the reverese smoothed behavior asked for by Tim will be close to the worst case behavior, while the smothed analysis is often close in properties to the average case. (Of course, we need to worry about the scale for the perturbation.) In this direction we can ask if for LP if we take a worse case linear program in the neighborhood of a random problem will the simplex algorithm be exponential, for those rules that we know it for the worst case? You mentioned random perturbations of worst case instances and worst case perturbations of random instances. Let me mention that there is also interest in random perturbations of random instances. How come? you may ask. The reason is that probability theory goes much beyond expectations. So while for averages this will not be terribly interesting it still is. In a sense, this the what noise sensitivity/stability is about. Let me mention another related idea by Bilu and Linial (in a recent ICS paper) that “isolated instances of hard problems are easy”. Of course, being isolated says something like the average behavior of a perturbed problem behaves like the problem itself so in some sense it is related to smoothed analysis. Albert Atserias Says: Comment #18 October 11th, 2010 at 7:09 pm The collection of NP-hard problems that you get by this method are promise problems in which even stating the promise requires non-uniformity (for each n, the promise is that the given formula is a subset of the randomly chosen one with n variables). You must be a really big fan of promise problems to accept these as “randomly generated NP-complete problems”. They do not even have a finite But what I want to point out in this comment is that more or less interesting (non-artificial) randomly generated NP-complete problems (with finite descriptions!) have been known for some time. I’ll give references later, but let me state a simple special case first. For a fixed finite graph H, let H-coloring be the following problem: given a graph G, is there a homomorphism from G into H? Here a homomorphism is just a mapping from the vertices of G to the vertices of H that takes edges of G into edges of H (the mapping may identify some vertices and may map non-edges to whatever it wants). The problem is called H-coloring because the case where H is a triangle is just 3-coloring in the usual sense. Now I claim that if you let H be a G(n,1/2) random graph, then H-coloring is NP-complete almost surely as n approaches infinity. The one-line proof calls the Hell-Nesetril Theorem: If H is bipartite then H-coloring is in P, otherwise H-coloring is NP-complete. Since G(n,1/2) is almost surely non-bipartite, the claim follows. Of course the same works for H taken from G(n,c/n) and many other models of random graphs for H. A much more general result was shown by Lucsak and Nesetril here for general homomorphism problems into a fixed template (so-called CSPs). The key to the argument is that a random structure is almost surely projective (it has no non-trivial homomorphisms from a power of itself into itself), from which NP-completeness follows from the known half of the so-called “algebraic dichotomy conjecture” for CSPs. There are other dichotomy results that could give rise to a few other randomly generated NP-complete problems (e.g. directed graph homeomorphism problem). John Sidles Says: Comment #19 October 12th, 2010 at 8:13 am I don’t mind confessing that I have been struggling to follow the above discussion, and in particular, Tim (Gowers) motivation(s) for raising this problem are still somewhat mysterious (to me). So I’ll simply guess at them! To begin, over on Dick Lipton’s weblog, it appears that similar problems are being attacked from precisely the opposite direction, under the topic Math and Theory Proof Methods: A Comparison. In particular, Paul Beame has offered some general remarks upon the role of structure theorems in complexity theory, and then Luca Aceto followed up with some well-considered comments and specific references relating to hybrid dynamical systems and bisimulation theory. The remainder of this post will attempt to identify some common mathematical themes that are shared by these two fine Lipton/Aaronson weblog topics. Scott and Tim, are your random NP-complete problems being constructed with a view toward demonstrating that Beame-type structure theorems in general, and Aceto-style bisimulation theory in particular, are in some sense irrelevant to at least some members of the broader class NP-complete problems? In other words, is the idea to construct a (structure-free?) class of Gowers/Aaronson problems that is complementary to the structure-restricted class of Beame/Aceto problems? Thus, is one possible long-term objective—albeit a hyper-ambitious objective—to establish that all problems in P satisfy Beame-type structure theorems (in some to-be-specified sense), but some problems in NP don’t? This would be IMHO a very interesting program to pursue, and I will observe, that the resulting complexity-theoretic world might well prove congenial both to complexity theorists and to quantum systems engineers. For example, Lindbladian dynamical processes would be recognized as generically unravelling (Aceto-style) to hybrid bisimulation processes … specifically, with the classical noise/measurement datastreams being bisimulation-dual to Lindblad-compressed quantum trajectories. Thus, practical quantum simulations would be seen as generically residing in P, without risk of collapse to well-established hierarchies of classical and quantum complexity theory … which would comprise a major advance in our understanding of practical methods for quantum systems engineering. More broadly, these Beame/Aceto structure-centric ideas would help us understand why—as Dick Lipton often observes—so many instances of problems that formally are NP-complete, in practice are in P. This would occur because real-world problem instances (both classical and quantum) typically are generated, not by random processes and not as worst-case instances, but by dynamical processes that respect Beame-type structure theorems that naturally induce Aceto-style bisimulation structures. ACW Says: Comment #20 October 12th, 2010 at 12:01 pm @Dave, I follow you up to “What I don’t understand is …”. I stopped trying to understand there, because I’m still wrestling with fundamentals. Two things bother me about your formulation. First, you are diving into axiom-of-choice hell as soon as you define C’, and it would worry me a lot if the question being asked only makes sense with the choice axiom. But even more worrying is the tacit assertion that the answer to the question is independent of C’. Surely it’s not, so we must be asking about most selections of C’. And I’m not at all sure that there is a meaningful measure on the space of these graded selected subsets that would allow one to say “most” in a meaningful way. Dave Says: Comment #21 October 13th, 2010 at 2:26 am WordPress cut off most of my previous comment. I’ll try again, but if it doesn’t work, you can give me your email and I can email the whole response. As you could probably tell, I realized halfway through attempting to answer your question that I didn’t totally understand Scott’s explanation. So I can’t completely answer your question because I don’t totally understand Scott’s explanation, but here is my current, very likely flawed, understanding of the idea. Also, I was not thinking straight in the previous response, because the numbers I picked are totally wrong. For instance, there are about n^3 clauses of the form (x_i or x_j or x_k) with i,j,k ≤ n, not 2^{n^3} as I mistakenly said before. Because of this I can see one way that such a problem can be considered NP-hard via nonuniform reductions. I’m still not sure how it is in NP, nor how to use uniform reductions to show it is NP-hard. No guarantees that I am thinking straight this time either. You are correct that they are talking about most selections of C’, but it is common when doing randomness in theoretical computer science to abbreviate a statement like, “choose A at random; then the probability that A has property P is at least 99%” to “choose A at random; then A has property P”. I think that there is a way to define a meaningful measure that could help formalize the statement. For each n, there is a finite set C_n of size |C_n| = 4/3 * n^3, consisting of all clauses with 3 variables x_i, x_j, x_k, where i,j,k ≤ n (I think that’s right; n^3 choices for the variables, times 8 choices for where to place negations, divided by 3! = 6 to ignore order since OR is commutative). We want to select a random subset of C_n of size n^1.1; there is a well-defined, finite number of subsets of C_n of size n^1.1: specifically, K(n) = (|C_n| choose n^1.1) of them. So it makes sense to talk about the uniform measure on the set of subsets of C_n of size n^1.1. Given a particular element C’_n of this set (a particular set of clauses of size n^1.1), its probability of being chosen is 1/K(n). Now, to define how to select “uniformly at random” an infinite sequence of “allowable clauses”, use the product measure: given particular sets C’_1,C’_2,…,C’_n, define the probability that a infinite sequence C’ chosen at random starts with C’_1,C’_2,…,C’_n to be μ(C’_1,C’_2,…,C’_n) = prod_{i=1}^n 1/K(i). This is similar to how you might define uniform measure on the space {0,1}^\infty of infinite binary sequences as the product measure based on the fair coin toss measure on the set {0,1} that assigns Pr[0] = Pr[1] = 1/2, by defining for each finite string x the probability that the sequence begins with x to be 2^{-|x|}. Except now, for each n, instead of choosing between one of two bits uniformly at random, you are choose among K(n) sets of clauses uniformly at random. Once this measure is defined, it makes sense to ask the following question. Pick an infinite sequence C’ = C’_1, C’_2, … of sets of clauses at random according to μ. Do not take their union; I earlier suggested this but it makes the measure messy. Define 3SAT_n to be the set of satisfiable 3CNF formulas with exactly n variables. Define the language Rand-3SAT(C’) = union_{n=1}^\infty { \phi \in 3SAT_n | \phi contains only clauses from C’_n}. C’ is chosen at random, but once chosen, Rand-3SAT(C’) is a unique, well-defined language, which is either NP-hard or it isn’t. The question is, what is Pr_μ[Rand-3SAT(C') is NP-hard]? I think the argument Scott puts forward proves that this probability is 1. This is probably not precisely what Scott meant. It is easy to show, for example, that Rand-3SAT(C’) is uncomputable with probability 1. That doesn’t prevent it from being NP-hard, but Scott used the term NP-complete so it’s clear that he has some different language in mind than the one I defined. I think possibly what he meant is that, given access to the n^1.1 allowable clauses in C’_n, one can construct a reduction from 3SAT to Rand-3SAT(C’). This is called a *nonuniform reduction*, in the sense that for each n, before the algorithm that computes the reduction can work, it must be given some information specific to n, the allowable clauses in C’_n, although not specific to the particular n-variable formula φ that is the input to the reduction. Or possibly, define the promise problem Rand-3SAT(C’) = (P_Y,P_N) by P_Y = union_{n=1}^\infty { \phi \in 3SAT_n | \phi contains only clauses from C’_n}, and P_N = union_{n=1}^\infty { \phi \in complement of 3SAT_n | \phi contains only clauses from C’_n}. Similarly, both P_Y and P_N would be uncomputable with probability 1, and nonuniform reduction would apparently be required to reduce 3SAT to it. Like I said, I’m not sure how to formalize this notion properly, but these are my best guesses. happy Says: Comment #22 October 13th, 2010 at 3:35 am Similarly, both P_Y and P_N would be uncomputable with probability 1, and nonuniform reduction would apparently be required to reduce 3SAT to it. Like I said, I’m not sure how to formalize this notion properly, but these are my best guesses. AJ Says: Comment #23 October 13th, 2010 at 3:02 pm What if someone proves P=NP is a few elementary lines that no one has ever though of. Will all the research of the past 50 years be worthless? Scott Says: Comment #24 October 13th, 2010 at 3:28 pm AJ: More or less. But if someone discovers a method for interstellar travel using yarn and peanut butter, most of the research NASA has done for the past 50 years will be worthless as well. AJ Says: Comment #25 October 13th, 2010 at 3:55 pm There are examples in history which suggests things like that happen. Hilbert’s basis theorem which virtually vanquished Gordon’s and Godel’s incompleteness which vanquished Hilbert’s. Both Gordon’s and Hilbert’s vision was insurmountable and Hilbert’ and Godel’s foundational work was by every means elementary. John Sidles Says: Comment #26 October 13th, 2010 at 6:35 pm AJ, such things have happened even in more recent decades. A good example si Walter Kohn and Lu Jeu Sham’s celebrated 1965 article Self-Consistent Equations Including Exchange and Correlation Effects. This article introduced what are today known as the Kohn-Sham equations, which efficiently solve (with sufficient accuracy for many practical purposes) a class of quantum simulation problems that once were regarded as intractable. Although Kohn and Sham’s article received few citations during its first decade, as of today Physical Review on-line lists it has having been cited 10,341 times. In comparison, CiteSeer lists Richard Feynman’s 1982 article Simulating Physics With Computers—which broadly reaches the opposite conclusion as Kohn and Sham—as being cited 222 times … which certainly is a very respectable citation record … but nowhere near the class of Kohn and Sham. To paraphrase Arthur Clarke: “If a distinguished theorist says that a quantum simulation problem is formally NP-hard, they are almost certainly right. If they say that it is NP-hard in practice, they are almost certainly wrong.” John Sidles Says: Comment #27 October 13th, 2010 at 7:38 pm AJ, to provide some quantitative foundations to the above Clarke-style aphorism, I have extracted from an article by Redner, titled Citation Statistics From More Than a Century of Physical Review, a figure showing the cumulative citation statistics that are associated to the Kohn-Sham article. These citation statistics are pretty incredible … it turns out that “KS” is the most-cited Physical Review article of all time, by a huge and ever-increasing margin (note: in the figure the competing “EPR” and “BCS” Physical Review articles are precisely the ones that a quantum physicist would guess). This citation record is especially remarkable for an article that describes quantum simulation techniques whose mathematical underpinnings remain rather poorly understood, even to the present day. AJ Says: Comment #28 October 14th, 2010 at 1:05 am Hi John: I do not know many things you are communicating here. But I do take Knuth’s argument on this. There may be atmost finitely many obstructions to P = NP. But I have a gut feeling that the Mathematics built over 5000 years cannot have missed out anything that is as subtle as not being adequate to settle the conjecture. John Sidles Says: Comment #29 October 14th, 2010 at 8:34 am AJ, you make a good point … it scarcely ever happens in mathematics that widely-accepted theorems are discovered to be outright wrong. Yet century-by-century in mathematics—and physics too—widely-accepted elements of naturality are set aside. For example, in the 19th century the naturality of Euclidean geometry was set aside, and in the 20th century, the naturality of Newtonian mechanics was set aside. So we can ask, in the 21st century, what 20th century elements of physical and mathematical naturality are likely to undergo substantial evolution? Shtetl Optimized grapples with two elements that are top candidates for 21st century evolution: (1) Hilbert state-spaces, and (2) the complexity classes P and NP. The point is that there’s nothing wrong with the many theorems that have been proved about Hilbert space dynamics, and about the complexity classes P and NP … just as there is nothing wrong with theorems about Euclidean geometry and Newtonian dynamics. Yet neither is it self-evident that these are the sole, or even the most natural, elements for building a 21st century understanding of physical dynamics and complexity. Perhaps the safest expectation is that, in coming decades, students will learn the traditional 20th century elements of Hilbert space dynamics and P-vs-NP complexity classes, in conjunction with new dynamical state-spaces and new hierarchies of complexity classes, whose nature we are at present still struggling to grasp. In short, among most important discoveries in each century are new, good, natural questions to ask … because very often, once we discover good questions to ask, good answers are not too hard to An outstanding weblog that regularly grapples with this class of questions is Dick Lipton’s Gödel’s Lost Letter and P=NP. John Sidles Says: Comment #30 October 15th, 2010 at 11:13 am Just to keep this thread ventilated, the topic currently running in Dick Lipton’s webblog provides a good illustration of (my own understanding of) Tim Gowers’ motivation in posing his question about the existence random NP-complete problems … as follows. Lipton’s weblog is discussing a recent preprint by Venkatesan Guruswami and Adam Smith, titled Codes for Computationally Simple Channels: Explicit Constructions with Optimal Rate. Guruswami and Smith are making progress with the (very tough) problem of error correction against, not random errors, but adversarial errors. Focusing on the broad strategy of the Guruswami and Smith preprint, and ignoring the (very many) interesting details, their strategy is to change the starting premises of the problem, by restricting the adversarial errors to those errors that can be generated by a bounded-resource adversary. This strategy allows Guruswami and Smith to prove what Dick Lipton calls a “pretty theorem” whose key ideas are “extremely appealing.” Tim Gowers’ question seemed (to me) to be motivated by the idea of adopting a similar strategy for making progress in P-versus-NP, namely, change the starting premises of the problem by restricting the classes P and NP to problem instances that can be generated by adversaries having bounded computational resources. Here the point is that a great many problems that are infeasible to solve when they are posed by an adversary having access to an omniscient oracle look *much* easier when they are posed by dumber adversaries. Guruswami and Smith’s preprint shows how this strategy can prove theorems that are concrete, powerful, and even “pretty.” LifeIsChill Says: Comment #31 October 15th, 2010 at 11:26 am Is there a polynomial time algorithm known for NP-complete problems where the word sizes one has to operate are exponentially large ? Say you do quadratically many multiplications to solve an NP-complete problem but on words of size n^n bits. If such an algorithm is found, would that be p=np? or at least would that be of any interest to the community? Scott Says: Comment #32 October 15th, 2010 at 1:45 pm LifeIsChill: Great question! It’s actually been known since the 1970s that, if you allow unit-cost arithmetic operations on exponentially-long numbers, and ALSO the reading out of individual bits from the binary representations of those numbers, then you can solve not only NP-complete problems, but even PSPACE-complete problems in polynomial time. (I don’t remember the reference offhand—does Needless to say, most people interpret this not as a practical proposal for solving PSPACE, but as an illustration of the hazards of the unit-cost model. LifeIsChill Says: Comment #33 October 15th, 2010 at 2:15 pm Actually this is great news. I was confused a lot last week. I have a half page technique on a particular hard problem. It takes quadratic operations but as expected on exponentially large words. Do you think any such technique would add any useful knowledge to the community other than the fact the technique is simple (and may have been known before but I have not found a reference)? Scott Says: Comment #34 October 15th, 2010 at 2:22 pm It might be interesting—from your description, I can’t really say. LifeIsChill Says: Comment #35 October 15th, 2010 at 11:32 pm Thank you. Actually it is with regards to the factorial function. What about polynomially growing bits size $n^k$ bits? Say one has a solution to computing n! (factorial) in $log^{c}(n)$ steps but with arithmetic operations on size $n^k$ bits. Would that be of any importance? Are such relations known? Are there any references? LifeIsChill Says: Comment #36 October 16th, 2010 at 12:01 am We need at least $nlog(n)$ bits to represent n!. However are their easy solutions to computing n! given word size of $n^k$. LifeIsChill Says: Comment #37 October 16th, 2010 at 11:34 am Professor I found the reference. Thanks for the tip. It is Shamir’s work on factoring in $\log(n)$ steps. I could not get the paper online though. Interestingly I am also getting similar results as he did. However I do not know if he is saying n! can be computed trivially if word size is of order $n^k$ or $k^n$ since I do not have access to the work. I seem to be getting $n^k$ space. I do not know what is technique either. Any comments from you would help. anon Says: Comment #38 November 4th, 2010 at 5:52 am There is reason we still discuss Hilbert’s program, right? Hilbert was wrong about the answer, but his question was the right question and led to discovery of lots of new areas in math, including TCS (I can’t think of TCS without Turing machines and Turing machines without Hilbert’s program). So the question is is P vs NP the right question? and the answer is absolutely!, just take a look at discoveries it has led to. I don’t agree with Scott’s answer, the world will be very different if P=NP, but that does not mean that the amazing structures and concepts that we have discovered during last 50 years will go away. P vs NP is the Holy Grail, but it is not the only question.
{"url":"http://www.scottaaronson.com/blog/?p=469","timestamp":"2014-04-20T18:23:00Z","content_type":null,"content_length":"54609","record_id":"<urn:uuid:2527ad7d-71b2-4874-bd56-93ba7a66995a>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00036-ip-10-147-4-33.ec2.internal.warc.gz"}
velocity and kinetic energy of a neutron First, the expression E=mc^2 does not give the kinetic energy of a particle. Second, even if it did, your method is immune to the given wavelength, so that should give you some idea where you should be looking. How can you relate the wavelength of a particle to its momentum?
{"url":"http://www.physicsforums.com/showthread.php?t=251157","timestamp":"2014-04-16T07:32:26Z","content_type":null,"content_length":"24960","record_id":"<urn:uuid:ed35af49-deb8-4ff0-a478-89a677a12806>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00121-ip-10-147-4-33.ec2.internal.warc.gz"}
When does the zeta function take on integer values? up vote 19 down vote favorite Here $\zeta(s)$ is the usual Riemann zeta function, defined as $\sum_{n=1}^\infty n^{-s}$ for $\Re(s)>1$. Let $A_n=${$s\;:\;\zeta(s)=n$}. The behaviour of $A_0$ is basically just the Riemann hypothesis; my question concerns $A_n$ for $n\neq0$. 1) Is determining this just as hard as the Riemann hypothesis? 2) If we know the behaviour of some $A_n$, does it help in deducing the behaviour of other $A_m$? 3) For which $n$ is $A_n$ non-empty? Question 3 has now been answered for all strictly positive $n$ - it is non-empty, and has points on the real line to the left of $s=1$. For $n=0$, it is known to be non-empty. Any idea for negative $n$? (the same answer won't work, since $\zeta(s)$ is strictly positive on the real line to left of $s=1$. Big Picard gives it non-empty for all but at most one $n$. How can we remove the 'at most nt.number-theory ca.analysis-and-odes analytic-number-theory 3 For (1) answer is probably that it is harder problem (there is less structure). The (2) is probably a wishful thinking, but hard to refute. (3) is almost solved by Big Picard. – Boris Bukh Dec 2 '09 at 16:11 I'd also tag this as analytic-number-theory. – Ben Weiss Dec 2 '09 at 16:36 2 Question (1) is interesting to me. I have the impression that computational techniques are such that it should be possible to plot a substantial portion of the inverse image of 1 (or 2, or 3....) under the zeta function: it would be interesting collect and look at the data and see if there are any visible patterns. – Jonah Sinick Dec 2 '09 at 22:46 You say that "$\zeta(s)$ is strictly positive on the real line to left of $s=1$". In fact $\zeta(s)$ is strictly negative on $(0,1)$. This follows from $(1-2^{1-s})\zeta(s)=1-2^{-s}+3^{-s}-4^{-s}+ \cdots$. In addition, $\lim_{s\to 1-}\zeta(s)=-\infty$ because $\zeta(s)=\frac{1}{s-1}+O(1)$ for $|s-1|<1$. – GH from MO May 1 '11 at 1:54 add comment 8 Answers active oldest votes Regarding 3), this "Big Picard" stuff is serious overkill. Think like an undergraduate real analysis student: The p-series zeta(p) converges for real p > 1, whereas zeta(1) = sum of the harmonic series = oo. An easy argument using (e.g.) the integral test shows that lim_{p -> oo} zeta(p) = 1. The function zeta(p) is continuous in p [the convergence is uniform on right half-planes, hence on compact subsets], so by the intermediate value theorem it takes on every positive integer value n >= 2 at least once -- and, since it is a decreasing function of p, exactly once -- on the real line. up vote 22 down Thus A_n is nonempty for all n >= 2. vote accepted EDIT: Let me show that zeta(s) takes on all real values infinitely many times on the negative real axis. For this, note that for all n >=1, zeta(-(2n-1)) = - B_{2n}/(2n), where B_{2n} is the (2n)th Bernoulli number. It is known that the B_{2n}'s alternate in sign and grow rapidly in absolute value: |B_{2n}| \sim 4 \sqrt{\pi n} (n/(pi e))^{2n}. The claim follows from this and the Intermediate Value Theorem. Isn't the the limit of $\zeta(p)$ as $p\to\infty$ 1? I mean, that certainly what it looks like from the Dirchlet series. Also, you get all negative integers on the real line below $p=1$. – Ben Webster♦ Dec 2 '09 at 22:04 Yes, of course it is -- I'll fix it. (This doesn't affect my argument.) – Pete L. Clark Dec 2 '09 at 22:54 Oops, yes it does, a bit: it shows that zeta(s) does not take on the value 1 in the half plane Re(s) > 1. I'll fix this by looking at negative integer values, as you suggest. – Pete L. Clark Dec 2 '09 at 23:02 add comment There are infinitely many roots of $\zeta(s)-a=0$ for every complex number $a$. When $a\ne 0$, these are called "$a$-values" and there is a whole chapter discussing their distribution in Titchmarsh's book on the zeta-function. Selberg also discusses $a$-values in his (now famous) paper "Old and new conjectures and results about a class of Dirichlet series" where he defines the Selberg class. Here is an overview of some of the important results: up vote 1) There are $\frac{T}{2\pi}\log T + O(T)$ $a$-values of $\zeta(s)$ in the strip $0<\Im s\leq T$. 14 down vote 2) Like the zeros of $\zeta(s)$, Levinson proved that $a$-values cluster near the half-line. That is to say, almost all $a$-values are arbitrarily close to the half-line. 3) Unlike the zeros of $\zeta(s)$, there are provably a lot of $a$-values away from the half-line (though not a positive proportion). Namely, there are $\gg T$ roots of $\zeta(s)=a$ for $a\ neq 0$ in any region $A\leq \Re s \leq B$ and $0<\Im s\leq T$ where $A\in (1/2,1)$ and $A$ strictly less than $B$. This is proved in Titchmarsh's book. On the other hand, standard zero-density estimates for the zeta-function tell us that there are $o(T)$ zeros in such a region. Some have suggested that this is evidence for the Riemann Hypothesis. Here is a link to Levinson's paper: pnas.org/content/72/4/… – Micah Milinovich Aug 2 '10 at 16:10 add comment About (1) I agree with Boris Bukh. There is no conjecture about the location of the a-points of the Riemann zeta function ($\zeta(s) = a$), in the way that we have for the 0-points. And by my next remark, there may well be no reasonable description of the m-points for $m \neq 0$. About (2) I also agree with Boris Bukh, for a particular reason. There is a universality result for the Riemann zeta function, to the effect that as you move the disk $|s - 3/4| \leq 1/4$ vertically upwards, the Riemann zeta function can be made to approximate arbitrarily well in the sup-norm an arbitrary continuous function on the closed disk that is holomorphic in the open up vote disk. It is just a matter of moving the disk far enough up. Since for an arbitrary holomorphic function there is no connection between the a-points and the b-points for a pair of values $a \ 6 down neq b$, neither would you expect such a relationship for the Riemann zeta function. The universality result is due to M. Voronin, see page 308 of the second edition of The Theory of the Riemann Zeta-function by E. C. Titchmarsh. It is crucial to get the second edition, with the end-of-chapter notes by D. R. Heath-Brown. This is the standard reference on the Riemann zeta function, though there is also a very useful book by Aleksandar Ivic. add comment This is by no means my area of expertise, but doesn't $|\zeta(s)|$ get arbitrarily large along the negative real line between the trivial zeros? And also alternate in sign at the odd up vote 4 negative integers? This would imply (median value theorem!) that it hits every integer, with none of this "at most one exception" stuff. down vote Yes, I modified my response above in this way before I noticed yours. (I'm upvoting it.) – Pete L. Clark Dec 2 '09 at 23:11 add comment Let's look at $\zeta(s)$ for some large $\sigma$ (the real part of $s$). We can bound the function by $\int_1^\infty \frac{dx}{x^\sigma} + 1$, which is $\frac{\sigma}{\sigma-1}.$ So all up vote 1 these values in any $A_m$ ($m > 1$) can be bounded by a half-plane. This definitely doesn't pin down where they are, but does give a nice bound on where they are not. down vote add comment Well, for 3, it'd be all of them (well, at most one of them is empty, by Big Picard). The zeta function is meromorphic, but not rational, and so has an essential singularity, at up vote 1 down infinity. Surely you mean "all of them (well, all but at most one of them)"? I.e. A_n is empty for at most one value of n. – Tom Leinster Dec 2 '09 at 16:53 Ahh, yes, that'd be a typo as I dashed off to class. Correcting it now. – Charles Siegel Dec 2 '09 at 18:07 1 I don't understand other people's objection to using Big Picard. It's a big theorem, sure, but we know it's true and it enables (3) to be answered without calculation. – Tom Leinster Dec 3 '09 at 0:40 add comment Regarding Jonah's remark about plotting the inverse images of n. A quick way to visualize the zeros of a meromorphic function $f(s)$ is to plot (by color-coding with four colors) the quadrant of the value of $f(s)$. Points at the junction of four differently colored regions are either zero or poles. For $f(s) = \zeta(s) - n$ there is only one pole, so all the other 4-color up vote junctions are inverse images of $n$. The plots I've looked at show some behavior that looks related to many other plots one has seen in connection with zeta. But the plot of the inverse image 1 down of the set of all (Gaussian) integers obtained from $f(s) = Mod[\zeta(s),1]$ (using $Mathematica$ notation) seems particularly interesting, especially (for example) in the three by three vote square with center 1, or in much smaller regions in the left half plane, for example, the square centered at $-25 + \frac 12 i$ with side length $10^{-5}$. add comment There is entire chapter of a book written on this very subject (Titchmarsh, The Theory of The Riemann zeta-function, chapter XI) up vote 0 down vote In the rectangle 0 \leq Re(s) \leq 1,0< Im(s)>T zeros of \zeta(s)-a in the box 3/4< Re(s) < 4/5, 0 2 The book recommendation already appears in Micah Milinovich's answer. – S. Carnahan♦ Jan 10 '12 at 8:43 add comment Not the answer you're looking for? Browse other questions tagged nt.number-theory ca.analysis-and-odes analytic-number-theory or ask your own question.
{"url":"http://mathoverflow.net/questions/7586/when-does-the-zeta-function-take-on-integer-values/7600","timestamp":"2014-04-20T06:44:30Z","content_type":null,"content_length":"94459","record_id":"<urn:uuid:56a5603a-9816-4c18-b355-694059f9c6a4>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00188-ip-10-147-4-33.ec2.internal.warc.gz"}
We describe first a general graphical method to represent nonlinear transformations , and apply it to the case of the invariant transformations of tensors of a certain kind . We interpret then the Lie algebra as well as the Hopf algebra of functions of such a group of nonlinear transformations . Finally we show how to connect these constructions with the Hopf algebra introduced by Connes and Kreimer in Renormalization Theory . As a bonus , we settle a vexing question of normalization connected with the number of symmetries of a Feynman diagram , and we recover a theorem of Connes and Kreimer about the renormalization group .
{"url":"http://www.newton.ac.uk/programmes/NCG/abstract3/cartier.html","timestamp":"2014-04-21T07:12:22Z","content_type":null,"content_length":"2611","record_id":"<urn:uuid:716294bd-0eeb-49bc-af02-d1b380be1efd>","cc-path":"CC-MAIN-2014-15/segments/1397609539665.16/warc/CC-MAIN-20140416005219-00558-ip-10-147-4-33.ec2.internal.warc.gz"}
8.NS Irrational Numbers on the Number Line Without using your calculator, label approximate locations for the following numbers on the number line. 1. $\pi$ 2. $-(\frac12 \times \pi)$ 3. $2\sqrt2$ 4. $\sqrt{17}$ When students plot irrational numbers on the number line, it helps reinforce the idea that they fit into a number system that includes the more familiar integer and rational numbers. This is a good time for teachers to start using the term "real number line" to emphasize the fact that the number system represented by the number line is the real numbers. When students begin to study complex numbers in high school, they will encounter numbers that are not on the real number line (and are, in fact, on a "number plane"). This task could be used for assessment, or if elaborated a bit, could be used in an instructional setting. • Be the first to leave a comment!
{"url":"http://www.illustrativemathematics.org/illustrations/337","timestamp":"2014-04-17T18:39:00Z","content_type":null,"content_length":"14601","record_id":"<urn:uuid:358216ca-cfb6-495b-b420-4b29a3347289>","cc-path":"CC-MAIN-2014-15/segments/1397609530895.48/warc/CC-MAIN-20140416005210-00640-ip-10-147-4-33.ec2.internal.warc.gz"}
Astoria, NY Calculus Tutor Find an Astoria, NY Calculus Tutor ...I take an interactive approach to tutoring, and encourage students to give lots of feedback, dictate the pace, and maintain a constant dialogue with me. I don't believe in lecturing too much--you will learn the material much better when you are talking out loud through examples with some guidanc... 10 Subjects: including calculus, physics, geometry, statistics ...I'd be happy to send my resume if requested.I studied mathematics and additional mathematics for my UK GCSE exams where algebra was covered extensively. I received A*s in both of these classes (highest scores). I then studied mathematics and further mathematics for my UK A levels (similar to US ... 40 Subjects: including calculus, chemistry, English, reading ...He switched from a private English school to a school where all the subjects (Math, Physics, Chemistry, Biology, history,...) were taught in French. Neither of his parents spoke French or were able to help him. We worked it out and he was very successful in his new school. 18 Subjects: including calculus, chemistry, physics, French ...I can help students pass their Exams by instilling in them a thorough understanding of the material as well as reviewing complex concepts and problems sets. I have been using Access at work and at home to manage data for several years. I have developed databases, automated processes, created new workflows, and generally increased productivity using my knowledge. 15 Subjects: including calculus, algebra 1, algebra 2, finance ...I also have good understanding of chemistry concepts like balancing reactions, nuclear energy, ideal gases, bonding, compounds, mixtures, etc. Lastly I can tie all of these concepts together when discussing more intensive concepts in chemistry. Geometry is most likely my favorite topic in all of math. 22 Subjects: including calculus, chemistry, Spanish, geometry
{"url":"http://www.purplemath.com/Astoria_NY_calculus_tutors.php","timestamp":"2014-04-19T12:42:01Z","content_type":null,"content_length":"24094","record_id":"<urn:uuid:8f54e700-e77c-4a4c-85d7-6545f5a738e1>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00650-ip-10-147-4-33.ec2.internal.warc.gz"}
9/3 - 9/6 Tuesday, September 3, 2013 - Dividing Decimals by Decimals Bell Work Students continued to practice applying their skills of adding, subtracting, and multiplying decimals through finding the (1) discount, (2) new price, (3) sales tax, (4) and total cost of a magic set. While working on Bell Work, students who did not turn in their work last Friday were seen to check off the 4 items they completed at 25 points each. The process shown below was demonstrated and left on the board last Thursday to assist students as they worked with their partner(s). Today, students were instructed that this will be the only quarter where they will be allowed to redo assignments and turn in late work during tutoring hours. This should help students adjust to my grading expectations and the rigor of middle school. After Quarter 1, students will not be allowed to turn in work late or redo any assignment. Students checked their work from "Sales Tax and Discounts," which is attached below. Afterwards, we began examining the algorithm for dividing decimals by decimals. How and why we move the decimal point out of the divisor was explored through strings of equivalent fractions which can be found on the following PowerPoint. When multiplying a decimal by Powers of 10, the decimal point shifts to the right. When dividing a decimal by Powers of 10, the decimal point shifts to the left. For practice, students completed problems using the Holt textbook p.139 #2-30 even. We will continue to practice tomorrow at which time I will show students how to access the online textbook. File Size: 760 kb File Type: pptx Download File File Size: 116 kb File Type: pptx Download File Wednesday, September 4, 2013 - Dividing Decimals by Decimals Grocery List Project was due today and collected during Bellwork. Bell Work Using addition, subtraction and multiplication algorithms for decimals, students found the discount, new price, sales tax and final price of the Mining Kit. Afterwards, students examined the part to whole relationship between the new price and original price of the Mining Kit, and they examined a concept called "percent of a number." A quiz over Sales Tax and Discounts is scheduled for Friday, September 6. Use the discount to find Fractions are typically described as being a part of a whole. The numerator represents When dividing by a decimal, we want to create a whole number divisor. Multiplying by the new price, the sales "how much of the whole is being taken" and the denominator represents "total parts of Powers of 10 allows us to rename decimals. 24.9 x 10 = 249. 21.17 x 10 = 211.7. The tax, and the final price the whole." Can we apply this here? The new discounted price of $21.17 is a fraction of ratio (fraction) remains the same. The numbers are larger, but the relationship of the Mining Kit. $24.90 - the whole. When we rename this fraction as a decimal, what does the quotient between the numerator and the denominator is still 0.85. So what does 0.85 mean? The quotient 0.85 translates to mean $21.17 is 85% of $24.90. If the fraction was $24.90/$24.90, the We can add $0.85 about 25 times for each of the 25 dollars or we can multiply $24.90 by relationship would equal 1 or 100%. In this situation, a person paid the full price and received no 0.85. The product shows that 85% of $24.90 is about $21.17. This is the same amount we got discount. Given the 15% discount, for every $1 we are getting a discount of $0.15. This also means that we when we found the 15% discount and then subtracted it from the original price! Now, we can are paying $0.85 for every $1 the item costs. remove a step! I showed a few classes the following PowerPoint to further support the understanding that when we multiply decimals by Powers of 10, the decimal moves to the right. Similarly, when we divide decimals by powers of 10 the decimal moves to the left. This understanding supports the algorithm for division with decimal divisors and will be used later when we work on converting metric units. Using textbook p.139, guided practice for dividing decimals by decimals was given for problems 2, 4 and 6. Students then worked independently on completing remaining questions 2-30 (even only). Problems that were not finished were to be finished for homework. File Size: 233 kb File Type: pptx Download File Accessing the online gradebook 1) Go to the website: my.hrw.com 2) Username and password: youthmath 3) Click on link to "Course 1 - online textbook" - beside the red book 4) Click on tab "Book Pages" 5) Type in the page number in the window next to "Page: " and press "Go!" ***Notice that once you go to that page, you can move between pages using the "previous" and "next" arrows at the top. Go to a "previous" page to see examples from the textbook or go to a "next" page to preview other concepts. Thursday, September 5, 2013 - Applying Decimals Students were given the cost of a Schoenhut acoustic guitar and the current discount. They were asked to find the discount, new price, sales tax, and final price of the guitar after the discount. Afterwards, students used the information to find the fraction of the discounted price to the whole - original price. Then, students used percent of a number to find the new discounted price through one step instead of two. We added one more component today - how to find the final price (cost plus tax) in one step instead of two. The steps we took are shown below. Students were given the solutions for textbook page 139, problems 2-30 (even only). This assignment was worked on during class on Wednesday and was to be finished for homework. Today, students corrected problems they missed. I conferenced with each student after he or she finished correcting their work. Those who did not finish were to finish making corrections for homework. This assignment was graded for accuracy, which falls into the Progress category (30%). Students who finished early began working on Activity 27, which is homework for next week. Friday, September 6, 2013 - Applying the Algorithms for Adding, Subtracting and Multiplying Decimals Students were instructed to put their bellwork for the week in chronological order to be stapled. They should have 4 items from Tuesday, 9/3 - discount, new price, sales tax, and final price for the magic set. They should have 7 items from Wednesday, 9/4 - discount, new price, sales tax, final price, fraction of a whole, decimal form, and percent of a number for the crystals and gems mining kit. They should also have 7 items from Thursday, 9/5, for the Schoenhut guitar. Bellwork could be used to assist the student on the Sales Tax and Discounts Quiz and was turned in with the quiz. Textbook p.139, problems 2-30 (even only), was collected from those who did not turn in their work at the end of class on Thursday. Students took the Sales Tax and Discounts Quiz to demonstrate their understanding of using a discount and applying Georgia's sales tax. Some classes had additional time to work on problem solving with decimals in groups.
{"url":"http://mrsheidesch.weebly.com/93---96.html","timestamp":"2014-04-21T02:05:44Z","content_type":null,"content_length":"42358","record_id":"<urn:uuid:ad64ecc0-717b-4daf-bd62-96aea72c0ecf>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
s new in general statistics What’s new in general statistics • Contrasts, which is to say, tests of linear hypotheses involving factor variables and their interactions from the most recently fit model, and that model can be virtually any model that Stata can fit. Tests include ANOVA-style tests of main effects, simple effects, interactions, and nested effects. Effects can be decomposed into comparisons with reference categories, comparisons of adjacent levels, comparisons with the grand mean, and more. • Pairwise comparisons of means, estimated cell means, estimated marginal means, predictive margins of linear and nonlinear responses, intercepts, and slopes. In addition to ANOVA-style comparisons, comparisons can be made of population averages. • Graphs of margins, marginal effects, contrasts, and pairwise comparisons. Margins and effects can be obtained from linear or nonlinear (for example, probability) responses. • ROC adjusted for covariates, which is to say, you can model the ROC curve and obtain coefficients, standard errors, and graphs. Nonparametric and parametric estimation is supported. • Estimation output improved: □ Baseline odds now shown, which is to say, the exponentiated intercept is displayed by logistic and by logit with option or. In fact, all estimation commands show exponentiated intercepts when option eform() or its equivalent is specified. For example, poisson shows the baseline incidence rate when option irr is specified. □ Implied zero coefficients now shown. When a coefficient is omitted, it is now shown as being zero, and the reason it was omitted—collinearity, base, empty—is shown in the standard-error column. (The word “omitted” is shown if the coefficient was omitted because of collinearity.) □ You can set displayed precision for all values in coefficient tables using set cformat, set pformat, and set sformat. Or you may use options cformat(), pformat(), and sformat() now allowed on all estimation commands. □ Estimation commands now respect the width of the Results window. This feature may be turned off by new display option nolstretch. □ You can now set whether base levels, empty cells, and omitted are shown using set showbaselevels, set showemptycells, and set showomitted. • test with coefficient names not using _b[ ] notation is now allowed, even when the specified variables no longer exist in the current dataset. • areg now faster. areg is orders of magnitude faster when there are hundreds of absorption groups, even if you are not running Stata/MP. • misstable summarize will now create a summary variable recording the missing-values pattern. • margins command supports contrasts. • sfrancia uses a better algorithm. sfrancia now uses an algorithm based on the log transformation for approximating the sampling distribution of the W′ statistic for testing normality. The old algorithm, using the Box–Cox transformation, is available under version control or via the new boxcox option. Based on simulation, the new algorithm is more powerful for sample sizes greater than 1,000 and is comparable to the old algorithm for sample sizes less than 1,000. Also, similarly to swilk, sfrancia now allows you to suppress the treatment of ties when option noties is used. • logistic now allows option noconstant. • Probability predictions now available. predict after count-data models, such as poisson and nbreg, can now predict the probability of any count or count range. • Truncated count-data models now available. New estimation commands tpoisson and tnbreg fit models of count-data outcomes with any form of left truncation, including truncation that varies observation by observation. These new commands supersede ztp and ztnb. • cnsreg checks for collinear variables prior to estimation and has new option collinear, which keeps the collinear variables instead of omitting them. The old behavior of always keeping collinear variables is preserved under version control. • ml improved, □ ml now distinguishes the Hessian matrix produced by technique(nr) from the other techniques that compute a substitute for the Hessian matrix. This means that ml will compute the real Hessian matrix of second derivatives to determine convergence when all other convergence tolerances are satisfied and technique(bfgs), technique(bhhh), or technique(dfp) is in effect. The old behavior was to use the nrtolerance() value with the H matrix associated with the technique() currently in effect to determine convergence; this behavior is preserved under version control. □ ml has new option qtolerance() that distinguishes itself from ntrol() when technique(bfgs), technique(bhhh), or technique(dfp) is specified. Option qtolerance() replaces nrtolerance() when technique(bfgs), technique(bhhh), or technique(dfp) is in effect. • margins has new option estimtolerance() for setting tolerance used to determine estimable functions. See New in Stata 13 for more about what was added in Stata 13.
{"url":"http://www.stata.com/stata12/general-statistics/","timestamp":"2014-04-18T18:52:42Z","content_type":null,"content_length":"33023","record_id":"<urn:uuid:c1c1e5ba-bf73-4895-866a-1a977b722bfb>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00663-ip-10-147-4-33.ec2.internal.warc.gz"}
Symmetric Informationally Complete POVMs in Prime Dimensions Le contenu de cette page n’est pas disponible en français. Veuillez nous en excuser. Symmetric Informationally Complete POVMs in Prime Dimensions A symmetric informationally complete positive-operator-valued measure (SIC POVM) is a special POVM that is composed of d^2 subnormalized pure projectors with equal pairwise fidelity. It may be considered a fiducial POVM for reasons of its high symmetry and high tomographic efficiency. Most known SIC POVMs are covariant with respect to the Heisenberg-Weyl (HW) group. We show that in prime dimensions the HW group is the unique group that may generate a SIC POVM. In particular, in prime dimensions not equal to three, each group covariant SIC POVM is covariant with respect to a unique HW group. In addition, the symmetry group of the SIC POVM is a subgroup of the Clifford group, which is the normalizer of the HW group. Hence, two SIC POVMs covariant with respect to the HW group are unitarily equivalent if and only if they are on the same orbit of the Clifford group. In dimension three, each group covariant SIC POVM may be covariant with respect to three or nine HW groups, and the symmetry group of the SIC POVM is a subgroup of at least one of the Clifford groups of these HW groups respectively. There may exist two or three orbits of equivalent SIC POVMs for each group covariant SIC POVM, depending on the order of its symmetry group.
{"url":"http://www.perimeterinstitute.ca/fr/videos/symmetric-informationally-complete-povms-prime-dimensions","timestamp":"2014-04-16T07:16:53Z","content_type":null,"content_length":"27897","record_id":"<urn:uuid:5f347969-6ff1-4849-b52f-34e54afdcdac>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00105-ip-10-147-4-33.ec2.internal.warc.gz"}
Results 1 - 10 of 15 - JOURNAL OF AUTOMATED REASONING , 1997 "... This article discusses the two incarnations of Otter entered in the CADE-13 Automated Theorem Proving Competition. Also presented are some historical background, a summary of applications that have led to new results in mathematics and logic, and a general discussion of Otter. ..." Cited by 44 (3 self) Add to MetaCart This article discusses the two incarnations of Otter entered in the CADE-13 Automated Theorem Proving Competition. Also presented are some historical background, a summary of applications that have led to new results in mathematics and logic, and a general discussion of Otter. , 1996 "... Introduction Many researchers who study the theoretical aspects of inference systems believe that if inference rule A is complete and more restrictive than inference rule B, then the use of A will lead more quickly to proofs than will the use of B. The literature contains statements of the sort "ou ..." Cited by 24 (5 self) Add to MetaCart Introduction Many researchers who study the theoretical aspects of inference systems believe that if inference rule A is complete and more restrictive than inference rule B, then the use of A will lead more quickly to proofs than will the use of B. The literature contains statements of the sort "our rule is complete and it heavily prunes the search space; therefore it is efficient". 2 These positions are highly questionable and indicate that the authors have little or no experience with the practical use of automated inference systems. Restrictive rules (1) can block short, easy-to-find proofs, (2) can block proofs involving simple clauses, the type of clause on which many practical searches focus, (3) can require weakening of redundancy control such as subsumption and demodulation, and (4) can require the use of complex checks in deciding whether such rules should be applied. The only way to determ - In: Post-proceedings of the Types 2004 International Conference, Vol. 3839 of LNCS , 2004 "... Abstract. The prototype of a content based search engine for mathematical knowledge supporting a small set of queries requiring matching and/or typing operations is described. The prototype — called Whelp — exploits a metadata approach for indexing the information that looks far more flexible than t ..." Cited by 16 (7 self) Add to MetaCart Abstract. The prototype of a content based search engine for mathematical knowledge supporting a small set of queries requiring matching and/or typing operations is described. The prototype — called Whelp — exploits a metadata approach for indexing the information that looks far more flexible than traditional indexing techniques for structured expressions like substitution, discrimination, or context trees. The prototype has been instantiated to the standard library of the Coq proof assistant extended with many user contributions. 1 - Conference on Automated Deduction , 1998 "... Abstract. We propose several methods for writing efficient subsumption procedures for non-unit clauses, tested in practice as parts incorporated into the Gandalf family of theorem provers. Versions of Gandalf exist for classical logic, first order intuitionistic logic and type theory. Subsumption is ..." Cited by 11 (1 self) Add to MetaCart Abstract. We propose several methods for writing efficient subsumption procedures for non-unit clauses, tested in practice as parts incorporated into the Gandalf family of theorem provers. Versions of Gandalf exist for classical logic, first order intuitionistic logic and type theory. Subsumption is one of the most important techniques for cutting down search space in resolution theorem proving. However, for many problem categories most of the proof search time is spent on subsumption. While acceptable efficiency has been achieved for subsuming unit clauses (see [7], [2]), the nonunit subsumption tends to slow provers down prohibitively. We propose several methods for writing efficient subsumption procedures for non-unit clauses, succesfully tested in practice as parts built into the Gandalf family of theorem provers: – ordering literals according to a certain subsumption measure – indexing first two literals of each nonunit clause – pre-computed properties of terms, literals and clauses – a hierarchy of fast filters for clause-to-clause subsumption – combining subsumption with clause simplification – linear search among the strongly reduced number of candidates for back subsumption The presented methods for substitution were among the key techniques enabling the classical version of Gandalf to win the MIX division of the CASC-14 prover contest in 1997. The approach of the paper is purely empirical, presenting the methods and bringing some statistical evidence. 1 Gandalf Family of Provers Before continuing with the details of the subsumption methods we will present an overview of the Gandalf family of provers. We use the name Gandalf for the interdependent, code-sharing, resolution-based automated theorem provers we are developing: a resolution prover for first-order intuitionistic logic Tammet [9], for a fragment of Martin-Löf’s type theory Tammet [10] and for first-order - In Proceeding of the Third International Conference on Mathematical Knowledge Management, MKM 2004. Bialowieza, Poland. LNCS 3119 , 2004 "... Abstract. The paper describes an innovative technique for efficient retrieval of mathematical statements from large repositories, developing and substantially improving the metadata-based approach introduced in [13]. 1 ..." Cited by 8 (2 self) Add to MetaCart Abstract. The paper describes an innovative technique for efficient retrieval of mathematical statements from large repositories, developing and substantially improving the metadata-based approach introduced in [13]. 1 - Proc. of PASCO-97 , 1997 "... We introduce the distributed theorem prover Peers-mcd for networks of workstations. Peers-mcd is the parallelization of the Argonne prover EQP, according to our Clause-Diffusion methodology for distributed deduction. The new features of Peers-mcd include the AGO (Ancestor-Graph Oriented) heuristic c ..." Cited by 6 (2 self) Add to MetaCart We introduce the distributed theorem prover Peers-mcd for networks of workstations. Peers-mcd is the parallelization of the Argonne prover EQP, according to our Clause-Diffusion methodology for distributed deduction. The new features of Peers-mcd include the AGO (Ancestor-Graph Oriented) heuristic criteria for subdividing the search space among parallel processes. We report the performance of Peers-mcd on several experiments, including problems which require days of sequential computation. In these experiments Peersmcd achieves considerable, sometime super-linear, speed-up over EQP. We analyze these results by examining several statistics produced by the provers. The analysis shows that the AGO criteria partitions the search space effectively, enabling Peers-mcd to achieve super-linear speed-up by parallel search. 1 Introduction Distributed deduction is concerned with the problem of proving difficult theorems by distributing the work among networked computers. The motivation is to st... - In Proceedings of the First Int. Conf. on Automated Reasoning (IJCAR 2001), volume 2083 of LNCS , 2001 "... Indexing data structures have a crucial impact on the performance of automated theorem provers. Examples are discrimination trees, which are like tries where terms are seen as strings and common prefixes are shared, and substitution trees, where terms keep their tree structure and all common con ..." Cited by 6 (1 self) Add to MetaCart Indexing data structures have a crucial impact on the performance of automated theorem provers. Examples are discrimination trees, which are like tries where terms are seen as strings and common prefixes are shared, and substitution trees, where terms keep their tree structure and all common contexts can be shared. Here we describe a new indexing data structure, called context trees, where, by means of a limited kind of context variables, also common subterms can be shared, even if they occur below di#erent function symbols. Apart from introducing the concept, we also provide evidence for its practical value. We describe an implementation of context trees based on Curry terms and on an extension of substitution trees with equality constraints, where one also does not distinguish between internal and external variables. - Notes of the CADE-15 Workshop on Problem Solving Methodologies with Automated Deduction , 1998 "... . This note presents purely mechanical proofs of the Levi commutator problem in group theory. The problem was solved first by using the theorem prover EQP, developed by William McCune at the Argonne National Laboratory. The fastest proof was found by using Peers-mcd, the Clause-Diffusion paralleliza ..." Cited by 3 (1 self) Add to MetaCart . This note presents purely mechanical proofs of the Levi commutator problem in group theory. The problem was solved first by using the theorem prover EQP, developed by William McCune at the Argonne National Laboratory. The fastest proof was found by using Peers-mcd, the Clause-Diffusion parallelization of EQP, developed by the author at the University of Iowa. 1 The Levi commutator problem The Levi commutator problem is an equational problem in group theory. Given the axioms for a group with product and identity e e x ' x x \Gamma1 x ' e (x y) z ' x (y z) the commutator is a binary operator [ ; ] defined by: [x; y] ' x \Gamma1 y \Gamma1 x y: The Levi commutator problem consists in proving that x [y; z] ' [y; z] x , [[x; y]; z] ' [x; [y; z]] that is, x [y; z] ' [y; z] x holds if an only if the commutator is associative. A textbook proof of this theorem can be found in [10]. In the input to the theorem provers, the group axioms and the commutator definition are... , 2000 "... Machine [War83] implementation for Prolog) are stored in an array similar to the WAM heap. It is an array of pairs h tag, address i, where tag can be ref or struct, that is, a function symbol f. The field address contains a heap address. Terms are stored on the heap as in the WAM: each function symb ..." Cited by 3 (2 self) Add to MetaCart Machine [War83] implementation for Prolog) are stored in an array similar to the WAM heap. It is an array of pairs h tag, address i, where tag can be ref or struct, that is, a function symbol f. The field address contains a heap address. Terms are stored on the heap as in the WAM: each function symbol of arity n is followed by n contiguous ref positions pointing to its arguments. Each uninstantiated variable corresponds to a ref position pointing to itself. For example, the heap below at the left contains f(x; g(x); g(x); y) at the address 20: . . . . . . 20 f 21 ref 21 22 ref 30 23 ref 30 24 ref 24 . . . . . . 30 g 31 ref 21 . . . . . . Note that in such a representation the whole term needs not to be contiguous, and that common subterms ---not only variables--- can be shared, like the subterm g(x) at position 30. Moreover, unlike it happens in other term representations, matching and unification operations do not need to deal with a partial substitution: du... "... Abstract. One of the most annoying aspects in the formalization of mathematics is the need of transforming notions to match a given, existing result. This kind of transformations, often based on a conspicuous background knowledge in the given scientific domain (mostly expressed in the form of equali ..." Cited by 3 (3 self) Add to MetaCart Abstract. One of the most annoying aspects in the formalization of mathematics is the need of transforming notions to match a given, existing result. This kind of transformations, often based on a conspicuous background knowledge in the given scientific domain (mostly expressed in the form of equalities or isomorphisms), are usually implicit in the mathematical discourse, and it would be highly desirable to obtain a similar behaviour in interactive provers. The paper describes the superpositionbased implementation of this feature inside the Matita interactive theorem prover, focusing in particular on the so called smart application tactic, supporting smart matching between a goal and a given result. 1
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1096970","timestamp":"2014-04-17T06:33:31Z","content_type":null,"content_length":"39643","record_id":"<urn:uuid:9c46b6ac-96bd-4d1b-8cf9-468bc8371841>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00527-ip-10-147-4-33.ec2.internal.warc.gz"}
Quantum black hole in Loop Quantum Gravity Today micro black holes are known (seeing the brief review ). However, we do not know what is a quantum black hole? A quantum black hole is not necessarily a very small black hole. In fact, large black holes may or maynot demonstrate some quantum effects when they touch their outer pace. In Loop Quantum Gravity, there are two crucial pictures for a quantum black hole. 1- Quantum Isolated Horizon,(aka ABCK model, proposed about 1996-7 by Ashtekar, Baez, Corichi, and Krasnov - major reference). In this picture, first a spacetime with internal boundary (at the horizon of a black hole) is considered. Then, some conditions are imposed on this boundary in order to make it behave like a black hole horizon from the thermodynamic point of view. After that, the spacetime is quantized. This is done by promoting the gravitational degrees of freedom (the Ashtekar variables -on holonomies- and their conjugate momenta) to operators. The Hilbert space of such a spacetime is separated into the one of the bulk and the one of the internal boundary. The Hilbert space associated to the bulk fields contains the spin network states, defined on one dimensional floating graphs embedded into the bulk. The Hilber space of the boundary is simplified by gauge-fixing. The contribution of the boundary in the spacetime action is a Chern-Simons term. Each bulk edge interesect at the boundary at a puncture. The wave function associated to a puncture is not unique. This is pretty similar to regular quantum mechanics in which when the angular momentum j is projected on the z-axis it takes 2j+1 possible components m, where m=-j, -j+1, ..., j-1, j. In fact, each puncture is a degenerate state. A puncture corresponding the edge of spin j takes 2j+1 degenerate states on the horizon Hilbert space. On the other hand, each interesecting edge carries the area proportional to the edge spin. An edge of spin j generate the area proportional to [j*(j+1)]^0.5. Thus, associated to the degenerate wave function of a puncture, there is one area. This degeneracy is the root of black hole entropy, this picture says. These punctures are responcible for the curvature of the horizon and everywhere else on the horizon is flat. Following picture shows the portrait of a quantum isolated horizon: This model is conceptual restricted and a more fundamental microscopic description should be possible. I can give three reasons for this: • (i) in general relativity metric extends through a black hole via the junction conditions. However in quantum limits, the spin network states must extend through the horizon. A quantum isolated horizon forbid the extention by treating the black hole as the internal boundary of space. • (ii) the horizon in this picture is defined by the classical notion of localization. However, it is completely known that the notion of quantum localization is different than its classical version. Thus a quantum horizon must be localized as a ‘quantum boundary’ of its interior states. • (iii) the bulk edges may only end at the boundary at punctures and no tangential edge is allowed. Why? This is a too strong assumption! 2- Black Hole Spin Network (Proposed recently. Reference) As a second picture, spacetime is first geometrically quantized, without reference to the horizon. Instead of partial gauge-fixing, a condition that a surface be a horizon of a black hole is imposed on a surface in the full quantum state. The result is that no dynamical constraint is imposed at the horizon. Instead, a black hole horizon is defined by a partition of a spin network. In this picture all of the above ambiguities are resolved, although the dynamics of this model has not been completed yet. See the papers of Viqar Husain and Oliver Winkler (see 1 and 2), Martin Bojowald (see here), and I (see 1, 2). The portrait of a Black Hole Spin Network is as the following Two results quickly followed from this picture of black holes. • 1- The first is a new computation of black hole entropy on the basis of a degeneracy in the spectrum of areas. These came from states that were excluded from the isolated horizon boundaryconditions. In fact, their exclusion is a result of imposing a classical notion of black hole horizon as a boundary condition, and then quantizing, rather than quantizing and then identifying a surface as an horizon. The fact is that the background independent area operator is generically a degenerate operator. The area eigenstates associated to quantum surfaces are not unique! Some of the are eigenstates in the complete spectrum may be associated to one area eigenvalue. The following graph shows how the degeneracy is correlated Genericness of area degeneracy in loop quantum gravity (from here) • 2- The second payoff of the new definition of a black hole horizon is the prediction of small, but potentially observable corrections to the Hawking’s thermal black hole radiation formula. This happens by considering the fluctuation of horizon area in the kinematical step of the black hole definition. The full spectra of the area operator have an unexpected symmetry that was previously unnoticed. This leads to a physical effect, which is amplification of some modes of black hole radiation. This conclusion differs from Hawking’s original prediction even for massive black holes. Let me explain this a bit more. The complete spectrum of area in LQG is such that the gap between different levels of area eigenvalues become smaller in higher levels. But it is proved mathematically this spectrum can be split exactly into evenly spaced subsets. The gap between levels in each one of the subsets is unique and propostional to square roots of all square-free numbers (Squarefree numbers are those numbers whose prime number ingridients are not repeated. For example 18=2*3*3 is not a square-free number, but 15=3*5 is. These numbers have been studied heavily in Number theory. You can see some of their amazing propeties Each one of the evenly spaced subsets, to which the complete spectrum of area eigenvalues are splitted, is called a "generation". Black hole is a sector of spacetime on which horizon area A is proportional to mass squared M^2. Thus, the quantum of energy and the quantum of area are relevant, (1/M) dA = dM. Consequently, the fluctuations of the horizon are seen as the transition of black hole mass, the emission of energy (energy radiation). Having splitted the complete spectrum of area into different evenly spaced sets of numbers (with different gaps), the area transitions fall into two categories: 1. Generational Transitions, those transitions occuring between two area levels of the same generations. 2. Inter-generational Transitions, those transitions occuring between two area levels of different generation. The frequencies emitted via generational transitions are all proportional to each other by integers and thus are called "harmonics". This is not the case for the intergenerational transitions. The following picture discribes this graphically. The area transitions as how loop quantum gravity discribes. (from here In one generation, a harmonic frequency can be generated by transitions from many pairs of levels. For example: the frequency of the emission from te level 3 to 1 can be reproduced by the transitions from level 4 to 2, the levels 5 to 3, etc. This is not the case with the intergenerational frequencies. Therefore the average number of photons at the harmonic frequancies exceeds the other frequencies. Among the harmonic frequencies themelves, the number of photons corresponding to the transitions from nearby levels are more than others. The reason is simple. The more the gap between levels decreases, the more photons are created at each generational transition. Therefore, a few of the lines in the radiation spectrum are expected to be the brightest lines. These lines are all unblended. Exact calculation predicts the following spectrum for a black hole The spectrum of a black hole radiation as how loop quantum gravity predicts (from here) Here wo is of the order of 10^16/M(kg) electronVolts, which can be of the order of 10keV for a primordial black hole of mass M=10^12kg. This makes it possible to test loop quantum gravity with black holes well above Planck scale. These predictions will become amenable to experimental check if primordial black holes are ever found. In fact, this prediction has opened a window upon quantum gravity that does not require reaching down to Planck scale physics. 5 comments: What is the range of frequency you expect from this radiation? Hi Michael, The range of the frequency is different depending to the size of black hole. In the case of primordial holes they could be from keV to TeV. Larger frequencies appear in a spectrum with larger gaps between lines. Hope this helps! Very interesting. Is next step the observation of this spectrum? As far as I remember the quantum black holes have many definitions and these are only the loop quantum gravity version of them explained in here. Reading these definitions I think I would be happier if the discussion is extended to high temperature holes as well.
{"url":"http://gaugeinvariance.blogspot.com/2007/01/quantum-black-hole-in-loop-quantum.html","timestamp":"2014-04-16T22:01:10Z","content_type":null,"content_length":"86995","record_id":"<urn:uuid:eca1e789-d555-41e9-a45d-684062f640f0>","cc-path":"CC-MAIN-2014-15/segments/1397609525991.2/warc/CC-MAIN-20140416005205-00064-ip-10-147-4-33.ec2.internal.warc.gz"}
Pile Cap design Pile cap definition – Pile Cap design example A pile cap is outlined as a concrete block cast on the pinnacle of a cluster of piles, to transmit the load from the structure to the group of piles. Generally, pile cap transfers the load kind the structures to a pile group, then the load any transfers to firm soil. External pressures on a pile are probably to be greatest near the ground surface. Ground stability increases with depth and pressure. The top of the pile thus, is additional at risk of movement and stress than the base of the pile. Pile caps are so incorporated so as to tie the pile heads along thus that individual pile movement and settlement is greatly reduced. Therefore stability of the pile group is greatly increased. Pile Cap functions – Pile Cap design To distribute a single load equally over the pile group and thus over a larger space of bearing potential, To laterally stabilise individual piles therefore increasing overall stability of the group. And To provide the necessary combined resistance to stresses founded by the superstructure and/or ground movement. Pile caps are thick slabs used to tie a cluster of piles along to support and transmit column loads to the piles. Pile Cap Arrangement – Pile Cap design Spacing of the piles in the pile group The subsequent ought to be thought of when determining the spacing of the piles: 1. Overall value of the inspiration 2. Nature of the ground 3. Pile behaviour in the cluster 4. Resulting possible heave or compaction of ground causing harm to adjacent structures 5. Cost of Pile Cap design 6. Size and effective length of ground beam 7. Type and size of pile Piles should be placed during a suitable arrangement so that the spacing between piles ranges from (2-3) D (pile diameter) in case of isolated pile caps and (2-6) D in case of rafts supported on - The C.G. of piles should be placed as far as doable in the C.G. of hundreds transmitted from the structure to the cluster of piles. - Within the case of presence of neighbors, piles should be away from the property line by a distance not less than D or as the pile installation methodology needs. - The projection of the pile cap ought to be 10-15 cm. Initial Layout: The simplest pile layout is one without batter piles. Such a layout should be used if the magnitude of lateral forces is small. Since all piles don’t carry an equal portion of the load, axial pile capability can be reduced to 70 percent of the computed value to provide a good starting purpose to see an initial layout. In this case, the designer begins by dividing the biggest vertical load on the structure by the reduced pile capacity to get the approximate range of pile. If there are giant applied lateral forces, then batter piles are sometimes needed. Piles with flat batter 2.5 (V) to 1 (H), give bigger resistance to lateral masses and the less resistance to vertical loads. Piles with steep batters 5 (V) to 1 (H) provide greater vertical resistance and fewer lateral resistance. pile cap design Final Layout: After the preliminary layout was developed remaining load cases ought to be investigated and also the layout revised to produce an economical layout. The goal ought to be to supply a pile layout in which most piles are loaded as near to capacity as practical for the critical loading cases with tips located at the same elevation for the varied pile teams at intervals a given monolith. Adjustments to the initial layout by the addition deletion, or relocation of piles at intervals the layout grid system could be needed. Generally, revisions to the pile batters can not be needed as a result of they were optimized throughout the initial pile layout. The designer is cautioned that the founding of piles at numerous elevations or in several strata might end in monolith instability and differential settlement. Pile Cap design - If the pile group is analyzed with a flexible base, then the forces needed to design the base are obtained directly from the structure model. - If the pile cluster is analyzed with a rigid base, then a separate analysis is needed to see the stresses in the pile cap. - An acceptable finite part model (frame, plate and plane stress or plane strain) should be used and ought to include all external hundreds (water, concrete, soil, etc. ) and pile reactions. - There are many strategies for planning Pile Cap design from which we tend to could mention the subsequent: 1. Circulage Method 2. Beam Method 3. FEM methods File Type Download : PDF File Pile Cap design Design of Caps for Bridges, foundation design eurocode, guide to designing a steel pile shoring wall Download : Pile Cap design
{"url":"http://ww5.org/pile-cap-design","timestamp":"2014-04-16T17:12:35Z","content_type":null,"content_length":"31461","record_id":"<urn:uuid:1105e2e5-84a0-4b6b-aa74-16a29de5db31>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00007-ip-10-147-4-33.ec2.internal.warc.gz"}
scipy.optimize.brenth(f, a, b, args=(), xtol=1e-12, rtol=4.4408920985006262e-16, maxiter=100, full_output=False, disp=True)[source]¶ Find root of f in [a,b]. A variation on the classic Brent routine to find a zero of the function f between the arguments a and b that uses hyperbolic extrapolation instead of inverse quadratic extrapolation. There was a paper back in the 1980’s ... f(a) and f(b) can not have the same signs. Generally on a par with the brent routine, but not as heavily tested. It is a safe version of the secant method that uses hyperbolic extrapolation. The version here is by Chuck Harris. f : function Python function returning a number. f must be continuous, and f(a) and f(b) must have opposite signs. a : number One end of the bracketing interval [a,b]. b : number The other end of the bracketing interval [a,b]. xtol : number, optional The routine converges when a root is known to lie within xtol of the value return. Should be >= 0. The routine modifies this to take into account the relative precision of rtol : number, optional Parameters : The routine converges when a root is known to lie within rtol times the value returned of the value returned. Should be >= 0. Defaults to np.finfo(float).eps * 2. maxiter : number, optional if convergence is not achieved in maxiter iterations, and error is raised. Must be >= 0. args : tuple, optional containing extra arguments for the function f. f is called by apply(f, (x)+args). full_output : bool, optional If full_output is False, the root is returned. If full_output is True, the return value is (x, r), where x is the root, and r is a RootResults object. disp : bool, optional If True, raise RuntimeError if the algorithm didn’t converge. x0 : float Zero of f between a and b. Returns : r : RootResults (present if full_output = True) Object containing information about the convergence. In particular, r.converged is True if the routine converged. See also fmin, fmin_powell, fmin_cg nonlinear least squares minimizer fmin_l_bfgs_b, fmin_tnc, fmin_cobyla, anneal, brute, fminbound, brent, golden, bracket n-dimensional root-finding brentq, brenth, ridder, bisect, newton scalar fixed-point finder
{"url":"http://docs.scipy.org/doc/scipy/reference/generated/scipy.optimize.brenth.html","timestamp":"2014-04-16T04:23:26Z","content_type":null,"content_length":"14508","record_id":"<urn:uuid:2b6407c7-c642-4b61-a713-73b8085e89e4>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Connected sets March 16th 2011, 03:36 AM Connected sets Let A be connected subset of X and let A ⊂ B ⊂ cl(A). Show that B is connected and hence, in particuar, cl(A) is connected. Hint: (Use) Let G∪H be a disconnection of A and let B be a connected subset of A then we see that either B∩H=∅ or B∩G=∅, and so either B⊂G or B⊂H. March 16th 2011, 04:10 AM Suppose that $G\cup H=B$ is a disconnection of $B$. That means that $G\cap Be\emptyset~\&~H\cap Be\emptyset$. Also $G\cap\overline{H}=\emptyset~\&~H\cap\overline{G}=\ emptyset$ Using the fact that $B\subseteq\overline{A}$ prove that contradicts the given that $A$ is connected. March 16th 2011, 05:20 AM Do you mean that I must show cl(A) is disconnected and this will imply A is also disconnected. hence,the result will give a contradiction..? am I right? if not how can I do this , can you explain, please thank you March 16th 2011, 06:20 AM THEOREM The closure of a connected set is a connect set. That is what you are really asked to prove. As a lemma (previous theorem) you should have proved that if $A$ is a connected set and $A\subset G\cup H$ where $G~\&~H$ are separated sets then $A\subset G$ or $A\subset H$. Two set are said to be separated if neither is empty and neither contains a point nor a limit of the other. Now it is quite easy to prove the lemma. If the conclusion is false then $(A\cap G)\cup(A\cap H)=A$ is a separation of $A$. But $A$ is connected. So that is a contradiction. March 16th 2011, 11:47 AM Thanks for everything but I have one more question..you said that "Using the fact that B ⊂ cl(A) prove that contradicts the given that A is connected" How can I show this? March 16th 2011, 01:12 PM Suppose that $G\cup H=B$ is a separation of $B$. Because $A\subseteq B$ and $A$ is connected we have $A\subseteq G$ or $A\subseteq B$. Say $A\subseteq G$. Now is it possible for $H$ to be nonempty? If the answer is no, then you have a contradiction. And that means that $B$ is connected.
{"url":"http://mathhelpforum.com/differential-geometry/174747-connected-sets-print.html","timestamp":"2014-04-18T11:53:56Z","content_type":null,"content_length":"12150","record_id":"<urn:uuid:7f863481-4476-4116-964d-709f8f7780c2>","cc-path":"CC-MAIN-2014-15/segments/1397609533308.11/warc/CC-MAIN-20140416005213-00099-ip-10-147-4-33.ec2.internal.warc.gz"}
[Beowulf] OT: public random numbers? Vincent Diepeveen diep at xs4all.nl Fri Aug 26 12:53:14 EDT 2011 On Aug 26, 2011, at 2:57 PM, Robert G. Brown wrote: > On Fri, 26 Aug 2011, Vincent Diepeveen wrote: >> EVERY PROGRAMMER IS DOING THIS TO USE RANDOM NUMBERS IN THEIR >> PROGRAM. > Bullshit. "Every programmer" isn't dumb as a post. Or wasn't my > argument clear enough? Do you need me to actually post the code > for how > the GSL -- written by at least some of these programmers -- do this? > Here, I'll try again. This time I'll use smaller numbers and make an > actual table of the outcomes: Imagine only two lousy random bits, > enough to make 00, 01, 10, 11 (or 0,1,2,3). Here is the probability > table: > r = 0 1 2 3 > ------------------------ > p = 0.25 0.25 0.25 0.25 > Let us generate N samples from this distribution. Our expected > frequency of the occurrence of all of these numbers is: > r = 0 1 2 3 > --------------------------------- > Np = 0.25*N 0.25*N 0.25*N 0.25*N > Is this all clear? If I generate 100 random numbers, the expected > number of 3's is 0.25*100 = 25. Now apply mod 3 the outcomes are > now: > r = 0 1 2 3 > r%3 = 0 1 2 0 > --------------------------------- > Np = 0.25*N 0.25*N 0.25*N 0.25*N > You now sum the number of outcomes for each element in the mod 3 > table, > since we have two values of r that make one value of r%3 and frequency > clearly aggregates as the outcomes are independent. > r%3 = 0 1 2 > --------------------------------- > Np = 0.50*N 0.25*N 0.25*N 0.25*N > It is therefore twice as likely that two random bits, modulus 3, will > produce a zero. If you have a domain of 0..3 where a generator generates and your modulo n is just n-1, obviously that means it'll map a tad more to 0. Basically the deviation one would be able to measure in such case is that if we have a generator that runs over a field of say size m and we want to map that onto n entries then we have the next formula : m = x * n + y; Now your theory is basically if i summarize it that in such case the 0..y-1 will have a tad higher hit than y.. m-1. However if x is large enough that shouldn't be a big problem. If we map now in the test i'm doing onto say a few million to a billion entries, the size of that x is a number of 40+ bits for most RNG's. So that means that the deviation of the effect you show above the order of magnitued of 1 / 2^40 in such case, which is rather small. Especially because the 'test' if you want to call it like that, is operating in the granularity O ( log n ), we can fully ignore then the expected deviation granularity O ( 2 ^ 40 ). >> Apologies for the caps. I hope how important this is. You're >> claiming all programmers >> use random numbers in a faulty manner? > They don't. Only you do. Everybody else takes a uniform deviate and > scales it by the number of desired integer outcomes, checking to make > sure that they don't go out of bounds and thereby e.g. get an > incorrect > endpoint frequency. The gsl code is open source and it takes two > minutes to download it and check (I just timed it). Go on, look. the > file is rng/rng.c in the gsl distro directory, the function name is > gsl_rng_uniform_int. No modulus. > The exception is (obviously) when the range is a power of 2. In that > case ONLY, r%n where r is a binary uint and n is a power of 2 will > (obviously) equally balance the table above. Personally I'd use >> > and > shift the bits because it is faster than mod, but suit yourself, after > you've learned what you are doing. >> This is important enough to further discuss about it. >> As nearly always you need random numbers from within a given >> domain say 0.. n-1 >> So projecting a RNG onto that domain is pretty crucial. How would >> you want to do that in a correct manner? >> In the slot test in fact a simple AND is enough. > No, as I've just proven algebraically. The correct manner for > general n > is the gsl code, but in rough terms it is n*r/r_max (with care used to > avoid roundoff errors at the ends as noted). If you've been using > modulus, all your results are crap. > Look, the reason God invented the GSL and made it open source is so > numb-nuts and smart people alike wouldn't have to constantly reinvent > the wheel, badly. Use it. Don't question it -- you obviously aren't > competent to. Just use it. If you want a random integer from 0 to n, > use gsl_rn_uniform_int. If you want this for e.g. mt19937 don't write > the latter, set up the gsl to use it to generate your ints. Learn to > use it carefully, use it correctly, but use it. >> It's not interesting to discuss - but yes this strategy makes >> money in casino's, >> you just get thrown out of the casino and end up at the blacklist >> if you do. > You are clearly too stupid to be allowed out of the house without a > caretaker. I'm not going to walk you through the proof that this > isn't > so as it is openly published and I've already referenced a step my > step > analysis that you can't be bothered, apparently, to actually read. > I'll > just reiterate the previous offer -- I, too, am happy to buy a > roulette > wheel and you can come over and bet Martingale against me all day. > Just > one 0, no limits and no quitting, infinite credit on both sides, we > play > until it is obvious to you that you are losing, have lost, will always > lose, and the longer you play the more that you will lose. Loser buys > the winner a case of truly excellent beer. > Look, why don't you fix your random number code and try again, since > your simulations are obviously trash. It isn't difficult to show this > with simulations, once you actually code them correctly, but I have to > go and don't have time to do it for you. > rgb > Robert G. Brown http://www.phy.duke.edu/~rgb/ > Duke University Dept. of Physics, Box 90305 > Durham, N.C. 27708-0305 > Phone: 1-919-660-2567 Fax: 919-660-2525 email:rgb at phy.duke.edu Beowulf mailing list, Beowulf at beowulf.org sponsored by Penguin Computing To change your subscription (digest mode or unsubscribe) visit http://www.beowulf.org/mailman/listinfo/beowulf This message has been scanned for viruses and dangerous content by MailScanner, and is believed to be clean. More information about the Beowulf mailing list
{"url":"http://www.clustermonkey.net/pipermail/beowulf/2011-August/053325.html","timestamp":"2014-04-17T21:35:45Z","content_type":null,"content_length":"10486","record_id":"<urn:uuid:c9ef6fe0-3d6e-4b30-8d0e-799d5e1c04b8>","cc-path":"CC-MAIN-2014-15/segments/1397609532128.44/warc/CC-MAIN-20140416005212-00234-ip-10-147-4-33.ec2.internal.warc.gz"}
Physics Forums - View Single Post - 0 divided by 0 Hii , Nano-Passion !! This is a very interesting question. 0/0 is neither ∞ nor 0. It is what is "Indeterminate". For example we can say that 1/0 = ∞ because 0x1 = 0 , 0x10 = 0. So we assume that somehow at an undefined place that is ∞ 0 will become 1. But in case of 0/0 , every equation is satisfied ! 0/0 = x , where x can be any number. So this is kinda indeterminable. Here is the best explanation of 0/0 by Doctor Math : Read it , it is very interesting. Here are some simple examples: • If you choose an integer at random from Z, what is the probability that the integer chosen is 0? • If you choose an integer at random from Z, what is the probability that the integer lies between -N and N? • If you choose a real number at random from R, what is the probability that the real number chosen is rational (or algebraic)? In each case, the probability in question is 0. The third statement is a little more complicated, but it has a nice proof once you have measure-theoretic concepts. I will prove that the probability of the second statement is 0: Let [(2N) ] = {-(2N) , ... , (2N) }. Then for a fixed m, the probability of choosing an integer between -N and N is (2N) . By letting m → ∞, we see that the probability goes to 0. In particular, in the limiting case (when we are choosing elements from ), the probability I should probably write this more formally and nicely, but it captures the point. So there's your example. If you don't think that the limit actually 0, but rather is something else, what do you propose that something else should be? The third question is very interesting actually haha. I agree with you, when you are dealing with an infinite amount of numbers then it would be 0. But what about a finite amount of numbers? Can a number occur with 0 probability, such that n is a finite number [in this case let us limit n to a world consisting only of 50 digits]. Hmm, probability 0 is indeed a silly concept. Most people think of probability as throwing dice, and indeed: throwing a 5.5 with a dice has probability 0 and thus never happens. But it is important not to generalize this situations. There are some probability 0 situations which can happen. As an example: choosing an arbitrary number in the interval [0,1]. It is clear that all numbers have the same probability p of being chosen. However, saying that a number has probability p>0 is wrong, since [itex]\sum_{x\in [0,1]}{p}\neq 1[/itex]. So we NEED to choose p=0. So choosing probability 0 for this is actually quite unfortunate and caused by a limitation of mathematics. However, there is another way of seeing this. Probability can be seen as some "average" value. For example, if I throw dices n times (with n big), then I can count how many times I throw 6. Let [itex]a_n[/itex] be the number of 6's I throw. Then it is true that [tex]\frac{a_n}{n}\rightarrow \frac{1}{6}[/tex] So a probability is actually better seen as some kind of average. Now it becomes easier to deal with probability 0. Saying that an event has probability 0 is now actually a limiting average. So let [itex]a_n[/itex] be the number of times that the event holds, then we have [tex]\frac{a_n}{n}\rightarrow 0[/tex] It becomes obvious now that the event CAN become true. For example, if the event happens 1 or 2 times, then we the probability is indeed 0. It can even happen an infinite number of times. Probability 0 should not be seen as a impossibility, rather it should be seen as "if I take a large number of experiments, then the event will become more and more unlikely". This is what probability 0 means. Hey, thanks for the reply. To me probability rings to my neurons as a tendency to become a value over a period of time or over n times. But that is just my definition of course. If we take this definition in that context, then perhaps a probability of 0 would imply that it has 0 tendency to become any value over a period of time or n times. But then I guess this doesn't hold true in the mathematical context. I wonder, if something has 0 probability in Quantum Mechanics, can it happen? I suppose it can, which would support your statement.
{"url":"http://www.physicsforums.com/showpost.php?p=3641851&postcount=22","timestamp":"2014-04-18T00:29:58Z","content_type":null,"content_length":"13860","record_id":"<urn:uuid:2030f69c-3d4b-4c5a-b79f-49743d49028c>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00530-ip-10-147-4-33.ec2.internal.warc.gz"}
Montessori - Mathematics - Introduction Math is all around the young child from day one. How old are you? In one hour you will go to school. You were born on the 2nd. Number itself cannot be defined and understand of number grows from experience with real objects but eventually they become abstract ideas. It is one of the most abstract concepts that the human mind has encountered. No physical aspects of objects can ever suggest the idea of number. The ability to count, to compute, and to use numerical relationships are among the most significant among human achievements. The concept of number is not the contribution of a single individual but is the product of a gradual, social evolution. The number system which has been created over thousands of years is an abstract invention. It began with the realization of one and then more than one. It is marvelous to see the readiness of the child’s understanding of this same concept. Arithmetic deals with shape, space, numbers, and their relationships and attributes by the use of numbers and symbols. It is a study of the science of pattern and includes patterns of all kinds, such as numerical patterns, abstract patterns, patterns of shape and motion. In the Montessori classroom, five families with math are presented to the child: arithmetic, geometry, statistics and calculus. More precisely, the concepts covered in the Primary class are numeration, the decimal system, computation, the arithmetic tables, whole numbers, fractions, and positive numbers. We offer arithmetic to the child in the final two years of the first place of developments from age four to age five and six. Arithmetic is the science of computing using positive real numbers. It is specifically the process of addition, subtraction, multiplication and division. The materials of the Primary Montessori classroom also present sensorial experiences in geometry and algebra. It is therefore part of the nature of a human being. Mathematics arises form the human mind as it comes into contact with the world and as it contemplates the universe and the factors of time and It under girds the effort of the human to understand the world in which he lives. All humans exhibit this mathematical propensity, even little children. It can therefore be said that human kind has a mathematical mind. Montessori took this idea that the human has a mathematical mind from the French philosopher Pascal. Maria Montessori said that a mathematical mind was “a sort of mind which is built up with exactity.” The mathematical mind tends to estimate, needs to quantify, to see identity, similarity, difference, and patterns, to make order and sequence and to control error. The infant and young child observes and experiences the world sensorial. From this experience the child abstracts concepts and qualities of the things in the environment. These concepts allow the child to create mental order. The child establishes a mental map, which supports adaptation to the environment and the changes which may occur in it. Clear, precise, abstract ideas are used for thought. The child’s growing knowledge of the environment makes it possible for him to have a sense of positioning in space. Numerocity is also related to special orientation. In the first plane of development, the human tendency to make order along with the sensitive period for order support the exactitude by which the child classifies experience of the world. The Montessori materials help the child construct precise order. In the class, the child is offered material and experiences to help him build internal order. It is internal order that makes the child able to function well in the environment. Order under girds the power to reason, and adapt to change in the environment. Each culture has a pattern of function in that society. This pattern is absorbed by the child, and becomes the foundation of which the child builds his life. This cultural pattern is the context for the Montessori class. Practical life Exercises are the every day tasks of the home culture and include the courtesies by which people relate. The child is attracted to these activities because they are the ways of his people. He is attracted to the real purpose which engages his intellect. As he begins to work with Practical Life Exercises, he is more and more attracted to the order and precision that is required. Participation in these activities help the child become a member of the society of peers in the classroom. Without the child’s knowing it, these activities are laying out patterns in the nervous system. Repetition sets these patterns and leads to ease of effort. The Sensorial Material is mathematical material. It is exact. It is presented with exactness and will be used by the child with exactness. The activities call for precision so that the child can come into contact with the isolated concepts and through repetition, draw from the essence of each and have a clear abstraction. These concepts help the child to order his mind. He is able to classify experience. Clear perception and the ability to classify leads to precise conclusions. The Sensorial work is a preparation for the study of sequence and progression. It helps the child build up spatial representations of quantities and to form images of their magnitudes such as the Pink Tower. Spoken language is used to express abstract concepts and to communicate them to others. In addition to the spoken language, humans came to need a language to express quantitative experience, and from this came the language of mathematics. By age four, the child is ready for the language of mathematics. A series of preparations have been made. First the child has established internal order. Second, the child has developed precise movement. Third, the child has established the work habit. Fourth, the child is able to follow and complete a work cycle. Fifth, the child has the ability to concentrate. Sixth, the child has learned to follow a process. Seventh, the child has used symbols. All of this previous development has brought the child to a maturity of mind and a readiness of work. The concrete materials for arithmetic are materialized abstractions. They are developmentally appropriate ways for the child to explore arithmetic. The child gets sensorial impressions of the mathematical concepts and movement supports the learning experience. The material begins with concrete experiences but moves the child towards the abstract. There is also a progression of difficulty. In the presentation of the material, a pattern is followed. It is used throughout the arithmetic Exercises. For the presentation of the mathematical concepts, the child is first introduced to quantity in isolation, and is given the name for it. Next, symbol is introduced in isolation and it is also named. The child is then given the opportunity to associate the quantity and symbol. Sequence is given incidentally in all of the work. Various Exercises call for the child to establish sequence. The Exercises in arithmetic are grouped. There is some sequential work and some parallel work. The first group is Numbers through Ten. The experiences in this group are sequential. When the child has a full understanding of numbers through ten, the second group, The Decimal System, can be introduced. The focus here is on the hierarchy of the decimal system and how the system functions. It also starts the child on the Exercises of simple computations, which are the operations of arithmetic. The third group will be started when the decimal system is well underway. From then on, these Exercises will be given parallel to the continuing of the decimal system. This third group, Counting beyond Ten, includes the teens, the tens, and linear and skip counting. The fourth group is the memorization of the arithmetic tables. This work can begin while the later work of the decimal system and the counting beyond ten Exercises are continued. The fifth group is the passage to abstraction. The Exercises in this group require the child to understand the process of each form of arithmetic and to know the tables of each operation. There is again an overlap. The child who knows the process and tables for addition can begin to do the addition for this group. He may still be working on learning the tables for the other operations and these will not be taken up until he has the readiness. The Exercises in the group for passing to abstraction, allows the child to drop the use of the material as he is ready. He can then begin to work more and more with the symbols on paper, without using the material to find the answers. The sixth group of materials, Fractions, can work parallel to the group of making abstractions and the early work with the fractions can begin even sooner than that. Sensorial work with the fraction material can be done parallel with the other groups of arithmetic. The writing of fractions and the operations of fractions can follow as the child is moving into the passage to abstraction. The adult is responsible for the environment and the child’s experiences in it. It is important to provide the indirect preparation of experience with numbers before it is studied. The arithmetic materials must be carefully presented as the child is ready. Montessori has emphasized that young children take great pleasure in the number work. It is therefore important that the adult not pass on any negative overtone onto the child’s experiences with arithmetic. These Exercises are presented with great enthusiasm. They must be carefully and clearly given to the child. In this work, it is also important for the directress to observe the child’s work. From observation, the directress will know if the child is understanding the concepts or if further help is needed. As always, the adult encourages repetition and provides for independent work, which will lead to mastery. When the child is ready, the absorption is as easy and natural as for other areas of knowledge. It is empowering and brings the child to a level of confidence and joy in another path of culture. The abstract nature of man is not an abstraction if the child’s development is understood by the adult. Share your experiences in the
{"url":"http://www.infomontessori.com/mathematics/introduction.htm","timestamp":"2014-04-20T23:27:00Z","content_type":null,"content_length":"34259","record_id":"<urn:uuid:13f22983-93c9-40e0-a221-bec2125963cf>","cc-path":"CC-MAIN-2014-15/segments/1398223207046.13/warc/CC-MAIN-20140423032007-00347-ip-10-147-4-33.ec2.internal.warc.gz"}
Reference on the question mark cell complex up vote 4 down vote favorite The question mark complex is a finite spectrum whose cohomology looks like a "question mark" (when drawn as a module over the Steenrod algebra): that is, there is an element in dimension zero $a_0$, an element $a_2 = \mathrm{Sq}^2 a_0$, and an element $a_6 = \mathrm{Sq}^4 a_2$. It can be constructed by starting with $\Sigma^{-2} \mathbb{CP}^2$, which maps to $S^2$ (thanks to the cofiber sequence $S^1 \to S^0 \to \Sigma^{-2} \mathbb{CP}^2 \to S^2$) and lifting the map $\nu: S^5 \to S^2$ to $\Sigma^{-2} \mathbb{CP}^2$ (which can be done since $\eta \nu = 0$). Then, one takes the cofiber of $S^ 5 \to \Sigma^{-2}\mathbb{CP}^2$. Does anyone know any good references on this? I'd like, ideally, to see a few example computations done with it; as it is I don't really have much intuition for how to work with it. at.algebraic-topology stable-homotopy have you tried constructing a similar complex but with $sq^1$ and $sq^2$ instead? – Sean Tilson Jul 21 '12 at 21:40 Hi Sean. There is such a complex (same logic as above, using $2 \eta = 0$ instead of $\eta \nu = 0$). I'd be interested in seeing computations, e.g., of the stable homotopy of this complex as well. – Akhil Mathew Jul 22 '12 at 1:45 1 I don't know what the $K$-theory operations look like, actually. The AHSS degenerates and $K^0(?)$ is free on three generators; there's a cofiber sequence $S^0 \to ? \to \Sigma^{-2} \mathbb{HP}^2$ which means that we know the $K$-theory operations except for one: the generator lifting the element of $K^0(S^0)$ could have strange Adams operations --- kind of like what happens in "On the groups J(X) IV." – Akhil Mathew Jul 22 '12 at 12:30 1 I was thinking you would use an Adams SS to compute its connective real k-theory... – Sean Tilson Jul 24 '12 at 7:57 There are a couple more references, also related to the Picard group - the dual of the question mark complex is referenced in Goerss-Henn-Mahowald-Rezk's "Picard groups at chromatic level 2 for $p 1 = 3$" paper - they at least tell you a bit about its $K$ theory and $KO$ theory. There are also some (hard!) calculations in Ichigi-Shimomura's "$E(2)_*$-invertible spectra smashing with the Smith-Toda spectrum $V(1)$ at the prime 3" (see section 3 in particular) – Drew Heard Dec 19 '12 at 10:36 show 2 more comments 1 Answer active oldest votes This is hardly a "canonical example", but one place I've seen the question mark complex $Q$ in action is in Hovey and Sadofsky's paper Invertible spectra in the $E(n)$-local stable homotopy category. There they compute that the even part of the Picard group of the $E(1)$-local stable category at $p = 2$ is generated by (the $E(1)$-localization of) $Q$. As a warning, the exposition at the tail of the paper where this appears is a bit skeletal; you'll be working out the details of the computations and digging up references on your own. up vote 4 down vote But, in turn, they do cite Hopkins' Minimal atlases of real projective spaces for some facts about $Q$, which I have not read but looks to be useful. He seems to call the complex $N_2$ accepted in section 7, for reasons he explains pertaining to the cohomology of $bo\langle i \rangle$. Thanks for these references. – Akhil Mathew Jul 22 '12 at 2:23 For whatever reason, googling "question mark cell complex" did not turn up anything. – Akhil Mathew Jul 22 '12 at 2:24 Sure thing. It's a bit of a silly name, certainly easier for humans to remember than for Google. – Eric Peterson Jul 22 '12 at 14:21 add comment Not the answer you're looking for? Browse other questions tagged at.algebraic-topology stable-homotopy or ask your own question.
{"url":"http://mathoverflow.net/questions/102823/reference-on-the-question-mark-cell-complex/102841","timestamp":"2014-04-21T10:31:49Z","content_type":null,"content_length":"60366","record_id":"<urn:uuid:6bc7c2f6-e1f3-4ab2-9413-766a15f2a445>","cc-path":"CC-MAIN-2014-15/segments/1397609539705.42/warc/CC-MAIN-20140416005219-00149-ip-10-147-4-33.ec2.internal.warc.gz"}
Law Of Restitution And Momentum let's assume that there are two ball of identical mass and one of the two is stationary . and the collision is NOT head on . I know that the law of restitution always applies ONLY to the line of impact of two bodies .What about momentum , is it conserved in both the line of impact AND the line perpendicular to it ? I am asking because i feel like the law of restitution is similar to momentum conservation so i feel like applying it "twice". Is momentum conserved in all axes regardless of whether 0≤e≤1 ? Thank you !
{"url":"http://www.physicsforums.com/showthread.php?p=3890020","timestamp":"2014-04-17T00:58:30Z","content_type":null,"content_length":"19725","record_id":"<urn:uuid:9ef5a218-c65b-468f-820b-59b3cc15f54d>","cc-path":"CC-MAIN-2014-15/segments/1398223210034.18/warc/CC-MAIN-20140423032010-00250-ip-10-147-4-33.ec2.internal.warc.gz"}
James Cussens [Research] [PhD supervision] [Projects] [Software] [Teaching] [Professional Activities] [Administration] [Personal history] [Contact information] [Dept home page] GOBNILP software for exact Bayesian network learning Topics for prospective PhD students Recent papers • Chris J. Oates, Jim Q. Smith, Sach Mukherjee and James Cussens. Exact Estimation of Multiple Directed Acyclic Graphs. Arkiv 1404.1238, April 2014. • James Cussens. Probability, Uncertainty and Artificial Intelligence. Metascience, (In press). • Lilia Costa, Jim Smith, Thomas Nichols and James Cussens. Searching Multiregression Dynamic Models of Resting-State fMRI Networks using Integer Programming. Working paper 13-20, Centre for Research in Statistical Methodology (CRiSM), University of Warwick, 2013. • James Cussens. Integer Programming for Bayesian Network Structure Learning. Quality Technology and Quantitative Management, 11(1):99-110, March 2014. • Mark Barlett and James Cussens. Advances in Bayesian Network Learning using Integer Programming. Proceedings of the 29th Conference on Uncertainty in Artificial Intelligence (UAI 2013). 182-191, • Joanne Powell, Matthew J. Collins, James Cussens, Norman MacLeod and Kirsty E.H. Penkman. Results from an amino acid racemization inter-laboratory proficiency study; design and performance evaluation. Quaternary Geochronology, 16:183-197, 2013. • James Cussens, Mark Bartlett, Elinor M. Jones and Nuala A. Sheehan. Maximum Likelihood Pedigree Reconstruction using Integer Linear Programming. Genetic Epidemiology, 37(1):69-83, Janary 2013. • James Cussens. Leibniz on Probability and Statistics. In Maria Rosa Antognazza, editor, The Oxford Handbook of Leibniz. Oxford University Press. (In press). • James Cussens. Column generation for exact BN learning: Work in progress. Proc. ECAI-2012 workshop on COmbining COnstraint solving with MIning and LEarning (CoCoMile 2012). • James Cussens. An upper bound for BDeu local scores. Proc. ECAI-2012 workshop on algorithmic issues for inference in graphical models (AIGM 2012). • James Cussens. Online Bayesian inference for the parameters of PRISM programs. Machine Learning. 89(3), 279-297, 2012 (Preprint available) (DOI: 10.1007/s10994-012-5305-8) • James Cussens. Bayesian network learning with cutting planes (PDF). In Fabio G. Cozman and Avi Pfeffer, editors, Proceedings of the 27th Conference on Uncertainty in Artificial Intelligence (UAI 2011), pages 153-160, Barcelona, 2011. AUAI Press. Recent talks Current PhD students • Garo Panikian - Statistical inference of dynamical systems with application to modelling fish populations • Eman Aljohani - Informative priors for learning graphical models Former students • Waleed Alsanie - Learning PRISM programs • Joanne Powell - PrediCtoR: Predicting the Recovery of Ancient DNA and Ancient Proteins (with Matthew Collins, Archaeology) • Adel Aloraini - Extending the graphical represetation of KEGG pathways for a better understanding of prostate cancer using machine learning • Barnaby Fisher - Inductive Logic Programming and Mercury (MSc by Research) • Heather Maclaren - Inductive Logic Programming for Software Agents: Algorithms and Implementations My main research interests are in machine learning, probabilistic graphical models and discrete optimisation using integer programming. I also work on statistical relational learning and, occasionally, philosophy of probability. Here are some possible topics for a PhD. Let me know if you're interested! • Model selection for graphical models using integer programming. I have recently been working on this problem. However, to date, we have restricted attention to learning directed graphical models ('Bayesian networks') from complete discrete data. Extending this approach to other graphical model learning problems is an exciting research area. Problems include: learning Gaussian graphical models (decomposable or unrestricted), chain event graphs, non-graphical log-linear models (probably restricted to hierarchical ones), etc, etc. • From optimisation to integration. In Bayesian statistics finding the most probable model is useful and I have applied integer programming to solve this optimisation problem (for directed graphical models with complete discrete data). However a full Bayesian approach requires consideration of the entire posterior distribution over models. Typically one wants to compute some marginal posterior quantity (e.g. the posterior probability that some edge exists in a graphical model). This requires computing weighted sums (discrete integration) over a very large set (eg the set of all acyclic digraphs). Recent work by Ermon and colleagues has shown that one can get good approximations to this sum by repeatedly solving optimisation problems with random constraints. This area merits further research. In particular it would be useful to compare it to MCMC approaches to the same problem. • Mixed integer programming for product space approaches. Integer programming has been used for Bayesian model selection when it has been possible to analytically 'integrate away' model parameters. This is, of course, a big restriction. It would be useful to look into applying mixed integer programming (where both discrete and continuous variables are used) to sampling from a 'product space' encoding both models (discrete) and parameters (continuous). An extension to the 'random constraint' approach mentioned above should allow such sampling to be reduced to repeated To undertake research in these areas it is necessary to have a strong mathematical background. Unfortunately, I have no specific funding available to support graduate students. Our department does award funded research studentships on a competitive basis. • gPy is a collection of Python modules for manipulating discrete hierarchical models (including Bayesian nets). It is used to support the teaching of Algorithms for Graphical Models. • The MCMCMS (Markov chain Monte Carlo over Model Structures) system uses Stochastic Logic Programs (SLPs) to define priors for Bayesian inference. The code was written by Nicos Angelopoulos. • Pepl is an implementation of the Failure-Adjusted Maximisation (FAM) algorithm.This is an instance of the EM algorithm which produces maximum likelihood estimates for the parameters of SLPs. The code was written by Nicos Angelopoulos. • Aaron Bate, a final year student in this department, has produced software for animating the construction of Prolog proof trees (the software draws a graphical representation of the proof tree). It uses Sicstus Prolog and Tcl/Tk. You can download the software as a gzipped tar file. I have included a simple Prolog SLP interpreter which allows you to sample from a distribution over proof trees and where each proof tree determines an acyclic digraph (the structural element of a Bayesian net). See the file slp_readme in the distribution for an explanation. Currently, at York: Professional Activities Programme chair Editorial duties Invited speaker Area chair/Senior PC PC member • ICML 2014, UAI 2014, ILP 2014, ECML/PKDD 2014, AAAI-14, KR 2014, ECAI'14, CoCoMiLe 2014, BUDA 2014, • ICML 2013, UAI 2013, ILP 2013, ECML/PKDD 2013, EMNLP 2013, NAACL-HLT 2013, LML workshop at ECML/PKDD 2013 • ICML 2012, UAI 2012, ILP 2012, ECML/PKDD 2012, AAAI-12, KR 2012 StaRAI-12, CoCoMile 2012, ACL 2012, Cognitive 2012, • ILP 2010, AAAI-10, ECAI-2010, ECML/PKDD 2010, SBIA 2010 • ICML 09, ILP-09, SRL-09, Terminologie et intelligence artificielle (TIA - 2009), IJCAI-09, AISTATS 09, CoNLL 09, NAACL-HLT 09, EACL Cognitive 2009, NAACL-2009 Workshop on Unsupervised and Minimally Supervised Learning of Lexical Semantics • ICML 07, UAI07, ILP07, ACL-2007 Workshop on Cognitive Aspects of Computational Language Acquisition, TIA'07 • UAI06, ILP06, AAAI-06, SRL06, CoNLL06 • ICML05, UAI05, ILP05, ECML/PKDD05, LLLL, CoNLL05, TIA05 • ICML04, UAI04, ECML04, CIFT04, SRL04, CoNLL04, Psycho-computational models ... • ILP01, ECML01 , CoNLL01, LLL01 • ILP00, CoNLL00, LLL00 • ILP98 Personal history 2001- Senior Lecturer in the Artifical Intelligence Group University of York Oct 1997-2001 Lecturer in the Artifical Intelligence Group University of York Feb 1996-Sept 1997 Researcher on the ILP2 project University of Oxford 1994-1995 Researcher on the ISSAFE project Glasgow Caledonian University 1991-1993 Researcher on the RUBS project King's College London 1990 Researcher on a stepwise refinement project University of Oxford 1986-1989 PhD student in Philosophy of Science King's College London 1983-1986 BSc student in Mathematics University of Warwick Contact information Address Dept of Computer Science and York Centre for Complex Systems Analysis, Room 326, The Hub, Deramore Lane, University of York, York, YO10 5GE, UK Direct phone +44 1904 325371 Fax +44 1904 500159 firstname.lastname AT cs DOT york DOT ac DOT uk
{"url":"http://www-users.cs.york.ac.uk/~jc/","timestamp":"2014-04-19T09:25:33Z","content_type":null,"content_length":"37311","record_id":"<urn:uuid:15005c19-9092-435d-a465-e4da059e4032>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00139-ip-10-147-4-33.ec2.internal.warc.gz"}
How to Draw a Quadrilateral? Quadrilateral is a word formed by Combination of two words i.e. 'Quad' and 'Latus' here 'Quad' means 'Four' and 'Latus' means 'Side'. The sum of internal angles of regular quadrilateral PQRS is equals to 360 degree. We can call a quadrilateral as a Polygon with four sides or edges. Quadrilateral can be any of the following type like: Rhombus, Parallelogram, Rhomboid, Rectangle, Square, Oblong, Trapezoid, Trapezium and Tangential. Now we will see how to draw a Quadrilateral. Let us consider the example for the drawing Quadrilateral: Let us take a Parallelogram. First let us have a brief idea about parallelogram. A parallelogram is simple Quadrilateral with two pairs of parallel sides. But condition is that opposite sides should be of equal lengths and their diagonals bisect each other and posses a Rotational Symmetry. Now let us see steps for drawing a Quadrilateral i.e. a parallelogram which are as follows: 1. First draw a line segment of any arbitrary length and mark two end points as 'A' and 'B'. 2. Then at starting Point 'A' of line constructs an angle of 60 degree. 3. Now cut the arc from first end point 'A' of particular length, let us take 4 inch and mark it as point 'D'. 4. Now similarly from point 'B' make an angle of 60 degree now again cut the arc from point 'B' of length same as previous drawn line AD, of 4 inch. 5. Now mark that point as 'C'. 6. Now join the points 'C' and 'D' together, figure formed is known as Parallelogram. You can see the picture of the parallelogram below: Now after following above mentioned simple steps, we can easily draw a Parallelogram.
{"url":"http://math.tutorcircle.com/geometry/how-to-draw-a-quadrilateral.html","timestamp":"2014-04-20T23:28:06Z","content_type":null,"content_length":"20246","record_id":"<urn:uuid:e234eab7-e82a-42ea-a6e3-637235b913a5>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00258-ip-10-147-4-33.ec2.internal.warc.gz"}
An acoustic correction method for extracting sound signals Wang, Z.K., Djambazov, G.S., Lai, C.-H. and Pericleous, K.A. (2004) An acoustic correction method for extracting sound signals. Computers & Mathematics with Applications, 47 (1). pp. 57-69. ISSN 0898-1221 (doi:10.1016/S0898-1221(04)90005-3) Full text not available from this repository. Sound waves are propagating pressure fluctuations and are typically several orders of magnitude smaller than the pressure variations in the flow field that account for flow acceleration. On the other hand, these fluctuations travel at the speed of sound in the medium, not as a transported fluid quantity. Due to the above two properties, the Reynolds averaged Navier-Stokes (RANS) equations do not resolve the acoustic fluctuations. Direct numerical simulation of turbulent flow is still a prohibitively expensive tool to perform noise analysis. This paper proposes the acousticcorrectionmethod, an alternative and affordable tool based on a modified defect correction concept, which leads to an efficient algorithm for computational aeroacoustics and noise analysis. Actions (login required)
{"url":"http://gala.gre.ac.uk/715/","timestamp":"2014-04-20T08:45:19Z","content_type":null,"content_length":"28361","record_id":"<urn:uuid:5e1e9e3c-bfac-432d-8aa5-d8df6bf5ba5e>","cc-path":"CC-MAIN-2014-15/segments/1398223204388.12/warc/CC-MAIN-20140423032004-00355-ip-10-147-4-33.ec2.internal.warc.gz"}
haos on hyperbolic surfaces TCC Course (Winter 2013) Dynamics and Quantum Chaos on hyperbolic surfaces Course Description: The aim of this course is to develop, from scratch, classical and quantum dynamics in the setting of hyperbolic surfaces. Flows on hyperbolic surfaces can be introduced with minimal prerequsites, but at the same time they exhibit surprisingly rich chaotic properties such as ergodicity and mixing. From the quantum point of view these chaotic properties are reflected in the asymptotic behaviour of high-energy eigenstates of the Laplace operator which are of fundamental importance in mathematical physics and the theory of automorphic forms. In this course we only assume basic knowledge of analysis and start with an elementary discussion of hyperbolic geometry and constructions of hyperbolic surfaces. Then we introduce the geodesic and horocycle flows and investigate distribution of their orbits. In the second part of the course we turn to discussion of quantum phenomena. This involves the study of eigenfunctions φ[λ] for the Laplace operator Δ. It turns out that the chaotic properties of the geodesic flow are reflected in the asymtotic behaviour of the eigenfunctions φ[λ] as λ→∞. We explain, in details, such phenomena as the quantum ergodicity and the quantum unique ergodicity in the setting of arithmetic surfaces. The highlight of the course would be an outline Lindenstrauss' proof of arithmetic quantum unique ergodicity. We plan to cover the following topics: • basic hyperbolic geometry, • geometric and arithmetic examples of hyperbolic surfaces, • geodesic and horocycle flows, • ergodicity and mixing properties, • invariant measures for the horocycle flow, • Casimir and Laplace operator, • microlocal lift and quantum ergodicity, • Hecke operators and arithmetic quantum unique ergodicity. Time: Friday 10-12am; See TCC Timetable Lecture notes: Homework Problems: Course Assessment: The course is assessed by the above problem sheets. Solutions for 10 problems have to be submitted by April 22. • B. Bekka and M. Mayer, Ergodic theory and topological dynamics of group actions on homogeneous spaces. Cambridge University Press, 2000. • P. Buser, Geometry and spectra of compact Riemann surfaces. Birkhauser, 1992. • F. Dal'Bo, Geodesic and horocyclic trajectories. Universitext, 2011. • M. Einsiedler and E. Lindenstrauss, Diagonal actions on locally homogeneous spaces. Clay Math. Proc. 10, pp. 155-241, 2010. • M. Einsiedler and T. Ward, Arithmetic quantum unique ergodicity. Lecture notes, 2010. • M. Einsiedler and T. Ward, Ergodic theory with a view towards number theory. Springer, 2011. • Introduction to Hyperbolic Surfaces. Lecture notes, 2012. • P. Sarnak, Arithmetic quantum chaos. Israel Math. Conf. Proc., 8, Bar-Ilan Univ., Ramat Gan, 1995. • P. Sarnak, Spectra of hyperbolic surfaces. Bull. Amer. Math. Soc. (N.S.) 40 (2003), no. 4, 441-478.
{"url":"http://www.maths.bris.ac.uk/~mazag/hyperbolic/index.html","timestamp":"2014-04-19T14:29:19Z","content_type":null,"content_length":"5611","record_id":"<urn:uuid:55c05068-f6a9-44a6-85e7-26cb17ae6ee9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00086-ip-10-147-4-33.ec2.internal.warc.gz"}
Targetting Obs EMC Targetting Obs Project During the FASTEX field experiments in January-February 1997, NCEP provides experimental guidance for upstream observations. In order to improve 36-60 hours weather forecasts over the main observing area (also called verification area, 20-0W, 45-55N), sensitivity calculations are carried out to identify upstream areas where extra observations prior to initial time can have a beneficial impact on the quality of forecasts at final time in the verification area. So that the sensitivity calculations can be used in real time they must be prepared at least 24 hours in advance. At NCEP, a singular value decomposition (SVD) technique is used for the sensitivity calculations (Bishop and Toth, 1996) . This technique is based on an ensemble of nonlinear forecasts. The SVD is carried out in the subspace of the 14 NCEP ensemble forecasts, given at the targeting time (usually at 24-hour lead time, also called initial time level) and at verification time (usually somewhere between 36 and 96 hours, also called final time level). For the computations, the 850, 500 and 250 hPa streamfunction field is used. Linear combinations of the ensemble perturbations are sought that maximize the perturbaions at verification time while keeping the initial perturbations fixed at an estimated level of analysis uncertainty. The square root of the eigenvalue of the above calculation is the amplification marked on the sensitivity maps. To find the most sensitive upstream area, we perform a series of SVD calculations where we reduce the initial perturbations in a 225 km radius "observing" area to a low level (smallest estimated uncertainty on the earth) to see how much (if any) the final perturbation amplitudes are affected. A new SVD is carried out every 10 degrees latutide and longitude and the eigenvalues (normalized by the background amplification) are plotted as sensitivity fields. For further details, click here. Sensitivity field (contour lines). The numbers (X) at any gridpoint show the following: if, due to extra observations taken at targeting time in the 1000 km radius of the gridpoint, the initial error is reduced to the lowest analysis error level on the earth, the final expected error variance in the verification region is reduced to the X fraction of the value expected without having the extra Tendency in sensitivity (color shades overlaid on sensitivity charts). The shades at any grid point indicate the change in sensitivity over a 24-hour period centered at targeting time. Ensemble mean forecast (contour lines). An unweighted mean of the 17-member NCEP ensemble . Ensemble spread (color shades overlaid on ensemble mean forecasts). The numbers show the standard deviation around the ensemble mean in the 17-member NCEP ensemble. An advantage of the SVD technique is that it is performed in the subspace of the ensemble forecasts, which represent realistic perturbations (expected errors). A potential disadvantage is, however, that the number of perturbations is very limited (7 pairs of forecasts currently at NCEP). In a limited sample, from time to time far-field correlations among the perturbations may show up by chance, that in turn may lead to spurious areas of sensitivity, far from the real sensitive area. In our limited experience, this may happen when the amplification (indicated in title of sensitivity maps) is relatively low and no large errors develop in the verification region - a case when sensitivity calculations would not be used anyway. By inspection of a series of sensitivity charts one can usually identify an area of sensitivity (a local minimum) that with targeting time shifting closer to verification time, reaches the vicinity of the verification area. This area can be considered as a real sensitive region. The tendency charts (in color) overlaid upon the sensitivity maps can help to identify this region: one can usually see a deepening tendency (yellow/red/white colors) associated with the movement and development of this sensitivite area with shifting targeting time. Potentially spurious areas of sensitivity do not exhibit continuous deepening/development into the direction of the verification region. These products have been developed in collaboration with Craig Bishop (Penn State University), Chris Snyder (NCAR/MMM) and Kerry Emanuel (MIT). Please contact: Istvan Szunyogh at wd20is@sgi73.wwb.noaa.gov or Zoltan Toth at Zoltan.Toth@noaa.gov Timothy Marchok at wd20tm@sgi76.wwb.noaa.gov
{"url":"http://www.emc.ncep.noaa.gov/gmb/ens/fastex.html","timestamp":"2014-04-18T05:51:59Z","content_type":null,"content_length":"5302","record_id":"<urn:uuid:999ddd90-d0c3-4c78-8fb5-b6edd63c9f7d>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00239-ip-10-147-4-33.ec2.internal.warc.gz"}
Patterns and Transformation Rules Patterns stand for classes of expressions. They contain pattern objects that represent sets of possible expressions. _ any expression x_ any expression, given the name x x:pattern a pattern, given the name x pattern?test a pattern that yields True when test is applied to its value _h any expression with head h x_h any expression with head h, given the name x __ any sequence of one or more expressions ___ any sequence of zero or more expressions x__and x___ sequences of expressions, given the name x __hand ___h sequences of expressions, each with head h x__hand x___h sequences of expressions with head h, given the name x PatternSequence[p[1],p[2],...] a sequence of patterns x_:v an expression with default value v x_h:v an expression with head h and default value v x_. an expression with a globally defined default value Optional[x_h] an expression that must have head h, and has a globally defined default value Except[c] any expression except one that matches c Except[c,pattern] any expression matching pattern, except one that matches c pattern.. a pattern repeated one or more times pattern... a pattern repeated zero or more times Repeated[pattern, spec] a pattern repeated according to spec pattern[1]|pattern[2]|... a pattern which matches at least one of the pattern/;cond a pattern for which cond evaluates to True HoldPattern[pattern] a pattern not evaluated Verbatim[expr] an expression to be matched verbatim OptionsPattern[] a sequence of options Longest[pattern] the longest sequence consistent with pattern Shortest[pattern] the shortest sequence consistent with pattern Pattern objects. When several pattern objects with the same name occur in a single pattern, all the objects must stand for the same expression. Thus can stand for but not . In a pattern object such as , the head h can be any expression, but cannot itself be a pattern. A pattern object such as stands for a sequence of expressions. So, for example, can stand for , with being Sequence[a, b, c]. If you use , say in the result of a transformation rule, the sequence will be spliced into the function in which appears. Thus would become . When the pattern objects and appear as arguments of functions, they represent arguments which may be omitted. When the argument corresponding to is omitted, x is taken to have value v. When the argument corresponding to is omitted, x is taken to have a default value that is associated with the function in which it appears. You can specify this default value by making assignments for Default [f] and so on. Default[f] default value for when it appears as any argument of the function f Default[f,n] default value for when it appears as the nn count from the end) Default[f,n,tot] default value for the ntot arguments Default values. A pattern like can match an expression like with several different choices of , , and . The choices with and of minimum length are tried first. In general, when there are multiple or in a single function, the case that is tried first takes all the and to stand for sequences of minimum length, except the last one, which stands for "the rest" of the arguments. When or are present, the case that is tried first is the one in which none of them correspond to omitted arguments. Cases in which later arguments are dropped are tried next. The order in which the different cases are tried can be changed using Shortest and Longest. Orderless and are equivalent Flat and are equivalent OneIdentity and x are equivalent Attributes used in matching patterns. Pattern objects like can represent any sequence of arguments in a function f with attribute Flat. The value of x in this case is f applied to the sequence of arguments. If f has the attribute OneIdentity, then e is used instead of when x corresponds to a sequence of just one argument. lhs=rhs immediate assignment: rhs is evaluated at the time of assignment lhs:=rhs delayed assignment: rhs is evaluated when the value of lhs is requested The two basic types of assignment in Mathematica. Assignments in Mathematica specify transformation rules for expressions. Every assignment that you make must be associated with a particular Mathematica symbol. f[args]=rhs assignment is associated with f (downvalue) t/:f[args]=rhs assignment is associated with t (upvalue) f[g[args]]^=rhs assignment is associated with g (upvalue) Assignments associated with different symbols. In the case of an assignment like , Mathematica looks at f, then the head of f, then the head of that, and so on, until it finds a symbol with which to associate the assignment. When you make an assignment like , Mathematica will set up transformation rules associated with each distinct symbol that occurs either as an argument of lhs, or as the head of an argument of lhs. The transformation rules associated with a particular symbol s are always stored in a definite order, and are tested in that order when they are used. Each time you make an assignment, the corresponding transformation rule is inserted at the end of the list of transformation rules associated with s, except in the following cases: The left-hand side of the transformation rule is identical to a transformation rule that has already been stored, and any conditions on the right-hand side are also identical. In this case, the new transformation rule is inserted in place of the old one. Mathematica determines that the new transformation rule is more specific than a rule already present, and would never be used if it were placed after this rule. In this case, the new rule is placed before the old one. Note that in many cases it is not possible to determine whether one rule is more specific than another; in such cases, the new rule is always inserted at the end. Types of Values Attributes[f] attributes of f DefaultValues[f] default values for arguments of f DownValues[f] values for , , etc. FormatValues[f] print forms associated with f Messages[f] messages associated with f NValues[f] numerical values associated with f Options[f] defaults for options associated with f OwnValues[f] values for f itself UpValues[f] values for Types of values associated with symbols. Clearing and Removing Objects expr=. clear a value defined for expr f/:expr=. clear a value associated with f defined for expr Clear[s[1],s[2],...] clear all values for the symbols , except for attributes, messages, and defaults ClearAll[s[1],s[2],...] clear all values for the , including attributes, messages, and defaults Remove[s[1],s[2],...] clear all values, and then remove the names of the Ways to clear and remove objects. In Clear, ClearAll, and Remove, each argument can be either a symbol or the name of a symbol as a string. String arguments can contain the metacharacters * and @ to specify action on all symbols whose names match the pattern. Clear, ClearAll, and Remove do nothing to symbols with the attribute Protected. Transformation Rules lhs->rhs immediate rule: rhs is evaluated when the rule is first given lhs:>rhs delayed rule: rhs is evaluated when the rule is used The two basic types of transformation rules in Mathematica. Replacements for pattern variables that appear in transformation rules are effectively done using ReplaceAll (the operator).
{"url":"http://reference.wolfram.com/mathematica/tutorial/PatternsAndTransformationRules.html","timestamp":"2014-04-16T22:24:06Z","content_type":null,"content_length":"58814","record_id":"<urn:uuid:8cc37e6a-380a-4b07-a598-c1a550384421>","cc-path":"CC-MAIN-2014-15/segments/1398223206770.7/warc/CC-MAIN-20140423032006-00157-ip-10-147-4-33.ec2.internal.warc.gz"}
Is there any relationship between Bourbaki's Epsilon Calculus and Lambda Calculus ? Is \lambda {x} same as \tau_{x} ? ? up vote 3 down vote favorite Is there any relationship between Bourbaki's Epsilon Calculus and Lambda Calculus ? Whether $\lambda {x}$ is same as $\tau _{x}$ ? Are the rules of Meta-Mathematics (Criteria of Substitution, Formative Constructions) described in Bourbaki equally applicable to Lambda Calculus ? bourbaki lo.logic 3 They certainly aren't the same: $\lambda x$ applies to terms whereas $\tau_x$ applies to formulas. So $\lambda x.x$ is the identity map, but $\tau_x x$ makes no sense since $x$ is not a well-formed formula. Similarly $\lambda x.(x = x)$ makes no sense, but $\tau_x (x=x)$ is a choice of element of the universe. (You could view equality as a binary function that takes values in a truth-value sort, but then $\lambda x.(x = x)$ is the same as $x=x$ not an element of the sort of $x$.) – François G. Dorais♦ Jan 23 '12 at 12:07 I think I should have used more precise language. What I meant, whether Lambda Calculus and Bourbaki's Epsilon Calculus have similarity ? I did not mean to assert that $x$ in $\lambda{x}$ and $x$ in $\tau_{x}$ are same. Is not rules of quantification as illustrated in Quine, rules of Meta-Mathematics as in Bourbaki and Lambda Calculus share same theme ? – Reetesh Mukul Jan 23 '12 at 12:58 add comment 1 Answer active oldest votes Bourbaki's tau-box notation is somewhat insane (e.g., see Adrian Mathias's A Term of Length 4,523,659,424,929), so I'll eventually answer in terms of Hilbert's epsilon-calculus. But first, the laws of variable binding are identical for all reasonable classical and intuitionistic calculi (matters are a little different for substructural and modal logics, but not in a fundamental way). The basic idea is always the same: certain terms introduce bound variables, and within the scope of the binder (a) you can refer to the introduced variable, and (b) variables of the same name introduced outside of it are shadowed. Furthermore, renaming bound variables does not change the meaning of a term (this is called alpha-equivalence), and when substituting a term for a variable, bound variables need to be renamed in order to avoid capturing the free variables of the substituted term (this is called alpha-conversion). Frege was the first person to really figure out how variable binding works, and this was a sufficiently important discovery that he is a great logician despite the small matter of the inconsistency of his foundational system. (And I do not mean that ironically: it really is a small matter compared to the magnitude of the achievement of understanding binding.) The similarity you see comes from the fact that $\forall x.\;A$, $\exists x.\;A$, $\epsilon x.\;A$, and $\lambda x.\;e$ are all binding forms, and must all respect the same laws of variable binding. However, the difference between them is best understood in terms of their different types[*], taking $\iota$ to range over individuals, and $o$ to range over propositions. $$ \begin{array}{lcl} \forall & : & (\iota \to o) \to o \\\ \exists & : & (\iota \to o) \to o \\\ \epsilon & : & (\iota \to o) \to \iota \\\ \lambda & : & (A \to B) \to (A \Rightarrow B) \\\ up vote \end{array} $$ 6 down vote What $\forall$ and $\exists$ do is to take a term of type proposition, that has a free variable of type individual, and construct a proposition from it. $\epsilon$, on the other hand, is a choice operation. It says that if you give it a predicate (i.e., a term of type proposition with a free variable of type individual), it will give you a canonical individual satisfying that proposition. $\lambda$ is syntax for function-abstraction. If you give it a term of type $B$ with a free variable of type $A$, it will give you back a function value transporting $A$s to Now, note that unlike $\forall$, $\exists$, and $\epsilon$, lambda-abstraction works at all types. This is why Church introduced the lambda calculus: he had the idea that you could model variable binding once, with lambda, and then introduce $\forall$, $\exists$, and $\epsilon$ as additional constants in the lambda calculus. If you really have your heart set on Bourbaki's notation, since it eliminates all variables from the calculus, then I recommend you look at de Bruijn indices instead. The intuition there is also link-based, but it is a radically more efficient representation than Bourbaki's. [*] I am playing a bit fast and loose here, since the right way to understand these things is really in terms of categorical algebra and structural proof theory: I should be using homs and contexts, but no matter. 1 +1, Neel. This is a really well thought-out reply. – Todd Trimble♦ Feb 7 '12 at 15:49 add comment Not the answer you're looking for? Browse other questions tagged bourbaki lo.logic or ask your own question.
{"url":"http://mathoverflow.net/questions/86428/is-there-any-relationship-between-bourbakis-epsilon-calculus-and-lambda-calculu/87804","timestamp":"2014-04-20T06:16:53Z","content_type":null,"content_length":"58480","record_id":"<urn:uuid:3fec6caf-bb7f-447f-9a58-b0c5e9e7f40f>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00431-ip-10-147-4-33.ec2.internal.warc.gz"}
Hialeah Algebra 2 Tutor Find a Hialeah Algebra 2 Tutor ...I am an experienced tutor in Physical Science, Chemistry, Biology, Physics and Math. I tutored a range of students from elementary to college. Each student will receive a personalized lesson catered to their learning style. 21 Subjects: including algebra 2, chemistry, physics, geometry ...I had to lecture any course given me by the department, as it was a requirement that every PhD candidate be able to lecture any course/subject assigned....You won't believe the first course/ subject I was assigned to lecture? STATISTICS! LOL! 5 Subjects: including algebra 2, statistics, algebra 1, prealgebra ...The SSAT is the Secondary School Admission Test that measures the reading, math, and verbal ability of applicants to private and independent high schools. The SSAT is given for two levels: Lower (for students currently in grades 5 - 7) Upper (for students currently in grades 8 - 11) ... 27 Subjects: including algebra 2, chemistry, writing, Spanish ...How many electrons in the outermost shell of an atom of helium?). Science is all about developing a working model, a good understanding, of the world around us, from the quantum to the microscopic to the galactic and "beyond." I want my students' experiences with science to include: tapping into ... 61 Subjects: including algebra 2, English, Spanish, reading I began working as a tutor in High School as part of the Math Club, and then continued in college in a part time position, where I helped students in College Algebra, Statistics, Calculus and Programming. After college I moved to Spain where I gave private test prep lessons to high school students ... 11 Subjects: including algebra 2, calculus, physics, geometry
{"url":"http://www.purplemath.com/Hialeah_Algebra_2_tutors.php","timestamp":"2014-04-16T10:31:17Z","content_type":null,"content_length":"23930","record_id":"<urn:uuid:c2de2d99-3c2e-466e-83a7-9809d7f6a8d3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00386-ip-10-147-4-33.ec2.internal.warc.gz"}
Kenosha Prealgebra Tutor ...I have created curriculum tailored to the individual student 's need that engages and challenges. I use teaching techniques to meet the needs of visual, kinesthetic and auditory learners for all subject areas. I specialize in phonics instruction and initial reading to develop reading fluency, comprehension and literacy. 39 Subjects: including prealgebra, reading, English, accounting ...I have been certified in the area of the ASVAB test. Additionally, I served four years in the U.S. Army and have personally taken the ASVAB test. 46 Subjects: including prealgebra, English, reading, writing ...It is the foundation for higher level math and science courses. Geometry is a subject in mathematics that promotes logical and critical thinking. It is visual but at the same time, it promotes abstract thinking through the extensive use of Algebra and logic. 11 Subjects: including prealgebra, calculus, statistics, geometry ...My wife and I have a son who is almost three years old, and we are in the adoption process at the moment to add a second child to our family.I have taught High School math for 20 years now, and have a solid background of CP Geometry (having taught that for 20 years) which prepared me well for tea... 12 Subjects: including prealgebra, calculus, ESL/ESOL, statistics ...Qualifications include a B.S. from the University of Wisconsin-Parkside in the Health Sciences and Physics minor, five year's experience in the health care field, and current Illinois Substitute Teaching Certification. My passion for helping students succeed has driven me into the classroom time... 24 Subjects: including prealgebra, chemistry, physics, calculus
{"url":"http://www.purplemath.com/kenosha_prealgebra_tutors.php","timestamp":"2014-04-20T02:28:14Z","content_type":null,"content_length":"23871","record_id":"<urn:uuid:f751c9c0-8e0a-4070-bc2c-15b0f9122d2f>","cc-path":"CC-MAIN-2014-15/segments/1398223205137.4/warc/CC-MAIN-20140423032005-00299-ip-10-147-4-33.ec2.internal.warc.gz"}
Climate modelling at Quaternary time scales Seminar Room 1, Newton Institute Climate changes at time scales of several (tens) of thousands years like glacial interglacial cycles may be viewed as an 'emergent feature' of the climate system. They can be understood at different levels : from a general circulations prospective (how changes in astronomical elements affect the hydrological, nutrient and carbon cycles); from a dynamical system prospective (locate and characterize bifuraction points, detect synchronisation phenemena, identify couplings between different time scales... ); or from a statistical prospective (estimate the probability of events and assess the predictability of the climate system at these time scales). The ambition of a general theory Pleistocene climate is to merge these approaches. The recent mathematical developments reviewed during the present Newton Institute constitute promising avenues to this end. For example, statistical emulators allow to explore in depth the input and parameter spaces of general circulation simulators, including their sensitivity to the astronomical forcing. Monte-Carlo statistical methods allow to calibrate low-order stochastic dynamical systems, and guide the process of criticism and selection of models. The purpose of this talk is to summarise advances gained during the Newton Institute along these lines. The video for this talk should appear here if JavaScript is enabled. If it doesn't, something may have gone wrong with our embedded player. We'll get it fixed as soon as possible.
{"url":"http://www.newton.ac.uk/programmes/CLP/seminars/2010120709301.html","timestamp":"2014-04-18T15:39:49Z","content_type":null,"content_length":"7107","record_id":"<urn:uuid:0fefee13-9e8e-44c5-a027-c65796909c92>","cc-path":"CC-MAIN-2014-15/segments/1398223206647.11/warc/CC-MAIN-20140423032006-00512-ip-10-147-4-33.ec2.internal.warc.gz"}
The minimum size of Max-Cut for graphs of half density up vote 2 down vote favorite As discussed in the following math overflow question, Max cut value in a random graph, the max-cut of a random graph $G(n,1/2)$ is $\frac{n^2}{8} + \Theta(n^{3/2})$ with high probability. My question is this: Does there exist a family of graphs with relative density about $\frac{1}{2}$ with max-cut size being $\frac{n^2}{8}+ o(n^{3/2})$? If not, can we show that every graph with $\ frac{1}{2}$ relative density has a cut of size $\frac{n^2}{8} + \Omega(n^{3/2})$ ? (It is possible that any of the well-known explicit pseudorandom graphs with $\frac{1}{2}$ relative density,such as Paley graphs and etc, will satisfy this property but I haven't been able to verify whether this is the case or not by looking at a few survey papers.) co.combinatorics graph-theory add comment 1 Answer active oldest votes Let $n=4k$, and let $G$ be a graph consisting of two disjoint copies of $K_{2k}$ along with $k$ additional edges (the $k$ additional edges being there to give $G$ density $1/2$). up vote 2 down Any cut of $G$ cuts at most $k^2$ edges from each of the complete graphs, along with the $n$ additional edges, so the max-cut value is at most $2k^2+k=n^2/8+n/4$. vote accepted Thanks! I don't know how I didn't see that myself! But for the application that I had in mind what is really necessary is that a graph $G=(V,E)$ such that for any $S\subset V$ we have $|E(S,S^c)-\frac{|S||S^c|}{2}|\leq o(n^{3/2})$. Maybe I ask about the existence of such graphs in another question. – Nick B. Dec 4 '12 at 3:01 add comment Not the answer you're looking for? Browse other questions tagged co.combinatorics graph-theory or ask your own question.
{"url":"https://mathoverflow.net/questions/115245/the-minimum-size-of-max-cut-for-graphs-of-half-density","timestamp":"2014-04-16T16:34:59Z","content_type":null,"content_length":"51704","record_id":"<urn:uuid:30b24151-e434-4ddb-9edd-cf31ecb6b57b>","cc-path":"CC-MAIN-2014-15/segments/1397609524259.30/warc/CC-MAIN-20140416005204-00626-ip-10-147-4-33.ec2.internal.warc.gz"}
Michiana Shores, IN Math Tutor Find a Michiana Shores, IN Math Tutor ...I also have experience in JavaScript and Search Engine Optimization. I began doing my own web design in 2000 and used it extensively as part of my classroom for many years thereafter. I taught Web Design for three years in my current position as a Mathematics Teacher. 14 Subjects: including discrete math, algebra 1, algebra 2, calculus ...I've worked between 10-40 hours a week in this position over the two years helping freshmen college algebra students at FIU in a computer-integrated laboratory. This gave me experience tutoring several hundreds of students. I was also a learning assistant from 2009-2013. 11 Subjects: including trigonometry, SAT math, discrete math, algebra 1 ...Topics in a standard Calculus class, including, but not limited to: Differential Equations, Integrals, Series, Vectors, Multivariable Equations, The Fundamental Theorem of Calculus, and 3-Dimensional Graphing. Topics studied in a typical Chemistry class such as, but not limited to: Atoms & Molec... 33 Subjects: including linear algebra, ACT Math, Spanish, probability ...In the last thirteen years, I've taught grades 2, 6, 7, & 8 full time, and I've substituted in grades K-12. I have a strong background in literacy, and I have received training in literacy coaching in the elementary and secondary grades. As a middle school teacher, helping students learn how to best prepare for classroom assessments and standardized assessments is necessary. 19 Subjects: including prealgebra, algebra 1, reading, writing ...I have taken university-level math classes up to calculus and been particularly successful in Algebra, Trigonometry, and Pre-Calculus. I've also been successful in English and History, receiving a 5/5 on my AP English writing which counted as university credit for my English requirements. I act... 19 Subjects: including algebra 2, GED, prealgebra, precalculus
{"url":"http://www.purplemath.com/Michiana_Shores_IN_Math_tutors.php","timestamp":"2014-04-16T04:22:32Z","content_type":null,"content_length":"24337","record_id":"<urn:uuid:3818d824-b916-4e68-bbd2-7069cb6f709a>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00266-ip-10-147-4-33.ec2.internal.warc.gz"}
Now Not the Time to Value-Tilt Low Vol Every week, a low volatility researcher has the same epiphany: tilt low volatility towards value. This addresses two pressing issues simultaneously: avoiding overbought securities and adding value alpha. A neat articulation of this view is from Feifei Li of Research Affiliates, who first shows that lots of people are investing in low volatility (there's another such piece by Dangle and Kashofer from Vienna UofT). Clearly growth in low volatility is rising exponentially, and our intuition senses a Malthusian endgame that will be nasty and brutish. That might seem scary, but to put it in perspective, there's now $80B in value ETFs alone, so this isn't anywhere close to value and size. Next, she shows some valuation metrics. Three different types of low vol portfolios are seemingly higher priced using two different value metrics, book/market and earnings yield. That is, low vol portfolios over the past 10 years used to have higher earnings yields than the market, and higher book/market ratios; now it's the reverse. To put these into perspective, the relative difference in the book-to-price ratio moving from 0.3 to 0.6 is about moving from the 15th percentile to the 45th percentile. Li suggests adding a valuation criteria to low volatility to counteract this value-creep. The basic idea is, say the book/market ratio has a linear relation with expected return, where a higher book/market is associated with a higher return. So if we take the universe of a set of low vol stocks, say the constituents of the ETF SPLV, which looks a the 100 least volatile stocks of the past year, and then take those stocks with the highest book/market ratios within that set, we simultaneously capture more of the value effect avoid overbought stocks. That seems like a win-win improvement. There are two problems with this approach. First, the return to the book/market is not linear. Therefore, merely moving your average book/market ratio may may you feel better, but unless you pick the right stocks, you won't change much. Here's the average return by book/market deciles, for those stocks above the 20th percentile of the NYSE (all data here are from Ken French's excellent , I use the 20th percentile cut-off because stocks below that aren't really investable in scale anyway, so potentially misleading). Now, these are average monthly return premium above the market average. If we are looking at geometric returns, that sharp increase for the top decile isn't there, but forget that for now (I think the geometric average is more relevant given that in practice people don't rebalance monthly, but to each his own). The key is, this relationship over the investable universe is basically all happening at the end-deciles, not in between. Thus, average book/market decile can be misleading, because not much happens between the 30th and 90th percentiles. Curiously, market cap is not allocated evenly across all ten book/market deciles because the cutoffs for the size and book/market sorts are constructed once a year using the NYSE. For example, currently, there's 3 times as much market cap in book/market decile 1 than book/market decile 10. Here's the market-cap-weighted average book/market decile over time (in blue). I'm just calculating a number generated by French's data here, all the work is in this Excel spreadsheet (there's nothing proprietary going on here). So here's that average number calculated each month, and the total return on French's value factor (aka, HML, or High-Minus-Low factor portfolio proxy). Clearly the low average decile corresponds to big increases in the HML factor returns. If I take that time series, and put the data into deciles, I get a pretty clear pattern for future HML returns: Basically, the value (ie, HML) factor only pays off when the average book/market decile is in the bottom third of its distribution. Alas, we're not there, we are around the 70th percentile right now. So, here's the average return for the value factor, for that 50% of the time when the average book/market decile is above average (ie, now): That line is sloping the wrong way if you are banking on a value premium. In sum, loading up on the value factor to improve low volatility is dangerous because 1) the relation between book/market and returns not linear, so simple portfolio averages can be misleading and 2) the value premium can be predictably predictable given the distribution of the market across book/market deciles. In practice, the value premium to passive indices seems about 1-2% since it was popularized around 1990. The 2.8% HML premium from 1928-2013 is due a lot to low book/market stocks, a premium with dubious feasibility, so this number is not a good rule of thumb for the value of tilting towards value. Value ETFs like arose fortuitously around 2000, and so their 3% annual outperformance is all from the bursting of the internet bubble--if those value ETFs went back to 1990, the return premium would be less. I would estimate there's 100 basis points in the value factor, yet, that's by itself. When you try to use value to add to other strategies, it's not obviously beneficial, and most low vol practitioners are doing this, so you really aren't thinking outside the box. 2 comments: The size of value index funds is always close to the size of growth index funds. Indexers don't tilt to value (or to growth). Excess demand for passive investing strategies may also lead to lower returns quoth TRB:
{"url":"http://falkenblog.blogspot.com/2013/08/now-not-time-to-value-tilt-low-vol.html","timestamp":"2014-04-16T10:09:54Z","content_type":null,"content_length":"100704","record_id":"<urn:uuid:7929c534-af0c-41b7-be29-17a250e6a3b3>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00312-ip-10-147-4-33.ec2.internal.warc.gz"}
Commuting invariants and duals of C_p vector spaces up vote 0 down vote favorite Let $K$ be a field complete with respect to some discrete valuation, with perfect residue field of characteristic $p$. Let $\mathbb{C}_p$ be the completion of an algebraic closure of $K$, and set $G_K := \text{Gal}(\mathbb{C}_p/K)$. Let $V$ be a $\mathbb{C}_p$ vector space. Is it true in general that $\text{Hom}_K(V^{G_K}, K) \simeq \text{Hom}(V, \mathbb{C}_p)^{G_K}$? Now let $T_p(X)$ be the Tate module of an abelian variety $X/K$. Then it is true that $$\text{Hom}_K((T_p(X) \otimes_{\mathbb{Z}_p} \mathbb{C}_p)^{G_K}, K) \simeq \text{Hom}_{\mathbb{Z}_p}(T_p(X), \mathbb{C}_p)^{G_K}.$$ I'm looking for a proof of this which is somewhat elementary: I'm okay with using vanishing theorems for the Galois cohomology of $\mathbb{C}_p(i)$, but preferably not much more than that. Does anybody have such a simple way to see this? galois-representations algebraic-number-theory Concerning your first question: a $G_K$-equivariant map from V to $C_p$ exists, for example, whenever $V$ is of Hodge-Tate type with one weight equal to $0$. This happens for lots of repns for which $V^{G_K} = 0$. – Laurent Berger Dec 17 '12 at 8:41 I'm afraid that I'm not familiar enough with $C_p$-representations to see why such a thing exists. If $V$ is Hodge-Tate with a weight of $0$, wouldn't there be a corresponding copy of $K$ in the invariants? – Tony Jan 15 '13 at 16:54 add comment Know someone who can answer? Share a link to this question via email, Google+, Twitter, or Facebook. Browse other questions tagged galois-representations algebraic-number-theory or ask your own question.
{"url":"https://mathoverflow.net/questions/111187/commuting-invariants-and-duals-of-c-p-vector-spaces","timestamp":"2014-04-17T07:49:15Z","content_type":null,"content_length":"48326","record_id":"<urn:uuid:0f909a4c-4ac5-452a-bc79-7c39cb9f07a7>","cc-path":"CC-MAIN-2014-15/segments/1397609526311.33/warc/CC-MAIN-20140416005206-00634-ip-10-147-4-33.ec2.internal.warc.gz"}
Gearslutz.com - View Single Post - Could someone help out interpreting material's gas flow properties Gear interested Joined: Feb 2009 Location: Estonia Posts: 17 I probably summon forth old demons here but after reading through this thread over and over again, I still cannot figure out that Paroc table in the initiator's first post of this topic.. Maybe I'm plain stupid or didn't pay enough attention in high school physics and math classes, but on Paroc website they have an equation which can be used to calculate the air flow resistance: AF = d/A*l So for instance: Thickness, d (mm) = Air Permeability, l (m2/Pa s 10-6) = now, having put those numbers in the formula, I get AF = 2 which is also presented in the table under Rs column. But Airflow Resistivity, r (kPa s / m2) for the same numbers is 200 As I understand, this is the most important figure to look at when it comes to choosing a suitable wool / fiber for an absorption application, but how do I get that using the above formula and data?
{"url":"http://www.gearslutz.com/board/3985426-post46.html","timestamp":"2014-04-18T21:13:14Z","content_type":null,"content_length":"10579","record_id":"<urn:uuid:2f3027bd-589e-44cb-ac56-0775d30762c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.9/warc/CC-MAIN-20140416005215-00277-ip-10-147-4-33.ec2.internal.warc.gz"}
Algorithms and Data Structures Computing prime numbers is an important practical task as well as a common example for programming language tutorials. The Eratosthenes sieve is probably the most familiar algorithm for determining prime numbers. Alas, quite many implementations that call themselves Eratosthenes sieve do not actually implement that algorithm. For example, the classic Haskell code primes = sieve [ 2.. ] where sieve (p:x) = p : sieve [ n | n <- x, n `mod` p > 0 ] is not the Eratosthenes sieve. For one thing, it uses the division operation (or, mod). The Eratosthenes sieve specifically avoids both division and multiplication, which were quite difficult in old times. Mainly, as Melissa O'Neill explains below, the code above tests every number for divisibility by all previously found primes. In contrast, the true Eratosthenes sieve affects only composite numbers. The prime numbers are `left out'; they are not checked by division or numeric comparison. Given below are several implementations of the true Eratosthenes sieve algorithm, in Scheme, Scheme macros, and Haskell. The algorithm is usually formulated in terms of marks and crossing off marks, suggesting the imperative implementation with mutable arrays. The Scheme code follows that suggestion, using two important optimizations kindly described by George Kangas. Eratosthenes sieve, however, can be implemented purely functionally, as Scheme macros and Haskell code demonstrate. The Haskell implementation is not meant to be efficient -- rather, it is meant to be purely functional, insightful, minimalist, and generalizable to other number sieves, e.g., `lucky numbers'. Like other Haskell algorithms it produces a stream of prime numbers. The Haskell implementation stores only marks signifying the numbers, but never the numbers themselves. Not only the implementation avoids multiplication, division or the remainder operations. We also avoid general addition and number comparison. We rely exclusively on the successor, predecessor and zero comparison. The predecessor can be easily eliminated. Thus the algorithm can be used with Church and Peano numerals, or members of Elliptic rings, where zero comparison and successor take constant time but other arithmetic operations are more involved. Melissa O'Neill: Re: Genuine Eratosthenes sieve Messages explaining the sieve algorithm and its differences from impostors; posted on Haskell-Cafe mailing list, February 2007. Eratosthenes sieve and its optimal implementation [plain text file] The explanation of the original Eratosthenes sieve and its optimizations. The original article was posted as Re: arbitrary precision rationals on a newsgroup comp.lang.scheme on Tue, 13 Nov 2001 15:07:34 -0800 number-sieve.lhs [4K] The literate Haskell98 source code for pure functional, minimalist Eratosthenes and lucky number sieves The code was originally posted in an article Even better Eratosthenes sieve and lucky numbers on the Haskell-Cafe mailing list on Mon, 12 Feb 2007 18:37:46 -0800 (PST) A stress test of the syntax-rule macro-expander Eratosthenes sieve as a syntax-rule macro, to perform primality test of Church-Peano numerals at macro-expand time Lucky numbers: another number sieve
{"url":"http://www.okmij.org/ftp/Algorithms.html","timestamp":"2014-04-17T04:03:59Z","content_type":null,"content_length":"26155","record_id":"<urn:uuid:6e4bcc79-ff8e-4d74-8105-6c38ea2a9ddb>","cc-path":"CC-MAIN-2014-15/segments/1398223206147.1/warc/CC-MAIN-20140423032006-00577-ip-10-147-4-33.ec2.internal.warc.gz"}
Recursion Recursion In Mathematics A selection of articles related to recursion recursion in mathematics. Original articles from our library related to the Recursion Recursion In Mathematics. See Table of Contents for further available material (downloadable resources) on Recursion Recursion In Recursion Recursion In Mathematics is described in multiple online sources, as addition to our editors' articles, see section below for printable documents, Recursion Recursion In Mathematics books and related discussion. Suggested Pdf Resources Suggested Web Resources Great care has been taken to prepare the information on this page. Elements of the content come from factual and lexical knowledge databases, realmagick.com library and third-party sources. We appreciate your suggestions and comments on further improvements of the site.
{"url":"http://www.realmagick.com/recursion-recursion-in-mathematics/","timestamp":"2014-04-16T07:43:34Z","content_type":null,"content_length":"26889","record_id":"<urn:uuid:7e9d3eab-ed9f-4808-a92c-4e4910e094ec>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Fact Fluency Tips Fluently add and subtract within 20 using mental strategies (such as counting on; making ten; decomposing a number leading to a ten; using the relationship between addition and subtraction; and creating equivalent but easier known sums). By end of Grade 2, know from memory all sums of two one-digit numbers. Helping your Child with Math Fact Fluency at Home Start Small Using a few flash cards lets you see when and where your child has success. Repeat the same 10-15 facts over and over during a session to help build the brain muscle memory of the number pairs. Start with the easier facts and move up to the more difficult. Offer Help and Avoid Struggle If your child does not know an answer by the time you slowly count to 4 in your head, tell them the answer and have them repeat the entire math fact aloud to you. Keep Your Sessions Short Repeating math facts three or four times in short and fun sessions will get better results than one long session. Make It Fun Try some games with math facts or find an Internet site that has some fun math games. Always end with success! Try out these fun math-learning sites!!! (the hexagon cards are here)
{"url":"http://oakes.scsk12.org/~hsamuelson/Swamp_Frogs/Math_Fact_Fluency.html","timestamp":"2014-04-18T08:14:18Z","content_type":null,"content_length":"19341","record_id":"<urn:uuid:0daddd93-7d73-4857-a9b7-10bead3476e1>","cc-path":"CC-MAIN-2014-15/segments/1397609533121.28/warc/CC-MAIN-20140416005213-00498-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: the velocity v and displacement x of a particle executing simple harmonic motion are related as • one year ago • one year ago Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/4fbb3148e4b0556534303475","timestamp":"2014-04-18T23:46:25Z","content_type":null,"content_length":"203303","record_id":"<urn:uuid:8f3052f1-98f4-4068-b67c-5dc1c84851dc>","cc-path":"CC-MAIN-2014-15/segments/1397609535535.6/warc/CC-MAIN-20140416005215-00608-ip-10-147-4-33.ec2.internal.warc.gz"}
Fraction Problem March 9th 2008, 10:59 AM #1 Feb 2008 Fraction Problem Man I dislike these fractions but I am having problems finding the answer to this one. $\frac{5}{2x}-\frac{4}{2x^2+3x}$ The site helps me a lot and I haven't found a math site that has help me as much as this one. Even if i am just reading someone threads or doing mine I find help in it somewheres. Man I dislike these fractions but I am having problems finding the answer to this one. $\frac{5}{2x}-\frac{4}{2x^2+3x}$ The site helps me a lot and I haven't found a math site that has help me as much as this one. Even if i am just reading someone threads or doing mine I find help in it somewheres. Are you supposed to subtract the fractions? You need a common denominator. Note that the denominator of the second term is $2x^2 + 3x = x(2x + 3)$. The LCM of $x$ and $x(2x + 3)$ is $2x(2x + 3)$. $\frac{5}{2x} - \frac{4}{2x^2+3x}$ $= \frac{5}{2x} \cdot \frac{2x + 3}{2x + 3} - \frac{4}{2x^2+3x} \cdot \frac{2}{2}$ $= \frac{5(2x + 3) - 2 \cdot 4}{2x(2x + 3)}$ $= \frac{10x + 7}{2x(2x + 3)}$ March 9th 2008, 11:07 AM #2
{"url":"http://mathhelpforum.com/algebra/30466-fraction-problem.html","timestamp":"2014-04-20T07:17:37Z","content_type":null,"content_length":"35158","record_id":"<urn:uuid:f96aef9d-879a-47e0-9b52-a8eca2523488>","cc-path":"CC-MAIN-2014-15/segments/1397609538022.19/warc/CC-MAIN-20140416005218-00427-ip-10-147-4-33.ec2.internal.warc.gz"}
Homework Help Posted by Brooke on Sunday, July 22, 2012 at 6:45pm. Ben purchased 5 shirts for $12.95 each. How much sales tax will he pay if the tax rate is 6%? • math - MathMate, Sunday, July 22, 2012 at 6:50pm Multiply the total sale by 0.06 (=6%). Total sale = 5*12.95=64.92 Sales Tax = $64.92*0.06=$3.90 • math - Brooke, Sunday, July 22, 2012 at 7:01pm You actually multiplied wrong, because 5*12.95= 64.75 not 64.92. The answer I got from the structure you wrote was $3.89 Thanks though! It really helped! • math - Dustin, Sunday, July 22, 2012 at 7:05pm actually you are both wrong he would pay exactly 0.777 cents in sales tax which rounds off to 0.78 cents in sales tax • math - Dustin, Sunday, July 22, 2012 at 7:06pm for one shirt • math - Dustin, Sunday, July 22, 2012 at 7:07pm so brooke is right the total sales tax is $3.89 • math - Ms. Sue, Sunday, July 22, 2012 at 7:10pm Dustin -- the problem asks for the sales tax on five shirts. Brooke is right. The sales tax on all five shirts is $3.89. I'm glad you checked the tutor's math. We all make mistakes. • math - MathMate, Sunday, July 22, 2012 at 7:35pm Thank you all for the corrections. As Ms Sue said, I am glad you check the math. Remember that you should always check the tutor's math, not only for accuracy, but also to verify that you have understood how it is done. Sorry for the bad multiplication. It was a moment of inattention of my mental arithmetic. Related Questions math - For a fundrasier, the Best Buddies club sold t-shirts. The adult shirts ... Math - Strategy How much more money do 10 t-shirts cost than 6 t-shirts. T-... math - Elizabeth has a box of shirts. one half of the shirts are small, 2/5 of ... math - Elizabeth has a box of shirts. One half of the shirts are small, 2/5 of ... math - beth has a box of shirts.one half of the shirts are small,2/5 of the ... 7th math - At the mall, you spent a total of $170.86. You bought 2 pairs of ... Solving two step equations - 1) Which of the following is a solution to the ... Solving Two-Step Equations for Reiny or Steve~! - 1) Which of the following is a... math - in a department store, there were a certain number of red t-shirts, blue ... math - In a department store, there were a certain number of red T-shirts, blue...
{"url":"http://www.jiskha.com/display.cgi?id=1342997140","timestamp":"2014-04-20T16:18:24Z","content_type":null,"content_length":"10050","record_id":"<urn:uuid:83c7bab2-34ab-4210-8e46-5e7e961082f0>","cc-path":"CC-MAIN-2014-15/segments/1397609538824.34/warc/CC-MAIN-20140416005218-00286-ip-10-147-4-33.ec2.internal.warc.gz"}
Dan Halperin's Publications Publications are listed in reverse chronological order of their latest version, according to the following categories: Robust Geometric Computing and CGAL Molecular Modeling Books and Proceedings Robust Geometric Computing and CGAL • Michael Hemmer, Ophir Setter, Dan Halperin Constructing the Exact Voronoi Diagram of Arbitrary Lines in Three-Dimensional Space - with Fast Point-Location ESA (1),pages 398-409, 2010. [link][bibtex] [project page] • Dan Halperin Controlled Perturbation for Certified Geometric Computing with Fixed-Precision Arithmetic ICMS, pages 92-95, 2010. • Naama Mayer, Efi Fogel, Dan Halperin Fast and robust retrieval of Minkowski sums of rotating convex polyhedra in 3-space Symposium on Solid and Physical Modeling, pages 1-10, 2010. [link][bibtex] [project page] • Eitan Yaffe, Dan Halperin Approximating the Pathway Axis and the Persistence Diagrams for a Collection of Balls in 3-Space Discrete & Computational Geometry 44(3), pages 660-685, 2010. [link][bibtex] [project page] • Eric Berberich, Efi Fogel, Dan Halperin, Kurt Mehlhorn, Ron Wein Arrangements on Parametric Surfaces I: General Framework and Infrastructure Mathematics in Computer Science 4(1), pages 45-66, 2010. [link][bibtex][project page] • Eric Berberich, Efi Fogel, Dan Halperin, Michael Kerber, Ophir Setter Arrangements on Parametric Surfaces II: Concretizations and Applications Mathematics in Computer Science 4(1), pages 67-91, 2010. [link][bibtex][project page] • Ophir Setter, Micha Sharir, Dan Halperin Constructing Two-Dimensional Voronoi Diagrams via Divide-and-Conquer of Envelopes in Space Transactions on Computational Science 9, pages 1-27, 2010. [link][bibtex] [project page] • Ophir Setter, Micha Sharir, and Dan Halperin Constructing Two-Dimensional Voronoi Diagrams via Divide-and-Conquer of Envelopes in Space, In Proceedings of the 6th annual International Symposium on Voronoi Diagrams in Science and Engineering (ISVD), pages 43-52, Copenhagen, Denmark, June 2009. [bibtex] [project page] • Ophir Setter and Dan Halperin Exact construction of minimum-width annulus of disks in the plane In Abstracts of 25th European Workshop on Computational Geometry, pages 317-320, 2009 [pdf] [bibtex] [project page] • E. Fogel, O. Setter and D. Halperin Exact implementation of arrangements of geodesic arcs on the sphere with applications In Abstracts of 24th European Workshop on Computational Geometry, pp 83-86, 2008 [pdf] [bibtex] [project page] • E. Fogel, O. Setter and D. Halperin Arrangements of geodesic arcs on the sphere In Proceedings of the 24th ACM Annual Symposium on Computational Geometry, pages 218-219, 2008 [movie - additional information][pdf] [bibtex] [project page] • I. Haran and D. Halperin An experimental study of point location in planar arrangements in CGAL ACM Journal of Experimental Algorithmics, 13, 2008 [pdf] [bibtex] [project page] A preliminary version appeared in ALENEX 2006, Miami, Florida, 2006 • E. Fogel, D. Halperin, L. Kettner, M. Teillaud, R. Wein and N. Wolpert Effective Computational Geometry for Curves and Surfaces, Jean-Daniel Boissonnat and Monique Teillaud (eds.), Springer, Mathematics and Visualization series, 2007, Chapter 1, pp. 1-66. [link] [bibtex] • E. Berberich, E. Fogel, D. Halperin, K. Mellhorn, and R. Wein Sweeping and maintaning two-dimensional arrangements on surfaces: a first step Proceedings 15th Annual European Symposium on Algorithms (ESA) 2007, pp. 645-656 [pdf] [bibtex] [project page] • M. de Berg, D. Halperin and M. Overmars An intersection-sensitive algorithm for snap rounding Computational Geometry: Theory and Applications 36, 2007, pp. 159-165. [pdf] [bibtex] • E. Fogel and D. Halperin Exact and efficient construction of Minkowski sums of convex polyhedra with applications Computer Aided Design, 39(11):929-940, 2007. [bibtex] [project page] A prelimenary version appeared in Proc. 8^th Workshop on Algorithm Engineering and Experiments (ALENEX) 2006 [pdf] [bibtex] • R. Wein, E. Fogel, B. Zukerman and D. Halperin Advanced programming techniques applied to CGAL's arrangement package Proc. Library-Centric Software Design Workshop, LCSD'05, San Diego, California, 2005 [pdf] [bibtex] • E. Fogel and D. Halperin Video: Exact Minkowski sums of convex polyhedra Proc. 21st ACM Symposium on Computational Geometry, SoCG 2005, Pisa, Italy, 2005, 382-383. [movie] [pdf] [bibtex] [project page] • E. Fogel, R. Wein and D. Halperin Code flexibility and program efficiency by genericity: Improving CGAL's arrangements Proc. 12th Annual European Symposium on Algorithms (ESA), Bergen, Norway, 2004, 664-676. [pdf] [bibtex] • D. Halperin and E. Leiserowitz Controlled perturbation for arrangements of circles International Journal of Computational Geometry and Applications, 14 (4 & 5), 2004, pp. 277-310. Special issue, papers from SoCG 2003. [pdf] [bibtex] [project page] A preliminary version appeared in Proc. 19th ACM Symposium on Computational Geometry, SoCG 2003, San Diego, 264-273 • C. Linhart, D. Halperin, S. Har-Peled and I. Hanniel On-line zone construction in arrangements of lines in the plane International Journal of Computational Geometry and Applications, 13:6 (2003), pp. 463-485. [pdf] [bibtex] [project page] A preliminary version with Y. Aharoni appeared in Proc. 3rd International Workshop on Algorithm Engineering (WAE), London, 1999,Springer LNCS Vol. 1668, 139-153 • D. Halperin and E. Packer Iterated snap rounding Computational Geometry: Theory and Applications, 23(2), 2002, pp. 209-222. [pdf] [bibtex] [project page] A preliminary version appeared in Abstracts 17th European Workshop on Computational Geometry, Berlin, 2001, 82-85 • D. Halperin Robust geometric computing in motion International Journal of Robotics Research, 21 (3), 2002, pp. 219-232. Appeared in: Algorithmic and Computational Robotics: New Dimensions (WAFR '00). B.R. Donald and K.M. Lynch and D. Rus (eds.,) A.K. Peters, Wellesley, 2001, pp. 9-22. Invited talk at WAFR 2000, Workshop on Algorithmic Foundations of Robotics , Dartmouth College, March 2000. [pdf] [bibtex] • E. Ezra, D. Halperin and M. Sharir Speeding up the incremental construction of the union of geometric objects in practice Computational Geometry: Theory and Applications, 27 (2004), pp. 63-85. Special issue, papers from the 18th European Workshop on Computational Geometry, Warsaw, April 2002. [pdf] [bibtex] [project page] A preliminary version appeared in Proc. 10th Europen Symposium on Algorithms (ESA), Rome, 2002, 473-484. See also article entitled "Efficient Construction of the Union of Geometric Objects" in Proc. 18th European Workshop on Computational Geometry, Warsaw, 2002, pp.56-60 • E. Flato, E. Fogel, D. Halperin and E. Leiserowitz Video: Exact Minkowski sums and applications Proc. 18th ACM Symposium on Computational Geometry, Barcelona, 2002, 273-274. [movie] [bibtex] [project page] • P.K. Agarwal, E. Flato and D. Halperin Polygon decomposition for efficient construction of Minkowski sums Computational Geometry: Theory and Applications, Special Issue, selected papers from the European Workshop on Computational Geometry, Eilat, 2000, vol. 21 (2002), 39-61. [pdf] [bibtex] preliminary version appeared in Proc. 8th European Symposium on Algorithms (ESA), Saarbrücken, 2000. Springer LNCS Vol. 1879, 20-31. See also Eyal Flato's thesis [pdf] [bibtex] [project page] • E. Flato and D. Halperin Robust and efficient construction of planar Minkowski sums Abstracts 16th European Workshop on Computational Geometry, Eilat, 2000, 85-88. [pdf] [bibtex] [project page] See also Eyal Flato's thesis [pdf] [bibtex] [project page] • I. Hanniel and D. Halperin Two-dimensional arrangements in CGAL and adaptive point location for parametric curves Proc. 4th International Workshop on Algorithm Engineering (WAE), Saarbrücken, LNCS Vol. 1982, Springer, 2000, 171-182. [pdf] [bibtex] [project page] See also Iddo Hanniel's thesis [pdf] [bibtex] • E. Flato, D. Halperin, I. Hanniel, O. Nechushtan and E. Ezra The design and implementation of planar maps in CGAL The ACM Journal of Experimental Algorithmics Volume 5, 2000, pp. 1-23. [pdf] [bibtex] [ project page] A preliminary version appeared in Proc. 3rd International Workshop on Algorithm Engineering (WAE), London, 1999, Springer LNCS Vol. 1668, 154-168 • D. Halperin and C.R. Shelton A perturbation scheme for spherical arrangements with application to molecular modeling (the full version, MIT AI MEMO) Computational Geometry: Theory and Applications 10 (4), 1998, pp. 273-288. [pdf] [bibtex] A preliminary version appeared in Proc. 13th ACM Symposium on Computational Geometry, Nice, 1997, 183-192. • D. Halperin and S. Raab Controlled perturbation for arrangements of polyhedral surfaces. A preliminary version entitled: Controlled perturbation for arrangements of polyhedral surfaces with application to swept volumes by S. Raab appeared in Proc. 15th ACM Symposium on Computational Geometry, Miami, 1999, 163-172. [bibtex] [project page] See also Sigal Raab's thesis [pdf] [bibtex] Papers reporting on the experimental study of arrangements including arrangements in CGAL appear above under Robust Geometric Computing and CGAL • N. Alon, O. Nechushtan, D. Halperin, and M. Sharir The complexity of the outer face in arrangements of random segments Proc. of the Twenty-fourth Annual Symposium on Computational Geometry SOCG, 2008, pp 69-78. [link] [bibtex] [project page] • D. Halperin CRC Handbook of Discrete and Computational Geometry, 2nd Edition, J.E. Goodman and J. O'Rourke (eds.), CRC Press, Inc., Boca Raton, FL, 2004, pp. 529-562. [link] [bibtex] • H. Shaul and D. Halperin Improved construction of vertical decompositions of 3D arrangements Proc. 18th ACM Symposium on Computational Geometry, Barcelona, 2002, 283-292. [pdf] [bibtex] [project page] Also appeared in Proc. 18th European Workshop on Computational Geometry, Warsaw, 2002, pp. 101-105 • B. Aronov, A. Efrat, D. Halperin, and M. Sharir On the number of regular vertices of the union of Jordan regions Discrete and Computational Geometry, 25 (2001), 203-220. [pdf] [bibtex] A preliminary version appeared in Proc. 6th Scandinavian Workshop on Algorithm Theory (SWAT), Stockholm, 1998, pp. 322-334. • M. de Berg, D. Halperin, M.H. Overmars and M. van Kreveld Sparse arrangements and the number of views of polyhedral scenes International Journal of Computational Geometry and Applications 7 (1997), 175-195. [pdf] [bibtex] • M. de Berg, L.J. Guibas and D. Halperin Vertical decompositions for triangles in 3-space Discrete and Computational Geometry, 15 (1996), 36--61. [pdf] [bibtex] A preliminary version appeared in Proc. 10th ACM Symposium on Computational Geometry, Stony Brook, 1994, 1-10. • D. Halperin and M. Sharir Almost tight upper bounds for the single cell and zone problems in three dimensions Discrete and Computational Geometry, special issue of papers from the 10th ACM Symposium on Computational Geometry, 14 (1995), 385--410. [pdf] [bibtex] A preliminary version appeared in Proc. 10th ACM Symposium on Computational Geometry, Stony Brook, 1994, 11-20. • L.J. Guibas, D. Halperin, J. Matousek and M. Sharir On vertical decomposition of arrangements of hyperplanes in four dimensions Discrete and Computational Geometry 14 (1995), 113--122. [pdf] [bibtex] A preliminary version appeared in Proc. 5th Canadian Conference on Computational Geometry, Waterloo, 1993, pp. 127-132. • S. Har-Peled, T. M. Chan, B. Aronov, D. Halperin and J. Snoeyink The complexity of a single face of a Minkowski sum Proc. 7th Canadian Conference on Computational Geometry, University Laval, Quebec, 1995, pp. 91-96 . [pdf] [bibtex] • D. Halperin and M. Sharir Arrangements and their applications in robotics: Recent developments The Algorithmic Foundations of Robotics, K. Goldberg, D. Halperin, J.C. Latombe and R. Wilson, Eds., A.K. Peters, Boston, MA, 1995, 495--511. [pdf] [bibtex] A preliminary version appeared in Proc. 1st Workshop on the Algorithmic Foundations of Robotics, San Francisco, 1994. • E.M. Arkin, D. Halperin, K. Kedem, J.S.B. Mitchell and N. Naor Arrangements of segments that share endpoints: Single face results Discrete and Computational Geometry 13 (1995), L\'aszl\'o Fejes T\'oth Festschrift, 257--270. [pdf] [bibtex] A preliminary version appeared in Proc. 7th ACM Symposium on Computational Geometry, North Conway, 1991, pp. 324-333. • D. Halperin and M. Sharir New bounds for lower envelopes in three dimensions, with applications to visibility in terrains Discrete and Computational Geometry 12 (1994), 313-326. [pdf] [bibtex] A preliminary version appeared in Proc. 9th ACM Symposium on Computational Geometry, San Diego, 1993, 11-18. • D. Halperin On the complexity of a single cell in certain arrangements of surfaces related to motion planning Discrete and Computational Geometry 11 (1994), 1-33 [pdf] [bibtex] A preliminary version appeared in Proc. 7th ACM Symposium on Computational Geometry, North Conway, 1991, pp. 314-323. • D. Halperin and M. Sharir On disjoint concave chains in arrangements of (pseudo) lines Information Processing Letters 40 (1991), 189-192. ibid 51 (1994),53-56. [pdf] [bibtex] • Barak Raveh, Angela Enosh, Dan Halperin A Little More, a Lot Better: Improving Path Quality by a Simple Path Merging Algorithm CoRR, abs/1001.2391, 2010. [link] [bibtex][project page] • Itamar Berger, Bosmat Eldar, Gal Zohar, Barak Raveh, Dan Halperin Improving the Quality of Non-Holonomic Motion by Hybridizing C-PRM Paths CoRR, abs/1009.4787, 2010. [link] [bibtex] • Efi Fogel and Dan Halperin Polyhedral assembly partitioning with infinite translations or the importance of being exact Proc. of the 8th International Workshop on the Algorithmic Foundations of Robotics (WAFR), 2008, to appear [pdf] [bibtex] [project page] • R. Wein, J.P. van den Berg and D. Halperin Planning high-quality paths and corridors amidst obstacles Algorithmic Foundations of Robotics VII, pp 491-506, 2008 [pdf] [bibtex] [project page] • R. Wein, J.P. van den Berg, and D. Halperin The visibility-Voronoi complex and its applications Computational Geometry: Theory and Applications, 36 (1) 2007, pp. 66-87. [pdf] [bibtex] [project page] A preliminary version appeared in Proc. 21st ACM Symposium on Computational Geometry, Pisa, June 2005, 63-72. • R. Wein, J.P. van den Berg, and D. Halperin Planning near-optimal corridors amidst obstacles (full version: Planning high-quality paths and corridors amidst obstacles) Proc. 7th International Workshop on the Algorithmic Foundations of Robotics, New York City, July 2006. [link] [bibtex] [project page] • O. Ilushin, G. Elber, D. Halperin and R. Wein and M.-S. Kim Precise global collision detection in multi-axis NC-machining Computer-Aided Design, 37(9), 2005, pp. 909-920. [pdf] [bibtex] [project page] A preliminary version appeared in Proc. International CAD Conference, Thailand, May 2004, pp. 233-243. • R. Wein, O. Ilushin, G. Elber and D. Halperin Continuous path verification in multi-axis NC-machining International Journal of Computational Geometry and Applications, 15 (4) 2005, pp. 351-378. Special issue, dedicated to papers from the 20th ACM Symposium on Computational Geometry,Brooklyn, June, 2004. [pdf] [bibtex] [project page] A preliminary version appeared in Proc. 20th ACM Symposium on Computational Geometry, Brooklyn, June 2004, pp. 86-95. • D. Halperin, L. Kavraki, and J.-C. Latombe CRC Handbook of Discrete and Computational Geometry, 2nd Edition, J.E. Goodman and J. O'Rourke (eds.), CRC Press, Inc., Boca Raton, FL, 2004, pp. 1065-1093. [link] [bibtex] • S. Hirsch and D. Halperin Hybrid motion planning: Coordinating two discs moving among polygonal obstacles in the plane Proc. 5th Workshop on Algorithmic Foundations of Robotics (WAFR), Nice, 2002, 225-241. [pdf] [bibtex] [project page] • D. Halperin, M. Sharir and K. Goldberg The 2-center problem with obstacles Journal of Algorithms 42 (2002), pp. 109-134. [pdf] [bibtex] A preliminary version appeared in Proc. 16th ACM Symposium on Computational Geometry, Hong Kong, 2000, 80-90. • J. Chen, K. Goldberg, M. Overmars, D. Halperin, K.F. Böhringer, and Y. Zhuang Computing tolerance parameters for fixturing and feeding The Assembly Automation Journal , 22 (2002), 163-172. [link] [bibtex] A preliminary version entitled: Shape Tolerance in Feeding and Fixturing, appeared in: The Algorithmic Foundations of Robotics, P.K. Agarwal, L. Kavraki, and M. Mason, Eds., A.K. Peters, Boston, MA, 1998, 297-311. • D. Halperin, J.-C. Latombe and R.H. Wilson A general framework for assembly planning: The motion space approach Algorithmica , 26 (2000), 577--601 [pdf] [bibtex] A preliminary version appeared in Proc. 14th ACM Symposium on Computational Geometry , Minneapolis, 1998, 9-18 • K.F. Böhringer, B.R. Donald and D. Halperin On the area bisectors of a polygon Discrete and Computational Geometry, 22 (1999), 269-285. [pdf] [bibtex] A preliminary version entitled: The area bisectors of a polygon and force equilibria in programmable vector fields appeared in: Proc. 13th ACM Symposium on Computational Geometry, Nice, 1997, pp. • D. Halperin, L. Kavraki, and J.-C. Latombe Robot algorithms CRC Algorithms and Theory of Computation Handbook M. Atallah (editor), CRC Press, Inc., Boca Raton, FL, 1999, Chapter 21 [link] [bibtex] • L.J. Guibas, D. Halperin, H. Hirukawa, J.-C. Latombe and R.H. Wilson Polyhedral assembly partitioning using maximally covered cells in arrangements of convex polytopes Int. J. of Computational Geometry and its applications, 8(2), 1998, 179-199. [pdf] [bibtex] A preliminary version appeared in Proc. IEEE International Conference on Robotics and Automation, Nagoya, Japan, 1995, 2553-2560. • D. Halperin and C.-K. Yap Combinatorial complexity of translating a box in polyhedral 3-space Computational Geometry: Theory and Applications, 9, 1998, 181-196. [pdf] [bibtex] A preliminary version appeared in Proc. 9th ACM Symposium on Computational Geometry, San Diego, 1993, pp. 29-37. • D. Halperin and R.H. Wilson Assembly partitioning along simple paths: The case of multiple translations Advanced Robotics , 11 ,1997, 127-146. [pdf] [bibtex] A preliminary version appeared in IEEE International Conference on Robotics and Automation, Nagoya, Japan, 1995, 1585-1592. • D. Halperin and M. Sharir A near-quadratic algorithm for planning the motion of a polygon in a polygonal environment Discrete and Computational Geometry, 16 ,1996, 121-134. [pdf] [bibtex] A preliminary version appeared in Proc. 5th Annual Symposium on Foundations of Computer Science (FOCS), Palo Alto, 1993, 382-391. • P. Agarwal, M. de Berg, D. Halperin and M. Sharir Efficient generation of k-directional assembly sequences Proc. 7th ACM-SIAM Symp. on Discrete Algorithms ,1996, 122-131. [pdf] [bibtex] • K.Y. Goldberg, D. Halperin, J.-C. Latombe and R.H. Wilson (editors) Algorithmic Foundations of Robotics A.K. Peters, Wellseley, MA, 1995. [link] [bibtex] • M. de Berg, L.J. Guibas, D. Halperin, M.H. Overmars, O. Schwarzkopf, M. Sharir and M. Teillaud Reaching a goal with directional uncertainty Theoretical Computer Science 140 ,1995, 301-317. [link] [bibtex] A preliminary version appeared in Proc. 4th International Symposium on Algorithms and Computation (ISAAC '93), Hong Kong, 1993, 1-10. • D. Halperin Robot motion planning and the single cell problem in arrangements Journal of Intelligent and Robotic Systems 11 ,1994, 45-65. [pdf] [bibtex] • A.F. van der Stappen, D. Halperin and M.H. Overmars The complexity of the free space for a robot moving amidst fat obstacles Computational Geometry: Theory and Applications 3 ,1993, 353-373. [pdf] [bibtex] • A.F. van der Stappen, D. Halperin and M.H. Overmars Efficient algorithms for exact motion planning amidst fat obstacles Proc. IEEE International Conference on Robotics and Automation, Atlanta, 1993, Vol. 1, pp.297-304. [pdf] [bibtex] • D. Halperin Algorithmic motion planning via arrangements of curves and of surfaces Ph.D Thesis, Computer Science Department, Tel Aviv University, Tel Aviv, July 1992. • D. Halperin, M.H. Overmars and M. Sharir Efficient motion planning for an L-shaped object SIAM Journal on Computing 21 (1992), 1-23 [pdf] [bibtex] A preliminary version appeared in Proc. 5th ACM Symposium on Computational Geometry, Saarbrücken, 1989, pp. 156-166. • D. Halperin and M. Sharir Improved combinatorial bounds and efficient techniques for certain motion planning problems with three degrees of freedom Computational Geometry: Theory and Applications 1 ,1992, 269-303. [link] [bibtex] A preliminary version appeared in Proc. 2nd Canadian Conference on Computational Geometry, Ottawa, 1990, pp. 98-101. • D. Halperin Automatic kinematic modelling of robot manipulators and symbolic generation of their inverse kinematics solutions Proc. 2nd International Workshop on Advances in Robot Kinematics Linz, 1990. pp. 310-317. [pdf] [bibtex] • D. Halperin Kinematic modelling of robot manipulators and automatic generation of their inverse kinematics solutions (in Hebrew; see English abstract at the end of the pdf file) M.Sc Thesis, Computer Science Department, Tel Aviv University, Tel Aviv, August 1986. Molecular Modeling • B.Raveh , A. Enosh , O.Furman-Schueler and D. Halperin Rapid sampling of molecular motions with prior information constraints PLoS Computational Biology, 2009 [link] [bibtex] [project page] • A.Enosh,B.Raveh,O.Furman-Schueler, D. Halperin,N.Ben-Tal Generation, comparison and merging of pathways between protein conformations: Gating in K-channels Biophysical Journal, 2008 [link] [related poster - ppt] [bibtex] [project page] • E. Yaffe and D. Halperin Approximating the pathway axis and the persistence diagram of a collection of balls in 3-space Symposium on Computational Geometry, SoCG, 260-269, 2008 [pdf] [bibtex] [project page] • E. Yaffe, D. Fishelovitch, H. J. Wolfson, D. Halperin, R. Nussinov MolAxis: a server for identification of channels in macromolecule Nucleic Acids Research, 36 (Web Server issue),2008, pp W210-W215 [link] [bibtex] [project page] • E. Yaffe, D. Fishelovitch, H. J. Wolfson, D. Halperin, R. Nussinov MolAxis: efficient and accurate identification of channels in macromolecules Proteins: Structure, Function, and Bioinformatics. Volume 73, issue 1, 2008, Pp 72-86 [link] [bibtex] [project page] • ND. Rubinstein, I. Mayrose, D. Halperin, D. Yekutieli, JM. Gershoni and T. Pupko Computational characterization of B-Cell epitopes Molecular Immunology 45, 2008, pp 3477-3489 [link] [bibtex] • A. Enosh, S. J. Fleishman, N. Ben-Tal and D. Halperin Prediction and simulation of motion in Pairs of transmembrane alpha-helices Proc. 5th European Conference on Computational Biology - ECCB 2006 , Eilat, Israel, 2007. [link] [bibtex] • S. J. Fleishman, S.E. Harrington, A. Enosh, D. Halperin, C.G. Tate and N. Ben-Tal Quasi-symmetry in the Cryo-EM structure of EmrE provides the key to modeling its transmembrane Domain Journal of Molecular Biology, 364 (2006), 54-67. [link] [bibtex] • E. Eyal and D. Halperin Improved maintenance of molecular surfaces using dynamic graph connectivity Proc. 5th International Workshop on Algorithms in Bioinformatics - WABI 2005 , Mallorca, Spain, 2005, Springer LNCS, Vol. 3692. [pdf] [bibtex] [project page] • E. Eyal and D. Halperin Dynamic maintenance of molecular surfaces under conformational changes Proc. 21st ACM Symposium on Computational Geometry , Pisa, June 2005, 45-54. [pdf] [bibtex] [project page] • A. Enosh, S.J. Fleishma, N. Ben-Tal and D. Halperin Assigning transmembrane segments to helics in intermediate-resolution structures Proc. Twelfth International Conference on Intelligent Systems for Molecular Biology (ISMB), held jointly with the Third European Conference on Computational Biology (ECCB), Glasgow, July-August, 2004, pp. 122-129. [pdf] [bibtex] • I. Lotan, F. Schwarzer, D. Halperin and J.-C. Latombe Algorithm and data structures for efficient energy maintenance during Monte Carlo simulation of proteins Journal of Computational Biology 11 (5), 2004, 902-932. [pdf] [bibtex] A preliminary and partial version, titled: Efficient maintenance and self-collision testing for kinematic chains, appeared in Proc. 18th ACM Symposium on Computational Geometry, Barcelona, 2002, pp, 43-52. • D. Halperin and C.R. Shelton A perturbation scheme for spherical arrangements with application to molecular modeling Computational Geometry: Theory and Applications 10 (4), 1998, 273-288. (see also the section "Robust geometric computing" above). [pdf] [bibtex] A preliminary version appeared in Proc. 13th ACM Symposium on Computational Geometry, Nice, 1997, 183-192. • D. Halperin and M.H. Overmars Spheres, molecules, and hidden surface removal Computational Geometry: Theory and Applications 11 (2), 1998, 83-102. [pdf] [bibtex] A preliminary version appeared in Proc. 10th ACM Symposium on Computational Geometry, Stony Brook, 1994, 113-122. • P.W. Finn, D. Halperin, L. Kavraki, J.-C. Latombe, R. Motwani, C.Shelton, and S. Venkatasubramanian Geometric manipulation of flexible ligands Applied Computational Geometry: Towards geometric Engineering M.C. Lin and D. Manocha (editors), Springer 1996, (papers from the ACM Workshop on Applied Computational Geometry 1996), pp. 67-78. [pdf] [bibtex] • D. Halperin, J.-C. Latombe, and R. Motwani Dynamic maintenance of kinematic structures Algorithms for Robotic Motion and manipulation (WAFR '96), J.-P. Laumond and M. Overmars (editors), A.K. Peters, Wellesley, 1997, pp. 155-170. [pdf] [bibtex] A preliminary version appeared in Proc. 2nd Workshop on the Algorithmic Foundations of Robotics, Toulouse, 1996. Surveys, Book Chapters, Book Editing • D. Halperin and K. Melhorn Proc. 16th Annual European Symposium on Algorithms, 2008 [link] [bibtex] • D. Halperin Engineering Geometric Algorithms In Encyclopedia of Algorithms, 2008 [link] [bibtex] • E. Fogel, D. Halperin, L. Kettner, M. Teillaud, R. Wein and N. Wolpert Effective Computational Geometry for Curves and Surfaces, Jean-Daniel Boissonnat and Monique Teillaud (eds.), Springer, Mathematics and Visualization series, 2007, Chapter 1, pp. 1-66. [link] [bibtex] • D. Halperin CRC Handbook of Discrete and Computational Geometry, 2nd Edition, J.E. Goodman and J. O'Rourke (eds.), CRC Press, Inc., Boca Raton, FL, 2004, pp. 529-562. [link] [bibtex] • D. Halperin, L. Kavraki, and J.-C. Latombe CRC Handbook of Discrete and Computational Geometry, 2nd Edition, J.E. Goodman and J. O'Rourke (eds.), CRC Press, Inc., Boca Raton, FL, 2004, pp. 1065-1093. [link] [bibtex] • D. Halperin Robust geometric computing in motion International Journal of Robotics Research, 21 (3), 2002, pp. 219-232. Appeared in: Algorithmic and Computational Robotics: New Dimensions (WAFR '00). B.R. Donald and K.M. Lynch and D. Rus (eds.,) A.K. Peters, Wellesley, 2001, pp. 9-22. Invited talk at WAFR 2000, Workshop on Algorithmic Foundations of Robotics , Dartmouth College, March 2000. [pdf] [bibtex] • D. Halperin, L. Kavraki, and J.-C. Latombe Robot algorithms CRC Algorithms and Theory of Computation Handbook M. Atallah (editor), CRC Press, Inc., Boca Raton, FL, 1999, Chapter 21. [pdf] [bibtex] • D. Halperin and M. Sharir Arrangements and their applications in robotics: Recent developments The Algorithmic Foundations of Robotics, K. Goldberg, D. Halperin, J.C. Latombe and R. Wilson, Eds., A.K. Peters, Boston, MA, 1995, 495-511. [pdf] [bibtex] A preliminary version appeared in Proc. 1st Workshop on the Algorithmic Foundations of Robotics, San Francisco, 1994. • D. Halperin Robot motion planning and the single cell problem in arrangements Journal of Intelligent and Robotic Systems 11 (1994), 45-65. [pdf] [bibtex] Books and Proceedings • Guy E. Blelloch, Dan Halperin Proceedings of the Twelfth Workshop on Algorithm Engineering and Experiments, ALENEX 2010, Austin, Texas, USA, January 16, 2010 ALENEX, SIAM 2010.
{"url":"http://acg.cs.tau.ac.il/danhalperin/publications/dan-halperins-publications","timestamp":"2014-04-18T18:10:55Z","content_type":null,"content_length":"87180","record_id":"<urn:uuid:8f82b3b7-50be-4158-9c49-6fd8a31d8956>","cc-path":"CC-MAIN-2014-15/segments/1398223203841.5/warc/CC-MAIN-20140423032003-00467-ip-10-147-4-33.ec2.internal.warc.gz"}
differentiation from first principals May 9th 2006, 01:54 AM differentiation from first principals Hi Math help forum! I'm finding it difficult to differentiation from first principals and wanted to know if there is a simple method I can follow. Could you please give an example aswell thanks. May 9th 2006, 07:40 AM "from first principles" would be using the (limit) definition? May 9th 2006, 09:33 AM Thanks for the reply Yes any method as long as it easy to follow and you get the right answer! :D May 9th 2006, 09:36 AM Well there is no standard 'way' of doing them, since evaluating limits can go very different, depending on what limit it is. For what kind of functions do you have to be able to get its derivative through the definition? May 9th 2006, 11:28 AM Just so I'm clear, is this what you are talking about? $f'(x)=\lim_{\Delta{x}\rightarrow{0}}\frac{f({x}{+} \Delta{x})-f(x)}{\Delta{x}}$? May 9th 2006, 02:03 PM I realized that you used While you can have used, May 9th 2006, 02:46 PM yeah jameson when the limit turns zero. May 9th 2006, 05:50 PM Originally Posted by dadon yeah jameson when the limit turns zero. That's what I wrote. Hence the $\Delta{x}\to{0}$. What seems to be the problem? These problems usually require some basic algebra manipulation. Try finding the derivatives of $f(x)=x^2$ and $f(x) =\frac{1}{x}$ using the limit definition. Or is there a specific one we can help you with? May 9th 2006, 05:51 PM Thanks. A small time saver. :) May 9th 2006, 06:40 PM Originally Posted by Jameson Thanks. A small time saver. :) I also made the same thing, then I was curious to see what Code TD! used and a saw he used a simpler one used it ever since. May 10th 2006, 01:02 AM So say I had to find from first principals $\frac{dy}{dx}$ of the following: $y = 16x + \frac{1}{x^2}$ cheers guys May 10th 2006, 09:16 AM Set up your limit. $\frac{dy}{dx}=\lim_{\Delta{x}\to{0}}\frac{16(x{+}{ \Delta}x)+\frac{1}{(x{+}{\Delta}{x})^2}-16x-\frac{1}{x^2}}{\Delta{x}}$ Is this where you're having trouble? May 10th 2006, 09:17 AM thanks for the reply. yes that is what i needed help on. May 10th 2006, 01:59 PM Originally Posted by dadon thanks for the reply. yes that is what i needed help on. You do not need to spend much time and master that limit. Because soon you will learn rules which would make it easier.
{"url":"http://mathhelpforum.com/calculus/2884-differentiation-first-principals-print.html","timestamp":"2014-04-18T00:38:41Z","content_type":null,"content_length":"10554","record_id":"<urn:uuid:95ed5206-069c-4c58-86be-821c88ff3bca>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00572-ip-10-147-4-33.ec2.internal.warc.gz"}
Amplitude Modulation on MATLAB Simulink Download the Simulink Model: What is Amplitude Modulation? This topic is the result of Digital Signal Processing term project named Amplitude Modulation and Demodulation on Texas Instrument Kit DSK C6713 with Matlab Simulink. One of the fundamental part of our project is included in this very post. The whole term mini project will be gradually discussed in subsequent posts. Before we proceed, we must know what actually modulation and amplitude modulation is? A modulator alters the carrier waves corresponding to the variation of the modulating signals. Resulting modulated signal thus carries message information. Amplitude modulation is the process of changing the amplitude of a high frequency carrier signal corresponding to the amplitude of the modulating signal (Information). The wave whose amplitude is being varied is called the carrier wave and the signal doing the variation is called the modulating signal i-e message signal or information signal. The carrier is always almost a sinusoidal wave. The modulating or message signal can be a sine wave but it can be arbitrary waveform such as audio signal etc. Some Mathematics : For simplicity, suppose both carrier wave and modulating signal are sinusoidal: vc = Vc sin wc t (c denotes carrier) vm = Vm sin wm t (m denotes modulation) We want the modulating signal to vary the carrier amplitude, Vc, so that: vc = (Vc + Vm sin wmt).sin wc t where (Vc + Vm sin wm t) is the new, varying carrier amplitude. Expanding this equation gives: vc = Vc sin ωc t + Vm sin ωc t. sin ωm t which may be rewritten as: vc = Vc [sin ωc t + m sin ωc t. sin ωm t] where m = Vm/Vc and is called the modulation index. sin ωc t.sin ωm t = (1/2) [cos(ωc - ωm) t - cos(ωc + ωm) t] so, from the previous equation: vc = Vc [sin ωc t + m sin ωc t. sin ωm t] we can express vc as: vc = Vc sin ωc t + (mVc/2) [cos(ωc - ωm) t] - (mVc/2) [cos(ωc + ωm) t] 1. The original carrier waveform, at frequency ωc, containing no variations and thus carrying no information. 2. A component at frequency (ωc - ωm) whose amplitude is proportional to the modulation index. This is called the Lower Side Frequency. 3. A component at frequency (ωc + ωm) whose amplitude is proportional to the modulation index. This is called the Upper Side Frequency. Lets discuss step by step the implementation of amplitude modulation on simulink modulation :- Start the new Simulink model Open the Simulink library browser Create the simulation model of the AM Message Signal: A Sinusoidal signal of frequency 1 Hz, Amplitude=1 Carrier Signal of frequency 20 Hz and Amplitude=1 The above model is the equivalent of the mathematical expression: s(t)=[1+m(t)](cos(2πfct)) Where; m(t) represent the message signal And cos(2πfct) represent the carrier Run the Simulink model, the message, carrier and modulated output can be observed on the oscilloscope. The above diagtam shows both supressed and without supressed carrier.You are recommended to see the following too :- This project named “AMPLITUDE MODULATION USING MATLAB SIMULINK AND TEXAS INSTRUMENT KIT C6713”, in all respect is the property of the following personnel who undertake this project as the term project in EE-322 ‘DSP & Filters’ in summer semester 2010. However the copy of the project can be distributed upon the approval of the following members:- 1. Muhammad Ahmed (NUST-ELECTRONICS) 2. Jamal Ahmed(NUST-ELECTRONICS) 3. Muhammad Faisal (NUST-ELECTRONICS) Amplitude Modulation Matlab Simulink . You can follow any responses to this entry through the RSS 2.0 . Feel free to leave a response
{"url":"http://elprojects.blogspot.com/2010/06/amplitude-modulation-on-matlab-simulink.html","timestamp":"2014-04-19T17:05:57Z","content_type":null,"content_length":"112455","record_id":"<urn:uuid:3b77478d-4773-4f58-bdfa-b05dd0840bd2>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
Slouching towards simulating investment skill April 29, 2013 By Pat When investment skill is simulated, it is often presented as if it is obvious how to do it. Maybe I’m wrong, but I don’t think it’s obvious. In “Simple tests of predicted returns” we saw that prediction quality need not look like what you would find in a textbook. For example, there was a case where there was no predictive power on the low values, but good prediction for high values. 443 large cap US equities are used. The variance matrix is estimated using the daily returns during 2011 (via a Ledoit-Wolf shrinkage model). Both long-only and long-short portfolios are created. The constraints for the long-only portfolios are: • no more than 60 names in the portfolio • predicted volatility is no more than 20% The constraints for the long-short portfolios are: • dollar neutral (net value of zero) • no more than 60 names in the portfolio • predicted volatility is no more than 10% There are unlimited ways of simulating skill. Here we take a brief look at two. Wilma gets the correlation right (or wrong) separately for the halves of the data above and below the median of the realized values. Three cases were created: • the correlation is 10% for both halves (blue) • the correlation is zero for the low half and 10% for the high half (gold) • the correlation is -10% for the low half and 10% for the high half (black) The colors indicate how the cases appear in the figures. Figures 1 and 2 show the distributions of random portfolios generated using the three predictions plus a set that uses no predictions (colored green). The sets of random portfolios use the predictions by having an extra constraint that the predicted return must be at least 90% of the way to the maximum predicted return. In practice a fund manager is constrained by turnover considerations from moving to the best portfolio — the random portfolios are imitating this. Figure 1: Distributions of 2012 Q1 returns for long-only portfolios: no prediction (green), Wilma 10,10 (blue), 0,10 (gold), -10,10 (black). Figure 2: Distributions of 2012 Q1 returns for dollar-neutral portfolios: no prediction (green), Wilma 10,10 (blue), 0,10 (gold), -10,10 (black). Let’s focus on the blue distribution in Figures 1 and 2. This is using one specific vector of predictions. The variability is due to using that one prediction differently. Think of it as Wilma having lots of different portfolios and using this one prediction vector to rebalance each of them. But different prediction vectors with the same correlations may be more or less effective. In fact, they can have significantly different correlations over the whole universe, as Figure 3 shows. Figure 3: Correlation between realized and Wilma predictions where both halves have correlation 10%, with the correlation of the prediction used (blue). The overall correlation is generally higher than 10% because the relationship is seen over a wider range. But the overall correlation can be less than 10% as well by the lines having different slopes in the two halves. Barney’s skill is different than Wilma’s. Barney knows that a certain fraction of predictions and realized values match (or don’t match) in terms of being above or below their respective medians. (This description is not quite accurate, and is probably hard to follow anyway — see the code in Appendix R below for the real story.) Again three predictions are used: • 20% in each half are quaranteed to match (blue) • 20% guaranteed to match in the high half (gold) • 20% guaranteed not to match in the low half and 20% guaranteed to match in the high half (black) Figures 4 and 5 show the returns of the random portfolios using the same scheme of constraining the expected returns. Figure 4: Distributions of 2012 Q1 returns for long-only portfolios: no prediction (green), Barney 20,20 (blue), 0,20 (gold), -20,20 (black). Figure 5: Distributions of 2012 Q1 returns for dollar neutral portfolios: no prediction (green), Barney 20,20 (blue), 0,20 (gold), -20,20 (black). Figure 5 is a mess in the sense that the differences between the distributions are subtle. This is probably realistic. The scheme from here would be to try to condition out more noise so that the effect of the prediction quality is more visible. Figure 4 is a mess in the sense that it violates our expectations. The supposedly best prediction does worst, and the supposedly mediocre prediction does very well. One possibility is that there is a bug in the computations. That shouldn’t be entirely ruled out, but there is a reasonable explanation for what we are seeing. Figures 6 and 7 show the two predictions in question versus the realized returns. Figure 6: Realized returns versus Barney predictions with 20% guarantee on the top half. Figure 7: Realized returns versus Barney predictions with 20% guarantee on both halves. Note in particular the highlighted points in Figures 6 and 7. The most positive predictions are going to be the most important in long-only portfolios. The center does not hold. The 0-20 prediction has relatively good predictions in the tail, while the 20-20 has rather poor predictions in the tail. Notice that a bound on position size is not one of the constraints. The two most extreme points in Figures 6 and 7 are likely to be driving a lot of the returns. Simulating investment skill is feasible and probably quite useful when done well. Doing it well is unlikely to be done in an afternoon. The textbook measures of the quality of prediction are going to be inadequate. Be careful of the most extreme predictions. Surely some revelation is at hand from “The Second Coming” by W. B. Yeats Appendix R The computations were done with R. The function to do the Wilma prediction: pp.wilma <- function(realized, predicted=NULL, cors=c(.1, .1), tol=.01) # simulate investment skill by adjusting correlation # Placed in the public domain 2013 by Burns Statistics # Testing status: untested if(length(predicted)) { if(length(predicted) != length(realized)) { stop("'realized' and 'predicted' need to be the same length") if(length(intersect(names(realized), names(predicted)))) { stop("'realized' and 'predicted' need to for the same assets") predicted <- predicted[names(realized)] } else { predicted <- realized predicted[] <- sample(predicted) rmed <- median(realized) low <- realized <= rmed sfun.cor <- function(w, real, pred, targetCor) { cor(real, w * real + (1 - w) * pred) - targetCor wlow <- uniroot(sfun.cor, c(-1,1), tol=tol, real=realized[low], pred=predicted[low], predicted[low] <- wlow * realized[low] + (1 - wlow) * whi <- uniroot(sfun.cor, c(-1,1), tol=tol, real=realized[!low], pred=predicted[!low], predicted[!low] <- whi * realized[!low] + (1 - whi) * The three predictions were created (the rough beast is born) with: pw12q1.1010 <- pp.wilma(real12q1, cors=c(.1, .1)) pw12q1.0010 <- pp.wilma(real12q1, cors=c(0, .1)) pw12q1.n1010 <- pp.wilma(real12q1, cors=c(-.1, .1)) The function to do the Barney prediction: pp.barney <- function(realized, predicted=NULL, fraction=c(.1, .1)) # simulate investment skill by adjusting signs of predictions # Placed in the public domain 2013 by Burns Statistics # Testing status: untested if(length(predicted)) { if(length(predicted) != length(realized)) { stop("'realized' and 'predicted' need to be the same length") if(length(intersect(names(realized), names(predicted)))) { stop("'realized' and 'predicted' need to for the same assets") predicted <- predicted[names(realized)] } else { predicted <- realized predicted[] <- sample(predicted) srmed <- sign(realized - median(realized)) predicted <- predicted - median(predicted) lowi <- which(predicted < 0) hii <- which(predicted >= 0) changelow <- sample(lowi, round(abs(fraction[1]) * if(fraction[1] < 0) { predicted[changelow] <- -srmed[changelow] * } else if(fraction[1] > 0) { predicted[changelow] <- srmed[changelow] * changehi <- sample(hii, round(abs(fraction[2]) * length(hii)), replace=FALSE) if(fraction[2] < 0) { predicted[changehi] <- -srmed[changehi] * } else if(fraction[2] > 0) { predicted[changehi] <- srmed[changehi] * The three predictions were created with: pb12q1.2020 <- pp.barney(real12q1, fraction=c(.2, .2)) pb12q1.0020 <- pp.barney(real12q1, fraction=c(0, .2)) pb12q1.n2020 <- pp.barney(real12q1, fraction=c(-.2, .2)) estimate variance matrix The variance estimate uses 250 daily returns in 2011: lwvar11 <- var.shrink.eqcor(initret[seq(to=2014, length=250), ]) allowable volatility The rest of the computations depend on Portfolio Probe: The first thing to do is to get the minimum variance portfolio with the constraints that we are using so that we will know how much volatility we need to allow for: op.minvar.lo <- trade.optimizer(prices=initclose[2015,], variance=lwvar11, gross=1e7, long.only=TRUE, The predicted volatility of the minimum variance portfolio is: > sqrt(252 * op.minvar.lo$var.value) [1] 0.133833 We can comfortably have 20% volatility in our portfolios. allowable predicted return The next step is to see what the best predicted expected return is given the constraints by doing another optimization: op.pw1010.lo <- trade.optimizer(prices=initclose[2015,], variance=lwvar11, expected.return=pw12q1.1010, gross=1e7, long.only=TRUE, port.size=60, The expected return of the optimal portfolio is in the alpha.value component of the resulting object, but we don’t really need to look at it in this case. Optimizations were also done with the other prediction vectors. We also do this for a dollar neutral portfolio where we limit volatility to 10%: op.pw1010.ls <- trade.optimizer(prices=initclose[2015,], variance=lwvar11, expected.return=pw12q1.1010, gross=1e7, net=0, port.size=60, generate random portfolios We start by generating random portfolios (10,000 in each go) with the same constraints as the optimizations: rpvclo <- random.portfolio(number.rand=1e4, prices=initclose[2015,], variance=lwvar11, gross=1e7, long.only=TRUE, port.size=60, rpvcls <- random.portfolio(number.rand=1e4, prices=initclose[2015,], variance=lwvar11, gross=1e7, net=0, port.size=60, Now we generate random portfolios with these constraints plus a constraint on the expected return: rp.pw1010.lo <- random.portfolio(number.rand=1e4, prices=initclose[2015,], variance=lwvar11, gross=1e7, long.only=TRUE, port.size=60, alpha.constraint=unname(mean(pw12q1.1010) + .9 * (op.pw1010.lo$alpha.value - mean(pw12q1.1010)))) The alpha constraint is simpler in the dollar neutral case because we know that we expect zero when no prediction is used. get returns The only other operation of note is getting the returns of the random portfolios over the quarter of interest. Really only the density of the returns is used: density(100 * valuation(rp.pw1010.lo, prices=initclose[c(2015, 2078),], returns='simple')) This uses the valuation function from Portfolio Probe where the prices at the start and finish of the quarter are given and simple returns are asked for. daily e-mail updates news and on topics such as: visualization ( ), programming ( Web Scraping ) statistics ( time series ) and more... If you got this far, why not subscribe for updates from the site? Choose your flavor: , or
{"url":"http://www.r-bloggers.com/slouching-towards-simulating-investment-skill/","timestamp":"2014-04-16T19:24:10Z","content_type":null,"content_length":"50193","record_id":"<urn:uuid:bad58fa0-6f5a-47ec-8096-d34ff54a2366>","cc-path":"CC-MAIN-2014-15/segments/1397609524644.38/warc/CC-MAIN-20140416005204-00332-ip-10-147-4-33.ec2.internal.warc.gz"}
Patent US5202903 - Noise-immune space diversity receiver Publication number US5202903 A Publication type Grant Application number US 07/678,497 Publication date Apr 13, 1993 Filing date Apr 1, 1991 Priority date Mar 30, 1990 Fee status Paid Also published as CA2039596C, DE69129768D1, DE69129768T2, EP0449327A2, EP0449327A3, EP0449327B1 Publication number 07678497, 678497, US 5202903 A, US 5202903A, US-A-5202903, US5202903 A, US5202903A Inventors Kazuhiro Okanoue Original Assignee Nec Corporation Export Citation BiBTeX, EndNote, RefMan Patent Citations (4), Non-Patent Citations (8), Referenced by (94), Classifications (17), Legal Events (4) External Links: USPTO, USPTO Assignment, Espacenet Noise-immune space diversity receiver US 5202903 A In a space diversity receiver, matched filters and a like number of channel estimators are respectively coupled to diversity antennas to receive sequentially coded symbol sequences. A branch metric calculator receives the outputs of the matched filters and the estimates from the channel estimators to calculate a branch metric of the received sequences for coupling to a maximum likelihood (ML) estimator. The branch metric is obtained by summing branch metric coefficients derived from channel estimates respectively with the output of the matched filters or by summing branch metric coefficients derived from a vector sum of channel estimates with the matched filter outputs. In another embodiment, adaptive channel estimators are provided for deriving channel estimates from received sequences and the output of an ML estimator. First branch metrics are derived from the received sequences and supplied to a branch metric quality estimator in which quality estimates of the channels are derived from the first branch metrics. An evaluation circuit evaluates the first branch metrics according to the quality estimates and produces a second branch metric for coupling to the ML estimator. What is claimed is: 1. A diversity receiver having a plurality of diversity antennas for simultaneously receiving sequentially coded symbol sequences transmitted over distinct communications channels from a point of transmission to said antennas, comprising: a plurality of channel estimators respectively coupled to said diversity antennas for deriving estimates of impulse responses of said communication channels respectively from coded symbol sequences received by said antennas; a plurality of matched filters associated respectively with said channel estimators and said diversity antennas, each of said matched filters having a tapped delay line coupled to the associated antenna and a plurality of multipliers coupled respectively to successive taps of said tapped delay line for controlling tap weight coefficients of said multipliers in response to an output signal from the associated channel estimator and integrating weighted signals generated by said multipliers to produce a matched filter output; a branch metric calculator for receiving the outputs of said matched filters and said estimates from said channel estimators for calculating a branch metric of the signals received by the antennas; a maximum likelihood sequence estimator coupled to said branch metric calculator. 2. A diversity receiver as claimed in claim 1, wherein said branch metric calculating circuit comprises: a plurality of branch metric coefficient calculators for receiving output signals from said channel estimators for calculating branch metric coefficients; and a plurality of first adders for summing output signals from said branch metric coefficient calculators with output signals from said matched filters; and a second adder for adding the outputs of said first adders to produce a branch metric. 3. A diversity receiver as claimed in claim 1, wherein said branch metric calculator comprises: a vector adder for providing vector summation of impulse response vectors from said channel estimators to produce an output impulse response vector; a branch metric coefficient calculator for deriving a branch metric coefficient from said output impulse response vector, and an adder for summing output signals from said matched filters and said branch metric coefficient to produce a branch metric. 4. An adaptive diversity receiver having a plurality of diversity antennas for simultaneously receiving signals over distinct communications channels from a point of transmission to said antennas, a plurality of adaptive channel estimators respectively coupled to said diversity antennas for deriving estimates of impulse responses of said communication channels respectively from signals received by said antennas and a previously received signal; a plurality of branch metric calculators coupled respectively to said diversity antennas for deriving first branch metrics respectively from said signals respectively received by said antennas; a branch metric quality estimator coupled to said adaptive channel estimators for deriving from output signals of said channel estimators a plurality of branch metric quality estimates of said communications channels, respectively; a branch metric evaluation circuit coupled to said branch metric calculators and said branch metric quality estimators for evaluating said first branch metrics in accordance with said branch metric quality estimates and producing a second branch metric; and a maximum likelihood sequence estimator for deriving a maximum likelihood estimate of said received signals from said second branch metric and applying said maximum likelihood estimate to said adaptive channel estimators as said previously received signal. 5. An adaptive diversity receiver as claimed in claim 4, wherein said second branch metric produced by said evaluation circuit is representative of a weighted mean value of said branch metrics, weighted with said branch metric quality estimates. 6. An adaptive diversity receiver as claimed in claim 4, wherein each of said adaptive channel estimators includes means for generating an error signal representative of a difference between an output signal from said maximum likelihood sequence estimator and a corresponding one of said signals respectively received by said diversity antennas and supplying said error signal as one of said output signals supplied from said adaptive channel estimators to said branch metric quality estimator. 7. An adaptive diversity receiver as claimed in claim 6, wherein said branch metric quality estimator comprises: a plurality of power detector circuits for deriving signals representative of the power levels of said signals respectively received by said antennas from the error signals supplied from said adaptive channel estimators; and a plurality of comparator means coupled respectively to said power detector means for comparing said power representative signals with a threshold value and generating a plurality of signals each indicating whether the respective power representative signal is higher or lower than said threshold value. 8. An adaptive diversity receiver as claimed in claim 6, wherein each of said adaptive channel estimators comprises: a tapped delay having for receiving said output signal from said maximum likelihood sequence estimator; a plurality of multipliers respectively coupled to successive taps of said delay line; an adder for integrating output signals of said multipliers to produce an adder output; delay means for introducing a delay time to a corresponding one of said signals respectively received by said antennas by an amount corresponding to the amount of time taken for said corresponding signal to appear at the output of said maximum likelihood sequence estimator; error detector means for detecting a difference between said adder output and an output signal of said delay means to generate said error signal; and processor means for deriving tap weight coefficients from signals at said successive taps and said error signal and supplying said coefficients to said multipliers, respectively, and to a respective one of said branch metric calculators as said impulse response estimates. 9. An adaptive diversity receiver as claimed in claim 4, wherein each of said adaptive channel estimators comprises: a tapped delay having for receiving said output signal from said maximum likelihood sequence estimator; a plurality of multipliers respectively coupled to successive taps of said delay line; an adder for integrating output signals of said multipliers to produce an adder output; delay means for introducing a delay time to a corresponding one of said signals respectively received by said antennas by an amount corresponding to the time taken for said corresponding signal to appear at the output of said maximum likelihood sequence estimator; error detector means for detecting a difference between said adder output and an output signal of said delay means to generate an error signal; and processor means for deriving tap weight coefficients from signals at said successive taps and said error signal and supplying said coefficients to said multipliers, respectively, and to a respective one of said branch metric calculators as said impulse response estimates. 10. An adaptive diversity receiver as claimed in claim 9, wherein said branch metric quality calculator comprises: a plurality of variation detector means coupled respectively to said adaptive channel estimators for detecting time-varying components from the channel estimates; and a plurality of comparator means coupled respectively to said variation detector means for comparing said time-varying components with a threshold value and generating a plurality of signals each indicating whether the respective time-varying components is higher or lower than said threshold value. 11. An adaptive diversity receiver as claimed in claim 7 or 9, wherein said evaluation circuit comprises: a first adder for deriving a first value representative of a sum of output signals from said comparators; a plurality of weighting means for respectively weighting said branch metrics with the output signals from said comparators; a second adder for summing the weighted branch metrics to produce a second value; and a division circuit for arithmetically dividing said second value with said first value to produce said second branch metric. 12. An adaptive diversity receiver as claimed in claim 7 or 9, wherein each of said comparator means has a hysteresis characteristic for maintaining the output signal thereof when said threshold value is exceeded. The present invention is related to Co-pending U.S. patent application Ser. No. 07/517,883, titled "Space Diversity TDMA Receiver", K. Okanoue, filed May 2, 1990 and assigned to the same assignee as the present invention. The present invention relates to diversity reception of signals propagating over distinct fading channels. It is known to combine a diversity system with an equalization system for purposes of improving the performance of a receiver. One such technique is the decision feedback equalization in which matched filters or forward equalizers are provided respectively at diversity antennas and their outputs are combined and fed into a decision-feedback equalizer (as described in K. Watanabe, "Adaptive Matched Filter And Its Significance To Anti-Multipath Fading", IEEE publication (CH2314-3/86/0000-1455) 1986, pages 1455 to 1459, and P. Monsen, "Adaptive Equalization of The Slow Fading Channel", IEEE, Transactions of Communications, Vol. COM-22, No. 8, August 1974). Another technique is the maximum likelihood estimation in which the quality (spread of intersymbol interference and signal to noise ratio) of a received signal at each diversity antenna is estimated and a signal having the largest value is selected on the basis of the quality estimates (as described in Okanoue, Furuya, "A New Post-Detection Selection Diversity With MLSE Equalization", B-502, Institutes of Electronics Information and Communications, Autumn National Meeting, 1989). To implement the maximum likelihood sequence estimation, the Viterbi algorithm is well known. By summing constants uniquely determined by matched filters and communication channels (as defined by the second and third right terms of Equation 8b, page 18, J. F. Hayes, "The Viterbi Algorithm Applied to Digital Data Transmission", IEEE Communication Society, 1975, No. 13, pages 15-20), a branch metric of received symbol sequences is determined and fed into a soft-decision Viterbi decoder. However, prior art systems are still not satisfactory if the branch metric is severely affected by channel noise and intersymbol interference. In addition, if variabilities exist in signal to noise ratio between signals received by different diversity antennas during a deep fade, all such signals will be treated alike and an error is likely to result in maximum likelihood sequence estimation. It is therefore an object of the present invention to provide a space diversity receiver for a communications system in which the quality of reception is significantly affected by channel noise and intersymbol interference. According to a first aspect of the present invention, there is provided a diversity receiver having a plurality of diversity antennas for simultaneously receiving sequentially coded symbol sequences propagating over distinct communications channels from a point of transmission to the antennas. The receiver comprises a plurality of channel estimators respectively coupled to the antennas for deriving respective estimates of impulse responses of the communication channels from the received sequences. A plurality of matched filters are associated respectively with the channel estimators and the diversity antennas. Each of the matched filters has a tapped delay line coupled to the associated antenna and a plurality of multipliers coupled respectively to successive taps of the tapped delay line for controlling tap weight coefficients of the multipliers in response to an output signal from the associated channel estimator and integrating weighted signals generated by the multipliers to produce a matched filter output. A branch metric calculator is provided for receiving the outputs of the matched filters and the estimates from the channel estimators for calculating a branch metric of the signals received by the antennas for coupling to a maximum likelihood sequence estimator. Specifically, in one embodiment, the branch metric calculator comprises a plurality of branch metric coefficient calculators which respectively receive output signals from the channel estimators to calculate branch metric coefficients. A plurality of first adders provide summation of output signals from the branch metric coefficient calculators with output signals from the matched filters, and a second adder provides summation of the outputs of the first adders to produce a branch metric. In a modified embodiment, the the branch metric calculator comprises a vector adder for providing vector summation of impulse response vectors from the channel estimators to produce an output impulse response vector. A branch metric coefficient calculator is provided for deriving a branch metric coefficient from the output impulse response vector. The output signals from the matched filters are summed with the branch metric coefficient to produce a branch metric. According to a second aspect of the present invention, a plurality of adaptive channel estimators are respectively coupled to the diversity antennas for deriving estimates of impulse responses of the communication channels respectively from the received sequences and a previously received signal. A plurality of branch metric calculators are also coupled respectively to the diversity antennas for deriving first branch metrics respectively from the received sequences. A branch metric quality estimator is coupled to the adaptive channel estimators for deriving from output signals of the channel estimators a plurality of branch metric quality estimates of the communications channels, respectively. A branch metric evaluation circuit is coupled to the branch metric calculators and the branch metric quality estimator for evaluating the first branch metrics in accordance with the branch metric quality estimates and producing a second branch metric. A maximum likelihood sequence estimator derives a maximum likelihood estimate of the received sequences from the second metric branch and applies it to the adaptive channel estimators as the previous signal. The present invention will be described in further detail with reference to the accompanying drawings, in which: FIG. 1 shows in block form a space diversity receiver according to a first embodiment of the present invention; FIG. 2 shows details of each channel estimator of FIG. 1; FIG. 3 shows in block form one embodiment of the branch metric calculator of FIG. 1; FIG. 4 shows in block form another embodiment of the branch metric calculator of FIG. 1; FIG. 5 shows in block form a space diversity receiver according to a second embodiment of the present invention; FIG. 6 shows details of each adaptive channel estimator of FIG. 5; FIG. 7 shows details of the branch metric quality estimator of FIG. 5; FIG. 8 shows details of the branch metric evaluation circuit of FIG. 5; FIG. 9 shows in block form a modification of the second embodiment of the present invention; and FIG. 10 shows details of the branch metric quality estimator of FIG. 9. Referring now to FIG. 1, there is shown a diversity receiver according to the present invention. The receiver has a plurality of diversity antennas 1.sub.1 -1.sub.n which are respectively coupled to matched filters 2.sub.1 -2.sub.n. Diversity antennas 1.sub.1 ˜1.sub.n are further coupled to channel estimators 3.sub.1 -3.sub.n, respectively, for generating estimates of the impulse responses of the corresponding channels from the point of transmission to the diversity antennas. Channel estimators 3.sub.1 -3.sub.n are associated respectively with matched filters 2.sub.1 ˜2.sub.n. The outputs of channel estimators 3.sub.1 ˜3.sub.n are respectively coupled to control inputs of the associated matched filters 2.sub.1 ˜2.sub.n to adaptively control their internal states, or tap weight coefficients. The outputs of channel estimators 3.sub.1 -3.sub.n are further applied to a branch metric calculator 4 to which the outputs of matched filters 2.sub.1 -2.sub.n are also applied. Branch metric calculator 4 derives a branch metric from the impulse response estimates and the outputs of the matched filters. A soft-decision Viterbi decoder 5, or maximum likelihood sequence estimator, of known design is coupled to the output of branch metric calculator 4. As is well known, the Viterbi decoder 5 comprises an add, compare and select (ACS) circuit and a path memory which is controlled by the output of ACS circuit to store branch metrics and detect a most likely symbol sequence for coupling to an output terminal 6 by tracing back through the stored metrics. As illustrated in FIG. 2, each channel estimator 3.sub.k (where k=1, 2, . . . n) is essentially of a transversal filter configuration comprising a tapped delay line with delay elements 7.sub.1 -7.sub.m-1 being connected in series to the associated diversity antenna 1.sub.k. Successive taps of the delay line are connected respectively to multipliers 8.sub.1 ˜8.sub.m whose tap weights are controlled by corresponding tap weight coefficients stored in a registor 9. In a practical aspect, the stored tap weight coefficients are in the form of a sequence of alternating symbols which may appear at periodic intervals, such as carrier recovery sequence in the preamble of a burst signal. The symbols received by antenna 1.sub.k are successively delayed and multiplied by the stored tap weight coefficients and summed by an adder 10 to produce a signal representative of the degree of cross-correlation between the arriving symbol sequence and the stored sequence. This signal is supplied from the adder 10 of each channel estimator 3.sub.k to the corresponding matched filter 2.sub.k as a channel impulse response estimate. The matched filter is a well known device capable of maximizing signal to noise ratio (S. Stein and J. J. Jones, "Modern Communication Principles With Application to Digital Signaling", McGraw-Hill, Inc.). Each matched filter is also a transversal-filter-like configuration with a tapped delay line, a plurality of tap weight multipliers coupled respectively to the taps of the delay line, and an adder for integrating the outputs of the multipliers over a symbol interval to produce a matched filter output. The tap weight coefficients of each matched filter 2.sub.k are controlled in accordance with the impulse response estimate of the corresponding communications channel which is supplied from the associated channel estimator 3.sub.k. Details of such matched filters are shown and described in the aforesaid Co-pending U.S. application. As shown in FIG. 3, the branch metric calculator 4 comprises a plurality of adders 11.sub.1 ˜11.sub.n corresponding respectively to matched filters 2.sub.1 -2.sub.n, a like number of branch metric coefficient calculators 12.sub.1 ˜12.sub.n, and an adder 13 whose output is coupled to the input of the Viterbi decoder 5. One input of each adder 11.sub.k is coupled to the output of corresponding matched filter 2.sub.k and another input of the adder is coupled to the output of corresponding branch metric coefficient calculator 12.sub.k. In this way, the output of each matched filter is summed with a corresponding branch metric coefficient by each adder 11 and further summed with the other outputs of adders 11 by adder 13 to produce a branch metric. The output of branch metric calculator 4 is coupled to Viterbi decoder 5 in which the maximum likelihood sequence estimation is made on the metrics to detect a most likely symbol sequence. In operation, a digitally modulated, sequentially coded symbol sequence is transmitted from a distant station and propagates over distinct fading channels. On reception, replicas of the original sequence are detected by diversity antennas 1.sub.1 -1.sub.n and filtered by corresponding matched filters 2.sub.1 -2.sub.n. The matched filters maximize the signal to noise ratios of the symbol sequences on the respective fading channels. Since the branch metric is a sum of the matched filter outputs and the branch metric coefficients uniquely determined by the impulse responses of the corresponding channels, the effect of white Gaussian noise on the branch metric can be reduced to a minimum. A modified form of the branch metric calculator is shown in FIG. 4. The modified branch metric calculator comprises an adder 14, a branch metric coefficient calculator 15 and a vector adder 16. The impulse response estimates from channel estimators 3.sub.1 ˜3.sub.n are applied to vector adder 16 as vectors h(k). and summed to produce a resultant vector H.as an estimate of an overall impulse responses of the channels. The output of vector adder 16 is applied to branch metric coefficient calculator 15 to compute a branch metric coefficient. The branch metric coefficient is applied to adder 14 in which it is summed with the outputs matched filters 2.sub.1 -2.sub.n to produce a branch metric for coupling to the Viterbi decoder 5. The modified branch metric calculator reduces multiplicative iterations required for deriving the metric coefficient by a factor 1/n as compared with the embodiment of FIG. 3. A second embodiment of the diversity receiver of this invention is shown in FIG. 5, which is particularly useful for systems in which the intersymbol interference is time-variant. This embodiment comprises a plurality of adaptive channel estimators 21.sub.1 -21.sub.n which are coupled respectively to diversity antennas 20.sub.1 -20.sub.n. Branch metric calculators 22.sub.1 -22.sub.n of known design are also coupled respectively to diversity antennas 20.sub.1 ˜20.sub.n and to adaptive channel estimators 21.sub.1 ˜21.sub.n. As will be described hereinbelow, each adaptive channel estimator 21.sub.k derives tap weight coefficients and supplies them as a vector h.sub.k (i+1) of the impulse response estimate of the channel k at the instant of time (i+1) to the associated branch metric calculator 22.sub.k in which the vector is combined with a received symbol sequence to produce a branch metric. The output of each branch metric calculator 22 is coupled to a branch metric evaluation circuit 24. Each channel estimator 21.sub.k further generates an error signal e.sub.k (i) which is applied to a branch metric quality estimator 23. Branch metric quality estimator 23 provides quality estimates of the branch metrics from branch metric calculators 22 and supplies its output signals to branch metric evaluation circuit 24 in which they are combined with the error signals to produce a final version of the branch metrics. The output of branch metric evaluation circuit 24 is applied to a soft-decision Viterbi decoder 25. The output of the Viterbi decoder 25 is supplied to an output terminal 26 on the one hand, and to adaptive channel estimators 21.sub.1 -21.sub.n on the other, as a feedback signal. As shown in detail in FIG. 6, each adaptive channel estimator 21.sub.k comprises a tapped delay line formed by a series of delay elements 30.sub.1 through 30.sub.m-1. To this tapped delay line is connected the output of the Viterbi decoder 25 to produce successively delayed versions of each decoded symbol across the delay line. Tap weight multipliers 31.sub.1 -31.sub.m-1 are coupled respectively to successive taps of the delay line to multiply the delayed signals by respective tap weight coefficients. An adder 32 produces a sum of the weighted signals for comparison with an signal supplied from a delay circuit 33. The output of the delay circuit 33 is the signal from the associated diversity antenna 20.sub.k which is delayed by an amount corresponding to the time elapsed for each signal element from the time it enters the receiver to the time it leaves the Viterbi decoder 25. A difference between the outputs of adder 32 and delay circuit 33 is taken by a subtracter 34 to produce the error signal e.sub.k, which is supplied to the branch metric quality estimator 23 as well as to a processor 35 to which the successive taps of the delay line are also Processor 35 has circuitry that initializes or conditions its internal state to produce an initial vector h.sub.k (i) of channel impulse response estimates at time i and computes a vector h.sub.k (i+1) of channel impulse response estimates at time i+1 using the following formula: h.sub.k (i+1)=h.sub.k (i)+Δe.sub.k where, Δ indicates the step size corresponding to the rate of variation of the intersymbol interference and i.sub.k (i), denotes the vector of complex conjugates of detected information symbols. As the process continues in a feedback fashion, the vector h.sub.k (i) is successively updated with the error component e.sub.k. The vector n.sub.k (i+1) of channel impulse response estimates is supplied to the associated branch metric calculator 22.sub.k as well as to multipliers 31.sub.1 -31.sub.m as tap weight coefficients. As shown in FIG. 7, the error signals from adaptive channel estimators 21.sub.1 ˜21.sub.n are supplied to squaring circuits 36.sub.1 -36.sub.n of branch metric quality estimator 23 to produce signals representative of the power of the error components. A like number of comparators 37.sub.1 -37.sub.n are respectively coupled to the outputs of squaring circuits 36.sub.1 -36.sub.n to determine if each of the detected power levels is higher or lower than a prescribed threshold value. If the input power is lower than the threshold value, each comparator generates a normal signal indicating that the quality of the received symbol is satisfactory. Conversely, if the power level is higher than the threshold, the comparator produces an alarm signal indicating that the received signal has corrupted. The outputs of comparators 37.sub.1 -37.sub.n are applied to branch metric evaluation circuit 24 on the one hand and to a controller 38 on the other. In response to each alarm signal, controller 38 supplies a control signal to that comparator which produced the alarm signal to cause it to maintain the alarm signal. This hysteresis operation eliminates the objectionable effect which would otherwise be produced by the channel estimators 21 when impulse response estimation goes out of order because of their diverging characteristics. As shown in detail in FIG. 8, the outputs of branch metric quality estimator 23 are applied to binary converters 39.sub.1 -39.sub.n, respectively, of branch metric evaluation circuit 24. On the other hand, the outputs of branch metric calculators 22.sub.1 -22.sub.n are coupled to multipliers 41.sub.1 ˜41.sub.n, respectively. Binary converters 39.sub.1 -39.sub.n convert the normal indicating signal to a unity value and the alarm signal to zero and supply their outputs to an adder 40 in which they are summed together to produce a signal indicating a total number of normal signals. The outputs of converters 39.sub.1 ˜39.sub.n are further supplied to multipliers 41.sub.1 -41.sub.n, respectively, so that the quality signal obtained from diversity antenna 20.sub.k is multiplied with the corresponding branch metric obtained from that diversity antenna. The outputs of multipliers 41.sub.1 ˜41.sub.n are summed by a second adder 42 to give a total value of quality-weighted branch metrics. The outputs of adders 40 and 42 are then supplied to an arithmetic division circuit 43 in which the total of the quality-weighted branch metrics is divided by the total number of normal signals to produce an output which is representative of the weighted mean value of the individual branch metrics, the output signal being coupled through an output terminal 44 as a final branch metric to the Viterbi decoder 25. A modified form of the embodiment of FIG. 5 is shown in FIG. 9 in a branch metric quality estimator 23A is used instead of branch metric quality estimator 23. Branch metric quality estimator 23A receives impulse response estimates h.sub.k (i+1) (or tap weight coefficients) from adaptive channel estimators 21.sub.1 -21.sub.n, rather than their error signals e.sub.k. As shown in detail in FIG. 10, the impulse response estimates from adaptive channel estimators 21.sub.1 ˜21.sub.n are supplied through delay circuits 50.sub.1 -50.sub.n to first input ports of variation detector 51.sub.1 -51.sub.n, respectively, on the one hand, and further supplied direct to second input ports of the corresponding variation detectors. Delay circuits 50.sub.1 -50.sub.n introduce a unit delay time to their input signals. Each of the variation detectors 51.sub.1 -51.sub.n calculates a differential vector Δh.sub.k between successive vectors of impulse response estimates h.sub.k (i-1) and h.sub.k (i). Each variation detector proceeds to calculate the absolute values of the components of the impulse response differential vector and detect a maximum value of the absolute values as an output signal of the variation detector. In this way, the output of each variation detector 51 represents the maximum level of variations that occurred during each unit time, or unit symbol time. Under normal circumstances, the speed of variation of channel impulse response at each diversity antenna is significantly smaller than the baud rate. Therefore, it can be considered that the validity of channel impulse response estimate is lost if the output of each variation detector is greater than the difference between adjacent signal points of digital modulation. The outputs of variation detectors 51.sub.1 ˜51.sub.n are supplied to comparators 52.sub.1 -52.sub.n, respectively, for making comparisons with a predefined threshold value representing the minimum value of difference between adjacent signal points of digital modulation. In a manner similar to the comparators of FIG. 7, the outputs of comparators 52.sub.1 -52.sub.n (either normal or alarm) are coupled to branch metric evaluation circuit 24 of FIG. 9 and further to a controller 53 which causes the comparators to maintain their alarm signals. If the signal to noise ratio of a given channel has degraded in comparison with other channels to such an extent that a significant error has occurred in impulse response estimation, such a condition is detected by branch metric quality estimator 23 and its adverse effect on other signals is suppressed. The foregoing description shows only preferred embodiments of the present invention. Various modifications are apparent to those skilled in the art without departing from the scope of the present invention which is only limited by the appended claims. Therefore, the embodiments shown and described are only illustrative, not restrictive. Cited Filing date Publication Applicant Title Patent date US4160952 * May 12, Jul 10, 1979 Bell Telephone Laboratories, Space diversity receiver with combined step and continuous phase control 1978 Incorporated US4733402 * Apr 23, Mar 22, 1988 Signatron, Inc. Adaptive filter equalizer systems US5031193 * Nov 13, Jul 9, 1991 Motorola, Inc. Method and apparatus for diversity reception of time-dispersed signals US5065411 * Jul 17, Nov 12, 1991 Nec Corporation Diversity receiving system for use in digital radio communication with means for selecting branch by estimating 1990 propagation path property 1 * A New Post detection Selection Diversity with MLSE Equalization, by Kazuhiro Okanoue et al. C&C Systems Research Laboratories, NEC Corporation, pp. 2 172. 2 A New Post-detection Selection Diversity with MLSE Equalization, by Kazuhiro Okanoue et al. C&C Systems Research Laboratories, NEC Corporation, pp. 2-172. 3 * Adaptive Equalization of the Slow Fading Channel, by Peter Monsen, IEE Transactions on Communications, vol. COM 22, No. 8, aug. 1974, pp. 1064 1075. 4 Adaptive Equalization of the Slow Fading Channel, by Peter Monsen, IEE Transactions on Communications, vol. COM-22, No. 8, aug. 1974, pp. 1064-1075. 5 * Adaptive Matched Filter and its Significance to Anti multipath Fading, by Kojiro Watanabe, C&C Systems Research Laboratories, NEC Corporation, pp. 1455 1459. 6 Adaptive Matched Filter and its Significance to Anti-multipath Fading, by Kojiro Watanabe, C&C Systems Research Laboratories, NEC Corporation, pp. 1455-1459. 7 * The Viterbi Algorithm Applied to Digital Data Transmission, by J. F. Hayes, pp. 15 20. 8 The Viterbi Algorithm Applied to Digital Data Transmission, by J. F. Hayes, pp. 15-20. Citing Patent Filing Publication Applicant Title date date US5272726 * Jul 31, Dec 21, Nec Corporation Blind type sequence estimator for use in communications system US5313495 * May 12, May 17, Hughes Aircraft Company Demodulator for symbols transmitted over a cellular channel US5363412 * Dec 28, Nov 8, 1994 Motorola, Inc. Method and apparatus of adaptive maximum likelihood sequence estimation using filtered correlation synchronization US5379324 * Jan 25, Jan 3, 1995 Motorola, Inc. System and method for calculating channel gain and noise variance of a communication channel US5432821 * Dec 2, Jul 11, University Of Southern System and method for estimating data sequences in digital transmissions 1992 1995 California US5481572 * Aug 2, Jan 2, 1996 Ericsson Inc. Method of and apparatus for reducing the complexitiy of a diversity combining and sequence estimation receiver US5497383 * Jan 22, Mar 5, 1996 Motorola, Inc. Error detector circuit for receiver operative to receive discretely-encoded signals US5497401 * Nov 18, Mar 5, 1996 Thomson Consumer Electronics, Branch metric computer for a Viterbi decoder of a punctured and pragmatic trellis code convolutional decoder suitable for use in a 1994 Inc. multi-channel receiver of satellite, terrestrial and cable transmitted FEC compressed-digital television data US5533067 * Jun 23, Jul 2, 1996 Telefonaktiebolaget Lm Ericsson Method and device for estimating transmitted signals in a receiver in digital signal transmission operations US5566209 * Feb 10, Oct 15, Telefonaktiebolaget Lm Ericsson Transceiver algorithms of antenna arrays US5586128 * Nov 17, Dec 17, Ericsson Ge Mobile System for decoding digital data using a variable decision depth 1994 1996 Communications Inc. US5608763 * Dec 30, Mar 4, 1997 Motorola, Inc. Method and apparatus for decoding a radio frequency signal containing a sequence of phase values US5621769 * Jun 8, Apr 15, Novatel Communications Ltd. Adaptive-sequence-estimation apparatus employing diversity combining/selection US5640432 * Feb 9, Jun 17, Roke Manor Research Limited Co-channel interference suppression system US5724390 * Mar 2, Mar 3, 1998 Lucent Technologies Inc. MLSE before derotation and after derotation US5727032 * Oct 6, Mar 10, Telefonaktiebolaget Lm Ericsson Method and device for estimating transmitted signals in a receiver in digital signal transmission operations US5752168 * Mar 21, May 12, Thomson-Csf Multisensor reception method for a fixed base station of a communications network exchanging data with mobile stations and device 1996 1998 for its implementation US5764690 * Jun 4, Jun 9, 1998 Motorola, Inc. Apparatus for despreading and demodulating a burst CDMA signal US5784403 * Feb 3, Jul 21, Omnipoint Corporation Spread spectrum correlation using saw device US5790606 * Feb 24, Aug 4, 1998 Ericsson Inc. Joint demodulation using spatial maximum likelihood US5793814 * Feb 27, Aug 11, Siemens Aktiengesellschaft Transmission method for simultaneous synchronous or asynchronous transmission of K data sequences consisting of data symbols US5844947 * Dec 23, Dec 1, 1998 Lucent Technologies Inc. Viterbi decoder with reduced metric computation US5844951 * Mar 10, Dec 1, 1998 Northeastern University Method and apparatus for simultaneous beamforming and equalization US6026121 * Jul 25, Feb 15, At&T Corp Adaptive per-survivor processor US6085076 * Apr 7, Jul 4, 2000 Omnipoint Corporation Antenna diversity for wireless communication system US6104768 * Mar 30, Aug 15, Lucent Technologies Inc. Diversity antenna system US6115591 * Nov 4, Sep 5, 2000 Samsung Electronics Co., Ltd. Space diversity receiver for use in radio transmission system and method thereof US6137843 * Jul 28, Oct 24, Ericsson Inc. Methods and apparatus for canceling adjacent channel signals in digital communications systems US6148041 * May 7, Nov 14, Ericsson Inc. Joint demodulation using spatial maximum likelihood US6188736 * Apr 21, Feb 13, At&T Wireless Svcs. Inc. Near-optimal low-complexity decoding of space-time codes for fixed wireless applications US6330294 * May 15, Dec 11, Cselt- Centro Studi E Laboratori Method of and apparatus for digital radio signal reception 1998 2001 Telecomuniicazioni S.P.A. US6339624 * Aug 14, Jan 15, Stanford Telecommunications, Trellis decoding with multiple symbol noncoherent detection and diversity combining 1998 2002 Inc. US6370182 * Feb 8, Apr 9, 2002 Itt Manufacturing Enterprises, Integrated beamforming/rake/mud CDMA receiver architecture 2001 Inc. US6421372 * Nov 8, Jul 16, Itt Manufacturing Enterprises, Sequential-acquisition, multi-band, multi-channel, matched filter 2000 2002 Inc. US6448926 * Aug 23, Sep 10, Itt Manufacturing Enterprises, Multi-band, multi-function integrated transceiver 1999 2002 Inc. US6470043 * Oct 17, Oct 22, At&T Wireless Services, Inc. Near-optimal low-complexity decoding of space-time codes for fixed wireless applications US6483866 * Oct 11, Nov 19, Ntt Mobile Communications Multi-station transmission method and receiver for inverse transforming two pseudo-orthogonal transmission sequences used for 1994 2002 Network Inc. metric calculation and base station selection based thereon US6487255 * Aug 31, Nov 26, Ericsson Inc. Information generation for coherent demodulation of differentially encoded signals US6487259 Aug 24, Nov 26, Motorola, Inc. Partially-parrallel trellis decoder apparatus and method US6556632 * May 22, Apr 29, Mitsubishi Denki Kabushiki Sequence estimation method and sequence estimator 1998 2003 Kaisha US6590860 * Mar 17, Jul 8, 2003 Sony Corporation Receiving device and signal receiving method US6741635 Sep 3, May 25, At&T Wireless Services, Inc. Near-optimal low-complexity decoding of space-time codes for fixed wireless applications US6771722 * Jul 31, Aug 3, 2004 Motorola, Inc. Channel estimator and method therefor US6775329 Dec 5, Aug 10, At&T Wireless Services, Inc. Transmitter diversity technique for wireless communications US6807240 Dec 3, Oct 19, At&T Wireless Services, Inc. Low complexity maximum likelihood detection of concatenate space codes for wireless applications US6853688 Dec 30, Feb 8, 2005 Cingular Wireless Ii, Llc Low complexity maximum likelihood detection of concatenated space codes for wireless applications US6909759 * Dec 19, Jun 21, Texas Instruments Incorporated Wireless receiver using noise levels for postscaling an equalized signal having temporal diversity US6996196 Dec 10, Feb 7, 2006 Mitsubishi Denki Kabushiki Sequence estimation method and sequence estimator 2002 Kaisha US7006800 * Jun 5, Feb 28, National Semiconductor Signal-to-noise ratio (SNR) estimator in wireless fading channels 2003 2006 Corporation US7010073 * Jan 11, Mar 7, 2006 Qualcomm, Incorporated Delay lock loops for wireless communication systems US7046719 * Mar 8, May 16, Motorola, Inc. Soft handoff between cellular systems employing different encoding rates US7046737 May 4, May 16, Cingular Wireless Ii, Llc Near-optimal low-complexity decoding of space-time codes for wireless applications US7120200 Jun 22, Oct 10, Cingular Wireless Ii, Llc Transmitter diversity technique for wireless communications US7123667 Jul 6, Oct 17, Mitusbishi Denki Kabushiki Receiver for wireless communication 2001 2006 Kaisha US7133457 * Jun 27, Nov 7, 2006 Texas Instruments Incorporated Joint timing recovery for multiple signal channels US7236548 * Jun 28, Jun 26, Koninklijke Philips Electronics Bit level diversity combining for COFDM system 2002 2007 N.V. US7266168 * Jul 18, Sep 4, 2007 Interdigital Technology Groupwise successive interference cancellation for block transmission with reception diversity 2003 Corporation US7274752 Oct 12, Sep 25, Cingular Wireless Ii, Llc Maximum ratio transmission US7340016 Jul 12, Mar 4, 2008 Telefonaktiebolaget Lm Ericsson Equalizers for multi-branch receiver 2004 (Publ) US7362823 Jun 22, Apr 22, Cingular Wireless Ii, Llc Maximum ratio transmission US7386077 Sep 28, Jun 10, At&T Mobility Ii Llc Receiver diversity technique for wireless communications US7409017 * Feb 1, Aug 5, 2008 Ntt Docomo, Inc. Signal separator US7421046 * Feb 28, Sep 2, 2008 Vixs Systems Inc. Method and apparatus for signal decoding in a diversity reception system with maximum ratio combining US7453965 * Jan 16, Nov 18, At&T Corp. Differential transmitter diversity technique for wireless communications US7463694 Aug 30, Dec 9, 2008 Interdigital Technology Groupwise successive interference cancellation for block transmission with reception diversity 2007 Corporation US7526040 Mar 8, Apr 28, At&T Mobility Ii Llc Near-optimal low-complexity decoding of space-time codes for fixed wireless applications US7555068 * Mar 24, Jun 30, Texas Instruments Incorporated Producing control signals indicating transmit diversity and no transmit diversity US7570576 * Sep 9, Aug 4, 2009 Broadcom Corporation Detection and mitigation of temporary (bursts) impairments in channels using SCDMA US7587007 Oct 30, Sep 8, 2009 At&T Mobility Ii Llc Transmitter diversity technique for wireless communications US7609771 Feb 21, Oct 27, At&T Mobility Ii Llc Maximum ratio transmission US7643568 Dec 21, Jan 5, 2010 At&T Mobility Ii Llc Low complexity maximum likelihood detection of concatenated space codes for wireless applications US7809336 * Jun 10, Oct 5, 2010 Qualcomm Incorporated Rate selection for a quasi-orthogonal communication system US7817760 Mar 3, Oct 19, Qualcomm Incorporated Delay lock loops for wireless communication systems US7913154 * Feb 28, Mar 22, Agere Systems Inc. Method and apparatus for pipelined joint equalization and decoding for gigabit communications US7916806 Jul 26, Mar 29, At&T Mobility Ii Llc Transmitter diversity technique for wireless communications US7940852 * Dec 2, May 10, At&T Intellectual Property Ii, Near optimal joint channel estimation and data detection for COFDM systems 2008 2011 L.P. US8073067 Nov 2, Dec 6, 2011 Interdigital Technology Method and apparatus for transferring signals in a wireless communication system 2009 Corporation US8107355 * Aug 4, Jan 31, Broadcom Corporation Detection and mitigation of temporary (bursts) impairments in channels using SCDMA US8179991 Mar 24, May 15, At&T Mobility Ii Llc Near-optimal low-complexity decoding of space-time codes for fixed wireless applications US8223827 * May 5, Jul 17, Agere Systems Inc. Method and apparatus for generating filter tap weights and biases for signal dependent branch metric computation US8284854 Dec 8, Oct 9, 2012 Interdigital Technology Groupwise successive interference cancellation for block transmission with reception diversity 2008 Corporation US8300747 * Apr 20, Oct 30, Broadcom Corporation Method and system for an adaptive VBLAST receiver for wireless multiple input multiple outout (MIMO) detection US8351545 Dec 30, Jan 8, 2013 At&T Mobility Ii Llc Low complexity maximum likelihood detection of concatenated space codes for wireless applications US8355475 Mar 28, Jan 15, At&T Mobility Ii Llc Diversity technique for wireless communications US8422963 Jul 21, Apr 16, Qualcomm Incorporated Rate selection for a quasi-orthogonal communication system US8520746 Sep 17, Aug 27, At&T Mobility Ii Llc Maximum ratio transmission US8553820 Jun 28, Oct 8, 2013 Interdigital Technology Groupwise successive interference cancellation for block transmission with reception diversity 2012 Corporation US8588350 * Nov 6, Nov 19, Koninklijke Philips N.V. Diversity receiver having cross coupled channel parameter estimation US20050249273 May 5, Nov 10, Ashley Jonathan J Method and apparatus for generating filter tap weights and biases for signal dependent branch metric computation * 2004 2005 US20100202573 Apr 20, Aug 12, Ling Su Method and system for an adaptive vblast receiver for wireless multiple input multiple outout (mimo) detection * 2010 2010 WO1994015427A1 Nov 26, Jul 7, 1994 Motorola Inc Method and apparatus of adaptive maximum likelihood sequence estimation using filtered correlation synchronization * 1993 WO1994017472A1 Dec 23, Aug 4, 1994 Motorola Inc Error detector circuit for receiver operative to receive discretely-encoded signals * 1993 WO2001059962A1 Feb 9, Aug 16, Itt Mfg Enterprises Inc Integrated beamforming/rake/mud cdma receiver architecture * 2001 2001 WO2006052407A2 Oct 18, May 18, Interdigital Tech Corp Normalized least mean square chip-level equalization advanced diversity receiver * 2005 2006 U.S. Classification 375/347, 375/343, 375/341, 375/349 International Classification H04L25/03, H04L1/06, H04B7/08, H04L1/20 Cooperative Classification H04B7/0845, H04L1/06, H04L1/20, H04L25/03331, H04L25/03184 European Classification H04B7/08C4C, H04L1/20, H04L25/03B7M, H04L1/06 Date Code Event Description Sep 8, 2004 FPAY Fee payment Year of fee payment: 12 Sep 25, 2000 FPAY Fee payment Year of fee payment: 8 Sep 30, 1996 FPAY Fee payment Year of fee payment: 4 Owner name: NEC CORPORATION, JAPAN Nov 5, 1992 AS Assignment Free format text: RE-RECORD TO CORRECT SERIAL NUMBER PREVIOUSLY RECORDED ON REEL 5681 FRAME 970.;ASSIGNOR:OKANOUE, KAZUHIRO;REEL/FRAME:006318/0177 Effective date: 19910415 Original Image
{"url":"http://www.google.com/patents/US5202903?dq=6480844","timestamp":"2014-04-23T16:10:59Z","content_type":null,"content_length":"127285","record_id":"<urn:uuid:75df0b5e-59cc-4c2c-951b-ec9729ec8c4a>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00081-ip-10-147-4-33.ec2.internal.warc.gz"}
Otherwise, the integral promotions are performed on both operands. Then, these rules are applied: • If either operand has type unsigned long long int, the other operator is converted to unsigned long long int. • If either operand has type long long int, the other operator is converted to long long int. • Otherwise, when you compile on SPARC V9 only and specify cc -xc99=none, if one operand has type long int and the other has type unsigned int, both operands are converted to unsigned long int. • Otherwise, if either operand has type long int, the other operand is converted to long int. • Otherwise, if either operand has type unsigned int, the other operand is converted to unsigned int.
{"url":"http://docs.oracle.com/cd/E19205-01/819-5265/6n7c29cko/index.html","timestamp":"2014-04-16T04:28:07Z","content_type":null,"content_length":"4915","record_id":"<urn:uuid:4c10b85c-4f23-47f8-8a19-3e4ad7a438e8>","cc-path":"CC-MAIN-2014-15/segments/1397609521512.15/warc/CC-MAIN-20140416005201-00227-ip-10-147-4-33.ec2.internal.warc.gz"}
vector subspace help June 6th 2013, 03:22 AM #1 Super Member Sep 2008 vector subspace help show that U is a subspace of the given vetor space V. V = F(R), ( the set of all functions from R into R) , U is the set of all even functions in V. I am really stuck on this, if U is the set of all even functions in V, than x^2 must be in U? hence the set is non-empty, but i am not sure how to show that U is a subspace of V. I know I have to show that if you take two elements from U and add them you should still get an element in U, but dont know how to go about it? Any help appreciated. thank you Re: vector subspace help addition is defined by (f1 + f2)(x)=f1(x) + f2(x) so, (f1 + f2)(-x)=f1(-x) + f2(-x) = f1(x) + f2(x)= (f1 + f2)(x). hence f1 + f2 belongs to U whenever f1 , f2 belongs to U. similarly for multiplication. June 6th 2013, 03:51 AM #2 Jan 2013 kolkata ( India)
{"url":"http://mathhelpforum.com/advanced-algebra/219619-vector-subspace-help.html","timestamp":"2014-04-21T00:13:19Z","content_type":null,"content_length":"32667","record_id":"<urn:uuid:a0082da4-2e69-4a30-9bac-ff2d334a03fd>","cc-path":"CC-MAIN-2014-15/segments/1397609539337.22/warc/CC-MAIN-20140416005219-00615-ip-10-147-4-33.ec2.internal.warc.gz"}
Demonstrate that F(x)=e^xsinx is primitive of f(x)=e^x(sinx+cosx)? - Homework Help - eNotes.com Demonstrate that F(x)=e^xsinx is primitive of f(x)=e^x(sinx+cosx)? You need to test if `F(x)` is the primitive of the function `f(x)` , hence, you need to check if `F'(x) = f(x)` , such that: `F'(x) = (e^x*sin x)'` You need to differentiate the function `F(x)` with respect to `x` , using the product rule, such that: `F'(x) = (e^x)'*sin x + e^x*(sin x)'` `F'(x) = e^x*sin x + e^x*cos x` Factoring out `e^x` yields: `F'(x) = e^x*(sin x + cos x)` Comparing the equation of F'(x) with the equation of `f(x)` yields that they coincide. Hence, testing if `F(x)` is the primitive of `f(x)` yields that `F'(x)` `= f(x)` , hence, the statement holds. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/demonstrate-that-f-x-e-xsinx-primitive-f-x-e-x-456350","timestamp":"2014-04-18T03:58:32Z","content_type":null,"content_length":"25432","record_id":"<urn:uuid:61a48428-d7b9-4e7a-a1b4-8cbd2c7a4c11>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00429-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Find a potential function for: F = y sin(z)i + x sin(z)j + xy cos(z)k • one year ago • one year ago Best Response You've already chosen the best response. By "potential function" you mean some function \(f(x,y,z)\) of three variables such that \[\frac{\partial}{\partial x}f(x,y,z)=y\sin(z)\]\[\frac{\partial}{\partial y}f(x,y,z)=x\sin(z)\]and\[\frac {\partial}{\partial z}f(x,y,z)=xy\cos(z)\]correct? Best Response You've already chosen the best response. The potential of F is any function f such that del f = F. Best Response You've already chosen the best response. Good to know. Then, to solve this, notice that \[\frac{\partial}{\partial x}f(x,y,z)=y\sin(z)\]is a constant function of \(x\). Likewise, \[\frac{\partial}{\partial y}f(x,y,z)=x\sin(z)\]is a constant function of \(y\). What does that tell you about what the function \(f(x,y,z)\) has to look like? Best Response You've already chosen the best response. I got this: xycos(z), xycos(z), -xysin(z). Is this correct? Best Response You've already chosen the best response. Don't quote me on this, but I believe George gave you part of the answer. Now if I remember correctly all you have to do integrate each part of his answer with respect to x, y, and z and then find the constant of integration. That is, intergrate the first part with respect to x, the second part with respect to y, and the last part with respect to z. Then find the constant of integration. I can't do it for you because I am not really sure how to go about it. I'ts been over 10 years since I graduate it from engineering school. Best Response You've already chosen the best response. There is a methodical way of doing this. Best Response You've already chosen the best response. I did the integration part and got the above answer but how do I find the constant of integration? Best Response You've already chosen the best response. \[ \frac{\partial}{\partial x}f(x,y,z)=y\sin(z) \implies f(x, y, z) = xy\sin(z)+g(y, z) \] Best Response You've already chosen the best response. So then you integrate another function to solve for \(g(y, z)\). Best Response You've already chosen the best response. Since you're asked for "a" potential function, you don't need to worry about a general constant C, since any constant will work, you can just choose 0 and be done with it. Best Response You've already chosen the best response. If I remember correctly I think you can equate all our integrals to each other and then compare coefficients. May be you would be better off asking one of the resident geniuses. Best Response You've already chosen the best response. From what I'm seeing, wio's way of doing this will get you to a solution, rather easily. Best Response You've already chosen the best response. \[ \frac{\partial}{\partial y} xy\sin(z) +g(y, z) =x \sin(z) +g'(y, z) \]\[ \frac{\partial}{\partial y}f(x,y,z)=x\sin(z) \]This tells us what? Best Response You've already chosen the best response. It seems that \(g'(y, z) = 0\). So we know that \(g(y, z)\) is a constant with respect to \(y\). Best Response You've already chosen the best response. wonder why none of the local residents just give you the answer Best Response You've already chosen the best response. ^^because giving answers for free is against the Code of Conduct. Best Response You've already chosen the best response. So we have: \[ f(x, y, z) =xy\sin(z) +h(z) \]\[ \frac{\partial}{\partial z} xy\sin(z) +h(z) = xy\cos(z) + h'(z) \]\[ \frac{\partial}{\partial z}f(x,y,z)=xy\cos(z) \]So \(h'(z)=0\). It's pretty obvious now that our potential function is just: \[ f(x, y, z) = xy\sin(z) +C \]And as @KingGeorge said, the \(C\) will work for any constant. In fact there just isn't anyway for us to know what \(C\) is. Best Response You've already chosen the best response. what is "giving the answer for free?" Best Response You've already chosen the best response. @blackjesus It's giving them a solution without having them put forth any effort on their own part to solve it. Best Response You've already chosen the best response. ^^well said. Best Response You've already chosen the best response. Ideally you walk the through it. Sometimes people are so confused that they need a walk through of a problem to understand the method. Best Response You've already chosen the best response. Oh! Without "giving" the answer away, if you take wio's answer and find the gradient of it. If you end up with the function F that you started with. You have the right answer. Best Response You've already chosen the best response. Addendum: The only way to learn Math is to do lots of problems. However, answering a question with a question will not help the student learn anything. I have to go now. Good bye. Good luck. Best Response You've already chosen the best response. Yes, the methodology is as follows: 1) Integrate the function in terms of \(x\) (or some other variable). \( f(x, y, z) = \int F_xdx +g(y, z) \) 2) Take the derivative in terms of \(y\) (or some other variable). Then solve for \(g'(y, z)\). \(F_y = \frac{\partial }{\partial y}\int F_xdx +g'(y, z)\) \(g'(y, z) = F_y - \frac{\partial }{\partial y}\int F_xdx \) 3) Integrate in terms of \(y \). \(\int g'(y, z)dy = g(y, z) + h(z) \) 4) Solve for \(h(z)\) the same way we did for \(g(y, z)\). Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/50df886de4b0f2b98c874fef","timestamp":"2014-04-18T00:31:56Z","content_type":null,"content_length":"84988","record_id":"<urn:uuid:b6ad4385-137c-4b47-8353-e5b9d6180bb2>","cc-path":"CC-MAIN-2014-15/segments/1397609532374.24/warc/CC-MAIN-20140416005212-00171-ip-10-147-4-33.ec2.internal.warc.gz"}
d'Alembert's formula October 3rd 2011, 06:24 AM #1 Oct 2011 d'Alembert's formula Good morning all, I am stuck on this question. Use d'Alembert's formula to write u(x,t) = x-at in the form u(x,t)=f(x+ct)+g(x-ct). I don't know how to do this so any help would be very nice. Re: d'Alembert's formula Two thoughts. First, if you allow that $c = a$, then you already have the form. Second, if you let $k_1(x+ct) + k_2(x-ct) = x - at.$ If you expand and equate like terms, you'll find two equations for $k_1$ and $k_2$. Re: d'Alembert's formula If I were you, I would start by stating d'Alembert's formula as clearly as possible. October 4th 2011, 06:31 AM #2 October 7th 2011, 11:02 AM #3 MHF Contributor Apr 2005
{"url":"http://mathhelpforum.com/differential-equations/189432-d-alembert-s-formula.html","timestamp":"2014-04-18T04:46:03Z","content_type":null,"content_length":"36944","record_id":"<urn:uuid:7057d90c-3770-44ca-9936-96e8d68bcdbc>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00138-ip-10-147-4-33.ec2.internal.warc.gz"}
Throwing Spear - Part 1 Throwing Spear – Part 1 Posted on January 4, 2010 Instead of writing another huge post, I’m going to cut this one into several pieces which will be posted throughout the week. This may turn out to be a better format in general. IN THIS POST: I’ll introduce the purpose and scope of the current discussion. I’ll also introduce some concepts that will help with the parts to follow. I’ve been introduced to cord-making and flintknapping and now I want to put the two together. I plan to make a throwing spear. But how? There are many characteristics of a spear that can be varied. What would work best, assuming that I will be the thrower? How would paleolithic people have figured it out? Probably, at first, by exhaustive trial and error. Actually, I shouldn’t say “exhaustive”. I’ll give them a little more credit and say “heuristic”. When enough possibilities have been tried, discernible patterns would emerge and humans would be good at choosing next possibilities to try which are more likely to be an improvement than not. I don’t have the time for trial and error. But what I do have is modern scientific knowledge. In this case, I’m talking about Newtonian physics. Since both of those tactics will sooner or later approach an optimal result, I’m going to “cheat” and hope that reasoning with Newtonian mechanics will get me a similar result as generations of trial and error would. So … a quick “review” of Newtonian mechanics… Newtonian mechanics can really be summarized using just one simple equation. Oops! I said the E word. No, don’t go! It’s not that scary. I promise. That one equation is $F = ma$, read “force equals mass times acceleration”. The rest is just properly defining $F$, $m$, and $a$. $m$ is the easiest. In this model, all material objects have a property called “mass”. This is easily related to common experience because more massive objects are also heavier. Defining acceleration is only a little more tricky. At any given moment in time, a material object has a position. Its position can be described as its three-dimensional coordinates in space (x, y, z). Its velocity is the rate of change (also called the time derivative) of its position (in each of its three coordinates). That’s different than speed because it tells you not only how fast the object is moving, but also the direction in which it’s moving. If you in turn take the rate of change of the velocity, then you get acceleration. Roughly speaking, that’s how fast the object is speeding up – but like velocity, there is also a direction component. For that reason, acceleration sometimes describes how fast the object is slowing down or even the way its trajectory is curving. Now we come to force. Intuitively, force describes how strongly an object is being pushed or pulled and in what direction. There are many things that exercise forces on objects. The ones we’ll be concerned with here are gravity, friction, the normal force (when one object is directly pushing on another), and air resistance. In the equation $F = ma$, $F$ stands for the sum of all the forces acting upon the object in question. As an example, if you put a cup down on a table, the force of gravity “tries” to accelerate it downward. But the table exerts a contact force (normal force and probably a little friction) on the cup which is exactly opposite the force of gravity on the cup. So the sum of the forces on the cup is zero. Solving $F = ma$ for $a$ with $F = 0$ shows us that the acceleration of the cup will indeed be zero, regardless of the cup’s mass – it’ll stay put! So far, what I’ve been calling “objects” have positions which change over time, but not orientations. A spear, of course, can both translate (travel) and rotate. So how do we describe that? It turns out that the materials around us are actually made up of lots of little “objects” called particles. In a solid object, such as a spear, forces between the particles (chemical bonds) hold them in an essentially rigid configuration. Now that we’re talking about a rigid body of particles, it makes perfect sense to discuss orientation. Although the individual particles might not have orientations in this model, the entire constellation of particles can certainly rotate through space. IN THE NEXT POST: I’ll talk about torques (rotational forces) and begin applying these physics concepts to the problem of the ideal spear. I’ll also discuss what a spear actually is and what different kinds of spears there are. LINK: Part 2 Thank you for reading! 8 Responses “Throwing Spear – Part 1” → 1. Sounds like you’re making a hand-thrown spear. The next step of course is the atlatl…to see how to do it right, go to http://www.atlatl.com. □ Thanks! 2. Ian, it’s been years since my high school physics, and you are explaining these concepts so well. Please continue, and thank you. Your paleolithic journey is fascinating. □ Thank you! 3. IMO it’s easier to say that velocity is the 1st derivative of position in space (with regard to a given reference frame)with respect to time and acceleration is the 2nd derivative. You can treat angular (rotational) velocities and accelerations similarly. □ That is certainly easier if you’ve studied calculus. ;-) But I’m trying to make this more accessible to readers who haven’t – at least until I actually get to the invention of calculus. ☆ Point taken. I can’t wait until you invent calculus. ○ Me too!
{"url":"http://grokproject.net/2010/01/04/throwing-spear-part-1/","timestamp":"2014-04-20T18:28:28Z","content_type":null,"content_length":"67176","record_id":"<urn:uuid:91206638-4c3f-4917-9065-51fece80c41e>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00003-ip-10-147-4-33.ec2.internal.warc.gz"}
Normal distribution beams laid side by side March 11th 2011, 03:20 AM #1 Sep 2010 Normal distribution beams laid side by side Beams of mean width = 300.765mm and standard deveaition of 7.86. If 30 beams are laid side by side what width does support to beams need to be such that there is only a 1:1000 risk that the edge beams will not be fully supported? Assume normal distribution I have tried by using the standard normal tables to find Z relating to 1 beam and multplying it by 30 ie (P) = 0.001 therefore Z =3.9 Z= X-mean/std. dev X = Z*std dev + mean = 3.9*7.86 + 300.765= 331.42mm Thus support must be 30*331.42= 9943mm wide However this does not seem correct. The probability of 30 of the units being 331 is likely to be even less than 1:1000. Can anyone help me to properly integrate the number of beams into the Last edited by mr fantastic; March 11th 2011 at 12:49 PM. Reason: Deleted irrelevant text. Beams of mean width = 300.765mm and standard deveaition of 7.86. If 30 beams are laid side by side what width does support to beams need to be such that there is only a 1:1000 risk that the edge beams will not be fully supported? Assume normal distribution I have tried by using the standard normal tables to find Z relating to 1 beam and multplying it by 30 ie (P) = 0.001 therefore Z =3.9 Z= X-mean/std. dev X = Z*std dev + mean = 3.9*7.86 + 300.765= 331.42mm Thus support must be 30*331.42= 9943mm wide However this does not seem correct. The probability of 30 of the units being 331 is likely to be even less than 1:1000. Can anyone help me to properly integrate the number of beams into the The width of 30 beams laid side by side is normally distributed with mean $30\times 300.765$ mm and SD $\sqrt{30}\times 7.86$ mm, and since the critical $z$value for a right tail probability on $0.001$ is $3.09$ you want: $30\times 300.765+3.09\times \sqrt{30}\times 7.86\approx 9155.98$ mm Thanks for your help March 11th 2011, 10:21 PM #2 Grand Panjandrum Nov 2005 March 12th 2011, 03:05 AM #3 Sep 2010
{"url":"http://mathhelpforum.com/advanced-statistics/174229-normal-distribution-beams-laid-side-side.html","timestamp":"2014-04-19T12:48:57Z","content_type":null,"content_length":"38370","record_id":"<urn:uuid:7dca50f5-d1ab-41e3-af0e-5e20488c3df5>","cc-path":"CC-MAIN-2014-15/segments/1397609537186.46/warc/CC-MAIN-20140416005217-00273-ip-10-147-4-33.ec2.internal.warc.gz"}
New Books for 08/24/2007 AUTHOR: Massel, Stanislaw R. TITLE: Ocean waves breaking and marine aerosol fluxes / Stanislaw R. Massel. PUBLISHER: New York ; London : Springer, 2007. SERIES: Atmospheric and oceanographic sciences library v. 38 CALL NUMBER: GC 211.2 .M37 2007 CIMM AUTHOR: Bandyopadhyay, Sanghamitra, 1968- TITLE: Classification and learning using genetic algorithms : applications in bioinformatics and web intelligence / Sanghamitra Bandyopadhyay, Sankar K. Pal. PUBLISHER: Berlin ; New York : Springer, c2007. SERIES: Natural computing series, CALL NUMBER: Q 327 .B36 2007 CIMM AUTHOR: Haran, Dan. TITLE: Projective group structures as absolute Galois structures with block approximation / Dan Haran, Moshe Jarden, Florian Pop. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Memoirs of the American Mathematical society, no. 884 CALL NUMBER: QA 3 .A57 no. 884 CIMM AUTHOR: Cha, Jae Choon, 1971- TITLE: The structure of the rational concordance group of knots / Jae Choon Cha. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Memoirs of the American Mathematical Society, no. 885 CALL NUMBER: QA 3 .A57 no. 885 CIMM AUTHOR: Gurahick, Robert M., 1950- TITLE: Symmetric and alternating groups as monodromy groups of Riemann surfaces I : generic covers and covers with many branch points / Robert M. Gurahick, John Shareshian ; with an appendix by R. Gurahick and J. Stafford. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Memoirs of the American Mathematical Society, no. 886 CALL NUMBER: QA 3 .A57 no. 886 CIMM AUTHOR: Wahl, Charlotte, 1972- TITLE: Noncommutative Maslov index and Eta-forms / Charlotte Wahl. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Memoirs of the American Mathematical Society, no. 887 CALL NUMBER: QA 3 .A57 no. 887 CIMM TITLE: Felix Berezin : the life and death of the mastermind of supermathematics / editor, M. Shifman. PUBLISHER: Singapore ; Hackensack, NJ : World Scientific, c2007. CALL NUMBER: QA 29 .B517 F45 2007 CIMM AUTHOR: Szpiro, George, 1950- TITLE: Poincaré's prize : the hundred-year quest to solve one of math's greatest puzzles / George G. Szpiro. PUBLISHER: New York : Dutton, c2007. CALL NUMBER: QA 43 .S985 2007 CIMM AUTHOR: Street, Deborah J. TITLE: The construction of optimal stated choice experiments : theory and methods / Deborah J. Street, Leonie Burgess. PUBLISHER: Hoboken, N.J. : Wiley, c2007. SERIES: Wiley series in probability and statistics CALL NUMBER: QA 166.25 .S77 2007 CIMM AUTHOR: Mazia, V. G. TITLE: Approximate approximations / Vladimir Mazya, Gunther Schmidt. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Mathematical surveys and monographs ; v. 141 CALL NUMBER: QA 221 .M37 2007 CIMM AUTHOR: Nahin, Paul J. TITLE: Chases and escapes : the mathematics of pursuit and evasion / Paul J. Nahin. PUBLISHER: Princeton : Princeton University Press, c2007. CALL NUMBER: QA 272 .N34 2007 CIMM AUTHOR: Karlin, Samuel, 1923- TITLE: A second course in stochastic processes / Samuel Karlin, Howard M. Taylor. PUBLISHER: New York : Academic Press, c1981. CALL NUMBER: QA 274 .K372 CIMM TITLE: Function spaces : fifth conference on function spaces, May 16-20, 2006, Southern Illinois University, Edwardsville, Illinois / Krzysztof Jarosz, editor. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Contemporary mathematics, 435 CALL NUMBER: QA 323 .C66 2006 CIMM TITLE: Argos seminar on intersections of modular correspondences. PUBLISHER: Paris : Société mathématique de France, c2007. SERIES: Astérisque, 312 CALL NUMBER: QA 343 .B66 2003 CIMM AUTHOR: Miller, Peter D. (Peter David), 1967- TITLE: Applied asymptotic analysis / Peter D. Miller. PUBLISHER: Providence, RI : American Mathematical Society, c2006. SERIES: Graduate studies in mathematics, v. 75 CALL NUMBER: QA 431 .M477 2006 CIMM TITLE: Interactions between homotopy theory and algebra : Summer School on Interactions between Homotopy Theory and Algebra, University of Chicago, July 26-August 6, 2004, Chicago, Illinois / Luchezar L. Avramov ... [et al.], editors. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Contemporary mathematics, v. 436 CALL NUMBER: QA 612.7 .S86 2004 CIMM AUTHOR: Adem, Alejandro. TITLE: Orbifolds and stringy topology / Alejandro Adem, Johann Leida and Yongbin Ruan. PUBLISHER: Cambridge : Cambridge University Press, 2007. SERIES: Cambridge tracts in mathematics ; 171 CALL NUMBER: QA 613 .O734 2007 CIMM TITLE: Prospects in mathematical physics : Young Researchers Symposium of the 14th International Congress on Mathematical Physics, July 25-26, 2003, Lisbon, Portugal / José C. Mourão ... [et al.], editors. PUBLISHER: Providence, R.I. : American Mathematical Society, c2007. SERIES: Contemporary mathematics, 437 CALL NUMBER: QC 19.2 .I582 2003 CIMM AUTHOR: Grossman, David A., 1965- TITLE: Information retrieval : algorithms and heuristics / by David A. Grossman and Ophir Frieder. EDITION: 2nd ed. PUBLISHER: Dordrecht ; [Great Britain] : Springer, c2004. SERIES: Kluwer international series on information retrieval 15 CALL NUMBER: Z 667 .G76 2004 CIMM
{"url":"http://www.cims.nyu.edu/library/newbook/082407.html","timestamp":"2014-04-17T12:30:17Z","content_type":null,"content_length":"15055","record_id":"<urn:uuid:f6747c98-7bc0-421a-8676-29af7cdbb7c2>","cc-path":"CC-MAIN-2014-15/segments/1397609530131.27/warc/CC-MAIN-20140416005210-00127-ip-10-147-4-33.ec2.internal.warc.gz"}
Got Homework? Connect with other students for help. It's a free community. • across MIT Grad Student Online now • laura* Helped 1,000 students Online now • Hero College Math Guru Online now Here's the question you clicked on: Please Help Choose the correct graph for the following system of inequalities: x + y –5 4x – 5y > –15 • one year ago • one year ago Best Response You've already chosen the best response. Best Response You've already chosen the best response. In order to graph the two systems of equalities you need to solve for y x+y<= -5 y<=x-5 4x-5y>-15 -5y>-4x-15 y<4/5x+3* *notice that the sign changes because you are dividing by a negative number Now you will be able to graph the two lines acting as though the signs <=, and < are equals signs however when it comes to drawing the lines to connect the dots, you need to make sure that the <= graph has a doted line and the < has a solid line. You then graph the two lines shading them based on their inequality Best Response You've already chosen the best response. So... Which graph is it? Best Response You've already chosen the best response. What do you think? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best educated guess? Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. Correct. Good job! Best Response You've already chosen the best response. Best Response You've already chosen the best response. Is the answer b? Best Response You've already chosen the best response. not B Best Response You've already chosen the best response. Then what is it? Best Response You've already chosen the best response. Yes. It is B. Best Response You've already chosen the best response. both are y less than inequalities Best Response You've already chosen the best response. ok so whats the damn answer? Best Response You've already chosen the best response. consider with respect to y Best Response You've already chosen the best response. Look at that. Best Response You've already chosen the best response. so in A arnt the both regions below included? Best Response You've already chosen the best response. where as in B the left side region is included Best Response You've already chosen the best response. Best Response You've already chosen the best response. Fu.ck. I is A! Best Response You've already chosen the best response. A it is :) Best Response You've already chosen the best response. Best Response You've already chosen the best response. Yes its A. Thank u very much @aajugdar Best Response You've already chosen the best response. @SamuelAlden917 thats olryt dont forget to consider y inequality when its y< then region below is shaded when its y> region above is shaded Best Response You've already chosen the best response. @aajugdar thats what I did. I feel like an retricenow! Im so sorry @raebaby420 Best Response You've already chosen the best response. Best Response You've already chosen the best response. Best Response You've already chosen the best response. So sorry! Your question is ready. Sign up for free to start getting answers. is replying to Can someone tell me what button the professor is hitting... • Teamwork 19 Teammate • Problem Solving 19 Hero • Engagement 19 Mad Hatter • You have blocked this person. • ✔ You're a fan Checking fan status... Thanks for being so helpful in mathematics. If you are getting quality help, make sure you spread the word about OpenStudy. This is the testimonial you wrote. You haven't written a testimonial for Owlfred.
{"url":"http://openstudy.com/updates/514b706de4b05e69bfacb51d","timestamp":"2014-04-18T03:56:44Z","content_type":null,"content_length":"100180","record_id":"<urn:uuid:d66d85f7-c62c-40f5-aa1c-774b418fb118>","cc-path":"CC-MAIN-2014-15/segments/1397609532480.36/warc/CC-MAIN-20140416005212-00000-ip-10-147-4-33.ec2.internal.warc.gz"}
Manhattan GMAT Challenge Problem of the Week – 8 October 2012 Here is a new Challenge Problem! If you want to win prizes, try entering our Challenge Problem Showdown. The more people that enter our challenge, the better the prizes! If x, y, and z lie between 0 and 1 on the number line, with no two variables equal, then the product of all three variables divided by the sum of all the distinct products of exactly two of the three variables is between A. 0 and 1/3 B. 1/3 and 2/3 C. 2/3 and 1 D. 1 and 5/3 E. 5/3 and 7/3 First, decode the language describing the expression you care about. “The product of all three variables” is xyz. Meanwhile, “the sum of all the distinct products of exactly two of the three variables” is xy + yz + xz. So the expression is xyz/(xy + yz + xz). At this point, there are two paths forward: doing algebra and picking smart numbers. The latter may turn out to be faster in this case, because the expression is tough to manipulate on its own (we’ll show how in a minute). Say that x = ¼, y = ½, and z = ¾ (all three are supposed to be different). Then the product of all three is 3/32, while xy + yz + xz = (¼)(½) + (½)(¾) + (¼)(¾) = 1/8 + 3/8 + 3/ 16 = 2/16 + 6/16 + 3/16 = 11/16. So the value you ultimately want equals 3/32 divided by 11/16, or 3/32 times 16/11, which equals 3/22. 3/22 is definitely between 0 and 1/3. Now you know that the question must be well-formed—in other words, there can’t be two right answers. For any legal values of x, y, and z chosen according to the criteria (between 0 and 1, and no two values alike), you must get the same answer to the question. So you only need to check one set of values. The answer is (A). If you’re interested, here’s one way to manipulate the expression to prove that it must lie between 0 and 1/3. Say that xyz/(xy + yz + xz) = S, to give the expression a variable name. You can’t split denominators that are sums, but you can split numerators. So take the reciprocal of both sides: (xy + yz + xz)/xyz = 1/S Now you can split the numerator, making three different fractions. Canceling common terms in each, you get this: 1/z + 1/x + 1/y = 1/S Since each variable x, y, and z is less than 1 (but still positive), the reciprocal of each of those variables is greater than 1. So the sum of those three reciprocals is greater than 1+1+1=3. Since 1/S is greater than 3, S must be less than 1/3. Meanwhile, S is evidently positive (all the variables are positive and nothing’s being subtracted), so the proper boundaries are 0 < S < 1/3. The two algebra tricks to learn here are (1) naming a whole expression as a new variable, so that you can transform both sides of the equation, and (2) taking the reciprocal of both sides of an equation. The correct answer is A. Special Announcement: If you want to win prizes for answering our Challenge Problems, try entering our Challenge Problem Showdown. Each week, we draw a winner from all the correct answers. The winner receives a number of our our Strategy Guides. The more people enter, the better the prize. Provided the winner gives consent, we will post his or her name on our Facebook page. If you liked this article, let Manhattan GMAT know by clicking Like. Ask a Question or Leave a Reply The author Manhattan GMAT gets email notifications for all questions or replies to this post.
{"url":"http://www.beatthegmat.com/mba/2012/10/08/manhattan-gmat-challenge-problem-of-the-week-8-october-2012","timestamp":"2014-04-20T01:57:33Z","content_type":null,"content_length":"62562","record_id":"<urn:uuid:5c879c1f-f542-4e12-976d-f7bd15036832>","cc-path":"CC-MAIN-2014-15/segments/1397609537804.4/warc/CC-MAIN-20140416005217-00651-ip-10-147-4-33.ec2.internal.warc.gz"}
, 1999 "... We obtain the first non-trivial time-space tradeoff lower bound for functions f : {0, 1}^n → {0, 1} on general branching programs by exhibiting a Boolean function f that requires exponential size to be computed by any branching program of length (1 + ε)n, for some constant ε > 0 ..." Cited by 44 (2 self) Add to MetaCart We obtain the first non-trivial time-space tradeoff lower bound for functions f : {0, 1}^n &rarr; {0, 1} on general branching programs by exhibiting a Boolean function f that requires exponential size to be computed by any branching program of length (1 + &epsilon;)n, for some constant &epsilon; > 0. We also give the first separation result between the syntactic and semantic read-k models [BRS93] for k > 1 by showing that polynomial-size semantic read-twice branching programs can compute functions that require exponential size on any syntactic read-k branching program. We also show... , 2000 "... We prove the first time-space lower bound tradeoffs for randomized computation of decision problems. The bounds hold even in the case that the computation is allowed to have arbitrary probability of error on a small fraction of inputs. Our techniques are an extension of those used by Ajtai [Ajt99a, ..." Cited by 33 (0 self) Add to MetaCart We prove the first time-space lower bound tradeoffs for randomized computation of decision problems. The bounds hold even in the case that the computation is allowed to have arbitrary probability of error on a small fraction of inputs. Our techniques are an extension of those used by Ajtai [Ajt99a, Ajt99b] in his time-space tradeoffs for deterministic RAM algorithms computing element distinctness and for Boolean branching programs computing a natural quadratic form. Ajtai's bounds were of the following form... "... We exhibit a new method for showing lower bounds for time-space tradeoffs of polynomial evaluation procedures given by straight-line programs. From the tradeoff results obtained by this method we deduce lower space bounds for polynomial evaluation procedures running in optimal nonscalar time. Time, ..." Cited by 2 (2 self) Add to MetaCart We exhibit a new method for showing lower bounds for time-space tradeoffs of polynomial evaluation procedures given by straight-line programs. From the tradeoff results obtained by this method we deduce lower space bounds for polynomial evaluation procedures running in optimal nonscalar time. Time, denoted by L, is measured in terms of nonscalar arithmetic operations and space, denoted by S, is measured by the maximal number of pebbles (registers) used during the given evaluation procedure. The time-space tradeoff function considered in this paper is LS². We show that for "almost all"... - Computational Complexity , 1993 "... A syntactic read-k-times branching program has the restriction that no variable occurs more than k times on any path (whether or not consistent) of the branching program. We rst extend the result in [30], to show that the \n=2 clique only function", which is easily seen to be computable by deter ..." Add to MetaCart A syntactic read-k-times branching program has the restriction that no variable occurs more than k times on any path (whether or not consistent) of the branching program. We rst extend the result in [30], to show that the \n=2 clique only function", which is easily seen to be computable by deterministic polynomial size read-twice programs, cannot be computed by nondeterministic polynomial size read-once programs, although its complement can be so computed. We then exhibit an explicit Boolean function f such that every nondeterministic syntactic read-k-times branching program for computing f has size exp .
{"url":"http://citeseerx.ist.psu.edu/showciting?cid=1646245","timestamp":"2014-04-21T05:38:17Z","content_type":null,"content_length":"19896","record_id":"<urn:uuid:8d5f1438-d45a-422b-8f24-aa9c9046a98f>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00301-ip-10-147-4-33.ec2.internal.warc.gz"}
Principles of Finance/Section 1/Chapter 2/Time Value of Money/PV and NPV Present and Future ValuesEdit Mathematical FormulasEdit Net Present ValueEdit Net present value (NPV) or net present worth (NPW)^[1] is defined as the total present value (PV) of a time series of cash flows. It is a standard method for using the time value of money to appraise long-term projects. Used for capital budgeting, and widely throughout economics, it measures the excess or shortfall of cash flows, in present value terms, once financing charges are met. See discounted cash flow Each cash inflow/outflow is discounted back to its present value (PV). Then they are summed. Therefore NPV is the sum of all terms $\frac{R_t}{(1+i)^{t}}$, where t - the time of the cash flow i - the discount rate (the rate of return that could be earned on an investment in the financial markets with similar risk.) $R_t$ - the net cash flow (the amount of cash, inflow minus outflow) at time t (for educational purposes, $R_0$ is commonly placed to the left of the sum to emphasize its role as (minus the) The discount rateEdit The rate used to discount future cash flows to their present values is a key variable of this process. A firm's weighted average cost of capital (after tax) is often used, but many people believe that it is appropriate to use higher discount rates to adjust for risk for riskier projects or other factors. A variable discount rate with higher rates applied to cash flows occurring further along the time span might be used to reflect the yield curve premium for long-term debt. Another approach to choosing the discount rate factor is to decide the rate which the capital needed for the project could return if invested in an alternative venture. If, for example, the capital required for Project A can earn five percent elsewhere, use this discount rate in the NPV calculation to allow a direct comparison to be made between Project A and the alternative. Related to this concept is to use the firm's Reinvestment Rate. Reinvestment rate can be defined as the rate of return for the firm's investments on average. When analyzing projects in a capital constrained environment, it may be appropriate to use the reinvestment rate rather than the firm's weighted average cost of capital (WACC) as the discount factor. It reflects opportunity cost of investment, rather than the possibly lower cost of capital. A NPV amount obtained using variable discount rates (if they are known for the duration of the investment) better reflects the real situation than that calculated from a constant discount rate for the entire investment duration. Refer to the tutorial article written by Samuel Baker^[2] for more detailed relationship between the NPV value and the discount rate. For some professional investors, their investment funds are committed to target a specified rate of return. In such cases, that rate of should be selected as the discount rate for the NPV calculation. In this way, a direct comparison can be made between the profitability of the project and the desired rate of return. To some extent, the selection of the discount rate is dependent on the use to which it will be put. If the intent is simply to determine whether a project will add value to the company, using the firm's weighted average cost of capital may be appropriate. If trying to decide between alternative investments in order to maximize the value of the firm, the corporate reinvestment rate would probably be a better choice. Using variable rates over time, or discounting "guaranteed" cash flows differently from "at risk" cash flows may be a superior methodology, but is seldom used in practice. Using the discount rate to adjust for risk is often difficult to do in practice (especially internationally), and is really difficult to do well. An alternative to using discount factor to adjust for risk is to explicitly correct the cash flows for the risk elements using NPV or a similar method, then discount at the firm's rate. What NPV MeansEdit NPV is an indicator of how much value an investment or project adds to the firm. With a particular project, if $R_t$ is a positive value, the project is in the status of discounted cash inflow in the time of t. If $R_t$ is a negative value, the project is in the status of discounted cash outflow in the time of t. Appropriately risked projects with a positive NPV could be accepted. This does not necessarily mean that they should be undertaken since NPV at the cost of capital may not account for opportunity cost, i.e. comparison with other available investments. In financial theory, if there is a choice between two mutually exclusive alternatives, the one yielding the higher NPV should be selected. The following sums up the NPVs in various situations. If... It means... Then... NPV > the investment would add value to the the project may be accepted 0 firm NPV < the investment would subtract value the project should be rejected 0 from the firm NPV = the investment would neither gain nor We should be indifferent in the decision whether to accept or reject the project. This project adds no monetary value. Decision should be based on other 0 lose value for the firm criteria, e.g. strategic positioning or other factors not explicitly included in the calculation. A corporation must decide whether to introduce a new product line. The new product will have startup costs, operational costs, and incoming cash flows over six years. This project will have an immediate (t=0) cash outflow of $100,000 (which might include machinery, and employee training costs). Other cash outflows for years 1-6 are expected to be $5,000 per year. Cash inflows are expected to be $30,000 each for years 1-6. All cash flows are after-tax, and there are no cash flows expected after year 6. The required rate of return is 10%. The present value (PV) can be calculated for each year: Year Cashflow Present Value T=0 $\frac{-100,000}{(1+0.10)^0}$ -$100,000 T=1 $\frac{30,000 - 5,000}{(1+0.10)^1}$ $22,727 T=2 $\frac{30,000 - 5,000}{(1+0.10)^2}$ $20,661 T=3 $\frac{30,000 - 5,000}{(1+0.10)^3}$ $18,783 T=4 $\frac{30,000 - 5,000}{(1+0.10)^4}$ $17,075 T=5 $\frac{30,000 - 5,000}{(1+0.10)^5}$ $15,523 T=6 $\frac{30,000 - 5,000}{(1+0.10)^6}$ $14,112 The sum of all these present values is the net present value, which equals $8,881.52. Since the NPV is greater than zero, it would be better to invest in the project than to do nothing, and the corporation should invest in this project if there is no alternative with a higher NPV. More realistic problems would need to consider other factors, generally including the calculation of taxes, uneven cash flows, and salvage values as well as the availability of alternate investment Common pitfallsEdit • If for example the $R_t$ are generally negative late in the project (e.g., an industrial or mining project might have clean-up and restoration costs), then at that stage the company owes money, so a high discount rate is not cautious but too optimistic. Some people see this as a problem with NPV. A way to avoid this problem is to include explicit provision for financing any losses after the initial investment, that is, explicitly calculate the cost of financing such losses. • Another common pitfall is to adjust for risk by adding a premium to the discount rate. Whilst a bank might charge a higher rate of interest for a risky project, that does not mean that this is a valid approach to adjusting a net present value for risk, although it can be a reasonable approximation in some specific cases. One reason such an approach may not work well can be seen from the foregoing: if some risk is incurred resulting in some losses, then a discount rate in the NPV will reduce the impact of such losses below their true financial cost. A rigorous approach to risk requires identifying and valuing risks explicitly, e.g. by actuarial or Monte Carlo techniques, and explicitly calculating the cost of financing any losses incurred. • Yet another issue can result from the compounding of the risk premium. R is a composite of the risk free rate and the risk premium. As a result, future cash flows are discounted by both the risk-free rate as well as the risk premium and this effect is compounded by each subsequent cash flow. This compounding results in a much lower NPV than might be otherwise calculated. The certainty equivalent model can be used to account for the risk premium without compounding its effect on present value.^[citation needed] • If NPV is less than 0, which is to say, negative, the project should not be immediately rejected. Sometimes companies have to execute an NPV-negative project if not executing it creates even more value destruction. • Another issue with relying on NPV is that it does not provide an overall picture of the gain or loss of executing a certain project. To see a percentage gain relative to the investments for the project, usually, Internal rate of return is used complimented to the NPV method. Alternative capital budgeting methodsEdit • Payback period: which measures the time required for the cash inflows to equal the original outlay. It measures risk, not return. • Cost-benefit analysis: which includes issues other than cash, such as time savings. • Real option method: which attempts to value managerial flexibility that is assumed away in NPV. • Internal rate of return: which calculates the rate of return of a project while disregarding the absolute amount of money to be gained. • Modified Internal Rate of Return|Modified internal rate of return (MIRR): similar to IRR, but it makes explicit assumptions about the reinvestment of the cash flows. Sometimes it is called Growth Rate of Return. • Accounting rate of return (ARR): a ratio similar to IRR and MIRR 1. ↑ Lin, Grier C. I.; Nagalingam, Sev V. (2000). CIM justification and optimisation. London: Taylor & Francis. pp. 36. ISBN 0-7484-0858-4. 2. ↑ Baker, Samuel L. (2000). "Perils of the Internal Rate of Return". http://hspm.sph.sc.edu/COURSES/ECON/invest/invest.html. Retrieved January 12, 2007. Last modified on 14 November 2013, at 17:40
{"url":"http://en.m.wikibooks.org/wiki/Principles_of_Finance/Section_1/Chapter_2/Time_Value_of_Money/PV_and_NPV","timestamp":"2014-04-19T04:32:04Z","content_type":null,"content_length":"30272","record_id":"<urn:uuid:21ff1d2e-7874-43d9-a325-98ede74553d1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00305-ip-10-147-4-33.ec2.internal.warc.gz"}
Similar Searches: bittinger, elementary algebra, linear algebra, computer concept 8th edition, computer concept 9th edition, algebra 2 teacher edition, ron larson, algebra 1, finite mathematics 2nd edition, marvin l bittinger, algebra and trigonometry fourth edition, axler, algebra 2, intermediate algebra 11th ed, intermediate algebra 8th edition, elementary & intermediate algebra, concept and application;, intermediate algebra 2nd edition, introduction algebra fifth edition, elementary and intermediate algebra 4th edition, and interm algebra edition 8 We strive to deliver the best value to our customers and ensure complete satisfaction for all our textbook rentals. As always, you have access to over 5 million titles. Plus, you can choose from 5 rental periods, so you only pay for what you’ll use. And if you ever run into trouble, our top-notch U.S. based Customer Service team is ready to help by email, chat or phone. For all your procrastinators, the Semester Guarantee program lasts through January 11, 2012, so get going! *It can take up to 24 hours for the extension to appear in your account. **BookRenter reserves the right to terminate this promotion at any time. With Standard Shipping for the continental U.S., you'll receive your order in 3-7 business days. Need it faster? Our shipping page details our Express & Express Plus options. Shipping for rental returns is free. Simply print your prepaid shipping label available from the returns page under My Account. For more information see the How to Return page. Since launching the first textbook rental site in 2006, BookRenter has never wavered from our mission to make education more affordable for all students. Every day, we focus on delivering students the best prices, the most flexible options, and the best service on earth. On March 13, 2012 BookRenter.com, Inc. formally changed its name to Rafter, Inc. We are still the same company and the same people, only our corporate name has changed.
{"url":"http://www.bookrenter.com/algebra/search--p9","timestamp":"2014-04-16T08:43:42Z","content_type":null,"content_length":"42002","record_id":"<urn:uuid:7d8149fb-1b32-4ccd-b49c-8220ba66e921>","cc-path":"CC-MAIN-2014-15/segments/1397609521558.37/warc/CC-MAIN-20140416005201-00181-ip-10-147-4-33.ec2.internal.warc.gz"}
IO Don't Know What Orders Of Magnitude Mean. Does ... | Chegg.com IO don't know what orders of magnitude mean. does it mean to divideby 10 so should the answer be 0.33 or 0.033, im so confused.. A 60.0 kg runner expends 345 W of powerwhile running a marathon. Assuming that 10.0% of the energy isdelivered to the muscle tissue and that the excess energy isprimarily removed from the body by sweating, determine the volumeof bodily fluid (assume it is water) lost per hour. (At 37.0°Cthe latent heat of vaporization of water is 2.41 ^6 J/kg.) Your answer differs from the correct answerby orders of magnitude. cm^3
{"url":"http://www.chegg.com/homework-help/questions-and-answers/io-don-t-know-orders-magnitude-mean-mean-divideby-10-answer-033-0033-im-confused-600-kg-ru-q407715","timestamp":"2014-04-24T06:26:37Z","content_type":null,"content_length":"22731","record_id":"<urn:uuid:526d316d-c729-4a29-8438-d675ed144fba>","cc-path":"CC-MAIN-2014-15/segments/1398223205375.6/warc/CC-MAIN-20140423032005-00596-ip-10-147-4-33.ec2.internal.warc.gz"}
Math 022: College Algebra 2 (Syllabus) Fall 2012 Section 008 - Mon/Fri 3:35P - 4:25P in 105 Osmond Lab, Wed 3:35P - 4:25P in 7 Sparks Course ID for MyMathLab: hair64156 Instructor: Aleksey Zelenberg Email: zelenberg@math.psu.edu Office: 417 McAllister Building Office Hours: Tues 3:30P - 4:30P, Thurs 10:00A - 11:00A in my office, Wed 2:00P - 3:00P in 7 Sparks. You may also send me an email to arrange an appointment if these times do not work. Textbook and MyMathLab: Course Compass Student Access Code for Trigsted: Algebra and Trigonometry 1/e. This is available as an ebook on MyMathLab and also at the bookstore. (the cost at the bookstore will be slightly more). If you buy at the bookstore, ask for the access code at the check-out register. If you are registered for both Math 026 and Math 022 you need buy only one access code. The one access code will work for both courses. If you cannot afford an access code at the start of the semester, you have the option of obtaining temporary access for a trial period Course Description: Relations, functions, graphs; polynomial, rational functions, graphs; word problems; nonlinear inequalities; inverse functions; exponential, logarithmic functions. Prerequisite: Math 21 or satisfactory performance on the mathematics proficiency examination; 1 high school unit of geometry. Learning Objectives: • Solve equation of the linear and quadratic type; solve related equations. • Solve absolute value equations and inequalities • Solve polynomial and rational inequalities • Write various forms of equations of circles • Write equations of lines • Identify and distinguish between relations and functions from graphs and equations • Using function notation, evaluate functions • Determine the domain of a function given an equation • Understand, identify, and dtermine properties of a function's graph • Know and sketch graphs of basic functions: constant, linear, quadratic, cubic, absolute value, square root, cube root, and reciprocal functions. • Understand piecewise functions • Transform basic graphs using hoizontal hifts, shrinks, and compressions, reflections, vertical shifts, and combinations of transformations • Add, subtract, multiply, divide, and compose functions and find the resulting domains • Understand the definition of a one-to-one functions and apply it to finding inverse functions • Analyze quadratic functions • Translate applications of quadratic functions to models and solve • Sketch graphs of polynomial and rational functions • Understand characteristics of exponential functions • Understand characteristics of logarithmic functions • Solve exponential equations • Solve logarithmic equations • Be sure to register for MyMathLab ASAP. The first homework is already up! • For those still having issues with MyMathLab check here to see if there's a fix. Alternatively, try following these instructions: □ Disable all popup blockers in the browser, as indicated when the homework page is opened. □ Log out of MML, clear all cache, history, cookies from the browser, log in again □ Attempt access from a different web browser or computer □ If none of this works, contact Jim Pringle at James.Pringle@pearson.com with your account email, Course ID, and description of the problems. • No office hours today (9/18). Office hours on Thursdays have now been changed to 10am - 11am. Tuesday office hours are still 3:30pm - 4:30pm. • FIRST MIDTERM IS THIS WEEK Exam dates: • Midterm 1 - Thursday, September 27 6:30P - 7:45P in 100 Thomas □ Conflict Exam - Thursday, September 27th 5:05P - 6:20P in 105 Wagner □ Makeup Exam - Wednesday, October 3rd 6:30P - 7:45P in 110 Wartik • Midterm 2 - Thursday, November 8 6:30P - 7:45P in 100 Thomas • Final Exam - Date, time, and location to be announced. Useful links: Exam Schedule Sample exams Guided Notebook University Syllabus for this course (this contains the sections in the textbook we will cover)
{"url":"http://www.math.psu.edu/zelenberg/022.html","timestamp":"2014-04-19T04:59:38Z","content_type":null,"content_length":"8406","record_id":"<urn:uuid:66f98688-9e4a-462f-be9e-77f388be21c5>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00185-ip-10-147-4-33.ec2.internal.warc.gz"}
Emmaus Prealgebra Tutor ...I'm a very metaphor-oriented teacher. To make math and science sticky in your mind, you need to build connections with physical or mental objects and images. If you can understand the building blocks, you can build anything. 15 Subjects: including prealgebra, chemistry, GRE, algebra 1 ...I am looking forward to working with you! Thank you for your timeI have personally taught several classes in Calculus AB and BC, where differential equations is a single part of that course. I have also tutored students (not through WyzAnt, but through other programs) Calculus and Differential Equations. 35 Subjects: including prealgebra, chemistry, calculus, geometry ...During my time in this developing field, I was a Field Chemist/Project Manager responsible for the design, implementation and management of environmental cleanup and remediation projects specializing in hazardous waste. These experiences allow me to better instruct by giving a real world sense a... 8 Subjects: including prealgebra, chemistry, algebra 1, geology ...If you're struggling trying to understand chemistry and science and need some personal attention from a dedicated and current high school teacher, then please allow me the opportunity to assist you. I have often noticed that students will get frustrated because they are in a classroom setting an... 4 Subjects: including prealgebra, chemistry, algebra 1, physical science ...I graduated from the Stevens Institute of Technology with a Bachelor of Engineering in Civil Engineering and a Master of Engineering in Structural Engineering. I have a strong knowledge of physical and mathematical foundations and feel that I can be of help to anyone who needs help in most field... 14 Subjects: including prealgebra, calculus, physics, algebra 1
{"url":"http://www.purplemath.com/emmaus_prealgebra_tutors.php","timestamp":"2014-04-19T02:22:14Z","content_type":null,"content_length":"23873","record_id":"<urn:uuid:1300cc11-cc5c-424d-a3e2-089023c0c115>","cc-path":"CC-MAIN-2014-15/segments/1397609535745.0/warc/CC-MAIN-20140416005215-00324-ip-10-147-4-33.ec2.internal.warc.gz"}
White Sox Interactive Forums - View Single Post - Jeff Manto Originally Posted by Lip Man 1 There is truth in this but it's also true the Sox have played 112 seasons compared to the Marlins 20. I certainly would hope that the Sox would have made the playoffs more often given that they've had 92 more seasons to do so. The comments got me to thinking so I did a little basic math, hope the figures are right. The Marlins made the post season twice in 20 years, an average of once every 10 years. The Sox have made the postseason nine times in 112 seasons an average of once every 18.5 years. I did not count the Sox winning the 1900 championship because at that time baseball didn't recognize the American League as a 'major' league or the number would have changed. Lip, 112 divided by 9 is 12.44. A little bit better than 18.5
{"url":"http://www.whitesoxinteractive.com/vbulletin/showpost.php?p=3062367&postcount=54","timestamp":"2014-04-17T07:07:37Z","content_type":null,"content_length":"8605","record_id":"<urn:uuid:0e781f7c-ecbc-49bd-9bb8-d92973beaaf5>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00158-ip-10-147-4-33.ec2.internal.warc.gz"}
the first resource for mathematics The author studies the existence of complex a priori bounds for renormalizations of real quadratic polynomials. He introduces the combinatorial condition of essentially bounded type, which was the subject studied by the author and M. Lyubich [Ann. Inst. Fourier (Grenoble) 47, 1219–1255 (1997; Zbl 0881.58053 )] and gives a new treatment to polynomials satisfying this condition. The approach used in the paper is to consider them as small perturbations of parabolic maps, and to use the rigidity properties of such maps to pass from real a priori bounds to complex ones. 37F25 Renormalization 37E20 Universality, renormalization 37F50 Small divisors, rotation domains and linearization; Fatou and Julia sets 30D05 Functional equations in the complex domain, iteration and composition of analytic functions
{"url":"http://zbmath.org/?q=an:1070.37029","timestamp":"2014-04-20T18:51:19Z","content_type":null,"content_length":"21648","record_id":"<urn:uuid:02e6128a-924c-4bc3-a0fe-568a68e3b700>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00151-ip-10-147-4-33.ec2.internal.warc.gz"}
Algebraic solutions of the Painlev\'e equations II Seminar Room 1, Newton Institute I will survey what is known about the algebraic solutions of the Painleve VI equation. This gives an opportunity to see 'in action' lots of the Painleve/isomonodromy technology: Riemann-Hilbert correspondence, nonlinear monodromy actions, affine Weyl group symmetries, quadratic/Landen/folding transformations and precise asymptotic formulae. Moreover I will try to emphasise some of the mysteries and open problems of the subject. Related Links
{"url":"http://www.newton.ac.uk/programmes/PEM/seminars/2006091515301.html","timestamp":"2014-04-16T11:25:39Z","content_type":null,"content_length":"4634","record_id":"<urn:uuid:cfc2c672-4b7f-489a-94b0-1c5726eade1f>","cc-path":"CC-MAIN-2014-15/segments/1398223206672.15/warc/CC-MAIN-20140423032006-00328-ip-10-147-4-33.ec2.internal.warc.gz"}
test whether an integer is even Major Section: PROGRAMMING (evenp x) is true if and only if the integer x is even. Actually, in the ACL2 logic (evenp x) is defined to be true when x/2 is an integer. The guard for evenp requires its argument to be an integer. Evenp is a Common Lisp function. See any Common Lisp documentation for more information.
{"url":"http://planet.racket-lang.org/package-source/cce/dracula.plt/1/0/language/acl2-html-docs/EVENP.html","timestamp":"2014-04-17T09:38:52Z","content_type":null,"content_length":"1133","record_id":"<urn:uuid:b2698d2e-a9d7-4f32-8558-f1e349cedc69>","cc-path":"CC-MAIN-2014-15/segments/1398223201753.19/warc/CC-MAIN-20140423032001-00233-ip-10-147-4-33.ec2.internal.warc.gz"}
Show that Y has an exponential distribution from f(x)?? January 3rd 2011, 11:41 AM #1 Junior Member Jan 2011 hey guys, I'm just wondering if you guys can help me with this question, thank you in advance. A random variable, X, has a probability density function (p.d.f.) given by where x>0 Y is a random variable such that Y = X^2. Show that Y has an exponential distribution and state its mean. Last edited by hazeleyes; January 3rd 2011 at 02:50 PM. Hi there hazeleyes, Here's a kick off... $\displaystyle f(Y) = f(X^2) = 2(X^2)e^{-(X^2)^2}$ for the mean $\displaystyle E(Y) = \int_{-\infty}^{\infty}Y\times f(Y)~dY$ Sorry, but that's not correct. There are several approaches that can be taken. The OP has carelessly neglected to mention that the support of X is $x \geq 0$ (so that f(x) = 0 for x < 0). The support of Y will be $y \geq 0$. The most basic approach to finding the pdf of Y is to calculate the cdf of Y and then recall that the pdf is the derivative. $\displaystyle cdf = G(y) = \Pr(Y \leq y) = \Pr(X^2 \leq y) = \Pr(-\sqrt{y} \leq X \leq \sqrt{y}) = \int_0^{\sqrt{y}} 2 x e^{-x^2} \, dx$. Differentiating this is simple - either integrate (use a substitution) directly and then differentiate, or use the chain rule and the Fundamental Theorem of Calculus. You should get $\displaystyle pdf = \frac{dG}{dy} = g(y) = e^{-y}$ for $y \geq 0$ and zero elsewhere. Calculating the mean of Y is trivial. Some trivia: The random variable X follows a Weibull distribution (http://en.wikipedia.org/wiki/Weibull_distribution) with parameters $k = 2$ and $\lambda = 1$. Since $Y = X^2 = X^k$ it is well known that Y follows an exponential distribution. Thanks alot guys!! January 3rd 2011, 12:29 PM #2 January 3rd 2011, 01:47 PM #3 January 3rd 2011, 02:49 PM #4 Junior Member Jan 2011
{"url":"http://mathhelpforum.com/advanced-statistics/167375-show-y-has-exponential-distribution-f-x.html","timestamp":"2014-04-19T19:07:26Z","content_type":null,"content_length":"43850","record_id":"<urn:uuid:e50389fb-9827-4912-97d4-8b18dd6dbf8a>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00490-ip-10-147-4-33.ec2.internal.warc.gz"}
Finite state machine A finite state machine (FSM) is a kind of digital circuit , (and, possibly, other types of s, including ones) that is used to process information in steps ( s). At every a different part of the information can be processed. This has many advantages in terms of reduced requirements over combinational logic network s (CLNs). See hardware/speed tradeoff . A computer is a (very sophisticated) FSM. There are several important parts of an FSM: +----------------------------->|Output|---> Outputs | +-----+ |CLN | Inputs -+->|Next | Control | | |State| Signals +--+ | | +->|CLN |------------>|DQ|-+->| | | +-----+ | | | +------+ Clock--------------------------|> | | | +--+ | State Variables Note: this is a block diagram of a Mealy Machine shown. See also Moore Machine, Class A Machine, Class B Machine, Class C Machine. These devices are said to be causal, and to have memory, as the state of the FSM is dependant not only upon the current input, but also upon past inputs (and thus past states). See also state assignment, state minimization, state transition table, state transition diagram, digital systems, general purpose computer, one-hot encoding.
{"url":"http://everything2.com/title/Finite+state+machine?showwidget=showCs535125","timestamp":"2014-04-18T19:13:25Z","content_type":null,"content_length":"31661","record_id":"<urn:uuid:ce5722ba-5517-4a8e-b5cd-b41e20a0c623>","cc-path":"CC-MAIN-2014-15/segments/1397609535095.7/warc/CC-MAIN-20140416005215-00077-ip-10-147-4-33.ec2.internal.warc.gz"}
Sizing, duct, ducts, ductwork, air, flow, sizing, friction, loss, pressure, velocity, VAV Characteristics and functions of the program This calculation program on Excel makes it possible to dimension and carry out the calculation of the pressure losses on the ductworks and air distribution systems. AeroDuct can be distributed with calculation in English system (e.g., ounces, pounds, inches, and feet) and common units in the metric system (e.g., grams, kilograms, meters, and centimeters). . It applies in all types of duct and holds account particularly on the operating conditions and specific Characteristics on the ductworks, such as: • The temperature of the air conveyed • The level of altitude or is located the installation • The nature of the various types of materials used (steel ductwork, copper, PVC, built walls, etc) • The geometrical shapes of ductworks (circular, quadrangular, oblong) • The various types of pressure loss coefficients • Le contrôle des vitesses silencieuses de passage d'air. • The control of air speeds. Complementary modules of calculations are incorporated in the program, such as: • K-factor editor of the local pressure loss • Equivalent K-factor calculator. • Calculator for evaluation of the motorized power of the ventilator, according to the calculated load. The calculation program is equipped with a customized command bar giving access to the various procedures, boxes of calculation and macro commands. The working files are created separately making it possible to reduce the data storage. Display of the table of calculation of pressure loss The working file can be made up of various computation sheets. You can from the same file, to insert a new computation sheet or to duplicate the computation sheet in progress for the study of a similar ductwork and to make the complementary modifications thereafter. If you forget some elements from the ductwork, you can add lines of calculation anywhere, without deteriorating the phases of calculations. You can also choose the unit of pressure of your choice in the study: • Pa (Pascal) • Pounds per square foot (lbf/sq ft) = 47.88026 Pa • Torr / mm Hg (133.3226 Pa) • Inches w.g (248.6 Pa) • kPa ( = 1000 Pa) • Psi (Pound per square inch (lbf/sq. in) = 6894.757 Pa) AeroDuct can be distributed with calculation in Metric system or English unit. For each sheet of the table of calculation, the presentation is done, such as: In basic display: The hygienic airflow or smoke extract systems are indicated generally in the standards on a air mass of reference of 0.0746 lb/ft^3 (1,200 kg/m3) - (equivalent to: 68°F (20°C) - 40%) The basic airflow is corrected automatically in function: • Of the altitude of the site. • Of the leakage rate of air estimated in the ductworks. • Of the temperature of airflow in the ductwork compared to the basic temperature taken into account in the calculation of the installation or the airflow of reference. The real speed of the airflow in the ductwork is carried out from the corrected airflow. A displaying in yellow of the cell concerned indicates speeds of air higher than the quiet values recommended in the installations at low pressure. It is highly advised to envisage a coefficient of safety margin: • The assemblies are often badly carried out, blocking partially the passage of the fluid. • An estimated dusty ductwork can be considered. • With the ageing of the ductwork, a possible corrosion can increase the pressure losses by friction. In total displaying, the table visualizes in complement: • Indices of surface roughness. • The air density. • The dynamic viscosity of the air. • The Reynolds number. All the colour cells of calculation are programmed. Air speeds recommended Installations "low pressure" (maximum Speed 1550 to 2000 Ft/min - 8 to 10 m/s) │Airflow in ducts │Maximum velocity │ │- Maxi flow rate < 175 CFM (300 m³/h) │490 ft/min ( 2.5 m/s)│ │- Maxi flow rate < 590 CFM (1000 m³/h) │590 ft/min ( 3 m/s) │ │- Maxi flow rate < 1200 CFM (2000 m³/h) │785 ft/min ( 4 m/s) │ │- Maxi flow rate < 2350 CFM (4000 m³/h) │980 ft/min ( 5 m/s) │ │- Maxi flow rate < 5900 CFM (10000 m³/h) │1180 ft/min ( 6 m/s) │ │- Maxi flow rate > 5900 CFM (10000 m³/h) │1380 ft/min ( 7 m/s) │ Installations "high pressure" (speeds of air > to 2000 ft/min - 10 m/s) - Ejector-convectors, Variable Air Volume Systems (VAV) or variable Induction units, etc. │Airflow in ducts │Shaft │Corridors │Premises │ │- 59000 to 41000 CFM - (100000 to 70000 m3/h) │5800 ft/min (30 m/s)│ │ │ │- 41000 to 23500 CFM - (70000 to 40000 m3/h) │4900 ft/min (25 m/s)│ │ │ │- 23500 to 14700 CFM - (40000 à 25000 m3/h) │4300 ft/min (22 m/s)│3940 ft/min (20 m/s)│ │ │- 14700 to 10000 CFM - (25000 à 17000 m3/h) │3940 ft/min (20 m/s)│3350 ft/min (17 m/s)│3150 ft/min (16 m/s)│ │- 10000 to 5900 CFM - (17000 à 10000 m3/h) │3350 ft/min (17 m/s)│2950 ft/min (15 m/s)│2750 ft/min (14 m/s)│ │- 5900 to 2950 CFM - (10000 à 5000 m3/h) │2950 ft/min (15 m/s)│2350 ft/min (12 m/s)│2350 ft/min (12 m/s)│ │- 2950 to 1200 CFM - (5000 à 2000 m3/h) │2350 ft/min (12 m/s)│2000 ft/min (10 m/s)│2000 ft/min (10 m/s)│ │- Inferior to 1200 CFM (2000 m3/h) │2000 ft/min (10 m/s)│2000 ft/min (10 m/s)│2000 ft/min (10 m/s)│ │- Fire dampers │2000 ft/min (10 m/s)│2000 ft/min (10 m/s)│2000 ft/min (10 m/s)│ • It is recommended to leave in the main ducts at a speed of 3940 to 4300 ft/min (20 to 22m/s). • The main and secondary ducts are generally calculated on a basis of 0.0034 In.wg (0.85 Pa). • The boxes are selected on the basis of acceptable noise level when the pressure at the entry of the boxes is of 3 In.wg (750 Pa). • The reduction speed can cause a renewal speed. Dual-Duct VAV Systems (average or high pressure) • The circuit of the cold air ducts is calculated for 100% of the necessary flow. • On the other hand, for the hot air ducts, one admits 50% to 75% of the cold air flow according to thermal loads. • The variation between the temperature of the buildings in summer and the air in the cold air duct is 50 to 55.5 °F (10 to 13°C). • In summer, the temperature of the hot air in the duct is maintained at least with +5.4°F (+ 3°C) above the average temperature of the air outlet. The air velocity in the ducts cannot exceed a certain value. It results a minimal section of ducts below from which it is misadvised going down for following reasons: • Increase the noise of rustle of the air in the strait ducts and especially on the level of the deviations. • Increase the pressure losses and the energy consumed by the ventilator. Example: a reduction in half of the section doubles the air velocity increases the pressure losses and the absorptive power by the ventilator by a factor 4. Last update:
{"url":"http://www.thermexcel.com/english/program/duct.htm","timestamp":"2014-04-17T12:29:34Z","content_type":null,"content_length":"26915","record_id":"<urn:uuid:c2e465e1-a62b-4af5-8b27-6c820f6f2626>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00536-ip-10-147-4-33.ec2.internal.warc.gz"}
[Haskell-cafe] State Monad - using the updated state Ryan Ingram ryani.spam at gmail.com Wed Jan 7 21:28:33 EST 2009 Hi Phil. First a quick style comment, then I'll get to the meat of your question. getRanq1 is correct; although quite verbose. A simpler definition is this: getRanq1 = State ranq1 This uses the State constructor from Control.Monad.State: State :: (s -> (a,s)) -> State s a What it sounds like you want is this: main = do x <- getARandomNumber ... do some other stuff y <- getAnotherRandomNumber .. etc. using State. There are two ways to go about this; the first is, if the entire computation is pure, that is, the "do some other stuff" doesn't do IO, you can embed the whole computation in "State": seed = 124353542542 main = do result <- evalState randomComputation (ranq1Init seed) ... some IO using result ... randomComputation = do x <- getRanq1 let y = some pure computation using x z <- getRanq1 w <- something that uses x, y, and z that also uses the random source ... etc. return (some result) The other option, if you want to do IO in between, is to use a "transformer" version of State: type MyMonad a = StateT Word64 IO a main = withStateT (ranq1Init seed) $ do x <- getRanq1_t liftIO $ print x y <- getRanq1_t getRanq1_t :: MyMonad Double getRanq1_t = liftStateT getRanq1 liftStateT :: State s a -> MyMonad a liftStateT m = StateT $ \s -> return (runState m s) withStateT :: Word64 -> MyMonad a -> IO a withStateT s m = evalStateT m s -- can also just use "withStateT = flip evalStateT" This uses these functions from Control.Monad.State: liftIO :: MonadIO m => IO a -> m a This takes any IO action and puts it into any monad that supports IO. In this case, StateT s IO a fits. runState :: StateT s a -> s -> (a,s) This evaluates a pure stateful computation and gives you the result. StateT :: (s -> m (a,s)) -> StateT s m a This builds a StateT directly. You could get away without it like this: liftStateT m = do s <- get let (a, s') = runState m s put s' return a (note the similarity to your getRanq1 function!) evalStateT :: StateT s m a -> s -> m a This is just evalState for the transformer version of State. In our case it has the type (MyMonad a -> Word64 -> IO a) This said, as a beginner I recommend trying to make more of your code pure so you can avoid IO; you do need side effects for some things, but while learning it makes sense to try as hard as you can to avoid it. You can make a lot of interesting programs with just "interact" and pure functions. If you're just doing text operations, try to make your program look like this: main = interact pureMain pureMain :: String -> String pureMain s = ... You'll find it will teach you a lot about laziness & the power of purity! A key insight is that State *is* pure, even though code using it looks somewhat imperative. -- ryan P.S. If you can't quite get out of the imperative mindset you can visit imperative island via the ST boat. 2009/1/7 Phil <pbeadling at mail2web.com>: > Hi, > I'm a newbie looking to get my head around using the State Monad for random > number generation. I've written non-monad code that achieves this no > problem. When attempting to use the state monad I can get what I know to be > the correct initial value and state, but can't figure out for the life of me > how to then increment it without binding more calls there and then. Doing > several contiguous calls is not what I want to do here – and the examples > I've read all show this (using something like liftM2 (,) myRandom myRandom). > I want to be able to do: > Get_a_random_number > < a whole load of other stuff > > Get the next number as defined by the updated state in the first call > <some more stuff> > Get another number, and so on. > I get the first number fine, but am lost at how to get the second, third, > forth etc without binding there and then. I just want each number one at a > time where and when I want it, rather than saying give 1,2,10 or even 'n' > numbers now. I'm sure it's blindly obvious! > Note: I'm not using Haskell's built in Random functionality (nor is that an > option), I'll spare the details of the method I'm using (NRC's ranq1) as I > know it works for the non-Monad case, and it's irrelevent to the question. > So the code is: > ranq1 :: Word64 -> ( Double, Word64 ) > ranq1 state = ( output, newState ) > where > newState = ranq1Increment state > output = convert_to_double newState > ranq1Init :: Word64 -> Word64 > ranq1Init = convert_to_word64 . ranq1Increment . xor_v_init > -- I'll leave the detail of how ranq1Increment works out for brevity. I > know this bit works fine. Same goes for the init function it's just > providing an initial state. > -- The Monad State Attempt > getRanq1 :: State Word64 Double > getRanq1 = do > state <- get > let ( randDouble, newState ) = ranq1 state > put newState > return randDouble > _________ And then in my main _________ > -- 124353542542 is just an arbitrary seed > main :: IO() > main = do > let x = evalState getRanq1 (ranq1Init 124353542542) > print (x) > As I said this works fine; x gives me the correct first value for this > sequence, but how do I then get the second and third without writing the > giveMeTenRandoms style function? I guess what I want is a next() type > function, imperatively speaking. > Many thanks for any help, > Phil. > _______________________________________________ > Haskell-Cafe mailing list > Haskell-Cafe at haskell.org > http://www.haskell.org/mailman/listinfo/haskell-cafe More information about the Haskell-Cafe mailing list
{"url":"http://www.haskell.org/pipermail/haskell-cafe/2009-January/052963.html","timestamp":"2014-04-20T05:05:40Z","content_type":null,"content_length":"9490","record_id":"<urn:uuid:6a752aaf-d334-4535-bb96-b9ea94bd18e3>","cc-path":"CC-MAIN-2014-15/segments/1397609537864.21/warc/CC-MAIN-20140416005217-00493-ip-10-147-4-33.ec2.internal.warc.gz"}
GMAT Math: Lines & Slope in the x-y plane Here are a set of practice GMAT questions about the Cartesian plane. 1) What is the equation of the line that goes through (–2, 3) and (5, –4)? 1. y = –x + 1 2. y = x + 5 3. y = –3x/7 + 15/7 4. y = –4x/3 + 1/3 5. y = 9x/5 + 33/5 2) The line y = 5x/3 + b goes through the point (7, –1). What is the value of b? 1. 3 2. –5/3 3. –7/5 4. 16/3 5. –38/3 3) A line that passes through (–1, –4) and (3, k) has a slope = k. What is the value of k? 1. 3/4 2. 1 3. 4/3 4. 2 5. 7/2 If these problems make your head spin, you have found the right post. Slope is a measure of how steep a line is. There is very algebraic formula for the slope, and if you know that, that’s great! If you don’t know that formula, or used to know it and can’t remember it, I will say: fuhgeddaboudit! Here’s a much better way of thinking about slope. Slope is rise over run. To calculate rise and run, first have to put the two points in order. It actually doesn’t matter which one we say is the first and which one, the second: all that matters is that we are consistent. The rise is the vertical change — the change in y-coordinate (second point minus first). The run is the horizontal change — the change in the x-coordinate (again, second minus first). Once we have rise & run, divide them, rise divided by run, to find the slope. For example, suppose our points are (–2, 4) and (5, 1). For the sake of argument, we’ll say that’s the order — (–2, 4) is the “first” and (5, 1) is the “second.” The rise is the change in height, the change in y-coordinate: 1 – 4 = –3 (notice, we had to do second minus first, which gave us a negative here!) The run is the horizontal change, the change in x-coordinate: 5 – (–2) = 5 + 2 = 7 (remember: subtracting a negative is the same as adding a positive!). Now, rise/run = –3/7 —- that’s the slope. Slope is definitely something you need to understand for the GMAT Quantitative Whenever you find a slope, I strongly suggest doing a rough sketch, just to verify that the sign of the slope (positive or negative) and the value of the slope are approximately correct. Here’s a sketch of this particular calculation: Your sketch, of course, does not need to be this precise. Even a rough sketch would verify that, yes, the slope should be negative. Again, I highly recommend performing this visual check every time you calculate slope. Equations in the x-y plane Let’s get a bit philosophical for a moment. The technical name of the x-y plane is the Cartesian plane, named after its inventor, Mr. Rene Descartes. Although you may have met this sometime in middle school math and may now take it for granted, it is actually a brilliant mathematical device. It allowed for the unification of two ancient branches of mathematics: algebra and geometry. In more practical terms, every equation (an algebraic object) corresponds to a picture (a geometric object). That is a very deep idea. Equations of lines A straight line is a very simple picture, and not surprisingly it has a very simple equation. There are a few different ways to write a line, but the most popular and easiest to understand is y = mx + b. The m is the slope of the line. The b is the y-intercept: where the line crosses the y-axis. For any given line, m & b are constants: for a given line, both m & b equal a fixed number. By contrast, x & y (sometimes call the “graphing variables”) do not equal just one thing. This is not the “x” of ordinary solve-for-x algebra. This is a very deep idea — x & y don’t equal any one pair of values; rather, every single point (x, y) on the line, the entire continuous infinity of points that make up that line — every single one of them satisfies the equation of the line. That is a powerful and often underappreciated mathematical idea. Finding the equation of a line Sometimes the GMAT will give you the equation of a line already in y = mx + b form. Sometimes, the GMAT give you the line in another form (e.g. 3x + 7y = 22), and you will have to do a little algebraic re-arranging —- essentially, solve for y —- to bring the equation into y = mx + b form. Sometimes, though, as in problem #2 above, they give you two points and ask you to find the equation of a line. Here the procedure. First, find the slope (as demonstrated above). Now, plug the slope in for “m” in the y = mx + b equation, and pick either point (it doesn’t matter which one) and plug those coordinates in for x & y in this equation. This will produce an equation in which everything has a numerical value except for “b” — that means, you can solve this equation for the value of b. Once you know m & b, you know the equation of the line. After this introduction, go back and try those practice problems again before reading the solutions below. Here’s another relevant practice question: 4) http://gmat.magoosh.com/questions/821 In the next post, I will discuss midpoints and the issue of parallel & perpendicular lines. See also, the related posts on Distance in the Cartesian Plane, the Quadrants, and the special properties of the line y = x. Explanation of practice questions 1) Here, we will follow the procedure we demonstrated in the last section. Call (–2, 3) the “first” point and (5, –4), the “second.” Rise = –4 – 3 = –7. Run = 5 – (–2) = 7. Slope = rise/run = –7/ 7 = –1. Visual check: Yes, it makes sense that the slope is negative. We have the slope, so plug m = –1 and (x, y) = (–2, 3) into y = mx + b: 3 = (–1)*(–2) + b 3 = 2 + b 1 = b So, plugging in m = –1 and b = 1, we get an equation y = –x + 1. Answer = A 2) Here, we already have the slope, so we just need to follow the second half of the “finding the equation” procedure. Plug (x, y) = (7, –1) into this equation: Answer = E 3) This is a considerably more difficult one, which will involve some algebra. Let’s say that the “first” point is (–1, –4) and the “second,” (3, k). The rise = k + 4, which involves a variable. The run = 3 – (–1) = 4. There the slope is (k + 4)/4, and we can set this equal to k and solve for k. k + 4 = 4k 4 = 3k k = 4/3 Answer = C 12 Responses to GMAT Math: Lines & Slope in the x-y plane 1. Sid November 30, 2013 at 2:05 pm # Hi Mike, I just want to say that the way you explain concepts it’s just WOW. I hardly seen anyone to make things so simpler. Great Explaination □ Mike December 1, 2013 at 1:14 pm # Thank you very much for your high praise. Best of luck to you, my friend. 2. Dee November 24, 2013 at 9:50 pm # Hello Mike, Thanks for all your great posts, you break down things and make them so much easier to understand. But what really comes through in your posts is how much you love math and respect all its eccentricities. It is difficult not to be affected by your love for this subject. I hope you and the rest of the Magoosh team realize how much of a difference you make in the lives of others. Kudos! □ Mike November 25, 2013 at 10:23 am # Dear Dee, Thank you very much for your kind compliments. Best of luck to you, my friend. 3. Bassim November 16, 2013 at 3:40 am # Hi Mike, Thanks for everything! I believe there’s a typo on the Slope sketch (-2,5) should be (-2,4) according to your example! □ Mike November 18, 2013 at 10:17 am # Dear Bassim, Good eye! I recreated the diagram to correct that error. Thank you for pointing it out. 4. Jaydeek kher September 14, 2013 at 10:17 pm # Thanks for the post .. great help this was one subject where my concepts were not very clear as the OG has a different way of finding out the equation of the line which is a bit longer ,OG recommends using m(x1-x2)=(y1-y2) rather than plugging into y=mx+b which method should be used ? □ Mike September 15, 2013 at 2:03 pm # The way they show is a highly formal solution, a solution that would make your encrusted Algebra Two teacher very happy. What I have shown is a bit more efficient. I think if you follow the method I have shown here, you will find that it’s faster. 5. Piyush December 2, 2012 at 9:59 pm # Easy set of questions. □ Mike December 2, 2012 at 10:02 pm # For those adept at math, yes, these questions are on the easy side. Don’t underestimate how much some test takers will struggle with concepts like this. 6. JackWilshere November 30, 2012 at 12:10 am # Thanks for share. Good luck for you □ Mike November 30, 2012 at 11:56 am # You are quite welcome. Best of luck to you as well. Leave a Reply Click here to cancel reply.
{"url":"http://magoosh.com/gmat/2012/gmat-math-lines-slope-in-the-x-y-plane/","timestamp":"2014-04-19T17:01:33Z","content_type":null,"content_length":"82314","record_id":"<urn:uuid:e3e9c1c5-2b2a-4a32-9ba2-ceec2fcd24f1>","cc-path":"CC-MAIN-2014-15/segments/1397609537308.32/warc/CC-MAIN-20140416005217-00520-ip-10-147-4-33.ec2.internal.warc.gz"}
12.1 Direction Angles and Direction Cosines Next: 12.2 Planes Up: 12 DirectionsPlanes and Lines Previous: 12 DirectionsPlanes and Lines Given a vector (a,b,c) in three-space, the direction cosines of this vector are Here the direction angles x-, y- and z-axes, respectively. In formulas, it is usually the direction cosines that occur, rather than the direction angles. We have Next: 12.2 Planes Up: 12 DirectionsPlanes and Lines Previous: 12 DirectionsPlanes and Lines The Geometry Center Home Page Silvio Levy Wed Oct 4 16:41:25 PDT 1995 This document is excerpted from the 30th Edition of the CRC Standard Mathematical Tables and Formulas (CRC Press). Unauthorized duplication is forbidden.
{"url":"http://www.geom.uiuc.edu/docs/reference/CRC-formulas/node52.html","timestamp":"2014-04-21T05:11:20Z","content_type":null,"content_length":"2454","record_id":"<urn:uuid:f37a2e64-9565-4d60-a6f8-5629bcc12d23>","cc-path":"CC-MAIN-2014-15/segments/1397609539493.17/warc/CC-MAIN-20140416005219-00054-ip-10-147-4-33.ec2.internal.warc.gz"}
Direct and Inverse Variation Bigger means farther away from zero and smaller means closer to zero. (This is discussed in more detail in a future section.) Suppose that $\,y = 2x\,$. When $\,x\,$ gets bigger, $\,y\,$ gets bigger. When $\,y\,$ gets bigger, $\,x\,$ gets bigger. In this type of relationship, $\,x\,$ and $\,y\,$ ‘follow each other’ in size: when one gets bigger, so does the other. When one gets smaller, so does the other. This kind of relationship between two variables is called direct variation: if there is a nonzero number $\,k\,$ for which $\,y = kx\,$, then we say that ‘$\,y\,$ varies directly as $\,x\,$’. Now suppose that $\,y = \frac{2}{x}\,$. When $\,x\,$ gets bigger, $\,y\,$ gets smaller. When $\,x\,$ gets smaller, $\,y\,$ gets bigger. In this type of relationship, $\,x\,$ and $\,y\,$ have sizes that go in different directions: when one gets bigger, the other gets smaller. When one gets smaller, the other gets bigger. This kind of relationship between two variables is called inverse variation: if there is a nonzero number $\,k\,$ for which $\displaystyle \,y = \frac{k}{x}\,$, then we say that ‘$\,y\,$ varies inversely as $\,x\,$’. Question: Consider the formula $\,PV = nRT\,$. As $\,T\,$ gets bigger, what happens to $\,V\,\,$? (Assume all other variables are held constant.) Solution: $V\,$ gets bigger There is a direct relationship between $\,T\,$ and $\,V\,$. As $\,T\,$ gets bigger, so does $\,V\,$. Intuition: Both variables are ‘upstairs’ on opposite sides of the equation. Question: Consider the formula $\,P = \frac{nRT}{V}\,$. As $\,P\,$ gets bigger, what happens to $\,V\,\,$? (Assume all other variables are held constant.) Solution: $V\,$ gets smaller There is an inverse relationship between $\,P\,$ and $\,V\,$. As $\,P\,$ gets bigger, $\,V\,$ gets smaller. Intuition: One variable is ‘upstairs’ and the other ‘downstairs’ on opposite sides of the equation. Master the ideas from this section by practicing the exercise at the bottom of this page. When you're done practicing, move on to: Scientific Notation On this exercise, you will not key in your answer. However, you can check to see if your answer is correct.
{"url":"http://www.onemathematicalcat.org/algebra_book/online_problems/direct_inverse_variation.htm","timestamp":"2014-04-17T01:11:57Z","content_type":null,"content_length":"66550","record_id":"<urn:uuid:430b1912-c3c4-40ee-969c-95e220c6733d>","cc-path":"CC-MAIN-2014-15/segments/1397609526102.3/warc/CC-MAIN-20140416005206-00478-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions Math Forum Ask Dr. Math Internet Newsletter Teacher Exchange Search All of the Math Forum: Views expressed in these public forums are not endorsed by Drexel University or The Math Forum. Topic: Rigidity of Convex Polytopes Replies: 6 Last Post: Jun 20, 1994 9:29 AM Messages: [ Previous | Next ] Re: Rigidity of Convex Polytopes Posted: Jun 20, 1994 9:29 AM This is a long reply to the query about extensions of Alexandrov's Theorem to higher dimensions. There are published analogs - which I will reference below. First there is a choice about which type of 'rigidity' you are interested in: (i) local uniqueness of the configuration (combinatorics and lengths, dimensions of faces), up to congruence; (ii) first-order/static rigidity - strong local uniqueness of the configuration (implies (i)); (iii) global rigidity of the configuration, up to congruence; (iv) other local rigidities (second order ... ). (v) generic rigidity (Failure of first-order rigidity is defined by a set of polynomial equations in the vertices of a simplicial polyhedron. If one choice of coordinates makes a polynomial non-zero then 'almost all' choices work - this rigidity is 'generic' and with probability 1 a random realization will be first-order rigid.) Cauchy's Theorem (1814) says that two triangulated strictly convex polyhedra, with the same combinatorics and the same edge lengths are congruent. Alexandrov's Theorem says that this uniqueness remains true if you take a general strictly convex polyhedron, add any number of vertices along the 'natural' edges of this polyhedron, then triangulate each natural face with all its vertices. (It is a discrete form of the uniqueness of convex realizations of an intrinsic metric on the surface.) The proofs of these theorems also give proofs of the first-order rigidity of the realizations - though the first explicit statement, for convex triangulated spheres is due to Max Dehn, using a different proof. (Buckminster Fuller had nothing to do with this - he just had good PR and emphasized the possibilities of using few edge lengths and few types of joints in construction.) Infinitesimal Rigidity in higher dimensions: The infinitesimal form of Cauchy's Theorem appears to be contained in a footnote of Efimov (of the same Russian School as Alexandrov). I have proven the analog of Alexandrov's Theorem in higher dimensions: W. Whiteley, Infinitesimally rigid polyhedra I: Statics of frameworks, Trans. AMS 285 (1984), 431-465. Theorem 8.6 If a strictly convex d-polytope (d>2) is formed in d-space by (i) placement of a joint at each vertex; (ii) replacement of each 2-face by a subframework which triangulates the polygon, then the resulting framework is statically (first-order) rigid in d-space. Theorem 8.7 If a strictly convex d-polytope (d>2) is formed in d-space by (i) placement of a joint at each vertex; (ii) placement of new joints along natural faces of dimensions <k<d; (b) replacement of each (k)-face by a subframework which is statically (first-order) rigid on the joints associated with that face, in their k-space, then the resulting framework is statically (first-order) rigid in d-space. I suspect that the uniqueness within the world of strictly convex polytopes also applies - but I have not looked at it. What follows immediately is that convexity implies the uniqueness of the geometry at each vertex. It is likely that a simple induction transports this to the entire structure. A curious sidelight is that this rigidity result was used by Gil Kalai to prove the best lower bounds on the number of edges of a strictly convex d-polytope! Generic Rigidity For generic rigidity, something much stronger holds: A corollary of a theorem of Alan Fogelsanger (unpublished Ph.D. thesis, 1988) says that all simplicial (d-1) manifolds are generically rigid in d-space. So for example, triangulated tori, projective planes etc. are generically rigid in 3-space. (There is no problem with self-intersection of the 'surface' when making the 'framework'.) I have a TeX file giving a rewrite of his proof, which uses 'minimal homology cycles'. Non-convex realizations: Bob Connelly has an example of an immersed, non-convex sphere which is flexible (violating any form of rigidity you might propose). R. Connelly, A counter example to the rigidity conjecture for polyhedra, Inst. Haut. Etud. Sci. Publ. Math. 47 1978, 333-335. R. Connelly, A flexible sphere, Math. Intelligencer 1, 130-131. Therefore some hypothesis is required. The following recent book has a good bibliography on rigidity, but addresses only some of the issues in generic rigidity: J. Graver, B. Servatius, H. Servatius. Combinatorial Rigidity, AMS Graduate Studies in Math 1993. As may be obvious, this is one of my areas of research. I would be happy to respond to any further queries on this topic, as there is a large literature on 'rigidity' (we have draft chapters for a three volume work on the subject). Walter Whiteley Department of Mathematics and Statistics York University North York, Ontario Date Subject Author 6/16/94 Rigidity of Convex Polytopes John Sullivan 6/16/94 Re: Rigidity of Convex Polytopes Joseph O'Rourke 6/17/94 Re: Rigidity of Convex Polytopes John Conway 6/19/94 Re: Rigidity of Convex Polytopes joe malkevitch 6/19/94 Re: Rigidity of Convex Polytopes John Conway 6/20/94 Re: Rigidity of Convex Polytopes John Sullivan 6/20/94 Re: Rigidity of Convex Polytopes Walter Whiteley
{"url":"http://mathforum.org/kb/message.jspa?messageID=1090940","timestamp":"2014-04-16T10:54:36Z","content_type":null,"content_length":"27957","record_id":"<urn:uuid:6cf7893d-9e03-4295-af53-ef9252f8fddd>","cc-path":"CC-MAIN-2014-15/segments/1397609523265.25/warc/CC-MAIN-20140416005203-00005-ip-10-147-4-33.ec2.internal.warc.gz"}
Proving the Pythagorean Theorem using Congruent Squares Date: 12/5/95 at 15:14:58 From: Anonymous Subject: Proof of Pythagorean Theorem Dear Dr. Math, A friend of mine is irked because of constant use of the Pythagorean Theorem, which he has not seen proven. I have seen it proven before, but could not quite recount the proof. I have heard the theorem called the "most proved theorem," so this ought to be an easy and a very prolific search, but could you find a proof of the theorem. Johnny Vogler Date: 3/6/96 at 15:15:45 From: Doctor Dusty Subject: Re: Proof of Pythagorean Theorem This is a very neat little proof of the Pythagorean Theorem, but it might look a little funky because of the graphics. Here it goes: b a b a _____.__ _____.__ a | | a a | | . . . | b | | | | b | | b b | . |_____.__| |__._____| a b a a b First of all, construct two congruent squares, both with sides a + b. You know that the area of each square should be (a + b)^2 Now connect the points in the first square to create two squares and two rectangles. When you add the areas of those four up, you get a^2 + ab + ab + b^2 which is also equal to the area of the entire square, (a + b)^2. Connect the points of the second square to form another square and call the length of each side of the inner square c. You now have four right triangles and one square. Add the areas of the 5 figures together and you have .5ab + .5ab + .5ab + .5ab + c^2 The areas of both squares are equal; we agreed to that in the beginning. Set the area equation of the first equal to that of the second. a^2 + 2ab + b^2 = 2ab + c^2 Subtract 2ab from both sides and you have a^2 + b^2 = c^2 where a is one leg of the right triangle, b is the other leg, and c is the hypotenuse. (I got that from looking at the second square.) I just think that this proof is really neat. I hope it calms your friend down. -Doctor Dusty, The Math Forum
{"url":"http://mathforum.org/library/drmath/view/54788.html","timestamp":"2014-04-19T09:44:23Z","content_type":null,"content_length":"7056","record_id":"<urn:uuid:8c6ab22d-5da2-4078-858b-045a92075db0>","cc-path":"CC-MAIN-2014-15/segments/1397609537097.26/warc/CC-MAIN-20140416005217-00255-ip-10-147-4-33.ec2.internal.warc.gz"}
Poincaré Conjecture and the Shape of the Universe Take the 2-minute tour × MathOverflow is a question and answer site for professional mathematicians. It's 100% free, no registration required. Has the solution of the Poincaré Conjecture helped science to figure out the shape of the universe? My impression is that all this "shape of the universe" stuff is mostly media hype. As a rule, you shouldn't believe whatever nonspecialists tell you about a specialized subject. Qiaochu Yuan Dec 24 '09 at 22:51 I voted up this question, only because I expect I'll want to crib a good answer at the next party I attend, when someone asks me what I do. Theo Johnson-Freyd Dec 24 '09 at 23:05 At first sight, this looks like a question that is not suited for MO, but in light of the great answers below, I guess not. Kevin H. Lin Dec 25 '09 at 20:15 add comment In Einstein's theory of General Relativity, the universe is a 4-manifold that might well be fibered by 3-dimensional time slices. If a particular spacetime that doesn't have such a fibration, then it is difficult to construct a causal model of the laws of physics within it. (Even if you don't see an a priori argument for causality, without it, it is difficult to construct enough solutions to make meaningful predictions.) There isn't usually a geometrically distinguished fibration, but if you have enough symmetry or even local symmetry, the symmetry can select one. An approximate symmetry can also be enough for an approximately canonical fibration. Once you have all of that, the topology of spacelike slices of the universe is not at all a naive or risible question, at least not until you see more physics that might demote the question. The narrower question of whether the Poincaré Conjecture is relevant is more wishful and you could call it naive, but let's take the question of relating 3-manifold topology in general to cosmology. The cosmic microwave background, discovered in the 1964 by Penzias and Wilson, shows that the universe is very nearly isotropic at our location. (The deviation is of order $10^{-5}$ and it was only announced in 1992 after 2 years of data from the COBE telescope.) If you accept the Copernican principle that Earth isn't at a special point in space, it means that there is an approximately canonical fibration by time slices, and that the universe, at least approximately and locally, has one of the three isotropic Thurston geometries, $E^3$, $S^3$, or $H^3$. The Penzias-Wilson result makes it a really good question to ask whether the universe is a 3-manifold with some isotropic geometry and some fundamental group. I have heard of the early discussion of this question was so naive that some astronomers only talked about a 3-torus. They figured that if there were other choices from topology, they could think about them later. Notice that already, the Poincaré conjecture would have been more relevant to cosmology if it had been false! The topologist who has done the most work on the question is Jeff Weeks. He coauthored a respected paper in cosmology and wrote an interesting article in the AMS Notices that promoted the Poincaré dodecahedral space as a possible topology for the universe. But after he wrote that article... There indeed is other physics that does demote the 3-manifold question, and that is inflationary cosmology. The inflation theory posits that the truthful quantum field theory has a vaguely stable high-energy phase, which has such high energy density that the solution to the GR equations looks completely different. In the inflationary solution, hot regions of the universe expand by a factor of $e$ in something like $10^{-36}$ seconds. The different variations of the model posit anywhere from 60 to thousands of factors of $e$, or "$e$-folds". Patches of the hot universe also cool down, including the one that we live in. In fact every spot is constantly cooling down, but cooling is still overwhelmed by expansion. Instead of tacitly accepting certain observed features of the visible universe, for instance that it is approximately isotropic, inflation explains them. It also predicts that the visible universe is approximately flat and non-repeating, because macroscopic curvature and topology have been stretched into oblivion, and that observable anisotropies are stretch marks from the expansion. The stretch marks would have certain characteristic statistics in order to fit inflation. On the other hand, in the inflationary hot soup that we would never see directly, the rationale for canonical time slices is gone, and the universe would be some 4-manifold or even some fractal or quantum generalization of a 4-manifold. The number of $e$-folds is not known and even the inflaton field (the sector of quantum field theory that governed inflation) is not known, but most or all models of inflation predict the same basic features. And the news from the successor to COBE, called WMAP, is that the visible universe is flat to 2% or so, and the anistropy statistically matches stretch marks. There is not enough to distinguish most of the models of inflation. There is not enough to establish inflation in the same sense that the germ theory of disease or the the heliocentric theory are established. What is true is that inflation has made experimental predictions that have been confirmed. After all that news, the old idea that the universe is a visibly periodic 3-manifold is considered a long shot. WMAP didn't see any obvious periodicity, even though Weeks et al were optimistic based on its first year of data. But I was told by a cosmologist that periodicity should still be taken seriously as an alternative cosmological model, if possibly as a devil's advocate. A theory is incomplete science if it is both hard to prove, and if every alternative is laughed out of the room. In arguing for inflation, cosmologists would also like to have something to argue against. In the opinion of the cosmologist that I talked to some years ago, the model of a 3-manifold with a fundamental group, developed by Weeks et al, is as good at that as any proposal. José makes the important point that, in testing whether the universe has a visible fundamental group, you wouldn't necessarily look for direct periodicity represented by non-contractible geodesics. Instead, you could use harmonic analysis, using a suitable available Laplace operator, and this is what used by Luminet, Weeks, Riazuelo, Lehoucq and Uzan. I also that I have not heard of any direct use of homotopy of paths in astronomy, but actually the direct geometry of geodesics does sometimes play an important role. For instance, look closely at this photograph of galaxy cluster Abell 1689 . You can see that there is a strong gravitational lens just left of the center, between the telescope and the dimmer, slivered galaxies. Maye no analysis of the cosmic microwave background would be geometry-only, but geometry would modify the apparent texture of the background, and I think that that is part of the argument from the data that the visible universe is approximately flat. Who is to say whether a hypothetical periodicity would be seen with geodesics, harmonic expansion, or in some other way. Part of Gromov's point seems fair. I think it is true that you can always expand the scale of proposed periodicity to say that you haven't yet seen it, or that the data only just starts to show it. Before they saw anisotropy with COBE, that kept getting pushed back too. The deeper problem is that the 3-manifold topology of the universe does not address as many issues in cosmology, either theoretical or experimental, as inflation theory does. Just to be clear, it is not the original question that I find risible but rather DARPA's challenge. It reminds me of an episode of NUMBERS where the premise was that a mathematician who had claimed to solve the Riemann Hypothesis was being extorted by criminals who wanted to exploit fast factorization algorithms. What can you do as an engineer with the Geometrization Theorem that you couldn't do with the Geometrization Conjecture? Pete L. Clark Dec 25 '09 at 7:49 I don't want to knock DARPA too much for trying to do good, but I have to agree that their list is a strange imitation of Hilbert's list of 23 problems. For one thing, the list asks for the cosmological implications of the smooth Poincare conjecture in four dimensions, not the Poincare conjecture solved by Perelman. math.utk.edu/~vasili/refs/darpa07.MathChallenges.html Greg Kuperberg Dec 25 '09 at 8:24 Also, Pete, the question you quoted could be a reference to something legitimate: Quantum link invariants at a root of unity such as the Jones polynomial seem to be a valid model for the quantum state of certain 2-dimensional condensed matter systems. Still, the question as stated is a grandiose variation that does not even mention quantum invariants. Greg Kuperberg Dec 25 '09 at 8:32 @Greg -- Now you remind me of my AP English teacher, Dr. Phillips. He was also brilliant, widely learned (once in class he digressed to talk about Russell and Whitehead's Principia Mathematica) and so creative that, if you didn't have anything better to say, it was sometimes a reasonable strategy to write an essay suggesting connections between things when you yourself could not see them: he could, and would, often fill in the details to make your thesis look reasonable and sound. But of course it doesn't mean that you knew what you were talking about, only that he really did. Pete L. Clark Dec 25 '09 at 23:53 That is a very flattering compliment! I am not sure how much I deserve it, but thank you. Certainly in the case of condensed matter physics, my answer is not so creative. I have been working in quantum computation, and in that area everyone knows about the connection between anyons and quantum link invariants. See for instance arxiv.org/abs/0704.2241 . Benjamin Mann at DARPA might well have heard about this work, or possibly not. Greg Kuperberg Dec 26 '09 at 2:50 add comment No, I believe it has not and that it is hard to see how it could be directly useful. Just for starters, it is not clear whether the universe is simply connected. It is not even clear that this is a meaningful question. In this regard I cannot resist quoting a passage from Steven Krantz' Mathematical Apocrypha Redux: Mikhael Gromov tells of attending lectures about cosmology by two topologists -- really great topologists -- concerning the possible shape of the Universe. Gromov asked the first of these whether the Universe was simply connected. The man replied, "It is clear that the Universe cannot be but simply connected, for non-simple connectedness would imply some high-scale periodicity, which is ridiculous." The other topologist gave a talk entitled "Is the Universe simply connected?" which seemed to be related to the question. When informed of the first speaker's statement, he said, "Who cares, it's still a meaningful question, like it or not." In discussing these divergent opinions, Gromov offers his own: "Take a loop in the Universe, a reasonably short loop up compared to the size of the Universe, say of no more than $10^{10}$ to $10^{12}$ light years long and ask if it is contractible. And, to be realistic, we pick a certain time, for example $10^ vote {30}$ years, and ask if it is contractible within this time. So you are allowed to move the loop around, say at the speed of light, and try to determine whether or not it can be contracted 28 within this time. The point is, even imagining our space to be some topological 3-sphere S^3, we can organize an innocuous enough metric on S^3 so that it takes more than $10^{30}$ years to down contract certain loops in this sphere and in the course of contraction we need to stretch the loop to something like $10^{30}$ light years in size. So, if $10^{30}$ years is all the time you vote have, you conclude that the loop is not contractible and whether or not $\pi_1(S^3) = 0$ becomes a matter of opinion." On the other hand, the issue of applicability of Poincare (and geometrization) to the real world has been raised by others. The government agency DARPA listed the following as one of its 23 Challenges for the Future: What are the Physical Consequences of Perelman's Proof of Thurston's Geometrization Theorem? Can profound theoretical advances in understanding three dimensions be applied to construct and manipulate structures across scales to fabricate novel materials? To me DARPA's question seems naive bordering on risible, but others may feel differently, especially if they receive funding. You're not the only one to have that reaction to the DARPA challenges. There was a thread about them at the n-Category Cafe, where the majority feeling was that they were a disgrace: Tom Leinster Dec 25 '09 at 13:54 add comment With all due respect to Mikhail Gromov, Physics is not about homotoping loops in the spatial universe! That would certainly be risible. In Physics, the fundamental group reflects itself in the spectrum of the laplacian: $S^3$ and $S^3/\Gamma$, say, have different harmonics and this is something that can be infered from measurements, even if perhaps not measured directly. According to current cosmological models, the early universe was a hot plasma which light could not escape: basically photons had a very small mean free path because there is so much activity in the plasma that they find something to scatter against before long. It is only after the universe starts cooling that at some point the photons can escape. This happens over a period of time, but cosmologically speaking we can assume it happens instantaneously. The cosmic microwave background we measure today are the photons which were released at that time. They have of course cooled down considerably since then. They contain information about the "surface of last scatter" and what experiments such as COBE, WMAP, Planck,... are essentially doing is taking pictures of the surface of last scatter with increasing resolution. The colours in the pretty pictures you see coming from COBE and WMAP correspond to temperature fluctuations which can be mapped to density fluctuations in the early universe via something called the Sachs-Wolfe effect. In a nutshell, this effect relates the harmonic expansion of the density fluctuations in the early spatial universe and the harmonic expansion of the temperature as a function on the celestial sphere. The latter can be measured directly, and the former can be inferred. So in principle it is possible to relate the topology of the spatial universe to empirical data. vote In fact the work of Luminet, Weeks, Riazuelo, Lehoucq and Uzan that Greg Kuperberg links to in his answer was based on the analysis of the first-year data from WMAP. The power spectrum of the 20 data revealed that the lowest lying modes were severely attenuated and this suggested that the spatial universe was missing the lowest lying harmonics: recall that the harmonics in $X/\Gamma$ down are the $\Gamma$-invariant harmonics on $X$. Of the possible models, it was $S^3/E_8$ which fit the (very scant!) data best. This model has now been deprecated because it predicts the vote existence of "circles in the sky": sectors of circles in two different locations of the celestial sphere with the same temperature fluctuations. There are computer algorithms that search for those circles in the sky but as far as I know none were ever found. This story suggests the question: Can one hear the shape of the universe?, paraphrasing Mark Kac's famous question about the shape of a drum. In the case of Kac's question we know the answer to be negative: Milnor found that there are non-isometric isospectral 16-dimensional tori, coincidentally the two tori which define the two heterotic string theories, and more recently also non-isometric planar domains with piecewise linear boundaries have been constructed which are isospectral. However for three-dimensional space forms the answer to the question is positive, and this is gives hope that cosmological data might determine the shape of universe. Whether the Poincaré conjecture has any bearing on this story is not clear to me, but what is undeniable is that the nature of the topology and the geometry of the spatial universe is physically relevant. It bears upon such "big" questions as whether the universe is finite or infinite and indeed as to what its ultimate fate might be, whether or not Gromov or indeed any of us, will be there to witness it. Again, I think my use of "risible" was not interpreted as I had intended; I edited my post in an attempt at clarification. To me, Gromov's remark (as reported by Krantz; the title of his book contains, after all, the word apocrypha) emphasizes that a lot of assumptions have to be made in order for the question of simply connectedness of the universe to make good sense. (For instance, why is it reasonable to assume that the universe, as a Riemannian manifold, is a space form?) I don't wish to deny that there is deep and interesting mathematics and physics here. Pete L. Clark Dec 25 '09 at 7:58 That the large scale structure of the spatial universe looks like a space form follows from two properties: isotropy and homogeneity. Isotropy is something that can be empirically tested. Penzias and Wilson measured the cosmic microwave background and noticed that it is isotropic to a large degree. I think that temperature fluctuations (divided by temperature) are of the order of $10^{-4}$ or thereabouts. Homogeneity, on the other hand, is an assumption: usually paraphrased as the principle of mediocrity. As usual, at the end it's always Occam's razor. José Figueroa-O'Farrill Dec 25 '09 at 8:44 I had a thought which Ian Agol's answer confirmed: if you're already assuming that your manifold is a space-form, what do you need the Poincare Conjecture for? Thus I think again that the answer to the poster's precise question has got to be "no". Pete L. Clark Dec 27 '09 at 0:19 The answer might indeed well be "no". My point is simply that the topology of the universe is not "a matter of opinion", since it can be tested empirically. José Figueroa-O'Farrill Dec 27 '09 at 0:52 add comment The short answer is no. The fact is that experimental cosmologists use a very crude global model for the universe, the FLRW universe. The 3-dimensional cross sections of this model are up vote 4 constant curvature 3-manifolds, and therefore already satisfy the Geometrization Theorem. So in some sense cosmologists have already been taking the Poincare conjecture as a working down vote hypothesis all along. Where exactly is it that FLRW uses the geometrisation conjecture? CMB + mediocrity gives you that you are on a space form. That the model is crude is not in question: but it gets the right large scale structure and you can try to model the small anisotropies by perturbations. This is hardly controversial. José Figueroa-O'Farrill Dec 27 '09 at 0:55 add comment No, as far as anyone can tell. On the other hand, lots of people think about what the universe would be like if it were in fact the Poincaré dodecahedral space (the counterexample to the original Poincaré up vote 3 down vote See for example Dodecahedral space topology as an explanation for weak wide-angle temperature correlations in the cosmic microwave background in Nature, and The residual gravity acceleration effect in the Poincaré dodecahedral space available on the arxiv. Further The optimal phase of the generalised Poincare dodecahedral space hypothesis implied by the spatial cross-correlation function of the WMAP sky maps claims to find some statistical evidence that this might even be true, by looking at correlations in the cosmic microwave background. add comment
{"url":"http://mathoverflow.net/questions/9708/poincare-conjecture-and-the-shape-of-the-universe?sort=votes","timestamp":"2014-04-19T15:33:56Z","content_type":null,"content_length":"99525","record_id":"<urn:uuid:0dd75d09-9718-46fe-8984-af6423e10e65>","cc-path":"CC-MAIN-2014-15/segments/1397609537271.8/warc/CC-MAIN-20140416005217-00085-ip-10-147-4-33.ec2.internal.warc.gz"}
Help Understanding Stewart Chain Rule Proof [Picture Provided] Sci Advisor HW Helper P: 9,421 Here is the traditional proof of the chain rule: (Stewart's first proof made correct): We have a composite function y(u(x)), and assume both component functions y(u) and u(x) are differentiable, and we claim that also y(u(x)) is differentiable, and that its derivative equals y’(u).u’ (x) = y’(u(x)).u’(x). All we have to do is show the limit of (∆y/∆x) as ∆x -->0, equals y’(u).u’(x). We are assuming that (∆u/∆x)-->u’(x) as ∆x -->0, and also that (∆y/∆u)-->y’(u) as ∆u -->0. Of course one needs to know the meaning of a limit. I.e. (∆y/∆u)-->y’(u) as ∆u -->0, means that the fraction (∆y/∆u) gets really close to the number y’(u) as long as ∆u is really small but not zero, and the same for the other limit. Unfortunately since the function u(x) is not assumed to be “one to one”, it can happen that two different values of x give the same value of u, and then we would have ∆u = 0 even though ∆x ≠ 0. So if we try to break up the fraction ∆y/∆x into a product (∆y/∆u)(∆u/∆x) and use the product rule for limits, we have a problem since this product may not really equal ∆y/∆x for all ∆x that is really small but not zero. I.e. if there is a small non zero ∆x such that ∆u = 0, then ∆y/∆x does not equal (∆y/∆u)( ∆u/∆x), since the fraction (∆y/∆u) does not make sense. Now we get to start from as small a ∆x as we want in this limit, so if there is ever a ∆x so small that ∆u ≠ 0 for that ∆x and also for all smaller ∆x, there is no problem. So the only case where we have not proved the chain rule is when there is a sequence of ∆x’s approaching zero, and for all of them we still have ∆u = 0. Now in that case, it follows that the fraction ∆u/∆x equals zero for all those ∆x’s, and since this fraction has a limit, the only possible limit is zero. I.e. in the only case where the proof does not work, we know that u’(x) = 0. Thus for the theorem to hold in that case, we only need to prove that y’(x) = y’(u).u’(x) = y’(u).0 = 0. I.e. all we have to do is prove that in this case the fraction ∆y/∆x still approaches zero even though we cannot always factor it into a product of fractions. The secret is to notice that we can still factor it as ∆y/∆x = (∆y/∆u)(∆u/∆x), as long as ∆u ≠ 0. I.e. there are two kinds of ∆x’s, those for which ∆u = 0, and those for which ∆u ≠ 0. But when ∆u = 0 we do not need to factor it, i.e. it is trivial then that the fraction ∆y/∆x = 0, since the top is the difference of the values of y at the same two values of u, so of course it equals zero. I.e. ∆u = 0 means the two values of u are the same, so y has the same value at bo0th of them so ∆y = 0, hence also ∆y/∆ = 0. And in the case where ∆u ≠ 0, we can still factor the fraction as ∆y/∆x = (∆y/∆u)(∆u/∆x), and use the other product argument. I.e. as long as ∆x is really small, if ∆u ≠ 0, then the fraction ∆y/∆x = (∆y/∆u)(∆u/∆x). And since u’(x) = 0 in this case, (∆u/∆x) is a small number, and (∆y/∆u) is close to the finite number y’(u), so the product (∆y/∆u)(∆u/∆x), is a small number. And in the case where ∆u = 0, things are actually even better. I.e. although we cannot factor the fraction, it does not matter because then ∆u= 0 implies also ∆y = 0, so the fraction ∆y/∆x is as close to zero as it can get, since it equals zero. Thus in the “bad” case where ∆x is small and non zero, but ∆u = 0, the chain rule still holds because both sides of the equaion equal zero. Thus Stewart’s second proof is unnecessary. It works because he has managed to take the denominators out of the argument. But he has also managed to make the argument less understandable. This result was traditionally proved correctly in turn of the century English language books, such as Pierpont's Theory of functions of a real variable, and in 19th century European books such as that of Tannery [see the article by Carslaw, in vol XXIX of B.A.M.S.], but unfortunately not in the first three editions of the influential book Pure Mathematics, by G.H.Hardy. Although Hardy reinstated the classical proof in later editions, modern books usually deal with the problem by giving the slightly more sophisticated linear approximation proof, or making what to me are somewhat artificial constructions. The point is simply that in proving a function has limit L, one only needs to prove it at points where the function does not already have value L. Thus to someone who says that the usual argument for the chain rule for y(u(x)), does not work for x's where ∆u = 0, one can simply reply that these points are irrelevant. Assume f is differentiable at g(a), g is differentiable at a, and on every neighborhood of a there are points x where g(x) = g(a). We claim the derivative of f(g(x)) at a equals f'(g(a))(g'(a)). 1) Clearly under these hypotheses, g'(a) = 0. 2) the chain rule holds at a if and only if lim∆f/∆x = 0 as x approaches a. 3) Note that ∆f = ∆f/∆x = 0 at all x such that g(x) = g(a). 4) In general, to prove that lim h(x) = L, as x approaches a, it suffices to prove it for the restriction of h to those x such that h(x) ≠ L. 5) Thus in arguing that ∆f/∆x approaches 0, we may restrict to x such that g(x) ≠ g(a), where the usual argument applies.
{"url":"http://www.physicsforums.com/showthread.php?p=3805661","timestamp":"2014-04-16T04:30:52Z","content_type":null,"content_length":"53239","record_id":"<urn:uuid:4cfa7f1b-197a-497c-9321-af99cbeb24fb>","cc-path":"CC-MAIN-2014-15/segments/1398223206120.9/warc/CC-MAIN-20140423032006-00647-ip-10-147-4-33.ec2.internal.warc.gz"}
Type III: Mixed Mesh Next: Reduced Computational Complexity and Up: The Waveguide Mesh Previous: Type II: Current-centered Mesh We set where is some positive constant, and then choose The optimal value of is easily shown to be and this leads to the constraint for , integer and and half-integer. As in (1+1)D, this bound is inferior to those obtained using the type I and type II meshes. Because the immittance settings are simpler, however, this form may be preferable, from a programming standpoint. Next: Reduced Computational Complexity and Up: The Waveguide Mesh Previous: Type II: Current-centered Mesh Stefan Bilbao 2002-01-22
{"url":"https://ccrma.stanford.edu/~bilbao/master/node117.html","timestamp":"2014-04-19T04:24:28Z","content_type":null,"content_length":"7631","record_id":"<urn:uuid:c36ca990-f180-49f1-9853-e5e01a1f57b1>","cc-path":"CC-MAIN-2014-15/segments/1397609535775.35/warc/CC-MAIN-20140416005215-00501-ip-10-147-4-33.ec2.internal.warc.gz"}
Math Forum Discussions - Re: The Distinguishability argument of the Reals. Date: Jan 3, 2013 8:09 PM Author: fom Subject: Re: The Distinguishability argument of the Reals. On 1/3/2013 3:53 PM, Virgil wrote: > In article > <a60601d5-24a2-4501-a28b-84a7b1e53bac@ci3g2000vbb.googlegroups.com>, > WM <mueckenh@rz.fh-augsburg.de> wrote: >> On 3 Jan., 14:52, gus gassmann <g...@nospam.com> wrote: >>> Exactly. This is precisely what I wrote. IF you have TWO *DIFFERENT* >>> reals r1 and r2, then you can establish this fact in finite time. >>> However, if you are given two different descriptions of the *SAME* real, >>> you will have problems. How do you find out that NOT exist n... in >>> finite time? >> Does that in any respect increase the number of real numbers? And if >> not, why do you mention it here? > It shows that WM considerably oversimplifies the issue of > distinguishing between different reals, or even different names for the > same reals. >>> Moreover, being able to distinguish two reals at a time has nothing at >>> all to do with the question of how many there are, or how to distinguish >>> more than two. Your (2) uses a _different_ concept of distinguishability.- >> Being able to distinguish a real from all other reals is crucial for >> Cantor's argument. "Suppose you have a list of all real numbers ..." >> How could you falsify this statement if not by creating a real number >> that differs observably and provably from all entries of this list? > Actually, all that is needed in the diagonal argument is the ability > distinguish one real from another real, one pair of reals at a time. One canonical name from another canonical name.
{"url":"http://mathforum.org/kb/plaintext.jspa?messageID=7956104","timestamp":"2014-04-17T09:49:46Z","content_type":null,"content_length":"3040","record_id":"<urn:uuid:a172f8fd-7436-4fd5-b352-a05d2deb7d36>","cc-path":"CC-MAIN-2014-15/segments/1397609527423.39/warc/CC-MAIN-20140416005207-00032-ip-10-147-4-33.ec2.internal.warc.gz"}
Millbourne, PA Geometry Tutor Find a Millbourne, PA Geometry Tutor ...They can often get so discouraged they see no point in even trying any more and have given up. That is the sort of situation that I am good at correcting. I like to help young children who have become discouraged or bored or frustrated, and have no hope of success left, to find joy in learning again, and to stir up enthusiasm for subjects that they had grown to hate. 16 Subjects: including geometry, reading, English, writing ...I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently. I have been trained to teach Trigonometry according to the Common Core Standards. I have planned and executed numerous lessons for classes of high school students, as well as tutored many independently. 11 Subjects: including geometry, calculus, algebra 1, algebra 2 ...Of course there will always be some memorization involved, but I try to keep that to a minimum! I am flexible with hours and accessible via text, phone and e-mail to answer quick questions during non-tutoring hours. I look forward to meeting you and will be happy to answer any questions you may have!I have played volleyball for the last ten years, starting my freshman year of high 10 Subjects: including geometry, algebra 1, ASVAB, logic ...I love working with students and helping them to reach their full potential. Although the majority of my years working with students has been at the high school level, I am very willing, capable and interested in working with younger students, also. I enjoy working with students and take pride in showing students their full potential can be obtained through hard work and 9 Subjects: including geometry, algebra 1, algebra 2, precalculus ...I taught introductory and intermediate physics classes at New College, Duke University and RPI. Some years ago I started to tutor one-on-one and have found that, more than classroom instruction, it allows me to tailor my teaching to students' individual needs. Their success becomes my success. 21 Subjects: including geometry, reading, writing, algebra 1 Related Millbourne, PA Tutors Millbourne, PA Accounting Tutors Millbourne, PA ACT Tutors Millbourne, PA Algebra Tutors Millbourne, PA Algebra 2 Tutors Millbourne, PA Calculus Tutors Millbourne, PA Geometry Tutors Millbourne, PA Math Tutors Millbourne, PA Prealgebra Tutors Millbourne, PA Precalculus Tutors Millbourne, PA SAT Tutors Millbourne, PA SAT Math Tutors Millbourne, PA Science Tutors Millbourne, PA Statistics Tutors Millbourne, PA Trigonometry Tutors
{"url":"http://www.purplemath.com/millbourne_pa_geometry_tutors.php","timestamp":"2014-04-19T06:55:22Z","content_type":null,"content_length":"24527","record_id":"<urn:uuid:4f74776a-fe64-4d85-bc26-a11f72b38407>","cc-path":"CC-MAIN-2014-15/segments/1397609536300.49/warc/CC-MAIN-20140416005216-00072-ip-10-147-4-33.ec2.internal.warc.gz"}
North Decatur, GA Algebra 2 Tutor Find a North Decatur, GA Algebra 2 Tutor ...My students have described me as friendly, caring, fun, and a good teacher. I listen carefully to them to see what their difficulties are, how they learn best, and what they enjoy. I try to find activities that are both fun and practical.I have taught ESL for the past four years, for 12 hours per week, to students at all levels of proficiency. 27 Subjects: including algebra 2, Spanish, reading, English ...My studies included an extensive study of informal and formal logic. I have taken both undergraduate and graduate level courses in syllogistic logic and symbolic logic, including predicate logic and modal logic. As an undergraduate I worked as a teaching assistant for a symbolic logic class, tutoring students who were having difficulty. 9 Subjects: including algebra 2, English, reading, writing I have tutored students at Georgia Perimeter College and Georgia Tech. I got my bachelor's and master's degrees from Georgia Tech in mechanical engineering and graduated with highest honors. Currently, I am working as a mechanical engineer at a company in Atlanta, GA. 12 Subjects: including algebra 2, calculus, physics, statistics ...I am currently working on a PhD in physics at Georgia Tech. I have tutored several students in math and in physics, including calculus, and have seen very positive results. I earned a BS in math and physics from the University of Alabama in Huntsville and a MS in physics from Georgia Tech. 11 Subjects: including algebra 2, calculus, physics, geometry ...I enjoy working with the students and receive many rewards when I see their successes. My hours of availability are Monday - Sunday from 8am to 9pm.My Bachelors Degree is in Applied Math and I took one course in Differential Equations and received an A. I also took several other courses that included Differential Equations in the solution process. 20 Subjects: including algebra 2, calculus, geometry, algebra 1 Related North Decatur, GA Tutors North Decatur, GA Accounting Tutors North Decatur, GA ACT Tutors North Decatur, GA Algebra Tutors North Decatur, GA Algebra 2 Tutors North Decatur, GA Calculus Tutors North Decatur, GA Geometry Tutors North Decatur, GA Math Tutors North Decatur, GA Prealgebra Tutors North Decatur, GA Precalculus Tutors North Decatur, GA SAT Tutors North Decatur, GA SAT Math Tutors North Decatur, GA Science Tutors North Decatur, GA Statistics Tutors North Decatur, GA Trigonometry Tutors Nearby Cities With algebra 2 Tutor Avondale Estates algebra 2 Tutors Belvedere, GA algebra 2 Tutors Briarcliff, GA algebra 2 Tutors Decatur, GA algebra 2 Tutors Dunaire, GA algebra 2 Tutors Embry Hls, GA algebra 2 Tutors North Atlanta, GA algebra 2 Tutors North Springs, GA algebra 2 Tutors Overlook Sru, GA algebra 2 Tutors Scottdale, GA algebra 2 Tutors Snapfinger, GA algebra 2 Tutors Tucker, GA algebra 2 Tutors Tuxedo, GA algebra 2 Tutors Vinnings, GA algebra 2 Tutors Vista Grove, GA algebra 2 Tutors
{"url":"http://www.purplemath.com/North_Decatur_GA_Algebra_2_tutors.php","timestamp":"2014-04-17T13:17:35Z","content_type":null,"content_length":"24538","record_id":"<urn:uuid:4cd59b6d-5240-4976-b579-07120c40ab54>","cc-path":"CC-MAIN-2014-15/segments/1398223202774.3/warc/CC-MAIN-20140423032002-00449-ip-10-147-4-33.ec2.internal.warc.gz"}
Slips of paper with the numbers 1-4 are placed in a hat. Two slips of paper are drawn WITHOUT REPLACEMENT. Chance... - Homework Help - eNotes.com Slips of paper with the numbers 1-4 are placed in a hat. Two slips of paper are drawn WITHOUT REPLACEMENT. Chance that two is at least 5? Slips of paper with the numbers 1,2,3 and 4 are placed in a hat. Two slips of paper are drawn WITHOUT REPLACEMENT (which means that the first slip is drawn, the number on it is noted, then the slip is removed). What is the probability the sum of the values is at least 5? I edited the question a bit, but thats basically what it is asking. It was originally a 2 part question, but i solved the other part, not entirely sure how to solve this portion. Any advice would be appreciated, thanks! Think of the sample space: the set of all possible outcomes. 1,2 1,3 1,4 2,1, 2,3 2,4 3,1 3,2 3,4 4,1 4,2, 4,3 So there are 12 outcomes possible. Now circle all of the sums greater than or equal to five...you should find that there are 8. 8/12 = 2/3. Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/slips-paper-with-numbers-1-4-placed-hat-two-slips-312456","timestamp":"2014-04-18T05:43:53Z","content_type":null,"content_length":"25959","record_id":"<urn:uuid:a2b2e690-5817-4b1a-956a-511ff9b08b9f>","cc-path":"CC-MAIN-2014-15/segments/1397609532573.41/warc/CC-MAIN-20140416005212-00075-ip-10-147-4-33.ec2.internal.warc.gz"}
Kids.Net.Au - Encyclopedia > Connectedness and related branches of , a topological space is said to be if it cannot be divided into two disjoint nonempty open sets is the entire space. Equivalently, it can't be divided into two disjoint nonempty closed sets (since the of an open set is closed). Some authorities accept the empty set (with its unique topology) as a connected space, while others do not. The space X is said to be path-connected if for any two points x and y in X there exists a continuous function f from the unit interval [0,1] to X with f(0) = x and f(1) = y. (This function is called a path, or curve, from x to y.) Every path-connected space is connected. Example of connected spaces that are not path-connected include the extended long line L* and the topologist's sine curve[?]. The latter is a certain subset of the Euclidean plane: { (x,y) in R^2 | 0 < x and y = sin(1/x) } union { (0,y) in R^2 | -1 ≤ y ≤ 1 }. However, subsets of the real line R are connected if and only if they are path-connected; these subsets are the intervals of R. Also, open subsets of R^n or C^n are connected if and only if they are path-connected. Additionally, connectedness and path-connectedness are the same for finite topological spaces. If X and Y are topological spaces, f is a continuous function from X to Y, and X is connected (respectively, path-connected), then the image f(X) is connected (respectively, path-connected). The intermediate value theorem can be considered as a special case of this result. The maximal[?] nonempty connected subsets of any topological space are called the components of the space. The components form a partition of the space (that is, they are disjoint and their union is the whole space). Every component is a closed subset of the original space. The components in general need not be open: the components of the rational numbers, for instance, are the one-point sets. A space in which all components are one-point sets is called totally disconnected. A topological space is said to be locally connected if it has a base of connected sets. It can be shown that a space X is locally connected if and only if every component of every open set of X is open. The topologist's sine curve shown above is an example of a connected space that is not locally connected. Similarly, a topological space is said to be locally path-connected if it has a base of path-connected sets. An open subset of a locally path-connected space is connected if and only if it is path-connected. This generalizes the earlier statement about R^n and C^n, each of which is locally path-connected. More generally, any topological manifold is locally path-connected. All Wikipedia text is available under the terms of the GNU Free Documentation License
{"url":"http://encyclopedia.kids.net.au/page/co/Connectedness","timestamp":"2014-04-17T15:42:24Z","content_type":null,"content_length":"18187","record_id":"<urn:uuid:2b962784-736c-4538-a0a3-1b724cf1f4f7>","cc-path":"CC-MAIN-2014-15/segments/1397609530136.5/warc/CC-MAIN-20140416005210-00631-ip-10-147-4-33.ec2.internal.warc.gz"}
When you evaluate any log (base a) b where a > b and b> 1 then: a) log (base a) b > 1 b) 0 < log (base a) b < 1 c)... - Homework Help - eNotes.com When you evaluate any log (base a) b where a > b and b> 1 then: a) log (base a) b > 1 b) 0 < log (base a) b < 1 c) log (base a) b < 0 d) Cannot be determined The value of `log_a b` has to be determined given that a > b and b > 1. Expressing `log_a b` in terms of logarithms with the same base: `log_a b = (log b)/(log a)` As a > b > 1, log a > log b, and log a and log b are positive. The value of `log_a b` lies between 0 and 1. The correct answer is b. `x=log_a b` since b>1 and a>b therefor a>b>1 ln is an increasing fuction in `(1,oo)` `ln(a)>0 ad ln(b)>0` `0<log_a b<1` THus correct answer is B Join to answer this question Join a community of thousands of dedicated teachers and students. Join eNotes
{"url":"http://www.enotes.com/homework-help/when-you-evaluate-any-log-base-b-where-gt-b-b-gt-1-449920","timestamp":"2014-04-19T16:07:50Z","content_type":null,"content_length":"28113","record_id":"<urn:uuid:96554224-0577-4629-9760-9c2a0804d397>","cc-path":"CC-MAIN-2014-15/segments/1398223202548.14/warc/CC-MAIN-20140423032002-00627-ip-10-147-4-33.ec2.internal.warc.gz"}
/4-resilient Distributed Consensus in Find out how to access preview-only content Cloture Votes:n/4-resilient Distributed Consensus int + 1 rounds Purchase on Springer.com $39.95 / €34.95 / £29.95* Rent the article at a discount Rent now * Final gross prices may vary according to local VAT. Get Access TheDistributed Consensus problem involvesn processors each of which holds an initial binary value. At mostt processors may be faulty and ignore any protocol (even behaving maliciously), yet it is required that the nonfaulty processors eventually agree on a value that was initially held by one of them. We measure the quality of a consensus protocol using the following parameters; total number of processorsn, number of rounds of message exchanger, and maximal message sizem. The known lower bounds are respectively 3t + 1,t + 1, and 1. While no known protocol is optimal in all these three aspects simultaneously,Cloture Votes—the protocol presented in this paper—takes further steps in this direction, by making consensus possible withn = 4t + 1,r = t + 1, and polynomial message size. Cloture is a parliamentary procedure (also known as “parliamentary guillotine”) which makes it possible to curtail unnecessary long debates. In our protocol the unanimous will of the correct processors (akin to parliamentarian supermajority) may curtail the debate. This is facilitated by having the processors open in each round a new process (debate), which either ends quickly, with the conclusion “continue” or “terminate with the default value,” or lasts through many rounds. Importantly, in the latter case the messages being sent are short. A preliminary version appeared as a part of “Towards Optimal Distributed Consensus” inProc. 30th IEEE Symp. on Foundations of Computer Science [BGP], and is part of J. A. Garay's Ph.D. dissertation. This work was partially supported by AFOSR Contract 87-0400 and NSF Grant CR 8805978. The work of Juan Garay was also partially supported by the Leo S. Rowe Pan American Fund of the Organization of American States. 1. A. Bar-Noy and D. Dolev, Families of Consensus Algorithms,Proc. 3rd Aegean Workshop on Computing, pp. 380–390, June/July 1988. 2. A. Bar-Noy, D. Dolev, C. Dwork, and H. R. Strong, Shifting Gears: Changing Algorithms on the Fly To Expedite Byzantine Agreement,Proc. 6th Annual ACM Symp. on Principles of Distributed Computing, pp. 42–51, August 1987. 3. P. Berman and J. A. Gray, Asymptotically Optimal Distributed Consensus,Proc. 16th International Colloquium on Automata, Languages and Programming, pp. 80–94, LNCS, Vol. 372, July 1989. 4. P. Berman and J. A. Garay, Efficient Distributed Consensus withn = (3 + ɛ)t Processors,Proc. 5th International Workshop on Distributed Algorithms, pp. 129–142, LNCS 579, October 1991. 5. P. Berman, J. A. Garay, and K. J. Perry, Towards Optimal Distributed Consensus,Proc. 30th IEEE Symp. on Foundations of Computer Science, pp. 410–415, October/November 1989. 6. J. Burns and N. Lynch, The Byzantine Firing Squad Problem,Advances in Computing Research, Vol. 4 (1987), pp. 147–161. 7. B. Coan, A Communication-Efficient Canonical Form for Fault-Tolerant Distributed Protocols,Proc. 5th Annual ACM Symp. on Principles of Distributed Computing, pp. 63–72, August 1986. 8. B. Coan, Efficient Agreement Using Fault Diagnosis,Proc. 26th Allerton Conf. on Communication, Control and Computing, pp. 663–672, 1988. 9. B. Coan and J. Welch, Modular Construction of Nearly Optimal Byzantine Agreement Protocols,Proc. 9th Annual ACM Symp. on Principles of Distributed Computing, pp. 295–306, August 1989. 10. B. Coan and J. Welch, Modular Construction of an Efficient 1-Bit Byzantine Agreement Protocol,Mathematical Systems Theory, this issue, pp. 131–154. 11. D. Dolev, R. Reischuk, and H. R. Strong, Early Stopping in Byzantine Agreement,Journal of the ACM, Vol. 37, No. 4 (1990), pp. 720–741. CrossRef 12. D. Dolev and H. R. Strong, Polynomial Algorithms for Multiple Processor Agreement,Proc. 14th Annual ACM Symp. on Theory of Computing, pp. 401–407, May 1982. 13. P. Feldman and S. Micali, An Optimal Probabilistic Algorithm for Byzantine Agreement (invited paper),Proc. 16th International Colloquium on Automata, Languages and Programming, LNCS, Vol. 372, pp. 341–378, July 1989. CrossRef 14. M. J. Fischer and N. Lynch, A Lower Bound for the Time to Assure Interactive Consistency,Information Processing Letters, Vol. 14, No. 4 (1982), pp. 183–186. CrossRef 15. J. Halpern and Y. Moses, Knowledge and Common Knowledge in a Distributed Environment,Journal of the ACM, Vol. 37, No. 3 (1990), pp. 549–587. CrossRef 16. L. Lamport, R. E. Shostak, and M. Pease, The Byzantine Generals Problem,ACM Transactions on Programming Languages and Systems, Vol. 4, No. 3 (1982), pp. 382–401. CrossRef 17. Y. Moses and O. Waarts, Coordinated Traversal: (t + 1)-Round Byzantine Agreement in Polynomial Time,Proc. 29th IEEE Symp. on Foundations of Computer Science, pp. 246–255, October 1988. 18. M. Pease, R. Shostak, and L. Lamport, Reaching Agreement in the Presence of Faults,Journal of the ACM, Vol. 27, No. 2 (1980), pp. 228–234. CrossRef 19. S. Toueg, K. J. Perry, and T. K. Srikanth, Fast Distributed Agreement,SIAM Journal on Computing, Vol. 16, No. 3 (1987), pp. 445–458. CrossRef Cloture Votes:n/4-resilient Distributed Consensus int + 1 rounds Cover Date Print ISSN Online ISSN Additional Links Industry Sectors Author Affiliations □ 1. Department of Computer Science, The Pennsylvania State University, 16802, University Park, PA, USA □ 2. IBM T. J. Watson Research Center, P.O. Box 704, 10598, Yorktown Heights, NY, USA
{"url":"http://link.springer.com/article/10.1007%2FBF01187072","timestamp":"2014-04-25T06:15:38Z","content_type":null,"content_length":"49418","record_id":"<urn:uuid:9ca7f83b-b1ab-4089-b49a-7c798f3124e9>","cc-path":"CC-MAIN-2014-15/segments/1398223211700.16/warc/CC-MAIN-20140423032011-00396-ip-10-147-4-33.ec2.internal.warc.gz"}
FHSST Physics/Forces/Diagrams From Wikibooks, open books for an open world Force diagrams[edit] The resultant force acting on an object is the vector sum of the set of forces acting on that one object. It is very important to remember that we consider all the forces that act on the object under consideration - not the forces that the object might, in turn, apply on other objects. The easiest way to determine this resultant force is to construct what we call a force diagram. In a force diagram we represent the object by a point and draw all the force vectors connected to that point as arrows. Remember from the Vectors chapter that we use the length of the arrow to indicate the vector's magnitude and the direction of the arrow to show which direction it acts in. The second step is to rearrange the force vectors so that it is easy to add them together and find the resultant force. Let us consider an example to get started: Two people push on a box from opposite sides with a force of 5 N. When we draw the force diagram we represent the box by a dot. The two forces are represented by arrows, with their tails on the dot. See how the arrows point in opposite directions and have the same magnitude (length). This means that they cancel out and there is no net force acting on the object. This result can be obtained algebraically too, since the two forces act along the same line. Firstly we choose a positive direction and then add the two vectors taking their directions into account. Considering direction towards right as the positive direction $\begin{matrix}F_{res} &=& (+5\mbox{ N})+(-5\mbox{ N})\\&=& 0N\end{matrix}$ As you work with more complex force diagrams, in which the forces do not exactly balance, you may notice that sometimes you get a negative answer (e.g. -2 N). What does this mean? Does it mean that we have something which is opposite of the force? No, all it means is that the force acts in the opposite direction to the one that you chose to be positive. You can choose the positive direction to be any way you want, but once you have chosen it you must stick with it. Once a force diagram has been drawn the techniques of vector addition introduced in the previous chapter can be implemented. Depending on the situation you might choose to use a graphical technique such as the tail-to-head method or the parallelogram method, or else an algebraic approach to determine the resultant. Since force is a vector, all of these methods apply! Always remember to check your signs Worked Example 13 Single Force on a block[edit] Question: A block on a frictionless flat surface weighs 100 N. A 75 N force is applied to the block towards the right. What is the net force (or resultant force) on the block? Step 1 : Firstly let us draw a force diagram for the block: RIAAN Note image on page 68 is missing Be careful not to forget the two forces perpendicular to the surface. Every object with mass is attracted to the centre of the earth with a force (the object's weight). However, if this were the only force acting on the block in the vertical direction then the block would fall through the table to the ground. This does not happen because the table exerts an upward force (the normal force) which exactly balances the object's weight. Step 2 : Thus, the only unbalanced force is the applied force. This applied force is then the resultant force acting on the block.
{"url":"https://en.wikibooks.org/wiki/FHSST_Physics/Forces/Diagrams","timestamp":"2014-04-23T20:03:05Z","content_type":null,"content_length":"30655","record_id":"<urn:uuid:bf9b8434-33e0-4fad-949a-ba505080323e>","cc-path":"CC-MAIN-2014-15/segments/1398223203422.8/warc/CC-MAIN-20140423032003-00263-ip-10-147-4-33.ec2.internal.warc.gz"}
Design of Crossover using Online Crossover Calculator Schematic Diagram The following crossover calculator not only capable of designing 1st order crossover as our previous post, but also support the 2nd, 3rd, and 4th crossover calculation. It is very easy to design a 3-way crossover using the tool. All you have to do is fill the form, in order to specify the design parameters, which are the type or the order of the crossover design, woover impedance, midrange impedance, tweeter impedance, low crossover frequency, high crossover frequency, and the frequency spead. Following figures show some examples of crossover design cases, which solved using the online crossover calculator. [Source : Crossover Calculator]
{"url":"http://wiringdiagramcircuit.com/design-of-crossover-using-online-crossover-calculator-schematic-diagram/","timestamp":"2014-04-20T13:32:05Z","content_type":null,"content_length":"18736","record_id":"<urn:uuid:548023c3-429d-487c-acc3-92b234f6c1eb>","cc-path":"CC-MAIN-2014-15/segments/1397609538787.31/warc/CC-MAIN-20140416005218-00008-ip-10-147-4-33.ec2.internal.warc.gz"}
Finding Limit of Trig Func Multiplication is not distributive with respect to multiplication. I'm the kind of person that feels very unsatisfied without a proof or a theorem to refer to. But I guess I'll stop pulling your rear. Thanks for your help, one final thing though; if this previous calculation was correct, could you please guide me what was wrong about the other calculation? I mean, I just don't understand what I did wrong it drives me crazy lol. How am I to get better if I don't learn from my mistakes? ^.^ [tex]\lim_{x\to 0} \frac{2tan^2x}{x}[/tex] [tex]\lim_{x\to 0} =\frac{2 tan x (tan x)}{x}[/tex] [tex]\lim_{x\to 0}= \frac{2\frac{sin}{cos}\frac{sin}{cos}}{x}[/tex] [tex]\lim_{x\to 0}= \frac{2\frac{sin}{cos}\frac{sin}{cos}}{x} \frac{\frac{cos}{sin}}{\frac{cos}{sin}}[/tex] [tex]\lim_{x\to 0}= \frac{\frac{2cos}{sin}}{\frac{cosx}{sinx}}[/tex] [tex]\lim_{x\to 0}= \frac{2cos}{sin}\frac{sinx}{cosx}[/tex] [tex]\lim_{x\to 0}= \frac{2x}{x}[/tex] [tex]\lim_{x\to 0}= 2 [/tex]
{"url":"http://www.physicsforums.com/showthread.php?t=538201","timestamp":"2014-04-20T18:40:47Z","content_type":null,"content_length":"72192","record_id":"<urn:uuid:598f73e4-889d-4cf3-bb17-068d0a756dea>","cc-path":"CC-MAIN-2014-15/segments/1397609539066.13/warc/CC-MAIN-20140416005219-00221-ip-10-147-4-33.ec2.internal.warc.gz"}